CN117115571B - Fine-grained intelligent commodity identification method, device, equipment and medium - Google Patents

Fine-grained intelligent commodity identification method, device, equipment and medium

Info

Publication number
CN117115571B
CN117115571B
Authority
CN
China
Prior art keywords
commodity
fine
target
grained
newly
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311385653.1A
Other languages
Chinese (zh)
Other versions
CN117115571A (en)
Inventor
孙晓刚
周强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Agaxi Intelligent Technology Co ltd
Original Assignee
Chengdu Agaxi Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Agaxi Intelligent Technology Co ltd
Priority to CN202311385653.1A
Publication of CN117115571A
Application granted
Publication of CN117115571B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a fine-grained intelligent commodity identification method, device, equipment and medium, relating to the technical field of target identification. The method comprises: inputting a target newly-added commodity image into a trained fine-grained commodity identification model for feature identification, and outputting target newly-added commodity features, wherein the intermediate layer of the trained fine-grained commodity identification model comprises a plurality of stages, each stage performs feature extraction on newly-added commodity images of different viewing angles, different stages correspond to different auxiliary loss functions, and the output layer performs cross feature fusion on the different features extracted by the intermediate layer using a Transformer structure and cross-attention; and matching a target commodity name from a local index library according to the target newly-added commodity features. Compared with the prior art, the method can improve commodity identification accuracy in a shopping cart environment and can also identify commodities of different fine granularities.

Description

Fine-grained intelligent commodity identification method, device, equipment and medium
Technical Field
The application relates to the technical field of target identification, and provides a fine-grained intelligent commodity identification method, device, equipment and medium.
Background
With the development of science and technology, intelligent shopping carts that identify commodities and settle accounts automatically by code scanning have emerged to provide consumers with a smarter and more convenient shopping experience. However, current commodity identification technology faces the following problems. First, traditional image identification methods mainly focus on the overall appearance of objects and have limited ability to distinguish the detailed differences between fine-grained commodities. Second, in a shopping cart environment, different types of commodities may be imbalanced in quantity, with the number of samples of some commodities far exceeding that of others, so an image recognition model easily performs poorly during training on the commodity categories with few samples. Third, in the shopping cart environment, commodities may be affected by factors such as lighting, angle and occlusion, and the adaptability of traditional image recognition methods to illumination changes and viewing-angle changes is limited, so the recognition results are unstable, which in turn affects the commodity recognition accuracy. Fourth, because the shopping cart is a real-time shopping environment, commodity identification needs to be completed in a short time so that the shopper obtains an accurate result promptly, whereas complex image recognition algorithms have high computational complexity and cannot meet the real-time requirements of the shopping cart.
Therefore, how to improve the accuracy of commodity identification in a shopping cart environment is an urgent problem to be solved.
Disclosure of Invention
The application provides a fine-grained intelligent commodity identification method, device, equipment and medium, which are used for solving the problem of low commodity identification accuracy in a shopping cart environment.
In one aspect, a fine-grained intelligent commodity identification method is provided, the method comprising:
determining whether a change value of the weight of the commodity in the target shopping cart is greater than a preset weight value;
if the change value of the weight of the commodity in the target shopping cart is larger than the preset weight value, shooting the newly-added commodity in the target shopping cart to obtain a target newly-added commodity image;
inputting the target newly-added commodity image into a trained fine-grained commodity identification model for feature identification, and outputting target newly-added commodity features; the intermediate layer of the trained fine-grained commodity identification model comprises a plurality of stages, each stage performs feature extraction on newly-added commodity images of different viewing angles, different stages correspond to different auxiliary loss functions, and the output layer performs cross feature fusion on the different features extracted by the intermediate layer using a Transformer structure and cross-attention;
and matching a target commodity name from a local index library according to the target newly-added commodity features.
Optionally, if the change value of the weight of the commodity in the target shopping cart is greater than the preset weight value, shooting the newly-added commodity in the target shopping cart to obtain a target newly-added commodity image, including:
if it is determined that the change value of the commodity weight in the target shopping cart is greater than the preset weight value, shooting the newly-added commodity in the target shopping cart to obtain an initial newly-added commodity image;
performing frame difference method segmentation on the initial newly-added commodity image to obtain a commodity segmentation image;
and carrying out main body recognition on the commodity segmentation image to obtain a target newly-added commodity image.
Optionally, after matching the target commodity name from the local index library according to the target newly-added commodity features, the method further includes:
determining whether the target commodity name is consistent with the obtained code scanning commodity name;
and if the target commodity name is inconsistent with the obtained code scanning commodity name, sending corresponding prompt information to a user.
Optionally, before inputting the target newly-added commodity image into the trained fine-grained commodity identification model for feature identification and outputting the target newly-added commodity features, the method further includes:
training an initial fine-grained commodity identification model according to a preset fine-grained commodity data set to obtain a model prediction result;
obtaining a plurality of cross entropy loss function values according to the model prediction result;
and updating the learning weights of different stages in the initial fine-grained commodity identification model according to the multiple cross entropy loss function values to obtain a trained fine-grained commodity identification model.
Optionally, before training the initial fine-grained commodity identification model according to the preset fine-grained commodity data set to obtain the model prediction result, the method further includes:
constructing an initial commodity data set according to the plurality of public data sets and the supermarket actual shopping cart scene data set;
carrying out data cleaning on the initial commodity data set to obtain a cleaned commodity data set;
and carrying out multi-level fine-grained classification on the cleaned commodity data set to obtain the preset fine-grained commodity data set.
Optionally, the step of carrying out multi-level fine-grained classification on the cleaned commodity data set to obtain the preset fine-grained commodity data set includes:
and carrying out multi-level fine-grained classification on the cleaned commodity data set according to commodity category, commodity brand and commodity package to obtain the preset fine-grained commodity data set.
Optionally, before training the initial fine-grained commodity identification model according to the preset fine-grained commodity data set to obtain the model prediction result, the method further includes:
performing rotation and flipping at different angles, and random processing of color, hue and brightness, on a plurality of commodity images in the preset fine-grained commodity data set, to obtain a fine-grained commodity data set after data expansion;
the step of training the initial fine-grained commodity identification model according to the preset fine-grained commodity data set to obtain a model prediction result comprises the following steps:
training an initial fine-grained commodity identification model according to the fine-grained commodity data set after the data expansion, and obtaining a model prediction result.
In one aspect, there is provided a fine-grained intelligent merchandise identification device, the device comprising:
the weight determining unit is used for determining whether the change value of the weight of the commodity in the target shopping cart is larger than a preset weight value;
the image acquisition unit is used for shooting the newly-added commodity in the target shopping cart to obtain a target newly-added commodity image if it is determined that the change value of the commodity weight in the target shopping cart is greater than the preset weight value;
the feature output unit is used for inputting the target newly-added commodity image into a trained fine-grained commodity identification model for feature identification and outputting target newly-added commodity features; the intermediate layer of the trained fine-grained commodity identification model comprises a plurality of stages, each stage performs feature extraction on newly-added commodity images of different viewing angles, different stages correspond to different auxiliary loss functions, and the output layer performs cross feature fusion on the different features extracted by the intermediate layer using a Transformer structure and cross-attention;
and the name output unit is used for matching the target commodity name from the local index library according to the target newly-added commodity features.
In one aspect, an electronic device is provided that includes a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing any of the methods described above when executing the computer program.
In one aspect, a computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement any of the methods described above.
In the embodiment of the application, when a commodity in the shopping cart needs to be identified, it is first determined whether the change value of the commodity weight in the target shopping cart is greater than the preset weight value; then, if it is determined that the change value of the commodity weight in the target shopping cart is greater than the preset weight value, the newly-added commodity in the target shopping cart is photographed to obtain a target newly-added commodity image; next, the target newly-added commodity image can be input into a trained fine-grained commodity identification model for feature identification, and the target newly-added commodity features are output; finally, the target commodity name can be matched from the local index library according to the target newly-added commodity features. The intermediate layer of the trained fine-grained commodity identification model comprises a plurality of stages, each stage performs feature extraction on newly-added commodity images of different viewing angles, and the output layer performs cross feature fusion on the different features extracted by the intermediate layer using a Transformer structure and cross-attention. Therefore, in the embodiment of the application, because the intermediate layer of the trained fine-grained commodity identification model comprises a plurality of stages, and each stage can perform feature extraction on newly-added commodity images of different viewing angles, the limited adaptability of traditional image identification methods to viewing-angle changes and similar conditions can be avoided, and the commodity identification accuracy is improved by stabilizing the commodity identification result. Moreover, since different stages correspond to different auxiliary loss functions, commodities of different fine granularities can be identified. In addition, the output layer uses a Transformer structure and cross-attention to perform cross feature fusion on the different features extracted by the intermediate layer, so that richer and more detailed commodity feature information is extracted and the commodity identification accuracy is further improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings that are required to be used in the embodiments or the related technical descriptions will be briefly described below, and it is apparent that the drawings in the following description are only embodiments of the present application, and other drawings may be obtained according to the provided drawings without inventive effort for a person having ordinary skill in the art.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a fine-grained intelligent commodity identification method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a fine-grained commodity identification model according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of training a fine-grained commodity identification model according to an embodiment of the present disclosure;
FIG. 5 is a schematic illustration of a multi-level fine-grained classification provided in an embodiment of the application;
FIG. 6 is a schematic diagram of another flow of fine-grained intelligent commodity identification according to an embodiment of the present application;
fig. 7 is a schematic diagram of a fine-grained intelligent commodity identification apparatus according to an embodiment of the present application.
Reference numerals in the figures: 10 - fine-grained intelligent commodity identification apparatus; 101 - processor; 102 - memory; 103 - I/O interface; 104 - database; 70 - fine-grained intelligent commodity identification apparatus; 701 - weight determining unit; 702 - image acquisition unit; 703 - feature output unit; 704 - name output unit; 705 - information transmission unit; 706 - model training unit; 707 - data set acquisition unit.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure. Embodiments and features of embodiments in this application may be combined with each other arbitrarily without conflict. Also, while a logical order is depicted in the flowchart, in some cases, the steps depicted or described may be performed in a different order than presented herein.
With the development of science and technology, intelligent shopping carts that identify commodities and settle accounts automatically by code scanning have emerged to provide consumers with a smarter and more convenient shopping experience. However, current commodity identification technology faces the following problems. First, traditional image identification methods mainly focus on the overall appearance of objects and have limited ability to distinguish the detailed differences between fine-grained commodities. Second, in a shopping cart environment, different types of commodities may be imbalanced in quantity, with the number of samples of some commodities far exceeding that of others, so an image recognition model easily performs poorly during training on the commodity categories with few samples. Third, in the shopping cart environment, commodities may be affected by factors such as lighting, angle and occlusion, and the adaptability of traditional image recognition methods to illumination changes and viewing-angle changes is limited, so the recognition results are unstable, which in turn affects the commodity recognition accuracy. Fourth, because the shopping cart is a real-time shopping environment, commodity identification needs to be completed in a short time so that the shopper obtains an accurate result promptly, whereas complex image recognition algorithms have high computational complexity and cannot meet the real-time requirements of the shopping cart.
Based on this, the embodiment of the application provides a fine-grained intelligent commodity identification method, in which it is first determined whether the change value of the commodity weight in the target shopping cart is greater than a preset weight value; then, if it is determined that the change value of the commodity weight in the target shopping cart is greater than the preset weight value, the newly-added commodity in the target shopping cart is photographed to obtain a target newly-added commodity image; next, the target newly-added commodity image can be input into a trained fine-grained commodity identification model for feature identification, and the target newly-added commodity features are output; finally, the target commodity name can be matched from the local index library according to the target newly-added commodity features. The intermediate layer of the trained fine-grained commodity identification model comprises a plurality of stages, each stage performs feature extraction on newly-added commodity images of different viewing angles, and the output layer performs cross feature fusion on the different features extracted by the intermediate layer using a Transformer structure and cross-attention. Therefore, in the embodiment of the application, because the intermediate layer of the trained fine-grained commodity identification model comprises a plurality of stages, and each stage can perform feature extraction on newly-added commodity images of different viewing angles, the limited adaptability of traditional image identification methods to viewing-angle changes and similar conditions can be avoided, and the commodity identification accuracy is improved by stabilizing the commodity identification result. Moreover, since different stages correspond to different auxiliary loss functions, commodities of different fine granularities can be identified. In addition, the output layer uses a Transformer structure and cross-attention to perform cross feature fusion on the different features extracted by the intermediate layer, so that richer and more detailed commodity feature information is extracted and the commodity identification accuracy is further improved.
After the design concept of the embodiment of the present application is introduced, some simple descriptions are made below for application scenarios applicable to the technical solution of the embodiment of the present application, and it should be noted that the application scenarios described below are only used to illustrate the embodiment of the present application and are not limiting. In the specific implementation process, the technical scheme provided by the embodiment of the application can be flexibly applied according to actual needs.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application. The application scenario may include a fine-grained smart item identification device 10.
The fine-grained intelligent commodity identification apparatus 10 may be used for intelligent identification of commodities in a shopping cart environment, and may be, for example, a personal computer (PC), a server, or a portable computer. The fine-grained intelligent commodity identification apparatus 10 may include one or more processors 101, a memory 102, an I/O interface 103, and a database 104. Specifically, the processor 101 may be a central processing unit (CPU), a digital processing unit, or the like. The memory 102 may be a volatile memory, such as a random-access memory (RAM); the memory 102 may also be a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); or the memory 102 may be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 102 may also be a combination of the above. The memory 102 may store part of the program instructions of the fine-grained intelligent commodity identification method provided in the embodiments of the present application; when executed by the processor 101, these program instructions can be used to implement the steps of the fine-grained intelligent commodity identification method provided in the embodiments of the present application, so as to solve the problem of low commodity identification accuracy in the shopping cart environment. The database 104 may be used to store data related to the scheme provided in the embodiments of the present application, such as original images, detail images, target newly-added commodity features, public data sets, the supermarket actual shopping cart scene data set, and the preset fine-grained commodity data set.
In the embodiment of the present application, the fine-grained intelligent commodity identification apparatus 10 may acquire the target newly-added commodity image through the I/O interface 103, and then, the processor 101 of the fine-grained intelligent commodity identification apparatus 10 may improve the commodity identification accuracy in the shopping cart environment according to the program instruction of the fine-grained intelligent commodity identification method provided in the embodiment of the present application in the memory 102. In addition, the data such as the original image, the detail image, the target newly-added commodity feature, the public data set, the supermarket actual shopping cart scene data set, the preset fine-granularity commodity data set and the like can be stored in the database 104.
Of course, the method provided in the embodiment of the present application is not limited to the application scenario shown in fig. 1, but may be used in other possible application scenarios, and the embodiment of the present application is not limited. The functions that can be implemented by each device in the application scenario shown in fig. 1 will be described together in the following method embodiments, which are not described in detail herein. The method according to the embodiment of the present application will be described below with reference to the accompanying drawings.
As shown in fig. 2, a flowchart of a fine-grained intelligent commodity identification method according to an embodiment of the present application is provided, and the method may be performed by the fine-grained intelligent commodity identification apparatus 10 in fig. 1, and specifically, the flowchart of the method is described below.
Step 201: it is determined whether the value of the change in weight of the items within the target shopping cart is greater than a preset weight value.
In this embodiment of the present application, at least one weight sensor may be installed at the bottom of the target shopping cart. Based on this, the weight in the target shopping cart may be measured once at a preset time interval, and whether the change value of the commodity weight in the target shopping cart is greater than the preset weight value may be determined directly from the average of the weights measured by the at least one weight sensor.
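By way of a non-limiting illustration, the weight-triggering logic described above can be sketched in Python as follows; the number of sensors, the polling interval and the threshold value are assumptions for the sketch rather than values specified in this application:

    def weight_changed(sensor_readings_g, last_average_g, threshold_g=50.0):
        # return (changed, new_average): True when the averaged cart weight moved
        # by more than the preset threshold (the threshold here is a placeholder)
        average_g = sum(sensor_readings_g) / len(sensor_readings_g)
        return abs(average_g - last_average_g) > threshold_g, average_g

    # e.g. three bottom-mounted sensors polled once per preset time interval
    changed, new_average = weight_changed([812.0, 805.5, 808.4], last_average_g=310.2)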
Step 202: if the change value of the weight of the commodity in the target shopping cart is larger than the preset weight value, shooting the newly-added commodity in the target shopping cart to obtain a target newly-added commodity image.
In this embodiment of the present application, at least one camera may be installed at different positions of the target shopping cart. If it is determined that the change value of the commodity weight in the target shopping cart is greater than the preset weight value, the weight sensor triggers the camera to shoot the newly-added commodity in the target shopping cart, so as to obtain a newly-added commodity image. Of course, in the embodiment of the present application, besides being triggered by a weight sensor, the camera may also be started by other sensors or in other ways, for example, a millimeter-wave radar combined with a Kalman filtering algorithm.
Specifically, if it is determined that the change value of the commodity weight in the target shopping cart is greater than the preset weight value, the newly-added commodity in the target shopping cart can be shot to obtain an initial newly-added commodity image. Then, frame difference method segmentation can be performed on the initial newly-added commodity image to obtain a commodity segmentation image, so as to handle the possible overlapping of commodities in the target shopping cart. Finally, a main body detection model can be adopted to perform main body recognition on the commodity segmentation image, that is, to locate the newly-added commodity and obtain the target newly-added commodity image, thereby removing the interference of the background in the target newly-added commodity image with subsequent target recognition. Of course, in the embodiment of the present application, other methods may also be used to detect and identify overlapped commodities, for example, a conventional optical flow method or a deep-learning-based target detection method.
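As an illustrative sketch only, the frame difference segmentation and the cropping of the newly-added commodity region could be implemented with OpenCV as follows; the threshold and the morphology kernel size are assumed values, and the main body detection model itself is not shown:

    import cv2
    import numpy as np

    def frame_difference_mask(prev_frame, curr_frame, thresh=25):
        # binary mask of the regions that changed between two consecutive frames
        prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
        curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(curr_gray, prev_gray)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    def crop_candidate_item(curr_frame, mask):
        # crop the largest changed region as the candidate newly-added commodity
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        return curr_frame[y:y + h, x:x + w]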
Step 203: Inputting the target newly-added commodity image into the trained fine-grained commodity identification model for feature identification, and outputting the target newly-added commodity features.
In this embodiment of the present application, as shown in fig. 3, a schematic structural diagram of a fine-grained commodity identification model is provided. The intermediate layer of the trained fine-grained commodity identification model comprises a plurality of stages, each stage performs feature extraction on newly-added commodity images of different viewing angles, different stages correspond to different auxiliary loss functions, and the output layer performs cross feature fusion on the different features extracted by the intermediate layer using a Transformer structure and cross-attention.
In practical applications, the image input to the model may be an original image, a detail image, or view images of different viewing angles. As shown in fig. 3, it may be assumed that the intermediate layer includes 5 stages, each stage includes a plurality of convolution layers (Conv), and a Feature Pyramid Network (FPN) module is used to perform feature extraction on feature maps of different scales. The input original image may start feature extraction directly from Stage 1 of the model, while the detail image and the view images of different viewing angles may enter feature extraction at different stages of the model, for example at Stage 3, Stage 4 and Stage 5, so as to avoid the limited adaptability of traditional image recognition methods to viewing-angle changes and similar conditions, and thereby improve the commodity identification accuracy by stabilizing the recognition results. At the output layer of the fine-grained commodity identification model, a Transformer structure and an attention mechanism can be adopted to combine features of different granularities; specifically, cross-attention can be used within the Transformer structure to perform cross feature fusion on different feature pairs, so that richer and more detailed commodity feature information is extracted and the commodity identification accuracy is further improved. In the embodiment of the present application, a feature enhancement module can also be provided in the fine-grained commodity identification model to enhance the extracted original-image features with images of different positions and viewing angles.
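For illustration, a simplified PyTorch skeleton of such a structure is given below; the channel widths, the number of attention heads and the way the extra views are injected into Stages 3 to 5 are assumptions made for the sketch and do not reproduce the exact model of this application:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv_stage(in_ch, out_ch):
        # one backbone stage: strided convolution + normalisation + activation
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    class FineGrainedModel(nn.Module):
        def __init__(self, num_classes=1000, dim=256):
            super().__init__()
            chs = [3, 32, 64, 128, 256, 256]  # assumed channel widths
            self.stages = nn.ModuleList(conv_stage(chs[i], chs[i + 1]) for i in range(5))
            # light projections for the detail / extra-view images fed to stages 3-5
            self.view_proj = nn.ModuleList(nn.Conv2d(3, chs[i + 1], 1) for i in range(2, 5))
            self.to_tokens = nn.ModuleList(nn.Linear(chs[i + 1], dim) for i in range(2, 5))
            self.aux_heads = nn.ModuleList(nn.Linear(chs[i + 1], num_classes) for i in range(2, 5))
            self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
            self.head = nn.Linear(dim, num_classes)

        def forward(self, original, views):
            # `views`: list of three extra images (detail image / other viewing angles)
            x, tokens, aux_logits = original, [], []
            for i, stage in enumerate(self.stages):
                x = stage(x)
                if i >= 2:  # stages 3-5 also receive an extra view
                    v = F.adaptive_avg_pool2d(self.view_proj[i - 2](views[i - 2]), x.shape[-2:])
                    x = x + v  # simple feature enhancement
                    feat = F.adaptive_avg_pool2d(x, 1).flatten(1)
                    tokens.append(self.to_tokens[i - 2](feat))
                    aux_logits.append(self.aux_heads[i - 2](feat))  # one auxiliary loss each
            t = torch.stack(tokens, dim=1)       # (batch, 3, dim)
            fused, _ = self.cross_attn(t, t, t)  # cross feature fusion between stages
            return self.head(fused.mean(dim=1)), aux_logits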
Step 204: Matching the target commodity name from the local index library according to the target newly-added commodity features.
In this embodiment of the present application, according to the target newly-added commodity features, the target commodity name with the top-1 score may be directly matched from a local index library built with a SKU warehouse-in tool (for example, an aicreator plus a handheld PDA). The local index library may be a Faiss similarity vector search library and may be updated periodically.
In one possible implementation, in order to improve the user's shopping experience, after the target commodity name is matched, whether the target commodity name is consistent with the obtained code-scanning commodity name can be determined; further, if it is determined that the target commodity name is inconsistent with the obtained code-scanning commodity name, corresponding prompt information can be sent to the user. Specifically, the real-time identification result information (i.e., information about whether the target commodity name is consistent with the code-scanning commodity name) can be uploaded to the service platform and displayed on the client.
In the embodiment of the application, when a new commodity appears in the supermarket, the trained fine-grained commodity identification model does not need to be retrained; only the commodity features in the local index library need to be updated, which greatly simplifies the whole process. Specifically, supermarket staff may photograph the Stock Keeping Unit (SKU) commodity from multiple angles with a corresponding image capture device (e.g., a mobile phone or camera), then upload the captured images to the trained fine-grained commodity identification model to generate the corresponding commodity features, and finally add these commodity features to the local index library for storage, so as to update the local index library, i.e., the commodities to be sold in the supermarket are warehoused in turn.
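By way of illustration, the per-SKU feature registration and the top-1 matching against such a local Faiss index could look as follows; the index type, the feature dimension and the name list used as the index-to-name mapping are assumptions for the sketch:

    import faiss
    import numpy as np

    feature_dim = 256
    index = faiss.IndexFlatIP(feature_dim)  # inner product on L2-normalised vectors
    sku_names = []                          # position i holds the name of stored vector i

    def add_sku(feature, name):
        # register a newly sold commodity without retraining the recognition model
        vec = np.asarray(feature, dtype="float32").reshape(1, -1)
        faiss.normalize_L2(vec)
        index.add(vec)
        sku_names.append(name)

    def match_top1(feature):
        # return the commodity name whose stored feature scores highest
        vec = np.asarray(feature, dtype="float32").reshape(1, -1)
        faiss.normalize_L2(vec)
        _, ids = index.search(vec, 1)
        return sku_names[ids[0][0]]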
In one possible implementation, as shown in fig. 4, a schematic flow chart of training a fine-grained commodity identification model provided in the embodiment of the present application, the method may be performed by the fine-grained intelligent commodity identification apparatus 10 in fig. 1, and specifically, the flow chart of the method is described below.
Step 401: training an initial fine-grained commodity identification model according to a preset fine-grained commodity data set to obtain a model prediction result.
In the embodiment of the application, first, a large-scale initial commodity data set can be constructed from a plurality of public data sets (for example, the AliProduct data set, the RP2K data set and the Product10K data set) and a supermarket actual shopping cart scene data set. The initial commodity data set may then be data-cleaned to obtain a cleaned commodity data set. Finally, multi-level fine-grained classification can be performed on the cleaned commodity data set to obtain a preset fine-grained commodity data set. Specifically, the cleaned commodity data set can be classified at multiple fine-grained levels according to commodity category, commodity brand and commodity package, so as to obtain the preset fine-grained commodity data set. As shown in fig. 5, a schematic diagram of multi-level fine-grained classification provided in an embodiment of the present application, each commodity in the cleaned commodity data set can first be classified into a large category, for example, beverages, condiments, daily necessities, foods, or clothing. Then, within each large category, fine-grained classification can be further performed to divide the commodities into more specific sub-categories, for example, commodities of different brands and different packages, and the classification range can be gradually narrowed to improve the identification accuracy for fine-grained commodities.
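Purely as an example of what such a multi-level label could look like, the category / brand / package hierarchy can be encoded as follows; the concrete label values are invented for the sketch and do not come from the data sets mentioned above:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SkuLabel:
        category: str  # coarse level, e.g. "beverage"
        brand: str     # middle level, e.g. "Brand A"
        package: str   # finest level, e.g. "500 ml bottle"

    def hierarchy_targets(label, vocab):
        # map one SKU label to the class indices used at the three granularity levels
        return (
            vocab["category"].index(label.category),
            vocab["brand"].index(label.brand),
            vocab["package"].index(label.package),
        )

    vocab = {
        "category": ["beverage", "condiment", "daily necessities", "food", "clothing"],
        "brand": ["Brand A", "Brand B"],
        "package": ["500 ml bottle", "1 L carton"],
    }
    print(hierarchy_targets(SkuLabel("beverage", "Brand A", "500 ml bottle"), vocab))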
In the embodiment of the application, the deep learning training framework of the initial fine-grained commodity identification model may be any framework such as PyTorch, TensorFlow or Paddle.
Step 402: and obtaining a plurality of cross entropy loss function values according to the model prediction result.
In the embodiment of the application, when model training is performed, besides the model prediction result, a label result can be obtained, and further, a plurality of cross entropy loss function values can be directly obtained according to the model prediction result and the label result so as to continuously update the weight of the model. At this step, as shown in fig. 3, the learning weights of networks of different granularity can be adjusted by calculating different auxiliary loss functions.
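A minimal sketch of how the main and auxiliary cross entropy loss function values could be combined is given below; the weighting coefficients are placeholders, and for simplicity all heads are supervised with the same label here, whereas in the application each stage may be supervised at its own granularity level:

    import torch.nn.functional as F

    def total_loss(main_logits, aux_logits_list, targets, aux_weights=(0.3, 0.3, 0.4)):
        # main cross entropy plus one weighted auxiliary cross entropy per stage
        loss = F.cross_entropy(main_logits, targets)
        for w, aux_logits in zip(aux_weights, aux_logits_list):
            loss = loss + w * F.cross_entropy(aux_logits, targets)
        return loss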
Step 403: Updating the learning weights of different stages in the initial fine-grained commodity identification model according to the multiple cross entropy loss function values to obtain a trained fine-grained commodity identification model.
Furthermore, as shown in fig. 5, during the continuous training of the initial fine-grained commodity identification model, the granularity information of the images is gradually increased (for example, from coarse granularity to fine granularity and from the whole to the details), and the initial fine-grained commodity identification model is iteratively trained according to this fine granularity information, so that it learns feature information of different granularities and can effectively distinguish the features of different types of objects under the same commodity.
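Tying the model and loss sketches above together, one illustrative training step could look as follows; the optimiser choice, learning rate and class count are assumptions:

    import torch

    # assumes FineGrainedModel and total_loss from the sketches above
    model = FineGrainedModel(num_classes=500)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    def train_step(original, views, targets):
        # forward pass, combined loss over all granularity levels, one optimiser update
        logits, aux_logits = model(original, views)
        loss = total_loss(logits, aux_logits, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()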
In one possible implementation, in order to make the commodity images in the preset fine-grained commodity data set closer to the actual commodity scene, data expansion may be performed on the constructed large-scale preset fine-grained commodity data set in the embodiment of the present application. Specifically, the plurality of commodity images in the preset fine-grained commodity data set can be rotated and flipped at different angles and randomly processed in color, hue and brightness, so as to obtain a fine-grained commodity data set after data expansion. On this basis, the initial fine-grained commodity identification model can be trained directly on the fine-grained commodity data set after data expansion to obtain the model prediction result, which improves the accuracy of model training and correspondingly improves the commodity identification accuracy in the shopping cart environment. In addition, Gaussian noise and other processing can be added so that the commodity images in the preset fine-grained commodity data set are brought even closer to the actual commodity scene.
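For illustration, the data expansion described above could be sketched with torchvision transforms as follows; the parameter ranges and the noise level are assumed values:

    import torch
    from torchvision import transforms

    class AddGaussianNoise:
        def __init__(self, std=0.02):
            self.std = std
        def __call__(self, img):
            # add zero-mean Gaussian noise to a tensor image
            return img + torch.randn_like(img) * self.std

    augment = transforms.Compose([
        transforms.RandomRotation(degrees=30),
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.RandomVerticalFlip(p=0.2),
        transforms.ColorJitter(brightness=0.3, contrast=0.2, saturation=0.3, hue=0.05),
        transforms.ToTensor(),
        AddGaussianNoise(std=0.02),
    ])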
Specific examples:
assuming that a user adds 1 bottle of A-brand beverage to the shopping cart 1, 3 weight sensors are arranged at the bottom of the shopping cart 1, 4 cameras are uniformly arranged on the edge of the upper part of the shopping cart 1, and the weight of commodities in the shopping cart 1 is measured 1 time every 1 second. As shown in fig. 6, another flow chart of fine-grained intelligent commodity identification provided in the embodiment of the present application, the method may be performed by the fine-grained intelligent commodity identification apparatus 10 in fig. 1, and specifically, the flow chart of the method is described below.
Step 601: according to the weight average value measured by 3 weight sensors arranged at the bottom of the shopping cart 1, whether the variation value of the weight of the goods in the shopping cart 1 is larger than a preset weight value is determined.
Step 602: if the change value of the weight of the commodity in the shopping cart 1 is larger than the preset weight value, triggering 4 cameras to shoot the A-brand beverage in the shopping cart 1, and obtaining a plurality of A-brand beverage initial images.
Step 603: The plurality of A-brand beverage initial images are transmitted to the computing device via WiFi.
Step 604: The computing device performs recognition on the plurality of A-brand beverage initial images.
Step 605: Preprocessing the plurality of A-brand beverage initial images to obtain a plurality of A-brand beverage segmentation images.
Specifically, frame difference method segmentation can be performed on the plurality of A-brand beverage initial images to obtain the plurality of A-brand beverage segmentation images, so as to handle the situation where the A-brand beverage in the shopping cart 1 may overlap with other items.
Step 606: Performing main body recognition on the plurality of A-brand beverage segmentation images to obtain a plurality of A-brand beverage target images.
Specifically, a main body detection model can be adopted to perform main body recognition on the plurality of A-brand beverage segmentation images, that is, to locate the A-brand beverage in the plurality of A-brand beverage segmentation images and obtain the plurality of A-brand beverage target images, thereby removing the interference of the background in the plurality of A-brand beverage target images with subsequent target recognition.
Step 607: and inputting the plurality of A-brand beverage target images into a trained fine-grained commodity identification model for feature extraction, and outputting A-brand beverage target features.
Step 608: and matching the name of the beverage of the A brand from the local index base according to the target characteristics of the beverage of the A brand.
Specifically, the name of the A-brand beverage with the top1 score can be directly matched from the local index library according to the SKU warehouse-in tool.
Step 609: Determining whether the matched A-brand beverage name is consistent with the code-scanned A-brand beverage name, so as to obtain a recognition result.
Step 610: and uploading the identification result to a service platform and displaying the identification result on a client.
In summary, in the embodiment of the present application, since the intermediate layer of the trained fine-grained commodity identification model includes a plurality of stages and each stage can perform feature extraction on newly-added commodity images of different viewing angles, the limited adaptability of traditional image identification methods to viewing-angle changes and similar conditions can be avoided, and the commodity identification accuracy is therefore improved by stabilizing the commodity identification result. Moreover, since different stages correspond to different auxiliary loss functions, commodities of different fine granularities can be identified. In addition, the output layer uses a Transformer structure and cross-attention to perform cross feature fusion on the different features extracted by the intermediate layer, so that richer and more detailed commodity feature information is extracted and the commodity identification accuracy can be further improved.
Based on the same inventive concept, the embodiment of the present application provides a fine-grained intelligent commodity identification apparatus 70, as shown in fig. 7, the fine-grained intelligent commodity identification apparatus 70 includes:
a weight determining unit 701, configured to determine whether a change value of the weight of the commodity in the target shopping cart is greater than a preset weight value;
an image obtaining unit 702, configured to, if it is determined that the change value of the commodity weight in the target shopping cart is greater than the preset weight value, shoot the newly-added commodity in the target shopping cart and obtain a target newly-added commodity image;
a feature output unit 703, configured to input the target newly-added commodity image into a trained fine-grained commodity identification model for feature identification and output target newly-added commodity features; the intermediate layer of the trained fine-grained commodity identification model comprises a plurality of stages, each stage performs feature extraction on newly-added commodity images of different viewing angles, different stages correspond to different auxiliary loss functions, and the output layer performs cross feature fusion on the different features extracted by the intermediate layer using a Transformer structure and cross-attention;
and a name output unit 704, configured to match the target commodity name from the local index library according to the target newly-added commodity features.
Optionally, the image obtaining unit 702 is further configured to:
if it is determined that the change value of the commodity weight in the target shopping cart is greater than the preset weight value, shooting the newly-added commodity in the target shopping cart to obtain an initial newly-added commodity image;
performing frame difference method segmentation on the initial newly-added commodity image to obtain a commodity segmentation image;
and carrying out main body recognition on the commodity segmentation image to obtain a target newly-added commodity image.
Optionally, the fine-grained intelligent commodity identification apparatus 70 further includes an information transmission unit 705, where the information transmission unit 705 is configured to:
determining whether the target commodity name is consistent with the obtained code scanning commodity name;
if the target commodity name is inconsistent with the obtained code scanning commodity name, corresponding prompt information is sent to the user.
Optionally, the fine-grained intelligent commodity identification apparatus 70 further comprises a model training unit 706, and the model training unit 706 is configured to:
training an initial fine-grained commodity identification model according to a preset fine-grained commodity data set to obtain a model prediction result;
obtaining a plurality of cross entropy loss function values according to the model prediction result;
and updating the learning weights of different stages in the initial fine-grained commodity identification model according to the multiple cross entropy loss function values to obtain a trained fine-grained commodity identification model.
Optionally, the fine-grained intelligent commodity identification apparatus 70 further includes a data set obtaining unit 707, where the data set obtaining unit 707 is configured to:
constructing an initial commodity data set according to the plurality of public data sets and the supermarket actual shopping cart scene data set;
data cleaning is carried out on the initial commodity data set, and a cleaned commodity data set is obtained;
and carrying out multi-level fine-grained classification on the cleaned commodity data set to obtain a preset fine-grained commodity data set.
Optionally, the data set obtaining unit 707 is further configured to:
and carrying out multi-level fine-grained classification on the cleaned commodity data set according to commodity category, commodity brand and commodity package to obtain a preset fine-grained commodity data set.
Optionally, the data set obtaining unit 707 is further configured to:
performing rotation and flipping at different angles, and random processing of color, hue and brightness, on a plurality of commodity images in the preset fine-grained commodity data set, to obtain a fine-grained commodity data set after data expansion;
training an initial fine-grained commodity identification model according to a preset fine-grained commodity data set to obtain a model prediction result, wherein the method comprises the following steps of:
training an initial fine-grained commodity identification model according to the fine-grained commodity data set after data expansion to obtain a model prediction result.
The fine-grained intelligent commodity identification apparatus 70 may be used to perform the methods described in the embodiments shown in fig. 2 to fig. 6; therefore, for descriptions of the functions that can be implemented by the functional modules of the fine-grained intelligent commodity identification apparatus 70, reference may be made to the embodiments shown in fig. 2 to fig. 6, and details are not repeated here.
In some possible implementations, aspects of the methods provided herein may also be implemented in the form of a program product comprising program code for causing a computer device to carry out the steps of the methods described herein above according to the various exemplary embodiments of the application, when the program product is run on the computer device, e.g. the computer device may carry out the methods as carried out by the fine-grained smart item identification apparatus in the examples shown in fig. 2-6.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware associated with program instructions, where the foregoing program may be stored in a computer readable storage medium, and when executed, the program performs steps including the above method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk or an optical disk, or the like, which can store program codes. Alternatively, the above-described integrated units of the present invention may be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in essence or a part contributing to the prior art in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, ROM, RAM, magnetic or optical disk, or other medium capable of storing program code.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (10)

1. A fine-grained intelligent commodity identification method, characterized in that the method comprises:
determining whether a change value of the weight of the commodity in the target shopping cart is greater than a preset weight value;
if the change value of the weight of the commodity in the target shopping cart is larger than the preset weight value, shooting the newly-added commodity in the target shopping cart to obtain a target newly-added commodity image;
inputting the target newly-added commodity image into a trained fine-grained commodity identification model for feature identification, and outputting target newly-added commodity features; the intermediate layer of the trained fine-grained commodity identification model comprises a plurality of stages, each stage performs feature extraction on newly-added commodity images of different viewing angles, different stages correspond to different auxiliary loss functions, and the output layer performs cross feature fusion on the different features extracted by the intermediate layer using a Transformer structure and cross-attention;
and matching a target commodity name from a local index library according to the target newly-added commodity features.
2. The method of claim 1, wherein the step of capturing the new merchandise in the target shopping cart to obtain the target new merchandise image if the change value of the weight of the merchandise in the target shopping cart is determined to be greater than the preset weight value comprises:
if the change value of the commodity weight in the target shopping cart is larger than the preset weight value, shooting the newly-added commodity in the target shopping cart to obtain an initial commodity image;
performing frame difference method segmentation on the initial commodity image to obtain a commodity segmentation image;
and carrying out main body recognition on the commodity segmentation image to obtain a target newly-added commodity image.
3. The method of claim 1, wherein after matching the target commodity name from the local index library based on the target newly added commodity feature, the method further comprises:
determining whether the target commodity name is consistent with the obtained code scanning commodity name;
and if the target commodity name is inconsistent with the obtained code scanning commodity name, sending corresponding prompt information to a user.
4. The method of claim 1, wherein prior to inputting the target newly added commodity image into the trained fine-grained commodity identification model for feature identification, outputting target newly added commodity features, the method further comprises:
training an initial fine-grained commodity identification model according to a preset fine-grained commodity data set to obtain a model prediction result;
obtaining a plurality of cross entropy loss function values according to the model prediction result;
and updating the learning weights of different stages in the initial fine-grained commodity identification model according to the multiple cross entropy loss function values to obtain a trained fine-grained commodity identification model.
5. The method of claim 4, wherein prior to training the initial fine-grained commodity identification model according to the pre-set fine-grained commodity data set to obtain the model predictive result, the method further comprises:
constructing an initial commodity data set according to the plurality of public data sets and the supermarket actual shopping cart scene data set;
carrying out data cleaning on the initial commodity data set to obtain a cleaned commodity data set;
and performing multi-level fine-grained classification on the cleaned commodity data set to obtain the preset fine-grained commodity data set.
6. The method of claim 5, wherein the step of performing multi-level fine-grained classification on the cleaned commodity data set to obtain the preset fine-grained commodity data set comprises:
and performing multi-level fine-grained classification on the cleaned commodity data set according to commodity category, commodity brand and commodity packaging to obtain the preset fine-grained commodity data set.
7. The method of claim 4, wherein prior to training the initial fine-grained commodity identification model according to the preset fine-grained commodity data set to obtain the model prediction result, the method further comprises:
performing rotation at different angles, flipping, and random processing of color, hue and brightness on a plurality of commodity images in the preset fine-grained commodity data set to obtain a data-expanded fine-grained commodity data set;
the step of training the initial fine-grained commodity identification model according to the preset fine-grained commodity data set to obtain a model prediction result comprises the following steps:
training the initial fine-grained commodity identification model according to the data-expanded fine-grained commodity data set to obtain the model prediction result.
8. A fine-grained intelligent commodity identification device, characterized in that the device comprises:
the weight determining unit is used for determining whether the change value of the weight of the commodity in the target shopping cart is greater than a preset weight value;
the image acquisition unit is used for photographing the newly-added commodity in the target shopping cart to obtain a target newly-added commodity image if the change value of the commodity weight in the target shopping cart is greater than the preset weight value;
the feature output unit is used for inputting the target newly-added commodity image into a trained fine-grained commodity identification model to perform feature identification and outputting a target newly-added commodity feature; wherein the middle layer of the trained fine-grained commodity identification model comprises a plurality of stages, each stage respectively performs feature extraction on newly-added commodity images from different viewing angles, different stages correspond to different auxiliary loss functions, and the output layer performs cross feature fusion on the different features extracted by the middle layer by means of a Transformer structure and cross attention;
and the name output unit is used for matching a target commodity name from a local index library according to the target newly-added commodity feature.
9. An electronic device, the device comprising:
a memory for storing program instructions;
a processor for invoking program instructions stored in the memory and for performing the method of any of claims 1-7 in accordance with the obtained program instructions.
10. A storage medium having stored thereon computer executable instructions for causing a computer to perform the method of any one of claims 1-7.
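
The following Python sketch is offered only as an illustration of the overall flow recited in claim 1: a weight-change check, photographing of the newly-added commodity, feature extraction, and a name lookup in a local index. The threshold value, the camera and feature_model objects, and the cosine-similarity matching are assumptions made for this sketch and are not specified by the claims.

# Illustrative sketch only: one possible control flow for the method of claim 1.
# All names (WEIGHT_THRESHOLD_G, camera, feature_model, LocalIndex) are
# hypothetical and do not come from the patent text.
import numpy as np

WEIGHT_THRESHOLD_G = 20.0  # hypothetical preset weight value, in grams

def on_weight_change(prev_weight_g, new_weight_g, camera, feature_model, index):
    """Trigger recognition when the cart weight change exceeds the preset value."""
    if new_weight_g - prev_weight_g <= WEIGHT_THRESHOLD_G:
        return None                       # no newly-added commodity detected
    image = camera.capture()              # photograph the newly-added commodity
    feature = feature_model(image)        # fine-grained feature vector
    return index.match(feature)           # look up the target commodity name

class LocalIndex:
    """Toy local index: nearest neighbour over stored commodity features."""
    def __init__(self, names, features):
        self.names = names
        self.features = np.asarray(features, dtype=np.float32)

    def match(self, query):
        q = query / (np.linalg.norm(query) + 1e-8)
        db = self.features / (np.linalg.norm(self.features, axis=1, keepdims=True) + 1e-8)
        return self.names[int(np.argmax(db @ q))]   # highest cosine similarity

In use, feature_model would stand in for the trained fine-grained commodity identification model and LocalIndex for whatever structure actually backs the local index library.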
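
Claim 2 describes frame-difference segmentation followed by main-body recognition. A minimal OpenCV sketch of that idea is given below, assuming a reference frame captured before the commodity was added; the binarisation threshold of 25 and the minimum contour area are example values, not values from the patent.

# Illustrative sketch of frame-difference segmentation and main-body cropping.
import cv2
import numpy as np

def segment_new_commodity(reference_bgr, current_bgr, min_area=2000):
    """Return a crop of the largest changed region (the newly-added commodity)."""
    ref_gray = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(current_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(cur_gray, ref_gray)                 # frame difference
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return current_bgr[y:y + h, x:x + w]                   # main-body crop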
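
Claims 1 and 4 describe a middle layer with several stages, per-stage auxiliary loss functions, and an output layer that fuses the stage features through a Transformer-style cross-attention step, with stage-wise learning weights updated from multiple cross entropy loss values. The PyTorch sketch below shows one way such a structure could be wired together; the tiny convolutional stages, the embedding size, the number of attention heads and the initial stage weights are all assumptions made for illustration.

# Illustrative multi-stage model with auxiliary losses and cross-attention fusion.
import torch
import torch.nn as nn

class MultiStageFineGrainedModel(nn.Module):
    def __init__(self, num_classes, num_stages=3, dim=256):
        super().__init__()
        # One lightweight feature-extraction stage per viewing angle (assumption).
        self.stages = nn.ModuleList(
            nn.Sequential(nn.Conv2d(3, dim, 3, stride=4, padding=1),
                          nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1),
                          nn.Flatten())
            for _ in range(num_stages))
        self.aux_heads = nn.ModuleList(nn.Linear(dim, num_classes) for _ in range(num_stages))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, views):                        # views: list of (B, 3, H, W) tensors
        feats = [stage(v) for stage, v in zip(self.stages, views)]
        aux_logits = [h(f) for h, f in zip(self.aux_heads, feats)]
        tokens = torch.stack(feats, dim=1)           # (B, num_stages, dim)
        fused, _ = self.cross_attn(tokens, tokens, tokens)   # cross-attention fusion
        return self.head(fused.mean(dim=1)), aux_logits

def total_loss(main_logits, aux_logits, target, stage_weights):
    """Main cross entropy plus weighted auxiliary cross entropies, one per stage."""
    ce = nn.CrossEntropyLoss()
    loss = ce(main_logits, target)
    for w, logits in zip(stage_weights, aux_logits):
        loss = loss + w * ce(logits, target)
    return loss

A training loop would back-propagate total_loss and could adjust stage_weights between epochs according to the observed auxiliary loss values, which is one possible reading of the weight update described in claim 4.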
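
Claims 5 and 6 organise the cleaned commodity data set by commodity category, brand and packaging. The short sketch below shows one possible multi-level label structure; the field names, example values and grouping helper are invented for illustration.

# Illustrative multi-level fine-grained label: (category, brand, package).
from dataclasses import dataclass

@dataclass(frozen=True)
class FineGrainedLabel:
    category: str   # e.g. "carbonated drink"
    brand: str      # e.g. "BrandX"
    package: str    # e.g. "500 ml bottle"

def build_fine_grained_dataset(cleaned_samples):
    """Group cleaned (image, FineGrainedLabel) pairs by their multi-level label."""
    dataset = {}
    for image, label in cleaned_samples:
        dataset.setdefault(label, []).append(image)
    return dataset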
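
Claim 7 expands the data set with rotations at different angles, flips, and random processing of color, hue and brightness. A torchvision sketch of such a pipeline follows; the rotation range and jitter strengths are example values only, not values given in the patent.

# Illustrative data-expansion pipeline for the preset fine-grained commodity data set.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=30),                              # rotation at different angles
    transforms.RandomHorizontalFlip(p=0.5),                             # flipping
    transforms.RandomVerticalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, saturation=0.2, hue=0.05),   # color / hue / brightness
    transforms.ToTensor(),
])
# Applying `augment` to each PIL image in the preset fine-grained commodity data set
# yields the data-expanded data set used for training.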
CN202311385653.1A 2023-10-25 2023-10-25 Fine-grained intelligent commodity identification method, device, equipment and medium Active CN117115571B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311385653.1A CN117115571B (en) 2023-10-25 2023-10-25 Fine-grained intelligent commodity identification method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN117115571A (en) 2023-11-24
CN117115571B (en) 2024-01-26

Family

ID=88809637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311385653.1A Active CN117115571B (en) 2023-10-25 2023-10-25 Fine-grained intelligent commodity identification method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN117115571B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117422937B (en) * 2023-12-18 2024-03-15 成都阿加犀智能科技有限公司 Intelligent shopping cart state identification method, device, equipment and storage medium
CN117542031A (en) * 2024-01-10 2024-02-09 成都阿加犀智能科技有限公司 Commodity identification method, device, equipment and medium based on intelligent shopping cart

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018002864A2 (en) * 2016-06-30 2018-01-04 Rami VILMOSH Shopping cart-integrated system and method for automatic identification of products
CN110164033A (en) * 2018-02-13 2019-08-23 青岛海尔特种电冰柜有限公司 Merchandise news extracting method, merchandise news extraction element and automatically vending system
CN111144871A (en) * 2019-12-25 2020-05-12 创新奇智(合肥)科技有限公司 Method for correcting image recognition result based on weight information
CN112949672A (en) * 2019-12-11 2021-06-11 顺丰科技有限公司 Commodity identification method, commodity identification device, commodity identification equipment and computer readable storage medium
CN113780248A (en) * 2021-11-09 2021-12-10 武汉星巡智能科技有限公司 Multi-view-angle identification commodity intelligent order generation method and device and intelligent vending machine
CN114267064A (en) * 2021-12-23 2022-04-01 成都阿加犀智能科技有限公司 Face recognition method and device, electronic equipment and storage medium
CN115424086A (en) * 2022-07-26 2022-12-02 北京邮电大学 Multi-view fine-granularity identification method and device, electronic equipment and medium
CN115840417A (en) * 2020-09-23 2023-03-24 科沃斯商用机器人有限公司 Target identification method, device and storage medium based on artificial intelligence
CN116071721A (en) * 2023-02-27 2023-05-05 复旦大学 Transformer-based high-precision map real-time prediction method and system
CN116229580A (en) * 2023-03-22 2023-06-06 同济大学 Pedestrian re-identification method based on multi-granularity pyramid intersection network

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7130355B2 (en) * 2017-03-06 2022-09-05 東芝テック株式会社 Check device and check program
US20200151692A1 (en) * 2018-04-18 2020-05-14 Sbot Technologies, Inc. d/b/a Caper Inc. Systems and methods for training data generation for object identification and self-checkout anti-theft
CN114445633A (en) * 2022-01-25 2022-05-06 腾讯科技(深圳)有限公司 Image processing method, apparatus and computer-readable storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A Cross-Layer Self-attention Learning Network for Fine-grained Classification; Jianhua Chen et al.; 2023 3rd International Conference on Consumer Electronics and Computer Engineering; 541-545 *
Fine-grained image recognition based on mid-level subtle feature extraction and multi-scale feature fusion; Qi Ailing et al.; Journal of Computer Applications; Vol. 43, No. 8; 2556-2563 *
Fine-grained image classification based on deep learning; Zheng Zhiwen; China Master's Theses Full-text Database (Information Science and Technology); No. 1; I138-2890 *
Research on weakly supervised fine-grained image recognition technology; Min Shaobo; China Doctoral Dissertations Full-text Database (Information Science and Technology); No. 9; I138-34 *

Similar Documents

Publication Publication Date Title
CN117115571B (en) Fine-grained intelligent commodity identification method, device, equipment and medium
US11250487B2 (en) Computer vision and image characteristic search
CN108460389B (en) Type prediction method and device for identifying object in image and electronic equipment
US10282722B2 (en) Machine learning system, method, and program product for point of sale systems
CN108875487B (en) Training of pedestrian re-recognition network and pedestrian re-recognition based on training
CN110598084A (en) Object sorting method, commodity sorting device and electronic equipment
CN109582813A (en) A kind of search method, device, equipment and the storage medium of historical relic showpiece
CN113627411A (en) Super-resolution-based commodity identification and price matching method and system
CN113935774A (en) Image processing method, image processing device, electronic equipment and computer storage medium
Becker et al. Mad for visual tracker fusion
CN113344055A (en) Image recognition method, image recognition device, electronic equipment and medium
Gothai et al. Design features of grocery product recognition using deep learning
Liu et al. An edge computing visual system for vegetable categorization
CN112232334B (en) Intelligent commodity selling identification and detection method
CN113837257A (en) Target detection method and device
CN114821234A (en) Network training and target detection method, device, equipment and storage medium
Liu et al. Moving object detection based on improved ViBe algorithm
CN113868453B (en) Object recommendation method and device
US11842540B2 (en) Adaptive use of video models for holistic video understanding
CN114220006A (en) Commodity identification method and system based on commodity fingerprints
CN114332602A (en) Commodity identification method of intelligent container
CN109740646B (en) Image difference comparison method and system and electronic device
Liu et al. Research on image recognition of supermarket commodity based on convolutional neural network
CN113033576A (en) Image local feature extraction method, image local feature extraction model training method, image local feature extraction equipment and storage medium
CN111860516A (en) Merchant name determining method, device, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant