WO2023198182A1 - Method, system and storage medium for confirming the quantity of goods in a shopping cart (购物车内商品数量确认的方法、系统及存储介质) - Google Patents


Info

Publication number
WO2023198182A1
Authority
WO
WIPO (PCT)
Prior art keywords
trajectory
shopping cart
product
image
shopping
Prior art date
Application number
PCT/CN2023/088353
Other languages
English (en)
French (fr)
Inventor
闫凤图
刘兵
盖程鹏
张剑
曙光
李想
Original Assignee
烟台创迹软件有限公司
株式会社Retail AI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 烟台创迹软件有限公司 and 株式会社Retail AI
Publication of WO2023198182A1

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Definitions

  • the present disclosure relates to the field of computer vision technology, for example, to methods, systems and storage media for confirming the quantity of goods in a shopping cart.
  • Supermarket shopping is a lifestyle that cannot be replaced by online shopping.
  • smart shopping carts with self-service checkout functions have appeared in large supermarkets.
  • Customers can scan the codes of the goods they wish to purchase themselves during the shopping process and settle quickly once shopping is completed, which greatly reduces the time spent queuing for checkout in traditional shopping.
  • the missed scanning of smart shopping carts is mainly prevented by determining whether the number of items in the shopping cart is consistent with the number of items in the customer's shopping list.
  • the above technology can be implemented through hardware equipment.
  • the hardware equipment is, for example, a gravity induction scale.
  • the weight of each product is stored in a database in advance; during the purchase process, the weight of the scanned product obtained from the database is compared with the weight change measured by the gravity induction scale to verify the quantity of goods.
  • the above technology can also be implemented through software, for example, through the image difference recognition method.
  • the foreground and background of the image in the shopping cart are separated through frame differencing and background modeling, and the foreground image is then matched and identified; alternatively, skin color modeling is used.
  • In that method, the moving target is obtained through differencing, and a skin color model determines whether the moving target is holding a product; when it is, a neural network model identifies the product. A camera captures images of products being placed into or taken out of the shopping cart, and the quantity of purchased goods is then identified through image preprocessing, feature extraction and the neural network model.
  • solutions that rely on hardware devices require communication between multiple devices, complex post-deployment maintenance and high initial investment; traditional image processing solutions depend heavily on the environment and background, so recognition performance degrades sharply under environmental interference and generalizes poorly; methods that identify products with neural network models require large product data sets, misrecognize new products when the model is not updated in time, and miss detections when products overlap.
  • the present disclosure provides a method, system and storage medium for confirming the quantity of goods in a shopping cart.
  • a method for confirming the quantity of goods in a shopping cart includes: during the shopping process, acquiring an image in the shopping cart in real time, and preprocessing the image to obtain a processed image; performing target detection and tracking based on a deep learning model on the processed image to obtain a first trajectory; and performing target detection and tracking based on digital image processing on the processed image to obtain a second trajectory.
  • the acquisition of the first trajectory and the acquisition of the second trajectory are performed at the same time; when a product with a code scanning operation enters or exits the shopping cart, the shopping behavior is judged according to the first trajectory; when a product without a code scanning operation enters or exits the shopping cart, the shopping behavior is judged according to the second trajectory, and a judgment result is obtained; when the judgment result shows that the number of goods in the shopping cart is inconsistent with the number of scanned goods, the customer is reminded on the shopping cart side, and/or the inconsistency is fed back to the supermarket.
  • a system for confirming the quantity of goods in a shopping cart includes: a shopping cart configured to hold goods; an image acquisition terminal disposed above any side of the shopping cart and configured to acquire images within the shopping cart's accommodation range in real time; a code scanning terminal provided on the shopping cart and configured to scan the codes of goods; and a settlement terminal provided on the shopping cart and configured to combine the acquired images within the shopping cart's accommodation range with the product scan code information to confirm the quantity of goods in the shopping cart according to the aforementioned method.
  • a storage medium stores computer instructions. When the instructions are executed by a processor, the aforementioned method for confirming the quantity of goods in a shopping cart is implemented.
  • Figure 1 is a schematic flowchart of a method for confirming the quantity of goods in a shopping cart provided by an embodiment of the present disclosure
  • Figure 2 is a schematic flowchart of another method for confirming the quantity of goods in a shopping cart provided by an embodiment of the present disclosure
  • Figure 3 is a schematic diagram of a system for confirming the quantity of goods in a shopping cart provided by an embodiment of the present disclosure
  • Figure 4 is a schematic diagram of a processed image provided by an embodiment of the present disclosure.
  • "and/or" describes an association between objects and indicates that three relationships may exist.
  • "A and/or B" can mean three cases: A exists alone, A and B exist simultaneously, or B exists alone.
  • the character "/" generally indicates that the related objects are in an "or" relationship.
  • Optical flow is the instantaneous velocity of the pixel motion of a spatially moving object on the observation plane; the optical flow method uses the temporal changes of pixels in an image sequence and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, thereby calculating the motion information of objects between adjacent frames.
  • Target detection algorithms based on deep learning models include: one-stage methods, such as the YOLO (You Only Look Once) series and the Single Shot MultiBox Detector (SSD) series of algorithms.
  • the main idea of one-stage methods is to densely sample the image or feature map uniformly at different locations.
  • Different scales and aspect ratios can be used during sampling, and the features extracted by a convolutional neural network are then used to directly predict the category and location of each object. Two-stage methods, such as the Regions-Convolutional Neural Network (R-CNN) series of algorithms, first generate a series of sparse candidate regions through heuristic methods or a convolutional neural network, then classify and regress these candidate boxes, and finally synthesize the results.
  • the Dense Inverse Search (DIS) optical flow algorithm strikes a balance between optical flow quality and computation time.
  • Figure 1 is a schematic flowchart of a method for confirming the quantity of goods in a shopping cart provided by an embodiment of the present disclosure.
  • an embodiment of the present disclosure provides a method for confirming the quantity of goods in a shopping cart.
  • the method includes: Step S1: During the shopping process, obtain images in the shopping cart in real time and preprocess them to obtain processed images; Step S2: Perform target detection and tracking based on the deep learning model on the processed image to obtain the first trajectory, and perform target detection and tracking based on digital image processing on the processed image to obtain the second trajectory.
  • Step S3: When a product with a code scanning operation enters or exits the shopping cart, judge the shopping behavior based on the first trajectory; when a product without a code scanning operation enters or exits the shopping cart, judge the shopping behavior according to the second trajectory and obtain the judgment result; Step S4: When the judgment result shows that the number of goods in the shopping cart is inconsistent with the number of scanned goods, remind the customer on the shopping cart side and/or feed the inconsistency back to the supermarket.
  • the method for confirming the quantity of goods in the shopping cart can achieve the following technical effects: during the shopping process, different trajectory detection and tracking methods are used to judge the shopping behavior according to different shopping states, and the number of products in the shopping cart is then checked against the number of products scanned. This allows the customer to be accurately and politely reminded when the number of products in the cart does not match the shopping list, effectively improving the shopping experience and avoiding unnecessary losses for the supermarket. The method adapts to a variety of complex application scenarios, can confirm the number of items in the cart even under frequent lighting changes, generalizes well, and has low equipment requirements; compared with solutions that rely on hardware, it reduces the initial investment and simplifies later maintenance.
  • the method for confirming the quantity of goods in a shopping cart is applied to a smart shopping cart with a self-service settlement function that is integrated with an image acquisition terminal, a code scanning terminal, and a settlement terminal.
  • the smart shopping cart with self-service checkout function requires that each item placed in the cart has been scanned; if an unscanned product is placed in the cart, the smart shopping cart gives a corresponding prompt.
  • image preprocessing includes: grayscale transformation, geometric transformation, mask processing, and image enhancement.
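As an illustrative sketch of the preprocessing steps listed above, the following NumPy-only pipeline performs a grayscale transform, mask processing over the cart's region of interest, and a min-max contrast stretch as a stand-in for image enhancement. The function name, ROI convention, and the specific enhancement operation are assumptions for illustration (the geometric transformation step is omitted), not the patent's exact operations:

```python
import numpy as np

def preprocess(frame, roi):
    """Illustrative preprocessing sketch: grayscale transform, mask to the
    cart's region of interest, and a simple contrast stretch as the
    'image enhancement' step."""
    # Grayscale transform: standard BGR luminance weights.
    gray = 0.299 * frame[..., 2] + 0.587 * frame[..., 1] + 0.114 * frame[..., 0]
    # Mask processing: zero out everything outside the cart opening.
    x0, y0, x1, y1 = roi
    mask = np.zeros_like(gray)
    mask[y0:y1, x0:x1] = 1.0
    gray = gray * mask
    # Image enhancement: min-max contrast stretch inside the ROI.
    region = gray[y0:y1, x0:x1]
    lo, hi = region.min(), region.max()
    if hi > lo:
        gray[y0:y1, x0:x1] = (region - lo) / (hi - lo) * 255.0
    return gray.astype(np.uint8)
```

Equivalent OpenCV calls (e.g. `cv2.cvtColor`, `cv2.normalize`) would serve the same purpose; the point here is only the order of the steps.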
  • the deep learning models used include: SSD series models, YOLO series models and R-CNN series models.
  • Target detection and tracking based on digital image processing can use background difference or optical flow and other methods.
  • performing target detection and tracking based on a deep learning model on the processed image to obtain the first trajectory includes: Step S21: Detect whether there is a hand in the processed image of the current frame; when there is a hand, go to step S22; when there is no hand, go to step S23. Step S22: Determine whether the hand detection frame in the processed image of the current frame matches the current trajectory; when it matches, add the processed image of the current frame to the image set corresponding to the current trajectory, and obtain the first trajectory from the current trajectory. Step S23: Set the status of the current trajectory to the reserved state, and after a preset time interval, set it to the end state.
  • the first trajectory obtained in step S22 is the trajectory of one hand entering and exiting the shopping cart; that is, each of the user's hand-reaching actions yields a corresponding first trajectory.
  • when the hand detection frame matches the current trajectory, the method also includes determining whether the hand holds a product; different identifiers can be added to images whose hand detection frame shows a hand holding a product and to images whose hand detection frame shows an empty hand, so that the type of the first trajectory can subsequently be determined from these identifiers and different judgment results can be derived from first trajectories of different categories.
  • when the hand detection frame does not match the current trajectory, the currently detected hand detection frame may belong to another hand entering the image; for example, when different users shop with the same cart, different hands may enter and exit the cart.
  • In that case a new trajectory can be created for the currently detected hand detection frame. That is, in this embodiment multiple first trajectories may exist at the same time, which allows several users to use one shopping cart simultaneously, improving the customer experience while preserving recognition accuracy.
  • a matching operation is performed between the hand position obtained through target detection and the hand position in the current trajectory. For example, the area of the overlapping region between the hand position obtained through target detection and the most recently determined hand position in the current trajectory is calculated; when that area is greater than a preset threshold, the detected hand position is determined to match the current trajectory, the detected hand position is placed on the current trajectory, and the processed image of the current frame is added to the image set corresponding to the current trajectory.
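The overlap-area matching just described can be sketched as follows; the box format (x0, y0, x1, y1) and the threshold semantics are assumptions for illustration:

```python
def overlap_area(box_a, box_b):
    """Area of the intersection of two (x0, y0, x1, y1) detection boxes."""
    x0 = max(box_a[0], box_b[0])
    y0 = max(box_a[1], box_b[1])
    x1 = min(box_a[2], box_b[2])
    y1 = min(box_a[3], box_b[3])
    return max(0, x1 - x0) * max(0, y1 - y0)

def matches_trajectory(detection, trajectory, area_threshold):
    """A detection extends the trajectory when its box overlaps the
    trajectory's most recently determined hand position by more than
    the preset threshold."""
    last_box = trajectory[-1]
    return overlap_area(detection, last_box) > area_threshold
```

When the detection matches no existing trajectory, a new trajectory is started from it, which is how multiple simultaneous first trajectories arise.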
  • otherwise, the trajectory of the hand position obtained by target detection may be a new trajectory.
  • a current trajectory that is not matched to a new image frame is placed in the reserved state and is called a reserved trajectory.
  • After the status of the current trajectory is set to the reserved state, and before it is set to the end state at the end of the preset time interval, a predicted trajectory can also be determined.
  • The predicted trajectory is obtained, when no hand movement is detected, by calculating the predicted movement direction and distance of the hand.
  • That is, the movement direction and distance of the hand over a future time period are predicted, and the trajectory determined from this prediction is the predicted trajectory.
  • the hand detection frame in a processed image can then be matched against the most recently predicted hand position in the predicted trajectory.
  • The matching method is similar to matching the hand detection frame with the current trajectory: it determines whether the position of the hand detection frame in the processed image lies on the predicted trajectory. If it does, the processed image is used as a starting point to continue determining the trajectory of the hand detection frame's position.
  • The trajectory continued in this way is called the matching trajectory; it can be regarded as a trajectory that matches the predicted trajectory or the current trajectory in the reserved state.
  • the current trajectory, predicted trajectory and matching trajectory in the above scheme can all be used as the final first trajectory.
  • the trajectory accuracy can be improved and the fault tolerance rate of this solution can be improved.
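The predicted trajectory described above can be sketched as a simple linear extrapolation from the last two observed hand positions. The patent does not specify the prediction model, so constant-velocity extrapolation is an assumption made only for illustration:

```python
def predict_next_position(positions, steps=1):
    """When the hand is not detected in the current frame, extrapolate its
    next position from the last observed movement direction and distance,
    assuming constant velocity."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    dx, dy = x1 - x0, y1 - y0  # last observed displacement per frame
    return (x1 + dx * steps, y1 + dy * steps)
```

A newly detected hand box can then be matched against the predicted position with the same overlap test used for the current trajectory, yielding the matching trajectory.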
  • in step S2, performing target detection and tracking based on digital image processing on the processed image to obtain the second trajectory includes: Step S24: Detect whether there are moving objects in the processed image of the current frame; when there are, go to step S25; when there are not, go to step S26. Step S25: Add the processed image of the current frame to the image set corresponding to the current trajectory, and obtain the second trajectory from the current trajectory. Step S26: Set the status of the current trajectory to the reserved state.
  • Step S25 includes: matching the position of the moving object with the current trajectory.
  • When the position of the moving object matches the current trajectory, the processed image of the current frame is added to the image set corresponding to the current trajectory.
  • When the position of the moving object does not match the current (optical flow) trajectory, a new trajectory is created for it.
  • the optical flow method includes the DIS optical flow algorithm together with filtering by optical flow direction and contour area.
  • The optical flow method or background difference can be used to detect whether there are moving objects in the processed image of the current frame.
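A minimal background-difference detector for step S24 might look like the following; the threshold values and the pixel-count test are illustrative assumptions, and a production system could instead use the DIS optical flow algorithm mentioned above:

```python
import numpy as np

def moving_object_mask(frame, background, diff_threshold=25):
    """Background difference: pixels whose absolute difference from the
    background model exceeds the threshold are flagged as motion."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > diff_threshold

def has_moving_object(frame, background, min_area=4, diff_threshold=25):
    """Step S24 sketch: a moving object is considered present when the
    number of changed pixels reaches a minimum area."""
    mask = moving_object_mask(frame, background, diff_threshold)
    return int(mask.sum()) >= min_area
```

With OpenCV available, `cv2.DISOpticalFlow_create` or a background subtractor such as `cv2.createBackgroundSubtractorMOG2` would fill the same role with more robustness to lighting changes.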
  • in step S26, after the status of the current trajectory is set to the reserved state, the movement direction and distance of the moving object over a future time period can be predicted from the movement direction and speed already determined in the current trajectory.
  • The trajectory determined from this predicted movement direction and distance is the predicted trajectory.
  • the position of the moving object in the processed image can be compared with the latest predicted position of the moving object in the predicted trajectory.
  • the matching method is similar to matching the hand detection frame with the current trajectory: it determines whether the position of the moving object in the processed image lies on the predicted trajectory, and if so, the processed image is used as a starting point to continue determining the trajectory of the moving object's position.
  • The trajectory continued in this way is called the matching trajectory.
  • The matching trajectory can be regarded as a trajectory that matches the predicted trajectory or the current trajectory in the reserved state.
  • The current trajectory, predicted trajectory and matching trajectory in the above scheme can all serve as the finally determined second trajectory.
  • step S3 also includes: respectively determining whether the first trajectory and the second trajectory have ended, and determining whether each ended trajectory is valid; for an unfinished trajectory, recording the trajectory information and returning to step S1; and judging the shopping behavior for each valid first trajectory or valid second trajectory. When the number of reserved frames of the first trajectory is greater than the first preset threshold, the first trajectory is determined to have ended; otherwise it has not ended. Likewise, when the number of reserved frames of the second trajectory is greater than the first preset threshold, the second trajectory is determined to have ended; otherwise it has not ended. When the hand cannot be detected, the hand's movement direction is predicted by calculation and the trajectory is corrected accordingly; when the number of such frames in the trajectory exceeds the number of reserved frames, the trajectory is considered to have ended.
  • determining whether an ended trajectory is valid includes: obtaining the trajectory length of the ended trajectory; when the trajectory length is greater than a second preset threshold, the ended trajectory is valid, and when it is not, the ended trajectory is invalid.
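The two tests above reduce to a pair of threshold comparisons; `first_threshold` and `second_threshold` below stand in for the first and second preset thresholds, whose concrete values the patent leaves to the implementer:

```python
def trajectory_ended(reserved_frames, first_threshold):
    """A trajectory ends once its count of reserved (unmatched) frames
    exceeds the first preset threshold."""
    return reserved_frames > first_threshold

def trajectory_valid(trajectory_length, second_threshold):
    """An ended trajectory is kept only if it is longer than the second
    preset threshold; shorter trajectories are discarded as noise."""
    return trajectory_length > second_threshold
```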
  • in step S3, when a product with a code scanning operation enters or exits the shopping cart, judging the shopping behavior based on the valid first trajectory includes: Step S31: Based on the order in which images of the hand holding a product and images of the empty hand were added to the image set corresponding to the valid first trajectory, classify the trajectory as a first type, a second type or a third type of trajectory.
  • Step S32: When the valid first trajectory is of the first type, obtain the first ratio of the number of frames showing the hand holding a product to the number of frames showing the empty hand; if the first ratio is greater than the third preset threshold, determine that the product has entered the shopping cart, otherwise determine that it has not.
  • Step S33: When the valid first trajectory is of the third type, obtain the second ratio of the number of frames showing the hand holding a product to the number of frames showing the empty hand.
  • If the second ratio is greater than the fourth preset threshold, it is determined that the product is taken out of the shopping cart.
  • If the second ratio is not greater than the fourth preset threshold, it is determined that the product is not taken out of the shopping cart.
  • in step S31, a first trajectory whose image set contains images of the hand holding a product added earlier and images of the empty hand added later can be taken as the first type of trajectory;
  • a first trajectory in which the images of the hand holding a product were added later and the images of the empty hand were added earlier is taken as the third type of trajectory;
  • trajectories other than the first and third types are taken as the second type.
  • This second type of trajectory corresponds to entering and exiting the shopping cart empty-handed.
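The classification in step S31 and the ratio test in step S32 can be sketched over a per-frame sequence of holding/not-holding flags. Encoding each frame as a boolean and the tie-breaking for ambiguous sequences are assumptions made for this sketch:

```python
def classify_trajectory(frames):
    """frames: list of booleans, True = the hand holds a product in that
    frame. Type 1: holding frames come first (putting a product in).
    Type 3: empty-hand frames come first (taking a product out).
    Everything else, including all-empty sequences, is type 2."""
    if not frames or all(frames) or not any(frames):
        return 2
    first_hold = frames.index(True)
    first_empty = frames.index(False)
    return 1 if first_hold < first_empty else 3

def product_entered(frames, ratio_threshold):
    """Step S32 sketch for a type-1 trajectory: the product counts as placed
    in the cart when the ratio of holding frames to empty-hand frames
    exceeds the third preset threshold."""
    holding = sum(frames)
    empty = len(frames) - holding
    return empty > 0 and holding / empty > ratio_threshold
```

The symmetric step S33 test for type-3 trajectories compares the same ratio against the fourth preset threshold to decide whether the product was taken out.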
  • in step S3, when a product without a code scanning operation enters or exits the shopping cart, judging the shopping behavior based on the valid second trajectory includes: Step S34: Judge the items entering and exiting the shopping cart based on the optical flow direction of the valid second trajectory. In this way, products entering and leaving the cart without being scanned can be confirmed from their optical flow trajectory.
  • in step S3, the judgment result is obtained as follows: if the product entered the shopping cart and is still in it, the result is that the product was taken; if the product did not enter the shopping cart and was taken out of it, the result is that the product was taken out;
  • if the product entered the shopping cart and was then taken out of it, the result is that the product was replaced; if the product did not enter the shopping cart and is still in it, the result is empty-handed entry and exit; if the optical flow direction points away from the shopping cart, the result is that a product fell out; if the optical flow direction points toward the shopping cart, the result is that a product was thrown in. In this way, the number of items in the cart can be counted accurately and checked for consistency with the number of scanned items.
  • the judgment results of shopping behavior based on the first trajectory include: the product entered the shopping cart and is still in it, so the result is that the product was taken; the product did not enter the shopping cart and was taken out of it, so the result is that the product was taken out; the product entered the shopping cart and was then taken out, so the result is that the product was replaced; the product did not enter the shopping cart and is still in it, so the result is empty-handed entry and exit.
  • the judgment results of shopping behavior based on the second trajectory include: if the optical flow direction points away from the shopping cart, the result is that a product fell out; if the optical flow direction points toward the shopping cart, the result is that a product was thrown in.
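The judgment tables for the two trajectory types can be written out directly; the English result strings below are illustrative labels, not the patent's wording:

```python
def judge_first_trajectory(entered, taken_out):
    """Map (product entered cart, product taken out of cart) to the
    shopping-behavior result for the first (scanned-product) trajectory."""
    table = {
        (True, False): "product taken",          # placed in, still in cart
        (False, True): "product taken out",
        (True, True): "product replaced",        # put in, then removed
        (False, False): "empty-handed in/out",
    }
    return table[(entered, taken_out)]

def judge_second_trajectory(flow_toward_cart):
    """Second-trajectory (optical flow) judgment for unscanned products."""
    return "product thrown in" if flow_toward_cart else "product fell out"
```

Combining these results with the running scan count gives the final consistency check between goods in the cart and goods scanned.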
  • when the first trajectory and the second trajectory exist at the same time, it means that in this behavior the goods entered or exited held in a hand, or the hand entered and exited empty.
  • The classification can then determine whether this behavior is empty-handed entry and exit, that is, the product did not enter the shopping cart and is still in the shopping cart, and the result is empty-handed entry and exit.
  • For the behavior of entering and exiting the shopping cart by hand, first determine whether the product has a corresponding code scanning operation before entering the cart and/or after being taken out of it. If there is a code scanning operation, the first trajectory is used to judge the shopping behavior.
  • the judgment results include: if the product entered the shopping cart and is still in it, the result is that the product was taken; if the product did not enter the shopping cart and was taken out of it, the result is that the product was taken out; if the product entered the shopping cart and was taken out of it, the result is that the product was replaced.
  • otherwise, the second trajectory is used to judge the shopping behavior, and the optical flow direction is combined to remind the user when a product has entered the shopping cart without being scanned.
  • The shopping behavior is judged according to whether the product has a corresponding code scan before entering the cart. For example, when the optical flow direction points toward the shopping cart and the product was scanned before entering, the result is that the product was thrown in and has been scanned, so no prompt is needed; when the optical flow direction points toward the shopping cart and the product was not scanned before entering, the result is that the product was thrown in without being scanned, and the user is prompted to scan the code; when the optical flow direction points away from the shopping cart, the result is that a product fell out.
  • the first preset threshold, the second preset threshold, the third preset threshold and the fourth preset threshold can be set by those skilled in the art according to actual needs.
  • Figure 2 is a schematic flowchart of another method for confirming the quantity of goods in a shopping cart provided by an embodiment of the present disclosure.
  • the video frames of the scene in the shopping cart are read in real time, image preprocessing is performed, and the processed images are subjected to target detection and tracking to obtain the product trajectory.
  • the target detection and tracking include carrying out, at the same time, target detection based on the deep learning model and target detection based on background difference. For target detection based on the deep learning model, first perform target detection on the current frame image to determine whether it contains a hand; when it does, match the hand detection frame against the last frame of each trajectory.
  • When the hand detection frame matches a trajectory, trajectory tracking is performed and the current frame image is added to the image set corresponding to that trajectory.
  • When it matches no trajectory, a new trajectory is created for the hand detection frame.
  • When the current frame image contains no hand, the status of the current trajectory is set to the reserved state. For target detection based on background difference, first determine whether the current frame image contains moving objects; when it does and the position of the moving object matches the current trajectory, trajectory tracking is performed and the current frame image is added to the image set corresponding to the current trajectory; when the position of the moving object does not match the current trajectory, a new trajectory is created.
  • When the current frame image contains no moving objects, the current trajectory is set to the reserved state. If the number of reserved frames of the current trajectory is greater than the first preset threshold, the current trajectory ends; if it is less than or equal to the first preset threshold, the current trajectory has not ended, so the trajectory information is recorded, the next scene video frame in the shopping cart is read, and the preceding steps are repeated. After the current trajectory ends, determine whether it is valid: when its trajectory length is less than or equal to the second preset threshold, the trajectory is invalid and is cleared; when its length is greater than the second preset threshold, the trajectory is valid, the shopping status and behavior are judged, and the judgment result is obtained. When the number of goods in the shopping cart is inconsistent with the number of scanned goods, the customer is reminded on the shopping cart and, at the same time, on the supermarket clerk's handheld terminal.
  • Figure 3 is a schematic diagram of a system for confirming the quantity of goods in a shopping cart provided by an embodiment of the present disclosure.
  • Figure 4 is a schematic diagram of a processed image provided by an embodiment of the present disclosure.
  • An embodiment of the present disclosure also provides a system 10 for confirming the quantity of goods in a shopping cart, including: a shopping cart 110 configured to hold goods; an image acquisition terminal 120 arranged above any side of the shopping cart 110 and configured to acquire images within the accommodation range of the shopping cart; a code-scanning terminal 130 arranged on the shopping cart 110 and configured to scan the codes of goods; and a settlement terminal 140 arranged on the shopping cart 110 and configured to combine the acquired images within the accommodation range of the shopping cart with the product code-scanning information and to confirm the quantity of goods in the shopping cart according to the aforementioned method. The system adapts to a variety of supermarket environments, generalizes well, reduces system cost and is simple to maintain.
  • Embodiments of the present disclosure also provide a computer-readable storage medium on which computer instructions are stored. When the instructions are executed by a processor, the method for confirming the quantity of goods in the shopping cart in any of the foregoing embodiments is implemented.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

Abstract

The present disclosure provides a method, system and storage medium for confirming the quantity of goods in a shopping cart. The method includes: during shopping, acquiring images inside the shopping cart in real time and preprocessing the images to obtain processed images; performing deep-learning-based target detection and tracking on the processed images to obtain a first trajectory, and performing digital-image-processing-based target detection and tracking on the processed images to obtain a second trajectory; when a product that has been code-scanned enters or leaves the shopping cart, judging the shopping behavior according to the first trajectory, and when a product that has not been code-scanned enters or leaves the shopping cart, judging the shopping behavior according to the second trajectory, to obtain a judgment result; and when the quantity of goods in the shopping cart is inconsistent with the quantity of scanned goods, reminding the customer on the shopping cart side and/or feeding the inconsistency information back to the supermarket side.

Description

Method, system and storage medium for confirming the quantity of goods in a shopping cart
This application claims priority to Chinese patent application No. 202210392643.X, filed with the Chinese Patent Office on April 14, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of computer vision technology, for example, to a method, system and storage medium for confirming the quantity of goods in a shopping cart.
Background
Supermarket shopping is a lifestyle that online shopping cannot replace. With market demand and the development of technology, smart shopping carts with self-checkout functions have appeared in large supermarkets. Customers can scan the codes of the goods they want to buy during shopping and settle quickly after shopping, greatly reducing the queuing time of traditional checkout.
In the related art, missed scans on smart shopping carts are mainly prevented by judging whether the quantity of goods in the cart is consistent with the quantity of goods on the customer's shopping list. This can be implemented with hardware, for example a gravity-sensing scale: product weights are stored in a database in advance, and during shopping the weight of a scanned product retrieved from the database is compared with the weight change measured by the scale. It can also be implemented in software, for example by image differencing: the foreground and background of the in-cart image are computed through differencing and background modeling, and the foreground image is then matched and recognized. Another approach is skin-color modeling: a moving target is obtained through differencing, a skin-color model determines whether the moving target is a hand holding a product, and if so, a neural network model recognizes the product; a camera captures images of products placed into or taken out of the cart, and the number of purchased products is recognized through image preprocessing, feature extraction and the neural network model.
The related art has at least the following problems: solutions relying on hardware require communication among multiple devices, are complex to maintain and have high up-front costs; traditional image-processing solutions depend strongly on the environment and background, so recognition performance drops sharply under environmental or other interference and generalization is poor; and methods that recognize products with a neural network model require building large product datasets, cause misrecognition when new products appear but the model cannot be updated in time, and miss detections when products overlap.
Summary
The present disclosure provides a method, system and storage medium for confirming the quantity of goods in a shopping cart.
In a first aspect, a method for confirming the quantity of goods in a shopping cart is provided, including: during shopping, acquiring images inside the shopping cart in real time and preprocessing the images to obtain processed images; performing deep-learning-based target detection and tracking on the processed images to obtain a first trajectory, and performing digital-image-processing-based target detection and tracking on the processed images to obtain a second trajectory, the acquisition of the first trajectory and the acquisition of the second trajectory being carried out simultaneously; when a product that has been code-scanned enters or leaves the shopping cart, judging the shopping behavior according to the first trajectory, and when a product that has not been code-scanned enters or leaves the shopping cart, judging the shopping behavior according to the second trajectory, to obtain a judgment result; and when it is determined from the judgment result that the quantity of goods in the shopping cart is inconsistent with the quantity of scanned goods, reminding the customer on the shopping cart side and/or feeding the inconsistency information back to the supermarket side.
In a second aspect, a system for confirming the quantity of goods in a shopping cart is provided, including: a shopping cart configured to hold goods; an image acquisition terminal arranged above any side of the shopping cart and configured to acquire, in real time, images within the accommodation range of the shopping cart; a code-scanning terminal arranged on the shopping cart and configured to scan the codes of goods; and a settlement terminal arranged on the shopping cart and configured to combine the acquired images within the accommodation range of the shopping cart with the product code-scanning information and to confirm the quantity of goods in the shopping cart according to the aforementioned method.
In a third aspect, a storage medium is provided, storing computer instructions that, when executed by a processor, implement the aforementioned method for confirming the quantity of goods in a shopping cart.
Brief Description of the Drawings
Figure 1 is a schematic flowchart of a method for confirming the quantity of goods in a shopping cart provided by an embodiment of the present disclosure;
Figure 2 is a schematic flowchart of another method for confirming the quantity of goods in a shopping cart provided by an embodiment of the present disclosure;
Figure 3 is a schematic diagram of a system for confirming the quantity of goods in a shopping cart provided by an embodiment of the present disclosure;
Figure 4 is a schematic diagram of a processed image provided by an embodiment of the present disclosure.
Detailed Description
The present application is described and explained below with reference to the accompanying drawings and embodiments. The specific embodiments described here are intended only to explain the present application.
The drawings in the following description are only some examples or embodiments of the present application.
Reference to an "embodiment" in this application means that a particular feature, structure or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to independent or alternative embodiments that are mutually exclusive with other embodiments. In addition, the terms "first", "second" and "third" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance.
Unless otherwise defined, the technical or scientific terms used in this application shall have the ordinary meaning understood by a person of ordinary skill in the art to which this application belongs. Words such as "a", "an", "one" and "the" do not denote a limitation of quantity and may denote the singular or the plural. The terms "include", "comprise", "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or modules (units) is not limited to the listed steps or units, but may further include steps or units not listed, or may further include other steps or units inherent to such a process, method, product or device.
In the embodiments of the present disclosure, "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the preceding and following objects.
Concepts involved in the embodiments of the present disclosure are introduced below. Optical flow is the instantaneous velocity of the pixel motion of a spatially moving object on the observation plane; the optical flow method uses the temporal variation of pixels in an image sequence and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, thereby computing the motion information of objects between adjacent frames.
Deep-learning-based target detection algorithms include one-stage methods, such as the YOLO (You Only Look Once) series and the Single Shot MultiBox Detector (SSD) series, whose main idea is to sample densely and uniformly at different positions of an image or feature map, possibly at different scales and aspect ratios, and to predict the object category and position directly from features extracted by a convolutional neural network; and two-stage methods, such as the Regions-Convolutional Neural Network (R-CNN) series, whose main idea is to first generate a sparse set of region proposals through heuristics or a convolutional neural network, then classify and regress these candidate boxes, and finally combine the results.
The Dense Inverse Search (DIS) optical flow algorithm strikes a balance between optical flow quality and computation time.
For a shopping cart that supports self-checkout, if a customer intentionally or unintentionally fails to scan products and takes unpaid products away, the supermarket suffers a loss; if a customer no longer wants a product that has already been scanned but forgets to scan it for return, the customer suffers a loss and the customer's shopping satisfaction also decreases.
Figure 1 is a schematic flowchart of a method for confirming the quantity of goods in a shopping cart provided by an embodiment of the present disclosure. As shown in Figure 1, the method includes: Step S1: during shopping, acquiring images inside the shopping cart in real time and preprocessing the images to obtain processed images; Step S2: performing deep-learning-based target detection and tracking on the processed images to obtain a first trajectory, and performing digital-image-processing-based target detection and tracking on the processed images to obtain a second trajectory, the first trajectory and the second trajectory being acquired simultaneously; Step S3: when a product that has been code-scanned enters or leaves the shopping cart, judging the shopping behavior according to the first trajectory, and when a product that has not been code-scanned enters or leaves the shopping cart, judging the shopping behavior according to the second trajectory, to obtain a judgment result; Step S4: when the judgment result shows that the quantity of goods in the shopping cart is inconsistent with the quantity of scanned goods, reminding the customer on the shopping cart side and/or feeding the inconsistency information back to the supermarket side.
The method provided by the embodiments of the present disclosure can achieve the following technical effects: during shopping, different trajectory detection and tracking methods are used according to different shopping states to judge shopping behavior and then to judge whether the quantity of goods in the cart is consistent with the quantity of scanned goods; the customer can be accurately and politely reminded whether the quantity of goods in the cart matches the shopping list, effectively improving the shopping experience while avoiding unnecessary losses for the supermarket. The method adapts to a variety of complex application scenarios, can confirm the quantity of goods in the cart under frequently changing lighting, and generalizes well. It has low equipment requirements; compared with hardware-based solutions, it reduces up-front equipment investment and eases later maintenance.
Feeding the anomaly information back to the supermarket terminal device allows a manual check of carts with anomalous shopping information when the customer leaves the store, which effectively helps the supermarket avoid losses.
The method provided by the embodiments of the present disclosure is applied to a smart shopping cart with a self-checkout function that integrates an image acquisition terminal, a code-scanning terminal and a settlement terminal. Such a cart requires that every product placed in it has been code-scanned; if a product that has not been scanned is placed in the cart, the cart issues a corresponding prompt.
In some embodiments, the image preprocessing includes grayscale transformation, geometric transformation, mask processing and image enhancement. The deep learning models used include the SSD series, the YOLO series and the R-CNN series.
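The preprocessing steps above can be sketched as follows. This is a minimal illustration, not the patented implementation: the luminance weights are the standard BGR-to-gray coefficients, the `cart_mask` region and the contrast-stretch enhancement are our own placeholder choices, since the patent does not publish its exact transforms.

```python
import numpy as np

def preprocess(frame_bgr: np.ndarray, cart_mask: np.ndarray) -> np.ndarray:
    """Grayscale transform, mask processing and a simple enhancement step.

    `cart_mask` is a boolean array keeping only pixels inside the cart's
    accommodation range (a hypothetical region for illustration).
    """
    # Grayscale transform via the usual BGR luminance weights.
    gray = (0.114 * frame_bgr[..., 0]
            + 0.587 * frame_bgr[..., 1]
            + 0.299 * frame_bgr[..., 2])
    # Mask processing: zero out everything outside the cart region.
    masked = np.where(cart_mask, gray, 0.0)
    # Image enhancement (placeholder): stretch contrast to 0-255.
    lo, hi = masked.min(), masked.max()
    if hi > lo:
        masked = (masked - lo) * (255.0 / (hi - lo))
    return masked.astype(np.uint8)
```

Geometric transformation (e.g. cropping or warping to a canonical cart view) would precede the mask step and is omitted here.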
The digital-image-processing-based target detection and tracking may use techniques such as background subtraction or optical flow.
In some embodiments, performing deep-learning-based target detection and tracking on the processed images to obtain the first trajectory includes: Step S21: detecting whether a hand is present in the current processed frame; if a hand is present, proceeding to Step S22, and if not, proceeding to Step S23; Step S22: computing whether the hand detection box in the current processed frame matches the current trajectory, and if they match, adding the current processed frame to the image set corresponding to the current trajectory and obtaining the first trajectory from the updated current trajectory; Step S23: setting the state of the current trajectory to the reserved state and, after a preset time interval, setting the state of the current trajectory to the ended state.
The first trajectory obtained in Step S22 corresponds to one hand entering and leaving the shopping cart; that is, multiple hand-reaching actions by the user yield multiple corresponding first trajectories.
Before computing whether the hand detection box matches the current trajectory, the method also includes judging whether the hand is holding a product. Different labels can be added to images whose hand detection boxes hold a product and to images whose hand detection boxes do not, so that the type of the first trajectory can later be determined from these labels and different judgment results can be derived from different classes of first trajectories.
When the hand detection box does not match the current trajectory, the detected box may belong to another hand entering the image; for example, when different users shop with the same cart, different hands may reach into the cart for goods. In that case a new trajectory can be created for the currently detected hand box; that is, in this embodiment multiple first trajectories can exist simultaneously. This allows multiple users to use one cart at the same time, improving the customer experience while maintaining recognition accuracy.
For the current processed frame, the hand position obtained through target detection is matched against the hand position in the current trajectory; for example, the area of the overlap region between the detected hand position and the most recently determined hand position in the current trajectory is computed. When this overlap area is greater than a preset threshold, the detected hand position is determined to match the current trajectory, i.e. it lies on the current trajectory, and the current processed frame is added to the image set corresponding to the current trajectory. When the detected hand position does not match the hand position in the current trajectory, the detected hand position may belong to another, new trajectory. A current trajectory that has not been matched to a new image frame and is set to the reserved state is called a reserved trajectory; this term describes the reserved state of the trajectory.
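The overlap-area test described above can be sketched in a few lines. Boxes are `(x1, y1, x2, y2)` tuples and the concrete threshold value is a placeholder, since the patent only speaks of an unspecified preset threshold:

```python
def overlap_area(box_a, box_b):
    """Area of the intersection of two (x1, y1, x2, y2) boxes."""
    w = min(box_a[2], box_b[2]) - max(box_a[0], box_b[0])
    h = min(box_a[3], box_b[3]) - max(box_a[1], box_b[1])
    return max(0, w) * max(0, h)

def matches_trajectory(det_box, trajectory, area_threshold=100):
    """Compare a detected hand box with the most recent box of a trajectory.

    `area_threshold` stands in for the patent's unspecified preset
    threshold on the overlap region's area.
    """
    if not trajectory:
        return False
    last_box = trajectory[-1]  # most recently determined hand position
    return overlap_area(det_box, last_box) > area_threshold
```

A detected box that matches no trajectory under this test would start a new trajectory, as the paragraph above describes.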
When there is no hand in the current processed frame, the state of the current trajectory is set to the reserved state and, after a preset time interval, to the ended state; a predicted trajectory may also be determined. A predicted trajectory is obtained, when no hand movement is detected, by computing a predicted hand movement direction and distance. In this embodiment, when there is no hand in the current processed frame, for example because of a software or hardware fault, the hand's movement direction and distance over a future period can be predicted from the movement direction and speed already determined in the current trajectory; the trajectory determined from this prediction is the predicted trajectory. Within the preset time interval during which the current trajectory is in the reserved state, if a processed frame containing a hand is detected again, the hand detection box in that frame can be matched against the most recently predicted hand position in the predicted trajectory, in a manner similar to matching a hand detection box against the current trajectory, to determine whether the box lies on the predicted trajectory. If it does, the trajectory of the hand detection box continues to be determined starting from that frame; this continued trajectory is called a matched trajectory, which can be regarded as a trajectory matched to the predicted trajectory or to the current trajectory in the reserved state.
The current trajectory, the predicted trajectory and the matched trajectory in the above scheme can all serve as the finally determined first trajectory. Introducing predicted and matched trajectories improves trajectory accuracy and the fault tolerance of the scheme.
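One way to realize the predicted trajectory described above is simple linear extrapolation from the last observed positions. The constant-velocity assumption is ours, not the patent's, which only states that a movement direction and distance are predicted:

```python
def predict_positions(centers, steps):
    """Extrapolate future (x, y) hand centers from the last two observations.

    Assumes, as a simplification, that the hand keeps its last observed
    per-frame velocity for `steps` further frames.
    """
    if not centers:
        return []
    if len(centers) < 2:
        # No velocity estimate yet: hold the last known position.
        return [centers[-1]] * steps
    (x0, y0), (x1, y1) = centers[-2], centers[-1]
    vx, vy = x1 - x0, y1 - y0  # per-frame displacement
    return [(x1 + vx * k, y1 + vy * k) for k in range(1, steps + 1)]
```

A re-detected hand box would then be matched against the latest entry of this predicted list using the same overlap test as for the current trajectory.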
In some embodiments, in Step S2, performing digital-image-processing-based target detection and tracking on the processed images to obtain the second trajectory includes: Step S24: detecting whether a moving object is present in the current processed frame; if so, proceeding to Step S25, and if not, proceeding to Step S26; Step S25: adding the current processed frame to the image set corresponding to the current trajectory and obtaining the second trajectory from the updated current trajectory; Step S26: setting the state of the current trajectory to the reserved state.
Step S25 includes matching the position of the moving object against the current trajectory: if they match, the current processed frame is added to the image set corresponding to the current trajectory; if not, i.e. for a current trajectory (optical flow) that does not match the position of the moving object, a new trajectory can be created. This operation is similar to the one performed when a hand detection box does not match the current trajectory and is not repeated here. The optical flow method includes the DIS optical flow algorithm together with filtering by flow direction and contour area. Either optical flow or background subtraction can be used to detect whether a moving object is present in the current processed frame.
In Step S26, after the current trajectory is set to the reserved state, the movement direction and distance of the moving object over a future period can be predicted from the movement direction and speed already determined in the current trajectory; the trajectory determined from this prediction is the predicted trajectory. Within the preset time interval during which the current trajectory is in the reserved state, if a processed frame containing a moving object is detected again, the position of the moving object in that frame can be matched against the most recently predicted position in the predicted trajectory, in a manner similar to matching a hand detection box against the current trajectory, to determine whether the moving object's position lies on the predicted trajectory. If it does, the trajectory of the moving object's position continues to be determined starting from that frame; this continued trajectory is called a matched trajectory, which can be regarded as a trajectory matched to the predicted trajectory or to the current trajectory in the reserved state. The current trajectory, the predicted trajectory and the matched trajectory can all serve as the finally determined second trajectory.
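The direction-and-size filtering applied to the optical flow can be sketched as below. The flow field itself would come from an algorithm such as DIS; here we take a list of `(dx, dy)` vectors as given. The convention that a downward mean flow (positive y in image coordinates) points into the cart is our assumption about the camera mounting, and `min_count` is a crude stand-in for the contour-area filter:

```python
def mean_flow(vectors):
    """Average a list of (dx, dy) flow vectors."""
    n = len(vectors)
    sx = sum(v[0] for v in vectors)
    sy = sum(v[1] for v in vectors)
    return (sx / n, sy / n)

def flow_judgement(vectors, min_count=8):
    """Classify a motion event from its flow field.

    Too few flow vectors means the moving region is too small to be a
    product (contour-area filtering); otherwise the mean direction decides
    between a product thrown into or dropped out of the cart.
    """
    if len(vectors) < min_count:
        return "ignore"
    _, dy = mean_flow(vectors)
    return "thrown_in" if dy > 0 else "dropped_out"
```

In practice the flow vectors would be sampled only inside the detected moving contour before averaging.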
In some embodiments, Step S3 further includes: separately judging whether the first trajectory and the second trajectory have ended; for an ended trajectory, judging whether it is valid; for an unfinished trajectory, recording the trajectory information and returning to Step S1; and judging shopping behavior for a valid first trajectory or a valid second trajectory. The first trajectory is determined to have ended when its number of reserved frames is greater than a first preset threshold, and not to have ended when its number of reserved frames is not greater than the first preset threshold; likewise, the second trajectory is determined to have ended when its number of reserved frames is greater than the first preset threshold, and not to have ended when its number of reserved frames is not greater than the first preset threshold. When no hand can be detected, the trajectory is corrected by computing the predicted hand movement direction; when the number of frames of that trajectory exceeds the reserved-frame count, the trajectory is considered to have ended.
In some embodiments, judging whether an ended trajectory is valid includes: obtaining the trajectory length of the ended trajectory; the ended trajectory is valid when its length is greater than a second preset threshold and invalid when its length is not greater than the second preset threshold.
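The end-of-trajectory and validity rules in the two paragraphs above reduce to two threshold tests. The concrete threshold values below are placeholders; the patent only names a first and a second preset threshold:

```python
def trajectory_ended(reserved_frames, first_threshold=15):
    """A trajectory ends once its reserved-frame count exceeds the
    first preset threshold (15 is an illustrative value)."""
    return reserved_frames > first_threshold

def trajectory_valid(trajectory_length, second_threshold=5):
    """An ended trajectory is valid only if its length exceeds the
    second preset threshold (5 is an illustrative value); short
    trajectories are cleared as noise."""
    return trajectory_length > second_threshold
```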
In some embodiments, in Step S3, when a product that has been code-scanned enters or leaves the shopping cart, judging the shopping behavior according to the valid first trajectory includes: Step S31: dividing the valid first trajectory into a first-class, second-class or third-class trajectory according to the temporal order in which images of a hand holding a product and images of a hand not holding a product were added to the image set corresponding to the valid first trajectory; Step S32: when the valid first trajectory is a first-class trajectory, obtaining a first ratio of the number of frames in which the hand holds a product to the number of frames in which it does not; when the first ratio is greater than a third preset threshold, determining that the product entered the cart, and otherwise that it did not; Step S33: when the valid first trajectory is a third-class trajectory, obtaining a second ratio of the number of frames in which the hand holds a product to the number of frames in which it does not; when the second ratio is greater than a fourth preset threshold, determining that the product was taken out of the cart, and otherwise that it is still in the cart. In this way, the shopping behavior is recognized from the product's trajectory, the entry and exit of code-scanned products can be recognized accurately, and dividing valid trajectories into classes handles well the misrecognition caused by motion pausing within the camera's field of view.
In Step S31, a first trajectory whose images of a hand holding a product were added to the image set earlier, and whose images of a hand not holding a product were added later, can be taken as a first-class trajectory; a first trajectory whose images of a hand holding a product were added later, and whose images of a hand not holding a product were added earlier, can be taken as a third-class trajectory; the remaining trajectories can be taken as second-class trajectories. A second-class trajectory may correspond to an empty hand entering and leaving the cart.
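Steps S31 to S33 can be sketched as follows, with a trajectory's image set represented as a list of booleans (`True` = the hand holds a product in that frame). The 0.5 ratio threshold is a placeholder for the patent's third and fourth preset thresholds, and the string labels are ours:

```python
def classify(holding_flags):
    """S31: class 1 = holding frames first, empty frames later (put in);
    class 3 = empty frames first, holding frames later (take out);
    class 2 = everything else, e.g. an empty hand in and out."""
    if not any(holding_flags) or all(holding_flags):
        return 2
    first_hold = holding_flags.index(True)
    first_empty = holding_flags.index(False)
    if first_hold < first_empty and not holding_flags[-1]:
        return 1
    if first_empty < first_hold and holding_flags[-1]:
        return 3
    return 2

def judge_first_trajectory(holding_flags, ratio_threshold=0.5):
    """S32/S33: ratio of holding frames to empty frames decides the result."""
    cls = classify(holding_flags)
    held = sum(holding_flags)
    empty = len(holding_flags) - held
    ratio = held / empty if empty else float("inf")
    if cls == 1:
        return "entered_cart" if ratio > ratio_threshold else "did_not_enter"
    if cls == 3:
        return "taken_out" if ratio > ratio_threshold else "still_in_cart"
    return "empty_hand"
```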
In some embodiments, in Step S3, when a product that has not been code-scanned enters or leaves the shopping cart, judging the shopping behavior according to the valid second trajectory includes: Step S34: judging the product's entry into or exit from the cart according to the optical flow direction of the valid second trajectory. In this way, products entering or leaving the cart without being scanned are confirmed from the product's optical flow trajectory.
In some embodiments, in Step S3, obtaining the judgment result includes: when a product entered the cart and is still in the cart, the judgment result is that the product was taken in; when a product did not enter the cart and was taken out of the cart, the judgment result is that the product was taken out; when a product entered the cart and was taken out of the cart, the judgment result is that the product was replaced; when a product did not enter the cart and is still in the cart, the judgment result is an empty hand entering and leaving; when the optical flow direction points away from the cart, the judgment result is that the product dropped out; when the optical flow direction points toward the cart, the judgment result is that the product was thrown in. In this way, the quantity of goods in the cart can be counted accurately, and its consistency with the quantity of scanned goods can then be judged.
In this embodiment, the judgment results based on the first trajectory include: product entered the cart and is still in the cart — product taken in; product did not enter the cart and was taken out — product taken out; product entered the cart and was taken out — product replaced; product did not enter the cart and is still in the cart — empty hand in and out. The judgment results based on the second trajectory include: optical flow direction away from the cart — product dropped out; optical flow direction toward the cart — product thrown in.
For a single entry into or exit from the cart, when both a first trajectory and a second trajectory exist, the product entered or left the cart by hand, or an empty hand entered and left the cart; classifying the first trajectory determines whether the action was an empty-hand entry and exit, i.e. the product did not enter the cart and is still in the cart, giving the result of an empty hand in and out. For hand-held entry and exit, it is first determined whether the product has a corresponding code-scanning operation before entering and/or after leaving the cart. If a scan exists, the first trajectory is used to judge the shopping behavior, with the results listed above (product taken in, product taken out, product replaced). If no scan exists, the second trajectory is used, and the user is reminded, based on the optical flow direction, that an unscanned product entered or left the cart.
For a single entry into or exit from the cart, when only a second trajectory exists, the product entered or left the cart without a hand; the shopping behavior is judged in combination with whether the product had a corresponding code-scanning operation before entering the cart. For example, when the optical flow direction points toward the cart and the product was scanned before entering, the result is that the product was thrown in and already scanned, and no prompt is needed; when the optical flow direction points toward the cart and the product was not scanned, the result is that the product was thrown in without being scanned, and the user needs to be prompted to scan it; when the optical flow direction points away from the cart, the result is that the product dropped out.
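The combination rules across the last few paragraphs reduce to a small lookup. The string labels are illustrative, not the patent's wording:

```python
def judge(entered, taken_out):
    """First-trajectory cases: map the (entered cart, taken out of cart)
    pair to a shopping-behavior result."""
    table = {
        (True, False): "product taken in",
        (False, True): "product taken out",
        (True, True): "product replaced",
        (False, False): "empty hand in and out",
    }
    return table[(entered, taken_out)]

def judge_unscanned(flow_toward_cart, scanned_before_entry):
    """Second-trajectory (no hand) cases, combined with scan history."""
    if not flow_toward_cart:
        return "product dropped out"
    if scanned_before_entry:
        return "thrown in, already scanned"
    return "thrown in, not scanned: prompt the customer"
```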
In this embodiment, the first, second, third and fourth preset thresholds can be set by a person skilled in the art according to actual needs.
Figure 2 is a schematic flowchart of a method for confirming the quantity of goods in a shopping cart provided by an embodiment of the present disclosure. As shown in Figure 2, during shopping, video frames of the scene inside the cart are read in real time, the images are preprocessed, and target detection and tracking are performed on the processed images to obtain product trajectories. The target detection and tracking comprise deep-learning-based target detection and background-subtraction-based target detection, carried out simultaneously. For the deep-learning branch, target detection is first performed on the current frame to determine whether a hand is present; when a hand is present, the hand detection box is matched against the last frame of each trajectory; when the box matches the last frame of a trajectory, trajectory tracking is performed and the current frame is added to the image set corresponding to that trajectory; when it does not match, a new trajectory is created for the box; when no hand is present in the current frame, the state of the current trajectory is set to the reserved state. For the background-subtraction branch, it is first determined whether a moving object is present in the current frame; when a moving object is present and its position matches the current trajectory, trajectory tracking is performed and the current frame is added to the image set corresponding to the current trajectory; when the position does not match, a new trajectory is created; when no moving object is present, the current trajectory is set to the reserved state. When the number of reserved frames of the current trajectory is greater than the first preset threshold, the current trajectory ends; when it is less than or equal to the first preset threshold, the trajectory has not ended, the trajectory information is recorded, the next video frame is read, and the preceding steps are repeated. After the current trajectory ends, its validity is judged: when its length is less than or equal to the second preset threshold, the trajectory is invalid and is cleared; when its length is greater than the second preset threshold, the trajectory is valid, and the shopping state and behavior are judged to obtain a judgment result. When the judgment result shows that the quantity of goods in the cart is inconsistent with the quantity of scanned goods, the customer is reminded on the cart and, at the same time, a reminder is issued on the supermarket clerk's handheld terminal.
Figure 3 is a schematic diagram of a system for confirming the quantity of goods in a shopping cart provided by an embodiment of the present disclosure. Figure 4 is a schematic diagram of a processed image provided by an embodiment of the present disclosure. As shown in Figure 3, an embodiment of the present disclosure also provides a system 10 for confirming the quantity of goods in a shopping cart, including: a shopping cart 110 configured to hold goods; an image acquisition terminal 120 arranged above any side of the shopping cart 110 and configured to acquire images within the accommodation range of the cart; a code-scanning terminal 130 arranged on the shopping cart 110 and configured to scan the codes of goods; and a settlement terminal 140 arranged on the shopping cart 110 and configured to combine the acquired images within the accommodation range of the cart with the product code-scanning information and to confirm the quantity of goods in the cart according to the aforementioned method. The system adapts to a variety of supermarket environments, generalizes well, reduces system cost, is simple to maintain, and can accurately remind the customer whether the purchased-goods list is consistent with the quantity of goods in the cart.
An embodiment of the present disclosure also provides a computer-readable storage medium on which computer instructions are stored; when executed by a processor, the instructions implement the method for confirming the quantity of goods in a shopping cart of any of the foregoing embodiments.
The present disclosure is described with reference to flowcharts and/or block diagrams of methods, systems and computer program products according to embodiments of the disclosure. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

Claims (10)

  1. A method for confirming the quantity of goods in a shopping cart, comprising:
    during shopping, acquiring images inside the shopping cart in real time, and preprocessing the images inside the shopping cart to obtain processed images;
    performing deep-learning-based target detection and tracking on the processed images to obtain a first trajectory, and performing digital-image-processing-based target detection and tracking on the processed images to obtain a second trajectory, wherein the acquisition of the first trajectory and the acquisition of the second trajectory are carried out simultaneously;
    in a case where a product that has been code-scanned enters or leaves the shopping cart, judging shopping behavior according to the first trajectory, and in a case where a product that has not been code-scanned enters or leaves the shopping cart, judging shopping behavior according to the second trajectory, to obtain a judgment result;
    in a case where it is determined from the judgment result that the quantity of goods in the shopping cart is inconsistent with the quantity of scanned goods, performing at least one of: reminding the customer on the shopping cart side; feeding the inconsistency information back to the supermarket side.
  2. The method according to claim 1, wherein performing deep-learning-based target detection and tracking on the processed images to obtain the first trajectory comprises:
    detecting whether a hand is present in the current processed frame;
    in response to a hand being present in the current processed frame, computing whether the hand detection box in the current processed frame matches the current trajectory, and in response to the hand detection box matching the current trajectory, adding the current processed frame to the image set corresponding to the current trajectory and obtaining the first trajectory from the updated current trajectory;
    in response to no hand being present in the current processed frame, setting the state of the current trajectory to the reserved state and, after a preset time interval, setting the state of the current trajectory to the ended state.
  3. The method according to claim 1 or 2, wherein performing digital-image-processing-based target detection and tracking on the processed images to obtain the second trajectory comprises:
    detecting whether a moving object is present in the current processed frame;
    in response to a moving object being present in the current processed frame, adding the current processed frame to the image set corresponding to the current trajectory and obtaining the second trajectory from the updated current trajectory;
    in response to no moving object being present in the current processed frame, setting the state of the current trajectory to the reserved state.
  4. The method according to claim 1, further comprising, before judging shopping behavior according to the first trajectory and the second trajectory:
    separately judging whether the first trajectory and the second trajectory have ended;
    for an ended trajectory, judging whether the ended trajectory is valid;
    for an unfinished trajectory, recording the trajectory information and returning to the operations of acquiring images inside the shopping cart in real time during shopping and preprocessing the images inside the shopping cart to obtain processed images, performing deep-learning-based target detection and tracking on the processed images to obtain the first trajectory, and performing digital-image-processing-based target detection and tracking on the processed images to obtain the second trajectory;
    judging shopping behavior for a valid first trajectory or a valid second trajectory;
    wherein the first trajectory is determined to have ended in a case where the number of reserved frames of the first trajectory is greater than a first preset threshold, and determined not to have ended in a case where the number of reserved frames of the first trajectory is not greater than the first preset threshold; the second trajectory is determined to have ended in a case where the number of reserved frames of the second trajectory is greater than the first preset threshold, and determined not to have ended in a case where the number of reserved frames of the second trajectory is not greater than the first preset threshold.
  5. The method according to claim 4, wherein judging whether the ended trajectory is valid comprises:
    obtaining the trajectory length of the ended trajectory; in a case where the trajectory length is greater than a second preset threshold, the ended trajectory is valid, and in a case where the trajectory length is not greater than the second preset threshold, the ended trajectory is invalid.
  6. The method according to claim 4, wherein, in the case where a product that has been code-scanned enters or leaves the shopping cart, judging shopping behavior according to the valid first trajectory comprises:
    dividing the valid first trajectory into a first-class trajectory, a second-class trajectory or a third-class trajectory according to the temporal order in which images of a hand holding a product and images of a hand not holding a product were added to the image set corresponding to the valid first trajectory;
    in a case where the valid first trajectory is the first-class trajectory, obtaining a first ratio of the number of frames of images of the hand holding a product to the number of frames of images of the hand not holding a product; in a case where the first ratio is greater than a third preset threshold, determining that the product entered the shopping cart, and in a case where the first ratio is not greater than the third preset threshold, determining that the product did not enter the shopping cart;
    in a case where the valid first trajectory is the third-class trajectory, obtaining a second ratio of the number of frames of images of the hand holding a product to the number of frames of images of the hand not holding a product; in a case where the second ratio is greater than a fourth preset threshold, determining that the product was taken out of the shopping cart, and in a case where the second ratio is not greater than the fourth preset threshold, determining that the product is still in the shopping cart.
  7. The method according to claim 6, wherein, in the case where a product that has not been code-scanned enters or leaves the shopping cart, judging shopping behavior according to the valid second trajectory comprises:
    judging the product's entry into or exit from the shopping cart according to the optical flow direction of the valid second trajectory.
  8. The method according to claim 7, wherein obtaining the judgment result comprises:
    in a case where a product entered the shopping cart and the product is still in the shopping cart, obtaining the judgment result that the product was taken in;
    in a case where a product did not enter the shopping cart and the product was taken out of the shopping cart, obtaining the judgment result that the product was taken out;
    in a case where a product entered the shopping cart and the product was taken out of the shopping cart, obtaining the judgment result that the product was replaced;
    in a case where a product did not enter the shopping cart and the product is still in the shopping cart, obtaining the judgment result of an empty hand entering and leaving;
    in a case where the optical flow direction points away from the shopping cart, obtaining the judgment result that the product dropped out, and in a case where the optical flow direction points toward the shopping cart, obtaining the judgment result that the product was thrown in.
  9. A system for confirming the quantity of goods in a shopping cart, comprising:
    a shopping cart configured to hold goods;
    an image acquisition terminal arranged above one side of the shopping cart and configured to acquire, in real time, images within the accommodation range of the shopping cart;
    a code-scanning terminal arranged on the shopping cart and configured to scan the codes of goods;
    a settlement terminal arranged on the shopping cart and configured to combine the acquired images within the accommodation range of the shopping cart with the product code-scanning information and to confirm the quantity of goods in the shopping cart according to the method for confirming the quantity of goods in a shopping cart of any one of claims 1 to 8.
  10. A storage medium storing computer instructions that, when executed by a processor, implement the method for confirming the quantity of goods in a shopping cart of any one of claims 1 to 8.
PCT/CN2023/088353 2022-04-14 2023-04-14 Method, system and storage medium for confirming the quantity of goods in a shopping cart WO2023198182A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210392643.XA CN114898249B (zh) 2022-04-14 2022-04-14 Method, system and storage medium for confirming the quantity of goods in a shopping cart
CN202210392643.X 2022-04-14

Publications (1)

Publication Number Publication Date
WO2023198182A1 true WO2023198182A1 (zh) 2023-10-19

Family

ID=82716722

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/088353 WO2023198182A1 (zh) 2022-04-14 2023-04-14 Method, system and storage medium for confirming the quantity of goods in a shopping cart

Country Status (2)

Country Link
CN (1) CN114898249B (zh)
WO (1) WO2023198182A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117422937A (zh) * 2023-12-18 2024-01-19 成都阿加犀智能科技有限公司 Smart shopping cart state recognition method, device, equipment and storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114898249B (zh) * 2022-04-14 2022-12-13 烟台创迹软件有限公司 Method, system and storage medium for confirming the quantity of goods in a shopping cart
CN115565117B (zh) * 2022-11-28 2023-04-07 浙江莲荷科技有限公司 Data processing method, device, equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408369A (zh) * 2016-08-26 2017-02-15 成坤 Algorithm for intelligently identifying product information in a shopping cart
CN106672042A (zh) * 2016-11-16 2017-05-17 南京亿猫信息技术有限公司 Smart shopping cart and method for judging the shopping and return processes of the cart
CN109829777A (zh) * 2018-12-24 2019-05-31 深圳超嗨网络科技有限公司 Smart shopping system and shopping method
WO2019226021A1 (ko) * 2018-05-25 2019-11-28 주식회사 비즈니스인사이트 Smart shopping cart and shopping management system using the same
CN111507792A (zh) * 2019-03-07 2020-08-07 河源市联腾实业有限公司 Self-service shopping method, computer-readable storage medium and system
CN114898249A (zh) * 2022-04-14 2022-08-12 烟台创迹软件有限公司 Method, system and storage medium for confirming the quantity of goods in a shopping cart

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6730079B2 (ja) * 2016-04-28 2020-07-29 東芝テック株式会社 Monitoring device and program
CN109409175B (zh) * 2017-08-16 2024-02-27 图灵通诺(北京)科技有限公司 Settlement method, device and system
CN112115745A (zh) * 2019-06-21 2020-12-22 杭州海康威视数字技术股份有限公司 Method, device and system for recognizing missed product code-scanning behavior
CN111311848A (zh) * 2020-01-16 2020-06-19 青岛创捷中云科技有限公司 Self-checkout AI loss-prevention system and method
CN113239793A (zh) * 2021-05-11 2021-08-10 上海汉时信息科技有限公司 Loss prevention method and device
CN113723251A (zh) * 2021-08-23 2021-11-30 上海汉时信息科技有限公司 Loss prevention method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117422937A (zh) * 2023-12-18 2024-01-19 成都阿加犀智能科技有限公司 Smart shopping cart state recognition method, device, equipment and storage medium
CN117422937B (zh) * 2023-12-18 2024-03-15 成都阿加犀智能科技有限公司 Smart shopping cart state recognition method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN114898249B (zh) 2022-12-13
CN114898249A (zh) 2022-08-12

Similar Documents

Publication Publication Date Title
WO2023198182A1 (zh) Method, system and storage medium for confirming the quantity of goods in a shopping cart
Liu et al. A smart unstaffed retail shop based on artificial intelligence and IoT
WO2020103487A1 (zh) Self-service settlement method, device and storage medium
US20210241490A1 (en) Image processing for tracking actions of individuals
JP5704279B1 (ja) Association program and information processing device
CN111263224B (zh) Video processing method, device and electronic equipment
CN112215167B (zh) Smart store control method and system based on image recognition
CN111222870B (zh) Settlement method, device and system
US20200387866A1 (en) Environment tracking
US20210398097A1 (en) A method, a device and a system for checkout
JP2023014207A (ja) Information processing system
CN112307864A (zh) Method and device for determining a target object, and human-computer interaction system
CN113366543A (zh) System and method for detecting scanning anomalies at a self-checkout terminal
CN113468914B (zh) Method, device and equipment for determining product purity
CN109934569B (zh) Settlement method, device and system
EP3629276A1 (en) Context-aided machine vision item differentiation
CN111260685B (zh) Video processing method, device and electronic equipment
CN111507792A (zh) Self-service shopping method, computer-readable storage medium and system
US20230005348A1 (en) Fraud detection system and method
CN108171286B (zh) Unmanned vending method and system
CN116471384A (zh) Control method and control device for an unattended store monitoring system
JP5962747B2 (ja) Association program and information processing device
KR20220020047A (ko) Method and system for predicting in-store customer movement paths and shopping time
CN109858446A (zh) Item registration method and device in a new retail scenario
CN113516814B (zh) Intelligent supply method and terminal based on face recognition

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23787824

Country of ref document: EP

Kind code of ref document: A1