WO2023184932A1 - Item status tracking method and apparatus, electronic device, storage medium, and computer program product - Google Patents

Item status tracking method and apparatus, electronic device, storage medium, and computer program product

Info

Publication number
WO2023184932A1
WO2023184932A1 PCT/CN2022/125762
Authority
WO
WIPO (PCT)
Prior art keywords
item
information
detection
status
target
Prior art date
Application number
PCT/CN2022/125762
Other languages
English (en)
French (fr)
Inventor
龙海旭
罗棕太
伊帅
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2023184932A1 publication Critical patent/WO2023184932A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/254: Analysis of motion involving subtraction of images
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07F: COIN-FREED OR LIKE APPARATUS
    • G07F 17/00: Coin-freed apparatus for hiring articles; coin-freed facilities or services
    • G07F 17/10: Coin-freed apparatus for hiring articles; coin-freed facilities or services for means for safe-keeping of property, left temporarily, e.g. by fastening the property
    • G07F 17/12: Coin-freed apparatus for hiring articles; coin-freed facilities or services for means for safe-keeping of property, left temporarily, comprising lockable containers, e.g. for accepting clothes to be cleaned
    • G07F 17/13: Coin-freed apparatus for hiring articles; coin-freed facilities or services for means for safe-keeping of property, left temporarily, comprising lockable containers, the containers being a postal pick-up locker
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30241: Trajectory

Definitions

  • the present disclosure relates to but is not limited to the technical field of item tracking, and in particular, to an item status tracking method, device, electronic equipment, storage medium and computer program product.
  • Embodiments of the present disclosure provide at least an item status tracking method, device, electronic device, storage medium, and computer program product.
  • Embodiments of the present disclosure provide an item status tracking method, including:
  • the movement trajectory information includes the detection results of the object in at least two frames of images
  • based on the movement trajectory information and preset detection line information, the detection status information of the item under each frame of image is determined, wherein the detection line is provided on one side of the storage cabinet at a distance within a preset range from the storage cabinet, and the detection status information is used to measure the position change of the item relative to the detection line;
  • based on the detection status information of the item in each pair of adjacent frames among the at least two frames of images, the target tracking status of the item under the movement trajectory information is determined; that is, the item is dynamically tracked, and whether the item is taken is determined from its target tracking status.
  • items do not need specific identification tags attached, which not only reduces costs but also improves identification efficiency; on the other hand, this helps improve the accuracy of determining the target tracking status of items, thereby improving the user experience.
  • An embodiment of the present disclosure also provides an item status tracking device, which includes:
  • the movement trajectory information acquisition part is configured to obtain the movement trajectory information of the object, where the movement trajectory information includes the detection results of the object in at least two frames of images;
  • the detection status information determination part is configured to determine the detection status information of the item under each frame of image based on the movement trajectory information and preset detection line information, wherein the detection line is provided on one side of the storage cabinet and the distance from the storage cabinet is within a preset range, and the detection status information is used to measure the position change of the item relative to the detection line;
  • the target tracking status determination part is configured to determine the target tracking status corresponding to the item based on the detection status information of the item under each two adjacent frames in the at least two frames of images, wherein the target tracking status is used to indicate that the item was taken, or that the item was put back.
  • An embodiment of the present disclosure also provides an electronic device, including: a processor, a memory, and a bus.
  • the memory stores machine-readable instructions executable by the processor.
  • the processor and the memory communicate through the bus, and when the machine-readable instructions are executed by the processor, the above-mentioned item status tracking method is performed.
  • Embodiments of the present disclosure also provide a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program.
  • when the computer program is run by a processor, the above-mentioned item status tracking method is performed.
  • Embodiments of the present disclosure also provide a computer program product.
  • the computer program product includes a computer program or instructions. When the computer program or instructions are run on an electronic device, the electronic device performs the above item status tracking method.
  • Figure 1 is a schematic structural diagram of an existing storage cabinet provided by an embodiment of the present disclosure
  • Figure 2 is a schematic flowchart of an item status tracking method provided by an embodiment of the present disclosure
  • Figure 3 is a schematic diagram of the detection results of an article provided by an embodiment of the present disclosure.
  • Figure 4 is a schematic diagram of a detection line provided by an embodiment of the present disclosure.
  • Figure 5 is a schematic diagram of determining detection status information of an object provided by an embodiment of the present disclosure.
  • Figure 6 is a schematic flowchart of an item status tracking method provided by an embodiment of the present disclosure.
  • Figure 7 is a schematic diagram of an image of movement trajectory information provided by an embodiment of the present disclosure.
  • Figure 8 is a schematic flowchart of a method for determining detection status information of items under each frame image provided by an embodiment of the present disclosure
  • Figure 9 is a schematic flowchart of a method for determining the target tracking status corresponding to an item provided by an embodiment of the present disclosure
  • Figure 10 is a schematic diagram of detection status information of an item in two adjacent frames of images provided by an embodiment of the present disclosure
  • Figure 11 is a schematic diagram of a change in target tracking state provided by an embodiment of the present disclosure.
  • Figure 12 is a schematic flowchart of a method for determining target category information of an item provided by an embodiment of the present disclosure
  • Figure 13 is a schematic diagram of target type information of an item under a movement trajectory provided by an embodiment of the present disclosure
  • Figure 14 is a schematic flowchart of yet another item status tracking method provided by an embodiment of the present disclosure.
  • Figure 15 is a schematic flowchart of a method for generating transaction information for items under a multi-camera device according to an embodiment of the present disclosure
  • Figure 16 is a schematic structural diagram of a storage cabinet with multiple camera devices provided by an embodiment of the present disclosure.
  • Figure 17 is a schematic structural diagram of an item status tracking device provided by an embodiment of the present disclosure.
  • Figure 18 is a schematic structural diagram of an item status tracking device provided by an embodiment of the present disclosure.
  • Figure 19 is a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
  • A and/or B can mean: A exists alone, both A and B exist, or B exists alone.
  • at least one herein means any one of a plurality, or any combination of at least two of a plurality; for example, including at least one of A, B, and C can mean including any one or more elements selected from the set composed of A, B, and C.
  • FIG. 1 is a schematic structural diagram of an existing storage cabinet provided by an embodiment of the present disclosure.
  • the storage cabinet 100 includes a multi-layer storage board 10 and an image acquisition device 20.
  • the storage board 10 on each layer holds at least one item 30, such as beverages (e.g., iced tea), instant products (e.g., bread or instant noodles), daily necessities (e.g., napkins), etc.
  • each item 30 bears a radio frequency identification (RFID) tag 31, and the storage cabinet 100 identifies the item 30 by reading the RFID tag 31 through radio frequency identification technology.
  • the image acquisition device 20 is a device with a face recognition function and is used for face-swiping payment or QR code payment.
  • the above-mentioned storage cabinet 100 has problems of high cost and low efficiency due to the need to use RFID tags.
  • in some related approaches, the use of RFID tags is avoided and item information is identified through static photography.
  • the identification method of static photography usually determines whether an item has been taken, and the information of the taken item, based on the detection results of two adjacent frames of images.
  • because this method only analyzes per-frame results, occlusion can lead to incorrect identification of the taken item's information, thus affecting the user experience.
  • an item status tracking method, which includes: obtaining movement trajectory information of the item, where the movement trajectory information includes detection results of the item in at least two frames of images; determining the detection status information of the item under each frame of image based on the movement trajectory information and preset detection line information, wherein the detection line is provided on one side of the storage cabinet at a distance within a preset range, and the detection status information is used to measure the position change of the item relative to the detection line; and determining the target tracking state corresponding to the item based on the detection status information of the item under each two adjacent frames in the at least two frames of images, wherein the target tracking state is used to indicate that the item is taken, or that the item is put back.
  • based on the detection status information of the item in each pair of adjacent frames among the at least two frames of images, the target tracking status of the item under the movement trajectory information is determined; that is, the item is dynamically tracked, and whether the item is taken is determined from its target tracking status.
  • items do not need specific identification tags attached, which not only reduces costs but also improves identification efficiency; on the other hand, this helps improve the accuracy of determining the target tracking status of items, which in turn improves the user experience.
  • FIG. 2 is a schematic flow chart of an item status tracking method provided by an embodiment of the disclosure.
  • the item status tracking method provided by an embodiment of the disclosure includes the following steps S101 to S104, wherein:
  • Step S101 Obtain movement trajectory information of the object, where the movement trajectory information includes detection results of the object in at least two frames of images.
  • the movement trajectory information refers to the trajectory formed by the movement of the object when the user takes and puts the object back.
  • the video data of the process can be collected through a camera device, the video data collected by the camera device can then be obtained through the terminal device, and after decoding, target detection, and trajectory tracking are performed on the video data, the movement trajectory information of the item is obtained.
  • Video data refers to a continuous image sequence, which is essentially composed of a set of continuous image frames.
  • An image frame is the smallest visual unit that makes up a video and is a static image.
  • a dynamic video is formed by combining a sequence of temporally consecutive image frames.
  • the video data can be decoded through FFmpeg (Fast Forward Mpeg) technology or other methods to obtain at least two frames of images; after target detection is performed on the at least two frames of images, the detection results of the at least two frames of images are obtained.
  • FFmpeg is an open-source computer program that can record, convert, and stream digital audio and video. It is licensed under the LGPL or GPL, provides a complete solution for recording, converting, and streaming audio and video, and includes the audio/video codec library libavcodec; much of the code in libavcodec was developed from scratch to ensure high portability and codec quality. Correspondingly, in actual use, the video data of the user taking items can also be saved and transmitted using FFmpeg.
  • since video data usually includes at least two frames of images per second (for example, 24 frames per second), the video data can be sampled by frame extraction during decoding.
  • frame extraction refers to extracting frames at a preset frame interval, for example, extracting one image to be detected every 20 frames; frames can also be extracted at a preset time interval, for example, one frame every 10 milliseconds (ms).
  • the number of interval frames and interval time for frame extraction can be set according to actual needs, and are not limited here.
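As a rough sketch of the frame-extraction step above, sampling one frame every N frames reduces to choosing frame indices; the function name and parameters below are illustrative, not part of the disclosure:

```python
def sample_frame_indices(total_frames: int, frame_interval: int = 20) -> list[int]:
    """Indices of the frames to keep when extracting one frame
    every `frame_interval` frames from a clip of `total_frames` frames."""
    return list(range(0, total_frames, frame_interval))

# A 2-second clip at 24 fps sampled every 20 frames keeps frames 0, 20, 40.
print(sample_frame_indices(48, 20))  # [0, 20, 40]
```

Sampling by time interval works the same way once the interval is converted to a frame count using the video's frame rate.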
  • the detection result includes detection frame information of the item
  • the detection frame information of the item includes the size of the detection frame (for example, the length and width of the detection frame) and the position information of the detection frame (for example, the coordinates of the center point of the detection frame).
  • the detection results may also include item type information.
  • the type information of the item can be any suitable type information. For example, beverages, daily necessities, instant products, etc.
  • FIG 3 is a schematic diagram of the detection result of an item provided by an embodiment of the present disclosure.
  • the detection result 500 includes a detection frame 501, and the type information 502 of the item is output in text form as "mineral water 150ml".
  • the size of the detection frame 501 is the size of the item.
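Taken together, the detection result described above (detection frame size, center-point position, and type information) can be modeled as a simple record; the field names below are illustrative assumptions, not terms from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class DetectionResult:
    width: float      # length and width of the detection frame
    height: float
    cx: float         # coordinates of the detection frame's center point
    cy: float
    item_type: str    # type information of the item, e.g. "mineral water 150ml"

det = DetectionResult(width=40.0, height=120.0, cx=210.0, cy=95.0,
                      item_type="mineral water 150ml")
print(det.item_type)  # mineral water 150ml
```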
  • the video data collected by the camera device is obtained through a terminal device; that is, the execution subject of the item status tracking method may be a terminal device, where the terminal device may include but is not limited to vehicle-mounted equipment, wearable devices, user terminals, handheld devices, etc.
  • the execution subject of the item status tracking method can also be a server, where the server can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, big data, and artificial intelligence platforms.
  • the item status tracking method can also be implemented by the processor calling computer-readable instructions stored in the memory.
  • Step S102 Determine the detection status information of the item under each frame of image based on the movement trajectory information and preset detection line information.
  • the detection line is provided on one side of the storage cabinet, and the distance from the storage cabinet is within a preset range, and the detection status information is used to measure the position change state of the item relative to the detection line.
  • FIG. 4 is a schematic diagram of a detection line provided by an embodiment of the present disclosure.
  • the detection line 300 is disposed on one side of the storage cabinet 200 , and the distance between the detection line 300 and the storage cabinet 200 is within a preset range.
  • the detection status information of the item 60 can be determined from the relative position between the detection line 300 and the item 60.
  • Figure 5 is a schematic diagram for determining the detection status information of an item provided by an embodiment of the present disclosure.
  • the detection line 300 is used to determine whether the item is taken or put back. Since the size of the detection frame ID1 is the size of the object, the detection status information of the object can be determined by judging the relative position between the detection frame ID1 and the detection line 300. For example, when the detection frame ID1 crosses the detection line 300 upward, the detection status information of the item can be determined to be 1, and when the detection frame ID1 crosses the detection line 300 downward, the detection status information of the item can be determined to be -1.
  • the values 1 and -1 are used to represent the relative change in position between the item and the detection line; the detection status information can also be expressed with other numbers or letters (for example, 0 and 1, or S and -S), which is not limited here.
  • the two detection frames ID1 shown in FIG. 5 are used to represent the change state of the position of the detection frame ID1 relative to the position of the detection line 300, rather than the simultaneous existence of two detection frames ID1 in the same frame image.
  • Step S103 Determine the target tracking status corresponding to the item based on the detection status information of the item in each adjacent two frame images of the at least two frames of images, where the target tracking status is used to indicate that the item is taken, or the said item is returned.
  • since the movement trajectory information of the item contains at least two frames of images, the target tracking state of the item needs to be determined based on the detection status information of the item under each two adjacent frames of images, so as to determine whether the item is taken or put back.
  • assuming the movement trajectory information includes four frames of images A, B, C, and D, where A and B are adjacent, B and C are adjacent, and C and D are adjacent: if the detection status information of item O under image A differs from that under image B, the position of the item relative to the detection line is considered to have changed; reasoning in the same way for the remaining adjacent pairs, the target tracking status corresponding to item O can be determined.
  • based on the detection status information of the item in each pair of adjacent frames among the at least two frames of images, the target tracking status of the item under the movement trajectory information is determined; that is, the item is dynamically tracked, and whether the item is taken is determined from its target tracking status.
  • items do not need specific identification tags attached, which not only reduces costs but also improves identification efficiency; on the other hand, this helps improve the accuracy of determining the target tracking status of items, which in turn improves the user experience.
  • FIG. 6 is a schematic flowchart of an item status tracking method provided by an embodiment of the present disclosure.
  • the detection results also include type information of the item. As shown in Figure 6, the method includes the following steps S201 to S205, wherein:
  • Step S201 Obtain movement trajectory information of the object, where the movement trajectory information includes detection results of the object in at least two frames of images.
  • step S201 corresponds to the above-mentioned step S101, and during implementation, reference may be made to the specific implementation of the above-mentioned step S101.
  • Step S202 Determine a target image from at least two frames of images in the movement trajectory information, where the target image is an image whose type information of the item is a non-target type.
  • Step S203 Delete the target image from the at least two frames of images to obtain new movement trajectory information.
  • Step S204 Based on the new movement trajectory information and the preset detection line information, determine the detection status information of the item in each frame of the image under the new movement trajectory information.
  • Step S205 Determine the target tracking state corresponding to the item based on the detection status information of the item in each adjacent two frames of the at least two frames of images under the new movement trajectory information.
  • the above-mentioned steps S204 to step S205 respectively correspond to the above-mentioned steps S102 to step S103.
  • images whose type information is a non-target type may be images in which the type information of some items is blurred, or images of things that are not items; for example, they may be images of part of the storage cabinet's structure, or images of other types of items pre-placed in the storage cabinet.
  • by deleting such target images, their impact on the subsequent determination of the target tracking status corresponding to the item can be reduced, which helps improve the accuracy of determining the detection status information.
  • the preset detection line information includes a detection line function
  • the detection result includes detection frame information of the item
  • the detection frame information of the item includes the center point position of the detection frame
  • Step S1021 Determine the detection status information of the item under each frame of image based on the center point position of the detection frame of the item in each frame of image in the movement trajectory information and the detection line function.
  • Figure 7 is a schematic diagram of an image of movement trajectory information provided by an embodiment of the present disclosure.
  • the image coordinates can be established using the upper left corner of the image 400 shown in Figure 7 as the coordinate origin (0, 0).
  • based on the center point position O1 of the detection frame of item O in each frame of image in the movement trajectory information and the detection line function F(x), the detection status information of item O under each frame of image can be determined. In this way, the accuracy of determining the detection status information of the item under each frame of image can be improved, thereby improving the accuracy of determining the target tracking status of the item.
  • the movement trajectory information described in this embodiment and the embodiments below can also be the new movement trajectory information obtained by deleting the target image from the at least two frames of images, which is not limited here; for ease of understanding, the term movement trajectory information is used uniformly below.
  • the center point position of the detection frame includes the abscissa coordinate of the center point and the ordinate coordinate of the center point;
  • Figure 8 is a schematic flowchart of a method for determining the detection status information of the item under each frame of image provided by an embodiment of the present disclosure.
  • as shown in Figure 8, the above step S1021 may include the following steps S601 to S602, wherein:
  • S601: Determine the function value of the detection line function based on the abscissa of the center point of the detection frame.
  • S602: Determine the detection status information of the item under each frame of image based on the ordinate of the center point and the function value of the detection line function.
  • the detection status information of item O at time t can be expressed as S_t, and the position coordinates of the center point O1 of the detection frame of item O are (x_t, y_t).
  • since the preset detection line information includes the detection line function F(x), the function value F(x_t) of the detection line function can be determined based on the abscissa x_t of the center point.
  • step S602 may include step S6021 and/or step S6022, wherein:
  • Step S6021: when the ordinate of the center point and the function value of the detection line function satisfy the first preset relationship, determine the detection status information of the item to be the first detection status information;
  • Step S6022: when the ordinate of the center point and the function value of the detection line function satisfy the second preset relationship, determine the detection status information of the item to be the second detection status information.
  • for example, when the ordinate y_t of the center point O1 and the function value F(x_t) satisfy the first preset relationship y_t > F(x_t), the detection status information of the item is determined to be the first detection status information.
  • the two items O shown in Figure 7 are used to represent the situation where the items O are located on both sides of the detection line at different times, rather than the two items O existing at the same time.
  • depending on how the image coordinate system is established, the first preset relationship and the second preset relationship will also differ; in practical applications, they can be set according to the actual situation of the image, and are not limited here.
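Assuming an image coordinate system with the origin at the upper-left corner (so a larger y lies lower in the image) and, purely for illustration, a linear detection line, steps S601 to S6022 might be sketched as follows; the slope, intercept, and status values 1/-1 are example choices, not fixed by the disclosure:

```python
def detection_line(x: float, slope: float = 0.0, intercept: float = 300.0) -> float:
    """Illustrative detection line function F(x); horizontal by default."""
    return slope * x + intercept

def detection_status(cx: float, cy: float) -> int:
    """Return the first detection status (here 1) when y_t > F(x_t),
    i.e. the center point lies below the line, else the second (-1)."""
    return 1 if cy > detection_line(cx) else -1

print(detection_status(100.0, 350.0))  # 1  (center point below the line)
print(detection_status(100.0, 250.0))  # -1 (center point above the line)
```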
  • Figure 9 is a schematic flowchart of a method for determining the target tracking status corresponding to the item provided by an embodiment of the present disclosure. As shown in Figure 9, the above-mentioned step S103 may include the following steps S1031 to S1034, wherein:
  • Step S1031 Determine the relationship between the detection status information of the items in the two adjacent frames of images.
  • Step S1032 When the relationship between the detection status information of the item in the two adjacent frames of images meets the preset conditions, it is determined that the target tracking status corresponding to the item has changed.
  • the change in the target tracking state means that the item changes from the first state to the second state, or from the second state to the first state;
  • the first state represents a state where the item and the storage cabinet are located on the same side of the detection line, and the second state represents a state where the item and the storage cabinet are located on opposite sides of the detection line.
  • the preset conditions can be set according to actual needs and are not limited here. For example, the preset condition may be that the detection status information of the item in the two adjacent frames of images are inverses of each other.
  • Figure 10 is a schematic diagram of the detection status information of an item in two adjacent frames of images provided by an embodiment of the present disclosure, wherein the preset condition is that the detection status information of the item in the two adjacent frames of images are inverses of each other.
  • image 1001 and image 1002 are two adjacent frame images.
  • the dotted line in each image is the detection line 300.
  • the item O of image 1001 is located above the detection line 300.
  • FIG. 11 is a schematic diagram of a target tracking state changing according to an embodiment of the present disclosure.
  • item 60 moves from position a to position b.
  • the item 60 at position b and the self-service vending device 200 are located on the same side of the detection line 300; that is, the item 60 is in the first state at this time, and its detection status information may be 1. Then the item 60 continues to move to position c; the item 60 at position c and the self-service vending device 200 are located on opposite sides of the detection line 300, that is, the item 60 is in the second state at this time, and its detection status information is -1.
  • the process of the object 60 moving from position b to position c is the process in which the target tracking state corresponding to the object 60 changes.
  • the status changes of the item during movement can be determined, and the accuracy of determining whether the target tracking status corresponding to the item has changed can be improved.
  • Step S1033 Determine the number of times the target tracking state corresponding to the item under the movement trajectory information changes.
  • Step S1034 Determine the target tracking status corresponding to the item based on the number of times the target tracking status corresponding to the item changes, wherein, if the number of times is odd, it is determined that the target tracking status corresponding to the item indicates that the item is taken, and if the number of times is even, it is determined that the target tracking status corresponding to the item indicates that the item is put back.
  • for example, if the item 60 moves from position b to position c, the target tracking state corresponding to the item 60 changes once, so the target tracking state indicates that the item 60 is taken; that is, the image of the item 60 at position b and the image of the item 60 at position c are two adjacent frames between which a line-crossing action occurs. Next, if the item 60 moves from position c back to position b, the target tracking state corresponding to the item 60 changes twice, so the target tracking state indicates that the item 60 is put back; that is, the image of the item 60 at position c and the image of the item 60 back at position b are two adjacent frames between which another line-crossing action occurs.
  • By analogy, because the at least two frames of images corresponding to the movement trajectory information may include multiple pairs of adjacent frames, whether the item has been taken needs to be determined from the number of times the target tracking status corresponding to the item has changed. In this way, the accuracy of determining the target tracking status corresponding to the item can be improved.
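  • The logic of steps S1031 to S1034 can be sketched as follows. This is a minimal illustration rather than the disclosure's implementation; the function names are hypothetical, and the +1/-1 encoding of detection status information follows the example values above.

```python
def count_status_changes(statuses):
    """Count how often the detection status flips sign between adjacent
    frames (each flip corresponds to one line-crossing action)."""
    return sum(
        1 for prev, curr in zip(statuses, statuses[1:])
        if prev == -curr  # opposite numbers: the preset condition is met
    )

def infer_tracking_status(statuses):
    """Odd number of changes -> item taken; even number -> item put back."""
    changes = count_status_changes(statuses)
    return "taken" if changes % 2 == 1 else "put back"

# Item crosses the detection line once: taken.
print(infer_tracking_status([1, 1, -1, -1]))   # taken
# Item crosses out and then back: put back.
print(infer_tracking_status([1, -1, -1, 1]))   # put back
```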
  • FIG. 12 is a schematic flowchart of a method for determining target type information of an item provided by an embodiment of the present disclosure. As shown in FIG. 12, the method includes the following steps S1101 to S1102:
  • Step S1101: Determine the target time point at which the target tracking status corresponding to the item changes.
  • Step S1102: Based on the type information of the item in each frame of image, determine the number of images corresponding to each distinct type of information among the at least two frames of images within a target time period, and determine the type information with the largest image count as the target type information of the item, where the target time period is a time period within a preset range around the target time point.
  • Here, the preset range can be set according to actual needs; for example, it may be 1 second (s), 3 seconds, or 5 seconds, which is not limited here.
  • FIG. 13 is a schematic diagram of determining the target type information of an item provided by an embodiment of the present disclosure. As shown in FIG. 13, suppose the target time point at which the target tracking status corresponding to the item changes is T1, the preset range is 1 s, and the movement trajectory information includes 10 images (image 1101 to image 1110), each containing an item with tracking tag number ID1. The number of images corresponding to each type of information can then be determined over the at least two frames within the target time period T1-1 to T1+1: the type information of the item in image 1107 is actually sparkling water, while the type information of the item in the other images is mineral water, so the target type information of the item can be determined to be mineral water.
  • In this way, by determining the type information corresponding to the largest number of images as the target type information of the item, the accuracy of determining the item's target type information can be improved.
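  • The majority vote of step S1102 can be sketched as follows. This is a minimal illustration under stated assumptions: the per-frame records and their field names (`time`, `type`) are hypothetical, not from the disclosure.

```python
from collections import Counter

def target_type_info(frames, t_change, preset_range=1.0):
    """Pick the type seen in the most frames within the target time period
    [t_change - preset_range, t_change + preset_range]."""
    window = [
        f["type"] for f in frames
        if t_change - preset_range <= f["time"] <= t_change + preset_range
    ]
    # most_common(1) returns [(type, count)] for the most frequent type
    return Counter(window).most_common(1)[0][0]

# Ten frames of one tracked item; frame 1107 is misidentified.
frames = [{"time": 0.1 * i, "type": "mineral water"} for i in range(10)]
frames[6]["type"] = "sparkling water"
print(target_type_info(frames, t_change=0.5))  # mineral water
```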
  • FIG. 14 is a schematic flowchart of another item status tracking method provided by an embodiment of the present disclosure. It differs from the item status tracking method shown in FIG. 2 in that the method further includes step S104:
  • Step S104: When the target tracking status corresponding to the item indicates that the item has been taken, generate transaction information for the item.
  • That is, once it is determined that the item has been taken, transaction information for the item may be generated. Because the accuracy of determining whether the item has been taken is improved, the accuracy of the transaction information is also improved, which can enhance the user's self-service purchasing experience.
  • Here, the transaction information may include at least one of the type information of the item, the time at which the item was taken, the quantity of the item, and the amount payable for the item.
  • In some embodiments, the transaction information may also be sent to the user. For example, the transaction information received on the user side may read: "You have purchased two items O at 11:50, and the amount is 6 yuan."
  • In some embodiments, the door of the storage cabinet may also be controlled to close automatically to end the current transaction.
  • The above method applies when a single camera device is provided in the storage cabinet. Embodiments of the present disclosure also provide a method for generating item transaction information based on multiple camera devices.
  • FIG. 15 is a schematic flowchart of a method for generating transaction information for items under multiple camera devices provided by an embodiment of the present disclosure. Multiple camera devices are provided in the storage cabinet, and the target tracking status corresponding to the item corresponds to one target camera device. As shown in FIG. 15, the above step S104 includes the following steps S1041 to S1043:
  • Step S1041: Obtain the target tracking status corresponding to the item under each camera device.
  • Step S1042: Fuse the target tracking statuses corresponding to the item under each camera device to obtain the final tracking status of the item.
  • Step S1043: When the final tracking status of the item indicates that the item has been taken, generate transaction information for the item.
  • Here, because the shooting fields of view differ across camera devices, the target tracking status corresponding to the item may differ from one camera device to another. Therefore, the target tracking status corresponding to the item under each camera device needs to be obtained, and these statuses are fused to obtain the item's complete tracking status, which is the final tracking status. If the final tracking status indicates that the item has been taken, transaction information for the item can be generated. In this way, by fusing the target tracking statuses of the item under multiple camera devices to obtain the final tracking status, the tracking trajectory of the item is made more complete, and generating transaction information only when the final tracking status indicates that the item has been taken helps improve the integrity of the transaction information.
  • FIG. 16 is a schematic structural diagram of a storage cabinet with multiple camera devices provided by an embodiment of the present disclosure.
  • As shown in FIG. 16, the storage cabinet 200 includes two camera devices 40 and a plurality of storage boards 50 stacked at intervals. The storage boards 50 are used to carry a variety of items 60 (for example, Wahaha, 500 milliliter (mL) iced black tea, 1.5 liter (L) iced black tea, instant noodles, etc.).
  • Here, the number of camera devices 40 can be set according to actual needs; multiple camera devices need to share a large common shooting field of view so that videos of users taking items can be captured from different angles. In some embodiments, the number may be chosen based on the shooting angle, shooting range, or cost of the camera devices; for example, the number of camera devices 40 may also be three, five, or another value, which is not limited here. In the embodiment of the present disclosure, the number of camera devices is set to two.
  • It should be noted that the storage cabinets in the embodiments of the present disclosure may include, but are not limited to, refrigerators, insulated cabinets, self-service vending machines, lockers, and the like; the specific form may vary by application scenario, and any cabinet or device that can store items and allow users to retrieve them may be called a storage cabinet.
  • In this way, the target tracking statuses corresponding to the item under each camera device can be fused to obtain the final tracking status, and transaction information is generated when the final tracking status indicates that the item has been taken. This improves the integrity of the transaction information and avoids duplicate transaction records.
  • It can be understood by those skilled in the art that in the above methods, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
  • Based on the same technical concept, the embodiments of the present disclosure also provide an item status tracking device corresponding to the item status tracking method. Since the principle by which the device in the embodiments of the present disclosure solves the problem is similar to that of the above item status tracking method, the implementation of the device may refer to the implementation of the method.
  • FIG. 17 is a schematic structural diagram of an item status tracking device provided by an embodiment of the present disclosure. As shown in FIG. 17, the item status tracking device 1600 includes: a movement trajectory information acquisition part 1610, a detection status information determination part 1620, and a target tracking status determination part 1630, where:
  • the movement trajectory information acquisition part 1610 is configured to obtain the movement trajectory information of the item, where the movement trajectory information includes the detection results of the item in at least two frames of images;
  • the detection status information determination part 1620 is configured to determine the detection status information of the item in each frame of image based on the movement trajectory information and preset detection line information, where the detection line is set on one side of the storage cabinet at a distance within a preset range from the storage cabinet, and the detection status information is used to measure the change in the item's position relative to the detection line;
  • the target tracking status determination part 1630 is configured to determine the target tracking status corresponding to the item based on the detection status information of the item in each pair of adjacent frames among the at least two frames of images, where the target tracking status is used to indicate that the item has been taken or that the item has been put back.
  • In some embodiments, the detection result includes type information of the item. The detection status information determination part 1620 is further configured to: determine a target image from the at least two frames of images in the movement trajectory information, the target image being an image in which the type information of the item is a non-target type, such as background; delete the target image from the at least two frames of images to obtain new movement trajectory information; and determine, based on the new movement trajectory information and the preset detection line information, the detection status information of the item in each frame of image under the new movement trajectory information. The target tracking status determination part 1630 is further configured to determine the target tracking status corresponding to the item based on the detection status information of the item in each pair of adjacent frames among the at least two frames of images under the new movement trajectory information.
  • In some embodiments, the preset detection line information includes a detection line function, the detection result includes detection frame information of the item, and the detection frame information includes the center point position of the detection frame. The detection status information determination part 1620 is further configured to determine the detection status information of the item in each frame of image based on the detection line function and the center point position of the item's detection frame in each frame of image in the movement trajectory information.
  • In some embodiments, the center point position of the detection frame includes the abscissa and the ordinate of the center point. The detection status information determination part 1620 is further configured to: determine the function value of the detection line function based on the abscissa of the center point; and determine the detection status information of the item in each frame of image based on the ordinate of the center point and the function value of the detection line function.
  • In some embodiments, the detection status information determination part 1620 is further configured to perform at least one of the following: when the ordinate of the center point and the function value of the detection line function satisfy a first preset relationship, determine the detection status information of the item to be first detection status information; and when the ordinate of the center point and the function value of the detection line function satisfy a second preset relationship, determine the detection status information of the item to be second detection status information.
  • In some embodiments, the target tracking status determination part 1630 is further configured to: determine the relationship between the detection status information of the item in the two adjacent frames of images; when that relationship meets a preset condition, determine that the target tracking status corresponding to the item has changed, where a change in the target tracking status means that the item changes from a first state to a second state or from the second state to the first state, the first state representing the state in which the item and the storage cabinet are located on the same side of the detection line, and the second state representing the state in which the item and the storage cabinet are located on opposite sides of the detection line; determine the number of times the target tracking status corresponding to the item changes under the movement trajectory information; and determine the target tracking status corresponding to the item based on that number of changes, where if the number is odd, the target tracking status is determined to indicate that the item has been taken, and if the number is even, the target tracking status is determined to indicate that the item has been put back.
  • FIG. 18 is a schematic structural diagram of an item status tracking device provided by an embodiment of the present disclosure. As shown in FIG. 18, the item status tracking device 1000 further includes a target type information determination part 1640, and the detection result further includes the type information of the item. The target type information determination part 1640 is configured to: determine the target time point at which the target tracking status corresponding to the item changes; based on the type information of the item in each frame of image, determine the number of images corresponding to each distinct type of information among the at least two frames of images within the target time period; and determine the type information with the largest image count as the target type information of the item, where the target time period is a time period within a preset range around the target time point.
  • In some embodiments, the item status tracking device 1000 further includes a first transaction information generation part 1650, configured to generate transaction information for the item when the target tracking status corresponding to the item indicates that the item has been taken.
  • In some embodiments, the storage cabinet is provided with multiple camera devices, and the target tracking status corresponding to the item corresponds to one target camera device. The item status tracking device 1000 further includes a second transaction information generation part 1660, configured to: obtain the target tracking status corresponding to the item under each camera device; fuse the target tracking statuses corresponding to the item under each camera device to obtain the final tracking status of the item; and generate transaction information for the item when the final tracking status indicates that the item has been taken.
  • In some embodiments, the first transaction information generation part 1650 may be the same part as the second transaction information generation part 1660.
  • In some embodiments, a "part" may be part of a circuit, part of a processor, or part of a program or software, among others; it may also be a unit, and may be modular or non-modular.
  • FIG. 19 is a schematic structural diagram of an electronic device 4000 provided by an embodiment of the present disclosure.
  • As shown in FIG. 19, the electronic device 4000 includes a processor 4001, a memory 4002, and a bus 4003. The memory 4002 is used to store execution instructions and includes an internal memory 40021 and an external memory 40022. The internal memory 40021, also called the main memory, temporarily stores operation data of the processor 4001 as well as data exchanged with the external memory 40022, such as a hard disk; the processor 4001 exchanges data with the external memory 40022 through the internal memory 40021. The memory 4002 is specifically used to store application code for executing the technical solution of the present disclosure, and execution is controlled by the processor 4001. That is, when the electronic device 4000 is running, the processor 4001 and the memory 4002 communicate through the bus 4003, so that the processor 4001 executes the application code stored in the memory 4002 and thereby performs the method in any of the foregoing embodiments.
  • the processor 4001 may be an integrated circuit chip with signal processing capabilities.
  • The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
  • The memory 4002 may include, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and the like.
  • It can be understood that the structure illustrated in the embodiment of the present disclosure does not constitute a specific limitation on the electronic device 4000. In other embodiments, the electronic device 4000 may include more or fewer components than shown in the figures, combine some components, split some components, or arrange the components differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • Embodiments of the present disclosure also provide a computer-readable storage medium.
  • a computer program is stored on the computer-readable storage medium. When the computer program is run by a processor, it executes the steps of the item status tracking method described in the above method embodiment.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • Embodiments of the present disclosure also provide a computer program product carrying program code, where the instructions included in the program code can be used to execute the steps of the item status tracking method described in the above method embodiments; for details, refer to the above method embodiments.
  • The above computer program product may be implemented by hardware, software, or a combination thereof. In some embodiments, the computer program product is embodied as a computer storage medium; in other embodiments, it is embodied as a software product, such as a software development kit (SDK).
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments of the present disclosure.
  • each functional unit in various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium.
  • Based on this understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • Embodiments of the present disclosure provide an item status tracking method, device, electronic device, storage medium, and computer program product. The method includes: obtaining movement trajectory information of an item, the movement trajectory information containing detection results of the item in at least two frames of images; determining the detection status information of the item in each frame of image based on the movement trajectory information and preset detection line information, where the detection line is set on one side of a storage cabinet at a distance within a preset range from the storage cabinet, and the detection status information is used to measure the change in the item's position relative to the detection line; and determining the target tracking status corresponding to the item based on the detection status information of the item in each pair of adjacent frames among the at least two frames of images, where the target tracking status is used to indicate that the item has been taken or that the item has been put back. In this way, on the one hand, items do not need to carry specific identification tags, which not only reduces cost but also improves identification efficiency; on the other hand, the accuracy of determining the target tracking status of items is improved, which in turn improves the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure provide an item status tracking method, device, electronic device, storage medium, and computer program product. The method includes: obtaining movement trajectory information of an item, the movement trajectory information containing detection results of the item in at least two frames of images; determining detection status information of the item in each frame of image based on the movement trajectory information and preset detection line information, where the detection line is set on one side of a storage cabinet at a distance within a preset range from the storage cabinet, and the detection status information is used to measure the change in the item's position relative to the detection line; and determining a target tracking status corresponding to the item based on the detection status information of the item in each pair of adjacent frames among the at least two frames of images, where the target tracking status is used to indicate that the item has been taken or that the item has been put back.

Description

Item status tracking method, device, electronic device, storage medium, and computer program product
Cross-Reference to Related Application
The embodiments of the present disclosure are based on the Chinese patent application No. 202210315977.7, filed on March 28, 2022 and entitled "Item status tracking method, device, electronic device, and storage medium", and claim priority to that Chinese patent application, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to, but is not limited to, the technical field of item tracking, and in particular to an item status tracking method, device, electronic device, storage medium, and computer program product.
Background
With the development of the Internet economy, self-service consumption has gradually become a widely accepted consumption trend because of its convenience, and smart storage cabinets (such as smart refrigerators or smart shelves) have emerged accordingly. In the related art, identifying the information of items taken from smart storage cabinets suffers from problems such as low accuracy, low efficiency, and high cost.
Summary
Embodiments of the present disclosure provide at least an item status tracking method, device, electronic device, storage medium, and computer program product.
An embodiment of the present disclosure provides an item status tracking method, including:
obtaining movement trajectory information of an item, the movement trajectory information containing detection results of the item in at least two frames of images;
determining detection status information of the item in each frame of image based on the movement trajectory information and preset detection line information, where the detection line is set on one side of a storage cabinet at a distance within a preset range from the storage cabinet, and the detection status information is used to measure the change in the item's position relative to the detection line; and
determining a target tracking status corresponding to the item based on the detection status information of the item in each pair of adjacent frames among the at least two frames of images, where the target tracking status is used to indicate that the item has been taken or that the item has been put back.
In the embodiments of the present disclosure, the detection status information of the item in each frame of image is determined based on the movement trajectory information and the preset detection line information, and the target tracking status of the item under the movement trajectory information is determined from the detection status information of the item in each pair of adjacent frames among the at least two frames of images in the movement trajectory information; that is, the item is tracked dynamically, and whether the item has been taken is determined from its target tracking status. In this way, on the one hand, items do not need to carry specific identification tags, which not only reduces cost but also improves identification efficiency; on the other hand, this helps improve the accuracy of determining the target tracking status of the item, which in turn improves the user experience.
An embodiment of the present disclosure also provides an item status tracking device, including:
a movement trajectory information acquisition part, configured to obtain movement trajectory information of an item, the movement trajectory information containing detection results of the item in at least two frames of images;
a detection status information determination part, configured to determine detection status information of the item in each frame of image based on the movement trajectory information and preset detection line information, where the detection line is set on one side of a storage cabinet at a distance within a preset range from the storage cabinet, and the detection status information is used to measure the change in the item's position relative to the detection line; and
a target tracking status determination part, configured to determine a target tracking status corresponding to the item based on the detection status information of the item in each pair of adjacent frames among the at least two frames of images, where the target tracking status is used to indicate that the item has been taken or that the item has been put back.
An embodiment of the present disclosure also provides an electronic device, including a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor; when the electronic device is running, the processor and the memory communicate through the bus, and the machine-readable instructions, when executed by the processor, perform the above item status tracking method.
An embodiment of the present disclosure also provides a computer-readable storage medium on which a computer program is stored, and the computer program, when run by a processor, performs the above item status tracking method.
An embodiment of the present disclosure also provides a computer program product, which includes a computer program or instructions; when the computer program or instructions run on an electronic device, the electronic device is caused to perform the above item status tracking method.
For descriptions of the effects of the above item status tracking device, electronic device, computer-readable storage medium, and computer program product, refer to the description of the above item status tracking method.
To make the above objects, features, and advantages of the present disclosure more comprehensible, some embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present disclosure more clearly, the accompanying drawings required in the embodiments are briefly introduced below. The drawings here are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings show only certain embodiments of the present disclosure and therefore should not be regarded as limiting its scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic architectural diagram of an existing storage cabinet provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of an item status tracking method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a detection result of an item provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a detection line provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of determining detection status information of an item provided by an embodiment of the present disclosure;
FIG. 6 is a schematic flowchart of another item status tracking method provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of an image of movement trajectory information provided by an embodiment of the present disclosure;
FIG. 8 is a schematic flowchart of a method for determining detection status information of an item in each frame of image provided by an embodiment of the present disclosure;
FIG. 9 is a schematic flowchart of a method for determining a target tracking status corresponding to an item provided by an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of detection status information of an item in two adjacent frames of images provided by an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of a change in target tracking status provided by an embodiment of the present disclosure;
FIG. 12 is a schematic flowchart of a method for determining target type information of an item provided by an embodiment of the present disclosure;
FIG. 13 is a schematic diagram of target type information of an item under a movement trajectory provided by an embodiment of the present disclosure;
FIG. 14 is a schematic flowchart of yet another item status tracking method provided by an embodiment of the present disclosure;
FIG. 15 is a schematic flowchart of a method for generating transaction information for items under multiple camera devices provided by an embodiment of the present disclosure;
FIG. 16 is a schematic structural diagram of a storage cabinet with multiple camera devices provided by an embodiment of the present disclosure;
FIG. 17 is a schematic structural diagram of an item status tracking device provided by an embodiment of the present disclosure;
FIG. 18 is a schematic structural diagram of an item status tracking device provided by an embodiment of the present disclosure;
FIG. 19 is a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure provided in the drawings is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the present disclosure. All other embodiments obtained by those skilled in the art without creative effort based on the embodiments of the present disclosure fall within the protection scope of the present disclosure.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings.
The term "and/or" herein describes an association relationship and indicates that three relationships may exist; for example, "A and/or B" may indicate three cases: A alone, both A and B, and B alone. In addition, the term "at least one of" herein indicates any one of multiple items or any combination of at least two of multiple items; for example, "including at least one of A, B, and C" may indicate including any one or more elements selected from the set consisting of A, B, and C.
With the development of the Internet economy, self-service consumption has gradually become a widely accepted consumption trend because of its convenience, and self-service vending machines (such as smart refrigerators or smart shelves) have emerged accordingly.
FIG. 1 is a schematic architectural diagram of an existing storage cabinet provided by an embodiment of the present disclosure. As shown in FIG. 1, the storage cabinet 100 includes multiple layers of storage boards 10 and an image acquisition device 20. At least one item 30 is placed on each storage board 10, for example, beverages (such as iced black tea), instant foods (such as bread or instant noodles), and daily necessities (such as napkins). Each item 30 carries a radio-frequency identification (RFID) tag 31, and the storage cabinet 100 can recognize the RFID tag 31 through radio-frequency identification technology to identify the item 30. The image acquisition device 20 is a device with a face recognition function, used for face-scanning or code-scanning payment. However, because the above storage cabinet 100 requires RFID tags, it suffers from high cost and low efficiency.
Therefore, to reduce cost, some implementations avoid RFID tags and identify item information through static photography. The identification method based on static photography usually determines whether an item has been taken, and the information of the taken item, from the detection results of two adjacent frames of images. However, because this method only analyzes the results, occlusion can cause the information of the taken item to be identified incorrectly, which affects the user experience.
Based on this, embodiments of the present disclosure provide an item status tracking method, including: obtaining movement trajectory information of an item, the movement trajectory information containing detection results of the item in at least two frames of images; determining detection status information of the item in each frame of image based on the movement trajectory information and preset detection line information, where the detection line is set on one side of a storage cabinet at a distance within a preset range from the storage cabinet, and the detection status information is used to measure the change in the item's position relative to the detection line; and determining a target tracking status corresponding to the item based on the detection status information of the item in each pair of adjacent frames among the at least two frames of images, where the target tracking status is used to indicate that the item has been taken or that the item has been put back.
In the embodiments of the present disclosure, the detection status information of the item in each frame of image is determined based on the movement trajectory information and the preset detection line information, and the target tracking status of the item under the movement trajectory information is determined from the detection status information of the item in each pair of adjacent frames among the at least two frames of images in the movement trajectory information; that is, the item is tracked dynamically, and whether the item has been taken is determined from its target tracking status. In this way, on the one hand, items do not need to carry specific identification tags, which not only reduces cost but also improves identification efficiency; on the other hand, this helps improve the accuracy of determining the target tracking status of the item, which in turn improves the user experience.
The item status tracking method in the embodiments of the present disclosure is described in detail below with reference to the accompanying drawings.
FIG. 2 is a schematic flowchart of an item status tracking method provided by an embodiment of the present disclosure. As shown in FIG. 2, the item status tracking method provided by the embodiment of the present disclosure includes the following steps S101 to S103:
Step S101: Obtain movement trajectory information of an item, the movement trajectory information containing detection results of the item in at least two frames of images.
Here, the movement trajectory information refers to the trajectory formed by the movement of the item while the user takes and puts back the item. In some embodiments, while the user takes and puts back the item, video data of the process can be captured by a camera device; a terminal device then obtains the video data captured by the camera device, and after decoding, object detection, and trajectory tracking of the video data, the movement trajectory information of the item is obtained.
Video data is a continuous image sequence, which in essence consists of a group of consecutive image frames. An image frame is the smallest visual unit of a video and is a static image; combining a temporally continuous sequence of image frames forms a dynamic video. In some embodiments, to facilitate subsequent detection and identification, the video data can be decoded using techniques such as FFmpeg (Fast Forward MPEG) to obtain at least two frames of images, and performing object detection on these frames yields the detection results of the at least two frames of images.
FFmpeg is an open-source computer program that can record and convert digital audio and video and turn them into streams. It is licensed under the LGPL or GPL and provides a complete solution for recording, converting, and streaming audio and video. It includes the advanced audio/video codec library libavcodec; to guarantee high portability and codec quality, much of the code in libavcodec was developed from scratch, making it convenient to use and adapt. Accordingly, in practice, the captured video data of users taking items can also be stored and transmitted using FFmpeg.
In some embodiments, because video data usually includes at least two frames of images per second (for example, 24 frames per second), frames can be extracted from the video data at intervals during decoding. Frame extraction may be performed at a preset frame interval, for example, extracting one image to be detected every 20 frames; it may also be performed at a preset time interval, for example, extracting one frame of image every 10 milliseconds (ms).
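The two sampling strategies above can be sketched as follows. This is a simplified illustration over an already-decoded frame list; the function names and the assumed frame rate are not from the disclosure.

```python
def sample_by_frame_interval(frames, interval=20):
    """Keep one frame every `interval` frames, e.g. every 20 frames."""
    return frames[::interval]

def sample_by_time_interval(frames, fps=24.0, interval_ms=10.0):
    """Keep one frame roughly every `interval_ms` milliseconds,
    given the video frame rate `fps`."""
    step = max(1, round(fps * interval_ms / 1000.0))
    # At 24 fps a 10 ms interval is finer than the frame spacing,
    # so every frame is kept (step = 1).
    return frames[::step]

frames = list(range(100))  # stand-ins for decoded image frames
print(len(sample_by_frame_interval(frames, interval=20)))  # 5
```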
In implementation, the frame interval and the time interval for frame extraction can be set according to actual needs and are not limited here.
In some embodiments, the detection result includes detection frame information of the item, and the detection frame information includes the size of the detection frame (for example, its length and width) and the position information of the detection frame (for example, the position coordinates of the center point of the detection frame).
In some embodiments, the detection result may also include type information of the item. The type information of the item may be any suitable category, for example, beverages, daily necessities, or instant foods.
FIG. 3 is a schematic diagram of a detection result of an item provided by an embodiment of the present disclosure. As shown in FIG. 3, the detection result 500 includes a detection frame 501, and the type information 502 of the item is output in text form as "mineral water 150 ml". The size of the detection frame 501 represents the size of the item.
In addition, in the embodiments of the present disclosure, the video data captured by the camera device is obtained by a terminal device; that is, the item status tracking method may be executed by a terminal device, which may include, but is not limited to, in-vehicle devices, wearable devices, user terminals, and handheld devices. In some embodiments, the method may also be executed by a server, which may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, big data, and artificial intelligence platforms. In some embodiments, the item status tracking method may also be implemented by a processor invoking computer-readable instructions stored in a memory.
Step S102: Determine detection status information of the item in each frame of image based on the movement trajectory information and preset detection line information.
Here, the detection line is set on one side of the storage cabinet at a distance within a preset range from the storage cabinet, and the detection status information is used to measure the change in the item's position relative to the detection line.
FIG. 4 is a schematic diagram of a detection line provided by an embodiment of the present disclosure. As shown in FIG. 4, the detection line 300 is set on one side of the storage cabinet 200 at a distance within the preset range from the storage cabinet 200. While the user takes and puts back the item 60, the detection status information of the item 60 can be determined from the relative position between the detection line 300 and the item 60.
FIG. 5 is a schematic diagram of determining detection status information of an item provided by an embodiment of the present disclosure. As shown in FIG. 5, the detection line 300 is used to judge whether an item has been taken or put back. Because the size of the detection frame ID1 represents the size of the item, the detection status information of the item can be determined by judging the relative position between the detection frame ID1 and the detection line 300. For example, when the detection frame ID1 crosses above the detection line 300, the detection status information of the item may be determined to be 1; when the detection frame ID1 crosses below the detection line 300, the detection status information of the item may be determined to be -1.
It should be noted that the values 1 and -1 in this implementation represent the relative change in position between the item and the detection line; in other implementations, the detection status information may also be other numbers, letters, or symbols (for example, 0 and 1, or S and -S), which are not limited here. Moreover, the two detection frames ID1 shown in FIG. 5 represent the change of the position of the detection frame ID1 relative to the detection line 300, rather than two detection frames ID1 existing in the same frame of image.
Step S103: Determine the target tracking status corresponding to the item based on the detection status information of the item in each pair of adjacent frames among the at least two frames of images, where the target tracking status is used to indicate that the item has been taken or that the item has been put back.
Here, because the movement trajectory information of the item contains at least two frames of images, the target tracking status of the item needs to be determined from the detection status information of the item in each pair of adjacent frames, so as to determine whether the item has been taken or put back. For example, if the movement trajectory information includes four frames of images A, B, C, and D, where A and B are adjacent, B and C are adjacent, and C and D are adjacent, then if the detection status information of item O in image A differs from that in image B, the position of the item relative to the detection line is considered to have changed, and so on, from which the target tracking status corresponding to item O can be determined.
In the embodiments of the present disclosure, the detection status information of the item in each frame of image is determined based on the movement trajectory information and the preset detection line information, and the target tracking status of the item under the movement trajectory information is determined from the detection status information of the item in each pair of adjacent frames among the at least two frames of images in the movement trajectory information; that is, the item is tracked dynamically, and whether the item has been taken is determined from its target tracking status. In this way, on the one hand, items do not need to carry specific identification tags, which not only reduces cost but also improves identification efficiency; on the other hand, this helps improve the accuracy of determining the target tracking status of the item, which in turn improves the user experience.
In some embodiments, because the movement trajectory information is obtained by detecting, identifying, and tracking the item in each frame of image, the movement trajectory may contain some images whose detection results are background. FIG. 6 is a schematic flowchart of another item status tracking method provided by an embodiment of the present disclosure; here, the detection result further includes type information of the item. As shown in FIG. 6, the method includes the following steps S201 to S205:
Step S201: Obtain movement trajectory information of an item, the movement trajectory information containing detection results of the item in at least two frames of images.
Here, step S201 corresponds to step S101 above, and its implementation may refer to the implementation of step S101.
Step S202: Determine a target image from the at least two frames of images in the movement trajectory information, the target image being an image in which the type information of the item is a non-target type.
Step S203: Delete the target image from the at least two frames of images to obtain new movement trajectory information.
Step S204: Determine detection status information of the item in each frame of image under the new movement trajectory information, based on the new movement trajectory information and the preset detection line information.
Step S205: Determine the target tracking status corresponding to the item based on the detection status information of the item in each pair of adjacent frames among the at least two frames of images under the new movement trajectory information.
Here, steps S204 and S205 correspond respectively to steps S102 and S103 above, and their implementation may refer to the implementation of those steps.
An image in which the type information of the item is a non-target type may be an image in which the type information of the item is blurred, or an image that is not of an item at all, for example, an image of part of the structure of the storage cabinet, or an image whose type differs from the types of items pre-placed in the storage cabinet.
Therefore, deleting the target image from the at least two frames of images reduces its influence on the subsequent determination of the target tracking status corresponding to the item and helps improve the accuracy of determining the detection status information.
In some embodiments, the preset detection line information includes a detection line function, the detection result includes detection frame information of the item, and the detection frame information includes the center point position of the detection frame. In this case, step S102 above includes step S1021:
Step S1021: Determine the detection status information of the item in each frame of image based on the center point position of the item's detection frame in each frame of image in the movement trajectory information and the detection line function.
FIG. 7 is a schematic diagram of an image of movement trajectory information provided by an embodiment of the present disclosure. As shown in FIG. 7, an image coordinate system can be established with the upper-left corner of the image 400 as the coordinate origin (0, 0). The detection status information of item O in each frame of image can then be determined based on the center point position O1 of the detection frame of item O in each frame of image in the movement trajectory information and the detection line function F(x). In this way, the accuracy of determining the detection status information of the item in each frame of image can be improved, which in turn improves the accuracy of determining the item's target tracking status.
It should be noted that the movement trajectory information described in this implementation and in the following implementations may also be the new movement trajectory information obtained by deleting the target image from the at least two frames of images, which is not limited here; for ease of understanding, the term "movement trajectory information" is used uniformly below.
In some embodiments, the center point position of the detection frame includes the abscissa and the ordinate of the center point. FIG. 8 is a schematic flowchart of a method for determining detection status information of an item in each frame of image provided by an embodiment of the present disclosure. As shown in FIG. 8, step S1021 above may include the following steps S601 to S602:
Step S601: Determine the function value of the detection line function based on the abscissa of the center point.
Step S602: Determine the detection status information of the item in each frame of image based on the ordinate of the center point and the function value of the detection line function.
As shown in FIG. 7, the detection status information of item O at time t can be denoted S_t, the position coordinates of the detection frame center point O1 of item O are (x_t, y_t), and the preset detection line information includes the detection line function F(x); the function value F(x_t) of the detection line function can then be determined from the abscissa x_t of the center point.
In some embodiments, step S602 above may include step S6021 and/or step S6022:
Step S6021: When the ordinate of the center point and the function value of the detection line function satisfy a first preset relationship, determine the detection status information of the item to be first detection status information.
Step S6022: When the ordinate of the center point and the function value of the detection line function satisfy a second preset relationship, determine the detection status information of the item to be second detection status information.
As shown in FIG. 7, according to the pre-established image coordinate system, when the ordinate y_t of the center point O1 and the function value F(x_t) satisfy the first preset relationship y_t > F(x_t), that is, when item O is located above the detection line, the detection status information of item O can be determined to be the first detection status information, i.e., S_t = -1; when the ordinate y_t of the center point O1 and the function value satisfy the second preset relationship y_t < F(x_t), that is, when item O is located below the detection line, the detection status information of item O can be determined to be the second detection status information, i.e., S_t = 1. The two items O shown in FIG. 7 represent item O located on either side of the detection line at different times, rather than two items O existing simultaneously.
It should be noted that because image coordinate systems can be established in different ways (for example, with different coordinate origins), the settings of the first preset relationship and the second preset relationship will differ accordingly; in practical applications, they can be set according to the actual situation of the image and are not limited here.
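The rule of steps S6021 and S6022 can be written down directly. The sketch below follows the convention described for FIG. 7 (S_t = -1 when y_t > F(x_t), S_t = 1 when y_t < F(x_t)); the horizontal detection line used in the example is an illustrative assumption, and the disclosure does not specify behavior when the center point lies exactly on the line.

```python
def detection_status(center, detection_line):
    """Return the detection status information of an item from the center
    point (x_t, y_t) of its detection frame and the detection line
    function F(x); 0 is used here when the point lies exactly on the line."""
    x_t, y_t = center
    f_x = detection_line(x_t)
    if y_t > f_x:      # first preset relationship
        return -1      # first detection status information
    if y_t < f_x:      # second preset relationship
        return 1       # second detection status information
    return 0           # center point exactly on the detection line

# Assume a horizontal detection line at y = 200 in image coordinates.
line = lambda x: 200.0
print(detection_status((320, 250), line))  # -1
print(detection_status((320, 150), line))  # 1
```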
在一些实施方式中,在确定每帧图像下的物品的检测状态信息后,可以确定所述物品对应的目标跟踪状态,图9为本公开实施例提供的一种确定物品对应的目标跟踪状态方法的流程示意图,如图9所示,上述步骤S103可以包括以下步骤S1031~步骤S1034,其中:
步骤S1031,确定所述相邻两帧图像的物品的检测状态信息之间的关系。
步骤S1032,在所述相邻两帧图像的物品的检测状态信息之间的关系符合预设条件的情况下,确定所述物品对应的目标跟踪状态发生改变。
这里,所述目标跟踪状态发生改变是指所述物品从第一状态变化至第二状态,或者所述物品从所述第二状态变化至所述第一状态;所述第一状态表征所述物品与所述存储柜位于所述检测线的同侧的状态,所述第二状态表征所述物品与所述存储柜分别位于所述检测线两侧的状态。
所述预设条件可以根据实际需要进行设置,在此不做限定。例如,相邻两帧图像的物品的检测状态信息之间的关系符合预设条件可以是相邻两帧图像的物品的检测状态信息之间的关系为相反数关系。
图10为本公开实施例提供的一种相邻两帧图像的物品的检测状态信息的示意图,其中,所述相邻两帧图像的物品的检测状态信息之间的关系符合预设条件是互为相反数关系,如图10中所示,图像1001与图像1002为相邻的两帧图像,每张图像中的虚线为检测线300,其中,图像1001的物品O位于检测线300上方,物品O的检测状态信息S1=1,图像1002的物品O位于检测线300的下方,物品O的检测状态信息为S2=-1,也即图像1001与图像1002的物品O的检测状态信息之间的关系符合互为相反数关系,则可以确定所述物品对应的目标跟踪状态发生改变。
图11为本公开实施例提供的一种目标跟踪状态发生改变的示意图。如图11中所示,在用户购买物品60的过程中,物品60从位置a移动至位置b,处于位置b的物品60与自助售卖装置200位于检测线300的同侧,即此时物品60处于第一状态,此时,物品的检测状态信息可以为1;然后,物品60继续移动至c位置,处于c位置的物品60与自助售卖装置200分别位于检测线300的两侧,也即此时的物品60处于第二状态,此时,物品的检测状态信息为-1,也即,物品60从位置b移动至位置c的过程即为物品60对应的目标跟踪状态发生改变的过程。这样,可以将物品在移动过程中状态改变的情况都确定出来,可以提高确定物品对应的目标跟踪状态是否发生变化的准确度。
步骤S1033,确定所述移动轨迹信息下的所述物品对应的目标跟踪状态发生改变的次数。
步骤S1034,基于所述物品对应的目标跟踪状态发生改变的次数,确定所述物品对应的目标跟踪状态,其中,在所述次数为奇数的情况下,确定所述物品对应的目标跟踪状态指示所述物品被拿取,在所述次数为偶数的情况下,确定所述物品对应的目标跟踪状态指示所述物品被放回。
如图11所示,物品60从b位置移动至c位置,也即,物品60对应的目标跟踪状态发生一次改变,则所述物品对应的目标跟踪状态指示所述物品60被拿取,也即,物品60在b位置对应的图像与物品60在c位置对应的图像为相邻两帧图像,并且发生一次跨线动作;接下来,若物品60从c位置移动回b位置,此时,物品60对应的目标跟踪状态发生两次改变,则所述物品对应的目标跟踪状态指示所述物品60被放回,也即,物品60在c位置对应的图像与物品60再次移动至b位置对应的图像为相邻两帧图像,并且发生一次跨线动作。以此类推,由于移动轨迹信息所对应的至少两帧图像中可能会包括多个相邻两帧图像,因此,需要根据物品对应的目标跟踪状态发生改变的次数,确定所述物品是否被拿取。这样,可以提高确定物品对应的目标跟踪状态的准确性。
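步骤S1031至步骤S1034所述的"统计跨线次数、按奇偶性判断拿取或放回"的逻辑,可以用如下示意性代码表示。其中以相邻两帧检测状态信息互为相反数作为预设条件,返回值的文字表述为本示例的假设:

```python
# 示意性代码:基于检测状态信息序列,统计目标跟踪状态发生改变的次数,
# 并按奇偶性确定物品被拿取或被放回(对应步骤S1031~S1034)。

def target_tracking_state(states):
    # 相邻两帧的检测状态信息互为相反数,即视为发生一次跨线(状态改变)
    changes = sum(
        1 for s_prev, s_cur in zip(states, states[1:])
        if s_prev == -s_cur and s_prev != 0
    )
    # 次数为奇数:物品被拿取;次数为偶数(含0次,即未发生跨线):物品被放回/未被拿取
    return "拿取" if changes % 2 == 1 else "放回"
```

例如,序列 [1, 1, -1] 仅发生一次跨线,判定为拿取;序列 [1, -1, 1] 发生两次跨线,判定为放回。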
由于上述移动轨迹信息是通过检测、识别以及跟踪得到的,因此,在上述过程中可能出现识别错误或者匹配错误,例如,物品M和物品N相似,在进行检测以及跟踪时,将物品N标记为物品M,因此,在确定所述物品对应的目标跟踪状态后,还需要对物品的种类信息进行确定。图12为本公开实施例提供的一种确定物品的目标种类信息的方法流程示意图,如图12中所示,该方法包括以下步骤S1101~步骤S1102,其中:
步骤S1101,确定所述物品对应的目标跟踪状态发生改变的目标时间点。
步骤S1102,基于所述每帧图像下的物品的种类信息,确定目标时间段内的至少两帧图像下的不同种类信息所对应的图像数量,并将最多图像数量所对应的种类信息确定为所述物品的目标种类信息,其中,所述目标时间段是指以所述目标时间点为基准的预设范围内的时间段。
这里,所述预设范围可以根据实际需要进行设置,例如,可以是1秒(Second,s),3秒或者5秒等,在此不做限定。
图13为本公开实施例提供的一种确定物品的目标种类信息的示意图,如图13中所示,所述物品对应的目标跟踪状态发生改变的目标时间点为T1,预设范围为1s,移动轨迹信息包括10张图像(图像1101至图像1110),每张图像均包括跟踪标记号为ID1的物品,则可以确定在目标时间段T1-1到T1+1内的至少两帧图像中的物品的种类信息所对应的图像数量,其中,图像1107中的物品的种类信息实际为气泡水,而其他图像中的物品的种类信息为矿泉水,则可以确定所述物品的目标种类信息为矿泉水。
在本实施方式中,由于移动轨迹信息中的每帧图像中的物品可能存在多种或者识别结果存在错误,因此,将最多图像数量所对应的种类信息确定为物品的目标种类信息,可以提升确定物品的目标种类信息的准确性。
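上述"在目标时间段内按图像数量最多的种类确定目标种类信息"的多数表决逻辑,可以用如下示意性代码表示。其中 frames 的数据结构(时间戳与种类的元组列表)为本示例的假设:

```python
# 示意性代码:在以目标时间点 t0 为基准的预设范围内,
# 将出现帧数最多的种类信息确定为物品的目标种类信息(对应步骤S1101~S1102)。
from collections import Counter

def majority_class(frames, t0, radius=1.0):
    """frames 为 (时间戳, 种类信息) 列表;目标时间段为 [t0-radius, t0+radius]。"""
    window = [cls for t, cls in frames if t0 - radius <= t <= t0 + radius]
    cls, _ = Counter(window).most_common(1)[0]  # 图像数量最多的种类
    return cls

# 假设的识别结果:其中一帧被误识别为"气泡水"
frames = [(4.2, "矿泉水"), (4.5, "矿泉水"), (4.8, "气泡水"),
          (5.1, "矿泉水"), (5.4, "矿泉水")]
```

即使个别帧识别出错(如上例中的"气泡水"),多数表决仍能得到正确的目标种类信息。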
在一些实施方式中,若存储柜为自助售卖机,则在确定所述物品对应的目标跟踪状态后,还可以生成相应的交易信息,图14为本公开实施例提供的一种物品状态跟踪方法的流程示意图,与图2中所示的物品状态跟踪方法不同的是,所述方法还包括步骤S104,其中:
步骤S104,在所述物品对应的目标跟踪状态指示所述物品被拿取的情况下,生成所述物品的交易信息。
这里,在物品对应的目标跟踪状态指示该物品被拿取的情况下,可以生成该物品的交易信息。这样,由于提升了物品的拿取状态的确定精度,进而也提升了交易信息的精度,可以提升用户自助购买体验。
交易信息可以包括物品的种类信息、物品被拿取的时间、物品的数量以及物品的金额中的至少一种。
在一些实施方式中,还可以将该交易信息发送至用户,比如,用户侧接收到的交易信息可以为“您已于11时50分购买物品O两件,金额为6元”。
在一些实施方式中,在所述物品对应的目标跟踪状态指示所述物品被拿取的情况下,还可以控制所述存储柜的柜门自动关闭,以结束当前交易的流程。
在一些实施方式中,上述方法的实施是在所述存储柜中设置有一个摄像装置的情况下,为了能够从多角度、多视野拍摄到物品的移动过程,本公开实施例还提供了基于多摄像装置的物品交易信息生成的方法。图15为本公开实施例提供的一种多摄像装置下的物品的交易信息生成方法的流程示意图,所述存储柜中设置有多个摄像装置,所述物品对应的目标跟踪状态对应其中一个目标摄像装置,如图15中所示,上述步骤S104包括以下步骤S1041~步骤S1043,其中:
步骤S1041,获取每个摄像装置下的所述物品对应的目标跟踪状态。
步骤S1042,将所述每个摄像装置下的所述物品对应的目标跟踪状态进行融合,得到所述物品的最终跟踪状态。
步骤S1043,在所述物品的最终跟踪状态指示所述物品被拿取的情况下,生成所述物品的交易信息。
这里,每个摄像装置所对应的目标跟踪状态可能存在不同,因此,需要获取每个摄像装置下的该物品所对应的目标跟踪状态,并将每个摄像装置下的该物品所对应的目标跟踪状态进行融合,得到该物品的完整跟踪状态,也即所述最终跟踪状态,若最终跟踪状态指示所述物品被拿取,则可以生成该物品的交易信息。这样,通过将多个摄像装置下的物品的目标跟踪状态进行融合得到最终跟踪轨迹,可以使得物品的跟踪轨迹更加完整,并在所述物品的最终跟踪状态指示所述物品被拿取的情况下,生成所述物品的交易信息,有利于提高交易信息的完整性。
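步骤S1041至步骤S1043所述的多摄像装置状态融合,可以用如下示意性代码表示。需要说明的是,正文并未限定具体的融合策略,此处仅假设采用多数表决,并在票数相同时保守地判定为放回,以避免误生成交易信息:

```python
# 示意性代码:将多个摄像装置下同一物品的目标跟踪状态进行融合,
# 得到最终跟踪状态(对应步骤S1041~S1042)。融合策略为本示例的假设。
from collections import Counter

def fuse_states(per_camera_states):
    counter = Counter(per_camera_states)
    # 假设:仅当判定"拿取"的摄像装置数严格多于"放回"时,才认为物品被拿取
    if counter["拿取"] > counter["放回"]:
        return "拿取"
    return "放回"
```

若融合后的最终跟踪状态指示物品被拿取,则可据此生成交易信息(对应步骤S1043)。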
图16为本公开实施例提供的一种具有多摄像装置的存储柜的结构示意图。如图16中所示,存储柜200包括两个摄像装置40以及多个层叠且间隔设置的置物板50。置物板50用于承载多种物品60(例如,娃哈哈、500毫升(mL)冰红茶、1.5升(L)冰红茶以及泡面等),当用户在存储柜200拿取物品的过程中,两个摄像装置40会分别从不同的角度采集用户拿取物品60的视频。因此,通过上述两个摄像装置40可以分别获取到用户从存储柜200拿取物品的视频。
在一些实施方式中,摄像装置40的数量可以根据实际需求进行设置,且多个摄像装置之间需要具有较多的共同拍摄视野,以从不同角度拍摄用户拿取物品的视频。在一些实施方式中,可以根据摄像装置的拍摄视角、拍摄范围,或者根据成本等因素进行设置,例如,摄像装置40还可以是其他数量,如三个或者五个等,在此不做限定。在本公开实施例中,摄像装置的数量设置为两个。
另外,需要说明的是,本公开实施例中的存储柜可以包括但不限于冰箱、保温柜、自助售卖机、储物柜等,具体可以根据应用场景不同而不同,只要是能存放物品且能够供用户取物的柜子、装置,皆可称为存储柜。
在本实施方式中,若存储柜设置有多个摄像装置,则可以将各个摄像装置下的物品对应的目标跟踪状态进行融合,以得到最终跟踪状态,并在最终跟踪状态指示所述物品被拿取的情况下,生成交易信息。这样,可以提高交易信息的完整性以及避免交易信息的重复性。
本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的撰写顺序并不意味着严格的执行顺序而对实施过程构成任何限定,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
基于同一发明构思,本公开实施例中还提供了与物品状态跟踪方法对应的物品状态跟踪装置,由于本公开实施例中的装置解决问题的原理与本公开实施例上述物品状态跟踪方法相似,因此装置的实施可以参阅方法的实施。
图17为本公开实施例提供的一种物品状态跟踪装置的结构示意图,如图17所示,所述物品状态跟踪装置1600包括:移动轨迹信息获取部分1610、检测状态信息确定部分1620以及目标跟踪状态确定部分1630;其中,
移动轨迹信息获取部分1610,被配置为获取物品的移动轨迹信息,所述移动轨迹信息包含所述物品在至少两帧图像中的检测结果;
检测状态信息确定部分1620,被配置为基于所述移动轨迹信息以及预设的检测线信息,确定每帧图像下的所述物品的检测状态信息,其中,所述检测线设置于存储柜的一侧,且与所述存储柜的距离在预设范围内,所述检测状态信息用于衡量所述物品相对所述检测线的位置变化状态;
目标跟踪状态确定部分1630,被配置为基于所述至少两帧图像中的每相邻两帧图像下的物品的检测状态信息,确定所述物品对应的目标跟踪状态,其中,所述目标跟踪状态用于指示所述物品被拿取,或者所述物品被放回。
在一些实施方式中,所述检测结果包括所述物品的种类信息;所述检测状态信息确定部分1620,还被配置为:从所述移动轨迹信息中的至少两帧图像中确定目标图像,所述目标图像为所述物品的种类信息为非目标种类的图像;将所述目标图像从所述至少两帧图像中删除,得到新的移动轨迹信息;基于所述新的移动轨迹信息以及所述预设的检测线信息,确定所述新的移动轨迹信息下的每帧图像下的所述物品的检测状态信息;所述目标跟踪状态确定部分1630,还被配置为:基于所述新的移动轨迹信息下的至少两帧图像中的每相邻两帧图像下的物品的检测状态信息,确定所述物品对应的目标跟踪状态。
在一些实施方式中,所述预设的检测线信息包括检测线函数,所述检测结果包括所述物品的检测框信息,所述物品的检测框信息包括检测框的中心点位置;所述检测状态信息确定部分1620,还被配置为:基于所述移动轨迹信息中的每帧图像中的所述物品的检测框中心点位置以及所述检测线函数,确定所述每帧图像下的所述物品的检测状态信息。
在一些实施方式中,所述检测框的中心点位置包括所述中心点的横坐标以及所述中心点的纵坐标;所述检测状态信息确定部分1620,还被配置为:基于所述中心点的横坐标,确定所述检测线函数的函数值;基于所述中心点的纵坐标以及所述检测线函数的函数值,确定所述每帧图像下的所述物品的检测状态信息。
在一些实施方式中,所述检测状态信息确定部分1620,还被配置为以下至少之一:在所述中心点的纵坐标与所述检测线函数的函数值满足第一预设关系的情况下,确定所述物品的检测状态信息为第一检测状态信息;在所述中心点的纵坐标与所述检测线函数的函数值满足第二预设关系的情况下,确定所述物品的检测状态信息为第二检测状态信息。
在一些实施方式中,所述目标跟踪状态确定部分1630,还被配置为:确定所述相邻两帧图像的物品的检测状态信息之间的关系;在所述相邻两帧图像的物品的检测状态信息之间的关系符合预设条件的情况下,确定所述物品对应的目标跟踪状态发生改变,其中,所述目标跟踪状态发生改变是指所述物品从第一状态变化至第二状态,或者所述物品从所述第二状态变化至所述第一状态;所述第一状态表征所述物品与所述存储柜位于所述检测线的同侧的状态,所述第二状态表征所述物品与所述存储柜分别位于所述检测线两侧的状态;确定所述移动轨迹信息下的所述物品对应的目标跟踪状态发生改变的次数;基于所述物品对应的目标跟踪状态发生改变的次数,确定所述物品对应的目标跟踪状态,其中,在所述次数为奇数的情况下,确定所述物品对应的目标跟踪状态指示所述物品被拿取,在所述次数为偶数的情况下,确定所述物品对应的目标跟踪状态指示所述物品被放回。
图18为本公开实施例提供的一种物品状态跟踪装置的结构示意图,如图18所示,所述物品状态跟踪装置1000还包括:目标种类信息确定部分1640,所述检测结果还包括所述物品的种类信息;
所述目标种类信息确定部分1640,被配置为:确定所述物品对应的目标跟踪状态发生改变的目标时间点;基于所述每帧图像下的物品的种类信息,确定目标时间段内的至少两帧图像下的不同种类信息所对应的图像数量,并将最多图像数量所对应的种类信息确定为所述物品的目标种类信息,其中,所述目标时间段是指以所述目标时间点为基准的预设范围内的时间段。
在一些实施方式中,所述物品状态跟踪装置1000还包括:第一交易信息生成部分1650,所述第一交易信息生成部分1650,被配置为:在所述物品对应的目标跟踪状态指示所述物品被拿取的情况下,生成所述物品的交易信息。
在一些实施方式中,所述存储柜中设置有多个摄像装置,所述物品对应的目标跟踪状态对应其中一个目标摄像装置;所述物品状态跟踪装置1000还包括:第二交易信息生成部分1660,所述第二交易信息生成部分1660,被配置为:获取每个摄像装置下的所述物品对应的目标跟踪状态;将所述每个摄像装置下的所述物品对应的目标跟踪状态进行融合,得到所述物品的最终跟踪状态;在所述物品的最终跟踪状态指示所述物品被拿取的情况下,生成所述物品的交易信息。
在一些实施方式中,所述第一交易信息生成部分1650与所述第二交易信息生成部分1660相同。
关于装置中的各部分的处理流程、以及各部分之间的交互流程的描述可以参照上述方法实施例中的相关说明。
在本公开实施例以及其他的实施例中,“部分”可以是部分电路、部分处理器、部分程序或软件等等,当然也可以是单元,还可以是模块也可以是非模块化的。
基于同一技术构思,本公开实施例还提供了一种电子设备。图19为本公开实施例提供的电子设备4000的结构示意图,如图19所示,所述电子设备4000包括处理器4001、存储器4002、和总线4003。其中,存储器4002用于存储执行指令,包括内存40021和外部存储器40022;这里的内存40021也称内存储器,用于暂时存放处理器4001中的运算数据,以及与硬盘等外部存储器40022交换的数据,处理器4001通过内存40021与外部存储器40022进行数据交换。
在本公开实施例中,存储器4002具体用于存储执行本公开技术方案的应用程序代码,并由处理器4001来控制执行。也即,当电子设备4000运行时,处理器4001与存储器4002之间通过总线4003通信,使得处理器4001执行存储器4002中存储的应用程序代码,进而执行前述任一实施例中的方法。
处理器4001可能是一种集成电路芯片,具有信号的处理能力。上述的处理器可以是通用处理器,包括中央处理器(Central Processing Unit,CPU)、网络处理器(Network Processor,NP)等;还可以是数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本公开实施例中公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器,或者该处理器也可以是任何常规的处理器等。
其中,存储器4002可以包括但不限于随机存取存储器(Random Access Memory,RAM),只读存储器(Read Only Memory,ROM),可编程只读存储器(Programmable Read-Only Memory,PROM),可擦除只读存储器(Erasable Programmable Read-Only Memory,EPROM),电可擦除只读存储器(Electric Erasable Programmable Read-Only Memory,EEPROM)等。
可以理解的是,本公开实施例示意的结构并不构成对电子设备4000的具体限定。在本公开另一些实施例中,电子设备4000可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件、软件或软件和硬件的组合实现。
本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述方法实施例中所述的物品状态跟踪方法的步骤。其中,该存储介质可以是易失性或非易失性的计算机可读取存储介质。
本公开实施例还提供一种计算机程序产品,该计算机程序产品承载有程序代码,所述程序代码包括的指令可用于执行上述方法实施例中所述的物品状态跟踪方法的步骤,具体可参阅上述方法实施例。
其中,上述计算机程序产品可以具体通过硬件、软件或其结合的方式实现。在一些实施例中,所述计算机程序产品具体体现为计算机存储介质,在一些实施例中,计算机程序产品具体体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统和终端的具体工作过程,可以参考前述方法实施例中的对应过程。在本公开所提供的几个实施例中,应该理解到,所揭露的系统、终端和方法,可以通过其它的方式实现。以上所描述的终端实施例是示意性的,例如,所述单元的划分,为一种逻辑功能划分,实际实现时可以有另外的划分方式,又例如,多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口,或单元的间接耦合或通信连接,可以是电性、机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本公开实施例方案的目的。
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个处理器可执行的非易失的计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台电子设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是:以上所述实施例,为本公开的具体实施方式,用以说明本公开的技术方案,而非对其限制,本公开的保护范围并不局限于此,尽管参照前述实施例对本公开进行了详细的说明,本领域的普通技术人员应当理解:任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化,或者对其中部分技术特征进行等同替换;而这些修改、变化或者替换,并不使相应技术方案的本质脱离本公开实施例技术方案的精神和范围,都应涵盖在本公开的保护范围之内。因此,本公开的保护范围应以所述权利要求的保护范围为准。
工业实用性
本公开实施例提供了一种物品状态跟踪方法、装置、电子设备、存储介质及计算机程序产品,该方法包括:获取物品的移动轨迹信息,移动轨迹信息包含物品在至少两帧图像中的检测结果;基于移动轨迹信息以及预设的检测线信息,确定每帧图像下的物品的检测状态信息,检测线设置于存储柜的一侧,且与存储柜的距离在预设范围内,检测状态信息用于衡量物品相对检测线的位置变化状态;基于至少两帧图像中的每相邻两帧图像下的物品的检测状态信息,确定物品对应的目标跟踪状态,目标跟踪状态用于指示物品被拿取,或者物品被放回。一方面,物品不需要附带特定的识别标签,不仅可以降低成本,而且可以提高识别的效率;另一方面,有利于提高确定物品的目标跟踪状态的准确度,进而可以提升用户的使用体验。

Claims (21)

  1. 一种物品状态跟踪方法,所述方法包括:
    获取物品的移动轨迹信息,所述移动轨迹信息包含所述物品在至少两帧图像中的检测结果;
    基于所述移动轨迹信息以及预设的检测线信息,确定每帧图像下的所述物品的检测状态信息,其中,所述检测线设置于存储柜的一侧,且与所述存储柜的距离在预设范围内,所述检测状态信息用于衡量所述物品相对所述检测线的位置变化状态;
    基于所述至少两帧图像中的每相邻两帧图像下的物品的检测状态信息,确定所述物品对应的目标跟踪状态,其中,所述目标跟踪状态用于指示所述物品被拿取,或者所述物品被放回。
  2. 根据权利要求1所述的方法,其中,所述检测结果包括所述物品的种类信息;所述基于所述移动轨迹信息以及预设的检测线信息,确定每帧图像下的所述物品的检测状态信息,包括:
    从所述移动轨迹信息中的至少两帧图像中确定目标图像,所述目标图像为所述物品的种类信息为非目标种类的图像;
    将所述目标图像从所述至少两帧图像中删除,得到新的移动轨迹信息;
    基于所述新的移动轨迹信息以及所述预设的检测线信息,确定所述新的移动轨迹信息下的每帧图像下的所述物品的检测状态信息;
    所述基于所述至少两帧图像中的每相邻两帧图像下的物品的检测状态信息,确定所述物品对应的目标跟踪状态,包括:
    基于所述新的移动轨迹信息下的至少两帧图像中的每相邻两帧图像下的物品的检测状态信息,确定所述物品对应的目标跟踪状态。
  3. 根据权利要求1所述的方法,其中,所述预设的检测线信息包括检测线函数,所述检测结果包括所述物品的检测框信息,所述物品的检测框信息包括检测框的中心点位置;所述基于所述移动轨迹信息以及预设的检测线信息,确定每帧图像下的所述物品的检测状态信息,包括:
    基于所述移动轨迹信息中的每帧图像中的所述物品的检测框中心点位置以及所述检测线函数,确定所述每帧图像下的所述物品的检测状态信息。
  4. 根据权利要求3所述的方法,其中,所述检测框的中心点位置包括所述中心点的横坐标以及所述中心点的纵坐标;所述基于所述移动轨迹信息中的每帧图像中的所述物品的检测框中心点位置以及所述检测线函数,确定所述每帧图像下的所述物品的检测状态信息,包括:
    基于所述中心点的横坐标,确定所述检测线函数的函数值;
    基于所述中心点的纵坐标以及所述检测线函数的函数值,确定所述每帧图像下的所述物品的检测状态信息。
  5. 根据权利要求4所述的方法,其中,所述基于所述中心点的纵坐标以及所述检测线函数的函数值,确定所述每帧图像下的所述物品的检测状态信息,包括以下至少之一:
    在所述中心点的纵坐标与所述检测线函数的函数值满足第一预设关系的情况下,确定所述物品的检测状态信息为第一检测状态信息;
    在所述中心点的纵坐标与所述检测线函数的函数值满足第二预设关系的情况下,确定所述物品的检测状态信息为第二检测状态信息。
  6. 根据权利要求1所述的方法,其中,所述基于所述至少两帧图像中的每相邻两帧图像下的物品的检测状态信息,确定所述物品对应的目标跟踪状态,包括:
    确定所述相邻两帧图像的物品的检测状态信息之间的关系;
    在所述相邻两帧图像的物品的检测状态信息之间的关系符合预设条件的情况下,确定所述物品对应的目标跟踪状态发生改变,其中,所述目标跟踪状态发生改变是指所述物品从第一状态变化至第二状态,或者所述物品从所述第二状态变化至所述第一状态;所述第一状态表征所述物品与所述存储柜位于所述检测线的同侧的状态,所述第二状态表征所述物品与所述存储柜分别位于所述检测线两侧的状态;
    确定所述移动轨迹信息下的所述物品对应的目标跟踪状态发生改变的次数;
    基于所述物品对应的目标跟踪状态发生改变的次数,确定所述物品对应的目标跟踪状态,其中,在所述次数为奇数的情况下,确定所述物品对应的目标跟踪状态指示所述物品被拿取,在所述次数为偶数的情况下,确定所述物品对应的目标跟踪状态指示所述物品被放回。
  7. 根据权利要求6所述的方法,其中,所述检测结果还包括所述物品的种类信息;所述方法还包括:
    确定所述物品对应的目标跟踪状态发生改变的目标时间点;
    基于所述每帧图像下的物品的种类信息,确定目标时间段内的至少两帧图像下的不同种类信息所对应的图像数量,并将最多图像数量所对应的种类信息确定为所述物品的目标种类信息,其中,所述目标时间段是以所述目标时间点为基准的预设范围内的时间段。
  8. 根据权利要求1-7中任意一项所述的方法,其中,所述方法还包括:
    在所述物品对应的目标跟踪状态指示所述物品被拿取的情况下,生成所述物品的交易信息。
  9. 根据权利要求1-7中任意一项所述的方法,其中,所述存储柜中设置有多个摄像装置,所述物品对应的目标跟踪状态对应其中一个目标摄像装置;所述方法还包括:
    获取每个摄像装置下的所述物品对应的目标跟踪状态;
    将所述每个摄像装置下的所述物品对应的目标跟踪状态进行融合,得到所述物品的最终跟踪状态;
    在所述物品的最终跟踪状态指示所述物品被拿取的情况下,生成所述物品的交易信息。
  10. 一种物品状态跟踪装置,包括:
    移动轨迹信息获取部分,被配置为获取物品的移动轨迹信息,所述移动轨迹信息包含所述物品在至少两帧图像中的检测结果;
    检测状态信息确定部分,被配置为基于所述移动轨迹信息以及预设的检测线信息,确定每帧图像下的所述物品的检测状态信息,其中,所述检测线设置于存储柜的一侧,且与所述存储柜的距离在预设范围内,所述检测状态信息用于衡量所述物品相对所述检测线的位置变化状态;
    目标跟踪状态确定部分,被配置为基于所述至少两帧图像中的每相邻两帧图像下的物品的检测状态信息,确定所述物品对应的目标跟踪状态,其中,所述目标跟踪状态用于指示所述物品被拿取,或者所述物品被放回。
  11. 根据权利要求10所述的装置,其中,所述检测结果包括所述物品的种类信息;
    所述检测状态信息确定部分,还被配置为:从所述移动轨迹信息中的至少两帧图像中确定目标图像,所述目标图像为所述物品的种类信息为非目标种类的图像;将所述目标图像从所述至少两帧图像中删除,得到新的移动轨迹信息;基于所述新的移动轨迹信息以及所述预设的检测线信息,确定所述新的移动轨迹信息下的每帧图像下的所述物品的检测状态信息;
    所述目标跟踪状态确定部分,还被配置为:基于所述新的移动轨迹信息下的至少两帧图像中的每相邻两帧图像下的物品的检测状态信息,确定所述物品对应的目标跟踪状态。
  12. 根据权利要求10所述的装置,其中,所述预设的检测线信息包括检测线函数,所述检测结果包括所述物品的检测框信息,所述物品的检测框信息包括检测框的中心点位置;所述检测状态信息确定部分,还被配置为:
    基于所述移动轨迹信息中的每帧图像中的所述物品的检测框中心点位置以及所述检测线函数,确定所述每帧图像下的所述物品的检测状态信息。
  13. 根据权利要求12所述的装置,其中,所述检测框的中心点位置包括所述中心点的横坐标以及所述中心点的纵坐标;所述检测状态信息确定部分,还被配置为:
    基于所述中心点的横坐标,确定所述检测线函数的函数值;
    基于所述中心点的纵坐标以及所述检测线函数的函数值,确定所述每帧图像下的所述物品的检测状态信息。
  14. 根据权利要求13所述的装置,其中,所述检测状态信息确定部分,还被配置为以下至少之一:
    在所述中心点的纵坐标与所述检测线函数的函数值满足第一预设关系的情况下,确定所述物品的检测状态信息为第一检测状态信息;
    在所述中心点的纵坐标与所述检测线函数的函数值满足第二预设关系的情况下,确定所述物品的检测状态信息为第二检测状态信息。
  15. 根据权利要求10所述的装置,其中,所述目标跟踪状态确定部分,还被配置为:
    确定所述相邻两帧图像的物品的检测状态信息之间的关系;
    在所述相邻两帧图像的物品的检测状态信息之间的关系符合预设条件的情况下,确定所述物品对应的目标跟踪状态发生改变,其中,所述目标跟踪状态发生改变是指所述物品从第一状态变化至第二状态,或者所述物品从所述第二状态变化至所述第一状态;所述第一状态表征所述物品与所述存储柜位于所述检测线的同侧的状态,所述第二状态表征所述物品与所述存储柜分别位于所述检测线两侧的状态;
    确定所述移动轨迹信息下的所述物品对应的目标跟踪状态发生改变的次数;
    基于所述物品对应的目标跟踪状态发生改变的次数,确定所述物品对应的目标跟踪状态,其中,在所述次数为奇数的情况下,确定所述物品对应的目标跟踪状态指示所述物品被拿取,在所述次数为偶数的情况下,确定所述物品对应的目标跟踪状态指示所述物品被放回。
  16. 根据权利要求15所述的装置,其中,所述检测结果还包括所述物品的种类信息;所述装置还包括目标种类信息确定部分,所述目标种类信息确定部分,被配置为:
    确定所述物品对应的目标跟踪状态发生改变的目标时间点;
    基于所述每帧图像下的物品的种类信息,确定目标时间段内的至少两帧图像下的不同种类信息所对应的图像数量,并将最多图像数量所对应的种类信息确定为所述物品的目标种类信息,其中,所述目标时间段是以所述目标时间点为基准的预设范围内的时间段。
  17. 根据权利要求10-16中任意一项所述的装置,其中,所述装置还包括第一交易信息生成部分,所述第一交易信息生成部分,被配置为:
    在所述物品对应的目标跟踪状态指示所述物品被拿取的情况下,生成所述物品的交易信息。
  18. 根据权利要求10-16中任意一项所述的装置,其中,所述存储柜中设置有多个摄像装置,所述物品对应的目标跟踪状态对应其中一个目标摄像装置;所述装置还包括第二交易信息生成部分,所述第二交易信息生成部分,被配置为:
    获取每个摄像装置下的所述物品对应的目标跟踪状态;
    将所述每个摄像装置下的所述物品对应的目标跟踪状态进行融合,得到所述物品的最终跟踪状态;
    在所述物品的最终跟踪状态指示所述物品被拿取的情况下,生成所述物品的交易信息。
  19. 一种电子设备,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当电子设备运行时,所述处理器与所述存储器之间通过总线通信,所述机器可读指令被所述处理器执行时执行权利要求1至9任意一项所述的物品状态跟踪方法。
  20. 一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行权利要求1至9任意一项所述的物品状态跟踪方法。
  21. 一种计算机程序产品,所述计算机程序产品包括计算机程序或指令,在所述计算机程序或指令在电子设备上运行的情况下,使得所述电子设备执行权利要求1至9中任意一项所述的物品状态跟踪方法。
PCT/CN2022/125762 2022-03-28 2022-10-17 物品状态跟踪方法、装置、电子设备、存储介质及计算机程序产品 WO2023184932A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210315977.7 2022-03-28
CN202210315977.7A CN114648719A (zh) 2022-03-28 2022-03-28 物品状态跟踪方法、装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2023184932A1 true WO2023184932A1 (zh) 2023-10-05

Family

ID=81995570

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/125762 WO2023184932A1 (zh) 2022-03-28 2022-10-17 物品状态跟踪方法、装置、电子设备、存储介质及计算机程序产品

Country Status (2)

Country Link
CN (1) CN114648719A (zh)
WO (1) WO2023184932A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114648719A (zh) * 2022-03-28 2022-06-21 上海商汤科技开发有限公司 物品状态跟踪方法、装置、电子设备及存储介质
CN117218678A (zh) * 2023-08-11 2023-12-12 浙江深象智能科技有限公司 行为检测方法、装置及电子设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109840504A (zh) * 2019-02-01 2019-06-04 腾讯科技(深圳)有限公司 物品取放行为识别方法、装置、存储介质及设备
CN113901955A (zh) * 2021-11-17 2022-01-07 上海商汤智能科技有限公司 一种自助交易方法、装置、电子设备及储存介质
CN114037940A (zh) * 2021-11-17 2022-02-11 上海商汤智能科技有限公司 目标商品的轨迹生成方法、装置、电子设备及存储介质
CN114648719A (zh) * 2022-03-28 2022-06-21 上海商汤科技开发有限公司 物品状态跟踪方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN114648719A (zh) 2022-06-21

Similar Documents

Publication Publication Date Title
WO2023184932A1 (zh) 物品状态跟踪方法、装置、电子设备、存储介质及计算机程序产品
Delannay et al. Detection and recognition of sports (wo) men from multiple views
CN108416902B (zh) 基于差异识别的实时物体识别方法和装置
WO2017096881A1 (zh) 视频中加载广告的方法及装置
US20170192500A1 (en) Method and electronic device for controlling terminal according to eye action
CN109214290A (zh) 一种基于人脸识别的门店客户管理方法及装置
CN103456254A (zh) 一种多点触控交互式多媒体数字标牌系统
CN107784339B (zh) 应用于客户端、服务端的业务执行方法、装置以及设备
CN103870559A (zh) 一种基于播放的视频获取信息的方法及设备
CN109635705A (zh) 一种基于二维码和深度学习的商品识别方法及装置
US20210182566A1 (en) Image pre-processing method, apparatus, and computer program
CN111667639A (zh) 图书归还服务的实现方法、装置及智能图书柜
CN111340569A (zh) 基于跨境追踪的门店人流分析方法、装置、系统、终端和介质
Yang et al. Scene adaptive online surveillance video synopsis via dynamic tube rearrangement using octree
US20170013309A1 (en) System and method for product placement
Fatemeh Razavi et al. Integration of colour and uniform interlaced derivative patterns for object tracking
Jeon et al. A retail object classification method using multiple cameras for vision-based unmanned kiosks
Ferreira et al. Towards key-frame extraction methods for 3D video: a review
CN113901955A (zh) 一种自助交易方法、装置、电子设备及储存介质
Ferreira et al. A generic framework for optimal 2D/3D key-frame extraction driven by aggregated saliency maps
CN110363187A (zh) 一种人脸识别方法、装置、机器可读介质及设备
Chen et al. Surveillance video summarisation by jointly applying moving object detection and tracking
CN114037940A (zh) 目标商品的轨迹生成方法、装置、电子设备及存储介质
CN115661624A (zh) 一种货架的数字化方法、装置及电子设备
CN114051624A (zh) 检测游戏区域上游戏道具的方法及装置、设备、存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22934750

Country of ref document: EP

Kind code of ref document: A1