WO2023160022A1 - 包裹分拣行为的识别方法及其装置 - Google Patents

包裹分拣行为的识别方法及其装置 Download PDF

Info

Publication number
WO2023160022A1
WO2023160022A1 · PCT/CN2022/131496 (CN2022131496W)
Authority
WO
WIPO (PCT)
Prior art keywords
package
human body
detection frame
moment
target
Prior art date
Application number
PCT/CN2022/131496
Other languages
English (en)
French (fr)
Inventor
郑少杰
于伟
陈智勇
王林芳
梅涛
杨琛
Original Assignee
京东科技信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东科技信息技术有限公司
Publication of WO2023160022A1 publication Critical patent/WO2023160022A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Definitions

  • the present disclosure relates to the technical field of computer vision, in particular to a method and device for identifying parcel sorting behavior.
  • An object of the present disclosure is to propose a method for identifying package sorting behavior: target detection is performed on image frames in a target video to obtain at least one human body detection box and at least one package detection box; the motion trajectories of the human body detection boxes and the package detection boxes are tracked separately; during trajectory tracking, the throw moment of any package is identified from its tracked trajectory; and the motion information of the package from the throw moment to the current moment is obtained and used to identify its sorting behavior.
  • When identifying package sorting behavior, the present disclosure judges the interaction between human bodies and packages, which effectively rules out misjudgments caused by a person alone, a package alone, a person moving while carrying a package, or interference from irrelevant background information; it can also identify the responsible sorter and determine the sorting-force level, making the judgment more accurate.
  • the second purpose of the present disclosure is to provide an identification device for parcel sorting behavior.
  • the third object of the present disclosure is to provide an electronic device.
  • a fourth object of the present disclosure is to provide a non-transitory computer-readable storage medium.
  • a fifth object of the present disclosure is to provide a computer program product.
  • a sixth object of the present disclosure is to propose a computer program.
  • The embodiment of the first aspect of the present disclosure proposes a method for identifying package sorting behavior, including: performing target detection on image frames in a target video to obtain at least one human body detection box and at least one package detection box; tracking the motion trajectories of the human body detection boxes and the package detection boxes separately; during trajectory tracking, identifying the throw moment of any package from its tracked trajectory; and obtaining the motion information of the package from the throw moment to the current moment and identifying its sorting behavior based on that motion information.
  • When identifying package sorting behavior, the present disclosure judges the interaction between human bodies and packages, which effectively rules out misjudgments caused by a person alone, a package alone, a person moving while carrying a package, or interference from irrelevant background information; it can also identify the responsible sorter and determine the sorting-force level, making the judgment more accurate.
  • The embodiment of the second aspect of the present disclosure proposes a device for identifying package sorting behavior, including: a first acquisition module configured to perform target detection on image frames in a target video and obtain at least one human body detection box and at least one package detection box; a trajectory tracking module configured to track the motion trajectories of the human body detection boxes and the package detection boxes separately; a second acquisition module configured to identify, during trajectory tracking, the throw moment of any package from its tracked trajectory; and a behavior recognition module configured to obtain the motion information of the package from the throw moment to the current moment and identify its sorting behavior based on that motion information.
  • When identifying package sorting behavior, the device proposed in the present disclosure judges the interaction between human bodies and packages, which effectively rules out misjudgments caused by a person alone, a package alone, a person moving while carrying a package, or interference from irrelevant background information; it can also identify the responsible sorter and determine the sorting-force level, making the judgment more accurate.
  • The embodiment of the third aspect of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to implement the method for identifying package sorting behavior described in the embodiment of the first aspect of the present disclosure.
  • The embodiment of the fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to implement the method for identifying package sorting behavior described in the embodiment of the first aspect of the present disclosure.
  • The embodiment of the fifth aspect of the present disclosure proposes a computer program product, including a computer program that, when executed by a processor, implements the method for identifying package sorting behavior described in the embodiment of the first aspect of the present disclosure.
  • The embodiment of the sixth aspect of the present disclosure provides a computer program, including computer program code that, when run on a computer, causes the computer to execute the method for identifying package sorting behavior described in the embodiment of the first aspect of the present disclosure.
  • Fig. 1 is a schematic diagram of a method for identifying parcel sorting behavior according to an embodiment of the present disclosure.
  • Fig. 2 is a schematic diagram of the track of sorting packages by a single human body according to an embodiment of the present disclosure.
  • Fig. 3 is a schematic diagram of identifying the moment when any package is thrown out according to an embodiment of the present disclosure.
  • Fig. 4 is a schematic diagram of identifying the sorting behavior of any package according to an embodiment of the present disclosure.
  • Fig. 5 is a schematic diagram of respectively tracking the motion trajectories of the human body detection frame and the package detection frame according to an embodiment of the present disclosure.
  • Fig. 6 is a schematic diagram of a method for identifying parcel sorting behavior according to an embodiment of the present disclosure.
  • Fig. 7 is a schematic diagram of an identification device for parcel sorting behavior according to an embodiment of the present disclosure.
  • Fig. 8 is a schematic diagram of an electronic device according to an embodiment of the present disclosure.
  • Fig. 1 is an exemplary implementation of a method for identifying parcel sorting behavior proposed in the present disclosure. As shown in Fig. 1 , the method for identifying parcel sorting behavior includes the following steps: S101-S104.
  • S101: Perform target detection on image frames in a target video, and obtain at least one human body detection box and at least one package detection box.
  • The video to be analyzed for package sorting behavior serves as the target video; it may be a video of sorters sorting packages captured in real time, or such a video stored locally or received.
  • The target video is decoded and frames are extracted to obtain image frames at different moments. Target detection is performed on all image frames of the target video to obtain, in each frame, the human body detection box of each sorter, denoted P box(t1), and the package detection box of each sorted package, denoted B box(t2), where t1 and t2 are the frame indices of the image frames in which the boxes appear; each detection box contains its coordinates within the image frame.
  • the target detection algorithm may use algorithms such as Feature Pyramid Networks (FPN) and Convolutional Neural Networks (CNN).
  • Each image frame may contain one or more human body detection boxes and one or more package detection boxes: when a frame contains multiple sorters, it contains multiple human body detection boxes; when it contains multiple sorted packages, it contains multiple package detection boxes.
  • The human body trajectory is denoted P track(i, t1), where i is the identification information of the human body and t1 is its tracking state at the latest frame t1; the package trajectory is denoted B track(j, t2), where j is the identification information of the package and t2 is its tracking state at the latest frame t2.
  • Tracking algorithms such as nearest-neighbor matching and the multi-object tracking algorithm SORT (Simple Online And Realtime Tracking) may be used for target tracking.
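The nearest-neighbor matching mentioned above can be sketched as follows. This is a minimal illustration, not the disclosure's implementation; the `max_dist` gating value is an assumed parameter, and boxes are matched by center distance only.

```python
import math

def center(box):
    """Center point of an (x1, y1, x2, y2) detection box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def nn_track(frames, max_dist=50.0):
    """Assign a persistent ID to each detection by matching it to the
    nearest track from the previous frame; unmatched detections start
    new tracks. `frames` is a list of per-frame box lists.
    Returns {track_id: [(frame_idx, box), ...]}."""
    tracks, last_pos, next_id = {}, {}, 0
    for t, boxes in enumerate(frames):
        new_last = {}
        for box in boxes:
            c = center(box)
            # nearest still-unmatched track from the previous frame
            best = min(last_pos.items(),
                       key=lambda kv: math.dist(c, kv[1]),
                       default=None)
            if best and math.dist(c, best[1]) <= max_dist:
                tid = best[0]
                del last_pos[tid]  # each track matches at most once per frame
            else:
                tid, next_id = next_id, next_id + 1
                tracks[tid] = []
            tracks[tid].append((t, box))
            new_last[tid] = c
        last_pos = new_last
    return tracks
```

A production system would typically use SORT (Kalman prediction plus Hungarian assignment on IoU) instead, but the ID-assignment idea is the same.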
  • Fig. 2 is a schematic diagram of a person sorting packages.
  • The package's image frames are traced back from the current frame, i.e., from the frame at time t shown in Fig. 2, back through the frames at t-1, t-2, t-3, t-4, t-5, and t-6. These frames are analyzed to determine whether the package was thrown by a sorter; if so, the frame in which the throw occurs can be identified, and the moment of that frame is taken as the throw moment.
  • In the frame at time t-4, the distance between the package's detection box and the human body detection box is less than the distance threshold; that is, the sorter throws the package at t-4, so t-4 is the package's throw moment.
  • From the collected motion information, the sorter's sorting behavior parameters can be obtained and compared with the parameters of an existing sorting behavior specification to identify the sorter's behavior.
  • Parameter comparisons can be performed separately for each item of motion information to identify the sorter's behavior.
  • The parameters of the sorting behavior specification may be set differently for different package scenarios.
  • The embodiment of the present disclosure proposes a method for identifying package sorting behavior: target detection is performed on image frames in the target video to obtain at least one human body detection box and at least one package detection box; the motion trajectories of the human body detection boxes and the package detection boxes are tracked separately; during trajectory tracking, the throw moment of any package is identified from its tracked trajectory; and the motion information of the package from the throw moment to the current moment is obtained and used to identify its sorting behavior.
  • When identifying package sorting behavior, the present disclosure judges the interaction between human bodies and packages, which effectively rules out misjudgments caused by a person alone, a package alone, a person moving while carrying a package, or interference from irrelevant background information; it can also identify the responsible sorter and determine the sorting-force level, making the judgment more accurate.
  • Fig. 3 shows an exemplary implementation of the method for identifying package sorting behavior proposed in the present disclosure. As shown in Fig. 3, identifying the throw moment of any package from its tracked trajectory includes steps S301-S302.
  • Following the target package's trajectory, the image frames are traced back from the current frame in order from latest to earliest; on each frame, the position of the target package detection box is obtained as first position information, and the position of the human body detection box on the same frame as second position information.
  • On each image frame, the distance between the first position information of the target package detection box and the second position information of the human body detection box at the corresponding moment is computed and used as the target distance.
  • The human body keypoint detection algorithm may be a human skeleton keypoint detection algorithm or the like.
  • A distance threshold is set in advance; the moment of the first image frame in which the target distance is below the threshold is taken as the package's throw moment, and the sorter corresponding to the package can be determined at the same time.
  • By identifying the throw moment of any package from its tracked trajectory, the embodiment of the present disclosure can determine the interaction between package and human body, and thus who threw the package, when, and along what trajectory, enabling more accurate identification of package sorting behavior.
  • Fig. 4 shows an exemplary implementation of the method for identifying package sorting behavior proposed in the present disclosure. As shown in Fig. 4, identifying the sorting behavior of any package based on the motion information includes steps S401-S402.
  • Each package is treated as a target package, and its motion information from the throw moment to the current moment is obtained.
  • The motion information may include the travel distance, maximum speed, and average speed of each target package from the throw moment to the current moment, as well as its velocity and acceleration at each moment in that interval.
  • The value of each item of motion information can be used as its motion parameter. For example, if a package has traveled 3 meters from the throw moment to the current moment, 3 meters is the motion parameter of the travel-distance item.
  • S402: Determine the sorting-force level of the sorting behavior of any package according to the sorting-force parameters.
  • Different sorting behavior specifications can be set. For example, when a package contains fresh goods, its contents are relatively fragile; with the travel distance from the throw moment to the current moment as the parameter, the ranges can be set relatively small, e.g., for a fresh-goods package, a distance below 0.2 m is normal, 0.2-0.4 m is mild violence, 0.4-0.7 m is moderate violence, and 0.7-1 m is severe violence. The package's travel distance is compared with the parameters of the sorting behavior specification to identify the sorting behavior of the corresponding sorter.
  • When a package contains clothing, it is unlikely to deform when thrown, so the sorting behavior specification for clothing packages can be set less strictly than that for fresh-goods packages.
  • By determining the sorting-force level of any package's sorting behavior from the motion information, the embodiments of the present disclosure can give detailed indicators of how a sorter handles packages, and different specifications can be set for sorters' behavior in different business scenarios, improving the accuracy and generality of sorting behavior recognition.
  • Fig. 5 shows an exemplary implementation of the method for identifying package sorting behavior proposed in the present disclosure. As shown in Fig. 5, separately tracking the motion trajectories of the human body detection boxes and the package detection boxes includes steps S501-S502.
  • Tracking algorithms such as nearest-neighbor matching and the multi-object tracking algorithm SORT (Simple Online And Realtime Tracking) can be used for target tracking.
  • By tracking the motion trajectories of the human body and package detection boxes, the embodiment of the present disclosure obtains the human body and package trajectories, laying the foundation for modeling the interaction between human body and package and obtaining the moment a package is thrown.
  • Fig. 6 shows an exemplary embodiment of the method for identifying package sorting behavior proposed in the present disclosure. As shown in Fig. 6, the method includes steps S601-S607.
  • S601. Perform target detection on image frames in the target video, and acquire at least one human body detection frame and at least one package detection frame.
  • S605. Determine the time when the target distance is less than the distance threshold for the first time, and use it as the time when any package is thrown out.
  • S607. Determine the sorting effort level of the sorting behavior of any package according to the sorting effort parameter.
  • The embodiment of the present disclosure proposes a method for identifying package sorting behavior: target detection is performed on image frames in the target video to obtain at least one human body detection box and at least one package detection box; the motion trajectories of the human body detection boxes and the package detection boxes are tracked separately; during trajectory tracking, the throw moment of any package is identified from its tracked trajectory; and the motion information of the package from the throw moment to the current moment is obtained and used to identify its sorting behavior.
  • When identifying package sorting behavior, the present disclosure judges the interaction between human bodies and packages, which effectively rules out misjudgments caused by a person alone, a package alone, a person moving while carrying a package, or interference from irrelevant background information; it can also identify the responsible sorter and determine the sorting-force level, making the judgment more accurate.
  • Fig. 7 is a schematic diagram of the device for identifying package sorting behavior proposed in the present disclosure. As shown in Fig. 7, the device 700 includes a first acquisition module 71, a trajectory tracking module 72, a second acquisition module 73, and a behavior recognition module 74, wherein:
  • the first acquiring module 71 is configured to perform target detection on image frames in the target video, and acquire at least one human body detection frame and at least one package detection frame.
  • the track tracking module 72 is used to track the motion tracks of the human body detection frame and the package detection frame respectively.
  • the second acquiring module 73 is configured to identify the time when any package is thrown out based on the tracked current trajectory of any package during the track tracking process.
  • the behavior recognition module 74 is configured to acquire the movement information of any package from the moment it is thrown out to the current moment, and identify the sorting behavior of any package based on the movement information.
  • The second acquisition module 73 is further configured to: compare the target package trajectory corresponding to any package with each human body trajectory, so as to obtain, at each detected moment, the target distance between the package's detection box and each human body detection box; and determine the first moment at which the target distance falls below the distance threshold as the package's throw moment.
  • The second acquisition module 73 is further configured to: obtain, from the target package trajectory in order from latest to earliest, the first position information of the target package detection box on each image frame; obtain the second position information of the human body detection box on the same image frame; and obtain the target distance from the first position information and the second position information at the corresponding moment.
  • The second acquisition module 73 is further configured to: take the moment of the first image frame in which the target distance is below the distance threshold as the package's throw moment.
  • The second acquisition module 73 is further configured to: extract, from the image frame corresponding to the second position information, the image region marked by the second position information; perform human body keypoint detection on that region to obtain the hand position information; and obtain the distance from the first position information to the hand position information as the target distance.
  • the behavior recognition module 74 is further configured to: generate sorting force parameters of any package based on the motion information, and determine the sorting force level of the sorting behavior of any package according to the sorting force parameters.
  • The motion information in the behavior recognition module 74 includes the travel distance, maximum speed, and average speed of any package from the throw moment to the current moment, as well as its velocity and acceleration at each moment in that interval.
  • the trajectory tracking module 72 is further configured to: track the human body detection frame based on the first identification information of the human body detection frame, and generate a human body movement trajectory corresponding to the human body detection frame; based on the second identification of the package detection frame Information, track the package detection frame, and generate the package movement trajectory corresponding to the package detection frame.
  • An embodiment of the present disclosure also proposes an electronic device 800. As shown in Fig. 8, the electronic device includes at least one processor 801 and a memory communicatively connected to it; the memory stores instructions executable by the processor, and the instructions are executed by the at least one processor 801 to implement the method for identifying package sorting behavior shown in the above embodiments.
  • The embodiments of the present disclosure also propose a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to implement the method for identifying package sorting behavior shown in the above embodiments.
  • The embodiments of the present disclosure further propose a computer program product, including a computer program; when the computer program is executed by a processor, the method for identifying package sorting behavior shown in the above embodiments is implemented.
  • The embodiments of the present disclosure also propose a computer program, including computer program code; when the computer program code is run on a computer, the computer executes the method for identifying package sorting behavior shown in the above embodiments.
  • The terms "first" and "second" are used for descriptive purposes only, and cannot be interpreted as indicating or implying relative importance or implicitly specifying the quantity of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of these features.
  • “plurality” means two or more, unless otherwise specifically defined.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure proposes a method and device for identifying package sorting behavior. Target detection is performed on image frames in a target video to obtain at least one human body detection box and at least one package detection box; the motion trajectories of the human body detection boxes and the package detection boxes are tracked separately; during trajectory tracking, the throw moment of any package is identified from its tracked trajectory; and the motion information of the package from the throw moment to the current moment is obtained and used to identify its sorting behavior.

Description

Method and Device for Identifying Package Sorting Behavior
Cross-reference to related applications
This application is based on, and claims priority to, Chinese patent application No. 2022101688776 filed on February 23, 2022, the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to the field of computer vision, and in particular to a method and device for identifying package sorting behavior.
Background
With the continuous development of e-commerce, online shopping, and logistics, demand for express delivery services keeps growing. Amid the industry's rapid expansion, sorters often throw or toss packages to increase sorting speed; such violent sorting damages the items inside the packages and harms the interests of consumers and merchants.
Summary
To this end, one object of the present disclosure is to propose a method for identifying package sorting behavior: target detection is performed on image frames in a target video to obtain at least one human body detection box and at least one package detection box; the motion trajectories of the human body detection boxes and the package detection boxes are tracked separately; during trajectory tracking, the throw moment of any package is identified from its tracked trajectory; and the motion information of the package from the throw moment to the current moment is obtained and used to identify its sorting behavior.
When identifying package sorting behavior, the present disclosure judges the interaction between human bodies and packages, which effectively rules out misjudgments caused by a person alone, a package alone, a person moving while carrying a package, or interference from irrelevant background information; it can also identify the responsible sorter and determine the sorting-force level, making the judgment more accurate.
A second object of the present disclosure is to propose a device for identifying package sorting behavior.
A third object of the present disclosure is to propose an electronic device.
A fourth object of the present disclosure is to propose a non-transitory computer-readable storage medium.
A fifth object of the present disclosure is to propose a computer program product.
A sixth object of the present disclosure is to propose a computer program.
To achieve the above objects, an embodiment of the first aspect of the present disclosure proposes a method for identifying package sorting behavior, including: performing target detection on image frames in a target video to obtain at least one human body detection box and at least one package detection box; tracking the motion trajectories of the human body detection boxes and the package detection boxes separately; during trajectory tracking, identifying the throw moment of any package from its tracked trajectory; and obtaining the motion information of the package from the throw moment to the current moment and identifying its sorting behavior based on that motion information.
When identifying package sorting behavior, the present disclosure judges the interaction between human bodies and packages, which effectively rules out misjudgments caused by a person alone, a package alone, a person moving while carrying a package, or interference from irrelevant background information; it can also identify the responsible sorter and determine the sorting-force level, making the judgment more accurate.
To achieve the above objects, an embodiment of the second aspect of the present disclosure proposes a device for identifying package sorting behavior, including: a first acquisition module configured to perform target detection on image frames in a target video and obtain at least one human body detection box and at least one package detection box; a trajectory tracking module configured to track the motion trajectories of the human body detection boxes and the package detection boxes separately; a second acquisition module configured to identify, during trajectory tracking, the throw moment of any package from its tracked trajectory; and a behavior recognition module configured to obtain the motion information of the package from the throw moment to the current moment and identify its sorting behavior based on that motion information.
When identifying package sorting behavior, the device proposed in the present disclosure judges the interaction between human bodies and packages, which effectively rules out misjudgments caused by a person alone, a package alone, a person moving while carrying a package, or interference from irrelevant background information; it can also identify the responsible sorter and determine the sorting-force level, making the judgment more accurate.
To achieve the above objects, an embodiment of the third aspect of the present disclosure proposes an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to implement the method for identifying package sorting behavior described in the embodiment of the first aspect of the present disclosure.
To achieve the above objects, an embodiment of the fourth aspect of the present disclosure proposes a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to implement the method for identifying package sorting behavior described in the embodiment of the first aspect of the present disclosure.
To achieve the above objects, an embodiment of the fifth aspect of the present disclosure proposes a computer program product, including a computer program that, when executed by a processor, implements the method for identifying package sorting behavior described in the embodiment of the first aspect of the present disclosure.
To achieve the above objects, an embodiment of the sixth aspect of the present disclosure proposes a computer program, including computer program code that, when run on a computer, causes the computer to execute the method for identifying package sorting behavior described in the embodiment of the first aspect of the present disclosure.
Brief description of the drawings
Fig. 1 is a schematic diagram of a method for identifying package sorting behavior according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of the trajectory of a single person sorting packages according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of identifying the throw moment of any package according to an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of identifying the sorting behavior of any package according to an embodiment of the present disclosure.
Fig. 5 is a schematic diagram of separately tracking the motion trajectories of the human body detection boxes and the package detection boxes according to an embodiment of the present disclosure.
Fig. 6 is a schematic diagram of a method for identifying package sorting behavior according to an embodiment of the present disclosure.
Fig. 7 is a schematic diagram of a device for identifying package sorting behavior according to an embodiment of the present disclosure.
Fig. 8 is a schematic diagram of an electronic device according to an embodiment of the present disclosure.
Detailed description
Embodiments of the present disclosure are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numbers denote identical or similar elements, or elements with identical or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary and are intended to explain the present disclosure; they should not be construed as limiting it.
Fig. 1 shows an exemplary implementation of the method for identifying package sorting behavior proposed in the present disclosure. As shown in Fig. 1, the method includes steps S101-S104.
S101: Perform target detection on image frames in the target video, and obtain at least one human body detection box and at least one package detection box.
The video to be analyzed for package sorting behavior serves as the target video; it may be a video of sorters sorting packages captured in real time, or such a video stored locally or received. The target video is decoded and frames are extracted to obtain the image frames of the target video at different moments. Target detection is performed on all image frames to obtain, in each frame, the human body detection box of each sorter, denoted P box(t1), and the package detection box of each sorted package, denoted B box(t2), where t1 and t2 are the frame indices of the image frames in which the boxes appear; each detection box contains its coordinates within the image frame.
In some embodiments, the target detection algorithm may use Feature Pyramid Networks (FPN), Convolutional Neural Networks (CNN), or similar algorithms. Each image frame may contain one or more human body detection boxes and one or more package detection boxes: when a frame contains multiple sorters, it contains multiple human body detection boxes; when it contains multiple sorted packages, it contains multiple package detection boxes.
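The per-frame detection step can be sketched as follows. The `detector` callable is a placeholder for any FPN/CNN-style model (it is an assumption, not an API from the disclosure), and the `Detection` record mirrors the P box(t1)/B box(t2) notation: a frame index plus box coordinates.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in frame coordinates

@dataclass
class Detection:
    frame_idx: int  # t1 / t2: index of the image frame the box belongs to
    label: str      # "person" -> human body box, "package" -> package box
    box: Box

def detect_frames(frames, detector: Callable) -> List[Detection]:
    """Run the detector on every decoded frame and collect P box(t1)
    records for human bodies and B box(t2) records for packages.
    `detector(frame)` is assumed to return (label, box) pairs."""
    out = []
    for t, frame in enumerate(frames):
        for label, box in detector(frame):
            out.append(Detection(frame_idx=t, label=label, box=box))
    return out
```

In practice the detector would be a trained model; here any callable with the same shape can be plugged in for testing.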
S102: Track the motion trajectories of the human body detection boxes and the package detection boxes separately.
Identification information is assigned to each human body detection box and each package detection box obtained above, and the trajectories of the boxes are tracked according to their respective identifications. The human body trajectory is denoted P track(i, t1), where i is the identification information of the human body and t1 is its tracking state at the latest frame t1; the package trajectory is denoted B track(j, t2), where j is the identification information of the package and t2 is its tracking state at the latest frame t2. In some embodiments, tracking may use nearest-neighbor matching, the multi-object tracking algorithm SORT (Simple Online And Realtime Tracking), or similar algorithms.
S103: During trajectory tracking, identify the throw moment of any package from its tracked trajectory.
During trajectory tracking, the trajectory of every package is traversed to determine its throw moment. Fig. 2 is a schematic diagram of a person sorting packages. Taking one package as an example, its image frames are traced back from the current frame, i.e., from the frame at time t shown in Fig. 2 back through the frames at t-1, t-2, t-3, t-4, t-5, and t-6. These frames are analyzed to determine whether the package was thrown by a sorter; if so, the frame in which the throw occurs can be identified, and the moment of that frame is taken as the throw moment. As shown in Fig. 2, in the frame at t-4 the distance between the package's detection box and the human body detection box is less than the distance threshold, i.e., the sorter throws the package at t-4, so t-4 is the package's throw moment.
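The backward trace described above can be sketched as follows, assuming trajectories are stored as frame-index-to-box mappings and measuring distance between box centers (the disclosure does not fix a particular distance definition, so that choice is an assumption).

```python
import math

def box_center(box):
    """Center of an (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def throw_moment(package_track, body_track, dist_threshold):
    """Trace the package trajectory backwards from the current frame and
    return the first frame at which the package box is within
    dist_threshold of the body box -- the last moment of contact, i.e.
    the throw moment. Tracks are {frame_idx: box} dicts; returns None
    if the package and the body never come into contact."""
    for t in sorted(package_track, reverse=True):
        if t in body_track:
            d = math.dist(box_center(package_track[t]),
                          box_center(body_track[t]))
            if d < dist_threshold:
                return t
    return None
```

Going from latest to earliest mirrors the t, t-1, ..., t-6 backtracking in Fig. 2: the first frame found below the threshold is the release point t-4, not an earlier pick-up.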
S104: Obtain the motion information of the package from the throw moment to the current moment, and identify the sorting behavior of the package based on that motion information.
In Fig. 2, from t-6 through t-4 the sorter is moving while holding the package; the package has not left the body, and no motion during this period is considered a sorting-behavior candidate. After t-4 the package leaves the body, i.e., it has been thrown. To exclude misjudgments caused by large movements of the sorter or by hand-offs between sorters, only the motion information of the package from the throw moment to the current moment is collected, such as acceleration, travel distance, and average speed. Because the acceleration, travel distance, and average speed of a gently placed package differ greatly from those of a violently thrown one, these physical quantities can reflect the sorter's sorting force. From the collected motion information, the sorter's sorting behavior parameters can be obtained and compared with the parameters of an existing sorting behavior specification to identify the sorter's behavior. In some embodiments, the comparison can be performed separately for each item of motion information. The parameters of the specification may be set differently for different package scenarios.
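The motion quantities listed above (travel distance, maximum speed, average speed, per-moment velocity and acceleration) can be computed from the tracked package centers roughly as follows; the frame rate `fps` is an assumed parameter, and positions are the package box centers between the throw moment and the current moment.

```python
import math

def motion_info(positions, fps=25.0):
    """Compute the motion information used as sorting-force parameters
    from a package's center positions (one per frame) between the throw
    moment and the current moment: total travel distance, per-interval
    speeds and accelerations, maximum speed, and average speed."""
    dt = 1.0 / fps
    dists = [math.dist(a, b) for a, b in zip(positions, positions[1:])]
    speeds = [d / dt for d in dists]
    accels = [(v2 - v1) / dt for v1, v2 in zip(speeds, speeds[1:])]
    return {
        "distance": sum(dists),
        "max_speed": max(speeds, default=0.0),
        "avg_speed": sum(speeds) / len(speeds) if speeds else 0.0,
        "speeds": speeds,
        "accelerations": accels,
    }
```

Each entry of the returned dict can then serve directly as a motion parameter for comparison against a sorting behavior specification.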
The embodiment of the present disclosure proposes a method for identifying package sorting behavior: target detection is performed on image frames in the target video to obtain at least one human body detection box and at least one package detection box; the motion trajectories of the human body detection boxes and the package detection boxes are tracked separately; during trajectory tracking, the throw moment of any package is identified from its tracked trajectory; and the motion information of the package from the throw moment to the current moment is obtained and used to identify its sorting behavior. When identifying package sorting behavior, the present disclosure judges the interaction between human bodies and packages, which effectively rules out misjudgments caused by a person alone, a package alone, a person moving while carrying a package, or interference from irrelevant background information; it can also identify the responsible sorter and determine the sorting-force level, making the judgment more accurate.
Fig. 3 shows an exemplary implementation of the method for identifying package sorting behavior proposed in the present disclosure. As shown in Fig. 3, identifying the throw moment of any package from its tracked trajectory includes steps S301-S302.
S301: Compare the target package trajectory corresponding to the package with each human body trajectory, so as to obtain, at each detected moment, the target distance between the package's detection box and each human body detection box.
Following the target package's trajectory, the image frames are traced back from the current frame in order from latest to earliest; on each frame, the position of the target package detection box is obtained as first position information, and the position of the human body detection box on the same frame as second position information.
In one embodiment of this application, if there is only one sorter, i.e., only one human body detection box per frame, the distance between the first position information of the target package detection box and the second position information of that human body detection box at the corresponding moment is computed on each frame and used as the target distance.
In another embodiment of this application, when there are multiple sorters, the same frame contains multiple human body detection boxes. Because human body boxes are large, computing the distance from the package box to all body boxes can be biased: if a thrown package flies far over another sorter, the distance to the sorter flown over is computed first on the frame, producing a deviation. Therefore, when there are multiple sorters, human body keypoint detection is performed on the body boxes in the frame to obtain hand positions, and the distance between the package box's first position information and all hand positions on the frame at the corresponding moment is used as the target distance. In some embodiments, the keypoint detection algorithm may be a human skeleton keypoint detection algorithm or the like.
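The hand-based target distance for the multi-sorter case can be sketched as follows, assuming a skeleton keypoint detector has already produced (sorter id, hand position) pairs; the pairing of distance with sorter id also identifies which sorter the package belongs to.

```python
import math

def target_distance(package_box, hands):
    """Measure from the package box center to the nearest detected hand
    keypoint rather than to whole body boxes, avoiding the bias of a
    package flying over another sorter. `hands` is a list of
    (sorter_id, (x, y)) pairs from a keypoint detector (assumed given).
    Returns (distance, sorter_id); (inf, None) if no hands detected."""
    x1, y1, x2, y2 = package_box
    c = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    return min(((math.dist(c, p), sid) for sid, p in hands),
               default=(float("inf"), None))
```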
S302: Determine the first moment at which the target distance falls below the distance threshold, and take it as the package's throw moment.
A distance threshold is set in advance; the moment of the first image frame in which the target distance is below the threshold is taken as the package's throw moment, and the sorter corresponding to the package can be determined at the same time.
By identifying the throw moment of any package from its tracked trajectory, the embodiment of the present disclosure can determine the interaction between package and human body, and thus who threw the package, when, and along what trajectory, enabling more accurate identification of package sorting behavior.
Fig. 4 shows an exemplary implementation of the method for identifying package sorting behavior proposed in the present disclosure. As shown in Fig. 4, identifying the sorting behavior of any package based on the motion information includes steps S401-S402.
S401: Generate the sorting-force parameters of the package based on the motion information.
Each package is treated as a target package, and its motion information from the throw moment to the current moment is obtained. In some embodiments, the motion information may include the travel distance, maximum speed, and average speed of each target package from the throw moment to the current moment, as well as its velocity and acceleration at each moment in that interval.
In embodiments of this application, the value of each item of motion information may be used as its motion parameter. For example, if a package has traveled 3 meters from the throw moment to the current moment, 3 meters is the motion parameter of the travel-distance item.
S402: Determine the sorting-force level of the package's sorting behavior according to the sorting-force parameters.
Different sorting behavior specifications can be set for different scenarios. For example, when a package contains fresh goods, its contents are relatively fragile; with the travel distance from the throw moment to the current moment as the parameter, the ranges can be set relatively small, e.g., for a fresh-goods package, a distance below 0.2 m is normal, 0.2-0.4 m is mild violence, 0.4-0.7 m is moderate violence, and 0.7-1 m is severe violence. The package's travel distance is compared with the parameters of this specification to identify the sorting behavior of the corresponding sorter.
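The threshold comparison above reduces to a simple table lookup. The `FRESH_SPEC` bounds below reproduce the example ranges for fresh-goods packages given in the text; other package categories would use their own tables, and distances beyond the last bound are treated here as the most severe level (an assumption, since the text stops at 1 m).

```python
def force_level(distance_m, spec):
    """Map a package's throw distance (meters) onto the force levels of
    a category-specific specification. `spec` is a list of
    (upper_bound_m, level) pairs in ascending order of upper bound."""
    for upper, level in spec:
        if distance_m < upper:
            return level
    return spec[-1][1]  # beyond the last bound: most severe level

# Example specification for fresh-goods packages (ranges from the text):
FRESH_SPEC = [
    (0.2, "normal"),
    (0.4, "mild violence"),
    (0.7, "moderate violence"),
    (1.0, "severe violence"),
]
```

The same lookup works for other motion parameters (acceleration, average speed) with their own bound tables.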
As another example, when the package contains clothing, which hardly deforms when thrown, the sorting-behavior specification for clothing packages can be set less strictly than that for fresh-produce packages.
Similarly, sorting-behavior specifications can be set for, and compared against, the other items of motion information, such as acceleration and average speed, to identify the sorting behavior of the sorter handling the target package.
By determining the sorting-force level of the sorting behavior applied to any package from the motion information, this embodiment can provide detailed indicators of how a sorter handles packages; different specifications can be set for the sorter's behavior in different business scenarios, improving both the accuracy and the generality of sorting-behavior identification.
Fig. 5 shows an exemplary implementation of the method for identifying parcel sorting behavior proposed by the present disclosure. As shown in Fig. 5, tracking the motion trajectories of the human-body detection frames and the package detection frames separately includes the following steps S501-S502.

S501: track the human-body detection frame on the basis of its first identification information, and generate the human-body motion trajectory corresponding to the human-body detection frame.

Each human-body detection frame in an image frame is assigned a piece of identification information as first identification information; on the basis of the first identification information corresponding to each human-body detection frame, the human-body detection frames of every image frame are tracked and the corresponding human-body motion trajectories are generated. In some embodiments, tracking may use nearest-neighbour matching, the multi-object tracking algorithm SORT (Simple Online and Realtime Tracking), or other tracking algorithms.

S502: track the package detection frame on the basis of its second identification information, and generate the package motion trajectory corresponding to the package detection frame.

Each package detection frame in an image frame is assigned a piece of identification information as second identification information; on the basis of the second identification information corresponding to each package detection frame, the package detection frames of every image frame are tracked and the corresponding package motion trajectories are generated. In some embodiments, tracking may likewise use nearest-neighbour matching, SORT, or other tracking algorithms.
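As a stand-in for the nearest-neighbour matching mentioned above, identity propagation between consecutive frames can be sketched greedily. Box centres, the gating distance, and all names are assumptions; a production tracker such as SORT would additionally use Kalman motion prediction and IoU-based assignment.

```python
import math
from itertools import count

_next_id = count(1)  # global source of fresh track identifiers

def assign_ids(prev_tracks, detections, max_dist=50.0):
    """Greedy nearest-neighbour association for one frame.

    `prev_tracks` maps track id -> (x, y) box centre in the previous
    frame; `detections` is a list of (x, y) centres in the current
    frame.  Each detection inherits the id of the closest unmatched
    previous track within `max_dist` pixels, or receives a fresh id.
    Returns the updated id -> centre mapping.
    """
    tracks = {}
    free = dict(prev_tracks)  # tracks not yet matched this frame
    for det in detections:
        best_id, best_d = None, max_dist
        for tid, pos in free.items():
            d = math.hypot(det[0] - pos[0], det[1] - pos[1])
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:
            best_id = next(_next_id)  # no track nearby: new identity
        else:
            free.pop(best_id)         # consume the matched track
        tracks[best_id] = det
    return tracks
```

Running this per frame, with separate id spaces for human-body and package detection frames, yields the first and second identification information and the per-id trajectories described above.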
By tracking the motion trajectories of the human-body detection frames and the package detection frames, this embodiment obtains both human-body and package motion trajectories, laying the foundation for modelling the interaction between human bodies and packages and for obtaining the moment at which a package is thrown.
Fig. 6 shows an exemplary implementation of the method for identifying parcel sorting behavior proposed by the present disclosure. As shown in Fig. 6, the method includes the following steps S601-S607.

S601: perform target detection on the image frames of the target video to obtain at least one human-body detection frame and at least one package detection frame.

S602: track the human-body detection frame on the basis of its first identification information, and generate the human-body motion trajectory corresponding to the human-body detection frame.

S603: track the package detection frame on the basis of its second identification information, and generate the package motion trajectory corresponding to the package detection frame.

Steps S602-S603 may be implemented as in the above embodiments of the present disclosure, which is not repeated here.

S604: compare the target package motion trajectory corresponding to any package with each human-body motion trajectory, so as to obtain, at each detected moment, the target distance between the target package detection frame of the package and each human-body detection frame.

S605: determine the moment at which the target distance first falls below the distance threshold, and take it as the throw moment of the package.

Steps S604-S605 may be implemented as in the above embodiments of the present disclosure, which is not repeated here.

S606: generate the sorting-force parameter of the package on the basis of the motion information.

S607: determine, from the sorting-force parameter, the sorting-force level of the sorting behavior applied to the package.

An embodiment of the present disclosure thus proposes a method for identifying parcel sorting behavior: target detection is performed on image frames of a target video to obtain at least one human-body detection frame and at least one package detection frame; the motion trajectories of the human-body detection frames and the package detection frames are tracked separately; during trajectory tracking, for any package, the throw moment of the package is identified on the basis of its tracked current motion trajectory; the motion information of the package from the throw moment to the current moment is acquired, and the sorting behavior applied to the package is identified on the basis of the motion information. By judging the interaction between human bodies and packages, the present disclosure effectively rules out misjudgments caused by a person alone, a package alone, a person moving while carrying a package, or irrelevant background information, and can further identify the responsible sorter and determine a sorting-force level, making the judgment more accurate.
Fig. 7 is a schematic diagram of an apparatus for identifying parcel sorting behavior proposed by the present disclosure. As shown in Fig. 7, the apparatus 700 includes a first acquisition module 71, a trajectory tracking module 72, a second acquisition module 73 and a behavior identification module 74, wherein:

the first acquisition module 71 is configured to perform target detection on image frames of a target video to obtain at least one human-body detection frame and at least one package detection frame;

the trajectory tracking module 72 is configured to track the motion trajectories of the human-body detection frame and the package detection frame separately;

the second acquisition module 73 is configured to identify, during trajectory tracking and for any package, the throw moment of the package on the basis of the tracked current motion trajectory;

the behavior identification module 74 is configured to acquire the motion information of the package from the throw moment to the current moment, and to identify the sorting behavior applied to the package on the basis of the motion information.

In some embodiments, the second acquisition module 73 is further configured to: compare the target package motion trajectory corresponding to the package with each human-body motion trajectory, so as to obtain, at each detected moment, the target distance between the target package detection frame of the package and each human-body detection frame; and determine the moment at which the target distance first falls below the distance threshold as the throw moment of the package.

In some embodiments, the second acquisition module 73 is further configured to: acquire, in order from latest to earliest along the target package motion trajectory, the first position information of the target package detection frame in each image frame; acquire, from the human-body motion trajectory, the second position information of the human-body detection frame in the same image frame; and obtain the target distance from the first position information and the second position information at the corresponding moment.

In some embodiments, the second acquisition module 73 is further configured to take the moment corresponding to the image frame in which the target distance first falls below the distance threshold as the throw moment of the package.

In some embodiments, the second acquisition module 73 is further configured to: extract, from the image frame corresponding to the second position information, the image region marked by the second position information; perform human-body keypoint detection on the image region to obtain hand position information; and take the distance from the first position information to the hand position information as the target distance.

In some embodiments, the behavior identification module 74 is further configured to generate the sorting-force parameter of the package on the basis of the motion information, and to determine, from the sorting-force parameter, the sorting-force level of the sorting behavior applied to the package.

In some embodiments, the motion information used by the behavior identification module 74 includes the distance travelled, the maximum speed and the average speed of the package from the throw moment to the current moment, as well as the instantaneous speed and acceleration of the package at every moment in that interval.

In some embodiments, the trajectory tracking module 72 is further configured to: track the human-body detection frame on the basis of its first identification information and generate the corresponding human-body motion trajectory; and track the package detection frame on the basis of its second identification information and generate the corresponding package motion trajectory.
To implement the above embodiments, an embodiment of the present disclosure further proposes an electronic device 800. As shown in Fig. 8, the electronic device 800 includes a processor 801 and a memory 802 communicatively connected to the processor 801; the memory 802 stores instructions executable by the at least one processor 801, and the instructions are executed by the at least one processor 801 to implement the method for identifying parcel sorting behavior shown in the above embodiments.

To implement the above embodiments, an embodiment of the present disclosure further proposes a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to implement the method for identifying parcel sorting behavior shown in the above embodiments.

To implement the above embodiments, an embodiment of the present disclosure further proposes a computer program product including a computer program which, when executed by a processor, implements the method for identifying parcel sorting behavior shown in the above embodiments.

To implement the above embodiments, an embodiment of the present disclosure further proposes a computer program including computer program code which, when run on a computer, causes the computer to execute the method for identifying parcel sorting behavior shown in the above embodiments.

It should be noted that the foregoing explanations of the method embodiments also apply to the apparatus, electronic device, non-transitory computer-readable storage medium, computer program product and computer program of the above embodiments, and are not repeated here.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Accordingly, a feature qualified by "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present disclosure, "a plurality of" means two or more, unless expressly and specifically defined otherwise.

In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples" and the like means that a specific feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, unless they contradict each other, those skilled in the art may combine the different embodiments or examples described in this specification, and the features thereof.

Although the embodiments of the present disclosure have been shown and described above, it should be understood that they are exemplary and shall not be construed as limiting the present disclosure; within the scope of the present disclosure, a person of ordinary skill in the art may change, modify, substitute and vary the above embodiments.

Claims (20)

  1. A method for identifying parcel sorting behavior, comprising:
    performing target detection on image frames of a target video to obtain at least one human-body detection frame and at least one package detection frame;
    tracking motion trajectories of the human-body detection frame and the package detection frame separately;
    during trajectory tracking, identifying, for any package, a throw moment of the package on the basis of a tracked current motion trajectory; and
    acquiring motion information of the package from the throw moment to a current moment, and identifying a sorting behavior applied to the package on the basis of the motion information.
  2. The method according to claim 1, wherein the current motion trajectory comprises a human-body motion trajectory corresponding to the human-body detection frame and a package motion trajectory corresponding to the package detection frame, and wherein identifying, for any package, the throw moment of the package on the basis of the tracked current motion trajectory comprises:
    comparing a target package motion trajectory corresponding to the package with each human-body motion trajectory, so as to obtain, at each detected moment, a target distance between a target package detection frame of the package and each human-body detection frame; and
    determining a moment at which the target distance first falls below a distance threshold as the throw moment of the package.
  3. The method according to claim 2, wherein comparing the target package motion trajectory corresponding to the package with each human-body motion trajectory comprises:
    acquiring, in order from latest to earliest along the target package motion trajectory, first position information of the target package detection frame in each image frame;
    acquiring, from the human-body motion trajectory, second position information of the human-body detection frame in the same image frame; and
    obtaining the target distance from the first position information and the second position information at the corresponding moment.
  4. The method according to claim 2 or 3, wherein determining the moment at which the target distance first falls below the distance threshold as the throw moment of the package comprises:
    taking a moment corresponding to the image frame in which the target distance first falls below the distance threshold as the throw moment of the package.
  5. The method according to claim 3, wherein obtaining the target distance from the first position information and the second position information at the corresponding moment comprises:
    extracting, from the image frame corresponding to the second position information, an image region marked by the second position information;
    performing human-body keypoint detection on the image region to obtain hand position information; and
    taking a distance from the first position information to the hand position information as the target distance.
  6. The method according to any one of claims 1 to 5, wherein identifying the sorting behavior applied to the package on the basis of the motion information comprises:
    generating a sorting-force parameter of the package on the basis of the motion information, and determining, from the sorting-force parameter, a sorting-force level of the sorting behavior applied to the package.
  7. The method according to any one of claims 1 to 6, wherein the motion information comprises a distance value, a maximum speed and an average speed of the package from the throw moment to the current moment, as well as a motion speed and an acceleration of the package at every moment from the throw moment to the current moment.
  8. The method according to any one of claims 1 to 7, wherein tracking the motion trajectories of the human-body detection frame and the package detection frame separately comprises:
    tracking the human-body detection frame on the basis of first identification information of the human-body detection frame, and generating the human-body motion trajectory corresponding to the human-body detection frame; and
    tracking the package detection frame on the basis of second identification information of the package detection frame, and generating the package motion trajectory corresponding to the package detection frame.
  9. An apparatus for identifying parcel sorting behavior, comprising:
    a first acquisition module configured to perform target detection on image frames of a target video to obtain at least one human-body detection frame and at least one package detection frame;
    a trajectory tracking module configured to track motion trajectories of the human-body detection frame and the package detection frame separately;
    a second acquisition module configured to identify, during trajectory tracking and for any package, a throw moment of the package on the basis of a tracked current motion trajectory; and
    a behavior identification module configured to acquire motion information of the package from the throw moment to a current moment, and to identify a sorting behavior applied to the package on the basis of the motion information.
  10. The apparatus according to claim 9, wherein the second acquisition module is further configured to:
    compare a target package motion trajectory corresponding to the package with each human-body motion trajectory, so as to obtain, at each detected moment, a target distance between a target package detection frame of the package and each human-body detection frame; and
    determine a moment at which the target distance first falls below a distance threshold as the throw moment of the package.
  11. The apparatus according to claim 10, wherein the second acquisition module is further configured to:
    acquire, in order from latest to earliest along the target package motion trajectory, first position information of the target package detection frame in each image frame;
    acquire, from the human-body motion trajectory, second position information of the human-body detection frame in the same image frame; and
    obtain the target distance from the first position information and the second position information at the corresponding moment.
  12. The apparatus according to claim 10 or 11, wherein the second acquisition module is further configured to:
    take a moment corresponding to the image frame in which the target distance first falls below the distance threshold as the throw moment of the package.
  13. The apparatus according to claim 11, wherein the second acquisition module is further configured to:
    extract, from the image frame corresponding to the second position information, an image region marked by the second position information;
    perform human-body keypoint detection on the image region to obtain hand position information; and
    take a distance from the first position information to the hand position information as the target distance.
  14. The apparatus according to any one of claims 9 to 13, wherein the behavior identification module is further configured to:
    generate a sorting-force parameter of the package on the basis of the motion information, and determine, from the sorting-force parameter, a sorting-force level of the sorting behavior applied to the package.
  15. The apparatus according to any one of claims 9 to 14, wherein the motion information comprises a distance value, a maximum speed and an average speed of the package from the throw moment to the current moment, as well as a motion speed and an acceleration of the package at every moment from the throw moment to the current moment.
  16. The apparatus according to any one of claims 9 to 15, wherein the trajectory tracking module is further configured to:
    track the human-body detection frame on the basis of first identification information of the human-body detection frame, and generate the human-body motion trajectory corresponding to the human-body detection frame; and
    track the package detection frame on the basis of second identification information of the package detection frame, and generate the package motion trajectory corresponding to the package detection frame.
  17. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1 to 8.
  18. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to perform the method according to any one of claims 1 to 8.
  19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 8.
  20. A computer program comprising computer program code which, when run on a computer, causes the computer to perform the method according to any one of claims 1 to 8.
PCT/CN2022/131496 2022-02-23 2022-11-11 Method and apparatus for identifying parcel sorting behavior WO2023160022A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210168877.6 2022-02-23
CN202210168877.6A CN114550294A (zh) 2022-02-23 Method and apparatus for identifying parcel sorting behavior

Publications (1)

Publication Number Publication Date
WO2023160022A1 true WO2023160022A1 (zh) 2023-08-31

Family

ID=81677737

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/131496 WO2023160022A1 (zh) Method and apparatus for identifying parcel sorting behavior

Country Status (2)

Country Link
CN (1) CN114550294A (zh)
WO (1) WO2023160022A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114550294A (zh) * 2022-02-23 2022-05-27 京东科技信息技术有限公司 包裹分拣行为的识别方法及其装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358194A (zh) * 2017-07-10 2017-11-17 南京邮电大学 Computer-vision-based method for judging violent sorting of express parcels
CN112507760A (zh) * 2019-09-16 2021-03-16 杭州海康威视数字技术股份有限公司 Method, apparatus and device for detecting violent sorting behavior
CN113221819A (zh) * 2021-05-28 2021-08-06 中邮信息科技(北京)有限公司 Method, apparatus, computer device and storage medium for detecting violent sorting of packages
CN113469137A (zh) * 2021-07-28 2021-10-01 浙江大华技术股份有限公司 Method, apparatus, storage medium and electronic apparatus for identifying abnormal behavior
CN113516102A (zh) * 2021-08-06 2021-10-19 上海中通吉网络技术有限公司 Video-based deep-learning detection of throwing behavior
CN114550294A (zh) * 2022-02-23 2022-05-27 京东科技信息技术有限公司 Method and apparatus for identifying parcel sorting behavior


Also Published As

Publication number Publication date
CN114550294A (zh) 2022-05-27


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22928283

Country of ref document: EP

Kind code of ref document: A1