WO2020088136A1 - Customer path tracking method and system - Google Patents

Customer path tracking method and system

Info

Publication number
WO2020088136A1
WO2020088136A1 (application PCT/CN2019/106514)
Authority
WO
WIPO (PCT)
Prior art keywords
target
matched
database
active
key point
Prior art date
Application number
PCT/CN2019/106514
Other languages
English (en)
French (fr)
Inventor
方明
王止观
杨渼仪
顾阳
程进兴
Original Assignee
苏宁云计算有限公司
苏宁易购集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏宁云计算有限公司 and 苏宁易购集团股份有限公司
Priority to CA3158925A1 (en)
Publication of WO2020088136A1 (zh)

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • the present invention relates to the field of data processing technology, and in particular, to a customer path tracking method and system.
  • In recent years, thanks to the rapid development of artificial intelligence, the smart retail spawned by the big data era has provided more application scenarios for target tracking algorithms. For example, a retail store's cameras monitor the customers in the store on video; from this video surveillance, data such as each customer's path trajectory and grabbing actions are obtained; finally, data analysis yields each customer's shopping needs, and personalized advertisements are pushed to the customer, meeting user needs while improving the store's sales performance.
  • Existing target tracking algorithms usually apply SIFT or HOG feature extraction to track targets in video surveillance images.
  • The specific process is as follows: first, the object features in each frame of the surveillance image are detected and the objects are classified with an SVM classifier; then, by analyzing the feature associations between consecutive frames of the surveillance images, the target's path trajectory is obtained.
  • However, while an existing target tracking algorithm runs the video tracking process, when the action paths of two or more targets in consecutive frames cross and occlude each other, targets are easily interchanged or lost, which reduces the accuracy of the target tracking algorithm.
  • The purpose of the present invention is to provide a customer path tracking method and system that solve the problem of low target tracking accuracy in the prior art.
  • To achieve the above purpose, one aspect of the present invention provides a customer path tracking method, including:
  • Step S1: processing a monitoring video stream through a keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data;
  • Step S2: traversing and matching each target ID in the current frame image against the active IDs in an active ID database, moving each successfully matched target ID into the active ID database to replace its corresponding active ID, and/or recording each unmatched target ID into a to-be-matched database;
  • Step S3: traversing and matching each target ID in the current frame image against the to-be-matched IDs in the to-be-matched database, moving each successfully matched target ID into the to-be-matched database to replace its corresponding to-be-matched ID, and/or recording each unmatched target ID into the to-be-matched database;
  • Step S4: traversing the to-be-matched IDs in the to-be-matched database, extracting the to-be-matched IDs that appear continuously within the activation time of T2 frames and recording them into the active ID database, and clearing the to-be-matched database;
  • Step S5: traversing the active IDs in the active ID database, extracting the active IDs that have not appeared continuously within the activation time of T1 frames and saving them to a permanent ID database; then returning to step S1;
  • Step S6: obtaining from the permanent ID database the positions where each customer appears in different frame images, and drawing the path trajectory.
  • Preferably, before step S1 of processing the monitoring video stream through the trained keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data, the method further includes step S0:
  • calling Openpose software to extract multiple pieces of keypoint information from multiple human-body training pictures; annotating the pieces of keypoint information with the COCO dataset; and training a keypoint model based on the pieces of keypoint information and their corresponding annotation information.
  • In step S1, the method of processing the monitoring video stream through the keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data includes: calling the keypoint model to process the RTSP-protocol monitoring video stream, and extracting the multiple portraits in the current frame image and the keypoint information within each portrait; and labeling each portrait to obtain multiple target IDs, while recording the correspondence between each target ID and its keypoint information.
  • In step S2, the method of traversing and matching each target ID in the current frame image against the active IDs in the active ID database, moving each successfully matched target ID into the active ID database to replace its corresponding active ID, and/or recording each unmatched target ID into the to-be-matched database includes: using a greedy matching algorithm or the Hungarian algorithm to traverse and match the keypoint data corresponding to each target ID against the keypoint data corresponding to the active IDs, obtaining the similar distance between each target ID and the active ID database; when the similar distance of a target ID is smaller than a first confidence distance, the target ID is matched successfully; when the similar distance is greater than the first confidence distance but smaller than a suspect distance, a comprehensive distance must be generated from the similar distance and the similar color and the matching repeated, the match succeeding when the comprehensive distance is smaller than a third confidence distance and failing when it is greater than the third confidence distance; the successfully matched target IDs and their keypoint data are moved into the active ID database to replace the corresponding IDs and keypoint data, and/or the unmatched target IDs and their keypoint data are recorded into the to-be-matched database.
  • In step S3, the method of traversing and matching each target ID in the current frame image against the to-be-matched IDs in the to-be-matched database, moving each successfully matched target ID into the to-be-matched database to replace its corresponding to-be-matched ID, and/or recording each unmatched target ID into the to-be-matched database includes: using the greedy matching algorithm or the Hungarian algorithm to traverse and match the keypoint data corresponding to each target ID against the keypoint data corresponding to the to-be-matched IDs, obtaining the similar distance between each target ID and the to-be-matched database; when the similar distance of a target ID is smaller than a second confidence distance, the target ID is matched successfully; when the similar distance is greater than the second confidence distance but smaller than the suspect distance, a comprehensive distance must be generated from the similar distance and the similar color and the matching repeated, the match succeeding when the comprehensive distance is smaller than the third confidence distance and failing when it is greater than the third confidence distance; the successfully matched target IDs and their keypoint data are moved into the to-be-matched database to replace the corresponding to-be-matched IDs and keypoint data, and/or the unmatched target IDs and their keypoint data are recorded into the to-be-matched database.
  • The method of generating a comprehensive distance from the similar distance and the similar color and then matching again includes: computing the similar color A with the similar-color formula (rendered in the publication as Figure PCTCN2019106514-appb-000001), where R1, G1 and B1 are the three primary-color values corresponding to the upper-garment color of the target ID, and R2, G2 and B2 are the three primary-color values corresponding to the lower-garment color of the target ID; the comprehensive distance is then computed as B = similar distance + λ·A, where λ denotes the color weight coefficient.
  • The similar distance is a Euclidean distance or a cosine distance.
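  • As an illustration of the two options, the similar distance between two keypoint sets can be computed as below; flattening the keypoints into 1-D coordinate vectors is this sketch's assumption, since the publication does not fix a representation.

```python
import numpy as np

def similar_distance(kp_a, kp_b, kind="euclidean"):
    """Similar distance between two keypoint sets given as (x, y) arrays."""
    a = np.asarray(kp_a, dtype=float).ravel()
    b = np.asarray(kp_b, dtype=float).ravel()
    if kind == "euclidean":
        return float(np.linalg.norm(a - b))
    if kind == "cosine":
        # Cosine distance = 1 - cosine similarity of the flattened vectors.
        return float(1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    raise ValueError(f"unknown distance kind: {kind}")
```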
  • Compared with the prior art, the customer path tracking method provided by the present invention has the following beneficial effects:
  • In the customer path tracking method provided by the present invention, a keypoint model first needs to be trained to extract the target IDs and their corresponding keypoint data from the current frame image of the surveillance video stream. Here, a target ID is a label assigned, based on face recognition technology, to a portrait appearing in the current frame image: if portraits in different frame images are recognized by face recognition as the same person, the same portrait carries the same label across frames; likewise, different portraits in the same or different frame images carry different labels so that they can be distinguished.
  • After the target IDs and keypoint data of the current frame image are obtained, the target IDs first undergo preliminary matching: each target ID in the current frame image is traversed and matched against the active IDs in the active ID database; if a target ID in the current frame image matches successfully, it is moved into the active ID database to replace its corresponding active ID, while target IDs that do not match successfully are recorded into the to-be-matched database; once all target IDs in the current frame image have been handled, the secondary matching stage begins.
  • In the secondary matching stage, each target ID in the current frame image is traversed again against the to-be-matched IDs in the to-be-matched database; a successfully matched target ID is moved into the to-be-matched database to replace its corresponding to-be-matched ID; once all target IDs in the current frame image have been handled again, the active-ID-database merging step begins.
  • In this step, the to-be-matched IDs in the to-be-matched database are traversed, and those that appear continuously within the activation time of T2 frames are merged into the active ID database: since such a to-be-matched ID recurs within T2 frames, it can be regarded as an active ID, while a to-be-matched ID not promoted to active can be regarded as noise and ignored.
  • Further, after the active ID database has been updated, active IDs must still be extracted from it and saved to the permanent ID database: the active IDs in the active ID database are traversed, and those that have not appeared continuously within the activation time of T1 frames are saved to the permanent ID database, completing the classification and labeling of the target IDs in the current frame image; the next frame image is then acquired and the procedure returns to step S1. The end user can finally obtain from the permanent ID database the positions where each customer appears in different frame images and draw the path trajectory of the customer corresponding to each target ID.
  • It can be seen that the present invention uses a keypoint model to extract portrait keypoint data, which allows portraits to be tracked accurately across different frame images and achieves higher accuracy than the prior-art bounding-box tracking method. In addition, the double matching mechanism effectively filters out background noise in the image and improves the tracking accuracy of the customer's path trajectory, so that the user's shopping preferences can be analyzed accurately, providing effective reference data for store managers.
  • Another aspect of the present invention provides a customer path tracking system, applied to the customer path tracking method of the above technical solution, the system including:
  • a keypoint data extraction unit, used to process the monitoring video stream through the keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data;
  • a first matching unit, used to traverse and match each target ID in the current frame image against the active IDs in the active ID database, move each successfully matched target ID into the active ID database to replace its corresponding active ID, and/or record each unmatched target ID into the to-be-matched database;
  • a second matching unit, used to traverse and match each target ID in the current frame image against the to-be-matched IDs in the to-be-matched database, move each successfully matched target ID into the to-be-matched database to replace its corresponding to-be-matched ID, and/or record each unmatched target ID into the to-be-matched database;
  • a first supplementary recording unit, used to traverse the to-be-matched IDs in the to-be-matched database, extract the to-be-matched IDs that appear continuously within the activation time of T2 frames and record them into the active ID database, and clear the to-be-matched database;
  • a second supplementary recording unit, used to traverse the active IDs in the active ID database, extract the active IDs that have not appeared continuously within the activation time of T1 frames and save them to the permanent ID database, and re-invoke the keypoint data extraction unit;
  • a path analysis unit, which obtains from the permanent ID database the positions where each customer appears in different frame images and draws the path trajectory.
  • Preferably, the system further includes a model unit connected to the input of the keypoint data extraction unit; the model unit includes a training extraction module, an annotation module and a model training module;
  • the training extraction module is used to call Openpose software to extract multiple pieces of keypoint information from multiple human-body training pictures;
  • the annotation module is used to annotate the pieces of keypoint information with the COCO dataset;
  • the model training module is used to train a keypoint model based on the pieces of keypoint information and their corresponding annotation information.
  • Preferably, the keypoint data extraction unit includes a portrait extraction module and a label matching module;
  • the portrait extraction module is used to call the keypoint model to process the RTSP-protocol monitoring video stream and extract the multiple portraits in the current frame image and the keypoint information within each portrait;
  • the label matching module is used to label each portrait to obtain multiple target IDs, while recording the correspondence between each target ID and its keypoint information.
  • Compared with the prior art, the beneficial effects of the customer path tracking system provided by the present invention are the same as those of the customer path tracking method provided by the above technical solution, and are not repeated here.
  • FIG. 1 is a schematic flowchart of a customer path tracking method in Embodiment 1 of the present invention
  • FIG. 2 is an example diagram of human-body key points in Embodiment 1 of the present invention;
  • FIG. 3 is a structural block diagram of a customer path tracking system in Embodiment 2 of the present invention.
  • FIG. 1 is a schematic flowchart of the customer path tracking method in Embodiment 1 of the present invention. Referring to FIG. 1, this embodiment provides a customer path tracking method, including:
  • Step S1: processing the monitoring video stream through the keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data;
  • Step S2: traversing and matching each target ID in the current frame image against the active IDs in the active ID database, moving each successfully matched target ID into the active ID database to replace its corresponding active ID, and/or recording each unmatched target ID into the to-be-matched database;
  • Step S3: traversing and matching each target ID in the current frame image against the to-be-matched IDs in the to-be-matched database, moving each successfully matched target ID into the to-be-matched database to replace its corresponding to-be-matched ID, and/or recording each unmatched target ID into the to-be-matched database;
  • Step S4: traversing the to-be-matched IDs in the to-be-matched database, extracting the to-be-matched IDs that appear continuously within the activation time of T2 frames and recording them into the active ID database, and clearing the to-be-matched database;
  • Step S5: traversing the active IDs in the active ID database, extracting the active IDs that have not appeared continuously within the activation time of T1 frames and saving them to the permanent ID database; then returning to step S1;
  • Step S6: obtaining from the permanent ID database the positions where each customer appears in different frame images, and drawing the path trajectory.
  • In the customer path tracking method provided by this embodiment, a keypoint model first needs to be trained to extract the target IDs and their corresponding keypoint data from the current frame image of the surveillance video stream. Here, a target ID is a label assigned, based on face recognition technology, to a portrait appearing in the current frame image: if portraits in different frame images are recognized by face recognition as the same person, the same portrait carries the same label across frames; likewise, different portraits in the same or different frame images carry different labels so that they can be distinguished.
  • After the target IDs and keypoint data of the current frame image are obtained, the target IDs first undergo preliminary matching. The specific process is that each target ID in the current frame image is traversed and matched against the active IDs in the active ID database; if a target ID in the current frame image matches successfully, it is moved into the active ID database to replace its corresponding active ID, and the target IDs that do not match successfully are recorded into the to-be-matched database; once all target IDs in the current frame image have been handled, the secondary matching stage begins.
  • The specific process of the secondary matching stage is that each target ID in the current frame image is traversed again against the to-be-matched IDs in the to-be-matched database; a successfully matched target ID is moved into the to-be-matched database to replace its corresponding to-be-matched ID; once all target IDs in the current frame image have been handled again, the active-ID-database merging step begins.
  • The specific process of this step is to traverse the to-be-matched IDs in the to-be-matched database and merge those that appear continuously within the activation time of T2 frames into the active ID database: since such a to-be-matched ID recurs within T2 frames, it can be regarded as an active ID, while a to-be-matched ID not promoted to active can be regarded as noise and ignored.
  • Further, after the active ID database has been updated, active IDs must still be extracted from it and saved to the permanent ID database. The specific process of this step is to traverse the active IDs in the active ID database and save those that have not appeared continuously within the activation time of T1 frames to the permanent ID database, completing the classification and labeling of the target IDs in the current frame image; the next frame image is then reacquired and the procedure returns to step S1. The end user can finally obtain from the permanent ID database the positions where each customer appears in different frame images and draw the path trajectory of the customer corresponding to each target ID.
  • It can be seen that this embodiment uses the keypoint model to extract portrait keypoint data, which allows portraits to be tracked accurately across different frame images and achieves higher accuracy than the prior-art bounding-box tracking method. In addition, the double matching mechanism effectively filters out background noise in the image and improves the tracking accuracy of the customer's path trajectory, so that users' shopping preferences can be analyzed accurately, providing effective reference data for store managers.
  • In addition, by extracting customers' path trajectories and behaviors from the permanent ID database, users can help store managers understand which display cases of the store lie in high-traffic areas and which in low-traffic areas.
  • Customers' dwell time can then be analyzed. Dwell time reflects a customer's interest in a display case, helps users understand how attractive the merchandise in the case is, and, for display cases drawing little customer interest, supports analyzing the causes and making corresponding adjustments. It also provides users with effective data support for subsequently adjusting the corresponding display cases and running marketing activities, helping them make accurate and effective business decisions.
  • It may be added that, before step S1 of the above embodiment (processing the monitoring video stream through the trained keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data), the method further includes step S0:
  • calling Openpose software to extract multiple pieces of keypoint information from multiple human-body training pictures; annotating the pieces of keypoint information with the COCO dataset; and training the keypoint model based on the pieces of keypoint information and their corresponding annotation information.
  • The annotation content includes the position coordinates of each key point.
  • It should be noted that the Openpose software applies the open-source Openpose method of the prior art. As shown in FIG. 2, it can capture a portrait's facial expression and the keypoint information of 18 human-body joints.
  • Openpose is suitable not only for keypoint extraction from single-person images but also for keypoint extraction from multi-person images, and it is very robust.
  • The COCO dataset is likewise a keypoint annotation model commonly used by those skilled in the art and is not described further in this embodiment.
  • Specifically, in the above embodiment, the method of processing the monitoring video stream through the keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data includes: calling the keypoint model to process the RTSP-protocol monitoring video stream and extracting the multiple portraits in the current frame image and the keypoint information within each portrait; and labeling each portrait to obtain multiple target IDs, while recording the correspondence between each target ID and its keypoint information.
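  • A minimal sketch of this step, assuming OpenCV for the RTSP capture and a hypothetical `keypoint_model` callable standing in for the trained model (the publication does not specify an inference API); the naive per-frame labels here ignore the cross-frame matching handled by the later steps.

```python
import cv2  # OpenCV reads RTSP streams through its FFmpeg backend

def read_targets(rtsp_url, keypoint_model):
    """Yield one {target_id: keypoints} dict per frame of the RTSP stream."""
    cap = cv2.VideoCapture(rtsp_url)  # e.g. "rtsp://camera-host/stream" (placeholder URL)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            portraits = keypoint_model(frame)  # hypothetical: list of per-person keypoints
            # Label every portrait with a target ID and record the ID -> keypoint
            # correspondence, as the method describes.
            yield {f"target_{i}": kp for i, kp in enumerate(portraits)}
    finally:
        cap.release()
```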
  • Further, in the above embodiment, the method of traversing and matching each target ID in the current frame image against the active IDs in the active ID database, moving each successfully matched target ID into the active ID database to replace its corresponding active ID, and/or recording each unmatched target ID into the to-be-matched database includes: using the greedy matching algorithm or the Hungarian algorithm to traverse and match the keypoint data corresponding to each target ID against the keypoint data corresponding to the active IDs, obtaining the similar distance between each target ID and the active ID database; when the similar distance of a target ID is smaller than the first confidence distance, the target ID is matched successfully; when the similar distance is greater than the first confidence distance but smaller than the suspect distance, a comprehensive distance is generated from the similar distance and the similar color and the matching repeated, succeeding when the comprehensive distance is smaller than the third confidence distance and failing when it is greater than the third confidence distance; the successfully matched target IDs and their corresponding keypoint data are moved into the active ID database to replace the corresponding IDs and keypoint data, and/or the unmatched target IDs and their corresponding keypoint data are recorded into the to-be-matched database.
  • In a specific implementation, the target IDs in the current frame image are extracted in turn, and their corresponding keypoint data are traversed and matched against the active IDs in the active ID database using the greedy matching algorithm or the Hungarian algorithm, giving the similar distance between each target ID and the active ID database; repeating these steps gives the similar distance for every target ID in the current frame image. A matching judgment is then made on each target ID's similar distance: when it is smaller than the first confidence distance, the target ID is marked as matched successfully; when it is greater than the first confidence distance, the target ID is in doubt, a comprehensive distance is generated from the similar distance and the similar color and the matching judgment repeated, the doubt being lifted and the ID marked as matched when the comprehensive distance is smaller than the third confidence distance, and the ID marked as unmatched and deleted as noise when the comprehensive distance is greater than the third confidence distance.
  • It can be seen that, by introducing the suspect-distance and similar-color criteria, this embodiment can further analyze and judge doubtful target IDs, accurately removing the noise in the current frame image and improving the accuracy of portrait recognition.
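  • The judgment above reduces to the following sketch; the first and third confidence distances and λ use the example values given later in this embodiment, while the suspect distance is an assumed placeholder, since no value is published for it.

```python
FIRST_CONF = 2000     # first confidence distance (example value from this embodiment)
THIRD_CONF = 2000     # third confidence distance (example value from this embodiment)
SUSPECT = 3000        # suspect distance: ASSUMED, no value is given in the publication
LAMBDA = 0.2          # color weight coefficient (example value from this embodiment)

def judge_first_match(similar_dist, similar_color_a):
    """True if the target ID matches an active ID, False if rejected as noise."""
    if similar_dist < FIRST_CONF:
        return True                                 # confident match on keypoints alone
    if similar_dist < SUSPECT:                      # doubtful: re-check with clothing color
        comprehensive = similar_dist + LAMBDA * similar_color_a  # B = distance + λ·A
        return comprehensive < THIRD_CONF
    return False
```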
  • Further, in the above embodiment, the method of traversing and matching each target ID in the current frame image against the to-be-matched IDs in the to-be-matched database, moving each successfully matched target ID into the to-be-matched database to replace its corresponding to-be-matched ID, and/or recording each unmatched target ID into the to-be-matched database includes: using the greedy matching algorithm or the Hungarian algorithm to traverse and match the keypoint data corresponding to each target ID against the keypoint data corresponding to the to-be-matched IDs, obtaining the similar distance between each target ID and the to-be-matched database; when the similar distance of a target ID is smaller than the second confidence distance, the target ID is matched successfully; when the similar distance is greater than the second confidence distance but smaller than the suspect distance, a comprehensive distance is generated from the similar distance and the similar color and the matching repeated, succeeding when the comprehensive distance is smaller than the third confidence distance and failing when it is greater than the third confidence distance; the successfully matched target IDs and their corresponding keypoint data are moved into the to-be-matched database to replace the corresponding to-be-matched IDs and keypoint data, and/or the unmatched target IDs and their corresponding keypoint data are recorded into the to-be-matched database.
  • In a specific implementation, since the to-be-matched IDs in the to-be-matched database are mostly doubtful IDs, they need further verification so that the to-be-matched database can be updated and cleaned. Specifically, the target IDs in the current frame image are extracted in turn, and their corresponding keypoint data are traversed and matched against the IDs in the to-be-matched database using the greedy matching algorithm or the Hungarian algorithm, giving the similar distance between each target ID and the to-be-matched database; the successfully matched target IDs and their corresponding keypoint data are moved into the to-be-matched database to replace the corresponding IDs and keypoint data, and the unmatched target IDs and their corresponding keypoint data are recorded into the to-be-matched database.
  • It can be seen that this secondary matching updates and cleans the to-be-matched database and improves matching accuracy.
  • It should be noted that the greedy matching algorithm in the above embodiment means computing the similar distance between a target ID and each active ID / to-be-matched ID separately, and taking the minimum of these similar distances as the similar distance from the target ID to the active ID database / to-be-matched database; if this similar distance is smaller than the first / second confidence distance, the target ID is matched successfully.
  • The Hungarian algorithm means computing the similar distance between the target ID and each active ID / to-be-matched ID separately, sorting the similar distances in descending order, and taking the minimum as the similar distance from the target ID to the active ID database / to-be-matched database; if this similar distance is smaller than the first / second confidence distance, the target ID is matched successfully.
  • In addition, "replacing the corresponding active ID / to-be-matched ID" in the expressions above refers to the active ID / to-be-matched ID corresponding to the minimum similar distance.
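  • For contrast with the per-ID description above, both strategies can also be phrased as assignments over a square cost matrix of similar distances; the Hungarian solver below uses SciPy's `linear_sum_assignment`, which is this sketch's choice rather than anything named in the publication.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def greedy_assign(cost):
    """Pair each target (row) with its nearest still-free candidate ID (column)."""
    cost = cost.astype(float).copy()
    pairs = []
    for row in range(cost.shape[0]):
        col = int(np.argmin(cost[row]))
        pairs.append((row, col))
        cost[:, col] = np.inf        # a candidate ID can be claimed only once
    return pairs

def hungarian_assign(cost):
    """Globally optimal pairing minimizing the total similar distance."""
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

# cost[i, j] = similar_distance(target_keypoints[i], candidate_keypoints[j])
cost = np.array([[1200.0, 2500.0],
                 [900.0,  400.0]])
print(greedy_assign(cost))     # [(0, 0), (1, 1)]
print(hungarian_assign(cost))  # [(0, 0), (1, 1)]
```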
  • It may be supplemented that the method of generating a comprehensive distance from the similar distance and the similar color in the above embodiment computes the similar color A with the similar-color formula (rendered in the publication as Figure PCTCN2019106514-appb-000002), where R1, G1 and B1 are the three primary-color values corresponding to the upper-garment color of the target ID, and R2, G2 and B2 are the three primary-color values corresponding to the lower-garment color of the target ID; the comprehensive distance is computed as B = similar distance + λ·A, where λ denotes the color weight coefficient.
  • For ease of computation, this embodiment gives some parameter values. Exemplarily, the first confidence distance is 2000, the second confidence distance is 1000, the third confidence distance is 2000, the activation time T1 is 300 frames, the activation time T2 is 5 frames, and the color weight coefficient λ is 0.2.
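  • Since the similar-color formula itself is reproduced in the publication only as an image, the sketch below assumes a Euclidean distance in RGB space for A; that assumption is this sketch's, not the publication's, while λ and the third confidence distance use the example values above.

```python
import math

LAMBDA = 0.2  # color weight coefficient from this embodiment

def similar_color(rgb1, rgb2):
    """Similar color A; the published formula is only an image
    (Figure PCTCN2019106514-appb-000002), so Euclidean RGB distance is assumed."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(rgb1, rgb2)))

def comprehensive_distance(similar_dist, rgb1, rgb2):
    """B = similar distance + λ·A, as stated in this embodiment."""
    return similar_dist + LAMBDA * similar_color(rgb1, rgb2)

# A doubtful match re-checked against the third confidence distance (2000):
B = comprehensive_distance(2100.0, (200, 30, 40), (190, 35, 50))
print(B, B < 2000)  # 2103.0 False -> still unmatched
```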
  • It should also be explained that an active ID in the above embodiments refers to a portrait that has been confirmed and is currently active in the monitoring video stream.
  • The active ID database includes the active IDs currently active in the monitoring video stream, the keypoint data corresponding to each active ID, and the face image and behavior image corresponding to each active ID;
  • a to-be-matched ID refers to an unconfirmed portrait, and the to-be-matched database includes the to-be-matched IDs, the keypoint data corresponding to each to-be-matched ID, and the face image and behavior image corresponding to each to-be-matched ID;
  • a permanent ID refers to a portrait that has been confirmed and was active in the monitoring video stream only previously, and the permanent ID database includes the permanent IDs, the keypoint data corresponding to each permanent ID, and the face image and behavior image corresponding to each permanent ID.
  • Setting up multiple databases not only reduces system memory usage but also shortens matching time and improves matching accuracy. For example, if an active ID does not appear within T1 frames, it is deleted from the active ID database and stored in the permanent ID database; this active ID will not be matched afterwards, which reduces system memory usage and shortens matching time.
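  • The three records described above might be represented as follows; the field names are illustrative, since the publication lists the contents of each database but no schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

import numpy as np

@dataclass
class PersonRecord:
    """One entry in the active / to-be-matched / permanent ID databases."""
    person_id: str
    keypoints: np.ndarray        # keypoint data for the portrait
    face_image: np.ndarray       # face image used for recognition
    behavior_image: np.ndarray   # behavior/action image
    positions: List[Tuple[int, float, float]] = field(default_factory=list)
    # (frame number, x, y), accumulated so the path trajectory can be drawn later

active_db: Dict[str, PersonRecord] = {}     # confirmed, currently active portraits
pending_db: Dict[str, PersonRecord] = {}    # unconfirmed portraits awaiting matching
permanent_db: Dict[str, PersonRecord] = {}  # confirmed portraits no longer active
```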
  • Referring to FIG. 1 and FIG. 3, this embodiment provides a customer path tracking system, including:
  • the keypoint data extraction unit 1, used to process the monitoring video stream through the keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data;
  • the first matching unit 2, used to traverse and match each target ID in the current frame image against the active IDs in the active ID database, move each successfully matched target ID into the active ID database to replace its corresponding active ID, and/or record each unmatched target ID into the to-be-matched database;
  • the second matching unit 3, used to traverse and match each target ID in the current frame image against the to-be-matched IDs in the to-be-matched database, move each successfully matched target ID into the to-be-matched database to replace its corresponding to-be-matched ID, and/or record each unmatched target ID into the to-be-matched database;
  • the first supplementary recording unit 4, used to traverse the to-be-matched IDs in the to-be-matched database, extract the to-be-matched IDs that appear continuously within the activation time of T2 frames and record them into the active ID database, and clear the to-be-matched database;
  • the second supplementary recording unit 5, used to traverse the active IDs in the active ID database, extract the active IDs that have not appeared continuously within the activation time of T1 frames and save them to the permanent ID database, and re-invoke the keypoint data extraction unit;
  • the path analysis unit 6, used to obtain from the permanent ID database the positions where each customer appears in different frame images and draw the path trajectory.
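  • A minimal sketch of what the path analysis unit 6 does, assuming per-frame (frame, x, y) positions have been accumulated as in the `PersonRecord` sketch earlier; matplotlib is this sketch's choice for the drawing.

```python
import matplotlib.pyplot as plt

def draw_trajectories(permanent_db):
    """Plot one path per permanent ID from its accumulated (frame, x, y) positions."""
    for person_id, record in permanent_db.items():
        pts = sorted(record.positions)            # order the positions by frame number
        if not pts:
            continue
        xs, ys = [p[1] for p in pts], [p[2] for p in pts]
        plt.plot(xs, ys, marker="o", label=person_id)
    plt.xlabel("x (store floor plane)")
    plt.ylabel("y (store floor plane)")
    plt.legend()
    plt.show()
```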
  • Preferably, the system further includes a model unit connected to the input of the keypoint data extraction unit; the model unit includes a training extraction module, an annotation module and a model training module;
  • the training extraction module is used to call Openpose software to extract multiple pieces of keypoint information from multiple human-body training pictures;
  • the annotation module is used to annotate the pieces of keypoint information with the COCO dataset;
  • the model training module is used to train the keypoint model based on the pieces of keypoint information and their corresponding annotation information.
  • Further, the keypoint data extraction unit 1 includes a portrait extraction module and a label matching module;
  • the portrait extraction module is used to call the keypoint model to process the RTSP-protocol monitoring video stream and extract the multiple portraits in the current frame image and the keypoint information within each portrait;
  • the label matching module is used to label each portrait to obtain multiple target IDs, while recording the correspondence between each target ID and its keypoint information.
  • Compared with the prior art, the beneficial effects of the customer path tracking system provided by the embodiments of the present invention are the same as those of the customer path tracking method provided by Embodiment 1 above, and are not repeated here.
  • A person of ordinary skill in the art can understand that all or part of the steps of the above inventive method can be completed by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, includes each step of the method in the foregoing embodiments; the storage medium may be: ROM/RAM, a magnetic disk, an optical disk, a memory card, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention discloses a customer path tracking method and system, solving the problem of low target tracking accuracy in the prior art. The method includes: processing a monitoring video stream through a keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data; traversing and matching each target ID in the current frame image against the active IDs in an active ID database, moving each successfully matched target ID into the active ID database to replace its corresponding active ID, and/or recording each unmatched target ID into a to-be-matched database; and traversing and matching each target ID in the current frame image against the to-be-matched IDs in the to-be-matched database, moving each successfully matched target ID into the to-be-matched database to replace its corresponding to-be-matched ID, and/or recording each unmatched target ID into the to-be-matched database. The system implements the method of the above technical solution.

Description

Customer path tracking method and system

Technical Field

The present invention relates to the field of data processing technology, and in particular to a customer path tracking method and system.

Background

In recent years, thanks to the rapid development of artificial intelligence, the smart retail spawned by the big data era has provided more application scenarios for target tracking algorithms. For example, a retail store's cameras monitor the customers in the store on video; from this video surveillance, data such as each customer's path trajectory and grabbing actions are obtained; finally, data analysis yields each customer's shopping needs, and personalized advertisements are pushed to the customer, meeting user needs while improving the store's sales performance.

Existing target tracking algorithms usually apply SIFT or HOG feature extraction to track targets in video surveillance images. The specific process is as follows: first, the object features in each frame of the surveillance image are detected and the objects are classified with an SVM classifier; then, by analyzing the feature associations between consecutive frames of the surveillance images, the target's path trajectory is obtained. However, while an existing target tracking algorithm runs the video tracking process, when the action paths of two or more targets in consecutive frames cross and occlude each other, targets are easily interchanged or lost, which reduces the accuracy of the target tracking algorithm.
Summary of the Invention

The purpose of the present invention is to provide a customer path tracking method and system that solve the problem of low target tracking accuracy in the prior art.

To achieve the above purpose, one aspect of the present invention provides a customer path tracking method, including:

Step S1: processing a monitoring video stream through a keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data;

Step S2: traversing and matching each target ID in the current frame image against the active IDs in an active ID database, moving each successfully matched target ID into the active ID database to replace its corresponding active ID, and/or recording each unmatched target ID into a to-be-matched database;

Step S3: traversing and matching each target ID in the current frame image against the to-be-matched IDs in the to-be-matched database, moving each successfully matched target ID into the to-be-matched database to replace its corresponding to-be-matched ID, and/or recording each unmatched target ID into the to-be-matched database;

Step S4: traversing the to-be-matched IDs in the to-be-matched database, extracting the to-be-matched IDs that appear continuously within the activation time of T2 frames and recording them into the active ID database, and clearing the to-be-matched database;

Step S5: traversing the active IDs in the active ID database, extracting the active IDs that have not appeared continuously within the activation time of T1 frames and saving them to a permanent ID database; returning to step S1;

Step S6: obtaining from the permanent ID database the positions where each customer appears in different frame images, and drawing the path trajectory.
Preferably, before step S1 of processing the monitoring video stream through the trained keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data, the method further includes step S0:

calling Openpose software to extract multiple pieces of keypoint information from multiple human-body training pictures;

annotating the pieces of keypoint information with the COCO dataset;

training a keypoint model based on the pieces of keypoint information and their corresponding annotation information.

Preferably, in step S1, the method of processing the monitoring video stream through the keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data includes:

calling the keypoint model to process the RTSP-protocol monitoring video stream, and extracting the multiple portraits in the current frame image and the keypoint information within each portrait;

labeling each portrait to obtain multiple target IDs, while recording the correspondence between each target ID and its keypoint information.
Advantageously, in step S2, the method of traversing and matching each target ID in the current frame image against the active IDs in the active ID database, moving each successfully matched target ID into the active ID database to replace its corresponding active ID, and/or recording each unmatched target ID into the to-be-matched database includes:

using a greedy matching algorithm or the Hungarian algorithm to traverse and match the keypoint data corresponding to each target ID in the current frame image against the keypoint data corresponding to the active IDs in the active ID database, obtaining the similar distance between each target ID and the active ID database;

when the similar distance of said target ID is smaller than a first confidence distance, said target ID is matched successfully;

when the similar distance of said target ID is greater than the first confidence distance but smaller than a suspect distance, a comprehensive distance must be generated from the similar distance and the similar color and the matching repeated, the match succeeding when the comprehensive distance is smaller than a third confidence distance and failing when the comprehensive distance is greater than the third confidence distance;

extracting the successfully matched target IDs and their corresponding keypoint data and moving them into the active ID database to replace the corresponding to-be-matched IDs and corresponding keypoint data, and/or extracting the unmatched target IDs and their corresponding keypoint data and recording them into the to-be-matched database.

Advantageously, in step S3, the method of traversing and matching each target ID in the current frame image against the to-be-matched IDs in the to-be-matched database, moving each successfully matched target ID into the to-be-matched database to replace its corresponding to-be-matched ID, and/or recording each unmatched target ID into the to-be-matched database includes:

using the greedy matching algorithm or the Hungarian algorithm to traverse and match the keypoint data corresponding to each target ID in the current frame image against the keypoint data corresponding to the to-be-matched IDs in the to-be-matched database, obtaining the similar distance between each target ID and the to-be-matched database;

when the similar distance of said target ID is smaller than a second confidence distance, said target ID is matched successfully;

when the similar distance of said target ID is greater than the second confidence distance but smaller than the suspect distance, a comprehensive distance must be generated from the similar distance and the similar color and the matching repeated, the match succeeding when the comprehensive distance is smaller than the third confidence distance and failing when the comprehensive distance is greater than the third confidence distance;

extracting the successfully matched target IDs and their corresponding keypoint data and moving them into the to-be-matched database to replace the corresponding to-be-matched IDs and corresponding keypoint data, and/or extracting the unmatched target IDs and their corresponding keypoint data and recording them into the to-be-matched database.
Exemplarily, the method of generating a comprehensive distance based on the similar distance and the similar color and then matching again includes:

computing the similar color A with the similar-color formula

Figure PCTCN2019106514-appb-000001

where R1, G1 and B1 are the three primary-color values corresponding to the upper-garment color of the target ID, and R2, G2 and B2 are the three primary-color values corresponding to the lower-garment color of the target ID;

the comprehensive distance is computed as B = similar distance + λ·A, where λ denotes the color weight coefficient.

Exemplarily, the similar distance is a Euclidean distance or a cosine distance.
Compared with the prior art, the customer path tracking method provided by the present invention has the following beneficial effects:

In the customer path tracking method provided by the present invention, a keypoint model first needs to be trained to extract the target IDs and their corresponding keypoint data from the current frame image of the surveillance video stream. Here, a target ID is a label assigned, based on face recognition technology, to a portrait appearing in the current frame image: if portraits in different frame images are recognized by face recognition as the same person, the same portrait carries the same label across frames; likewise, different portraits in the same or different frame images carry different labels so that they can be distinguished. After the target IDs and keypoint data of the current frame image are obtained, the target IDs first undergo preliminary matching. The specific process is that each target ID in the current frame image is traversed and matched against the active IDs in the active ID database; if a target ID in the current frame image matches successfully, it is moved into the active ID database to replace its corresponding active ID, and the target IDs that do not match successfully are recorded into the to-be-matched database; once all target IDs in the current frame image have been handled, the secondary matching stage begins. The specific process of the secondary matching stage is that each target ID in the current frame image is traversed again against the to-be-matched IDs in the to-be-matched database; a successfully matched target ID is moved into the to-be-matched database to replace its corresponding to-be-matched ID; once all target IDs in the current frame image have been handled again, the active-ID-database merging step begins. The specific process of this step is to traverse the to-be-matched IDs in the to-be-matched database and merge those that appear continuously within the activation time of T2 frames into the active ID database: since such a to-be-matched ID recurs within T2 frames, it can be regarded as an active ID, while a to-be-matched ID not promoted to active can be regarded as noise and ignored. Further, after the active ID database has been updated, active IDs must still be extracted from it and saved to the permanent ID database. The specific process of this step is to traverse the active IDs in the active ID database and save those that have not appeared continuously within the activation time of T1 frames to the permanent ID database, completing the classification and labeling of the target IDs in the current frame image; the next frame image is then reacquired and the above operations are performed again from step S1. The end user can finally obtain from the permanent ID database the positions where each customer appears in different frame images and draw the path trajectory of the customer corresponding to each target ID.

It can be seen that the present invention uses a keypoint model to extract portrait keypoint data, which allows portraits to be tracked accurately across different frame images and achieves higher accuracy than the prior-art bounding-box tracking method. In addition, the double matching mechanism effectively filters out background noise in the image and improves the tracking accuracy of customer path trajectories, so that users' shopping preferences can be analyzed accurately, providing effective reference data for store managers.
Another aspect of the present invention provides a customer path tracking system, applied to the customer path tracking method of the above technical solution, the system including:

a keypoint data extraction unit, used to process the monitoring video stream through the keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data;

a first matching unit, used to traverse and match each target ID in the current frame image against the active IDs in the active ID database, move each successfully matched target ID into the active ID database to replace its corresponding active ID, and/or record each unmatched target ID into the to-be-matched database;

a second matching unit, used to traverse and match each target ID in the current frame image against the to-be-matched IDs in the to-be-matched database, move each successfully matched target ID into the to-be-matched database to replace its corresponding to-be-matched ID, and/or record each unmatched target ID into the to-be-matched database;

a first supplementary recording unit, used to traverse the to-be-matched IDs in the to-be-matched database, extract the to-be-matched IDs that appear continuously within the activation time of T2 frames and record them into the active ID database, and clear the to-be-matched database;

a second supplementary recording unit, used to traverse the active IDs in the active ID database, extract the active IDs that have not appeared continuously within the activation time of T1 frames and save them to the permanent ID database, and re-invoke the keypoint data extraction unit;

a path analysis unit, which obtains from the permanent ID database the positions where each customer appears in different frame images and draws the path trajectory.

Preferably, the system further includes a model unit connected to the input of the keypoint data extraction unit; the model unit includes a training extraction module, an annotation module and a model training module;

the training extraction module is used to call Openpose software to extract multiple pieces of keypoint information from multiple human-body training pictures;

the annotation module is used to annotate the pieces of keypoint information with the COCO dataset;

the model training module is used to train a keypoint model based on the pieces of keypoint information and their corresponding annotation information.

Preferably, the keypoint data extraction unit includes a portrait extraction module and a label matching module;

the portrait extraction module is used to call the keypoint model to process the RTSP-protocol monitoring video stream and extract the multiple portraits in the current frame image and the keypoint information within each portrait;

the label matching module is used to label each portrait to obtain multiple target IDs, while recording the correspondence between each target ID and its keypoint information.

Compared with the prior art, the beneficial effects of the customer path tracking system provided by the present invention are the same as those of the customer path tracking method provided by the above technical solution, and are not repeated here.
Brief Description of the Drawings

The drawings described here are provided for further understanding of the present invention and constitute a part of it; the illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not unduly limit it. In the drawings:

FIG. 1 is a schematic flowchart of the customer path tracking method in Embodiment 1 of the present invention;

FIG. 2 is an example diagram of human-body key points in Embodiment 1 of the present invention;

FIG. 3 is a structural block diagram of the customer path tracking system in Embodiment 2 of the present invention.

Reference numerals:

1 - keypoint data extraction unit;           2 - first matching unit;
3 - second matching unit;                    4 - first supplementary recording unit;
5 - second supplementary recording unit;     6 - path analysis unit.
Detailed Description of the Embodiments

To make the above objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.
Embodiment 1

FIG. 1 is a schematic flowchart of the customer path tracking method in Embodiment 1 of the present invention. Referring to FIG. 1, this embodiment provides a customer path tracking method, including:

Step S1: processing the monitoring video stream through the keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data; Step S2: traversing and matching each target ID in the current frame image against the active IDs in the active ID database, moving each successfully matched target ID into the active ID database to replace its corresponding active ID, and/or recording each unmatched target ID into the to-be-matched database; Step S3: traversing and matching each target ID in the current frame image against the to-be-matched IDs in the to-be-matched database, moving each successfully matched target ID into the to-be-matched database to replace its corresponding to-be-matched ID, and/or recording each unmatched target ID into the to-be-matched database; Step S4: traversing the to-be-matched IDs in the to-be-matched database, extracting the to-be-matched IDs that appear continuously within the activation time of T2 frames and recording them into the active ID database, and clearing the to-be-matched database; Step S5: traversing the active IDs in the active ID database, extracting the active IDs that have not appeared continuously within the activation time of T1 frames and saving them to the permanent ID database; returning to step S1; Step S6: obtaining from the permanent ID database the positions where each customer appears in different frame images, and drawing the path trajectory.
In the customer path tracking method provided by this embodiment, a keypoint model first needs to be trained to extract the target IDs and their corresponding keypoint data from the current frame image of the surveillance video stream. Here, a target ID is a label assigned, based on face recognition technology, to a portrait appearing in the current frame image: if portraits in different frame images are recognized by face recognition as the same person, the same portrait carries the same label across frames; likewise, different portraits in the same or different frame images carry different labels so that they can be distinguished. After the target IDs and keypoint data of the current frame image are obtained, the target IDs first undergo preliminary matching. The specific process is that each target ID in the current frame image is traversed and matched against the active IDs in the active ID database; if a target ID in the current frame image matches successfully, it is moved into the active ID database to replace its corresponding active ID, and the target IDs that do not match successfully are recorded into the to-be-matched database; once all target IDs in the current frame image have been handled, the secondary matching stage begins. The specific process of the secondary matching stage is that each target ID in the current frame image is traversed again against the to-be-matched IDs in the to-be-matched database; a successfully matched target ID is moved into the to-be-matched database to replace its corresponding to-be-matched ID; once all target IDs in the current frame image have been handled again, the active-ID-database merging step begins. The specific process of this step is to traverse the to-be-matched IDs in the to-be-matched database and merge those that appear continuously within the activation time of T2 frames into the active ID database: since such a to-be-matched ID recurs within T2 frames, it can be regarded as an active ID, while a to-be-matched ID not promoted to active can be regarded as noise and ignored. Further, after the active ID database has been updated, active IDs must still be extracted from it and saved to the permanent ID database. The specific process of this step is to traverse the active IDs in the active ID database and save those that have not appeared continuously within the activation time of T1 frames to the permanent ID database, completing the classification and labeling of the target IDs in the current frame image; the next frame image is then reacquired and the above operations are performed again from step S1. The end user can finally obtain from the permanent ID database the positions where each customer appears in different frame images and draw the path trajectory of the customer corresponding to each target ID.

It can be seen that this embodiment uses a keypoint model to extract portrait keypoint data, which allows portraits to be tracked accurately across different frame images and achieves higher accuracy than the prior-art bounding-box tracking method. In addition, the double matching mechanism effectively filters out background noise in the image and improves the tracking accuracy of customer path trajectories, so that users' shopping preferences can be analyzed accurately, providing effective reference data for store managers.

In addition, by extracting customers' path trajectories and behaviors from the permanent ID database, users can help store managers understand which display cases of the store lie in high-traffic areas and which in low-traffic areas. Customers' dwell time can then be analyzed: dwell time reflects a customer's interest in a display case, helps users understand how attractive the merchandise in the case is, and, for display cases drawing little customer interest, supports analyzing the causes and making corresponding improvements. It also provides users with effective data support for subsequently adjusting the corresponding display cases and running marketing activities, helping them make accurate and effective business decisions.
It may be added that, before step S1 of the above embodiment (processing the monitoring video stream through the trained keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data), the method further includes step S0:

calling Openpose software to extract multiple pieces of keypoint information from multiple human-body training pictures; annotating the pieces of keypoint information with the COCO dataset; and training the keypoint model based on the pieces of keypoint information and their corresponding annotation information. The annotation content includes the position coordinates of each key point. It should be noted that the Openpose software applies the open-source Openpose method of the prior art; as shown in FIG. 2, it can capture a portrait's facial expression and the keypoint information of 18 human-body joints. Openpose is suitable not only for keypoint extraction from single-person images but also for keypoint extraction from multi-person images, and it is very robust. The COCO dataset is likewise a keypoint annotation model commonly used by those skilled in the art and is not described further in this embodiment.
Specifically, in the above embodiment, the method of processing the monitoring video stream through the keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data includes:

calling the keypoint model to process the RTSP-protocol monitoring video stream, and extracting the multiple portraits in the current frame image and the keypoint information within each portrait; and labeling each portrait to obtain multiple target IDs, while recording the correspondence between each target ID and its keypoint information.
Further, in the above embodiment, the method of traversing and matching each target ID in the current frame image against the active IDs in the active ID database, moving each successfully matched target ID into the active ID database to replace its corresponding active ID, and/or recording each unmatched target ID into the to-be-matched database includes:

using a greedy matching algorithm or the Hungarian algorithm to traverse and match the keypoint data corresponding to each target ID in the current frame image against the keypoint data corresponding to the active IDs in the active ID database, obtaining the similar distance between each target ID and the active ID database; when the similar distance of a target ID is smaller than the first confidence distance, the target ID is matched successfully; when the similar distance of a target ID is greater than the first confidence distance but smaller than the suspect distance, a comprehensive distance must be generated from the similar distance and the similar color and the matching repeated, succeeding when the comprehensive distance is smaller than the third confidence distance and failing when the comprehensive distance is greater than the third confidence distance; the successfully matched target IDs and their corresponding keypoint data are moved into the active ID database to replace the corresponding to-be-matched IDs and corresponding keypoint data, and/or the unmatched target IDs and their corresponding keypoint data are recorded into the to-be-matched database.

In a specific implementation, the target IDs in the current frame image are extracted in turn, and their corresponding keypoint data are traversed and matched against the active IDs in the active ID database using the greedy matching algorithm or the Hungarian algorithm, giving the similar distance between each target ID and the active ID database; repeating these steps gives the similar distance between every target ID in the current frame image and the active ID database. A matching judgment is then made on each target ID's similar distance: when it is smaller than the first confidence distance, the target ID is marked as matched successfully; when it is greater than the first confidence distance, the target ID is in doubt, and a comprehensive distance must be generated from the similar distance and the similar color and the matching judgment repeated, the doubt on the target ID being lifted and the ID marked as matched when the comprehensive distance is smaller than the third confidence distance and, conversely, the ID being marked as unmatched and deleted as noise when the comprehensive distance is greater than the third confidence distance. Finally, the successfully matched target IDs and their corresponding keypoint data are moved into the active ID database to replace the corresponding to-be-matched IDs and corresponding keypoint data, and the unmatched target IDs and their corresponding keypoint data are recorded into the to-be-matched database. It can be seen that, by introducing the suspect-distance and similar-color criteria, this embodiment can further analyze and judge doubtful target IDs, accurately removing the noise in the current frame image and improving the accuracy of portrait recognition.
Further, in the above embodiment, the method of traversing and matching each target ID in the current frame image against the to-be-matched IDs in the to-be-matched database, moving each successfully matched target ID into the to-be-matched database to replace its corresponding to-be-matched ID, and/or recording each unmatched target ID into the to-be-matched database includes:

using the greedy matching algorithm or the Hungarian algorithm to traverse and match the keypoint data corresponding to each target ID in the current frame image against the keypoint data corresponding to the to-be-matched IDs in the to-be-matched database, obtaining the similar distance between each target ID and the to-be-matched database; when the similar distance of a target ID is smaller than the second confidence distance, the target ID is matched successfully; when the similar distance of a target ID is greater than the second confidence distance but smaller than the suspect distance, a comprehensive distance must be generated from the similar distance and the similar color and the matching repeated, succeeding when the comprehensive distance is smaller than the third confidence distance and failing when the comprehensive distance is greater than the third confidence distance; the successfully matched target IDs and their corresponding keypoint data are moved into the to-be-matched database to replace the corresponding to-be-matched IDs and corresponding keypoint data, and/or the unmatched target IDs and their corresponding keypoint data are recorded into the to-be-matched database.

In a specific implementation, since the to-be-matched IDs in the to-be-matched database are mostly doubtful IDs, they need further verification so that the to-be-matched database can be updated and cleaned. Specifically, the target IDs in the current frame image are extracted in turn, and their corresponding keypoint data are traversed and matched against the IDs in the to-be-matched database using the greedy matching algorithm or the Hungarian algorithm, giving the similar distance between each target ID and the to-be-matched database; repeating these steps gives the similar distance between every target ID in the current frame image and the to-be-matched database. A matching judgment is then made on each target ID's similar distance: when it is smaller than the second confidence distance, the target ID is marked as matched successfully; when it is greater than the second confidence distance, the target ID is in doubt, and a comprehensive distance must be generated from the similar distance and the similar color and the matching judgment repeated, the doubt on the target ID being lifted and the ID marked as matched when the comprehensive distance is smaller than the third confidence distance and, conversely, the ID being marked as unmatched and deleted as noise when the comprehensive distance is greater than the third confidence distance. Finally, the successfully matched target IDs and their corresponding keypoint data are moved into the to-be-matched database to replace the corresponding to-be-matched IDs and corresponding keypoint data, and the unmatched target IDs and their corresponding keypoint data are recorded into the to-be-matched database. It can be seen that this secondary matching updates and cleans the to-be-matched database and improves matching accuracy.
It should be noted that the greedy matching algorithm in the above embodiment means computing the similar distance between a target ID and each active ID / to-be-matched ID separately, and taking the minimum of these similar distances as the similar distance from the target ID to the active ID database / to-be-matched database; if this similar distance is smaller than the first / second confidence distance, the target ID is matched successfully. The Hungarian algorithm means computing the similar distance between the target ID and each active ID / to-be-matched ID separately, sorting the similar distances in descending order, and taking the minimum as the similar distance from the target ID to the active ID database / to-be-matched database; if this similar distance is smaller than the first / second confidence distance, the target ID is matched successfully. In addition, it should be pointed out that "replacing the corresponding active ID / to-be-matched ID" in the expressions above refers to the active ID / to-be-matched ID corresponding to the minimum similar distance.
It may be supplemented that the method of generating a comprehensive distance based on the similar distance and the similar color and then matching again in the above embodiment includes:

computing the similar color A with the similar-color formula

Figure PCTCN2019106514-appb-000002

where R1, G1 and B1 are the three primary-color values corresponding to the upper-garment color of the target ID, and R2, G2 and B2 are the three primary-color values corresponding to the lower-garment color of the target ID; the comprehensive distance is computed as B = similar distance + λ·A, where λ denotes the color weight coefficient.

For ease of computation, this embodiment gives some parameter values. Exemplarily, the first confidence distance is 2000, the second confidence distance is 1000, the third confidence distance is 2000, the activation time T1 is 300, the activation time T2 is 5, and the color weight coefficient λ is 0.2.

It should also be explained that an active ID in the above embodiment refers to a portrait that has been confirmed and is currently active in the monitoring video stream; the active ID database includes the active IDs currently active in the monitoring video stream, the keypoint data corresponding to each active ID, and the face image and behavior image corresponding to each active ID. A to-be-matched ID refers to an unconfirmed portrait; the to-be-matched database includes the to-be-matched IDs, the keypoint data corresponding to each to-be-matched ID, and the face image and behavior image corresponding to each to-be-matched ID. A permanent ID refers to a portrait that has been confirmed and was active in the monitoring video stream only previously; the permanent ID database includes the permanent IDs, the keypoint data corresponding to each permanent ID, and the face image and behavior image corresponding to each permanent ID. By setting up multiple databases, this embodiment not only reduces system memory usage but also shortens matching time and improves matching accuracy. For example, if an active ID does not appear within T1 frames, it is deleted from the active ID database and stored in the permanent ID database; this active ID will no longer be matched afterwards, which reduces system memory usage and shortens matching time.
Embodiment 2

Referring to FIG. 1 and FIG. 3, this embodiment provides a customer path tracking system, including:

the keypoint data extraction unit 1, used to process the monitoring video stream through the keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data;

the first matching unit 2, used to traverse and match each target ID in the current frame image against the active IDs in the active ID database, move each successfully matched target ID into the active ID database to replace its corresponding active ID, and/or record each unmatched target ID into the to-be-matched database;

the second matching unit 3, used to traverse and match each target ID in the current frame image against the to-be-matched IDs in the to-be-matched database, move each successfully matched target ID into the to-be-matched database to replace its corresponding to-be-matched ID, and/or record each unmatched target ID into the to-be-matched database;

the first supplementary recording unit 4, used to traverse the to-be-matched IDs in the to-be-matched database, extract the to-be-matched IDs that appear continuously within the activation time of T2 frames and record them into the active ID database, and clear the to-be-matched database;

the second supplementary recording unit 5, used to traverse the active IDs in the active ID database, extract the active IDs that have not appeared continuously within the activation time of T1 frames and save them to the permanent ID database, and re-invoke the keypoint data extraction unit;

the path analysis unit 6, used to obtain from the permanent ID database the positions where each customer appears in different frame images and draw the path trajectory.

Preferably, the system further includes a model unit connected to the input of the keypoint data extraction unit; the model unit includes a training extraction module, an annotation module and a model training module;

the training extraction module is used to call Openpose software to extract multiple pieces of keypoint information from multiple human-body training pictures;

the annotation module is used to annotate the pieces of keypoint information with the COCO dataset;

the model training module is used to train the keypoint model based on the pieces of keypoint information and their corresponding annotation information.

Further, the keypoint data extraction unit 1 includes a portrait extraction module and a label matching module;

the portrait extraction module is used to call the keypoint model to process the RTSP-protocol monitoring video stream and extract the multiple portraits in the current frame image and the keypoint information within each portrait;

the label matching module is used to label each portrait to obtain multiple target IDs, while recording the correspondence between each target ID and its keypoint information.

Compared with the prior art, the beneficial effects of the customer path tracking system provided by this embodiment of the present invention are the same as those of the customer path tracking method provided by Embodiment 1 above, and are not repeated here.

A person of ordinary skill in the art can understand that all or part of the steps of the above inventive method can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, includes each step of the method of the above embodiments; the storage medium may be: ROM/RAM, a magnetic disk, an optical disk, a memory card, or the like.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with this technical field can easily think of changes or substitutions within the technical scope disclosed by the present invention, and these should all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

  1. A customer path tracking method, characterized by comprising:
    Step S1: processing a monitoring video stream through a keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data;
    Step S2: traversing and matching each target ID in the current frame image against the active IDs in an active ID database, moving each successfully matched target ID into the active ID database to replace its corresponding active ID, and/or recording each unmatched target ID into a to-be-matched database;
    Step S3: traversing and matching each target ID in the current frame image against the to-be-matched IDs in the to-be-matched database, moving each successfully matched target ID into the to-be-matched database to replace its corresponding to-be-matched ID, and/or recording each unmatched target ID into the to-be-matched database;
    Step S4: traversing the to-be-matched IDs in the to-be-matched database, extracting the to-be-matched IDs that appear continuously within the activation time of T2 frames and recording them into the active ID database, and clearing the to-be-matched database;
    Step S5: traversing the active IDs in the active ID database, extracting the active IDs that have not appeared continuously within the activation time of T1 frames and saving them to a permanent ID database;
    Step S6: obtaining from the permanent ID database the positions where each customer appears in different frame images, and drawing the path trajectory.
  2. The method according to claim 1, characterized in that, before step S1 of processing the monitoring video stream through the trained keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data, the method further comprises step S0:
    calling Openpose software to extract multiple pieces of keypoint information from multiple human-body training pictures;
    annotating the pieces of keypoint information with the COCO dataset;
    training a keypoint model based on the pieces of keypoint information and their corresponding annotation information.
  3. The method according to claim 1, characterized in that, in step S1, the method of processing the monitoring video stream through the keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data comprises:
    calling the keypoint model to process the RTSP-protocol monitoring video stream, and extracting the multiple portraits in the current frame image and the keypoint information within each portrait;
    labeling each portrait to obtain multiple target IDs, while recording the correspondence between each target ID and its keypoint information.
  4. The method according to claim 1, characterized in that, in step S2, the method of traversing and matching each target ID in the current frame image against the active IDs in the active ID database, moving each successfully matched target ID into the active ID database to replace its corresponding active ID, and/or recording each unmatched target ID into the to-be-matched database comprises:
    using a greedy matching algorithm or the Hungarian algorithm to traverse and match the keypoint data corresponding to each target ID in the current frame image against the keypoint data corresponding to the active IDs in the active ID database, obtaining the similar distance between each target ID and the active ID database;
    when the similar distance of said target ID is smaller than a first confidence distance, said target ID is matched successfully;
    when the similar distance of said target ID is greater than the first confidence distance but smaller than a suspect distance, a comprehensive distance must be generated from the similar distance and the similar color and the matching repeated, the match succeeding when the comprehensive distance is smaller than a third confidence distance and failing when the comprehensive distance is greater than the third confidence distance;
    extracting the successfully matched target IDs and their corresponding keypoint data and moving them into the active ID database to replace the corresponding to-be-matched IDs and corresponding keypoint data, and/or extracting the unmatched target IDs and their corresponding keypoint data and recording them into the to-be-matched database.
  5. The method according to claim 1, characterized in that, in step S3, the method of traversing and matching each target ID in the current frame image against the to-be-matched IDs in the to-be-matched database, moving each successfully matched target ID into the to-be-matched database to replace its corresponding to-be-matched ID, and/or recording each unmatched target ID into the to-be-matched database comprises:
    using the greedy matching algorithm or the Hungarian algorithm to traverse and match the keypoint data corresponding to each target ID in the current frame image against the keypoint data corresponding to the to-be-matched IDs in the to-be-matched database, obtaining the similar distance between each target ID and the to-be-matched database;
    when the similar distance of said target ID is smaller than a second confidence distance, said target ID is matched successfully;
    when the similar distance of said target ID is greater than the second confidence distance but smaller than the suspect distance, a comprehensive distance must be generated from the similar distance and the similar color and the matching repeated, the match succeeding when the comprehensive distance is smaller than the third confidence distance and failing when the comprehensive distance is greater than the third confidence distance;
    extracting the successfully matched target IDs and their corresponding keypoint data and moving them into the to-be-matched database to replace the corresponding to-be-matched IDs and corresponding keypoint data, and/or extracting the unmatched target IDs and their corresponding keypoint data and recording them into the to-be-matched database.
  6. The method according to claim 4 or 5, characterized in that the method of generating a comprehensive distance based on the similar distance and the similar color and then matching again comprises:
    computing the similar color A with the similar-color formula
    Figure PCTCN2019106514-appb-100001
    where R1, G1 and B1 are the three primary-color values corresponding to the upper-garment color of the target ID, and R2, G2 and B2 are the three primary-color values corresponding to the lower-garment color of the target ID;
    the comprehensive distance being computed as B = similar distance + λ·A, where λ denotes the color weight coefficient.
  7. The method according to claim 4 or 5, characterized in that the similar distance is a Euclidean distance or a cosine distance.
  8. A customer path tracking system, characterized by comprising:
    a keypoint data extraction unit, used to process a monitoring video stream through a keypoint model to obtain the target IDs in the current frame image and their corresponding keypoint data;
    a first matching unit, used to traverse and match each target ID in the current frame image against the active IDs in an active ID database, move each successfully matched target ID into the active ID database to replace its corresponding active ID, and/or record each unmatched target ID into a to-be-matched database;
    a second matching unit, used to traverse and match each target ID in the current frame image against the to-be-matched IDs in the to-be-matched database, move each successfully matched target ID into the to-be-matched database to replace its corresponding to-be-matched ID, and/or record each unmatched target ID into the to-be-matched database;
    a first supplementary recording unit, used to traverse the to-be-matched IDs in the to-be-matched database, extract the to-be-matched IDs that appear continuously within the activation time of T2 frames and record them into the active ID database, and clear the to-be-matched database;
    a second supplementary recording unit, used to traverse the active IDs in the active ID database, extract the active IDs that have not appeared continuously within the activation time of T1 frames and save them to a permanent ID database, and re-invoke the keypoint data extraction unit;
    a path analysis unit, used to obtain from the permanent ID database the positions where each customer appears in different frame images and draw the path trajectory.
  9. The system according to claim 8, characterized in that it further comprises a model unit connected to the input of the keypoint data extraction unit, the model unit comprising a training extraction module, an annotation module and a model training module;
    the training extraction module is used to call Openpose software to extract multiple pieces of keypoint information from multiple human-body training pictures;
    the annotation module is used to annotate the pieces of keypoint information with the COCO dataset;
    the model training module is used to train a keypoint model based on the pieces of keypoint information and their corresponding annotation information.
  10. The system according to claim 8, characterized in that the keypoint data extraction unit comprises a portrait extraction module and a label matching module;
    the portrait extraction module is used to call the keypoint model to process the RTSP-protocol monitoring video stream and extract the multiple portraits in the current frame image and the keypoint information within each portrait;
    the label matching module is used to label each portrait to obtain multiple target IDs, while recording the correspondence between each target ID and its keypoint information.
PCT/CN2019/106514 2018-10-29 2019-09-18 Customer path tracking method and system WO2020088136A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA3158925A CA3158925A1 (en) 2018-10-29 2019-09-18 Customer path tracking method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811267609.XA 2018-10-29 Customer path tracking method and system
CN201811267609.X 2018-10-29

Publications (1)

Publication Number Publication Date
WO2020088136A1 (zh)

Family

ID=65845748

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/106514 WO2020088136A1 (zh) 2018-10-29 2019-09-18 Customer path tracking method and system

Country Status (3)

Country Link
CN (1) CN109544595B (zh)
CA (1) CA3158925A1 (zh)
WO (1) WO2020088136A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111898471A (zh) * 2020-07-09 2020-11-06 北京捷通华声科技股份有限公司 Pedestrian tracking method and device
CN112861637A (zh) * 2021-01-14 2021-05-28 国网浙江省电力有限公司湖州供电公司 In-station portrait feature comparison and recognition method based on multi-video person tracking
CN111898471B (zh) * 2020-07-09 2024-06-07 北京捷通华声科技股份有限公司 Pedestrian tracking method and device

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544595B (zh) 2018-10-29 2020-06-16 苏宁易购集团股份有限公司 Customer path tracking method and system
CN110414324A (zh) * 2019-06-17 2019-11-05 深圳壹账通智能科技有限公司 Method, apparatus, computer device and storage medium for monitoring a video recording process
CN112527785B (zh) * 2020-12-16 2023-07-21 平安银行股份有限公司 Data supplementary recording method and apparatus, electronic device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102496009A (zh) * 2011-12-09 2012-06-13 北京汉邦高科数字技术股份有限公司 Multi-face tracking method in intelligent bank video surveillance
US20160012280A1 (en) * 2013-02-28 2016-01-14 Hitachi Kokusai Electric Inc. Person search method and device for searching person staying on platform
CN106448160A (zh) * 2016-09-22 2017-02-22 江苏理工学院 Target person tracking method combining vehicle driving trajectories and surveillance video data
CN106778470A (zh) * 2016-11-15 2017-05-31 东软集团股份有限公司 Face recognition method and device
CN109544595A (zh) * 2018-10-29 2019-03-29 苏宁易购集团股份有限公司 Customer path tracking method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7916897B2 (en) * 2006-08-11 2011-03-29 Tessera Technologies Ireland Limited Face tracking for controlling imaging parameters
CN107977610B (zh) * 2017-11-21 2021-10-12 电子科技大学 Face tracking method based on massive video processing
CN108055501A (zh) * 2017-11-22 2018-05-18 天津市亚安科技有限公司 Video surveillance system and method for target detection and tracking
CN108460933B (zh) * 2018-02-01 2019-03-05 王曼卿 Management system and method based on image processing
CN108710868B (zh) * 2018-06-05 2020-09-04 中国石油大学(华东) Human-body keypoint detection system and method for complex scenes

Also Published As

Publication number Publication date
CN109544595A (zh) 2019-03-29
CN109544595B (zh) 2020-06-16
CA3158925A1 (en) 2020-05-07

Similar Documents

Publication Publication Date Title
WO2020088136A1 (zh) Customer path tracking method and system
Gall et al. Hough forests for object detection, tracking, and action recognition
Santra et al. A comprehensive survey on computer vision based approaches for automatic identification of products in retail store
Feris et al. Large-scale vehicle detection, indexing, and search in urban surveillance videos
CN108288051B (zh) 行人再识别模型训练方法及装置、电子设备和存储介质
CN109978918A (zh) 一种轨迹追踪方法、装置和存储介质
CN103988232B (zh) 使用运动流形来改进图像匹配
CN106557728B (zh) 查询图像处理和图像检索方法和装置以及监视系统
JP2016533782A (ja) 映像医療機器により取得した画像を特徴づける方法およびシステム
Wu et al. Optimizing mean reciprocal rank for person re-identification
TW201227629A (en) Method, system and computer program product for reconstructing moving path of vehicle
JP6500374B2 (ja) 画像処理装置及び画像処理プログラム
US20210326646A1 (en) Automated generation of training data for contextually generated perceptions
JP6823686B2 (ja) オブジェクト検出装置及び方法及び記憶媒体
Buehler et al. An automated program to find animals and crop photographs for individual recognition
JP2021503139A (ja) 画像処理装置、画像処理方法および画像処理プログラム
JP2014016968A (ja) 人物検索装置及びデータ収集装置
Hsu et al. Human body motion parameters capturing using kinect
Wu et al. Privacy leakage of sift features via deep generative model based image reconstruction
Del Pero et al. Behavior discovery and alignment of articulated object classes from unstructured video
KR101313285B1 (ko) 하이퍼 비디오 정보 파일 제작장치 및 제작방법, 그 기록매체
JP2017130061A (ja) 画像処理システム、画像処理方法およびプログラム
Cartas et al. Detecting hands in egocentric videos: Towards action recognition
Yang et al. Semi-automatic image and video annotation system for generating ground truth information
Zhang et al. A review of human action recognition in video

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19879020

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19879020

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 3158925

Country of ref document: CA

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.01.2022)
