WO2019080669A1 - Method, system and terminal device for person re-identification in a closed place - Google Patents

Method, System and Terminal Device for Person Re-identification in a Closed Place (封闭场所内人物再识别的方法、系统及终端设备)

Info

Publication number
WO2019080669A1
Authority
WO
WIPO (PCT)
Prior art keywords
character
feature information
person
area
identity number
Prior art date
Application number
PCT/CN2018/106143
Other languages
English (en)
French (fr)
Inventor
周佩明
陈宇
叶韵
张爱喜
武军晖
翁志
Original Assignee
北京京东尚科信息技术有限公司
北京京东世纪贸易有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京京东尚科信息技术有限公司 and 北京京东世纪贸易有限公司
Priority to US16/758,556 (published as US11263446B2)
Publication of WO2019080669A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/08 - Learning methods
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/292 - Multi-camera tracking
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 - Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 - Local feature extraction by matching or filtering
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 - Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 - Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G06T2207/10024 - Color image
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30241 - Trajectory

Definitions

  • The present disclosure relates to the field of video processing, and in particular to a method, system and terminal device for person re-identification in a closed place.
  • Video processing is currently the most popular technology in the multimedia field, and it falls into two main categories: video encoding/decoding and target information recognition.
  • The former, which aims to save transmission bandwidth for video data, relies mainly on traditional information theory and is relatively mature.
  • The latter, which aims to extract user information, lies at the intersection of artificial intelligence, computer science, information theory and other disciplines, and is still in a stage of rapid development.
  • Person identification technology, especially the technology for re-identifying persons in public places, can be used for intelligent checkout, security, passenger-flow analysis and the like in shopping malls, schools, hospitals, subways, etc.
  • A method for person re-identification in a closed place, including:
  • When a person in the closed place needs to be re-identified, the feature information of the person acquired by the camera of the area where the person is located is matched against the recorded feature information, and the identity number bound to the feature information with the highest matching degree is used as the person's identity number.
  • The method further includes deleting the person's feature information and marking the departure when the person leaves.
  • Photographing any area with at least one camera includes: photographing any area with two cameras having different shooting angles.
  • Tracking the trajectories of persons in the entrance area to confirm a newly entered person includes: judging whether a person is newly entered according to the person's trajectory shown in consecutive multi-frame shots of the entrance area.
  • Acquiring a person's feature information is performed by convolutional neural network technology.
  • The person's feature information includes: body shape, clothing and/or facial appearance.
  • Matching the feature information of the person acquired by the camera of the area where the person is located against the recorded feature information includes: performing the matching on all persons in that area.
  • The feature information and identity numbers are stored in a person-feature database.
  • A system for person re-identification in a closed place, comprising:
  • a monitoring module, configured to divide the closed place into a plurality of areas including an entrance area and to photograph every area with at least one camera;
  • a new-entrant confirmation module, configured to track the trajectories of persons in the entrance area to confirm a newly entered person, assign the person a unique identity number, and acquire and record the person's feature information and bind it to the identity number; and
  • a re-identification module, configured to, when a person in the closed place needs to be re-identified, match the feature information of the person acquired by the camera of the area where the person is located against the recorded feature information, and use the identity number bound to the feature information with the highest matching degree as the person's identity number.
  • The system further includes a feature database for storing the feature information and identity numbers.
  • A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the following method steps:
  • when a person in the closed place needs to be re-identified, matching the feature information of the person acquired by the camera of the area where the person is located against the recorded feature information, and using the identity number bound to the feature information with the highest matching degree as the person's identity number.
  • A terminal device, including:
  • a processor; and a memory storing instructions for the processor to control the following operations:
  • when a person in the closed place needs to be re-identified, matching the feature information of the person acquired by the camera of the area where the person is located against the recorded feature information, and using the identity number bound to the feature information with the highest matching degree as the person's identity number.
  • The occlusion problem between persons can be solved by installing multiple cameras in the venue to cover different areas from different angles, such that any area is covered by two or more cameras from different angles.
  • Performing association matching on all persons in the entire frame, matching every person to an identity, helps find the globally optimal match and, compared with re-identifying the person in a single detection box alone, greatly improves the error tolerance of the feature comparison.
  • FIG. 1 shows a flowchart of a method for person re-identification in a closed place according to an example embodiment of the present disclosure.
  • FIG. 2 shows a photograph of the entrance area of a closed place.
  • FIG. 3 shows a photograph of a non-entrance area of a closed place.
  • FIG. 4 shows another photograph of the entrance area of a closed place.
  • FIG. 5 shows a flowchart of a method for person re-identification in a closed place according to another example embodiment of the present disclosure.
  • FIG. 6 shows a block diagram of a system for person re-identification in a closed place according to an example embodiment of the present disclosure.
  • FIG. 7 shows a terminal device according to an example embodiment of the present disclosure.
  • An object of the present disclosure is to provide a method, system and terminal device for person re-identification in a closed place.
  • The method for person re-identification in a closed place includes: dividing the closed place into a plurality of areas including an entrance area, every area being photographed by at least one camera; tracking the trajectories of persons in the entrance area to confirm a newly entered person, assigning the person a unique identity number, and acquiring and recording the person's feature information and binding it to the identity number; and, when a person in the closed place needs to be re-identified, matching the feature information of the person acquired by the camera of the area where the person is located against the recorded feature information, and using the identity number bound to the feature information with the highest matching degree as the person's identity number.
  • The method, system and terminal device for person re-identification in a closed place of the present disclosure are described in detail below with reference to FIGS. 1-7, in which FIG. 1 shows a flowchart of a method for person re-identification in a closed place according to an example embodiment of the present disclosure; FIG. 2 shows a photograph of the entrance area of a closed place; FIG. 3 shows a photograph of a non-entrance area of a closed place; FIG. 4 shows another photograph of the entrance area of a closed place; FIG. 5 shows a flowchart of a method for person re-identification in a closed place according to another example embodiment of the present disclosure; FIG. 6 shows a block diagram of a system for person re-identification in a closed place according to an example embodiment of the present disclosure; and FIG. 7 shows a terminal device according to an example embodiment of the present disclosure.
  • The example embodiments of the present disclosure are described by taking person re-identification in an unmanned convenience store as an example, but the disclosure is not limited thereto; the method, system and terminal device for person re-identification in a closed place of the present disclosure can also be applied to fields such as intelligent checkout, security and passenger-flow analysis in shopping malls, schools, hospitals, subways and the like.
  • The method for person re-identification in a closed place of the present disclosure is first described in detail with reference to FIGS. 1-5, in which FIG. 1 shows a flowchart of a method for person re-identification in a closed place according to an example embodiment of the present disclosure; FIG. 2 shows a photograph of the entrance area of a closed place; FIG. 3 shows a photograph of a non-entrance area of a closed place; FIG. 4 shows another photograph of the entrance area of a closed place; and FIG. 5 shows a flowchart of a method for person re-identification in a closed place according to another example embodiment of the present disclosure.
  • Before the detailed description, the model used by the present disclosure for person re-identification in a closed place is briefly introduced: when a person enters the closed place, the system notifies the algorithm that there is a new entrant; the present disclosure binds the new entrant based on person tracking and collects the person's feature information. When a person leaves, the system notifies that someone has left, whereupon the person's feature information is deleted and the departure is marked. While a person is inside the place, the identity of the person at any position can be given at any moment; for example: at 9:30 on October 10, 2017, the person appearing at a certain position in a certain area is Tonny (merely an assumed name; instead of names, identity numbers may be used directly, naming the persons 1, 2, ..., n).
  • FIG. 1 shows a flowchart of a method for person re-identification in a closed place according to an example embodiment of the present disclosure.
  • In S102, the closed place is divided into a plurality of areas including an entrance area, and every area is photographed by at least one camera.
  • The main camera monitors the entrance area to detect newly entered persons and tracks each person to acquire the person's features (e.g., front, side and back clothing, and physical appearance).
  • The entrance area is chosen for feature collection because it is relatively open, there is less interference such as occlusion and overlap between persons, and feature confusion caused by tracking errors is avoided.
  • As shown in FIG. 2, the large white box is the tracking area,
  • and the small white box is the (roughly drawn) position of the entrance.
  • Newly entering persons can be detected, and their feature information acquired, by analyzing the trajectories of people within the area.
  • The number of cameras monitoring the entrance area may be one or more, and the disclosure is not limited in this respect.
  • In S104, the person's feature information is acquired and recorded and bound to the identity number.
  • Existing person-tracking technology can be divided by time into real-time tracking and multi-frame global tracking.
  • The former directly predicts the person's position in the next frame from the person's past positions and feature information, and can be used for real-time video surveillance.
  • The latter performs "detection + association" on the persons in all frames of a captured video, and can only be used for after-the-fact video analysis.
  • Multi-frame global tracking can make full use of the information in multiple frames, better handles occlusion between persons and long- or short-term disappearance from the frame, and has stronger resistance to interference.
  • Its disadvantage is that it cannot give tracking results in real time.
  • The tracking problem to be solved by the present disclosure is the binding of new entrants to identities and the collection of feature information, so there is no need to give the tracking path in real time; the present disclosure therefore adopts the multi-frame global tracking scheme.
  • After a new entrant is confirmed, the present disclosure starts tracking and detection in the doorway area for a predetermined period of about 5 seconds (a period of 3-10 seconds may be chosen as needed), collects the new entrant's feature information along the tracking path, computes and confirms the new entrant's feature information after the 5 seconds, binds it to the person's unique identity number, and then records the feature information together with its corresponding identity number.
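The entrance flow above (confirm a new entrant from the trajectory, then bind a fresh unique identity number) can be sketched as follows. The door-line geometry, the function names, and the track representation are illustrative assumptions, not taken from the disclosure:

```python
import itertools

# A track is a list of (x, y) foot positions from consecutive frames of
# the entrance camera. The door is modelled as the line y = door_y, with
# y < door_y lying outside the place (an assumed, simplified geometry).

_id_counter = itertools.count(1)  # source of unique identity numbers

def is_new_entrant(track, door_y):
    """A track is a new entrant if it starts outside the door line and
    ends inside it, i.e. it crosses from outside toward the interior."""
    if len(track) < 2:
        return False
    start_outside = track[0][1] < door_y
    end_inside = track[-1][1] >= door_y
    return start_outside and end_inside

def bind_new_entrant(track, door_y):
    """Assign a fresh unique identity number to a confirmed new entrant;
    return None for tracks that do not enter through the door."""
    if not is_new_entrant(track, door_y):
        return None
    return next(_id_counter)
```

In a real system the trajectory check would be paired with the auxiliary conditions the disclosure alludes to, and the feature collection over the 3-10 second window would follow a positive result.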
  • The present disclosure collects the convolutional neural network (CNN) features of the person within the detection boxes of these consecutive frames as the person's feature information.
  • Here, "detection" is person detection based on a convolutional neural network (CNN).
  • "Feature information" is the set of features of the person's appearance, clothing, body shape and other aspects extracted by CNN technology.
  • The person's feature information includes: body shape, clothing and/or facial appearance.
  • Body shape may include the person's height and build, and clothing may include the type and color of the person's clothes.
  • The feature information and identity numbers are stored in a person-feature database.
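As a minimal illustration of such a profile database, the sketch below binds identity numbers to feature vectors and deletes a record when the person leaves, as the method describes. The class and method names are invented for the example and are not from the disclosure:

```python
class PersonProfileDB:
    """Minimal in-memory sketch of the person-feature database: binds an
    identity number to recorded feature information, and deletes the record
    (releasing the identity number) when the person departs."""

    def __init__(self):
        self._profiles = {}  # identity number -> feature vector

    def bind(self, identity_no, features):
        """Record feature information bound to an identity number."""
        self._profiles[identity_no] = features

    def features(self):
        """All currently recorded profiles, for matching."""
        return dict(self._profiles)

    def mark_departure(self, identity_no):
        """Delete the person's feature information on departure; returns
        the removed features, or None if the identity was unknown."""
        return self._profiles.pop(identity_no, None)
```

A production system would persist this store and index it for fast nearest-neighbor lookup, but the lifecycle (bind on entry, query while inside, delete on exit) is the same.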
  • In S106, when a person in the closed place needs to be re-identified, the feature information of the person acquired by the camera of the area where the person is located is matched against the recorded feature information, and the identity number bound to the feature information with the highest matching degree is used as the person's identity number.
  • The identity of the person framed by any person detection box can be recognized by comparing feature information.
  • The feature information of the person acquired by the designated duty camera of the area where the person is located is matched against the recorded feature information, and the identity number bound to the feature information with the highest matching degree is used as the person's identity number.
  • For example, as shown in FIG. 3, the feature information of the man acquired by the designated duty camera of his area can be
  • matched against the recorded feature information, the identity number bound to the feature information with the highest matching degree is used as his identity number, and his identity is determined to be customer XXX (or No. XX).
  • Acquiring the man's feature information is likewise performed by convolutional neural network technology.
  • The present disclosure performs association matching on all persons in the entire frame. Matching all persons to identities in this way helps find the globally optimal match and, compared with re-identifying the person in a single detection box alone, greatly improves the error tolerance of the feature-information comparison. This process can run in real time.
  • The collected feature information is used to perform optimal association matching on all persons in the area. That is, although we only want to re-identify the woman in black at the far right of the frame in FIG. 4, matching all persons in her area can greatly improve the error tolerance of the feature comparison. Because the woman in black is not easily distinguished from others wearing black or dark clothes (that is, more than one identity number has black as its clothing-color feature), mismatches can easily occur; therefore all persons are matched. The others wearing black or dark clothes may have other kinds of features that clearly distinguish them from everyone else, so their identities (identity numbers) are quickly confirmed by those distinguishing features and excluded. At that point only one identity number with black as its clothing-color feature remains, and that identity number is the identity number of the woman in black.
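The global-association idea can be illustrated with a toy joint-assignment routine: instead of matching one detection in isolation, every detection in the frame is assigned a distinct identity so that total similarity is maximal. The disclosure does not name an algorithm; brute-force search over permutations stands in here for a proper assignment solver such as the Hungarian algorithm, and all feature vectors are invented:

```python
from itertools import permutations

def similarity(a, b):
    """Cosine similarity between two feature vectors (pure Python)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def global_match(detections, profiles):
    """Jointly assign each detected person in the frame to a distinct
    recorded identity so that the summed similarity is maximal.
    detections: list of feature vectors; profiles: {identity_no: vector}.
    Returns {detection_index: identity_no}."""
    ids = list(profiles)
    best, best_score = None, float("-inf")
    # Brute force is exponential; real systems would use an assignment
    # solver (e.g. the Hungarian algorithm) for the same objective.
    for perm in permutations(ids, len(detections)):
        score = sum(similarity(d, profiles[i])
                    for d, i in zip(detections, perm))
        if score > best_score:
            best, best_score = perm, score
    return dict(zip(range(len(detections)), best))
```

Because every identity can be used only once, a confidently matched "person in dark clothes" removes that identity from contention, which is exactly the elimination effect the paragraph above describes.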
  • FIG. 6 shows a block diagram of a system for person re-identification in a closed place according to an example embodiment of the present disclosure.
  • The system 600 for person re-identification in a closed place may include a monitoring module 602, a new-entrant confirmation module 604 and a re-identification module 606.
  • The monitoring module 602 is configured to divide the closed place into a plurality of areas including an entrance area and to photograph every area with at least one camera.
  • The new-entrant confirmation module 604 is configured to track the trajectories of persons in the entrance area to confirm a newly entered person, assign the person a unique identity number, and acquire and record the person's feature information and bind it to the identity number.
  • The re-identification module 606 is configured to, when a person in the closed place needs to be re-identified, match the feature information of the person acquired by the camera of the area where the person is located against the recorded feature information, and use the identity number bound to the feature information with the highest matching degree as the person's identity number.
  • The system 600 may further include a feature database 608 for storing the feature information and identity numbers.
  • The present disclosure also provides a computer-readable medium, which may be included in the system described in the above embodiments, or may exist separately without being incorporated into the system.
  • The computer-readable medium carries one or more programs which, when executed by a system, cause the system to: divide the closed place into a plurality of areas including an entrance area, every area being photographed by at least one camera; track the trajectories of persons in the entrance area to confirm a newly entered person, assign the person a unique identity number, and acquire and record the person's feature information and bind it to the identity number; and,
  • when a person in the closed place needs to be re-identified, match the feature information of the person acquired by the camera of the area where the person is located against the recorded feature information, and use the identity number bound to the feature information with the highest matching degree as the person's identity number.
  • FIG. 7 shows a terminal device according to an example embodiment of the present disclosure.
  • The terminal device 700 may include a processor 710 and a memory 720. Additionally, according to an embodiment, the terminal device may further include a transmitter and a receiver.
  • The processor 710 can call the instructions stored in the memory 720 to control related operations, such as controlling the transmitter and receiver for signal transceiving.
  • The memory 720 stores instructions for the processor 710 to control the following operations: dividing the closed place into a plurality of areas including an entrance area, every area being photographed by at least one camera; tracking the trajectories of persons in the entrance area to confirm a newly entered person, assigning the person a unique identity number, and acquiring and recording the person's feature information and binding it to the identity number; and, when a person in the closed place needs to be re-identified, matching the feature information of the person acquired by the camera of the area where the person is located against the recorded feature information, and using the identity number bound to the feature information with the highest matching degree as the person's identity number.
  • It is easy to understand that the memory 720 can also store instructions for the processor 710 to control other operations according to embodiments of the present disclosure, details of which are not repeated here.
  • The occlusion problem between persons can be solved by installing multiple cameras in the venue to cover different areas from different angles, such that any area is covered by two or more cameras from different angles.
  • Performing association matching on all persons in the entire frame, matching every person to an identity, helps find the globally optimal match and, compared with re-identifying the person in a single detection box alone, greatly improves the error tolerance of the feature comparison.

Abstract

The present disclosure relates to the field of video processing technology, and provides a method, system and terminal device for person re-identification in a closed place. The method includes: dividing the closed place into a plurality of areas including an entrance area, every area being photographed by at least one camera; tracking the trajectories of persons in the entrance area to confirm a newly entered person, assigning the person a unique identity number, and acquiring and recording the person's feature information and binding it to the identity number; and, when a person in the closed place needs to be re-identified, matching the feature information of the person acquired by the camera of the area where the person is located against the recorded feature information, and taking the identity number bound to the feature information with the highest matching degree as the person's identity number. By combining trajectory tracking with feature matching, the disclosure addresses real-time person re-identification in complex scenes and achieves good re-identification results.

Description

Method, System and Terminal Device for Person Re-identification in a Closed Place

Technical Field
The present disclosure relates to the field of video processing, and in particular to a method, system and terminal device for person re-identification in a closed place.
Background Art
Video processing is currently the most popular technology in the multimedia field, falling into two main categories: video encoding/decoding and target information recognition. The former, which aims to save transmission bandwidth for video data, relies mainly on traditional information theory and is by now fairly mature; the latter, which aims to extract user information, lies at the intersection of artificial intelligence, computer science, information theory and other disciplines, and is still in a stage of rapid development.
With the rapid development of artificial intelligence and the growing attention it attracts, person identification technology has likewise developed rapidly and received increasing attention. Person identification technology, and in particular the technology for re-identifying persons in closed public places, can be used for intelligent checkout, security, passenger-flow analysis and the like in shopping malls, schools, hospitals, subways, etc.
Existing person re-identification mostly takes one of two approaches: trajectory tracking of persons, or person features (such as clothing color, body shape, etc.). Both approaches are still under development and have not reached maturity.
For real-time person identification applied to video surveillance, many problems remain poorly solved, as follows.
a. When the monitored area is rather narrow, a single camera view has blind spots, persons occlude and cross one another, and persons enter and leave the place at random. Existing trajectory-tracking technology has difficulty correctly tracking the trajectories of multiple persons within the place.
b. Features such as clothing and body shape extracted by existing neural networks are hard to tell apart between persons, so identification based purely on person features has poor accuracy.
c. The efficiency of the existing technology also makes real-time processing difficult.
Therefore, a new method of person re-identification is needed.
The above information disclosed in this Background section is only for enhancement of understanding of the background of the present disclosure, and therefore it may contain information that does not constitute prior art already known to a person of ordinary skill in the art.
Summary
An object of the present disclosure is to provide a method, system and terminal device for person re-identification in a closed place, thereby overcoming, at least to some extent, one or more problems caused by the limitations and defects of the related art.
Other features and advantages of the present disclosure will become apparent from the following detailed description, or may be learned in part through practice of the disclosure.
According to a first aspect of the present disclosure, a method for person re-identification in a closed place is disclosed, comprising:
dividing the closed place into a plurality of areas including an entrance area, every area being photographed by at least one camera;
tracking the trajectories of persons in the entrance area to confirm a newly entered person and assign the person a unique identity number, and acquiring and recording the person's feature information and binding it to the identity number; and
when a person in the closed place needs to be re-identified, matching the feature information of the person acquired by the camera of the area where the person is located against the recorded feature information, and taking the identity number bound to the feature information with the highest matching degree as the person's identity number.
According to an example embodiment of the present disclosure, the method further comprises: when a person leaves, deleting the person's feature information and marking the departure.
According to an example embodiment of the present disclosure, every area being photographed by at least one camera comprises:
every area being photographed by two cameras with different shooting angles.
According to an example embodiment of the present disclosure, tracking the trajectories of persons in the entrance area to confirm a newly entered person comprises: judging whether a person is newly entered according to the person's trajectory shown in consecutive multi-frame shots of the entrance area.
According to an example embodiment of the present disclosure, the acquiring of a person's feature information is performed by convolutional neural network technology.
According to an example embodiment of the present disclosure, the person's feature information includes: body shape, clothing and/or facial appearance.
According to an example embodiment of the present disclosure, matching the feature information of the person acquired by the camera of the area where the person is located against the recorded feature information comprises: performing the matching on all persons in that area.
According to an example embodiment of the present disclosure, the feature information and identity numbers are stored in a person-feature database.
According to a second aspect of the present disclosure, a system for person re-identification in a closed place is disclosed, comprising:
a monitoring module, configured to divide the closed place into a plurality of areas including an entrance area and to photograph every area with at least one camera;
a new-entrant confirmation module, configured to track the trajectories of persons in the entrance area to confirm a newly entered person and assign the person a unique identity number, and to acquire and record the person's feature information and bind it to the identity number; and
a re-identification module, configured to, when a person in the closed place needs to be re-identified, match the feature information of the person acquired by the camera of the area where the person is located against the recorded feature information, and take the identity number bound to the feature information with the highest matching degree as the person's identity number.
According to an example embodiment of the present disclosure, the system further comprises a feature database for storing the feature information and identity numbers.
According to a third aspect of the present disclosure, a computer-readable storage medium having a computer program stored thereon is disclosed, wherein the program, when executed by a processor, implements the following method steps:
dividing the closed place into a plurality of areas including an entrance area, every area being photographed by at least one camera;
tracking the trajectories of persons in the entrance area to confirm a newly entered person and assign the person a unique identity number, and acquiring and recording the person's feature information and binding it to the identity number; and
when a person in the closed place needs to be re-identified, matching the feature information of the person acquired by the camera of the area where the person is located against the recorded feature information, and taking the identity number bound to the feature information with the highest matching degree as the person's identity number.
According to a fourth aspect of the present disclosure, a terminal device is disclosed, comprising:
a processor;
a memory storing instructions for the processor to control the following operations:
dividing the closed place into a plurality of areas including an entrance area, every area being photographed by at least one camera;
tracking the trajectories of persons in the entrance area to confirm a newly entered person and assign the person a unique identity number, and acquiring and recording the person's feature information and binding it to the identity number; and
when a person in the closed place needs to be re-identified, matching the feature information of the person acquired by the camera of the area where the person is located against the recorded feature information, and taking the identity number bound to the feature information with the highest matching degree as the person's identity number.
According to some example embodiments of the present disclosure, a multi-camera scheme combining trajectory tracking with feature matching is used to solve the problem of real-time person re-identification in complex scenes, achieving good re-identification results.
According to some example embodiments of the present disclosure, by installing multiple cameras in the place to cover different areas from different angles, such that every area can be covered by two or more cameras from different angles, the occlusion problem between persons can be solved.
According to some example embodiments of the present disclosure, by performing association matching on all persons in the entire frame, matching all persons to identities helps find the globally optimal match and, compared with re-identifying the person in a single detection box alone, greatly improves the error tolerance of the feature comparison.
It is to be understood that the foregoing general description and the following detailed description are exemplary only and do not limit the present disclosure.
Brief Description of the Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the detailed description of example embodiments thereof with reference to the accompanying drawings.
FIG. 1 shows a flowchart of a method for person re-identification in a closed place according to an example embodiment of the present disclosure.
FIG. 2 shows a photograph of the entrance area of a closed place.
FIG. 3 shows a photograph of a non-entrance area of a closed place.
FIG. 4 shows another photograph of the entrance area of a closed place.
FIG. 5 shows a flowchart of a method for person re-identification in a closed place according to another example embodiment of the present disclosure.
FIG. 6 shows a block diagram of a system for person re-identification in a closed place according to an example embodiment of the present disclosure.
FIG. 7 shows a terminal device according to an example embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the example embodiments can be implemented in many forms and should not be construed as limited to the examples set forth herein; rather, these example embodiments are provided so that the description of the present disclosure will be more thorough and complete and will fully convey the concept of the example embodiments to those skilled in the art. The drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and their repeated description will be omitted.
Furthermore, the described features, structures or characteristics may be combined in any suitable manner in one or more example embodiments. In the following description, numerous specific details are provided to give a full understanding of the example embodiments of the present disclosure. However, those skilled in the art will appreciate that the technical solutions of the present disclosure may be practiced with one or more of the specific details omitted, or other methods, components, steps and so on may be employed. In other cases, well-known structures, methods, implementations or operations are not shown or described in detail to avoid obscuring aspects of the present disclosure.
Some of the block diagrams shown in the drawings are functional entities that do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
An object of the present disclosure is to provide a method, system and terminal device for person re-identification in a closed place. The method includes: dividing the closed place into a plurality of areas including an entrance area, every area being photographed by at least one camera; tracking the trajectories of persons in the entrance area to confirm a newly entered person and assign the person a unique identity number, and acquiring and recording the person's feature information and binding it to the identity number; and, when a person in the closed place needs to be re-identified, matching the feature information of the person acquired by the camera of the area where the person is located against the recorded feature information, and taking the identity number bound to the feature information with the highest matching degree as the person's identity number. By combining trajectory tracking with feature matching, the problem of real-time person re-identification in complex scenes is solved and good re-identification results are achieved. Meanwhile, by installing multiple cameras in the place to cover different areas from different angles, such that every area can be covered by two or more cameras from different angles, the occlusion problem between persons can be solved. Furthermore, performing association matching on all persons in the entire frame, matching all persons to identities, helps find the globally optimal match and, compared with re-identifying the person in a single detection box alone, greatly improves the error tolerance of the feature comparison.
The method, system and terminal device for person re-identification in a closed place of the present disclosure are described in detail below with reference to FIGS. 1-7, in which FIG. 1 shows a flowchart of a method for person re-identification in a closed place according to an example embodiment of the present disclosure; FIG. 2 shows a photograph of the entrance area of a closed place; FIG. 3 shows a photograph of a non-entrance area of a closed place; FIG. 4 shows another photograph of the entrance area of a closed place; FIG. 5 shows a flowchart of a method for person re-identification in a closed place according to another example embodiment of the present disclosure; FIG. 6 shows a block diagram of a system for person re-identification in a closed place according to an example embodiment of the present disclosure; and FIG. 7 shows a terminal device according to an example embodiment of the present disclosure.
It should be particularly noted that the example embodiments of the present disclosure are described by taking person re-identification in an unmanned convenience store as an example, but the disclosure is not limited thereto; the method, system and terminal device for person re-identification in a closed place of the present disclosure can also be applied to fields such as intelligent checkout, security and passenger-flow analysis in shopping malls, schools, hospitals, subways and the like.
The method for person re-identification in a closed place of the present disclosure is first described in detail with reference to FIGS. 1-5, in which FIG. 1 shows a flowchart of a method for person re-identification in a closed place according to an example embodiment of the present disclosure; FIG. 2 shows a photograph of the entrance area of a closed place; FIG. 3 shows a photograph of a non-entrance area of a closed place; FIG. 4 shows another photograph of the entrance area of a closed place; and FIG. 5 shows a flowchart of a method for person re-identification in a closed place according to another example embodiment of the present disclosure.
Before the detailed description, the model used by the present disclosure for person re-identification in a closed place is briefly introduced: when a person enters the closed place, the system notifies the algorithm that there is a new entrant; the present disclosure binds the new entrant based on person tracking and collects the person's feature information. When a person leaves, the system notifies that someone has left, whereupon the person's feature information is deleted and the departure is marked. While a person is inside the place, the identity of the person at any position can be given at any moment; for example: at 9:30 on October 10, 2017, the person appearing at a certain position in a certain area is Tonny (merely an assumed name; instead of names, identity numbers may be used directly, naming the persons 1, 2, ..., n).
FIG. 1 shows a flowchart of a method for person re-identification in a closed place according to an example embodiment of the present disclosure.
In S102, the closed place is divided into a plurality of areas including an entrance area, and every area is photographed by at least one camera.
Since the aforementioned place may be rather narrow, a single camera view may have blind spots, and persons may overlap or occlude one another, a multi-camera arrangement is usually adopted, installing multiple cameras in the place to cover different areas from different angles. The principle is that every area can be covered by two or more cameras from different angles, which solves the occlusion problem between persons.
Besides leaving no blind spots in the place, these cameras have different duties, as follows:
a. A main camera monitors the entrance area to detect newly entered persons and tracks each such person to acquire the person's features (e.g., front, side and back clothing, and physical appearance). The entrance area is chosen for feature collection because it is relatively open, there is less interference such as occlusion and overlap between persons, and feature confusion caused by tracking errors is thus avoided.
As shown in FIG. 2, the large white box is the tracking area, and the small white box is the (roughly drawn) position of the entrance. Newly entering persons can be detected, and their feature information acquired, by analyzing the trajectories of people within the tracking area.
b. Other cameras are used for person re-identification. Each area in the place is assigned a designated duty camera according to the cameras' angles. When the identity of a person in a certain area needs to be recognized, the duty camera corresponding to that area can be found and its frame at the corresponding moment retrieved, as shown in FIG. 3.
It should be particularly noted here that there may be one or multiple cameras monitoring the entrance area, and the present disclosure is not limited in this respect. For example, a single camera may serve both to detect newly entered persons and for re-identification within the entrance area; or, to improve efficiency and real-time processing, one camera may be used to detect newly entered persons while another camera, or several other cameras with different shooting angles, are used for person re-identification within the entrance area.
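The "designated duty camera" arrangement described above can be sketched as a simple region-to-camera lookup; the region names, camera names, and video-store interface below are assumptions for illustration, not from the disclosure:

```python
# Each region of the venue is assigned one camera responsible for
# re-identification there (names invented for the example).
DUTY_CAMERA = {
    "entrance": "cam_entrance_main",
    "shelf_A": "cam_3",
    "shelf_B": "cam_5",
}

def frame_for(region, timestamp, video_store):
    """Fetch the duty camera's frame for a region at a given moment.
    video_store is assumed to map (camera, timestamp) -> frame, standing
    in for a real video-retrieval backend."""
    cam = DUTY_CAMERA[region]
    return video_store.get((cam, timestamp))
```

When someone in `shelf_A` must be re-identified, the system looks up that region's duty camera and pulls its frame at the moment of interest, which is then fed to detection and feature matching.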
在S104,基于入口区域内的人物的轨迹跟踪以确认新进入的人物并赋予其唯一的身份号,获取和记录其特征信息并与身份号绑定。
Existing person tracking techniques can be divided, by when they operate, into real-time tracking and multi-frame global tracking. The former predicts a person's position in the next frame directly from the person's positions and feature information at past moments, and can be used for real-time video monitoring. The latter performs "detection + association" over the persons in all frames of already-captured video, and can only be used for after-the-fact video analysis.
In terms of performance, multi-frame global tracking can fully exploit the information of multiple frames; it performs better at handling occlusion between persons and both long- and short-term disappearance from the frame, and is more robust to interference. Its drawback is that it cannot give tracking results in real time. The tracking problem to be solved in the present disclosure is binding a new entrant to an identity and collecting feature information, so the tracking path need not be given in real time; the present disclosure therefore adopts the multi-frame global tracking scheme.
Specifically, whether a person is a new entrant is judged from the person's trajectory shown over consecutive frames captured in the entrance area. For example, when a person's trajectory extends from outside the entrance toward the inside (other auxiliary criteria may of course be added), the person can be considered a new entrant and assigned a unique identity number. Once a new entrant is confirmed, the present disclosure starts tracking and detection in the doorway area for a predetermined period of about 5 seconds (3-10 seconds may be chosen as needed), collecting the new entrant's feature information along the tracking path. After the 5 seconds, the new entrant's feature information is confirmed by computation, bound to the assigned unique identity number, and recorded together with that identity number.
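The trajectory criterion above can be sketched as a direction test on a tracked point sequence. The sketch below assumes image coordinates in which the entrance line sits at a known y value and the store interior lies at larger y; that coordinate convention, and the function name, are illustrative assumptions not fixed by the disclosure:

```python
def is_new_entrant(track, entrance_y):
    """Judge whether a trajectory (a list of (x, y) centroids from
    consecutive frames) crosses the entrance line from outside
    (y < entrance_y) to inside (y > entrance_y)."""
    if len(track) < 2:
        return False  # too short to establish a direction of motion
    start_outside = track[0][1] < entrance_y
    end_inside = track[-1][1] > entrance_y
    return start_outside and end_inside
```

A track that starts outside the entrance line and ends inside it is flagged as a new entrant; a track that loiters outside, or one observed for only a single frame, is not.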
Here, the present disclosure collects the convolutional neural network (CNN) features of the persons inside the detection boxes of these consecutive frames as their feature information. Specifically, "detection" is person detection based on a CNN, and "feature information" is the set of features of a person's shape, clothing, build, and other aspects extracted by CNN techniques. Starting from publicly available CNN networks for person detection and feature extraction, we train on collected and annotated multi-camera person tracking data to perform pedestrian detection and to improve the person feature network.
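One simple way to turn the per-frame CNN features collected during the roughly 5-second entrance tracking into a single recorded feature vector is to L2-normalize each frame's embedding and average them. This aggregation scheme is an assumption for illustration (the disclosure does not fix one), and the short 3-element vectors in the test stand in for real CNN embeddings:

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length (returned unchanged if all-zero)."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v] if norm else list(v)

def aggregate_features(frame_embeddings):
    """Average the L2-normalized per-frame CNN embeddings gathered
    along the entrance tracking path into one feature vector to be
    bound to the person's identity number."""
    normed = [l2_normalize(v) for v in frame_embeddings]
    dim = len(normed[0])
    return [sum(v[i] for v in normed) / len(normed) for i in range(dim)]
```

Normalizing before averaging keeps frames with large activation magnitudes from dominating the recorded feature.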
According to an example embodiment of the present disclosure, the person's feature information includes body shape, clothing, and/or appearance, where body shape may include the person's height and build, and clothing may include the type and color of the person's clothes.
According to an example embodiment of the present disclosure, the feature information and identity numbers are stored in a person feature database.
In S106, when a person in the closed place needs to be re-identified, the person's feature information acquired by the camera covering the person's area is matched against the recorded feature information, and the identity number bound to the best-matching feature information is taken as the person's identity number.
Since feature information is collected for every new entrant in S104, the identity of the person framed by any person detection box can be recognized by comparing feature information. When a person in a certain area of the closed place needs to be re-identified, the person's feature information acquired by the duty camera designated for that area is matched against the recorded feature information, and the identity number bound to the best-matching feature information is taken as the person's identity number. For example, as shown in Fig. 3, to learn the identity of the man in the lower right corner (standing at the white box) taking goods from the smart shelf, the man's feature information acquired by the duty camera designated for his area can be matched against the recorded feature information, and the identity number bound to the best-matching feature information is taken as his identity number, determining him to be customer XXX (or No. XX). The man's feature information is likewise acquired using convolutional neural network techniques.
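Re-identifying a single detection box then reduces to a nearest-neighbour search over the recorded features. A minimal sketch; cosine similarity as the matching score and the toy 2-D feature vectors are assumptions for illustration, not details stated in the disclosure:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def reidentify(query, gallery):
    """Return the identity number whose recorded feature best matches
    the query feature; `gallery` maps identity number -> feature
    vector recorded at entrance time."""
    return max(gallery, key=lambda pid: cosine(query, gallery[pid]))
```

The identity number returned is the one bound to the highest-similarity recorded feature, mirroring the "best-matching feature information" rule of S106.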
Further, considering that feature information may not distinguish persons clearly (e.g., two people both wearing black are hard to tell apart), the present disclosure performs association matching over all persons in the whole frame. Matching all persons against identities in this associated manner helps find the globally optimal matching and, compared with re-identifying a person within a single detection box, greatly improves the fault tolerance of feature comparison. This process can run in real time.
A concrete example is shown in Fig. 4: in any frame requiring re-identification, the collected feature information is used to perform optimal association matching over all persons in the area. That is, although we only want to re-identify one person, say the woman in black at the far right of the frame in Fig. 4, matching all persons in her area greatly improves the fault tolerance of feature comparison. Because the woman in black is hard to distinguish from other people wearing black or dark clothes (i.e., more than one identity number has "black" as its clothing-color feature), matching her alone is error-prone. When all persons are matched instead, the others wearing black or dark clothes may have other distinctive features that quickly and unambiguously confirm their identities, so their identity numbers can be excluded; only one identity number with "black" clothing color then remains, and that identity number is the woman's.
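The global association described above is an assignment problem: match every detection in the frame to a distinct identity so that total similarity is maximal, rather than taking each detection's best match independently. Below is a brute-force sketch over permutations, fine for the handful of people visible in one frame; a real system would likely use the Hungarian algorithm, but that is an implementation choice assumed here, not stated in the disclosure:

```python
from itertools import permutations

def global_match(similarity):
    """Given a square matrix similarity[i][j] between detection i and
    identity j, return the one-to-one assignment (a tuple of identity
    indices, one per detection) with maximal total similarity."""
    n = len(similarity)
    return max(
        permutations(range(n)),
        key=lambda perm: sum(similarity[i][perm[i]] for i in range(n)),
    )
```

In the test matrix both detections individually prefer identity 0 (scores 0.9 and 0.85), so independent matching would collide; the joint optimum instead assigns detection 0 to identity 1, which is exactly how matching the whole frame excludes ambiguous black-clothed candidates.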
According to an example embodiment of the present disclosure, when a person leaves, the person's feature information is deleted and the departure is flagged, as shown in S508 of Fig. 5, where S502-S506 are the same as S102-S106 and are not repeated here.
In S508, when the camera monitoring the entrance area (for a closed place with a shared entrance/exit) or the exit area (for a closed place with separate entrance and exit) detects that a person has left, the person's feature information is deleted, the departure is flagged, and the bound identity number is released.
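The lifecycle of S104 and S508 (bind a feature record to a fresh identity number on entry; delete the record, flag the departure, and release the number on exit) can be sketched as a small registry. The class and method names are illustrative assumptions:

```python
class PersonRegistry:
    """Holds feature records keyed by identity number and recycles
    numbers released when a person leaves the place."""

    def __init__(self):
        self.features = {}      # identity number -> feature vector
        self.departed = set()   # identity numbers flagged as departed
        self._next_id = 1
        self._free_ids = []     # released numbers available for reuse

    def enter(self, feature):
        """Bind a new entrant's feature to a unique identity number."""
        pid = self._free_ids.pop() if self._free_ids else self._next_id
        if pid == self._next_id:
            self._next_id += 1
        self.features[pid] = feature
        self.departed.discard(pid)
        return pid

    def leave(self, pid):
        """Delete the feature record, flag the departure, and release
        the bound identity number for reuse."""
        self.features.pop(pid, None)
        self.departed.add(pid)
        self._free_ids.append(pid)
```

Whether released numbers are reused or retired permanently is a design choice; reuse is shown here because S508 speaks of "releasing" the bound identity number.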
Fig. 6 shows a block diagram of a system for person re-identification in a closed place according to an example embodiment of the present disclosure.
As shown in Fig. 6, the system 600 for person re-identification in a closed place may include a monitoring module 602, a new-entrant confirmation module 604, and a re-identification module 606.
The monitoring module 602 is configured to divide the closed place into multiple areas including an entrance area and to shoot each area with at least one camera. The new-entrant confirmation module 604 is configured to confirm a newly entered person based on trajectory tracking of persons in the entrance area, assign the person a unique identity number, and acquire and record the person's feature information bound to the identity number. The re-identification module 606 is configured to, when a person in the closed place needs to be re-identified, match the person's feature information acquired by the camera covering the person's area against the recorded feature information and take the identity number bound to the best-matching feature information as the person's identity number.
In addition, the system 600 may further include a feature database 608 for storing the feature information and identity numbers.
In another aspect, the present disclosure also provides a computer-readable medium, which may be included in the system described in the above embodiments or may exist separately without being assembled into the system. The computer-readable medium carries one or more programs which, when executed by the system, cause the system to: divide the closed place into multiple areas including an entrance area and shoot each area with at least one camera; confirm a newly entered person based on trajectory tracking of persons in the entrance area, assign the person a unique identity number, and acquire and record the person's feature information bound to the identity number; and, when a person in the closed place needs to be re-identified, match the person's feature information acquired by the camera covering the person's area against the recorded feature information and take the identity number bound to the best-matching feature information as the person's identity number.
Fig. 7 shows a terminal device according to an example embodiment of the present disclosure.
As shown in Fig. 7, the terminal device 700 may include a processor 710 and a memory 720. According to an embodiment, the terminal device may further include a transmitter and a receiver.
The processor 710 may invoke instructions stored in the memory 720 to control related operations, such as controlling the transmitter and receiver to send and receive signals. According to an embodiment, the memory 720 stores instructions by which the processor 710 controls the following operations: dividing the closed place into multiple areas including an entrance area and shooting each area with at least one camera; confirming a newly entered person based on trajectory tracking of persons in the entrance area, assigning the person a unique identity number, and acquiring and recording the person's feature information bound to the identity number; and, when a person in the closed place needs to be re-identified, matching the person's feature information acquired by the camera covering the person's area against the recorded feature information and taking the identity number bound to the best-matching feature information as the person's identity number. It is readily understood that the memory 720 may also store instructions by which the processor 710 controls other operations according to embodiments of the present disclosure, which are not repeated here.
From the above detailed description, those skilled in the art will readily understand that the method, system, and terminal device according to embodiments of the present disclosure have one or more of the following advantages.
According to some example embodiments of the present disclosure, a multi-camera scheme combining trajectory tracking with feature matching solves person re-identification in real-time, complex scenes and achieves good re-identification results.
According to some example embodiments of the present disclosure, installing multiple cameras in the place to cover different areas from different angles, so that any area is covered by two or more cameras from different angles, resolves occlusion between persons.
According to some example embodiments of the present disclosure, performing association matching over all persons in the whole frame helps find the globally optimal matching between all persons and identities and, compared with re-identifying a person within a single detection box, greatly improves the fault tolerance of feature comparison.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the present disclosure indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

  1. A method for person re-identification in a closed place, comprising:
    dividing the closed place into multiple areas including an entrance area, with each area shot by at least one camera;
    confirming a newly entered person based on trajectory tracking of persons in the entrance area, assigning the person a unique identity number, and acquiring and recording the person's feature information bound to the identity number; and
    when a person in the closed place needs to be re-identified, matching the person's feature information acquired by a camera covering the person's area against the recorded feature information, and taking the identity number bound to the best-matching feature information as the person's identity number.
  2. The method according to claim 1, further comprising: when a person leaves, deleting the person's feature information and flagging the departure.
  3. The method according to claim 1, wherein each area being shot by at least one camera comprises:
    each area being shot by at least two cameras with different shooting angles.
  4. The method according to claim 1, wherein confirming a newly entered person based on trajectory tracking of persons in the entrance area comprises: judging whether a person is a new entrant from the person's trajectory shown over consecutive frames captured in the entrance area.
  5. The method according to claim 1, wherein the person's feature information is acquired using convolutional neural network techniques.
  6. The method according to claim 1, wherein the person's feature information comprises body shape, clothing, and/or appearance.
  7. The method according to claim 1, wherein matching the person's feature information acquired by the camera covering the person's area against the recorded feature information comprises: performing said matching over all persons in the area.
  8. The method according to claim 1, wherein the feature information and identity numbers are stored in a person feature database.
  9. A system for person re-identification in a closed place, comprising:
    a monitoring module configured to divide the closed place into multiple areas including an entrance area and to shoot each area with at least one camera;
    a new-entrant confirmation module configured to confirm a newly entered person based on trajectory tracking of persons in the entrance area, assign the person a unique identity number, and acquire and record the person's feature information bound to the identity number; and
    a re-identification module configured to, when a person in the closed place needs to be re-identified, match the person's feature information acquired by a camera covering the person's area against the recorded feature information and take the identity number bound to the best-matching feature information as the person's identity number.
  10. The system according to claim 9, further comprising a feature database for storing the feature information and identity numbers.
  11. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the following method steps:
    dividing the closed place into multiple areas including an entrance area and shooting each area with at least one camera;
    confirming a newly entered person based on trajectory tracking of persons in the entrance area, assigning the person a unique identity number, and acquiring and recording the person's feature information bound to the identity number; and
    when a person in the closed place needs to be re-identified, matching the person's feature information acquired by a camera covering the person's area against the recorded feature information, and taking the identity number bound to the best-matching feature information as the person's identity number.
  12. A terminal device, comprising:
    a processor; and
    a memory storing instructions by which the processor controls the following operations:
    dividing the closed place into multiple areas including an entrance area and shooting each area with at least one camera;
    confirming a newly entered person based on trajectory tracking of persons in the entrance area, assigning the person a unique identity number, and acquiring and recording the person's feature information bound to the identity number; and
    when a person in the closed place needs to be re-identified, matching the person's feature information acquired by a camera covering the person's area against the recorded feature information, and taking the identity number bound to the best-matching feature information as the person's identity number.
PCT/CN2018/106143 2017-10-23 2018-09-18 Method, system, and terminal device for person re-identification in a closed place WO2019080669A1 (zh)

Priority Applications (1)

- US 16/758,556 (US11263446B2), priority date 2017-10-23, filing date 2018-09-18: Method for person re-identification in closed place, system, and terminal device

Applications Claiming Priority (2)

- CN 201710996027.4A, priority/filing date 2017-10-23: Method, system, and terminal device for person re-identification in a closed place
- CN 201710996027.4, priority date 2017-10-23

Publications (1)

- WO2019080669A1

Family ID: 66226882

Family Applications (1)

- PCT/CN2018/106143 (WO2019080669A1), priority date 2017-10-23, filing date 2018-09-18

Country Status (3)

- US: US11263446B2
- CN: CN109697391A
- WO: WO2019080669A1

Also Published As

- US 20200349348 A1, published 2020-11-05
- US 11263446 B2, granted 2022-03-01
- CN 109697391 A, published 2019-04-30


Legal Events

- Code 121: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 18871006; Country of ref document: EP; Kind code of ref document: A1)
- Code NENP: non-entry into the national phase (Ref country code: DE)
- Code 32PN: public notification in the EP bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20/08/2020))
- Code 122: PCT application non-entry in European phase (Ref document number: 18871006; Country of ref document: EP; Kind code of ref document: A1)