WO2020093829A1 - Method and apparatus for real-time pedestrian flow statistics in an open scene - Google Patents

Method and apparatus for real-time pedestrian flow statistics in an open scene Download PDF

Info

Publication number
WO2020093829A1
Authority
WO
WIPO (PCT)
Prior art keywords
pedestrian
statistical area
frame
current
effective frame
Prior art date
Application number
PCT/CN2019/110014
Other languages
English (en)
French (fr)
Inventor
张晓博
侯章军
杨旭东
Original Assignee
阿里巴巴集团控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司
Publication of WO2020093829A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition

Definitions

  • This specification relates to the field of data processing technology, and in particular to real-time crowd flow statistics methods and devices in open scenes.
  • Pedestrian flow statistics are useful in a variety of business applications. For example, counting the number of people in a shopping mall during different time periods and the direction of crowd movement provides a reference for planning mall events; statistics on the flow of pedestrians on the sidewalk outside a mall help assess whether the mall's location is appropriate; and so on.
  • Real-time crowd flow statistics are even more significant: real-time statistics for a monitored area yield on-site headcounts and flow data in a timely manner, which helps the managing organization work more efficiently and supports scientific decision-making with data. However, video-based real-time flow statistics usually require a camera with a vertically downward viewing angle, a condition that is difficult to satisfy in many applications.
  • In view of this, this specification provides a real-time pedestrian flow statistics method for an open scene, including: extracting a current valid frame from a real-time video stream of the open scene, the video stream being shot by a camera placed above the open scene; detecting each pedestrian in the current valid frame; using a pedestrian re-identification algorithm to identify pedestrians in the current valid frame that are the same as those in at least one previous valid frame; and calculating, according to the result of the pedestrian re-identification algorithm, the number of people passing through the open scene.
  • This specification also provides a real-time pedestrian flow statistics device for an open scene, including:
  • a valid-frame extraction unit, used to extract a current valid frame from a real-time video stream of the open scene, the video stream being shot by a camera placed above the open scene;
  • a pedestrian detection unit, used to detect each pedestrian in the current valid frame;
  • a pedestrian re-identification unit, used to apply a pedestrian re-identification algorithm to identify pedestrians in the current valid frame that are the same as those in at least one previous valid frame;
  • a flow calculation unit, used to calculate, according to the result of the pedestrian re-identification algorithm, the number of people passing through the open scene.
  • This specification provides a computer device including a memory and a processor; the memory stores a computer program executable by the processor, and when the processor runs the computer program it performs the steps of the real-time pedestrian flow statistics method described above.
  • This specification provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, it performs the steps of the real-time pedestrian flow statistics method described above.
  • It can be seen from the above technical solutions that, in the embodiments of this specification, based on the video stream shot by a camera above the open scene, pedestrians in the current valid frame are detected and a pedestrian re-identification algorithm identifies which of them are the same as pedestrians in previous valid frames, from which the number of people passing through the open scene is obtained. This removes the limitation of requiring a vertically downward camera angle for pedestrian flow statistics and, while reducing computational cost, provides both timely and accurate detection.
  • FIG. 1 is an example of an open scene, camera angle, and statistical area in an embodiment of this specification;
  • FIG. 2 is a flowchart of a real-time pedestrian flow statistics method for an open scene in an embodiment of this specification;
  • FIG. 3 is a schematic structural diagram of the people-counting software running on the embedded development board in an application example of this specification;
  • FIG. 4 is a hardware structure diagram of a device running an embodiment of this specification;
  • FIG. 5 is a logical structure diagram of a real-time pedestrian flow statistics device for an open scene in an embodiment of this specification.
  • The embodiments of this specification propose a new real-time pedestrian flow statistics method for open scenes: a current valid frame is extracted from the real-time video stream shot by a camera placed above the open scene, pedestrians in the current valid frame are detected and compared with pedestrians in previous valid frames to determine whether they are the same, and the number of people passing through the open scene is counted accordingly. The embodiments do not require a vertically downward camera angle, are applicable to the vast majority of scenarios, impose a low computational load, and achieve high accuracy while remaining timely.
  • The embodiments of this specification can run on any device with computing and storage capabilities, such as a mobile phone, tablet computer, PC (Personal Computer), notebook, or server; the functions of the embodiments can also be implemented by logical nodes running on two or more devices.
  • The embodiments of this specification count real-time crowd flow in an open scene. The camera is placed above the open scene and shoots the crowd at an obliquely downward angle, generating a real-time video stream. In some practical applications, what needs to be counted is the number of people passing through a predetermined statistical area in the open scene; the statistical area is a fixed region inside the open scene, and the camera's shooting range completely covers it. An example of an open scene, statistical area, and camera angle is shown in FIG. 1, where the interior of the solid-line box is the statistical area.
  • The flow of the real-time pedestrian flow statistics method for an open scene in the embodiments of this specification is shown in FIG. 2.
  • Step 210: Extract the current valid frame from the real-time video stream of the open scene.
  • The camera installed above the open scene continuously outputs a video stream shot at an obliquely downward angle. The video stream consists of consecutive image frames. Based on a certain condition, each frame that meets the condition can be continuously extracted from the video stream as a valid frame, and crowd flow is counted by continuously identifying the pedestrians in the valid frames; the most recently extracted valid frame is taken as the current valid frame.
  • The condition for extracting valid frames can be set according to factors such as the statistical time precision required by the application scenario and the processing capability of the device running this embodiment. For example, the frame that is N frames (N being a natural number) after the previous valid frame can be taken as the next valid frame, or one frame can be extracted as a valid frame from every M (M being a natural number greater than 1) consecutive frames; a sketch of such frame sampling is given below.
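  • The following is a minimal sketch of extracting one valid frame out of every M frames from a live stream. It assumes OpenCV (cv2) and a camera index or stream URL as the source; the specification does not prescribe any particular library or stream format.

```python
import cv2


def valid_frames(source, m=5):
    """Yield one 'valid frame' out of every m consecutive frames.

    source: camera index or stream URL; m: sampling stride (the M > 1 in the text).
    """
    cap = cv2.VideoCapture(source)
    index = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:              # stream ended or read failure
                break
            if index % m == 0:      # keep every m-th frame as a valid frame
                yield index, frame
            index += 1
    finally:
        cap.release()


# Usage sketch: iterate over valid frames from the default camera.
# for frame_id, frame in valid_frames(0, m=5):
#     process(frame)   # hypothetical downstream processing
```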
  • Step 220: Detect each pedestrian in the current valid frame.
  • After the current valid frame is extracted, a deep-learning object detection algorithm determines whether human bodies are present in it; if so, it locates the position of each pedestrian and the partial image region that pedestrian occupies. The embodiments of this specification do not limit the object detection algorithm used; for example, Faster R-CNN (Faster Regions with Convolutional Neural Network features) or SSD (Single Shot MultiBox Detector) may be used. In scenarios that require both low computation and good detection accuracy, the YOLO object detection algorithm, which extracts each pedestrian's image range and position from the current valid frame, often gives better results; a detection sketch is given below.
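  • To make the detection step concrete, here is a minimal sketch of person detection on a valid frame. It assumes the third-party ultralytics YOLO package and a pretrained COCO model file ("yolov8n.pt"); the specification only names YOLO as one suitable detector and does not mandate this implementation.

```python
from ultralytics import YOLO   # assumed third-party package, not specified by the patent

model = YOLO("yolov8n.pt")     # pretrained COCO weights; class 0 is "person"


def detect_pedestrians(frame):
    """Return a list of (x1, y1, x2, y2) boxes for persons detected in the frame."""
    results = model(frame)[0]
    boxes = []
    for box in results.boxes:
        if int(box.cls[0]) == 0:                       # keep only the "person" class
            x1, y1, x2, y2 = map(float, box.xyxy[0].tolist())
            boxes.append((x1, y1, x2, y2))
    return boxes
```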
  • Step 230: Use a pedestrian re-identification algorithm to identify pedestrians in the current valid frame that are the same as those in at least one previous valid frame.
  • When a pedestrian passes through the open scene, they are captured in multiple valid frames. For accurate flow statistics, the same pedestrian must be found across the valid frames so that one person is not counted more than once. Pedestrian re-identification (Person ReID) uses computer vision techniques to determine whether a specific pedestrian is present in an image and can be used to track people within one camera or across cameras. In the embodiments of this specification, a pedestrian re-identification algorithm determines which of the pedestrians detected in the current valid frame have already appeared in previous valid frames and which newly appear in the current valid frame.
  • Whether a pedestrian is newly appearing can be determined by searching for that pedestrian in the N valid frames preceding the current valid frame. Because the camera shoots the open scene at an obliquely downward angle, when pedestrians are dense a pedestrian may be occluded by others in one valid frame or several consecutive valid frames and go undetected; choosing a larger N avoids wrongly counting that pedestrian again in such cases, but a larger N also brings a greater computational load. In a practical application scenario, an appropriate N can be chosen according to factors such as the pedestrian density of the open scene, the interval between adjacent valid frames, and the processing capability of the device running this embodiment.
  • In one implementation, the appearance features and position features of each pedestrian in the current valid frame can be obtained, and these features are used to decide whether a pedestrian in the current valid frame already exists in the previous N valid frames; if not, a new person identifier is generated to mark the pedestrian, and if so, the pedestrian is marked with the existing identifier.
  • Specifically, for each pedestrian output by the object detection algorithm, the pedestrian's position features are generated from the pedestrian's position (such as the coordinates, in the image coordinate system, of the partial region the pedestrian occupies), and the pedestrian's appearance features are generated from the image of that partial region (such as clothing color, clothing texture, handbag, backpack, hat, and so on). Using the position and appearance features of a pedestrian in the current valid frame, the previous N valid frames are searched to see whether that pedestrian already exists. If not, a new person identifier is generated and used to mark the pedestrian; a person identifier uniquely represents one pedestrian and may be an index number, a character string, or the like, without limitation. If the pedestrian already exists, they already have their own identifier, and that existing identifier is used to mark the pedestrian in the current valid frame. This search is performed for every pedestrian detected in the current valid frame until each detected pedestrian is marked with a person identifier; a sketch of one possible feature representation is given below.
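  • As one possible, assumed way of turning a pedestrian's image crop into comparable features, the sketch below computes a normalized HSV color histogram as the appearance feature and the box center as the position feature; the specification lists clothing color, texture, bags, hats, and so on as appearance cues but does not fix a feature representation.

```python
import cv2
import numpy as np


def appearance_feature(frame, box, bins=(8, 8, 8)):
    """Normalized HSV color histogram of the region a pedestrian occupies (assumed representation)."""
    x1, y1, x2, y2 = [int(v) for v in box]
    crop = frame[y1:y2, x1:x2]
    if crop.size == 0:                       # degenerate box: return an empty histogram
        return np.zeros(int(np.prod(bins)), dtype=np.float32)
    hsv = cv2.cvtColor(crop, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins), [0, 180, 0, 256, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()
    return hist                              # compared later with a histogram distance


def position_feature(box):
    """Center point of the pedestrian's box in image coordinates."""
    x1, y1, x2, y2 = box
    return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
```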
  • The algorithm used to identify whether pedestrians in different valid frames are the same person can be chosen according to the needs of the practical application scenario, without limitation. For example, the Hungarian algorithm can be used to match pedestrians in the current valid frame against those in previous valid frames according to their appearance and position features; a matching sketch is given below.
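  • The matching step can be realized with the Hungarian algorithm as in the sketch below. It is written under assumptions: appearance features are fixed-length vectors compared with a Euclidean distance, position features are box centers in pixels, and the weighting of the two terms and the rejection threshold are arbitrary. scipy.optimize.linear_sum_assignment provides the Hungarian solver.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_pedestrians(curr, prev, w_pos=0.01, max_cost=1.5):
    """Match current detections to previous ones with the Hungarian algorithm.

    curr, prev: lists of dicts with 'appearance' (1-D array) and 'position' (x, y array).
    Returns a list of (curr_index, prev_index) pairs whose total cost is acceptable.
    """
    if not curr or not prev:
        return []
    cost = np.zeros((len(curr), len(prev)))
    for i, c in enumerate(curr):
        for j, p in enumerate(prev):
            app = np.linalg.norm(c["appearance"] - p["appearance"])   # appearance distance
            pos = np.linalg.norm(c["position"] - p["position"])       # pixel distance
            cost[i, j] = app + w_pos * pos                            # weighted total cost
    rows, cols = linear_sum_assignment(cost)
    # Reject assignments too costly to be the same person (threshold value is assumed).
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_cost]
```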
  • Step 240: Calculate, according to the result of the pedestrian re-identification algorithm, the number of people passing through the open scene.
  • The specific way of counting the pedestrian flow of the open scene can be determined according to the needs of the practical application; the embodiments of this specification do not limit it. In the examples below, the statistical time period is the period over which flow is accumulated.
  • First example: the pedestrians that newly appear in each valid frame within the statistical time period can be accumulated, and the accumulated result used as the flow count for that period. If a pedestrian in the current valid frame differs from every pedestrian in the previous N valid frames, that pedestrian is a newly appearing pedestrian in the current valid frame; the total number of newly appearing pedestrians over all valid frames in the statistical time period can be regarded as the total pedestrian count for the period.
  • Second example: in the implementation that marks each pedestrian with a person identifier, both the newly appearing pedestrians and the departing pedestrians in each valid frame within the statistical time period can be obtained. A departing pedestrian can be one who appears in the valid frame immediately preceding a given valid frame but does not appear in that frame or in the N valid frames after it; this avoids statistical deviation caused by a pedestrian being temporarily occluded by other pedestrians. The flow count for the statistical time period can be the total number of newly appearing pedestrians over all valid frames in the period, the total number of departing pedestrians, or the result of a mathematical operation on these two values (such as their average).
  • In the second example, the length of time each pedestrian spends in the open scene can also be counted, for instance by taking the time elapsed between the valid frame in which a pedestrian newly appears and the valid frame in which that pedestrian departs as the pedestrian's duration in the open scene.
  • In applications that count the number of people passing through a predetermined statistical area in the open scene, for a given valid frame: if a pedestrian appears inside the statistical area in that frame and did not appear inside it in the N valid frames before it, the pedestrian is treated as newly appearing in that frame (that is, as entering the statistical area); if a pedestrian appeared in the statistical area in the immediately preceding valid frame and does not appear in the statistical area in that frame or the N valid frames after it, the pedestrian is treated as departing in that frame (that is, as leaving the statistical area).
  • In such applications, stricter criteria for entering and leaving the statistical area can be adopted to obtain a more accurate count. For example, when a pedestrian appeared outside the statistical area but never inside it during the previous N valid frames and appears inside it in the current valid frame, the pedestrian is considered to enter the statistical area in the current valid frame; when a pedestrian appeared inside the statistical area but never outside it during the previous N valid frames and appears outside it in the current valid frame, the pedestrian is considered to leave the statistical area in the current valid frame. The number of people passing through the statistical area can be calculated from the number of people entering it, from the number leaving it, or from both; a sketch of this enter/leave rule is given below.
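  • A sketch of the stricter enter/leave rule follows. It assumes the statistical area is an axis-aligned rectangle, that every pedestrian carries a person identifier from the re-identification step, and that a short per-identifier history of in/out observations over the last N valid frames is kept; none of these representational choices are dictated by the specification.

```python
from collections import defaultdict, deque

N = 3                                    # look-back window of valid frames (the N in the text)
AREA = (100, 200, 500, 600)              # assumed rectangle for the statistical area: x1, y1, x2, y2
history = defaultdict(lambda: deque(maxlen=N))   # person id -> recent True/False flags (inside?)


def inside(position, area=AREA):
    x, y = position
    x1, y1, x2, y2 = area
    return x1 <= x <= x2 and y1 <= y <= y2


def update_and_classify(person_id, position):
    """Return 'enter', 'leave', or None for this pedestrian in the current valid frame."""
    now_in = inside(position)
    past = history[person_id]
    event = None
    if past:
        if now_in and not any(past):      # previously seen only outside, inside now -> enter
            event = "enter"
        elif not now_in and all(past):    # previously seen only inside, outside now -> leave
            event = "leave"
    history[person_id].append(now_in)
    return event
```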
  • It can be seen that, in the embodiments of this specification, the current valid frame is extracted from the real-time video stream shot by a camera placed above the open scene, pedestrians in the current valid frame are detected, and a pedestrian re-identification algorithm identifies which of them are the same as pedestrians in previous valid frames, from which the number of people passing through the open scene is obtained. No vertically downward camera angle is required, so the approach applies to the vast majority of scenarios; moreover, the computational load is low, and detection is both timely and accurate.
  • Because the computational load is low, the method of the embodiments of this specification is suitable for running on an embedded development board and places no special requirements on the board's hardware environment. The embedded development board can be installed near the camera and send the real-time pedestrian flow statistics to the server responsible for collecting flow data through its own communication module, without uploading the video or images captured by the camera, so accurate flow statistics are obtained without infringing on pedestrians' privacy.
  • In one application example of this specification, the number of people passing through a specific area in an indoor open scene needs to be counted. An RGB (Red, Green, Blue) camera is installed at a high position on a wall and shoots the indoor open scene from an obliquely downward angle. The statistical area (that is, the specific area) is located in the central part of the shooting range, at a certain distance from the boundary of the shooting range.
  • People counting is performed by a program running on an embedded development board. The board is installed near the camera and includes a communication unit; it connects to the camera over a short-range wireless link to obtain the captured video data, and can also upload the resulting flow statistics to a predetermined server through the communication unit. The structure of the people-counting software running on the embedded development board is shown in FIG. 3.
  • The RGB camera continuously captures images of the open scene at 25 frames per second, forming a video stream. The embedded development board extracts one RGB frame from the captured video stream every fixed number of frames as the current valid frame.
  • The people-counting software uses the YOLO object detection algorithm to identify each pedestrian in the current valid frame, determining each pedestrian's position coordinates in the image coordinate system (a kind of position feature) and the image of the partial region each pedestrian occupies.
  • For each pedestrian in the current valid frame, the re-identification step extracts the pedestrian's appearance features from the image of the region the pedestrian occupies. Based on these appearance features and position coordinates, the Hungarian algorithm determines whether the pedestrian matches any pedestrian in the 3 valid frames preceding the current one, thereby identifying whether the pedestrian appeared in those 3 valid frames. If not, a new person identifier is generated to uniquely represent the pedestrian and used to mark the pedestrian; if so, the pedestrian's existing identifier is used.
  • Based on the person identifier assigned to each pedestrian in the current valid frame and the position coordinates output by the YOLO object detection algorithm: if a pedestrian appeared outside the statistical area but never inside it in the previous 3 valid frames and appears inside the statistical area in the current valid frame, the pedestrian is marked as entering the statistical area in the current valid frame; if a pedestrian appeared inside the statistical area but never outside it in the previous 3 valid frames and appears outside the statistical area in the current valid frame, the pedestrian is marked as leaving the statistical area in the current valid frame.
  • The people-counting software uses a predetermined statistical time period as its cycle. Within one cycle it accumulates, over all valid frames, the number of pedestrians leaving the statistical area as the flow count, and it takes the time interval between the valid frame in which a pedestrian entered the statistical area and the valid frame in which that pedestrian left as the pedestrian's dwell time in the statistical area, accumulating the dwell times of all such pedestrians. At the end of a cycle, the software sends the cycle's flow count and the total dwell time of these pedestrians to the predetermined server; a sketch of this per-cycle aggregation is given below.
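  • As an illustration of the per-cycle aggregation and upload described in the application example, the sketch below accumulates leave events and dwell times over a statistical period and posts the totals to a collecting server. The endpoint URL, payload field names, and the use of the requests library are assumptions; the specification only says the figures are sent to a predetermined server through the board's communication unit.

```python
import time

import requests   # assumed HTTP client; the patent does not specify the transport

SERVER_URL = "http://example.com/flow"       # hypothetical collecting-server endpoint


class PeriodCounter:
    def __init__(self, period_seconds=300):
        self.period = period_seconds
        self.reset()

    def reset(self):
        self.start = time.time()
        self.leave_count = 0
        self.entry_time = {}                 # person id -> entry timestamp
        self.total_dwell = 0.0

    def on_event(self, person_id, event, timestamp):
        if event == "enter":
            self.entry_time[person_id] = timestamp
        elif event == "leave":
            self.leave_count += 1            # leave events are the flow count in the example
            entered = self.entry_time.pop(person_id, None)
            if entered is not None:
                self.total_dwell += timestamp - entered

    def maybe_report(self):
        """At the end of each cycle, send the totals and start a new cycle."""
        if time.time() - self.start >= self.period:
            payload = {"count": self.leave_count, "total_dwell_seconds": self.total_dwell}
            requests.post(SERVER_URL, json=payload, timeout=5)
            self.reset()
```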
  • Corresponding to the above process, the embodiments of this specification also provide a real-time pedestrian flow statistics device for an open scene. The device can be implemented in software, in hardware, or in a combination of software and hardware. Taking a software implementation as an example, as a device in the logical sense it is formed by the CPU (Central Processing Unit) of the host device reading the corresponding computer program instructions into memory and running them. At the hardware level, in addition to the CPU, memory, and storage shown in FIG. 4, the device hosting the real-time pedestrian flow statistics device typically also includes other hardware such as chips for transmitting and receiving wireless signals and/or boards for implementing network communication.
  • FIG. 5 shows a real-time pedestrian flow statistics device for an open scene provided by an embodiment of this specification, which includes a valid-frame extraction unit, a pedestrian detection unit, a pedestrian re-identification unit, and a flow calculation unit. The valid-frame extraction unit is used to extract a current valid frame from a real-time video stream of the open scene, the video stream being shot by a camera placed above the open scene; the pedestrian detection unit is used to detect each pedestrian in the current valid frame; the pedestrian re-identification unit is used to apply a pedestrian re-identification algorithm to identify pedestrians in the current valid frame that are the same as those in at least one previous valid frame; and the flow calculation unit is used to calculate, according to the result of the pedestrian re-identification algorithm, the number of people passing through the open scene. A skeleton wiring these units together is sketched below.
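  • The minimal skeleton below mirrors the four units of FIG. 5; the class and method names and the exact wiring are illustrative assumptions, not part of the disclosed device.

```python
class FlowStatisticsDevice:
    """Skeleton mirroring the four units of FIG. 5 (names are illustrative)."""

    def __init__(self, frame_source, detector, reidentifier, counter):
        self.frames = frame_source          # valid-frame extraction unit
        self.detector = detector            # pedestrian detection unit
        self.reid = reidentifier            # pedestrian re-identification unit
        self.counter = counter              # flow calculation unit

    def run(self):
        for frame_id, frame in self.frames:
            boxes = self.detector(frame)                    # detect pedestrians
            tracks = self.reid(frame, boxes)                # assign person identifiers
            for person_id, position in tracks:
                event = self.counter(person_id, position)   # enter/leave bookkeeping
                # downstream: accumulate events per statistical period
```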
  • Optionally, the number of people passing through the open scene includes the number of people passing through a predetermined statistical area in the open scene, and the flow calculation unit is specifically configured for at least one of the following: when a pedestrian appeared outside the statistical area but never inside it in previous valid frames and appears inside the statistical area in the current valid frame, the pedestrian is considered to enter the statistical area, and the number of people passing through the statistical area is calculated according to the number of people entering it; when a pedestrian appeared inside the statistical area but never outside it in previous valid frames and appears outside the statistical area in the current valid frame, the pedestrian is considered to leave the statistical area, and the number of people passing through the statistical area is calculated according to the number of people leaving it.
  • In one implementation, the pedestrian re-identification unit is specifically configured to: obtain the appearance features, or the appearance and position features, of each pedestrian in the current valid frame; and determine, from those features, whether each pedestrian in the current valid frame already exists in at least one previous valid frame, generating a new person identifier to mark the pedestrian if not, and marking the pedestrian with the existing identifier if so.
  • In the above implementation, the pedestrian re-identification unit determines whether a pedestrian in the current valid frame already exists in previous valid frames by using the Hungarian algorithm to match pedestrians in the current valid frame against those in previous valid frames according to their appearance features, or their appearance and position features.
  • Optionally, the pedestrian detection unit is specifically configured to use the YOLO object detection method to extract the image range and position information of each pedestrian from the current valid frame.
  • Optionally, the camera is a red-green-blue (RGB) camera.
  • Optionally, the device runs on an embedded development board.
  • The embodiments of this specification provide a computer device that includes a memory and a processor. The memory stores a computer program executable by the processor; when the processor runs the stored computer program, it performs the steps of the real-time pedestrian flow statistics method for an open scene in the embodiments of this specification.
  • The embodiments of this specification also provide a computer-readable storage medium on which computer programs are stored; when run by a processor, these programs perform the steps of the real-time pedestrian flow statistics method for an open scene in the embodiments of this specification.
  • In a typical configuration, the computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • The memory may include non-persistent memory in computer-readable media, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
  • Those skilled in the art should understand that the embodiments of this specification may be provided as a method, a system, or a computer program product. Accordingly, the embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) that contain computer-usable program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

A real-time pedestrian flow statistics method for an open scene, including: extracting a current valid frame from a real-time video stream of the open scene, the video stream being shot by a camera placed above the open scene (210); detecting each pedestrian in the current valid frame (220); using a pedestrian re-identification algorithm to identify pedestrians in the current valid frame that are the same as those in at least one previous valid frame (230); and calculating, according to the result of the pedestrian re-identification algorithm, the number of people passing through the open scene (240).

Description

Method and Apparatus for Real-Time Pedestrian Flow Statistics in an Open Scene

Technical Field

This specification relates to the field of data processing technology, and in particular to a method and apparatus for real-time pedestrian flow statistics in an open scene.
Background

Pedestrian flow statistics are useful in a variety of business applications. For example, counting the number of people in a shopping mall during different time periods and the direction of crowd movement provides a reference for planning mall events; statistics on the flow of pedestrians on the sidewalk outside a mall help assess whether the mall's location is appropriate; and so on.

Real-time crowd flow statistics are even more significant: real-time statistics for a monitored area yield on-site headcounts and flow data in a timely manner, which helps the managing organization work more efficiently and supports scientific decision-making with data. Video-based real-time pedestrian flow statistics usually require a camera with a vertically downward viewing angle, a condition that is difficult to satisfy in many applications.
Summary of the Invention

In view of this, this specification provides a real-time pedestrian flow statistics method for an open scene, including:

extracting a current valid frame from a real-time video stream of the open scene, the video stream being shot by a camera placed above the open scene;

detecting each pedestrian in the current valid frame;

using a pedestrian re-identification algorithm to identify pedestrians in the current valid frame that are the same as those in at least one previous valid frame; and

calculating, according to the result of the pedestrian re-identification algorithm, the number of people passing through the open scene.
This specification also provides a real-time pedestrian flow statistics device for an open scene, including:

a valid-frame extraction unit, configured to extract a current valid frame from a real-time video stream of the open scene, the video stream being shot by a camera placed above the open scene;

a pedestrian detection unit, configured to detect each pedestrian in the current valid frame;

a pedestrian re-identification unit, configured to use a pedestrian re-identification algorithm to identify pedestrians in the current valid frame that are the same as those in at least one previous valid frame; and

a flow calculation unit, configured to calculate, according to the result of the pedestrian re-identification algorithm, the number of people passing through the open scene.
This specification provides a computer device, including a memory and a processor; the memory stores a computer program executable by the processor, and when the processor runs the computer program it performs the steps of the real-time pedestrian flow statistics method described above.

This specification provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, it performs the steps of the real-time pedestrian flow statistics method described above.
As can be seen from the above technical solutions, in the embodiments of this specification, based on the video stream shot by a camera above the open scene, pedestrians in the current valid frame are detected and a pedestrian re-identification algorithm identifies which of them are the same as pedestrians in previous valid frames, from which the number of people passing through the open scene is obtained. This removes the limitation of requiring a vertically downward camera angle for pedestrian flow statistics and, while reducing computational cost, provides both timely and accurate detection.
Brief Description of the Drawings

FIG. 1 is an example of an open scene, camera angle, and statistical area in an embodiment of this specification;

FIG. 2 is a flowchart of a real-time pedestrian flow statistics method for an open scene in an embodiment of this specification;

FIG. 3 is a schematic structural diagram of the people-counting software running on the embedded development board in an application example of this specification;

FIG. 4 is a hardware structure diagram of a device running an embodiment of this specification;

FIG. 5 is a logical structure diagram of a real-time pedestrian flow statistics device for an open scene in an embodiment of this specification.
Detailed Description

The embodiments of this specification propose a new real-time pedestrian flow statistics method for open scenes: a current valid frame is extracted from the real-time video stream shot by a camera placed above the open scene, pedestrians in the current valid frame are detected and compared with pedestrians in previous valid frames to determine whether they are the same, and the number of people passing through the open scene is counted accordingly. The embodiments of this specification do not require a vertically downward camera angle and are applicable to the vast majority of scenarios; they impose a low computational load and achieve high accuracy while remaining timely.

The embodiments of this specification can run on any device with computing and storage capabilities, such as a mobile phone, tablet computer, PC (Personal Computer), notebook, or server; the functions of the embodiments can also be implemented by logical nodes running on two or more devices.
The embodiments of this specification count real-time crowd flow in an open scene. The camera is placed above the open scene and shoots the crowd at an obliquely downward angle, generating a real-time video stream. In some practical applications, what needs to be counted is the number of people passing through a predetermined statistical area in the open scene; the statistical area is a fixed region inside the open scene, and the camera's shooting range completely covers it. An example of an open scene, statistical area, and camera angle is shown in FIG. 1, where the interior of the solid-line box is the statistical area.
In the embodiments of this specification, the flow of the real-time pedestrian flow statistics method for an open scene is shown in FIG. 2.

Step 210: extract a current valid frame from the real-time video stream of the open scene.

The camera installed above the open scene continuously outputs a video stream shot at an obliquely downward angle; the video stream consists of consecutive image frames. Based on a certain condition, each frame that meets the condition can be continuously extracted from the video stream as a valid frame, and crowd flow is counted by continuously identifying the pedestrians in the valid frames. The most recently extracted valid frame is taken as the current valid frame.

The condition for extracting valid frames can be set according to factors such as the statistical time precision required by the application scenario and the processing capability of the device running this embodiment. For example, the frame that is N frames (N being a natural number) after the previous valid frame can be taken as the next valid frame, or one frame can be extracted as a valid frame from every M (M being a natural number greater than 1) consecutive frames.
Step 220: detect each pedestrian in the current valid frame.

After the current valid frame is extracted, a deep-learning object detection algorithm determines whether human bodies are present in the current valid frame; if so, it locates the position of each pedestrian and the partial image region that pedestrian occupies.

The embodiments of this specification do not limit the object detection algorithm used; for example, Faster R-CNN (Faster Regions with Convolutional Neural Network features) or SSD (Single Shot MultiBox Detector) may be used.

In application scenarios that require both low computation and good detection accuracy, using the YOLO (You Only Look Once) object detection algorithm to extract the image range and position information of each pedestrian from the current valid frame often achieves better results.
Step 230: use a pedestrian re-identification algorithm to identify pedestrians in the current valid frame that are the same as those in at least one previous valid frame.

When a pedestrian passes through the open scene, they are captured in multiple valid frames. For pedestrian flow statistics, the same pedestrian must be found across the valid frames so that one person is not counted multiple times; only then can accurate data be obtained.

Pedestrian re-identification (Person ReID) uses computer vision techniques to determine whether a specific pedestrian is present in an image, and can be used to track people within one camera or across cameras. In the embodiments of this specification, a pedestrian re-identification algorithm determines which of the pedestrians detected in the current valid frame have already appeared in previous valid frames, and which newly appear in the current valid frame.

Whether a pedestrian in the current valid frame is newly appearing can be determined by searching for that pedestrian in the N valid frames preceding the current valid frame. Because the camera in the embodiments of this specification shoots the open scene at an obliquely downward angle, when pedestrians are dense a pedestrian may be occluded by others in one valid frame or several consecutive valid frames and go undetected; choosing a larger value of N avoids wrongly counting that pedestrian again in such cases, but a larger N also brings a greater computational load. In a practical application scenario, an appropriate value of N can be chosen according to factors such as the pedestrian density of the open scene, the interval between adjacent valid frames, and the processing capability of the device running this embodiment.

In one implementation, the appearance features and position features of each pedestrian in the current valid frame can be obtained, and based on these features it is determined whether a pedestrian in the current valid frame already exists in the previous N valid frames; if not, a new person identifier is generated to mark the pedestrian, and if so, the pedestrian is marked with the existing identifier.

Specifically, for the position of each pedestrian output by the object detection algorithm and the partial image region that pedestrian occupies, the pedestrian's position features are generated from the pedestrian's position (such as the coordinates, in the image coordinate system, of the partial region the pedestrian occupies), and the pedestrian's appearance features are generated from the image of that partial region (such as clothing color, clothing texture, handbag, backpack, hat, and so on). Using the position and appearance features of a pedestrian in the current valid frame, the previous N valid frames are searched to see whether that pedestrian already exists. If not, a new person identifier is generated for the pedestrian and used to mark them; a person identifier uniquely represents one pedestrian and may be an index number, a character string, or the like, without limitation. If the pedestrian already exists, they already have their own person identifier, and that existing identifier is used to mark the pedestrian in the current valid frame. This search is performed for every pedestrian detected in the current valid frame until each detected pedestrian is marked with a person identifier.

The algorithm used to identify whether pedestrians in different valid frames are the same person can be chosen according to the needs of the practical application scenario, without limitation. For example, the Hungarian algorithm can be used to match pedestrians in the current valid frame against those in previous valid frames according to their appearance and position features.
Step 240: calculate, according to the result of the pedestrian re-identification algorithm, the number of people passing through the open scene.

The specific way of counting the pedestrian flow of the open scene can be determined according to the needs of the practical application; the embodiments of this specification do not limit it. Examples follow (in these examples, the statistical time period is the period over which flow is accumulated):

First example: the pedestrians that newly appear in each valid frame within the statistical time period can be accumulated, and the accumulated result used as the flow count for the statistical time period. If a pedestrian in the current valid frame differs from every pedestrian in the previous N valid frames, that pedestrian is a newly appearing pedestrian in the current valid frame; the total number of newly appearing pedestrians over all valid frames in the statistical time period can be regarded as the total pedestrian count for the period.

Second example: in the implementation that marks each pedestrian with a person identifier, both the newly appearing pedestrians and the departing pedestrians in each valid frame within the statistical time period can be obtained. A departing pedestrian can be a pedestrian who appears in the valid frame immediately preceding a given valid frame but does not appear in that valid frame or in the N valid frames after it, which avoids statistical deviation caused by the pedestrian being occluded by other pedestrians and temporarily disappearing. The flow count for the statistical time period can be the total number of newly appearing pedestrians over all valid frames in the period, the total number of departing pedestrians over all valid frames in the period, or the result of a mathematical operation on these two values (such as the average of the two totals).

In the second example, the length of time each pedestrian spends in the open scene can also be counted, for instance by taking the time elapsed between the valid frame in which a pedestrian newly appears and the valid frame in which that pedestrian departs as the pedestrian's duration in the open scene.

In applications where what needs to be counted is the number of people passing through a predetermined statistical area in the open scene, for a given valid frame: if a pedestrian appears inside the statistical area in that valid frame and did not appear inside the statistical area in any of the N valid frames before it, the pedestrian is treated as a newly appearing pedestrian in that valid frame (that is, a pedestrian entering the statistical area); if a pedestrian appeared in the statistical area of the valid frame immediately preceding a given valid frame and does not appear in the statistical area in that valid frame or the N valid frames after it, the pedestrian is treated as a departing pedestrian in that valid frame (that is, a pedestrian leaving the statistical area).

In the above applications, stricter criteria for entering and leaving the statistical area can be adopted to obtain a more accurate count. For example, when a pedestrian appeared outside the statistical area but never inside it during the previous N valid frames and appears inside the statistical area in the current valid frame, the pedestrian is considered to enter the statistical area in the current valid frame; when a pedestrian appeared inside the statistical area but never outside it during the previous N valid frames and appears outside the statistical area in the current valid frame, the pedestrian is considered to leave the statistical area in the current valid frame. The number of people passing through the statistical area can be calculated from the number of people entering the statistical area, from the number of people leaving it, or from both.
It can be seen that, in the embodiments of this specification, the current valid frame is extracted from the real-time video stream shot by a camera placed above the open scene, pedestrians in the current valid frame are detected, and a pedestrian re-identification algorithm identifies which of them are the same as pedestrians in previous valid frames, from which the number of people passing through the open scene is obtained. No vertically downward camera angle is required, so the approach applies to the vast majority of scenarios; moreover, the computational load is low, so detection is both timely and accurate.

Because the computational load is low, the method of the embodiments of this specification is suitable for running on an embedded development board and places no special requirements on the board's hardware environment. The embedded development board running the embodiments can be installed near the camera and send the real-time pedestrian flow statistics to the server responsible for collecting flow data through its own communication module, without uploading the video or images captured by the camera, so accurate pedestrian flow statistics are obtained without infringing on pedestrians' privacy.
The specific embodiments of this specification have been described above. Other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.
In one application example of this specification, the number of people passing through a specific area in an indoor open scene needs to be counted. An RGB (Red Green Blue) camera is installed at a high position on a wall and can shoot the indoor open scene from an obliquely downward angle; the statistical area (that is, the specific area) is located in the central part of the shooting range, at a certain distance from the boundary of the shooting range.

People counting is performed by a program running on an embedded development board. The embedded development board is installed near the camera and includes a communication unit; it can connect to the camera over a short-range wireless link to obtain the video data the camera captures, and can also upload the resulting pedestrian flow statistics to a predetermined server through the communication unit.
The structure of the people-counting software running on the embedded development board is shown in FIG. 3.

The RGB camera continuously captures images of the open scene at 25 frames per second, forming a video stream. The embedded development board extracts one RGB frame from the captured video stream every fixed number of frames as the current valid frame.

The people-counting software uses the YOLO object detection algorithm to identify each pedestrian in the current valid frame, determining each pedestrian's position coordinates in the image coordinate system (a kind of position feature) and the image of the partial region each pedestrian occupies.
For each pedestrian in the current valid frame, the pedestrian re-identification step extracts the pedestrian's appearance features from the image of the region the pedestrian occupies. Based on these appearance features and position coordinates, the Hungarian algorithm is used to determine whether the pedestrian matches any pedestrian in the 3 valid frames preceding the current valid frame, thereby identifying whether the pedestrian appeared in those 3 valid frames. If not, a new person identifier is generated to uniquely represent the pedestrian, and the pedestrian is marked with the new identifier; if so, the pedestrian's existing identifier is used to mark them.
Based on the person identifier assigned to each pedestrian in the current valid frame by the re-identification algorithm and the position coordinates of each pedestrian output by the YOLO object detection algorithm: if a pedestrian appeared outside the statistical area but never inside it in the previous 3 valid frames and appears inside the statistical area in the current valid frame, the pedestrian is marked as entering the statistical area in the current valid frame; if a pedestrian appeared inside the statistical area but never outside it in the previous 3 valid frames and appears outside the statistical area in the current valid frame, the pedestrian is marked as leaving the statistical area in the current valid frame.
The people-counting software uses a predetermined statistical time period as its cycle. Within one cycle it accumulates, over all valid frames, the number of pedestrians leaving the statistical area as the pedestrian flow count, and it takes the time interval between the valid frame in which a single pedestrian leaves the statistical area and the valid frame in which that pedestrian entered it as the pedestrian's dwell time in the statistical area, accumulating the dwell times of all such pedestrians. After a cycle ends, the people-counting software sends the cycle's pedestrian flow count and the total dwell time of these pedestrians to the predetermined server.
Corresponding to the above process, the embodiments of this specification also provide a real-time pedestrian flow statistics device for an open scene. The device can be implemented in software, in hardware, or in a combination of software and hardware. Taking a software implementation as an example, as a device in the logical sense it is formed by the CPU (Central Processing Unit) of the host device reading the corresponding computer program instructions into memory and running them. At the hardware level, in addition to the CPU, memory, and storage shown in FIG. 4, the device hosting the real-time pedestrian flow statistics device typically also includes other hardware such as chips for transmitting and receiving wireless signals and/or boards for implementing network communication.

FIG. 5 shows a real-time pedestrian flow statistics device for an open scene provided by an embodiment of this specification, which includes a valid-frame extraction unit, a pedestrian detection unit, a pedestrian re-identification unit, and a flow calculation unit. The valid-frame extraction unit is configured to extract a current valid frame from a real-time video stream of the open scene, the video stream being shot by a camera placed above the open scene; the pedestrian detection unit is configured to detect each pedestrian in the current valid frame; the pedestrian re-identification unit is configured to use a pedestrian re-identification algorithm to identify pedestrians in the current valid frame that are the same as those in at least one previous valid frame; and the flow calculation unit is configured to calculate, according to the result of the pedestrian re-identification algorithm, the number of people passing through the open scene.
Optionally, the number of people passing through the open scene includes the number of people passing through a predetermined statistical area in the open scene, and the flow calculation unit is specifically configured for at least one of the following: when a pedestrian appeared outside the statistical area but never inside it in previous valid frames and appears inside the statistical area in the current valid frame, the pedestrian is considered to enter the statistical area, and the number of people passing through the statistical area is calculated according to the number of people entering it; when a pedestrian appeared inside the statistical area but never outside it in previous valid frames and appears outside the statistical area in the current valid frame, the pedestrian is considered to leave the statistical area, and the number of people passing through the statistical area is calculated according to the number of people leaving it.

In one implementation, the pedestrian re-identification unit is specifically configured to: obtain the appearance features, or the appearance and position features, of each pedestrian in the current valid frame; and determine, from the appearance features or the appearance and position features, whether each pedestrian in the current valid frame already exists in at least one previous valid frame, generating a new person identifier to mark the pedestrian if not, and marking the pedestrian with the existing identifier if so.

In the above implementation, the pedestrian re-identification unit determines, from the appearance features or the appearance and position features, whether a pedestrian in the current valid frame already exists in previous valid frames by using the Hungarian algorithm to match pedestrians in the current valid frame against those in previous valid frames according to their appearance features, or their appearance and position features.

Optionally, the pedestrian detection unit is specifically configured to use the YOLO object detection method to extract the image range and position information of each pedestrian from the current valid frame.

Optionally, the camera is a red-green-blue (RGB) camera.

Optionally, the device runs on an embedded development board.
The embodiments of this specification provide a computer device that includes a memory and a processor. The memory stores a computer program executable by the processor; when the processor runs the stored computer program, it performs the steps of the real-time pedestrian flow statistics method for an open scene in the embodiments of this specification. For a detailed description of those steps, refer to the preceding content; it is not repeated here.

The embodiments of this specification provide a computer-readable storage medium on which computer programs are stored; when run by a processor, these programs perform the steps of the real-time pedestrian flow statistics method for an open scene in the embodiments of this specification. For a detailed description of those steps, refer to the preceding content; it is not repeated here.
The above are merely preferred embodiments of this specification and are not intended to limit this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall fall within the scope of protection of this application.
In a typical configuration, the computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.

The memory may include non-persistent memory in computer-readable media, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.

Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise", and any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, commodity, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, commodity, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, commodity, or device that includes the element.
Those skilled in the art should understand that the embodiments of this specification may be provided as a method, a system, or a computer program product. Accordingly, the embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) that contain computer-usable program code.

Claims (16)

  1. A real-time pedestrian flow statistics method for an open scene, including:
    extracting a current valid frame from a real-time video stream of the open scene, the video stream being shot by a camera placed above the open scene;
    detecting each pedestrian in the current valid frame;
    using a pedestrian re-identification algorithm to identify pedestrians in the current valid frame that are the same as those in at least one previous valid frame; and
    calculating, according to the result of the pedestrian re-identification algorithm, the number of people passing through the open scene.
  2. The method of claim 1, wherein the number of people passing through the open scene includes the number of people passing through a predetermined statistical area in the open scene;
    calculating the number of people passing through the open scene according to the result of the pedestrian re-identification algorithm includes at least one of the following:
    when a pedestrian appeared outside the statistical area but never inside it in previous valid frames and appears inside the statistical area in the current valid frame, considering the pedestrian to enter the statistical area, and calculating the number of people passing through the statistical area according to the number of people entering it;
    when a pedestrian appeared inside the statistical area but never outside it in previous valid frames and appears outside the statistical area in the current valid frame, considering the pedestrian to leave the statistical area, and calculating the number of people passing through the statistical area according to the number of people leaving it.
  3. The method of claim 1, wherein using a pedestrian re-identification algorithm to identify pedestrians in the current valid frame that are the same as those in at least one previous valid frame includes: obtaining the appearance features, or the appearance and position features, of each pedestrian in the current valid frame; and determining, from the appearance features or the appearance and position features, whether each pedestrian in the current valid frame already exists in at least one previous valid frame, generating a new person identifier to mark the pedestrian if not, and marking the pedestrian with the existing person identifier if so.
  4. The method of claim 3, wherein determining, from the appearance features or the appearance and position features, whether a pedestrian in the current valid frame already exists in previous valid frames includes: using the Hungarian algorithm to match pedestrians in the current valid frame against those in previous valid frames according to the appearance features, or the appearance and position features, of the pedestrians.
  5. The method of claim 1, wherein detecting each pedestrian in the current valid frame includes: using the YOLO object detection method to extract the image range and position information of each pedestrian from the current valid frame.
  6. The method of claim 1, wherein the camera is a red-green-blue (RGB) camera.
  7. The method of claim 1, wherein the method runs on an embedded development board.
  8. A real-time pedestrian flow statistics device for an open scene, including:
    a valid-frame extraction unit, configured to extract a current valid frame from a real-time video stream of the open scene, the video stream being shot by a camera placed above the open scene;
    a pedestrian detection unit, configured to detect each pedestrian in the current valid frame;
    a pedestrian re-identification unit, configured to use a pedestrian re-identification algorithm to identify pedestrians in the current valid frame that are the same as those in at least one previous valid frame; and
    a flow calculation unit, configured to calculate, according to the result of the pedestrian re-identification algorithm, the number of people passing through the open scene.
  9. The device of claim 8, wherein the number of people passing through the open scene includes the number of people passing through a predetermined statistical area in the open scene;
    the flow calculation unit is specifically configured for at least one of the following:
    when a pedestrian appeared outside the statistical area but never inside it in previous valid frames and appears inside the statistical area in the current valid frame, considering the pedestrian to enter the statistical area, and calculating the number of people passing through the statistical area according to the number of people entering it;
    when a pedestrian appeared inside the statistical area but never outside it in previous valid frames and appears outside the statistical area in the current valid frame, considering the pedestrian to leave the statistical area, and calculating the number of people passing through the statistical area according to the number of people leaving it.
  10. The device of claim 8, wherein the pedestrian re-identification unit is specifically configured to: obtain the appearance features, or the appearance and position features, of each pedestrian in the current valid frame; and determine, from the appearance features or the appearance and position features, whether each pedestrian in the current valid frame already exists in at least one previous valid frame, generating a new person identifier to mark the pedestrian if not, and marking the pedestrian with the existing person identifier if so.
  11. The device of claim 10, wherein the pedestrian re-identification unit determining, from the appearance features or the appearance and position features, whether a pedestrian in the current valid frame already exists in previous valid frames includes: using the Hungarian algorithm to match pedestrians in the current valid frame against those in previous valid frames according to the appearance features, or the appearance and position features, of the pedestrians.
  12. The device of claim 8, wherein the pedestrian detection unit is specifically configured to: use the YOLO object detection method to extract the image range and position information of each pedestrian from the current valid frame.
  13. The device of claim 8, wherein the camera is a red-green-blue (RGB) camera.
  14. The device of claim 8, wherein the device runs on an embedded development board.
  15. A computer device, including a memory and a processor, wherein the memory stores a computer program executable by the processor, and when the processor runs the computer program it performs the steps of any one of claims 1 to 7.
  16. A computer-readable storage medium on which a computer program is stored, wherein, when the computer program is run by a processor, it performs the steps of any one of claims 1 to 7.
PCT/CN2019/110014 2018-11-09 2019-10-08 Method and apparatus for real-time pedestrian flow statistics in an open scene WO2020093829A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811331786.XA CN109902551A (zh) 2018-11-09 2018-11-09 Method and apparatus for real-time pedestrian flow statistics in an open scene
CN201811331786.X 2018-11-09

Publications (1)

Publication Number Publication Date
WO2020093829A1 true WO2020093829A1 (zh) 2020-05-14

Family

ID=66943300

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/110014 WO2020093829A1 (zh) 2018-11-09 2019-10-08 Method and apparatus for real-time pedestrian flow statistics in an open scene

Country Status (3)

Country Link
CN (1) CN109902551A (zh)
TW (1) TWI729454B (zh)
WO (1) WO2020093829A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113034544A (zh) * 2021-03-19 2021-06-25 奥比中光科技集团股份有限公司 一种基于深度相机的人流分析方法及装置
CN113297888A (zh) * 2020-09-18 2021-08-24 阿里巴巴集团控股有限公司 一种图像内容检测结果核查方法及装置
CN116385969A (zh) * 2023-04-07 2023-07-04 暨南大学 基于多摄像头协同和人类反馈的人员聚集检测系统
CN116665243A (zh) * 2021-03-23 2023-08-29 福建诺诚数字科技有限公司 一种基于tof深度图测量行人间距的方法和装置以及设备

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902551A (zh) * 2018-11-09 2019-06-18 阿里巴巴集团控股有限公司 开放场景的实时人流统计方法和装置
CN112149457A (zh) * 2019-06-27 2020-12-29 西安光启未来技术研究院 人流统计的方法、装置、服务器和计算机可读存储介质
CN110659588A (zh) * 2019-09-02 2020-01-07 平安科技(深圳)有限公司 一种客流量统计方法、装置及计算机可读存储介质
CN110705494A (zh) * 2019-10-10 2020-01-17 北京东软望海科技有限公司 人流量监测方法、装置、电子设备及计算机可读存储介质
WO2021248479A1 (zh) * 2020-06-12 2021-12-16 深圳盈天下视觉科技有限公司 人流量数据监控系统及其人流量数据的展示方法和装置
CN112560765A (zh) * 2020-12-24 2021-03-26 上海明略人工智能(集团)有限公司 基于行人重识别的人流统计方法、系统、设备及存储介质
CN112732792A (zh) * 2021-01-13 2021-04-30 北京明略昭辉科技有限公司 基于ToF的客流量统计和标签化方法、系统、设备及存储介质
CN113657304B (zh) * 2021-08-20 2024-08-16 小马国炬(玉溪)科技有限公司 一种人流追踪统计方法、装置、设备及存储介质
TWI796033B (zh) * 2021-12-07 2023-03-11 巨鷗科技股份有限公司 人流分析辨識系統

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809178A (zh) * 2014-12-31 2016-07-27 中国科学院深圳先进技术研究院 一种基于人脸属性的人群分析方法及装置
CN108388851A (zh) * 2018-02-09 2018-08-10 北京京东金融科技控股有限公司 信息统计方法、装置、存储介质及电子设备
CN108764167A (zh) * 2018-05-30 2018-11-06 上海交通大学 一种时空关联的目标重识别方法和系统
CN109902551A (zh) * 2018-11-09 2019-06-18 阿里巴巴集团控股有限公司 开放场景的实时人流统计方法和装置

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070098303A1 (en) * 2005-10-31 2007-05-03 Eastman Kodak Company Determining a particular person from a collection
CN101231755B (zh) * 2007-01-25 2013-03-06 上海遥薇(集团)有限公司 运动目标跟踪及数量统计方法
TW201033908A (en) * 2009-03-12 2010-09-16 Micro Star Int Co Ltd System and method for counting people flow
CN103425967B (zh) * 2013-07-21 2016-06-01 浙江大学 一种基于行人检测和跟踪的人流监控方法
CN104318263A (zh) * 2014-09-24 2015-01-28 南京邮电大学 一种实时高精度人流计数方法
US9996752B2 (en) * 2016-08-30 2018-06-12 Canon Kabushiki Kaisha Method, system and apparatus for processing an image
CN108021848B (zh) * 2016-11-03 2021-06-01 浙江宇视科技有限公司 客流量统计方法及装置
CN106650695A (zh) * 2016-12-30 2017-05-10 苏州万店掌网络科技有限公司 一种基于视频分析技术的跟踪统计人流量的系统
CN107423708A (zh) * 2017-07-25 2017-12-01 成都通甲优博科技有限责任公司 一种确定视频中行人人流量的方法及其装置
CN108664946A (zh) * 2018-05-18 2018-10-16 上海极歌企业管理咨询中心(有限合伙) 基于图像的人流特征获取方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809178A (zh) * 2014-12-31 2016-07-27 中国科学院深圳先进技术研究院 一种基于人脸属性的人群分析方法及装置
CN108388851A (zh) * 2018-02-09 2018-08-10 北京京东金融科技控股有限公司 信息统计方法、装置、存储介质及电子设备
CN108764167A (zh) * 2018-05-30 2018-11-06 上海交通大学 一种时空关联的目标重识别方法和系统
CN109902551A (zh) * 2018-11-09 2019-06-18 阿里巴巴集团控股有限公司 开放场景的实时人流统计方法和装置

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113297888A (zh) * 2020-09-18 2021-08-24 阿里巴巴集团控股有限公司 一种图像内容检测结果核查方法及装置
CN113297888B (zh) * 2020-09-18 2024-06-07 阿里巴巴集团控股有限公司 一种图像内容检测结果核查方法及装置
CN113034544A (zh) * 2021-03-19 2021-06-25 奥比中光科技集团股份有限公司 一种基于深度相机的人流分析方法及装置
CN116665243A (zh) * 2021-03-23 2023-08-29 福建诺诚数字科技有限公司 一种基于tof深度图测量行人间距的方法和装置以及设备
CN116385969A (zh) * 2023-04-07 2023-07-04 暨南大学 基于多摄像头协同和人类反馈的人员聚集检测系统
CN116385969B (zh) * 2023-04-07 2024-03-12 暨南大学 基于多摄像头协同和人类反馈的人员聚集检测系统

Also Published As

Publication number Publication date
CN109902551A (zh) 2019-06-18
TW202036375A (zh) 2020-10-01
TWI729454B (zh) 2021-06-01

Similar Documents

Publication Publication Date Title
WO2020093829A1 (zh) Method and apparatus for real-time pedestrian flow statistics in an open scene
WO2020093830A1 (zh) Method and apparatus for estimating pedestrian flow conditions in a designated area
US10880524B2 (en) System and method for activity monitoring using video data
CN108629791B (zh) 行人跟踪方法和装置及跨摄像头行人跟踪方法和装置
US11138442B2 (en) Robust, adaptive and efficient object detection, classification and tracking
US9530221B2 (en) Context aware moving object detection
WO2020211624A1 (zh) 对象追踪方法、追踪处理方法、相应的装置、电子设备
WO2020094088A1 (zh) 一种图像抓拍方法、监控相机及监控系统
Cetin et al. Methods and techniques for fire detection: signal, image and video processing perspectives
US10659680B2 (en) Method of processing object in image and apparatus for same
KR20160033800A (ko) 카운팅 방법 및 카운팅 장치
Fu et al. Scene-adaptive accurate and fast vertical crowd counting via joint using depth and color information
Sandifort et al. An entropy model for loiterer retrieval across multiple surveillance cameras
WO2016172262A1 (en) Systems and methods for processing video data for activity monitoring
CN109523573A (zh) 目标对象的跟踪方法和装置
US11334751B2 (en) Systems and methods for processing video data for activity monitoring
Han et al. Gait Recognition in Large-scale Free Environment via Single LiDAR
CN110572618B (zh) 一种非法拍照行为监控方法、装置及系统
Wang Distributed multi-object tracking with multi-camera systems composed of overlapping and non-overlapping cameras
Seidenari et al. Non-parametric anomaly detection exploiting space-time features
CN110956644A (zh) 一种运动轨迹确定方法及系统
Feng et al. Crowd Anomaly Scattering Detection Based on Information Entropy
Ma et al. Location and Fusion Algorithm of High-Rise Building Rescue Drill Scene Based on Binocular Vision
Turchini et al. Open Set Recognition for Unique Person Counting via Virtual Gates
Shukla et al. Software Auto Trigger Recording for Super Slow Motion Videos Using Statistical Change Detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19883194

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19883194

Country of ref document: EP

Kind code of ref document: A1