CN114049378A - Queuing analysis method and device - Google Patents

Queuing analysis method and device

Info

Publication number
CN114049378A
Authority
CN
China
Prior art keywords
human body
target human
queuing
queue
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111308427.4A
Other languages
Chinese (zh)
Inventor
刘诗男
杨昆霖
侯军
伊帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202111308427.4A
Publication of CN114049378A
Priority to PCT/CN2022/097274 (published as WO2023077797A1)
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/277: Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 11/00: Arrangements, systems or apparatus for checking, e.g. the occurrence of a condition, not provided for elsewhere
    • G07C 2011/04: Arrangements related to queuing systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure provide a queuing analysis method and a queuing analysis device. The method comprises the following steps: detecting image frames in a video stream and determining at least one target human body in the image frames, wherein the target human body is in a queuing queue; tracking the target human bodies detected in a plurality of image frames in the video stream and determining a tracking identifier of each target human body, wherein the tracking identifiers are used for marking the same target human body in different image frames; and determining a queuing analysis result of the queuing queue according to the tracking identifiers of the target human bodies in at least one image frame. The method obtains richer queuing information during queuing analysis, is highly practical, and adapts well to a variety of queuing scenarios.

Description

Queuing analysis method and device
Technical Field
The embodiment of the disclosure relates to the technical field of computer vision, in particular to a queuing analysis method and device.
Background
Queuing is ubiquitous in daily life, for example at supermarket cash registers, subway ticket counters, security checkpoints and canteens. Traditional queuing analysis methods, such as numbered-ticket systems, are not flexible enough to suit the full variety of queuing scenarios, and the queuing information they provide is limited.
Disclosure of Invention
In view of this, the disclosed embodiments provide at least one queue analysis method and apparatus.
Specifically, the embodiment of the present disclosure is implemented by the following technical solutions:
in a first aspect, a queuing analysis method is provided, where the method includes:
detecting image frames in a video stream, and determining at least one target human body in the image frames, wherein the target human body is in a queue;
tracking the target human body detected in a plurality of image frames in the video stream, and determining a tracking identifier of each target human body, wherein the tracking identifiers are used for marking the same target human body in different image frames;
and determining a queuing analysis result of the queuing queue according to the tracking identifier of the target human body in at least one image frame.
In some optional embodiments, the detecting image frames in the video stream, and determining at least one target human body in a queue in the image frames, includes:
detecting image frames in a video stream to obtain detection frames of all human bodies in the image frames;
and responding to the detection frame in a preset queuing area in the image frame, and determining the human body corresponding to the detection frame as the target human body in the queuing.
In some alternative embodiments, the queue analysis results include the number of people in the queue; the determining a queuing analysis result of the queuing queue according to the tracking identifier of the target human body in at least one image frame comprises the following steps:
and determining the number of people in a queue at the acquisition time corresponding to the image frame according to the number of the tracking marks of the target human body in the image frame.
In some alternative embodiments, the queue analysis results include queue wait times; the determining a queuing analysis result of the queuing queue according to the tracking identifier of the target human body in at least one image frame comprises the following steps:
for an image frame of a target human body marked by any tracking identifier at the tail of a queuing queue, determining the queuing start time of the target human body marked by the tracking identifier;
for an image frame of a target human body marked by any tracking identifier, which is positioned at the head of a queuing queue, determining the queuing end time of the target human body marked by the tracking identifier;
and determining the queuing waiting time according to the queuing starting time and the queuing ending time of the target human body.
In some optional embodiments, the queuing analysis result includes a service usage time; the determining a queuing analysis result of the queuing queue according to the tracking identifier of the target human body in at least one image frame comprises the following steps:
for any tracking identifier, determining a plurality of image frames of the target human body marked by the tracking identifier at the head of a queue in a queue;
determining the acquisition time of the image frame with the earliest acquisition time in the plurality of image frames as the service start time of the target human body marked by the tracking identification;
determining the service end time of the target human body marked by the tracking identification based on the detection that the target human body marked by the tracking identification leaves the queue;
and determining the service using time of the target human body according to the service starting time and the service ending time of the target human body.
In some optional embodiments, the method further comprises:
extracting first characteristic information of a first target human body in a first image frame and extracting second characteristic information of a second target human body in an adjacent second image frame; the first target human body is a human body at the head of a queue in the first image frame, and the second target human body is a human body at the head of the queue in the second image frame;
and determining that the first target human body has left the queuing queue in response to the similarity between the first characteristic information and the second characteristic information being smaller than a similarity threshold.
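As a sketch of this comparison, assuming the appearance features are plain numeric vectors (the feature extractor itself and the threshold value 0.5 are illustrative assumptions, not the patent's choices):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def head_has_left(first_feat, second_feat, threshold=0.5):
    """If the head-of-queue features extracted from two adjacent frames
    are dissimilar, the first target human body is judged to have left
    the queuing queue."""
    return cosine_similarity(first_feat, second_feat) < threshold
```

Any learned embedding could serve as the feature vector here; the cosine metric is only one reasonable choice of similarity.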
In some optional embodiments, the method further comprises:
and in response to the fact that the second target human body leaves the queuing queue, determining service use time corresponding to the second target human body according to the time when the first target human body leaves the queuing queue and the time when the second target human body leaves the queuing queue.
In some optional embodiments, the determining a queuing analysis result of the queuing queue according to the tracking identifier of the target human body in at least one of the image frames includes:
and in response to the number of tracking identifiers of target human bodies differing between adjacent image frames, and the target human body marked by the tracking identifier newly appearing in the later-captured image frame not being located at the tail of the queuing queue, determining that this target human body has cut into the queue.
In a second aspect, there is provided a queue analysis apparatus, the apparatus comprising:
the human body detection module is used for detecting image frames in a video stream and determining at least one target human body in the image frames, wherein the target human body is in a queue;
a human body tracking module, configured to track the target human body detected in a plurality of image frames in the video stream, and determine a tracking identifier of each target human body, where the tracking identifier is used to mark the same target human body in different image frames;
and the result analysis module is used for determining a queuing analysis result of the queuing queue according to the tracking identifier of the target human body in at least one image frame.
In a third aspect, an electronic device is provided, which includes a memory for storing computer instructions executable on a processor, and the processor is configured to implement the queuing analysis method according to any one of the embodiments of the present disclosure when executing the computer instructions.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the queuing analysis method according to any of the embodiments of the present disclosure.
In a fifth aspect, a computer program product is provided, the product comprising a computer program/instructions which, when executed by a processor, implement the queuing analysis method according to any of the embodiments of the present disclosure.
According to the queuing analysis method provided by the embodiments of the present disclosure, target human bodies in a queuing queue are tracked and analyzed from a video stream of the queuing scene. This yields richer queuing information, is highly practical, is not restricted to particular queuing scenes, and adapts well to a variety of them. With the queuing information in hand, resources such as manpower and materials devoted to the queuing crowd can be allocated more effectively, greatly improving service efficiency and reducing cost.
Drawings
To illustrate the technical solutions of one or more embodiments of the present disclosure or of the related art more clearly, the drawings used in describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure; those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a flow diagram illustrating a method of queue analysis in accordance with at least one embodiment of the present disclosure;
FIG. 2 is a flow diagram illustrating another queuing analysis method in accordance with at least one embodiment of the present disclosure;
fig. 2A is a diagram of a subway queuing scenario shown in at least one embodiment of the present disclosure;
FIG. 2B is a diagram illustrating a queuing scenario, according to at least one embodiment of the present disclosure;
FIG. 2C is a statistical graph illustrating similarity according to at least one embodiment of the present disclosure;
FIG. 3 is a block diagram of a queue analysis device, shown in at least one embodiment of the present disclosure;
FIG. 4 is a block diagram of another queue analysis device, shown in at least one embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating a hardware structure of an electronic device according to at least one embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the specification, as detailed in the appended claims.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information, without departing from the scope of the present specification. The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination", depending on the context.
There are many queuing scenarios in daily life, and service providers need statistics on how users queue in order to optimize service configuration. Existing queuing-statistics schemes mainly fall into two categories: 1. Counting queuers with numbered tickets, which is scene-bound and inflexible; it suits only small-scale settings and not large venues such as amusement parks or zoos. 2. Counting queuers via their mobile terminals, which requires users to actively join a network and is likewise confined to a local space, for example restrooms or enclosed stadium passages; its accuracy is low because people who are not queuing may also join the network, and the queuing information obtained is limited.
In view of this, at least one embodiment of the present disclosure provides a queuing analysis method, which analyzes a video stream in a queuing scene when performing queuing analysis on a queuing queue, so that queuing information can be mastered without being limited by the queuing scene.
As shown in fig. 1, fig. 1 is a flowchart illustrating a queuing analysis method according to at least one embodiment of the present disclosure, which may include the following processes:
in step 102, image frames in a video stream are detected, and at least one target human body in a queue in the image frames is determined.
In this embodiment, the video stream is composed of a plurality of image frames acquired from a queuing scene, and the video stream may be obtained by monitoring the queuing queue in real time or may be a video obtained by recording the queuing queue.
In this step, the detection of the image frame may be the detection of the entire image frame, or the detection of a calibrated queuing area in the image frame.
The present embodiment is not limited to the detection method for detecting the image frames in the video stream, and for example, the detection method may be performed by a neural network method, or may be performed by other methods.
In step 104, the target human bodies detected in a plurality of image frames in the video stream are tracked, and a tracking identifier of each target human body is determined.
The tracking identifier is used to mark the same target human body in different image frames. Since the same target human body appears in different video frames, tracking it across a plurality of image frames determines its position in each frame, and the tracking identifier marks it throughout.
The present embodiment does not limit the method used for tracking the target human body; for example, a Kalman-filter tracking algorithm, a SiamRPN-based (Siamese region proposal network) visual tracking algorithm, or the like may be used to track the target human body.
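Since the patent leaves the tracking method open, one deliberately simple illustration (much simpler than the Kalman-filter or SiamRPN trackers mentioned above) is a greedy IoU-association tracker that assigns persistent tracking identifiers across frames; the class and parameter names are hypothetical:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

class GreedyIouTracker:
    """Assigns a persistent trackID to each detection by greedily
    matching it to the most-overlapping box from the previous frame."""
    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.next_id = 0
        self.tracks = {}  # trackID -> last known box

    def update(self, boxes):
        assigned = {}
        free = dict(self.tracks)  # tracks not yet matched this frame
        for box in boxes:
            best_id, best_iou = None, self.iou_threshold
            for tid, prev in free.items():
                score = iou(box, prev)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:          # no overlap: new target, new ID
                best_id = self.next_id
                self.next_id += 1
            else:
                free.pop(best_id)
            assigned[best_id] = box
        self.tracks = assigned
        return assigned  # trackID -> current box
```

A production tracker would add motion prediction and appearance features; this sketch only shows how the same trackID can follow one human body from frame to frame.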
In step 106, determining a queuing analysis result of the queuing queue according to the tracking identifier of the target human body in at least one image frame.
The queuing analysis result may include a preliminary analysis result obtained by analyzing each image frame, and may further include a result obtained by further summarizing the preliminary analysis result.
For example, the queuing analysis result may include the number of people in the queuing queue, and the number of people in the queuing queue at the acquisition time corresponding to the image frame may be determined according to the number of the tracking identifiers of the target human body in the image frame. And the number of people who queue in a plurality of image frames of the video stream is further analyzed, so that the number of people who queue in different time periods can be obtained, and information such as the peak value, the peak time, the valley value, the valley time and the like of the number of people who queue in different time periods can be obtained.
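The per-frame statistic described above can be sketched as follows; the frame representation used here (a capture timestamp plus the set of trackIDs currently inside the queuing area) is an illustrative assumption, not the patent's data format:

```python
def queue_counts(frames):
    """frames: list of (timestamp, set_of_track_ids_in_queue).
    The headcount at each capture time is simply the number of
    tracking identifiers present in that frame."""
    return [(t, len(ids)) for t, ids in frames]

def peak(counts):
    """(timestamp, value) of the maximum headcount across frames."""
    return max(counts, key=lambda tc: tc[1])
```

Aggregating `queue_counts` over time windows would similarly yield the valley value and the peak/valley periods mentioned above.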
According to the queuing analysis method provided by the embodiments of the present disclosure, target human bodies in a queuing queue are tracked and analyzed from a video stream of the queuing scene. This yields richer queuing information, is highly practical, is not restricted to particular queuing scenes, and adapts well to a variety of them. With the queuing information in hand, resources such as manpower and materials devoted to the queuing crowd can be allocated more effectively, greatly improving service efficiency and reducing cost.
Fig. 2 is a flowchart of a queuing analysis method according to at least one embodiment of the present disclosure, which describes in more detail a process of detecting image frames in a video stream for queuing analysis in conjunction with the subway queuing scenario shown in fig. 2A. As shown in fig. 2, the method may include the following processing, where it should be noted that the execution order of each step is not limited by the present embodiment.
In step 202, image frames in a video stream are detected to obtain detection frames of respective human bodies in the image frames.
In this step, the image frames in the video stream may be detected by using a human body detection network, which is a pre-trained neural network for detecting a human body, and correspondingly obtains detection frames of each human body in the image frames. In another embodiment, the image frames in the video stream may be detected by using a human head key point detection network, and the detection frames of the heads of the human bodies in the image frames may be obtained correspondingly. Alternatively, the detection frames of other parts of the human body, such as the feet, the legs, etc., can be detected.
Inputting the image frame into a human body detection network, and outputting detection frames of all human bodies in the image frame, wherein the detection frames contain coordinate information and are used for representing the positions of the human bodies.
In step 204, in response to that the detection frame is in a preset queuing area in the image frame, determining that the human body corresponding to the detection frame is the target human body in a queuing queue.
The position and size of a queuing area can be defined in advance in the video picture captured by a camera near the subway ticket vending machine. In particular, in a scene where multiple queues may exist, the queue to be analyzed can be specified by delimiting its queuing area in the image frames of the video stream.
Generally, the viewing angle of the camera capturing the video stream in a queuing scene is fixed, and the queuing area is calibrated in advance for the video captured by that camera. One or more edges of the queuing area may also be selected, and the direction of the queue indicated.
In this embodiment, the predetermined queuing area in the image frame may be an area in a quadrilateral frame formed by black lines as shown in fig. 2A, and the direction of departure is shown by an arrow. Or may be an area within a frame of black lines as shown in fig. 2B, with the direction of departure being indicated by the arrow.
In this step, whether the human body corresponding to the detection frame is the target human body in the queuing queue may be determined by determining whether the feature point in the detection frame is in the queuing area, or in other examples, the determination may be performed according to whether the edge of the detection frame is in the queuing area or the overlapping degree of the detection frame and the queuing area.
The feature point is a point in the inner area of the detection frame or a point on the edge of the detection frame, and the selection of the feature point is not limited in this embodiment, for example, the feature point may be a point position in the lower edge of the detection frame, as shown by a white circle on the detection frame in fig. 2A. The feature point may also be the lower left corner or the lower right corner of the detection box.
And for the detection frame with the characteristic point positioned in the queuing area, determining that the target human body corresponding to the detection frame is also positioned in the queuing area, namely the target human body is positioned in the queuing.
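As an illustration of this check, a standard ray-casting point-in-polygon test can decide whether a detection frame's feature point falls inside the calibrated queuing area; using the midpoint of the lower edge as the feature point follows the white circles of Fig. 2A, while the function names are hypothetical:

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is (x, y) strictly inside the polygon given
    as a list of (x, y) vertices?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def is_queuing(det_box, queue_area):
    """Treat the midpoint of the detection box's lower edge as the
    feature point and test it against the calibrated queuing area."""
    x1, y1, x2, y2 = det_box
    return point_in_polygon(((x1 + x2) / 2, y2), queue_area)
```

As the text notes, the same decision could instead use the box edges or the overlap between the box and the queuing area.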
In step 206, the target human bodies detected in a plurality of image frames in the video stream are tracked, and a tracking identifier of each target human body is determined.
It should be noted that a target human body present in a certain image frame may, in other image frames, not yet be in the queuing queue because it has not started queuing; it may instead be in an area around the queue or outside the queuing area. Therefore, when tracking, both the detection frames of target human bodies inside the queuing area and the detection frames of human bodies around it can be tracked; that is, a tracking algorithm tracks every human body in the video picture, associates the same human body across moments, and thereby determines the tracking identifier (trackID) of each target human body.
The tracking identifier may be represented by a number, such as the numbers 0-6 in fig. 2A, which mark seven target human bodies in the queue. The tracking identifiers may be assigned in the order in which the target human bodies first appear in the picture of the video stream; for example, the target human body marked by tracking identifier 1 may have arrived at the scene earlier than the target human body marked by tracking identifier 5 standing ahead of it, yet only joined the queue after target human body 5 did.
The position of the target human body marked by the tracking identifier may be the position of the feature point, the position of the center of the area occupied by the detection frame, or the positions of other points on the detection frame.
In step 208, a queuing analysis result of the queuing queue is determined according to the tracking identifier of the target human body in at least one image frame.
By carrying out sequence analysis on the change of the tracking identifier in the queuing area at each moment, the position, the serial number, the real-time waiting time and other information of each target human body in the queuing queue at each moment can be obtained, and the queuing analysis result can be obtained by carrying out analysis statistics on the information.
For example, the queuing analysis result may include queuing waiting time and service usage time of each target human body, the number of people in the queuing at each moment, and average queuing time and average service usage time, etc.
For example, for an image frame of any tracking identifier, where a target human body marked by the tracking identifier is located at the tail of a queuing queue, the queuing start time of the target human body marked by the tracking identifier may be determined; for an image frame of a target human body marked by any tracking identifier, which is positioned at the head of a queuing queue, determining the queuing end time of the target human body marked by the tracking identifier; and determining the queuing waiting time according to the queuing starting time and the queuing ending time of the target human body.
Specifically, when the queuing waiting time of the target human body is calculated, for a plurality of image frames of which the target human body marked by any tracking identifier is located at the tail of the queuing queue, the acquisition time of the image frame with the earliest acquisition time in the plurality of image frames may be determined as the queuing start time of the target human body marked by the tracking identifier.
And for a plurality of image frames of which the target human body marked by the tracking identification is positioned at the head of a queue, determining the acquisition time of the image frame with the earliest acquisition time in the plurality of image frames as the queue end time of the target human body marked by the tracking identification.
And determining the queuing waiting time of the target human body according to the queuing starting time of the target human body and the queuing ending time of the target human body.
Generally, the target human body located at the end toward the departure direction, that is, the end indicated by the arrow, is using the service provided by the subway ticket machine, such as ticket collection, ticket purchase or an inquiry; this target human body can be regarded as the one at the head of the queue. The target human body at the opposite end of the queue is the most recent arrival, at the tail of the queue.
According to the position of a tracking identifier of a certain target human body, a plurality of image frames of the target human body at the tail of a queuing queue can be determined, wherein the image frame with the earliest acquisition time is the image frame of the target human body marked by the tracking identifier when the target human body just enters the queuing queue, and the acquisition time of the image frame is recorded as the queuing start time of the target human body.
Similarly, a plurality of image frames of the target human body at the head of the queue can be determined according to the position of the tracking identifier of the target human body, wherein the image frame with the earliest acquisition time is the image frame when the head of the queue, marked by the tracking identifier, of the target human body just enters the queue and is ready to start using the subway ticket vending machine, and the acquisition time of the image frame is recorded as the queue ending time of the target human body, which can also be said to be the service starting time of the target human body.
And subtracting the queuing start time and the queuing end time of the target human body to obtain the queuing waiting time of the target human body.
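A minimal sketch of this wait-time computation, assuming each trackID's observations are available as (timestamp, position) pairs, a representation invented here for illustration:

```python
def queuing_wait_time(track_positions):
    """track_positions: list of (timestamp, position) observations for
    one trackID, where position is 'tail', 'head' or 'middle'.
    Queuing start time = earliest frame at the tail;
    queuing end time   = earliest frame at the head;
    wait time          = end minus start."""
    tail_times = [t for t, pos in track_positions if pos == 'tail']
    head_times = [t for t, pos in track_positions if pos == 'head']
    if not tail_times or not head_times:
        return None  # not yet observed at both ends of the queue
    return min(head_times) - min(tail_times)
```

Timestamps here are the capture times of the image frames; any monotone time unit (seconds, frame indices) works.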
In addition, based on the calculated queuing waiting times of a plurality of target human bodies in the queue, the queuing waiting time of a target human body that has just joined can be estimated; for example, the average queuing waiting time of the plurality of target human bodies may be used as the estimate, or the waiting time may be predicted from a curve fitted to the relationship between queuing waiting time and the number of people waiting.
For another example, for any one of the tracking identifiers, a plurality of image frames of the target human body marked by the tracking identifier at the head of the queue in the queue are determined.
And determining the acquisition time of the image frame with the earliest acquisition time in the plurality of image frames as the service starting time of the target human body marked by the tracking identification.
And determining the service end time of the target human body marked by the tracking identification based on the detection that the target human body marked by the tracking identification leaves the queue. For example, when the target human body appears at the head of the queue last time, it is considered that the target human body is detected to leave the queue, and the acquisition time of the image frame with the latest acquisition time among the plurality of image frames is determined as the service end time of the target human body marked by the tracking identifier.
And determining the service using time of the target human body according to the service starting time and the service ending time of the target human body.
The method of calculating the service use time is similar to the method of calculating the queuing wait time in the above example, and the queuing end time of the target human body can be used as the service start time of the target human body.
A plurality of image frames in which the target human body is at the head of the queue are determined according to the position of the tracking identifier of the target human body. The image frame with the latest acquisition time is the frame in which the target human body marked by the tracking identifier has finished using the subway ticket vending machine and is about to leave the queue; the acquisition time of this image frame is recorded as the service end time of the target human body.
The service use time of the target human body is obtained by subtracting the service start time from the service end time of the target human body.
The above calculations of queuing waiting time and service use time can be performed for each target human body, and the average queuing time, average service use time, and the like can be further obtained.
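The per-person bookkeeping described above (queuing start at the tail, service start and end at the head, and the averages) can be sketched as follows. The dictionary-of-timestamps representation is an assumption made for illustration only; the patent does not prescribe a data structure:

```python
def queue_time_stats(tail_frames, head_frames):
    """Compute per-person queuing statistics from frame timestamps.

    tail_frames: {track_id: [timestamps when this person was at the queue tail]}
    head_frames: {track_id: [timestamps when this person was at the queue head]}
    Returns {track_id: (wait_time, service_time)}; wait_time is None for a
    person never seen at the tail.
    """
    stats = {}
    for tid, head_ts in head_frames.items():
        service_start = min(head_ts)  # earliest frame at the head
        service_end = max(head_ts)    # latest frame at the head
        if tid in tail_frames:
            queue_start = min(tail_frames[tid])  # earliest frame at the tail
            wait = service_start - queue_start   # queuing end == service start
        else:
            wait = None
        stats[tid] = (wait, service_end - service_start)
    return stats

def averages(stats):
    """Average queuing time and average service use time over all persons."""
    waits = [w for w, _ in stats.values() if w is not None]
    services = [s for _, s in stats.values()]
    avg_wait = sum(waits) / len(waits) if waits else None
    avg_service = sum(services) / len(services) if services else None
    return avg_wait, avg_service
```
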
For another example, in response to the number of tracking identifiers of target human bodies differing between adjacent image frames, and the target human body marked by the tracking identifier newly appearing in the later image frame not being located at the tail of the queue, it is determined that a target human body has cut into the queue.
For two adjacent image frames in the video stream, when the number of tracking identifiers in the queue differs, the reason is generally that a target human body has newly entered the queue or newly left it. Normally, a target human body that newly enters the queue should be located at the tail of the queue; if the target human body marked by the newly appearing tracking identifier in the later image frame is not located at the tail, it is determined that this newly entered target human body has cut into the queue.
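The queue-jump check in this paragraph can be sketched as follows. Representing the queue as an ordered list of tracking identifiers (head first, tail last) is an assumption for illustration:

```python
def detect_queue_jump(prev_ids, curr_ids):
    """Flag queue-jumping between two adjacent frames.

    prev_ids / curr_ids: tracking identifiers of the queue, ordered from
    head to tail, in the earlier and later image frame respectively.
    Returns the set of newly appeared ids that are NOT at the queue tail,
    i.e. the persons judged to have cut into the queue.
    """
    new_ids = set(curr_ids) - set(prev_ids)
    return {tid for tid in new_ids if curr_ids and tid != curr_ids[-1]}
```

For example, if the queue goes from [1, 2, 3] to [1, 4, 2, 3], id 4 appeared somewhere other than the tail and is flagged; a newcomer who joins at the tail is not.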
The queuing analysis method provided by the technical solutions of the embodiments of the present disclosure can mark the position and size of a queuing area in the image frames of a video stream, and can track and analyze the queuing target human bodies in the queuing area to be detected. The queuing area to be analyzed can be configured flexibly and the queuing information monitored in a targeted manner, giving the method good adaptability to a variety of queuing scenarios. Service operators can therefore optimize the allocation of resources such as manpower and materials for the queuing crowd, greatly improving service efficiency and reducing costs.
In the above embodiment, whether the target human body at the head of the queue has changed, that is, whether a target human body has just entered or just left the head of the queue, is determined according to the position of the tracking identifier. However, the tracking identifier at the head of the queue may sometimes jump, for example when two target human bodies are close to each other or when the human body tracking result is inaccurate. The method in the above embodiment may then misjudge whether the target human body at the head of the queue has changed, affecting the accuracy of the queuing analysis result.
In one implementation, on the basis of the foregoing embodiments, the present disclosure provides a queuing analysis method that makes the queuing analysis result more accurate. When determining whether the target human body at the head of the queue has left, the method compares the ReID (Person Re-identification, pedestrian re-identification) features of the target human body at the head of the queue. After step 104 or step 206 of the foregoing embodiments, the method further includes:
extracting first feature information of a first target human body in a first image frame, extracting second feature information of a second target human body in a second image frame, comparing the first feature information with the second feature information, and determining that the first target human body leaves a queuing queue in response to the similarity between the first feature information and the second feature information being smaller than a similarity threshold.
The first target human body is a human body at the head of a queue in a first image frame, the second target human body is a human body at the head of a queue in a second image frame, the first image frame and the second image frame are adjacent image frames in a video stream, and the acquisition time of the first image frame is before the acquisition time of the second image frame.
The first feature information and the second feature information may generally be ReID features of the target human body. The ReID features may include features of various attributes of the target human body, such as clothing, posture, hairstyle, and body shape.
Generally, people in a queue are in one of three states:

Entering the queue: each trackID that newly appears at the tail of the queue is considered a person who has newly entered the queue.

In the queue: a person who continuously exists in the queue is considered to be queuing in the queue.

Dequeuing: a person who leaves the queue. Judging this state is important for the queuing analysis. Because the trackID may jump, the ReID feature of the target human body at the head of the queue in the previous frame of the video stream is compared with the ReID feature of the target human body at the head of the queue in the current frame, to judge whether the target human body at the head of the queue in the previous frame has left the queue.
In a specific implementation, the ReID feature of the target human body at the head of the queue in each image frame of the video stream may be extracted by a pedestrian re-identification technique, and the ReID features of the target human bodies at the head of the queue in two adjacent image frames are compared to obtain their similarity. The greater the similarity, the more likely the first target human body and the second target human body are the same target human body; conversely, the smaller the similarity, the less likely they are the same.
For example, for a certain video stream, the result of comparing each image frame with the adjacent previous image frame is shown in fig. 2C. In fig. 2C, the abscissa indicates the image frame index, where "0" indicates the start of the video stream, "500" indicates the 500th image frame, and "2000" indicates the 2000th image frame; the ordinate indicates the similarity. The closer the similarity is to "1.0", the more likely the person at the head of the queue is the same in the two adjacent image frames.
The similarity threshold may be set to 0.5; if the similarity is less than 0.5, the persons at the head of the queue in the two adjacent image frames are considered not to be the same person. In that case, the first target human body at the head of the queue in the previous frame has already left the queue by the time corresponding to the current frame. The acquisition time of the previous frame may then be determined as the service end time of the first target human body, and the acquisition time of the current frame may be determined as the service start time, or the queuing end time, of the second target human body.
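The head-of-queue ReID comparison can be sketched as follows, using the 0.5 threshold from the text. Note that the patent does not specify the similarity metric; cosine similarity is an assumption commonly used with ReID feature vectors:

```python
import math

SIM_THRESHOLD = 0.5  # threshold suggested in the text

def cosine_similarity(a, b):
    """Cosine similarity between two ReID feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def head_has_changed(prev_feat, curr_feat, threshold=SIM_THRESHOLD):
    """Return True if the person at the queue head in the previous frame is
    judged to have left, i.e. the ReID features of the heads of two adjacent
    frames are too dissimilar to be the same person."""
    return cosine_similarity(prev_feat, curr_feat) < threshold
```

When `head_has_changed` returns True, the previous frame's timestamp would be taken as the first person's service end time and the current frame's timestamp as the second person's service start time, as described above.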
By the same method used to judge that the first target human body has left the queue, it can also be judged that the second target human body leaves the queue at the acquisition time corresponding to some later image frame.
In response to detecting that the second target human body has left the queue, the service use time corresponding to the second target human body is determined by subtracting the time when the first target human body left the queue from the time when the second target human body left the queue.
Similarly, the queuing waiting time of the target human body may also be calculated from the queuing end time of the target human body determined by the method in this embodiment, combined with the queuing start time of the target human body obtained by the method in the previous embodiment.
In this embodiment, whether the target human body at the head of the queue has left is judged by ReID feature comparison, which effectively avoids misjudging whether a target human body has dequeued because of inaccurate tracking and improves the accuracy of the queuing analysis. The queuing waiting time and service use time are thus counted more accurately when a queuing area is analyzed, and the customers' queuing experience can be better improved according to the queuing analysis result.
As shown in fig. 3, fig. 3 is a block diagram of a queue analyzing apparatus according to at least one embodiment of the present disclosure, where the apparatus includes:
the human body detection module 31 is configured to detect image frames in a video stream, and determine at least one target human body in a queue in the image frames;
a human body tracking module 32, configured to track the target human body detected in a plurality of image frames in the video stream, and determine a tracking identifier of each target human body, where the tracking identifier is used to mark the same target human body in different image frames;
and the result analysis module 33 is configured to determine a queuing analysis result of the queuing queue according to the tracking identifier of the target human body in at least one image frame.
In an example, the human body detection module 31 is specifically configured to: detect image frames in a video stream to obtain detection frames of the human bodies in the image frames; and in response to a detection frame being within a preset queuing area in the image frame, determine the human body corresponding to the detection frame as a target human body in the queue.
In one example, the queue analysis results include the number of people in the queue; the result analysis module 33 is specifically configured to: and determining the number of people in a queue at the acquisition time corresponding to the image frame according to the number of the tracking marks of the target human body in the image frame.
In one example, the queue analysis results include queue wait times; the result analysis module 33 is specifically configured to: for an image frame of a target human body marked by any tracking identifier at the tail of a queuing queue, determining the queuing start time of the target human body marked by the tracking identifier; for an image frame of a target human body marked by any tracking identifier, which is positioned at the head of a queuing queue, determining the queuing end time of the target human body marked by the tracking identifier; and determining the queuing waiting time according to the queuing starting time and the queuing ending time of the target human body.
In one example, the queuing analysis results include service usage time; the result analysis module 33 is specifically configured to: for any tracking identifier, determining a plurality of image frames of the target human body marked by the tracking identifier at the head of a queue in a queue; determining the acquisition time of the image frame with the earliest acquisition time in the plurality of image frames as the service start time of the target human body marked by the tracking identification; determining the service end time of the target human body marked by the tracking identification based on the detection that the target human body marked by the tracking identification leaves the queue; and determining the service using time of the target human body according to the service starting time and the service ending time of the target human body.
In one example, the result analysis module 33 is specifically configured to: in response to the number of tracking identifiers of target human bodies differing between adjacent image frames, and the target human body marked by the tracking identifier newly appearing in the later image frame not being located at the tail of the queue, determine that the target human body marked by the newly appearing tracking identifier has cut into the queue.
In one example, as shown in fig. 4, the apparatus further comprises: a feature comparison module 34, configured to: extract first feature information of a first target human body in a first image frame and second feature information of a second target human body in an adjacent second image frame, where the first target human body is the human body at the head of the queue in the first image frame and the second target human body is the human body at the head of the queue in the second image frame; and determine that the first target human body leaves the queue in response to the similarity between the first feature information and the second feature information being smaller than a similarity threshold.
In one example, the result analysis module 33 is specifically configured to: and in response to the fact that the second target human body leaves the queuing queue, determining service use time corresponding to the second target human body according to the time when the first target human body leaves the queuing queue and the time when the second target human body leaves the queuing queue.
The implementation process of the functions and actions of each module in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
An embodiment of the present disclosure further provides an electronic device, as shown in fig. 5, where the electronic device includes a memory 51 and a processor 52, the memory 51 is configured to store computer instructions executable on the processor, and the processor 52 is configured to implement the queuing analysis method according to any embodiment of the present disclosure when executing the computer instructions.
Embodiments of the present disclosure also provide a computer program product, which includes a computer program/instruction, and when the computer program/instruction is executed by a processor, the queuing analysis method according to any embodiment of the present disclosure is implemented.
The embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the queuing analysis method according to any embodiment of the present disclosure.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in the specification. One of ordinary skill in the art can understand and implement it without inventive effort.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It will be understood that the present description is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (12)

1. A method of queue analysis, the method comprising:
detecting image frames in a video stream, and determining at least one target human body in the image frames, wherein the target human body is in a queue;
tracking the target human body detected in a plurality of image frames in the video stream, and determining a tracking identifier of each target human body, wherein the tracking identifiers are used for marking the same target human body in different image frames;
and determining a queuing analysis result of the queuing queue according to the tracking identifier of the target human body in at least one image frame.
2. The method of claim 1, wherein the detecting image frames in a video stream, and determining at least one target human body in the image frames in a queue comprises:
detecting image frames in a video stream to obtain detection frames of all human bodies in the image frames;
and responding to the detection frame in a preset queuing area in the image frame, and determining the human body corresponding to the detection frame as the target human body in the queuing.
3. The method of claim 1 or 2, wherein the queue analysis results include the number of people in the queue; the determining a queuing analysis result of the queuing queue according to the tracking identifier of the target human body in at least one image frame comprises the following steps:
and determining the number of people in a queue at the acquisition time corresponding to the image frame according to the number of the tracking marks of the target human body in the image frame.
4. The method of any of claims 1-3, wherein the queue analysis results include queue wait times; the determining a queuing analysis result of the queuing queue according to the tracking identifier of the target human body in at least one image frame comprises the following steps:
for an image frame of a target human body marked by any tracking identifier at the tail of a queuing queue, determining the queuing start time of the target human body marked by the tracking identifier;
for an image frame of a target human body marked by any tracking identifier, which is positioned at the head of a queuing queue, determining the queuing end time of the target human body marked by the tracking identifier;
and determining the queuing waiting time according to the queuing starting time and the queuing ending time of the target human body.
5. The method of any of claims 1-4, wherein the queue analysis results include service usage time; the determining a queuing analysis result of the queuing queue according to the tracking identifier of the target human body in at least one image frame comprises the following steps:
for any tracking identifier, determining a plurality of image frames of the target human body marked by the tracking identifier at the head of a queue in a queue;
determining the acquisition time of the image frame with the earliest acquisition time in the plurality of image frames as the service start time of the target human body marked by the tracking identification;
determining the service end time of the target human body marked by the tracking identification based on the detection that the target human body marked by the tracking identification leaves the queue;
and determining the service using time of the target human body according to the service starting time and the service ending time of the target human body.
6. The method according to any one of claims 1-5, further comprising:
extracting first characteristic information of a first target human body in a first image frame and extracting second characteristic information of a second target human body in an adjacent second image frame; the first target human body is a human body at the head of a queue in the first image frame, and the second target human body is a human body at the head of the queue in the second image frame;
and determining that the first target human body leaves the queue in response to the similarity between the first characteristic information and the second characteristic information being smaller than a similarity threshold.
7. The method of claim 6, further comprising:
and in response to the fact that the second target human body leaves the queuing queue, determining service use time corresponding to the second target human body according to the time when the first target human body leaves the queuing queue and the time when the second target human body leaves the queuing queue.
8. The method according to any one of claims 1-7, wherein the determining a queuing analysis result of the queuing queue according to the tracking identifier of the target human body in at least one of the image frames comprises:
and determining, in response to the number of tracking identifiers of target human bodies differing between adjacent image frames and the target human body marked by the tracking identifier newly appearing in the later image frame not being located at the tail of the queuing queue, that the target human body marked by the newly appearing tracking identifier has cut into the queuing queue.
9. A queuing analysis apparatus, the apparatus comprising:
the human body detection module is used for detecting image frames in a video stream and determining at least one target human body in the image frames, wherein the target human body is in a queue;
a human body tracking module, configured to track the target human body detected in a plurality of image frames in the video stream, and determine a tracking identifier of each target human body, where the tracking identifier is used to mark the same target human body in different image frames;
and the result analysis module is used for determining a queuing analysis result of the queuing queue according to the tracking identifier of the target human body in at least one image frame.
10. An electronic device, comprising a memory for storing computer instructions executable on a processor, the processor being configured to implement the method of any one of claims 1 to 8 when executing the computer instructions.
11. A computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the method of any of claims 1 to 8.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 8.
CN202111308427.4A 2021-11-05 2021-11-05 Queuing analysis method and device Pending CN114049378A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111308427.4A CN114049378A (en) 2021-11-05 2021-11-05 Queuing analysis method and device
PCT/CN2022/097274 WO2023077797A1 (en) 2021-11-05 2022-06-07 Method and apparatus for analyzing queue

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111308427.4A CN114049378A (en) 2021-11-05 2021-11-05 Queuing analysis method and device

Publications (1)

Publication Number Publication Date
CN114049378A true CN114049378A (en) 2022-02-15

Family

ID=80207717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111308427.4A Pending CN114049378A (en) 2021-11-05 2021-11-05 Queuing analysis method and device

Country Status (2)

Country Link
CN (1) CN114049378A (en)
WO (1) WO2023077797A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114565894A (en) * 2022-03-03 2022-05-31 成都佳华物链云科技有限公司 Work garment identification method and device, electronic equipment and storage medium
CN114719767A (en) * 2022-03-30 2022-07-08 中国工商银行股份有限公司 Distance detection method and device, storage medium and electronic equipment
CN114972298A (en) * 2022-06-16 2022-08-30 中国电建集团中南勘测设计研究院有限公司 Method and system for detecting urban drainage pipeline video
WO2023077797A1 (en) * 2021-11-05 2023-05-11 上海商汤智能科技有限公司 Method and apparatus for analyzing queue

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN101615311B (en) * 2009-06-19 2011-05-04 无锡骏聿科技有限公司 Method for evaluating queuing time based on vision
US11704782B2 (en) * 2018-10-03 2023-07-18 The Toronto-Dominion Bank Computerized image analysis for automatically determining wait times for a queue area
CN111325057B (en) * 2018-12-14 2024-02-27 杭州海康威视数字技术股份有限公司 Queuing queue detection method and device
CN112016731B (en) * 2019-05-31 2024-02-27 杭州海康威视系统技术有限公司 Queuing time prediction method and device and electronic equipment
CN114049378A (en) * 2021-11-05 2022-02-15 北京市商汤科技开发有限公司 Queuing analysis method and device

Cited By (5)

Publication number Priority date Publication date Assignee Title
WO2023077797A1 (en) * 2021-11-05 2023-05-11 上海商汤智能科技有限公司 Method and apparatus for analyzing queue
CN114565894A (en) * 2022-03-03 2022-05-31 成都佳华物链云科技有限公司 Work garment identification method and device, electronic equipment and storage medium
CN114719767A (en) * 2022-03-30 2022-07-08 中国工商银行股份有限公司 Distance detection method and device, storage medium and electronic equipment
CN114972298A (en) * 2022-06-16 2022-08-30 中国电建集团中南勘测设计研究院有限公司 Method and system for detecting urban drainage pipeline video
CN114972298B (en) * 2022-06-16 2024-04-09 中国电建集团中南勘测设计研究院有限公司 Urban drainage pipeline video detection method and system

Also Published As

Publication number Publication date
WO2023077797A1 (en) 2023-05-11

Similar Documents

Publication Publication Date Title
CN114049378A (en) Queuing analysis method and device
CN110334569B (en) Passenger flow volume in-out identification method, device, equipment and storage medium
CN108269333A (en) Face identification method, application server and computer readable storage medium
CN105989331B (en) Face feature extraction element, facial feature extraction method, image processing equipment and image processing method
CN107515825A (en) Fluency method of testing and device, storage medium, terminal
CN104462530A (en) Method and device for analyzing user preferences and electronic equipment
CN112257660B (en) Method, system, equipment and computer readable storage medium for removing invalid passenger flow
KR102550964B1 (en) Apparatus and Method for Measuring Concentrativeness using Personalization Model
CN110555349B (en) Working time length statistics method and device
CN107992591A (en) People search method and device, electronic equipment and computer-readable recording medium
Liang et al. Accurate facial landmarks detection for frontal faces with extended tree-structured models
CN111666915A (en) Monitoring method, device, equipment and storage medium
US10853829B2 (en) Association method, and non-transitory computer-readable storage medium
CN114783037B (en) Object re-recognition method, object re-recognition apparatus, and computer-readable storage medium
CN111414948A (en) Target object detection method and related device
CN114155488A (en) Method and device for acquiring passenger flow data, electronic equipment and storage medium
CN113837006A (en) Face recognition method and device, storage medium and electronic equipment
JP7383435B2 (en) Image processing device, image processing method, and program
CN110738149A (en) Target tracking method, terminal and storage medium
CN106446837B (en) A kind of detection method of waving based on motion history image
WO2022078134A1 (en) People traffic analysis method and system, electronic device, and readable storage medium
CN115482569A (en) Target passenger flow statistical method, electronic device and computer readable storage medium
Stefański et al. The problem of detecting boxers in the boxing ring
CN113592427A (en) Method and apparatus for counting man-hours and computer readable storage medium
CN109614893B (en) Intelligent abnormal behavior track identification method and device based on situation reasoning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40061852

Country of ref document: HK