WO2023077797A1 - Method and apparatus for analyzing queue - Google Patents

Method and apparatus for analyzing queue Download PDF

Info

Publication number
WO2023077797A1
WO2023077797A1 (PCT/CN2022/097274)
Authority
WO
WIPO (PCT)
Prior art keywords
queuing
target object
queue
tracking
image frame
Prior art date
Application number
PCT/CN2022/097274
Other languages
French (fr)
Chinese (zh)
Inventor
刘诗男
杨昆霖
侯军
伊帅
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司 filed Critical 上海商汤智能科技有限公司
Publication of WO2023077797A1 publication Critical patent/WO2023077797A1/en

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/277 - Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C11/00 - Arrangements, systems or apparatus for checking, e.g. the occurrence of a condition, not provided for elsewhere
    • G07C2011/04 - Arrangements related to queuing systems

Definitions

  • the embodiments of the present disclosure relate to the technical field of computer vision, and in particular to a queuing analysis method and device.
  • Queuing events in life can be seen everywhere, such as queuing at the cashier counter in the supermarket, queuing for subway tickets, queuing for security checks, queuing for canteens, etc.
  • Traditional queuing analysis methods, such as the take-a-number method, are not suited to all of the above queuing scenarios, and the information they can obtain about the queuing situation is limited.
  • a queuing analysis method applied to a terminal device, comprising: performing target detection on one or more image frames in a video stream, and determining one or more target objects in a queuing queue in the one or more image frames; tracking the one or more target objects in the video stream, and assigning a tracking identifier to each of the one or more target objects; and determining a queuing analysis result of the queuing queue according to the tracking identifiers of the one or more target objects in at least one image frame in the video stream.
  • a queuing analysis device, comprising: an object detection module, configured to perform target detection on one or more image frames in a video stream and determine one or more target objects in a queuing queue in the one or more image frames; an object tracking module, configured to track the one or more target objects in the video stream and assign a tracking identifier to each of the one or more target objects; and a result analysis module, configured to determine a queuing analysis result of the queuing queue according to the tracking identifiers of the one or more target objects in at least one image frame in the video stream.
  • an electronic device, comprising a memory and a processor, wherein the memory stores computer instructions executable on the processor, and the processor, when executing the instructions, implements the queuing analysis method described in any embodiment of the present disclosure.
  • a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the queuing analysis method described in any embodiment of the present disclosure is implemented.
  • a computer program product includes a computer program, and when the computer program is executed by a processor, the queuing analysis method described in any embodiment of the present disclosure is implemented.
  • Fig. 1A is a flowchart of a queuing analysis method according to at least one embodiment of the present disclosure
  • Fig. 1B is a flowchart of another queuing analysis method according to at least one embodiment of the present disclosure
  • Fig. 2A is a schematic diagram of a subway queuing scene according to at least one embodiment of the present disclosure
  • Fig. 2B is a schematic diagram of a queuing scenario according to at least one embodiment of the present disclosure
  • Fig. 2C is a statistical diagram of similarity shown according to at least one embodiment of the present disclosure.
  • Fig. 3 is a block diagram of a queuing analysis device according to at least one embodiment of the present disclosure
  • Fig. 4 is a block diagram of another queuing analysis device according to at least one embodiment of the present disclosure.
  • Fig. 5 is a schematic diagram of a hardware structure of an electronic device according to at least one embodiment of the present disclosure.
  • Although the terms "first", "second", "third", etc. may be used in this specification to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of this specification, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
  • If the disclosed technical solution involves user information, a product applying it has clearly informed users of the information processing rules and obtained their voluntary consent before processing user information.
  • If the disclosed technical solution involves sensitive user information, a product applying it has obtained the user's separate consent before processing such information and also satisfies the requirement of "express consent". For example, at a user information collection device such as a camera, a clear and prominent sign informs users that they have entered the collection range and that their information will be collected.
  • The user information processing rules may include the identity of the user information processor, the purpose of processing, the processing method, and the types of user information processed.
  • The usual queuing statistics management schemes mainly take two forms. 1. Take-a-number systems count the number of people queuing; this approach is scene-limited and inflexible, suitable only for small-scale queuing scenes and not for large-scale venues such as amusement parks and zoos. 2. Mobile-terminal-based counting requires users to actively join the mobile terminal's network and is likewise limited to local spaces, for example restrooms or enclosed venue passages; moreover, since people who are not queuing can also join the network, the statistical accuracy is low and the obtainable queuing information is limited.
  • At least one embodiment of the present disclosure provides a queuing analysis method. The method analyzes the video stream captured in the queuing scene, so that the analysis is not restricted by the type of scene and the queuing information can be fully monitored.
  • Fig. 1A is a flowchart showing a queuing analysis method according to at least one embodiment of the present disclosure, and the method may include steps 102 to 106.
  • In step 102, target detection is performed on one or more image frames in the video stream, and one or more target objects in the queue in the one or more image frames are determined.
  • the video stream includes a plurality of image frames collected from the queuing scene, and the video stream may be obtained by real-time monitoring of the queuing queue, or may be a recorded video of the queuing queue.
  • performing object detection on the image frame may include performing object detection on the entire image frame, or may include performing object detection on the marked queuing area in the image frame.
  • This embodiment does not limit the manner of detecting the image frame in the video stream, for example, it may be detected by a neural network, or may be detected by other methods.
  • In step 104, the one or more target objects are tracked in the video stream, and a tracking identifier is assigned to each of the one or more target objects.
  • the tracking identifier is used to mark the same target object in different image frames.
  • the same target object may exist in different video frames, and one or more target objects in multiple image frames may be tracked to determine the position of the same target object in different image frames and mark the target object with a tracking mark.
  • This embodiment does not limit the method used for tracking the target object.
  • For example, a Kalman filter tracking algorithm or a tracking algorithm based on SiamRPN (a Siamese visual object tracking network) can be used to track the target object.
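  • As an illustrative, non-normative sketch of the Kalman-filter option mentioned above, the following minimal constant-velocity filter tracks the center of a detection frame across image frames. The 4-dimensional state layout (x, y, vx, vy) and the noise values are assumptions for illustration, not details taken from this disclosure.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for one detection-frame center.
# State: [x, y, vx, vy]; measurement: [x, y]. All matrices are illustrative.
F = np.array([[1., 0., 1., 0.],   # state transition (dt = 1 frame)
              [0., 1., 0., 1.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
H = np.array([[1., 0., 0., 0.],   # only the (x, y) position is observed
              [0., 1., 0., 0.]])
Q = np.eye(4) * 1e-2              # assumed process noise
R = np.eye(2) * 1.0               # assumed measurement noise

def predict(x, P):
    """Propagate the state estimate one frame forward."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a measured box center z = [x, y]."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P
```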
  • In step 106, a queuing analysis result of the queuing queue is determined according to the tracking identifiers of the one or more target objects in at least one image frame in the video stream.
  • the queuing analysis result may include a preliminary analysis result obtained by analyzing each image frame, and may also include a result obtained by further summarizing the preliminary analysis results.
  • For example, the queuing analysis result can include the number of people in the queue (the number of target objects in the queue); from the number of tracking identifiers of target objects in an image frame, the queue length at the acquisition time of that frame can be determined, as sketched below. Further analyzing the queue length across multiple image frames of the video stream yields the number of people queuing in different time periods, from which the peak value, peak time, valley value, valley time, and other statistics can be derived.
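  • As a hedged sketch of this per-frame counting, assume each analyzed frame yields the set of tracking identifiers found inside the queuing area; the queue length is then the size of that set, and the peak and valley statistics follow directly. The data layout below is hypothetical.

```python
from typing import Dict, Set

# Hypothetical record: acquisition time (s) -> trackIDs inside the queuing area.
frame_ids: Dict[float, Set[int]] = {
    0.0: {0, 1, 2},
    1.0: {0, 1, 2, 3},
    2.0: {1, 2, 3},
}

counts = {t: len(ids) for t, ids in frame_ids.items()}  # queue length per frame
peak_time = max(counts, key=counts.get)      # time of the longest queue
valley_time = min(counts, key=counts.get)    # time of the shortest queue
print(counts[peak_time], peak_time, counts[valley_time], valley_time)
```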
  • The queuing analysis method uses the video stream of the queuing scene to track and analyze the target objects in the queue. It can obtain richer queuing information, is more practical, and is not restricted by the queuing scene; it adapts well to various queuing scenarios and makes the queuing information controllable, so that resource allocation such as manpower and material resources can be optimized for the queuing objects, greatly improving service efficiency and reducing costs.
  • Fig. 1B is a flowchart of a queuing analysis method provided according to at least one embodiment of the present disclosure.
  • With reference to the subway queuing scene shown in Fig. 2A, the method describes in more detail the process of performing queuing analysis by detecting the image frames in a video stream.
  • The method may include steps 202 to 208; it should be noted that this embodiment does not limit the execution order of the steps.
  • In step 202, target detection is performed on any one of the one or more image frames in the video stream to obtain the detection frames of the objects in that image frame.
  • an object detection network may be used to detect the image frames in the video stream.
  • the object detection network is a pre-trained neural network for detecting objects, and correspondingly obtains the detection frame of each object in the image frame.
  • the head key point detection network may also be used to detect the image frames in the video stream, and correspondingly obtain the detection frames of the heads of the objects in the image frames.
  • detection frames of other parts of the object may also be detected, for example, human feet, human legs, car wheels, animal feet, and the like.
  • the image frame is input to the object detection network, and the detection frame of each object in the image frame is output.
  • the detection frame contains coordinate information, which is used to represent the position of the object.
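  • The disclosure does not fix a particular detection network; purely as one plausible sketch, an off-the-shelf person detector can stand in for the object detection network and return the coordinate boxes described above. The torchvision model choice and the score threshold are assumptions.

```python
import torch
import torchvision

# Pre-trained general-purpose detector standing in for the object detection
# network; in torchvision's COCO label map, label 1 is "person".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

@torch.no_grad()
def detect_people(frame, score_thr=0.5):
    """frame: float tensor of shape (3, H, W) with values in [0, 1]."""
    out = model([frame])[0]
    keep = (out["labels"] == 1) & (out["scores"] >= score_thr)
    return out["boxes"][keep]  # (N, 4) detection frames as (x1, y1, x2, y2)
```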
  • In step 204, in response to detecting that a detection frame is in the preset queuing area of the image frame, the object corresponding to that detection frame is determined to be a target object in the queuing queue.
  • the position and size of the queuing area can be delineated in advance in the video images collected by the camera near the subway ticket vending machine.
  • In particular, for scenes where multiple queuing queues may exist, the queue that needs to be analyzed can be designated by delimiting its queuing area in the image frames of the video stream.
  • Since the viewing angle of the camera collecting the video stream in a queuing scene is fixed, it is sufficient to mark the queuing area only once for the video stream from that camera. One or several sides of the queuing area can also be selected, indicating the direction in which the queue exits.
  • In this embodiment, the preset queuing area in the image frame may be the area within the quadrilateral formed by the black lines shown in Fig. 2A, with the direction of leaving the queue shown by the arrow; it may also be the area within the box formed by the black lines shown in Fig. 2B, again with the exit direction shown by the arrow.
  • Whether the object corresponding to a detection frame is a target object in the queuing queue can be judged by whether the feature point of the detection frame lies in the queuing area; in other examples, the judgment can also be based on whether an edge of the detection frame is in the queuing area, or on the degree of overlap between the detection frame and the queuing area.
  • The feature point is a point inside the detection frame or on its edge; this embodiment does not limit the choice of feature point. For example, the feature point can be the midpoint of the detection frame's lower edge, as indicated by the white circles at the bottoms of the detection frames in Figure 2A; it can also be the lower-left or lower-right corner of the detection frame.
  • For a detection frame whose feature point lies in the queuing area, it is determined that the corresponding target object is also located in the queuing area, that is, the target object is in the queuing queue, as sketched below.
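  • A minimal sketch of the feature-point test described above, assuming the feature point is the midpoint of the detection frame's lower edge and the queuing area is the marked quadrilateral; the ray-casting helper is a generic geometry routine, not a detail from the disclosure, and the vertex values are made up.

```python
def bottom_center(box):
    """Feature point: midpoint of the detection frame's lower edge."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, y2)

def point_in_polygon(pt, polygon):
    """Ray-casting test; polygon is a list of (x, y) vertices."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

queuing_area = [(100, 400), (600, 380), (640, 700), (80, 720)]  # assumed vertices
print(point_in_polygon(bottom_center((200, 300, 260, 450)), queuing_area))
```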
  • In step 206, one or more target objects are tracked in the video stream, and a tracking identifier is assigned to each of the one or more target objects.
  • It should be noted that a target object that is in the queuing queue in one image frame may not yet be in the queue in other image frames, for example because queuing has not started, and may instead appear in a region outside the queue or outside the queuing area.
  • Therefore, when tracking target objects, the detection frames of target objects inside the queuing area and the detection frames of the surrounding objects can both be tracked; for example, a tracking algorithm tracks each object in the video frames, associates the same object across the image frames at successive moments, and then assigns a tracking identifier (trackID) to each object.
  • the tracking identifiers can be represented by numbers, for example, the numbers 0-6 in Figure 2A mark seven target objects in the queuing queue.
  • Exemplarily, tracking identifiers can be assigned in the chronological order in which target objects first appear in the video stream; for instance, the target object marked with tracking identifier 1 may have arrived at the queuing scene earlier than the target object marked with tracking identifier 5 standing ahead of it, but started queuing only after target object 5 had entered the queue.
  • The position of the target object marked by a tracking identifier may be taken as the position of its feature point, the center of its detection frame, or another point on the detection frame.
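  • The disclosure leaves the association step to known trackers such as Kalman filtering or SiamRPN; purely as an illustration of how trackIDs can be issued in order of first appearance, the greedy IoU matcher below associates the current frame's detections with the previous frame's tracks. The function names and the IoU threshold are assumptions.

```python
from itertools import count

_next_id = count(0)  # trackIDs issued in order of first appearance

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, thr=0.3):
    """tracks: {trackID: box} from the previous frame. Returns the new map."""
    new_tracks, used = {}, set()
    for tid, tbox in tracks.items():
        candidates = [d for d in range(len(detections)) if d not in used]
        if not candidates:
            continue
        best = max(candidates, key=lambda d: iou(tbox, detections[d]))
        if iou(tbox, detections[best]) >= thr:
            new_tracks[tid] = detections[best]
            used.add(best)
    for d, box in enumerate(detections):  # unmatched detections get new IDs
        if d not in used:
            new_tracks[next(_next_id)] = box
    return new_tracks
```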
  • In step 208, a queuing analysis result of the queuing queue is determined according to the tracking identifiers of the one or more target objects in at least one image frame in the video stream. By performing a sequence analysis of how the tracking identifiers within the queuing area change at each moment, each target object's position in the queue, its sequence number, its real-time waiting time, and other information can be obtained; analyzing and aggregating this information yields the queuing analysis result.
  • Exemplarily, the queuing analysis result may include the queuing waiting time and service usage time of each target object, the number of target objects in the queue at each moment, and the average queuing time and average service usage time.
  • For example, for any image frame in which the target object marked by a tracking identifier is at the tail of the queuing queue, the queuing start time of that target object can be determined; for the image frames in which that target object is at the head of the queue, its queuing end time is determined; and the queuing waiting time of the target object is determined from its queuing start time and queuing end time.
  • In some embodiments, when computing the queuing waiting time, for the plurality of image frames in which the target object marked by a tracking identifier is at the tail of the queue, the earliest acquisition time among those image frames may be determined as the queuing start time of that target object.
  • Similarly, for the plurality of image frames in which that target object is at the head of the queue, the earliest acquisition time among those image frames is determined as the target object's queuing end time.
  • The queuing waiting time of the target object is then determined from its queuing start time and queuing end time, as sketched below.
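  • Under the timestamp rules above (queuing start = the earliest frame time at the tail, queuing end / service start = the earliest frame time at the head, service end = the latest frame time at the head), both the waiting time and the service usage time discussed below can be read off per-frame flags. The observation layout and values are hypothetical.

```python
# Hypothetical per-frame observations for one tracking identifier:
# (acquisition_time_s, at_tail, at_head) flags derived from the queue geometry.
obs = [(10.0, True, False), (12.0, True, False), (15.0, False, False),
       (18.0, False, True), (21.0, False, True)]

start = min(t for t, tail, _ in obs if tail)        # queuing start: 10.0
end = min(t for t, _, head in obs if head)          # queuing end / service start: 18.0
wait = end - start                                  # queuing waiting time: 8.0 s
service_end = max(t for t, _, head in obs if head)  # last frame at the head: 21.0
service_time = service_end - end                    # service usage time: 3.0 s
print(wait, service_time)
```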
  • The target object at the head of the queue is generally located at the exit end of the queue, that is, at the end pointed to by the arrow; such an object is using the service provided by the subway ticket vending machine, such as ticket collection, ticket purchase, or an enquiry, and can be regarded as the target object at the head of the queuing queue.
  • The target object at the tail of the queuing queue is generally located at the other end of the queue; for example, a target object that has newly entered the queuing queue is at the tail of the queue.
  • Based on the position of a target object's tracking identifier, it can be determined that the target object is at the tail of the queue; among those frames, the image frame with the earliest acquisition time is the frame in which the target object has just entered the queue, and the acquisition time of that frame is recorded as the target object's queuing start time.
  • Similarly, among the plurality of image frames in which the target object is at the head of the queue, the frame with the earliest acquisition time is the frame in which the target object has just reached the head of the queue and is about to start using the subway ticket vending machine.
  • The acquisition time of that frame is recorded as the target object's queuing end time, or equivalently its service start time.
  • Subtracting the target object's queuing start time from its queuing end time gives the target object's queuing waiting time.
  • The queuing waiting time of a target object that has just entered the queue can also be estimated; for example, the average queuing waiting time of multiple target objects can be used as the estimate.
  • Alternatively, the queuing waiting time can be estimated from an equation fitted to the relationship between the queuing waiting times of multiple target objects and the number of people waiting in line, as sketched below.
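  • One hedged reading of the fitted-equation option is a least-squares line relating observed waiting times to the queue length at entry, evaluated for a newcomer; numpy's polyfit is used purely for illustration, and all figures are made up.

```python
import numpy as np

# Made-up historical samples: (people ahead at entry, observed wait in seconds).
queue_len = np.array([2, 4, 5, 8, 10])
wait_s = np.array([60, 130, 150, 250, 330])

slope, intercept = np.polyfit(queue_len, wait_s, 1)  # linear least-squares fit
estimate = slope * 6 + intercept                     # expected wait, 6 people ahead
print(round(float(estimate), 1))
```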
  • In some embodiments, when calculating the service usage time, for any tracking identifier, the plurality of image frames in which the target object marked by that identifier is at the head of the queue is determined.
  • Based on detecting that the target object marked by the tracking identifier leaves the queue, its service end time is determined. For example, when the target object appears at the head of the queue for the last time, the target object may be considered to have left the queue, and the latest acquisition time among the plurality of image frames is determined as the service end time of the target object marked by that tracking identifier.
  • the service usage time of the target object is determined according to the service start time and service end time of the target object.
  • the method for calculating the service usage time is similar to the method for calculating the queue waiting time in the above example, and the queue end time of the target object can be used as the service start time of the target object.
  • The acquisition time of that image frame, i.e., the last frame in which the target object appears at the head of the queue, is recorded as the service end time of the target object.
  • The service usage time of the target object is obtained by subtracting its service start time from its service end time.
  • For each target object in the queuing queue, the above queuing waiting time and service usage time can be calculated, and the average queuing time and average service usage time can then be obtained.
  • Since a target object newly entering the queue should be at the tail of the queue, if the target object marked by a newly added tracking identifier in a later-acquired image frame is not at the tail of the queue, it is determined that the target object newly entering the queue has cut into the queue, as sketched below.
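  • A sketch of the cut-in rule above: compare the identifier sets of consecutive frames and flag any newly appearing identifier that does not occupy the tail. Representing the queue as a head-to-tail ordered list of trackIDs is an assumption.

```python
def detect_queue_jumpers(prev_queue, curr_queue):
    """Queues are lists of trackIDs ordered from head (index 0) to tail."""
    new_ids = set(curr_queue) - set(prev_queue)
    tail_id = curr_queue[-1] if curr_queue else None
    # A newcomer anywhere but the tail is flagged as cutting into the queue.
    return [tid for tid in curr_queue if tid in new_ids and tid != tail_id]

print(detect_queue_jumpers([3, 5, 1], [3, 5, 9, 1]))  # -> [9]
```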
  • The queuing analysis method can mark the position and size of the queuing area in the image frames of the video stream and track and analyze the target objects queuing within that area. It can be applied to various types of queuing scenarios, the queuing areas to be analyzed can be configured flexibly, and queuing information can be monitored in a targeted manner. The method adapts well to various queuing scenarios, so that service operators can optimize the allocation of resources such as manpower and material resources for the queuing objects, greatly improving service efficiency and reducing costs.
  • In the above embodiments, whether the target object at the head of the queue has changed is judged based on the position of its tracking identifier.
  • However, the tracking identifier at the head of the queue may jump, for example when two target objects stand very close together or when the object tracking result is inaccurate; the method in the above embodiments may then misjudge whether the target object at the head of the queue has changed, which in turn affects the accuracy of the queuing analysis results.
  • the present disclosure provides a queuing analysis method to make the queuing analysis result more accurate.
  • The method judges whether the target object at the head of the queue has left by comparing the ReID (person re-identification) features of the target objects at the head of the queue. After step 104 or step 206 of the above embodiments, the method further includes: extracting first feature information of a first target object in a first image frame of the video stream, extracting second feature information of a second target object in a second image frame adjacent to the first image frame, and comparing the first feature information with the second feature information; in response to the similarity between the first feature information and the second feature information being less than a similarity threshold, it is determined that the first target object has left the queuing queue.
  • The first target object is the target object at the head of the queuing queue in the first image frame, and the second target object is the target object at the head of the queuing queue in the second image frame.
  • The first image frame and the second image frame are adjacent image frames in the video stream, and the acquisition time of the first image frame is before that of the second image frame.
  • the first feature information and the second feature information may include the information of the ReID feature of the target object, and the ReID feature may include features of various types of attributes of the target object, such as attributes of clothing, posture, hairstyle, and limbs of the human body.
  • The target object in the queuing queue has three states: entering the queue, in the queue, and leaving the queue; in special cases, there is also a queue-cutting state.
  • In the queue: a target object that persists in the queuing queue is considered to have been queuing in the queue.
  • Leaving the queue: a target object that leaves the queuing queue. This state is very important for the analysis of the queue. Since the tracking identifier may jump, the ReID feature of the target object at the head of the queue in the previous frame of the video stream needs to be compared with the ReID feature of the target object at the head of the queue in the current frame, to determine whether the target object at the head of the queue in the previous frame has left the queue.
  • The ReID feature of the target object at the head of the queue in each image frame of the video stream can be extracted using pedestrian re-identification technology, and the ReID features of the head-of-queue target objects in two adjacent image frames can be compared to obtain their similarity. In other words, the first feature information of the first target object in the first image frame is compared with the second feature information of the second target object in the second image frame: the greater the similarity, the more likely the first and second target objects are the same target object, and vice versa.
  • Exemplarily, for a certain video stream, the result of comparing the similarity between each image frame and the adjacent previous image frame is shown in Figure 2C. The abscissa is the serial number of the image frame ("0" indicates the start of the video stream, "500" the 500th image frame, and "2000" the 2000th image frame), and the ordinate is the similarity: the closer the similarity is to 1.0, the more likely the head-of-queue targets in two adjacent image frames are the same person.
  • The similarity threshold can be set to 0.5; if the similarity is less than 0.5, the target objects at the head of the queue in two adjacent image frames are considered not to be the same person, as sketched below.
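  • A minimal sketch of the head-of-queue comparison, assuming the ReID features have already been extracted as fixed-length vectors (the extractor itself is not sketched here) and that similarity is measured as cosine similarity against the 0.5 threshold above.

```python
import numpy as np

SIM_THRESHOLD = 0.5  # below this, the head of the queue is a different person

def cosine_similarity(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def head_left_queue(prev_head_feat, curr_head_feat):
    """True if the previous frame's head-of-queue target appears to have left."""
    return cosine_similarity(prev_head_feat, curr_head_feat) < SIM_THRESHOLD
```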
  • When it is determined that the first target object at the head of the queue in the previous frame has left the queue by the time of the current frame, the acquisition time of the previous frame can be determined as the service end time of the first target object, and the acquisition time of the current frame as the service start time of the second target object, or equivalently the queuing end time of the second target object.
  • The service usage time of the second target object is then determined from the time when the first target object leaves the queuing queue and the time when the second target object leaves the queuing queue, by subtracting the former from the latter.
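  • Continuing the sketch: if the ReID comparison yields the frame times at which the head-of-queue identity changes, each served target's service usage time is simply the gap between consecutive change times. The event list below is hypothetical.

```python
# Made-up times (s) at which the head-of-queue identity changed, i.e. the
# previous head left the queue and the next target started being served.
head_change_times = [30.0, 95.0, 140.0]

# Service usage time of each target served between consecutive changes.
service_times = [t1 - t0
                 for t0, t1 in zip(head_change_times, head_change_times[1:])]
print(service_times)  # -> [65.0, 45.0]
```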
  • The queuing waiting time of a target object can also be calculated from the queuing end time determined by the method of this embodiment combined with the target object's queuing start time.
  • In this embodiment, ReID feature comparison is used to judge whether the target object at the head of the queue has left, which effectively avoids misjudging whether a target object has left the queue due to inaccurate tracking and improves the accuracy of the queuing analysis.
  • It also allows more accurate statistics of the queuing waiting time and service usage time in the queuing area, so that the queuing experience of the target objects can be better improved based on the queuing analysis results.
  • Fig. 3 is a block diagram showing a queuing analysis device according to at least one embodiment of the present disclosure, the device includes an object detection module 31, an object tracking module 32, and a result analysis module 33.
  • the object detection module 31 is configured to perform target detection on one or more image frames in the video stream, and determine one or more target objects in the queue in the one or more image frames.
  • The object tracking module 32 is configured to track the one or more target objects in the video stream and assign a tracking identifier to each of the one or more target objects, the tracking identifier being used to mark the same target object in different image frames.
  • the result analysis module 33 is configured to determine the queuing analysis result of the queuing queue according to the tracking identifiers of the one or more target objects in at least one image frame in the video stream.
  • In some embodiments, the object detection module 31 is configured to: perform target detection on one or more image frames in the video stream to obtain the detection frame of each object in the image frame; and, in response to detecting that a detection frame is in the preset queuing area of the image frame, determine the object corresponding to that detection frame as a target object in the queuing queue.
  • In some embodiments, the queuing analysis result includes the number of the one or more target objects in the queuing queue; the result analysis module 33 is configured to determine, according to the number of tracking identifiers in any one of the at least one image frame in the video stream, the number of target objects in the queuing queue at the acquisition time corresponding to that image frame.
  • the queuing analysis result includes queuing waiting time;
  • The result analysis module 33 is configured to: for any image frame in which the target object marked by a tracking identifier is at the tail of the queuing queue, determine the queuing start time of the target object marked by that tracking identifier; for the image frames in which the target object marked by that tracking identifier is at the head of the queuing queue, determine the queuing end time of the target object marked by that tracking identifier; and determine the queuing waiting time of the target object according to its queuing start time and queuing end time.
  • the queuing analysis result includes service usage time;
  • The result analysis module 33 is configured to: for any tracking identifier, determine a plurality of image frames in which the target object marked by that tracking identifier is at the head of the queuing queue; determine the earliest acquisition time among the plurality of image frames as the service start time of the target object marked by that tracking identifier; based on detecting that the target object marked by that tracking identifier leaves the queuing queue, determine the service end time of the target object marked by that tracking identifier; and determine the service usage time of the target object according to its service start time and service end time.
  • In some embodiments, the result analysis module 33 is configured to: in response to the numbers of tracking identifiers of the one or more target objects in adjacent image frames of the video stream being different, and the target object marked by a newly added tracking identifier in the later-acquired image frame not being located at the tail of the queuing queue, determine that the target object marked by the newly added tracking identifier has cut into the queuing queue.
  • In some embodiments, the device further includes a feature comparison module 34, where the first target object is the target object at the head of the queuing queue in the first image frame, and the second target object is the target object at the head of the queuing queue in the second image frame.
  • The result analysis module 33 is configured to: in response to determining that the second target object leaves the queuing queue, determine the service usage time corresponding to the second target object according to the time when the first target object leaves the queuing queue and the time when the second target object leaves the queuing queue.
  • An embodiment of the present disclosure also provides an electronic device. As shown in FIG. 5, the device includes a memory and a processor; the processor is configured to implement the queuing analysis method described in any embodiment of the present disclosure when executing computer instructions stored in the memory.
  • An embodiment of the present disclosure further provides a computer program product, which includes a computer program/instruction, and when the computer program/instruction is executed by a processor, implements the queuing analysis method described in any embodiment of the present disclosure.
  • An embodiment of the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the queuing analysis method described in any embodiment of the present disclosure is implemented.
  • Since the device embodiments basically correspond to the method embodiments, for related parts, reference may be made to the descriptions in the method embodiments.
  • The device embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separated, and the components shown as modules may or may not be physical modules, that is, they may be located in one place or distributed across multiple network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in this specification, which can be understood and implemented by those skilled in the art without creative effort.

Abstract

A method and apparatus for analyzing a queue are provided, and applied to a terminal device, the method comprises: performing target detection on one or more image frames in a video stream, and determining one or more target objects in a queue in the one or more image frames; tracking the one or more target objects in the video stream, and assigning a tracking identifier to each of the one or more target objects; and determining a queuing analysis result of the queue according to the tracking identifiers of the one or more target objects in at least one image frame in the video stream.

Description

Queue analysis method and device
Cross-Reference to Related Applications
This application claims priority to the Chinese patent application with application number 202111308427.4, filed on November 05, 2021; the entire content of that Chinese patent application is hereby incorporated into this application by reference.
Technical Field
The embodiments of the present disclosure relate to the technical field of computer vision, and in particular to a queuing analysis method and device.
Background
Queuing can be seen everywhere in daily life, for example at supermarket cashiers, at subway ticket machines, at security checks, and in canteens. Traditional queuing analysis methods, such as the take-a-number method, are not suited to all of these queuing scenarios, and the information they can obtain about the queuing situation is limited.
Summary of the Invention
According to a first aspect of the present disclosure, a queuing analysis method applied to a terminal device is provided, the method comprising: performing target detection on one or more image frames in a video stream, and determining one or more target objects in a queuing queue in the one or more image frames; tracking the one or more target objects in the video stream, and assigning a tracking identifier to each of the one or more target objects; and determining a queuing analysis result of the queuing queue according to the tracking identifiers of the one or more target objects in at least one image frame in the video stream.
According to a second aspect of the present disclosure, a queuing analysis device is provided, the device comprising: an object detection module, configured to perform target detection on one or more image frames in a video stream and determine one or more target objects in a queuing queue in the one or more image frames; an object tracking module, configured to track the one or more target objects in the video stream and assign a tracking identifier to each of the one or more target objects; and a result analysis module, configured to determine a queuing analysis result of the queuing queue according to the tracking identifiers of the one or more target objects in at least one image frame in the video stream.
According to a third aspect of the present disclosure, an electronic device is provided, the device comprising a memory and a processor, wherein the memory stores computer instructions executable on the processor, and the processor implements the queuing analysis method described in any embodiment of the present disclosure when executing the computer instructions.
According to a fourth aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when the program is executed by a processor, the queuing analysis method described in any embodiment of the present disclosure is implemented.
According to a fifth aspect of the present disclosure, a computer program product is provided, the product comprising a computer program; when the computer program is executed by a processor, the queuing analysis method described in any embodiment of the present disclosure is implemented.
Brief Description of the Drawings
In order to more clearly illustrate the technical solutions in one or more embodiments of the present disclosure, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some of the embodiments recorded in the present disclosure; for those of ordinary skill in the art, other drawings can be derived from these drawings without creative effort.
Fig. 1A is a flowchart of a queuing analysis method according to at least one embodiment of the present disclosure;
Fig. 1B is a flowchart of another queuing analysis method according to at least one embodiment of the present disclosure;
Fig. 2A is a schematic diagram of a subway queuing scene according to at least one embodiment of the present disclosure;
Fig. 2B is a schematic diagram of a queuing scenario according to at least one embodiment of the present disclosure;
Fig. 2C is a statistical diagram of similarity according to at least one embodiment of the present disclosure;
Fig. 3 is a block diagram of a queuing analysis device according to at least one embodiment of the present disclosure;
Fig. 4 is a block diagram of another queuing analysis device according to at least one embodiment of the present disclosure;
Fig. 5 is a schematic diagram of the hardware structure of an electronic device according to at least one embodiment of the present disclosure.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this specification; rather, they are merely examples of devices and methods consistent with some aspects of this specification as detailed in the appended claims.
The terms used in this specification are for the purpose of describing particular embodiments only and are not intended to limit this specification. As used in this specification and the appended claims, the singular forms "a", "the", and "said" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first", "second", "third", etc. may be used in this specification to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of this specification, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
If the technical solution of the present disclosure involves user information, a product applying the technical solution has clearly informed users of the information processing rules and obtained their voluntary consent before processing user information. If the technical solution involves sensitive user information, a product applying it has obtained the user's separate consent before processing such information and also satisfies the requirement of "express consent". For example, at a user information collection device such as a camera, a clear and prominent sign informs users that they have entered the collection range and that their information will be collected; a user who voluntarily enters the collection range is deemed to consent to the collection of their information. Alternatively, on the user information processing device, where the processing rules are communicated by conspicuous signs or information, user authorization is obtained through a pop-up message or by asking users to upload their information themselves. The user information processing rules may include the identity of the user information processor, the purpose of processing, the processing method, and the types of user information processed.
There are many queuing scenarios in daily life, and service providers need statistics on user queuing to optimize their service configuration. Conventional queuing statistics schemes mainly take two forms. First, take-a-number systems count the number of people queuing; this approach is scene-limited and inflexible, suitable only for small-scale queuing scenes and not for large-scale venues such as amusement parks and zoos. Second, mobile-terminal-based counting requires users to actively join the mobile terminal's network and is likewise limited to local spaces, such as restrooms or enclosed venue passages; moreover, since people who are not queuing can also join the network, the statistical accuracy is low and the obtainable queuing information is limited.
In view of this, at least one embodiment of the present disclosure provides a queuing analysis method that analyzes the video stream captured in the queuing scene, so that the analysis is not constrained by the type of scene and the queuing information can be fully monitored.
Fig. 1A is a flowchart of a queuing analysis method according to at least one embodiment of the present disclosure; the method may include steps 102 to 106.
In step 102, target detection is performed on one or more image frames in the video stream, and one or more target objects in the queuing queue in the one or more image frames are determined.
In this embodiment, the video stream includes a plurality of image frames collected from the queuing scene; the video stream may be obtained by monitoring the queue in real time, or it may be a recorded video of the queue.
In this step, performing target detection on an image frame may include performing detection on the entire image frame, or performing detection only within the queuing area marked in the image frame.
This embodiment does not limit the manner of detecting the image frames in the video stream; for example, they may be detected with a neural network or by other means.
In step 104, the one or more target objects are tracked in the video stream, and a tracking identifier is assigned to each of the one or more target objects.
The tracking identifier is used to mark the same target object across different image frames. The same target object may appear in different video frames; by tracking one or more target objects across multiple image frames, the position of the same target object in different frames can be determined and marked with its tracking identifier.
This embodiment does not limit the tracking method; for example, a Kalman filter tracking algorithm or a tracking algorithm based on SiamRPN (a Siamese visual object tracking network) can be used to track the target objects.
In step 106, a queuing analysis result of the queuing queue is determined according to the tracking identifiers of the one or more target objects in at least one image frame in the video stream.
The queuing analysis result may include preliminary results obtained by analyzing each image frame, and may also include results obtained by further aggregating those preliminary results.
For example, the queuing analysis result can include the number of people in the queue (the number of target objects in the queue); from the number of tracking identifiers of target objects in an image frame, the number of people in the queue at the acquisition time of that frame can be determined. Further analyzing the number of people queuing across multiple image frames of the video stream yields the queue length in different time periods, from which the peak value, peak time, valley value, valley time, and other statistics can be obtained.
The queuing analysis method provided by the technical solutions of the embodiments of the present disclosure uses the video stream of the queuing scene to track and analyze the target objects in the queue; it can obtain richer queuing information, is more practical, is not restricted by the queuing scene, adapts well to various queuing scenarios, and makes the queuing information controllable, so that resource allocation such as manpower and material resources can be optimized for the queuing objects, greatly improving service efficiency and reducing costs.
Fig. 1B is a flowchart of a queuing analysis method according to at least one embodiment of the present disclosure; with reference to the subway queuing scene shown in Fig. 2A, it describes in more detail the process of performing queuing analysis by detecting the image frames in a video stream. As shown in Fig. 1B, the method may include steps 202 to 208; it should be noted that this embodiment does not limit the execution order of the steps.
In step 202, target detection is performed on any one of the one or more image frames in the video stream to obtain the detection frames of the objects in that image frame.
In this step, an object detection network may be used to detect the image frames in the video stream; the object detection network is a pre-trained neural network for detecting objects, and it outputs the detection frame of each object in the image frame. In other implementations, a head keypoint detection network may be used instead, yielding detection frames of the objects' heads. Alternatively, detection frames of other parts may be obtained, for example, human feet, human legs, car wheels, or animal feet.
The image frame is input to the object detection network, which outputs the detection frame of each object in the frame; a detection frame contains coordinate information representing the object's position.
In step 204, in response to detecting that a detection frame is in the preset queuing area of the image frame, the object corresponding to that detection frame is determined to be a target object in the queuing queue.
The position and size of the queuing area can be delineated in advance in the video frames collected by the camera near the subway ticket vending machine. In particular, in a scene where multiple queues may exist, the queue to be analyzed can be designated by delimiting its queuing area in the image frames of the video stream.
Generally, since the viewing angle of the camera collecting the video stream in a queuing scene is fixed, the queuing area needs to be marked only once for the video stream from that camera. One or several sides of the queuing area can also be selected, indicating the direction in which the queue exits.
In this embodiment, the preset queuing area in the image frame may be the region inside the quadrilateral drawn in black lines in Fig. 2A, with the exit direction shown by the arrow; it may also be the region inside the box drawn in black lines in Fig. 2B, again with the exit direction shown by the arrow.
In this step, whether the object corresponding to a detection frame is a target object in the queue can be judged by whether the feature point of the detection frame lies in the queuing area; in other examples, the judgment can also be based on whether an edge of the detection frame is in the queuing area, or on the degree of overlap between the detection frame and the queuing area.
The feature point is a point inside the detection frame or on its edge; this embodiment does not limit the choice of feature point. For example, the feature point can be the midpoint of the detection frame's lower edge, as indicated by the white circles at the bottoms of the detection frames in Fig. 2A; it can also be the lower-left or lower-right corner of the detection frame.
For a detection frame whose feature point lies in the queuing area, it is determined that the corresponding target object is also located in the queuing area, i.e., the target object is in the queuing queue.
在步骤206中,在所述视频流中对一个或多个目标对象进行跟踪,为所述一个或多个目标对象中的每个目标对象分配跟踪标识。In step 206, one or more target objects are tracked in the video stream, and a tracking identifier is assigned to each of the one or more target objects.
It should be noted that a target object that is in the queuing queue in one image frame may not be in the queue in other image frames, for example because it has not yet started queuing; in those frames the target object is in a region outside the queuing queue or outside the queuing area. Therefore, when tracking target objects, both the detection boxes of target objects inside the queuing area and the detection boxes of objects around the queuing area may be tracked, for example by applying a tracking algorithm to every object in the video frames, associating the same object across the image frames at successive moments, and then assigning a tracking identifier (trackID) to each object.
A tracking identifier may be represented by a number; for example, the numbers 0-6 in FIG. 2A mark seven target objects in the queuing queue. Exemplarily, the tracking identifier may be determined according to the chronological order in which a target object first appears in the frames of the video stream. For example, target object 1, marked by tracking identifier 1, may have arrived at the queuing scene earlier than target object 5, marked by tracking identifier 5 and positioned ahead of it in the queue, but target object 1 only started queuing after target object 5 had entered the queue.
The position of a target object marked by a tracking identifier may be the position of the feature point, the position of the center of the detection box, or the position of another point on the detection box.
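One possible way to associate detections across frames and hand out trackIDs is a greedy IoU match against the previous frame, sketched below. This is only a minimal stand-in for the tracking algorithm mentioned above; production trackers usually add motion models (e.g., Kalman filtering, as classified for this publication) and re-identification, and every name here is an assumption.

```python
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

class GreedyTracker:
    """Greedily matches each detection to the previous frame's track
    with the highest IoU; unmatched detections get a fresh trackID."""
    def __init__(self, iou_threshold: float = 0.3):
        self.iou_threshold = iou_threshold
        self.tracks: Dict[int, Box] = {}  # trackID -> last seen box
        self.next_id = 0

    def update(self, detections: List[Box]) -> Dict[int, Box]:
        assigned: Dict[int, Box] = {}
        free = dict(self.tracks)  # tracks not yet matched this frame
        for box in detections:
            best_id, best_iou = None, self.iou_threshold
            for tid, prev in free.items():
                score = iou(box, prev)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:
                best_id = self.next_id  # new object enters the scene
                self.next_id += 1
            else:
                free.pop(best_id)
            assigned[best_id] = box
        self.tracks = assigned
        return assigned  # trackID -> box for the current frame
```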
In step 208, a queuing analysis result of the queuing queue is determined according to the tracking identifiers of the one or more target objects in at least one image frame in the video stream.
By performing sequence analysis on the changes of the tracking identifiers within the queuing area at each moment, information such as each target object's position in the queue, its serial number, and its real-time waiting time at each moment can be obtained; analyzing and aggregating this information yields the queuing analysis result.
Exemplarily, the queuing analysis result may include the queuing waiting time and service usage time of each target object, the number of target objects in the queuing queue at each moment, as well as the average queuing time and average service usage time.
For example, for image frames in which the target object marked by any tracking identifier is at the tail of the queuing queue, the queuing start time of that target object may be determined; for image frames in which the target object marked by that tracking identifier is at the head of the queuing queue, the queuing end time of that target object may be determined; and the queuing waiting time of the target object may then be determined according to its queuing start time and queuing end time.
In some embodiments, when calculating the queuing waiting time of a target object, for the plurality of image frames in which the target object marked by any tracking identifier is at the tail of the queuing queue, the earliest acquisition time among those image frames may be determined as the queuing start time of the target object marked by that tracking identifier.
For the plurality of image frames in which the target object marked by that tracking identifier is at the head of the queuing queue, the earliest acquisition time among those image frames is determined as the queuing end time of the target object marked by that tracking identifier.
The queuing waiting time of the target object is determined according to the queuing start time of the target object and the queuing end time of the target object.
For example, the target object at the head of the queuing queue is generally located at the end of the queue in the leaving direction; that is, the target object at the end of the direction indicated by the arrow is using a service provided by the subway ticket vending machine, such as ticket collection, ticket purchase, or an enquiry service, and can be regarded as the target object at the head of the queue. Conversely, the target object at the tail of the queuing queue is generally located at the other end of the queue; for example, the target object that most recently entered the queuing queue is at its tail.
From the position of a target object's tracking identifier, the plurality of image frames in which that target object is located at the tail of the queuing queue can be determined; among them, the image frame with the earliest acquisition time is the frame in which the target object marked by that tracking identifier has just entered the queuing queue, and the acquisition time of that image frame is recorded as the queuing start time of the target object.
Similarly, from the position of the target object's tracking identifier, the plurality of image frames in which the target object is located at the head of the queuing queue can also be determined; among them, the image frame with the earliest acquisition time is the frame in which the target object marked by that tracking identifier has just reached the head of the queue and is about to start using the subway ticket vending machine. The acquisition time of that image frame is recorded as the queuing end time of the target object, which may also serve as the target object's service start time.
Subtracting the target object's queuing start time from its queuing end time yields the target object's queuing waiting time.
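The bookkeeping described in the last few paragraphs can be sketched as follows, assuming each frame yields the ordered list of trackIDs in the queue (head first) together with an acquisition timestamp; the QueueRecord structure and function names are illustrative assumptions, not part of the disclosed method.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class QueueRecord:
    start: Optional[float] = None      # earliest time seen at the tail
    end: Optional[float] = None        # earliest time seen at the head
    last_head: Optional[float] = None  # latest time seen at the head

records: Dict[int, QueueRecord] = {}

def observe_frame(queue_ids: List[int], t: float) -> None:
    """queue_ids lists the trackIDs in the queue, head first,
    for a frame acquired at time t."""
    if not queue_ids:
        return
    head, tail = queue_ids[0], queue_ids[-1]
    rec_tail = records.setdefault(tail, QueueRecord())
    if rec_tail.start is None:
        rec_tail.start = t   # earliest frame at the tail: queuing start
    rec_head = records.setdefault(head, QueueRecord())
    if rec_head.end is None:
        rec_head.end = t     # earliest frame at the head: queuing end
    rec_head.last_head = t   # latest frame at the head so far

def waiting_time(track_id: int) -> Optional[float]:
    rec = records.get(track_id)
    if rec and rec.start is not None and rec.end is not None:
        return rec.end - rec.start  # queuing end time minus start time
    return None
```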
In addition, based on the already-calculated queuing waiting times of multiple target objects in the queuing queue, the queuing waiting time of a target object that has just entered the queue can be estimated. For example, the average queuing waiting time of the multiple target objects may be used as the estimate, or the waiting time may be estimated from an equation fitted to the relationship between the queuing waiting times of multiple target objects and the number of people waiting.
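As one illustration of the fitted-equation option, a least-squares line relating waiting time to the number of people waiting could be obtained with numpy; the linear form is an assumption, since the disclosure does not fix the shape of the fitted equation.

```python
import numpy as np

def fit_wait_model(queue_lengths, wait_times):
    """Fit wait_time ~ a * queue_length + b from completed observations."""
    a, b = np.polyfit(np.asarray(queue_lengths, dtype=float),
                      np.asarray(wait_times, dtype=float), deg=1)
    return a, b

def estimate_wait(queue_length: int, a: float, b: float) -> float:
    return a * queue_length + b

# Hypothetical usage with made-up observations (seconds):
# a, b = fit_wait_model([3, 5, 8], [90.0, 160.0, 250.0])
# estimate_wait(6, a, b)  # predicted waiting time for 6 people ahead
```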
For another example, for any tracking identifier, the plurality of image frames in which the target object marked by that tracking identifier is at the head of the queuing queue is determined.
The earliest acquisition time among the plurality of image frames is determined as the service start time of the target object marked by that tracking identifier.
Based on detecting that the target object marked by the tracking identifier has left the queuing queue, the service end time of that target object is determined. For example, when the target object appears at the head of the queuing queue for the last time, the target object may be considered to have been detected leaving the queue, and the latest acquisition time among the plurality of image frames is determined as the service end time of the target object marked by that tracking identifier.
The service usage time of the target object is determined according to the service start time and service end time of the target object.
The method for calculating the service usage time is similar to the method for calculating the queuing waiting time in the above example; moreover, the queuing end time of a target object may be used as that target object's service start time.
Among the plurality of image frames, determined from the position of the target object's tracking identifier, in which the target object is located at the head of the queuing queue, the image frame with the latest acquisition time is the frame in which the target object marked by that tracking identifier has finished using the subway ticket vending machine and is about to leave the queuing queue; the acquisition time of that image frame is recorded as the service end time of the target object.
Subtracting the target object's service start time from its service end time yields the target object's service usage time.
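Continuing the bookkeeping sketch above, the service usage time falls out of the same per-track record: the earliest head-of-queue observation serves as the service start time, and the latest head-of-queue observation before departure as the service end time.

```python
def service_time(track_id: int) -> Optional[float]:
    """Service start = earliest frame at the head (rec.end above);
    service end = latest frame at the head before the object departs."""
    rec = records.get(track_id)
    if rec and rec.end is not None and rec.last_head is not None:
        return rec.last_head - rec.end
    return None
```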
The above calculations of queuing waiting time and service usage time can be performed for every target object, from which the average queuing time, the average service usage time, and so on can further be obtained.
For another example, in response to the number of tracking identifiers in adjacent image frames of the video stream being different, and the target object marked by a tracking identifier newly appearing in the image frame with the later acquisition time not being located at the tail of the queuing queue, it is determined that this target object in the queuing queue has jumped the queue.
For two adjacent image frames in the video stream, when the numbers of tracking identifiers in the queuing queue differ, there is usually either a target object that has newly entered the queuing queue or a target object that has newly left it. Normally, a target object newly entering the queuing queue should be located at its tail; if the target object marked by the tracking identifier newly appearing in the later-acquired image frame is not at the tail of the queue, it is determined that this newly entered target object has jumped the queue.
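A direct translation of this rule into code might look as follows; it assumes the per-frame, head-first queue order produced by the earlier sketches, and the function name is illustrative.

```python
from typing import List

def detect_queue_jump(prev_ids: List[int], curr_ids: List[int]) -> List[int]:
    """Returns the trackIDs that newly appear in the current frame's queue
    (head first) anywhere other than the tail: flagged as queue-jumping."""
    new_ids = set(curr_ids) - set(prev_ids)
    jumpers = []
    for pos, tid in enumerate(curr_ids):
        if tid in new_ids and pos != len(curr_ids) - 1:
            jumpers.append(tid)
    return jumpers

# e.g., detect_queue_jump([4, 7, 9], [4, 12, 7, 9]) -> [12]
```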
The queuing analysis method provided by the technical solutions of the embodiments of the present disclosure can calibrate the position and size of a queuing area in the image frames of a video stream and track and analyze the target objects of the queuing queue within that area. It is applicable to various types of queuing scenes, allows flexible configuration of the queuing areas to be analyzed, and provides targeted control over queuing information, adapting well to diverse queuing scenarios. This enables service operators to optimize the allocation of resources such as manpower and materials for queuing objects, thereby greatly improving service efficiency and reducing costs.
In the above embodiments, when judging whether the target object at the head of the queuing queue has been replaced, that is, whether a target object has just reached or just left the head of the queue, the judgment is made according to the position of the tracking identifier. However, the tracking identifier at the head of the queue may sometimes jump, for example when two target objects are very close together, or when the object tracking result is inaccurate. The method of the above embodiments may therefore misjudge whether the target object at the head of the queue has changed, which in turn affects the accuracy of the queuing analysis result.
In one implementation, on the basis of the above embodiments, the present disclosure provides a queuing analysis method that makes the queuing analysis result more accurate. When judging whether the target object at the head of the queuing queue has left, this method compares the ReID (person re-identification) features of the target objects at the head of the queue. After step 104 or step 206 of the above embodiments, the method further includes: extracting first feature information of a first target object in a first image frame of the video stream, and extracting second feature information of a second target object in a second image frame adjacent to the first image frame; comparing the first feature information with the second feature information; and, in response to the similarity between the first feature information and the second feature information being less than a similarity threshold, determining that the first target object has left the queuing queue.
Here, the first target object is the target object at the head of the queuing queue in the first image frame, and the second target object is the target object at the head of the queuing queue in the second image frame; the first image frame and the second image frame are adjacent image frames in the video stream, and the acquisition time of the first image frame precedes that of the second image frame.
The first feature information and the second feature information may include information on the target objects' ReID features, which may cover features of various attribute types, such as a person's clothing, posture, hairstyle, and limbs.
Generally speaking, a target object in the queuing queue is in one of three states: entering the queue, in the queue, or leaving the queue; in special cases there is also a queue-jumping state.
Entering the queue: each tracking identifier that newly appears at the tail of the queuing queue is regarded as a target object newly entering the queue.
In the queue: a target object that persists in the queuing queue is regarded as continuously queuing in the queue.
Leaving the queue: a target object that leaves the queuing queue. This is crucial to the analysis and judgment of the queue. Since the tracking identifier may jump as it changes, the ReID feature of the head-of-queue target object in the previous frame of the video stream needs to be compared with that of the head-of-queue target object in the current frame, in order to judge whether the head-of-queue target object in the previous frame has left the queue.
In some embodiments, person re-identification technology can be used to extract the ReID feature of the head-of-queue target object in each image frame of the video stream, and the ReID features of the head-of-queue target objects in two adjacent image frames can be compared to obtain their similarity. In other words, the first feature information of the first target object in the first image frame is compared with the second feature information of the second target object in the second image frame: the greater the similarity, the more likely the first and second target objects are the same target object; conversely, the smaller the similarity, the less likely they are the same.
Exemplarily, for a certain video stream, the result of comparing the similarity between each image frame and the adjacent previous image frame is shown in FIG. 2C. In FIG. 2C, the abscissa is the serial number of the image frame: "0" denotes the start of the video stream, "500" the 500th image frame, and "2000" the 2000th image frame. The ordinate is the similarity; the closer the similarity is to "1.0", the more likely it is that the heads of the queue in two adjacent image frames are the same person.
The similarity threshold may be set to 0.5: if the similarity is less than 0.5, the head-of-queue target objects in two adjacent image frames are considered not to be the same person. The first target object at the head of the queue in the previous frame has left the queue by the moment corresponding to the current frame; the acquisition time of the previous image frame can be determined as the first target object's service end time, and the acquisition time of the current image frame as the second target object's service start time, that is, the second target object's queuing end time.
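A sketch of this threshold test over ReID feature vectors is given below; the disclosure does not fix a particular ReID network or similarity measure, so the abstract feature extractor and the cosine form are assumptions, with the 0.5 threshold taken from the example above.

```python
import numpy as np

def cosine_similarity(f1: np.ndarray, f2: np.ndarray) -> float:
    return float(np.dot(f1, f2) /
                 (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-9))

def head_has_left(prev_head_feat: np.ndarray,
                  curr_head_feat: np.ndarray,
                  threshold: float = 0.5) -> bool:
    """True when the head-of-queue ReID features of two adjacent frames
    are too dissimilar, i.e. the previous head has left the queue."""
    return cosine_similarity(prev_head_feat, curr_head_feat) < threshold
```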
In the same way as the first target object is judged to have left the queuing queue, it can likewise be judged that the second target object leaves the queuing queue at the acquisition time corresponding to some image frame.
In response to determining that the second target object has left the queuing queue, the service usage time corresponding to the second target object is determined by subtracting the moment at which the first target object left the queuing queue from the moment at which the second target object left the queuing queue.
Likewise, a target object's queuing waiting time can be calculated from the target object's queuing end time determined by the method of this embodiment, combined with the target object's queuing start time.
In this embodiment, ReID feature comparison is used to judge whether the head-of-queue target object has left, which effectively avoids erroneous dequeue judgments caused by inaccurate tracking and improves the precision of the queuing analysis. When the queuing area is analyzed, the queuing waiting time and service usage time can thus be counted more accurately, so that the queuing experience of the target objects can be better improved based on the queuing analysis results.
FIG. 3 is a block diagram of a queuing analysis apparatus according to at least one embodiment of the present disclosure. The apparatus includes an object detection module 31, an object tracking module 32, and a result analysis module 33.
The object detection module 31 is configured to perform target detection on one or more image frames in a video stream and determine one or more target objects that are in a queuing queue in the one or more image frames.
The object tracking module 32 is configured to track the one or more target objects in the video stream and assign a tracking identifier to each of the one or more target objects, the tracking identifier being used to mark the same target object in different image frames.
The result analysis module 33 is configured to determine a queuing analysis result of the queuing queue according to the tracking identifiers of the one or more target objects in at least one image frame in the video stream.
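Purely as an illustration of how these three modules might compose, the sketch below wires the earlier helpers into a single per-frame pipeline; the class name, the callable detector interface, and the exit-direction ordering are all assumptions, not the apparatus's actual interfaces.

```python
import numpy as np

class QueueAnalyzer:
    """Illustrative composition of detection, tracking, and result
    analysis over a stream of frames, reusing the sketches above."""
    def __init__(self, detector, queuing_area, exit_direction):
        self.detector = detector          # assumed callable: frame -> boxes
        self.tracker = GreedyTracker()    # object tracking module
        self.queuing_area = queuing_area  # preset polygon in the image
        self.exit_dir = np.asarray(exit_direction, dtype=float)  # arrow in FIG. 2A

    def process_frame(self, frame, t: float):
        boxes = self.detector(frame)              # object detection
        tracks = self.tracker.update(boxes)       # assign trackIDs
        in_queue = {tid: box for tid, box in tracks.items()
                    if is_queuing(box, self.queuing_area)}
        # Sort head first: the head lies furthest along the exit direction.
        ordered = sorted(in_queue, key=lambda tid: -float(
            np.dot(box_feature_point(in_queue[tid]), self.exit_dir)))
        observe_frame(ordered, t)                 # result analysis bookkeeping
        return ordered
```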
In one example, the object detection module 31 is configured to: perform target detection on one or more image frames in the video stream to obtain a detection box for each object in an image frame; and, in response to detecting that a detection box is within the queuing area preset in that image frame, determine the object corresponding to the detection box as a target object in the queuing queue.
In one example, the queuing analysis result includes the number of the one or more target objects in the queuing queue, and the result analysis module 33 is configured to: determine, according to the number of tracking identifiers in one image frame of the at least one image frame in the video stream, the number of target objects in the queuing queue at the acquisition time corresponding to that image frame.
In one example, the queuing analysis result includes a queuing waiting time, and the result analysis module 33 is configured to: for image frames in which the target object marked by any tracking identifier is at the tail of the queuing queue, determine the queuing start time of the target object marked by that tracking identifier; for image frames in which the target object marked by that tracking identifier is at the head of the queuing queue, determine the queuing end time of that target object; and determine the queuing waiting time of the target object according to its queuing start time and queuing end time.
In one example, the queuing analysis result includes a service usage time, and the result analysis module 33 is configured to: for any tracking identifier, determine the plurality of image frames in which the target object marked by that tracking identifier is at the head of the queuing queue; determine the earliest acquisition time among the plurality of image frames as the service start time of the target object marked by that tracking identifier; determine, based on detecting that the target object marked by that tracking identifier leaves the queuing queue, the service end time of that target object; and determine the service usage time of the target object according to its service start time and service end time.
In one example, the result analysis module 33 is configured to: in response to the number of tracking identifiers of the one or more target objects in adjacent image frames of the video stream being different, and the target object marked by a tracking identifier newly appearing in the later-acquired image frame not being located at the tail of the queuing queue, determine that the target object marked by the newly appearing tracking identifier in the queuing queue has jumped the queue.
In one example, as shown in FIG. 4, the apparatus further includes a feature comparison module 34, configured to:
extract first feature information of a first target object in a first image frame of the video stream, and extract second feature information of a second target object in a second image frame adjacent to the first image frame, where the first target object is the target object at the head of the queuing queue in the first image frame and the second target object is the target object at the head of the queuing queue in the second image frame; and, in response to the similarity between the first feature information and the second feature information being less than a similarity threshold, determine that the first target object leaves the queuing queue.
In one example, the result analysis module 33 is configured to: in response to determining that the second target object leaves the queuing queue, determine the service usage time corresponding to the second target object according to the moment at which the first target object leaves the queuing queue and the moment at which the second target object leaves the queuing queue.
For the implementation of the functions and roles of each module in the above apparatus, refer to the implementation of the corresponding steps in the above method; details are not repeated here.
An embodiment of the present disclosure further provides an electronic device. As shown in FIG. 5, the electronic device includes a memory 51 and a processor 52; the memory 51 is configured to store computer instructions executable on the processor, and the processor 52 is configured to implement the queuing analysis method of any embodiment of the present disclosure when executing the computer instructions.
An embodiment of the present disclosure further provides a computer program product, which includes a computer program/instructions that, when executed by a processor, implement the queuing analysis method of any embodiment of the present disclosure.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the queuing analysis method of any embodiment of the present disclosure is implemented.
As for the apparatus embodiments, since they substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant parts. The apparatus embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this specification. Those of ordinary skill in the art can understand and implement them without creative effort.
The foregoing describes specific embodiments of this specification. Other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the accompanying drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. Multitasking and parallel processing are also possible, or may be advantageous, in certain implementations.
Other embodiments of this specification will readily occur to those skilled in the art upon consideration of the specification and practice of the disclosure. This specification is intended to cover any variations, uses, or adaptations of this specification that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of this specification being indicated by the following claims.
It should be understood that this specification is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of this specification is limited only by the appended claims.
The above are merely some embodiments of this specification and are not intended to limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this specification shall fall within its scope of protection.

Claims (12)

  1. A queuing analysis method, applied to a terminal device, comprising:
    performing target detection on one or more image frames in a video stream, and determining one or more target objects in a queuing queue in the one or more image frames;
    tracking the one or more target objects in the video stream, and assigning a tracking identifier to each of the one or more target objects;
    determining a queuing analysis result of the queuing queue according to the tracking identifiers of the one or more target objects in at least one image frame in the video stream.
  2. The method according to claim 1, wherein performing target detection on one or more image frames in the video stream and determining the one or more target objects in the queuing queue in the image frames comprises:
    performing target detection on any one of the one or more image frames to obtain a detection box for each object in that image frame;
    in response to detecting that a detection box is within a queuing area preset in that image frame, determining the object corresponding to the detection box as a target object in the queuing queue.
  3. The method according to claim 1 or 2, wherein the queuing analysis result comprises a number of the one or more target objects in the queuing queue;
    determining the queuing analysis result of the queuing queue according to the tracking identifiers of the one or more target objects in at least one image frame in the video stream comprises:
    determining, according to the number of tracking identifiers in one image frame of the at least one image frame, the number of target objects in the queuing queue at the acquisition time corresponding to that image frame.
  4. The method according to any one of claims 1-3, wherein the queuing analysis result comprises a queuing waiting time;
    determining the queuing analysis result of the queuing queue according to the tracking identifiers of the one or more target objects in at least one image frame in the video stream comprises:
    for image frames in which the target object marked by any of the tracking identifiers is at the tail of the queuing queue, determining a queuing start time of the target object marked by that tracking identifier;
    for image frames in which the target object marked by that tracking identifier is at the head of the queuing queue, determining a queuing end time of the target object marked by that tracking identifier;
    determining the queuing waiting time of the target object according to the queuing start time of the target object and the queuing end time of the target object.
  5. The method according to any one of claims 1-4, wherein the queuing analysis result comprises a service usage time;
    determining the queuing analysis result of the queuing queue according to the tracking identifiers of the one or more target objects in at least one image frame in the video stream comprises:
    for any of the tracking identifiers, determining a plurality of image frames in which the target object marked by that tracking identifier is at the head of the queuing queue;
    determining the earliest acquisition time among the plurality of image frames as a service start time of the target object marked by that tracking identifier;
    determining, based on detecting that the target object marked by that tracking identifier leaves the queuing queue, a service end time of the target object marked by that tracking identifier;
    determining the service usage time of the target object according to the service start time of the target object and the service end time of the target object.
  6. The method according to any one of claims 1-5, further comprising:
    extracting first feature information of a first target object in a first image frame in the video stream, and extracting second feature information of a second target object in a second image frame adjacent to the first image frame; wherein the first target object is the target object at the head of the queuing queue in the first image frame, and the second target object is the target object at the head of the queuing queue in the second image frame;
    in response to a similarity between the first feature information and the second feature information being less than a similarity threshold, determining that the first target object leaves the queuing queue.
  7. The method according to claim 6, further comprising:
    in response to determining that the second target object leaves the queuing queue, determining a service usage time corresponding to the second target object according to a moment at which the first target object leaves the queuing queue and a moment at which the second target object leaves the queuing queue.
  8. The method according to any one of claims 1-7, wherein determining the queuing analysis result of the queuing queue according to the tracking identifiers of the one or more target objects in at least one image frame in the video stream comprises:
    in response to the number of tracking identifiers of the one or more target objects in adjacent image frames in the video stream being different, and the target object marked by a tracking identifier newly added in the image frame with the later acquisition time not being located at the tail of the queuing queue, determining that the target object marked by the newly added tracking identifier in the queuing queue jumps the queue.
  9. A queuing analysis apparatus, comprising:
    an object detection module, configured to perform target detection on one or more image frames in a video stream and determine one or more target objects in a queuing queue in the one or more image frames;
    an object tracking module, configured to track the one or more target objects in the video stream and assign a tracking identifier to each of the one or more target objects;
    a result analysis module, configured to determine a queuing analysis result of the queuing queue according to the tracking identifiers of the one or more target objects in at least one image frame in the video stream.
  10. An electronic device, comprising a memory and a processor, the memory being configured to store computer instructions executable on the processor, and the processor being configured to implement the method of any one of claims 1 to 8 when executing the computer instructions.
  11. A computer program product, comprising a computer program that, when executed by a processor, implements the method of any one of claims 1 to 8.
  12. A computer-readable storage medium, on which a computer program is stored, wherein when the program is executed by a processor, the method of any one of claims 1 to 8 is implemented.
PCT/CN2022/097274 2021-11-05 2022-06-07 Method and apparatus for analyzing queue WO2023077797A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111308427.4 2021-11-05
CN202111308427.4A CN114049378A (en) 2021-11-05 2021-11-05 Queuing analysis method and device

Publications (1)

Publication Number Publication Date
WO2023077797A1 true WO2023077797A1 (en) 2023-05-11

Family

ID=80207717

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/097274 WO2023077797A1 (en) 2021-11-05 2022-06-07 Method and apparatus for analyzing queue

Country Status (2)

Country Link
CN (1) CN114049378A (en)
WO (1) WO2023077797A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114049378A (en) * 2021-11-05 2022-02-15 北京市商汤科技开发有限公司 Queuing analysis method and device
CN114719767A (en) * 2022-03-30 2022-07-08 中国工商银行股份有限公司 Distance detection method and device, storage medium and electronic equipment
CN114972298B (en) * 2022-06-16 2024-04-09 中国电建集团中南勘测设计研究院有限公司 Urban drainage pipeline video detection method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101615311A (en) * 2009-06-19 2009-12-30 无锡骏聿科技有限公司 A kind of method for evaluating queuing time based on vision
US20200111031A1 (en) * 2018-10-03 2020-04-09 The Toronto-Dominion Bank Computerized image analysis for automatically determining wait times for a queue area
CN111325057A (en) * 2018-12-14 2020-06-23 杭州海康威视数字技术股份有限公司 Queuing queue detection method and device
CN112016731A (en) * 2019-05-31 2020-12-01 杭州海康威视系统技术有限公司 Queuing time prediction method and device and electronic equipment
CN114049378A (en) * 2021-11-05 2022-02-15 北京市商汤科技开发有限公司 Queuing analysis method and device


Also Published As

Publication number Publication date
CN114049378A (en) 2022-02-15

Similar Documents

Publication Publication Date Title
WO2023077797A1 (en) Method and apparatus for analyzing queue
CN108596277B (en) Vehicle identity recognition method and device and storage medium
WO2020056677A1 (en) Violation detection method, system, and device for building construction site
Fernandez-Sanjurjo et al. Real-time visual detection and tracking system for traffic monitoring
WO2020093830A1 (en) Method and apparatus for estimating pedestrian flow conditions in specified area
CN109784274B (en) Method for identifying trailing and related product
CN108269333A (en) Face identification method, application server and computer readable storage medium
JP7303384B2 (en) Passenger number counting system and passenger number counting device
CN111062967B (en) Electric power business hall passenger flow statistical method and system based on target dynamic tracking
WO2023029574A1 (en) Method and apparatus for acquiring passenger flow information, and computer device and storage medium
WO2017092269A1 (en) Passenger flow information collection method and apparatus, and passenger flow information processing method and apparatus
CN107992591A (en) People search method and device, electronic equipment and computer-readable recording medium
WO2022156234A1 (en) Target re-identification method and apparatus, and computer-readable storage medium
WO2022205632A1 (en) Target detection method and apparatus, device and storage medium
CN111091057A (en) Information processing method and device and computer readable storage medium
Liang et al. Accurate facial landmarks detection for frontal faces with extended tree-structured models
CN111666915A (en) Monitoring method, device, equipment and storage medium
CN111476160A (en) Loss function optimization method, model training method, target detection method, and medium
CN112651398A (en) Vehicle snapshot control method and device and computer readable storage medium
JP5004181B2 (en) Region identification device and content identification device
CN109359689B (en) Data identification method and device
WO2022078134A1 (en) People traffic analysis method and system, electronic device, and readable storage medium
Schauerte et al. How the distribution of salient objects in images influences salient object detection
JP5552946B2 (en) Face image sample collection device, face image sample collection method, program
CN108024148A (en) The multimedia file recognition methods of Behavior-based control feature, processing method and processing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22888833

Country of ref document: EP

Kind code of ref document: A1