WO2023077783A1 - Method and apparatus for determining queuing waiting time - Google Patents

Method and apparatus for determining queuing waiting time

Info

Publication number
WO2023077783A1
Authority
WO
WIPO (PCT)
Prior art keywords
queuing
area
dequeue
objects
queue
Prior art date
Application number
PCT/CN2022/095777
Other languages
French (fr)
Chinese (zh)
Inventor
刘诗男
杨昆霖
侯军
伊帅
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司 filed Critical 上海商汤智能科技有限公司
Publication of WO2023077783A1 publication Critical patent/WO2023077783A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"

Definitions

  • This specification relates to the field of computer vision technology, and in particular to a method and device for determining the waiting time in a queue.
  • In real life, for better management, the queuing waiting time is often estimated so that queuing objects can arrange their time reasonably.
  • Head tracking is generally used to estimate the queuing waiting time.
  • In a scene with dense heads (such as a serpentine queue), however, head tracking becomes more difficult, so tracking accuracy is hard to guarantee and the accuracy of the estimated waiting time also decreases.
  • According to a first aspect, a method for determining the queuing waiting time is provided, comprising: acquiring a queuing video stream of a target area, where the target area includes a queuing area and a dequeue area, and the dequeue area includes a dequeue line that must be crossed to end queuing and that is used to judge whether queuing has ended; identifying the dequeue area in the queuing video stream, tracking objects in the dequeue area, and determining the number of dequeue objects crossing the dequeue line per unit time to obtain a dequeue speed; acquiring the number of queuing objects in the queuing area from a target image frame of the queuing video stream; and obtaining the queuing waiting time according to the dequeue speed and the number of queuing objects.
  • According to a second aspect, a device for determining the queuing waiting time is provided, including: an acquisition module, configured to acquire the queuing video stream of the target area, where the target area includes at least a queuing area and a dequeue area, and the dequeue area includes a dequeue line that must be crossed to end queuing and that is used to judge whether queuing has ended; a dequeue speed determination module, configured to identify the dequeue area in the queuing video stream, track objects in the dequeue area, and determine the number of dequeue objects crossing the dequeue line per unit time to obtain a dequeue speed; a queuing object quantity determination module, configured to acquire the number of queuing objects in the queuing area from a target image frame of the queuing video stream; and a queuing waiting time determination module, configured to obtain the queuing waiting time according to the dequeue speed and the number of queuing objects.
  • According to a further aspect, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements any of the methods for determining the queuing waiting time described above.
  • According to a further aspect, a computer-readable storage medium is provided, on which computer instructions are stored; when the instructions are executed by a processor, any of the methods for determining the queuing waiting time described above is implemented.
  • According to a further aspect, a computer program product is provided, including a computer program stored in a memory; when the computer program instructions are executed by a processor, any of the methods for determining the queuing waiting time described above is implemented.
  • Fig. 1 is a schematic diagram of a scene according to an exemplary embodiment of this specification.
  • Fig. 2 is a schematic flowchart of a method for determining the waiting time in a queue according to an exemplary embodiment of the present specification.
  • FIGS. 3A and 3B are schematic diagrams of a queuing area scene according to an exemplary embodiment of the present specification
  • FIG. 3B is a schematic diagram of a queuing area scene including object distribution.
  • Fig. 4 is a schematic diagram showing another queuing area scene according to an exemplary embodiment of the present specification.
  • FIGS. 5A and 5B are schematic diagrams of another queuing area scene according to an exemplary embodiment of the present specification, and FIG. 5B is a schematic diagram of a queuing area scene including object distribution.
  • Fig. 6 is a schematic diagram showing object positioning points according to an exemplary embodiment of the present specification.
  • Fig. 7 is a schematic diagram of a scene with multiple imaging devices according to an exemplary embodiment of the present specification.
  • Fig. 8 is a schematic flowchart of another method for determining the waiting time in a queue according to an exemplary embodiment of the present specification.
  • Fig. 9 is a schematic block diagram of a device for determining a queue waiting time according to an exemplary embodiment of the present specification.
  • Fig. 10 is a schematic diagram of a computing device according to an exemplary embodiment of this specification.
  • Although the terms first, second, third, etc. may be used in this specification to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of this specification, first information may also be called second information and, similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining".
  • A product applying the disclosed technical solution clearly informs the user of the information processing rules and obtains the user's consent before processing user information.
  • If sensitive user information is involved, a product applying the disclosed technical solution obtains the user's separate consent before processing it and also meets the requirement of "express consent". For example, at a user information collection device such as a camera, a clear and prominent sign informs the user that the collection range has been entered and that user information will be collected; a user who voluntarily enters the collection range is deemed to consent to the collection.
  • The user information processing rules may include the user information processor, the purpose of the processing, the processing method, the types of user information processed, and other information.
  • Because such a method for determining the queuing waiting time tracks every queuing object, tracking becomes harder when queuing objects are dense (for example, in a serpentine queue). On the one hand, the number of objects that need to be tracked is large; on the other hand, each object must be tracked for a long time, so the probability of tracking failures and tracking errors increases. The accuracy of the queuing waiting time estimated with such a method is therefore greatly reduced.
  • Based on this, this specification provides a method for determining the queuing waiting time: the heads of queuing objects are tracked only in the area near where they are about to leave the queue (end queuing), and the number of objects leaving the queue per unit time is determined to obtain a dequeue speed; at the same time, the number of queuing objects in the queuing area is counted. The queuing waiting time is then obtained from the dequeue speed and the number of queuing objects.
  • The method for determining the queuing waiting time is aimed at the person who is about to enter the queuing area and become the last queuing object; it estimates the total time from when that object starts queuing to when it leaves the queue, that is, the queuing waiting time.
  • FIG. 1 is a schematic diagram of a scene shown in this specification: 101 is a camera and area 102 is the area where queuing objects line up.
  • the actual queuing video stream can also be obtained through other means, such as a mobile phone with a camera.
  • the camera device uses the queued video stream to estimate the queued waiting time, or the camera device sends the queued video stream to a certain terminal device, and the terminal device uses the queued video stream to estimate the queued waiting time.
  • the method for determining the waiting time in line can be performed by electronic equipment such as a terminal device or a server
  • The terminal device can be a fixed or mobile terminal, such as a mobile phone, tablet computer, game console, desktop computer, advertising machine, all-in-one machine, vehicle-mounted terminal, unmanned aerial vehicle, aircraft, and so on.
  • the server includes a local server or a cloud server, etc.
  • the method can also be realized by calling a computer-readable instruction stored in a memory by a processor.
  • FIG. 2 is a schematic flowchart of a method for determining the waiting time in a queue according to an exemplary embodiment of the present specification, including steps 201 to 207 .
  • Step 201 acquire queued video streams in the target area.
  • the target area includes the queuing area and the dequeue area
  • The dequeue area includes a dequeue line that a queuing object must cross to end queuing and that is used to judge whether queuing has ended.
  • Queued video streams can be acquired in real time or from historical video streams.
  • Step 203 Identify the dequeue area in the queuing video stream, track the objects in the dequeue area, determine the number of dequeue objects crossing the dequeue line per unit time, and obtain the dequeue speed.
  • the out-of-queue area is an area range including the out-of-queue line.
  • The dequeue line is the line a queuing object must cross when it ends (completes) queuing; it is used to judge whether a queuing object has finished queuing. It is generally calibrated according to the shooting angle and the actual queuing situation, or generated by the computing system.
  • the dequeuing area is the area where object tracking needs to be performed, so it needs to include the queuing area where the queuing object is about to end queuing and the non-queuing area where the queuing has just ended.
  • Since the method described in this specification tracks heads in the dequeue area and determines the number of dequeue objects crossing the dequeue line per unit time to obtain the dequeue speed, the dequeue area used for head tracking must include the dequeue line in order to judge whether a queuing object has become a dequeue object.
  • In some embodiments, the dequeue area may include a queuing end area and a non-queuing area; at least part of the queuing end area overlaps with the queuing area, and the dequeue line serves as the dividing line between the queuing end area and the non-queuing area.
  • the last queuing area refers to an area where queuing is about to end, as shown in FIG. 3A and FIG. 4 , the area 302 can be used as the last queuing area.
  • The method shown in this specification only needs to track people located in the dequeue area. Specifically, when a person enters the dequeue area, a tracking ID is assigned to the person and tracking starts; when the person leaves the dequeue area, tracking of the person ends and the assigned tracking ID is deleted.
  • For the object tracking itself, refer to specific implementations of related methods, such as head tracking or body tracking, which are not described in detail in this specification. A minimal sketch of this ID lifecycle is given below.
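  • In the following sketch, objects are tracked only while they remain inside the dequeue area: a new ID is assigned on entry and the ID is dropped on exit. The per-frame head detector (detect_heads), the rectangular dequeue region, and the greedy nearest-neighbour matching are illustrative assumptions, not methods defined by this specification.

```python
import itertools
import math

def inside(region, point):
    """Axis-aligned rectangle test; region = (x_min, y_min, x_max, y_max)."""
    x, y = point
    x_min, y_min, x_max, y_max = region
    return x_min < x < x_max and y_min < y < y_max

def track_dequeue_area(frames, dequeue_region, detect_heads, max_dist=50.0):
    """Assign a tracking ID to each head while it stays inside the dequeue region.

    Returns a dict mapping tracking ID -> list of (frame_index, head_point).
    """
    next_id = itertools.count()
    active = {}          # tracking ID -> last known head point
    trajectories = {}    # tracking ID -> list of (frame_index, head_point)
    for idx, frame in enumerate(frames):
        points = [p for p in detect_heads(frame) if inside(dequeue_region, p)]
        matched = {}
        for p in points:
            # Greedily match the detection to the nearest still-unmatched active track.
            candidates = [(tid, last) for tid, last in active.items() if tid not in matched]
            best = min(candidates, key=lambda kv: math.dist(kv[1], p), default=None)
            if best is not None and math.dist(best[1], p) < max_dist:
                matched[best[0]] = p
            else:
                matched[next(next_id)] = p   # a new object entered the dequeue area
        # Tracks that found no detection have left the dequeue area: their IDs are dropped.
        active = matched
        for tid, p in matched.items():
            trajectories.setdefault(tid, []).append((idx, p))
    return trajectories
```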
  • The acquired queuing video stream has a relatively fixed viewing angle; in other words, the viewing angle does not change over a short period of time, so every image frame shows the same region and shares the same queuing area, dequeue area, and dequeue line.
  • the exit area and exit line can be determined according to the actual scene.
  • The dequeue area and dequeue line obtained depend on the position, relative to the queuing area, of the camera that shoots the queuing video stream: for example, the dequeue line and dequeue area may lie in the upper-left part, the lower-right part, the middle, or elsewhere in the image frame.
  • FIG. 3A and Figure 4 are schematic diagrams of two scenes shown in this specification: the dequeue line and dequeue area in Figure 3A are in the middle of the right side of the image frame, and those in Figure 4 are in the middle of the left side.
  • FIG. 3B is the same as the area schematic diagram shown in FIG. 3A , and an example of objects is added in FIG. 3B , the direction of the dotted line is the direction of queuing, and each black circle represents a queuing object.
  • When determining the dequeue object, the following method may be used, including steps 2031 to 2032.
  • Step 2031 track the objects in the dequeuing area.
  • Step 2032 Determine the dequeue object based on the movement trajectory obtained by tracking the objects in the dequeue area.
  • the dequeuing object is determined according to the movement trajectory of the object moving from the queuing area across the dequeuing line to the non-queuing area.
  • When performing object tracking such as head tracking over consecutive frames, the head of the same person appears at different positions in different frames. Ordering these positions in time gives the movement trajectory of the object's head; the trajectory therefore has a direction, pointing along the time sequence.
  • the objects in the dequeue area may be tracked based on the dequeue area of the queued video stream, and the moving trajectory of each object may be determined respectively.
  • If the moving trajectory of any object intersects the dequeue line and the moving direction of the trajectory is from the queuing area to the non-queuing area, that object is determined to have left the queue and is counted as a dequeue object, as in the sketch below.
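  • A minimal sketch of this decision is shown next. It assumes the trajectory is a time-ordered list of head points and approximates the dequeue line as an infinite straight line; the sign convention telling which side of the line is the queuing area (queue_side_sign) is an illustrative assumption fixed once per camera view.

```python
def side(line, point):
    """Signed side of `point` with respect to the directed line ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = line
    x, y = point
    return (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)

def is_dequeue_object(trajectory, dequeue_line, queue_side_sign=1):
    """Return True if the trajectory crosses the dequeue line starting from the queuing side."""
    for prev_pt, next_pt in zip(trajectory, trajectory[1:]):
        s_prev = side(dequeue_line, prev_pt)
        s_next = side(dequeue_line, next_pt)
        # Consecutive points on opposite sides of the line mean a crossing; it only
        # counts when the movement starts on the queuing side.
        if s_prev * s_next < 0 and s_prev * queue_side_sign > 0:
            return True
    return False

# Example: a head moving left to right across a vertical dequeue line at x = 10,
# with the queuing area on the left (positive side of the directed line).
print(is_dequeue_object([(6, 3), (9, 3), (12, 4)], ((10, 0), (10, 8))))  # -> True
```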
  • Step 205 acquiring the number of queuing objects in the queuing area from the target image frame in the queuing video stream.
  • the target image frame is an image frame in the queuing video stream.
  • the queuing area refers to the area where the queuing objects are queued, and other areas are non-queuing areas, as shown in Figure 3A and Figure 4, which are schematic diagrams of the queuing area shown in this specification, and area 301 and area 302 are queuing areas, Other areas are non-queuing areas.
  • FIG. 5A is a schematic diagram of another queuing area shown in this specification, where area 301 and area 302 are queuing areas and area 305 is the position where a working object (for example, a member of staff) stands. The working object does not move along with the queuing objects, so the position where it stands does not belong to the queuing area; area 305 is therefore excluded from the queuing area and belongs to the non-queuing area. In the scene shown in Figure 5A, the non-queuing area thus includes areas 305, 303 and 304.
  • FIG. 5B is the same as the area schematic diagram shown in Figure 5A, with a schematic object distribution added.
  • The direction of the dotted line indicates the queuing path.
  • The estimated queuing waiting time is provided for the object about to join the queue, and the objects marked on the right are the objects that have just finished queuing.
  • The above are schematic diagrams of some queuing areas exemplified in this specification.
  • the queuing area may be more complicated.
  • The queuing area can be divided according to the angle of the shooting device and the actual queuing situation. For example, an emergency may expand the queuing area, staff may be posted in certain areas that then need not be counted as part of the queuing area, or the shooting angle may change; all of these can change the actual queuing area and require it to be redefined.
  • In that case the changed queuing area is obtained and used as the new queuing area.
  • the current queuing area may be determined first, and then the target image frame is obtained from the queuing video stream, and the number of queuing objects in the current queuing area is obtained according to the target image frame.
  • When acquiring the number of queuing objects located in the queuing area from the target image frame of the queuing video stream, head positioning may first be performed on the objects in the target image frame to obtain a set of head points. Then, based on the queuing area in the target image frame, the number of head points located in the queuing area is counted to obtain the number of queuing objects.
  • Figure 6 is a schematic diagram of such a scene shown in this specification. Some objects are in the queuing area and some are not.
  • Suppose the queuing area is a rectangle whose corner points A, B, C and D have coordinates (5,5), (5,20), (25,5) and (25,20). After positioning each head point in the target image frame, a set of head point coordinates is obtained. If the abscissa of a head point is greater than 5 and less than 25 and its ordinate is greater than 5 and less than 20, the head point lies in the queuing area and belongs to a queuing object; otherwise it lies in the non-queuing area and belongs to a non-queuing object. Then, within the set of head points, the number of head points belonging to queuing objects is counted to obtain the number of queuing objects.
  • a subject's head point may refer to the center point of the subject's head region.
  • Alternatively, the head region of each object in the target image frame can be located; for any object, if the proportion of its head region lying inside the queuing area is greater than a preset value, the object is determined to be in the queuing area and counted as a queuing object (for example, with a preset value of 90%, objects with 90% or more of their head region inside the queuing area are counted as queuing objects). A counting sketch based on the head-point approach is given below.
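  • The following sketch reuses the rectangular queuing area from the coordinate example above; the head-point detection itself is assumed to have already produced the list of points.

```python
def count_queuing_objects(head_points, x_min=5, x_max=25, y_min=5, y_max=20):
    """Count head points whose coordinates fall inside the rectangular queuing area."""
    return sum(1 for (x, y) in head_points
               if x_min < x < x_max and y_min < y < y_max)

# Example: three detected head points, two of which lie inside the queuing area.
print(count_queuing_objects([(10, 10), (24, 6), (30, 15)]))  # -> 2
```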
  • Step 207 Obtain the queue waiting time according to the dequeue speed and the number of queued objects.
  • The queue waiting time can be obtained by dividing the number of queued objects by the dequeue speed.
  • For example, let the dequeue speed be v, whose unit may be persons per hour or persons per minute, and let the number of queued objects be s, measured as a simple count; the estimated queue waiting time is then s/v, which with the example values used in the specification works out to 15 minutes.
  • In some scenarios additional time needs to be taken into account. For example, if a merchant needs to replace equipment every hour and each replacement takes about two minutes, then the estimated queue waiting time should also consider the time since the last equipment replacement and the time the replacement itself takes, and so on. A sketch of this calculation follows.
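  • In this sketch, the waiting time is the number of queuing objects divided by the dequeue speed, optionally extended by scenario-specific overhead such as a pending equipment replacement. The parameter names and the example numbers are illustrative assumptions (they merely happen to reproduce a 15-minute result).

```python
def estimate_waiting_time(num_queuing, dequeue_speed_per_min, extra_minutes=0.0):
    """Return the estimated queue waiting time in minutes.

    dequeue_speed_per_min: dequeue objects per minute (must be positive).
    extra_minutes: optional additional time, e.g. for an upcoming equipment replacement.
    """
    if dequeue_speed_per_min <= 0:
        raise ValueError("dequeue speed must be positive")
    return num_queuing / dequeue_speed_per_min + extra_minutes

# Example: 30 people in the queue leaving at 2 people per minute -> 15 minutes.
print(estimate_waiting_time(30, 2))  # -> 15.0
```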
  • As shown in Fig. 8, another method for determining the queuing waiting time may include steps 801 to 807.
  • Step 801. Obtain queued video streams in the target area.
  • the target area includes a queuing area and a dequeue area
  • the dequeue area includes a dequeue line that needs to pass through to end the queuing and judge whether to end the queuing.
  • Step 803: based on the image frames of the queuing video stream at a first moment and within a predetermined time period before the first moment, track the objects in the dequeue area in these image frames and determine the number of dequeue objects crossing the dequeue line per unit time, obtaining the dequeue speed corresponding to the first moment.
  • The first moment may be any moment covered by the queuing video stream other than the moment of the first image frame, and there need to be image frames within the predetermined time period before the first moment. For example, for the moment 9:40 there need to be image frames for the previous minute, that is, image frames corresponding to the period from 9:39 to 9:40.
  • the predetermined time period can be set according to the actual situation. For example, if the actual queuing situation is good and the speed of leaving the queue changes rapidly, the predetermined time period can be shorter; if the actual queuing situation is poor and the speed of leaving the queue changes slowly, the predetermined time period can be longer.
  • For example, the predetermined time period may be two minutes: for a given moment, based on that moment's image frame and the image frames within the two minutes before it, head tracking is performed on the queuing objects in the dequeue area in these frames, the number of heads of dequeue objects crossing the dequeue line per unit time is determined, and the dequeue speed corresponding to that moment is obtained.
  • If the time difference between two adjacent image frames is too small, their content is almost identical; for example, the time difference between the first and second image frames may be only 0.01 s.
  • Therefore, the plurality of image frames within the predetermined time period before the first moment may be only part of the frames in that period. For example, one frame may be sampled from the queuing video stream every 1 s, giving 120 frames for the two minutes before the moment, and head tracking is performed on the queuing objects in the dequeue area in the frame corresponding to the moment and in these 120 frames. A sketch of this windowed computation is given below.
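  • In the sketch below, roughly one frame per second is sampled over the predetermined window ending at the first moment, heads in the dequeue area are tracked, and the count of line crossings is converted into a per-minute dequeue speed. The video.frames_in_window interface and the helpers track_dequeue_area and is_dequeue_object (sketched earlier) are illustrative assumptions, not APIs defined by this specification.

```python
def dequeue_speed_at(first_moment, window_s, video, dequeue_region, dequeue_line,
                     queue_side_sign, detect_heads, sample_every_s=1.0):
    """Return the dequeue speed (objects per minute) for the window ending at first_moment."""
    # e.g. for a two-minute window sampled once per second this yields about 120 frames.
    frames = video.frames_in_window(first_moment - window_s, first_moment,
                                    step=sample_every_s)
    trajectories = track_dequeue_area(frames, dequeue_region, detect_heads)
    crossings = sum(
        is_dequeue_object([p for _, p in traj], dequeue_line, queue_side_sign)
        for traj in trajectories.values()
    )
    return crossings / (window_s / 60.0)
```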
  • Step 805 Obtain the number of queuing objects in the queuing area from the image frame corresponding to the first moment in the queuing video stream.
  • Step 807: obtain the queue waiting time corresponding to the first moment according to the dequeue speed and the number of queued objects corresponding to the first moment.
  • Step 805 may further include: acquiring the number of queuing objects in the queuing area from any image frame in the corresponding video stream within a fixed period of time before the first moment in the queuing video stream.
  • As shown in Fig. 7, the queuing area includes 702A and 702B, and the imaging equipment includes 701A and 701B.
  • the imaging equipment 701A is used for shooting the area 702A
  • the imaging equipment 701B is used for shooting the area 702B.
  • the queued video stream includes: multiple sub-video streams respectively captured by multiple cameras, wherein the area captured by each of the sub-video streams includes a part of the queuing area.
  • In this case, step 205 may include: determining, from the image frames of the plurality of sub-video streams corresponding to a second moment, the objects located in the queuing area in each image frame; and counting the objects located in the queuing area in each image frame to obtain the number of queuing objects corresponding to the second moment.
  • the second moment may be any moment of the video stream, and it is only necessary to ensure that the image frames of each sub-queuing video stream are simultaneously acquired at the second moment.
  • In this way, multiple imaging devices can cooperate: regardless of the shape of the queuing area and how wide it is, the multi-camera setup shown in this specification allows the queuing objects to be counted and the queuing waiting time to be obtained, as in the sketch below.
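  • In the sketch below, each sub-video stream covers part of the queuing area, so the per-camera counts taken at the same synchronised second moment are simply summed. The stream.frame_at interface and the count_in_frame callback are illustrative assumptions, and the camera views are assumed not to overlap.

```python
def count_queuing_objects_multicam(sub_streams, second_moment, count_in_frame):
    """Sum the per-camera queuing counts at one synchronised moment.

    sub_streams: iterable of objects exposing frame_at(t) -> image frame.
    count_in_frame: callable(frame) -> number of queuing objects in that frame's
    part of the queuing area (for example, the head-point counting sketched above).
    """
    return sum(count_in_frame(stream.frame_at(second_moment)) for stream in sub_streams)
```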
  • this specification also provides embodiments of a device and a terminal to which it is applied.
  • The device includes: an acquisition module 901, configured to acquire a queuing video stream of a target area, where the target area includes a queuing area and a dequeue area, and the dequeue area includes a dequeue line that must be crossed to end queuing and that is used to judge whether queuing has ended; a dequeue speed determination module 903, configured to identify the dequeue area in the queuing video stream, track objects in the dequeue area, and determine the number of dequeue objects crossing the dequeue line per unit time to obtain a dequeue speed; a queuing object quantity determination module 905, configured to acquire the number of queuing objects in the queuing area from a target image frame of the queuing video stream; and a queuing waiting time determination module 907, configured to obtain the queuing waiting time according to the dequeue speed and the number of queuing objects.
  • The dequeue area may include a queuing end area and a non-queuing area; in this case, at least part of the queuing end area overlaps with the queuing area, and the dequeue line serves as the boundary between the queuing end area and the non-queuing area.
  • The queuing object quantity determination module 905 may also be used to: perform head positioning on objects in the target image frame of the queuing video stream to obtain a set of head points in the target image frame; obtain the head points located in the queuing area based on the set of head points and the queuing area; and count the number of head points in the queuing area to obtain the number of queuing objects.
  • The dequeue speed determination module 903 may also be configured to: based on the image frames of the queuing video stream at a first moment and within a predetermined time period before the first moment, track the objects in the dequeue area in these image frames, determine the number of dequeue heads crossing the dequeue line per unit time, and obtain the dequeue speed corresponding to the first moment.
  • Correspondingly, the queuing object quantity determination module 905 is used to acquire the number of queuing objects in the queuing area from the image frame corresponding to the first moment in the queuing video stream, and the queuing waiting time determination module 907 is used to obtain the queue waiting time corresponding to the first moment according to that dequeue speed and that number of queued objects.
  • The dequeue speed determination module 903 is configured to: track the objects in the dequeue area; determine the dequeue objects based on the movement trajectories obtained by tracking the objects in the dequeue area; and determine the number of dequeue objects per unit time to obtain the dequeue speed.
  • The dequeue speed determination module 903 can also be used to: track the objects in the dequeue area; determine the movement trajectory of each object based on the tracking; in response to the trajectory of any object intersecting the dequeue line with a movement direction from the queuing area to the non-queuing area, determine that the object is a dequeue object; and determine the number of dequeue objects per unit time to obtain the dequeue speed.
  • The queuing video stream may include a plurality of sub-video streams respectively captured by multiple cameras, where the area captured by each sub-video stream includes part of the queuing area; in this case, the queuing object quantity determination module 905 can be used to: determine, from the image frames of the plurality of sub-video streams corresponding to a second moment, the objects located in the queuing area in each image frame; and count the objects located in the queuing area in each image frame to obtain the number of queuing objects corresponding to the second moment.
  • Since the device embodiment basically corresponds to the method embodiment, the description of the method embodiment can be referred to for the relevant parts.
  • The device embodiments described above are only illustrative. The modules described as separate components may or may not be physically separated, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed across multiple network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in this specification. Persons of ordinary skill in the art can understand and implement this without creative effort.
  • Embodiments of the apparatus in this specification can be applied to computer equipment, such as servers or terminal equipment.
  • The device embodiments can be implemented by software, by hardware, or by a combination of software and hardware. Taking software implementation as an example, a device in the logical sense is formed by the processor of the computer device in which it resides reading the corresponding computer program instructions from non-volatile memory into memory and running them.
  • Accordingly, this specification also provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements any of the methods for determining the queuing waiting time described above.
  • FIG. 10 is a schematic diagram of a more specific hardware structure of a computing device provided by an embodiment of this specification.
  • the device may include: a processor 1010 , a memory 1020 , an input/output interface 1030 , a communication interface 1040 and a bus 1050 .
  • the processor 1010 , the memory 1020 , the input/output interface 1030 and the communication interface 1040 are connected to each other within the device through the bus 1050 .
  • the processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit, central processing unit), a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, and is used to execute related programs to realize the technical solutions provided by the embodiments of this specification.
  • the memory 1020 can be implemented in the form of ROM (Read Only Memory, read-only memory), RAM (Random Access Memory, random access memory), static storage device, dynamic storage device, etc.
  • the memory 1020 can store operating systems and other application programs. When implementing the technical solutions provided by the embodiments of this specification through software or firmware, the relevant program codes are stored in the memory 1020 and invoked by the processor 1010 for execution.
  • the input/output interface 1030 is used to connect the input/output module to realize information input and output.
  • The input/output module can be configured in the device as a component (not shown in the figure), or can be externally connected to the device to provide corresponding functions.
  • the input device may include a keyboard, mouse, touch screen, microphone, various sensors, etc.
  • the output device may include a display, a speaker, a vibrator, an indicator light, and the like.
  • the communication interface 1040 is used to connect a communication module (not shown in the figure), so as to realize the communication interaction between the device and other devices.
  • the communication module can realize communication through wired means (such as USB, network cable, etc.), and can also realize communication through wireless means (such as mobile network, WIFI, Bluetooth, etc.).
  • Bus 1050 includes a path that carries information between the various components of the device (eg, processor 1010, memory 1020, input/output interface 1030, and communication interface 1040).
  • Although the above device only shows the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040 and the bus 1050, in a specific implementation the device may also include other components.
  • the above-mentioned device may only include components necessary to implement the solutions of the embodiments of this specification, and does not necessarily include all the components shown in the figure.
  • This specification also provides a computer-readable storage medium, on which computer instructions are stored, and when the instructions are executed by a processor, the steps of any method for determining the waiting time in a queue as described above are realized.
  • This specification also provides a computer program product, including a computer program stored in a memory, and when the computer program instructions are executed by a processor, the steps of the method for determining the queuing waiting time described in any one of the above are implemented.
  • Computer-readable media including both permanent and non-permanent, removable and non-removable media, can be implemented by any method or technology for storage of information.
  • Information may be computer readable instructions, data structures, modules of a program, or other data.
  • Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic cassettes, disk storage, quantum memory, graphene-based storage media or other magnetic storage devices or any other non-transmission media that can be used to store information that can be accessed by computing devices.
  • computer-readable media excludes transitory computer-readable media, such as modulated data signals and carrier waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A method and apparatus for determining queuing waiting time, the method comprising: obtaining a queue video stream of a target region, where the target region includes a queuing region and a dequeuing region, and the dequeuing region includes a dequeuing line which needs to be passed to end queuing and is used to determine whether queuing has ended; identifying the dequeuing region in the queue video stream, tracking objects in the dequeuing region, determining the number of dequeuing objects that have crossed the dequeuing line in a unit time, and obtaining a dequeuing speed; obtaining the number of queuing objects in the queuing region from a target image frame of the queue video stream; and obtaining a queuing waiting time on the basis of the dequeuing speed and the number of queuing objects.

Description

Method and device for determining queuing waiting time
Cross-Reference to Related Application
This application claims priority to the Chinese patent application with application number 202111308447.1, filed on November 05, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
This specification relates to the field of computer vision technology, and in particular to a method and device for determining the queuing waiting time.
Background
In real life, for better management, the queuing waiting time is often estimated so that queuing objects can arrange their time reasonably.
Head tracking is generally used to estimate the queuing waiting time. However, in a scene with dense heads (such as a serpentine queue), head tracking becomes more difficult, so tracking accuracy is hard to guarantee and the accuracy of the estimated waiting time also decreases.
Summary
According to a first aspect of the embodiments of this specification, a method for determining the queuing waiting time is provided. The method includes: acquiring a queuing video stream of a target area, where the target area includes a queuing area and a dequeue area, and the dequeue area includes a dequeue line that must be crossed to end queuing and that is used to judge whether queuing has ended; identifying the dequeue area in the queuing video stream, tracking objects in the dequeue area, and determining the number of dequeue objects crossing the dequeue line per unit time to obtain a dequeue speed; acquiring the number of queuing objects in the queuing area from a target image frame of the queuing video stream; and obtaining the queuing waiting time according to the dequeue speed and the number of queuing objects.
According to a second aspect of the embodiments of this specification, a device for determining the queuing waiting time is provided. The device includes: an acquisition module, configured to acquire a queuing video stream of a target area, where the target area includes at least a queuing area and a dequeue area, and the dequeue area includes a dequeue line that must be crossed to end queuing and that is used to judge whether queuing has ended; a dequeue speed determination module, configured to identify the dequeue area in the queuing video stream, track objects in the dequeue area, and determine the number of dequeue objects crossing the dequeue line per unit time to obtain a dequeue speed; a queuing object quantity determination module, configured to acquire the number of queuing objects in the queuing area from a target image frame of the queuing video stream; and a queuing waiting time determination module, configured to obtain the queuing waiting time according to the dequeue speed and the number of queuing objects.
According to a third aspect of this specification, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements any of the methods for determining the queuing waiting time described above.
According to a fourth aspect of this specification, a computer-readable storage medium is provided, on which computer instructions are stored; when the instructions are executed by a processor, any of the methods for determining the queuing waiting time described above is implemented.
According to a fifth aspect of this specification, a computer program product is provided, including a computer program stored in a memory; when the computer program instructions are executed by a processor, any of the methods for determining the queuing waiting time described above is implemented.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only and do not limit this specification.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with this specification and, together with the description, serve to explain the principles of this specification.
Fig. 1 is a schematic diagram of a scene according to an exemplary embodiment of this specification.
Fig. 2 is a schematic flowchart of a method for determining the queuing waiting time according to an exemplary embodiment of this specification.
Figs. 3A and 3B are schematic diagrams of a queuing area scene according to an exemplary embodiment of this specification; Fig. 3B is a schematic diagram of the queuing area scene including an object distribution.
Fig. 4 is a schematic diagram of another queuing area scene according to an exemplary embodiment of this specification.
Figs. 5A and 5B are schematic diagrams of yet another queuing area scene according to an exemplary embodiment of this specification; Fig. 5B is a schematic diagram of the queuing area scene including an object distribution.
Fig. 6 is a schematic diagram of object positioning points according to an exemplary embodiment of this specification.
Fig. 7 is a schematic diagram of a scene with multiple imaging devices according to an exemplary embodiment of this specification.
Fig. 8 is a schematic flowchart of another method for determining the queuing waiting time according to an exemplary embodiment of this specification.
Fig. 9 is a schematic block diagram of a device for determining the queuing waiting time according to an exemplary embodiment of this specification.
Fig. 10 is a schematic diagram of a computing device according to an exemplary embodiment of this specification.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this specification; rather, they are merely examples of devices and methods consistent with some aspects of this specification as detailed in the appended claims.
The terms used in this specification are for the purpose of describing particular embodiments only and are not intended to limit this specification. The singular forms "a", "the" and "said" used in this specification and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this specification to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of this specification, first information may also be called second information and, similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining".
If the technical solution of this disclosure involves user information, a product applying the technical solution clearly informs the user of the information processing rules and obtains the user's consent before processing the user information. If the technical solution involves sensitive user information, a product applying the technical solution obtains the user's separate consent before processing the sensitive information and, in addition, meets the requirement of "express consent". For example, at a user information collection device such as a camera, a clear and prominent sign informs the user that the collection range has been entered and that user information will be collected; a user who voluntarily enters the collection range is deemed to consent to the collection of his or her user information. Alternatively, on a device that processes user information, where the user information processing rules are indicated with clear signs or information, the user's authorization is obtained through a pop-up message or by asking the user to upload the information. The user information processing rules may include the user information processor, the purpose of the processing, the processing method, the types of user information processed, and other information.
In real life, there are many situations in which the queuing waiting time needs to be estimated so that queuing objects can arrange their time reasonably. For example, an amusement park has many attractions, and the manager of the park needs to estimate the queuing waiting time of each attraction through a terminal device and push it to visitors, so that visitors can arrange the order of their visit according to the queuing waiting time of each attraction. In other embodiments, the queuing waiting time of vehicles in a congested area may need to be estimated, for example at an expressway entrance or exit: during holidays vehicles queue there waiting to pass through, and a driver who wants to know the waiting time at a certain entrance or exit can request it from a server.
When estimating the queuing waiting time, it is generally necessary to track each queuing object, time how long each queuing object has been queuing, and then estimate the queuing waiting time of a newly arriving object according to those durations.
Because such a method tracks every queuing object, tracking becomes harder when queuing objects are dense (for example, in a serpentine queue). On the one hand, the number of objects that need to be tracked is large; on the other hand, each object must be tracked for a long time, so the probability of tracking failures and tracking errors increases. The accuracy of the queuing waiting time estimated with such a method is therefore greatly reduced.
Based on this, this specification provides a method for determining the queuing waiting time: the heads of queuing objects are tracked only in the area near where they are about to leave the queue (end queuing), and the number of objects leaving the queue per unit time is determined to obtain a dequeue speed; at the same time, the number of queuing objects in the queuing area is counted. The queuing waiting time is then obtained from the dequeue speed and the number of queuing objects.
Through one or more embodiments of this specification, only the queuing objects in the dequeue area, that is, those about to finish queuing, need to be tracked. On the one hand, since the tracking area and the number of tracked targets are reduced, tracking becomes easier and less computing resource is consumed. On the other hand, since each target is tracked for a shorter time, the probability of tracking failures and errors decreases, so tracking accuracy improves and the estimated queuing waiting time is more accurate.
Next, the method for determining the queuing waiting time proposed in this specification is described in detail.
It should be noted that the method for determining the queuing waiting time provided in this specification is aimed at the person who is about to enter the queuing area and become the last queuing object; it estimates the total time from when that object starts queuing to when it leaves the queue, that is, the queuing waiting time.
The application scenario of this specification is described first. As shown in Fig. 1, which is a schematic diagram of a scene, 101 is a camera and area 102 is the area where queuing objects line up. The camera photographs the queuing area to obtain a queuing video stream of the objects queuing in that area. Of course, in practice the queuing video stream can also be obtained by other means, for example with a mobile phone that has a camera.
The camera then uses the queuing video stream to estimate the queuing waiting time, or the camera sends the queuing video stream to a terminal device and the terminal device uses it to estimate the queuing waiting time.
The method for determining the queuing waiting time can be performed by an electronic device such as a terminal device or a server. The terminal device may be a fixed or mobile terminal, for example a mobile phone, tablet computer, game console, desktop computer, advertising machine, all-in-one machine, vehicle-mounted terminal, unmanned aerial vehicle, aircraft and so on; the server includes a local server, a cloud server and the like. The method can also be implemented by a processor calling computer-readable instructions stored in a memory.
As shown in Fig. 2, which is a schematic flowchart of a method for determining the queuing waiting time according to an exemplary embodiment of this specification, the method includes steps 201 to 207.
Step 201: acquire a queuing video stream of a target area.
The target area includes a queuing area and a dequeue area, and the dequeue area includes a dequeue line that must be crossed to end queuing and that is used to judge whether queuing has ended.
The queuing video stream may be acquired in real time or obtained from a historical video stream.
步骤203、识别所述排队视频流中的所述出队区域,对出队区域中的对象进行跟踪,确定单位时间内跨过出队线的出队对象数量,得到出队速度。Step 203: Identify the dequeue area in the queuing video stream, track the objects in the dequeue area, determine the number of dequeue objects crossing the dequeue line per unit time, and obtain the dequeue speed.
其中,出队区域为包括所述出队线的区域范围。Wherein, the out-of-queue area is an area range including the out-of-queue line.
出队线是排队对象结束(完成)排队时需要经过的线,用于判断任一排队对象是否结束排队的线,一般根据拍摄的视角以及实际排队情况标定,或者由计算系统生成。The exit line is the line that the queuing object needs to pass through when it ends (completes) the queuing. It is used to judge whether any queuing object ends the queuing line. It is generally calibrated according to the shooting angle and the actual queuing situation, or generated by the computing system.
出队区域是需要进行对象跟踪的区域,因此需要包括排队对象即将结束排队所处的排队区域以及刚结束排队所处的非排队区域。The dequeuing area is the area where object tracking needs to be performed, so it needs to include the queuing area where the queuing object is about to end queuing and the non-queuing area where the queuing has just ended.
本说明所示出的方法,由于需要在出队区域对头部进行跟踪,确定单位时间内跨过出队线的出队对象数量,得到出队速度,所以,为了判定排队对象是否为出队对象,进行头部跟踪的出队区域需要包括出队线。The method shown in this description needs to track the head in the dequeue area, determine the number of dequeue objects crossing the dequeue line per unit time, and obtain the dequeue speed. Therefore, in order to determine whether the queued object is dequeue Object, the dequeue area for head tracking needs to include the dequeue line.
在一些实施例中,出队区域还可以包括排队末段区域和非排队区域,排队末段区域至少部分区域与排队区域重合,出队线用于作为第一排队区域与非排队区域之间的分界线。In some embodiments, the dequeuing area may also include a queuing end area and a non-queuing area, at least part of the queuing end area overlaps with the queuing area, and the dequeuing line is used as a bridge between the first queuing area and the non-queuing area. dividing line.
其中,末段排队区域是指即将结束排队的区域,如图3A以及图4所示的排队区域示意图,区域302可以作为末段排队区域。Wherein, the last queuing area refers to an area where queuing is about to end, as shown in FIG. 3A and FIG. 4 , the area 302 can be used as the last queuing area.
本说明书示出的方法,仅需要对位于出队区域内的人进行跟踪,具体而言,当某一人进入出队区域后,为其分配跟踪ID并进行跟踪,当其离开出队区域后,确定结束对该人的跟踪,删除为该人分配的跟踪ID。对象跟踪的方法可参见相关方法的具体实现,例如头部跟踪、身体跟踪等等,本说明书不进行详细赘述。The method shown in this manual only needs to track the people located in the dequeue area. Specifically, when a person enters the dequeue area, he is assigned a tracking ID and tracked. When he leaves the dequeue area, Determines to end tracking of the person, deleting the tracking ID assigned to the person. For methods of object tracking, refer to specific implementations of related methods, such as head tracking, body tracking, etc., which will not be described in detail in this specification.
The acquired queuing video stream is a queuing video stream with a relatively fixed viewing angle; in other words, the viewing angle of the queuing video stream does not change within a short time. Therefore, the image frames in the queuing video stream capture the same area from the same viewing angle and thus share the same queuing area, dequeue area, and dequeue line.
The dequeue area and the dequeue line may depend on the actual scene. They differ with the position, relative to the queuing area, of the camera capturing the queuing video stream; for example, the dequeue line and the dequeue area may be in the upper-left part of the image frame, in the lower-right part of the image frame, or in the middle part of the image frame.
FIG. 3A and FIG. 4 are schematic diagrams of two scenes shown in this specification. The dequeue line and the dequeue area in FIG. 3A are in the middle of the right side of the image frame, and the dequeue line and the dequeue area in FIG. 4 are in the middle of the left side of the image frame. FIG. 3B shows the same areas as FIG. 3A with example objects added: the dotted line indicates the queuing direction, and each black circle represents a queuing object.
The dequeue objects may be determined using the following method, which includes steps 2031 to 2032.
Step 2031: Track the objects in the dequeue area.
Step 2032: Determine the dequeue objects based on the movement trajectories obtained by tracking the objects in the dequeue area.
A dequeue object is determined according to a movement trajectory along which the object moves from the queuing area across the dequeue line into the non-queuing area.
When object tracking is performed, for example head tracking, the position of the same person's head differs across consecutive image frames. By ordering these positions in time, the movement trajectory of the object's head is obtained; the trajectory has a direction that follows the time sequence, yielding a directed movement trajectory. As long as it is determined that the direction of the movement trajectory is from the queuing area to the non-queuing area and that the trajectory passes through the dequeue line, the object corresponding to that trajectory can be determined to be a dequeue object.
In some embodiments, the objects in the dequeue area may be tracked based on the dequeue area of the queuing video stream, and the movement trajectory of each object is determined separately. When the movement trajectory of any object intersects the dequeue line and the movement direction of the trajectory is from the queuing area to the non-queuing area, the object is determined to be a dequeue object. In other words, when the movement trajectory of any queuing object intersects the dequeue line and the movement direction of the trajectory is from the queuing area to the non-queuing area, it can be determined that the queuing object has dequeued and that the queuing object is a dequeue object.
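As a rough, non-limiting sketch of the crossing test described above, the following Python code treats the dequeue line as the segment from point a to point b, assumes the queuing area lies on the positive side of that line, and for simplicity checks crossings of the infinite line rather than the bounded segment; the names and the sign convention are assumptions, not part of this specification.

```python
# Illustrative sketch: decide whether a tracked head trajectory corresponds to a
# dequeue object. The dequeue line is the segment a -> b; the queuing area is
# assumed to lie on the positive side of that line.

def side(p, a, b):
    """Signed value: positive on one side of the line through a and b, negative on the other."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def is_dequeue_trajectory(trajectory, a, b):
    """trajectory: time-ordered list of (x, y) head points observed in the dequeue area."""
    for prev, curr in zip(trajectory, trajectory[1:]):
        # Crossing from the queuing side (positive) to the non-queuing side (negative).
        if side(prev, a, b) > 0 and side(curr, a, b) < 0:
            return True
    return False

# Example with a vertical dequeue line through (10, 0) and (10, 5); with this sign
# convention the queuing side is x < 10, so this head crossed into the non-queuing area.
print(is_dequeue_trajectory([(8, 2), (9, 2), (12, 2)], (10, 0), (10, 5)))  # -> True
```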
Step 205: Acquire the number of queuing objects in the queuing area from a target image frame in the queuing video stream.
Since the number of queuing objects in the queuing area does not change within a short time, the target image frame is one image frame in the queuing video stream.
In this specification, the queuing area refers to the area in which queuing objects queue, and the other areas are non-queuing areas. In the queuing area schematic diagrams shown in FIG. 3A and FIG. 4, the area 301 and the area 302 are queuing areas, and the other areas are non-queuing areas.
FIG. 5A is a schematic diagram of another queuing area shown in this specification, in which the area 301 and the area 302 are queuing areas, and the area 305 is the position where a working object stands. The working object does not move along with the queuing objects, so the position where the working object stands does not belong to the queuing area; therefore, the area 305 is not included in the queuing area and belongs to the non-queuing area. Accordingly, in the scene shown in FIG. 5A, the non-queuing area includes the area 305, the area 303, and the area 304.
FIG. 5B shows the same areas as FIG. 5A, with a schematic of the object distribution added. The dotted line indicates the queuing path. The object marked on the left is about to join the queue as the last queuing object, and the server may provide the estimated queuing time to this object; the object marked on the right is an object that has just finished queuing.
The above are schematic diagrams of some queuing areas given as examples in this specification. In practical applications, the queuing area may be more complicated, and it may be divided according to the angle of the camera and the actual queuing situation. For example, the queuing area may be enlarged due to an emergency, staff may be posted in part of the area so that this part does not need to be included in the queuing area, or the shooting angle may change; any of these can cause the actual queuing area to change, so that the queuing area is re-demarcated.
Therefore, in one or more embodiments, when it is determined that the queuing area has changed (for example, the area layout changes from FIG. 4 to FIG. 5A), the changed queuing area is acquired and used as the new queuing area.
Therefore, in one or more embodiments of this specification, before step 205, the current queuing area may be determined first, then the target image frame is obtained from the queuing video stream, and the number of queuing objects in the current queuing area is obtained according to the target image frame.
When acquiring the number of queuing objects located in the queuing area from the target image frame in the queuing video stream, head positioning may first be performed on the objects in the target image frame to obtain a set of head points of the objects in the target image frame. Then, based on the queuing area in the target image frame, the number of head points located in the queuing area is obtained from the set of head points, yielding the number of queuing objects. FIG. 6 is a schematic diagram of a scene shown in this specification: some objects are in the queuing area and some are not, and the points in the figure are the positioned objects. Suppose the coordinates of the four vertices A, B, C, D of the queuing area in the figure are [(5,5), (5,20), (25,5), (25,20)]. After each head point in the target image frame is positioned, a set of head point coordinates is obtained. If the abscissa of a head point is greater than 5 and less than 25 and its ordinate is greater than 5 and less than 20, that head point is located in the queuing area and belongs to a queuing object; otherwise, the head point is located in a non-queuing area and belongs to a non-queuing object. Then, the number of head point coordinates belonging to queuing objects in the set of head points is determined, yielding the number of queuing objects. The head point of an object may refer to the center point of the object's head region.
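Purely as an illustrative sketch of the head-point counting in the example above (an axis-aligned rectangular queuing area with x between 5 and 25 and y between 5 and 20), and not as a definitive implementation, the count could be computed as follows; in practice the queuing area may be an arbitrary polygon, in which case a point-in-polygon test would be used instead.

```python
# Minimal sketch of counting queuing objects from head points for the rectangular
# queuing area used in the example above; all names are illustrative.

QUEUE_X_MIN, QUEUE_X_MAX = 5, 25
QUEUE_Y_MIN, QUEUE_Y_MAX = 5, 20

def count_queuing_objects(head_points):
    """head_points: list of (x, y) head-point coordinates from the target image frame."""
    count = 0
    for x, y in head_points:
        if QUEUE_X_MIN < x < QUEUE_X_MAX and QUEUE_Y_MIN < y < QUEUE_Y_MAX:
            count += 1  # head point lies inside the queuing area: a queuing object
    return count

# Example: three detected heads, two of which lie inside the queuing area
print(count_queuing_objects([(10, 10), (24, 19), (30, 8)]))  # -> 2
```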
The above describes determining the number of queuing objects by head point positioning as shown in this specification. In practical applications, the head regions of the objects in the target image frame may also be positioned. For the head region corresponding to any object, if the portion of the head region lying inside the queuing area is greater than a preset value, the object is determined to be in the queuing area and is determined to be a queuing object (for example, if the preset value is set to ninety percent, an object with ninety percent or more of its head region inside the queuing area is determined to be in the queuing area and is determined to be a queuing object).
Of course, there are other methods for determining the number of queuing objects in the queuing area, such as locating each object according to body structure or locating each object according to facial features, which are not described one by one in this specification.
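As a non-limiting sketch of the head-region variant mentioned above, the following code treats both the head region and the queuing area as axis-aligned boxes purely for illustration and uses a ninety-percent overlap threshold; real head regions and queuing areas need not be rectangular, and all names are hypothetical.

```python
# Rough sketch: an object counts as queuing when at least a preset fraction of its
# head bounding box lies inside the queuing area. Boxes are (x_min, y_min, x_max, y_max).

def overlap_ratio(head_box, queue_box):
    """Return (overlap area of head_box with queue_box) / (area of head_box)."""
    x1 = max(head_box[0], queue_box[0]); y1 = max(head_box[1], queue_box[1])
    x2 = min(head_box[2], queue_box[2]); y2 = min(head_box[3], queue_box[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    head_area = (head_box[2] - head_box[0]) * (head_box[3] - head_box[1])
    return inter / head_area if head_area > 0 else 0.0

def is_queuing(head_box, queue_box, threshold=0.9):
    return overlap_ratio(head_box, queue_box) >= threshold

# Example: a head box fully inside the queuing area is counted as a queuing object
print(is_queuing((6, 6, 8, 8), (5, 5, 25, 20)))  # -> True
```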
Step 207: Obtain the queuing waiting time according to the dequeue speed and the number of queuing objects.
The queuing waiting time may be obtained by dividing the number of queuing objects by the dequeue speed.
For example, let the dequeue speed be v, in units of, for example, persons per hour or persons per minute, and let the number of queuing objects be s; then the queuing waiting time is t = s / v, in hours or minutes.
Assuming that the dequeue speed is 2 persons per minute and the number of queuing objects is 30, the estimated queuing waiting time is 15 minutes.
In practical applications, many other factors also affect the queuing waiting time. Therefore, the queuing waiting time may be t = s / v + α, where α is the time consumed by, or the error introduced by, other influencing factors.
For example, if a merchant needs to replace a device every hour and each replacement takes about two minutes, then when determining the queuing waiting time, the estimated queuing waiting time at the current moment needs to be determined according to the time of the last device replacement, which requires taking into account the time α consumed by the device replacement, and so on.
In practical applications, there are many other factors affecting the queuing waiting time, which are not described in detail here.
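As a trivial numerical sketch of the relation t = s / v + α above (the values below are illustrative only and are not measurements from the method):

```python
# Illustrative calculation of the queuing waiting time t = s / v + alpha.

def queuing_wait_minutes(num_queuing, dequeue_per_minute, alpha_minutes=0.0):
    return num_queuing / dequeue_per_minute + alpha_minutes

print(queuing_wait_minutes(30, 2))       # 15.0 minutes, as in the example above
print(queuing_wait_minutes(30, 2, 2.0))  # 17.0 minutes with a 2-minute correction alpha
```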
In practical applications, the queuing speed is not constant; it changes over time, and the queuing waiting time also changes over time. Therefore, in one or more embodiments of this specification, in the schematic flowchart shown in FIG. 8, the method for determining the queuing waiting time may include steps 801 to 807.
Step 801: Acquire a queuing video stream of the target area.
The target area includes a queuing area and a dequeue area, and the dequeue area includes a dequeue line that a queuing object passes through when ending queuing and that is used to judge whether queuing has ended.
Step 803: Based on a plurality of image frames in the queuing video stream at a first moment and within a predetermined time period before the first moment, track the objects in the dequeue area in the plurality of image frames, and determine the number of dequeue objects crossing the dequeue line per unit time to obtain the dequeue speed corresponding to the first moment.
The first moment may be a moment corresponding to the queuing video stream, but not the moment corresponding to the first image frame, because a plurality of image frames are needed within the predetermined time period before the first moment. For example, for the moment 9:40, image frames of the preceding minute are needed, that is, image frames corresponding to the time period from 9:39 to 9:40.
The predetermined time period may be set according to the actual situation. For example, if the actual queuing situation is good and the dequeue speed changes quickly, the predetermined time period may be shorter; if the actual queuing situation is poor and the dequeue speed changes slowly, the predetermined time period may be longer.
For example, the predetermined time period may be two minutes. For a certain moment, based on the image frames at that moment and within the two minutes before that moment, head tracking is performed on the queuing objects in the dequeue area in these image frames, and the number of heads of dequeue objects crossing the dequeue line per unit time is determined, so as to obtain the dequeue speed corresponding to that moment.
In practical applications, if the time difference between two adjacent image frames is too small, the contents of the two frames are almost identical; for example, the time difference between the first image frame and the second image frame may be 0.01 s.
Therefore, in one or more embodiments of this specification, the plurality of image frames within the predetermined time period before the first moment may be only part of the images within that predetermined time period. For example, one image frame is acquired from the queuing video stream every 1 s, so 120 image frames are obtained for the two minutes before the moment; head tracking is then performed on the image corresponding to that moment and on the queuing objects in the dequeue area in these 120 image frames, and the number of heads of dequeue objects crossing the dequeue line per unit time is determined to obtain the dequeue speed corresponding to the first moment.
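A minimal sketch of this windowed computation is given below, assuming frames sampled at one frame per second over the two minutes before the first moment and a caller-supplied predicate that tests whether a trajectory crossed the dequeue line (for example, one behaving like the crossing test sketched earlier); the names are hypothetical.

```python
# Sketch: dequeue speed at the first moment from a sampled two-minute window of frames.

WINDOW_SECONDS = 120  # predetermined time period: two minutes of sampled frames

def dequeue_speed_per_minute(trajectories, crossed_dequeue_line):
    """trajectories: head trajectories observed in the dequeue area during the window.
    crossed_dequeue_line: predicate returning True if a trajectory went from the
    queuing area across the dequeue line into the non-queuing area."""
    dequeued = sum(1 for traj in trajectories if crossed_dequeue_line(traj))
    return dequeued / (WINDOW_SECONDS / 60.0)  # dequeue objects per minute

# Example: if 4 of the tracked trajectories crossed the line during the window,
# the dequeue speed corresponding to the first moment is 2.0 objects per minute.
```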
Step 805: Acquire the number of queuing objects in the queuing area from the image frame corresponding to the first moment in the queuing video stream.
Step 807: Obtain the queuing waiting time corresponding to the first moment according to the dequeue speed corresponding to the first moment and the number of queuing objects.
In practical applications, if the queuing situation changes slowly, the number of queuing objects does not change within a certain period of time; for example, when the queuing speed is slow, the number of queuing objects may not change within 30 s. Therefore, in one or more embodiments of this specification, step 805 may further include: acquiring the number of queuing objects in the queuing area from any image frame of the video stream corresponding to a fixed time period before the first moment in the queuing video stream.
In addition, in practical applications, the queuing area may be very large, and it may be difficult for one camera to capture the entire queuing area, so multiple cameras are needed for shooting. FIG. 7 is a schematic diagram of a multi-camera cooperation scene shown in this specification: the queuing area includes 702A and 702B, the cameras include 701A and 701B, the camera 701A is used to capture the area 702A, and the camera 701B is used to capture the area 702B.
Therefore, in one or more embodiments of this specification, the queuing video stream includes a plurality of sub-video streams respectively captured by a plurality of cameras, where the area captured by each sub-video stream includes part of the queuing area.
In this case, step 205 may include: determining, according to the image frames respectively corresponding to the plurality of sub-video streams at a second moment, the objects located in the queuing area in each image frame; and counting the number of objects located in the queuing area in each image frame to obtain the number of queuing objects corresponding to the second moment.
The second moment may be any moment of the video stream, as long as the image frames of the sub queuing video streams are acquired at the same second moment.
In this way, when the area captured by one camera cannot cover the entire queuing area, multiple cameras cooperate with one another. No matter what shape the queuing area has or how wide it extends, the queuing objects can be counted through the multi-camera cooperative shooting shown in this specification, and the queuing waiting time is then obtained.
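A minimal, non-limiting sketch of this aggregation follows, assuming each camera's frame at the second moment has already been reduced to head points and that each camera has its own test for its portion of the queuing area; the names are hypothetical.

```python
# Sketch: total number of queuing objects at the second moment across multiple cameras.

def total_queuing_objects(per_camera_head_points, per_camera_in_area):
    """per_camera_head_points: list of head-point lists, one per sub-video stream.
    per_camera_in_area: list of predicates, one per camera, testing its queuing area."""
    total = 0
    for head_points, in_area in zip(per_camera_head_points, per_camera_in_area):
        total += sum(1 for p in head_points if in_area(p))
    return total

# Example: camera A contributes 2 queuing objects, camera B contributes 1, total is 3
print(total_queuing_objects(
    [[(1, 1), (2, 2)], [(9, 9)]],
    [lambda p: True, lambda p: p[0] > 5],
))  # -> 3
```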
Corresponding to the foregoing method embodiments, this specification also provides embodiments of an apparatus and a terminal to which the apparatus is applied.
This specification provides an apparatus for determining the queuing waiting time. As shown in FIG. 9, the apparatus includes: an acquisition module 901, configured to acquire a queuing video stream of a target area, where the target area includes a queuing area and a dequeue area, and the dequeue area includes a dequeue line that a queuing object passes through when ending queuing and that is used to judge whether queuing has ended; a dequeue speed determination module 903, configured to identify the dequeue area in the queuing video stream, track the objects in the dequeue area, and determine the number of dequeue objects crossing the dequeue line per unit time to obtain a dequeue speed; a queuing object quantity determination module 905, configured to acquire the number of queuing objects in the queuing area from a target image frame in the queuing video stream; and a queuing waiting time determination module 907, configured to obtain the queuing waiting time according to the dequeue speed and the number of queuing objects.
The dequeue area may include an end-of-queue area and a non-queuing area; in this case, at least part of the end-of-queue area overlaps the queuing area, and the dequeue line in the dequeue area is used as a boundary between the end-of-queue area and the non-queuing area.
The queuing object quantity determination module 905 may also be configured to: perform head positioning on objects in the target image frame in the queuing video stream to obtain a set of head points in the target image frame; acquire, based on the set of head points and the queuing area, the head points located in the queuing area; and count the number of head points in the queuing area to obtain the number of queuing objects.
In addition, the dequeue speed determination module 903 may also be configured to: based on a first moment and a plurality of image frames in the queuing video stream within a predetermined time period before the first moment, track the objects in the dequeue area in the plurality of image frames, and determine the number of dequeue heads crossing the dequeue line per unit time to obtain the dequeue speed corresponding to the first moment.
In this case, the queuing object quantity determination module 905 is configured to acquire the number of queuing objects in the queuing area from the image frame corresponding to the first moment in the queuing video stream, and the queuing waiting time determination module 907 is configured to obtain the queuing waiting time corresponding to the first moment according to the dequeue speed and the number of queuing objects.
In addition, the dequeue speed determination module 903 is configured to: track the objects in the dequeue area; determine the dequeue objects based on the movement trajectories of the objects obtained by tracking the objects in the dequeue area; and determine the number of dequeue objects per unit time to obtain the dequeue speed.
The dequeue speed determination module 903 may also be configured to: track the objects in the dequeue area; determine, based on tracking the objects in the dequeue area, the movement trajectory of each object; in response to the movement trajectory of any object intersecting the dequeue line and the movement direction of the trajectory being from the queuing area to the non-queuing area, determine that the object is a dequeue object; and determine the number of dequeue objects per unit time to obtain the dequeue speed.
In addition, the queuing video stream may include a plurality of sub-video streams respectively captured by a plurality of cameras, where the area captured by each sub-video stream includes part of the queuing area. In this case, the queuing object quantity determination module 905 may be configured to: determine, according to the image frames respectively corresponding to the plurality of sub-video streams at a second moment, the objects located in the queuing area in each image frame; and count the number of objects located in the queuing area in each image frame to obtain the number of queuing objects corresponding to the second moment.
For the implementation process of the functions and roles of the modules in the above apparatus, reference may be made to the implementation process of the corresponding steps in the above method, and details are not repeated here.
As for the apparatus embodiments, since they substantially correspond to the method embodiments, reference may be made to the relevant descriptions of the method embodiments. The apparatus embodiments described above are merely illustrative, where the modules described as separate components may or may not be physically separated, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solutions of this specification. A person of ordinary skill in the art can understand and implement the solutions without creative effort.
The apparatus embodiments of this specification may be applied to a computer device, such as a server or a terminal device. The apparatus embodiments may be implemented by software, or by hardware or a combination of software and hardware. Taking software implementation as an example, as an apparatus in a logical sense, it is formed by the processor of the device in which it is located reading the corresponding computer program instructions from a non-volatile memory into a memory for execution.
Correspondingly, this specification also provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the method for determining the queuing waiting time described in any one of the above.
FIG. 10 is a schematic diagram of a more specific hardware structure of a computing device provided by an embodiment of this specification. The device may include a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050, where the processor 1010, the memory 1020, the input/output interface 1030, and the communication interface 1040 are communicatively connected to one another inside the device through the bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute relevant programs to implement the technical solutions provided by the embodiments of this specification.
The memory 1020 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other application programs. When the technical solutions provided by the embodiments of this specification are implemented by software or firmware, the relevant program code is stored in the memory 1020 and is invoked and executed by the processor 1010.
The input/output interface 1030 is configured to connect an input/output module to implement information input and output. The input/output module may be configured in the device as a component (not shown in the figure), or may be externally connected to the device to provide corresponding functions. The input device may include a keyboard, a mouse, a touch screen, a microphone, various sensors, and the like; the output device may include a display, a speaker, a vibrator, an indicator light, and the like.
The communication interface 1040 is configured to connect a communication module (not shown in the figure) to implement communication interaction between this device and other devices. The communication module may communicate in a wired manner (for example, USB or a network cable) or in a wireless manner (for example, a mobile network, WIFI, or Bluetooth).
The bus 1050 includes a path that transmits information between the components of the device (for example, the processor 1010, the memory 1020, the input/output interface 1030, and the communication interface 1040).
It should be noted that although only the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040, and the bus 1050 are shown for the above device, in a specific implementation the device may further include other components necessary for normal operation. In addition, those skilled in the art can understand that the above device may also include only the components necessary to implement the solutions of the embodiments of this specification, and need not include all the components shown in the figure.
This specification also provides a computer-readable storage medium on which computer instructions are stored, where the instructions, when executed by a processor, implement the steps of the method for determining the queuing waiting time described in any one of the above.
This specification also provides a computer program product, including a computer program stored in a memory, where the computer program instructions, when executed by a processor, implement the steps of the method for determining the queuing waiting time described in any one of the above.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
The foregoing describes specific embodiments of this specification. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the accompanying drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In certain implementations, multitasking and parallel processing are also possible or may be advantageous.
Other embodiments of this specification will readily occur to those skilled in the art from consideration of the specification and practice of the invention applied for herein. This specification is intended to cover any variations, uses, or adaptations of this specification that follow the general principles of this specification and include common knowledge or customary technical means in the technical field that are not disclosed in this specification. The specification and embodiments are to be considered exemplary only, with the true scope and spirit of this specification being indicated by the following claims.
It should be understood that this specification is not limited to the precise constructions described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of this specification is limited only by the appended claims.
The above are only some embodiments of this specification and are not intended to limit this specification. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this specification shall fall within the protection scope of this specification.

Claims (11)

1. A method for determining a queuing waiting time, applied to a terminal device, comprising:
    acquiring a queuing video stream of a target area, wherein the target area comprises a queuing area and a dequeue area, and the dequeue area comprises a dequeue line that a queuing object passes through when ending queuing and that is used to judge whether queuing has ended;
    identifying the dequeue area in the queuing video stream, tracking objects in the dequeue area, and determining the number of dequeue objects crossing the dequeue line per unit time to obtain a dequeue speed;
    acquiring the number of queuing objects in the queuing area from a target image frame of the queuing video stream; and
    obtaining the queuing waiting time according to the dequeue speed and the number of queuing objects.
2. The method according to claim 1, wherein the dequeue area further comprises an end-of-queue area and a non-queuing area;
    at least part of the end-of-queue area overlaps the queuing area, and the dequeue line in the dequeue area is used as a boundary between the end-of-queue area and the non-queuing area.
3. The method according to claim 1, wherein acquiring the number of queuing objects in the queuing area from the target image frame of the queuing video stream comprises:
    performing head positioning on objects in the target image frame in the queuing video stream to obtain a set of head points in the target image frame; and
    determining, based on the set of head points and the queuing area, the number of head points located in the queuing area to obtain the number of queuing objects.
4. The method according to claim 1, wherein tracking the objects in the dequeue area and determining the number of dequeue objects crossing the dequeue line per unit time to obtain the dequeue speed comprises:
    tracking, based on a first moment and a plurality of image frames in the queuing video stream within a predetermined time period before the first moment, objects in the dequeue area in the plurality of image frames, and determining the number of dequeue objects crossing the dequeue line per unit time to obtain a dequeue speed corresponding to the first moment;
    acquiring the number of queuing objects in the queuing area from the target image frame of the queuing video stream comprises:
    acquiring the number of queuing objects in the queuing area from an image frame corresponding to the first moment in the queuing video stream; and
    obtaining the queuing waiting time according to the dequeue speed and the number of queuing objects comprises:
    obtaining a queuing waiting time corresponding to the first moment according to the dequeue speed corresponding to the first moment and the number of queuing objects.
5. The method according to claim 1, wherein tracking the objects in the dequeue area and determining the number of dequeue objects crossing the dequeue line per unit time to obtain the dequeue speed comprises:
    tracking the objects in the dequeue area;
    determining the dequeue objects based on movement trajectories of the objects obtained by tracking the objects in the dequeue area; and
    determining the number of dequeue objects per unit time to obtain the dequeue speed.
6. The method according to claim 5, wherein determining the dequeue objects based on the movement trajectories of the objects obtained by tracking the objects in the dequeue area comprises:
    determining, based on tracking the objects in the dequeue area, a movement trajectory of each object; and
    in response to a movement trajectory of any object intersecting the dequeue line and a movement direction of the movement trajectory being from the queuing area to a non-queuing area, determining that the object is a dequeue object.
7. The method according to claim 1, wherein the queuing video stream comprises a plurality of sub-video streams respectively captured by a plurality of cameras, and an area captured by each of the sub-video streams comprises part of the queuing area;
    acquiring the number of queuing objects in the queuing area from the target image frame of the queuing video stream comprises:
    determining, according to image frames respectively corresponding to the plurality of sub-video streams at a second moment, objects located in the queuing area in each of the image frames; and
    counting the number of objects located in the queuing area in each of the image frames to obtain the number of queuing objects corresponding to the second moment.
8. An apparatus for determining a queuing waiting time, comprising:
    an acquisition module, configured to acquire a queuing video stream of a target area, wherein the target area comprises a queuing area and a dequeue area, and the dequeue area comprises a dequeue line that a queuing object passes through when ending queuing and that is used to judge whether queuing has ended;
    a dequeue speed determination module, configured to identify the dequeue area in the queuing video stream, track objects in the dequeue area, and determine the number of dequeue objects crossing the dequeue line per unit time to obtain a dequeue speed;
    a queuing object quantity determination module, configured to acquire the number of queuing objects in the queuing area from a target image frame of the queuing video stream; and
    a queuing waiting time determination module, configured to obtain the queuing waiting time according to the dequeue speed and the number of queuing objects.
9. A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1 to 7 when executing the program.
10. A computer-readable storage medium on which computer instructions are stored, wherein the instructions, when executed by a processor, implement the method according to any one of claims 1 to 7.
11. A computer program product, comprising a computer program stored in a memory, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 7.
PCT/CN2022/095777 2021-11-05 2022-05-27 Method and apparatus for determining queuing waiting time WO2023077783A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111308447.1 2021-11-05
CN202111308447.1A CN114037146A (en) 2021-11-05 2021-11-05 Queuing waiting time length determining method and device

Publications (1)

Publication Number Publication Date
WO2023077783A1 true WO2023077783A1 (en) 2023-05-11

Family

ID=80136484

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/095777 WO2023077783A1 (en) 2021-11-05 2022-05-27 Method and apparatus for determining queuing waiting time

Country Status (2)

Country Link
CN (1) CN114037146A (en)
WO (1) WO2023077783A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114037146A (en) * 2021-11-05 2022-02-11 北京市商汤科技开发有限公司 Queuing waiting time length determining method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105139040A (en) * 2015-10-13 2015-12-09 商汤集团有限公司 Queuing state information detection method and system thereof
US20180061161A1 (en) * 2016-08-30 2018-03-01 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and storage medium
US20190213422A1 (en) * 2018-01-10 2019-07-11 Canon Kabushiki Kaisha Information processing apparatus and method of controlling the same
US20200111031A1 (en) * 2018-10-03 2020-04-09 The Toronto-Dominion Bank Computerized image analysis for automatically determining wait times for a queue area
CN114037146A (en) * 2021-11-05 2022-02-11 北京市商汤科技开发有限公司 Queuing waiting time length determining method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111008611B (en) * 2019-12-20 2023-07-14 浙江大华技术股份有限公司 Queuing time length determining method and device, storage medium and electronic device
JP7206229B2 (en) * 2020-02-13 2023-01-17 Kddi株式会社 Waiting Time Estimating Device, Waiting Time Estimating System, Waiting Time Estimating Method, and Computer Program

Also Published As

Publication number Publication date
CN114037146A (en) 2022-02-11

Similar Documents

Publication Publication Date Title
JP7480823B2 (en) Information processing device, information processing method, and program
US10970915B2 (en) Virtual viewpoint setting apparatus that sets a virtual viewpoint according to a determined common image capturing area of a plurality of image capturing apparatuses, and related setting method and storage medium
US11102413B2 (en) Camera area locking
US20180182114A1 (en) Generation apparatus of virtual viewpoint image, generation method, and storage medium
CN109495686B (en) Shooting method and equipment
CN106295598A (en) A kind of across photographic head method for tracking target and device
CN108848301A (en) A kind of bill shooting exchange method, device, processing equipment and client
CN107084740B (en) Navigation method and device
WO2023077783A1 (en) Method and apparatus for determining queuing waiting time
US20210133495A1 (en) Model providing system, method and program
CN110245641A (en) A kind of target tracking image pickup method, device, electronic equipment
CN106289180A (en) The computational methods of movement locus and device, terminal
JP2019003428A (en) Image processing device, image processing method, and program
CN110544268A (en) Multi-target tracking method based on structured light and SiamMask network
CN113992860B (en) Behavior recognition method and device based on cloud edge cooperation, electronic equipment and medium
JP5263748B2 (en) Method, system and computer-readable recording medium for providing information about an object using a viewing frustum
CN112640419A (en) Following method, movable platform, device and storage medium
CN110309330A (en) The treating method and apparatus of vision map
WO2024055967A1 (en) Video processing method and apparatus, computer device, and storage medium
CN112215036B (en) Cross-mirror tracking method, device, equipment and storage medium
CN115514887A (en) Control method and device for video acquisition, computer equipment and storage medium
CN114913470A (en) Event detection method and device
CN113986094A (en) Map marking method, device, terminal and storage medium
CN112818743A (en) Image recognition method and device, electronic equipment and computer storage medium
CN112581497A (en) Multi-target tracking method, system, computing device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22888819

Country of ref document: EP

Kind code of ref document: A1