WO2018161889A1 - Method, apparatus, and computer-readable storage medium for acquiring queuing information - Google Patents


Info

Publication number
WO2018161889A1
WO2018161889A1 · PCT/CN2018/078126 · CN2018078126W
Authority
WO
WIPO (PCT)
Prior art keywords
human body
queuing
current
area
pixel
Prior art date
Application number
PCT/CN2018/078126
Other languages
English (en)
French (fr)
Inventor
刘琳 (Liu Lin)
Original Assignee
杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Priority to EP18763346.6A (published as EP 3594848 A4)
Publication of WO2018161889A1
Priority to US16/563,632 (published as US 11158035 B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30242 Counting objects in image

Definitions

  • the present application relates to the field of video surveillance, and in particular, to a method, an apparatus, and a computer readable storage medium for acquiring queuing information.
  • the administrator can obtain queuing information, which can include the number of queues, etc., adjust the number of service windows according to the queuing information, or evaluate the work efficiency of the service staff.
  • the queuing information can be obtained as follows: first, a queuing area is defined, a first camera is set at the head of the queuing area, and a second camera is set at the tail of the queuing area.
  • the first camera captures the queuing area at the queue head to obtain a first image, and the second camera captures the queuing area at the queue tail to obtain a second image.
  • human bodies in the queuing state are recognized in the two images, and queuing information is obtained according to the human bodies in the queuing state; for example, the number of people in the queuing state can be counted.
  • the present application provides a method, apparatus, and computer readable storage medium for obtaining queuing information.
  • the technical solution is as follows:
  • a method of obtaining queuing information comprising:
  • the first video image is a video image obtained by the camera capturing the scene for queuing
  • the queuing information is obtained according to the behavior state of each human body included in the current venue.
  • the determining, according to the location and height of each pixel, the behavior state of each human body included in the site including:
  • the first time is prior to the current time, and the time difference between the first time and the current time is a preset time threshold.
  • the acquiring the position and height of each pixel included in the first video image in the world coordinate system includes:
  • the obtaining the location of each human body included in the current venue according to the location and height of each pixel point includes:
  • the position of the human body corresponding to each human head image currently in the field is acquired through a preset conversion matrix.
  • the acquiring the depth map corresponding to the first video image according to the height of each pixel in the first video image includes:
  • the determining, according to the location of each human body included in the site at the first time and the location of each human body included in the current site, determining a behavior state of each human body included in the current site including:
  • the behavior state of the human body whose moving speed exceeds the preset speed threshold is set to the traveling state, and the behavior state of the human body whose moving speed does not exceed the preset speed threshold is set to the queuing state.
  • the obtaining the queuing information according to the behavior state of each human body included in the current venue includes:
  • the method further includes:
  • the method further includes:
  • the determining, according to the area occupied by each human body included in the current site, the at least one queuing area included in the current site including:
  • At least one connected domain is determined in the selected area, and each connected domain in the at least one connected domain is a queuing area.
  • before determining the queuing area in the current venue where each human body included in the queuing set is located, the method further includes:
  • the behavior state of the first human body is modified to the queuing state, and the first human body is added to the queuing set.
  • the method further includes:
  • determining that a human body whose behavior state at the first time was the queuing state and which does not currently appear in the field is an abnormally disappearing human body, and adding the abnormally disappearing human body to the disappearing human body queue;
  • Obtaining the queuing information of the current venue according to the determined queuing area including:
  • the method further includes:
  • the abnormally appearing human body is deleted from the disappearing human body queue.
  • an apparatus for obtaining queuing information comprising:
  • a first acquiring module configured to acquire a position and a height of each pixel included in the first video image in a world coordinate system, where the first video image is a video image obtained by the camera for capturing a field for queuing;
  • a first determining module configured to determine a behavior state of each human body included in the site according to the position and height of each pixel point;
  • the second obtaining module is configured to obtain queuing information according to behavior states of each human body included in the current venue.
  • the first determining module includes:
  • a first acquiring unit configured to acquire, according to the position and height of each pixel point, a position of each human body included in the current site
  • a first determining unit configured to determine, according to a location of each human body included in the site at a first time and a location of each human body included in the current site, a behavior state of each human body included in the current site, where the first time is prior to the current time and the time difference between the first time and the current time is a preset time threshold.
  • the first acquiring module includes:
  • a second acquiring unit configured to acquire a first video image captured by the camera
  • a third acquiring unit configured to acquire, according to the coordinates, in the image coordinate system, of each pixel point included in the first video image, the position and height of each pixel in the world coordinate system by using a preset conversion matrix.
  • the first acquiring unit is configured to:
  • the position of the human body corresponding to each human head image currently in the field is acquired through a preset conversion matrix.
  • the first acquiring unit is configured to:
  • the first determining unit is configured to:
  • the behavior state of the human body whose moving speed exceeds the preset speed threshold is set to the traveling state, and the behavior state of the human body whose moving speed does not exceed the preset speed threshold is set to the queuing state.
  • the second obtaining module includes:
  • a second determining unit configured to determine, according to the location of each human body included in the queuing set, the queuing area in the current venue where each human body included in the queuing set is located, where the queuing set includes the human bodies in the queuing state in the current venue;
  • a fourth acquiring unit configured to acquire queuing information of the current venue according to the determined queuing area.
  • the second obtaining module further includes:
  • a fifth obtaining unit configured to acquire, according to the position and height of each pixel point, an area occupied by each human body included in the current site;
  • the device also includes:
  • a second determining module configured to determine at least one queuing area included in the current venue according to an area occupied by each human body included in the current venue.
  • the second determining module includes:
  • An adjusting unit configured to increase a weight corresponding to an area occupied by a human body in a queuing state in the field, and reduce weights of other areas;
  • a selecting unit configured to select an area from the area included in the site that has a weight exceeding a preset weight threshold
  • a second determining unit configured to determine, in the selected area, at least one connected domain, where each connected domain in the at least one connected domain is a queuing area.
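The weighting-threshold-connected-domain pipeline of these units can be sketched as follows. This is an illustrative Python sketch, not part of the application; representing the site as a weight grid, the 4-connectivity choice, and the function name are assumptions.

```python
import numpy as np
from collections import deque

def queuing_areas(weight, threshold):
    """Select cells whose weight exceeds the threshold and label their
    4-connected domains; each labeled domain is one queuing area.
    Returns the label map and the number of areas found."""
    selected = weight > threshold
    labels = np.zeros(weight.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(selected)):
        if labels[sy, sx]:
            continue                      # already part of a labeled domain
        current += 1
        queue = deque([(sy, sx)])
        labels[sy, sx] = current
        while queue:                      # breadth-first flood fill
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < weight.shape[0] and 0 <= nx < weight.shape[1]
                        and selected[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current
```

Each nonzero label in the returned map marks one connected domain, i.e. one candidate queuing area whose head/tail positions and occupant count can then be read off.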
  • the device further includes:
  • a modification module configured to determine a first human body and the queuing area where the first human body was located at a first time, where the first human body is a human body whose current behavior state is the traveling state and whose behavior state at the first time was the queuing state; if the current position of the first human body is located in the determined queuing area, the behavior state of the first human body is modified to the queuing state and the first human body is added to the queuing set.
  • the device further includes:
  • a third determining module configured to determine that a human body whose behavior state at the first time was the queuing state and which does not currently appear in the field is an abnormally disappearing human body, and to add the abnormally disappearing human body to the disappearing human body queue;
  • the fourth obtaining unit is configured to count the number of human bodies located in the queuing area and in the queuing state, and to count the number of abnormally disappearing human bodies in the disappearing human body queue whose positions are located in the queuing area; the sum of the two counts is the number of people queuing in the queuing area.
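The counting rule of this unit (currently queuing bodies located in the area, plus abnormally disappeared bodies last seen in the area) can be illustrated with a minimal Python sketch. This is not part of the application; the dictionary layout and the `area_contains` predicate are hypothetical.

```python
def queue_length(bodies, vanished, area_contains):
    """People in a queuing area = bodies currently in the queuing state whose
    position lies in the area, plus abnormally disappeared bodies whose last
    known position lies in the area."""
    queuing = sum(1 for b in bodies
                  if b["state"] == "queuing" and area_contains(b["pos"]))
    missing = sum(1 for v in vanished if area_contains(v["pos"]))
    return queuing + missing
```

Counting the disappeared bodies compensates for momentary head-detection failures, so the reported queue length does not fluctuate when a head is briefly lost.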
  • the device further includes:
  • a fourth determining module configured to determine that a human body whose current behavior state is the queuing state but which did not appear in the venue at the first time is an abnormally appearing human body; if the disappearing human body queue includes the abnormally appearing human body, the abnormally appearing human body is deleted from the disappearing human body queue.
  • an apparatus for obtaining queuing information comprising:
  • at least one processor;
  • at least one memory;
  • the at least one memory stores one or more programs, the one or more programs being configured to be executed by the at least one processor, and the one or more programs comprising instructions for performing the method of the first aspect or any of its alternatives.
  • a non-transitory computer readable storage medium for storing a computer program, the computer program being loaded by a processor to execute instructions for performing the method of the first aspect or any of its alternatives.
  • the behavior state of each human body included in the site can be determined according to the position and height of each pixel point, and queuing information is obtained according to the behavior state of each human body, so that a single camera set up over the entire site suffices to obtain the queuing information, which reduces cost.
  • FIG. 1-1 is a flowchart of a method for obtaining queuing information according to Embodiment 1 of the present application;
  • FIG. 1-2 is a flowchart of another method for obtaining queuing information according to Embodiment 1 of the present application;
  • FIG. 2 is a schematic diagram of a site for queuing provided in Embodiment 2 of the present application;
  • FIG. 3 is a schematic structural diagram of an apparatus for acquiring queuing information according to Embodiment 3 of the present application;
  • FIG. 4 is a schematic structural diagram of a terminal provided in Embodiment 4 of the present application.
  • the queuing information of the venue is obtained by any one of the following embodiments and sent to the manager's terminal, so that the manager can adjust the number of service windows or evaluate the working efficiency of the service staff according to the queuing information.
  • an embodiment of the present application provides a method for obtaining queuing information, including:
  • Step 101 Acquire a position and a height of each pixel point included in the first video image in a world coordinate system, and the first video image is a video image obtained by the camera capturing the scene for queuing.
  • Step 102 Determine a behavior state of each human body included in the site according to the position and height of each pixel.
  • Step 103 Obtain queuing information according to the behavior state of each human body currently included in the site.
  • Step 1011 Acquire a position and a height of each pixel included in the first video image in a world coordinate system.
  • the first video image is a video image obtained by the camera capturing the scene for queuing.
  • the first video image captured by the camera is acquired; according to the coordinates, in the image coordinate system, of each pixel included in the first video image, the position and height of each pixel in the world coordinate system are obtained through a preset conversion matrix.
  • one or more queuing areas may be included in the venue, ie there may be one or more queuing teams in the venue.
  • the camera can take a picture of all of the queuing areas in the venue or take a shot of some of the queuing areas in the venue.
  • the partial queuing area may include one queuing area or a plurality of queuing areas. Compared with the current solution of setting a camera at both the head and the tail of each queuing area, the embodiment of the present application reduces the number of cameras and thus the cost.
  • Step 1021 Obtain the position of each human body currently included in the site according to the position and height of each pixel.
  • each human head image included in the depth map is identified, together with its position in the image coordinate system; according to the position of each human head image in the image coordinate system, the position of the human body corresponding to each human head image in the field is acquired through a preset conversion matrix.
  • Step 1031 Determine, according to the position of each human body included in the site at the first time and the position of each human body currently included in the site, the behavior state of each human body currently included in the site; the first time is prior to the current time, and the time difference between the first time and the current time is a preset time threshold.
  • the moving speed of each human body included in the site is calculated according to the position of each human body included in the site at the first time, the position of each human body currently included in the site, and the time interval between the first time and the current time;
  • the behavior state of the human body whose moving speed exceeds the preset speed threshold is set to the traveling state, and the behavior state of the human body whose moving speed does not exceed the preset speed threshold is set to the queuing state.
  • Step 1041 Obtain queuing information according to behavior states of each human body currently included in the site.
  • the queuing information may include at least one of the current total number of people queuing, the number of queuing areas, the number of people queuing in each queuing area, the queue head position, the queue tail position, the number of people joining the queue, the number of people leaving the queue, or the average queuing time.
  • the behavior state of each human body included in the site may be determined according to the position and height of each pixel point, and queuing information is obtained according to the behavior state of each human body, so that a single camera set up over the entire site suffices to obtain the queuing information, reducing costs.
  • an embodiment of the present application provides a method for obtaining queuing information, including:
  • Step 201 Acquire a first video image obtained by the camera currently capturing the field used for queuing.
  • a camera can be set in advance at the ticket office or checkout counter of the venue used for queuing.
  • the camera can be installed vertically or tilted above the queuing channel of the venue, so that the camera can at least capture the queuing area included in the site, and may also capture areas other than the queuing area. That is to say, in FIG. 2-2, the camera can photograph, in addition to the queued human bodies located in the queuing area, the traveling human bodies outside the queuing area.
  • the camera can be a binocular camera or a depth camera.
  • one or more queuing areas may be included in the venue, ie there may be one or more queued teams in the venue. Setting up the camera can take a picture of all queued areas in the venue or take a shot of some of the queued areas in the venue.
  • the partial queuing area may include one queuing area or a plurality of queuing areas.
  • a traveling human body refers to a person who is currently walking in the field rather than queuing.
  • Step 202 Obtain the position and height of each pixel in the world coordinate system through a preset conversion matrix, according to the coordinates, in the image coordinate system, of each pixel included in the first video image.
  • the image coordinate system is the coordinate system of the first video image
  • the world coordinate system is the coordinate system of the site.
  • the coordinate origin of the image coordinate system of the first video image may be a certain point in the first video image, the direction of its abscissa axis may be the same as the width direction of the first video image, and the direction of its ordinate axis may be the same as the length direction of the first video image.
  • the coordinate origin of the world coordinate system is the projection point of the camera on the horizontal ground.
  • the direction of the abscissa axis of the world coordinate system may be due east, the direction of the ordinate axis may be due north, and the vertical axis may be perpendicular to the site.
  • the camera that captures the first video image may be a binocular camera or a depth camera, and this step is implemented differently depending on the camera. Next, the implementations adopted for the different cameras are described separately, including:
  • the preset conversion matrix includes a first conversion matrix and a second conversion matrix. This step may be:
  • the coordinates of each pixel point included in the first video image in the camera coordinate system are calculated by the following formula (1), according to the coordinates, in the image coordinate system, of each pixel point included in the first video image:
  • x_a, y_a, and z_a are the abscissa, ordinate, and binocular parallax of the pixel point in the image coordinate system, respectively; x_b, y_b, and z_b are the coordinates of the pixel point in the camera coordinate system; and P is the preset first conversion matrix.
  • the position and height of each pixel point in the world coordinate system are then calculated by formula (2) from its coordinates in the camera coordinate system:
  • x_c, y_c, and z_c are the abscissa, ordinate, and height of the pixel in the world coordinate system, respectively; x_c and y_c constitute the position of the pixel in the world coordinate system; and M is the preset second conversion matrix.
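As a hedged sketch of the two-step conversion above: the application does not give the numeric contents of P and M (they come from camera calibration), so the 4x4 homogeneous-matrix form below is only one plausible way to express the linear maps of formulas (1) and (2).

```python
import numpy as np

def image_to_world(x_a, y_a, d, P, M):
    """Map a pixel (x_a, y_a) with binocular parallax d to world coordinates.

    P plays the role of the preset first conversion matrix (formula (1):
    image coordinates + parallax -> camera coordinates), and M the preset
    second conversion matrix (formula (2): camera -> world coordinates).
    Both are assumed here to be 4x4 homogeneous matrices from calibration.
    """
    cam = P @ np.array([x_a, y_a, d, 1.0])   # formula (1)
    cam = cam / cam[3]                       # normalize homogeneous scale
    world = M @ cam                          # formula (2)
    world = world / world[3]
    x_c, y_c, z_c = world[:3]
    return (x_c, y_c), z_c                   # (position in the field, height)
```

With identity matrices the mapping is a no-op, which makes the shape of the computation easy to check; real P and M would encode the stereo reprojection and the camera-to-ground transform.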
  • the binocular camera photographs the field, and the first video image and the second video image are obtained, and each pixel in the first video image corresponds to one pixel in the second video image.
  • the binocular disparity of each pixel in the first video image may be calculated in advance. The calculation process can be: read the column number of a first pixel from the first video image, the first pixel being any pixel in the first video image; read the column number of the second pixel corresponding to the first pixel from the second video image; and subtract the second column number from the first column number to obtain a difference, which is the binocular parallax of the first pixel. The binocular disparity of each pixel in the first video image can be calculated in this manner.
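Assuming the stereo correspondence step (finding, for each pixel of the first image, the column of its matching pixel in the second image) has already been done, the column subtraction described above is a per-pixel one-liner. The array layout below is an assumption for illustration only.

```python
import numpy as np

def disparity_map(match_cols):
    """Binocular parallax per pixel: each pixel's own column number in the
    first image minus the column number of its corresponding pixel in the
    second image. match_cols[r, c] is that corresponding column, produced
    by a stereo-matching step that is assumed to have run already."""
    h, w = match_cols.shape
    own_cols = np.tile(np.arange(w), (h, 1))   # column index of every pixel
    return own_cols - match_cols
```

The resulting disparity is the z_a fed into formula (1); larger disparity corresponds to points closer to the camera.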
  • the preset conversion matrix includes an internal parameter matrix and a rotation translation matrix. This step can be:
  • the position and height of each pixel point included in the first video image in the world coordinate system are calculated by the following formula (3), according to the coordinates, in the image coordinate system, of each pixel point included in the first video image:
  • x_a and y_a are the abscissa and ordinate of the pixel point in the image coordinate system, respectively; x_c, y_c, and z_c are the abscissa, ordinate, and height of the pixel point in the world coordinate system, with x_c and y_c constituting the position of the pixel in the world coordinate system; C is the internal parameter matrix; and R is the rotation-translation matrix.
  • each pixel in the first video image corresponds to one physical point in the field, so the position of each pixel in the first video image is essentially composed of the abscissa and ordinate, in the field, of the physical point corresponding to that pixel.
  • Step 203 Acquire a position and an occupied area of each human body included in the current site according to the position and height of each pixel in the first video image.
  • the position of the human body may be the position of a certain point of the human head in the field, which is composed of the abscissa and the ordinate of the point in the world coordinate system.
  • the point may be the center point or any point of the head of the human body.
  • This step can be completed by the following operations 2031 to 2033, including:
  • the pixel value of each pixel in the first video image is replaced with the height of that pixel to obtain a depth map. Specifically, a blank image whose image coordinate system is the same as that of the first video image is generated; the abscissa and ordinate of a first pixel are read from the first video image, the first pixel being any pixel in the first video image, and the height of the first pixel is filled in at the position of that abscissa and ordinate in the generated image. After every pixel is processed in this way, the generated image is the depth map corresponding to the first video image: the height of each pixel serves as the depth value of each pixel.
  • the head of the human body is higher than the other parts of the human body, so the depth value of each pixel in the head region is larger than the depth values of the pixels in the surrounding regions. Based on this feature, local-maximum regions of the depth value can be extracted from the depth map, and each extracted local-maximum region is the region of a human head image in the image coordinate system.
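The local-maximum extraction can be sketched as below. The window size and the minimum head height are illustrative assumptions, not values from the application, and a production system would likely use a proper peak detector with non-maximum suppression.

```python
import numpy as np

def head_regions(depth, win=5, min_height=1.2):
    """Flag pixels of the height map that are local maxima within a win x win
    window and at least min_height metres tall (candidate head tops)."""
    h, w = depth.shape
    r = win // 2
    # pad with -inf so border pixels compare only against real neighbours
    padded = np.pad(depth, r, mode="constant", constant_values=-np.inf)
    # sliding-window maximum via stacked shifted views
    stack = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(win) for dx in range(win)])
    local_max = depth >= stack.max(axis=0)
    return local_max & (depth >= min_height)
```

Each connected run of flagged pixels corresponds to one human head image, whose center or any point then gives the head position in the image coordinate system.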
  • the position of the human head image in the image coordinate system may be formed by the abscissa and ordinate, in the image coordinate system, of an arbitrary point in the human head image; or the abscissa and ordinate, in the image coordinate system, of the center point of the human head image may constitute the position of the human head image in the image coordinate system.
  • the position of the human body in the field is determined according to the position of the human head image in the image coordinate system by the following formula (4), wherein the abscissa x_c and the ordinate y_c constitute the position of the human body in the field:
  • x_c and y_c are the abscissa and ordinate in the world coordinate system, respectively, and constitute the position of the human body in the world coordinate system; x_a and y_a are the abscissa and ordinate in the image coordinate system, respectively, and constitute the position of the human head image in the image coordinate system; and x_b and y_b are the abscissa and ordinate in the camera coordinate system, respectively.
  • according to the position, in the image coordinate system, of each pixel point included in the human head image (or of each pixel point along the edge of the human head image), the position of each pixel in the field is determined by the above formula (4), thereby obtaining the area occupied by the human head in the field.
  • the area occupied by the human body in the field can then be determined from the area occupied by the human head: the area may be enlarged by a preset size on its left side and by a preset size on its right side, and the enlarged area is determined as the area occupied by the human body.
  • alternatively, the position of the human body in the field is determined by the above formula (3) according to the position of the human head image in the image coordinate system; the abscissa x_c and the ordinate y_c calculated by formula (3) constitute the position of the human body in the field.
  • according to the position, in the image coordinate system, of each pixel point included in the human head image (or of each pixel point along the edge of the human head image), the position of each pixel in the field is determined by the above formula (3), thereby obtaining the area occupied by the human head in the field.
  • the area occupied by the human body in the field can then be determined from the area occupied by the human head: the area may be enlarged by a preset size on its left side and by a preset size on its right side, and the enlarged area is determined as the area occupied by the human body.
  • Step 204 Determine the behavior state of each human body currently included in the site according to the location of each human body included in the site at the first time and the position of each human body currently included in the site.
  • the first time is prior to the current time, and the time difference between the first time and the current time is a preset time threshold.
  • the first time can be the frame number of the video image or a certain time point.
  • the first time may be the frame number of the second video image or the time of capturing the second video image.
  • the second video image is a video image captured by the camera before the current time, the first video image and the second video image being separated by N frames of video images, N being an integer greater than or equal to zero.
  • the position of each human body included in the site at the first time is obtained according to the second video image.
  • For the detailed acquisition process refer to the operations of steps 202 and 203 above.
  • the step may be: calculating the moving speed of each human body included in the site according to the position of each human body included in the site at the first time, the position of each human body currently included in the site, and the time interval between the first time and the current time.
  • the behavior state of the human body whose moving speed exceeds the preset speed threshold is set to the traveling state, and the behavior state of the human body whose moving speed does not exceed the preset speed threshold is set to the queuing state; the human bodies in the queuing state are the people queuing.
  • the speed at which a human body moves while queuing is usually slow, while a traveling human body usually moves faster, and the time interval between the first time and the current time is often only a few seconds or shorter; for example, the time interval may be 5 seconds, 3 seconds, 1 second, or 0.5 seconds. Therefore, in this step, the behavior state of a human body whose moving speed exceeds the preset speed threshold may be set to the traveling state, and the behavior state of a human body whose moving speed does not exceed the preset speed threshold may be set to the queuing state.
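The speed-threshold classification above can be sketched as follows. This is an illustrative Python sketch, not part of the application; the threshold value, the identifier-keyed position dictionaries, and the state names are assumptions.

```python
import math

SPEED_THRESHOLD = 0.3  # m/s; hypothetical preset speed threshold

def classify(prev_positions, curr_positions, dt):
    """Classify each tracked body as 'queuing' or 'traveling' from the speed
    computed between its position at the first time and its current position,
    over a time interval dt (seconds)."""
    states = {}
    for body_id, (x1, y1) in curr_positions.items():
        if body_id not in prev_positions:
            continue  # newly appeared body; no speed available yet
        x0, y0 = prev_positions[body_id]
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        states[body_id] = "traveling" if speed > SPEED_THRESHOLD else "queuing"
    return states
```

Positions here are the (x_c, y_c) world coordinates produced in the earlier steps, so the speed is in physical units rather than pixels.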
  • when people are queuing, a human body may pass through the queue. Such a human body usually moves faster than the preset speed threshold, so by calculating the moving speed from the position of each human body in the field at the first time and its current position, and setting the behavior state of any human body whose moving speed exceeds the preset speed threshold to the traveling state, the human body passing through the queue is not mistakenly set to the queuing state, which improves the accuracy of the acquired behavior states.
  • An abnormally disappearing human body refers to a human body that was in the queuing state at the first time but does not currently appear in the site; this may be caused by its head image not being recognized during the current head-image recognition.
  • An abnormally appearing human body refers to a human body that is currently in the queuing state but did not appear in the site at the first time; this may be caused by the shoulder or another body part of a different person being mistaken for a head image during the current head-image recognition.
  • An abnormally disappearing human body or an abnormally appearing human body can be identified, and the implementation process can be:
  • A human body whose behavior state at the first time was the queuing state and that does not currently appear in the site is determined to be an abnormally disappearing human body. If the appearing-body queue includes this human body, it is deleted from the appearing-body queue; otherwise, it is added to the disappearing-body queue.
  • The appearing-body queue includes the abnormally appearing human bodies determined before this step. If the behavior state of a human body included in the appearing-body queue was the queuing state at the first time, the human body was judged at the first time to be an abnormally appearing human body.
  • If such a human body is now judged to be an abnormally disappearing human body, it indicates that at the first time the shoulder or another part of a different person was mistaken for a head image, so the current judgment requires no further processing: the human body is simply deleted from the appearing-body queue.
  • A human body whose current behavior state is the queuing state and that did not appear in the site at the first time is determined to be an abnormally appearing human body. If the disappearing-body queue includes this human body, it is deleted from the disappearing-body queue; otherwise, it is added to the appearing-body queue and removed from the queuing set.
  • If a human body included in the disappearing-body queue is now judged to be an abnormally appearing human body, it indicates that its head image was simply not recognized during an earlier head-image recognition, which is why the human body was judged to be abnormally disappearing and added to the disappearing-body queue.
  • It is also possible to monitor the storage time of each abnormally disappearing human body in the disappearing-body queue, and to remove from the queue any abnormally disappearing human body whose storage time exceeds a preset storage time threshold.
  • When the storage time of an abnormally disappearing human body exceeds the preset storage time threshold, the human body may not actually exist: in an earlier judgment, the shoulder or another part of a different person may have been mistaken for a head image. The human body can therefore be deleted directly from the disappearing-body queue and no longer tracked.
  • When the storage time of an abnormally appearing human body exceeds the preset storage time threshold, it indicates that the head image of this human body may simply not have been recognized before this step, while it can be recognized in each subsequent head-image recognition. Therefore, the behavior state of an abnormally appearing human body whose storage time exceeds the preset storage time threshold can be set to the queuing state.
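The bookkeeping for the two anomaly queues described above can be sketched as follows. This is an illustrative data-structure sketch under assumed names and a made-up storage time threshold; the patent does not prescribe a concrete implementation.

```python
STORAGE_TIME_THRESHOLD = 10.0  # seconds; assumed value for illustration

class AnomalyTracker:
    """Bookkeeping for abnormally disappearing / appearing bodies.

    disappeared: body id -> time it was added (was queuing, now unseen).
    appeared:    body id -> time it was added (queuing now, unseen before).
    """

    def __init__(self):
        self.disappeared = {}
        self.appeared = {}

    def body_disappeared(self, body_id, now):
        # Was queuing at the first time, not detected now.
        if body_id in self.appeared:
            # Its earlier "appearance" was likely a false head detection.
            del self.appeared[body_id]
        else:
            self.disappeared[body_id] = now

    def body_appeared(self, body_id, now):
        # Queuing now, not detected at the first time.
        if body_id in self.disappeared:
            # The head was simply missed earlier; the body is real.
            del self.disappeared[body_id]
        else:
            self.appeared[body_id] = now

    def expire(self, now, threshold=STORAGE_TIME_THRESHOLD):
        # Stale disappearances are likely phantom detections and are
        # dropped; stale appearances are accepted as real queuers.
        self.disappeared = {b: t for b, t in self.disappeared.items()
                            if now - t <= threshold}
        confirmed = [b for b, t in self.appeared.items()
                     if now - t > threshold]
        for b in confirmed:
            del self.appeared[b]
        return confirmed  # these bodies are set to the queuing state

tracker = AnomalyTracker()
tracker.body_appeared(7, now=0.0)
print(tracker.expire(now=11.0))  # body 7 is confirmed as queuing
```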
  • Step 205: Determine at least one queuing area currently included in the site according to the area occupied by each human body currently included in the site.
  • The weight corresponding to the area occupied by each human body included in the queuing set is increased, and the weight of every other area in the site is reduced; areas whose weight exceeds a preset weight threshold are then selected from the areas included in the site, and at least one connected domain is determined in the selected areas, each connected domain in the at least one connected domain being one queuing area.
  • The areas inside a queuing area are occupied by human bodies most of the time, so the weights of these areas keep accumulating and eventually exceed the preset weight threshold.
  • Human bodies queuing in the same line are usually close to each other, so the areas occupied by two adjacent human bodies in the same queuing area are connected; one connected domain can therefore be taken as one queuing area.
  • Because a queuing area is determined from the areas whose weight exceeds the preset weight threshold, there is no need to demarcate queuing lanes in the site in advance, which makes the application more flexible.
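The weight-accumulation and connected-domain procedure can be sketched as follows. The increment, decay, and threshold values are assumptions; the connected domains are found with a simple 4-connectivity flood fill over a grid discretization of the site.

```python
WEIGHT_INC, WEIGHT_DEC, WEIGHT_THRESHOLD = 2.0, 1.0, 5.0  # assumed values

def update_weights(weights, occupied):
    """weights: 2-D list over the site grid; occupied: set of (r, c)
    cells currently covered by human bodies in the queuing state."""
    for r, row in enumerate(weights):
        for c, w in enumerate(row):
            if (r, c) in occupied:
                row[c] = w + WEIGHT_INC          # accumulate
            else:
                row[c] = max(w - WEIGHT_DEC, 0.0)  # decay elsewhere

def queuing_areas(weights):
    """Return the connected domains (4-connectivity flood fill) of
    cells whose accumulated weight exceeds the threshold; each
    connected domain is one queuing area."""
    rows, cols = len(weights), len(weights[0])
    hot = {(r, c) for r in range(rows) for c in range(cols)
           if weights[r][c] > WEIGHT_THRESHOLD}
    areas = []
    while hot:
        stack = [hot.pop()]
        area = set()
        while stack:
            r, c = stack.pop()
            area.add((r, c))
            for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if n in hot:
                    hot.remove(n)
                    stack.append(n)
        areas.append(area)
    return areas

weights = [[0.0] * 6 for _ in range(6)]
line = {(2, c) for c in range(1, 5)}   # one queue of bodies
for _ in range(4):                     # four frames of accumulation
    update_weights(weights, line)
print(len(queuing_areas(weights)))     # one connected queuing area
```

The decay step is what lets a queuing area shrink again once a queue dissolves, without any pre-demarcated lanes.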
  • A human body queuing in the queuing area may occasionally take a few steps, which causes its current behavior state to be determined as the traveling state even though it is still essentially in the queuing state; this needs to be corrected.
  • The detailed process is as follows:
  • Determine a first human body whose current behavior state is the traveling state and whose behavior state at the first time was the queuing state, together with the queuing area where the first human body was located at the first time. If the current position of the first human body is located in the determined queuing area, change the behavior state of the first human body back to the queuing state and add it to the queuing set.
  • The head position and the tail position of a queuing area can also be determined. The implementation process can be: obtain, over time, the position corresponding to each human body currently in the queuing area, count the number of distinct human bodies corresponding to each obtained position, determine the position with the largest number of human bodies as the tail position of the queuing area, and determine the other end position of the queuing area as the head position.
  • A human body that cuts into a queue joins it from a position other than the tail. This step determines such a human body by the following method: determine a human body whose behavior state at the first time was the traveling state and whose current behavior state is the queuing state, and determine the queuing area where the current position of the human body is located; if the current position of the human body is not the tail position of that queuing area, the human body is determined to be a human body that cut into the queue.
  • A human body that abandons a queue leaves it from a position other than the head. This step determines such a human body by the following method: determine a human body whose behavior state at the first time was the queuing state and whose current behavior state is the traveling state, and determine the queuing area where the position of the human body at the first time was located; if the position of the human body at the first time is not the head position of that queuing area, the human body is determined to be a human body that left the queue.
  • A human body leaving from the head of a queue can also be determined: determine a human body whose behavior state at the first time was the queuing state and whose current behavior state is the traveling state, and determine the queuing area where the position of the human body at the first time was located; if the position of the human body at the first time is the head position of that queuing area, the human body is determined to be a human body that left from the head of the queue.
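The three transition cases above can be summarized in one classification function. This is an illustrative sketch: the `area_of` lookup, the head/tail dictionaries, and the assumption that positions are coarse enough to compare by equality are all inventions for the example, not part of the patent.

```python
def classify_transition(prev_state, curr_state, prev_pos, curr_pos,
                        area_of, head, tail):
    """Classify a body's queue transition between two observations.

    area_of(pos) -> queuing-area id containing pos, or None.
    head / tail: dicts mapping an area id to its head / tail position.
    """
    if prev_state == "traveling" and curr_state == "queuing":
        area = area_of(curr_pos)
        if area is not None and curr_pos != tail[area]:
            return "cut_in"          # joined somewhere other than the tail
        return "joined_at_tail"
    if prev_state == "queuing" and curr_state == "traveling":
        area = area_of(prev_pos)
        if area is None:
            return None
        if prev_pos == head[area]:
            return "left_from_head"  # served and departed
        return "abandoned"           # left from a non-head position
    return None

# Hypothetical one-area site: head at (0, 0), tail at (0, 3).
head, tail = {0: (0, 0)}, {0: (0, 3)}
area_of = lambda pos: 0 if pos is not None and pos[0] == 0 else None
print(classify_transition("queuing", "traveling", (0, 0), (5, 5),
                          area_of, head, tail))  # left_from_head
```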
  • Step 206: Obtain queuing information according to the behavior state of each human body currently included in the site.
  • The queuing information may include at least one of: the current total number of people queuing, the number of queuing areas, the number of people queuing in each queuing area, the head position, the tail position, the number of people who cut into a queue, the number of people who left a queue, or the average queuing time.
  • The number of queuing areas can be obtained by directly counting the determined queuing areas.
  • For the number of people queuing in each queuing area: the queuing area where each human body included in the queuing set is located may be determined in the current site; for each queuing area, count a first number of human bodies located in the queuing area, count the number of abnormally disappearing human bodies in the disappearing-body queue that are located in the queuing area, and calculate the sum of the first number and the number of abnormally disappearing human bodies to obtain the number of people queuing in the queuing area.
  • A human body may be judged in step 204 to be in the queuing state although, based on its position, it does not belong to any queuing area in the current site; the first number counted for a queuing area therefore does not include such a human body, which improves calculation accuracy.
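The per-area head count described above can be sketched as follows; the grid-cell representation of an area and the argument names are assumptions for the sketch.

```python
def people_in_area(area_cells, body_positions, disappeared_positions):
    """Number of people queuing in one queuing area.

    area_cells: set of grid cells belonging to the area.
    body_positions: positions of bodies currently in the queuing state.
    disappeared_positions: last known positions of the bodies held in
    the disappearing-body queue.
    """
    # First number: queuing bodies whose position lies inside the area.
    first_count = sum(1 for p in body_positions if p in area_cells)
    # Add abnormally disappeared bodies last seen inside the area.
    lost_count = sum(1 for p in disappeared_positions if p in area_cells)
    return first_count + lost_count

area = {(0, 0), (0, 1), (0, 2)}
queuing = [(0, 0), (0, 2), (4, 4)]       # (4, 4) is outside the area
disappeared = [(0, 1)]
print(people_in_area(area, queuing, disappeared))  # 3
```

Adding the disappearing-body queue compensates for momentary head-detection misses, so a briefly occluded person is still counted as queuing.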
  • For the average queuing time: first determine each human body that left from the head of a queue, obtain the time at which the human body first appeared in the queuing state and the time at which it last appeared in the queuing state, and calculate the queuing time of the human body from these two times; the average queuing time is then calculated from the queuing times of all human bodies that left from the head of a queue.
  • When times are measured in video frames, f represents the frame rate of the video captured by the camera.
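Using frame indices and the frame rate f mentioned above, the average queuing time can be sketched as follows (the pair-of-frame-indices representation is an assumption for the sketch):

```python
def average_queuing_time(departures, f):
    """Average queuing time of bodies that left from the head of a queue.

    departures: list of (first_queuing_frame, last_queuing_frame) pairs,
    the frame indices at which each body first and last appeared in the
    queuing state.  f: frame rate of the camera video, in frames/second.
    """
    if not departures:
        return 0.0
    # Each body's queuing time is its frame span divided by the rate.
    total = sum((last - first) / f for first, last in departures)
    return total / len(departures)

# Two bodies queued for 250 and 150 frames at 25 fps: 10 s and 6 s.
print(average_queuing_time([(100, 350), (40, 190)], f=25))  # 8.0
```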
  • In this embodiment, the behavior state of each human body included in the site may be determined according to the position and height of each pixel point, and queuing information is obtained according to the behavior state of each human body, so that a single camera set over the entire site is enough to obtain the queuing information, which reduces costs.
  • In addition, a queuing area is determined from the areas whose weight exceeds the preset weight threshold, so there is no need to demarcate queuing areas in the site in advance, which increases application flexibility. In the process of acquiring the queuing information, there is also no need to preset human body features, since each human body can be obtained simply by recognizing head images in the depth map; the computational complexity is therefore low compared with the related art.
  • an embodiment of the present application provides an apparatus 300 for acquiring queuing information, where the apparatus 300 includes:
  • a first acquiring module 301 configured to acquire the position and height, in a world coordinate system, of each pixel point included in a first video image, where the first video image is a video image obtained by a camera capturing a site used for queuing;
  • a first determining module 302 configured to determine, according to the location and height of each pixel point, a behavior state of each human body included in the site;
  • the second obtaining module 303 is configured to obtain queuing information according to the behavior state of each human body included in the current venue.
  • the first determining module 302 includes:
  • a first acquiring unit configured to acquire, according to the position and height of each pixel point, a position of each human body included in the current site
  • a first determining unit configured to determine, according to a location of each human body included in the site at a first time and a location of each human body included in the current site, a behavior state of each human body included in the current site, where the first time is The current time difference between the current time and the current time is a preset time threshold.
  • the first obtaining module 301 includes:
  • a second acquiring unit configured to acquire a first video image captured by the camera
  • a third acquiring unit configured to acquire, according to a coordinate of each pixel point included in the image coordinate system of the first video image, a position and a height of each pixel in a world coordinate system by using a preset conversion matrix .
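One plausible realization of the preset conversion matrix mentioned above is a planar homography calibrated between the image plane and the site's ground plane. This sketch and its matrix values are assumptions for illustration; the patent only requires that some preset conversion matrix maps image coordinates to world position and height.

```python
def apply_homography(H, u, v):
    """Map image pixel (u, v) to world ground-plane coordinates using a
    preset 3x3 conversion matrix H (row-major nested lists)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # perspective division

# Hypothetical calibration: roughly 0.01 m of ground per pixel.
H = [[0.01, 0.0, 0.0],
     [0.0, 0.01, 0.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, 320, 240))  # approximately (3.2, 2.4)
```

In practice such a matrix would be estimated once during camera installation from known ground-plane reference points.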
  • Optionally, the first acquiring unit is configured to: acquire a depth map corresponding to the first video image according to the height of each pixel point; recognize, in the depth map, the position, in the image coordinate system, of each head image included in the depth map; and acquire, through a preset conversion matrix, the current position in the site of the human body corresponding to each head image.
  • Optionally, the first determining unit is configured to: calculate the current moving speed of each human body included in the site; set the behavior state of a human body whose moving speed exceeds the preset speed threshold to the traveling state; and set the behavior state of a human body whose moving speed does not exceed the preset speed threshold to the queuing state.
  • the second obtaining module 303 includes:
  • a second determining unit configured to determine, according to a location of each human body included in the queuing set, a queuing area of each human body included in the queuing set in the current venue, where the queuing set includes a human body in a queuing state in the current venue ;
  • a fourth acquiring unit configured to acquire queuing information of the current venue according to the determined queuing area.
  • the second obtaining module 303 further includes:
  • a fifth acquiring unit configured to acquire, according to the position and height of each pixel point, an area occupied by each human body included in the current venue;
  • the device also includes:
  • a second determining module configured to determine at least one queuing area included in the current venue according to an area occupied by each human body included in the current venue.
  • the second determining module includes:
  • An adjusting unit configured to increase a weight corresponding to an area occupied by a human body in a queuing state in the field, and reduce weights of other areas;
  • a selecting unit configured to select an area from the area included in the site that has a weight exceeding a preset weight threshold
  • a second determining unit configured to determine, in the selected area, at least one connected domain, where each connected domain in the at least one connected domain is a queuing area.
  • the device further includes:
  • a modification module configured to determine a queuing area where the first human body and the first human body are located at a first time, where the first human body is in a traveling state and the behavior state in a first time is a queuing state; The current position of the first human body is located in the determined queuing area, and the behavior state of the first human body is modified to a queuing state and added to the queuing set.
  • the device further includes:
  • a third determining module configured to determine that the behavior state at the first time is a queuing state, and the human body that does not currently appear in the field is an abnormally disappearing human body and adds the abnormally disappearing human body to the disappearing human body queue;
  • the fourth acquiring unit is configured to count the first human body number of the human body located in the queuing area; and count the number of abnormally disappearing human bodies located in the queuing area in the disappearing human body queue; and calculate the first human body number The sum of the number of abnormally disappearing human bodies gives the number of people queued in the queuing area.
  • the device further includes:
  • a fourth determining module configured to determine that a human body whose current behavior state is the queuing state and that did not appear in the site at the first time is an abnormally appearing human body, and, if the disappearing-body queue includes the abnormally appearing human body, delete the abnormally appearing human body from the disappearing-body queue.
  • In this embodiment, the behavior state of each human body included in the site may be determined according to the position and height of each pixel point, and queuing information is obtained according to the behavior state of each human body, so that a single camera set over the entire site is enough to obtain the queuing information, which reduces costs.
  • an embodiment of the present application provides a terminal 400, where the terminal is configured to perform the foregoing method for acquiring queuing information, and at least includes a processor 401 having one or more processing cores.
  • the terminal 400 may include other components in addition to the processor 401.
  • For example, the terminal 400 may further include a transceiver 402, a memory 403 including one or more computer readable storage media, an input unit 404, a display unit 405, a sensor 406, an audio circuit 407, and a WiFi (wireless fidelity) module 408. Those skilled in the art can understand that the terminal structure shown in FIG. 4 does not constitute a limitation on the terminal 400, which may include more or fewer components than illustrated, combine some components, or use a different arrangement of components.
  • The transceiver 402 can be used for receiving and transmitting signals while receiving or transmitting information or during a call; in particular, after receiving downlink information from a base station, the transceiver 402 passes it to one or more processors 401 for processing, and sends uplink data to the base station.
  • the transceiver 402 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a SIM (Subscriber Identity Module) card, a transceiver, a coupler, and an LNA (Low Noise Amplifier, Low noise amplifier), duplexer, etc.
  • transceiver 402 can also communicate with the network and other devices via wireless communication.
  • the wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System of Mobile communication), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access). , Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), and the like.
  • the memory 403 can also be used to store software programs and modules, and the processor 401 can execute various functional applications and data processing by running software programs and modules stored in the memory 403.
  • the memory 403 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to The data created by the use of the terminal 400 (such as audio data, phone book, etc.) and the like.
  • The memory 403 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. Accordingly, the memory 403 may also include a memory controller to provide the processor 401 and the input unit 404 with access to the memory 403.
  • the input unit 404 can be configured to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function controls.
  • input unit 404 can include touch-sensitive surface 441 as well as other input devices 442.
  • The touch-sensitive surface 441, also referred to as a touch display or touch pad, can collect touch operations by the user on or near it (for example, operations performed by the user on or near the touch-sensitive surface 441 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connecting device according to a preset program.
  • the touch-sensitive surface 441 can include two portions of a touch detection device and a touch controller.
  • The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 401, and can receive commands from the processor 401 and execute them.
  • the touch sensitive surface 441 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 404 can also include other input devices 442.
  • other input devices 442 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • Display unit 405 can be used to display information entered by the user or information provided to the user and various graphical user interfaces of terminal 400, which can be constructed from graphics, text, icons, video, and any combination thereof.
  • the display unit 405 can include a display panel 451.
  • the display panel 451 can be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like.
  • The touch-sensitive surface 441 can cover the display panel 451. When the touch-sensitive surface 441 detects a touch operation on or near it, the operation is transmitted to the processor 401 to determine the type of the touch event, and the processor 401 then provides a corresponding visual output on the display panel 451 according to the type of the touch event.
  • Although in FIG. 4 the touch-sensitive surface 441 and the display panel 451 are implemented as two separate components to implement input and output functions, in some embodiments the touch-sensitive surface 441 can be integrated with the display panel 451 to implement both input and output functions.
  • Terminal 400 includes at least one type of sensor 406, such as a light sensor, motion sensor, and other sensors.
  • The light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor may adjust the brightness of the display panel 451 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 451 and/or the backlight when the terminal 400 is moved to the ear.
  • As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when the terminal is stationary, and can be used for applications that identify the attitude of the terminal 400 (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition related functions (such as a pedometer or tapping). Other sensors, such as gyroscopes, barometers, hygrometers, thermometers, and infrared sensors, may also be configured in the terminal 400 and are not described here.
  • the audio circuit 407 includes a speaker 471 and a microphone 472, and the speaker 471 and the microphone 472 can provide an audio interface between the user and the terminal 400.
  • The audio circuit 407 can convert received audio data into an electrical signal and transmit it to the speaker 471, which converts it into a sound signal for output. Conversely, the microphone 472 converts a collected sound signal into an electrical signal, which the audio circuit 407 receives and converts into audio data; after the audio data is processed by the processor 401, it is transmitted via the transceiver 402 to, for example, another terminal, or output to the memory 403 for further processing.
  • the audio circuit 407 may also include an earbud jack to provide communication of the peripheral earphones with the terminal 400.
  • WiFi is a short-range wireless transmission technology
  • the terminal 400 can help users to send and receive emails, browse web pages, and access streaming media through the WiFi module 408, which provides wireless broadband Internet access for users.
  • Although FIG. 4 shows the WiFi module 408, it can be understood that the module is not an essential part of the terminal 400 and may be omitted as needed without changing the essence of the application.
  • The processor 401 is the control center of the terminal 400. It connects the various parts of the entire terminal using various interfaces and lines, and performs the various functions of the terminal 400 and processes data by running or executing software programs and/or modules stored in the memory 403 and recalling data stored in the memory 403, thereby monitoring the terminal 400 as a whole.
  • the processor 401 can integrate an application processor and a modem processor, wherein the application processor mainly processes an operating system, a user interface, an application, and the like, and the modem processor mainly processes the wireless communication. It can be understood that the above modem processor may not be integrated into the processor 401.
  • the terminal 400 further includes a power source 409 (such as a battery) for supplying power to the various components.
  • Optionally, the power source 409 can be logically connected to the processor 401 through a power management system, so that functions such as charging, discharging, and power consumption management are handled through the power management system.
  • the power supply 409 may also include any one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
  • the terminal 400 may further include a camera, a Bluetooth module, and the like, and details are not described herein.
  • In this embodiment, the processor 401 of the terminal 400 may perform the method for obtaining queuing information provided by any of the foregoing embodiments.
  • a person skilled in the art may understand that all or part of the steps of implementing the above embodiments may be completed by hardware, or may be instructed by a program to execute related hardware, and the program may be stored in a computer readable storage medium.
  • the storage medium mentioned may be a read only memory, a magnetic disk or an optical disk or the like.


Abstract

The present application discloses a method and apparatus for acquiring queuing information, and a computer readable storage medium, belonging to the field of video surveillance. The method includes: acquiring the position and height, in a world coordinate system, of each pixel point included in a first video image, the first video image being a video image obtained by a camera capturing a site used for queuing; determining the behavior state of each human body included in the site according to the position and height of each pixel point; and acquiring queuing information according to the current behavior state of each human body included in the site. The apparatus includes: a first acquiring module, a first determining module, and a second acquiring module. The present application can reduce costs.

Description

Method and apparatus for acquiring queuing information, and computer readable storage medium
This application claims priority to Chinese Patent Application No. 201710130666.2, filed with the State Intellectual Property Office of China on March 7, 2017 and entitled "Method and apparatus for acquiring queuing information", which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of video surveillance, and in particular to a method and apparatus for acquiring queuing information, and a computer readable storage medium.
Background
At present, queues often form in scenes such as supermarket checkouts, scenic-spot ticket offices, and service halls. A manager can obtain queuing information, which may include the number of people queuing, and adjust the number of service windows or assess the working efficiency of service staff according to the queuing information.
Queuing information can currently be obtained as follows. First, a queuing area is demarcated, a first camera is set at the head of the queuing area, and a second camera is set at its tail. The first camera captures the queuing area from the head to obtain a first image, and the second camera captures the queuing area from the tail to obtain a second image. The human body images included in the first image and in the second image are recognized according to preset human body features, the human bodies that appear in both the first image and the second image are determined to be queuing human bodies, and queuing information is obtained from the queuing human bodies; for example, the queuing human bodies can be counted to obtain the number of people queuing.
In the process of implementing the present application, the inventors found that the related art has at least the following problems:
Two cameras currently need to be set for each queuing area in order to obtain the queuing information of that one queuing area. When there are multiple queuing areas, the number of cameras required multiplies, which increases costs.
Summary
In order to reduce costs, the present application provides a method and apparatus for acquiring queuing information, and a computer readable storage medium. The technical solutions are as follows:
In one aspect, a method for acquiring queuing information is provided, the method including:
acquiring the position and height, in a world coordinate system, of each pixel point included in a first video image, the first video image being a video image obtained by a camera capturing a site used for queuing;
determining the behavior state of each human body included in the site according to the position and height of each pixel point; and
acquiring queuing information according to the current behavior state of each human body included in the site.
Optionally, determining the behavior state of each human body included in the site according to the position and height of each pixel point includes:
acquiring the current position of each human body included in the site according to the position and height of each pixel point; and
determining the current behavior state of each human body included in the site according to the position of each human body included in the site at a first time and the current position of each human body included in the site, the first time being before the current time with a time difference from the current time equal to a preset time threshold.
Optionally, acquiring the position and height, in the world coordinate system, of each pixel point included in the first video image includes:
acquiring the first video image captured by the camera; and
acquiring the position and height of each pixel point in the world coordinate system through a preset conversion matrix, according to the coordinates, in an image coordinate system, of each pixel point included in the first video image.
Optionally, acquiring the current position of each human body included in the site according to the position and height of each pixel point includes:
acquiring a depth map corresponding to the first video image according to the height of each pixel point in the first video image;
recognizing, in the depth map, the position, in the image coordinate system, of each head image included in the depth map; and
acquiring, through a preset conversion matrix, the current position in the site of the human body corresponding to each head image, according to the position of each head image in the image coordinate system.
Optionally, acquiring the depth map corresponding to the first video image according to the height of each pixel point in the first video image includes:
replacing the pixel value of each pixel point in the first video image with the height of that pixel point to obtain the depth map; or
filling the height of each pixel point in the first video image into a blank image according to the position of each pixel point in the first video image to obtain the depth map.
Optionally, determining the current behavior state of each human body included in the site according to the position of each human body included in the site at the first time and the current position of each human body included in the site includes:
calculating the current moving speed of each human body included in the site according to the position of each human body included in the site at the first time, the current position of each human body included in the site, and the time interval between the first time and the current time; and
setting the behavior state of a human body whose moving speed exceeds a preset speed threshold to a traveling state, and setting the behavior state of a human body whose moving speed does not exceed the preset speed threshold to a queuing state.
Optionally, acquiring queuing information according to the current behavior state of each human body included in the site includes:
determining, in the current site, the queuing area where each human body included in a queuing set is located, according to the position of each human body included in the queuing set, the queuing set including the human bodies currently in the queuing state in the site; and
acquiring the queuing information of the current site according to the determined queuing areas.
Optionally, after acquiring the position and height, in the world coordinate system, of each pixel point included in the first video image, the method further includes:
acquiring the area currently occupied by each human body included in the site according to the position and height of each pixel point; and
before determining, in the current site, the queuing area where each human body in the queuing state is located, the method further includes:
determining at least one queuing area currently included in the site according to the area currently occupied by each human body included in the site.
Optionally, determining at least one queuing area currently included in the site according to the area currently occupied by each human body included in the site includes:
increasing, in the site, the weight corresponding to the area occupied by a human body in the queuing state, and reducing the weight of other areas;
selecting, from the areas included in the site, areas whose weight exceeds a preset weight threshold; and
determining at least one connected domain in the selected areas, each connected domain in the at least one connected domain being one queuing area.
Optionally, before determining, in the current site, the queuing area where each human body included in the queuing set is located, the method further includes:
determining a first human body and the queuing area where the first human body was located at the first time, the current behavior state of the first human body being the traveling state and its behavior state at the first time being the queuing state; and
if the current position of the first human body is located in the determined queuing area, modifying the behavior state of the first human body to the queuing state and adding the first human body to the queuing set.
Optionally, after determining the behavior state of each human body included in the site according to the position and height of each pixel point, the method further includes:
determining that a human body whose behavior state at the first time was the queuing state and that does not currently appear in the site is an abnormally disappearing human body, and adding the abnormally disappearing human body to a disappearing-body queue; and
acquiring the queuing information of the current site according to the determined queuing areas includes:
counting a first number of human bodies located in the queuing area;
counting the number of abnormally disappearing human bodies in the disappearing-body queue that are located in the queuing area; and
calculating the sum of the first number and the number of abnormally disappearing human bodies to obtain the number of people queuing in the queuing area.
Optionally, after determining the behavior state of each human body included in the site according to the position and height of each pixel point, the method further includes:
determining that a human body whose current behavior state is the queuing state and that did not appear in the site at the first time is an abnormally appearing human body; and
if the disappearing-body queue includes the abnormally appearing human body, deleting the abnormally appearing human body from the disappearing-body queue.
In another aspect, an apparatus for acquiring queuing information is provided, the apparatus including:
a first acquiring module configured to acquire the position and height, in a world coordinate system, of each pixel point included in a first video image, the first video image being a video image obtained by a camera capturing a site used for queuing;
a first determining module configured to determine the behavior state of each human body included in the site according to the position and height of each pixel point; and
a second acquiring module configured to acquire queuing information according to the current behavior state of each human body included in the site.
Optionally, the first determining module includes:
a first acquiring unit configured to acquire the current position of each human body included in the site according to the position and height of each pixel point; and
a first determining unit configured to determine the current behavior state of each human body included in the site according to the position of each human body included in the site at a first time and the current position of each human body included in the site, the first time being before the current time with a time difference from the current time equal to a preset time threshold.
Optionally, the first acquiring module includes:
a second acquiring unit configured to acquire the first video image captured by the camera; and
a third acquiring unit configured to acquire the position and height of each pixel point in the world coordinate system through a preset conversion matrix, according to the coordinates, in an image coordinate system, of each pixel point included in the first video image.
Optionally, the first acquiring unit is configured to:
acquire a depth map corresponding to the first video image according to the height of each pixel point in the first video image;
recognize, in the depth map, the position, in the image coordinate system, of each head image included in the depth map; and
acquire, through a preset conversion matrix, the current position in the site of the human body corresponding to each head image, according to the position of each head image in the image coordinate system.
Optionally, the first acquiring unit is configured to:
replace the pixel value of each pixel point in the first video image with the height of that pixel point to obtain the depth map; or
fill the height of each pixel point in the first video image into a blank image according to the position of each pixel point in the first video image to obtain the depth map.
Optionally, the first determining unit is configured to:
calculate the current moving speed of each human body included in the site according to the position of each human body included in the site at the first time, the current position of each human body included in the site, and the time interval between the first time and the current time; and
set the behavior state of a human body whose moving speed exceeds a preset speed threshold to a traveling state, and set the behavior state of a human body whose moving speed does not exceed the preset speed threshold to a queuing state.
Optionally, the second acquiring module includes:
a second determining unit configured to determine, in the current site, the queuing area where each human body included in a queuing set is located, according to the position of each human body included in the queuing set, the queuing set including the human bodies currently in the queuing state in the site; and
a fourth acquiring unit configured to acquire the queuing information of the current site according to the determined queuing areas.
Optionally, the second acquiring module further includes:
a fifth acquiring unit configured to acquire the area currently occupied by each human body included in the site according to the position and height of each pixel point; and
the apparatus further includes:
a second determining module configured to determine at least one queuing area currently included in the site according to the area currently occupied by each human body included in the site.
Optionally, the second determining module includes:
an adjusting unit configured to increase, in the site, the weight corresponding to the area occupied by a human body in the queuing state, and reduce the weight of other areas;
a selecting unit configured to select, from the areas included in the site, areas whose weight exceeds a preset weight threshold; and
a second determining unit configured to determine at least one connected domain in the selected areas, each connected domain in the at least one connected domain being one queuing area.
Optionally, the apparatus further includes:
a modification module configured to determine a first human body and the queuing area where the first human body was located at the first time, the current behavior state of the first human body being the traveling state and its behavior state at the first time being the queuing state; and, if the current position of the first human body is located in the determined queuing area, modify the behavior state of the first human body to the queuing state and add the first human body to the queuing set.
Optionally, the apparatus further includes:
a third determining module configured to determine that a human body whose behavior state at the first time was the queuing state and that does not currently appear in the site is an abnormally disappearing human body, and add the abnormally disappearing human body to a disappearing-body queue; and
the fourth acquiring unit is configured to count a first number of human bodies that are located in the queuing area and in the queuing state, count the number of abnormally disappearing human bodies in the disappearing-body queue that are located in the queuing area, and calculate the sum of the first number and the number of abnormally disappearing human bodies to obtain the number of people queuing in the queuing area.
Optionally, the apparatus further includes:
a fourth determining module configured to determine that a human body whose current behavior state is the queuing state and that did not appear in the site at the first time is an abnormally appearing human body, and, if the disappearing-body queue includes the abnormally appearing human body, delete the abnormally appearing human body from the disappearing-body queue.
另一方面,提供了一种获取排队信息的装置,所述装置包括:
至少一个处理器;和
至少一个存储器;
所述至少一个存储器存储有一个或多个程序,所述一个或多个程序被配置成由所述至少一个处理器执行,所述一个或多个程序包含用于进行如所述一方面或所述一方面中的任一可选的方法的指令。
另一方面、提供了一种非易失性计算机可读存储介质,用于存储计算机程序,所述计算机程序通过处理器进行加载来执行如所述一方面或所述一方面中的任一可选的方法的指令。
本申请提供的技术方案的有益效果是:
通过获取一台摄像机拍摄的视频图像包括的每个像素点在世界坐标系中的位置和高度,根据每个像素点的位置和高度可以确定该场地包括的每个人体的行为状态,根据每个人体的行为状态获取排队信息,这样可以在整个场地上设置一台摄像机就可以获取排队信息,减少了成本。
附图说明
图1-1是本申请实施例1提供的一种获取排队信息的方法流程图;
图1-2是本申请实施例1提供的另一种获取排队信息的方法流程图;
图2-1是本申请实施例2提供的一种获取排队信息的方法流程图;
图2-2是本申请实施例2提供的一种用于排队的场地示意图;
图3是本申请实施例3提供的一种获取排队信息的装置结构示意图;
图4是本申请实施例4提供的一种终端结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
目前在超市收银台、景区售票处、办事大厅等场地经常存在排队情况。在本申请中通过如下任一实施例获取该场地的排队信息,并将该排队信息发送至 管理者的终端,以使管理者根据该排队信息调整办事窗口的数目或考核办事员工的工作效率等。
实施例1
参见图1-1,本申请实施例提供了一种获取排队信息的方法,包括:
步骤101:获取第一视频图像包括的每个像素点在世界坐标系中的位置和高度,第一视频图像是摄像机对用于排队的场地进行拍摄得到的视频图像。
步骤102:根据每个像素点的位置和高度确定该场地包括的各人体的行为状态。
步骤103:根据当前该场地包括的各人体的行为状态获取排队信息。
对于上述图1-1所示的流程的实现方式有多种,例如参见图1-2,在本申请中提供了图1-1所示的流程的一种实现方式,可以为:
步骤1011:获取第一视频图像包括的每个像素点在世界坐标系中的位置和高度,第一视频图像是摄像机对用于排队的场地进行拍摄得到的视频图像。
具体地,获取摄像机拍摄的第一视频图像;根据第一视频图像包括的每个像素点在图像坐标系中的坐标,通过预设的转换矩阵获取每个像素点在世界坐标系中的位置和高度。
其中,该场地中可以包括一个或多个排队区域,即在该场地中可能有一个或多个排队队伍。摄像机可以对该场地中的全部排队区域进行拍摄,或对该场地中的部分排队区域进行拍摄。该部分排队区域可以包括一个排队区域或多个排队区域。相比目前需要在每个排队区域的队头和队尾各设置一个摄像机的方案,本申请实施例可以减小摄像机的数目,减小了成本。
步骤1021:根据每个像素点的位置和高度获取当前该场地包括的各人体的位置。
具体地,将第一视频图像中的每个像素点的像素值分别替换为每个像素点的高度得到深度图;在该深度图中识别出该深度图包括的各人体头部图像在图像坐标系中的位置;根据各人体头部图像在图像坐标系中的位置,通过预设的转换矩阵获取当前在该场地中各人体头部图像对应的人体的位置。
步骤1031:根据第一时间该场地包括的各人体的位置和当前该场地包括的各人体的位置确定当前该场地包括的各人体的行为状态,第一时间是在当前之前且距离当前的时间差为预设时间阈值。
具体地,根据第一时间该场地包括的各人体的位置、当前该场地包括的各人体的位置和第一时间与当前之间的时间间隔,计算当前该场地包括的各人体的移动速度;将移动速度超过预设速度阈值的人体的行为状态设置为行进状态,以及将移动速度未超过预设速度阈值的人体的行为状态设置为排队状态。
步骤1041:根据当前该场地包括的各人体的行为状态获取排队信息。
该排队信息可以包括当前排队总人数,排队区域数目,每个排队区域内的排队人数,队首位置、队尾位置、插队人数、离队人数或平均排队时长中的至少一者。
在本申请实施例中,通过获取一台摄像机拍摄的视频图像包括的每个像素点在世界坐标系中的位置和高度,根据每个像素点的位置和高度可以确定该场地包括的每个人体的行为状态,根据每个人体的行为状态获取排队信息,这样可以在整个场地上设置一台摄像机就可以获取排队信息,减少了成本。
实施例2
参见图2-1,本申请实施例提供了一种获取排队信息的方法,包括:
步骤201:获取摄像机当前对用于排队的场地进行拍摄得到的第一视频图像。
参见图2-2,事先可以在用于排队的场地的售票处或收银台等位置设置一个摄像机,可以将该摄像机垂直或倾斜安装在该场地的排队通道的上方,使该摄像机至少能拍摄到该场地包括的用于排队的排队区域,还可以拍摄到除排队区域以外的其他区域。即参见图2-2,该摄像机除了可以拍摄到位于排队区域的排队人体,还可以拍摄到位于排队区域外的行进人体。该摄像机可以是双目摄像机或深度摄像机等。
可选的,该场地中可以包括一个或多个排队区域,即在该场地中可能有一个或多个排队队伍。设置摄像机可以对该场地中的全部排队区域进行拍摄,或对该场地中的部分排队区域进行拍摄。该部分排队区域可以包括一个排队区域或多个排队区域。
所谓行进人体是指当前行走在场地中的非排队的人员。
步骤202:根据第一视频图像包括的每个像素点在图像坐标系中的坐标,通过预设的转换矩阵获取每个像素点在世界坐标系中的位置和高度。
图像坐标系是第一视频图像的坐标系,世界坐标系是该场地的坐标系。
第一视频图像的坐标系的坐标原点可以是第一视频图像中的某一点,横坐标轴的方向可以与第一视频图像的宽度方向相同,纵轴方向可以与第一视频图像的长度方向相同。
世界坐标系的坐标原点是摄像机在水平地面上的投影点,该世界坐标系的横坐标轴的方向可以是正东方向,纵坐标轴的方向可以是正向方向,竖坐标轴的方向可以与该场地垂直。
拍摄第一视频图像的摄像机有双目摄像机或深度摄像机。采用不同摄像机,本步骤的实现方式不同。接下来对采用不同摄像机所采用的实现方式进行分别说明,包括:
当采用双目摄像机时,预设的转换矩阵包括第一转换矩阵和第二转换矩阵,本步骤可以为:
首先根据第一视频图像包括的每个像素点在图像坐标系中的坐标,通过如下公式(1)计算第一视频图像包括的每个像素点在摄像机坐标系中的坐标。
Figure PCTCN2018078126-appb-000001
在公式(1)中x a、y a、z a分别为像素点在图像坐标系中的横坐标、纵坐标和双目视差,x b、y b、z b分别为像素点在摄像机坐标系中的横坐标、纵坐标和竖坐标,P为预设第一转换矩阵。
然后根据第一视频图像包括的每个像素点在摄像机坐标系中的坐标,通过如下公式(2)计算第一视频图像包括的每个像素点在世界坐标系中的位置和高度。
Figure PCTCN2018078126-appb-000002
在公式(2)中x c、y c、z c分别为像素点在世界坐标系中的横坐标、纵坐标和高度,x c、y c组成了像素点在世界坐标系中的位置,M为预设第二转换矩阵。
其中,双目摄像机对场地进行拍摄,可以得到第一视频图像和第二视频图像,第一视频图像中的每个像素点在第二视频图像中对应一个像素点。
可选的,根据第一视频图像和第二视频图像,可以事先计算出第一视频图像中的每个像素的双目视差。计算过程可以为:
从第一视频图像读取第一像素点的第一列号,第一像素点是第一视频图像中的任一像素点;从第二视频图像中读取第一像素点对应的第二像素点的第二列号,使用第二列号减去第一列号,得到差值,该差值为第一像素点的双目视差。可以按上述方式计算得到第一视频图像中的每个像素点的双目视差。
当采用深度摄像机时,预设的转换矩阵包括内参矩阵和旋转平移矩阵,本步骤可以为:
根据第一视频图像包括的每个像素点在图像坐标系中的坐标,通过如下公式(3)计算第一视频图像包括的每个像素点在世界坐标系中的位置和高度。
Figure PCTCN2018078126-appb-000003
在公式(3)中x a和y a分别为像素点在图像坐标系中的横坐标和纵坐标,x c、y c、z c分别为像素点在世界坐标系中的横坐标、纵坐标和高度,x c、y c组成了像素点在世界坐标系中的位置,C为内参矩阵,以及R为旋转平移矩阵。
其中,需要说明的是:第一视频图像中的每个像素点在场地中会对应一个物理点,所以第一视频图像中的每个像素点的位置实质是每个像素点对应的物理点在场地中对应的横坐标和纵坐标组成。
步骤203:根据第一视频图像中的每个像素点的位置和高度获取当前场地包括的各人体的位置和占用的区域。
人体的位置可以为人体的头部的某个点在场地中的位置,该位置由该点在世界坐标系中的横坐标和纵坐标组成。可选的,该点可以为人体的头部的中心点或任意一点等。
本步骤可以通过如下2031至2033的操作来完成,包括:
2031:根据第一视频图像中的每个像素点的高度获取第一视频图像对应的深度图。
可选的,将第一视频图像中的每个像素点的像素值分别替换为每个像素点的高度得到深度图。或者,
可选的,生成一个空白图像,该空白图像中的图像坐标系和第一视频图像 中的图像坐标系相同;从第一视频图像中读取第一像素点的横坐标和纵坐标,第一像素点为第一视频图像中的任一像素点,在该生成的图像的该横坐标和该纵坐标的位置处填写第一像素点的高度。
对第一视频图像中的每个像素点均执行上述操作后,生成的图像即为第一视频图像对应的深度图。
在本步骤中实质将每个像素点的高度分别作为每个像素点的深度值,并组成一幅深度图。
2032:在该深度图中识别出该深度图包括的各人体头部图像在图像坐标系中的位置和占用的区域。
人体的头部高度比人体其他部分高度要高,在深度图中表现为头部区域中的各像素点的深度值大于周围其他区域中的各像素点的深度图。基于这个特征,可以在深度图中提取深度值局部极大区域,该提取的局部极大区域即为人体头部图像在图像坐标系中的区域。
可以将该人体头部图像中的任意一点在图像坐标系中的横坐标和纵坐标组成该人体头部图像在图像坐标系中的位置;或者,可以将该人体头部图像中的中心点在图像坐标系中的横坐标和纵坐标组成该人体头部图像在图像坐标系中的位置。
2033:根据各人体头部图像在图像坐标系中的位置和区域,通过预设的转换矩阵获取当前在该场地中各人体头部图像对应的人体的位置和占用的区域。
当采用双目摄像机拍摄第一视频图像,则对于每个人体头部图像,根据该人体头部图像在图像坐标系中的位置,通过如下公式(4)确定在该场地中该人体的位置;其中,横坐标x c和纵坐标y c组成了在该场地中该人体的位置。
Figure PCTCN2018078126-appb-000004
在公式(4)中x c、y c分别为世界坐标系中的横坐标、纵坐标,x c、y c组成人体在世界坐标系中的位置,x a、y a分别为图像坐标系中的横坐标、纵坐标,x a、y a组成了人体头部图像在图像坐标系中的位置,x b、y b分别为摄像机坐标系中的横坐标、纵坐标。
可选的,可以根据人体头部图像中的每个像素点在图像坐标系中的位置,通过上述公式(4)确定每个像素点在该场地中的位置,从而得到该人体头部 在该场地中占用的区域;或者,根据人体头部图像的边缘包括的每个像素点在图像坐标系中的位置,通过上述公式(4)确定每个像素点的在该场地中的位置,从而得到该人体头部在该场地中占用的区域。
可以将人体头在该场地中占用的区域确定该人体占用的区域。或者,可以在该区域的左侧扩大预设大小的区域,以及在区域的右侧扩大预设大小的区域,将广大后的区域确定该人体占用的区域。
当采用深度摄像机拍摄第一视频图像,则对于每个人体头部图像,根据该人体头部图像在图像坐标系中的位置,通过上述公式(3)确定在该场地中该人体的位置;其中,通过上述公式(3)中计算得到的横坐标x c和纵坐标y c组成了在该场地中该人体的位置。
可选的,可以根据人体头部图像中的每个像素点在图像坐标系中的位置,通过上述公式(3)确定每个像素点在该场地中的位置,从而得到该人体头部在该场地中占用的区域;或者,根据人体头部图像的边缘包括的每个像素点在图像坐标系中的位置,通过上述公式(3)确定每个像素点的在该场地中的位置,从而得到该人体头部在该场地中占用的区域。
可以将人体头在该场地中占用的区域确定该人体占用的区域。或者,可以在该区域的左侧扩大预设大小的区域,以及在区域的右侧扩大预设大小的区域,将广大后的区域确定该人体占用的区域。
步骤204:根据第一时间该场地包括的各人体的位置和当前该场地包括的各人体的位置确定当前该场地包括的各人体的行为状态。
第一时间是在当前之前且距离当前的时间差为预设时间阈值。第一时间可以为视频图像的帧号或某个时间点。其中第一时间可以为第二视频图像的帧号或拍摄第二视频图像的时间。第二视频图像是在当前之前摄像机拍摄的视频图像,所述第一视频图像和所述第二视频图像间隔N帧视频图像,N为大于或等于0的整数。
在第一时间该场地包括的各人体的位置是在当前之前根据第二视频图像获取的,详细获取过程可以参见上述步骤202和203的操作。
本步骤可以为:根据第一时间该场地包括的各人体的位置、当前该场地包括的各人体的位置和第一时间与当前之间的时间间隔,计算当前该场地包括的各人体的移动速度;将移动速度超过预设速度阈值的人体的行为状态设置为行进状态,以及将移动速度未超过预设速度阈值的人体的行为状态设置为排队状 态,将处于排队状态的人体组成排队集合。
其中,排队状态的人体移动的速度通常较慢,而移动状态的人体移动的速度通常较快,第一时间与当前之间的时间间隔往往只有几秒钟或更短,例如该时间间隔可以为5秒、3秒、1秒或0.5秒等。所以在本步骤可以将移动速度超过预设速度阈值的人体的行为状态设置为行进状态,以及将移动速度未超过预设速度阈值的人体的行为状态设置为排队状态。
在排队时可能会存在人体从队伍中经过,通常情况下该人体的移动速度较快,超过预设速度阈值,所以通过第一时间该场地中的各人体的位置和当前该场地中的各人体的位置,计算得到各人体的移动速度,将移动速度超过预设速度阈值的人体的行为状态设置为行进状态,不作为排队状态的人体,可以避免将从队伍经过的人体的行为状态设置为排队状态,提高获取人体行为状态的精度。
在很少的情况下,可能有人站立在场地中,通常该人并不排在队伍中,即该人不在排队区域内,其移动速度可能始终为零,在本步骤中也可能被判断为排队状态。
其中,需要说明的是:由于在识别人体头部图像会出现误差,导致出现异常消失人体和异常出现人体。
所谓异常消失人体是指第一时间处于排队状态的人体并且在当前在该场地中未出现,这可能是在当前识别人体头部图像时未识别出该人体头部图像导致的。
所谓异常出现人体是指在当前处于排队状态的人体并且在第一时间在该场地未出现,这可能是在当前识别人体头部图像时将他人的肩部或其他部分误认为人体的头部图像导致的。
可选的,在本步骤还可以识别出异常消失的人体或异常出现的人体,实现过程可以为:
确定在第一时间的行为状态为排队状态并且在当前没有出现在该场地中的人体为异常消失人体;如果出现人体队列包括该异常消失人体,则从出现人体队列中删除该异常消失人体,否则,将该异常消失人体加入消失人体队列中。
其中,出现人体队列中包括在本步骤之前判断出的异常出现人体。如果出现人体队列中包括的人体在第一时间的行为状态为排队状态,表明在第一时间该人体被判断出为异常出现人体。
如果在当前该人体被判断为异常消失人体,表明在第一时间将他人的肩部或其他部分误认为该人体的人体头部图像所导致的,所在当前被判断出异常消失人体出现在出现人体队列时,则后续不需要对该异常消失人体进行进一步判断,可以直接从出现人体队列中删除该异常消失人体。
确定当前的行为状态为排队状态并且在第一时间没有出现在该场地中的人体为异常出现人体;如果该消失人体队列包括该异常出现人体,则将该异常出现人体从消失人体队列中删除,否则,将该异常出现人体加入出现人体队列中,并从排队集合中去除异常出现人体。
如果消失人体队列中包括的人体在当前被判断为异常出现人体,表明在当前之前可以识别人体头部图像时未识别出该人体的人体头部图像,导致该人体被判断为异常消失人体并被加入到消失人体队列中。
所以在当前当该人体被判断为异常出现人体,则后续不需要对消失人体队列中的该人体进行进一步判断,可以直接从消失人体队列中删除该异常出现人体。
另外,还可以监测消失人体队列中的每个异常消失人体在消失人体队列中的存储时间,将存储时间超过预设存储时间阈值的异常消失人体从消失人体队列中删除。存储时间超过预设存储时间阈值的异常消失人体可能就是不存在的人体,因而可以从消失人体队列中删除,不再对异常消失人体进行跟踪。
对于存储时间超过预设存储时间阈值的异常消失人体,表明该异常消息人体可能是不存在的人体,在本步骤之前的某次判断时可能将他人的肩部或其他部分误认为该人体的人体头部图像所导致的,所以可以直接从消失人体队列中删除。
另外,还可以监测出现人体队列中的每个异常出现人体在出现人体队列中的存储时间,将存储时间超过预设存储时间阈值的异常出现人体的行为状态设置为排队状态,并从出现人体队列中移动到排队集合中。
对于存储时间超过预设存储时间阈值的异常出现人体,表明该异常出现人体可能在本步骤之前的某次识别人体头部图像时未识别出该人体的人体头部图像导致的,在之后的每次识别人体头部图像时均可以被识别出,所以可以将存储时间超过预设存储时间阈值的异常出现人体的行为状态设置为排队状态。
步骤205:根据当前该场地包括的各人体占用的区域,确定当前该场地包括的至少一个排队区域。
具体地,在该场地中增加排队集合包括的各人体占用的区域对应的权重,以及减小该场地中除排队集合包括的各人体占用的区域以外的其他区域的权重;从该场地包括的区域中选择权重超过预设权重阈值的区域;在选择的区域中确定至少一个连通域,该至少一个连通域中的每个连通域为一个排队区域。
由于排队状态的人体移动的速度较慢,导致排队区域内包括的各区域总是有人体占用,导致排队区域内的各区域的权重会不断的累加,并最终超过预设权重阈值。排成一队的人体往往相互之间靠的较近,导致位于同一排队区域处于排队状态的相邻两个人体占用的区域连通,所以可以将一个连通域作为一个排队区域。
在本实施例中通过权重超过预设权重阈值的区域确定排队区域,这样不需要事先在场地中划分排队通道,应用更加灵活。
在排队区域内排队的人体可能发现走动现象,导致在当前确定该人体的行为状态为行进状态,但是该人体的行为状态实质上仍为排队状态,需要进行修正,详细过程如下:
确定第一人体和第一人体在第一时间所在的排队区域,第一人体在当前的行为状态为行进状态并且在第一时间的行为状态为排队状态;如果当前第一人体的位置位于确定的排队区域,则将第一人体的行为状态修改为排队状态并添加到排队集合中。
对于每个排队区域,还可以确定该排队区域的队首位置和队尾位置,实现过程可以为:
对于任一排队区域,获取当前位于该排队区域内的各人体首次出现排队状态时对应的位置,统计获取的每个位置对应的人体数目,将人体数目最大的位置确定为该排队区域的队尾位置,将该排队区域除队尾位置的另一端位置确定为队首位置。
在排队时还有可能存在插队人体,插队人体从非队尾位置入队,本步骤通过如下方式确定出插队人体,可以为:确定在第一时间的行为状态为行进状态以及在当前的行为状态为排队状态的人体,确定当前该人体的位置所在的排队区域,如果当前该人体的位置为该排队区域的非队尾位置,则确定该人体为插队人体。
在排队时还有可能存在离队人体,离队人体从非队首位置离队,本步骤通过如下方式确定出离队人体,可以为:确定在第一时间的行为状态为排队状态 以及在当前的行为状态为行进状态的人体,确定第一时间该人体的位置所在的排队区域,如果第一时间该人体的位置为该排队区域的非队首位置,则确定该人体为离队人体。
在排队时有的人体办理完业务后从队首离开,本步骤还可以通过如下方式确定出从队首离开的人体:确定在第一时间的行为状态为排队状态以及在当前的行为状态为行进状态的人体,确定第一时间该人体的位置所在的排队区域,如果第一时间该人体的位置为该排队区域的队首位置,则确定该人体为从队首离开的人体。
步骤206:根据当前该场地包括的各人体的行为状态获取排队信息。
排队信息可以包括当前排队总人数,排队区域数目,每个排队区域内的排队人数,队首位置、队尾位置、插队人数、离队人数或平均排队时长中的至少一者。
对于当前排队总人数,可以直接统计当前处于排队状态的人体数目得到当前排队总人数。
对于排队区域数目,可以直接统计确定的排队区域,得到排队区域数目。
对于每个排队区域内的排队人数,可以根据排队集合包括的各人体的位置,在当前该场地中确定排队集合包括的各人体所在的排队区域;对于每个排队区域,统计位于该排队区域内的人体的第一人体数目,统计消失队列中位于该排队区域内的异常消失人体数目,计算第一人体数目与异常消失人体数目之和得到该排队区域的排队人数。
其中,在一些很少的情况下,可能有人站立在场地中且该人不在排队区域内,由于其移动速度可能始终为零,在步骤204中也可能被判断为排队状态的人体。在此处根据该人体的位置,在当前该场地中确定该人体不属于任何一个排队区域,所以计算出的该排队区域内的人体的第一人体数目中不包括该人体,可以提高计算的精度。
为了提高计算出的当前排队总人数精度,也可以对各个排队区域内的排队人数进行累加得到提成队总人数。
对于插队人数,可以直接统计插队人体的数目,得到插队人数。
对于离队人数,可以直接统计离队人体的数目,得到离队人数。
对于平均排队时长,先确定每个从队首离开的人体,获取该人体在第一次出现排队状态的时间和最后一次出现排队状态的时间,根据第一次出现排队状 态的时间和最后一次出现排队状态的时间计算该人体的排队时间;根据每个从队首离开的人体的排队时间计算出平均排队时长。
其中,如果该时间为帧号,则假设第一次出现排队状态的时间为帧号F1,以及最后一次出现排队状态的时间为帧号F2,这样该人体的排队时间
Figure PCTCN2018078126-appb-000005
秒,f表示摄像机拍摄视频的帧率。
在本申请实施例中,通过获取一台摄像机拍摄的视频图像包括的每个像素点在世界坐标系中的位置和高度,根据每个像素点的位置和高度可以确定该场地包括的每个人体的行为状态,根据每个人体的行为状态获取排队信息,这样可以在整个场地上设置一台摄像机就可以获取排队信息,减少了成本。另外本实施例,通过权重超过预设权重阈值的区域确定排队区域,这样不需要事先在场地中划定排队区域,增加了应用灵活性;在获取排队信息的过程中,不需要要预设的人体特征,只是从深度图中识别人体头部图像就可以得到场地包括的人体,相比相关技术运算复杂度简单。
实施例3
参见图3,本申请实施例提供了一种获取排队信息的装置300,所述装置300包括:
第一获取模块301,用于获取第一视频图像包括的每个像素点在世界坐标系中的位置和高度,所述第一视频图像是摄像机对用于排队的场地进行拍摄得到的视频图像;
第一确定模块302,用于根据所述每个像素点的位置和高度确定所述场地包括的各人体的行为状态;
第二获取模块303,用于根据当前所述场地包括的各人体的行为状态获取排队信息。
可选的,所述第一确定模块302包括:
第一获取单元,用于根据所述每个像素点的位置和高度获取当前所述场地包括的各人体的位置;
第一确定单元,用于根据第一时间所述场地包括的各人体的位置和当前所述场地包括的各人体的位置确定当前所述场地包括的各人体的行为状态,所述第一时间是在当前之前且距离当前的时间差为预设时间阈值。
可选的,所述第一获取模块301包括:
第二获取单元,用于获取摄像机拍摄的第一视频图像;
第三获取单元,用于根据所述第一视频图像包括的每个像素点在图像坐标系中的坐标,通过预设的转换矩阵获取所述每个像素点在世界坐标系中的位置和高度。
可选的,所述第一获取单元,用于:
根据所述第一视频图像中的每个像素点的高度获取所述第一视频图像对应的深度图;
在所述深度图中识别出所述深度图包括的各人体头部图像在图像坐标系中的位置;
根据所述各人体头部图像在图像坐标系中的位置,通过预设的转换矩阵获取当前在所述场地中各人体头部图像对应的人体的位置。
可选的,所述第一获取单元,用于:
将所述第一视频图像中的每个像素点的像素值分别替换为所述每个像素点的高度得到深度图;或者,
根据所述第一视频图像中的每个像素点的位置,将所述第一视频图像中的每个像素点的高度填写在空白的图像中得到深度图。
可选的,所述第一确定单元,用于:
根据第一时间所述场地包括的各人体的位置、当前所述场地包括的各人体的位置和所述第一时间与当前之间的时间间隔,计算当前所述场地包括的各人体的移动速度;
将移动速度超过预设速度阈值的人体的行为状态设置为行进状态,以及将移动速度未超过预设速度阈值的人体的行为状态设置为排队状态。
可选的,所述第二获取模块303包括:
第二确定单元,用于根据排队集合包括的各人体的位置,在当前所述场地中确定排队集合包括的各人体所在的排队区域,所述排队集合包括当前所述场地中处于排队状态的人体;
第四获取单元,用于根据所述确定的排队区域获取当前所述场地的排队信息。
可选的,所述第二获取模块303还包括:
第五获取单元,用于根据所述每个像素点的位置和高度获取当前所述场地 包括的各人体占用的区域;
所述装置还包括:
第二确定模块,用于根据当前所述场地包括的各人体占用的区域,确定当前所述场地包括的至少一个排队区域。
可选的,所述第二确定模块包括:
调整单元,用于在所述场地中增加处于排队状态的人体占用的区域对应的权重,以及减小其他区域的权重;
选择单元,用于从所述场地包括的区域中选择权重超过预设权重阈值的区域;
第二确定单元,用于在所述选择的区域中确定至少一个连通域,所述至少一个连通域中的每个连通域为一个排队区域。
可选的,所述装置还包括:
修改模块,用于确定第一人体和所述第一人体在第一时间所在的排队区域,所述第一人体在当前的行为状态为行进状态并且在第一时间的行为状态为排队状态;如果当前所述第一人体的位置位于所述确定的排队区域,则将所述第一人体的行为状态修改为排队状态并添加到所述排队集合中。
可选的,所述装置还包括:
第三确定模块,用于确定在第一时间的行为状态为排队状态并且在当前没有出现在所述场地中的人体为异常消失人体并将所述异常消失人体添加到消失人体队列中;
所述第四获取单元,用于统计位于所述排队区域内的人体的第一人体数目;统计所述消失人体队列中位于所述排队区域内的异常消失人体数目;计算所述第一人体数目与所述异常消失人体数目之和得到所述排队区域的排队人数。
可选的,所述装置还包括:
第四确定模块,用于确定当前的行为状态为排队状态并且在第一时间没有出现在所述场地中的人体为异常出现人体;如果所述消失人体队列包括所述异常出现人体,则将所述异常出现人体从消失人体队列中删除。
在本申请实施例中,通过获取一台摄像机拍摄的视频图像包括的每个像素点在世界坐标系中的位置和高度,根据每个像素点的位置和高度可以确定该场地包括的每个人体的行为状态,根据每个人体的行为状态获取排队信息,这样 可以在整个场地上设置一台摄像机就可以获取排队信息,减少了成本。
实施例4
参见图4,本申请实施例提供了一种终端400,该终端用于执行上述获取排队信息的方法,至少包括有一个或者一个以上处理核心的处理器401。
需要说明的是:终端400除了包括处理器401外,还可以包括其他部件,例如,终端400还可以包括收发器402、存储器403、输入单元404、显示单元405、传感器406、音频电路407和WiFi(wireless fidelity,无线保真)模块408等部件,存储器403包括有一个或一个以上计算机可读存储介质。需要强调说明的是:本领域技术人员可以理解,图4中示出的UE结构并不构成对终端400的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
以及,收发器402还可用于在收发信息或通话过程中进行信号的接收和发送,特别地,将基站的下行信息接收后,交由一个或者一个以上处理器401处理;另外,将涉及上行的数据发送给基站。通常,收发器402包括但不限于天线、至少一个放大器、调谐器、一个或多个振荡器、SIM(Subscriber Identity Module,客户识别模块)卡、收发信机、耦合器、LNA(Low Noise Amplifier,低噪声放大器)、双工器等。此外,收发器402还可以通过无线通信与网络和其他设备通信。所述无线通信可以使用任一通信标准或协议,包括但不限于GSM(Global System of Mobile communication,全球移动通讯系统)、GPRS(General Packet Radio Service,通用分组无线服务)、CDMA(Code Division Multiple Access,码分多址)、WCDMA(Wideband Code Division Multiple Access,宽带码分多址)、LTE(Long Term Evolution,长期演进)、电子邮件、SMS(Short Messaging Service,短消息服务)等。
存储器403还可用于存储软件程序以及模块,处理器401可以通过运行存储在存储器403的软件程序以及模块,从而执行各种功能应用以及数据处理。存储器403可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据终端400的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器403可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存 储器件。相应地,存储器403还可以包括存储器控制器,以提供处理器401和输入单元404对存储器403的访问。
输入单元404可用于接收输入的数字或字符信息,以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入。具体地,输入单元404可包括触敏表面441以及其他输入设备442。触敏表面441,也称为触摸显示屏或者触控板,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触敏表面441上或在触敏表面441附近的操作),并根据预先设定的程式驱动相应的连接装置。可选的,触敏表面441可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器401,并能接收处理器401发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触敏表面441。除了触敏表面441,输入单元404还可以包括其他输入设备442。具体地,其他输入设备442可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
显示单元405可用于显示由用户输入的信息或提供给用户的信息以及终端400的各种图形用户接口,这些图形用户接口可以由图形、文本、图标、视频和其任意组合来构成。显示单元405可包括显示面板451,可选的,可以采用LCD(Liquid Crystal Display,液晶显示器)、OLED(Organic Light-Emitting Diode,有机发光二极管)等形式来配置显示面板451。进一步的,触敏表面441可覆盖显示面板451,当触敏表面441检测到在其上或附近的触摸操作后,传送给处理器401以确定触摸事件的类型,随后处理器401根据触摸事件的类型在显示面板451上提供相应的视觉输出。虽然在图4中,触敏表面441与显示面板451是作为两个独立的部件来实现输入和输入功能,但是在某些实施例中,可以将触敏表面441与显示面板451集成而实现输入和输出功能。
终端400包括至少一种传感器406,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板451的亮度,接近传感器可在终端400移动到耳边时,关闭显示面板451和/或背光。作为运动传感器的一种,重力加速度传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可 检测出重力的大小及方向,可用于识别终端400姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于终端400还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
音频电路407包括扬声器471和传声器472,扬声器471和传声器472可提供用户与终端400之间的音频接口。音频电路407可将接收到的音频数据转换后的电信号,传输到扬声器471,由扬声器471转换为声音信号输出;另一方面,传声器472将收集的声音信号转换为电信号,由音频电路407接收后转换为音频数据,再将音频数据输出处理器401处理后,经收发器402以发送给比如另一UE,或者将音频数据输出至存储器403以便进一步处理。音频电路407还可能包括耳塞插孔,以提供外设耳机与终端400的通信。
WiFi属于短距离无线传输技术,终端400通过WiFi模块408可以帮助用户收发电子邮件、浏览网页和访问流式媒体等,它为用户提供了无线的宽带互联网访问。虽然图4示出了WiFi模块408,但是可以理解的是,其并不属于终端400的必须构成,完全可以根据需要在不改变申请的本质的范围内而省略。
处理器401是终端400的控制中心,利用各种接口和线路连接整个UE的各个部分,通过运行或执行存储在存储器403内的软件程序和/或模块,以及调用存储在存储器403内的数据,执行终端400的各种功能和处理数据,从而对终端400进行整体监控。可选的,处理器401可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器401中。
终端400还包括给各个部件供电的电源409(比如电池),优选的,电源409可以通过电源管理系统与处理器401逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。电源409还可以包括一个或一个以上的直流或交流电源、再充电系统、电源故障检测电路、电源转换器或者逆变器、电源状态指示器等任意组件。
尽管未示出,终端400还可以包括摄像头、蓝牙模块等,在此不再赘述。具体在本实施例中,终端400的处理器201可以执行上述任一实施例提供的获取排队信息的方法。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述仅为本申请的较佳实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (26)

  1. 一种获取排队信息的方法,其特征在于,所述方法包括:
    获取第一视频图像包括的每个像素点在世界坐标系中的位置和高度,所述第一视频图像是摄像机对用于排队的场地进行拍摄得到的视频图像;
    根据所述每个像素点的位置和高度确定所述场地包括的各人体的行为状态;
    根据当前所述场地包括的各人体的行为状态获取排队信息。
  2. 如权利要求1所述的方法,其特征在于,所述根据所述每个像素点的位置和高度确定所述场地包括的各人体的行为状态,包括:
    根据所述每个像素点的位置和高度获取当前所述场地包括的各人体的位置;
    根据第一时间所述场地包括的各人体的位置和当前所述场地包括的各人体的位置,确定当前所述场地包括的各人体的行为状态,所述第一时间是在当前之前且距离当前的时间差为预设时间阈值。
  3. 如权利要求1所述的方法,其特征在于,所述获取第一视频图像包括的每个像素点在世界坐标系中的位置和高度,包括:
    获取摄像机拍摄的第一视频图像;
    根据所述第一视频图像包括的每个像素点在图像坐标系中的坐标,通过预设的转换矩阵获取所述每个像素点在世界坐标系中的位置和高度。
  4. 如权利要求2所述的方法,其特征在于,所述根据所述每个像素点的位置和高度获取当前所述场地包括的各人体的位置,包括:
    根据所述第一视频图像中的每个像素点的高度获取所述第一视频图像对应的深度图;
    在所述深度图中识别出所述深度图包括的各人体头部图像在图像坐标系中的位置;
    根据所述各人体头部图像在图像坐标系中的位置,通过预设的转换矩阵获取当前在所述场地中各人体头部图像对应的人体的位置。
  5. 如权利要求4所述的方法,其特征在于,所述根据所述第一视频图像中的每个像素点的高度获取所述第一视频图像对应的深度图,包括:
    将所述第一视频图像中的每个像素点的像素值分别替换为所述每个像素点的高度得到深度图;或者,
    根据所述第一视频图像中的每个像素点的位置,将所述第一视频图像中的每个像素点的高度填写在空白的图像中得到深度图。
  6. 如权利要求2所述的方法,其特征在于,所述根据第一时间所述场地包括的各人体的位置和当前所述场地包括的各人体的位置确定当前所述场地包括的各人体的行为状态,包括:
    根据第一时间所述场地包括的各人体的位置、当前所述场地包括的各人体的位置和所述第一时间与当前之间的时间间隔,计算当前所述场地包括的各人体的移动速度;
    将移动速度超过预设速度阈值的人体的行为状态设置为行进状态,以及将移动速度未超过预设速度阈值的人体的行为状态设置为排队状态。
  7. 如权利要求1至6任一项所述的方法,其特征在于,所述根据当前所述场地包括的各人体的行为状态获取排队信息,包括:
    根据排队集合包括的各人体的位置,在当前所述场地中确定所述排队集合包括的各人体所在的排队区域,所述排队集合包括当前所述场地中处于排队状态的人体;
    根据所述确定的排队区域获取当前所述场地的排队信息。
  8. 如权利要求7所述的方法,其特征在于,所述获取第一视频图像包括的每个像素点在世界坐标系中的位置和高度之后,还包括:
    根据所述每个像素点的位置和高度获取当前所述场地包括的各人体占用的区域;
    所述在当前所述场地中确定排队状态的所述各人体所在的排队区域之前,还包括:
    根据当前所述场地包括的各人体占用的区域,确定当前所述场地包括的至 少一个排队区域。
  9. 如权利要求8所述的方法,其特征在于,所述根据当前所述场地包括的各人体占用的区域,确定当前所述场地包括的至少一个排队区域,包括:
    在所述场地中增加处于排队状态的人体占用的区域对应的权重,以及减小其他区域的权重;
    从所述场地包括的区域中选择权重超过预设权重阈值的区域;
    在所述选择的区域中确定至少一个连通域,所述至少一个连通域中的每个连通域为一个排队区域。
  10. 如权利要求7所述的方法,其特征在于,所述在当前所述场地中确定所述排队集合包括的各人体所在的排队区域之前,还包括:
    确定第一人体和所述第一人体在第一时间所在的排队区域,所述第一人体在当前的行为状态为行进状态并且在第一时间的行为状态为排队状态;
    如果当前所述第一人体的位置位于所述确定的排队区域,则将所述第一人体的行为状态修改为排队状态并添加到所述排队集合中。
  11. 如权利要求7所述的方法,其特征在于,所述根据所述每个像素点的位置和高度确定所述场地包括的各人体的行为状态之后,还包括:
    确定在第一时间的行为状态为排队状态并且在当前没有出现在所述场地中的人体为异常消失人体并将所述异常消失人体添加到消失人体队列中;
    所述根据所述确定的排队区域获取当前所述场地的排队信息,包括:
    统计位于所述排队区域内的人体的第一人体数目;
    统计所述消失人体队列中位于所述排队区域内的异常消失人体数目;
    计算所述第一人体数目与所述异常消失人体数目之和得到所述排队区域的排队人数。
  12. 如权利要求11所述的方法,其特征在于,所述根据所述每个像素点的位置和高度确定所述场地包括的各人体的行为状态之后,还包括:
    确定当前的行为状态为排队状态并且在第一时间没有出现在所述场地中的人体为异常出现人体;
    如果所述消失人体队列包括所述异常出现人体,则将所述异常出现人体从所述消失人体队列中删除。
  13. 一种获取排队信息的装置,其特征在于,所述装置包括:
    第一获取模块,用于获取第一视频图像包括的每个像素点在世界坐标系中的位置和高度,所述第一视频图像是摄像机对用于排队的场地进行拍摄得到的视频图像;
    第一确定模块,用于根据所述每个像素点的位置和高度确定所述场地包括的各人体的行为状态;
    第二获取模块,用于根据当前所述场地包括的各人体的行为状态获取排队信息。
  14. 如权利要求13所述的装置,其特征在于,所述第一确定模块包括:
    第一获取单元,用于根据所述每个像素点的位置和高度获取当前所述场地包括的各人体的位置;
    第一确定单元,用于根据第一时间所述场地包括的各人体的位置和当前所述场地包括的各人体的位置确定当前所述场地包括的各人体的行为状态,所述第一时间是在当前之前且距离当前的时间差为预设时间阈值。
  15. 如权利要求13述的装置,其特征在于,所述第一获取模块包括:
    第二获取单元,用于获取摄像机拍摄的第一视频图像;
    第三获取单元,用于根据所述第一视频图像包括的每个像素点在图像坐标系中的坐标,通过预设的转换矩阵获取所述每个像素点在世界坐标系中的位置和高度。
  16. 如权利要求14所述的装置,其特征在于,所述第一获取单元,用于:
    根据所述第一视频图像中的每个像素点的高度获取所述第一视频图像对应的深度图;
    在所述深度图中识别出所述深度图包括的各人体头部图像在图像坐标系中的位置;
    根据所述各人体头部图像在图像坐标系中的位置,通过预设的转换矩阵获 取当前在所述场地中各人体头部图像对应的人体的位置。
  17. 如权利要求16所述的装置,其特征在于,所述第一获取单元,用于:
    将所述第一视频图像中的每个像素点的像素值分别替换为所述每个像素点的高度得到深度图;或者,
    根据所述第一视频图像中的每个像素点的位置,将所述第一视频图像中的每个像素点的高度填写在空白的图像中得到深度图。
  18. 如权利要求14所述的装置,其特征在于,所述第一确定单元,用于:
    根据第一时间所述场地包括的各人体的位置、当前所述场地包括的各人体的位置和所述第一时间与当前之间的时间间隔,计算当前所述场地包括的各人体的移动速度;
    将移动速度超过预设速度阈值的人体的行为状态设置为行进状态,以及将移动速度未超过预设速度阈值的人体的行为状态设置为排队状态。
  19. 如权利要求13至18任一项所述的装置,其特征在于,所述第二获取模块包括:
    第二确定单元,用于根据排队集合包括的各人体的位置,在当前所述场地中确定所述排队集合包括的各人体所在的排队区域,所述排队集合包括当前所述场地中处于排队状态的人体;
    第四获取单元,用于根据所述确定的排队区域获取当前所述场地的排队信息。
  20. 如权利要求19所述的装置,其特征在于,所述第二获取模块还包括:
    第五获取单元,用于根据所述每个像素点的位置和高度获取当前所述场地包括的各人体占用的区域;
    所述装置还包括:
    第二确定模块,用于根据当前所述场地包括的各人体占用的区域,确定当前所述场地包括的至少一个排队区域。
  21. 如权利要求20所述的装置,其特征在于,所述第二确定模块包括:
    调整单元,用于在所述场地中增加处于排队状态的人体占用的区域对应的权重,以及减小其他区域的权重;
    选择单元,用于从所述场地包括的区域中选择权重超过预设权重阈值的区域;
    第二确定单元,用于在所述选择的区域中确定至少一个连通域,所述至少一个连通域中的每个连通域为一个排队区域。
  22. 如权利要求19所述的装置,其特征在于,所述装置还包括:
    修改模块,用于确定第一人体和所述第一人体在第一时间所在的排队区域,所述第一人体在当前的行为状态为行进状态并且在第一时间的行为状态为排队状态;如果当前所述第一人体的位置位于所述确定的排队区域,则将所述第一人体的行为状态修改为排队状态并添加到所述排队集合中。
  23. 如权利要求19所述的装置,其特征在于,所述装置还包括:
    第三确定模块,用于确定在第一时间的行为状态为排队状态并且在当前没有出现在所述场地中的人体为异常消失人体并将所述异常消失人体添加到消失人体队列中;
    所述第四获取单元,用于统计位于所述排队区域内的人体的第一人体数目;统计所述消失人体队列中位于所述排队区域内的异常消失人体数目;计算所述第一人体数目与所述异常消失人体数目之和得到所述排队区域的排队人数。
  24. 如权利要求23所述的装置,其特征在于,所述装置还包括:
    第四确定模块,用于确定当前的行为状态为排队状态并且在第一时间没有出现在所述场地中的人体为异常出现人体;如果所述消失人体队列包括所述异常出现人体,则将所述异常出现人体从所述消失人体队列中删除。
  25. 一种获取排队信息的装置,其特征在于,所述装置包括:
    至少一个处理器;和
    至少一个存储器;
    所述至少一个存储器存储有一个或多个程序,所述一个或多个程序被配置成由所述至少一个处理器执行,所述一个或多个程序包含用于进行如权利要求1 至12任一项权利要求所述的方法的指令。
  26. 一种非易失性计算机可读存储介质,其特征在于,用于存储计算机程序,所述计算机程序通过处理器进行加载来执行如权利要求1至12任一项权利要求所述的方法的指令。
PCT/CN2018/078126 2017-03-07 2018-03-06 一种获取排队信息的方法、装置及计算机可读存储介质 WO2018161889A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18763346.6A EP3594848A4 (en) 2017-03-07 2018-03-06 METHOD FOR DETECTING QUEUE INFORMATION, DEVICE AND COMPUTER READABLE STORAGE MEDIUM
US16/563,632 US11158035B2 (en) 2017-03-07 2019-09-06 Method and apparatus for acquiring queuing information, and computer-readable storage medium thereof

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710130666.2A CN108573189B (zh) 2017-03-07 2017-03-07 一种获取排队信息的方法及装置
CN201710130666.2 2017-03-07

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/563,632 Continuation US11158035B2 (en) 2017-03-07 2019-09-06 Method and apparatus for acquiring queuing information, and computer-readable storage medium thereof

Publications (1)

Publication Number Publication Date
WO2018161889A1 true WO2018161889A1 (zh) 2018-09-13

Family

ID=63447251

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/078126 WO2018161889A1 (zh) 2017-03-07 2018-03-06 一种获取排队信息的方法、装置及计算机可读存储介质

Country Status (4)

Country Link
US (1) US11158035B2 (zh)
EP (1) EP3594848A4 (zh)
CN (1) CN108573189B (zh)
WO (1) WO2018161889A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109326237A (zh) * 2018-09-30 2019-02-12 广西田东泰如鲜品农副产品配送有限公司 一种芒果庄园的导览系统
CN109584380A (zh) * 2018-11-29 2019-04-05 南宁思飞电子科技有限公司 一种地铁自助购票机防止插队抢票得智能处理系统及其处理方法
CN109584379A (zh) * 2018-11-29 2019-04-05 南宁思飞电子科技有限公司 一种基于重量与人数控制自主售票机插队买票系统及其控制方法
CN109840982B (zh) * 2019-01-02 2020-07-14 京东方科技集团股份有限公司 排队推荐方法及装置、计算机可读存储介质
CN110099253A (zh) * 2019-05-20 2019-08-06 浙江大华技术股份有限公司 告警操作的执行方法、显示设备
JP7487901B1 (ja) 2023-01-24 2024-05-21 Necプラットフォームズ株式会社 行列情報提供システム、方法、及びプログラム

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070253595A1 (en) * 2006-04-18 2007-11-01 Sorensen Associates Inc Still Image Queue Analysis System and Method
US20090034846A1 (en) * 2007-07-30 2009-02-05 Senior Andrew W High density queue estimation and line management
JP2009193412A (ja) * 2008-02-15 2009-08-27 Tokyo Univ Of Science 動的窓口設定装置及び動的窓口設定方法
CN103903445A (zh) * 2014-04-22 2014-07-02 北京邮电大学 一种基于视频的车辆排队长度检测方法与系统
CN103916641A (zh) * 2014-04-22 2014-07-09 浙江宇视科技有限公司 一种队列长度计算方法以及装置
CN103942773A (zh) * 2013-01-21 2014-07-23 浙江大华技术股份有限公司 一种通过图像分析获取排队长度的方法及装置
CN104994360A (zh) * 2015-08-03 2015-10-21 北京旷视科技有限公司 视频监控方法和视频监控系统

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3727798B2 (ja) * 1999-02-09 2005-12-14 株式会社東芝 画像監視システム
CN102867414B (zh) * 2012-08-18 2014-12-10 湖南大学 一种基于ptz摄像机快速标定的车辆排队长度测量方法
CN102855760B (zh) * 2012-09-27 2014-08-20 中山大学 基于浮动车数据的在线排队长度检测方法
CN104215237A (zh) * 2013-06-05 2014-12-17 北京掌尚无限信息技术有限公司 室内地图导航线路指引算法
US20160191865A1 (en) * 2014-12-30 2016-06-30 Nice-Systems Ltd. System and method for estimating an expected waiting time for a person entering a queue

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070253595A1 (en) * 2006-04-18 2007-11-01 Sorensen Associates Inc Still Image Queue Analysis System and Method
US20090034846A1 (en) * 2007-07-30 2009-02-05 Senior Andrew W High density queue estimation and line management
JP2009193412A (ja) * 2008-02-15 2009-08-27 Tokyo Univ Of Science 動的窓口設定装置及び動的窓口設定方法
CN103942773A (zh) * 2013-01-21 2014-07-23 浙江大华技术股份有限公司 一种通过图像分析获取排队长度的方法及装置
CN103903445A (zh) * 2014-04-22 2014-07-02 北京邮电大学 一种基于视频的车辆排队长度检测方法与系统
CN103916641A (zh) * 2014-04-22 2014-07-09 浙江宇视科技有限公司 一种队列长度计算方法以及装置
CN104994360A (zh) * 2015-08-03 2015-10-21 北京旷视科技有限公司 视频监控方法和视频监控系统

Also Published As

Publication number Publication date
CN108573189A (zh) 2018-09-25
EP3594848A4 (en) 2020-03-25
US20190392572A1 (en) 2019-12-26
EP3594848A1 (en) 2020-01-15
CN108573189B (zh) 2020-01-10
US11158035B2 (en) 2021-10-26

Similar Documents

Publication Publication Date Title
WO2018161889A1 (zh) 一种获取排队信息的方法、装置及计算机可读存储介质
CN108513070B (zh) 一种图像处理方法、移动终端及计算机可读存储介质
KR101951135B1 (ko) 비디오 이미지를 스케일링하는 방법 및 이동 단말
WO2018103525A1 (zh) 人脸关键点跟踪方法和装置、存储介质
WO2021057267A1 (zh) 图像处理方法及终端设备
CN109743504B (zh) 一种辅助拍照方法、移动终端和存储介质
US9948856B2 (en) Method and apparatus for adjusting a photo-taking direction, mobile terminal
CN111263071B (zh) 一种拍摄方法及电子设备
CN108038825B (zh) 一种图像处理方法及移动终端
WO2017084295A1 (zh) 自动对焦方法及装置
CN109195213B (zh) 移动终端屏幕控制方法、移动终端及计算机可读存储介质
CN110180181B (zh) 精彩时刻视频的截图方法、装置及计算机可读存储介质
CN110099217A (zh) 一种基于tof技术的图像拍摄方法、移动终端及计算机可读存储介质
CN110275794B (zh) 终端的屏幕显示方法、终端以及计算机可读存储介质
CN109889695A (zh) 一种图像区域确定方法、终端及计算机可读存储介质
WO2019137535A1 (zh) 物距测量方法及终端设备
CN107277364B (zh) 一种拍摄方法、移动终端及计算机可读存储介质
CN108243489B (zh) 一种拍照控制方法及移动终端
CN112866685A (zh) 投屏延时测量方法、移动终端及计算机可读存储介质
CN108063894B (zh) 一种视频处理方法及移动终端
CN112532838B (zh) 一种图像处理方法、移动终端以及计算机存储介质
CN109510941A (zh) 一种拍摄处理方法、设备及计算机可读存储介质
CN112822548B (zh) 一种投屏显示方法及装置、移动终端、存储介质
CN112015508B (zh) 一种投屏交互控制方法、设备及计算机可读存储介质
CN108537836A (zh) 一种深度数据获取方法及移动终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18763346

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018763346

Country of ref document: EP

Effective date: 20191007