CN112116556A - Passenger flow volume statistical method and device and computer equipment


Info

Publication number
CN112116556A
Authority
CN
China
Prior art keywords
passenger flow
image frame
frame sequence
state
human body
Prior art date
Legal status
Granted
Application number
CN202010805614.2A
Other languages
Chinese (zh)
Other versions
CN112116556B (en)
Inventor
潘思伟 (Pan Siwei)
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010805614.2A priority Critical patent/CN112116556B/en
Publication of CN112116556A publication Critical patent/CN112116556A/en
Application granted granted Critical
Publication of CN112116556B publication Critical patent/CN112116556B/en
Current legal status: Active

Classifications

    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06V 20/52, 20/53: Surveillance or monitoring of activities; recognition of crowd images, e.g. recognition of crowd congestion
    • G06T 2207/10016: Image acquisition modality; video, image sequence
    • G06T 2207/20081: Special algorithmic details; training, learning
    • G06T 2207/30196: Subject of image; human being, person
    • G06T 2207/30242: Subject of image; counting objects in image
    • G06V 2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a passenger flow volume statistical method, a passenger flow volume statistical device and computer equipment. The method acquires a passenger flow image frame sequence within a preset time and, when the passenger flow type is the passenger flow of a specific crowd, inputs the passenger flow image frame sequence into a detection model; the detection model detects the target human body in the image frame sequence and outputs the position and the state of the target human body, where the state includes at least one of a generation state, a deletion state and an update state. Geometric calculation is then performed according to the position and the state of the target human body to determine the number of people in an area, or the number of people entering and leaving an area, in the passenger flow image frame sequence within the preset time. Compared with the prior art, this solves the problem in the related art that passenger flow statistics for a specific crowd based on target detection and target tracking have low accuracy, and improves the passenger flow volume statistical accuracy for the specific crowd.

Description

Passenger flow volume statistical method and device and computer equipment
Technical Field
The present application relates to the field of image analysis technologies, and in particular, to a passenger flow volume statistical method, an apparatus, and a computer device.
Background
Passenger flow volume statistics refers to counting, in real time, the number of people entering and exiting each entrance of a business area, or the number of people within the business area, by installing passenger flow statistics equipment in the business area; it is a common application of image analysis.
In the related art, a passenger flow volume statistical method is applied to a scene requiring passenger flow statistics: images of the scene are collected, target detection and target tracking are performed on the passenger flow, and geometric calculation is carried out on the tracking results, so as to obtain the passenger flow statistical information for the images.
At present, no effective solution has been proposed for the problem in the related art that passenger flow volume statistics for specific crowds, based on target detection and target tracking, have low accuracy.
Disclosure of Invention
The embodiments of the present application provide a passenger flow volume statistical method, a passenger flow volume statistical device and computer equipment, so as to at least solve the problem in the related art that the passenger flow volume statistical accuracy is low when a passenger flow statistical method based on target detection and target tracking is applied to a specific crowd.
In a first aspect, an embodiment of the present application provides a passenger flow volume statistics method, where the method includes:
acquiring a passenger flow image frame sequence in preset time;
under the condition that the passenger flow type is the passenger flow of a specific crowd, inputting the passenger flow image frame sequence into a detection model, and detecting a target human body in the image frame sequence by the detection model so as to output the position and the state of the target human body in the image frame sequence; wherein the state comprises at least one of: generating a state, deleting the state and updating the state;
and performing geometric calculation according to the position and the state of the target human body to determine the number of people in the region or the number of people in and out of the region in the passenger flow image frame sequence within the preset time.
In some of these embodiments, the method of training the detection model comprises:
establishing a detection model, wherein the detection model uses a full convolution network and adopts a residual error structure;
acquiring a plurality of specific passenger flow image frame sequences from a passenger flow image database, and labeling the position and the state of a target human body in the specific passenger flow image frame sequences, wherein the specific passenger flow image frame sequences and the labels are divided into groups;
and training a detection model by using the specific passenger flow image frame sequence and the label in the same group.
In some embodiments, the training of the detection model using the particular passenger flow image frame sequence and the label in the same group comprises:
clustering the specific passenger flow image frame sequence according to a clustering algorithm to obtain a prior detection box of the specific passenger flow image frame sequence;
determining a training matrix of the detection model according to the specific passenger flow image frame sequence and a prior detection frame corresponding to the specific passenger flow image frame sequence;
and taking the labels in the same group with the specific passenger flow image frame sequence as a correction matrix of the detection model, and training the detection model according to the correction matrix and the training matrix.
In some embodiments, after acquiring the passenger flow image frame sequence within the preset time, the method further includes:
acquiring the passenger flow type of the passenger flow image frame sequence;
and under the condition that the passenger flow type is the conventional passenger flow, performing target detection on the passenger flow image frame sequence to generate a target detection result, and performing target tracking according to the target detection result to determine the position and the state of the target human body in the image frame sequence.
In some embodiments, the performing geometric computation according to the position and the state of the target human body to determine the number of people in the region in the passenger flow image frame sequence within the preset time includes:
the area in which the number of people needs to be counted is determined,
and determining a geometric relationship according to the position of the target human body and the region, and determining the number of regional people in the region according to the geometric relationship and the state.
In some embodiments, the performing geometric computation according to the position and state of the target human body to determine the number of people coming in or going out of the area in the passenger flow image frame sequence within a preset time includes:
determining an access area and an access tripwire; the access area is an area in which access of the target human body needs to be counted, and the access tripwire is a reference line used for counting whether the access action is triggered by the target human body;
under the condition that the target human body leaves the access area, acquiring a position connecting line of the target human body entering the access area and leaving the access area, acquiring a geometric relationship between the position connecting line and the trip line, and determining the number of people in and out of the area according to the geometric relationship.
In a second aspect, an embodiment of the present application provides a passenger flow volume statistics apparatus, where the apparatus includes: the device comprises an image acquisition module, a first detection module and a statistic module;
the image acquisition module is used for acquiring a passenger flow image frame sequence in preset time;
the first detection module is used for inputting the passenger flow image frame sequence into a detection model under the condition that the passenger flow type is the passenger flow of a specific crowd, and the detection model detects a target human body in the image frame sequence so as to output the position and the state of the target human body in the image frame sequence; wherein the state comprises at least one of: generating a state, deleting the state and updating the state;
and the counting module is used for performing geometric calculation according to the position and the state of the target human body so as to determine the number of people in the region or the number of people in and out of the region in the passenger flow image frame sequence within the preset time.
In a third aspect, an embodiment of the present application provides a passenger flow volume statistics system, where the system includes: a camera and a central processing unit;
the camera is used for acquiring a passenger flow image frame sequence in preset time;
the central processing unit is used for inputting the passenger flow image frame sequence into a detection model under the condition that the passenger flow type is the passenger flow of a specific crowd, the detection model detects a target human body in the image frame sequence to output the position and the state of the target human body in the image frame sequence, and the central processing unit is further used for performing geometric calculation according to the position and the state of the target human body to determine the number of people in an area or the number of people in and out of the area in the passenger flow image frame sequence; wherein the state comprises at least one of: generating state, deleting state, and updating state.
In a fourth aspect, an embodiment of the present application provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor, when executing the computer program, implements the passenger flow volume statistics method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, where the program, when executed by a processor, implements the passenger flow volume statistics method as described in the first aspect above.
Compared with the related art, the passenger flow volume statistical method provided by the embodiments of the present application acquires a passenger flow image frame sequence within a preset time and, when the passenger flow type is the passenger flow of a specific crowd, inputs the passenger flow image frame sequence into a detection model; the detection model detects the target human body in the image frame sequence and outputs the position and the state of the target human body, where the state includes at least one of a generation state, a deletion state and an update state. Geometric calculation is then performed according to the position and the state of the target human body to determine the number of people in an area, or the number of people entering and leaving an area, in the passenger flow image frame sequence within the preset time. This solves the problem of low statistical accuracy when passenger flow statistics for a specific crowd rely on target detection and target tracking, and improves the passenger flow volume statistical accuracy for the specific crowd.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a first flowchart of a passenger flow statistics method according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of training a detection model in a passenger flow statistics method according to an embodiment of the present application;
FIG. 3 is another flow chart of a method of training a detection model in a passenger flow statistics method according to an embodiment of the present application;
FIG. 4 is a second flowchart of a passenger flow volume statistical method according to an embodiment of the present application;
FIG. 5 is a third flowchart of a passenger flow volume statistical method according to an embodiment of the present application;
FIG. 6 is a fourth flowchart of a passenger flow volume statistical method according to an embodiment of the present application;
FIG. 7 is a block diagram of a passenger flow statistics apparatus according to an embodiment of the present application;
FIG. 8 is another block diagram of a passenger flow statistics apparatus according to an embodiment of the present application;
FIG. 9 is a block diagram of a passenger flow statistics system according to an embodiment of the present application;
FIG. 10 is another block diagram of a passenger flow statistics system according to an embodiment of the present application;
fig. 11 is a hardware configuration diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar words in this application do not denote a limitation of quantity and may refer to the singular or the plural. The terms "including," "comprising," "having," and any variations thereof used in this application are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. References to "connected," "coupled," and the like in this application are not limited to physical or mechanical connections and may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "And/or" describes an association relationship between associated objects and means that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. The terms "first," "second," "third," and the like are used herein merely to distinguish similar objects and do not denote a particular ordering of the objects.
The passenger flow volume statistical method is applied to passenger flow statistics of specific crowds, such as the employees of a company. A camera device is in communication connection with a server. The camera device acquires a passenger flow image frame sequence of the specific crowd in a certain area within a preset time; the server inputs the passenger flow image frame sequence into a detection model; the detection model detects the target human body in the image frame sequence to output the position and state of the target human body; and the server performs geometric calculation according to the position and state of the target human body to determine the number of people in the area, or the number of people entering and leaving the area, in the passenger flow image frame sequence within the preset time. It should be noted that the server may be integrated with the camera device or may be independent of it, and that the server may be implemented by an independent server or by a server cluster formed of a plurality of servers.
The present embodiment provides a passenger flow volume statistical method, and fig. 1 is a first flowchart of a passenger flow volume statistical method according to an embodiment of the present application, and as shown in fig. 1, the method includes the following steps:
and step S101, acquiring a passenger flow image frame sequence in preset time. It should be noted that the preset time may be a time period required for counting the passenger flow in a certain time period, for example, a company needs to count the number of people in a certain area from 8 am to 9 am, so the preset time is from 8 am to 9 am.
Step S102, under the condition that the passenger flow type is the passenger flow of a specific crowd, inputting the passenger flow image frame sequence into a detection model, and detecting a target human body in the image frame sequence by the detection model so as to output the position and the state of the target human body in the image frame sequence; wherein the status includes at least one of: generating state, deleting state, and updating state.
It should be noted that, first, the passenger flow of a specific crowd may be, for example, the employees of a certain company, who can be recognized because they wear work badges; it may also be the students of a campus, who can be recognized because they carry student IDs or wear uniform clothing. In general, the passenger flow of a specific crowd can be understood as a group whose worn accessories or clothing distinguish it from conventional passenger flow. The target human body refers to an individual of the passenger flow in the passenger flow image frame sequence: whereas the passenger flow of a specific crowd is a collective term for the group, the target human body refers to an independent individual within that crowd. Secondly, the generation state of a target human body in a certain picture frame can be understood as the state in which the target human body appears for the first time; the deletion state of a target human body in a certain picture frame can be understood as the state in which the target human body disappears in that picture frame; and the update state of a target human body in a certain picture frame can be understood as the state in which the target human body is an already existing target.
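By way of illustration only, the per-frame output described above could be represented by a data structure such as the following minimal sketch in Python; the field names and types are assumptions for illustration and are not specified by the patent.

```python
# Minimal sketch (illustrative only) of a per-frame detection output: the
# position of a target human body plus one of the three states described above.
from dataclasses import dataclass
from enum import Enum


class TargetState(Enum):
    GENERATION = "generation"  # the target human body appears for the first time
    UPDATE = "update"          # the target human body is an already existing target
    DELETION = "deletion"      # the target human body disappears in this picture frame


@dataclass
class TargetObservation:
    target_id: int     # identifier of the target human body across the frame sequence
    frame_index: int   # index of the picture frame within the preset time window
    x: float           # center x coordinate of the position box, in pixels
    y: float           # center y coordinate of the position box, in pixels
    width: float       # width of the position box
    height: float      # height of the position box
    state: TargetState
```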
It should be further noted that the image capturing device obtains the passenger flow image frame sequence within the preset time and, when the passenger flow image frame sequence belongs to a specific crowd, may send it to a server independent of the image capturing device. That server detects the passenger flow image frame sequence to determine the position and state of the target human body in the image frame sequence and sends the result back to the image capturing device, which effectively reduces the processing time and memory consumed on the image capturing device.
Step S103, performing geometric calculation according to the position and state of the target human body to determine the number of people in an area, or the number of people entering and leaving an area, in the passenger flow image frame sequence within the preset time. For example, the number of people in an area within the preset time may be the number of people in a campus restaurant between 8 a.m. and 9 a.m., and the number of people entering and leaving an area within the preset time may be the number of people passing through a company entrance area between 8 a.m. and 9 a.m.
Through steps S101 to S103, when the current passenger flow is the passenger flow of a specific crowd, the image frame sequence of a certain period of time is detected by a detection model trained in advance to obtain the position and state of the target human body in the image frame sequence. Compared with obtaining the position and state of the target human body for a specific crowd through target detection followed by target tracking, this is more accurate, requires shorter processing time, and occupies less memory.
In some embodiments, fig. 2 is a flowchart of a method for training a detection model in a passenger flow volume statistical method according to an embodiment of the present application, and as shown in fig. 2, the method includes the following steps:
step S201, a detection model is established, wherein the detection model uses a full convolution network and adopts a residual error structure. It should be noted that the detection model established in this step is untrained, and the detection model may use head and shoulder detection or face detection, wherein the detection model employs a full convolution network, so that the feature layer of the model is deeper, that is, more feature information is extracted; as the depth of the full convolution network is increased, the detection model has the capability of extracting features or shows the phenomenon of gradient disappearance, and the adoption of a residual structure can solve the phenomena of gradient disappearance and gradient explosion while deepening the depth of the detection model network.
Step S202, a plurality of specific passenger flow image frame sequences are obtained from a passenger flow image database, the position and state of the target human body in each specific passenger flow image frame sequence are labeled, and the specific passenger flow image frame sequences and the labels are divided into groups. The plurality of specific passenger flow image frame sequences in the passenger flow image database can be understood as the training material of the detection model; for example, if the method is to count the employees of a certain company, the detection model is trained with a plurality of specific passenger flow image frame sequences related to those employees, and the detection model may, of course, also be trained with the specific passenger flow image frame sequences of a plurality of employee models.
And step S203, training the detection model by using the specific passenger flow image frame sequence and the label of the same group.
Through steps S201 to S203, the detection model is trained with the specific passenger flow image frame sequences corresponding to the user crowd of the passenger flow statistical method, so that the trained detection model has higher accuracy when detecting that user crowd.
In some embodiments, fig. 3 is another flowchart of a method for training a detection model in a passenger flow statistics method according to an embodiment of the present application, and as shown in fig. 3, the method for training the detection model by using a specific passenger flow image frame sequence and labels of the same group includes the following steps:
step S301, clustering the specific passenger flow image frame sequence according to a clustering algorithm to obtain a prior detection box of the specific passenger flow image frame sequence; the Clustering Algorithm (also called as K-Means Algorithm) is a Clustering analysis Algorithm for iterative solution, and the Clustering Algorithm is used for Clustering a specific passenger flow graph image frame sequence to obtain a priori detection box of the specific passenger flow graph image frame sequence, and can also be understood as that the priori detection box is a help computer, the width and the height of a position box are determined for the passenger flow graph image frame sequence, and the determined width and height can be used for processing under the condition that a detection model is used for detecting.
Step S302, determining a training matrix of the detection model according to the specific passenger flow image frame sequence and the prior detection frame corresponding to the specific passenger flow image frame sequence.
Step S303, the labels in the same group as the specific passenger flow image frame sequences are used as the correction matrix of the detection model, and the detection model is trained according to the correction matrix and the training matrix. For example, if the specific crowd to be detected is the employees of a company, the plurality of specific passenger flow image frame sequences related to those employees in the image database are used to build the training matrices, and the labels corresponding to those sequences are used as the correction matrices, so as to determine the parameters of the detection model for detecting the company's employees.
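The training step itself is not detailed in the patent; the following is a hedged sketch of a generic training loop in which the input built from the frame sequences and their prior detection boxes plays the role of the training matrix and the same-group labels play the role of the correction matrix. The optimizer and the loss function are placeholders chosen for illustration only.

```python
import torch


def train_detection_model(model, data_loader, epochs: int = 10):
    """data_loader yields (training_matrix, correction_matrix) pairs, i.e. the
    model input built from frames plus prior boxes, and the matching labels."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = torch.nn.MSELoss()  # placeholder; a real detector would use a detection loss
    model.train()
    for _ in range(epochs):
        for training_matrix, correction_matrix in data_loader:
            optimizer.zero_grad()
            loss = criterion(model(training_matrix), correction_matrix)
            loss.backward()
            optimizer.step()
    return model
```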
Through steps S301 to S303, the specific passenger flow image frame sequences are clustered according to the clustering algorithm to obtain their prior detection boxes, and the detection model is trained according to the training matrix and the correction matrix of the corresponding crowd, which accelerates network convergence and improves the detection accuracy of the detection model.
In some embodiments, fig. 4 is a second flowchart of a passenger flow volume statistical method according to an embodiment of the present application; as shown in fig. 4, the method further includes the following steps:
step S401, obtaining the passenger flow type of the passenger flow image frame sequence; and under the condition that the passenger flow type is conventional passenger flow, performing target detection on the passenger flow image frame sequence to generate a target detection result, and performing target tracking according to the target detection result to determine the position and the state of the target human body in the image frame sequence.
It should be noted that, before the passenger flow type of the passenger flow image frame sequence to be counted is obtained, the passenger flow type may be determined manually, or it may be determined by machine, for example by identifying whether the passenger flow wears a specific accessory or specific clothing.
Through step S401, when the current passenger flow type is a specific crowd, the detection model is used to obtain the position and state, which gives high precision; when the current passenger flow type is conventional passenger flow, conventional target detection and target tracking are used to obtain the position and state. This widens the range of scenarios in which the method can be used while keeping high-precision detection for specific crowds.
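A small routing sketch of this branching is shown below; the function and parameter names are illustrative assumptions, and the detector and tracker callables stand in for whichever conventional components are used.

```python
def analyze_passenger_flow(frames, flow_type, detection_model, detector, tracker):
    """Return per-frame positions and states of target human bodies."""
    if flow_type == "specific_crowd":
        # the trained detection model outputs positions and states directly
        return detection_model(frames)
    # conventional passenger flow: target detection first, then target tracking
    detections = [detector(frame) for frame in frames]
    return tracker(detections)
```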
In some embodiments, fig. 5 is a flowchart three of a passenger flow volume statistical method according to an embodiment of the present application, and as shown in fig. 5, the method for performing geometric calculation to determine the number of regional people in a passenger flow image frame sequence within a preset time according to the position and state of a target human body includes the following steps:
in step S501, an area in which the number of people needs to be counted is determined. The area in which the number of people needs to be counted can be set as the size of the area, and the area can also be understood as an area of interest, and the area of interest can be represented by the number of polygonal points and the position coordinates of each point.
Step S502, a geometric relationship is determined according to the position of the target human body and the area, and the number of people in the area is determined according to the geometric relationship and the state. It should be noted that the geometric relationship between the position of the target human body and the area can be determined from the coordinates of the center point of the target human body's position and the vertex coordinates of the region of interest; for example, whether the center point of the target human body's position lies inside the polygon determines whether the target human body is located in the region of interest, and the number of people in the area is obtained by counting how many target human body positions fall inside the region of interest.
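A minimal sketch of this geometric check follows: a standard ray-casting test decides whether the center point of a target human body lies inside the polygonal region of interest, and the number of people in the region is the number of center points falling inside it. The counting rule shown is an illustration; the patent does not prescribe a particular point-in-polygon algorithm.

```python
from typing import List, Tuple

Point = Tuple[float, float]


def point_in_polygon(point: Point, polygon: List[Point]) -> bool:
    """Ray-casting test: cast a horizontal ray from the point and count how many
    polygon edges it crosses; an odd count means the point is inside."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # the edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


def count_people_in_region(centers: List[Point], region: List[Point]) -> int:
    """centers: center points of the target human bodies' position boxes."""
    return sum(point_in_polygon(c, region) for c in centers)


# Example: a rectangular region of interest and two target center points.
region = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
print(count_people_in_region([(5.0, 5.0), (12.0, 3.0)], region))  # 1
```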
Through steps S501 to S502, whether the target human body is located in the area is determined according to the geometric relationship between the position of the target human body and the area, and the number of people in the area is then obtained by counting the target human bodies located inside it.
In some embodiments, fig. 6 is a fourth flowchart of a passenger flow volume statistical method according to an embodiment of the present application; as shown in fig. 6, performing geometric calculation according to the position and state of the target human body to determine the number of people entering and leaving an area in the passenger flow image frame sequence within a preset time includes the following steps:
step S601, determining an access area and an access tripwire; the access area is an area where the access of a target human body needs to be counted, and the access tripwire is a reference line used for counting whether the access action is triggered by the target human body. When the number of people coming in and going out of the region is counted, the coming in and going out region is set as the region of interest in the same way.
Step S602, when the target human body leaves the access area, acquiring a connecting line of positions of the target human body entering the access area and leaving the access area, acquiring a geometric relationship between the position connecting line and the tripwire, and determining the number of people entering and leaving the area according to the geometric relationship.
Similarly, the region of interest is represented by the number of polygon vertices and the position coordinates of each vertex, so the positions at which the target human body enters and leaves the access area can be determined by comparing the coordinates of the center point of the target human body's position with the vertex coordinates. If a target human body leaves the access area in the current picture frame, it is judged whether the line connecting the center-point coordinates of the positions where the target human body entered and left the access area intersects the access tripwire; if an intersection point exists, it is judged that the target human body has left the access area, and if no intersection point exists, it is judged that the target human body has not left the access area.
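The tripwire check can be illustrated with a standard segment-intersection test, sketched below: the line connecting the positions where the target human body entered and left the access area is tested against the access tripwire, and an intersection counts as one entry/exit event. The orientation-based test is a common technique assumed here, not a procedure quoted from the patent; touching and collinear cases are ignored for simplicity.

```python
from typing import Tuple

Point = Tuple[float, float]


def _orientation(a: Point, b: Point, c: Point) -> float:
    """Signed area of triangle (a, b, c); its sign gives the turn direction."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])


def crosses_tripwire(entry: Point, exit_: Point, wire_a: Point, wire_b: Point) -> bool:
    """True if the entry-to-exit connecting line crosses the access tripwire."""
    d1 = _orientation(wire_a, wire_b, entry)
    d2 = _orientation(wire_a, wire_b, exit_)
    d3 = _orientation(entry, exit_, wire_a)
    d4 = _orientation(entry, exit_, wire_b)
    return (d1 * d2 < 0) and (d3 * d4 < 0)


# Example: a target entered at (2, 0) and left at (2, 6); the tripwire spans (0, 3)-(5, 3).
print(crosses_tripwire((2.0, 0.0), (2.0, 6.0), (0.0, 3.0), (5.0, 3.0)))  # True -> one crossing
```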
Through steps S601 to S602, an access tripwire is set, and whether the target human body has left the access area is determined according to the access tripwire, which makes it easy to count the entries and exits of the access area for each picture frame in the passenger flow frame sequence.
It should be noted that the steps illustrated in the above flowcharts may be performed in a computer system, for example by a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from the one given here.
The present embodiment further provides a passenger flow volume statistics apparatus, which is used to implement the foregoing embodiments and preferred implementations; what has already been described will not be repeated here. As used below, the terms "module," "unit," "subunit," and the like may be implemented by software and/or hardware realizing a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 7 is a block diagram of a passenger flow volume statistic device according to an embodiment of the present application, and as shown in fig. 7, the device includes: an image acquisition module 70, a first detection module 71 and a statistic module 72;
an image acquisition module 70, configured to acquire a passenger flow image frame sequence within a preset time;
the first detection module 71 is configured to, when the passenger flow type is a passenger flow of a specific crowd, input a passenger flow image frame sequence into a detection model, where the detection model detects a target human body in the image frame sequence to output a position and a state of the target human body in the image frame sequence; wherein the status includes at least one of: generating a state, deleting the state and updating the state;
and the counting module 72 is configured to perform geometric calculation according to the position and the state of the target human body, so as to determine the number of people in the area or the number of people in and out of the area in the passenger flow image frame sequence within the preset time.
In some embodiments, fig. 8 is another block diagram of a passenger flow statistics apparatus according to the embodiment of the present application, and as shown in fig. 8, the apparatus further includes a second detection module 80;
the second detection module 80 is configured to, in a case that the type of the passenger flow is a conventional passenger flow, perform target detection on the passenger flow image frame sequence to generate a target detection result, and perform target tracking according to the target detection result to determine a position and a state of a target human body in the image frame sequence.
In some embodiments, the first detecting module 71, the second detecting module 80, and the counting module 72 are further configured to implement other steps in the passenger flow volume counting method provided in the foregoing embodiments, and are not described herein again.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
The present application further provides a passenger flow volume statistics system. Fig. 9 is a structural block diagram of the passenger flow volume statistics system according to an embodiment of the present application; the system includes a camera 90 and a central processing unit 91. The camera 90 is configured to acquire a passenger flow image frame sequence within a preset time, and the central processing unit 91 may be integrated in the camera 90. When the passenger flow type is the passenger flow of a specific crowd, the passenger flow image frame sequence is input into a detection model, and the detection model detects the target human body in the image frame sequence to output the position and state of the target human body in the image frame sequence, where the state includes at least one of: a generation state, a deletion state and an update state. When the passenger flow type is conventional passenger flow, target detection is performed on the passenger flow image frame sequence to generate a target detection result, and target tracking is performed according to the target detection result to determine the position and state of the target human body in the image frame sequence. Geometric calculation is then performed according to the position and state of the target human body to determine the number of people in an area, or the number of people entering and leaving an area, in the passenger flow image frame sequence within the preset time.
In some embodiments, fig. 10 is another structural block diagram of the passenger flow volume statistics system according to an embodiment of the present application. As shown in fig. 10, the central processing unit 91 may include a first central processing unit 100 and a second central processing unit 101. The first central processing unit 100 is independent of the camera 90, is disposed outside the camera 90, and is in communication connection with the camera 90; the second central processing unit 101 is integrated in the camera 90. The camera 90 is configured to acquire a passenger flow image frame sequence within a preset time. When the passenger flow type is the passenger flow of a specific crowd, the camera 90 sends the passenger flow image frame sequence to the first central processing unit 100; the first central processing unit 100 inputs the passenger flow image frame sequence into a detection model, the detection model detects the target human body in the image frame sequence to output the position and state of the target human body in the image frame sequence, where the state includes at least one of: a generation state, a deletion state and an update state, and the first central processing unit 100 sends the position and state of the target human body in the image frame sequence back to the camera 90. When the passenger flow type is conventional passenger flow, the second central processing unit 101 performs target detection on the passenger flow image frame sequence to generate a target detection result and performs target tracking according to the target detection result to determine the position and state of the target human body in the image frame sequence; the second central processing unit 101 then performs geometric calculation according to the position and state of the target human body to determine the number of people in an area, or the number of people entering and leaving an area, in the passenger flow image frame sequence within the preset time. In this configuration, the processing time and memory consumed inside the camera 90 can be effectively reduced when the passenger flow of a specific crowd is counted.
In addition, the passenger flow volume statistical method of the embodiment of the present application described in conjunction with fig. 1 may be implemented by a computer device. The computer device may include a processor and a memory storing computer program instructions.
In particular, the processor may include a Central Processing Unit (CPU) or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
The memory may include, among other things, mass storage for data or instructions. By way of example and not limitation, the memory may include a Hard Disk Drive (HDD), a floppy disk drive, a Solid State Drive (SSD), flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory may include removable or non-removable (or fixed) media, where appropriate. The memory may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory is a non-volatile memory. In particular embodiments, the memory includes Read-Only Memory (ROM) and Random Access Memory (RAM). The ROM may be a mask-programmed ROM, a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), an Electrically Rewritable ROM (EAROM), or FLASH memory, or a combination of two or more of these, where appropriate. The RAM may be a Static Random Access Memory (SRAM) or a Dynamic Random Access Memory (DRAM), where the DRAM may be a Fast Page Mode DRAM (FPM DRAM), an Extended Data Output DRAM (EDO DRAM), a Synchronous DRAM (SDRAM), and the like.
The memory may be used to store or cache various data files for processing and/or communication use, as well as possibly computer program instructions for execution by the processor.
The processor reads and executes the computer program instructions stored in the memory to implement the passenger flow volume statistical method in any one of the above embodiments.
In some embodiments, the computer device may further include a communication interface 113 and a bus 110, fig. 11 is a schematic diagram of a hardware structure of the computer device according to the embodiment of the present application, and as shown in fig. 11, the processor 111, the memory 112, and the communication interface 113 are connected through the bus 110 and complete communication therebetween.
The communication interface 113 is used for implementing communication between modules, apparatuses, units and/or devices in the embodiments of the present application. The communication interface 113 may also enable communication with other components such as: the data communication is carried out among external equipment, image/data acquisition equipment, a database, external storage, an image/data processing workstation and the like.
Bus 110 comprises hardware, software, or both, and couples the components of the computer device to each other. Bus 110 includes, but is not limited to, at least one of the following: a data bus, an address bus, a control bus, an expansion bus, and a local bus. By way of example and not limitation, Bus 110 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front-Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), another suitable bus, or a combination of two or more of these. Bus 110 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable buses or interconnects are contemplated by the application.
In addition, in combination with the passenger flow volume statistical method in the foregoing embodiments, an embodiment of the present application may provide a computer-readable storage medium for implementation. The computer-readable storage medium has computer program instructions stored thereon; when the computer program instructions are executed by a processor, any of the passenger flow volume statistical methods in the above embodiments is implemented.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as there is no contradiction in a combination, it should be considered to be within the scope of this specification.
The above examples express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these fall within the scope of protection of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method for statistics of passenger flow, the method comprising:
acquiring a passenger flow image frame sequence in preset time;
under the condition that the passenger flow type is the passenger flow of a specific crowd, inputting the passenger flow image frame sequence into a detection model, and detecting a target human body in the image frame sequence by the detection model so as to output the position and the state of the target human body in the image frame sequence; wherein the state comprises at least one of: generating a state, deleting the state and updating the state;
and performing geometric calculation according to the position and the state of the target human body to determine the number of people in the region or the number of people in and out of the region in the passenger flow image frame sequence within the preset time.
2. The method of claim 1, wherein the method of training the detection model comprises:
establishing a detection model, wherein the detection model uses a full convolution network and adopts a residual error structure;
acquiring a plurality of specific passenger flow image frame sequences from a passenger flow image database, and labeling the position and the state of a target human body in the specific passenger flow image frame sequences, wherein the specific passenger flow image frame sequences and the labels are divided into groups;
and training a detection model by using the specific passenger flow image frame sequence and the label in the same group.
3. The method of claim 2, wherein the training of the inspection model with the particular sequence of passenger flow image frames and the labels of the same group comprises:
clustering the specific passenger flow image frame sequence according to a clustering algorithm to obtain a prior detection box of the specific passenger flow image frame sequence;
determining a training matrix of the detection model according to the specific passenger flow image frame sequence and a prior detection frame corresponding to the specific passenger flow image frame sequence;
and taking the labels in the same group with the specific passenger flow image frame sequence as a correction matrix of the detection model, and training the detection model according to the correction matrix and the training matrix.
4. The method of claim 1, wherein after obtaining the sequence of passenger flow image frames within the predetermined time, the method further comprises:
acquiring the passenger flow type of the passenger flow image frame sequence;
and under the condition that the passenger flow type is the conventional passenger flow, performing target detection on the passenger flow image frame sequence to generate a target detection result, and performing target tracking according to the target detection result to determine the position and the state of the target human body in the image frame sequence.
5. The method of claim 1, wherein performing geometric calculations to determine the number of people in the region in the passenger flow image frame sequence within the preset time based on the position and state of the target human body comprises:
the area in which the number of people needs to be counted is determined,
and determining a geometric relationship according to the position of the target human body and the region, and determining the number of regional people in the region according to the geometric relationship and the state.
6. The method of claim 1, wherein the performing geometric calculations based on the position and state of the target human body to determine the number of people coming in and going out of the area in the passenger flow image frame sequence within a preset time comprises:
determining an access area and an access tripwire; the access area is an area in which access of the target human body needs to be counted, and the access tripwire is a reference line used for counting whether the access action is triggered by the target human body;
under the condition that the target human body leaves the access area, acquiring a position connecting line of the target human body entering the access area and leaving the access area, acquiring a geometric relationship between the position connecting line and the trip line, and determining the number of people in and out of the area according to the geometric relationship.
7. A passenger flow volume statistic device, said device comprising: the device comprises an image acquisition module, a first detection module and a statistic module;
the image acquisition module is used for acquiring a passenger flow image frame sequence in preset time;
the first detection module is used for inputting the passenger flow image frame sequence into a detection model under the condition that the passenger flow type is the passenger flow of a specific crowd, and the detection model detects a target human body in the image frame sequence so as to output the position and the state of the target human body in the image frame sequence; wherein the state comprises at least one of: generating a state, deleting the state and updating the state;
and the counting module is used for performing geometric calculation according to the position and the state of the target human body so as to determine the number of people in the region or the number of people in and out of the region in the passenger flow image frame sequence within the preset time.
8. A passenger flow statistics system, characterized in that the system comprises: a camera and a central processing unit;
the camera is used for acquiring a passenger flow image frame sequence in preset time;
the central processing unit is used for inputting the passenger flow image frame sequence into a detection model under the condition that the passenger flow type is the passenger flow of a specific crowd, the detection model detects a target human body in the image frame sequence to output the position and the state of the target human body in the image frame sequence, and the central processing unit is further used for performing geometric calculation according to the position and the state of the target human body to determine the number of people in an area or the number of people in and out of the area in the passenger flow image frame sequence; wherein the state comprises at least one of: generating state, deleting state, and updating state.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 6.
CN202010805614.2A 2020-08-12 2020-08-12 Passenger flow volume statistics method and device and computer equipment Active CN112116556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010805614.2A CN112116556B (en) 2020-08-12 2020-08-12 Passenger flow volume statistics method and device and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010805614.2A CN112116556B (en) 2020-08-12 2020-08-12 Passenger flow volume statistics method and device and computer equipment

Publications (2)

Publication Number Publication Date
CN112116556A true CN112116556A (en) 2020-12-22
CN112116556B CN112116556B (en) 2024-07-05

Family

ID=73804006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010805614.2A Active CN112116556B (en) 2020-08-12 2020-08-12 Passenger flow volume statistics method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN112116556B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103986910A (en) * 2014-05-20 2014-08-13 中国科学院自动化研究所 Method and system for passenger flow statistics based on cameras with intelligent analysis function
WO2016180323A1 (en) * 2015-05-12 2016-11-17 杭州海康威视数字技术股份有限公司 Method and device for calculating customer traffic volume
CN108764181A (en) * 2018-05-31 2018-11-06 北京学之途网络科技有限公司 A kind of passenger flow statistical method and device, computer readable storage medium
CN109389589A (en) * 2018-09-28 2019-02-26 百度在线网络技术(北京)有限公司 Method and apparatus for statistical number of person
CN110659588A (en) * 2019-09-02 2020-01-07 平安科技(深圳)有限公司 Passenger flow volume statistical method and device and computer readable storage medium
CN110765940A (en) * 2019-10-22 2020-02-07 杭州姿感科技有限公司 Target object statistical method and device
CN110874583A (en) * 2019-11-19 2020-03-10 北京精准沟通传媒科技股份有限公司 Passenger flow statistics method and device, storage medium and electronic equipment

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565715A (en) * 2020-12-30 2021-03-26 浙江大华技术股份有限公司 Scenic spot passenger flow monitoring method and device, electronic equipment and storage medium
WO2022142413A1 (en) * 2020-12-31 2022-07-07 深圳云天励飞技术股份有限公司 Method and apparatus for predicting customer flow volume of mall, and electronic device and storage medium
CN112822046A (en) * 2021-01-04 2021-05-18 新华三大数据技术有限公司 Flow prediction method and device
CN113592785A (en) * 2021-07-09 2021-11-02 浙江大华技术股份有限公司 Target flow statistical method and device
CN114821676A (en) * 2022-06-29 2022-07-29 珠海视熙科技有限公司 Passenger flow human body detection method and device, storage medium and passenger flow statistical camera

Also Published As

Publication number Publication date
CN112116556B (en) 2024-07-05

Similar Documents

Publication Publication Date Title
CN112116556A (en) Passenger flow volume statistical method and device and computer equipment
CN109858371B (en) Face recognition method and device
US12020473B2 (en) Pedestrian re-identification method, device, electronic device and computer-readable storage medium
CN109697416B (en) Video data processing method and related device
CN110443110B (en) Face recognition method, device, terminal and storage medium based on multipath camera shooting
CN111667001B (en) Target re-identification method, device, computer equipment and storage medium
CN110287889A (en) A kind of method and device of identification
CN111612104B (en) Vehicle loss assessment image acquisition method, device, medium and electronic equipment
CN110853033A (en) Video detection method and device based on inter-frame similarity
CN108509994B (en) Method and device for clustering character images
CN110298230A (en) Silent biopsy method, device, computer equipment and storage medium
CN109547748B (en) Object foot point determining method and device and storage medium
CN111325107B (en) Detection model training method, device, electronic equipment and readable storage medium
CN111027450A (en) Bank card information identification method and device, computer equipment and storage medium
JP2018045302A (en) Information processing device, information processing method and program
CN110163265A (en) Data processing method, device and computer equipment
CN111861998A (en) Human body image quality evaluation method, device and system and computer equipment
CN106056083A (en) Information processing method and terminal
CN109902550A (en) The recognition methods of pedestrian's attribute and device
CN111881740A (en) Face recognition method, face recognition device, electronic equipment and medium
CN115223022A (en) Image processing method, device, storage medium and equipment
CN111126159A (en) Method, apparatus, electronic device, and medium for tracking pedestrian in real time
CN112889061B (en) Face image quality evaluation method, device, equipment and storage medium
CN110929613A (en) Image screening algorithm for intelligent traffic violation audit
WO2023071180A1 (en) Authenticity identification method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant