CN114155488A - Method and device for acquiring passenger flow data, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114155488A
CN114155488A (application CN202111446732.XA)
Authority
CN
China
Prior art keywords
target
information
objects
line
passenger flow
Prior art date
Legal status
Withdrawn
Application number
CN202111446732.XA
Other languages
Chinese (zh)
Inventor
孙贺然
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202111446732.XA
Publication of CN114155488A
Priority to PCT/CN2022/104256 (WO2023098081A1)
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a method, an apparatus, an electronic device, and a storage medium for acquiring passenger flow data. First, a video clip obtained by shooting a target area is acquired together with a discrimination indicator; the video clip comprises a plurality of target images, each covering the entrance area and the exit area of the target area. Then, object recognition is performed on each of the target images and the objects appearing in the video clip are tracked, yielding a plurality of target objects, key point information of each target object in the target images, and moving speed information of each target object across the target image sequence. Finally, passenger flow data of the target area are determined based on the key point information, the moving speed information, and the discrimination indicator corresponding to each of the target objects.

Description

Method and device for acquiring passenger flow data, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for acquiring passenger flow data, an electronic device, and a storage medium.
Background
Many places need passenger flow information in order to formulate control strategies that ensure safety and normal operation. For example, a subway station needs to know the passenger flow during rush hours so that a corresponding flow control strategy can be adopted; likewise, an offline physical store needs customer flow information to develop its operational policies. At present, however, passenger flow statistics for a given place are not accurate enough.
Disclosure of Invention
The embodiment of the disclosure at least provides a method, a device, electronic equipment and a storage medium for acquiring passenger flow data.
In a first aspect, an embodiment of the present disclosure provides a method for acquiring passenger flow data, including:
acquiring a video clip obtained by shooting a target area, together with a discrimination indicator; the video clip comprises a plurality of target images; each target image covers the entrance area and the exit area of the target area;
performing object recognition on each of the plurality of target images and tracking a plurality of objects appearing in the video clip, to obtain a plurality of target objects, key point information of the target objects in the target images, and moving speed information of the target objects across the target image sequence;
and determining passenger flow data of the target area based on the key point information, the moving speed information, and the discrimination indicator respectively corresponding to the plurality of target objects.
In this aspect, object recognition, object tracking, and key point detection can be realized accurately through image recognition, and the passenger flow data in the target area can then be determined accurately by combining the detected key point information with the moving speed of the target object. Compared with prior approaches that perform passenger flow statistics using head-and-shoulder detection of the object, using key points effectively avoids the loss of detection precision caused by occlusion, and combining the moving speed of the target object with the key point information further improves the precision of passenger flow detection.
In a possible implementation manner, the key point information includes first position information of at least one key point of a target object in at least one corresponding target image; the discrimination indicator comprises a flow discrimination line;
the determining passenger flow data of the target area based on the key point information, the moving speed information, and the discrimination indicator respectively corresponding to each of the plurality of target objects includes:
for each target object, determining cross-line result information corresponding to the target object based on first position information of at least one key point corresponding to the target object and second position information of the flow discrimination line; the cross-line result information represents whether the target object crosses the flow discrimination line;
and determining passenger flow data of the target area based on the cross-line result information and the moving speed information corresponding to the target objects.
According to this embodiment, whether the target object crosses the flow discrimination line can be determined accurately from the first position information of the key points corresponding to the target object and the second position information of the flow discrimination line; that is, accurate cross-line result information can be obtained. Passenger flow statistics for the target area can then be performed accurately by combining the cross-line result information with the moving speed information of the target object.
In one possible implementation, the key point information includes at least one group of relative key point information of the target object, where the relative key point information includes at least one of foot key point information of the left and right feet of the target object, knee key point information of the left and right knees of the target object, and leg key point information of the left and right legs of the target object.
According to this embodiment, the positions of these key points, namely the foot key point information, knee key point information, or leg key point information, represent the position of the target object accurately, so whether the target object crosses the flow discrimination line can be determined accurately from them.
In a possible implementation manner, the determining, based on the first position information of the at least one key point corresponding to the target object and the second position information of the flow discrimination line, the cross-line result information corresponding to the target object includes:
determining, based on the first position information and the second position information, that the cross-line result information includes information that the target object crossed the flow discrimination line when there exists at least one group of relative key point information whose two key points are connected by a line that intersects the flow discrimination line.
In this embodiment, the connecting line of a group of key points intersecting the flow discrimination line indicates that the target object crossed the flow discrimination line; setting the cross-line result information to include this information accordingly makes the obtained cross-line result information more accurate.
In a possible implementation manner, the determining, based on the first position information of the at least one key point corresponding to the target object and the second position information of the flow discrimination line, the cross-line result information corresponding to the target object includes:
determining that the cross-line result information includes information that the target object crossed the flow discrimination line when the first position information of at least one key point of the target object matches the second position information.
In this embodiment, a key point lying on the flow discrimination line indicates that the target object crossed the flow discrimination line; setting the cross-line result information to include this information accordingly makes the obtained cross-line result information more accurate.
In one possible implementation, the determining passenger flow data of the target area based on the cross-line result information and the moving speed information corresponding to the plurality of target objects includes:
determining moving direction information of the target object based on the video clip;
and determining passenger flow data of the target area based on the cross-line result information, the moving speed information and the moving direction information corresponding to the target objects.
According to this embodiment, combining the moving direction information of the target object with the cross-line result information and the moving speed information makes it possible to determine accurately whether the target object entered or left the target area, yielding more detailed passenger flow data.
In one possible embodiment, the passenger flow data comprises a first number of target objects entering the target area;
the determining passenger flow data of the target area based on the cross-line result information, the moving speed information and the moving direction information corresponding to the plurality of target objects includes:
screening, from the plurality of target objects, first objects whose cross-line result information indicates that the flow discrimination line was crossed, whose moving direction information indicates movement toward the target area when crossing, and whose speed indicated by the moving speed information is greater than a first preset speed;
and taking the number of the first objects as the first number.
According to the embodiment, the target objects entering the target area can be screened out more accurately according to the line crossing result information, the moving speed information and the moving direction information, namely, the first quantity is determined more accurately.
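The screening described above amounts to a simple filter over per-object records. A minimal sketch follows; the record fields and the threshold value are illustrative assumptions, not names taken from this disclosure:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    crossed_line: bool   # cross-line result: did it cross the flow discrimination line?
    entering: bool       # moving direction when crossing: toward the target area?
    speed: float         # speed indicated by the moving speed information

def count_entering(objects, first_preset_speed):
    """First number: objects that crossed the flow discrimination line,
    were moving toward the target area when crossing, and exceeded the
    first preset speed."""
    return sum(
        1 for o in objects
        if o.crossed_line and o.entering and o.speed > first_preset_speed
    )
```

The second number (objects leaving) is the symmetric filter with the direction negated and a second preset speed, and the third number then follows as the first number minus the second.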
In one possible embodiment, the passenger flow data comprises a second number of target objects leaving the target area;
the determining passenger flow data of the target area based on the cross-line result information, the moving speed information and the moving direction information corresponding to the plurality of target objects includes:
screening, from the plurality of target objects, second objects whose cross-line result information indicates that the flow discrimination line was crossed, whose moving direction information indicates movement away from the target area when crossing, and whose speed indicated by the moving speed information is greater than a second preset speed;
and taking the number of the second objects as the second number.
According to the embodiment, the target objects which are away from the target area can be screened out more accurately according to the line crossing result information, the moving speed information and the moving direction information, namely, the second quantity is determined more accurately.
In one possible embodiment, the passenger flow data comprises a third number of target objects located in the target area;
the determining passenger flow data of the target area based on the cross-line result information, the moving speed information and the moving direction information corresponding to the plurality of target objects further includes:
determining the third number based on the first number and the second number;
the first number is the number of first objects, among the plurality of target objects, whose cross-line result information indicates that the flow discrimination line was crossed, whose moving direction information indicates movement toward the target area when crossing, and whose speed indicated by the moving speed information is greater than a first preset speed;
the second number is the number of second objects, among the plurality of target objects, whose cross-line result information indicates that the flow discrimination line was crossed, whose moving direction information indicates movement away from the target area when crossing, and whose speed indicated by the moving speed information is greater than a second preset speed.
In this embodiment, the number of target objects located in the target area, that is, the third number, can be determined more accurately from the number of target objects entering the target area (the first number) and the number of target objects leaving the target area (the second number).
In a possible implementation, the performing object recognition on each of the plurality of target images and tracking a plurality of objects appearing in the video segment to obtain a plurality of target objects respectively includes:
respectively carrying out object identification on the target images to obtain a plurality of object detection frames; each object detection frame corresponds to one object;
determining the degree of overlap between object detection frames belonging to two adjacent target images; when the degree of overlap is greater than a preset value, determining that the two object detection frames correspond to the same object, and taking that object as a target object.
According to this embodiment, whether two object detection frames correspond to the same object can be determined accurately from their degree of overlap; that is, objects can be tracked accurately.
In a possible implementation, after obtaining the plurality of target objects, the method further includes:
for each of the plurality of target objects, determining that tracking of the target object has failed if the target object is not tracked again within a preset time.
In this embodiment, a target object that is not detected within a certain period of time has most likely been lost; marking such target objects as tracking failures improves the accuracy of object tracking.
In a possible embodiment, the discrimination indicator includes a preset discrimination frame;
the performing object recognition on each target image in the plurality of target images, and tracking a plurality of objects appearing in the video clip to obtain a plurality of target objects, includes:
performing object recognition, based on third position information of the preset discrimination frame, on the image areas corresponding to the preset discrimination frame in the plurality of target images, and tracking a plurality of objects appearing in those image areas in the video clip, to obtain a plurality of target objects; at least part of the flow discrimination line corresponding to the discrimination indicator is located within the preset discrimination frame.
According to this embodiment, only objects within the preset discrimination frame are identified and tracked, which effectively reduces the amount of data to be processed and improves both efficiency and accuracy.
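Restricting recognition to the preset discrimination frame is, in effect, cropping each target image to that region before detection. A minimal sketch, assuming the frame's third position information is given as pixel coordinates (x1, y1, x2, y2) with exclusive upper bounds (an assumed convention):

```python
def crop_to_discrimination_frame(image, frame):
    """Keep only the image area inside the preset discrimination frame.

    image: row-major 2-D list of pixels (any pixel type);
    frame: (x1, y1, x2, y2) in pixels, upper bounds exclusive.
    """
    x1, y1, x2, y2 = frame
    return [row[x1:x2] for row in image[y1:y2]]
```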
In a possible implementation, after determining the first object and the second object, the method further includes:
and sending, to a remote server, at least one of: the time when the first object entered the target area, the time when the second object left the target area, entry identification information of the first object when entering, exit identification information of the second object when leaving, and the second position information of the flow discrimination line, so that passenger flow statistics for the target area are performed by the remote server.
According to this embodiment, transmitting to the remote server the entry time and entry identifier of objects entering the target area, and the exit time and exit identifier of objects leaving it, improves the accuracy of the passenger flow statistics performed by the remote server.
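The report sent to the remote server can be pictured as a small structured payload. The field names below are illustrative assumptions; the disclosure does not specify a wire format:

```python
import json

def build_flow_report(enter_time, enter_id, leave_time, leave_id, line_position):
    """Serialize per-object entry/exit records and the flow
    discrimination line's second position information for upload."""
    return json.dumps({
        "enter_time": enter_time,    # time the first object entered the target area
        "enter_id": enter_id,        # entry identification information
        "leave_time": leave_time,    # time the second object left the target area
        "leave_id": leave_id,        # exit identification information
        "flow_line": line_position,  # second position information of the line
    })
```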
In a second aspect, an embodiment of the present disclosure further provides an apparatus for acquiring passenger flow data, including:
the information acquisition module is configured to acquire a video clip obtained by shooting a target area, together with a discrimination indicator; the video clip comprises a plurality of target images; each target image covers the entrance area and the exit area of the target area;
the information processing module is configured to perform object recognition on each of the plurality of target images and to track a plurality of objects appearing in the video clip, obtaining a plurality of target objects, key point information of the target objects in the target images, and moving speed information of the target objects across the target image sequence;
and the passenger flow statistics module is configured to determine passenger flow data of the target area based on the key point information, the moving speed information, and the discrimination indicator respectively corresponding to the plurality of target objects.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the first aspect described above, or any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the first aspect or any possible implementation of the first aspect.
For the above description of the effect of the apparatus, the electronic device, and the computer-readable storage medium for acquiring passenger flow data, reference is made to the above description of the method for acquiring passenger flow data, and details are not repeated here.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. The following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
Fig. 1 is a flowchart illustrating a method for acquiring passenger flow data according to an embodiment of the present disclosure;
FIG. 2 is a flow chart illustrating the determination of passenger flow data in another method for acquiring passenger flow data according to the embodiment of the disclosure;
FIG. 3 is a schematic diagram of a region for performing passenger flow statistics provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating an apparatus for acquiring passenger flow data according to an embodiment of the disclosure;
fig. 5 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions are described completely below with reference to the drawings; the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments, as generally described and illustrated in the figures, can be arranged and designed in a wide variety of configurations. The following detailed description is therefore not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments. All other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the disclosure.
According to research, passenger flow statistics are needed in many places, but current statistics for a given place are not accurate enough. In view of this, the present disclosure provides a method, an apparatus, an electronic device, and a storage medium for acquiring passenger flow data. The method realizes object recognition, object tracking, and key point detection accurately through image recognition, and then determines the passenger flow data in the target area accurately by combining the detected key point information with the moving speed of the target object. Compared with prior approaches that perform passenger flow statistics using head-and-shoulder detection of the object, using key points effectively avoids the loss of detection precision caused by occlusion, and combining the moving speed of the target object with the key point information further improves the precision of passenger flow detection.
The following describes a method for acquiring passenger flow data according to an embodiment of the present disclosure, taking as an example an execution subject that is a device with computing capability.
As shown in fig. 1, the method for acquiring passenger flow data provided by the present disclosure may include the following steps:
s110, acquiring a video clip obtained by shooting a target area and a distinguishing mark; the video clip comprises a plurality of target images; the target image includes an entrance area of the target area.
The target area is the area for which passenger flow statistics are to be performed, for example a particular shop. The video clips may be captured by cameras installed at the entrance and exit areas of the target area. The discrimination indicator may include a flow discrimination line; only an object that crosses this line can become valid data in the passenger flow data.
In addition, the discrimination indicator may further include a preset discrimination frame; only when an object enters the preset discrimination frame can it become valid data in the passenger flow data.
And S120, respectively carrying out object identification on each target image in the target images, and respectively tracking a plurality of objects appearing in the video clip to obtain a plurality of target objects, the key point information of the target objects in the target images and the moving speed information of the target objects in the target image sequence.
For example, a trained neural network may be used to perform object recognition and tracking on a target image, and to determine keypoint information of the target object.
During object recognition, an object detection frame is determined for each object in each target image. Objects are then tracked according to the intersection-over-union of the detection frames: in two adjacent target images (or two target images captured a short interval apart), the two objects whose detection frames have an intersection-over-union greater than a preset value are treated as the same object; that object is considered successfully tracked and is taken as a target object.
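The overlap test can be made concrete with intersection-over-union of axis-aligned detection boxes; the 0.5 preset value below is an assumed threshold, not one specified by the disclosure:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) detection frames."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def same_object(box_prev, box_curr, threshold=0.5):
    """Treat detections in adjacent target images as the same target
    object when their overlap exceeds the preset value."""
    return iou(box_prev, box_curr) > threshold
```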
The key point information corresponding to each object may be determined by the trained neural network at the same time as object recognition; alternatively, after a target object is determined, the trained neural network may be applied to it to determine its key point information.
The key point information is information about key points that can characterize the position of the target object, and may include first position information of at least one key point of the target object in at least one target image. For example, it may be at least one of key point information corresponding to the feet, the knees, and the legs of the target object.
The target images in which a target object appears are determined from its tracking result and sorted by capture time to obtain the target image sequence.
After the target image sequence of a target object is determined, the distance the target object moves within the capture interval of any two target images can be determined from the sequence, and dividing by that capture interval gives the target object's moving speed over it. The moving speed information may be determined from the moving speeds corresponding to multiple such pairs of target images; for example, it may directly include those moving speeds.
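The speed computation reduces to dividing a key point's displacement between two target images by their capture-time interval. A minimal sketch in image-plane pixels per second, since the disclosure does not specify a world-coordinate calibration:

```python
import math

def moving_speed(pos_a, pos_b, t_a, t_b):
    """Speed between two target images: displacement of a key point
    (pixels) divided by the capture-time interval (seconds)."""
    dt = t_b - t_a
    if dt <= 0:
        raise ValueError("images must be ordered by capture time")
    return math.hypot(pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]) / dt
```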
And S130, determining passenger flow data of the target area based on the key point information, the moving speed information and the distinguishing mark corresponding to the target objects respectively.
For example, whether a target object crossed the flow discrimination line can be determined from the key point information and the discrimination indicator, and the object's speed when crossing can be determined from the moving speed information; only a target object whose crossing speed meets a preset requirement becomes valid data in the passenger flow data. The passenger flow data of the target area can therefore be determined more accurately from the key point information, the moving speed information, and the discrimination indicator.
For example, this can be achieved as follows: first, for each target object, cross-line result information corresponding to the target object is determined based on the first position information of at least one key point corresponding to the target object and the second position information of the flow discrimination line; the cross-line result information represents whether the target object crossed the flow discrimination line. Passenger flow data of the target area are then determined based on the cross-line result information and the moving speed information corresponding to the target objects.
In some embodiments, the cross-line result information corresponding to a target object may be determined using at least one group of relative key point information; for example, the relative key point information may include at least one of foot key point information of the left and right feet, knee key point information of the left and right knees, and leg key point information of the left and right legs of the target object. The relative key point information includes at least the position information of the corresponding key points; for example, the foot key point information of the left and right feet includes at least the first position information of the left-foot key point and of the right-foot key point, and the knee key point information of the left and right knees includes at least the first position information of the left-knee key point and of the right-knee key point.
The positions of these key points, namely the foot, knee, or leg key point information, represent the position of the target object accurately, so whether the target object crosses the flow discrimination line can be determined accurately from them.
In specific implementation, the following steps may be utilized to determine the cross-line result information corresponding to the target object:
and determining that the cross-line result information comprises information that the target object crosses the flow judgment line when determining that a connection line of two key points corresponding to at least one group of relative key point information exists and the two key points are intersected with the flow judgment line based on the first position information and the second position information.
When the connecting line of a certain group of key points intersects the flow discrimination line, this indicates that the target object crosses the flow discrimination line; at this time, the cross-line result information needs to be set to include the information that the target object crosses the flow discrimination line, so that the obtained cross-line result information is more accurate.
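The intersection test described above can be sketched with a standard orientation-based segment-intersection check. The 2-D coordinate representation and all function names below are assumptions for illustration, not part of the disclosure:

```python
def _orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p): which side of line pq point r lies on."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_intersect(a1, a2, b1, b2):
    """True if segment a1-a2 properly intersects segment b1-b2 (endpoints in general position)."""
    return (_orient(a1, a2, b1) != _orient(a1, a2, b2)
            and _orient(b1, b2, a1) != _orient(b1, b2, a2))

def crossed_line(left_kp, right_kp, line_start, line_end):
    """Does the connecting line of a relative key-point pair cross the flow discrimination line?"""
    return segments_intersect(left_kp, right_kp, line_start, line_end)
```

For instance, the connecting line between the left and right foot key points would be tested against the two endpoints of the calibrated flow discrimination line.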
In some embodiments, the cross-line result information corresponding to the target object may also be determined by using a single key point. Specifically, in a case where the first position information of at least one key point of the target object matches the second position information, it is determined that the cross-line result information includes information that the target object crosses the flow discrimination line. The matching may specifically mean that the first position information of the key point is the same as the second position information; this indicates that one key point of the target object is located on the flow discrimination line, that is, the target object is located on the flow discrimination line, and the target object is therefore considered to cross the flow discrimination line.
When a certain key point lies on the flow discrimination line, the target object crosses the flow discrimination line; at this time, the cross-line result information needs to be set to include the information that the target object crosses the flow discrimination line, so that the obtained cross-line result information is more accurate.
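The single-key-point match can likewise be sketched as a point-on-segment test with a small tolerance; the tolerance value and the function name are illustrative assumptions:

```python
def point_on_segment(p, s, e, tol=1e-6):
    """True if key point p lies on the segment s-e within a small tolerance."""
    cross = (e[0] - s[0]) * (p[1] - s[1]) - (e[1] - s[1]) * (p[0] - s[0])
    if abs(cross) > tol:
        return False  # key point is not collinear with the discrimination line
    # collinear: also require the point to fall between the segment endpoints
    return (min(s[0], e[0]) - tol <= p[0] <= max(s[0], e[0]) + tol
            and min(s[1], e[1]) - tol <= p[1] <= max(s[1], e[1]) + tol)
```

In practice a pixel-scale tolerance (rather than `1e-6`) would likely be used, since detected key points rarely land exactly on the calibrated line.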
On the basis of the line crossing result information and the moving speed information, the passenger flow data of the target area can be further determined by combining the moving direction information of the target object, so that whether the target object enters the target area or leaves the target area can be further accurately determined, and more detailed passenger flow data can be obtained.
The moving direction information may be determined according to the video clip. For example, a target image sequence corresponding to the target object is determined from the video clip, the positions of the target object in any two target images belonging to the target image sequence are determined, and the moving direction of the target object within the shooting time interval of the two target images can then be determined according to that interval. Any two target images here may be two adjacent images in the target image sequence. The moving direction information may be determined based on the moving directions corresponding to multiple such pairs of target images; illustratively, the moving direction information directly includes those moving directions. In this way, the moving direction of the target object when crossing the flow discrimination line can be determined according to the moving direction information.
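A minimal sketch of deriving per-pair moving directions (and, as a by-product, speed) from a tracked sequence of timestamped positions; the track representation `[(timestamp, (x, y)), ...]` is an assumption for illustration:

```python
def movement_between(pos_a, pos_b, t_a, t_b):
    """Direction vector and speed between two sightings of the same tracked object."""
    dt = t_b - t_a
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    speed = (dx ** 2 + dy ** 2) ** 0.5 / dt  # displacement over elapsed shooting interval
    return (dx, dy), speed

def direction_info(track):
    """Moving directions over adjacent image pairs of a track [(timestamp, (x, y)), ...]."""
    return [movement_between(p0, p1, t0, t1)[0]
            for (t0, p0), (t1, p1) in zip(track, track[1:])]
```

The sign of the direction vector relative to the flow discrimination line would then tell whether the object is heading toward or away from the target area.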
In some embodiments, as shown in fig. 2, the determining passenger flow data of the target area based on the cross-line result information, the moving speed information, and the moving direction information corresponding to the plurality of target objects may specifically be implemented by the following steps:
S210, screening, from the plurality of target objects, first objects whose cross-line result information indicates that the flow discrimination line is crossed, whose moving direction information indicates that the traveling direction when crossing the flow discrimination line is toward the target area, and whose speed indicated by the moving speed information is greater than a first preset speed; taking the number of the first objects as a first number of target objects entering the target area;
S220, screening, from the plurality of target objects, second objects whose cross-line result information indicates that the flow discrimination line is crossed, whose moving direction information indicates that the traveling direction when crossing the flow discrimination line is away from the target area, and whose speed indicated by the moving speed information is greater than a second preset speed; taking the number of the second objects as a second number of target objects leaving the target area;
S230, determining a third number based on the first number and the second number; illustratively, calculating the difference between the first number and the second number and taking the difference as the third number, where the third number is the number of target objects located in the target area;
and S240, taking the first quantity, the second quantity and the third quantity as the passenger flow data of the target area.
According to the cross-line result information, the moving speed information, and the moving direction information, the number of target objects entering the target area, the number of target objects leaving the target area, and the number of target objects located in the target area can be accurately screened out, effectively improving the accuracy of the determined passenger flow data.
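The screening in steps S210 to S240 can be sketched as straightforward filtering over per-object attributes. The dictionary keys and the `'in'`/`'out'` direction encoding below are illustrative assumptions:

```python
def count_passenger_flow(objects, v_enter_min, v_exit_min):
    """Steps S210-S240: count entries, exits, and occupancy from per-object attributes.

    Each object is a dict with 'crossed' (crossed the flow discrimination line),
    'direction' ('in' = toward the target area, 'out' = away from it), and 'speed'.
    """
    first_objects = [o for o in objects
                     if o["crossed"] and o["direction"] == "in" and o["speed"] > v_enter_min]
    second_objects = [o for o in objects
                      if o["crossed"] and o["direction"] == "out" and o["speed"] > v_exit_min]
    first_number = len(first_objects)            # S210: target objects entering the area
    second_number = len(second_objects)          # S220: target objects leaving the area
    third_number = first_number - second_number  # S230: target objects still inside
    return {"entered": first_number, "left": second_number, "inside": third_number}
```

The speed thresholds filter out spurious crossings, e.g. an object loitering on the line, matching the requirement that only sufficiently fast crossings become effective passenger flow data.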
In some embodiments, the performing object identification on each target image in the plurality of target images and tracking the plurality of objects appearing in the video segment to obtain the plurality of target objects may specifically be implemented by using the following steps:
Firstly, object identification is respectively performed on the plurality of target images to obtain a plurality of object detection frames, each object detection frame corresponding to one object. Then, the overlapping degree between object detection frames respectively belonging to two adjacent target images is determined; in a case where the overlapping degree is greater than a preset value, it is determined that the object detection frames respectively belonging to the two adjacent target images correspond to the same object, and the object is taken as a target object.
In specific implementation, the overlapping degree of any two object detection frames in any two target images captured at close times can be determined; in a case where the overlapping degree is greater than a preset value, the two objects corresponding to the two object detection frames are determined to be the same object, and the object is taken as a target object.
Whether the objects corresponding to two object detection frames are the same object can be accurately determined according to the overlapping degree of the two object detection frames, so that object tracking can be performed accurately.
The overlapping degree here may specifically be an intersection ratio of two object detection frames.
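The intersection-over-union measure mentioned here is standard; below is a sketch using `(x1, y1, x2, y2)` corner boxes (an assumed representation) and an assumed matching threshold of 0.5:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two detection frames given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero when the frames don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def same_object(box_a, box_b, threshold=0.5):
    """Treat detections in adjacent frames as the same object when overlap exceeds the preset value."""
    return iou(box_a, box_b) > threshold
```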
In some embodiments, if a target object is not tracked within a preset time, it is determined that tracking of the target object has failed. For target objects that are not detected within a certain time, tracking has likely failed; setting such target objects as tracking failures improves the accuracy of object tracking.
After a certain target object is determined for the first time, an object identifier is set for the target object, and for the target object with tracking failure, the object identifier of the target object is not allocated to other target objects obtained by subsequent detection.
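A minimal sketch of the track-lifecycle rules just described — timeout-based tracking failure, and object identifiers that are never reassigned to later detections. The class and method names are illustrative assumptions:

```python
import itertools

class TrackRegistry:
    """Minimal sketch: timeout-based tracking failure; object identifiers are never reused."""

    def __init__(self, timeout):
        self.timeout = timeout
        self._ids = itertools.count(1)  # monotonically increasing, never reassigned
        self.active = {}                # object_id -> timestamp of last sighting

    def new_track(self, now):
        """Register a newly determined target object and return its unique identifier."""
        oid = next(self._ids)
        self.active[oid] = now
        return oid

    def update(self, oid, now):
        """Record a fresh sighting of an already-tracked target object."""
        self.active[oid] = now

    def prune(self, now):
        """Retire tracks not seen within the preset time; returns the failed identifiers."""
        failed = [oid for oid, t in self.active.items() if now - t > self.timeout]
        for oid in failed:
            del self.active[oid]
        return failed
```

Because `_ids` only counts upward, an identifier retired by `prune` can never be handed out to a subsequently detected object.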
In some embodiments, the determination mark includes not only a flow determination line, but also a preset determination frame; at this time, the above-mentioned performing object recognition on each target image in the multiple target images, and tracking multiple objects appearing in the video clip, respectively, to obtain multiple target objects may specifically be implemented by using the following steps:
respectively carrying out object identification on image areas corresponding to the preset distinguishing frame in a plurality of target images based on the third position information of the preset distinguishing frame, and respectively tracking a plurality of objects appearing in the image areas corresponding to the preset distinguishing frame in the video clip to obtain a plurality of target objects; and at least part of the flow distinguishing line corresponding to the distinguishing mark is positioned in the preset distinguishing frame.
In specific implementation, the corresponding sub-image can be intercepted from the target image according to the third position information of the preset decision frame, and the sub-image sequence is subjected to object recognition and tracking to obtain a plurality of target objects.
Only the objects in the preset judgment frame are identified and tracked, the data processing quantity can be effectively reduced, and the efficiency and the accuracy are improved.
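Restricting processing to the preset decision frame amounts to cropping each target image before recognition. Below is a dependency-free sketch that uses nested lists as a stand-in for image arrays (an assumption for illustration):

```python
def crop_to_decision_box(frame, box):
    """Keep only the image area inside the preset decision frame.

    frame: row-major 2-D list standing in for an image; box: (x1, y1, x2, y2).
    """
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in frame[y1:y2]]
```

With NumPy image arrays the equivalent crop would be `frame[y1:y2, x1:x2]`; recognition and tracking then run only on the resulting sub-image sequence, reducing the data to be processed.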
The preset determination frame and the flow discrimination line are determined by calibrating the shooting device that shoots the video clip. The preset determination frame may be a regularly shaped frame such as a rectangle, or an irregularly shaped frame; the present application is not limited in this respect. When setting the preset determination frame, it needs to be ensured that the upper body of an object entering or leaving the target area is located within the frame, so as to improve the accuracy of object identification, tracking, and passenger flow data statistics.
Illustratively, the entrance and exit area of the target area is the area 310 shown in fig. 3, and a preset decision frame 320 is disposed in the entrance and exit area 310; the embodiment of the present disclosure only identifies and tracks objects within the preset decision frame. The flow discrimination line 330 is at least partially located within the preset decision frame 320. When counting the passenger flow data, statistics need to be performed according to whether the tracked target object crosses the flow discrimination line 330. Specifically, as shown in fig. 3, a connecting line 370 of a set of key points of the target object 350 (for example, taking the middle point of each sole as a key point, so that the set of key points may be the middle points of both feet of the target object 350) intersects the flow discrimination line 330, and the target object 350 is therefore considered to cross the flow discrimination line 330; a key point of another object 360 is located on the flow discrimination line 330, so the other object 360 is also considered to cross the flow discrimination line 330.
Further, if the tracking result shows that the same target object is the target object 350 and the target object 360 at different times, the moving direction information of the target object may be determined according to the time when the image to which the target object 350 belongs is captured and the time when the image to which the target object 360 belongs is captured. For example, if the time when the image to which the target object 350 belongs is captured is earlier than the time when the image to which the target object 360 belongs, it is determined that the moving direction information of the target object indicates that the target object moves across the flow rate discrimination line in the traveling direction as entering the target area. If the speed indicated by the moving speed information of the target object, which is further determined according to the time of photographing the image to which the target object 350 belongs and the time of photographing the image to which the target object 360 belongs, is greater than the first preset speed, the target object is determined as the object entering the target area. The first number, i.e. the number of objects entering the target area, can be determined according to the counted number of the class of target objects.
Further, if the time of capturing the image to which the target object 350 belongs is later than the time of capturing the image to which the target object 360 belongs, it is determined that the moving direction information of the target object indicates that the moving direction of the target object when crossing the flow rate discrimination line is away from the target area. If the speed indicated by the moving speed information of the target object, which is further determined according to the time of capturing the image to which the target object 350 belongs and the time of capturing the image to which the target object 360 belongs, is greater than the second preset speed, the target object is determined as an object leaving the target area. The second number, i.e. the number of objects leaving the target area, may be determined according to the counted number of the class of target objects.
Before passenger flow data statistics are performed, a first direction of entering the target area and a second direction of leaving the target area need to be determined; the first direction and the second direction are used for determining whether the traveling direction of a target object when crossing the flow discrimination line corresponds to entering or leaving the target area.
After the first object and the second object are determined, at least one of the following may be sent to a remote server, so that passenger flow statistics on the target area are realized through the remote server: the time when the first object, that is, the target object entering the target area, enters the target area; the time when the second object, that is, the target object leaving the target area, leaves the target area; entry identification information when the first object enters the target area; exit identification information when the second object leaves the target area; the second position information of the flow discrimination line; the identification of the flow discrimination line; the video clip; and the target image sequence corresponding to each target object.
The information such as the entering time and the entering identification of the object entering the target area and the information such as the leaving time and the leaving identification of the object leaving the target area are transmitted to the remote server, so that the accuracy of the remote server in carrying out passenger flow statistics can be improved.
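A hypothetical sketch of the report sent to the remote server. The disclosure does not specify a wire format, so the JSON layout and every field name below are assumptions:

```python
import json

def build_flow_report(entries, exits, line_id, line_position):
    """Hypothetical payload; every field name here is an assumption, not the disclosure's format.

    entries/exits: lists of (object_id, timestamp) pairs for objects entering/leaving.
    """
    return json.dumps({
        "line_id": line_id,              # identification of the flow discrimination line
        "line_position": line_position,  # second position information of the line
        "entries": [{"object_id": oid, "entered_at": t} for oid, t in entries],
        "exits": [{"object_id": oid, "left_at": t} for oid, t in exits],
    })
```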
The method of the above embodiment of the present disclosure may also be executed by a shooting device that shoots a video clip, and the execution subject of the method of the above embodiment is not limited by the present disclosure.
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides a device for acquiring passenger flow data corresponding to the method for acquiring passenger flow data, and as the principle of solving the problem of the device in the embodiment of the present disclosure is similar to that of the method for acquiring passenger flow data in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are omitted.
Referring to fig. 4, there is shown a schematic architecture diagram of an apparatus for acquiring passenger flow data according to an embodiment of the present disclosure, where the apparatus includes:
an information obtaining module 410, configured to obtain a video clip obtained by shooting a target area and a distinguishing mark; the video clip comprises a plurality of target images; the target image includes an entrance area of the target area.
The information processing module 420 is configured to perform object identification on each of the multiple target images, and track multiple objects appearing in the video clip, respectively, to obtain multiple target objects, key point information of the target object in the target image to which the target object belongs, and moving speed information of the target object in the target image sequence to which the target object belongs.
A passenger flow statistics module 430, configured to determine passenger flow data of the target area based on the key point information, the moving speed information, and the distinguishing flag respectively corresponding to the multiple target objects.
In some embodiments, the key point information includes first position information of at least one key point of a target object in at least one target image corresponding to the key point information; the distinguishing mark comprises a flow distinguishing line;
the passenger flow statistics module 430 is configured to, when determining the passenger flow data of the target area based on the key point information, the moving speed information, and the discrimination flag respectively corresponding to each of the plurality of target objects:
for each target object, determining cross-line result information corresponding to the target object based on first position information of at least one key point corresponding to the target object and second position information of the flow discrimination line; the cross-line result information is used for representing whether the target object crosses the flow judging line or not;
and determining passenger flow data of the target area based on the cross-line result information and the moving speed information corresponding to the target objects.
In some embodiments, the key point information comprises at least one set of relative key point information of the target object, wherein the relative key point information comprises at least one of foot key point information of the left and right feet of the target object, knee key point information of the left and right knees of the target object, and leg key point information of the left and right legs of the target object.
In some embodiments, the passenger flow statistics module 430, when determining the cross-line result information corresponding to the target object based on the first location information of the at least one key point corresponding to the target object and the second location information of the traffic discrimination line, is configured to:
and determining that the cross-line result information comprises information that the target object crosses the flow judgment line when determining that a connection line of two key points corresponding to at least one group of relative key point information exists and the two key points are intersected with the flow judgment line based on the first position information and the second position information.
In some embodiments, the passenger flow statistics module 430, when determining the cross-line result information corresponding to the target object based on the first location information of the at least one key point corresponding to the target object and the second location information of the traffic discrimination line, is configured to:
and determining that the cross-line result information comprises information that the target object crosses the flow discrimination line under the condition that the first position information of at least one key point of the target object is matched with the second position information.
In some embodiments, the passenger flow statistics module 430, when determining the passenger flow data of the target area based on the cross-line result information and the moving speed information corresponding to a plurality of the target objects, is configured to:
determining moving direction information of the target object based on the video clip;
and determining passenger flow data of the target area based on the cross-line result information, the moving speed information and the moving direction information corresponding to the target objects.
In some embodiments, the passenger flow data includes a first number of target objects entering the target area;
the passenger flow statistics module 430 is configured to, when determining passenger flow data of the target area based on the cross-line result information, the moving speed information, and the moving direction information corresponding to the plurality of target objects:
screening, from the plurality of target objects, a first object whose cross-line result information indicates that the flow discrimination line is crossed, whose moving direction information indicates that the traveling direction when crossing the flow discrimination line is toward the target area, and whose speed indicated by the moving speed information is greater than a first preset speed;
taking the number of the first objects as the first number.
In some embodiments, the passenger flow data includes a second number of target objects leaving the target area;
the passenger flow statistics module 430 is configured to, when determining passenger flow data of the target area based on the cross-line result information, the moving speed information, and the moving direction information corresponding to the plurality of target objects:
screening, from the plurality of target objects, a second object whose cross-line result information indicates that the flow discrimination line is crossed, whose moving direction information indicates that the traveling direction when crossing the flow discrimination line is away from the target area, and whose speed indicated by the moving speed information is greater than a second preset speed;
taking the number of the second objects as the second number.
In some embodiments, the passenger flow data includes a third number of target objects located in the target area;
the passenger flow statistics module 430 is configured to, when determining passenger flow data of the target area based on the cross-line result information, the moving speed information, and the moving direction information corresponding to the plurality of target objects:
determining the third number based on the first number and the second number;
the first number is the number of first objects, among the plurality of target objects, whose cross-line result information indicates that the flow discrimination line is crossed, whose moving direction information indicates that the moving direction when crossing the flow discrimination line is toward the target area, and whose speed indicated by the moving speed information is greater than a first preset speed;
the second number is the number of second objects, among the plurality of target objects, whose cross-line result information indicates that the flow discrimination line is crossed, whose moving direction information indicates that the moving direction when crossing the flow discrimination line is away from the target area, and whose speed indicated by the moving speed information is greater than a second preset speed.
In some embodiments, the information processing module 420, when performing object recognition on each of the plurality of target images and tracking a plurality of objects appearing in the video segment to obtain a plurality of target objects, is configured to:
respectively carrying out object identification on the target images to obtain a plurality of object detection frames; each object detection frame corresponds to one object;
determining the overlapping degree between the object detection frames respectively belonging to the two adjacent target images, determining the same object corresponding to the object detection frames respectively belonging to the two adjacent target images under the condition that the overlapping degree is greater than a preset value, and taking the object as a target object.
In some embodiments, the information processing module 420, after obtaining the plurality of target objects, is further configured to:
determining, for each of the plurality of target objects, that the target object fails to track if the target object is not tracked within a preset time.
In some embodiments, the decision flag comprises a preset decision frame;
the information processing module 420 is configured to, when performing object identification on each of the plurality of target images and tracking a plurality of objects appearing in the video clip to obtain a plurality of target objects, respectively:
respectively carrying out object identification on image areas corresponding to the preset distinguishing frame in a plurality of target images based on the third position information of the preset distinguishing frame, and respectively tracking a plurality of objects appearing in the image areas corresponding to the preset distinguishing frame in the video clip to obtain a plurality of target objects; and at least part of the flow distinguishing line corresponding to the distinguishing mark is positioned in the preset distinguishing frame.
In some embodiments, the passenger flow statistics module 430, after determining the first object and the second object, is further configured to:
and sending at least one of time when the first object enters the target area, time when the second object leaves the target area, entering identification information when the first object enters the target area, leaving identification information when the second object leaves the target area, and second position information of the flow judgment line to a remote server, so as to realize passenger flow statistics on the target area through the remote server.
Based on the same technical concept, the embodiment of the disclosure also provides an electronic device. Referring to fig. 5, a schematic structural diagram of an electronic device 500 provided in the embodiment of the present disclosure includes a processor 51, a memory 52, and a bus 53. The memory 52 is used for storing execution instructions and includes an internal memory 521 and an external memory 522. The internal memory 521 temporarily stores operation data of the processor 51 and data exchanged with the external memory 522, such as a hard disk; the processor 51 exchanges data with the external memory 522 through the internal memory 521. When the electronic device 500 operates, the processor 51 communicates with the memory 52 through the bus 53, so that the processor 51 executes the following instructions:
acquiring a video clip obtained by shooting a target area and a distinguishing mark; the video clip comprises a plurality of target images; the target image comprises an entrance area and an exit area of the target area; respectively carrying out object identification on each target image in the target images, and respectively tracking a plurality of objects appearing in the video clip to obtain a plurality of target objects, key point information of the target objects in the target images and moving speed information of the target objects in the target image sequence; and determining passenger flow data of the target area based on the key point information, the moving speed information and the distinguishing mark respectively corresponding to the target objects.
The embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method for acquiring passenger flow data in the above method embodiment are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the method for acquiring passenger flow data provided in the embodiments of the present disclosure includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute steps of the method for acquiring passenger flow data described in the above method embodiments, which may be referred to specifically for the above method embodiments, and are not described herein again. The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the scope of protection of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may still, within the technical scope of the present disclosure, modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some of the technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered by its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (16)

1. A method of obtaining passenger flow data, comprising:
acquiring a distinguishing mark and a video clip obtained by shooting a target area; the video clip comprises a plurality of target images; each target image comprises an entrance area and an exit area of the target area;
respectively performing object recognition on each of the plurality of target images, and respectively tracking a plurality of objects appearing in the video clip, to obtain a plurality of target objects, key point information of the target objects in the target images, and moving speed information of the target objects across the sequence of target images;
and determining passenger flow data of the target area based on the key point information, the moving speed information and the distinguishing mark respectively corresponding to the target objects.
2. The method according to claim 1, wherein the key point information comprises first position information of at least one key point of a target object corresponding to the key point information in at least one target image; the distinguishing mark comprises a flow distinguishing line;
the determining passenger flow data of the target area based on the key point information, the moving speed information, and the distinguishing mark respectively corresponding to each of the plurality of target objects comprises:
for each target object, determining cross-line result information corresponding to the target object based on the first position information of at least one key point corresponding to the target object and second position information of the flow discrimination line; the cross-line result information is used for representing whether the target object crosses the flow discrimination line;
and determining passenger flow data of the target area based on the cross-line result information and the moving speed information corresponding to the target objects.
3. The method of claim 2, wherein the key point information comprises at least one group of relative key point information of the target object, wherein a group of relative key point information comprises at least one of: foot key point information of the left and right feet of the target object, knee key point information of the left and right knees of the target object, and leg key point information of the left and right legs of the target object.
4. The method according to claim 3, wherein the determining the cross-line result information corresponding to the target object based on the first position information of the at least one key point corresponding to the target object and the second position information of the flow discrimination line comprises:
and determining that the cross-line result information comprises information indicating that the target object has crossed the flow discrimination line when it is determined, based on the first position information and the second position information, that a line connecting the two key points of at least one group of relative key point information intersects the flow discrimination line.
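Claim 4's cross-line test reduces to a standard segment-intersection check: the segment connecting a pair of relative key points (e.g. the left and right foot key points) is tested against the flow discrimination line. A minimal sketch in Python using the cross-product orientation test; function and variable names are illustrative, not taken from the patent:

```python
def ccw(a, b, c):
    # Sign of the cross product (b - a) x (c - a):
    # > 0 means c lies to the left of segment a-b, < 0 to the right.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    # True if segment p1-p2 properly crosses segment q1-q2:
    # each segment's endpoints lie on opposite sides of the other segment.
    d1, d2 = ccw(q1, q2, p1), ccw(q1, q2, p2)
    d3, d4 = ccw(p1, p2, q1), ccw(p1, p2, q2)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def crosses_line(left_pt, right_pt, line_start, line_end):
    # The claim's test: does the line connecting a pair of relative
    # key points (e.g. left/right foot) intersect the discrimination line?
    return segments_intersect(left_pt, right_pt, line_start, line_end)
```

Using a pair of key points rather than a single point makes the test robust when one foot is occluded on either side of the line.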
5. The method according to claim 2 or 3, wherein the determining the cross-line result information corresponding to the target object based on the first position information of the at least one key point corresponding to the target object and the second position information of the flow discrimination line comprises:
and determining that the cross-line result information comprises information indicating that the target object has crossed the flow discrimination line in a case where the first position information of at least one key point of the target object matches the second position information.
6. The method according to any one of claims 2 to 5, wherein the determining passenger flow data of the target area based on the cross-line result information and the moving speed information corresponding to the plurality of target objects comprises:
determining moving direction information of the target object based on the video clip;
and determining passenger flow data of the target area based on the cross-line result information, the moving speed information and the moving direction information corresponding to the target objects.
7. The method of claim 6, wherein the passenger flow data comprises a first number of target objects entering the target area;
the determining passenger flow data of the target area based on the cross-line result information, the moving speed information and the moving direction information corresponding to the plurality of target objects includes:
screening, from the plurality of target objects, a first object whose cross-line result information indicates that the flow discrimination line is crossed, whose moving direction information indicates that the moving direction when crossing the flow discrimination line is toward the target area, and whose speed indicated by the moving speed information is greater than a first preset speed;
taking the number of the first objects as the first number.
8. The method according to claim 6 or 7, wherein the passenger flow data comprises a second number of target objects leaving the target area;
the determining passenger flow data of the target area based on the cross-line result information, the moving speed information and the moving direction information corresponding to the plurality of target objects includes:
screening, from the plurality of target objects, a second object whose cross-line result information indicates that the flow discrimination line is crossed, whose moving direction information indicates that the moving direction when crossing the flow discrimination line is away from the target area, and whose speed indicated by the moving speed information is greater than a second preset speed;
taking the number of the second objects as the second number.
9. The method according to any of claims 6 to 8, characterized in that the passenger flow data comprises a third number of target objects located in the target area;
the determining passenger flow data of the target area based on the cross-line result information, the moving speed information and the moving direction information corresponding to the plurality of target objects further includes:
determining the third number based on the first number and the second number;
the first number is the number of first objects, among the plurality of target objects, whose cross-line result information indicates that the flow discrimination line is crossed, whose moving direction information indicates that the moving direction when crossing the flow discrimination line is toward the target area, and whose speed indicated by the moving speed information is greater than a first preset speed;
the second number is the number of second objects, among the plurality of target objects, whose cross-line result information indicates that the flow discrimination line is crossed, whose moving direction information indicates that the moving direction when crossing the flow discrimination line is away from the target area, and whose speed indicated by the moving speed information is greater than a second preset speed.
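The counting rules of claims 7 to 9 can be sketched as a single filter over the tracked objects. The dictionary field names, the `'in'`/`'out'` direction encoding, and the default thresholds below are illustrative assumptions, not defined by the patent:

```python
def count_passenger_flow(objects, first_preset_speed=0.1, second_preset_speed=0.1):
    # objects: list of dicts with keys 'crossed' (bool, cross-line result),
    # 'direction' ('in' = toward the target area, 'out' = away from it),
    # and 'speed' (from the moving speed information).
    # First number (claim 7): entries faster than the first preset speed.
    first = sum(1 for o in objects
                if o['crossed'] and o['direction'] == 'in'
                and o['speed'] > first_preset_speed)
    # Second number (claim 8): exits faster than the second preset speed.
    second = sum(1 for o in objects
                 if o['crossed'] and o['direction'] == 'out'
                 and o['speed'] > second_preset_speed)
    # Third number (claim 9): objects currently inside the target area.
    third = first - second
    return first, second, third
```

The speed thresholds filter out loiterers who straddle the line without actually entering or leaving, which is why slow crossings are excluded from both counts.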
10. The method according to any one of claims 1 to 9, wherein the performing object recognition on each of the plurality of target images and tracking a plurality of objects appearing in the video segment respectively to obtain a plurality of target objects comprises:
respectively carrying out object identification on the target images to obtain a plurality of object detection frames; each object detection frame corresponds to one object;
determining an overlapping degree between object detection frames respectively belonging to two adjacent target images, determining that the two object detection frames correspond to the same object when the overlapping degree is greater than a preset value, and taking the object as a target object.
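The overlap-based association in claim 10 is essentially greedy IoU matching between detection frames of adjacent images. A hedged sketch; the 0.5 threshold stands in for the claim's unspecified preset value, and the box format is an assumption:

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def match_objects(prev_boxes, curr_boxes, threshold=0.5):
    # Greedily associate each detection frame of the previous image with
    # the best-overlapping unused frame of the current image; an overlap
    # above the preset threshold marks the two frames as the same object.
    matches, used = {}, set()
    for i, pb in enumerate(prev_boxes):
        best_j, best_iou = None, threshold
        for j, cb in enumerate(curr_boxes):
            if j in used:
                continue
            v = iou(pb, cb)
            if v > best_iou:
                best_j, best_iou = j, v
        if best_j is not None:
            matches[i] = best_j
            used.add(best_j)
    return matches
```

This works because adjacent frames of a video are close in time, so the same person's detection frames overlap heavily while different people's usually do not.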
11. The method of claim 10, further comprising, after obtaining the plurality of target objects:
determining, for each of the plurality of target objects, that tracking of the target object has failed if the target object is not tracked within a preset time.
12. The method according to any one of claims 1 to 11, wherein the distinguishing mark comprises a preset distinguishing frame;
the performing object recognition on each target image in the plurality of target images, and tracking a plurality of objects appearing in the video clip to obtain a plurality of target objects respectively includes:
respectively performing object recognition on image areas corresponding to the preset distinguishing frame in the plurality of target images based on third position information of the preset distinguishing frame, and respectively tracking a plurality of objects appearing in the image areas corresponding to the preset distinguishing frame in the video clip, to obtain the plurality of target objects; wherein at least part of the flow discrimination line corresponding to the distinguishing mark is located within the preset distinguishing frame.
13. The method of claim 9, further comprising, after determining the first object and the second object:
sending, to a remote server, at least one of: a time at which the first object enters the target area, a time at which the second object leaves the target area, entry identification information of the first object entering the target area, leaving identification information of the second object leaving the target area, and the second position information of the flow discrimination line, so as to realize passenger flow statistics on the target area through the remote server.
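Claim 13's reporting step amounts to packaging the statistics and POSTing them to the remote server. A sketch using only the standard library; the JSON field names and the endpoint URL are assumptions for illustration, not defined by the patent:

```python
import json
import urllib.request

def build_report(entry_events, exit_events, line_position):
    # Package the items listed in claim 13: entry/exit times and
    # identification information, plus the discrimination line position.
    # Field names here are illustrative assumptions.
    return {
        "entries": entry_events,     # e.g. [{"id": ..., "time": ...}, ...]
        "exits": exit_events,
        "flow_line": line_position,  # second position information of the line
    }

def send_report(report, url):
    # POST the report as JSON to the remote server (hypothetical endpoint),
    # which aggregates the passenger flow statistics.
    data = json.dumps(report).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Offloading aggregation to the server keeps the edge device's work limited to detection, tracking, and line-crossing judgment.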
14. An apparatus for obtaining passenger flow data, comprising:
the information acquisition module is configured to acquire a distinguishing mark and a video clip obtained by shooting a target area; the video clip comprises a plurality of target images; each target image comprises an entrance area and an exit area of the target area;
the information processing module is configured to respectively perform object recognition on each of the plurality of target images and respectively track a plurality of objects appearing in the video clip, to obtain a plurality of target objects, key point information of the target objects in the target images, and moving speed information of the target objects across the sequence of target images;
and the passenger flow statistics module is configured to determine passenger flow data of the target area based on the key point information, the moving speed information, and the distinguishing mark respectively corresponding to the plurality of target objects.
15. An electronic device, comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the method of obtaining passenger flow data according to any one of claims 1 to 13.
16. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the steps of the method of obtaining passenger flow data according to any one of claims 1 to 13.
CN202111446732.XA 2021-11-30 2021-11-30 Method and device for acquiring passenger flow data, electronic equipment and storage medium Withdrawn CN114155488A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111446732.XA CN114155488A (en) 2021-11-30 2021-11-30 Method and device for acquiring passenger flow data, electronic equipment and storage medium
PCT/CN2022/104256 WO2023098081A1 (en) 2021-11-30 2022-07-07 Method and apparatus for acquiring passenger flow data, and electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111446732.XA CN114155488A (en) 2021-11-30 2021-11-30 Method and device for acquiring passenger flow data, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114155488A true CN114155488A (en) 2022-03-08

Family

ID=80455499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111446732.XA Withdrawn CN114155488A (en) 2021-11-30 2021-11-30 Method and device for acquiring passenger flow data, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114155488A (en)
WO (1) WO2023098081A1 (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109413584B (en) * 2017-08-14 2020-08-25 中国电信股份有限公司 Passenger flow trajectory tracking method, device and system
CN110991224B (en) * 2019-10-18 2024-05-28 平安科技(深圳)有限公司 Pedestrian red light running detection method and device based on image recognition and related equipment
CN111160243A (en) * 2019-12-27 2020-05-15 深圳云天励飞技术有限公司 Passenger flow volume statistical method and related product
CN113705470A (en) * 2021-08-30 2021-11-26 北京市商汤科技开发有限公司 Method and device for acquiring passenger flow information, computer equipment and storage medium
CN114155488A (en) * 2021-11-30 2022-03-08 北京市商汤科技开发有限公司 Method and device for acquiring passenger flow data, electronic equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023098081A1 (en) * 2021-11-30 2023-06-08 上海商汤智能科技有限公司 Method and apparatus for acquiring passenger flow data, and electronic device and storage medium
CN116055867A (en) * 2022-05-30 2023-05-02 荣耀终端有限公司 Shooting method and electronic equipment
CN116055867B (en) * 2022-05-30 2023-11-24 荣耀终端有限公司 Shooting method and electronic equipment
CN117523472A (en) * 2023-09-19 2024-02-06 浙江大华技术股份有限公司 Passenger flow data statistics method, computer equipment and computer readable storage medium

Also Published As

Publication number Publication date
WO2023098081A1 (en) 2023-06-08

Similar Documents

Publication Publication Date Title
CN114155488A (en) Method and device for acquiring passenger flow data, electronic equipment and storage medium
CN110334569B (en) Passenger flow volume in-out identification method, device, equipment and storage medium
EP2450832A1 (en) Image processing apparatus and image processing method
KR101492180B1 (en) Video analysis
CN111369805B (en) Fake plate detection method and device, electronic equipment and computer readable storage medium
JPH11175730A (en) Human body detection and trace system
CN111369801B (en) Vehicle identification method, device, equipment and storage medium
CN108694399A (en) Licence plate recognition method, apparatus and system
CN111145555A (en) Method and device for detecting vehicle violation
CN109740609A (en) A kind of gauge detection method and device
CN110866692A (en) Generation method and generation device of early warning information and readable storage medium
CN114049378A (en) Queuing analysis method and device
CN110175553B (en) Method and device for establishing feature library based on gait recognition and face recognition
CN111126257A (en) Behavior detection method and device
CN112115946B (en) License plate fake-license plate identification method and device, storage medium and electronic equipment
CN108921072A (en) A kind of the people flow rate statistical method, apparatus and system of view-based access control model sensor
CN109344829A (en) A kind of Train number recognition method and device of High Speed Railway Trains
CN113642455B (en) Pedestrian number determining method, device and computer readable storage medium
CN110781710B (en) Target object clustering method and device
CN111060507B (en) Vehicle verification method and device
CN114677608A (en) Identity feature generation method, device and storage medium
CN112597924A (en) Electric bicycle track tracking method, camera device and server
JP2017151582A (en) Image analyzer, program, and method for tracking person shown in photographed camera image
CN113239900B (en) Human body position detection method, device and computer readable storage medium
CN114879177B (en) Target analysis method and device based on radar information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40065209

Country of ref document: HK

WW01 Invention patent application withdrawn after publication

Application publication date: 20220308