CN112541938A - Pedestrian speed measuring method, system, medium and computing device - Google Patents

Pedestrian speed measuring method, system, medium and computing device

Info

Publication number
CN112541938A
CN112541938A (application CN202011497958.8A)
Authority
CN
China
Prior art keywords
time
distance
pixel difference
difference
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011497958.8A
Other languages
Chinese (zh)
Inventor
毛少将
郭宇鹏
石雷
周昌锋
王晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CRSC Institute of Smart City Research and Design Co Ltd
Original Assignee
CRSC Institute of Smart City Research and Design Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CRSC Institute of Smart City Research and Design Co Ltd filed Critical CRSC Institute of Smart City Research and Design Co Ltd
Priority to CN202011497958.8A priority Critical patent/CN112541938A/en
Publication of CN112541938A publication Critical patent/CN112541938A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01PMEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P3/00Measuring linear or angular speed; Measuring differences of linear or angular speeds
    • G01P3/64Devices characterised by the determination of the time taken to traverse a fixed distance
    • G01P3/68Devices characterised by the determination of the time taken to traverse a fixed distance using optical means, i.e. using infrared, visible, or ultraviolet light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Abstract

The invention relates to a pedestrian speed measuring method, system, medium and computing device, comprising the following steps: loading original image data; performing target detection on the original image data and outputting a detection result; performing associated tracking of targets across different frames on the target detection results, and outputting the detection results that satisfy the tracking constraint conditions; calculating the pixel difference of the detection-frame vertex and the time difference between the previous and current frames; performing an accumulated-time judgment on the time difference and the pixel difference to ensure that the stored data falls within a preset time threshold; mapping the pixel differences to actual distances, then accumulating pixel by pixel to obtain the moving distances in the X and Y directions over approximately the time threshold; combining the obtained X- and Y-direction distances in the real scene according to the Pythagorean theorem to obtain the movement distance; and obtaining the target movement speed from the movement distance and the accumulated time using a speed calculation formula. The invention can realize accurate real-time speed detection of pedestrians.

Description

Pedestrian speed measuring method, system, medium and computing device
Technical Field
The invention relates to the technical field of speed measurement, and in particular to a pedestrian speed measurement method, system, medium, and computing device based on a monocular camera.
Background
Pedestrian speed measurement is an important supporting technology for realizing functions such as intelligent safety warning, protection, and monitoring of pedestrians in various scenes, for example recognizing pedestrians running or walking, or rapid large-area movement caused by an abnormality in a crowd. The traditional approach relies on security personnel patrolling or observing monitoring pictures on site, or on a speed-measuring radar. However, because a large amount of monitoring equipment is already installed in existing subway stations, reusing that equipment can greatly reduce cost. With the development of artificial intelligence technology, some manufacturers have tried to use monitoring pictures to measure speed, but for technical reasons the speed measurement effect has not been ideal.
An existing speed-measuring radar mainly comprises a transmitter, an antenna and a controller. The transmitter emits electromagnetic wave energy in a certain direction in space, and an object in that direction reflects the electromagnetic waves; the radar antenna receives the reflected wave and sends it to a receiving device for processing, extracting information about the object (the distance from the target to the radar, the rate of change of range or radial velocity, azimuth, altitude, etc.). Its main disadvantages are high cost and the difficulty of measuring pedestrian speed in highly complex scenes.
Disclosure of Invention
In view of the above problems, it is an object of the present invention to provide a method, a system, a medium, and a computing device for measuring pedestrian speed based on a monocular camera, which can realize accurate real-time pedestrian speed detection.
In order to achieve the above object, the invention adopts the following technical solution: a pedestrian speed measurement method, comprising: step S1: loading original image data; step S2: performing target detection on the original image data and outputting a detection result; step S3: performing associated tracking of targets across different frames on the target detection results, and outputting the detection results that satisfy the tracking constraint conditions; step S4: calculating the pixel difference of the detection-frame vertex and the time difference between the previous and current frames; step S5: performing an accumulated-time judgment on the time difference and the pixel difference to ensure that the stored data falls within a preset time threshold; step S6: mapping the pixel differences to actual distances, then accumulating pixel by pixel to obtain the moving distances in the X and Y directions over approximately the time threshold; step S7: combining the obtained X- and Y-direction distances in the real scene according to the Pythagorean theorem to obtain the movement distance; step S8: obtaining the target movement speed from the movement distance and the accumulated time using a speed calculation formula.
Further, in step S1, the original image data is loaded in the form of a monocular camera or a video file.
Further, in step S2, a YOLOv3 deep neural network model is used to complete multi-target detection, and the detection box category and coordinate information are output.
Further, in step S3, the method uses a DeepSORT algorithm to realize the association tracking of the targets in different frames, and outputs the target types and coordinate information obtained by association tracking.
Further, in step S4, the specifically calculating includes:
step S41: initializing a preset time memory and a pixel difference memory of a detection frame vertex, and decomposing an original motion pixel difference of the detection frame vertex into an X direction and a Y direction;
step S42: calculating the pixel difference of the detection-frame vertices between the previous and current frames in the X direction and storing it in the pixel difference memory;
step S43: calculating the time difference between the two frames and recording it in the time memory;
step S44: calculating the pixel difference of the detection-frame vertices between the previous and current frames in the Y direction and storing it in the pixel difference memory.
Further, in step S5, the method for judging the accumulated time comprises: recording the timestamp of the current frame while storing the pixel difference, then judging whether the time difference between the first and last entries in the time memory is greater than the set time threshold; if so, cumulatively summing the pixel differences in the pixel difference memory, entering step S6 to perform the mapping calculation between pixels and actual distance, and deleting the data at the initial position of the time memory and the pixel difference memory; if not, returning to step S4 to recalculate until the condition is met.
Further, in step S6, the pixel difference mapping formula is:
(The three pixel-difference mapping formulas appear as images in the original publication and are not reproduced here.)
in the formula: w1 is the distance from the point vertically below the camera to the nearest end of the vertical field of view; w2 is the distance from the nearest end of the vertical field of view to the field-of-view center line; w3 is the distance from the farthest end of the vertical field of view to the field-of-view center line; w4 is the distance from the farthest end of the vertical field of view to the extension line of the field-of-view center line; h is the height of the camera; θ1 is the included angle between the camera's vertical line and the center line of the camera field of view; n is the number of pixel points in the Y direction; m is a constant in the range [1, n-2]; x0 is the actual distance corresponding to the first pixel vertically upward from the center of the camera imaging plane; x1 is the actual distance corresponding to the second pixel vertically upward from the center of the camera imaging plane; Y is the real-scene distance corresponding to each pixel in the vertical direction of the camera imaging plane; h0 is the distance from each pixel point to the ground in the vertical direction of the camera imaging plane; θ0 is half of the camera's horizontal field angle; w5 is half the width of the horizontal field of view; and X is the real-scene distance corresponding to each pixel in the horizontal direction of the camera imaging plane.
A pedestrian speed measurement system, comprising: a loading module, a target detection module, a tracking module, a difference calculation module, a judgment module, a mapping module, a movement distance acquisition module and a target speed acquisition module. The loading module loads original image data; the target detection module performs target detection on the original image data and outputs a detection result; the tracking module performs associated tracking of targets across different frames on the target detection results and outputs the detection results that satisfy the tracking constraint conditions; the difference calculation module calculates the pixel difference of the detection-frame vertex and the time difference between the previous and current frames; the judgment module performs an accumulated-time judgment on the time difference and the pixel difference to ensure that the stored data falls within the preset time threshold; the mapping module maps the pixel differences to actual distances and accumulates them pixel by pixel to obtain the moving distances in the X and Y directions over approximately the time threshold; the movement distance acquisition module combines the obtained X- and Y-direction distances in the real scene according to the Pythagorean theorem to obtain the movement distance; and the target speed acquisition module obtains the target movement speed from the movement distance and the accumulated time using a speed calculation formula.
A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform any of the above methods.
A computing device, comprising: one or more processors, memory, and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the above-described methods.
Due to the adoption of the above technical solution, the invention has the following advantages: 1. Based on a monocular camera, the invention realizes accurate real-time pedestrian speed detection through deep learning and spatial mapping, optimized with multi-coordinate information. 2. The invention maps pixels to their actual distances one by one through spatial mapping and adopts measures such as multi-coordinate comprehensive threshold judgment, which improves the accuracy of pedestrian speed detection in a monocular camera scene; the speed detection accuracy is not affected even under severe occlusion, the speed calculation process is precise, and the final accuracy exceeds 90%.
Drawings
FIG. 1 is a schematic flow chart of a method in an embodiment of the present invention.
Fig. 2 is a vertical cross-sectional view of camera imaging in an embodiment of the invention.
Fig. 3 is a horizontal cross-sectional view of camera imaging in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the drawings of the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the invention, are within the scope of the invention.
In the description of the present invention, the corresponding abbreviations and key terms are defined as follows:
YOLOv3: a convolutional neural network framework applied in industrial scenarios; its particular organization of convolutional layers gives it significantly better accuracy on small targets than other convolutional neural network models.
DeepSORT: a multi-target tracking algorithm that performs data association using a motion model and appearance information. During real-time target tracking, it extracts the appearance features of a target for nearest-neighbor matching, which improves tracking under occlusion and reduces target ID switching. By replacing the association metric with a more reliable measure and training a convolutional neural network on a large-scale pedestrian data set to extract features, the network's robustness to losses and obstructions is increased.
The distance on the real horizontal plane is mapped to the pixel points on the imaging plane of the monocular camera according to the designed spatial mapping method. Because this mapping is performed pixel by pixel, it achieves higher precision than the traditional grid method. The vertex of the pedestrian detection frame is used as the pedestrian anchor point for speed measurement, so high speed-measurement accuracy is retained even when a large area of the pedestrian's body is occluded; and a comprehensive threshold judgment is performed on the moving distances of the four vertex coordinates of the pedestrian detection frame per unit time, filtering out non-walking states. Fast, high-accuracy pedestrian speed measurement is thus realized. The present invention will be described in detail below with reference to the embodiments.
In a first embodiment of the present invention, as shown in fig. 1, there is provided a pedestrian speed measuring method based on a monocular camera, including:
step S1: loading original image data;
in this embodiment, the original image data is loaded in the form of a monocular camera or a video file.
Step S2: carrying out target detection on the original image data and outputting a detection result;
in this embodiment, a YOLOv3 deep neural network model is adopted to complete multi-target detection and output the detection frame type and coordinate information.
Step S3: performing associated tracking of targets in different frames on target detection results, and outputting detection results meeting tracking constraint conditions;
in the embodiment, a DeepsORT algorithm is adopted to realize the associated tracking of targets in different frames, and the target types and coordinate information obtained by associated tracking are output;
step S4: calculating the pixel difference of the vertex of the detection frame and the time difference between the previous frame and the next frame;
the specific calculation comprises the following steps:
step S41: initializing a preset time memory and a pixel difference memory of the vertex of the detection frame, decomposing the original motion pixel difference of the vertex of the detection frame into an X direction and a Y direction, and preparing for iteratively calculating the pixel difference of the vertex of the detection frame.
Step S42: and calculating the pixel difference of the vertexes of the front frame detection frame and the rear frame detection frame in the X direction and storing the pixel difference into a pixel difference storage.
Step S43: the time difference between the two frames before and after the time is calculated and recorded to the time memory.
Step S44: and calculating the pixel difference of the vertexes of the front frame detection frame and the rear frame detection frame in the Y direction and storing the pixel difference into a pixel difference storage.
Step S5: and performing accumulated time judgment on the time difference and the pixel difference to ensure that the stored data is within a preset time threshold.
Specifically, the method for judging the accumulated time is as follows: record the timestamp of the current frame while storing the pixel difference, and then judge whether the time difference between the first and last entries in the time memory is greater than the set time threshold. If it is greater, cumulatively sum the pixel differences in the pixel difference memory, proceed to step S6 to perform the mapping calculation between pixels and actual distance, and delete the data at the initial position of the time memory and the pixel difference memory; if it is less, return to step S4 and recalculate until the condition is met;
in this embodiment, the preset time threshold is 1 second.
Step S6: the pixel difference is mapped to obtain the actual distance, and then the actual distance is accumulated according to the pixels to obtain the moving distances in the X direction and the Y direction of the time threshold moving left and right (for example, moving left and right for 1 second).
In this embodiment, the pixel difference mapping formula is:
(The three pixel-difference mapping formulas appear as images in the original publication and are not reproduced here.)
In the above formulas, as shown in FIGS. 2 and 3: w1 is the distance from the point vertically below the camera to the nearest end of the vertical field of view; w2 is the distance from the nearest end of the vertical field of view to the field-of-view center line; w3 is the distance from the farthest end of the vertical field of view to the field-of-view center line; w4 is the distance from the farthest end of the vertical field of view to the extension line of the field-of-view center line; h is the height of the camera; θ1 is the included angle between the camera's vertical line and the center line of the camera field of view; n is the number of pixel points in the Y direction; m is a constant in the range [1, n-2]; x0 is the actual distance corresponding to the first pixel vertically upward from the center of the camera imaging plane; x1 is the actual distance corresponding to the second pixel vertically upward from the center of the camera imaging plane; Y is the real-scene distance corresponding to each pixel in the vertical direction of the camera imaging plane; h0 is the distance from each pixel point to the ground in the vertical direction of the camera imaging plane; θ0 is half of the camera's horizontal field angle; w5 is half the width of the horizontal field of view; and X is the real-scene distance corresponding to each pixel in the horizontal direction of the camera imaging plane.
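The patent's exact mapping formulas are reproduced only as images, so they cannot be restated here. Purely as a geometric illustration of the idea behind the per-pixel mapping, the sketch below uses a standard pinhole ground-projection model: a camera at height h whose optical axis makes angle θ1 with the vertical, with n pixel rows spanning a vertical field of view of 2φ. This model, and every name in it, is an assumption for illustration, not the patent's formula.

```python
import math

def ground_distance(h, theta1, phi, n, m):
    """Ground distance from the camera's foot point to the ray through
    pixel row m (row 0 = bottom of the image), for a camera at height h,
    tilt theta1 from vertical, vertical half-field-of-view phi, n rows."""
    # Each pixel row subtends 2*phi / n radians; row m's ray deviates
    # from the optical axis by (m - n/2) of those increments.
    ray_angle = theta1 + (m - n / 2) * (2 * phi / n)
    return h * math.tan(ray_angle)

def pixel_to_metres(h, theta1, phi, n, m):
    """Real-scene length covered by one vertical pixel at row m
    (analogous to the per-pixel distance Y described in the text)."""
    return (ground_distance(h, theta1, phi, n, m + 1)
            - ground_distance(h, theta1, phi, n, m))
```

Note how the per-pixel ground length grows toward the far edge of the field of view, which is why a pixel-by-pixel mapping is more accurate than assigning one fixed scale to the whole image.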
Step S7: and (4) calculating the distance of the left and right movement of the time threshold values in the X and Y directions in the real scene according to the Pythagorean theorem to obtain the movement distance.
Step S8: the target moving speed is obtained from the moving distance obtained in step S7 and the accumulated time obtained in step S5 by using a speed calculation formula (v ═ S/t).
In a second embodiment of the present invention, there is provided a pedestrian speed measurement system including: the device comprises a loading module, a target detection module, a tracking module, a difference value calculation module, a judgment module, a mapping module, a movement distance acquisition module and a target speed acquisition module;
the loading module is used for loading original image data;
the target detection module performs target detection on the original image data and outputs a detection result;
the tracking module performs associated tracking of different frames of targets on the target detection result and outputs the detection result meeting the tracking constraint condition;
the difference value calculating module is used for calculating the pixel difference of the vertex of the detection frame and the time difference between the previous frame and the next frame;
the judgment module judges the accumulated time of the time difference and the pixel difference to ensure that the stored data is within a preset time threshold;
the mapping module maps the pixel differences to actual distances and accumulates them pixel by pixel to obtain the moving distances in the X and Y directions over approximately the time threshold;
the movement distance acquisition module combines the X- and Y-direction distances in the real scene according to the Pythagorean theorem to obtain the movement distance;
and the target speed acquisition module is used for acquiring the target movement speed by adopting a speed calculation formula according to the movement distance and the accumulated time.
In a third embodiment of the invention, there is provided a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform any of the methods as in the first embodiment above.
In a fourth embodiment of the present invention, there is provided a computing device comprising: one or more processors, memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods as described above in the first embodiment.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

Claims (10)

1. A pedestrian speed measuring method, characterized by comprising:
step S1: loading original image data;
step S2: carrying out target detection on the original image data and outputting a detection result;
step S3: performing associated tracking of targets in different frames on target detection results, and outputting detection results meeting tracking constraint conditions;
step S4: calculating the pixel difference of the vertex of the detection frame and the time difference between the previous frame and the next frame;
step S5: judging the accumulated time of the time difference and the pixel difference to ensure that the stored data is within a preset time threshold;
step S6: mapping the pixel differences to actual distances, and accumulating pixel by pixel to obtain the moving distances in the X and Y directions over approximately the time threshold;
step S7: combining the obtained X- and Y-direction distances in the real scene according to the Pythagorean theorem to obtain the movement distance;
step S8: and obtaining the target movement speed by adopting a speed calculation formula according to the movement distance and the accumulated time.
2. The measuring method according to claim 1, wherein in step S1, the original image data is loaded by means of a monocular camera or a video file.
3. The measurement method as set forth in claim 1, wherein in step S2, a YOLOv3 deep neural network model is used to perform multi-object detection and output the detection box category and coordinate information.
4. The measuring method according to claim 1, wherein in step S3, a DeepSORT algorithm is used to realize the association tracking of different frame targets, and the target category and coordinate information obtained by association tracking are output.
5. The measurement method according to claim 1, wherein in the step S4, the specific calculation includes:
step S41: initializing a preset time memory and a pixel difference memory of a detection frame vertex, and decomposing an original motion pixel difference of the detection frame vertex into an X direction and a Y direction;
step S42: calculating the pixel difference of the detection-frame vertices between the previous and current frames in the X direction and storing it in the pixel difference memory;
step S43: calculating the time difference between the two frames and recording it in the time memory;
step S44: calculating the pixel difference of the detection-frame vertices between the previous and current frames in the Y direction and storing it in the pixel difference memory.
6. The measuring method according to claim 1, wherein in step S5, the method for judging the accumulated time comprises: recording the timestamp of the current frame while storing the pixel difference, then judging whether the time difference between the first and last entries in the time memory is greater than the set time threshold; if so, cumulatively summing the pixel differences in the pixel difference memory, entering step S6 to perform the mapping calculation between pixels and actual distance, and deleting the data at the initial position of the time memory and the pixel difference memory; if not, returning to step S4 to recalculate until the condition is met.
7. The measurement method according to claim 1, wherein in step S6, the pixel difference mapping formulas are:
[The three mapping formulas appear only as images in the original (FDA0002842742800000021 through FDA0002842742800000023); they map a pixel index to the real-scene distance in the Y and X directions using the quantities defined below.]
in the formulas: w1 is the distance from the camera's plumb point to the nearest end of the vertical field of view; w2 is the distance from the nearest end of the vertical field of view to the field-of-view center line; w3 is the distance from the farthest end of the vertical field of view to the field-of-view center line; w4 is the distance from the farthest end of the vertical field of view to the extension of the field-of-view center line; h is the camera height; θ1 is the angle between the camera's plumb line and the center line of the camera field of view; n is the number of pixels in the Y direction; m is a constant in the range [1, n-2]; x0 is the actual distance corresponding to the first pixel vertically upward from the center of the camera imaging plane; x1 is the actual distance corresponding to the second pixel vertically upward from the center of the camera imaging plane; Y is the real-scene distance corresponding to each pixel in the vertical direction of the camera imaging plane; h0 is the distance from each pixel to the ground in the vertical direction of the camera imaging plane; θ0 is half the horizontal field angle of the camera; w5 is half the width of the horizontal field of view; and X is the real-scene distance corresponding to each pixel in the horizontal direction of the camera imaging plane.
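Because the three formulas survive only as images, the sketch below reconstructs the general structure of such a mapping under standard ground-plane camera geometry: camera at height h, tilt θ1, an assumed vertical field of view `fov_v` with even per-row angular spacing, and half horizontal field angle θ0. It illustrates how per-pixel real distances grow with distance from the camera; it is not the patent's exact formulas:

```python
import math

def ground_distance(h, theta1, fov_v, n, m):
    """Ground distance from the camera plumb point to the point imaged by
    the m-th pixel row. Assumes even angular spacing of rows and treats
    theta1 as the angle from the plumb line to the first counted row --
    illustrative simplifications."""
    per_row = fov_v / n                     # angle subtended by one pixel row
    return h * math.tan(theta1 + m * per_row)

def y_metres_per_pixel(h, theta1, fov_v, n, m):
    """Real-scene length covered by one vertical pixel step at row m
    (the quantity Y in the claim)."""
    return (ground_distance(h, theta1, fov_v, n, m + 1)
            - ground_distance(h, theta1, fov_v, n, m))

def x_metres_per_pixel(h, theta1, fov_v, n, m, theta0, n_x):
    """Real-scene length per horizontal pixel at row m (the quantity X):
    the half-width of the horizontal field there is slant_range * tan(theta0),
    spread over n_x horizontal pixels."""
    d = ground_distance(h, theta1, fov_v, n, m)
    slant = math.hypot(h, d)                # camera-to-ground-point distance
    return 2 * slant * math.tan(theta0) / n_x
```

As expected from the tangent mapping, rows imaging farther ground cover more metres per pixel, which is why the patent maps each pixel difference individually before accumulating.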
8. A pedestrian speed measurement system, comprising: the device comprises a loading module, a target detection module, a tracking module, a difference value calculation module, a judgment module, a mapping module, a movement distance acquisition module and a target speed acquisition module;
the loading module is used for loading original image data;
the target detection module performs target detection on the original image data and outputs a detection result;
the tracking module performs cross-frame association tracking on the target detection results and outputs the detections that satisfy the tracking constraints;
the difference value calculating module is used for calculating the pixel difference of the vertex of the detection frame and the time difference between the previous frame and the next frame;
the judgment module performs the accumulated-time judgment on the time differences and pixel differences, ensuring that the stored data span stays within the preset time threshold;
the mapping module maps the pixel differences to actual distances, which are then accumulated pixel by pixel to obtain the X- and Y-direction distances moved over approximately the time threshold;
the movement distance acquisition module combines the X- and Y-direction real-scene distances using the Pythagorean theorem to obtain the movement distance;
and the target speed acquisition module is used for acquiring the target movement speed by adopting a speed calculation formula according to the movement distance and the accumulated time.
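The last two modules reduce to two lines of arithmetic; a minimal sketch of the Pythagorean combination followed by the speed calculation v = d / t:

```python
import math

def pedestrian_speed(dist_x_m, dist_y_m, elapsed_s):
    """Movement distance acquisition (Pythagorean theorem) followed by
    target speed acquisition (v = d / t). Inputs are the accumulated
    X- and Y-direction real-scene distances in metres and the
    accumulated time in seconds; returns metres per second."""
    distance = math.hypot(dist_x_m, dist_y_m)  # straight-line ground distance
    return distance / elapsed_s

# e.g. 3 m in X and 4 m in Y over 5 s gives 5 m / 5 s = 1.0 m/s
```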
9. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform any of the methods of claims 1-7.
10. A computing device, comprising: one or more processors, memory, and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 1-7.
CN202011497958.8A 2020-12-17 2020-12-17 Pedestrian speed measuring method, system, medium and computing device Pending CN112541938A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011497958.8A CN112541938A (en) 2020-12-17 2020-12-17 Pedestrian speed measuring method, system, medium and computing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011497958.8A CN112541938A (en) 2020-12-17 2020-12-17 Pedestrian speed measuring method, system, medium and computing device

Publications (1)

Publication Number Publication Date
CN112541938A true CN112541938A (en) 2021-03-23

Family

ID=75019059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011497958.8A Pending CN112541938A (en) 2020-12-17 2020-12-17 Pedestrian speed measuring method, system, medium and computing device

Country Status (1)

Country Link
CN (1) CN112541938A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113096151A (en) * 2021-04-07 2021-07-09 地平线征程(杭州)人工智能科技有限公司 Method and apparatus for detecting motion information of object, device and medium
CN113096151B (en) * 2021-04-07 2022-08-09 地平线征程(杭州)人工智能科技有限公司 Method and apparatus for detecting motion information of object, device and medium
CN114755444A (en) * 2022-06-14 2022-07-15 天津所托瑞安汽车科技有限公司 Target speed measuring method, target speed measuring device, electronic apparatus, and storage medium
CN114966090A (en) * 2022-06-15 2022-08-30 南京航空航天大学 Ship video speed measurement method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination