CN114120168A - Target running distance measuring and calculating method, system, equipment and storage medium - Google Patents

Target running distance measuring and calculating method, system, equipment and storage medium

Info

Publication number
CN114120168A
CN114120168A (application CN202111204527.2A)
Authority
CN
China
Prior art keywords
target
running distance
monitoring
target tracking
tracking result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111204527.2A
Other languages
Chinese (zh)
Inventor
杨勰
马贤忠
姚成祥
项伟
孙太一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Lota Information Technology Co ltd
Original Assignee
Shanghai Lota Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Lota Information Technology Co ltd filed Critical Shanghai Lota Information Technology Co ltd
Priority to CN202111204527.2A priority Critical patent/CN114120168A/en
Publication of CN114120168A publication Critical patent/CN114120168A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a method, a system, a device and a storage medium for measuring and calculating a target running distance. According to the technical scheme provided by the embodiment of the application, video images of a monitored area shot by multiple cameras are obtained, the cameras shooting the monitored area from different viewing angles; monitoring-target detection and tracking are performed on each video image, generating a first target tracking result corresponding to each camera; the first target tracking results are fused into a second target tracking result of the monitored target; and the running distance of the monitored target in the monitored area is calculated based on the coordinate data contained in the second target tracking result. With these technical means, target movement monitoring can adapt to different scenes, the input cost of running distance measurement is reduced, and the problem of the measurement being limited by the monitored scene is avoided. Moreover, fusing the tracking results of multiple cameras improves the accuracy and reliability of the measured running distance.

Description

Target running distance measuring and calculating method, system, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of computer vision, in particular to a method, a system, equipment and a storage medium for measuring and calculating a target running distance.
Background
At present, target running distance measurement is applied in fields such as sports events. For example, in a football match, the running distance of a player on the pitch can be monitored and used as a basis for technical analysis of the player's physical condition, technical characteristics, on-field activity and tactical execution, so that objective and accurate analysis results can be obtained. When the target running distance is calculated, the usual approach is to collect target motion data with a sensor and then determine the running distance from those data, so that the target's activity can be directly reflected by its running distance.
However, this traditional approach of calculating the running distance from sensor-collected motion data has a relatively high investment cost, is easily limited by the monitoring scene, is difficult to generalize well, and leaves the measurement lacking in flexibility.
Disclosure of Invention
The embodiments of the present application provide a method, a system, a device and a storage medium for measuring and calculating a target running distance, which are suitable for measuring the running distance in a variety of scenes, reduce the input cost of the measurement, and solve the technical problems that conventional target running distance measurement is easily limited by the monitoring scene and lacks flexibility.
In a first aspect, an embodiment of the present application provides a target running distance calculating method, including:
acquiring video images of a monitored area shot by each camera, wherein the cameras shoot the monitored area from different viewing angles;
performing monitoring-target detection and tracking on the basis of each video image, and generating a first target tracking result corresponding to each camera;
fusing each first target tracking result into a second target tracking result of the monitoring target;
and calculating the running distance of the monitored target in the monitored area based on the coordinate data contained in the second target tracking result.
In a second aspect, an embodiment of the present application provides a target running distance estimation system, including:
an acquisition module, configured to acquire video images of a monitored area shot by each camera, wherein the cameras shoot the monitored area from different viewing angles;
the tracking module is used for detecting and tracking a monitoring target based on each video image and generating a first target tracking result corresponding to each camera;
the fusion module is used for fusing the first target tracking results into second target tracking results of the monitoring target;
and the calculating module is used for calculating the running distance of the monitored target in the monitored area based on the coordinate data contained in the second target tracking result.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a memory and one or more processors;
the memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, causing the one or more processors to implement the target running distance measuring method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a storage medium containing computer-executable instructions for performing the target running distance estimation method according to the first aspect when executed by a computer processor.
According to the embodiment of the application, video images of the monitored area shot by each camera are obtained, the cameras shooting the monitored area from different viewing angles; monitoring-target detection and tracking are performed on each video image, generating a first target tracking result corresponding to each camera; the first target tracking results are fused into a second target tracking result of the monitored target; and the running distance of the monitored target in the monitored area is calculated based on the coordinate data contained in the second target tracking result. By measuring the running distance from the fused multi-camera tracking results, target activity monitoring can adapt to different scenes, the input cost of the measurement is reduced, and the problem of the measurement being limited by the monitored scene is avoided. Moreover, fusing the tracking results of multiple cameras improves the accuracy and reliability of the measured running distance.
Drawings
FIG. 1 is a flow chart of a method for measuring a target running distance according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of fusion of target tracking results in an embodiment of the present application;
FIG. 3 is another flowchart of fusion of target tracking results in the embodiment of the present application;
FIG. 4 is a flow chart of a running distance estimation based on a target tracking result in an embodiment of the present application;
FIG. 5 is a schematic view of a target running distance of an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a target running distance estimation system according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, specific embodiments of the present application will be described in detail with reference to the accompanying drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some but not all of the relevant portions of the present application are shown in the drawings. Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
By fusing the target tracking results of multiple cameras and measuring the target running distance from the fused results, the running distance can be calculated from ordinary camera surveillance video, target activity monitoring can adapt to different monitoring scenes, the input cost of the monitoring is reduced, and its flexibility and universality are improved. Traditional running-distance measurement generally requires fitting the target with a sensor or shooting with a thermal imaging camera. For example, in a professional football stadium, a special camera with a thermal imaging function collects video of the pitch, and the data are finally displayed through dedicated analysis software. However, such equipment is expensive and difficult to use in amateur or recreational match settings. On this basis, the target running distance measuring method of the present application is proposed to solve the technical problems that existing target activity monitoring is easily limited by the monitoring scene and lacks flexibility.
Embodiment:
Fig. 1 is a flowchart of a target running distance measuring method according to an embodiment of the present disclosure. The method of this embodiment may be performed by a target running distance measuring device, which may be implemented in software and/or hardware and may be formed of one physical entity or of two or more physical entities. Generally, the target running distance measuring device is a computing device such as a server host or a computer.
The following description will be given taking the target running distance measuring apparatus as an example of a main body that performs the target running distance measuring method. Referring to fig. 1, the target running distance measuring method specifically includes:
s110, video images of the monitored areas shot by the cameras are obtained, and the cameras shoot corresponding to different shooting visual angles of the monitored areas.
When the target running distance is measured, the monitored area is shot by multiple cameras to collect video images containing the monitored target. The cameras are set up at different positions around the monitored area according to actual shooting needs, so that the area is captured from different viewing angles. Depending on requirements, each camera may cover only a sub-region of the monitored area, or may capture the whole area from its own viewing angle. Taking a football match as an example, panoramic video images of the monitored area are collected by cameras installed at the four corners of the pitch, and target activity monitoring is then performed on these images. Optionally, the number and positions of the cameras can be chosen to suit actual needs. For example, in addition to the four corner cameras, cameras can be installed along the four sides of the pitch to add viewing angles of the monitored area and further improve the monitoring effect.
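As a minimal illustration of this acquisition step (not part of the patent text), the following Python sketch grabs frames from several cameras with OpenCV; the stream addresses are hypothetical placeholders.

```python
# Illustrative sketch only: frame grabbing from several cameras covering the
# monitored area from different viewing angles. Stream URLs are hypothetical.
import cv2

CAMERA_SOURCES = [
    "rtsp://192.168.1.11/stream",  # corner camera 1 (placeholder address)
    "rtsp://192.168.1.12/stream",  # corner camera 2
    "rtsp://192.168.1.13/stream",  # corner camera 3
    "rtsp://192.168.1.14/stream",  # corner camera 4
]

captures = [cv2.VideoCapture(src) for src in CAMERA_SOURCES]

def grab_frames():
    """Return one frame per camera (None for a camera whose read failed)."""
    frames = []
    for cap in captures:
        ok, frame = cap.read()
        frames.append(frame if ok else None)
    return frames
```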
By acquiring the video images collected by the cameras, the embodiment of the application identifies and tracks the monitored target in the images to determine its coordinate position in the monitored area at different time points, from which the running distance and other running data of the target within the area can be calculated. Taking a football match as an example, video of the whole match is collected, a designated player is detected and tracked in the video, the player's coordinate positions on the pitch at different time points during the match are determined, and the player's running data within the monitored area are calculated. Based on these running data, technical analysis of the player's physical condition, technical characteristics, on-field activity and tactical execution can be carried out, achieving a more scientific and accurate analysis of football technique.
And S120, monitoring target detection and tracking are carried out based on the video images, and first target tracking results corresponding to the cameras are generated.
Based on the obtained video images, the embodiment of the application first detects and tracks the monitored target in each video image separately. Before that, a target detection model is constructed in advance. Taking the detection of football players as an example, to train a player detection model, image information of players (such as faces, or faces together with whole-body images) is collected as a training data set, and rectangular player bounding boxes are manually annotated one by one as labelling data. A neural network structure and a loss function for the player detection model are then designed, and the network parameters are trained with the labelled data. After training is complete, the model structure and parameters are saved. The detection model may be YOLOv5 or another target detection model; the embodiments of the present application do not fix the specific detection model, which is not described in detail here.
The trained target detection model is deployed on the target running distance measuring device of the embodiment of the application. The video images collected by the cameras are input into the model, and the detection boxes of the players in the images are output through model computation.
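For illustration only, a pretrained YOLOv5 model can stand in for the custom player detector described above; this sketch assumes the public ultralytics/yolov5 hub model and filters to the COCO "person" class, whereas the patent trains its own model.

```python
# Hedged sketch: person detection with a pretrained YOLOv5 model standing in
# for the patent's custom-trained player detector.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def detect_players(frame_bgr):
    """Return (x1, y1, x2, y2, confidence) boxes for the 'person' class."""
    results = model(frame_bgr[..., ::-1])   # OpenCV BGR -> RGB
    boxes = results.xyxy[0].cpu().numpy()   # columns: x1, y1, x2, y2, conf, cls
    return [tuple(b[:5]) for b in boxes if int(b[5]) == 0]
```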
Further, for tracking the players, a Kalman-filter tracker can be used. The player detection boxes of each video frame are taken as input, each detection box is associated with the existing Kalman trackers by a similarity measure (for example, the Euclidean distance between detection-box centre points), and the minimum-cost assignment is computed with the Hungarian algorithm. The matching result divides the boxes into three cases: successful pairings, unmatched detection boxes and unmatched tracking boxes; the tracking state is updated separately for each case, and the final tracking result is output. In addition, according to actual detection requirements, a deep learning algorithm such as DeepSORT can be used: on top of the Euclidean distance between detection and tracking boxes, it further extracts visual features of the boxes as a similarity basis for comparison, yielding a more accurate tracking result. The embodiments of the present application do not fix the specific tracking model, which is not repeated here.
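A minimal sketch of the association step just described, assuming detection and predicted track centres are already available: the cost is the Euclidean distance between centre points and the minimum-cost assignment is solved with the Hungarian algorithm (scipy's linear_sum_assignment). The gating threshold is an assumed value, and the Kalman prediction/update steps are omitted.

```python
# Sketch of detection-to-tracker association by Euclidean distance plus the
# Hungarian (minimum-cost assignment) algorithm, with a distance gate.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_centers, det_centers, max_dist=50.0):
    """Return (matches, unmatched_track_ids, unmatched_detection_ids)."""
    if len(track_centers) == 0 or len(det_centers) == 0:
        return [], list(range(len(track_centers))), list(range(len(det_centers)))
    t = np.asarray(track_centers, float)
    d = np.asarray(det_centers, float)
    cost = np.linalg.norm(t[:, None, :] - d[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)    # Hungarian algorithm
    matches = []
    un_t, un_d = set(range(len(t))), set(range(len(d)))
    for r, c in zip(rows, cols):
        if cost[r, c] <= max_dist:              # reject implausible pairings
            matches.append((r, c))
            un_t.discard(r)
            un_d.discard(c)
    return matches, sorted(un_t), sorted(un_d)
```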
Optionally, during the detection and tracking of the monitored target, the embodiment of the application further performs identity re-identification correction on the detection and tracking result based on a re-identification algorithm. This correction mainly addresses the instability of long-term target tracking, rectifying possible detection and tracking errors through re-identification. It can be understood that, during a football match, a tracked player may temporarily become undetectable, for example due to occlusion, so the detection and tracking result needs to be corrected with a re-identification model. Before that, the re-identification model is constructed in advance: in the training stage, a large data set of human body images in various postures, labelled with identity information, is collected, a neural network structure and a loss function are designed, and a feature extraction network is trained. As with the deep learning models for detection and tracking, the re-identification model's network structure and parameters are saved after training, so it can be deployed repeatedly without retraining.
Further, when re-identifying player identities, an identity feature library of all participants in the match is first established. Personal pictures provided by the participants, or pictures obtained by detecting, cropping and labelling frames extracted from the match video, are passed through the neural network to extract an identity feature vector for each player (each picture yields one feature vector, and each player may have several). Then, with a player detection box output by the detection model as input, the re-identification model extracts its feature vector, compares it for similarity against the feature vectors in the participant identity library, determines the identity of the current player and corrects the tracking result. Generally, to keep the whole process efficient, re-identification need not be performed frame by frame; a correction every several frames can be chosen according to the actual effect. In this way the detection and tracking result is further refined by re-identification correction, making target activity monitoring more accurate and stable.
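As a hedged sketch of the gallery comparison (the re-identification network itself is assumed to exist and is not shown), the similarity test against the per-player feature library could use cosine similarity; the names and similarity measure are illustrative assumptions.

```python
# Sketch: match a query embedding against the participant identity library
# by cosine similarity; each player may own several feature vectors.
import numpy as np

def identify(query_vec, gallery):
    """gallery: dict mapping player_id -> list of 1-D feature vectors."""
    q = query_vec / np.linalg.norm(query_vec)
    best_id, best_sim = None, -1.0
    for player_id, vecs in gallery.items():
        for v in vecs:
            sim = float(np.dot(q, v / np.linalg.norm(v)))  # cosine similarity
            if sim > best_sim:
                best_id, best_sim = player_id, sim
    return best_id, best_sim
```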
In the embodiment of the application, target detection and tracking are performed with the detection model and the tracking model on the video image acquired by each camera, producing a corresponding detection result; the tracking result is defined as a first target tracking result, which contains the coordinate data of the monitored target in the monitored area at different time points. In this way, activity monitoring of the target within the monitored area is achieved and its running distance can be calculated.
And S130, fusing the first target tracking results into a second target tracking result of the monitoring target.
Based on the first target tracking results of all the video images, the method adopts a target tracking result fusion mode to fuse all the first target tracking results together to generate a second target tracking result, and the second target tracking result is used for calculating the target running distance.
Specifically, the coordinate data contained in each first target tracking result are fused and mapped onto a top view of the monitored area to generate the second target tracking result of the monitored target. To fuse the first target tracking results of the multiple cameras, the coordinate data contained in each result must be mapped onto a single, common top view of the monitored area. Taking the tracking of players in a football match as an example, the mapping from a camera image to the top view can be expressed as a 3x3 homography matrix P, which can be solved from four coordinate point correspondences between the actual pitch image and the two-dimensional top view. The midpoint (x, y) of the bottom edge of a player's two-dimensional tracking box in the video image is selected to represent the player's position on the pitch, where x and y are coordinates in the pixel coordinate system. The coordinate is rewritten in the homogeneous form Q = (x, y, 1)^T, and the player's position mapped onto the top view is calculated as follows:

(x', y', z')^T = P Q;

(x'', y'') = (x'/z', y'/z');

where (x', y', z')^T is the homogeneous coordinate obtained by applying P to Q, and (x'', y'') is the position of the player mapped onto the top view, i.e. the second target tracking result.
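For illustration, the homography can be estimated from four correspondences and applied exactly as in the formulas above; the point values and top-view size here are hypothetical, not taken from the patent.

```python
# Sketch: solve the 3x3 matrix P from four image/top-view correspondences,
# then map a bottom-centre point (x, y) via Q = (x, y, 1)^T and normalise.
import cv2
import numpy as np

img_pts = np.float32([[102, 540], [1810, 548], [1540, 990], [380, 1002]])  # pitch corners in the image (assumed)
top_pts = np.float32([[0, 0], [1050, 0], [1050, 680], [0, 680]])           # top view, 1050x680 (assumed)
P = cv2.getPerspectiveTransform(img_pts, top_pts)

def to_top_view(x, y):
    """Return (x'', y'') = (x'/z', y'/z') for pixel (x, y)."""
    xp, yp, zp = P @ np.array([x, y, 1.0])
    return xp / zp, yp / zp
```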
Optionally, when fusing the first target tracking results into the second target tracking result, the embodiment of the application further performs de-distortion processing on the coordinate data contained in each first target tracking result based on the camera model and the distortion model, and fuses the de-distorted coordinate data to generate the second target tracking result of the monitored target. It can be understood that video images generally carry some lens distortion; to ensure the accuracy of the tracking result, de-distortion is performed according to the camera model and distortion model, expressed as Q' = f(Q): the original homogeneous coordinate Q is replaced by the de-distorted coordinate Q', which is then mapped onto the top view. De-distortion further improves the accuracy of the tracking result and ensures that the finally computed running data accurately reflect the target's activity.
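A minimal sketch of the de-distortion step Q' = f(Q), assuming OpenCV's pinhole camera and radial/tangential distortion models; the intrinsic matrix and distortion coefficients below are hypothetical calibration values.

```python
# Sketch: undistort a pixel coordinate before mapping it onto the top view.
import cv2
import numpy as np

K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])                  # assumed intrinsics
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])    # assumed k1, k2, p1, p2, k3

def undistort_point(x, y):
    """Return the de-distorted pixel coordinate Q' for Q = (x, y)."""
    pts = np.array([[[x, y]]], dtype=np.float32)
    out = cv2.undistortPoints(pts, K, dist, P=K)  # P=K keeps pixel units
    return float(out[0, 0, 0]), float(out[0, 0, 1])
```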
It should be noted that, since the video images acquired from different viewing angles contain overlapping portions, the first target tracking results obtained from the individual video images also contain overlapping portions. For this reason, the present application fuses the first target tracking results by setting an effective shooting area for each camera.
As shown in fig. 2, the target tracking result fusion process based on the effective shooting area includes:
s1301, selecting effective coordinate data from the coordinate data of the first target tracking result according to effective shooting areas preset by the cameras, and dividing the effective shooting areas into the cameras in advance according to top views of monitoring areas;
and S1302, mapping the effective coordinate data to each effective shooting area on the top view respectively to generate a second target tracking result of the monitoring target.
The top view of the monitored area is divided into several parts in advance, each covered by a single camera; the part a camera is responsible for shooting is its effective shooting area. For the video image collected by each camera, target detection and tracking are performed only within its effective shooting area, the coordinate data of the monitored target inside that area are determined, and these are defined as effective coordinate data. Effective coordinate data are determined for each camera's video image in this way, and finally the effective coordinate data are combined to complete the fusion of the target tracking results and obtain the corresponding second target tracking result.
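As an assumed illustration of effective-area filtering (the patent does not prescribe region shapes), each camera could be assigned one quadrant of the top view and keep only the coordinates inside its polygon:

```python
# Sketch: keep only the mapped coordinates falling inside a camera's
# pre-assigned effective shooting area on the top view.
import cv2
import numpy as np

# Hypothetical split of a 1050x680 top view into four quadrants.
EFFECTIVE_AREAS = {
    "cam1": np.int32([[0, 0], [525, 0], [525, 340], [0, 340]]).reshape(-1, 1, 2),
    "cam2": np.int32([[525, 0], [1050, 0], [1050, 340], [525, 340]]).reshape(-1, 1, 2),
    "cam3": np.int32([[0, 340], [525, 340], [525, 680], [0, 680]]).reshape(-1, 1, 2),
    "cam4": np.int32([[525, 340], [1050, 340], [1050, 680], [525, 680]]).reshape(-1, 1, 2),
}

def effective_coordinates(camera_id, coords):
    """Keep only (x, y) points inside this camera's effective area."""
    poly = EFFECTIVE_AREAS[camera_id]
    return [(x, y) for x, y in coords
            if cv2.pointPolygonTest(poly, (float(x), float(y)), False) >= 0]
```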
Optionally, in an embodiment, the fusion of the first target tracking result may also adopt a repeated coordinate data screening manner. Referring to fig. 3, the target tracking result fusion process based on repeated coordinate data screening includes:
s1303, mapping coordinate data contained in each first target tracking result to a top view of the monitoring area;
and S1304, screening repeated coordinate data on the top view according to the distance between the coordinates and the camera and/or the coordinate confidence score, and generating a second target tracking result of the monitoring target.
It can be understood that, since the monitored areas shot by the cameras overlap with each other (each camera may even capture the panoramic monitored area), the several first target tracking results obtained from the video images will contain coordinate data for the same monitored target at the same time point. Affected by the different viewing angles, the coordinate data obtained for the same target at the same time point from different video images may differ. After the coordinate data of the several first target tracking results are mapped onto the top view, the coordinate data belonging to the same target at the same time point must be merged. Merging can be screened by the distance between a coordinate and its camera and/or by the coordinate confidence score. If one coordinate datum is closest to its corresponding camera while the others are relatively far from theirs, that datum is selected as the effective coordinate data and the other data at the same time point are screened out; proceeding in this way, all repeated coordinate data can be merged, yielding the second target tracking result of the monitored target. The coordinate confidence score represents the reliability of the target detection box corresponding to the coordinate: the higher the similarity between the detection box and the pre-stored target image, the higher the confidence score. Accordingly, the confidence scores of mutually repeated coordinate data are determined, the datum with the highest score is selected as effective, and the others at the same time point are screened out; again all repeated data can thus be merged into the second target tracking result. Optionally, according to actual requirements, several factors such as the distance to the camera and the confidence score can be combined in a weighted calculation, with the effective coordinate data determined from the weighted result. The embodiments of the present application do not fix the screening and merging method for repeated coordinate data, which is not repeated here.
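A small sketch of one possible merging rule, combining camera proximity and confidence in a weighted score as the paragraph above suggests; the weights and scaling are assumptions.

```python
# Sketch: among duplicate coordinates for the same target at the same time
# point, keep the candidate with the best weighted score.
import math

def pick_coordinate(candidates, w_dist=0.5, w_conf=0.5):
    """candidates: dicts with keys 'x', 'y', 'cam_xy' and 'conf'."""
    def score(c):
        d = math.dist((c["x"], c["y"]), c["cam_xy"])     # distance to its camera
        return w_conf * c["conf"] - w_dist * d / 100.0   # nearer and surer wins
    return max(candidates, key=score)
```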
Optionally, in an embodiment, the target running distance measuring device further takes a set number of video frames as a sliding window and performs inter-frame smoothing of the monitored target's coordinate trajectory in the second target tracking result with a weighted moving average method. To suppress detection jitter and the slight jumps introduced by multi-camera fusion, each player's movement trajectory is smoothed between frames: a fixed number of video frames is chosen as a sliding window and the trajectory is smoothed with a weighted moving average. It can be understood that, because the detection box changes with the human posture, directly taking the bottom-centre coordinate of the box as the tracked position can cause position jitter, so trajectory smoothing is added in the embodiment of the application to obtain a more reliable tracking result. In the smoothing process, the second target tracking results are first input in sequence; taking a sampling window of 20 frames as an example, the data of the first 10 and last 10 frames of the video are skipped. Then, cycling over all monitored targets in the tracking result, the coordinate data of each target over the nearest 20 frames are obtained in turn, and the median of the bottom-centre coordinates over those 20 frames, along the horizontal and vertical image axes, is taken as the smoothed result for the target box. The trajectory of each monitored target in each frame is cycled in turn and the above steps are repeated, finally completing the trajectory smoothing of the monitored targets.
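A minimal sketch of the windowed smoothing step described above, taking the per-axis median over a 20-frame window of a target's bottom-centre points; the first and last half-windows are left untouched, mirroring the skipped frames.

```python
# Sketch: sliding-window trajectory smoothing for one monitored target.
import numpy as np

def smooth_track(points, window=20):
    """points: (N, 2) array of per-frame (x, y); returns a smoothed copy."""
    pts = np.asarray(points, dtype=float)
    out = pts.copy()
    half = window // 2
    for i in range(half, len(pts) - half):   # skip first/last half-window
        out[i] = np.median(pts[i - half:i + half], axis=0)
    return out
```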
And S140, calculating the running distance of the monitored target in the monitored area based on the coordinate data contained in the second target tracking result.
Finally, based on the second target tracking result, each coordinate datum it contains is determined and the running distance of the monitored target in the monitored area is calculated from the coordinate data. It can be understood that the coordinate data on the top view identify the position of the monitored target at different time points; two coordinates consecutive in the order of their detection timestamps are then taken to determine the separation distance between them. By analogy, the separation distances of all the coordinate data are determined in sequence, and the running distance of the monitored target in the monitored area is calculated.
Referring to fig. 4, the running distance estimation process based on the target tracking result includes:
s1401, selecting corresponding coordinate data on a top view of each appointed time interval point, and calculating a coordinate interval distance according to the coordinate data of adjacent appointed time interval points;
and S1402, overlapping the interval distances of the coordinates to obtain the running distance of the monitored target in the monitored area.
A suitable time interval is selected; starting from the first coordinate datum and proceeding by the set interval, the coordinate data corresponding to the specified time interval points are selected on the top view in timestamp order, and the coordinate separation distance between the data of two adjacent interval points is calculated. By analogy, superimposing all the separation distances gives the running distance of the monitored target in the monitored area. It should be noted that, since the separation distances are computed on the top view, the final result is converted into the actual running distance according to the scale between the top view and the real monitored area.
For example, referring to fig. 5, to calculate a player's running distance over a certain period of a football match, the coordinate data of the player at positions 1, 2, 3 and 4 shown in fig. 5 are selected according to the specified time interval points; the separation distances between positions 1 and 2, 2 and 3, and 3 and 4 are calculated respectively, superimposed, and converted into an actual distance, giving the player's running distance over that period.
It should be noted that the time interval points can be made denser or sparser according to actual needs. For the most accurate result, the interval can be set to the time between two adjacent detection frames in the tracking result, i.e. the coordinate data of all detected frames participate in the measurement; this yields the most accurate running distance and further optimizes the target activity monitoring effect.
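As an illustrative sketch of the summation just described, the sampled top-view coordinates can be accumulated and scaled to metres; the scale value is an assumption, since it depends on the top view chosen.

```python
# Sketch: sum consecutive separations over the sampled track and convert
# to an actual distance with the top-view scale.
import math

METERS_PER_PIXEL = 0.10  # assumed top-view scale

def running_distance(samples):
    """samples: time-ordered list of (x, y) top-view coordinates."""
    total_px = sum(math.dist(samples[i], samples[i + 1])
                   for i in range(len(samples) - 1))
    return total_px * METERS_PER_PIXEL
```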
In an embodiment, based on the second target tracking result, the embodiment of the application further determines the activity frequency of the monitored target in each preset sub-area of the monitored area. Several sub-areas are divided in advance on the top view of the monitored area; the number of coordinates falling in each preset sub-area is counted on the top view, and the activity frequency of the target in each sub-area is calculated from these counts. The areas in which the target is most active can be determined from the activity frequency, allowing further analysis of the target's activity.
Optionally, in the embodiment of the present application, the specified time interval points of different sub-areas are set according to the target's activity frequency in those sub-areas, and the running distance in each sub-area is measured with its own interval points. Taking a football match as an example, for areas where a player is highly active, the time interval points are set more densely, so the player's running distance in that sub-area is measured more accurately; for sub-areas with low activity, larger intervals can be set to reduce the amount of computation and improve efficiency. Measurement efficiency is thereby preserved while the accuracy of the running distance is adaptively improved, optimizing the measurement effect.
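For illustration, the per-sub-area counting can be done with a 2D histogram over the top view; the grid resolution and top-view size are assumptions.

```python
# Sketch: count a target's top-view coordinates per pre-divided sub-area.
import numpy as np

def activity_frequency(coords, width=1050, height=680, nx=6, ny=4):
    """coords: (N, 2) array of (x, y); returns an (ny, nx) grid of counts."""
    xs, ys = np.asarray(coords, float).T
    counts, _, _ = np.histogram2d(ys, xs, bins=[ny, nx],
                                  range=[[0, height], [0, width]])
    return counts
```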
In the above, video images of the monitored area shot by each camera are obtained, the cameras shooting the monitored area from different viewing angles; monitoring-target detection and tracking are performed on each video image, generating a first target tracking result corresponding to each camera; the first target tracking results are fused into a second target tracking result of the monitored target; and the running distance of the monitored target in the monitored area is calculated based on the coordinate data contained in the second target tracking result. By measuring the running distance from the fused multi-camera tracking results, target activity monitoring can adapt to different scenes, the input cost of the measurement is reduced, and the problem of the measurement being limited by the monitored scene is avoided. Moreover, fusing the tracking results of multiple cameras improves the accuracy and reliability of the measured running distance.
Based on the above embodiments, fig. 6 is a schematic structural diagram of a target running distance measuring system provided by the present application. Referring to fig. 6, the target running distance measuring system provided by this embodiment specifically includes: an acquisition module 21, a tracking module 22, a fusion module 23 and a calculating module 24.
The acquisition module 21 is configured to acquire video images of the monitored area shot by each camera, the cameras shooting the monitored area from different viewing angles;
the tracking module 22 is configured to perform monitoring target detection and tracking based on each video image, and generate a first target tracking result corresponding to each camera;
the fusion module 23 is configured to fuse the first target tracking results into a second target tracking result of the monitoring target;
the calculating module 24 is configured to calculate a running distance of the monitored target in the monitored area based on the coordinate data included in the second target tracking result.
Specifically, the tracking module 22 includes:
and the identity re-identification unit is used for carrying out identity re-identification correction on the detection tracking result of the monitored target based on an identity re-identification algorithm.
Specifically, the fusion module 23 includes:
and the mapping unit is used for fusing and mapping the coordinate data contained in each first target tracking result to the top view of the monitoring area to generate a second target tracking result of the monitoring target.
Specifically, the mapping unit selects effective coordinate data from the coordinate data of the first target tracking results according to the effective shooting area preset for each camera, the effective shooting areas having been assigned to the cameras in advance on the top view of the monitored area, and maps the effective coordinate data onto the respective effective shooting areas on the top view to generate the second target tracking result of the monitored target.

Alternatively, the mapping unit maps the coordinate data contained in each first target tracking result onto the top view of the monitored area, and screens the repeated coordinate data on the top view according to the distance between a coordinate and its camera and/or the coordinate confidence score to generate the second target tracking result of the monitored target.
Specifically, the fusion module 23 further includes:
and the distortion processing unit is used for carrying out distortion removal processing on the coordinate data contained in each first target tracking result based on the camera model and the distortion model, and fusing the coordinate data subjected to distortion removal processing to generate a second target tracking result of the monitoring target.
And the smoothing processing unit is used for performing interframe smoothing processing on the coordinate moving track of the monitoring target in the second target tracking result by using a weighted moving average method by taking a set number of video frames as a sliding window.
The calculating module 24 is configured to select the corresponding coordinate data on the top view at each specified time interval point, calculate the coordinate separation distance from the coordinate data of adjacent specified time interval points, and superimpose the separation distances to obtain the running distance of the monitored target in the monitored area.
In the above, video images of the monitored area shot by each camera are obtained, the cameras shooting the monitored area from different viewing angles; monitoring-target detection and tracking are performed on each video image, generating a first target tracking result corresponding to each camera; the first target tracking results are fused into a second target tracking result of the monitored target; and the running distance of the monitored target in the monitored area is calculated based on the coordinate data contained in the second target tracking result. By measuring the running distance from the fused multi-camera tracking results, target activity monitoring can adapt to different scenes, the input cost of the measurement is reduced, and the problem of the measurement being limited by the monitored scene is avoided. Moreover, fusing the tracking results of multiple cameras improves the accuracy and reliability of the measured running distance.
The target running distance measuring and calculating system provided by the embodiment of the application can be used for executing the target running distance measuring and calculating method provided by the embodiment, and has corresponding functions and beneficial effects.
On the basis of the above embodiments, an embodiment of the present application further provides an electronic device. Referring to fig. 7, the electronic device includes: a processor 31, a memory 32, a communication module 33, an input device 34 and an output device 35. The memory 32, as a computer-readable storage medium, can be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the target running distance measuring method of any embodiment of the present application (for example, the acquisition, tracking, fusion and calculating modules of the target running distance measuring system). The communication module 33 is used for data transmission. The processor 31 executes the software programs, instructions and modules stored in the memory, thereby running the various functional applications and data processing of the device, i.e. implementing the target running distance measuring method described above. The input device 34 can be used to receive entered numeric or character information and to generate key-signal inputs related to user settings and function control of the device. The output device 35 may include a display device such as a display screen. The electronic device provided by this embodiment can be used to perform the target running distance measuring method provided by the above embodiment, and has the corresponding functions and beneficial effects.
On the basis of the above embodiments, the present application also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a target running distance measuring method; the storage medium may be any of various types of memory or storage devices. Of course, the computer-executable instructions of the storage medium provided by the embodiments of the present application are not limited to the target running distance measuring method described above, and may also perform related operations in the target running distance measuring method provided by any embodiment of the present application.
The foregoing is considered as illustrative of the preferred embodiments of the invention and the technical principles employed. The present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the claims.

Claims (10)

1. A target running distance measuring method, comprising:
acquiring video images of a monitored area shot by each camera, wherein the cameras shoot the monitored area from different viewing angles;
performing monitoring-target detection and tracking on the basis of each video image, and generating a first target tracking result corresponding to each camera;
fusing each first target tracking result into a second target tracking result of the monitoring target;
and calculating the running distance of the monitored target in the monitored area based on the coordinate data contained in the second target tracking result.
2. The target running distance calculating method according to claim 1, wherein the fusing each of the first target tracking results into a second target tracking result of the monitoring target includes:
and fusing and mapping coordinate data contained in each first target tracking result to a top view of the monitoring area to generate a second target tracking result of the monitoring target.
3. The target running distance measuring and calculating method according to claim 2, wherein the generating of the second target tracking result of the monitored target by fusion mapping of the coordinate data included in each of the first target tracking results to the top view of the monitored area comprises:
selecting effective coordinate data from the coordinate data of the first target tracking results according to the effective shooting area preset for each camera, wherein the effective shooting areas are assigned to the cameras in advance on the top view of the monitored area;
and mapping the effective coordinate data to each effective shooting area on the top view respectively to generate a second target tracking result of the monitoring target.
4. The target running distance measuring and calculating method according to claim 2, wherein the generating of the second target tracking result of the monitored target by fusion mapping of the coordinate data included in each of the first target tracking results to the top view of the monitored area comprises:
mapping coordinate data contained in each first target tracking result to a top view of the monitoring area;
and screening the repeated coordinate data on the top view according to the distance between the coordinate and the camera and/or the coordinate confidence score, and generating a second target tracking result of the monitoring target.
5. The target running distance estimation method according to claim 2, wherein the estimation of the running distance of the monitoring target in the monitoring area based on the coordinate data included in the second target tracking result includes:
selecting the corresponding coordinate data on the top view at each specified time interval point, and calculating the coordinate separation distance from the coordinate data of adjacent specified time interval points;

and superimposing the coordinate separation distances to obtain the running distance of the monitored target in the monitored area.
6. The target running distance calculating method according to claim 1, wherein the fusing each of the first target tracking results into a second target tracking result of the monitoring target further includes:
and carrying out distortion removal processing on the coordinate data contained in each first target tracking result based on the camera model and the distortion model, and fusing the coordinate data subjected to distortion removal processing to generate a second target tracking result of the monitoring target.
7. The target running distance calculating method according to claim 1, further comprising, after the fusing each of the first target tracking results into a second target tracking result of the monitor target:
and taking a set number of video frames as a sliding window, and performing interframe smoothing processing on the coordinate moving track of the monitoring target in the second target tracking result by adopting a weighted moving average method.
8. An object running distance estimation system, comprising:
an acquisition module, configured to acquire video images of a monitored area shot by each camera, wherein the cameras shoot the monitored area from different viewing angles;
the tracking module is used for detecting and tracking a monitoring target based on each video image and generating a first target tracking result corresponding to each camera;
the fusion module is used for fusing the first target tracking results into second target tracking results of the monitoring target;
and the calculating module is used for calculating the running distance of the monitored target in the monitored area based on the coordinate data contained in the second target tracking result.
9. An electronic device, comprising:
a memory and one or more processors;
the memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, causing the one or more processors to implement the target running distance measuring method according to any one of claims 1-7.
10. A storage medium containing computer-executable instructions for performing the target running distance estimation method according to any one of claims 1-7 when executed by a computer processor.
CN202111204527.2A 2021-10-15 2021-10-15 Target running distance measuring and calculating method, system, equipment and storage medium Pending CN114120168A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111204527.2A CN114120168A (en) 2021-10-15 2021-10-15 Target running distance measuring and calculating method, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111204527.2A CN114120168A (en) 2021-10-15 2021-10-15 Target running distance measuring and calculating method, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114120168A (en) 2022-03-01

Family

ID=80375703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111204527.2A Pending CN114120168A (en) 2021-10-15 2021-10-15 Target running distance measuring and calculating method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114120168A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024027634A1 (en) * 2022-08-01 2024-02-08 京东方科技集团股份有限公司 Running distance estimation method and apparatus, electronic device, and storage medium
CN116309686A (en) * 2023-05-19 2023-06-23 北京航天时代光电科技有限公司 Video positioning and speed measuring method, device and equipment for swimmers and storage medium
CN116469040A (en) * 2023-06-12 2023-07-21 南昌大学 Football player tracking method based on video and sensor perception fusion
CN116469040B (en) * 2023-06-12 2023-08-29 南昌大学 Football player tracking method based on video and sensor perception fusion


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination