CN107481270A - Table tennis target tracking and trajectory prediction method, apparatus, storage medium and computer device - Google Patents


Info

Publication number
CN107481270A
Authority
CN
China
Prior art keywords
target
bounding box
coordinate
dimensional coordinate
tracking
Prior art date
Legal status
Granted
Application number
CN201710682442.2A
Other languages
Chinese (zh)
Other versions
CN107481270B (en)
Inventor
任杰
盛斌
施之皓
张本轩
杨靖
侯爽
Current Assignee
Shanghai Jiaotong University
Shanghai University of Sport
Original Assignee
Shanghai Jiaotong University
Shanghai University of Sport
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University and Shanghai University of Sport
Priority to CN201710682442.2A
Publication of CN107481270A
Application granted
Publication of CN107481270B
Status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/292 Multi-camera tracking
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches

Abstract

The present invention relates to a table tennis target tracking and trajectory prediction method, apparatus, storage medium and computer device. One frame of the tracking target captured by each of two cameras at the same moment is obtained, and a candidate region corresponding to the tracking target is extracted from each image. The candidate region is input into a preset tracking model for processing to obtain the bounding box corresponding to the tracking target. The two-dimensional coordinates of the bounding box centres corresponding to the tracking target in the two simultaneously captured frames are obtained, and the three-dimensional coordinate of the bounding box centre is computed from the camera projection matrices. The three-dimensional coordinates of the bounding boxes at consecutive moments form a continuous coordinate sequence, which is input into a recurrent neural network LSTM to generate the subsequent coordinate sequence; the trajectory of the tracking target is obtained from these coordinate sequences. By locating the tracking target with the preset tracking model and exploiting the LSTM's strength in analysing temporal features, the trajectory of the tracking target can be predicted accurately.

Description

Table tennis target tracking and trajectory prediction method, apparatus, storage medium and computer device
Technical field
The present invention relates to the technical field of image processing, and in particular to a table tennis target tracking and trajectory prediction method, apparatus, storage medium and computer device.
Background art
In the design of a table tennis robot system, two problems need to be solved. The first is target tracking: given the position of the tracking target in the previous frame, predict the position where it is likely to appear in the next frame. The second is trajectory prediction: given a short sequence of table tennis ball coordinates, automatically generate the subsequent coordinate sequence.
Target tracking, a classical problem in computer vision, has developed considerably over recent decades: from early trackers based purely on computer vision methods, such as the Lucas-Kanade tracker and the mean-shift tracker, to increasingly complex trackers that integrate detection and machine learning ideas, and now to tracking algorithms based on deep learning. The main deep learning models currently used for tracking are based on the CNN (convolutional neural network), in which the CNN serves mainly as a feature extractor. The bounding boxes produced by existing tracking algorithms are not accurate enough; an inaccurate bounding box not only introduces positional error but can also cause the whole tracking framework to drift or even lose the target, and errors in target tracking lead directly to errors in trajectory prediction.
Summary of the invention
In view of the above technical problems, it is necessary to provide a table tennis target tracking and trajectory prediction method, apparatus, storage medium and computer device.
A table tennis target tracking and trajectory prediction method, the method comprising:
obtaining one frame of the tracking target captured by each of two cameras at the same moment;
extracting a candidate region corresponding to the tracking target from the image;
inputting the candidate region into a preset tracking model for processing to obtain a bounding box corresponding to the tracking target;
obtaining the two-dimensional coordinates of the bounding box centres corresponding to the tracking target in the frames captured by the two cameras at the same moment, and computing the three-dimensional coordinate of the bounding box centre of the tracking target at that moment according to the camera projection matrices;
obtaining the three-dimensional coordinates of the bounding boxes at consecutive moments to form a continuous coordinate sequence, and inputting the continuous coordinate sequence into a recurrent neural network LSTM for computation to generate a subsequent coordinate sequence;
obtaining the trajectory of the tracking target from the continuous coordinate sequence and the subsequent coordinate sequence.
In one embodiment, the method further comprises:
inputting the computed three-dimensional coordinate of the bounding box centre of the tracking target at the current moment into the LSTM for computation, and predicting the three-dimensional coordinate of the tracking target in the next frame captured by the two cameras;
taking the region containing the three-dimensional coordinate as the candidate region for the tracking target in the next frame.
In one embodiment, inputting the candidate region into the preset tracking model for processing to obtain the bounding box corresponding to the tracking target comprises:
inputting the candidate region into a preset convolutional neural network model and processing it to obtain the bounding box of the tracking target in the image;
inputting the bounding box of the tracking target in the image into a preset regression layer for regression processing to obtain the regressed bounding box corresponding to the tracking target, the preset regression layer comprising a low-level convolutional layer of the preset convolutional neural network model.
In one embodiment, extracting the candidate region corresponding to the tracking target from the image comprises:
extracting the candidate region corresponding to the tracking target from the image using a background subtraction method.
In one embodiment, the process of establishing the camera projection matrix comprises:
establishing a world coordinate system and a camera coordinate system respectively;
obtaining the intrinsic parameter matrix and the extrinsic parameter matrix of the camera;
establishing the camera projection matrix according to the intrinsic parameter matrix and the extrinsic parameter matrix, the camera projection matrix being able to convert two-dimensional coordinates in the camera coordinate system into three-dimensional coordinates in the world coordinate system.
A table tennis target tracking and trajectory prediction apparatus, the apparatus comprising:
a camera capture module, configured to obtain one frame of the tracking target captured by each of two cameras at the same moment;
a tracking target candidate region extraction module, configured to extract a candidate region corresponding to the tracking target from the image;
a tracking target bounding box acquisition module, configured to input the candidate region into a preset tracking model for processing to obtain the bounding box corresponding to the tracking target;
a tracking target bounding box three-dimensional coordinate computation module, configured to obtain the two-dimensional coordinates of the bounding box centres corresponding to the tracking target in the frames captured by the two cameras at the same moment, and to compute the three-dimensional coordinate of the bounding box centre of the tracking target at that moment according to the camera projection matrices;
a coordinate sequence generation module, configured to obtain the three-dimensional coordinates of the bounding boxes at consecutive moments to form a continuous coordinate sequence, and to input the continuous coordinate sequence into a recurrent neural network LSTM for computation to generate a subsequent coordinate sequence;
a tracking target trajectory generation module, configured to obtain the trajectory of the tracking target from the continuous coordinate sequence and the subsequent coordinate sequence.
In one embodiment, the apparatus further comprises:
a tracking target three-dimensional coordinate prediction module, configured to input the computed three-dimensional coordinate of the bounding box centre of the tracking target at the current moment into the LSTM for computation, and to predict the three-dimensional coordinate of the tracking target in the next frame captured by the two cameras;
a tracking target candidate region acquisition module, configured to take the region containing the three-dimensional coordinate as the candidate region for the tracking target in the next frame.
In one embodiment, the tracking target bounding box acquisition module comprises:
a convolutional neural network module, configured to input the candidate region into a preset convolutional neural network model and process it to obtain the bounding box of the tracking target in the image;
a regression layer module, configured to input the bounding box of the tracking target in the image into a preset regression layer for regression processing to obtain the regressed bounding box corresponding to the tracking target, the preset regression layer comprising a low-level convolutional layer of the preset convolutional neural network model.
A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the following steps:
obtaining one frame of the tracking target captured by each of two cameras at the same moment;
extracting the candidate region corresponding to the tracking target from the image;
inputting the candidate region into a preset tracking model for processing to obtain the bounding box corresponding to the tracking target;
obtaining the two-dimensional coordinates of the bounding box centres corresponding to the tracking target in the frames captured by the two cameras at the same moment, and computing the three-dimensional coordinate of the bounding box centre of the tracking target at that moment according to the camera projection matrices;
obtaining the three-dimensional coordinates of the bounding boxes at consecutive moments to form a continuous coordinate sequence, and inputting the continuous coordinate sequence into a recurrent neural network LSTM for computation to generate a subsequent coordinate sequence;
obtaining the trajectory of the tracking target from the continuous coordinate sequence and the subsequent coordinate sequence.
A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
obtaining one frame of the tracking target captured by each of two cameras at the same moment;
extracting the candidate region corresponding to the tracking target from the image;
inputting the candidate region into a preset tracking model for processing to obtain the bounding box corresponding to the tracking target;
obtaining the two-dimensional coordinates of the bounding box centres corresponding to the tracking target in the frames captured by the two cameras at the same moment, and computing the three-dimensional coordinate of the bounding box centre of the tracking target at that moment according to the camera projection matrices;
obtaining the three-dimensional coordinates of the bounding boxes at consecutive moments to form a continuous coordinate sequence, and inputting the continuous coordinate sequence into a recurrent neural network LSTM for computation to generate a subsequent coordinate sequence;
obtaining the trajectory of the tracking target from the continuous coordinate sequence and the subsequent coordinate sequence.
With the above table tennis target tracking and trajectory prediction method, apparatus, storage medium and computer device, frames of the tracking target captured by two cameras at the same moment are obtained, the candidate region corresponding to the tracking target is extracted from each image, and the candidate region is input into the CNN model for processing to obtain the bounding box corresponding to the tracking target. Because the two cameras shoot simultaneously, two images, and therefore two bounding boxes, are obtained at each moment; the two-dimensional coordinates of the two bounding box centres in the simultaneously captured images are combined with the camera projection matrices to compute the three-dimensional coordinate of the bounding box of the tracking target at that moment. The three-dimensional coordinates of the bounding boxes at consecutive moments form a continuous coordinate sequence, which is input into the LSTM for computation to generate the subsequent coordinate sequence. The complete trajectory of the tracking target is then obtained from the continuous coordinate sequence and the subsequent coordinate sequence. By using the CNN model to track the target position accurately and exploiting the LSTM's strength in analysing temporal features, the motion trajectory of the tracking target can be predicted accurately.
Brief description of the drawings
Fig. 1 is a diagram of the application environment of the table tennis target tracking and trajectory prediction method in one embodiment;
Fig. 2 is a diagram of the internal structure of the server in one embodiment;
Fig. 3 is a flowchart of the table tennis target tracking and trajectory prediction method in one embodiment;
Fig. 4 is a flowchart of the table tennis target tracking and trajectory prediction method in one embodiment;
Fig. 5 is a flowchart of the method of obtaining the bounding box in Fig. 4;
Fig. 6 is a flowchart of the method of establishing the camera projection matrix in one embodiment;
Fig. 7 is a schematic structural diagram of the table tennis target tracking and trajectory prediction apparatus in one embodiment;
Fig. 8 is a schematic structural diagram of the table tennis target tracking and trajectory prediction apparatus in one embodiment;
Fig. 9 is a schematic structural diagram of the tracking target bounding box acquisition module in Fig. 7;
Fig. 10 is a schematic structural diagram of the table tennis target tracking and trajectory prediction apparatus in one embodiment.
Detailed description of the embodiments
To make the objects, features and advantages of the present invention easier to understand, embodiments of the present invention are described in detail below with reference to the accompanying drawings. Many specific details are set forth in the following description to facilitate a full understanding of the invention. However, the invention can be implemented in many other ways than those described herein, and those skilled in the art can make similar improvements without departing from the spirit of the invention; therefore, the invention is not limited to the specific embodiments disclosed below.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field to which the invention belongs. The terms used in the description of the invention are intended only to describe specific embodiments and are not intended to limit the invention. The technical features of the embodiments can be combined arbitrarily; for brevity, not all possible combinations of the technical features are described, but as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
In recent years, with the development and maturing of computer vision technology, applications of computers in specific sports fields have emerged continuously. In table tennis, the ball is tracked in each frame captured by the cameras so as to record its position and predict its motion trajectory. Because a table tennis ball is small, has few visual features and moves fast, the tracking and prediction algorithms must be specially designed to meet these requirements.
The table tennis target tracking and trajectory prediction method proposed in the embodiments of the present invention needs to be used under a specific physical setup. As shown in Fig. 1, suppose a player is hitting the ball, and the table tennis ball 110 and the table 120 are captured by cameras. Specifically, two high-speed cameras are placed at the side of the table, synchronised by a hardware trigger; they shoot the table area simultaneously and remain stationary throughout. To ensure the accuracy of the three-dimensional coordinates computed later, the two cameras must be of the same model. A world coordinate system is established with one corner of the table as the origin, with the x-axis along the end line of the table, the y-axis along the side line and the z-axis perpendicular to the table surface. The projection matrices of the two cameras with respect to the table must be computed in advance, so that three-dimensional coordinates can be computed from the two-dimensional coordinates obtained from the two cameras, or three-dimensional coordinates can be projected onto the camera planes.
In one embodiment, as shown in Fig. 2, a server is provided. The server includes a processor, a non-volatile storage medium, an internal memory and a network interface connected through a system bus. The non-volatile storage medium stores an operating system and a table tennis target tracking and trajectory prediction apparatus, which is used to perform a table tennis target tracking and trajectory prediction method. The processor provides the computing and control capability that supports the operation of the whole server. The internal memory provides an environment for running the table tennis target tracking and trajectory prediction apparatus stored in the non-volatile storage medium; computer-readable instructions can be stored in the internal memory, and when executed by the processor they cause the processor to perform a table tennis target tracking and trajectory prediction method. The network interface receives the video containing the tracking target, and so on.
In one embodiment, as shown in Fig. 3, a table tennis target tracking and trajectory prediction method is provided. Taking its application to the scenario of Fig. 1 as an example, the method includes:
Step 310: obtain one frame of the tracking target captured by each of two cameras at the same moment.
Two high-speed cameras are placed at the side of the table and shoot the table area synchronously. The tracking target is the moving table tennis ball; at each moment, one frame is obtained from each of the two cameras.
Step 320: extract the candidate region corresponding to the tracking target from the image.
Before the target in the images can be tracked, the first problem to solve is how to obtain the initial bounding box of the tracking target, i.e. how to detect the tracking target (the table tennis ball) in the image. The framework first looks for the probable region of the ball. Since the cameras are fixed and the scene contains few moving objects, methods such as background subtraction can be used to extract the foreground regions from the images captured by the two cameras, narrowing the search range; these regions serve as the candidate regions corresponding to the tracking target.
Step 330: input the candidate region into the preset tracking model for processing to obtain the bounding box corresponding to the tracking target.
The preset tracking model includes a preset convolutional neural network model and a preset regression layer. The convolutional neural network (CNN) is one of the most representative network structures in deep learning. The preset convolutional neural network model is obtained by training a convolutional neural network in advance on a labelled training set, and includes convolutional layers, pooling layers and fully connected layers. The preset regression layer includes a fully connected layer, a region-of-interest pooling layer and a low-level convolutional layer of the preset convolutional neural network model. To establish the preset regression layer, another labelled training set is first processed by the established preset convolutional neural network model and then passed through the regression layer for regression processing; the preset regression layer is established by training.
The candidate regions extracted from the image are input into the preset tracking model for processing to obtain the bounding box corresponding to the tracking target. Specifically, the candidate regions are input into the CNN in turn for target detection, and the CNN outputs a probability indicating whether each candidate region contains the target. If no target was found previously, all candidate regions are input into the CNN for detection. If the target was found in the previous frame, only the candidate regions near the target position in the previous frame are input into the CNN, which reduces unnecessary computation and improves efficiency, as in the sketch below.
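The following Python sketch illustrates this candidate-search strategy. It is not part of the original disclosure; the function names, the (x, y, w, h) box format, the search radius and the probability threshold are assumptions made for illustration.

```python
# Hypothetical sketch of the candidate-search strategy: score candidates near the
# previous position first, fall back to scoring all of them.
def detect_target(candidates, cnn_score, prev_box=None, radius=80, thresh=0.5):
    """Return the candidate box the CNN is most confident contains the ball."""
    if prev_box is not None:
        # Target was found in the previous frame: restrict to nearby candidates.
        px, py = prev_box[0] + prev_box[2] / 2, prev_box[1] + prev_box[3] / 2
        near = [b for b in candidates
                if abs(b[0] + b[2] / 2 - px) <= radius
                and abs(b[1] + b[3] / 2 - py) <= radius]
        candidates = near or candidates          # empty -> full search
    best_box, best_p = None, thresh
    for box in candidates:
        p = cnn_score(box)                       # probability that the box holds the target
        if p > best_p:
            best_box, best_p = box, p
    return best_box                              # None means no target in this frame
```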
Step 340: obtain the two-dimensional coordinates of the bounding box centres corresponding to the tracking target in the frames captured by the two cameras at the same moment, and compute the three-dimensional coordinate of the bounding box centre of the tracking target at that moment according to the camera projection matrices.
After processing by the preset tracking model, the bounding boxes corresponding to the tracking target in the images captured by the two cameras at the same moment are obtained, and the two-dimensional coordinates of the bounding box centres are derived from the bounding boxes. These two-dimensional coordinates lie in the camera coordinate systems; the three-dimensional coordinate of the bounding box centre of the tracking target at that moment in the world coordinate system is then computed using the camera projection matrices. The camera projection matrices are computed in advance: the world coordinate system and the camera coordinate systems are established, the intrinsic and extrinsic parameter matrices of the cameras are obtained, and the camera projection matrix is established from the intrinsic and extrinsic parameter matrices; the camera projection matrices can convert two-dimensional coordinates in the camera coordinate systems into three-dimensional coordinates in the world coordinate system.
Step 350: obtain the three-dimensional coordinates of the bounding boxes at consecutive moments to form a continuous coordinate sequence, input the continuous coordinate sequence into the recurrent neural network LSTM for computation, and generate the subsequent coordinate sequence.
At each moment, the captured images are processed as above to obtain the bounding box corresponding to the tracking target, and then the three-dimensional coordinate of the bounding box centre. The three-dimensional coordinates of the bounding box centres at consecutive moments form, in order, a continuous coordinate sequence. The continuous coordinate sequence is input into the recurrent neural network LSTM, which automatically generates the subsequent coordinate sequence. LSTM (Long Short-Term Memory) here refers to a bidirectional long short-term memory network model, a kind of temporal recurrent neural network; the bidirectional model includes a forward LSTM model and a backward LSTM model.
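A minimal Python sketch of this sequence-prediction idea is given below. It is not the patented model: the use of PyTorch, the hidden size and the unidirectional autoregressive rollout (the patent describes a bidirectional LSTM) are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

# Sketch: feed the observed 3-D coordinate sequence into an LSTM and roll out
# future coordinates one step at a time.
class TrajectoryLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)       # next (x, y, z)

    def forward(self, coords, future_steps=10):
        # coords: (1, T, 3) observed bounding-box centre coordinates
        out, state = self.lstm(coords)
        pred = self.head(out[:, -1:])          # first predicted coordinate
        future = [pred]
        for _ in range(future_steps - 1):      # feed each prediction back in
            out, state = self.lstm(pred, state)
            pred = self.head(out)
            future.append(pred)
        return torch.cat(future, dim=1)        # (1, future_steps, 3)
```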
Step 360: obtain the trajectory of the tracking target from the continuous coordinate sequence and the subsequent coordinate sequence.
The subsequent coordinate sequence computed by the LSTM, together with the continuous coordinate sequence that was input into it, constitutes the trajectory of the tracking target, so that trajectory prediction and landing-point prediction can be performed for the tracking target, e.g. the table tennis ball.
In this embodiment, frames of the tracking target captured by two cameras at the same moment are obtained, the candidate region corresponding to the tracking target is extracted from each image, and the candidate region is input into the preset tracking model for processing to obtain the bounding box corresponding to the tracking target. Because the two cameras shoot simultaneously, two images, and therefore two bounding boxes, are obtained at each moment; the two-dimensional coordinates of the two bounding box centres in the simultaneously captured images are combined with the camera projection matrices to compute the three-dimensional coordinate of the bounding box of the tracking target at that moment. The three-dimensional coordinates of the bounding boxes at consecutive moments form a continuous coordinate sequence, which is input into the LSTM for computation to generate the subsequent coordinate sequence. The complete trajectory of the tracking target is obtained from the continuous coordinate sequence and the subsequent coordinate sequence. By tracking the target position accurately with the preset tracking model and exploiting the LSTM's strength in analysing temporal features, the motion trajectory of the tracking target can be predicted accurately.
In one embodiment, as shown in Fig. 4, the table tennis target tracking and trajectory prediction method further includes:
Step 370: input the computed three-dimensional coordinate of the bounding box centre of the tracking target at the current moment into the LSTM for computation, and predict the three-dimensional coordinate of the tracking target in the next frame captured by the two cameras.
The LSTM can also serve as a target tracking model: the coordinate obtained by processing the current image is input into the LSTM to predict the coordinate of the tracking target's bounding box in the next frame. Specifically, all coordinates obtained in each cycle are input into the LSTM model, and the LSTM model is made to output only the parameters of a single Gaussian mixture model, similarly to a Kalman filter, to predict the position where the tracking target is likely to appear in the next frame and thereby narrow the tracking search range.
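A small sketch of such an output head follows. The single Gaussian component with diagonal covariance, the layer sizes and the way the uncertainty is used to size the search region are assumptions; the patent only states that the LSTM outputs the parameters of a Gaussian mixture model.

```python
import torch
import torch.nn as nn

# Sketch of a Gaussian output head on top of the LSTM hidden state.
class NextPositionHead(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.mean = nn.Linear(hidden, 3)       # predicted (x, y, z) centre
        self.log_var = nn.Linear(hidden, 3)    # uncertainty of the prediction

    def forward(self, lstm_state):
        mu = self.mean(lstm_state)
        sigma = torch.exp(0.5 * self.log_var(lstm_state))
        return mu, sigma                       # search the next frame within mu +/- k*sigma
```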
Step 380: take the region containing the three-dimensional coordinate as the candidate region for the tracking target in the next frame.
The region containing the three-dimensional coordinate predicted by the LSTM is taken as the candidate region for the tracking target in the next frame. When the target is occluded or moves too fast and the preset tracking model may lose it, the LSTM's prediction is used as the tracking result, so that the whole tracking framework can keep working and does not break down because the preset tracking model has lost the target.
In this embodiment, the continuous coordinate sequence of the tracking target computed from the previous frames is used. With the LSTM model, not only can the subsequent coordinate sequence of the tracking target be obtained, from which the complete trajectory of the tracking target is derived, but the target can also be tracked in the next frame. This narrows the search range of tracking with the preset tracking model and compensates for cases where the tracking target is occluded or moves too fast and the preset tracking model may lose it.
In one embodiment, as shown in Fig. 5, inputting the candidate region into the preset tracking model for processing to obtain the bounding box corresponding to the tracking target includes:
Step 331: input the candidate region into the preset convolutional neural network model and process it to obtain the bounding box of the tracking target in the image.
The preset tracking model includes a preset convolutional neural network model and a preset regression layer. Specifically, the pixels of the 100 × 100 candidate region extracted from the image are input into the preset convolutional neural network model for convolution. The convolutional layers of the preset convolutional neural network model use the pre-trained CaffeNet. The convolution operation may consist of several convolutions and extracts the feature map of the candidate region.
Above the convolutional layers is a pooling layer: the extracted feature map of the image is input into the pooling layer for pooling, i.e. feature compression, to obtain a compressed feature map. Specifically, the pooling layer can be a spatial pyramid pooling layer, which is used to retain more positional information.
The compressed feature map obtained from the pooling layer then passes through two fully connected layers, and the output 2500-dimensional vector is reshaped into a 50 × 50 matrix, i.e. a 50 × 50 probability map is output. Each element of the matrix is a probability value representing the probability that the pixel at the corresponding position of the input image belongs to the tracking target. For an image containing the tracking target, a connected region is generally output, in which the probability values are clearly higher than those outside. By setting a threshold on the probability values, positions above the threshold are considered to lie inside the bounding box, so a bounding box can be computed; this bounding box serves as the prediction of the target position (see the sketch below).
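The following sketch shows one way to turn the 50 × 50 probability map into a bounding box in the 100 × 100 input patch. The 0.5 threshold and the fixed map-to-patch scale factor are assumptions for illustration.

```python
import numpy as np

# Sketch: threshold the probability map and take the bounding rectangle of the
# pixels judged to belong to the ball.
def prob_map_to_box(prob_map, patch_size=100, thresh=0.5):
    ys, xs = np.nonzero(prob_map > thresh)
    if len(xs) == 0:
        return None                               # target not present in this region
    scale = patch_size / prob_map.shape[0]        # 100 / 50 = 2: map cell -> input pixel
    x0, x1 = xs.min() * scale, (xs.max() + 1) * scale
    y0, y1 = ys.min() * scale, (ys.max() + 1) * scale
    return (x0, y0, x1 - x0, y1 - y0)             # (x, y, w, h) in the 100x100 patch
```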
The steps of establishing the preset convolutional neural network model are as follows: obtain a convolutional neural network training set for modelling, which includes images containing the target and images not containing the target, the images being taken from videos containing the target; label the images, setting the values inside the actual bounding box of the target to a first value and the values outside the actual bounding box to a second value; input the convolutional neural network training set into a convolutional neural network with initialised network parameters and train it to obtain the bounding box of the target in each image; compute the network parameters of the convolutional neural network from the obtained bounding boxes, the labelled actual bounding boxes and the Softmax loss function; and obtain the preset convolutional neural network model from the network parameters.
The steps of establishing the preset regression layer are as follows: obtain a regression layer training set for modelling, which includes images containing the target, the images being taken from videos containing the target; label the images, marking the size of the actual bounding box of the target in each image; input the regression layer training set into the preset convolutional neural network model and train it to obtain the bounding box of the target in each image; input the bounding box of the target in each image into a regression layer with initialised network parameters for regression processing to obtain the size of the regressed bounding box corresponding to the target; compute the network parameters of the regression layer from the size of the regressed bounding box, the size of the labelled actual bounding box and the smooth L1 loss function; and obtain the preset regression layer from the network parameters.
Step 333: input the bounding box of the tracking target in the image into the preset regression layer for regression processing to obtain the regressed bounding box corresponding to the tracking target, the preset regression layer comprising a low-level convolutional layer of the preset convolutional neural network model.
The regression layer follows the preset convolutional neural network model. From bottom to top, the regression layer consists of a low-level convolutional layer of the preset convolutional neural network model, a region-of-interest pooling layer and a fully connected layer. The bounding box of the target, obtained by passing the candidate region through the preset convolutional neural network model, is projected onto the low-level convolutional layer of the preset convolutional neural network model, and convolution processing is performed to obtain the feature map of the target.
The feature map of the target obtained in the previous step is input into the region-of-interest pooling layer for feature compression to obtain a compressed feature map. Specifically, the bounding box is cropped from the feature map of the low-level convolutional layer and scaled to a new feature map of size 7 × 7.
The compressed feature map is input into the fully connected layer, which outputs the displacement in the x and y directions and the scaling of width and height between the bounding box computed by the CNN and the regressed bounding box, so that the regressed bounding box can be computed from the CNN bounding box, the xy displacement and the width-height scaling. Specifically, another fully connected layer is added on this feature map. Considering that the positional precision of the convolutional layer must not be too low, this embodiment chooses to crop on conv-1 (the first convolutional layer). The fully connected layer outputs four real numbers representing the displacement in the x and y directions and the scaling of width and height between the regressed bounding box and the bounding box computed by the CNN, so that the bounding box computed by the CNN is corrected and fine-tuned, and the bounding box corresponding to the tracking target is finally obtained.
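A short sketch of applying the four regression outputs to the CNN bounding box follows. The exact parameterisation of the displacement and scaling is an assumption; the patent only states that four real numbers encode the xy displacement and the width-height scaling.

```python
# Sketch: correct the CNN bounding box with the regression layer's outputs.
def refine_box(cnn_box, deltas):
    x, y, w, h = cnn_box
    dx, dy, sw, sh = deltas          # xy displacement and width/height scaling
    return (x + dx, y + dy, w * sw, h * sh)
```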
In this embodiment, the bounding box of the target obtained by the preset convolutional neural network model is input into the preset regression layer for regression processing. Because the preset regression layer contains a low-level convolutional layer of the preset convolutional neural network model, it can take into account both the semantic information of the high-level convolutional layers (such as the target class) and the positional information of the low-level convolutional layer, so that the target in the input image is correctly identified and its bounding box is given accurately. The regression layer finally computes the displacement in the x and y directions and the scaling of width and height between the regressed bounding box and the bounding box computed by the CNN, so that the bounding box computed by the CNN is corrected, the regressed bounding box is more accurate, and drift of the whole tracking framework or even loss of the target is effectively avoided.
In one embodiment, extracting the candidate region corresponding to the tracking target from the image includes: extracting the candidate region corresponding to the tracking target from the image using a background subtraction method.
In target tracking work, tracking algorithms usually assume that the initial bounding box of the target to be tracked is given in the first frame. Therefore, when a tracking algorithm is actually used, the first problem to solve is how to obtain the initial bounding box, i.e. how to detect the tracking target in the image. The embodiment of the present invention first looks for the probable region of the target ball. Since the cameras are fixed and the scene contains few moving objects, methods such as background subtraction can be used to extract the foreground regions and narrow the search range. These regions can then be input into the preset tracking model for computation as target candidate regions. It should be noted that this system does not depend heavily on background subtraction: it only needs to find the initial position of the target in the first few frames, after which target tracking can be performed with, for example, the LSTM model. Of course, in the main part of the algorithm, background subtraction can still be used as an auxiliary method to provide candidate regions for tracking.
In this embodiment, the background subtraction method can quickly narrow the search range, so that the preset tracking model or the preset LSTM model can subsequently predict the bounding box coordinates of the target in the next frame from the bounding box coordinates of the target in the previous frames. A sketch of this candidate extraction step follows.
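The sketch below uses OpenCV. The MOG2 subtractor, the morphological opening and the area limits are assumptions; the patent only requires a background subtraction method (or similar) to extract foreground candidate regions.

```python
import cv2
import numpy as np

# Sketch: maintain a background model, extract foreground blobs, and keep the
# small ones as candidate regions for the ball.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def candidate_regions(frame, min_area=20, max_area=2000):
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]          # (x, y, w, h)
    return [b for b in boxes if min_area <= b[2] * b[3] <= max_area]
```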
In one embodiment, as shown in Fig. 6, the process of establishing the camera projection matrix includes:
Step 610: establish a world coordinate system and a camera coordinate system respectively.
A world coordinate system is established with one corner of the table as the origin, with the x-axis along the end line of the table, the y-axis along the side line and the z-axis perpendicular to the table surface. The camera coordinate system is established with the camera as the origin. Of course, the camera coordinate system can also be established in other ways.
Step 630: obtain the intrinsic parameter matrix and the extrinsic parameter matrix of the camera.
First, using the chessboard calibration method, chessboard pictures are taken from multiple angles and OpenCV's built-in calibration functions are used to obtain the camera's intrinsic parameter matrix M3×3 and distortion coefficients. The intrinsic parameter matrix converts three-dimensional coordinates in the camera coordinate system into two-dimensional coordinates on the camera plane: Zc·[u, v, 1]T = M3×3·[Xc, Yc, Zc]T.
Then, the table region in the image is identified by colour features, and its boundary lines are obtained using the Hough transform. The coordinates of the four corners of the table are obtained from the intersections of the boundary lines, and the extrinsic parameters of the camera with respect to the table are computed, including the rotation matrix R3×3 and the translation vector T3×1. The extrinsic parameters convert between the camera coordinate system and the world coordinate system: [Xc, Yc, Zc]T = R3×3·[Xw, Yw, Zw]T + T3×1. A calibration sketch along these lines is given below.
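The following OpenCV sketch covers both calibration steps. The board size, square size, table dimensions (1.525 m × 2.74 m for a standard table) and the use of solvePnP for the extrinsics are assumptions made for illustration; the patent itself only specifies chessboard calibration and corner detection via the Hough transform.

```python
import cv2
import numpy as np

def intrinsics_from_chessboard(images, board=(9, 6), square=0.025):
    # Chessboard calibration: gather 3-D/2-D correspondences, then calibrate.
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        ok, corners = cv2.findChessboardCorners(gray, board)
        if ok:
            obj_pts.append(objp)
            img_pts.append(corners)
    _, M, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)
    return M, dist                                   # intrinsic matrix and distortion

def extrinsics_from_table(corners_2d, M, dist, table=(1.525, 2.74)):
    # World coordinates of the four table corners, origin at one corner (metres).
    corners_3d = np.array([[0, 0, 0], [table[0], 0, 0],
                           [table[0], table[1], 0], [0, table[1], 0]], np.float32)
    _, rvec, tvec = cv2.solvePnP(corners_3d, np.float32(corners_2d), M, dist)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec                                   # rotation matrix and translation vector
```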
Step 650: establish the camera projection matrix according to the intrinsic parameter matrix and the extrinsic parameter matrix; the camera projection matrix can convert two-dimensional coordinates in the camera coordinate system into three-dimensional coordinates in the world coordinate system.
Finally, combining the two formulas above, and knowing the two-dimensional coordinates of a point in the two camera planes, its three-dimensional coordinate can be computed from the following equation: Zc·[u, v, 1]T = M3×3·(R3×3·[Xw, Yw, Zw]T + T3×1).
Here Zc is the Z coordinate of the point in one camera coordinate system and is unknown; u and v are the coordinates of the point in the camera plane; R is the rotation matrix and T is the translation vector; and Xw, Yw, Zw are the three-dimensional coordinates in the world coordinate system to be solved. There are four unknowns in total, and each of the two cameras provides one such equation, so the system can be solved directly using linear algebra, as in the sketch below.
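The sketch below shows one standard (DLT-style) way to set up and solve these equations for the world coordinate; the patent only states that the linear system is solved directly, so the least-squares SVD solution is an assumption.

```python
import numpy as np

def projection_matrix(M, R, T):
    # 3x4 projection matrix: maps homogeneous world coordinates to the image plane.
    return M @ np.hstack([R, T.reshape(3, 1)])

def triangulate(P1, P2, pt1, pt2):
    """P1, P2: 3x4 projection matrices; pt1, pt2: (u, v) of the ball centre in each image."""
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)                   # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]                           # (Xw, Yw, Zw) in the world coordinate system
```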
In this embodiment, the camera projection matrices are computed in advance, so that during table tennis target tracking and trajectory prediction the two-dimensional coordinates in the camera coordinate systems can be converted directly into three-dimensional coordinates in the world coordinate system. Computing everything uniformly in the world coordinate system is convenient and fast.
In one embodiment, as shown in Fig. 7, a table tennis target tracking and trajectory prediction apparatus 700 is provided. The apparatus includes: a camera capture module 710, a tracking target candidate region extraction module 720, a tracking target bounding box acquisition module 730, a tracking target bounding box three-dimensional coordinate computation module 740, a coordinate sequence generation module 750 and a tracking target trajectory generation module 760.
The camera capture module 710 is configured to obtain one frame of the tracking target captured by each of two cameras at the same moment.
The tracking target candidate region extraction module 720 is configured to extract the candidate region corresponding to the tracking target from the image.
The tracking target bounding box acquisition module 730 is configured to input the candidate region into the preset tracking model for processing to obtain the bounding box corresponding to the tracking target.
The tracking target bounding box three-dimensional coordinate computation module 740 is configured to obtain the two-dimensional coordinates of the bounding box centres corresponding to the tracking target in the frames captured by the two cameras at the same moment, and to compute the three-dimensional coordinate of the bounding box centre of the tracking target at that moment according to the camera projection matrices.
The coordinate sequence generation module 750 is configured to obtain the three-dimensional coordinates of the bounding boxes at consecutive moments to form a continuous coordinate sequence, and to input the continuous coordinate sequence into the recurrent neural network LSTM for computation to generate the subsequent coordinate sequence.
The tracking target trajectory generation module 760 is configured to obtain the trajectory of the tracking target from the continuous coordinate sequence and the subsequent coordinate sequence.
In one embodiment, as shown in Fig. 8, the table tennis target tracking and trajectory prediction apparatus 700 further includes a tracking target three-dimensional coordinate prediction module 770 and a tracking target candidate region acquisition module 780.
The tracking target three-dimensional coordinate prediction module 770 is configured to input the computed three-dimensional coordinate of the bounding box centre of the tracking target at the current moment into the LSTM for computation, and to predict the three-dimensional coordinate of the tracking target in the next frame captured by the two cameras.
The tracking target candidate region acquisition module 780 is configured to take the region containing the three-dimensional coordinate as the candidate region for the tracking target in the next frame.
In one embodiment, as shown in Fig. 9, the tracking target bounding box acquisition module 730 includes a convolutional neural network module 731 and a regression layer module 733.
The convolutional neural network module 731 is configured to input the candidate region into the preset convolutional neural network model and process it to obtain the bounding box of the tracking target in the image.
The regression layer module 733 is configured to input the bounding box of the tracking target in the image into the preset regression layer for regression processing to obtain the regressed bounding box corresponding to the tracking target, the preset regression layer comprising a low-level convolutional layer of the preset convolutional neural network model.
In one embodiment, the tracking target candidate region extraction module 720 is further configured to extract the candidate region corresponding to the tracking target from the image using a background subtraction method.
In one embodiment, as shown in Fig. 10, the table tennis target tracking and trajectory prediction apparatus 700 further includes a camera projection matrix establishment module 790, which is configured to establish a world coordinate system and a camera coordinate system respectively; obtain the intrinsic parameter matrix and the extrinsic parameter matrix of the camera; and establish the camera projection matrix according to the intrinsic parameter matrix and the extrinsic parameter matrix, the camera projection matrix being able to convert two-dimensional coordinates in the camera coordinate system into three-dimensional coordinates in the world coordinate system.
In one embodiment, a computer-readable storage medium is further provided, on which a computer program is stored, the program, when executed by a processor, implementing the following steps: obtaining one frame of the tracking target captured by each of two cameras at the same moment; extracting the candidate region corresponding to the tracking target from the image; inputting the candidate region into a preset tracking model for processing to obtain the bounding box corresponding to the tracking target; obtaining the two-dimensional coordinates of the bounding box centres corresponding to the tracking target in the frames captured by the two cameras at the same moment, and computing the three-dimensional coordinate of the bounding box centre of the tracking target at that moment according to the camera projection matrices; obtaining the three-dimensional coordinates of the bounding boxes at consecutive moments to form a continuous coordinate sequence, and inputting the continuous coordinate sequence into a recurrent neural network LSTM for computation to generate a subsequent coordinate sequence; and obtaining the trajectory of the tracking target from the continuous coordinate sequence and the subsequent coordinate sequence.
In one embodiment, the program, when executed by the processor, further implements the following steps: inputting the computed three-dimensional coordinate of the bounding box centre of the tracking target at the current moment into the LSTM for computation, and predicting the three-dimensional coordinate of the tracking target in the next frame captured by the two cameras; and taking the region containing the three-dimensional coordinate as the candidate region for the tracking target in the next frame.
In one embodiment, the program, when executed by the processor, further implements the following steps: inputting the candidate region into the preset convolutional neural network model and processing it to obtain the bounding box of the tracking target in the image; and inputting the bounding box of the tracking target in the image into the preset regression layer for regression processing to obtain the regressed bounding box corresponding to the tracking target, the preset regression layer comprising a low-level convolutional layer of the preset convolutional neural network model.
In one embodiment, the program, when executed by the processor, further implements the following step: extracting the candidate region corresponding to the tracking target from the image using a background subtraction method.
In one embodiment, the program, when executed by the processor, further implements the following steps: establishing a world coordinate system and a camera coordinate system respectively; obtaining the intrinsic parameter matrix and the extrinsic parameter matrix of the camera; and establishing the camera projection matrix according to the intrinsic parameter matrix and the extrinsic parameter matrix, the camera projection matrix being able to convert two-dimensional coordinates in the camera coordinate system into three-dimensional coordinates in the world coordinate system.
In one embodiment, a computer device is further provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; when executing the computer program, the processor implements the following steps:
obtaining one frame of the tracking target captured by each of two cameras at the same moment; extracting the candidate region corresponding to the tracking target from the image; inputting the candidate region into a preset tracking model for processing to obtain the bounding box corresponding to the tracking target; obtaining the two-dimensional coordinates of the bounding box centres corresponding to the tracking target in the frames captured by the two cameras at the same moment, and computing the three-dimensional coordinate of the bounding box centre of the tracking target at that moment according to the camera projection matrices; obtaining the three-dimensional coordinates of the bounding boxes at consecutive moments to form a continuous coordinate sequence, and inputting the continuous coordinate sequence into a recurrent neural network LSTM for computation to generate a subsequent coordinate sequence; and obtaining the trajectory of the tracking target from the continuous coordinate sequence and the subsequent coordinate sequence.
In one embodiment, when executing the computer program, the processor further implements the following steps: inputting the computed three-dimensional coordinate of the bounding box centre of the tracking target at the current moment into the LSTM for computation, and predicting the three-dimensional coordinate of the tracking target in the next frame captured by the two cameras; and taking the region containing the three-dimensional coordinate as the candidate region for the tracking target in the next frame.
In one embodiment, when executing the computer program, the processor further implements the following steps: inputting the candidate region into the preset convolutional neural network model and processing it to obtain the bounding box of the tracking target in the image; and inputting the bounding box of the tracking target in the image into the preset regression layer for regression processing to obtain the regressed bounding box corresponding to the tracking target, the preset regression layer comprising a low-level convolutional layer of the preset convolutional neural network model.
In one embodiment, when executing the computer program, the processor further implements the following step: extracting the candidate region corresponding to the tracking target from the image using a background subtraction method.
In one embodiment, when executing the computer program, the processor further implements the following steps: establishing a world coordinate system and a camera coordinate system respectively; obtaining the intrinsic parameter matrix and the extrinsic parameter matrix of the camera; and establishing the camera projection matrix according to the intrinsic parameter matrix and the extrinsic parameter matrix, the camera projection matrix being able to convert two-dimensional coordinates in the camera coordinate system into three-dimensional coordinates in the world coordinate system.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware. The program can be stored in a non-volatile computer-readable storage medium; in the embodiments of the present invention, the program can be stored in the storage medium of a computer system and executed by at least one processor in the computer system to implement processes including those of the above method embodiments. The storage medium can be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), etc.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the invention, and these all fall within the protection scope of the invention. Therefore, the protection scope of this patent shall be determined by the appended claims.

Claims (10)

1. A table tennis target tracking and trajectory prediction method, the method comprising:
obtaining one frame of the tracking target captured by each of two cameras at the same moment;
extracting a candidate region corresponding to the tracking target from the image;
inputting the candidate region into a preset tracking model for processing to obtain a bounding box corresponding to the tracking target;
obtaining the two-dimensional coordinates of the bounding box centres corresponding to the tracking target in the frames captured by the two cameras at the same moment, and computing the three-dimensional coordinate of the bounding box centre of the tracking target at that moment according to the camera projection matrices;
obtaining the three-dimensional coordinates of the bounding boxes at consecutive moments to form a continuous coordinate sequence, and inputting the continuous coordinate sequence into a recurrent neural network LSTM for computation to generate a subsequent coordinate sequence;
obtaining the trajectory of the tracking target from the continuous coordinate sequence and the subsequent coordinate sequence.
2. The method according to claim 1, wherein the method further comprises:
inputting the computed three-dimensional coordinate of the bounding box centre of the tracking target at the current moment into the LSTM for computation, and predicting the three-dimensional coordinate of the tracking target in the next frame captured by the two cameras;
taking the region containing the three-dimensional coordinate as the candidate region for the tracking target in the next frame.
3. The method according to claim 1, wherein inputting the candidate region into the preset tracking model for processing to obtain the bounding box corresponding to the tracked target comprises:
inputting the candidate region into a preset convolutional neural network model, which processes it to obtain the bounding box of the tracked target in the image;
inputting the bounding box of the tracked target in the image into a preset regression layer for regression processing to obtain a regressed bounding box corresponding to the tracked target, the preset regression layer comprising low-level convolutional layers of the preset convolutional neural network model.
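Claim 3 specifies the preset tracking model only as a convolutional neural network whose bounding box is refined by a regression layer built on the network's lower convolutional layers. The PyTorch sketch below is a deliberately small stand-in for that kind of structure; all layer sizes, the four-value box parameterization, and the class name are assumptions, not the patented model.

import torch
import torch.nn as nn

class BBoxTracker(nn.Module):
    """Sketch: CNN backbone plus a regression head that refines the bounding box."""

    def __init__(self):
        super().__init__()
        # Low-level convolutional layers; their features feed the regression step.
        self.low_conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Regression head: predicts (cx, cy, w, h) of the ball inside the crop.
        self.regressor = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 4),
        )

    def forward(self, candidate_region: torch.Tensor) -> torch.Tensor:
        # candidate_region: (batch, 3, H, W) crop around the expected ball position
        return self.regressor(self.low_conv(candidate_region))

# Example: refine the bounding box inside a 64x64 candidate crop.
box = BBoxTracker()(torch.randn(1, 3, 64, 64))   # shape (1, 4)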
4. The method according to claim 1, wherein extracting the candidate region corresponding to the tracked target from the image comprises:
extracting the candidate region corresponding to the tracked target from the image by background subtraction.
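Claim 4 only names background subtraction; one plausible OpenCV realization is sketched below. The MOG2 subtractor, its parameters, and the largest-contour heuristic are assumptions made for the example, not requirements of the claim.

import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def candidate_region(frame: np.ndarray):
    """Return the bounding rectangle (x, y, w, h) of the largest moving blob, or None."""
    mask = subtractor.apply(frame)        # foreground mask of the current frame
    mask = cv2.medianBlur(mask, 5)        # suppress isolated noise pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)      # candidate region around the moving ball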
5. The method according to claim 1, wherein establishing the camera projection matrix comprises:
establishing a world coordinate system and a camera coordinate system respectively;
obtaining the intrinsic parameter matrix and the extrinsic parameter matrix of the camera;
establishing the camera projection matrix according to the intrinsic parameter matrix and the extrinsic parameter matrix, wherein the camera projection matrix enables two-dimensional coordinates in the camera coordinate system to be converted into three-dimensional coordinates in the world coordinate system.
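For reference, the standard pinhole construction of the projection matrix from the intrinsic matrix K and the extrinsic rotation R and translation t is P = K [R | t]. The NumPy sketch below uses placeholder calibration values; the focal length and principal point are invented for the example. One such matrix per camera is what the triangulation sketch after claim 1 consumes.

import numpy as np

def projection_matrix(K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build the 3x4 camera projection matrix P = K [R | t].

    K : 3x3 intrinsic parameter matrix (focal lengths and principal point).
    R : 3x3 rotation from world coordinates to camera coordinates.
    t : translation from world coordinates to camera coordinates (3 values).
    """
    extrinsics = np.hstack([R, np.asarray(t, dtype=float).reshape(3, 1)])  # [R | t]
    return K @ extrinsics

# Placeholder calibration: fx = fy = 1200 px, principal point at (640, 360).
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
P = projection_matrix(K, np.eye(3), np.zeros(3))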
6. A table tennis target tracking and trajectory prediction apparatus, wherein the apparatus comprises:
a camera capture module, configured to acquire one frame of image of a tracked target captured by each of two cameras at the same moment;
a tracked-target candidate region extraction module, configured to extract a candidate region corresponding to the tracked target from the images;
a tracked-target bounding box acquisition module, configured to input the candidate region into a preset tracking model for processing to obtain a bounding box corresponding to the tracked target;
a tracked-target bounding box three-dimensional coordinate calculation module, configured to obtain the two-dimensional coordinates of the centers of the bounding boxes corresponding to the tracked target in the images captured by the two cameras at the same moment, and to calculate, according to camera projection matrices, the three-dimensional coordinate of the bounding box center of the tracked target at that moment;
a coordinate sequence generation module, configured to obtain the three-dimensional coordinates of the bounding box centers at consecutive moments to form a continuous coordinate sequence, and to input the continuous coordinate sequence into a long short-term memory (LSTM) recurrent neural network for computation to generate a subsequent coordinate sequence;
a tracked-target trajectory generation module, configured to obtain the trajectory of the tracked target according to the continuous coordinate sequence and the subsequent coordinate sequence.
7. The apparatus according to claim 6, wherein the apparatus further comprises:
a tracked-target three-dimensional coordinate prediction module, configured to input the calculated three-dimensional coordinate of the bounding box center of the tracked target at the current moment into the LSTM for computation, so as to predict the three-dimensional coordinate of the tracked target in the next frame of image captured by the two cameras;
a tracked-target candidate region acquisition module, configured to take the region containing the predicted three-dimensional coordinate as the candidate region of the tracked target in the next frame of image.
8. The apparatus according to claim 6, wherein the tracked-target bounding box acquisition module comprises:
a convolutional neural network module, configured to input the candidate region into a preset convolutional neural network model, which processes it to obtain the bounding box of the tracked target in the image;
a regression layer module, configured to input the bounding box of the tracked target in the image into a preset regression layer for regression processing to obtain a regressed bounding box corresponding to the tracked target, the preset regression layer comprising low-level convolutional layers of the preset convolutional neural network model.
9. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the table tennis target tracking and trajectory prediction method according to any one of claims 1 to 5.
10. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the table tennis target tracking and trajectory prediction method according to any one of claims 1 to 5.
CN201710682442.2A 2017-08-10 2017-08-10 Table tennis target tracking and trajectory prediction method, device, storage medium and computer equipment Active CN107481270B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710682442.2A CN107481270B (en) 2017-08-10 2017-08-10 Table tennis target tracking and trajectory prediction method, device, storage medium and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710682442.2A CN107481270B (en) 2017-08-10 2017-08-10 Table tennis target tracking and trajectory prediction method, device, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN107481270A true CN107481270A (en) 2017-12-15
CN107481270B CN107481270B (en) 2020-05-19

Family

ID=60600283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710682442.2A Active CN107481270B (en) 2017-08-10 2017-08-10 Table tennis target tracking and trajectory prediction method, device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN107481270B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101282461A (en) * 2007-04-02 2008-10-08 财团法人工业技术研究院 Image processing methods
CN101256673A (en) * 2008-03-18 2008-09-03 中国计量学院 Method for tracing arm motion in real time video tracking system
CN101458434A (en) * 2009-01-08 2009-06-17 浙江大学 System for precision measuring and predicting table tennis track and system operation method
CN107690840B (en) * 2009-06-24 2013-07-31 中国科学院自动化研究所 Unmanned plane vision auxiliary navigation method and system
CN106022527A (en) * 2016-05-27 2016-10-12 河南明晰信息科技有限公司 Trajectory prediction method and device based on map tiling and LSTM cyclic neural network
CN106485226A (en) * 2016-10-14 2017-03-08 杭州派尼澳电子科技有限公司 A kind of video pedestrian detection method based on neutral net
CN106780620A (en) * 2016-11-28 2017-05-31 长安大学 A kind of table tennis track identification positioning and tracking system and method

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108062763A (en) * 2017-12-29 2018-05-22 纳恩博(北京)科技有限公司 Method for tracking target and device, storage medium
CN108062763B (en) * 2017-12-29 2020-10-16 纳恩博(北京)科技有限公司 Target tracking method and device and storage medium
CN108053653A (en) * 2018-01-11 2018-05-18 广东蔚海数问大数据科技有限公司 Vehicle behavior Forecasting Methodology and device based on LSTM
CN108053653B (en) * 2018-01-11 2021-03-30 广东蔚海数问大数据科技有限公司 Vehicle behavior prediction method and device based on LSTM
CN110362098B (en) * 2018-03-26 2022-07-05 北京京东尚科信息技术有限公司 Unmanned aerial vehicle visual servo control method and device and unmanned aerial vehicle
CN110362098A (en) * 2018-03-26 2019-10-22 北京京东尚科信息技术有限公司 Unmanned plane vision method of servo-controlling, device and unmanned plane
CN108465224A (en) * 2018-04-07 2018-08-31 华北理工大学 Table tennis track analysis system
CN108540817A (en) * 2018-05-08 2018-09-14 成都市喜爱科技有限公司 Video data handling procedure, device, server and computer readable storage medium
CN108986141A (en) * 2018-07-03 2018-12-11 百度在线网络技术(北京)有限公司 Object of which movement information processing method, device, augmented reality equipment and storage medium
CN108876821A (en) * 2018-07-05 2018-11-23 北京云视万维科技有限公司 Across camera lens multi-object tracking method and system
CN108876821B (en) * 2018-07-05 2019-06-07 北京云视万维科技有限公司 Across camera lens multi-object tracking method and system
CN109241856A (en) * 2018-08-13 2019-01-18 浙江零跑科技有限公司 A kind of vehicle-mounted vision system solid object detection method of monocular
CN110956644B (en) * 2018-09-27 2023-10-10 杭州海康威视数字技术股份有限公司 Motion trail determination method and system
CN110956644A (en) * 2018-09-27 2020-04-03 杭州海康威视数字技术股份有限公司 Motion trail determination method and system
CN111028287A (en) * 2018-10-09 2020-04-17 杭州海康威视数字技术股份有限公司 Method and device for determining transformation matrix of radar coordinates and camera coordinates
CN111028287B (en) * 2018-10-09 2023-10-20 杭州海康威视数字技术股份有限公司 Method and device for determining a transformation matrix of radar coordinates and camera coordinates
CN109559332A (en) * 2018-10-31 2019-04-02 浙江工业大学 A kind of sight tracing of the two-way LSTM and Itracker of combination
CN109559332B (en) * 2018-10-31 2021-06-18 浙江工业大学 Sight tracking method combining bidirectional LSTM and Itracker
CN109711274A (en) * 2018-12-05 2019-05-03 斑马网络技术有限公司 Vehicle checking method, device, equipment and storage medium
CN111291585B (en) * 2018-12-06 2023-12-08 杭州海康威视数字技术股份有限公司 GPS-based target tracking system, method and device and ball machine
CN111291585A (en) * 2018-12-06 2020-06-16 杭州海康威视数字技术股份有限公司 Target tracking system, method and device based on GPS and dome camera
US11348304B2 (en) 2019-05-06 2022-05-31 Shenzhen University Posture prediction method, computer device and storage medium
WO2020223940A1 (en) * 2019-05-06 2020-11-12 深圳大学 Posture prediction method, computer device and storage medium
CN110111358A (en) * 2019-05-14 2019-08-09 西南交通大学 A kind of method for tracking target based on multilayer temporal filtering
CN110111358B (en) * 2019-05-14 2022-05-24 西南交通大学 Target tracking method based on multilayer time sequence filtering
CN110340901A (en) * 2019-06-28 2019-10-18 深圳盈天下视觉科技有限公司 A kind of control method, control device and terminal device
CN110458281A (en) * 2019-08-02 2019-11-15 中科新松有限公司 The deeply study rotation speed prediction technique and system of ping-pong robot
CN110517292A (en) * 2019-08-29 2019-11-29 京东方科技集团股份有限公司 Method for tracking target, device, system and computer readable storage medium
US11455735B2 (en) 2019-08-29 2022-09-27 Beijing Boe Technology Development Co., Ltd. Target tracking method, device, system and non-transitory computer readable storage medium
CN110827320A (en) * 2019-09-17 2020-02-21 北京邮电大学 Target tracking method and device based on time sequence prediction
CN110827320B (en) * 2019-09-17 2022-05-20 北京邮电大学 Target tracking method and device based on time sequence prediction
CN110796093A (en) * 2019-10-30 2020-02-14 上海眼控科技股份有限公司 Target tracking method and device, computer equipment and storage medium
CN111369629A (en) * 2019-12-27 2020-07-03 浙江万里学院 Ball return trajectory prediction method based on binocular visual perception of swinging, shooting and hitting actions
CN111546332A (en) * 2020-04-23 2020-08-18 上海电机学院 Table tennis robot system based on embedded equipment and application
CN111939541A (en) * 2020-06-23 2020-11-17 北京瑞盖科技股份有限公司 Evaluation method, device, equipment and system for table tennis training
CN113160275B (en) * 2021-04-21 2022-11-08 河南大学 Automatic target tracking and track calculating method based on multiple videos
CN113160275A (en) * 2021-04-21 2021-07-23 河南大学 Automatic target tracking and track calculating method based on multiple videos
CN113253755A (en) * 2021-05-08 2021-08-13 广东白云学院 Neural network-based rotor unmanned aerial vehicle tracking algorithm
CN114589719A (en) * 2022-04-02 2022-06-07 中国电子科技集团公司第五十八研究所 Real-time calibration and calibration system and method for table tennis serving robot
CN114589719B (en) * 2022-04-02 2024-03-08 中国电子科技集团公司第五十八研究所 Real-time calibration and calibration system and method for table tennis service robot
CN114612522A (en) * 2022-05-09 2022-06-10 广州精天信息科技股份有限公司 Table tennis sport parameter detection method and device and table tennis training auxiliary system
CN114612522B (en) * 2022-05-09 2023-01-17 广东金融学院 Table tennis sport parameter detection method and device and table tennis training auxiliary system
CN115278194A (en) * 2022-09-22 2022-11-01 山东省青东智能科技有限公司 Image data processing method based on 3D industrial camera
CN115965658A (en) * 2023-03-16 2023-04-14 江西工业贸易职业技术学院 Ball motion trajectory prediction method and system, electronic device and storage medium
CN116504068A (en) * 2023-06-26 2023-07-28 创辉达设计股份有限公司江苏分公司 Statistical method, device, computer equipment and storage medium for lane-level traffic flow
CN117237409A (en) * 2023-09-06 2023-12-15 广州飞漫思维数码科技有限公司 Shooting game sight correction method and system based on Internet of things

Also Published As

Publication number Publication date
CN107481270B (en) 2020-05-19

Similar Documents

Publication Publication Date Title
CN107481270A (en) Table tennis target following and trajectory predictions method, apparatus, storage medium and computer equipment
Hossain et al. Crowd counting using scale-aware attention networks
CN109800689B (en) Target tracking method based on space-time feature fusion learning
Luo et al. 3d-ssd: Learning hierarchical features from rgb-d images for amodal 3d object detection
CN109816686A (en) Robot semanteme SLAM method, processor and robot based on object example match
CN108446585A (en) Method for tracking target, device, computer equipment and storage medium
WO2017150032A1 (en) Method and system for detecting actions of object in scene
Yu et al. An object-based visual attention model for robotic applications
Zhao et al. Stochastic human segmentation from a static camera
CN109948526A (en) Image processing method and device, detection device and storage medium
CN110533687A (en) Multiple target three-dimensional track tracking and device
CN111199556A (en) Indoor pedestrian detection and tracking method based on camera
CN114036969B (en) 3D human body action recognition algorithm under multi-view condition
CN109063549A (en) High-resolution based on deep neural network is taken photo by plane video moving object detection method
CN112927264A (en) Unmanned aerial vehicle tracking shooting system and RGBD tracking method thereof
Han et al. Fully conventional anchor-free siamese networks for object tracking
Li et al. Video-based table tennis tracking and trajectory prediction using convolutional neural networks
Arbués-Sangüesa et al. Single-camera basketball tracker through pose and semantic feature fusion
Sokolova et al. Human identification by gait from event-based camera
CN107948586A (en) Trans-regional moving target detecting method and device based on video-splicing
Chen et al. Stingray detection of aerial images with region-based convolution neural network
Wang et al. Research and implementation of the sports analysis system based on 3D image technology
Yang et al. Design and implementation of intelligent analysis technology in sports video target and trajectory tracking algorithm
Wu et al. 3d semantic vslam of dynamic environment based on yolact
Ning et al. Enhancing embedded AI-based object detection using multi-view approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant