CN114926901A - Motion detection method, motion detection device, electronic device, and storage medium


Info

Publication number
CN114926901A
CN114926901A
Authority
CN
China
Prior art keywords
track
motion
current
grid
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210556500.8A
Other languages
Chinese (zh)
Inventor
史璇珂
王权
王睿
王超
郑龙澍
钱晨
杨奇勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing National Aquatics Center Co ltd
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing National Aquatics Center Co ltd
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing National Aquatics Center Co ltd, Beijing Sensetime Technology Development Co Ltd filed Critical Beijing National Aquatics Center Co ltd
Priority to CN202210556500.8A
Publication of CN114926901A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

The disclosure relates to the technical field of computer vision, and in particular provides a motion detection method and apparatus, an electronic device, and a storage medium. The motion detection method includes: performing target detection on acquired video stream data to determine the current motion track of a target object in a target area; determining current track grid data corresponding to the current motion track according to the current motion track and grid data corresponding to the target area; and determining a motion detection result of the target object according to the current track grid data and historical track grid data. According to embodiments of the disclosure, the effect of an object's motion can be objectively and effectively quantified and expressed, improving the motion detection effect.

Description

Motion detection method, motion detection device, electronic device, and storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a motion detection method and apparatus, an electronic device, and a storage medium.
Background
Motion detection is one of the most widely used tasks in the field of Computer Vision (CV). However, motion detection tasks in the related art often focus only on recognizing motion. For example, abnormal behavior detection and motion trajectory detection focus solely on identifying motion behavior from video data and provide no further information about the motion behavior itself, such as an expression of the motion effect or an analysis of the motion. The applicability of motion detection in the related art is therefore limited.
Disclosure of Invention
The embodiment of the disclosure provides a motion detection method and device, electronic equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a motion detection method, including:
acquiring video stream data to be processed;
carrying out target detection on the video stream data, and determining the current motion track of a target object in a target area;
determining current track grid data corresponding to the current motion track according to the current motion track and the grid data corresponding to the target area;
determining a motion detection result of the target object according to the current track grid data and the historical track grid data; the historical track grid data comprises track grid data corresponding to at least one historical motion track of the target object, and the motion detection result is used for reflecting the relationship between the current motion track and at least one historical motion track.
In some embodiments, the grid data corresponding to the target area is determined as follows:
determining a target scale of the grid cells according to the size of the target object;
and performing gridding processing on the target area using grid cells of the target scale to obtain the grid data corresponding to the target area.
In some embodiments, the grid data corresponding to the target area comprises a plurality of grid cells, and determining the current track grid data corresponding to the current motion track according to the current motion track and the grid data corresponding to the target area comprises:
determining grid cells passed by the current motion track based on the grid data;
and determining current track grid data corresponding to the current motion track according to the grid unit through which the current motion track passes.
In some embodiments, determining the motion detection result of the target object according to the current track grid data and the historical track grid data comprises:
according to the current track grid data and track grid data corresponding to any historical motion track, obtaining a first track coincidence degree of the current motion track and any historical motion track; the motion detection result of the target object comprises the first track coincidence degree.
In some embodiments, the determining a motion detection result of the target object according to the current track grid data and the historical track grid data includes:
obtaining a second track coincidence degree according to the current track grid data and track frequency data obtained based on the historical track grid data; the motion detection result of the target object comprises the second track coincidence degree.
In some embodiments, the process of obtaining the trajectory frequency data based on the historical trajectory grid data comprises:
determining the frequency of the target object passing through each grid unit in the grid data according to historical track grid data;
and obtaining the track frequency data according to the frequency of the target object passing through each grid unit.
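The frequency computation described above can be sketched as follows. This is a minimal illustration under the assumption that each historical trajectory has already been converted to a list of (row, col) grid-cell indices; the names used here are hypothetical, not from the disclosure.

```python
from collections import Counter

def trajectory_frequency(historical_trajectories):
    """Count how many historical trajectories passed through each grid cell.

    `historical_trajectories` is assumed to be a list of trajectories,
    each given as a list of (row, col) grid-cell indices.
    """
    freq = Counter()
    for trajectory in historical_trajectories:
        # Count each cell at most once per trajectory, even if revisited.
        freq.update(set(trajectory))
    return freq

history = [
    [(0, 0), (0, 1), (1, 1)],
    [(0, 0), (1, 0), (1, 1)],
]
freq = trajectory_frequency(history)
# Cells (0, 0) and (1, 1) lie on both historical trajectories.
```

The resulting counter is one possible realization of the "track frequency data": cells with high counts mark the regions of the target area the object habitually passes through.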
In some embodiments, the method further comprises:
updating the track frequency data according to the current track grid data of the current motion track to obtain updated track frequency data; and the motion detection result of the target object comprises the updated track frequency data.
In some embodiments, the target object comprises a curling stone, and the target area comprises a curling field.
In a second aspect, the present disclosure provides a motion detection apparatus, including:
the video acquisition module is configured to acquire video stream data to be processed;
the target detection module is configured to perform target detection on the video stream data and determine the current motion track of a target object in a target area;
a track grid module configured to determine current track grid data corresponding to the current motion track according to the current motion track and the grid data corresponding to the target area;
a result determination module configured to determine a motion detection result of the target object according to the current trajectory grid data and the historical trajectory grid data; the historical track grid data comprises track grid data corresponding to at least one historical motion track of the target object, and the motion detection result is used for reflecting the relationship between the current motion track and at least one historical motion track.
In some embodiments, the motion detection apparatus of the present disclosure further comprises a mesh processing module configured to:
determining a target scale of the grid cells according to the size of the target object;
and performing gridding processing on the target area using grid cells of the target scale to obtain grid data corresponding to the target area.
In some embodiments, the trajectory grid module is configured to:
determining grid cells passed by the current motion track based on the grid data;
and determining current track grid data corresponding to the current motion track according to the grid unit through which the current motion track passes.
In some embodiments, the result determination module is configured to:
according to the current track grid data and track grid data corresponding to any historical motion track, obtaining a first track coincidence degree of the current motion track and any historical motion track; the motion detection result of the target object comprises the first track coincidence degree.
In some embodiments, the result determination module is configured to:
obtaining a second track coincidence degree according to the current track grid data and track frequency data obtained based on the historical track grid data; the motion detection result of the target object comprises the second track coincidence degree.
In some embodiments, the motion detection apparatus of the present disclosure further comprises a trajectory frequency module configured to:
determining the frequency of the target object passing through each grid unit in the grid data according to historical track grid data;
and obtaining the track frequency data according to the frequency of the target object passing through each grid unit.
In some embodiments, the result determination module is configured to:
updating the track frequency data according to the current track grid data of the current motion track to obtain updated track frequency data; and the motion detection result of the target object comprises the updated track frequency data.
In some embodiments, the target object comprises a curling stone, and the target area comprises a curling field.
In a third aspect, the disclosed embodiments provide an electronic device, including:
a processor; and
a memory storing computer instructions for causing the processor to perform the method according to any embodiment of the first aspect.
In a fourth aspect, the present disclosure provides a storage medium storing computer instructions for causing a computer to execute the method according to any embodiment of the first aspect.
The motion detection method of the embodiment of the disclosure performs target detection on video stream data to be processed, determines the current motion track of a target object in a target area, determines track grid data corresponding to the current motion track according to the current motion track and the grid data corresponding to the target area, and determines the motion detection result of the target object according to the current track grid data and the historical track grid data. In the embodiment of the disclosure, the grid data is obtained by gridding the target area; the motion track of the target object can be effectively quantified and expressed using the grid data, and the motion detection result of the current motion track is determined by comparing the current track grid data with the historical track grid data, so that the motion effect of the current motion process of the target object can be accurately reflected and the motion detection effect improved.
Drawings
In order to illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present disclosure; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic diagram of a motion detection system according to some embodiments of the present disclosure.
Fig. 2 is a flow chart of a motion detection method in some embodiments according to the present disclosure.
Fig. 3 is a schematic diagram of a motion detection method according to some embodiments of the present disclosure.
Fig. 4 is a schematic diagram of a motion detection method according to some embodiments of the present disclosure.
Fig. 5 is a flow chart of a motion detection method in some embodiments according to the present disclosure.
Fig. 6 is a flow chart of a motion detection method in some embodiments according to the present disclosure.
FIG. 7 is a flow chart of a method of motion detection in some embodiments according to the present disclosure.
FIG. 8 is a schematic diagram of a method of motion detection in some embodiments according to the present disclosure.
FIG. 9 is a flow chart of a method of motion detection in some embodiments according to the present disclosure.
FIG. 10 is a schematic diagram of a motion detection method according to some embodiments of the present disclosure.
FIG. 11 is a schematic diagram of a motion detection method in some embodiments according to the present disclosure.
Fig. 12 is a flow chart of a motion detection method in some embodiments according to the present disclosure.
Fig. 13 is a flow chart of a motion detection method in some embodiments according to the present disclosure.
Fig. 14 is a block diagram of a motion detection device according to some embodiments of the present disclosure.
Fig. 15 is a block diagram of a motion detection device according to some embodiments of the present disclosure.
FIG. 16 is a block diagram of an electronic device in some embodiments according to the present disclosure.
Detailed Description
The technical solutions of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. It is to be understood that the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by one of ordinary skill in the art from the disclosed embodiments without creative effort fall within the protection scope of the present disclosure. In addition, the technical features involved in the different embodiments of the present disclosure described below may be combined with each other as long as they do not conflict.
Motion detection, which is one of the important tasks in the field of Computer Vision (CV), refers to recognizing the motion of an object by detecting the motion trajectory of the same object on a video stream sequence.
The motion detection scenarios in the related art often focus only on the identification of motion and do not provide further useful information about the motion behavior itself. For example, in the trajectory detection task, the related art focuses only on identifying the motion trajectory of an object from video stream data, while the effect of the object's motion is given no effective informational expression. Such an expression, however, is of great significance for detecting and analyzing the motion behavior of an object.
For example, taking a curling scene: when an athlete delivers a curling stone, the stone slides on the ice surface until it comes to rest, and the resulting motion trajectory directly reflects the quality of the delivery.
For example, in another example, taking a motion performance detection scenario of a mobile robot as an example, a motion trajectory generated by the robot during one or more movements may directly reflect the moving performance of the robot, such as whether a trajectory deviation or a trajectory error occurs.
It can be seen that these scenarios require not only recognizing the motion of an object but also obtaining an effective expression of the object's motion effect through motion detection. At present, however, computer-vision-based motion detection still cannot effectively cover such scenarios, so many practical applications of motion detection remain out of reach.
Based on this, the disclosed embodiments provide a motion detection method, a motion detection apparatus, an electronic device, and a storage medium, which aim to effectively and quantitatively express a motion effect of an object based on a motion trajectory of the moving object, and can more accurately reflect the motion effect of the object.
The embodiment of the disclosure provides a motion detection method, which can be applied to electronic equipment. The electronic device disclosed in the present disclosure may be any device type suitable for implementation, such as a mobile terminal, a wearable device, an in-vehicle device, a personal computer, a server, a cloud platform, and the like, which is not limited in the present disclosure.
Fig. 1 shows a schematic structure diagram of a motion detection system 100 in some embodiments of the present disclosure, where the motion detection system 100 may be deployed in a scene requiring motion detection, such as a curling sports field and a robot motion detection field.
As shown in fig. 1, in some implementations, a motion detection system 100 of examples of the present disclosure includes a video capture device 110 and an electronic device 120.
The video capture device 110 may be a camera disposed in a real scene. For example, in one example, the video capture device 110 may be a camera disposed in a curling arena, so that video stream data of the sports scene can be captured by the video capture device 110. In the embodiment of the present disclosure, the number, positions, and viewing angles of the video capture devices 110 are not particularly limited and may be set according to the requirements of the corresponding scene, which is not limited by the present disclosure.
The electronic device 120 may establish a communication connection with the video capture device 110 in a wireless or wired manner, so that the video stream data captured by the video capture device 110 may be received. After obtaining the video stream data, the electronic device 120 may perform motion detection on the video stream data by using the method of the embodiment of the present disclosure to obtain a motion detection result of the moving object. The following description is made with reference to the embodiment of fig. 2.
As shown in fig. 2, in some embodiments, a motion detection method of an example of the present disclosure includes:
s210, video stream data to be processed is obtained.
In the embodiment of the present disclosure, the video stream data of the current scene may be captured by the video capture device 110 shown in fig. 1, for example. For example, in one example, the video capture device 110 is a camera located in a sports venue, such that the video stream data captured by the video capture device 110 may be video of a scene occurring in the venue.
The electronic device 120 may receive video stream data sent by the video capture device 110, where the video stream data is to-be-processed video stream data according to the present disclosure.
And S220, carrying out target detection on the video stream data, and determining the current motion track of the target object in the target area.
The target object refers to a moving object to be subjected to motion detection, and the target object may be any movable object suitable for implementation according to different application scenes. The target area refers to a moving range of the target object, in some scenarios, the target object moves within a preset moving range, and the capturing range of the video capturing device 110 may include the entire target area, so that during the movement of the target object, the complete video stream data may be captured.
For example, in a curling scene, the target object is a curling stone and the target area is a curling field. The athlete delivers the stone, which slides freely within the curling field, so that the video stream data collected by the video capture device 110 records a complete delivery.
For example, in another example, taking the detection of the movement performance of the robot as an example, the target object is the robot, and the target area is the detection area. The robot can move within the detection area, so that the content of the video stream data collected by the video collecting device 110 can record one movement process of the robot.
Of course, those skilled in the art will understand that the application scenario of the method of the present disclosure is not limited to the above examples, and the present disclosure does not enumerate this again.
In the embodiment of the present disclosure, the target detection is performed on the video stream data based on a video detection technology, so that the current motion trajectory of the target object in the video stream data can be identified and obtained.
It will be appreciated that the current motion trajectory is defined relative to the historical motion trajectories, and the movement of the target object may comprise a plurality of runs. For example, in a curling scene, an athlete may perform several deliveries on the field, so the stone may likewise produce several motion trajectories in the scene. In the embodiment of the present disclosure, the current motion trajectory may be any single trajectory identified from the video stream data and is not limited to a trajectory generated at the current time.
In some embodiments, the position of the target object on each frame of image in the video stream data may be tracked, the image coordinates of the target object on each frame of image may be determined, and then the complete moving path of the target object in the moving process, that is, the moving track of the target object, may be obtained based on the time sequence information of the image sequence. The motion trail of the target object includes the image coordinates of the target object on each frame of image.
For the principle and process of obtaining the motion trajectory of the target object by video detection, those skilled in the art can understand and fully implement the motion trajectory by referring to the related technologies, and the details of the disclosure are not repeated.
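The trajectory-assembly step described above can be sketched minimally as follows, assuming per-frame tracking has already produced image coordinates for the target object (the function and variable names here are hypothetical illustrations, not part of the disclosure):

```python
def build_trajectory(detections):
    """Assemble the motion trajectory from per-frame detection results.

    `detections` maps frame index -> (x, y) image coordinates of the
    tracked target object, or None for frames where detection failed.
    The trajectory is the time-ordered list of observed coordinates.
    """
    return [xy for _, xy in sorted(detections.items()) if xy is not None]

detections = {0: (12.0, 40.0), 2: (14.5, 38.0), 1: None, 3: (17.0, 36.5)}
trajectory = build_trajectory(detections)
# Time-ordered coordinates, with the missed frame skipped.
```

In a real pipeline the detections would come from an object detector plus tracker; this sketch only shows how the time-sequence information of the image sequence turns per-frame coordinates into a complete moving path.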
And S230, determining current track grid data corresponding to the current motion track according to the current motion track and the grid data corresponding to the target area.
In the embodiment of the present disclosure, the target area may be subjected to gridding in advance, that is, the target area where the target object moves may be divided into a plurality of grid cells. The grid data includes image coordinates of each grid cell, and the image coordinates refer to position coordinates of the grid cells in an image coordinate system.
In some embodiments, an image of a scene including a target area may be captured by, for example, the video capture device 110 shown in fig. 1, an image range of the target area is determined from the image of the scene through image recognition, and then the image range of the target area is divided into a grid map in grid units of a preset scale size.
For example, as shown in fig. 3, taking the curling scene as an example, the curling field may be as shown in (a) of fig. 3; in the example of fig. 3, the curling field is the target area. In the embodiment of the present disclosure, after performing the gridding process on the target area, the grid map shown in fig. 3 (b) may be obtained, where the grid map includes m × n grid cells and the grid data includes the image coordinates corresponding to each grid cell.
In the embodiment of the present disclosure, after the current motion trajectory is obtained by detecting the video stream data, current trajectory mesh data corresponding to the current motion trajectory may be determined according to the current motion trajectory and pre-established mesh data.
In the embodiment of the present disclosure, all the grid units that the current motion trajectory passes through may be determined according to the image coordinates corresponding to the current motion trajectory and the image coordinates corresponding to each grid unit in the grid data, and the trajectory grid data corresponding to the current motion trajectory may be obtained according to the grid units that the current motion trajectory passes through. That is, the trajectory mesh data includes mesh cells through which the motion trajectory passes and image coordinates corresponding to the mesh cells.
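As a concrete illustration of this mapping, the following is a minimal sketch assuming the grid origin coincides with the target area's top-left corner in the image and that the trajectory is sampled densely enough that consecutive points fall in the same or adjacent cells (sparser trajectories would need line rasterization between points). All names are illustrative, not from the disclosure.

```python
def trajectory_grid_cells(trajectory, cell_w, cell_h, m, n):
    """Map trajectory image coordinates onto the m x n grid cells they pass.

    Each cell is assumed to be cell_w x cell_h pixels. Returns the
    ordered cells with consecutive duplicates collapsed.
    """
    cells = []
    for x, y in trajectory:
        # Clamp so points on the far boundary still map to the last cell.
        cell = (min(int(y // cell_h), m - 1), min(int(x // cell_w), n - 1))
        if not cells or cells[-1] != cell:
            cells.append(cell)
    return cells

track = [(5.0, 5.0), (15.0, 5.0), (15.0, 15.0), (18.0, 16.0)]
cells = trajectory_grid_cells(track, 10, 10, 4, 4)
# (18.0, 16.0) lands in the same cell as (15.0, 15.0), so it is collapsed.
```

The returned cell list, together with each cell's image coordinates, would constitute the trajectory grid data described above.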
For example, as shown in fig. 4, the motion trajectory of the curling stone during one delivery is shown in fig. 4 (a). In an embodiment of the present disclosure, the trajectory grid data corresponding to the motion trajectory may be determined based on the image coordinates of the motion trajectory and the image coordinates of each grid cell in the grid data; that is, as shown in fig. 4 (b), the trajectory grid data includes the grid cells passed through by the motion trajectory and the image coordinates of those cells.
And S240, determining a motion detection result of the target object according to the current track grid data and the historical track grid data.
In the embodiment of the present disclosure, the historical trajectory grid data includes trajectory grid data corresponding to at least one historical motion trajectory of the target object. The motion detection result of the current motion track of the target object can be determined according to the current track grid data corresponding to the current motion track and the track grid data corresponding to at least one historical motion track.
The motion detection result of the target object can reflect the relationship between the current motion track and at least one historical motion track. Specifically, the motion detection result includes, but is not limited to: a first track coincidence degree between the current motion track and any single historical motion track, and a second track coincidence degree between the current motion track and all historical motion tracks.
Taking the curling scene as an example: during a delivery, an athlete judges the friction conditions of the ice along the stone's path from the effect of the current delivery and from personal experience, and uses this judgment to adjust the next delivery accordingly.
It can be seen that in this scenario the motion analysis of the stone's current trajectory is of great significance, but in the related art it depends solely on the athlete's personal experience, and the delivery effect cannot be accurately quantified. In the embodiment of the present disclosure, a motion detection result that quantitatively expresses the effect of the current motion trajectory is obtained from the current trajectory grid data of the current motion trajectory and the trajectory grid data of at least one historical motion trajectory.
In some embodiments, a first track coincidence degree may be calculated from the current track grid data of the current motion track and the track grid data of a single historical motion track. Alternatively, a second track coincidence degree may be calculated from the current track grid data of the current motion track and the track grid data of all historical motion tracks. The first track coincidence degree and/or the second track coincidence degree may be used as the motion detection result of the current motion track and may reflect the motion effect of the current motion track, for example, whether a track deviation occurs.
Still taking the aforementioned curling scene as an example, based on the motion detection method of the present disclosure, the coincidence degree between the current trajectory grid data of the current delivery and the historical trajectory grid data of the previous delivery can be calculated. This coincidence degree quantitatively expresses the athlete's current delivery effect: a higher coincidence degree means a smaller delivery error, while a lower coincidence degree means a larger error, which can effectively guide the athlete in making adjustments.
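One way to realize such a coincidence degree, sketched here only as a plausible formulation (the disclosure defers the exact calculation to later embodiments), is the Jaccard overlap of the grid-cell sets two trajectories pass through:

```python
def trajectory_coincidence(cells_a, cells_b):
    """Coincidence degree of two trajectories, computed here as the
    Jaccard overlap of the grid-cell sets they pass through. This is
    one plausible formulation, not the disclosure's fixed formula.
    """
    a, b = set(cells_a), set(cells_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

current = [(0, 0), (0, 1), (1, 1), (1, 2)]
previous = [(0, 0), (1, 0), (1, 1), (1, 2)]
score = trajectory_coincidence(current, previous)
# 3 shared cells out of 5 distinct cells in total.
```

A score near 1 would indicate the two deliveries traced nearly the same path; a score near 0 would indicate a large deviation between them.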
The process of calculating the first track coincidence degree and the second track coincidence degree is explained in the following embodiments of the present disclosure, and will not be described in detail here.
Of course, the motion detection result of the embodiment of the present disclosure is not limited to the first coincidence degree and the second coincidence degree mentioned in the above example, and may also include any other detection result suitable for expressing the motion effect, such as track frequency data obtained from a historical motion track, and the like, which is not limited by the present disclosure.
It can be understood that, in the embodiment of the present disclosure, the motion detection result obtained from the current trajectory grid data of the current motion trajectory and the trajectory grid data of the historical motion trajectories can effectively express the motion of the target object in a quantitative manner. For example, for the curling trajectory in the example above, the disclosed method can quantitatively express the athlete's current delivery effect, so that training can be scientifically guided according to the motion detection result.
Of course, the above is only an example of a curling scene, and the disclosed method is in fact not limited to curling. For example, in the movement performance detection scene of a mobile robot, the track coincidence degree obtained by the method from the current track grid data of the robot's current moving track and the historical track grid data can effectively reflect the robot's movement performance: a lower coincidence degree indicates a larger movement error and thus poorer movement performance, and vice versa. The disclosed method can therefore also quantitatively express the movement performance of the robot, so that researchers can optimize and improve it in a targeted way according to the motion detection result.
Therefore, in the embodiment of the present disclosure, the grid data is obtained by gridding the target area; the motion trajectory of the target object can be effectively quantified and expressed using the grid data, and the motion detection result of the current motion trajectory is determined by comparing the current trajectory grid data with the historical trajectory grid data, so that the motion effect of the current motion process of the target object can be accurately reflected and the motion detection effect improved.
In the embodiment of the present disclosure, before detecting the motion of the target object, the target area in which the target object moves needs to be gridded, which will be described below with reference to fig. 5.
As shown in fig. 5, in some embodiments, the process of gridding the target area to obtain the grid data includes:
And S510, determining the target scale of the grid cell according to the size of the target object.
And S520, performing gridding processing on the target area according to grid cells of the target scale to obtain grid data corresponding to the target area.
In an example, still taking the curling motion scene as an example, the target object is a curling, and the target area is a curling field shown in (a) in fig. 3, that is, in the embodiment of the present disclosure, the curling field needs to be gridded.
In some embodiments, an image of a scene including a curling field may be captured by, for example, the video capture device 110 shown in fig. 1, and then an image range of the curling field may be determined from the image of the scene through image recognition. The size of the grid cells may be determined prior to gridding the image range of the curling scene.
In the embodiment of the present disclosure, the target scale of the grid cell may be determined according to the size of the target object. It can be understood that the smaller the target scale of the grid cell, the finer the partition of the target area and the more accurate the obtained trajectory grid data, but the larger the amount of data the system must process, and vice versa.
Therefore, in some embodiments, the scale of the grid cell in the world coordinate system can be set substantially consistent with or close to the size of the curling stone, so that precision and operation speed are balanced.
After the scale of the grid cell in the world coordinate system is determined, the target scale of the grid cell in the image coordinate system can be obtained based on the mapping relationship between the world coordinate system and the image coordinate system of the video capture device 110, and then the grid cell of the target scale is used for gridding the image range of the curling scene.
For example, as shown in fig. 3 (b), the curling field is gridded by using grid cells of the target scale to obtain the m × n grid map shown in the figure, and the grid data corresponding to the grid map includes the image coordinates of each grid cell in the image coordinate system.
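As a minimal illustrative sketch (not the patent's implementation), gridding an axis-aligned image range of the field might look as follows; the function name, the rectangle coordinates, and the 50-pixel cell scale are all assumptions for illustration:

```python
def grid_target_area(x0, y0, x1, y1, cell):
    """Divide an axis-aligned image region into square cells of the given
    pixel scale; return a dict mapping (row, col) -> cell-center image coordinates."""
    rows = (y1 - y0) // cell
    cols = (x1 - x0) // cell
    return {(r, c): (x0 + c * cell + cell / 2.0, y0 + r * cell + cell / 2.0)
            for r in range(rows) for c in range(cols)}

# A hypothetical 400 x 200 px field image range with 50 px cells gives a 4 x 8 grid map.
grid = grid_target_area(0, 0, 400, 200, 50)
```

In practice the cell scale would first be fixed in the world coordinate system (close to the stone's size, as described above) and then mapped into pixels through the camera's world-to-image mapping.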
Therefore, in the embodiment of the present disclosure, by performing gridding processing on the target area of the object motion, the motion trajectory of the object can be effectively quantized and expressed by using the grid data, so as to obtain an accurate motion detection result.
After the grid data of the target area is obtained, the grid data may be stored in the electronic device 120, so that when the motion trajectory of the target object is subsequently detected, the grid data can be retrieved to determine the trajectory grid data corresponding to the current motion trajectory of the target object. The following description is made with reference to the embodiment of fig. 6.
As shown in fig. 6, in some embodiments, in the motion detection method of the examples of the present disclosure, the process of determining the current trajectory grid data according to the current motion trajectory and the grid data includes:
and S610, determining the grid unit passed by the current motion trail based on the grid data.
And S620, determining current track grid data corresponding to the current motion track according to the grid units passed by the current motion track.
It is understood that each grid cell in the grid data represents a partial area of the target area; therefore, a motion trajectory generated by the target object moving in the target area passes through some of the grid cells.
Therefore, in the embodiment of the present disclosure, after determining the current motion trajectory of the target object, each grid cell passed by the current motion trajectory is obtained according to the image coordinates of the current motion trajectory in the image coordinate system and the image coordinates of each grid cell in the grid data in the image coordinate system, and the current trajectory grid data includes each grid cell passed by the current motion trajectory and the image coordinates of each passed grid cell.
For example, in the curling scene shown in fig. 4, the current motion trajectory of the athlete's current stone delivery is shown in (a) in fig. 4; the grid cells passed by the current motion trajectory are marked based on the current motion trajectory and the grid cells, and the obtained current trajectory grid data is shown in (b) in fig. 4. The current trajectory grid data includes the traversed grid cells (shaded portions in fig. 4) and the image coordinates corresponding to these grid cells.
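A sketch of the cell-marking step under the same illustrative assumptions (the trajectory is given as densely sampled image points; a real implementation would interpolate between samples so no traversed cell is skipped):

```python
def cells_traversed(trajectory, cell):
    """Map a trajectory, a list of (x, y) image points, to the set of
    (row, col) grid cells it passes through."""
    return {(int(y // cell), int(x // cell)) for x, y in trajectory}

track = [(10, 10), (60, 12), (110, 15)]   # three sampled points of one trajectory
cells = cells_traversed(track, 50)        # {(0, 0), (0, 1), (0, 2)}
```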
Therefore, in the embodiment of the present disclosure, by performing gridding processing on the target area of the object motion, the motion trajectory of the object can be effectively quantized and expressed by using the grid data, so as to obtain an accurate motion detection result.
In some embodiments, the motion detection result of the target object may include the following data:
1) The coincidence degree between the current motion trajectory and any one historical motion trajectory, namely the first trajectory coincidence degree.
For example, taking the stone delivery scene as an example, the current delivery corresponds to the current trajectory grid data A, and a historical delivery corresponds to the trajectory grid data B, so that the trajectory coincidence degree of the two deliveries, namely the first trajectory coincidence degree, can be obtained according to the current trajectory grid data A and the trajectory grid data B.
2) The trajectory frequency data obtained according to a plurality of historical trajectory grid data.
The trajectory frequency data represents the frequency with which the multiple historical motion trajectories pass through each grid cell in the grid data. Still taking the stone delivery scene as an example, assuming that an athlete performs 10 deliveries in total, each delivery corresponds to one piece of trajectory grid data, each piece comprises a plurality of grid cells, and different pieces may comprise the same or different grid cells; therefore, the frequency of each grid cell is counted over the trajectory grid data of the 10 deliveries to obtain the trajectory frequency data.
3) The second trajectory coincidence degree between the current trajectory grid data of the current motion trajectory and the trajectory frequency data.
Still taking the stone delivery scene as an example, the current delivery corresponds to the current trajectory grid data A, and the multiple historical deliveries correspond to the trajectory frequency data, so that the trajectory coincidence degree of the two, namely the second trajectory coincidence degree, can be obtained according to the current trajectory grid data A and the trajectory frequency data.
Of course, it can be understood by those skilled in the art that the motion detection result is not limited to the above data, and may also include any other data suitable for quantitatively expressing the motion effect, which is not limited by the present disclosure. Next, the determination process of the above three types of motion detection result data will be described.
As shown in fig. 7, in some embodiments, the motion detection method of the present disclosure, obtaining the motion detection result of the target object includes:
and S710, obtaining a first track coincidence degree of the current motion track and any historical motion track according to the current track grid data of the current motion track and the track grid data corresponding to any historical motion track.
And S720, determining the coincidence degree of the first track as the motion detection result of the target object.
In the embodiment of the present disclosure, based on the foregoing motion detection process, trajectory grid data corresponding to each motion trajectory of the target object may be obtained. For example, the current trajectory grid data corresponding to the target object moving along the current motion trajectory T_i is A, and the historical trajectory grid data corresponding to the target object moving along the previous trajectory T_{i-1} is B.
After the trajectory grid data A and B of the two motion trajectories are obtained, the first trajectory coincidence degree is obtained from them: the ratio of the intersection of the grid cells occupied by trajectory grid data A and the grid cells occupied by trajectory grid data B to the union of the two is the first trajectory coincidence degree of the two motion trajectories. The concrete expression is as follows:
first trajectory coincidence degree = |A ∩ B| / |A ∪ B|
Taking the above stone delivery scene as an example, suppose the athlete performs 2 deliveries in sequence; the motion trajectory generated in the second delivery is defined as the "current motion trajectory", and the motion trajectory generated in the first delivery is defined as the "historical motion trajectory".
Based on the foregoing method processes of the present disclosure, the historical trajectory grid data corresponding to the historical motion trajectory may be obtained; for example, the historical motion trajectory is as shown in (a) in fig. 4, and the corresponding historical trajectory grid data A is as shown in (b) in fig. 4.
In addition, based on the foregoing method process, the current trajectory grid data corresponding to the current motion trajectory may be obtained; for example, the current motion trajectory is as shown in (a) of fig. 8, and the corresponding current trajectory grid data B is as shown in (b) of fig. 8.
After the trajectory grid data A and the trajectory grid data B are obtained, the first trajectory coincidence degree of the two can be calculated according to the above formula. In the curling scene, the athlete expects the motion trajectories of two deliveries to coincide closely, so that more prior knowledge can be carried over from the previous delivery. Therefore, a larger first trajectory coincidence degree indicates a better effect of the current delivery, while a smaller first trajectory coincidence degree indicates a poorer effect. That is, the concrete value of the first trajectory coincidence degree quantitatively expresses the motion effect of the athlete's delivery; based on it, the delivery effect can be effectively evaluated and the athlete can be guided to make corresponding adjustments.
Therefore, in the embodiment of the present disclosure, the obtained first trajectory coincidence degree may be determined as the motion detection result of the target object.
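Representing each piece of trajectory grid data as a set of traversed cells, the intersection-over-union ratio described above can be sketched as follows (function and variable names are illustrative, not from the patent):

```python
def first_coincidence(cells_a, cells_b):
    """|A ∩ B| / |A ∪ B| over two sets of traversed grid cells."""
    union = cells_a | cells_b
    return len(cells_a & cells_b) / len(union) if union else 0.0

a = {(0, 0), (0, 1), (1, 1)}   # cells of the current delivery
b = {(0, 1), (1, 1), (1, 2)}   # cells of a historical delivery
first_coincidence(a, b)        # 2 shared cells / 4 cells in the union = 0.5
```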
Of course, it is understood that the coincidence degree of the first trajectory in the embodiments of the present disclosure is not limited to the coincidence degree of the two adjacent motion trajectories, and may also be the coincidence degree of any two motion trajectories, which is not limited by the present disclosure.
Therefore, in the embodiment of the disclosure, the first track coincidence degree of the current motion track and any one of the historical motion tracks is obtained based on the track grid data, so that objective and effective effect quantification and expression can be effectively performed on the motion of the object, and the motion detection accuracy is improved.
As shown in fig. 9, in some embodiments, the motion detection method of the present disclosure, obtaining the trajectory frequency data includes:
s910, according to the historical track grid data, determining the frequency of the target object passing through each grid unit of the grid data.
S920, obtaining track frequency data according to the frequency of the target object passing through each grid unit.
In the embodiment of the present disclosure, as can be seen from the foregoing, the historical trajectory grid data includes trajectory grid data corresponding to one or more historical motion trajectories, and each trajectory grid data includes a plurality of grid cells. Different track grid data may include the same grid cells or different grid cells, so that the frequency of each grid cell in the historical track grid data set is counted, and corresponding track frequency data can be obtained.
Still taking the aforementioned curling scene as an example, assuming that the athlete performs 5 consecutive stone deliveries, the corresponding trajectory grid data are respectively shown in fig. 10.
It is to be understood that each piece of trajectory grid data is a grid region formed by a sequence of grid cells, and the same grid cell may appear in multiple pieces of trajectory grid data. In the embodiment of the present disclosure, the frequency with which each grid cell is traversed by the trajectory grid data may be counted, so as to obtain a trajectory frequency map.
For example, if none of the trajectory grid data of the 5 consecutive deliveries passes through a certain grid cell, the frequency corresponding to that cell is 0; if 2 of the 5 pieces of trajectory grid data pass through the cell, its frequency is 2; if all 5 pieces pass through the cell, its frequency is 5; and so on, the frequency of every grid cell in the grid data can be obtained through statistics.
In this example, after the frequency of each grid cell in the grid data is obtained, the resulting trajectory frequency map may be as shown in fig. 11; the data corresponding to the trajectory frequency map is the trajectory frequency data, and the trajectory frequency data may be determined as a motion detection result.
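The counting step can be sketched as follows, again under the illustrative assumption that each delivery's trajectory grid data is a set of traversed cells:

```python
from collections import Counter

def trajectory_frequency(history):
    """Count, for every grid cell, how many historical trajectories traverse it."""
    freq = Counter()
    for cells in history:   # one set of traversed cells per delivery
        freq.update(cells)
    return freq

history = [{(0, 0), (0, 1)}, {(0, 1), (1, 1)}, {(0, 1)}]
freq = trajectory_frequency(history)   # (0, 1) traversed 3 times, (0, 0) once
```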
It will be appreciated that in a trajectory frequency map such as that shown in fig. 11, the frequency with which each grid cell is traversed can be seen visually, so that a comprehensive analysis of the athlete's multiple delivery trajectories can be performed to obtain the ice conditions or the delivery effect. For example, it can be seen from the trajectory frequency map which ice positions are traversed more frequently, and the friction coefficient of those ice areas is accordingly smaller, and vice versa. As another example, the effect of the athlete's multiple deliveries can be judged from the degree of dispersion of the trajectory frequency map: the lower the dispersion, the higher the coincidence of the multiple trajectories and the better the delivery effect, and vice versa.
Therefore, the trajectory frequency data obtained from the trajectory grid data of the historical motion trajectories can directly count and express multiple movements of the target object, which facilitates analysis of the object's motion effect.
As shown in fig. 12, in some embodiments, the motion detection method of the present disclosure may obtain the motion detection result of the target object by:
S1210, obtaining a second trajectory coincidence degree according to the current trajectory grid data of the current motion trajectory and the trajectory frequency data obtained based on the historical trajectory grid data.
And S1220, determining the second track coincidence degree as the motion detection result of the target object.
In the embodiment of the present disclosure, after the current trajectory grid data corresponding to the current motion trajectory is obtained through the foregoing process, in addition to the first trajectory coincidence degree between the current motion trajectory and any one historical motion trajectory obtained through the foregoing process of the embodiment of fig. 8, a second trajectory coincidence degree can be obtained according to the current trajectory grid data of the current motion trajectory and the previously accumulated trajectory frequency data.
It can be understood that the trajectory frequency data represents the overall situation of one or more historical motion trajectories, and the difference between the current motion trajectory and the overall motion situation over the past period can be reflected by calculating the second trajectory coincidence degree between the current trajectory grid data of the current motion trajectory and the trajectory frequency data.
In some embodiments, the current trajectory grid data C corresponding to the current motion trajectory may be obtained through the foregoing method flow, and the second trajectory coincidence degree of the current trajectory grid data C and the trajectory frequency data is obtained from them: the ratio of the intersection of the grid cells occupied by trajectory grid data C and the grid cells included in the trajectory frequency data to the grid cells occupied by trajectory grid data C is the second trajectory coincidence degree. The concrete expression is as follows:
second trajectory coincidence degree = |C ∩ F| / |C|, where F denotes the set of grid cells included in the trajectory frequency data
Still taking the aforementioned curling scene as an example, the trajectory frequency data obtained from the 5 historical deliveries can be as shown in fig. 11, which is not described again here. On the basis of the embodiment shown in fig. 11, the athlete performs the 6th delivery, and the motion trajectory generated in this delivery is defined as the "current motion trajectory".
Based on the foregoing method process, the current trajectory grid data C corresponding to the current motion trajectory can be obtained, and then the second trajectory coincidence degree corresponding to the current motion trajectory can be calculated according to the above formula.
Based on the above, the current trajectory grid data C represents the motion trajectory of the athlete's current delivery, and the trajectory frequency data represents the comprehensive situation of the historical deliveries. Therefore, a larger second trajectory coincidence degree indicates that the current delivery's trajectory coincides more closely with the historical trajectories and the delivery effect is better; conversely, a smaller second trajectory coincidence degree indicates lower coincidence with the historical trajectories and a poorer delivery effect. That is, the concrete value of the second trajectory coincidence degree quantitatively expresses the motion effect of the athlete's delivery; based on it, the delivery effect can be effectively evaluated and the athlete can be guided to make corresponding adjustments.
Therefore, in the embodiment of the present disclosure, the obtained second trajectory coincidence degree may be determined as the motion detection result of the target object.
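Under the same set-of-cells assumption, the second trajectory coincidence degree divides by the current trajectory's own cells rather than by the union; a minimal sketch with illustrative names:

```python
def second_coincidence(current_cells, freq):
    """|C ∩ F| / |C|: the fraction of the current trajectory's cells that
    also appear in the trajectory frequency data."""
    if not current_cells:
        return 0.0
    covered = {cell for cell, n in freq.items() if n > 0}
    return len(current_cells & covered) / len(current_cells)

freq = {(0, 0): 2, (0, 1): 5, (1, 1): 1}             # hypothetical frequency data
second_coincidence({(0, 1), (1, 1), (2, 2)}, freq)   # 2 of 3 cells covered
```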
Therefore, in the embodiment of the present disclosure, the second trajectory coincidence degree between the current motion trajectory and the trajectory frequency data is obtained based on the trajectory grid data, so that the motion of the object can be objectively and effectively quantified and expressed, improving the accuracy of motion detection.
In some embodiments, after the current trajectory grid data of the current motion trajectory is obtained, the trajectory frequency data may be updated using it. For example, in the embodiment of fig. 12, after the current trajectory grid data corresponding to the 6th delivery is obtained, the trajectory frequency data shown in fig. 11, for example, may be updated using the current trajectory grid data. The following description is made with reference to the embodiment of fig. 13.
As shown in fig. 13, in some embodiments, a motion detection method of an example of the present disclosure includes:
s1310, updating the track frequency data according to the current track grid data of the current motion track to obtain updated track frequency data.
And S1320, determining the updated track frequency data as the motion detection result of the target object.
In the embodiment of the present disclosure, the current trajectory grid data of the current motion trajectory represents the grid cells traversed by the target object's current motion trajectory, while the trajectory frequency data represents the grid cells traversed by the target object over a previous period of time. The trajectory frequency data can therefore be updated based on the trajectory grid data of the current motion trajectory, that is, the grid cells traversed by the current trajectory grid data are counted into the trajectory frequency data, updating the frequencies.
For example, in the above curling scene, after the trajectory grid data of the current motion trajectory generated by the 6th delivery is obtained, the current trajectory grid data of the 6th delivery may be counted into the trajectory frequency data shown in fig. 11, thereby updating the trajectory frequency data. That is, the updated trajectory frequency data covers the trajectory grid data corresponding to all 6 deliveries, and the updated trajectory frequency data may then be determined as the motion detection result of the target object.
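The update step folds the current delivery's cells into the running frequency data; a minimal sketch under the same illustrative assumptions:

```python
def update_frequency(freq, current_cells):
    """Increment the count of every cell traversed by the current trajectory."""
    for cell in current_cells:
        freq[cell] = freq.get(cell, 0) + 1
    return freq

freq = {(0, 1): 5, (1, 1): 1}
update_frequency(freq, {(0, 1), (2, 2)})   # (0, 1) rises to 6; (2, 2) enters at 1
```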
It can be understood that, in the embodiment of the present disclosure, motion detection may be performed on each motion trajectory in real time to obtain a motion detection result. For example, after an athlete completes a delivery, the motion detection system 100 may obtain the trajectory grid data of this delivery based on the above process, then obtain the first trajectory coincidence degree from the trajectory grid data of this delivery and that of the previous delivery; obtain the second trajectory coincidence degree from the trajectory grid data of this delivery and the trajectory frequency data; and update the trajectory frequency data with the trajectory grid data of this delivery to obtain new trajectory frequency data. When the athlete performs the next delivery, the process is repeated, and the first trajectory coincidence degree, the second trajectory coincidence degree and the trajectory frequency data corresponding to that delivery are obtained.
Therefore, in the embodiment of the present disclosure, the mesh data is obtained by meshing the target area, the motion trajectory of the target object can be effectively quantized and expressed by using the mesh data, and the motion detection result of the current motion trajectory is determined by comparing the current trajectory mesh data with the historical trajectory mesh data, so that the motion effect of the current motion process of the target object can be accurately reflected, and the motion detection effect is improved. Especially for sports scenes, the method and the device can quantitatively analyze the motion trail effect in real time and guide athletes to make adjustment in time based on the embodiment disclosed by the invention so as to ensure the optimal motion effect.
The embodiment of the disclosure provides a motion detection device, which can be applied to electronic equipment. The electronic device disclosed in the present disclosure may be any device type suitable for implementation, such as a mobile terminal, a wearable device, an in-vehicle device, a personal computer, a server, a cloud platform, and the like, which is not limited in the present disclosure.
As shown in fig. 14, in some embodiments, a motion detection apparatus of an example of the present disclosure includes:
a video acquisition module 10 configured to acquire video stream data to be processed;
a target detection module 20 configured to perform target detection on the video stream data, and determine a current motion trajectory of a target object in a target area;
a track grid module 30 configured to determine corresponding current track grid data in the current motion track according to the current motion track and the grid data corresponding to the target area;
a result determination module 40 configured to determine a motion detection result of the target object according to the current trajectory grid data and the historical trajectory grid data; the historical track grid data comprises track grid data corresponding to at least one historical motion track of the target object, and the motion detection result is used for reflecting the relationship between the current motion track and at least one historical motion track.
Therefore, in the embodiment of the present disclosure, the grid data is obtained by gridding the target area, the motion trajectory of the target object can be effectively quantized and expressed by using the grid data, and the motion detection result of the current motion trajectory is determined by comparing the current trajectory grid data with the historical trajectory grid data, so that the motion effect of the current motion process of the target object can be accurately reflected, and the motion detection effect is improved.
As shown in fig. 15, in some embodiments, the motion detection apparatus of the present disclosure further includes a mesh processing module 50, the mesh processing module 50 being configured to:
determining a target scale of the grid unit according to the size of the target object;
and performing gridding processing on the target area according to grid cells of the target scale to obtain grid data corresponding to the target area.
Therefore, in the embodiment of the present disclosure, by performing gridding processing on the target area of the object motion, the motion trajectory of the object can be effectively quantized and expressed by using the grid data, so as to obtain an accurate motion detection result.
In some embodiments, the trajectory grid module 30 is configured to:
determining grid cells passed by the current motion track based on the grid data;
and determining current track grid data corresponding to the current motion track according to the grid unit through which the current motion track passes.
Therefore, in the embodiment of the present disclosure, by performing gridding processing on the target area of the object motion, the motion trajectory of the object can be effectively quantized and expressed by using the grid data, so as to obtain an accurate motion detection result.
In some embodiments, the result determination module 40 is configured to:
according to the current track grid data and track grid data corresponding to any historical motion track, obtaining a first track coincidence degree of the current motion track and any historical motion track; the motion detection result of the target object comprises the first track coincidence degree.
In some embodiments, the result determination module 40 is configured to:
obtaining second track contact ratio according to the current track grid data and track frequency data obtained based on the historical track grid data; the motion detection result of the target object comprises the second track coincidence degree.
Therefore, in the embodiment of the disclosure, the track contact ratio between the current motion track and the historical motion track is obtained based on the track grid data, so that objective and effective effect quantification and expression can be effectively performed on the motion of the object, and the motion detection accuracy is improved.
As shown in fig. 15, in some embodiments, the motion detection apparatus of the present disclosure further includes a track frequency module 60, where the track frequency module 60 is configured to:
determining the frequency of the target object passing through each grid unit in the grid data according to historical track grid data;
and obtaining the track frequency data according to the frequency of the target object passing through each grid unit.
In some embodiments, the result determination module 40 is configured to:
updating the track frequency data according to the current track grid data of the current motion track to obtain updated track frequency data; and the motion detection result of the target object comprises the updated track frequency data.
In some embodiments, the target object comprises a curling, and the target area comprises a curling field.
Therefore, in the embodiment of the present disclosure, the grid data is obtained by gridding the target area, the motion trajectory of the target object can be effectively quantized and expressed by using the grid data, and the motion detection result of the current motion trajectory is determined by comparing the current trajectory grid data with the historical trajectory grid data, so that the motion effect of the current motion process of the target object can be accurately reflected, and the motion detection effect is improved. Especially for sports scenes, based on the embodiment of the disclosure, the quantitative analysis can be carried out on the motion track effect in real time, and the athlete can be guided to make adjustment in time so as to ensure the optimal motion effect.
The disclosed embodiment provides an electronic device, including:
a processor; and
a memory storing computer instructions for causing the processor to perform the method of any of the embodiments described above.
The disclosed embodiments provide a storage medium storing computer instructions for causing a computer to perform the method of any of the above embodiments.
Specifically, fig. 16 shows a schematic structural diagram of an electronic device 600 suitable for implementing the method of the present disclosure, and the corresponding functions of the processor and the storage medium can be implemented by the electronic device shown in fig. 16.
As shown in fig. 16, the electronic device 600 includes a processor 601, which can perform various appropriate actions and processes according to a program stored in a memory 602 or a program loaded from a storage section 608 into the memory 602. In the memory 602, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processor 601 and the memory 602 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 610 as necessary, so that a computer program read out therefrom is mounted in the storage section 608 as necessary.
In particular, the above method processes may be implemented as a computer software program according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the above-described method. In such embodiments, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be understood that the above embodiments are merely examples given for clarity of description and are not limiting. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. It is neither necessary nor possible to exhaustively list all embodiments here, and obvious variations or modifications derived therefrom remain within the scope of the present disclosure.

Claims (11)

1. A motion detection method, comprising:
acquiring video stream data to be processed;
performing target detection on the video stream data, and determining the current motion track of a target object in a target area;
determining current track grid data corresponding to the current motion track according to the current motion track and the grid data corresponding to the target area;
determining a motion detection result of the target object according to the current track grid data and historical track grid data; wherein the historical track grid data comprises track grid data corresponding to at least one historical motion track of the target object, and the motion detection result is used for reflecting the relationship between the current motion track and the at least one historical motion track.
2. The method of claim 1, wherein the grid data corresponding to the target area is determined according to:
determining a target size of a grid unit according to a size of the target object;
and performing gridding processing on the target area according to the grid unit with the target size to obtain the grid data corresponding to the target area.
3. The method of claim 1, wherein the grid data corresponding to the target area comprises a plurality of grid units; and determining the current track grid data corresponding to the current motion track according to the current motion track and the grid data corresponding to the target area comprises:
determining grid units passed by the current motion trail based on the grid data;
and determining current track grid data corresponding to the current motion track according to the grid unit through which the current motion track passes.
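The grid-unit lookup recited in claim 3 could be sketched as follows. This is an illustrative reading only: the uniform square grid, the floor-division cell indexing, the sampled `(x, y)` point representation of the track, and the use of a Python `set` as the "track grid data" are all assumptions, not the claimed implementation.

```python
from typing import Iterable, Set, Tuple

Cell = Tuple[int, int]

def track_grid_cells(track: Iterable[Tuple[float, float]],
                     cell_size: float) -> Set[Cell]:
    """Map a sampled motion track (x, y points) onto a uniform grid and
    return the set of grid units the track passes through."""
    cells: Set[Cell] = set()
    for x, y in track:
        # Floor division assigns each sampled point to its grid unit.
        cells.add((int(x // cell_size), int(y // cell_size)))
    return cells

# A short diagonal track on a grid with 1.0-unit cells:
cells = track_grid_cells([(0.2, 0.2), (1.4, 0.8), (2.7, 1.9)], 1.0)
```

With a sufficiently dense point sampling relative to the cell size, the resulting cell set approximates the full set of grid units traversed by the track.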
4. The method according to any one of claims 1 to 3, wherein determining the motion detection result of the target object according to the current track grid data and the historical track grid data comprises:
obtaining a first track coincidence degree between the current motion track and any historical motion track according to the current track grid data and the track grid data corresponding to the historical motion track; the motion detection result of the target object comprises the first track coincidence degree.
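One plausible way to compute a per-track coincidence degree as in claim 4 is the Jaccard overlap of the two cell sets. The patent does not fix a formula, so the Jaccard index (and the set representation of track grid data) is an assumption here.

```python
from typing import Set, Tuple

Cell = Tuple[int, int]

def track_coincidence(current: Set[Cell], historical: Set[Cell]) -> float:
    """One plausible coincidence measure: the fraction of grid units
    shared by the two tracks (Jaccard index over cell sets)."""
    if not current and not historical:
        return 1.0  # Two empty tracks coincide trivially.
    return len(current & historical) / len(current | historical)

# Two tracks that share two of their four distinct cells:
score = track_coincidence({(0, 0), (1, 0), (2, 1)}, {(0, 0), (1, 0), (2, 2)})
```

The resulting score lies in [0, 1], with 1 meaning the current track covers exactly the same grid units as the historical one.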
5. The method according to any one of claims 1 to 4, wherein determining the motion detection result of the target object according to the current track grid data and the historical track grid data comprises:
obtaining a second track coincidence degree according to the current track grid data and track frequency data obtained based on the historical track grid data; the motion detection result of the target object comprises the second track coincidence degree.
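The "second track coincidence degree" of claim 5 relates the current track to accumulated frequency data rather than to a single historical track. The patent does not give a formula; the sketch below assumes one plausible reading, namely the mean per-cell visit rate along the current track, with `freq` mapping each grid cell to its historical visit count and `num_tracks` the number of historical tracks.

```python
from typing import Dict, Set, Tuple

Cell = Tuple[int, int]

def second_coincidence(current_cells: Set[Cell],
                       freq: Dict[Cell, int],
                       num_tracks: int) -> float:
    """Hypothetical second coincidence degree: average fraction of
    historical tracks that visited each cell of the current track."""
    if not current_cells or num_tracks == 0:
        return 0.0
    total = sum(freq.get(cell, 0) for cell in current_cells)
    return total / (len(current_cells) * num_tracks)

# Current track over two cells; cell (0, 0) was visited by both of
# two historical tracks, cell (1, 0) by only one:
score = second_coincidence({(0, 0), (1, 0)}, {(0, 0): 2, (1, 0): 1}, 2)
```

Under this reading the score is 1.0 when every cell of the current track was visited by every historical track, and 0.0 when the current track covers entirely new ground.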
6. The method of claim 5, wherein obtaining the track frequency data based on the historical track grid data comprises:
determining the frequency of the target object passing through each grid unit in the grid data according to the historical track grid data;
and obtaining the track frequency data according to the frequency of the target object passing through each grid unit.
7. The method of claim 5, further comprising:
updating the track frequency data according to the current track grid data of the current motion track to obtain updated track frequency data; and the motion detection result of the target object comprises the updated track frequency data.
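The frequency-data steps of claims 6 and 7 amount to per-cell counting over the historical track grid data, then folding in the current track. A `collections.Counter` sketch under the same assumed set-of-cells representation:

```python
from collections import Counter
from typing import Iterable, Set, Tuple

Cell = Tuple[int, int]

def build_frequency(history: Iterable[Set[Cell]]) -> Counter:
    """Count how often the target object passed through each grid unit
    across all historical track grid data (claim 6)."""
    freq: Counter = Counter()
    for track_cells in history:
        freq.update(track_cells)
    return freq

def update_frequency(freq: Counter, current_cells: Set[Cell]) -> Counter:
    """Fold the current track's grid units into the frequency data
    (claim 7), returning the updated copy."""
    updated = freq.copy()
    updated.update(current_cells)
    return updated

history = [{(0, 0), (1, 0)}, {(1, 0), (1, 1)}]
freq = build_frequency(history)          # cell (1, 0) counted twice
freq = update_frequency(freq, {(1, 1)})  # cell (1, 1) now counted twice
```

Keeping the update as a pure function (returning a copy) is a design choice for the sketch; an implementation could equally update the stored frequency data in place.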
8. The method of claim 7, wherein the target object comprises a curling stone, and the target area comprises a curling sheet.
9. A motion detection apparatus, comprising:
the video acquisition module is configured to acquire video stream data to be processed;
the target detection module is configured to perform target detection on the video stream data and determine a current motion track of a target object in a target area;
the track grid module is configured to determine current track grid data corresponding to the current motion track according to the current motion track and the grid data corresponding to the target area;
a result determination module configured to determine a motion detection result of the target object according to the current track grid data and historical track grid data; wherein the historical track grid data comprises track grid data corresponding to at least one historical motion track of the target object, and the motion detection result is used for reflecting the relationship between the current motion track and the at least one historical motion track.
10. An electronic device, comprising:
a processor; and
a memory storing computer instructions, wherein the computer instructions are configured to cause the processor to perform the method according to any one of claims 1 to 8.
11. A storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1 to 8.
CN202210556500.8A 2022-05-20 2022-05-20 Motion detection method, motion detection device, electronic device, and storage medium Pending CN114926901A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210556500.8A CN114926901A (en) 2022-05-20 2022-05-20 Motion detection method, motion detection device, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210556500.8A CN114926901A (en) 2022-05-20 2022-05-20 Motion detection method, motion detection device, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN114926901A true CN114926901A (en) 2022-08-19

Family

ID=82810987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210556500.8A Pending CN114926901A (en) 2022-05-20 2022-05-20 Motion detection method, motion detection device, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN114926901A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115592675A (en) * 2022-12-01 2023-01-13 今麦郎饮品股份有限公司(Cn) Control system based on portable drink preparation arm
CN115592675B (en) * 2022-12-01 2023-09-12 今麦郎饮品股份有限公司 Control system based on mobile beverage preparation mechanical arm

Similar Documents

Publication Publication Date Title
KR20180084085A (en) METHOD, APPARATUS AND ELECTRONIC DEVICE
US20170177946A1 (en) Method, device, and computer program for re-identification of objects in images obtained from a plurality of cameras
CN107408303A (en) System and method for Object tracking
CN104937638A (en) Systems and methods for tracking and detecting a target object
US20150104067A1 (en) Method and apparatus for tracking object, and method for selecting tracking feature
CN111160243A (en) Passenger flow volume statistical method and related product
Ren et al. Multi-camera video surveillance for real-time analysis and reconstruction of soccer games
CN103533303A (en) Real-time tracking system and method of moving target
CN110245641A (en) A kind of target tracking image pickup method, device, electronic equipment
CN115546705B (en) Target identification method, terminal device and storage medium
CN111507204A (en) Method and device for detecting countdown signal lamp, electronic equipment and storage medium
CN114926901A (en) Motion detection method, motion detection device, electronic device, and storage medium
CN115063454A (en) Multi-target tracking matching method, device, terminal and storage medium
Rongved et al. Using 3D convolutional neural networks for real-time detection of soccer events
CN114169425A (en) Training target tracking model and target tracking method and device
Galor et al. Strong-TransCenter: Improved multi-object tracking based on transformers with dense representations
Zhang et al. Multi-domain collaborative feature representation for robust visual object tracking
CN106934339B (en) Target tracking and tracking target identification feature extraction method and device
CN115953434B (en) Track matching method, track matching device, electronic equipment and storage medium
CN112166435A (en) Target tracking method and device, electronic equipment and storage medium
CN113992976B (en) Video playing method, device, equipment and computer storage medium
Liu et al. Multiple objects tracking based vehicle speed analysis with Gaussian filter from drone video
CN115171185A (en) Cross-camera face tracking method, device and medium based on time-space correlation
CN112215036B (en) Cross-mirror tracking method, device, equipment and storage medium
Choudhary et al. Real time video summarization on mobile platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination