CN114302234B - Quick packaging method for air skills

Info

Publication number: CN114302234B
Application number: CN202111638702.9A
Authority: CN (China)
Prior art keywords: target, human body, picture, frame, information
Priority / filing date: 2021-12-29
Publication date (CN114302234A): 2022-04-08
Grant date (CN114302234B): 2023-11-07
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN114302234A
Inventors: 吴奕刚, 纪亭, 王伟明, 许国忠, 巨金波
Assignee: Hangzhou Arcvideo Technology Co., Ltd.


Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02P: Climate change mitigation technologies in the production or processing of goods
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Abstract

The invention discloses a quick packaging method for air skills. The method specifically comprises the following steps: an original picture is acquired in real time by an on-site high-frame-rate ultra-high-definition video acquisition system and imported into an AI video analysis and synthesis module; the AI video analysis and synthesis module detects the target athlete in the picture and extracts target images in time order, producing a series of target images according to the successive positions of the target in the picture; according to the temporal and positional relations of the targets, a superimposed information data set is generated by combining background data measurement and analysis with the corresponding target-related data information returned by visual processing; and the generated information data set is composited into a final video stream, which is finally exported from the system. The beneficial effects of the invention are as follows: viewer engagement is greatly improved, the viewing experience is enriched, and the audience is given a bullet-time viewing sensation.

Description

Quick packaging method for air skills
Technical Field
The invention relates to the technical field of high-dynamic video processing, and in particular to a quick packaging method for air skills.
Background
Two technical schemes exist at present. Scheme one can meet the real-time requirement, but it basically provides only slow-motion replay, with no additional information such as a more detailed technical explanation for the viewer. Scheme two can superimpose additional information such as technical explanations through post-processing, but its real-time performance may not be guaranteed.
Disclosure of Invention
To overcome the defects in the prior art, the invention provides a quick packaging method for air skills that satisfies both real-time performance and additional description.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
the quick packaging method for the air skills specifically comprises the following steps:
(1) The original picture is acquired in real time through a field high-frame-rate ultra-high-definition video acquisition system and is imported into an AI video analysis and synthesis module;
(2) Detecting a target athlete in a picture and extracting target images according to the time sequence by an AI video analysis and synthesis module, and extracting a series of target images according to the positions of targets in front of and behind the picture;
(3) According to the time position relations of the targets, combining data measurement analysis of the background and corresponding target related data information returned by visual processing to generate a superimposed information data set;
(4) And combining the generated information data set into a final video stream, and finally, deriving the synthesized video stream from the system, thereby realizing real-time superposition of data information and presenting the motion trail and related data information of the target in a target path ghost mode.
The on-site real-time original pictures are collected by the high-frame-rate ultra-high-definition video acquisition system and imported into the AI video analysis and synthesis module; the targets in the picture (mainly athletes) are detected and target images are extracted; the series of extracted target images is then, in time order, subjected to background data measurement and analysis, and the returned data information is visually processed and merged into a unified video stream; finally the synthesized video stream is exported, realizing real-time superposition of data information and presenting the target's motion trail and related data information in the form of path ghosts. Furthermore, the playback speed of the output video can be adjusted through configuration to achieve a slow-motion viewing effect, which greatly improves viewer engagement, enriches the viewing experience, and gives the audience the bodily sensation of bullet time. Bullet time is a photographic technique used in films, television advertisements, or computer games to simulate variable-speed special effects such as enhanced slow motion and time standing still.
Preferably, step (2) is specifically as follows: when monitoring and processing for quick packaging of air skills is triggered, a background service starts to monitor the video content. The video is first decoded to generate the picture sequence set corresponding to the video; the AI video analysis and synthesis module then takes video motion-foreground extraction as its algorithmic target based on AI deep-learning capability: a target detection and tracking technique accurately captures the foreground target in the area shot by the camera to obtain the target's position, an image segmentation technique obtains the target's contour, the foreground target is extracted according to the contour information, and quick image synthesis is performed. Foreground extraction of athlete targets across different frame sequences is realized by three sub-modules: target detection and tracking, human-body keypoint detection, and human-body target segmentation and extraction.
Preferably, the target detection and tracking module provides the coordinate information of the rectangular frame in which the target athlete is located in the current video frame; it detects the designated area in real time, starts tracking when target motion is detected, and obtains the target human-body area coordinates in subsequent video frames.
Preferably, the target detection and tracking module operates as follows: target detection uses a convolutional neural network to extract candidate windows, screens targets present in the region of interest, and detects the position of the foreground target; tracking is then performed on the basis of the detection, using a tracking algorithm to obtain the target's position in consecutive frames.
Preferably, the human-body keypoint detection module acquires the coordinate information of 17 human-body keypoints and provides position information for the subsequent calculation of the athlete's height and speed. Because the athlete rotates in the air and such poses differ greatly from conventional human postures, athlete material in different poses is added for this scene; during training, perturbations are added to simulate the athlete's aerial poses, enriching the data set, and a Focal Loss function is used to guide the network toward learning difficult samples.
Preferably, the human-body keypoint detection module operates as follows: keypoint detection is performed on the human body within the target detection area; the input is the vertex coordinates of the rectangular frame of the area where the human body is located, and the output is 17 human-body posture keypoints, namely: nose, left and right eyes, left and right ears, left and right shoulders, left and right elbows, left and right wrists, left and right hips, left and right knees, and left and right ankles.
Preferably, the human-body target segmentation and extraction module outputs a binary image used for superposing the overlapping human-body areas of subsequent adjacent video frames. The module is a fusion model based on the keypoint detection network: by sharing part of the network weights with the keypoint network, the keypoint model's robustness to the background area is used to constrain the feature-map information of the segmentation network, further improving the segmentation network's background robustness. Pixel-level constraints on the edge area and gradient constraints on the binary image are added during training to improve segmentation in difficult areas, and a Focal-Loss-like loss function is adopted for learning complex samples, alleviating the problem of inter-class similarity.
Preferably, the human-body target segmentation and extraction module operates as follows: according to the human-body keypoint detection results, a target segmentation result is obtained using a segmentation technique based on human-body joints, and the foreground target is extracted according to the segmentation result and the input video-stream information.
Preferably, step (3) is specifically as follows: moving-target interaction data are measured and analyzed on the basis of the images, and visualized data and images are generated and superimposed for output. The measurement of the athlete's height and speed is based on the coordinate information output by the human-body keypoint detection module: first, the estimated pixel height of the jump platform in the picture is obtained from the platform's actual height, and the relation between pixel distance and actual distance is obtained together with the athlete's initial speed, initial height, and the video frame-rate information. The height reading is provided by the ordinate of the highest keypoint of the athlete in the current video frame, and the currently displayed height is updated from the previous frame's height; the speed reading is based on the athlete's centre-of-gravity coordinates in the current video frame, computed from the centre-of-gravity offset distance between the previous and current frames and the time interval between them.
Preferably, the image superposition supports a parameterized automatic mode and a manual equal-interval mode. The automatic mode is based on the target detection module: when the athlete's detection frame in the current frame no longer overlaps the detection frame of the last superposition, the athlete in that detection frame is selected as a new superposition target. The manual equal-interval mode superimposes the output segmentation results once every fixed number of frames.
The beneficial effects of the invention are as follows: the playback speed of the output video can be adjusted to achieve a slow-motion viewing effect, greatly improving viewer engagement, enriching the viewing experience, and giving the audience the bodily sensation of bullet time.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of an object detection tracking module;
FIGS. 3 and 4 are schematic diagrams of human body key point detection;
FIG. 5 is a schematic illustration of human target segmentation extraction;
FIG. 6 is a schematic diagram of slow-motion playback.
Detailed Description
The invention is further described below with reference to the drawings and specific embodiments.
In the embodiment shown in FIG. 1, the quick packaging method for air skills specifically comprises the following steps:
(1) An original picture is acquired in real time by an on-site high-frame-rate ultra-high-definition video acquisition system and imported into an AI video analysis and synthesis module;
(2) The AI video analysis and synthesis module detects the target athlete in the picture and extracts target images in time order, producing a series of target images according to the successive positions of the target in the picture. Specifically: when monitoring and processing for quick packaging of air skills is triggered, a background service starts to monitor the video content. The video is first decoded to generate the picture sequence set corresponding to the video; the AI video analysis and synthesis module then takes video motion-foreground extraction as its algorithmic target based on AI deep-learning capability: a target detection and tracking technique accurately captures the foreground target in the area shot by the camera to obtain the target's position, an image segmentation technique obtains the target's contour, the foreground target is extracted according to the contour information, and quick image synthesis is performed. Foreground extraction of athlete targets across different frame sequences is realized by three sub-modules: target detection and tracking, human-body keypoint detection, and human-body target segmentation and extraction.
As shown in FIG. 2, the target detection and tracking module provides the coordinate information of the rectangular frame in which the target athlete is located in the current video frame. Based on CenterNet, the module detects the designated ROI in real time, starts tracking when target motion is detected, and obtains the coordinates of the target human-body area in subsequent video frames. The module operates as follows: target detection uses a convolutional neural network to extract candidate windows, screens targets present in the region of interest, and detects the position of the foreground target; on this basis, a tracking algorithm such as mean shift is used to obtain the target's position in consecutive frames.
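The patent does not publish code for this module. As an indication of the track-after-detect idea only, the sketch below seeds OpenCV's classical mean-shift tracker (one of the tracking algorithms the text names) with a detection box; the `init_box` argument is a placeholder for the output of a detector such as the CenterNet mentioned above, not part of the patented method.

```python
import cv2

def track_target(video_path: str, init_box: tuple[int, int, int, int]):
    """Yield the target window (x, y, w, h) in each frame after the detection."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return
    x, y, w, h = init_box                      # placeholder for the detector's box
    roi = frame[y:y + h, x:x + w]
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    # Hue histogram of the detected athlete, used for back-projection.
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    window = (x, y, w, h)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, window = cv2.meanShift(back_proj, window, criteria)
        yield window                           # target human-body area coordinates
    cap.release()
```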
As shown in fig. 3 and fig. 4, the human body key point detection module is used for acquiring coordinate information of 17 key points of a human body and providing position information for calculation of the subsequent human body height speed, the module is based on alphaPose, and because the athlete rotates in the air and the difference between the conventional human body gesture is large, according to the scene, athlete materials with different gestures are added, meanwhile, disturbance such as rotation and overturning are added in the training process to simulate the gesture of the athlete in the air, a data set is enriched, a FocalLoss function is used for guiding a network learning difficulty sample, and the human body key point detection module is responsible. The specific operation method of the human body key point detection module comprises the following steps: the method comprises the steps of detecting key points of human bodies in a target detection area, inputting vertex coordinates of a rectangular frame of the area where the human bodies are located, and outputting 17 human body posture key points, wherein the key points are respectively as follows: nose, left and right eyes, left and right ears, left and right shoulders, left and right elbows, left and right wrists, left and right hips, left and right knees, and left and right ankles.
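For illustration, the sketch below lists the 17 keypoints in the standard COCO ordering (assumed here, since it matches the body parts named above) and derives the two quantities the later measurement step needs: the highest keypoint and a centre-of-gravity proxy. The pose model itself is assumed to return one (x, y) pair per keypoint, and the shoulder/hip average is an illustrative simplification, not the patent's formula.

```python
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def highest_point(kps: list[tuple[float, float]]) -> tuple[float, float]:
    """Keypoint with the smallest y ordinate (image origin is top-left)."""
    return min(kps, key=lambda p: p[1])

def centre_of_gravity(kps: list[tuple[float, float]]) -> tuple[float, float]:
    """Illustrative proxy: mean of the shoulder and hip keypoints."""
    idx = [COCO_KEYPOINTS.index(n) for n in
           ("left_shoulder", "right_shoulder", "left_hip", "right_hip")]
    xs = [kps[i][0] for i in idx]
    ys = [kps[i][1] for i in idx]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```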
As shown in FIG. 5, the human-body target segmentation and extraction module outputs a binary image used for superposing the overlapping human-body areas of subsequent adjacent video frames. The module is a fusion model based on the keypoint detection network: by sharing part of the network weights with the keypoint network, the keypoint model's robustness to the background area is used to constrain the feature-map information of the segmentation network, further improving the segmentation network's background robustness. Pixel-level constraints on the edge area and gradient constraints on the binary image are added during training to improve segmentation in difficult areas (human-body edge areas), and a Focal-Loss-like loss function is adopted for learning complex samples, alleviating the problem of inter-class similarity. The module operates as follows: according to the human-body keypoint detection results, a target segmentation result is obtained using a segmentation technique based on human-body joints, and the foreground target is extracted according to the segmentation result and the input video-stream information.
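A minimal sketch of what can be done with the binary image this module outputs, assuming the mask itself comes from the (unpublished) segmentation network: the athlete is cut out with the mask as an alpha channel, and cut-outs are alpha-blended onto a display frame as path ghosts. Plain OpenCV/NumPy; the 0.6 ghost opacity is an arbitrary choice.

```python
import cv2
import numpy as np

def extract_foreground(frame_bgr: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """frame_bgr: HxWx3 uint8; mask: HxW uint8 (0 = background, 255 = target).
    Returns an HxWx4 BGRA cut-out whose alpha channel is the mask."""
    bgra = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2BGRA)
    bgra[:, :, 3] = mask
    return bgra

def composite_ghost(base_bgr: np.ndarray, ghost_bgra: np.ndarray,
                    opacity: float = 0.6) -> np.ndarray:
    """Alpha-blend one ghost cut-out onto the display frame."""
    alpha = (ghost_bgra[:, :, 3:4].astype(np.float32) / 255.0) * opacity
    blended = base_bgr.astype(np.float32) * (1.0 - alpha) \
        + ghost_bgra[:, :, :3].astype(np.float32) * alpha
    return blended.astype(np.uint8)
```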
(3) According to the temporal and positional relations of the targets, a superimposed information data set is generated by combining background data measurement and analysis with the corresponding target-related data information returned by visual processing. Specifically: moving-target interaction data are measured and analyzed on the basis of the images, and visualized data and images are generated and superimposed for output. The measurement of the athlete's height and speed is based on the coordinate information output by the human-body keypoint detection module: first, the estimated pixel height of the jump platform in the picture is obtained from the platform's actual height, and the relation between pixel distance and actual distance is obtained together with the athlete's initial speed, initial height, and the video frame-rate information. The height reading is provided by the ordinate of the highest keypoint of the athlete in the current video frame, and the currently displayed height is updated from the previous frame's height; the speed reading is based on the athlete's centre-of-gravity coordinates in the current video frame, computed from the centre-of-gravity offset distance between the previous and current frames and the time interval between them. The image superposition supports a parameterized automatic mode and a manual equal-interval mode: the automatic mode is based on the target detection module, and when the athlete's detection frame in the current frame no longer overlaps the detection frame of the last superposition, the athlete in that detection frame is selected as a new superposition target; the manual equal-interval mode superimposes the output segmentation results once every fixed number of frames.
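As a worked illustration of the measurement just described, the arithmetic reduces to one pixel-to-metre scale plus per-frame differences. The constants below (platform height, its measured pixel height, capture frame rate) are illustrative assumptions, and the centre-of-gravity coordinates are those produced by the keypoint module; this is a sketch, not the patented implementation.

```python
PLATFORM_HEIGHT_M = 3.0      # assumed real platform height, metres
PLATFORM_HEIGHT_PX = 400.0   # assumed measured platform pixel height in the picture
FPS = 120.0                  # high-frame-rate capture

M_PER_PX = PLATFORM_HEIGHT_M / PLATFORM_HEIGHT_PX   # pixel-to-metre scale
DT = 1.0 / FPS                                      # time interval between frames

def height_m(y_highest_px: float, y_ground_px: float) -> float:
    """Height above the reference line, from the highest keypoint's ordinate
    (image y grows downward, so ground minus keypoint is positive)."""
    return (y_ground_px - y_highest_px) * M_PER_PX

def speed_mps(cog_prev: tuple[float, float],
              cog_curr: tuple[float, float]) -> float:
    """Speed from the centre-of-gravity shift between consecutive frames."""
    dx = cog_curr[0] - cog_prev[0]
    dy = cog_curr[1] - cog_prev[1]
    return ((dx * dx + dy * dy) ** 0.5) * M_PER_PX / DT
```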
(4) The generated information data set is composited into a final video stream, and the synthesized video stream is finally exported from the system, thereby realizing real-time superposition of data information and presenting the target's motion trail and related data information in the form of target path ghosts.
In summary, the method presets the correspondence between pixels and actual distance by associating the actually known height of the jump platform with the platform's pixels in the picture, and the time interval between frames is determined by the actual high frame rate of the picture. Meanwhile, each frame is fed into the human-body keypoint detection module, which uses human-limb recognition technology to determine the centre-of-gravity coordinates of the athlete's body, and the actual movement speed of the moving target is estimated from the pixel offset of the centre of gravity within the time interval. Marking the athlete's centre of gravity with limb-recognition technology and taking the centre of gravity as the monitored analysis target allows the athlete's actual motion trail and speed to be estimated more accurately, facilitating the subsequent analysis and processing flow. In the series of picture-processing steps, experience from the actual service scene shows that the speed is relatively slowest when the centre of gravity is highest, and this moment is the best part of the action sequence; the current target segmentation area of that picture is therefore preset as the best display picture and taken as the selected picture. The other displayed motion pictures are chosen on the basis of this time point and the coordinate information: since every frame has a corresponding current target segmentation area, any frame whose segmentation area shares overlapping pixels with that of the selected picture is discarded, until a frame is reached whose segmentation area no longer overlaps that of the selected picture; the current target segmentation area of that frame is then selected in turn and used as a ghost image in the display picture. Following this logic, a series of action-sequence pictures is generated and displayed.
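The ghost-selection rule just described, together with the two superposition modes from step (3), can be sketched as follows. Rectangular boxes stand in for the segmentation areas the patent compares, and the frame dictionaries are an assumed data shape rather than the patent's structures: the apex frame (highest centre of gravity, i.e. smallest image y) anchors the sequence, and further frames are kept as ghosts only while their regions do not overlap an already selected one.

```python
def boxes_overlap(a: tuple[int, int, int, int],
                  b: tuple[int, int, int, int]) -> bool:
    """Axis-aligned overlap test for (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return not (ax + aw <= bx or bx + bw <= ax
                or ay + ah <= by or by + bh <= ay)

def select_ghost_frames(frames: list[dict]) -> list[dict]:
    """frames: [{'box': (x, y, w, h), 'cog_y': float}, ...] in time order.
    Automatic mode: anchor on the apex frame, keep non-overlapping frames."""
    if not frames:
        return []
    apex = min(range(len(frames)), key=lambda i: frames[i]["cog_y"])
    selected = [apex]
    for i, f in enumerate(frames):
        if i == apex:
            continue
        if not any(boxes_overlap(f["box"], frames[j]["box"]) for j in selected):
            selected.append(i)          # no coincident pixels: keep as ghost
    return [frames[i] for i in sorted(selected)]

def select_every_n(frames: list[dict], n: int = 5) -> list[dict]:
    """Manual equal-interval mode: one segmentation result every n frames."""
    return frames[::n]
```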
The on-site real-time original pictures are collected by the high-frame-rate ultra-high-definition video acquisition system and imported into the AI video analysis and synthesis module; the targets in the picture (mainly athletes) are detected and target images are extracted; the series of extracted target images is then, in time order, subjected to background data measurement and analysis, and the returned data information is visually processed and merged into a unified video stream; finally the synthesized video stream is exported, realizing real-time superposition of data information and presenting the target's motion trail and related data information in the form of path ghosts. Furthermore, the playback speed of the output video can be adjusted through configuration to achieve a slow-motion viewing effect, which greatly improves viewer engagement, enriches the viewing experience, and gives the audience the bodily sensation of bullet time. Bullet time is a photographic technique used in films, television advertisements, or computer games to simulate variable-speed special effects such as enhanced slow motion and time standing still.
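The overall flow just summarized can be sketched as a minimal Python skeleton. This is not the patented implementation: OpenCV is assumed for capture and encoding, the AI-module stages are left as commented stubs (they correspond to the module sketches above), and writing a high-frame-rate feed at a lower output frame rate stands in for the configurable slow-motion playback.

```python
import cv2

def run_pipeline(source: str, out_path: str, fps_out: float = 25.0) -> None:
    cap = cv2.VideoCapture(source)      # step (1): ingest the on-site feed
    writer = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # step (2): detect, track, and segment the athlete (module sketches above)
        # step (3): measure height/speed and build the superimposed data set
        # step (4): composite path ghosts and data readouts onto `frame`
        if writer is None:
            h, w = frame.shape[:2]
            writer = cv2.VideoWriter(
                out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps_out, (w, h)
            )  # fps_out below the capture rate yields slow-motion playback
        writer.write(frame)
    cap.release()
    if writer is not None:
        writer.release()
```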
In addition, in the actual service scene the video-processing link in the background, namely the recognition and superposition processing, does not need to run all the time; only the short period that showcases the air skill needs to be presented by recognizing and locating the target and then superimposing the target path ghosts. In the conventional viewing experience, the typical scene is that the target enters and prepares to start the performance, then begins to accelerate, rushes toward the jump platform, takes off from the platform into the air, displays the air skills of flipping and rotating, and finally lands safely to complete the whole performance. Under the support of the existing playout system, a series of slow-motion replays of the athlete's airborne phase is cut in immediately after the athlete finishes the performance, so that the audience can clearly see the details of the actions presented in the air, as shown in FIG. 6. The cut-in and cut-out of the slow motion is usually marked with a fixed graphic package.
In the method, as a way of assisting in locating key segments, content detection can be used to detect this fixed transition graphic clip as the in-point and out-point for the subsequent quick-packaging access of air skills: detecting that a slow-motion video clip has started playing triggers the monitoring and processing of air-skill quick packaging, and detecting the transition graphic clip playing again exits it. To further confirm the correlation between the video content and the target air skills, scene detection can additionally be performed on the video content for which quick-packaging monitoring has started, judging whether it matches key scenes such as the take-off platform, with the subsequent superposition processing performed only after a match.
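As one plausible realization of this content-detection idea (the patent does not specify the detector), normalized template matching against the broadcaster's fixed transition graphic could serve as the trigger; the 0.85 threshold and the full-frame search are assumptions.

```python
import cv2
import numpy as np

def is_transition(frame_bgr: np.ndarray, template_bgr: np.ndarray,
                  threshold: float = 0.85) -> bool:
    """True when the fixed transition graphic is found in the frame."""
    result = cv2.matchTemplate(frame_bgr, template_bgr, cv2.TM_CCOEFF_NORMED)
    return float(result.max()) >= threshold

def toggle_packaging(active: bool, frame_bgr: np.ndarray,
                     template_bgr: np.ndarray) -> bool:
    """Enter quick packaging on one transition, exit on the next."""
    return (not active) if is_transition(frame_bgr, template_bgr) else active
```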
The quick packaging method for air skills in the invention is not only used for real-time processing and display of freestyle-skiing aerials on snow; any programme in which the performer takes off and becomes airborne at a designated position and displays aerial skills, such as platform diving, can use it to present the whole aerial flipping and rotating process in the terminal display picture. In addition, external devices such as infrared rangefinders can be incorporated, and the more accurate real-time data they collect can be synchronously merged into the finally presented picture. Similarly, certain agreed technical actions can be processed in advance and then presented and explained synchronously in real time when the athlete actually performs the corresponding actions.

Claims (7)

1. A quick packaging method for air skills, characterized by comprising the following steps:
(1) an original picture is acquired in real time by an on-site high-frame-rate ultra-high-definition video acquisition system and imported into an AI video analysis and synthesis module;
(2) the AI video analysis and synthesis module detects the target athlete in the picture and extracts target images in time order, producing a series of target images according to the successive positions of the target in the picture; specifically: when monitoring and processing for quick packaging of air skills is triggered, a background service starts to monitor the video content; the video is first decoded to generate the picture sequence set corresponding to the video; the AI video analysis and synthesis module then takes video motion-foreground extraction as its algorithmic target based on AI deep-learning capability, uses a target detection and tracking technique to accurately capture the foreground target in the area shot by the camera and obtain the target's position, uses an image segmentation technique to obtain the target's contour, extracts the foreground target according to the contour information, and performs quick image synthesis, wherein foreground extraction of athlete targets across different frame sequences is realized by three sub-modules: target detection and tracking, human-body keypoint detection, and human-body target segmentation and extraction; the human-body keypoint detection module acquires the coordinate information of 17 human-body keypoints and provides position information for the subsequent calculation of the athlete's height and speed; because the athlete rotates in the air and such poses differ greatly from conventional human postures, athlete material in different poses is added for this scene, perturbations are added during training to simulate the athlete's aerial poses and enrich the data set, and a Focal Loss function is used to guide the network toward learning difficult samples;
(3) according to the temporal and positional relations of the targets, a superimposed information data set is generated by combining background data measurement and analysis with the corresponding target-related data information returned by visual processing; specifically: moving-target interaction data are measured and analyzed on the basis of the images, and visualized data and images are generated and superimposed for output; the measurement of the athlete's height and speed is based on the coordinate information output by the human-body keypoint detection module: first, the estimated pixel height of the jump platform in the picture is obtained from the platform's actual height, and the relation between pixel distance and actual distance is obtained together with the athlete's initial speed, initial height, and the video frame-rate information; the height reading is provided by the ordinate of the highest keypoint of the athlete in the current video frame, the currently displayed height being updated from the previous frame's height, and the speed reading is based on the athlete's centre-of-gravity coordinates in the current video frame, computed from the centre-of-gravity offset distance between the previous and current frames and the time interval between them; the correspondence between pixels and actual distance is preset by associating the actually known height of the jump platform with the platform's pixels in the picture, the time interval between frames is determined by the actual high frame rate of the picture, each frame is fed into the human-body keypoint detection module, which uses human-limb recognition technology to determine the centre-of-gravity coordinates of the athlete's body, and the actual movement speed of the moving target is estimated from the pixel offset of the centre of gravity within the time interval; in the series of picture-processing steps, experience from the actual service scene shows that the speed is relatively slowest when the centre of gravity is highest, so the current target segmentation area of the picture at the highest centre of gravity is preset as the best display picture and taken as the selected picture; the other displayed motion pictures are chosen on the basis of this time point and the coordinate information: since every frame has a corresponding current target segmentation area, any frame whose segmentation area shares overlapping pixels with that of the selected picture is discarded, until a frame is reached whose segmentation area no longer overlaps that of the selected picture, whereupon its current target segmentation area is selected in turn and used as a ghost image in the display picture; following this logic, a series of action-sequence pictures is generated and displayed;
(4) the generated information data set is composited into a final video stream, and the synthesized video stream is finally exported from the system, thereby realizing real-time superposition of data information and presenting the target's motion trail and related data information in the form of target path ghosts.
2. The quick packaging method for air skills according to claim 1, wherein the target detection and tracking module provides the coordinate information of the rectangular frame in which the target athlete is located in the current video frame, detects the designated area in real time, starts tracking when target motion is detected, and obtains the target human-body area coordinates in subsequent video frames.
3. The quick packaging method for air skills according to claim 1 or 2, wherein the target detection and tracking module operates as follows: target detection uses a convolutional neural network to extract candidate windows, screens targets present in the region of interest, and detects the position of the foreground target; tracking is then performed on the basis of the detection, using a tracking algorithm to obtain the target's position in consecutive frames.
4. The quick packaging method for air skills according to claim 3, wherein the human-body keypoint detection module operates as follows: keypoint detection is performed on the human body within the target detection area; the input is the vertex coordinates of the rectangular frame of the area where the human body is located, and the output is 17 human-body posture keypoints, namely: nose, left and right eyes, left and right ears, left and right shoulders, left and right elbows, left and right wrists, left and right hips, left and right knees, and left and right ankles.
5. The quick packaging method for air skills according to claim 1, wherein the human-body target segmentation and extraction module outputs a binary image used for superposing the overlapping human-body areas of subsequent adjacent video frames; the module is a fusion model based on the keypoint detection network: by sharing part of the network weights with the keypoint network, the keypoint model's robustness to the background area is used to constrain the feature-map information of the segmentation network, further improving the segmentation network's background robustness; pixel-level constraints on the edge area and gradient constraints on the binary image are added during training to improve segmentation in difficult areas, and a Focal-Loss-like loss function is adopted for learning complex samples, alleviating the problem of inter-class similarity.
6. The quick packaging method for air skills according to claim 4, wherein the human-body target segmentation and extraction module operates as follows: according to the human-body keypoint detection results, a target segmentation result is obtained using a segmentation technique based on human-body joints, and the foreground target is extracted according to the segmentation result and the input video-stream information.
7. The quick packaging method for air skills according to claim 1, wherein the image superposition supports a parameterized automatic mode and a manual equal-interval mode; the automatic mode is based on the target detection module: when the athlete's detection frame in the current frame no longer overlaps the detection frame of the last superposition, the athlete in that detection frame is selected as a new superposition target; the manual equal-interval mode superimposes the output segmentation results once every fixed number of frames.

Priority Applications (1)

Application Number: CN202111638702.9A
Priority Date / Filing Date: 2021-12-29
Title: Quick packaging method for air skills

Publications (2)

CN114302234A, published 2022-04-08
CN114302234B, granted 2023-11-07

Family ID: 80971050

Country Status (1)

CN: CN114302234B

Families Citing this family (1)

* Cited by examiner, † Cited by third party

CN114913471B * (深圳比特微电子科技有限公司; priority 2022-07-18; granted 2023-09-12): Image processing method, device and readable storage medium

Citations (12)

* Cited by examiner, † Cited by third party

JPH10276351A * (Mitsubishi Electric Corp; priority 1997-03-31; published 1998-10-13): Sports competition display device
US6710713B1 * (Tom Russo; priority 2002-05-17; granted 2004-03-23): Method and apparatus for evaluating athletes in competition
KR101291765B1 * ((주)엠비씨플러스미디어; priority 2013-05-15; granted 2013-08-01): Ball trace providing system for realtime broadcasting
CN109040837A * (北京市商汤科技开发有限公司; priority 2018-07-27; published 2018-12-18): Video processing method and apparatus, electronic device, and storage medium
CN109903312A * (北京工业大学; priority 2019-01-25; published 2019-06-18): Footballer running-distance statistics method based on video multi-target tracking
CN110472554A * (南京邮电大学; priority 2019-08-12; published 2019-11-19): Table-tennis action recognition method and system based on pose segmentation and keypoint features
CN110516620A * (腾讯科技(深圳)有限公司; priority 2019-08-29; published 2019-11-29): Target tracking method and apparatus, storage medium, and electronic device
WO2021129064A1 * (腾讯科技(深圳)有限公司; priority 2019-12-24; published 2021-07-01): Posture acquisition method and device, and keypoint coordinate positioning model training method and device
WO2021238325A1 * (华为技术有限公司; priority 2020-05-29; published 2021-12-02): Image processing method and apparatus
CN112135045A * (努比亚技术有限公司; priority 2020-09-23; published 2020-12-25): Video processing method, mobile terminal, and computer storage medium
CN112668522A * (华南理工大学; priority 2020-12-31; published 2021-04-16): Joint detection network and method for human-body keypoints and human-body masks
CN112990162A * (所托(杭州)汽车智能设备有限公司; priority 2021-05-18; published 2021-06-18): Target detection method and device, terminal equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party

WO2014013690A1 * (パナソニック株式会社; priority 2012-07-17; published 2014-01-23): Comment information generation device and comment information generation method
US9486693B2 * (Catapult Group International Pty Ltd.; priority 2012-08-31; granted 2016-11-08): Sports data collection and presentation
JP5838371B1 * (パナソニックIpマネジメント株式会社; priority 2014-06-30; granted 2016-01-06): Flow line analysis system, camera device, and flow line analysis method
US11625646B2 * (Huawei Cloud Computing Technologies Co., Ltd.; priority 2020-04-06; granted 2023-04-11): Method, system, and medium for identifying human behavior in a digital video using convolutional neural networks
US20210322852A1 * (Stupa Sports Analytics Private Limited; priority 2020-04-21; published 2021-10-21): Determining trajectory of a ball from two-dimensional media-content using computer vision

Also Published As

CN114302234A, published 2022-04-08


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant