CN114302234A - Air skill rapid packaging method - Google Patents

Air skill rapid packaging method

Info

Publication number
CN114302234A
Authority
CN
China
Prior art keywords
target
information
video
detection
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111638702.9A
Other languages
Chinese (zh)
Other versions
CN114302234B (en)
Inventor
吴奕刚
纪亭
王伟明
许国忠
巨金波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Arcvideo Technology Co ltd
Original Assignee
Hangzhou Arcvideo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Arcvideo Technology Co ltd filed Critical Hangzhou Arcvideo Technology Co ltd
Priority to CN202111638702.9A
Publication of CN114302234A
Application granted
Publication of CN114302234B
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a rapid packaging method for air skills. The method specifically comprises the following steps: acquiring original pictures in real time through an on-site high-frame-rate, ultra-high-definition video acquisition system, and importing the pictures into an AI video analysis and synthesis module; detecting the target athlete in the picture through the AI video analysis and synthesis module, extracting the target image, and extracting a series of target images according to the time sequence and the successive positions of the target in the picture; generating a superimposed information data set according to the temporal and positional relations of the targets, by combining background data measurement and analysis with the corresponding target-related data information returned by visualization processing; and combining the generated information data set into a final video stream, which is finally exported from the system. The beneficial effects of the invention are that it greatly improves viewing participation, enriches the viewing experience, and gives the audience a bullet-time viewing sensation.

Description

Air skill rapid packaging method
Technical Field
The invention relates to the technical field of high-dynamic video processing, and in particular to an air skill rapid packaging method.
Background
Of the two existing technical solutions, the first can meet the real-time requirement, but it basically provides only slow-motion playback, with no additional information such as more detailed technical metrics.
Disclosure of Invention
To overcome the above defects in the prior art, the invention provides an air skill rapid packaging method that satisfies both real-time performance and additional description.
In order to achieve the purpose, the invention adopts the following technical scheme:
a quick packing method for air skills specifically comprises the following steps:
(1) acquiring original pictures in real time through an on-site high frame rate and ultra-high definition video acquisition system, and importing the pictures into an AI video analysis and synthesis module;
(2) through an AI video analysis and synthesis module, a target athlete in the picture is detected and a target image is extracted, and a series of target images are extracted according to the time sequence and the positions of the target before and after the picture;
(3) according to the time position relations of the targets, combining data measurement and analysis of a background and corresponding target related data information returned by visualization processing to generate a superposed information data set;
(4) and the generated information data set is combined into the final video stream, and the synthesized video stream is finally exported from the system, so that the data information is superimposed in real time, and the motion track and the related data information of the target are presented in a target path ghost mode.
Through the high-frame-rate, ultra-high-definition video acquisition system, the real-time original picture on site is acquired and imported into the AI video analysis and synthesis module; the target (mainly the athlete) in the picture is detected and the target image extracted; the series of extracted target images is then combined, in time order, with the data information returned by the background's data measurement, analysis, and visualization into a unified video stream; finally, the synthesized video stream is exported. Real-time superposition of data information is thus realized, and the motion track and related data information of the target are presented as path ghosts. Furthermore, the playing speed of the output video can be adjusted through configuration to realize a slow-motion viewing effect, which greatly improves viewing participation, enriches the viewing experience, and gives the audience a bullet-time viewing sensation. Bullet time is a photographic technique used in movies, television commercials, or computer games to simulate variable-speed effects such as intensified slow motion and time standing still.
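As a concrete illustration of the overall flow, the following self-contained sketch substitutes simple background subtraction for the patent's AI detection and segmentation modules and pastes a path-ghost snapshot every few frames; the file names, slow-down factor, and ghost interval are assumptions, not values from the patent:

```python
import cv2

SLOW_FACTOR = 4     # assumed configuration: output at 1/4 speed
GHOST_STEP = 15     # assumed interval (frames) between path-ghost snapshots

cap = cv2.VideoCapture("aerial_feed.mp4")            # illustrative input file
fps = cap.get(cv2.CAP_PROP_FPS) or 50.0
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
# Writing at fps / SLOW_FACTOR realises the configurable slow-motion effect.
out = cv2.VideoWriter("packaged.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      fps / SLOW_FACTOR, (w, h))
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
ghosts = []          # (mask, pixels) snapshots accumulated along the path

i = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                 # crude stand-in foreground mask
    if i % GHOST_STEP == 0:
        ghosts.append((mask.copy(), frame.copy()))
    for m, px in ghosts:                   # superimpose earlier cutouts
        frame[m > 0] = px[m > 0]
    out.write(frame)
    i += 1
cap.release()
out.release()
```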
Preferably, step (2) is specifically: when monitoring and processing for air skill rapid packaging is triggered, a background service starts to monitor the video content and decodes the video to generate the corresponding picture sequence set. The AI video analysis and synthesis module then takes video motion-foreground extraction as its algorithmic target, based on AI deep-learning capability: a target detection and tracking technique accurately captures the foreground target in the region shot by the camera to obtain the target's position, an image segmentation technique obtains the target's contour, the foreground target is extracted according to the contour information, and rapid image synthesis is carried out. Foreground extraction of athlete targets across different frame sequences is realized by three sub-modules: target detection and tracking, human body key point detection, and human body target segmentation and extraction.
Preferably, the target detection and tracking module is used to provide the coordinate information of the rectangular frame where the target athlete is located in the current video frame, detecting the designated area in real time and, when target motion is detected, starting tracking to acquire the coordinates of the target human body area in subsequent video frames.
Preferably, the specific operation method of the target detection and tracking module is as follows: target detection uses a convolutional neural network to extract candidate windows, screens targets present in the region of interest, and detects the positions of foreground targets; tracking then builds on the detection, employing a tracking algorithm to obtain the target's position in consecutive frames.
Preferably, the human body key point detection module is used to acquire the coordinate information of 17 key points of the human body and to provide position information for the subsequent calculation of the human body's height and speed. Because an athlete's posture in the air differs greatly from a conventional human posture, material of athletes in different postures is added for this scene; at the same time, the athlete's aerial posture is simulated by adding disturbances during training to enrich the data set, and a FocalLoss loss function is used to guide the network in learning difficult samples.
Preferably, the specific operation method of the human body key point detection module is as follows: key point detection is carried out on the human body in the target detection area; the input is the vertex coordinates of the rectangular frame of the area where the human body is located, as provided by target detection, and the output is 17 human body posture key points, namely: nose, left and right eyes, left and right ears, left and right shoulders, left and right elbows, left and right wrists, left and right hips, left and right knees, and left and right ankles.
Preferably, the human body target segmentation and extraction module outputs a binary image used to superimpose the overlapping human body regions of adjacent video frames. The module is a fusion model based on the key point detection network and shares part of its network weights with the key point network; the feature-map information of the segmentation network is constrained using the key point model's robustness to background regions, further improving the segmentation network's background robustness. Pixel-level constraints on the edge region and gradient constraints on the binary image are added during training to improve the segmentation of difficult regions, and for learning complex samples a FocalLoss-like loss function is adopted to alleviate the inter-class similarity problem.
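For reference, a minimal sketch of a FocalLoss-style binary objective of the kind described, in PyTorch; the hyper-parameter values are common defaults assumed here, not values from the patent. Well-classified pixels are down-weighted so that training concentrates on hard regions such as body edges:

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits: torch.Tensor, target: torch.Tensor,
                      gamma: float = 2.0, alpha: float = 0.25) -> torch.Tensor:
    """logits, target: (N, 1, H, W); target is the {0, 1} binary mask."""
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * target + (1 - p) * (1 - target)        # prob. of the true class
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    # (1 - p_t) ** gamma shrinks the loss of easy pixels toward zero.
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```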
Preferably, the specific operation method of the human body target segmentation and extraction module is as follows: according to the detection result of the human body key points, a segmentation result for the target is acquired using a target segmentation technique based on human body joints, and the foreground target is extracted according to the target's segmentation result and the input video stream information.
Preferably, step (3) is specifically: moving-target interaction data are measured and analyzed based on the images, and visual data and images are generated and output in a superimposed manner. Measurement of the human body's height and speed is based on the coordinate information output by the human body key point detection module. First, the estimated pixel height of the take-off platform in the picture is acquired based on the platform's actual height, and the relation between pixel distance and actual distance is obtained based on the athlete's initial speed, initial height, and the video frame-number information. The height value is provided by the vertical coordinate of the highest key point of the athlete in the current video frame, and the currently displayed height is then updated from the previous frame's height; the speed value is updated from the coordinates of the athlete's center of gravity in the current video frame, by calculating the center-of-gravity offset distance and the time interval between the previous frame and the current frame.
Preferably, image superposition supports a parameterized automatic mode and a manual equal-interval mode. The automatic mode is based on the target detection module: when the detection frame of the athlete in the current frame no longer overlaps the last detection frame displayed in superposition, the athlete in the current detection frame is selected as a new superposition target. The manual equal-interval mode superimposes the output segmentation results every several frames.
The beneficial effects of the invention are: the playing speed of the output video can be adjusted to realize a slow-motion viewing effect, which greatly improves viewing participation, enriches the viewing experience, and gives the audience a bullet-time viewing sensation.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a schematic diagram of a target detection tracking module;
FIGS. 3 and 4 are schematic diagrams of human body key point detection;
FIG. 5 is a schematic diagram of human target segmentation extraction;
FIG. 6 is a schematic diagram of slow-motion playback.
Detailed Description
The invention is further described with reference to the following figures and detailed description.
In the embodiment shown in FIG. 1, the air skill rapid packaging method specifically comprises the following steps:
(1) Original pictures are acquired in real time through an on-site high-frame-rate, ultra-high-definition video acquisition system and imported into an AI video analysis and synthesis module.
(2) Through the AI video analysis and synthesis module, the target athlete in the picture is detected, the target image is extracted, and a series of target images is extracted according to the time sequence and the successive positions of the target in the picture. Specifically: when monitoring and processing for air skill rapid packaging is triggered, a background service starts to monitor the video content and decodes the video to generate the corresponding picture sequence set. The AI video analysis and synthesis module then takes video motion-foreground extraction as its algorithmic target, based on AI deep-learning capability: a target detection and tracking technique accurately captures the foreground target in the region shot by the camera to obtain the target's position, an image segmentation technique obtains the target's contour, the foreground target is extracted according to the contour information, and rapid image synthesis is carried out. Foreground extraction of athlete targets across different frame sequences is realized by three sub-modules: target detection and tracking, human body key point detection, and human body target segmentation and extraction.
As shown in FIG. 2, the target detection and tracking module is configured to provide the coordinate information of the rectangular frame where the target athlete is located in the current video frame. The module performs real-time detection on the designated ROI area based on CenterNet and, when target motion is detected, starts tracking to obtain the coordinates of the target human body area in subsequent video frames. The specific operation method of the target detection and tracking module is as follows: target detection uses a convolutional neural network to extract candidate windows, screens targets present in the region of interest, and detects the positions of foreground targets; tracking then builds on the detection, employing a tracking algorithm such as mean-shift to obtain the target's position in consecutive frames.
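For illustration, a minimal sketch of the tracking half, assuming a detection box has already been produced (the patent names CenterNet for detection; the mean-shift tracker below follows OpenCV's standard hue-histogram back-projection recipe and is an assumption about the concrete tracking setup):

```python
import cv2

def track_mean_shift(cap, box):
    """cap: cv2.VideoCapture positioned at the detection frame;
    box: (x, y, w, h) rectangle from the detector.
    Yields the tracked box for each subsequent frame."""
    ok, frame = cap.read()
    if not ok:
        return
    x, y, w, h = box
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0], None, [180], [0, 180])  # hue histogram
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, box = cv2.meanShift(prob, box, crit)   # updated (x, y, w, h)
        yield box
```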
As shown in FIGS. 3 and 4, the human body key point detection module, based on AlphaPose, is used to acquire the coordinate information of 17 key points of the human body and to provide position information for the subsequent calculation of the human body's height and speed. The specific operation method of the human body key point detection module is as follows: key point detection is carried out on the human body in the target detection area; the input is the vertex coordinates of the rectangular frame of the area where the human body is located, as provided by target detection, and the output is 17 human body posture key points, namely: nose, left and right eyes, left and right ears, left and right shoulders, left and right elbows, left and right wrists, left and right hips, left and right knees, and left and right ankles.
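The 17 key points listed above match the standard COCO ordering that AlphaPose outputs; a small sketch of holding them in that order and reading off the apex coordinate used later for height measurement (the helper name is illustrative):

```python
# Standard COCO ordering of the 17 key points, as output by AlphaPose.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def apex_y(keypoints):
    """keypoints: 17 (x, y) pairs in image coordinates. Image y grows
    downward, so the athlete's highest point is the minimum y value;
    this is the vertical coordinate used later for height measurement."""
    return min(y for _, y in keypoints)
```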
As shown in FIG. 5, the human body target segmentation and extraction module outputs a binary image used to superimpose the overlapping human body regions of adjacent video frames. The module is a fusion model based on the key point detection network and shares part of its network weights with the key point network; the feature-map information of the segmentation network is constrained using the key point model's robustness to background regions, further improving the segmentation network's background robustness. Pixel-level constraints on the edge region and gradient constraints on the binary image are added during training to improve the segmentation of difficult regions (human body edge regions), and for learning complex samples a FocalLoss-like loss function is adopted to alleviate the inter-class similarity problem. The specific operation method of the human body target segmentation and extraction module is as follows: according to the detection result of the human body key points, a segmentation result for the target is acquired using a target segmentation technique based on human body joints, and the foreground target is extracted according to the target's segmentation result and the input video stream information.
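A minimal sketch of the extraction step, assuming the segmentation module has already produced the binary mask; the helper name and the BGRA-cutout representation are illustrative choices, not the patent's specified format:

```python
import cv2
import numpy as np

def extract_foreground(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """frame: HxWx3 BGR image; mask: HxW binary image (0 = background,
    255 = athlete). Returns an HxWx4 BGRA cutout whose alpha channel
    follows the mask, ready to be pasted as a path ghost."""
    bgra = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)
    bgra[:, :, 3] = mask
    return bgra
```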
(3) According to the temporal and positional relations of the targets, a superimposed information data set is generated by combining background data measurement and analysis with the corresponding target-related data information returned by visualization processing. Specifically: moving-target interaction data are measured and analyzed based on the images, and visual data and images are generated and output in a superimposed manner. Measurement of the human body's height and speed is based on the coordinate information output by the human body key point detection module. First, the estimated pixel height of the take-off platform in the picture is acquired based on the platform's actual height, and the relation between pixel distance and actual distance is obtained based on the athlete's initial speed, initial height, and the video frame-number information. The height value is provided by the vertical coordinate of the highest key point of the athlete in the current video frame, and the currently displayed height is then updated from the previous frame's height; the speed value is updated from the coordinates of the athlete's center of gravity in the current video frame, by calculating the center-of-gravity offset distance and the time interval between the previous frame and the current frame. Image superposition supports a parameterized automatic mode and a manual equal-interval mode. The automatic mode is based on the target detection module: when the detection frame of the athlete in the current frame no longer overlaps the last detection frame displayed in superposition, the athlete in the current detection frame is selected as a new superposition target. The manual equal-interval mode superimposes the output segmentation results every several frames.
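A hedged sketch of the measurement step: the pixel-to-meter scale is anchored to the platform's known height, and speed is updated from the center-of-gravity shift between consecutive frames; the platform-height constant and function names are assumptions for illustration:

```python
PLATFORM_HEIGHT_M = 3.0   # assumed real-world height of the take-off platform

def meters_per_pixel(platform_px_height: float) -> float:
    """Relate pixel distance to real distance via the platform's known size."""
    return PLATFORM_HEIGHT_M / platform_px_height

def update_speed(prev_cog, cur_cog, fps: float, scale: float) -> float:
    """Speed in m/s from the center-of-gravity shift between two
    consecutive frames; prev_cog, cur_cog are (x, y) pixel coordinates
    and scale is meters per pixel. The time interval is 1 / fps."""
    dx = cur_cog[0] - prev_cog[0]
    dy = cur_cog[1] - prev_cog[1]
    return (dx * dx + dy * dy) ** 0.5 * scale * fps
```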
(4) The generated information data set is combined into the final video stream, and the synthesized video stream is finally exported from the system, so that data information is superimposed in real time and the motion track and related data information of the target are presented as target path ghosts.
In summary, in the method, the actually known height of the take-off platform is associated with the platform's image pixels in the picture, and the correspondence between pixels and actual distance is preset. The time interval between successive frames is determined from the actual high frame rate of the picture. At the same time, each frame is fed into the human body key point detection module, which uses human limb recognition technology to confirm the coordinates of the athlete's center of gravity; the actual moving speed of the target is estimated from the pixel offset of the center-of-gravity coordinates over the time interval. By marking the athlete's center-of-gravity coordinates through limb recognition and using the center of gravity as the monitored analysis target, the athlete's actual motion track and speed can be accurately estimated, facilitating the subsequent analysis and processing flow. In processing the series of pictures, experience from the actual service scene shows that when the center of gravity is highest the speed is relatively slowest and the action is at its most attractive, so the frame at the highest center of gravity is preset as the optimal rendering picture and its current target segmentation area is taken as the selected picture. The other rendered motion pictures are based on that time point and coordinate information: since each frame has a corresponding target segmentation area, any frame whose segmentation area shares superimposed pixels with the selected picture's segmentation area is discarded, until a frame is reached whose segmentation area no longer overlaps; that frame's segmentation area is then selected as an afterimage in the rendered picture. Following this logic, a series of action-sequence pictures is generated for presentation.
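A sketch of the afterimage-selection rule just described, under the assumption that per-frame cutout masks are available as boolean arrays: the apex frame is rendered first, and later cutouts are pasted only when they do not overlap any ghost already on the canvas:

```python
import numpy as np

def compose_ghosts(cutouts, apex_idx):
    """cutouts: list of (frame HxWx3, mask HxW bool) per video frame;
    apex_idx: index of the frame with the highest center of gravity,
    which is rendered first as the selected picture."""
    base_frame, base_mask = cutouts[apex_idx]
    canvas = base_frame.copy()
    occupied = base_mask.copy()
    for i, (frame, mask) in enumerate(cutouts):
        if i == apex_idx:
            continue
        if np.any(occupied & mask):   # overlaps an existing ghost: discard
            continue
        canvas[mask] = frame[mask]    # paste this cutout as an afterimage
        occupied |= mask
    return canvas
```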
In addition, according to the actual service scene, the background links that require video processing (the recognition and superposition operations) do not need to run continuously; only the short interval in which the air skill is displayed needs to be presented with target path ghosts after recognition and positioning. In the common viewing experience, the typical scene is that the target enters the scene to prepare, starts to accelerate, rushes to the take-off platform, and leaps into the air; the aerial skills, turns, and twists are then displayed, and finally the target lands safely, completing the whole performance. Under existing playout systems, after the athlete's performance ends, a series of slow-motion replays of the aerial phase is cut in immediately so that the audience can clearly see the athlete's motion details in the air, as shown in FIG. 6. The cut-in and cut-out of this slow-motion segment often carry a fixed graphic package for identification.
In this method, as a way to assist in locating the key segment, content detection can be used to detect the fixed transition graphic segment as the entry and exit points for subsequent air skill rapid packaging: monitoring and processing for air skill rapid packaging starts when playback of the slow-motion video segment is detected, and exits when the transition graphic segment is detected playing again. To further confirm the correlation between the video content and the target air skill, scene detection can additionally be performed on the video content for which rapid-packaging monitoring has started, judging whether it matches key scenes such as the take-off platform; subsequent superposition processing is carried out after a match.
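A minimal sketch of detecting the fixed transition graphic by template matching; the stored reference image, threshold, and helper name are assumptions, and a production system might instead use a learned classifier:

```python
import cv2

TEMPLATE = cv2.imread("transition_graphic.png")  # assumed stored reference crop
THRESHOLD = 0.9                                  # assumed match threshold

def is_transition(frame) -> bool:
    """True when the fixed graphic package appears in the frame; used as
    the entry/exit cue for rapid-packaging monitoring."""
    res = cv2.matchTemplate(frame, TEMPLATE, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, _ = cv2.minMaxLoc(res)
    return max_val >= THRESHOLD
```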
The air skill rapid packaging method is not only suitable for real-time processing and display of freestyle-skiing aerials on snow; it applies to the whole process of taking off from a designated position, leaping, turning, and displaying, similar to high-platform diving, and any program featuring aerial skill displays can use it to process the final display picture. In addition, external devices such as infrared rangefinders can be further combined, and the more accurate real-time data collected by such devices can be synchronously integrated into the finally presented picture. Similarly, certain agreed technical actions can be processed in advance and then presented and described synchronously in real time when the athlete actually performs them.

Claims (10)

1. An air skill rapid packaging method, characterized by comprising the following steps:
(1) acquiring original pictures in real time through an on-site high-frame-rate, ultra-high-definition video acquisition system, and importing the pictures into an AI video analysis and synthesis module;
(2) detecting the target athlete in the picture through the AI video analysis and synthesis module, extracting the target image, and extracting a series of target images according to the time sequence and the successive positions of the target in the picture;
(3) generating a superimposed information data set according to the temporal and positional relations of the targets, by combining background data measurement and analysis with the corresponding target-related data information returned by visualization processing;
(4) combining the generated information data set into the final video stream and finally exporting the synthesized video stream from the system, so that data information is superimposed in real time and the motion track and related data information of the target are presented as target path ghosts.
2. The air skill rapid packaging method according to claim 1, characterized in that step (2) is specifically: when monitoring and processing for air skill rapid packaging is triggered, a background service starts to monitor the video content and decodes the video to generate the corresponding picture sequence set; the AI video analysis and synthesis module then takes video motion-foreground extraction as its algorithmic target, based on AI deep-learning capability: a target detection and tracking technique accurately captures the foreground target in the region shot by the camera to obtain the target's position, an image segmentation technique obtains the target's contour, the foreground target is extracted according to the contour information, and rapid image synthesis is carried out; foreground extraction of athlete targets across different frame sequences is realized by three sub-modules: target detection and tracking, human body key point detection, and human body target segmentation and extraction.
3. The air skill rapid packaging method according to claim 2, characterized in that the target detection and tracking module is configured to provide the coordinate information of the rectangular frame where the target athlete is located in the current video frame, detecting the designated area in real time and, when target motion is detected, starting tracking to obtain the coordinates of the target human body area in subsequent video frames.
4. The air skill rapid packaging method according to claim 2 or 3, characterized in that the specific operation method of the target detection and tracking module is: target detection uses a convolutional neural network to extract candidate windows, screens targets present in the region of interest, and detects the positions of foreground targets; tracking then builds on the detection, employing a tracking algorithm to obtain the target's position in consecutive frames.
5. The air skill rapid packaging method according to claim 2, characterized in that the human body key point detection module is used to acquire the coordinate information of 17 key points of the human body and to provide position information for the subsequent calculation of the human body's height and speed; because the athlete's posture in the air differs greatly from a conventional human posture, material of athletes in different postures is added for this scene, the athlete's aerial posture is simulated by adding disturbances during training to enrich the data set, and a FocalLoss loss function is used to guide the network in learning difficult samples.
6. The air skill rapid packaging method according to claim 4, characterized in that the human body key point detection module operates as follows: key point detection is carried out on the human body in the target detection area; the input is the vertex coordinates of the rectangular frame of the area where the human body is located, as provided by target detection, and the output is 17 human body posture key points, namely: nose, left and right eyes, left and right ears, left and right shoulders, left and right elbows, left and right wrists, left and right hips, left and right knees, and left and right ankles.
7. The air skill rapid packaging method according to claim 2, characterized in that the human body target segmentation and extraction module outputs a binary image used to superimpose the overlapping human body regions of adjacent video frames; the module is a fusion model based on the key point detection network and shares part of its network weights with the key point network; the feature-map information of the segmentation network is constrained using the key point model's robustness to background regions, further improving the segmentation network's background robustness; pixel-level constraints on the edge region and gradient constraints on the binary image are added during training to improve the segmentation of difficult regions, and for learning complex samples a FocalLoss-like loss function is adopted to alleviate the inter-class similarity problem.
8. The air skill rapid packaging method according to claim 6, characterized in that the human body target segmentation and extraction module operates as follows: according to the detection result of the human body key points, a segmentation result for the target is acquired using a target segmentation technique based on human body joints, and the foreground target is extracted according to the target's segmentation result and the input video stream information.
9. The air skill rapid packaging method according to claim 5, characterized in that step (3) is specifically: moving-target interaction data are measured and analyzed based on the images, and visual data and images are generated and output in a superimposed manner; measurement of the human body's height and speed is based on the coordinate information output by the human body key point detection module: first, the estimated pixel height of the take-off platform in the picture is acquired based on the platform's actual height, and the relation between pixel distance and actual distance is obtained based on the athlete's initial speed, initial height, and the video frame-number information; the height value is provided by the vertical coordinate of the highest key point of the athlete in the current video frame, and the currently displayed height is then updated from the previous frame's height; the speed value is updated from the coordinates of the athlete's center of gravity in the current video frame, by calculating the center-of-gravity offset distance and the time interval between the previous frame and the current frame.
10. The air skill rapid packaging method according to claim 9, characterized in that image superposition supports a parameterized automatic mode and a manual equal-interval mode: the automatic mode is based on the target detection module, and when the detection frame of the athlete in the current frame no longer overlaps the last detection frame displayed in superposition, the athlete in the current detection frame is selected as a new superposition target; the manual equal-interval mode superimposes the output segmentation results every several frames.
CN202111638702.9A 2021-12-29 2021-12-29 Quick packaging method for air skills Active CN114302234B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111638702.9A CN114302234B (en) 2021-12-29 2021-12-29 Quick packaging method for air skills

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111638702.9A CN114302234B (en) 2021-12-29 2021-12-29 Quick packaging method for air skills

Publications (2)

Publication Number Publication Date
CN114302234A (en) 2022-04-08
CN114302234B (en) 2023-11-07

Family

ID=80971050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111638702.9A Active CN114302234B (en) 2021-12-29 2021-12-29 Quick packaging method for air skills

Country Status (1)

Country Link
CN (1) CN114302234B (en)


Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10276351A * 1997-03-31 1998-10-13 Mitsubishi Electric Corp Sports competition display device
US6710713B1 (en) * 2002-05-17 2004-03-23 Tom Russo Method and apparatus for evaluating athletes in competition
US20140196082A1 (en) * 2012-07-17 2014-07-10 Panasonic Corporation Comment information generating apparatus and comment information generating method
US20140067098A1 (en) * 2012-08-31 2014-03-06 Catapult Innovations Pty Ltd Sports data collection and presentation
KR101291765B1 (en) * 2013-05-15 2013-08-01 (주)엠비씨플러스미디어 Ball trace providing system for realtime broadcasting
US20150379725A1 (en) * 2014-06-30 2015-12-31 Panasonic Intellectual Property Management Co., Ltd. Moving information analyzing system, camera, and moving information analyzing method
CN109040837A (en) * 2018-07-27 2018-12-18 北京市商汤科技开发有限公司 Method for processing video frequency and device, electronic equipment and storage medium
CN109903312A (en) * 2019-01-25 2019-06-18 北京工业大学 A kind of football sportsman based on video multi-target tracking runs distance statistics method
CN110472554A (en) * 2019-08-12 2019-11-19 南京邮电大学 Table tennis action identification method and system based on posture segmentation and crucial point feature
CN110516620A (en) * 2019-08-29 2019-11-29 腾讯科技(深圳)有限公司 Method for tracking target, device, storage medium and electronic equipment
WO2021129064A1 (en) * 2019-12-24 2021-07-01 腾讯科技(深圳)有限公司 Posture acquisition method and device, and key point coordinate positioning model training method and device
US20210312321A1 (en) * 2020-04-06 2021-10-07 Huawu DENG Method, system, and medium for identifying human behavior in a digital video using convolutional neural networks
US20210322852A1 (en) * 2020-04-21 2021-10-21 Stupa Sports Analytics Private Limited Determining trajectory of a ball from two-dimensional media-content using computer vision
WO2021238325A1 (en) * 2020-05-29 2021-12-02 华为技术有限公司 Image processing method and apparatus
CN112135045A (en) * 2020-09-23 2020-12-25 努比亚技术有限公司 Video processing method, mobile terminal and computer storage medium
CN112668522A (en) * 2020-12-31 2021-04-16 华南理工大学 Human body key point and human body mask combined detection network and method
CN112990162A (en) * 2021-05-18 2021-06-18 所托(杭州)汽车智能设备有限公司 Target detection method and device, terminal equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114913471A (en) * 2022-07-18 2022-08-16 深圳比特微电子科技有限公司 Image processing method and device and readable storage medium
CN114913471B (en) * 2022-07-18 2023-09-12 深圳比特微电子科技有限公司 Image processing method, device and readable storage medium

Also Published As

Publication number Publication date
CN114302234B (en) 2023-11-07

Similar Documents

Publication Publication Date Title
US11373354B2 (en) Techniques for rendering three-dimensional animated graphics from video
US8675021B2 (en) Coordination and combination of video sequences with spatial and temporal normalization
US10922879B2 (en) Method and system for generating an image
JP4739520B2 (en) Method and apparatus for synthesizing video sequence with spatio-temporal alignment
US20140029920A1 (en) Image tracking and substitution system and methodology for audio-visual presentations
JP2009505553A (en) System and method for managing the insertion of visual effects into a video stream
US7843510B1 (en) Method and system for combining video sequences with spatio-temporal alignment
US9087380B2 (en) Method and system for creating event data and making same available to be served
Pidaparthy et al. Keep your eye on the puck: Automatic hockey videography
Bebie et al. A Video‐Based 3D‐Reconstruction of Soccer Games
CN114363689A (en) Live broadcast control method and device, storage medium and electronic equipment
CN114302234A (en) Air skill rapid packaging method
RU2602792C2 (en) Motion vector based comparison of moving objects
Inamoto et al. Free viewpoint video synthesis and presentation from multiple sporting videos
Nieto et al. An automatic system for sports analytics in multi-camera tennis videos
JP2009519539A (en) Method and system for creating event data and making it serviceable
EP1449357A1 (en) Method and system for combining video with spatio-temporal alignment
Xie et al. Object tracking method based on 3d cartoon animation in broadcast soccer videos
Inamoto et al. Arbitrary viewpoint observation for soccer match video
US20220182691A1 (en) Method and system for encoding, decoding and playback of video content in client-server architecture
EP4120687A1 (en) An object or region of interest video processing system and method
KR20230096360A (en) Sports motion analysis system using multi-camera
WO2023157005A1 (en) An augmented reality interface for watching live sport games
Sanjeewa Automated Highlights Generator for DOTA 2 Game Using Audio-Visual Framework
Liu et al. Scene Composition in Augmented Virtual Presenter System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant