CN113052119B - Ball game tracking camera shooting method and system - Google Patents
- Publication number
- CN113052119B (application CN202110374167A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes of sport video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
- G06T5/80
Abstract
The invention relates to the technical field of image processing, and in particular to a ball game tracking and shooting method and system. The method comprises the following steps: shooting different positions of a field to obtain real shot images taken at different angles, and splicing them into a real-time wide-angle image; identifying the sphere in the wide-angle image, and intercepting the wide-angle image according to the image coordinates of the sphere to obtain a video image, so that the sphere is always located in the video image; and, according to the time sequence in which the video images are generated, extracting a rotation-translation matrix of the edge positions of two adjacent video images, calculating the angle information and displacement information of the matrix, and correcting the intercepting position of the next video image with reference to the rotation-translation matrix of the previous video image's edge position and the relative position of the sphere in the video image. The invention processes the images through distortion-free splicing and then performs targeted interception and correction of the interception areas of consecutive images, so that unattended shooting is realized while stable video image transitions are ensured.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a ball game tracking and shooting method and system.
Background
At present, campus sports activities and events, represented by campus football, are becoming more and more common, and recording and live-broadcasting such events has become one way of publicizing and sharing them. However, most schools currently lack the professional equipment and personnel to shoot, record and live-broadcast these activities. The pace of play in ball games changes very quickly; with only an ordinary DV camera, the operator basically cannot keep up, and it is difficult to meet users' requirements. Shooting ball sports activities is therefore a pain point and a difficulty.
Disclosure of Invention
The invention aims to provide a ball game tracking and shooting method and system that overcome the poor shooting results of existing campus ball game recording.
In a first aspect, a ball game tracking camera shooting method is provided, including the following steps:
shooting different positions of a field to obtain real shooting images shot at different angles and splicing the real shooting images into a real-time wide-angle image;
identifying a sphere in the wide-angle image, and intercepting the wide-angle image according to the image coordinates of the sphere to obtain a video image, so that the sphere is always positioned in the video image;
according to the time sequence in which the video images are generated, extracting a rotation-translation matrix of the edge positions of two adjacent video images, calculating the angle information and displacement information of the matrix, and correcting the intercepting position of the next video image with reference to the rotation-translation matrix of the previous video image's edge position and the relative position of the sphere in the video image.
Optionally, the method further comprises the following steps:
performing distortion correction on the real shot images composing the wide-angle image, and selecting coincident pixel points in two adjacent real shot images as reference points for solving a homography matrix;
establishing a unified coordinate system in the wide-angle image, eliminating partially overlapped pixel points through perspective transformation, and then mapping the real shot images back into the corresponding areas of the wide-angle image.
Optionally, the method further comprises the following steps:
eliminating mismatched points among the selected coincident pixel points by using the RANSAC algorithm, calculating an initial value of the homography matrix from the remaining coincident pixel points, and refining it with Levenberg-Marquardt nonlinear iterative least-squares minimization.
Optionally, the method further comprises the following steps:
defining a sphere area in the middle of the video image, so that the sphere of the first intercepted video image is located in the sphere area;
extracting the feature information of the sphere region, and using an adaptive feature-point registration algorithm to obtain the rotation-translation matrix of the edge pixel points of two adjacent video images;
comparing the change in feature information of the sphere area between the front and rear video images; if the change is within a preset threshold, re-intercepting the rear video image with the interception range of the front video image as reference, so that the interception range of the rear video image is consistent with that of the front video image; if the change exceeds the preset threshold, generating a plurality of intermediate images with the front video image as reference, so that the rotation-translation matrices of the edge pixel points of the front video image, the intermediate images and the rear video image are in a linear relation.
In a second aspect, there is provided a ball game tracking camera system comprising:
the camera modules are used for shooting different positions of the field to form a real shot image;
the image processing module is used for splicing the real-time photographed images to obtain real-time wide-angle images;
the identification module is used for identifying the sphere in the wide-angle image;
the image processing module is also used for intercepting the wide-angle image according to the image coordinates of the sphere to obtain video images, so that the sphere is always located in the video images; generating a time sequence from the video images; extracting the rotation-translation matrix of the edge positions of two adjacent video images; calculating the angle information and displacement information of the matrix; and correcting the intercepting position of the next video image with reference to the rotation-translation matrix of the previous video image's edge position and the relative position of the sphere in the video image.
Optionally, the image processing module is further configured to perform distortion correction on the shot image in the wide-angle image, and select overlapping pixel points in two adjacent real shot images as reference points for solving the homography matrix; and establishing a unified coordinate system in the wide-angle image, eliminating partial overlapped pixel points through perspective transformation, and then mapping the real shot image into a corresponding area of the wide-angle image again.
Optionally, the image processing module is further configured to reject mismatched points among the selected coincident pixel points using the RANSAC algorithm, calculate an initial value of the homography matrix from the remaining coincident pixel points, and refine it with Levenberg-Marquardt nonlinear iterative least-squares minimization.
Optionally, the device further comprises a judging module;
the judging module is used for comparing the characteristic information changes of the sphere areas of the front video image and the rear video image;
the image processing module is also used for defining a sphere area in the middle of the video image so that the sphere of the video image which is intercepted for the first time is positioned in the sphere area;
extracting feature information of a sphere region, and obtaining a rotation translation matrix of edge pixel points of two adjacent video images by adopting a self-adaptive feature point registration algorithm;
and, according to the judgment result of the judging module, either re-intercepting the rear video image with the interception range of the front video image as reference, so that the interception range of the rear video image is consistent with that of the front video image, or generating a plurality of intermediate images with the front video image as reference, so that the rotation-translation matrices of the edge pixel points of the front video image, the intermediate images and the rear video image are in a linear relation.
The beneficial effects of the invention are as follows: the images are processed through distortion-free splicing, and interception and correction of the interception areas of consecutive images are then performed in a targeted manner, so that unattended shooting and recording of match videos is realized, stable video image transitions are ensured, and losses caused by rapid movement of the sphere, off-field interference and similar situations are prevented.
Drawings
FIG. 1 is an exemplary system architecture for implementing the ball motion tracking camera method of the present application.
Fig. 2 is a flowchart of a ball game tracking camera method according to a first embodiment.
Fig. 3 is a flowchart of a ball game tracking camera method according to a second embodiment.
Fig. 4 is a flowchart of a ball game tracking camera method according to a third embodiment.
Fig. 5 is a block diagram illustrating a ball game tracking camera system according to one embodiment.
Fig. 6 is a block diagram illustrating a ball game tracking camera system according to another embodiment.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the present invention will be further described with reference to the embodiments and the accompanying drawings.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
FIG. 1 illustrates an exemplary system architecture to which embodiments of the ball game tracking camera methods and systems of the present application may be applied.
As shown in fig. 1, the system architecture may include cameras 101, 102, 103, a connection medium 104, and a processing terminal 105. The connection medium 104 is a medium for providing a transmission link between the cameras 101, 102, 103 and the processing terminal 105. The connection medium 104 may include various connection types, such as wired, wireless transmission links, or fiber optic cables.
It should be understood that the number of cameras, connection mediums and processing terminals in fig. 1 is merely illustrative, and that any number of cameras, connection mediums and processing terminals may be provided as desired for implementation.
According to a first aspect of the present invention, there is provided a ball game tracking camera method.
Fig. 2 is a flowchart of the ball game tracking camera method according to the first embodiment; the method is executed by the cameras, the connection medium and the processing terminal. Referring to fig. 2, the method comprises the following steps:
and S21, shooting different positions of the field, obtaining real shooting images shot at different angles, and splicing the real shooting images into a real-time wide-angle image.
In step S21, four cameras shoot from different angles and positions on one side of the field. Taking a football match as an example, the field positions mainly captured by the four cameras are the left back field, the left midfield, the right midfield and the right back field. The real shot images obtained by the four cameras are sent to the processing terminal, which performs image stitching through an image stitching algorithm to obtain the wide-angle image.
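As an illustration of the stitching step, the sketch below pastes four synthetic camera frames onto a single wide canvas. It is a deliberate simplification in which each camera's placement is a precomputed horizontal offset; the method actually described uses homography-based alignment, so treat the offsets and sizes as assumptions.

```python
import numpy as np

def stitch_horizontal(frames, offsets, canvas_w):
    """Paste per-camera frames onto one wide canvas at given x-offsets.

    Simplified stand-in for homography-based stitching: placement is a
    horizontal offset only, and overlapping pixels are overwritten by
    the later frame.
    """
    h = frames[0].shape[0]
    canvas = np.zeros((h, canvas_w, 3), dtype=frames[0].dtype)
    for frame, x in zip(frames, offsets):
        w = frame.shape[1]
        canvas[:, x:x + w] = frame
    return canvas

# Four synthetic 90x160 "camera" frames covering left-back, left-mid,
# right-mid and right-back zones, with 20 px of overlap between neighbours.
frames = [np.full((90, 160, 3), 40 * i, dtype=np.uint8) for i in range(1, 5)]
wide = stitch_horizontal(frames, offsets=[0, 140, 280, 420], canvas_w=580)
```

In the real system the offsets would come from the homography solved in steps S31–S32 rather than being hard-coded.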
Step S22: identify the sphere in the wide-angle image, and intercept the wide-angle image according to the image coordinates of the sphere to obtain a video image, so that the sphere is always located in the video image.
In step S22, the processing terminal identifies the sphere in the wide-angle image through an image recognition algorithm, which may be the SIFT algorithm or the SURF algorithm. The main steps of the SIFT algorithm are: extracting key points; attaching detailed local-feature information (descriptors) to the key points; and finding pairs of mutually matching feature points by pairwise comparison of the feature points (the key points with their attached feature vectors) from the two images. The main steps of the SURF algorithm are: constructing a Hessian matrix; constructing a scale space; accurately locating feature points; and determining the main direction. After the sphere is identified, a video image for producing the recorded match video is obtained from the wide-angle image by screenshot, so that the sphere is always located in the video image; meanwhile, the video image is suitably enlarged and adjusted, giving the effect of watching the contested area at closer range.
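The matching step common to SIFT and SURF — pairwise comparison of feature descriptors to find mutually consistent pairs — can be sketched with plain NumPy. The 2-d descriptors and the 0.75 ratio below are illustrative assumptions; real SIFT descriptors are 128-dimensional and SURF descriptors 64-dimensional.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Pairwise-compare descriptor vectors and keep matches that pass
    Lowe's ratio test (nearest neighbour clearly better than the
    second-nearest). Minimal stand-in for the SIFT/SURF matching step.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Two tiny descriptor sets: rows 0 and 1 of A match rows 1 and 0 of B;
# row 2 of A is ambiguous (equidistant to two candidates) and is rejected.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
B = np.array([[0.0, 1.0], [1.0, 0.0], [10.0, 10.0]])
matches = match_descriptors(A, B)
```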
Step S23: generate a time sequence from the video images, extract a rotation-translation matrix of the edge positions of two adjacent video images, calculate the angle information and displacement information of the matrix, and correct the intercepting position of the next video image with reference to the rotation-translation matrix of the previous video image's edge position and the relative position of the sphere in the video image.
In step S23, after a video image is acquired, the intercepting position and intercepting range of the next video image are re-determined according to the angle information and displacement information of the rotation-translation matrix, to counter the lack of smoothness and continuity in the video images caused by large-scale transfer of the sphere during rapid attack-defense transitions on the pitch. After the video images are initially determined according to the positions of the sphere, the proportion of identical positions between the front and rear video images is judged through the rotation-translation matrix. When this proportion is greater than a threshold, i.e. the sphere has not transferred rapidly over a large range, the intercepting position of the rear video image is corrected to be consistent with that of the front video image. When the proportion is smaller than the threshold, i.e. the sphere has transferred rapidly over a large range and the front and rear video images would appear abrupt because their intercepting positions differ greatly, the transition between video images is eased by supplementing frames along the direction of the positional shift.
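One hypothetical way to formalise the "same position proportion" between consecutive interception windows is the overlap fraction of their rectangles. The function names and the 0.6 threshold below are assumptions for illustration, not values from the patent.

```python
def same_position_ratio(rect_a, rect_b):
    """Fraction of rect_a's area that rect_b also covers.

    rects are (x, y, w, h) interception windows in wide-image
    coordinates; a hypothetical reading of the patent's
    "same position proportion" between consecutive windows.
    """
    ax, ay, aw, ah = rect_a
    bx, by, bw, bh = rect_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return (ix * iy) / (aw * ah)

def needs_intermediate_frames(prev_rect, next_rect, threshold=0.6):
    """True when the sphere has jumped far enough that the windows
    barely overlap and intermediate frames should be supplemented."""
    return same_position_ratio(prev_rect, next_rect) < threshold
```

A small horizontal pan (75% of the previous window still visible) keeps the previous interception position; a jump with no overlap triggers frame supplementing.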
Fig. 3 is a flowchart of a ball game tracking imaging method according to a second embodiment, and as shown in fig. 3, a method for obtaining a wide-angle image is provided, including the steps of:
and S31, performing distortion correction on the shot images in the wide-angle images, and selecting coincident pixel points in two adjacent real shot images as reference points for solving the homography matrix.
And S32, establishing a unified coordinate system in the wide-angle image, eliminating partial overlapped pixel points through perspective transformation, and then mapping the real shot image into a corresponding area of the wide-angle image again.
In this embodiment, the method further includes removing mismatched points among the selected coincident pixel points using the RANSAC algorithm, calculating an initial value of the homography matrix from the remaining coincident pixel points, and refining it with Levenberg-Marquardt nonlinear iterative least-squares minimization.
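A toy version of the reject-then-refine pipeline, assuming a pure 2-D translation model instead of the full homography: RANSAC discards the gross mismatch, and the refinement over the inlier set (which for this linear model reduces to a least-squares mean rather than a Levenberg-Marquardt iteration) recovers the true shift.

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """RANSAC over point correspondences using a 2-D translation model.

    Deliberate simplification of the patent's step: the real method
    fits a 3x3 homography and refines it with Levenberg-Marquardt;
    here the model is just (dx, dy).
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))            # minimal sample: one pair
        shift = dst[i] - src[i]
        residuals = np.linalg.norm(dst - (src + shift), axis=1)
        inliers = residuals < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refinement on the inlier set.
    refined = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return refined, best_inliers

src = np.array([[0., 0.], [10., 0.], [0., 10.], [5., 5.], [3., 7.]])
dst = src + np.array([4.0, -2.0])
dst[4] = [100.0, 100.0]                        # one gross mismatch
shift, inliers = ransac_translation(src, dst)
```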
Fig. 4 is a flowchart of a ball game tracking camera shooting method according to a third embodiment, and as shown in fig. 4, a method for correcting a video image capturing range is provided, including the following steps:
and step S41, defining a sphere area in the middle of the video image, and enabling the sphere of the video image which is intercepted for the first time to be positioned in the sphere area.
In this embodiment, after determining the position of the sphere in the wide-angle image, the sphere area may be determined based on the sphere, for example, a circular area within a certain range with the sphere as the center, and the capturing range of the video image is determined based on the sphere area, so as to complete the primary capturing of the video image.
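A minimal sketch of deriving the interception window from the detected sphere position, assuming a fixed window size and clamping at the wide-image borders (both are assumptions for illustration):

```python
def crop_around_ball(ball_x, ball_y, wide_w, wide_h, crop_w=1280, crop_h=720):
    """Centre a fixed-size interception window on the detected sphere,
    clamping it so the window never leaves the stitched wide image.

    Returns the (x, y) of the window's top-left corner; the 1280x720
    window size is a hypothetical choice, not a value from the patent.
    """
    x = min(max(ball_x - crop_w // 2, 0), wide_w - crop_w)
    y = min(max(ball_y - crop_h // 2, 0), wide_h - crop_h)
    return x, y
```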
Step S42: extract the feature information of the sphere region, and use an adaptive feature-point registration algorithm to obtain the rotation-translation matrix of the edge pixel points of two adjacent video images.
Before the feature information of the sphere region is extracted, pixel points with non-sphere features, such as players and referees on the field, are removed.
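Removal of non-sphere pixels could, for example, be done by colour thresholding; the colour range below is a hypothetical one for a near-white ball, not a value from the patent.

```python
import numpy as np

def ball_color_mask(region, lower, upper):
    """Keep only pixels whose RGB values fall inside the sphere's
    colour range, zeroing out players, referees and turf.

    Colour thresholding is an assumed, minimal realisation of the
    "remove non-sphere feature pixels" step.
    """
    mask = np.all((region >= lower) & (region <= upper), axis=-1)
    return region * mask[..., None]

# A 1x2 region: one near-white (ball-coloured) pixel, one dark-green pixel.
region = np.array([[[250, 250, 250], [30, 128, 30]]], dtype=np.uint8)
masked = ball_color_mask(region, np.array([200, 200, 200]), np.array([255, 255, 255]))
```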
Step S43: compare the change in feature information of the sphere area between the front and rear video images; if the change is within a preset threshold, re-intercept the rear video image with the interception range of the front video image as reference, so that the interception range of the rear video image is consistent with that of the front video image; if the change exceeds the preset threshold, generate a plurality of intermediate images with the front video image as reference, so that the rotation-translation matrices of the edge pixel points of the front video image, the intermediate images and the rear video image are in a linear relation.
The correction principle for the intercepted image is based on the rotation-translation matrix of the video image's edge position and the image coordinates of the sphere within the video image. The rotation-translation matrix contains the position coordinates of the video image's edge pixel points within the whole wide-angle image; by acquiring the rotation-translation matrices of the pixel points on the four sides or four corners of the video image, the interception range and interception position of the video image can be determined and the interception range corrected accordingly.
In this embodiment, the orientation of the rear video image relative to the front video image is determined from the rotation-translation matrices of the two images, with the wide-angle image as reference. When the change in feature information of the sphere area exceeds the preset threshold, the processing terminal starts from the front video image and intercepts a number of intermediate images in real time along the direction toward the rear video image, realizing a frame-supplementing effect. For example, if the sphere transfers over a large range in the horizontal direction between the front and rear video images, the two images would otherwise jump abruptly; the processing terminal therefore takes multiple screenshots along the horizontal direction in the real-time wide-angle image and inserts them between the front and rear video images, easing the switch between video images. The final video is assembled from the video images in order of generation time and plays at 25 frames per second.
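The frame-supplementing step — intermediate interception positions whose offsets from the front video image grow linearly toward the rear video image — can be sketched as:

```python
def intermediate_crops(prev_xy, next_xy, n):
    """Linearly interpolate n intermediate interception positions
    between two video-image windows, matching the requirement that the
    rotation-translation between consecutive frames be linear.
    """
    (x0, y0), (x1, y1) = prev_xy, next_xy
    return [(x0 + (x1 - x0) * k // (n + 1), y0 + (y1 - y0) * k // (n + 1))
            for k in range(1, n + 1)]

# A 400 px horizontal jump smoothed with three intermediate windows;
# at 25 fps these add 0.12 s of eased transition.
steps = intermediate_crops((0, 0), (400, 0), 3)
```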
In this embodiment, when the sphere is not within the wide-angle image, e.g. the sphere is out of bounds, the intercepted image of the field area where the sphere last appeared is maintained until the sphere appears again.
According to a second aspect of the present application, a ball game tracking camera system is provided.
FIG. 5 is a block diagram of a ball motion tracking camera system, see FIG. 5, according to one embodiment, comprising:
a plurality of camera modules 51 for shooting different locations of the field to form a real shot image;
an image processing module 52 for stitching the real-time photographed images to obtain a real-time wide-angle image;
an identification module 53 for identifying a sphere in the wide-angle image;
the image processing module 52 is further configured to intercept the wide-angle image according to the image coordinates of the sphere to obtain a video image, so that the sphere is always located in the video image; generate a time sequence from the video images; extract a rotation-translation matrix of the edge positions of two adjacent video images; calculate the angle information and displacement information of the matrix; and correct the intercepting position of the next video image with reference to the rotation-translation matrix of the previous video image's edge position and the relative position of the sphere in the video image.
The image capturing module 51, the image processing module 52, and the recognition module 53 may be integrally provided or may be separately provided.
Optionally, the image processing module 52 is further configured to perform distortion correction on the shot image in the wide-angle image, and select overlapping pixel points in two adjacent real shot images as reference points for solving the homography matrix; and establishing a unified coordinate system in the wide-angle image, eliminating partial overlapped pixel points through perspective transformation, and then mapping the real shot image into a corresponding area of the wide-angle image again.
Optionally, the image processing module 52 is further configured to reject mismatched points among the selected coincident pixel points using the RANSAC algorithm, calculate an initial value of the homography matrix from the remaining coincident pixel points, and refine it with Levenberg-Marquardt nonlinear iterative least-squares minimization.
Optionally, as shown in fig. 6, the system further includes a judgment module 54;
the judging module 54 is used for comparing the characteristic information changes of the sphere areas of the front video image and the rear video image;
the image processing module 52 is further configured to define a sphere area in the middle of the video image, so that the sphere of the video image that is captured for the first time is located in the sphere area;
extracting feature information of a sphere region, and obtaining a rotation translation matrix of edge pixel points of two adjacent video images by adopting a self-adaptive feature point registration algorithm;
and, according to the judgment result of the judging module 54, either re-intercept the rear video image with the interception range of the front video image as reference, so that the interception range of the rear video image is consistent with that of the front video image, or generate a plurality of intermediate images with the front video image as reference, so that the rotation-translation matrices of the edge pixel points of the front video image, the intermediate images and the rear video image are in a linear relation.
The specific manner in which the various modules perform their operations in the systems of the above embodiments has been described in detail in the method embodiments and will not be repeated here.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
In this document, the terms "comprises", "comprising" and any variation thereof are intended to cover a non-exclusive inclusion, such that something comprising a list of elements may also include other elements not expressly listed.
The foregoing description covers only preferred embodiments of the present invention and is not intended to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or substitute equivalents for some of their technical features. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention should be included in its scope of protection.
Claims (4)
1. A ball game tracking and shooting method, characterized by comprising the following steps:
shooting different positions of a field to obtain real shooting images shot at different angles and splicing the real shooting images into a real-time wide-angle image;
identifying a sphere in the wide-angle image, and intercepting the wide-angle image according to the image coordinates of the sphere to obtain a video image, so that the sphere is always positioned in the video image;
according to the time sequence generated by each video image, extracting the rotation translation matrix of the edge positions of two adjacent video images, calculating the angle information and the displacement information of the rotation translation matrix, and correcting the intercepting position of the next video image by taking the rotation translation matrix of the edge position of the previous video image and the relative position of a sphere in the video image as references;
performing distortion correction on a shot image in a wide-angle image, and selecting coincident pixel points in two adjacent real shot images as reference points for solving a homography matrix;
establishing a unified coordinate system in the wide-angle image, eliminating partial coincident pixel points through perspective transformation, and then mapping the real shot image into a corresponding area of the wide-angle image again;
defining a sphere area in the middle of the video image, so that spheres of the video image which is intercepted for the first time are positioned in the sphere area;
extracting feature information of a sphere region, and obtaining a rotation translation matrix of edge pixel points of two adjacent video images by adopting a self-adaptive feature point registration algorithm;
comparing the change in feature information of the sphere area between the front and rear video images; if the change is within a preset threshold, re-intercepting the rear video image with the interception range of the front video image as reference, so that the interception range of the rear video image is consistent with that of the front video image; and if the change exceeds the preset threshold, generating a plurality of intermediate images with the front video image as reference, so that the rotation-translation matrices of the edge pixel points of the front video image, the intermediate images and the rear video image are in a linear relation.
2. The ball game tracking camera shooting method according to claim 1, further comprising the steps of:
eliminating mismatched points from the selected coincident pixel points by the RANSAC algorithm, calculating an initial value of the homography matrix from the coincident pixel points remaining after elimination, and refining the result by the Levenberg-Marquardt nonlinear iterative minimization method.
3. A ball game tracking camera system, comprising:
a plurality of camera modules for shooting different positions of the field to form real-shot images;
an image processing module for stitching the real-shot images to obtain a real-time wide-angle image;
an identification module for identifying the ball in the wide-angle image;
the image processing module is further configured to crop the wide-angle image according to the image coordinates of the ball to obtain a video image, so that the ball always lies within the video image; according to the time sequence in which the video images are generated, to extract a rotation-translation matrix of the edge positions of every two adjacent video images, calculate angle information and displacement information from the rotation-translation matrix, and correct the cropping position of the next video image with the rotation-translation matrix of the edge position of the previous video image and the relative position of the ball in the video image as references;
the image processing module is further configured to perform distortion correction on the real-shot images within the wide-angle image, and select coincident pixel points in two adjacent real-shot images as reference points for solving the homography matrix; to establish a unified coordinate system in the wide-angle image, eliminate part of the coincident pixel points through perspective transformation, and then remap the real-shot images into the corresponding areas of the wide-angle image;
the system further comprises a judging module;
the judging module is configured to compare the change in feature information of the ball region between the previous and the next video image;
the image processing module is further configured to define a ball region in the middle of the video image, so that the ball in the first cropped video image lies within the ball region;
to extract feature information of the ball region, and obtain the rotation-translation matrix of the edge pixel points of two adjacent video images by an adaptive feature-point registration algorithm;
and, according to the judging result of the judging module, to re-crop the next video image with the cropping range of the previous video image as a reference, so that the cropping range of the next video image is consistent with that of the previous video image, or to generate a plurality of intermediate images with the previous video image as a reference, so that the rotation-translation matrices of the edge pixel points of the previous video image, the intermediate images and the next video image are in a linear relation.
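The cropping behaviour of the image processing and judging modules can be sketched as follows. This is a minimal sketch under assumptions of my own: the function names, a fixed-size crop window, and a rectangular central "ball region" defined by a margin fraction are all illustrative, not taken from the patent.

```python
def crop_around_ball(frame_w, frame_h, ball_x, ball_y, crop_w, crop_h):
    """Compute the top-left corner of the crop window centred on the
    ball, clamped so the window stays inside the wide-angle frame."""
    x0 = min(max(ball_x - crop_w // 2, 0), frame_w - crop_w)
    y0 = min(max(ball_y - crop_h // 2, 0), frame_h - crop_h)
    return x0, y0, crop_w, crop_h

def ball_in_center_region(x0, y0, crop_w, crop_h, ball_x, ball_y,
                          margin=0.25):
    """True if the ball lies inside the central 'ball region' of the
    crop (here: the rectangle inset by `margin` of the crop size);
    only then may the previous crop be reused unchanged."""
    return (x0 + margin * crop_w <= ball_x <= x0 + (1 - margin) * crop_w
            and y0 + margin * crop_h <= ball_y <= y0 + (1 - margin) * crop_h)
```

When the ball stays inside the central region, the previous crop is reused so the video image does not jitter; when it leaves, a new crop is computed and intermediate images bridge the jump.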
4. The ball game tracking camera system according to claim 3, wherein the image processing module is further configured to eliminate mismatched points from the selected coincident pixel points by the RANSAC algorithm, calculate an initial value of the homography matrix from the coincident pixel points remaining after elimination, and refine the result by the Levenberg-Marquardt nonlinear iterative minimization method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110374167.4A CN113052119B (en) | 2021-04-07 | 2021-04-07 | Ball game tracking camera shooting method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113052119A CN113052119A (en) | 2021-06-29 |
CN113052119B true CN113052119B (en) | 2024-03-15 |
Family
ID=76518904
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110374167.4A Active CN113052119B (en) | 2021-04-07 | 2021-04-07 | Ball game tracking camera shooting method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113052119B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114268741B (en) * | 2022-02-24 | 2023-01-31 | 荣耀终端有限公司 | Transition dynamic effect generation method, electronic device, and storage medium |
CN116612168A (en) * | 2023-04-20 | 2023-08-18 | 北京百度网讯科技有限公司 | Image processing method, device, electronic equipment, image processing system and medium |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2511846A1 (en) * | 2004-07-07 | 2006-01-07 | Leo Vision | Process for obtaining a succession of images in the form of a spinning effect |
CN101601277A (en) * | 2006-12-06 | 2009-12-09 | 索尼英国有限公司 | Method and apparatus for generating image content
KR101291765B1 (en) * | 2013-05-15 | 2013-08-01 | (주)엠비씨플러스미디어 | Ball trace providing system for realtime broadcasting |
CN103745483A (en) * | 2013-12-20 | 2014-04-23 | 成都体育学院 | Mobile-target position automatic detection method based on stadium match video images |
CN104580933A (en) * | 2015-02-09 | 2015-04-29 | 上海安威士科技股份有限公司 | Multi-scale real-time monitoring video stitching device based on feature points and multi-scale real-time monitoring video stitching method |
WO2016086754A1 (en) * | 2014-12-03 | 2016-06-09 | 中国矿业大学 | Large-scale scene video image stitching method |
CN106600548A (en) * | 2016-10-20 | 2017-04-26 | 广州视源电子科技股份有限公司 | Fish-eye camera image processing method and system |
CN106780620A (en) * | 2016-11-28 | 2017-05-31 | 长安大学 | Table tennis trajectory identification, positioning and tracking system and method
CN106803912A (en) * | 2017-03-10 | 2017-06-06 | 武汉东信同邦信息技术有限公司 | Automatic tracking video recording system and method
WO2017133605A1 (en) * | 2016-02-03 | 2017-08-10 | 歌尔股份有限公司 | Method and device for facial tracking and smart terminal |
CN107257494A (en) * | 2017-01-06 | 2017-10-17 | 深圳市纬氪智能科技有限公司 | Sports event shooting method and camera system
CN107945113A (en) * | 2017-11-17 | 2018-04-20 | 北京天睿空间科技股份有限公司 | Correction method for misalignment in partial-image stitching
WO2018138697A1 (en) * | 2017-01-30 | 2018-08-02 | Virtual Innovation Center Srl | Method of generating tracking information for automatic aiming and related automatic aiming system for the video acquisition of sports events, particularly soccer matches with 5, 7 or 11 players |
CN109886130A (en) * | 2019-01-24 | 2019-06-14 | 上海媒智科技有限公司 | Target object determination method, apparatus, storage medium and processor
CN110782394A (en) * | 2019-10-21 | 2020-02-11 | 中国人民解放军63861部队 | Panoramic video rapid splicing method and system |
CN111583116A (en) * | 2020-05-06 | 2020-08-25 | 上海瀚正信息科技股份有限公司 | Video panorama stitching and fusing method and system based on multi-camera cross photography |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150235076A1 (en) * | 2014-02-20 | 2015-08-20 | AiScreen Oy | Method for shooting video of playing field and filtering tracking information from the video of playing field |
Also Published As
Publication number | Publication date |
---|---|
CN113052119A (en) | 2021-06-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111145238B (en) | Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment | |
CN113052119B (en) | Ball game tracking camera shooting method and system | |
JP6767743B2 (en) | Image correction method and equipment | |
CN107871120B (en) | Sports event understanding system and method based on machine learning | |
Li et al. | Efficient video stitching based on fast structure deformation | |
Ghosh et al. | Quantitative evaluation of image mosaicing in multiple scene categories | |
CN104392416A (en) | Video stitching method for sports scene | |
El-Saban et al. | Fast stitching of videos captured from freely moving devices by exploiting temporal redundancy | |
Lo et al. | Image stitching for dual fisheye cameras | |
US20140085478A1 (en) | Automatic Camera Identification from a Multi-Camera Video Stream | |
CN110689476A (en) | Panoramic image splicing method and device, readable storage medium and electronic equipment | |
CN110930310A (en) | Panoramic image splicing method | |
Possegger et al. | Unsupervised calibration of camera networks and virtual PTZ cameras | |
EP4211602A1 (en) | Systems and methods for video-based sports field registration | |
CN107274352A (en) | Image processing method and real-time sampling system for lens distortion and photographic distortion correction | |
Jin | A three-point minimal solution for panoramic stitching with lens distortion | |
CN112465702B (en) | Synchronous self-adaptive splicing display processing method for multi-channel ultrahigh-definition video | |
Xu et al. | Wide-angle image stitching using multi-homography warping | |
CN116109484A (en) | Image splicing method, device and equipment for retaining foreground information and storage medium | |
CN115190259A (en) | Shooting method and shooting system for small ball game | |
CN108234904A (en) | Multi-video fusion method, apparatus and system | |
WO2020244194A1 (en) | Method and system for obtaining shallow depth-of-field image | |
CN112766033B (en) | Method for estimating common attention targets of downlinks in scene based on multi-view camera | |
Kim et al. | Robust multi-object tracking to acquire object oriented videos in indoor sports | |
Zhang et al. | Effective video frame acquisition for image stitching |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |