CN113052119A - Ball motion tracking camera shooting method and system - Google Patents


Info

Publication number: CN113052119A (granted publication: CN113052119B)
Application number: CN202110374167.4A
Authority: CN (China)
Inventor: 唐郁松
Applicant and current assignee: Xingti Guangzhou Intelligent Technology Co., Ltd.
Original language: Chinese (zh)
Legal status: Active (granted)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/42: Higher-level, semantic clustering, classification or understanding of sport video content
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/80: Geometric correction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a ball motion tracking camera shooting method and system. The method comprises the following steps: shooting different positions of a field to obtain real-shot images taken at different angles, and stitching them into a real-time wide-angle image; identifying the sphere in the wide-angle image and intercepting (cropping) the wide-angle image according to the image coordinates of the sphere to obtain a video image, so that the sphere is always within the video image; and ordering the video images in time, extracting the rotation-translation matrices of the edge positions of each pair of adjacent video images, calculating their angle information and displacement information, and correcting the interception position of the latter video image with reference to the rotation-translation matrix of the edge position of the former video image and the relative position of the sphere within the video image. The invention stitches images without distortion, then intercepts them and corrects the intercepted regions of consecutive frames in a targeted manner, enabling unmanned shooting while keeping the video image transitions stable.

Description

Ball motion tracking camera shooting method and system
Technical Field
The invention relates to the technical field of image processing, in particular to a ball motion tracking camera shooting method and a ball motion tracking camera shooting system.
Background
Campus sports activities and events, represented by campus football, are increasingly common, and video recording and live broadcasting have become important ways of publicizing and sharing them. However, most schools currently lack professional equipment and personnel for shooting, recording and broadcasting matches. Play in ball games shifts very quickly, and ordinary handheld DV shooting basically cannot keep up with the pace, so it is difficult to meet users' needs. Shooting ball sports activities well therefore remains a pain point.
Disclosure of Invention
The invention aims to provide a ball motion tracking camera shooting method and system that overcome the poor shooting quality of existing campus ball game recordings.
In a first aspect, a ball motion tracking camera shooting method is provided, which includes the following steps:
shooting different positions of a field to obtain real-shot images shot at different angles and splicing the real-shot images into real-time wide-angle images;
identifying a sphere in the wide-angle image, and intercepting the wide-angle image according to the image coordinate of the sphere to obtain a video image so that the sphere is always positioned in the video image;
and ordering the video images in time, extracting the rotation-translation matrices of the edge positions of each pair of adjacent video images, calculating the angle information and displacement information of the rotation-translation matrices, and correcting the interception position of the latter video image with reference to the rotation-translation matrix of the edge position of the former video image and the relative position of the sphere within the video image.
Optionally, the method further comprises the following steps:
distortion correction is carried out on the shot images in the wide-angle images, and coincident pixel points in two adjacent real shot images are selected as reference points for solving the homography matrix;
and establishing a uniform coordinate system in the wide-angle image, eliminating partial overlapped pixel points through perspective transformation, and then mapping the real shot image to a corresponding area of the wide-angle image again.
Optionally, the method further comprises the following steps:
and eliminating mismatching points in the selected coincident pixel points by using an RANSAC algorithm, calculating an initial value of a homography matrix of the residual coincident pixel points after elimination, and performing refinement elimination by using a Levenberg-Marquardt nonlinear iteration minimum approximation method.
Optionally, the method further comprises the following steps:
defining a sphere area in the middle of the video image, and enabling a sphere of the video image intercepted for the first time to be in the sphere area;
extracting characteristic information of a sphere region, and obtaining a rotation translation matrix of edge pixel points of two adjacent video images by adopting a self-adaptive characteristic point registration algorithm;
comparing the change in the characteristic information of the sphere regions of two consecutive video images; if the change is within a preset threshold, re-intercepting the latter video image with the interception range of the former video image as reference, so that the two interception ranges coincide; and if the change exceeds the preset threshold, generating a plurality of intermediate images with the former video image as reference, such that the rotation-translation matrices of the edge pixel points of the former video image, the intermediate images and the latter video image follow a linear relation.
In a second aspect, there is provided a ball motion tracking camera system comprising:
the camera modules are used for shooting different positions of a field to form real shooting images;
the image processing module is used for splicing the real-shot images to obtain a real-time wide-angle image;
the identification module is used for identifying a sphere in the wide-angle image;
the image processing module is also used for intercepting the wide-angle image according to the image coordinates of the sphere to obtain video images, so that the sphere is always within the video image; for ordering the video images in time, extracting the rotation-translation matrices of the edge positions of adjacent video images and calculating their angle information and displacement information; and for correcting the interception position of the latter video image with reference to the rotation-translation matrix of the edge position of the former video image and the relative position of the sphere within the video image.
Optionally, the image processing module is further configured to perform distortion correction on the captured image in the wide-angle image, and select a coincident pixel point in two adjacent real captured images as a reference point for solving the homography matrix; and establishing a uniform coordinate system in the wide-angle image, eliminating partial overlapped pixel points through perspective transformation, and then mapping the real shot image to a corresponding area of the wide-angle image again.
Optionally, the image processing module is further configured to eliminate mismatching points in the selected coincident pixel points by using a RANSAC algorithm, calculate an initial value of a homography matrix of the eliminated residual coincident pixel points, and perform refinement elimination by using a Levenberg-Marquardt nonlinear iterative minimum approximation method.
Optionally, the system further comprises a judging module;
the judgment module is used for comparing the characteristic information change of the sphere areas of the front video image and the rear video image;
the image processing module is further used for defining a sphere area in the middle of the video image, so that a sphere of the video image intercepted for the first time is located in the sphere area;
extracting characteristic information of a sphere region, and obtaining a rotation translation matrix of edge pixel points of two adjacent video images by adopting a self-adaptive characteristic point registration algorithm;
according to the judgment result of the judging module, either re-intercepting the latter video image with the interception range of the former video image as reference, so that the two interception ranges coincide, or generating a plurality of intermediate images with the former video image as reference, such that the rotation-translation matrices of the edge pixel points of the former video image, the intermediate images and the latter video image follow a linear relation.
The invention has the following beneficial effects: images are stitched and processed without distortion, and the interception and the correction of the intercepted regions of consecutive frames are then performed in a targeted manner. Match videos can thus be shot and recorded without an operator, the video image transitions remain stable, and footage is not lost when the sphere moves rapidly or off-field interference occurs.
Drawings
Fig. 1 is an exemplary system architecture for implementing the ball motion tracking camera method of the present application.
Fig. 2 is a flowchart of a ball motion tracking camera method according to a first embodiment.
Fig. 3 is a flowchart of a ball motion tracking camera method according to a second embodiment.
Fig. 4 is a flowchart of a ball motion tracking camera method according to a third embodiment.
FIG. 5 is a block diagram illustrating a ball motion tracking camera system according to one embodiment.
Fig. 6 is a block diagram illustrating a ball motion tracking camera system according to another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the present invention will be further described with reference to the embodiments and the accompanying drawings.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
FIG. 1 illustrates an exemplary system architecture to which embodiments of the ball motion tracking camera method and system of the present application may be applied.
As shown in fig. 1, the system architecture may include cameras 101, 102, 103, a connection medium 104, and a processing terminal 105. The connection medium 104 is a medium for providing a transmission link between the cameras 101, 102, 103 and the processing terminal 105. The connection medium 104 may include various connection types such as wired, wireless transmission links, or fiber optic cables, among others.
It should be understood that the number of cameras, connection media, and processing terminals in fig. 1 is merely illustrative, and that any number of cameras, connection media, and processing terminals may be present, as desired for an implementation.
According to a first aspect of the present invention, a ball motion tracking camera method is provided.
Fig. 2 is a flowchart illustrating a ball motion tracking camera shooting method according to a first embodiment, in which the ball motion tracking camera shooting method according to the embodiment of the present application is executed by a camera, a connection medium, and a processing terminal. Referring to fig. 2, the method comprises the steps of:
and step S21, shooting different positions of the site, obtaining real shooting images shot at different angles and splicing the real shooting images into real-time wide-angle images.
In step S21, four cameras capture one side of the field from different angles and positions. Taking a football match as an example, the four cameras mainly cover the left back field, the left middle field, the right middle field and the right back field. The real-shot images captured by the four cameras are sent to the processing terminal, which stitches them into a wide-angle image using an image stitching algorithm.
And step S22, identifying a sphere in the wide-angle image, and intercepting the wide-angle image according to the image coordinate of the sphere to obtain a video image so that the sphere is always in the video image.
In step S22, the processing terminal identifies the sphere in the wide-angle image using an image recognition algorithm, such as SIFT or SURF. The SIFT algorithm mainly comprises: extracting key points; attaching detailed local-feature information (descriptors) to the key points; and finding pairs of mutually matching feature points by pairwise comparison of the feature points (key points with feature vectors) of the two images. The main steps of the SURF algorithm are: constructing the Hessian matrix; constructing the scale space; accurately locating the feature points; and determining the main direction. After the sphere is identified, a video image for producing the match video is obtained from the wide-angle image by screenshot, so that the sphere always lies within the video image; the video image is also appropriately enlarged to give the effect of viewing the area around the ball at a closer distance.
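The screenshot step above amounts to placing a fixed-size crop window over the wide-angle image so that the detected sphere stays inside it. A minimal pure-Python sketch follows; the function name and the crop dimensions are illustrative, not taken from the patent, and clamping at the borders stands in for what the patent leaves unspecified when the ball nears the edge of the stitched image.

```python
def crop_window(ball_x, ball_y, wide_w, wide_h, crop_w, crop_h):
    """Centre a crop of size (crop_w, crop_h) on the ball's image coordinates,
    clamped so the window never leaves the wide-angle image."""
    x = min(max(ball_x - crop_w // 2, 0), wide_w - crop_w)
    y = min(max(ball_y - crop_h // 2, 0), wide_h - crop_h)
    return x, y, crop_w, crop_h
```

For a ball near the left edge the window is clamped to the image boundary rather than centred, so the sphere remains visible but off-centre.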
And step S23, generating time sequence according to each video image, extracting the rotation and translation matrixes of the edge positions of two adjacent video images, calculating the angle information and the displacement information of the rotation and translation matrixes, and correcting the interception position of the next video image by taking the rotation and translation matrix of the edge position of the previous video image and the relative position of a sphere in the video image as references.
In step S23, after a video image is obtained, the interception position and range of the next video image are re-determined according to the angle information and displacement information of the rotation-translation matrix. This addresses the problem that video images change abruptly when the sphere is transferred over a large range during fast attack-defence transitions on the pitch. After the video images have first been determined from the position of the sphere, the proportion of shared area between consecutive video images is judged from the rotation-translation matrix. When this proportion is larger than a threshold, i.e. the sphere has not been transferred over a large range, the interception position of the latter video image is corrected to coincide with that of the former. When the proportion is smaller than the threshold, i.e. the sphere has been transferred over a large range, the large difference in interception positions could make the transition between consecutive video images unsmooth, so the video image is changed gradually along the direction of the positional shift by supplementing intermediate frames.
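The "proportion of shared area" test in step S23 can be sketched as a rectangle-overlap check on consecutive crop windows. This is a simplified stand-in, assuming axis-aligned crop rectangles in wide-angle-image coordinates; the function names and the 0.6 threshold are illustrative choices, not values from the patent.

```python
def overlap_ratio(a, b):
    """Fraction of crop a's area shared with crop b. Crops are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return (ix * iy) / (aw * ah)

def correct_crop(prev, nxt, threshold=0.6):
    """Large overlap: reuse the previous crop to suppress jitter.
    Small overlap (long ball transfer): keep the new crop and flag
    that intermediate frames should be inserted."""
    if overlap_ratio(prev, nxt) >= threshold:
        return prev, False
    return nxt, True
```

The boolean flag corresponds to the patent's two branches: hold the interception position, or trigger frame supplementing.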
Fig. 3 is a flowchart illustrating a ball motion tracking camera method according to a second embodiment, and as shown in fig. 3, there is provided a method of obtaining a wide-angle image, including the steps of:
and step S31, distortion correction is carried out on the shot images in the wide-angle images, and coincident pixel points in two adjacent real shot images are selected as reference points for solving the homography matrix.
And step S32, establishing a uniform coordinate system in the wide-angle image, eliminating partial overlapped pixel points through perspective transformation, and then mapping the real shot image to the corresponding area of the wide-angle image again.
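Mapping a real-shot image into the unified wide-angle coordinate system of steps S31 and S32 means applying the solved homography to each point. A minimal sketch of that per-point perspective transform, with the homography given as a plain 3x3 nested list (solving for H itself is omitted here):

```python
def apply_homography(H, x, y):
    """Map point (x, y) through a 3x3 homography H (row-major nested lists),
    dividing by the projective coordinate w to return image coordinates."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v
```

A pure-translation homography shifts every point by a constant offset; a nontrivial bottom row produces the perspective division that removes overlap between adjacent camera views.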
In this embodiment, the method further includes eliminating mismatched points among the selected coincident pixel points using the RANSAC algorithm, calculating an initial value of the homography matrix from the coincident pixel points remaining after elimination, and refining that estimate using Levenberg-Marquardt nonlinear iterative minimization.
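The RANSAC outlier-rejection idea can be illustrated with a deliberately simplified motion model: a pure translation between matched point pairs instead of a full homography, and a mean-offset refit standing in for the Levenberg-Marquardt refinement. Everything here (function name, tolerance, iteration count) is an illustrative assumption, not the patent's implementation.

```python
import random

def ransac_translation(matches, tol=2.0, iters=200, seed=0):
    """Toy RANSAC. matches: list of ((x1, y1), (x2, y2)) correspondences.
    Repeatedly hypothesise a translation from one random match, count
    inliers within tol, and keep the largest consensus set."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) <= tol
                   and abs(m[1][1] - m[0][1] - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refit on the inliers only (mean offset stands in for LM refinement).
    dx = sum(b[0] - a[0] for a, b in best_inliers) / len(best_inliers)
    dy = sum(b[1] - a[1] for a, b in best_inliers) / len(best_inliers)
    return (dx, dy), best_inliers
```

In the real pipeline the hypothesised model would be a homography fitted from four correspondences, with reprojection error as the inlier test.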
Fig. 4 is a flowchart of a ball motion tracking camera shooting method according to a third embodiment, and as shown in fig. 4, a method for correcting a video image capture range is provided, which includes the following steps:
and step S41, defining a sphere area in the middle of the video image, and enabling the sphere of the video image intercepted for the first time to be in the sphere area.
In this embodiment, after the position of the sphere in the wide-angle image is determined, a sphere region can be defined, for example a circular region of a certain radius centred on the sphere. The interception range of the video image is then determined outward from this sphere region, completing the initial interception of the video image.
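The circular sphere region described above reduces to a centre-and-radius record plus a membership test. A small sketch, with names chosen for illustration:

```python
def sphere_region(ball_x, ball_y, radius):
    """The circular 'sphere region' centred on the detected ball."""
    return (ball_x, ball_y, radius)

def in_region(region, px, py):
    """True if pixel (px, py) lies inside the circular sphere region.
    Squared distances avoid an unnecessary square root."""
    cx, cy, r = region
    return (px - cx) ** 2 + (py - cy) ** 2 <= r ** 2
```

The first interception of the video image is then required to contain this whole region, guaranteeing the ball sits near the middle of the frame.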
And step S42, extracting the characteristic information of the sphere area, and obtaining the rotation and translation matrix of the edge pixel points of two adjacent video images by adopting a self-adaptive characteristic point registration algorithm.
It should be noted that before extracting the feature information of the sphere region, the pixel points of the non-sphere features, such as players and referees on the field, are removed.
And step S43, comparing the change in the characteristic information of the sphere regions of two consecutive video images. If the change is within a preset threshold, the latter video image is intercepted again with the interception range of the former as reference, so that the two interception ranges coincide. If the change exceeds the preset threshold, a plurality of intermediate images are generated with the former video image as reference, such that the rotation-translation matrices of the edge pixel points of the former video image, the intermediate images and the latter video image follow a linear relation.
The correction of the intercepted image is based on the rotation-translation matrix of the edge positions of the video image and the image coordinates of the athlete in the video image. The rotation-translation matrix contains the position coordinates of the edge pixel points of the video image within the whole restored image; by acquiring the rotation-translation matrices of the pixel points on the four sides or four corners of the video image, the interception range and interception position of the video image can be determined, and the interception range can thus be corrected.
In this embodiment, the orientation of the latter video image relative to the former is determined from the rotation-translation matrices of the two images, with the wide-angle image as reference. When the change in the characteristic information of the sphere region exceeds the preset threshold, the processing terminal starts from the former video image and intercepts a plurality of intermediate images in real time along the direction toward the latter, achieving a "frame supplement" effect. For example, if the sphere is transferred over a large range in the horizontal direction between two consecutive video images, the picture would otherwise jump directly; the processing terminal therefore takes multiple screenshots from the real-time wide-angle image along the horizontal direction and inserts them between the two video images, so that the switch is gentle. After the video is formed, the video images are arranged by generation time to form the camera video, which plays at 25 frames per second.
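The "frame supplement" step, with its linear relation between the edge matrices of the former image, the intermediate images and the latter image, can be sketched as linear interpolation of crop-window positions. The function name and tuple layout are illustrative assumptions:

```python
def intermediate_crops(prev, nxt, n):
    """Linearly interpolate n intermediate crop windows between prev and nxt
    (each (x, y, w, h)), so consecutive frames shift smoothly instead of
    jumping when the ball is transferred over a large range."""
    px, py, w, h = prev
    nx, ny, _, _ = nxt
    out = []
    for i in range(1, n + 1):
        t = i / (n + 1)
        out.append((round(px + (nx - px) * t), round(py + (ny - py) * t), w, h))
    return out
```

At 25 frames per second, inserting a handful of such crops spreads a large horizontal transfer over a fraction of a second rather than a single hard cut.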
In this embodiment, when the sphere is not within the wide-angle image, e.g., the sphere is out of bounds, the captured image of the field area where the sphere last appeared is maintained until the sphere appears again.
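The hold-last-view behaviour when the ball leaves the wide-angle image is a small piece of state logic. A sketch under the same illustrative conventions as above (`None` standing for "ball not detected"; the 1280x720 default is an assumption, not a value from the patent):

```python
def next_crop(ball, last_crop, wide_w, wide_h, crop_w=1280, crop_h=720):
    """Return the crop window for the next frame. If the ball is not
    detected (e.g. out of bounds), hold the last field area shown
    until the ball appears again; otherwise re-centre on the ball."""
    if ball is None:
        return last_crop
    bx, by = ball
    x = min(max(bx - crop_w // 2, 0), wide_w - crop_w)
    y = min(max(by - crop_h // 2, 0), wide_h - crop_h)
    return (x, y, crop_w, crop_h)
```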
According to a second aspect of the present application, a ball motion tracking camera system is provided.
Fig. 5 is a block diagram of a ball motion tracking camera system according to one embodiment, and referring to fig. 5, the system includes:
a plurality of camera modules 51 for shooting different positions of the field to form real-shot images;
an image processing module 52 for stitching the live images to obtain a real-time wide-angle image;
an identifying module 53, configured to identify a sphere in the wide-angle image;
the image processing module 52 is further configured to intercept the wide-angle image according to the image coordinates of the sphere to obtain video images, so that the sphere is always within the video image; to order the video images in time, extract the rotation-translation matrices of the edge positions of adjacent video images and calculate their angle information and displacement information; and to correct the interception position of the latter video image with reference to the rotation-translation matrix of the edge position of the former video image and the relative position of the sphere within the video image.
The image capturing module 51, the image processing module 52, and the recognition module 53 may be provided integrally or separately.
Optionally, the image processing module 52 is further configured to perform distortion correction on the captured image in the wide-angle image, and select a coincident pixel point in two adjacent real captured images as a reference point for solving the homography matrix; and establishing a uniform coordinate system in the wide-angle image, eliminating partial overlapped pixel points through perspective transformation, and then mapping the real shot image to a corresponding area of the wide-angle image again.
Optionally, the image processing module 52 is further configured to eliminate the mismatching points in the selected overlapping pixels by using a RANSAC algorithm, calculate an initial value of a homography matrix of the remaining overlapping pixels after elimination, and perform refinement elimination by using a Levenberg-Marquardt nonlinear iterative minimum approximation method.
Optionally, as shown in fig. 6, the system further includes a determining module 54;
the judging module 54 is configured to compare feature information changes of sphere regions of the front and rear video images;
the image processing module 52 is further configured to define a sphere region in the middle of the video image, so that the sphere of the video image captured for the first time is located in the sphere region;
extracting characteristic information of a sphere region, and obtaining a rotation translation matrix of edge pixel points of two adjacent video images by adopting a self-adaptive characteristic point registration algorithm;
according to the judgment result of the judging module 54, either re-intercepting the latter video image with the interception range of the former video image as reference, so that the two interception ranges coincide, or generating a plurality of intermediate images with the former video image as reference, such that the rotation-translation matrices of the edge pixel points of the former video image, the intermediate images and the latter video image follow a linear relation.
With regard to the system in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
As used herein, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, including not only those elements listed, but also other elements not expressly listed.
Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that various changes, modifications and substitutions can be made without departing from the spirit and scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A ball motion tracking camera shooting method is characterized by comprising the following steps:
shooting different positions of a field to obtain real-shot images shot at different angles and splicing the real-shot images into real-time wide-angle images;
identifying a sphere in the wide-angle image, and intercepting the wide-angle image according to the image coordinate of the sphere to obtain a video image so that the sphere is always positioned in the video image;
and ordering the video images in time, extracting the rotation-translation matrices of the edge positions of each pair of adjacent video images, calculating the angle information and displacement information of the rotation-translation matrices, and correcting the interception position of the latter video image with reference to the rotation-translation matrix of the edge position of the former video image and the relative position of the sphere within the video image.
2. A ball motion tracking camera method according to claim 1, further comprising the steps of:
distortion correction is carried out on the shot images in the wide-angle images, and coincident pixel points in two adjacent real shot images are selected as reference points for solving the homography matrix;
and establishing a uniform coordinate system in the wide-angle image, eliminating partial overlapped pixel points through perspective transformation, and then mapping the real shot image to a corresponding area of the wide-angle image again.
3. A ball motion tracking camera method according to claim 2, further comprising the steps of:
and eliminating mismatching points in the selected coincident pixel points by using an RANSAC algorithm, calculating an initial value of a homography matrix of the residual coincident pixel points after elimination, and performing refinement elimination by using a Levenberg-Marquardt nonlinear iteration minimum approximation method.
4. A ball motion tracking camera method according to claim 1, further comprising the steps of:
defining a sphere area in the middle of the video image, such that the sphere of the first cropped video image lies within the sphere area;
extracting feature information of the sphere area, and obtaining the rotation-translation matrix of the edge pixel points of two adjacent video images by using an adaptive feature point registration algorithm; and
comparing the change in feature information between the sphere areas of the former and latter video images: if the change is within a preset threshold, re-cropping the latter video image with the cropping range of the former video image as a reference, so that the cropping ranges of the two images are consistent; if the change exceeds the preset threshold, generating a plurality of intermediate images with the former video image as a reference, such that the rotation-translation matrices of the edge pixel points of the former video image, the intermediate images and the latter video image are in a linear relation.
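Claim 4's intermediate images impose a linear relation on the rotation-translation matrices of successive crops. Specialised to pure translation (an assumption made here for brevity; the claim also covers angle information), the crop origins interpolate in equal steps:

```python
def intermediate_crops(prev_xy, next_xy, n_mid):
    """Linearly interpolate crop-window origins between two frames so
    the crop moves in equal steps: claim 4's linear relation between
    rotation-translation matrices, reduced to pure translation."""
    (x0, y0), (x1, y1) = prev_xy, next_xy
    steps = n_mid + 1
    # Integer pixel origins for each of the n_mid intermediate images.
    return [(x0 + (x1 - x0) * k // steps, y0 + (y1 - y0) * k // steps)
            for k in range(1, steps)]
```

A jump of (100, 40) pixels bridged by three intermediate images thus advances the window by a quarter of the displacement per frame, avoiding a visible cut in the output video.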
5. A ball motion tracking camera system, comprising:
a plurality of camera modules, configured to shoot different positions of a field to form real-shot images;
an image processing module, configured to stitch the real-shot images into a real-time wide-angle image; and
an identification module, configured to identify a sphere in the wide-angle image;
wherein the image processing module is further configured to crop the wide-angle image according to the image coordinates of the sphere to obtain a video image so that the sphere always remains within the video image; to order the video images in time sequence, extract the rotation-translation matrix of the edge positions of every two adjacent video images, and calculate angle information and displacement information from it; and to correct the crop position of the latter video image with reference to the rotation-translation matrix of the edge position of the former video image and the relative position of the sphere within the video image.
6. The ball motion tracking camera system according to claim 5, wherein the image processing module is further configured to perform distortion correction on the real-shot images within the wide-angle image, and to select coincident pixel points in two adjacent real-shot images as reference points for solving a homography matrix; and to establish a unified coordinate system in the wide-angle image, eliminate part of the overlapping pixel points through perspective transformation, and then re-map the real-shot images onto the corresponding areas of the wide-angle image.
7. The ball motion tracking camera system according to claim 6, wherein the image processing module is further configured to eliminate mismatched points from the selected coincident pixel points by using the RANSAC algorithm, to calculate an initial value of the homography matrix from the coincident pixel points that remain after elimination, and to refine this estimate by Levenberg-Marquardt nonlinear iterative minimization.
8. The ball motion tracking camera system according to claim 5, further comprising a judgment module;
wherein the judgment module is configured to compare the change in feature information between the sphere areas of the former and latter video images;
the image processing module is further configured to define a sphere area in the middle of the video image, such that the sphere of the first cropped video image lies within the sphere area;
to extract feature information of the sphere area, and obtain the rotation-translation matrix of the edge pixel points of two adjacent video images by using an adaptive feature point registration algorithm; and,
according to the judgment result of the judgment module, to re-crop the latter video image with the cropping range of the former video image as a reference so that the cropping ranges of the two images are consistent, or to generate a plurality of intermediate images with the former video image as a reference, such that the rotation-translation matrices of the edge pixel points of the former video image, the intermediate images and the latter video image are in a linear relation.
CN202110374167.4A 2021-04-07 2021-04-07 Ball game tracking camera shooting method and system Active CN113052119B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110374167.4A CN113052119B (en) 2021-04-07 2021-04-07 Ball game tracking camera shooting method and system

Publications (2)

Publication Number Publication Date
CN113052119A true CN113052119A (en) 2021-06-29
CN113052119B CN113052119B (en) 2024-03-15

Family

ID=76518904

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110374167.4A Active CN113052119B (en) 2021-04-07 2021-04-07 Ball game tracking camera shooting method and system

Country Status (1)

Country Link
CN (1) CN113052119B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114268741A (en) * 2022-02-24 2022-04-01 荣耀终端有限公司 Transition dynamic effect generation method, electronic device, and storage medium
CN116612168A (en) * 2023-04-20 2023-08-18 北京百度网讯科技有限公司 Image processing method, device, electronic equipment, image processing system and medium

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2511846A1 (en) * 2004-07-07 2006-01-07 Leo Vision Process for obtaining a succession of images in the form of a spinning effect
CN101601277A * 2006-12-06 2009-12-09 索尼英国有限公司 Method and apparatus for generating image content
KR101291765B1 (en) * 2013-05-15 2013-08-01 (주)엠비씨플러스미디어 Ball trace providing system for realtime broadcasting
CN103745483A (en) * 2013-12-20 2014-04-23 成都体育学院 Mobile-target position automatic detection method based on stadium match video images
CN104580933A (en) * 2015-02-09 2015-04-29 上海安威士科技股份有限公司 Multi-scale real-time monitoring video stitching device based on feature points and multi-scale real-time monitoring video stitching method
US20150235076A1 (en) * 2014-02-20 2015-08-20 AiScreen Oy Method for shooting video of playing field and filtering tracking information from the video of playing field
WO2016086754A1 (en) * 2014-12-03 2016-06-09 中国矿业大学 Large-scale scene video image stitching method
CN106600548A (en) * 2016-10-20 2017-04-26 广州视源电子科技股份有限公司 Fish-eye camera image processing method and system
CN106780620A * 2016-11-28 2017-05-31 长安大学 Table tennis trajectory identification, positioning and tracking system and method
CN106803912A * 2017-03-10 2017-06-06 武汉东信同邦信息技术有限公司 Automatic tracking video recording system and method
WO2017133605A1 (en) * 2016-02-03 2017-08-10 歌尔股份有限公司 Method and device for facial tracking and smart terminal
CN107257494A * 2017-01-06 2017-10-17 深圳市纬氪智能科技有限公司 Sports event shooting method and camera system
CN107945113A * 2017-11-17 2018-04-20 北京天睿空间科技股份有限公司 Correction method for local image stitching misalignment
WO2018138697A1 (en) * 2017-01-30 2018-08-02 Virtual Innovation Center Srl Method of generating tracking information for automatic aiming and related automatic aiming system for the video acquisition of sports events, particularly soccer matches with 5, 7 or 11 players
CN109886130A * 2019-01-24 2019-06-14 上海媒智科技有限公司 Target object determination method and apparatus, storage medium and processor
CN110782394A (en) * 2019-10-21 2020-02-11 中国人民解放军63861部队 Panoramic video rapid splicing method and system
CN111583116A (en) * 2020-05-06 2020-08-25 上海瀚正信息科技股份有限公司 Video panorama stitching and fusing method and system based on multi-camera cross photography

Similar Documents

Publication Publication Date Title
CN111145238B (en) Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment
CN113052119B (en) Ball game tracking camera shooting method and system
KR100790887B1 (en) Apparatus and method for processing image
WO2020235110A1 (en) Calibration device, chart for calibration, and calibration method
CN110930310B (en) Panoramic image splicing method
CN110689476A (en) Panoramic image splicing method and device, readable storage medium and electronic equipment
CN111311492A (en) Crack image splicing method
CN110278366B (en) Panoramic image blurring method, terminal and computer readable storage medium
Qian et al. Manifold alignment based color transfer for multiview image stitching
Xu et al. Wide-angle image stitching using multi-homography warping
CN110675349B (en) Endoscopic imaging method and device
CN112465702A (en) Synchronous self-adaptive splicing display processing method for multi-channel ultrahigh-definition video
CN109598675B (en) Splicing method of multiple repeated texture images
CN112215749A (en) Image splicing method, system and equipment based on cylindrical projection and storage medium
CN116109484A (en) Image splicing method, device and equipment for retaining foreground information and storage medium
Zhang et al. Effective video frame acquisition for image stitching
CN115190259A (en) Shooting method and shooting system for small ball game
CN115965697A (en) Projector calibration method, calibration system and device based on Samm's law
CN108234904A Multi-video fusion method, apparatus and system
WO2020244194A1 (en) Method and system for obtaining shallow depth-of-field image
CN110213500B (en) Wide dynamic graph generation method for multi-lens shooting
Kim et al. Robust multi-object tracking to acquire object oriented videos in indoor sports
Wang et al. A common feature-based disparity control strategy in stereoscopic panorama generation
US10288486B2 (en) Image processing device and method
CN107566849B (en) Football game video playing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant