CN111757148B - Method, device and system for processing sports event video - Google Patents

Method, device and system for processing sports event video

Info

Publication number
CN111757148B
CN111757148B (application CN202010493123.9A)
Authority
CN
China
Prior art keywords
video
event
target
information
image frames
Prior art date
Legal status
Active
Application number
CN202010493123.9A
Other languages
Chinese (zh)
Other versions
CN111757148A (en)
Inventor
宋兵
赵筠
吴双龙
尹东芹
任芳
Current Assignee
Jiangsu Biying Technology Co ltd
Jiangsu Suning Cloud Computing Co ltd
Original Assignee
Suning Cloud Computing Co Ltd
Priority date
Filing date
Publication date
Application filed by Suning Cloud Computing Co Ltd
Priority to CN202010493123.9A
Publication of CN111757148A
Application granted
Publication of CN111757148B
Legal status: Active


Classifications

    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2187 Live feed
    • H04N21/234363 Reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • H04N21/47202 End-user interface for requesting content on demand, e.g. video on demand
    • H04N21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments
    • H04N21/8547 Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application relates to a method, a device and a system for processing sports event video. The method comprises the following steps: acquiring detailed event data of a target event; acquiring a plurality of target image frames of the target event that contain a scoreboard display area; determining elapsed match time information (the time elapsed since the start of the match) from the scoreboard display area in the plurality of target image frames; obtaining origin-destination information, i.e. the start and end points for clipping, according to the elapsed match time information and the detailed event data; and obtaining a target clip video segment according to the origin-destination information. The embodiments of the application can process a large number of sports event videos of different types in a short time, save labor cost, and offer high output, high accuracy and high universality.

Description

Method, device and system for processing sports event video
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a method, an apparatus, and a system for processing a video of a sporting event.
Background
Sports event videos have a large audience and wide market demand, and event operators often clip highlight segments from them for users to quickly browse, watch and share. However, traditional video processing usually relies on manual clipping, which consumes manpower, has poor timeliness and low content output; the number of clipped video segments produced is very limited, and clipping of segments for non-key events is particularly difficult to guarantee.
With the rapid development of the internet and the expansion of video services, traditional video processing methods that require manual editing are increasingly unable to meet the growing demand. Some methods for automatically clipping sports event videos have appeared, but they are only suitable for processing live video and depend on live commentary scripts, so their application scenarios are very limited. In addition, the time information about when an event occurred in a live broadcast is not accurate, so the produced video cannot be aligned to a precise time point.
Disclosure of Invention
To address the defects of the prior art, the invention provides a method, a device and a system for processing sports event videos.
According to a first aspect, the present invention provides a method for processing sports event video, which in one embodiment comprises:
acquiring detailed event data of a target event;
acquiring a plurality of target image frames of the target event that contain a scoreboard display area;
determining elapsed match time information from the scoreboard display area in the plurality of target image frames;
obtaining origin-destination information according to the elapsed match time information and the detailed event data;
and obtaining a target clip video segment according to the origin-destination information.
According to a second aspect, the present invention provides an apparatus for processing sports event video, which in one embodiment comprises:
an event data acquisition module, configured to acquire detailed event data of a target event;
a target image frame acquisition module, configured to acquire a plurality of target image frames of the target event that contain a scoreboard display area;
an elapsed match time determination module, configured to determine elapsed match time information from the scoreboard display area in the plurality of target image frames;
an origin-destination information determination module, configured to obtain origin-destination information according to the elapsed match time information and the detailed event data;
and a clip acquisition module, configured to obtain a target clip video segment according to the origin-destination information.
According to a third aspect, the present invention provides a system for processing sports event video, which in one embodiment comprises:
an identification module, configured to respond to an elapsed match time identification instruction from the control module by acquiring a plurality of target image frames of the target event that contain a scoreboard display area, determining elapsed match time information from the scoreboard display area in the plurality of target image frames, and sending the elapsed match time information to the control module;
a control module, configured to acquire the detailed event data, obtain origin-destination information according to the elapsed match time information sent by the identification module and the detailed event data, and send the origin-destination information to the video processing module;
and a video processing module, configured to obtain a target clip video segment according to the origin-destination information after receiving the origin-destination information sent by the control module.
In the embodiments of the method, device and system for processing sports event video, when a sports event video needs to be processed, the detailed event data of a target event and a plurality of target image frames containing a scoreboard display area are automatically acquired according to a preset instruction, the elapsed match time information is determined directly from the plurality of target image frames containing the scoreboard display area, origin-destination information is then obtained from the elapsed match time information and the detailed event data, and the video processing operation is performed according to the origin-destination information. This approach saves labor cost, can be applied to various video playing modes such as live broadcast and video on demand, and has high universality. By using the detailed event data, the video can be clipped at accurate time points, automatic processing of live and/or on-demand sports event video is realized, and a large number of accurately clipped video segments can be generated in a short time.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating a method for video processing of a sporting event according to one embodiment;
FIG. 2 is a flowchart illustrating the steps of determining origin-destination information in one embodiment;
FIG. 3 is a flowchart illustrating the step of obtaining a target clip video segment in one embodiment;
FIG. 4 is a block diagram of a sports event video processing apparatus according to one embodiment;
FIG. 5 is a block diagram illustrating the detailed structure of the origin-destination information determination module in one embodiment;
FIG. 6 is a block diagram illustrating the detailed structure of the clip acquisition module in one embodiment;
FIG. 7 is a diagram of the internal structure of a computer device in one embodiment;
FIG. 8 is a block diagram of a video processing system for a sporting event in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in FIG. 1, there is provided a method of processing sports event video comprising the following steps:
Step 102: acquire detailed event data of the target event.
The sports event video processing method provided by the embodiments of the application can be applied to a system capable of processing video; the system can be implemented by a single server, a server cluster consisting of multiple servers, or other network-side equipment. The system can automatically clip the event video of a sports event, thereby obtaining highlight segments from the course of the event.
When the system begins processing the game video of one or more particular sports matches, it obtains the detailed event data of that match (i.e., the target event). It can be understood that each match is pre-assigned a unique identifier to distinguish different matches; the format of the unique identifier may be configured according to the specific application scenario, which is not limited in this embodiment.
The detailed event data refers to information about game events occurring during the match. For example, for a football match, the game events may include a red card, a yellow card, a penalty kick, a free kick, an offside, a foul and/or a goal; it can be understood that the types and number of game events may differ between sports. The information related to a game event may include the time the event occurred, the people involved in the event (e.g., the player who scored, the player who assisted, etc.), and the like.
Further, the detailed event data may come from an event data provider. For example, the system may obtain the detailed event data of the target event from an event data provider via a data interface. In one embodiment, the system may obtain the detailed event data of the target event through a preset interface provided by an event data provider such as "OPTA Sports", as sketched below.
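By way of illustration only (not part of the claims), the following Python sketch shows how such a provider feed might be polled over HTTP. The URL, authentication scheme and response fields are assumptions for the example, not the provider's actual API.

```python
# Hypothetical pull of detailed event data for one match from a provider feed.
import requests

def fetch_event_data(match_id: str, api_key: str) -> list:
    """Return a list of match events (type, minute, player, ...) for match_id."""
    resp = requests.get(
        f"https://provider.example.com/v1/matches/{match_id}/events",  # assumed endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    # Assumed response shape:
    # {"events": [{"type": "goal", "minute": 23, "second": 41, "player": "..."}, ...]}
    return resp.json()["events"]
```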
Step 104: acquire a plurality of target image frames of the target event that contain the scoreboard display area.
Specifically, the system cuts a number of image frames out of the event video of the target event, then recognizes the content of these image frames to determine whether each of them contains the scoreboard display area, and takes the frames recognized as containing the scoreboard display area as the target image frames.
In one embodiment, frames may be captured from the event video of the target event within a preset time range, for example within a period around the kickoff of the first half and of the second half, so as to obtain several image frames, for example 4 or 6, around each kickoff. In another embodiment, frames may be captured from the event video of the target event at a preset frequency, for example one image frame every fixed interval, and the frames containing the scoreboard display area are then screened out of the captured frames as the target image frames; a sampling sketch is given below.
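As a non-limiting illustration, the sketch below samples frames at a fixed interval with OpenCV and keeps those flagged by a caller-supplied scoreboard detector; `has_scoreboard` is an assumed callback (for example a template match or OCR hit on the scoreboard region), not something defined by the patent.

```python
import cv2

def sample_scoreboard_frames(video_path: str, has_scoreboard, every_n_seconds: float = 10.0):
    """Sample one frame every `every_n_seconds` seconds and keep those for which
    the detector has_scoreboard(frame) -> bool reports a scoreboard display area."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(1, int(fps * every_n_seconds))
    kept, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0 and has_scoreboard(frame):
            kept.append((index / fps, frame))  # (time offset within the video, image)
        index += 1
    cap.release()
    return kept
```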
Step 106: determine the elapsed match time information from the scoreboard display area in the plurality of target image frames.
The elapsed match time information is time information about how long the match has been running, as displayed in the scoreboard display area of a target image frame (i.e., the game clock); a recognition sketch is given below.
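Purely as an illustration of reading the game clock, the following sketch assumes the scoreboard region has already been cropped and uses pytesseract OCR; a production system would locate the clock region automatically and cross-check the result over several frames.

```python
import re
from typing import Optional

import pytesseract

def read_elapsed_seconds(scoreboard_crop) -> Optional[int]:
    """OCR the cropped scoreboard image and return the game clock in seconds,
    or None if no mm:ss pattern is found."""
    text = pytesseract.image_to_string(scoreboard_crop, config="--psm 7")
    match = re.search(r"(\d{1,3}):(\d{2})", text)  # e.g. "23:41"
    if match is None:
        return None
    minutes, seconds = int(match.group(1)), int(match.group(2))
    return minutes * 60 + seconds
```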
Step 108: obtain the origin-destination information according to the elapsed match time information and the detailed event data.
The origin-destination information is video-time-related data, namely the start and end points of a clip, that the system can use for its video processing operations.
Step 110: obtain the target clip video segment according to the origin-destination information.
The target clip video segment is the video segment obtained by the system by performing video processing operations on the event video of the target event.
Specifically, the system performs the video processing operation according to the origin-destination information obtained in the previous step and obtains the target clip video segment.
In the embodiments of the sports event video processing method, apparatus and system provided in the present application, when a sports event video needs to be processed, the system automatically obtains the detailed event data of the target event and a plurality of target image frames containing the scoreboard display area according to a preset instruction, determines the elapsed match time information directly from those target image frames, then obtains the origin-destination information according to the elapsed match time information and the detailed event data, and performs the video processing operation according to the origin-destination information. This sports event video processing method saves labor cost, can be applied to various video playing modes such as live broadcast and video on demand, and has high universality. By using the detailed event data, the video can be clipped at accurate time points, automatic processing of live and/or on-demand sports event video is realized, and a large number of accurately clipped video segments can be produced in a short time.
In one embodiment, the method further comprises:
determining the type of a video to be processed, wherein the type of the video to be processed is a live video or an on-demand video;
when the type of the video to be processed is a live video, sampling the video stream of the target event at a preset frame extraction frequency to obtain a plurality of live image frames and the video segment identification information and display timestamp information corresponding to each live image frame.
The type of the video to be processed is the type of sports event video that the system processes with the sports event video processing method; the video to be processed may be a live video or an on-demand video. The video segment identification information and the display timestamp information may be used to indicate the relationship between a live image frame and the playing time of the live event, and the display timestamp information may specifically be a PTS (Presentation Time Stamp).
Specifically, the system determines the to-be-processed video type corresponding to the target event. When the type is determined to be live video, the system samples the video stream of the target event at the preset frame extraction frequency, thereby obtaining a plurality of live image frames of the live video stream together with the video segment identification information and display timestamp information corresponding to each live image frame; the live image frames are image frames captured from the live video stream of the target event. For example, in one embodiment, the system may sample the video stream of the target event at a fixed frequency in units of seconds, thereby obtaining image frames of the live video stream and the video segment identification information and PTS display timestamp information corresponding to each frame; each sample can be recorded as sketched below.
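As a small illustration of the data kept per sampled live frame, the record below ties an image to the segment it was cut from and to its presentation timestamp; the field names are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class LiveFrameSample:
    segment_id: str     # identifier of the video segment the frame was extracted from
    pts_seconds: float  # presentation timestamp (PTS) of the frame within the stream
    image_path: str     # where the decoded frame was stored for later recognition
```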
In one embodiment, the acquiring of the plurality of target image frames of the target event that contain the scoreboard display area includes:
selecting, from the plurality of live image frames, a plurality of target image frames of the target event that contain the scoreboard display area.
Specifically, after acquiring the plurality of live image frames of the live video stream, the system may select, from these live image frames, frames whose scoreboard-displayed time falls within a preset range as the target image frames of the target event containing the scoreboard display area. In one embodiment, the system may select 4 to 6 live image frames from around the kickoff of the first half and of the second half as the target image frames of the target event containing the scoreboard display area.
In one embodiment, as shown in FIG. 2, obtaining the origin-destination information according to the elapsed match time information and the detailed event data includes the following steps:
Step 202: determine the time information corresponding to the plurality of target image frames in the video stream of the target event.
The time information corresponding to a target image frame in the video stream of the target event may be used to indicate the playback position of that target image frame within the video stream of the target event.
Step 204: obtain the association between any image frame in the video stream of the target event and its corresponding elapsed match time, according to the elapsed match time information determined above and the time information of the plurality of target image frames in the video stream of the target event.
Step 206: obtain the origin-destination information according to this association and the detailed event data.
Specifically, the system acquires the time information corresponding to each target image frame in the video stream of the target event and determines, from this time information and the elapsed match time information obtained in the preceding step, the association between the display timestamp of any image frame in the video stream of the target event and the corresponding elapsed match time. From this association and the detailed event data obtained earlier, the system obtains the origin-destination information.
In one embodiment, the system may read the detailed event data describing the events of the target match and select a candidate event that appears in the event video of the target event; in a football match, for example, the candidate event may be a goal. The system then determines the origin-destination information corresponding to the candidate event according to the candidate event and the association between the display timestamp of any image frame in the video stream of the target event and the corresponding elapsed match time.
In another embodiment, after selecting the candidate event, the system may determine a preset clip duration corresponding to the candidate event, and then determine the origin-destination information corresponding to the candidate event according to the candidate event, its preset duration, and the association between the display timestamp of any image frame in the video stream of the target event and the corresponding elapsed match time. The preset duration corresponding to a candidate event may be set by related personnel, such as a video clip operator.
In one embodiment, when the type of the video to be processed is an on-demand video, the system determines the time information of the plurality of target image frames in the on-demand video stream of the target event, where the time information represents the relative relation between the playing position corresponding to a target image frame and the total duration of the on-demand video of the target event; obtains the association between any image frame in the on-demand video stream of the target event and its corresponding elapsed match time according to this time information and the elapsed match time information obtained earlier; and obtains the origin-destination information according to this association and the detailed event data obtained earlier.
In this embodiment, the association between the display timestamp of any image frame in the video stream of the target event and the corresponding elapsed match time is obtained from the elapsed match time information determined in the preceding steps and the time information of the plurality of target image frames in the video stream, and the origin-destination information used for subsequent video processing is then obtained from this association and the detailed event data. Because the mapping for all image frames is derived from only a small amount of data, namely the elapsed match time recognized in a few target image frames and their time information in the stream, the process is fast and efficient; a sketch of the two derivations follows.
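The sketch below illustrates, under simple assumptions made for the example only, the two derivations just described: fitting the constant offset between stream time and the game clock from a handful of recognised frames, and turning an event's game-clock time plus preset padding into start and end stream timestamps.

```python
def fit_clock_offset(samples):
    """samples: (pts_seconds, elapsed_match_seconds) pairs from recognised frames.
    Assuming the stream plays at real-time speed, PTS and game clock differ by a
    constant offset, so averaging the per-sample differences is sufficient."""
    return sum(pts - clock for pts, clock in samples) / len(samples)

def clip_window(event_clock_seconds, offset, before=20.0, after=15.0):
    """Map an event expressed on the game clock to (start_pts, end_pts) for
    clipping, padded by preset durations before and after the event."""
    event_pts = event_clock_seconds + offset
    return max(0.0, event_pts - before), event_pts + after
```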
In one embodiment, determining the time information corresponding to the plurality of target image frames in the video stream of the target event includes: when the type of the video to be processed is determined to be a live video, determining the time information corresponding to the plurality of target image frames in the video stream of the target event according to the video segment identification information and display timestamp information corresponding to the plurality of live image frames, where this time information includes the video segment identification information and display timestamp information corresponding to each target image frame in the video stream of the target event.
Specifically, when the type of the video to be processed is determined to be a live video, the system determines the time information corresponding to the plurality of target image frames in the video stream of the target event according to the video segment identification information and display timestamp information obtained in the preceding steps for the plurality of live image frames; this time information includes the video segment identification information and display timestamp information corresponding to each target image frame in the video stream of the target event, and the display timestamp information may be PTS display timestamp information.
In one embodiment, when the type of the video to be processed is live video, the origin-destination information includes start video segment identification information, start display timestamp information, end video segment identification information, and end display timestamp information.
The start video segment identification information, the start display timestamp information, the end video segment identification information and the end display timestamp information are all associated with the target clip video segment and are available for further video processing operations by the system; a possible container for these fields is sketched below.
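For illustration only, a simple container mirroring the four fields named above; the field names are assumptions, not terms defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class OriginDestinationInfo:
    start_segment_id: str  # video segment in which the clip starts
    start_pts: float       # display timestamp of the clip start, in seconds
    end_segment_id: str    # video segment in which the clip ends
    end_pts: float         # display timestamp of the clip end, in seconds
```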
In one embodiment, as shown in fig. 3, the above deriving the target clip video segment from origin-destination information comprises:
step 302: acquiring a video segment to be clipped according to the identification information of the starting video segment and the identification information of the ending video segment;
step 304: and clipping the video segment to be clipped according to the initial display timestamp information and the final display timestamp information to obtain the target clipping video segment.
Specifically, the system performs the relevant video processing operations according to the start video segment identification information and the end video segment identification information to obtain the video segment to be clipped, and then clips this segment according to the start display timestamp information and the end display timestamp information to obtain the target clip video segment. For example, in one embodiment, the system may locate the video stream segments in which the target clip video segment lies according to the start and end video segment identification information and download them to obtain the video segment to be clipped; it then performs precise time positioning according to the start and end display timestamp information and clips the video segment to be clipped to obtain the target clip video segment.
In this embodiment, the system obtains the video to be clipped from the live video stream of the target event through the start and end video segment identification information, and clips it according to the start and end display timestamp information to obtain the target clip video segment, achieving accurate cutting and a highly accurate target clip video segment; a cutting sketch is given below.
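As an illustration of the precise cut, the sketch below assumes the located stream segments have already been downloaded and concatenated into `source_path`, and drives ffmpeg from Python; it is one possible implementation, not the patented one.

```python
import subprocess

def cut_clip(source_path: str, start_pts: float, end_pts: float, out_path: str) -> None:
    """Cut [start_pts, end_pts] (seconds) out of source_path into out_path."""
    duration = end_pts - start_pts
    subprocess.run(
        ["ffmpeg", "-y",
         "-ss", f"{start_pts:.3f}",  # seek to the clip start
         "-i", source_path,
         "-t", f"{duration:.3f}",    # keep only the clip duration
         "-c", "copy",               # stream copy; re-encode instead for frame-accurate cuts
         out_path],
        check=True,
    )
```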
In one embodiment, the method further comprises: obtaining match schedule data; and generating video attribute information according to the schedule data.
The schedule data may include the time of the match, basic information about the participating teams and players, and so on; the video attribute information may include a video title, video tags, etc., where a video tag may be event-type information such as "NBA". A small illustration follows.
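A tiny, purely illustrative example of deriving a title and tags from such data; the dictionary keys are assumed for the example.

```python
def build_video_attributes(schedule: dict, event: dict) -> dict:
    """Compose a video title and tags from match schedule data and one event."""
    title = "{home} vs {away} - {minute}' {etype}".format(
        home=schedule["home_team"], away=schedule["away_team"],
        minute=event["minute"], etype=event["type"],
    )
    return {"title": title,
            "tags": [schedule["competition"], event["type"]]}  # e.g. ["Premier League", "goal"]
```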
In one embodiment, the method further comprises: and transcoding the target clip video segment to obtain the target video.
Specifically, the system can transcode the target clip video segment according to different transcoding parameters, thereby obtaining a plurality of target videos with different resolutions; an example transcoding ladder is sketched below.
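For illustration, the sketch below transcodes one clipped segment into several resolutions with ffmpeg; the 1080p/720p/480p ladder and bitrates are only example transcoding parameters, not values prescribed by the patent.

```python
import subprocess

def transcode_ladder(clip_path: str, out_prefix: str) -> None:
    """Produce several renditions of the clip at different resolutions."""
    for height, bitrate in [(1080, "5000k"), (720, "2800k"), (480, "1200k")]:
        subprocess.run(
            ["ffmpeg", "-y", "-i", clip_path,
             "-vf", f"scale=-2:{height}",  # keep aspect ratio, set target height
             "-c:v", "libx264", "-b:v", bitrate,
             "-c:a", "aac",
             f"{out_prefix}_{height}p.mp4"],
            check=True,
        )
```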
In one embodiment, the method is applied to a sports event video processing system, which may be a single server or a server cluster composed of multiple servers. In one embodiment, the sports event video processing system includes a media asset database. Specifically, while performing steps 302 and 304 (i.e., acquiring the video segment to be clipped according to the start and end video segment identification information, and clipping it according to the start and end display timestamp information to obtain the target clip video segment), the system stores the start video segment identification information, the end video segment identification information, the start display timestamp information and the end display timestamp information in the media asset database and generates, from the schedule data, the video attribute information corresponding to the target clip video segment as its video tags. The media asset database interacts with the terminal: the terminal obtains information from the media asset database, so that the target clip video segment is exposed at the front end. When the user operates on the terminal interface, playback of the live video of the target event can be realized through the start video segment identification information, the end video segment identification information, the start display timestamp information and the end display timestamp information. The system locates the video stream segments in which the target clip video segment lies according to the start and end video segment identification information and downloads them to obtain the video segment to be clipped; it then performs precise time positioning according to the start and end display timestamp information and clips the video segment to obtain the target clip video segment. The system can transcode the target clip video segment with different transcoding parameters to obtain multiple target videos with different resolutions and may deliver the target video to the terminal through a content distribution network. The terminal includes, but is not limited to, smart phones, personal computers, notebook computers, tablet computers and other mobile electronic devices. Specifically, after target videos with different resolutions are obtained, the system can send the corresponding target video to the terminal according to the terminal's video request, so that the clipped and transcoded target video is distributed according to the user's needs.
In this embodiment, front-end exposure of the video is achieved while the video segment to be clipped is still being processed, which ensures the timeliness of video content production and publication to the greatest extent and improves the user experience.
It should be understood that although the various steps in the flowcharts of FIGS. 1-3 are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 1-3 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 4, the present application provides a sports event video processing apparatus comprising: an event data acquisition module, a target image frame acquisition module, an elapsed match time determination module, an origin-destination information determination module and a clip acquisition module, wherein:
an event data acquisition module 402, configured to acquire detailed event data of a target event;
a target image frame acquisition module 404, configured to acquire a plurality of target image frames of the target event that contain the scoreboard display area;
an elapsed match time determination module 406, configured to determine the elapsed match time information from the scoreboard display area in the plurality of target image frames;
an origin-destination information determination module 408, configured to obtain the origin-destination information according to the elapsed match time information and the detailed event data;
and a clip acquisition module 410, configured to obtain the target clip video segment according to the origin-destination information.
In one embodiment, the apparatus further comprises:
a first determining module (not shown in the figure) for determining the type of the video to be processed, wherein the type of the video to be processed is a live video or an on-demand video;
the first acquisition module (not shown in the figure) is used for sampling the video stream of the target event at a preset frame extraction frequency when the type of the video to be processed is determined to be a live video, so as to obtain a plurality of live image frames and the video segment identification information and display timestamp information corresponding to each live image frame.
In one embodiment, the target image frame acquisition module 404 further comprises:
the first acquisition unit (not shown in the figure) is used for selecting, from the plurality of live image frames, the plurality of target image frames of the target event that contain the scoreboard display area.
In one embodiment, as shown in fig. 5, the origin-destination information determination module 408 further includes:
a first determining unit 502, configured to determine the time information corresponding to the plurality of target image frames in the video stream of the target event;
a second determining unit 504, configured to obtain, according to the elapsed match time information and the time information corresponding to the plurality of target image frames in the video stream of the target event, the association between any image frame in the video stream of the target event and its corresponding elapsed match time;
a third determining unit 506, configured to obtain the origin-destination information according to this association and the detailed event data.
In one embodiment, the first determining unit 502 further comprises:
a fourth determining unit (not shown in the figure), configured to determine, when the type of the video to be processed is determined to be a live video, the time information corresponding to the plurality of target image frames in the video stream of the target event according to the video segment identification information and display timestamp information corresponding to the plurality of live image frames, where this time information includes the video segment identification information and display timestamp information corresponding to each target image frame in the video stream of the target event.
In one embodiment, as shown in FIG. 6, the clip acquisition module 410 further includes:
a second obtaining unit 602, configured to obtain a video segment to be clipped according to the start video segment identification information and the end video segment identification information;
and a third obtaining unit 604, configured to perform clipping processing on the video segment to be clipped according to the start display timestamp information and the end display timestamp information to obtain a target clip video segment.
For specific limitations of the sports event video processing apparatus, reference may be made to the above limitations of the sports event video processing method, which are not repeated here. The modules in the above apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or be independent of, a processor of the computer device, or be stored in software form in a memory of the computer device, so that the processor can invoke and perform the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operating system and the computer program to run on the non-volatile storage medium. The database of the computer device is used to store sports event video processing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of video processing of a sporting event.
Those skilled in the art will appreciate that the architecture shown in FIG. 7 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring detailed event data of a target event;
acquiring a plurality of target image frames of the target event that contain the scoreboard display area;
determining the elapsed match time information from the scoreboard display area in the plurality of target image frames;
obtaining origin-destination information according to the elapsed match time information and the detailed event data;
and obtaining the target clip video segment according to the origin-destination information.
In one embodiment, the processor, when executing the computer program, further performs the steps of: determining the type of a video to be processed, where the type is a live video or an on-demand video; and, when the type of the video to be processed is a live video, sampling the video stream of the target event at a preset frame extraction frequency to obtain a plurality of live image frames and the video segment identification information and display timestamp information corresponding to each live image frame.
In one embodiment, when the processor executes the computer program to acquire the plurality of target image frames of the target event that contain the scoreboard display area, the following is further implemented: selecting, from the plurality of live image frames, the plurality of target image frames of the target event that contain the scoreboard display area.
In one embodiment, when the processor executes the computer program to obtain the origin-destination information according to the elapsed match time information and the detailed event data, the following steps are further implemented: determining the time information corresponding to the plurality of target image frames in the video stream of the target event; obtaining the association between any image frame in the video stream of the target event and its corresponding elapsed match time according to the elapsed match time information and the time information corresponding to the plurality of target image frames in the video stream of the target event; and obtaining the origin-destination information according to this association and the detailed event data.
In one embodiment, when the processor executes the computer program to determine the time information corresponding to the plurality of target image frames in the video stream of the target event, the following is further implemented: when the type of the video to be processed is determined to be a live video, determining the time information corresponding to the plurality of target image frames according to the video segment identification information and display timestamp information corresponding to the plurality of live image frames, where this time information includes the video segment identification information and display timestamp information corresponding to each target image frame in the video stream of the target event.
In one embodiment, the processor, when executing the computer program, further implements: when the type of the video to be processed is a live video, the origin-destination information comprises initial video segment identification information, initial display time stamp information, end video segment identification information and end display time stamp information.
In one embodiment, when the processor executes the computer program to obtain the target clip video segment according to the origin-destination information, the following is also specifically implemented: acquiring a video segment to be clipped according to the identification information of the starting video segment and the identification information of the ending video segment; and clipping the video segment to be clipped according to the initial display timestamp information and the final display timestamp information to obtain the target clipping video segment.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, performs the steps of:
acquiring detailed event data of a target event;
acquiring a plurality of target image frames of the target event that contain the scoreboard display area;
determining the elapsed match time information from the scoreboard display area in the plurality of target image frames;
obtaining origin-destination information according to the elapsed match time information and the detailed event data;
and obtaining the target clip video segment according to the origin-destination information.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: determining the type of a video to be processed, where the type is a live video or an on-demand video; and, when the type of the video to be processed is a live video, sampling the video stream of the target event at a preset frame extraction frequency to obtain a plurality of live image frames and the video segment identification information and display timestamp information corresponding to each live image frame.
In one embodiment, when the computer program is executed by the processor to acquire the plurality of target image frames of the target event that contain the scoreboard display area, the following is further implemented: selecting, from the plurality of live image frames, the plurality of target image frames of the target event that contain the scoreboard display area.
In one embodiment, when the computer program is executed by the processor to obtain the origin-destination information according to the elapsed match time information and the detailed event data, the following steps are further implemented: determining the time information corresponding to the plurality of target image frames in the video stream of the target event; obtaining the association between any image frame in the video stream of the target event and its corresponding elapsed match time according to the elapsed match time information and the time information corresponding to the plurality of target image frames in the video stream of the target event; and obtaining the origin-destination information according to this association and the detailed event data.
In one embodiment, when the computer program is executed by the processor to determine the time information corresponding to the plurality of target image frames in the video stream of the target event, the following is further implemented: when the type of the video to be processed is determined to be a live video, determining the time information corresponding to the plurality of target image frames according to the video segment identification information and display timestamp information corresponding to the plurality of live image frames, where this time information includes the video segment identification information and display timestamp information corresponding to each target image frame in the video stream of the target event.
In one embodiment, the computer program when executed by the processor further implements: when the type of the video to be processed is a live video, the origin-destination information comprises initial video segment identification information, initial display time stamp information, end video segment identification information and end display time stamp information.
In one embodiment, when the computer program is executed by the processor to obtain the target clip video segment according to the origin-destination information, the method further comprises: acquiring a video segment to be clipped according to the identification information of the starting video segment and the identification information of the ending video segment; and clipping the video segment to be clipped according to the initial display timestamp information and the final display timestamp information to obtain the target clipping video segment.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
Fig. 8 is a schematic structural diagram of a sports event video system according to an embodiment of the present invention. The sporting event video system may be implemented by a single server, a server cluster comprised of multiple servers, or other network-side devices.
In one embodiment, as shown in FIG. 8, there is provided a sports event video processing system comprising an identification module, a control module and a video processing module, wherein:
the identification module 802 is configured to respond to the elapsed match time identification instruction from the control module by acquiring a plurality of target image frames of the target event that contain the scoreboard display area, determining the elapsed match time information from the scoreboard display area in the plurality of target image frames, and sending the elapsed match time information to the control module;
the control module 804 is configured to acquire the detailed event data, obtain the origin-destination information according to the elapsed match time information sent by the identification module and the detailed event data, and send the origin-destination information to the video processing module;
and the video processing module 806 is configured to obtain the target clip video segment according to the origin-destination information after receiving it from the control module.
The detailed event data refers to information about game events occurring during the match. The game events need to be preset; for example, for a football match, the game events may include a red card, a yellow card, a penalty kick, a free kick, an offside, a foul and/or a goal. It can be understood that the types and number of game events may differ between sports. The information related to a game event may include the time the event occurred, the people involved in the event (e.g., the player who scored, the player who assisted, etc.), and the like.
Further, the detailed event data may come from an event data provider. For example, the system may obtain the detailed event data of the target event from an event data provider via a data interface. In one embodiment, the control module 804 may obtain the detailed event data of the target event through a preset interface provided by an event data provider such as "OPTA Sports".
Specifically, the recognition module 802 captures a number of image frames from the event video of the target event, then recognizes the content of these frames to determine whether each captured frame contains the scoreboard display area, and takes the frames recognized as containing the scoreboard display area as the target image frames.
In one embodiment, the recognition module 802 may capture frames from the event video of the target event within a preset time range, for example within a period around the kickoff of the first half and of the second half, obtaining several image frames, for example 4 or 6, around each kickoff. In another embodiment, the recognition module 802 may capture frames from the event video at a preset frequency, for example one image frame every fixed interval, and then screen the frames containing the scoreboard display area out of the captured frames as the target image frames.
The recognition module 802 determines the duration information of the match score display area in a plurality of target image frames after acquiring a plurality of target image frames containing the match score display area of the target event. The playing duration information is time information about the playing duration displayed in a score display area in the target image frame. The identification module 802 sends the opening duration information to the control module 804 after determining the opening duration information of the score display area in the target image frame; the control module 804 receives the start duration information sent by the identification module 802, and sends the start-to-end information obtained according to the event detail event data and the start duration information to the video processing module 806. Wherein, the origin-destination information is video time related data which can be used for the system to perform video processing operation. After receiving the origin-destination information sent by the control module, the video processing module 806 performs a video clip processing operation according to the origin-destination information to obtain a target clip video segment.
In this embodiment, when a sports event video needs to be processed, the control module 804 automatically obtains the event detailed event data of the target event according to a preset instruction; the recognition module 802 obtains a plurality of target image frames containing the score display area, determines the opening duration information directly from these target image frames, and sends the opening duration information to the control module 804; the control module 804 further obtains the origin-destination information according to the opening duration information and the event detailed data, and the video processing module 806 performs the video processing operation according to the origin-destination information. The sports event video processing system performing the sports event video processing operation according to these steps saves labor cost, can handle various playback modes such as live broadcast and video on demand, and therefore has high universality. Because the system uses the event detailed data during processing, it can clip the video at accurate time points, realizes automatic processing of live and/or on-demand sports event video, and can produce a large number of accurate clipped video segments in a short time.
In one embodiment, the system further comprises:
and the content distribution network is configured to, in response to the live event frame-extraction instruction sent by the control module, extract a plurality of live image frames from the video stream of the target event corresponding to the program live address according to a preset frame-extraction frequency, determine the video segment identification information and display timestamp information of these live image frames in the video stream of the target event, and send the video segment identification information and display timestamp information of the plurality of live image frames in the video stream of the target event to the control module.
The control module is also used for determining the type of a video to be processed, and the type of the video to be processed is a live video or an on-demand video; when the video type to be processed is determined to be a live video, sending a live event frame-extracting indication associated with a program live address of a target event to a content distribution network, and sending an opening duration identification indication for identifying the opening duration of the live event to an identification module; and when the type of the video to be processed is determined to be the on-demand video, sending a running time identification instruction which is associated with the on-demand event video address of the target event and is used for identifying the running time of the on-demand event to the identification module.
The type of the video to be processed is the type of sports event video that the system processes according to the present sports event video processing method; the video to be processed may be a live video or an on-demand video. The video segment identification information and the display timestamp information may be used to indicate the relationship between a live image frame and the live event playing time, and the display timestamp information may specifically be a PTS (Presentation Time Stamp).
Specifically, the control module 804 determines the type of the video to be processed; when it determines that the type of the video to be processed is a live video, it sends a live event frame-extraction instruction associated with the program live address of the target event to the content distribution network and sends an opening duration identification instruction for identifying the opening duration of the live event to the identification module 802; when it determines that the type of the video to be processed is an on-demand video, it sends a running time identification instruction, associated with the on-demand event video address of the target event, for identifying the running time of the on-demand event to the identification module 802. After receiving the live event frame-extraction instruction associated with the program live address of the target event sent by the control module 804, the content distribution network extracts a plurality of live image frames from the video stream of the target event corresponding to the program live address according to a preset frame-extraction frequency, determines the video segment identification information and PTS display timestamp information of these live image frames in the video stream of the target event, and sends the video segment identification information and PTS display timestamp information corresponding to the plurality of live image frames in the video stream of the target event to the control module 804.
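Frame extraction from the live stream with the bookkeeping described above could be sketched as follows, using PyAV to decode the stream. Deriving a segment identifier from the presentation time assumes fixed-length segments (for example, 10-second HLS chunks); a real content distribution network would supply its own segment identification information.

```python
# Minimal sketch of extracting live image frames at a preset frequency while
# recording a display timestamp (PTS) and an assumed fixed-length segment id.
import av

def extract_live_frames(stream_url: str, every_s: float = 5.0, segment_len_s: float = 10.0):
    container = av.open(stream_url)
    samples, next_t = [], 0.0
    for frame in container.decode(video=0):
        if frame.pts is None:
            continue
        t = float(frame.pts * frame.time_base)          # presentation time in seconds
        if t >= next_t:
            samples.append({
                "pts": frame.pts,                        # display timestamp information
                "segment_id": int(t // segment_len_s),   # assumed fixed-length segments
                "image": frame.to_ndarray(format="bgr24"),
            })
            next_t += every_s
    container.close()
    return samples
```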
In this embodiment, the system includes a content distribution network, and the content distribution network extracts a plurality of live video frames from a video stream of a target event corresponding to a program live broadcast address according to an instruction sent by the control module, so that the frame extraction efficiency is improved, and the speed of the system execution flow is increased.
In one embodiment, the control module 804 is further configured to select, from the plurality of live image frames sent by the content distribution network, a plurality of target image frames of the target event that contain the score board display area, and to send these target image frames of the target event containing the score board display area to the identification module.
Specifically, after obtaining the plurality of live image frames of the live video stream, the control module 804 selects, according to a preset rule, a plurality of target image frames of the target event that contain the score display area, and sends them to the recognition module 802. For example, in one embodiment, the control module may select 4 or 6 live image frames taken around the kickoff of the first half and the second half of the match as the target image frames of the target event containing the score display area and send them to the recognition module 802.
In one embodiment, when the control module 804 is configured to obtain the origin-destination information according to the competition opening duration information sent by the identification module and the event detail event data, it is specifically configured to:
determining time information corresponding to a plurality of target image frames in a video stream of a target event; obtaining the association relation between any image frame in the video stream of the target event and the corresponding competition opening duration according to the competition opening duration information sent by the identification module and the time information corresponding to the plurality of target image frames in the video stream of the target event; and obtaining the origin-destination information according to the association relationship between any image frame in the video stream of the target event and the corresponding competition duration and the event detailed event data.
The time information corresponding to the target image frame in the video stream of the target event may be used to indicate a corresponding playing time length of the target image frame in the video stream of the target event.
Specifically, the control module 804 obtains the time information corresponding to each target image frame in the video stream of the target event, determines the association relationship between the display timestamp of any image frame in the video stream of the target event and the corresponding competition opening duration according to this time information and the competition opening duration information obtained in the previous step, and obtains the origin-destination information according to this association relationship and the event detailed data obtained in the previous step.
In one embodiment, the control module 804 may read the event detail data containing the detailed event content of the target event and thereby select a candidate event appearing in the event video of the target event; for example, in a football match the candidate event may be a goal or the like. It then determines the origin-destination information corresponding to the candidate event according to the association relationship between the display timestamp of any image frame in the video stream of the target event and the corresponding competition opening duration.
In another embodiment, after selecting the candidate event, the control module 804 may determine a preset duration corresponding to the candidate event, and then determine the origin-destination information corresponding to the candidate event according to the candidate event, its preset duration, and the association relationship between the display timestamp of any image frame in the video stream of the target event and the corresponding competition opening duration. The preset duration corresponding to a candidate event may be determined by related personnel, such as a video clip operator.
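The mapping from the recognized opening duration to origin-destination information can be sketched as below. The sketch assumes a single linear offset between video playback time and the opening duration read from one target image frame, and the pre-roll and post-roll around a candidate event are illustrative stand-ins for the preset duration mentioned above.

```python
# Minimal sketch of deriving origin-destination information for a candidate
# event, assuming video_time = opening_duration + offset for any image frame.
def video_offset(frame_video_time_s: float, frame_opening_duration_s: float) -> float:
    """Offset between video playback time and the opening duration shown on the scoreboard."""
    return frame_video_time_s - frame_opening_duration_s

def origin_destination_for_event(event_minute: int, offset_s: float,
                                 pre_roll_s: float = 20.0, post_roll_s: float = 40.0):
    event_video_time = event_minute * 60 + offset_s      # map match time to video time
    return {"start_s": max(0.0, event_video_time - pre_roll_s),
            "end_s": event_video_time + post_roll_s}

# Usage: a target image frame sampled at 754.0 s of video shows an opening
# duration of 02:34 (154 s), so the offset is 600 s; a goal recorded at the
# 27th minute then maps to video time 2220 s and is clipped from 2200 s to 2260 s.
offset = video_offset(754.0, 154.0)
clip = origin_destination_for_event(27, offset)
```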
In one embodiment, when the type of the video to be processed is an on-demand video, the control module 804 determines the time information of the plurality of target image frames in the on-demand video stream of the target event, where the time information indicates the relationship between the playback time corresponding to a target image frame and the total duration of the on-demand video of the target event; obtains the association relationship between any image frame in the on-demand video stream of the target event and the corresponding competition opening duration according to this time information and the competition opening duration information obtained in the previous step; and obtains the origin-destination information according to this association relationship and the event detailed event data obtained in the previous step.
In this embodiment, the control module 804 obtains the association relationship between the display timestamp of any image frame in the video stream of the target event and the corresponding competition opening duration, according to the time information of the plurality of target image frames in the video stream of the target event and the opening duration information sent by the identification module 802, and then obtains the origin-destination information used for subsequent video processing according to this association relationship and the event detailed event data. Because the association relationship covering every image frame of the video stream (a large amount of data) is derived from only the time information and opening duration information of a few target image frames (a small amount of data), the process is fast and efficient.
In one embodiment, when the type of the video to be processed is live video, the origin-destination information includes start video segment identification information, start display timestamp information, end video segment identification information and end display timestamp information; the video processing module 806, when configured to obtain the target clip video segment according to the origin-destination information, is further configured to obtain a video segment to be clipped according to the start video segment identification information and the end video segment identification information; and clipping the video segment to be clipped according to the initial display timestamp information and the final display timestamp information to obtain the target clipping video segment.
Wherein the start video segment identification information, the start display timestamp information, the end video segment identification information, and the end display timestamp information are all related to the target clip video and can be used for further video processing operations by the system.
Specifically, the video processing module 806 performs the related video processing operations according to the start video segment identification information and the end video segment identification information to obtain the video segment to be clipped, and then clips the video segment to be clipped according to the start display timestamp information and the end display timestamp information to obtain the target clip video segment. In one embodiment, the video processing module 806 may locate the video stream segment where the target clip video segment is located according to the start video segment identification information and the end video segment identification information and download that video stream segment to obtain the video segment to be clipped; it then performs accurate time positioning according to the start display timestamp information and the end display timestamp information and clips the video segment to be clipped to obtain the target clip video segment.
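Once the video segment to be clipped has been downloaded, the timestamp-based cut can be sketched with the ffmpeg command line as below. Output-side seeking with stream copy is used for brevity; it cuts on keyframes, so a frame-accurate pipeline would typically re-encode instead. The parameter choices are illustrative.

```python
# Minimal sketch of clipping a downloaded video segment between a start and an
# end timestamp (in seconds) using ffmpeg via subprocess.
import subprocess

def clip_segment(src_path: str, dst_path: str, start_s: float, end_s: float) -> None:
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", src_path,
            "-ss", f"{start_s:.3f}",   # clip start
            "-to", f"{end_s:.3f}",     # clip end
            "-c", "copy",              # no re-encoding; cut lands on keyframes
            dst_path,
        ],
        check=True,
    )
```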
In this embodiment, the video processing module 806 obtains a video to be clipped from a live video stream of the target event through the start video segment identification information and the end video segment identification information, and clips the video segment to be clipped according to the start display timestamp information and the end display timestamp information to obtain the target clipped video segment. Accurate clipping is realized, and a target clipped video segment with high accuracy is obtained.
In one embodiment, when the control module 804 is configured to determine the time information corresponding to the plurality of target image frames in the video stream of the target event, it is specifically configured to: determine the target time information corresponding to the plurality of target image frames in the video stream of the target event according to the video segment identification information and display timestamp information corresponding to the plurality of live image frames in the video stream of the target event, where this target time information includes the video segment identification information and display timestamp information corresponding to each target image frame in the video stream of the target event.
Specifically, when it is determined that the type of the video to be processed is a live video, the control module 804 determines, according to the video segment identification information and the display timestamp information corresponding to the multiple live video frames obtained in the previous steps, time information corresponding to the multiple target video frames in the video stream of the target event, where the time information includes video segment identification information and display timestamp information corresponding to each target video frame in the video stream of the target event.
In one embodiment, if the control module 804 determines that the type of the video to be processed is a live video, the identifying module 802 is specifically configured to, when being configured to acquire a plurality of target image frames of a target event, the target image frames including a score display area: acquiring a plurality of target image frames containing a score display area of a target event sent by a control module; if the control module 804 determines that the type of the video to be processed is the on-demand video, the identification module 802 is specifically configured to, when being configured to obtain a plurality of target image frames containing a score board display area of the target event: and determining the video-on-demand event video address of the target event sent by the control module, and intercepting video streams corresponding to the video-on-demand event video address to obtain a plurality of target image frames containing score display areas of the target event.
Specifically, if the control module 804 determines that the type of the video to be processed is a live video, the identification module 802 directly obtains a plurality of target image frames containing a score display area of the target event sent by the control module; if the control module 804 determines that the type of the video to be processed is the on-demand video, the identification module 802 first determines the on-demand event video address of the target event sent by the control module, and then performs frame extraction and interception on the video stream corresponding to the on-demand event video address, so as to obtain a plurality of target image frames of the target event including the score display area.
In this embodiment, the recognition module 802 performs different execution processes according to different types of videos to be processed determined by the control module 804, and cooperates with the control module 804 to obtain a plurality of target image frames of the target event, which include a score display area. When the type of the video to be processed is a live video, the identification module 802 directly acquires the video image frame obtained by the control module 804, so that repeated operation is avoided; when the type of the video to be processed is the video on demand, the identification module 802 completes the acquisition of a plurality of target image frames of the target event, wherein the target image frames comprise a score display area, and the control module 804 is used for sending the video on demand event video address of the target event to the identification module 802, so that the system has clear division of labor and high efficiency in the execution process.
In one embodiment, the control module 804 is further configured to: obtaining match course data; and generating video attribute information according to the course data.
The match course data may include the match time, basic information about the participating teams and players, and so on; the video attribute information may include a video title, video tags, and the like, where a video tag may be event type information such as "NBA".
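A minimal sketch of turning course data and one detected game event into video attribute information is shown below; the field names and the title format are illustrative assumptions rather than a defined schema.

```python
# Minimal sketch of generating video attribute information (title and tags)
# from match course data and one game event; field names are assumptions.
def build_video_attributes(course: dict, event: dict) -> dict:
    title = f'{course["home_team"]} vs {course["away_team"]} - {event["minute"]}\' {event["type"]}'
    return {
        "title": title,
        "tags": [course.get("competition", ""), event["type"]],  # e.g. ["Premier League", "goal"]
    }

# Usage
attrs = build_video_attributes(
    {"home_team": "Team A", "away_team": "Team B", "competition": "Premier League"},
    {"minute": 27, "type": "goal"},
)
```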
In one embodiment, the video processing module 806 is further configured to: and transcoding the target clip video segment to obtain the target video.
Specifically, the video processing module 806 may transcode the target clip video according to different transcoding parameters, thereby obtaining a plurality of target videos with different resolutions.
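The multi-resolution transcoding step can be sketched with ffmpeg as below; the rendition ladder (heights and bitrates) and codec choices are illustrative assumptions, standing in for whatever transcoding parameters the system is configured with.

```python
# Minimal sketch of transcoding the target clip video segment into several
# resolutions with ffmpeg; the ladder below is an illustrative assumption.
import subprocess

RENDITIONS = [(1080, "5000k"), (720, "2800k"), (480, "1400k")]

def transcode_renditions(src_path: str, dst_prefix: str) -> list:
    outputs = []
    for height, bitrate in RENDITIONS:
        dst = f"{dst_prefix}_{height}p.mp4"
        subprocess.run(
            [
                "ffmpeg", "-y", "-i", src_path,
                "-vf", f"scale=-2:{height}",   # keep aspect ratio, force even width
                "-c:v", "libx264", "-b:v", bitrate,
                "-c:a", "aac",
                dst,
            ],
            check=True,
        )
        outputs.append(dst)
    return outputs
```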
In one embodiment, the system further comprises a media asset database. The control module 804 stores the start video segment identification information, the end video segment identification information, the start display timestamp information and the end display timestamp information in the media asset database, and generates video attribute information corresponding to the target clip video segment, such as a video tag, according to the course data. The media asset database interacts with the terminal: the terminal acquires information from the media asset database, so that the target clip video segment is exposed on the front end. When a user operates the terminal interface, playback of the live video of the target event can be realized through the start video segment identification information, the end video segment identification information, the start display timestamp information and the end display timestamp information. The video processing module 806 locates the video stream segment where the target clip video segment is located according to the start video segment identification information and the end video segment identification information and downloads it to obtain the video segment to be clipped; it then performs accurate time positioning according to the start display timestamp information and the end display timestamp information and clips the video segment to be clipped to obtain the target clip video segment. The video processing module 806 may also transcode the target clip video according to different transcoding parameters, thereby obtaining multiple target videos with different resolutions. The control module 804 may transmit the target video to the terminal through the content distribution network. The terminal includes, but is not limited to, mobile electronic devices such as smart phones, personal computers, notebook computers and tablet computers. Specifically, the control module 804 may send the corresponding target video to the terminal according to a video sending request from the terminal, so as to distribute the transcoded target video according to the user's requirements.
In this embodiment, the video is exposed on the front end while the video segment to be clipped is still being processed, which ensures the timeliness of video content production and publication to the greatest extent and improves the user experience.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of video processing of a sporting event, the method comprising:
acquiring event detailed event data of a target event;
acquiring a plurality of target image frames containing score board display areas of the target event;
determining the starting time length information of score display areas in the target image frames;
determining time information corresponding to the plurality of target image frames in the video stream of the target event;
obtaining the association relation between any image frame in the video stream of the target event and the corresponding competition opening duration according to the competition opening duration information and the time information corresponding to the plurality of target image frames in the video stream of the target event;
obtaining origin-destination information according to the incidence relation between any image frame in the video stream of the target event and the corresponding competition duration and the event detailed event data;
and obtaining the target clip video segment according to the origin-destination information.
2. The method of claim 1, further comprising:
determining the type of a video to be processed, wherein the type of the video to be processed is a live video or an on-demand video;
when the type of the video to be processed is a live video, intercepting a video stream of the target event according to a preset frame extraction frequency to obtain a plurality of live video frames, and video clip identification information and display timestamp information corresponding to each live video frame;
the obtaining a plurality of target image frames containing score board display areas of the target event comprises:
and selecting from the live broadcast image frames to obtain target image frames of the target event, wherein the target image frames comprise score display areas.
3. The method of claim 2, wherein the determining corresponding temporal information for the plurality of target image frames in a video stream of the target event comprises:
when the type of the video to be processed is determined to be a live video, determining time information corresponding to the target image frames in the video stream of the target event according to the video segment identification information and the display time stamp information corresponding to the live image frames, wherein the time information corresponding to the target image frames in the video stream of the target event comprises the video segment identification information and the display time stamp information corresponding to each target image frame in the video stream of the target event.
4. The method according to claim 2 or 3, wherein when the type of the video to be processed is live video, the origin-destination information comprises start video segment identification information, start display time stamp information, end video segment identification information and end display time stamp information;
the obtaining the target clip video segment according to the origin-destination information comprises:
acquiring a video segment to be clipped according to the identification information of the starting video segment and the identification information of the ending video segment; and clipping the video segment to be clipped according to the starting display timestamp information and the ending display timestamp information to obtain a target clipping video segment.
5. A sports event video processing apparatus, the apparatus comprising:
the event data acquisition module is used for acquiring event detailed event data of the target event;
the target image frame acquisition module is used for acquiring a plurality of target image frames containing score display areas of the target event;
the playing time determining module is used for determining playing time information of a score display area in the plurality of target image frames;
a start-to-end information determining module, configured to determine time information corresponding to the target image frames in a video stream of the target event; obtaining the association relation between any image frame in the video stream of the target event and the corresponding competition starting duration according to the competition starting duration information and the time information corresponding to the plurality of target image frames in the video stream of the target event; obtaining origin-destination information according to the incidence relation between any image frame in the video stream of the target event and the corresponding competition duration and the event detail event data;
and the clip acquiring module is used for acquiring the target clip video clip according to the origin-destination information.
6. A video processing system for sports events is characterized by comprising an identification module, a control module and a video processing module;
the identification module is used for responding to the competition start duration identification instruction of the control module, acquiring a plurality of target image frames of the target competition, wherein the target image frames comprise score display areas, determining competition start duration information of the score display areas in the target image frames, and sending the competition start duration information to the control module;
the control module is used for acquiring event detailed event data and determining corresponding time information of the target image frames in the video stream of the target event; obtaining the association relation between any image frame in the video stream of the target event and the corresponding competition duration according to the competition duration information sent by the identification module and the time information of the plurality of target image frames in the video stream of the target event; obtaining origin-destination information according to the incidence relation between any image frame in the video stream of the target event and the corresponding competition duration and the event detail event data, and sending the origin-destination information to the video processing module;
and the video processing module is used for obtaining a target clip video fragment according to the origin-destination information after receiving the origin-destination information sent by the control module.
7. The system of claim 6, further comprising a content distribution network;
the control module is also used for determining the type of a video to be processed, wherein the type of the video to be processed is a live video or an on-demand video; when the type of the video to be processed is determined to be a live video, sending a live event frame-extracting indication associated with a program live address of the target event to the content distribution network, and sending an opening duration identification indication for identifying the opening duration of the live event to the identification module; when the type of the video to be processed is determined to be the video on demand, sending a running time identification instruction which is associated with the video address of the on-demand event of the target event and is used for identifying the running time of the on-demand event to the identification module;
the content distribution network is used for responding to a live event frame extraction instruction sent by the control module, extracting a plurality of live event image frames from a video stream of a target event corresponding to the program live address according to a preset frame extraction frequency, determining video segment identification information and display timestamp information corresponding to the live event image frames in the video stream of the target event, and sending the video segment identification information and the display timestamp information corresponding to the live event image frames in the video stream of the target event to the control module;
the control module is further configured to select from the plurality of live broadcast image frames sent by the content distribution network, obtain a plurality of target image frames of the target event including a score board display area, and send the plurality of target image frames of the target event including the score board display area to the identification module.
8. The system according to claim 7, wherein when the type of the video to be processed is live video, the origin-destination information comprises start video segment identification information, start display time stamp information, end video segment identification information and end display time stamp information;
when the video processing module is used for obtaining a target clip video segment according to the origin-destination information, the video processing module is specifically used for obtaining a video segment to be clipped according to the starting video segment identification information and the ending video segment identification information; and clipping the video segment to be clipped according to the starting display timestamp information and the ending display timestamp information to obtain the target clipping video segment.
9. The system of claim 8,
when the control module is configured to determine time information corresponding to the plurality of target image frames in the video stream of the target event, the control module is specifically configured to:
determining target time information corresponding to a plurality of target image frames in the video stream of the target event according to video segment identification information and display timestamp information corresponding to the plurality of live broadcast image frames in the video stream of the target event sent by the content distribution network,
the target time information corresponding to the plurality of target image frames in the video stream of the target event comprises video segment identification information and display timestamp information corresponding to each target image frame in the video stream of the target event.
10. The system according to any one of claims 7 to 9,
if the control module determines that the type of the video to be processed is a live video, the identification module is specifically configured to, when the identification module is used for acquiring a plurality of target image frames of a target event, the target image frames including a score display area: acquiring a plurality of target image frames containing score display areas of the target events, which are sent by the control module;
if the control module determines that the type of the video to be processed is the video on demand, the identification module is specifically configured to, when the identification module is used for acquiring a plurality of target image frames of a target event, the target image frames including a score display area: and determining the video-on-demand event video address of the target event sent by the control module, and intercepting a video stream corresponding to the video-on-demand event video address to obtain a plurality of target image frames containing score display areas of the target event.
CN202010493123.9A 2020-06-03 2020-06-03 Method, device and system for processing sports event video Active CN111757148B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010493123.9A CN111757148B (en) 2020-06-03 2020-06-03 Method, device and system for processing sports event video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010493123.9A CN111757148B (en) 2020-06-03 2020-06-03 Method, device and system for processing sports event video

Publications (2)

Publication Number Publication Date
CN111757148A CN111757148A (en) 2020-10-09
CN111757148B true CN111757148B (en) 2022-11-04

Family

ID=72674476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010493123.9A Active CN111757148B (en) 2020-06-03 2020-06-03 Method, device and system for processing sports event video

Country Status (1)

Country Link
CN (1) CN111757148B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220303642A1 (en) * 2021-03-19 2022-09-22 Product Development Associates, Inc. Securing video distribution
CN113507630B (en) * 2021-07-08 2023-06-20 北京百度网讯科技有限公司 Method and device for stripping game video
CN113490049B (en) * 2021-08-10 2023-04-21 深圳市前海动竞体育科技有限公司 Sports event video editing method and system based on artificial intelligence
CN114422851B (en) * 2022-01-24 2023-05-16 腾讯科技(深圳)有限公司 Video editing method, device, electronic equipment and readable medium
CN117132925B (en) * 2023-10-26 2024-02-06 成都索贝数码科技股份有限公司 Intelligent stadium method and device for sports event

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8385723B2 (en) * 2010-06-18 2013-02-26 Microsoft Corporation Recording of sports related television programming
CN102547141B (en) * 2012-02-24 2014-12-24 央视国际网络有限公司 Method and device for screening video data based on sports event video
CN109194971B (en) * 2018-08-27 2021-05-18 咪咕视讯科技有限公司 Method and device for generating multimedia file
CN110012348B (en) * 2019-06-04 2019-09-10 成都索贝数码科技股份有限公司 A kind of automatic collection of choice specimens system and method for race program
CN110213672B (en) * 2019-07-04 2021-06-18 腾讯科技(深圳)有限公司 Video generation method, video playing method, video generation system, video playing device, video storage medium and video equipment

Also Published As

Publication number Publication date
CN111757148A (en) 2020-10-09

Similar Documents

Publication Publication Date Title
CN111757148B (en) Method, device and system for processing sports event video
CN109089154B (en) Video extraction method, device, equipment and medium
CN109089127B (en) Video splicing method, device, equipment and medium
CN108924576A (en) A kind of video labeling method, device, equipment and medium
CN110198456B (en) Live broadcast-based video pushing method and device and computer-readable storage medium
US20090213270A1 (en) Video indexing and fingerprinting for video enhancement
US12015807B2 (en) System and method for providing image-based video service
CN111757147B (en) Method, device and system for event video structuring
US20150195626A1 (en) Augmented media service providing method, apparatus thereof, and system thereof
CN109089128A (en) A kind of method for processing video frequency, device, equipment and medium
US11037603B1 (en) Computing system with DVE template selection and video content item generation feature
CN111464819A (en) Live image detection method, device, equipment and storage medium
CN111093093B (en) Method, device and system for generating special effect video and computer equipment
CN111586432B (en) Method and device for determining air-broadcast live broadcast room, server and storage medium
CN113542909A (en) Video processing method and device, electronic equipment and computer storage medium
CN113301386B (en) Video processing method, device, server and storage medium
CN112565820A (en) Video news splitting method and device
CN115022663A (en) Live stream processing method and device, electronic equipment and medium
US20150288731A1 (en) Content switching method and apparatus
CN113537127A (en) Film matching method, device, equipment and storage medium
CN108322782B (en) Method, device and system for pushing multimedia information
CN110100445B (en) Information processing system, information processing apparatus, and computer readable medium
CN114245214B (en) Object playing method, server, terminal and storage medium
CN116546239A (en) Video processing method, apparatus and computer readable storage medium
CN111753105A (en) Multimedia content processing method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: No.1-1 Suning Avenue, Xuzhuang Software Park, Xuanwu District, Nanjing, Jiangsu Province, 210000

Patentee after: Jiangsu Suning cloud computing Co.,Ltd.

Country or region after: China

Address before: No.1-1 Suning Avenue, Xuzhuang Software Park, Xuanwu District, Nanjing, Jiangsu Province, 210000

Patentee before: Suning Cloud Computing Co.,Ltd.

Country or region before: China

CP03 Change of name, title or address
TR01 Transfer of patent right

Effective date of registration: 20240715

Address after: Room 3104, Building A5, No. 3 Gutan Avenue, Economic Development Zone, Gaochun District, Nanjing City, Jiangsu Province, 210000

Patentee after: Jiangsu Biying Technology Co.,Ltd.

Country or region after: China

Address before: No.1-1 Suning Avenue, Xuzhuang Software Park, Xuanwu District, Nanjing, Jiangsu Province, 210000

Patentee before: Jiangsu Suning cloud computing Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right