CN112633087A - Automatic journaling method and device based on picture analysis for IBC system - Google Patents


Info

Publication number
CN112633087A
Authority
CN
China
Prior art keywords
video
keywords
recognition
event
script
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011430028.0A
Other languages
Chinese (zh)
Inventor
杨永晟
吕辉
吕向峰
薛小勇
王弋珵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Digital Video Beijing Ltd
Original Assignee
China Digital Video Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Digital Video Beijing Ltd filed Critical China Digital Video Beijing Ltd
Priority to CN202011430028.0A
Publication of CN112633087A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/2228 - Video assist systems used in motion picture production, e.g. video cameras connected to viewfinders of motion picture cameras or related video signal processing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/76 - Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Television Signal Processing For Recording (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the invention provide an automatic journaling (script recording) method and device based on picture analysis for an IBC system. In response to a script recording start event, the video and audio material recorded for an event is processed by a pre-trained recognition model to obtain the time positions of target characters and keywords within the material; video and audio segments of a preset duration centered on those time positions are traversed and intelligently compared against the target characters and keywords to obtain valid video pictures that contain both; and the script recording system of the IBC system is driven to perform script recording on the valid video pictures based on the keywords. No manual operation is required in this process, which solves the problem that manual script recording consumes a large amount of human resources.

Description

Automatic journaling method and device based on picture analysis for IBC system
Technical Field
The invention relates to the technical field of television, and in particular to an automatic journaling (script recording) method and device based on picture analysis for an IBC system.
Background
In an IBC (International Broadcasting Center) system, the main task of script recording is to record scene information about the video and audio signals: the position of each picture is recorded by relative time, absolute time or position time code, together with detailed information about every scene shot on site, such as scene switching, shooting technique, character actions, captions and transition scenes. These details are recorded precisely and correspond one to one with the recorded material, so as to provide accurate retrieval data and material for later editing.
The inventors of the present application have found that, in current event practice, script recording is carried out manually: the positions of shots in a sports competition are recorded in real time by hand and matched to the recorded material. The operator therefore needs to be familiar with the rules of the various competition events and requires long-term training and on-site practice, which consumes a large amount of human resources.
Disclosure of Invention
In order to solve the above problem, the present invention provides an automatic journaling method and apparatus based on picture analysis for an IBC system, thereby addressing the large consumption of human resources caused by manual script recording.
In view of this, the present invention discloses an automatic journaling method based on picture analysis for an IBC system, which includes the steps of:
in response to a preset script recording start event, recognizing the video and audio material recorded for the event based on a pre-trained recognition model, to obtain the time positions of target characters and keywords in the video and audio material;
traversing video and audio material segments of a preset duration centered on the time positions, and intelligently comparing the segments against the target characters and keywords, to obtain valid video pictures containing the target characters and keywords; and
driving a script recording module of the IBC system to perform script recording on the valid video pictures based on the keywords.
Optionally, the script recording start event is an input event of a user's script recording start request, or a real-time preview event of the video and audio material.
Optionally, the recognition model is a comprehensive recognition model based on image recognition and auxiliary recognition, where:
the image recognition comprises face recognition, action recognition and/or object recognition;
the auxiliary recognition comprises voice recognition and/or subtitle recognition.
Optionally, the keywords include script recording event keywords, shot-level description keywords and event scene description keywords.
Optionally, the method further comprises the step of:
performing model training using the keywords and labeled video and audio material to obtain the recognition model.
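Purely for illustration, the three steps of the method can be sketched in Python as follows. The recognition, comparison and script recording operations are passed in as callables because the invention does not prescribe concrete implementations for them; every name in this sketch is hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class Hit:
    timecode: float   # time position (seconds) within the recorded material
    person: str       # recognized target character, e.g. an athlete's name
    keyword: str      # matched script recording keyword


def auto_journal(
    material_path: str,
    recognize: Callable[[str], Iterable[Hit]],                      # step 1
    compare: Callable[[str, float, float, str, str], List[float]],  # step 2
    journal: Callable[[str, float, str], None],                     # step 3
    half_window: float = 10.0,  # preset duration: 10 s before and after a hit
) -> None:
    """Wire the three steps together; every callable is a hypothetical stand-in."""
    for hit in recognize(material_path):
        # Traverse a segment of preset duration centred on the time position.
        start = max(0.0, hit.timecode - half_window)
        end = hit.timecode + half_window
        # Valid frames are those containing both the target character and keyword.
        for frame_tc in compare(material_path, start, end, hit.person, hit.keyword):
            # Drive the script recording module to mark the valid frame with the keyword.
            journal(material_path, frame_tc, hit.keyword)
```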
There is also provided a picture analysis based automatic journaling apparatus of an IBC system, the automatic journaling apparatus comprising:
a material recognition module, configured to respond to a preset script recording start event and to recognize the video and audio material recorded for the event based on a pre-trained recognition model, so as to obtain the time positions of target characters and keywords in the video and audio material;
a comparison processing module, configured to traverse video and audio material segments of a preset duration centered on the time positions and to intelligently compare the segments against the target characters and keywords, so as to obtain valid video pictures containing the target characters and keywords; and
a script recording execution module, configured to drive a script recording module of the IBC system to perform script recording on the valid video pictures based on the keywords.
Optionally, the script recording start event is an input event of a user's script recording start request, or a real-time preview event of the video and audio material.
Optionally, the recognition model is a comprehensive recognition model based on image recognition and auxiliary recognition, where:
the image recognition comprises face recognition, action recognition and/or object recognition;
the auxiliary recognition comprises voice recognition and/or subtitle recognition.
Optionally, the keywords include script recording event keywords, shot-level description keywords and event scene description keywords.
Optionally, the apparatus further comprises:
a model training module, configured to perform model training using the keywords and labeled video and audio material to obtain the recognition model.
It can be seen from the above technical solution that the invention provides an automatic journaling (script recording) method and device based on picture analysis for an IBC system. Specifically, the video and audio material recorded for an event is recognized according to a script recording start event to obtain the time positions of target characters and keywords in the material; video and audio segments of a preset duration centered on those time positions are traversed and intelligently compared against the target characters and keywords to obtain valid video pictures containing both; and the script recording system of the IBC system is driven to perform script recording on the valid video pictures based on the keywords. No manual operation is required in this process, which solves the problem that manual script recording consumes a large amount of human resources.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below are only some embodiments of the present invention, and a person skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an automatic journaling method based on picture analysis for an IBC system according to an embodiment of the present application;
Fig. 2 is a diagram of exemplary script recording event keywords in the present application;
Fig. 3 is a schematic diagram of an operation interface in the present application;
Fig. 4 is a flowchart of another embodiment of the automatic journaling method based on picture analysis for an IBC system;
Fig. 5 is a block diagram of an automatic journaling apparatus based on picture analysis for an IBC system according to an embodiment of the present application;
Fig. 6 is a block diagram of another automatic journaling apparatus based on picture analysis for an IBC system according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Fig. 1 is a flowchart of an automatic journaling method based on picture analysis for an IBC system according to an embodiment of the present disclosure.
The IBC system provides an IBCloud media data sharing system, which includes a recording module, a script recording module and a content retrieval module.
The recording module is used to record the video and audio signals of the event to which the script recording corresponds. Specifically, the event signals are recorded per recording task, with recording and encoding carried out simultaneously, to form video and audio materials together with material metadata information files. The script recording module is used to create script recording tasks and to select a script recording task on which to perform script recording. The content retrieval module is used to retrieve event character materials, script recording event pictures and material segments.
referring to fig. 1, the automatic journaling method specifically includes the following steps:
and S1, identifying the video and audio materials according to the script start events.
The script recording start event here may be an input event generated when a user enters a script recording start request, or a real-time preview event generated by previewing the corresponding video and audio material in real time on an IBC terminal. On the basis of this start event, the video and audio material recorded for the event is recognized by a pre-trained recognition model to obtain the time positions of the target characters and keywords in the material. A target character may be a target athlete, a target sports team, or the like.
The keywords include script recording event keywords defined uniformly according to the competition rules, conventions and ceremonies of the sports event, and may also include shot-level description keywords such as panorama, overhead shot and slow motion, and event scene description keywords such as award ceremony, flag raising and gold medal. These pre-enumerated, user-defined keywords correspond to shot pictures that are expected to be retrieved and queried frequently by users later, i.e. valid or meaningful picture content of interest; the script recording needs to mark key frames on such shot pictures and extract valid picture segments according to the key frames for users to preview or use.
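As a purely illustrative sketch rather than part of the invention, the three keyword categories could be organized in a configuration along the following lines; the event-keyword entries are made-up ice hockey examples, while the shot-level and scene entries echo the examples above.

```python
# Hypothetical keyword configuration for one event. The three groups mirror the
# script recording event, shot-level and event-scene keyword categories described
# above; the "event" entries are invented ice hockey examples.
JOURNALING_KEYWORDS = {
    "event": ["goal", "penalty", "face-off"],                   # script recording event keywords
    "shot":  ["panorama", "overhead shot", "slow motion"],      # shot-level description
    "scene": ["award ceremony", "flag raising", "gold medal"],  # event scene description
}


def all_keywords(config: dict) -> set:
    """Flatten the categories into the keyword set used for recognition."""
    return {keyword for group in config.values() for keyword in group}
```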
Exemplary script recording event keywords are shown in Fig. 2, taking ice hockey as an example. In addition to the user-defined script recording event keyword group, the names of the players and coaches taking part in the recorded event can be entered in advance, so that script recording shots can be marked conveniently and unambiguously. Again taking ice hockey as an example, the operation interface for script recording an ice hockey event is shown in Fig. 3.
The recognition model in this solution is a comprehensive recognition model based on image recognition and auxiliary recognition. The image recognition means include face recognition, action recognition and object recognition, and only some of these means may be adopted; the auxiliary recognition means include speech recognition, subtitle recognition, or both.
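A minimal sketch, under broad assumptions, of how results from the image recognition and auxiliary recognition means might be merged into one comprehensive hit list; a hit is represented here simply as a (timecode, person, keyword) tuple, and the individual recognizers are assumed to exist elsewhere and are passed in as callables.

```python
from typing import Callable, Iterable, List, Tuple

# A recognition hit: (timecode in seconds, recognized person, matched keyword).
RecognitionHit = Tuple[float, str, str]
Recognizer = Callable[[str], Iterable[RecognitionHit]]


def comprehensive_recognize(
    material_path: str,
    image_recognizers: List[Recognizer],      # face, action and/or object recognition
    auxiliary_recognizers: List[Recognizer],  # speech and/or subtitle recognition
    tolerance: float = 1.0,                   # seconds; hits this close count as one
) -> List[RecognitionHit]:
    """Run every recognition means and merge hits that agree on the same keyword."""
    raw: List[RecognitionHit] = []
    for recognizer in [*image_recognizers, *auxiliary_recognizers]:
        raw.extend(recognizer(material_path))

    merged: List[RecognitionHit] = []
    for hit in sorted(raw, key=lambda h: h[0]):
        if merged and hit[2] == merged[-1][2] and abs(hit[0] - merged[-1][0]) <= tolerance:
            continue  # auxiliary recognition confirmed an existing image hit
        merged.append(hit)
    return merged
```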
S2: determine valid video pictures based on the target characters and the keywords.
After the time positions of the target characters and keywords in the video and audio material are obtained, segments of a preset duration containing those time positions are extracted; for example, extending 10 s before and 10 s after a time position yields a segment with a total length of 20 seconds. Each segment is then intelligently compared against the target characters and keywords to obtain the valid video pictures in the segment that contain both the target characters and the keywords.
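Assuming a hypothetical per-frame detector that reports whether a frame shows both the target character and the keyword content, the segment traversal described above could be sketched as follows; the frame rate is an assumed example value.

```python
from typing import Callable, List


def valid_frames_in_segment(
    hit_timecode: float,
    frame_is_valid: Callable[[float], bool],  # hypothetical: True if the frame at this
                                              # timecode shows the target character and
                                              # the keyword content
    half_window: float = 10.0,                # 10 s before and after, 20 s in total
    frame_rate: float = 25.0,                 # assumed example frame rate
) -> List[float]:
    """Traverse the segment centred on the hit and collect valid frame timecodes."""
    start = max(0.0, hit_timecode - half_window)
    end = hit_timecode + half_window
    step = 1.0 / frame_rate
    frames: List[float] = []
    timecode = start
    while timecode <= end:
        if frame_is_valid(timecode):
            frames.append(round(timecode, 3))
        timecode += step
    return frames
```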
S3: drive the script recording system to perform script recording on the valid video pictures.
Once the valid video pictures have been obtained, the script recording system of the IBC system is driven to perform script recording on them, that is, the valid frames of the valid video pictures are marked with corresponding marks, thereby realizing automatic script recording of the video and audio material.
In addition, a complete script recording event record contains the recorded material information, relative time and position time codes, script recording event keywords (there may be several), athlete information (there may be several), and so on.
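Such a record might, purely for illustration, be represented as the following data structure; the field names and example values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class JournalingRecord:
    """One complete script recording event record; field names are illustrative."""
    material_id: str                                          # recorded material information
    relative_time: float                                      # seconds from material start
    position_timecode: str                                    # e.g. "00:12:34:05"
    event_keywords: List[str] = field(default_factory=list)   # there may be several
    athletes: List[str] = field(default_factory=list)         # there may be several


# A hypothetical record produced when a valid frame is marked:
example = JournalingRecord(
    material_id="icehockey_final_cam1",
    relative_time=754.2,
    position_timecode="00:12:34:05",
    event_keywords=["goal"],
    athletes=["Player A"],
)
```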
It can be seen from the above technical solution that this embodiment provides an automatic journaling method based on picture analysis for an IBC system. Specifically, the video and audio material recorded for the event is recognized according to a script recording start event to obtain the time positions of target characters and keywords in the material; video and audio segments of a preset duration centered on those time positions are traversed and intelligently compared against the target characters and keywords to obtain valid video pictures containing both; and the script recording system of the IBC system is driven to perform script recording on the valid video pictures based on the keywords. No manual operation is required in this process, which solves the problem that manual script recording consumes a large amount of human resources.
In addition, in a specific implementation of this embodiment, the following step is further included, as shown in Fig. 4.
S01: perform model training using the keywords and the video and audio material.
The above recognition model is obtained through training. The keywords are the same as those described above and are not described again here. The training process is based on manually labeled video and audio material, that is, video and audio material that has already been script recorded manually.
Furthermore, video and audio material that has been script recorded automatically can be used as additional samples to further train the recognition model, yielding a more accurate recognition model.
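Assuming the recognition model exposes a generic fit(X, y)-style training interface, which the patent does not specify, the initial training and the later extension with automatically script-recorded material could look roughly like this:

```python
from typing import Iterable, Tuple

# A labelled sample: (path to a video/audio clip, keyword annotated for that clip).
Sample = Tuple[str, str]


def train_recognition_model(model, labelled: Iterable[Sample]):
    """Initial training on manually script-recorded (labelled) material."""
    clips, keywords = zip(*labelled)
    model.fit(list(clips), list(keywords))   # assumed fit(X, y)-style interface
    return model


def extend_recognition_model(model, auto_journaled: Iterable[Sample]):
    """Further training with automatically script-recorded material as extra samples."""
    clips, keywords = zip(*auto_journaled)
    model.fit(list(clips), list(keywords))
    return model
```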
Example two
Fig. 5 is a block diagram of an automatic journaling apparatus based on picture analysis for an IBC system according to an embodiment of the present disclosure.
The IBC system provides an IBCloud media data sharing system, which includes a recording module, a script recording module and a content retrieval module.
The recording module is used to record the video and audio signals of the event to which the script recording corresponds. Specifically, the event signals are recorded per recording task, with recording and encoding carried out simultaneously, to form video and audio materials together with material metadata information files. The script recording module is used to create script recording tasks and to select a script recording task on which to perform script recording. The content retrieval module is used to retrieve event character materials, script recording event pictures and material segments.
referring to fig. 5, the automatic journaling apparatus specifically includes a material recognition module 10, a comparison processing module 20, and a journaling execution module 30.
The material recognition module is used to recognize the video and audio material according to the script recording start event.
The script recording start event here may be an input event generated when a user enters a script recording start request, or a real-time preview event generated by previewing the corresponding video and audio material in real time on an IBC terminal. On the basis of this start event, the video and audio material recorded for the event is recognized by a pre-trained recognition model to obtain the time positions of the target characters and keywords in the material. A target character may be a target athlete, a target sports team, or the like.
The keywords include script recording event keywords defined uniformly according to the competition rules, conventions and ceremonies of the sports event, and may also include shot-level description keywords such as panorama, overhead shot and slow motion, and event scene description keywords such as award ceremony, flag raising and gold medal. These pre-enumerated, user-defined keywords correspond to shot pictures that are expected to be retrieved and queried frequently by users later, i.e. valid or meaningful picture content of interest; the script recording needs to mark key frames on such shot pictures and extract valid picture segments according to the key frames for users to preview or use.
Exemplary script recording event keywords are shown in Fig. 2, taking ice hockey as an example. In addition to the user-defined script recording event keyword group, the names of the players and coaches taking part in the recorded event can be entered in advance, so that script recording shots can be marked conveniently and unambiguously. Again taking ice hockey as an example, the operation interface for script recording an ice hockey event is shown in Fig. 3.
The recognition model in this solution is a comprehensive recognition model based on image recognition and auxiliary recognition. The image recognition means include face recognition, action recognition and object recognition, and only some of these means may be adopted; the auxiliary recognition means include speech recognition, subtitle recognition, or both.
The comparison processing module is used to determine valid video pictures based on the target characters and the keywords.
After the time positions of the target characters and keywords in the video and audio material are obtained, segments of a preset duration containing those time positions are extracted; for example, extending 10 s before and 10 s after a time position yields a segment with a total length of 20 seconds. Each segment is then intelligently compared against the target characters and keywords to obtain the valid video pictures in the segment that contain both the target characters and the keywords.
The script recording execution module is used to drive the script recording system to perform script recording on the valid video pictures.
Once the valid video pictures have been obtained, the script recording system of the IBC system is driven to perform script recording on them, that is, the valid frames of the valid video pictures are marked with corresponding marks, thereby realizing automatic script recording of the video and audio material.
In addition, a complete script recording event record contains the recorded material information, relative time and position time codes, script recording event keywords (there may be several), athlete information (there may be several), and so on.
As can be seen from the above technical solution, this embodiment provides an automatic journaling apparatus based on picture analysis for an IBC system. The apparatus recognizes the video and audio material recorded for the event according to a script recording start event, so as to obtain the time positions of target characters and keywords in the material; traverses video and audio segments of a preset duration centered on those time positions and intelligently compares them against the target characters and keywords to obtain valid video pictures containing both; and drives the script recording system of the IBC system to perform script recording on the valid video pictures based on the keywords. No manual operation is required in this process, which solves the problem that manual script recording consumes a large amount of human resources.
In addition, in a specific implementation of this embodiment, the apparatus further includes a model training module 40, as shown in Fig. 6.
The model training module is used to perform model training using the keywords and the video and audio material.
The above recognition model is obtained through training. The keywords are the same as those described above and are not described again here. The training process is based on manually labeled video and audio material, that is, video and audio material that has already been script recorded manually.
Furthermore, video and audio material that has been script recorded automatically can be used as additional samples to further train the recognition model, yielding a more accurate recognition model.
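For illustration only, the four modules of this embodiment could be composed along the following lines; the class, attribute and method names are hypothetical and merely mirror the structure described above.

```python
class AutoJournalingApparatus:
    """Composes the four modules of this embodiment; all names are illustrative."""

    def __init__(self, material_recognition, comparison_processing,
                 journaling_execution, model_training):
        self.material_recognition = material_recognition    # module 10
        self.comparison_processing = comparison_processing  # module 20
        self.journaling_execution = journaling_execution    # module 30
        self.model_training = model_training                # module 40

    def on_journaling_start_event(self, material_path: str) -> None:
        """React to a script recording start event by running the three-stage flow."""
        for hit in self.material_recognition.recognize(material_path):
            frames = self.comparison_processing.compare(material_path, hit)
            self.journaling_execution.journal(material_path, frames, hit)
```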
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or terminal. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or terminal that comprises the element.
The technical solutions provided by the present invention are described in detail above, and the principle and the implementation of the present invention are explained in this document by applying specific examples, and the descriptions of the above examples are only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. An automatic journaling method based on picture analysis for an IBC system, characterized in that said automatic journaling method comprises the steps of:
responding to a preset script starting event, and identifying the video and audio material recorded by the event based on a pre-trained identification model to obtain the time positions of the target character and the keywords in the video and audio material;
traversing video and audio material segments with preset duration and taking the time position as a core, and intelligently comparing the video and audio material segments based on the target character and the keyword to obtain an effective video picture comprising the target character and the keyword;
and a script recording module for driving the IBC system carries out script recording processing on the effective video pictures based on the keywords.
2. The automatic journaling method according to claim 1, wherein said journaling start event is an input event of a user's journaling start request or a real-time preview event of said video and audio material.
3. The automatic journaling method of claim 1, wherein the recognition model is a comprehensive recognition model based on image recognition and auxiliary recognition, wherein:
the image recognition comprises face recognition, action recognition and/or object recognition;
the auxiliary recognition comprises voice recognition and/or subtitle recognition.
4. The automatic journaling method according to claim 1, wherein said keywords include journaling event keywords, shot-level description keywords and event scene description keywords.
5. The automatic journaling method of any one of claims 1 to 4, further comprising the step of:
performing model training using the keywords and labeled video and audio material to obtain the recognition model.
6. An automatic picture-analysis-based journaling apparatus of an IBC system, the automatic journaling apparatus comprising:
the material identification module is configured to respond to a preset script starting event and identify and process video and audio materials recorded by the event based on a pre-trained identification model to obtain the time positions of target characters and keywords in the video and audio materials;
the comparison processing module is configured to traverse video and audio material segments with preset duration and taking the time position as a core, and intelligently compare the video and audio material segments based on the target character and the keywords to obtain an effective video picture comprising the target character and the keywords;
a script execution module configured to drive a script module of the IBC system to script the active video frames based on the keywords.
7. The automatic journaling apparatus of claim 6, wherein said journaling start event is an input event of a user's journaling start request or a real-time preview event of said video and audio material.
8. The automatic journaling apparatus of claim 6, wherein said recognition model is a comprehensive recognition model based on image recognition and auxiliary recognition, wherein:
the image recognition comprises face recognition, action recognition and/or object recognition;
the auxiliary recognition comprises voice recognition and/or subtitle recognition.
9. The automatic journaling apparatus according to claim 6, wherein said keywords comprise journaling event keywords, shot-level description keywords, and event scene description keywords.
10. The automatic journaling apparatus according to any one of claims 6 to 9, further comprising:
a model training module configured to perform model training using the keywords and labeled video and audio material to obtain the recognition model.
CN202011430028.0A 2020-12-09 2020-12-09 Automatic journaling method and device based on picture analysis for IBC system Pending CN112633087A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011430028.0A CN112633087A (en) 2020-12-09 2020-12-09 Automatic journaling method and device based on picture analysis for IBC system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011430028.0A CN112633087A (en) 2020-12-09 2020-12-09 Automatic journaling method and device based on picture analysis for IBC system

Publications (1)

Publication Number Publication Date
CN112633087A (en) 2021-04-09

Family

ID=75309474

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011430028.0A Pending CN112633087A (en) 2020-12-09 2020-12-09 Automatic journaling method and device based on picture analysis for IBC system

Country Status (1)

Country Link
CN (1) CN112633087A (en)

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010047266A1 (en) * 1998-01-16 2001-11-29 Peter Fasciano Apparatus and method using speech recognition and scripts to capture author and playback synchronized audio and video
CN101296322A (en) * 2007-04-27 2008-10-29 新奥特硅谷视频技术有限责任公司 Sports event logging system
CN101472082A (en) * 2007-12-25 2009-07-01 新奥特(北京)视频技术有限公司 Log keeping system and method
CN102196231A (en) * 2010-03-12 2011-09-21 新奥特(北京)视频技术有限公司 Method and system for realizing real-time publication of competition scenes
CN102196189A (en) * 2010-03-12 2011-09-21 新奥特(北京)视频技术有限公司 Method and system for retrieving highlights from large-scale competition international broadcasting center (IBC) system
CN102196159A (en) * 2010-03-12 2011-09-21 新奥特(北京)视频技术有限公司 Competition material sharing method and system for international broadcasting center (IBC) system
CN102572293A (en) * 2010-12-16 2012-07-11 新奥特(北京)视频技术有限公司 Field recording-based retrieval system
CN102572294A (en) * 2010-12-16 2012-07-11 新奥特(北京)视频技术有限公司 Field recoding system with ranking function
KR20130008285A (en) * 2011-07-12 2013-01-22 (주)마인스코퍼레이션 Method and apparatus for providing moving pictures by producing in real time
CN102393854A (en) * 2011-09-09 2012-03-28 杭州海康威视数字技术股份有限公司 Method and device obtaining audio/video data
US20140327779A1 (en) * 2013-05-01 2014-11-06 Nokia Corporation Method and apparatus for providing crowdsourced video
CN106851407A (en) * 2017-01-24 2017-06-13 维沃移动通信有限公司 A kind of control method and terminal of video playback progress
CN107888843A (en) * 2017-10-13 2018-04-06 深圳市迅雷网络技术有限公司 Sound mixing method, device, storage medium and the terminal device of user's original content
US20190191205A1 (en) * 2017-12-19 2019-06-20 At&T Intellectual Property I, L.P. Video system with second screen interaction
KR20190098775A (en) * 2018-01-12 2019-08-23 상명대학교산학협력단 Artificial intelligence deep-learning based video object recognition system and method
CN109587552A (en) * 2018-11-26 2019-04-05 Oppo广东移动通信有限公司 Video personage sound effect treatment method, device, mobile terminal and storage medium
CN110012348A (en) * 2019-06-04 2019-07-12 成都索贝数码科技股份有限公司 A kind of automatic collection of choice specimens system and method for race program
CN110188241A (en) * 2019-06-04 2019-08-30 成都索贝数码科技股份有限公司 A kind of race intelligence manufacturing system and production method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GURVINDER SINGH et al.: "Video frame and region duplication forgery detection based on correlation coefficient and coefficient of variation", Multimedia Tools and Applications (Springer Link), vol. 78, 26 September 2018 (2018-09-26), pages 11527-11562, XP036779905, DOI: 10.1007/s11042-018-6585-1 *
DONG Lu: "Real-time script recording in the IBC event-sharing system of the 10th National Games" (十运会IBC赛事共享系统的实时场记), Radio and Television Technology (广播与电视技术), no. 04, 30 April 2006 (2006-04-30), pages 58-61 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117132925A (en) * 2023-10-26 2023-11-28 成都索贝数码科技股份有限公司 Intelligent stadium method and device for sports event
CN117132925B (en) * 2023-10-26 2024-02-06 成都索贝数码科技股份有限公司 Intelligent stadium method and device for sports event

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination