CN113329192A - Intelligent movie subtitle making method and system - Google Patents

Info

Publication number
CN113329192A
Authority
CN
China
Prior art keywords
data
client
server
intelligent
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110728604.8A
Other languages
Chinese (zh)
Inventor
王庆山 (Wang Qingshan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Hot Hand Technology Co ltd
Original Assignee
Beijing Hot Hand Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Hot Hand Technology Co ltd filed Critical Beijing Hot Hand Technology Co ltd
Priority to CN202110728604.8A
Publication of CN113329192A
Legal status: Pending (current)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/278: Subtitling
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/26: Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides an intelligent film and television subtitle production method and system. The method comprises the following steps: (S1) data collection, in which the network server srt_server connects to the ts database of the production order system and collects file state data of the auto_ts intelligent production system; (S2) data storage, in which the network client ts_client connects to the network server Ex_ts, receives the real-time data sent by Ex_ts and stores it locally; (S3) data processing, in which the Power_ts intelligent production management control terminal receives the system data change notification and the system administrator carries out the next step of order scheduling. The invention greatly shortens the final subtitle production cycle in the film and television industry and brings current artificial intelligence solutions into the field of film and television production. It can analyze historical data from a conventional post-production system and display the data as charts or tables on large screens, web pages, mobile phones and the like for customer presentation, analysis and sharing, leaving rich room for further data mining.

Description

Intelligent movie subtitle making method and system
Technical Field
The invention relates to the technical field of film and television subtitles, and in particular to an intelligent film and television subtitle production method and system.
Background
Speech recognition systems are now applied ever more widely in daily life, transport, production and other areas. They greatly facilitate the recording, transmission and sharing of information, deliver considerable economic benefit and improve people's convenience and quality of life. With the accumulation of speech data, subtitle production based on speech recognition has steadily overtaken traditional manual subtitle production in both speed and overall accuracy. A speech recognition system serves two functions here: first, speech transcription; second, synchronization of the speech with the text. An intelligent video subtitle production system accordingly provides basic speech transcription, proofreading and correction of the recognized text, synchronization of the speech and text time axes, and conversion of subtitle data formats, and finally outputs editing project files, subtitle data format files, standard videos with synthesized subtitles, and the like.
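As a concrete illustration of the second function, the sketch below (which is not part of the patent) assumes a recognizer that yields (start, end, text) segments in seconds and serializes them in the common SRT subtitle layout; the function names and sample data are invented for this example.

    def srt_timestamp(seconds):
        # SRT timestamps use the form HH:MM:SS,mmm
        total_ms = int(round(seconds * 1000))
        h, rest = divmod(total_ms, 3600 * 1000)
        m, rest = divmod(rest, 60 * 1000)
        s, ms = divmod(rest, 1000)
        return "%02d:%02d:%02d,%03d" % (h, m, s, ms)

    def write_srt(segments, path):
        # segments: iterable of (start_sec, end_sec, text) produced by a speech recognizer
        with open(path, "w", encoding="utf-8") as f:
            for index, (start, end, text) in enumerate(segments, start=1):
                f.write("%d\n" % index)
                f.write("%s --> %s\n" % (srt_timestamp(start), srt_timestamp(end)))
                f.write(text.strip() + "\n\n")

    # Example: two recognized segments become a two-cue subtitle file.
    write_srt([(0.0, 2.4, "Welcome to the demo."), (2.4, 5.1, "Subtitles follow the audio.")], "demo.srt")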
Intelligent subtitle production as described here combines online and offline work and multi-party participation around the machine processing system and sends synchronized progress messages in real time, so that intelligently processed data can be corrected promptly and fed back to the artificial intelligence core module, which is thereby continuously improved. Manual participation not only keeps optimizing and refining the data; more importantly, it ensures that every link in the workflow can report its status smoothly, that states are genuinely updated in time and that potential problems are resolved as soon as they appear. By contrast, most comparable intelligent subtitle offerings provide their service unattended: data files supplied by customers inevitably contain careless omissions, faults caused by human negligence cannot be ruled out, and the system cannot be repaired in time when it goes down unexpectedly, so in the end the subtitle production service cannot be delivered and subtitle accuracy cannot truly be guaranteed.
Disclosure of Invention
The invention provides an intelligent film and television subtitle production method and system that address the technical problems described in the background art.
The technical solution adopted by the invention to solve these technical problems is as follows:
An intelligent film and television subtitle production method comprises the following steps: (S1) data collection, in which the network server srt_server connects to the ts database of the production order system and collects file state data of the auto_ts intelligent production system; (S2) data storage, in which the network client ts_client connects to the network server Ex_ts, receives the real-time data sent by Ex_ts and stores it locally; (S3) data processing, in which the Power_ts intelligent production management control terminal receives the system data change notification and the system administrator carries out the next step of order scheduling, the data processing covering speech recognition, speech-text synchronization, text proofreading and format conversion.
Preferably, the intelligent speech subtitle production system comprises a network server srt_server connected to the speech recognition system; a network client ts_client connected to the client's local system; a ts_controller display processing module connected to the offline management system; a ts_staff extension module connected to the offline production system; and a ts_Viewer display processing module connected to the offline supervisory display system. The network server srt_server and the network client ts_client are both connected to the auto_ts intelligent production system, and the ts_controller display processing module is connected to the network client ts_client, the srt_server module and the ts_controller module respectively.
Preferably, the network server srt_server collects the instantaneous state of the platform production system together with the order data and passes the processing results of the orders to the data receiving module to complete deep intelligent classification and data feature extraction, while the order classification and identification extension module provides the application program with fine-grained data classification and extraction of packet files of preliminary data information.
Preferably, the network client ts_client serves as the application end through which a subtitle user submits data and receives the subtitle result, and it submits real-time order data to the intelligent subtitle production platform auto_ts.
Preferably, the intelligent speech subtitle production system further comprises a ts_server module connected to the auto_ts intelligent production system, which receives and stores key data, receives data attribute information from auto_ts, records and stores the various processing information of the intelligent processing states, and writes the data to a hard disk for use by the auto_ts and ts_Viewer application programs.
Preferably, the network client ts_client can be used as a web page, an applet, an APP or a PC application.
Preferably, the step of connecting the network server srt_server and the network client ts_client comprises: (a) the network client ts_client application auto_ts logs in and attempts to connect to the server's subtitle production service host; (b) the network server srt_server application starts, connects to the auto_ts system database, and the server creates a listening thread that waits for client network connections; (c) once the network client ts_client has connected, the network server srt_server creates a thread to wait for order data, the network client ts_client sends the audio/video file or additional reference files, and the data in the current buffer is sent to the network client ts_client; (d) the network client ts_client stores the received data in the adata directory under names such as adata/profiles/1 and adata/profiles/year-month-day/hour-minute; the system reads attributes of the file such as size and duration, generates order data and writes it into the auto_ts database, returns the display data needed by the client and waits for the user to complete payment; after the network server srt_server receives the payment success message, it reads the auto_ts database to obtain the device ID of the person responsible on the subtitle production platform, sends a new-order processing notification, exits the connection thread and closes the connection.
Preferably, the ts_Viewer display processing module provides user data message interaction, statistics, historical data, task progress preview and manual adjustment of task requirements through communication with the user; it performs primary multi-level recognition processing or single-level or multi-level manual proofreading on order tasks that need checking, puts the correspondingly bound project requirements into a processing state, accesses the ts_server system to import real-time data, and the auto_ts platform sends real-time task progress messages to the associated users in the form of SMS or APP notifications.
Preferably, the intelligent speech subtitle production system further comprises a new-order waiting task module connected to the auto_ts intelligent production system, and the ts_controller display processing module loads a layout of the production data of the intelligent subtitle production platform and loads the real-time data received from auto_ts.
Preferably, the new-order waiting task module operates in the following three ways:
(I) a new order is clicked in the layout, the auto_ts intelligent speech-to-text server module is loaded to complete automatic task matching, all reference data and format requirements in the order together with key information specified by the client, such as the required speed grade, are read, the state is restored to the current task queue, and an order task processing progress image is displayed;
(II) the server completes the intelligent subtitle processing result data; a dialog box pops up in which manual or machine-driven intelligent checking and proofreading of the data files is selected; the data are loaded so that the speech and text time axes are processed synchronously; finally, the current graph is updated with the new task processing progress data;
(III) after processing, checking, synchronization and format conversion, the data are packaged and submitted to the client, and an order completion message is sent to remind the client to download the result data from the platform.
Compared with the prior art, the invention has the beneficial effects that:
the invention greatly accelerates the terminal caption making period of the film and television industry, introduces the latest artificial intelligence solution into the film and television making field, can analyze historical data from the original traditional post-making system, and displays the data in a form of graph or table on a large screen, a webpage end mobile phone end and the like as customer display, analysis and sharing, and imagination provides rich excavation space. The system fully reflects the production cycle of the film and television industry, the working property of the operators is lower than the stability requirement and other factors, so the system is focused on a platform, is focused on multi-terminal sharing, is focused on multi-level deep processing of intelligent analysis, is focused on mutual detection of manual work and machine intelligence, and has a unique using method on the application and application mode of the current artificial intelligence.
The present invention will be explained in detail below with reference to the drawings and specific embodiments.
Drawings
FIG. 1 is a schematic diagram of the system of the present invention;
FIG. 2 is a schematic diagram of recognition task lines and text in the graphical display system of the present invention;
FIG. 3 is a text data processing flow diagram of the present invention;
FIG. 4 is a flowchart illustrating the operation of the web server srt _ server according to the present invention;
FIG. 5 is a flow chart of the ts _ client operation of the network client in accordance with the present invention;
FIG. 6 is a ts _ controller display processing module workflow diagram of the present invention.
Detailed Description
In order to facilitate an understanding of the invention, the invention will now be described more fully hereinafter with reference to the accompanying drawings, in which several embodiments of the invention are shown. The invention may, however, be embodied in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided so that the disclosure of the invention will be more thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terms used in the specification are intended to describe particular embodiments rather than to limit the invention, and the term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Referring to FIGS. 1-6, an intelligent film and television subtitle production method comprises the following steps: (S1) data collection, in which the network server srt_server connects to the ts database of the production order system and collects file state data of the auto_ts intelligent production system; (S2) data storage, in which the network client ts_client connects to the network server Ex_ts, receives the real-time data sent by Ex_ts and stores it locally; (S3) data processing, in which the Power_ts intelligent production management control terminal receives the system data change notification and the system administrator carries out the next step of order scheduling, the data processing covering speech recognition, speech-text synchronization, text proofreading and format conversion.
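For orientation only, the S1-S3 flow can be pictured as three small cooperating routines. The sketch below keeps the module names used above (srt_server, ts_client, auto_ts, Power_ts), but the database schema, cache file and in-process message queue are assumptions made for the example, not details disclosed in this description.

    import json
    import queue
    import sqlite3

    notifications = queue.Queue()  # stands in for the Power_ts change-message channel

    def collect_file_states(db_path="auto_ts.db"):
        # S1: srt_server reads file-state rows from the production-order ts database
        with sqlite3.connect(db_path) as conn:
            rows = conn.execute(
                "SELECT order_id, file_name, state FROM file_state"
            ).fetchall()
        return [dict(zip(("order_id", "file_name", "state"), row)) for row in rows]

    def store_locally(records, cache_path="ts_client_cache.json"):
        # S2: ts_client keeps a local copy of the real-time data it received
        with open(cache_path, "w", encoding="utf-8") as f:
            json.dump(records, f, ensure_ascii=False, indent=2)
        notifications.put({"event": "data_changed", "count": len(records)})

    def handle_change_notification():
        # S3: the management terminal reacts and the administrator schedules orders
        message = notifications.get()
        print("Power_ts: %d records changed, scheduling next order batch" % message["count"])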
The intelligent speech subtitle production system comprises a network server srt_server connected to the speech recognition system; a network client ts_client connected to the client's local system; a ts_controller display processing module connected to the offline management system; a ts_staff extension module connected to the offline production system; and a ts_Viewer display processing module connected to the offline supervisory display system. The network server srt_server and the network client ts_client are both connected to the auto_ts intelligent production system, and the ts_controller display processing module is connected to the network client ts_client, the srt_server module and the ts_controller module respectively.
The network server srt_server is an external interface program that uses a fixed TCP port. It is the data processing center and the interface between the external intelligent speech recognition system and the data receiving module: it acquires the instantaneous state and order data of the platform production system, passes the processing results of the orders to the data receiving module to complete deep intelligent classification and data feature extraction, and provides the application program with fine-grained data classification and extraction of packet files of preliminary data information, while the order classification and identification extension module is responsible for increasing the intelligent processing speed. For convenient calling, the classification extension module can generate a number of key grouping nodes that are classified quickly and accurately for specific customers or specific file sizes, which greatly improves the accuracy of intelligent recognition.
The network client ts_client serves as the application end through which the subtitle user submits data and receives the subtitle result, and it submits real-time order data to the intelligent subtitle production platform auto_ts.
The intelligent speech subtitle production system further comprises a ts_server module connected to the auto_ts intelligent production system, which receives and stores key data, receives data attribute information from auto_ts, records and stores the various processing information of the intelligent processing states, and writes the data to a hard disk for use by the auto_ts and ts_Viewer application programs.
The network client ts_client can be used as a web page, an applet, an APP or a PC application.
The step of connecting the network server srt_server and the network client ts_client comprises: (a) the network client ts_client application auto_ts logs in and attempts to connect to the server's subtitle production service host; (b) the network server srt_server application starts, connects to the auto_ts system database, and the server creates a listening thread that waits for client network connections; (c) once the network client ts_client has connected, the network server srt_server creates a thread to wait for order data, the network client ts_client sends the audio/video file or additional reference files, and the data in the current buffer is sent to the network client ts_client; (d) the network client ts_client stores the received data in the adata directory under names such as adata/profiles/1 and adata/profiles/year-month-day/hour-minute; the system reads attributes of the file such as size and duration, generates order data and writes it into the auto_ts database, returns the display data needed by the client and waits for the user to complete payment; after the network server srt_server receives the payment success message, it reads the auto_ts database to obtain the device ID of the person responsible on the subtitle production platform, sends a new-order processing notification, exits the connection thread and closes the connection.
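The (a)-(d) flow amounts to a threaded TCP exchange. The sketch below is illustrative only: the port number, the framing of the upload, and the omitted payment and notification steps are assumptions, while the thread-per-connection pattern and the adata/profiles/year-month-day directory naming follow the description above.

    import datetime
    import os
    import socket
    import threading

    HOST, PORT = "0.0.0.0", 9500  # fixed TCP port; the actual value is not given here

    def handle_order(conn):
        # (c) a worker thread receives the audio/video file or additional reference file
        payload = b""
        while True:
            chunk = conn.recv(65536)
            if not chunk:
                break
            payload += chunk
        # (d) store the upload under adata/profiles/<year-month-day>/<hour-minute>
        now = datetime.datetime.now()
        folder = os.path.join("adata", "profiles", now.strftime("%Y-%m-%d"))
        os.makedirs(folder, exist_ok=True)
        with open(os.path.join(folder, now.strftime("%H-%M")), "wb") as f:
            f.write(payload)
        # the real system would then write an order record to the auto_ts database,
        # wait for payment and notify the responsible person before closing
        conn.close()

    def srt_server_loop():
        # (b) the server starts and a listening loop waits for client connections
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen()
            while True:
                conn, _addr = srv.accept()  # (a) ts_client has connected
                threading.Thread(target=handle_order, args=(conn,), daemon=True).start()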
The ts_Viewer display processing module provides user data message interaction, statistics, historical data, task progress preview and manual adjustment of task requirements through communication with the user. It performs primary multi-level recognition processing or single-level or multi-level manual proofreading on order tasks that need checking, puts the correspondingly bound project requirements into a processing state, accesses the ts_server system to import real-time data, and the auto_ts platform sends real-time task progress messages to the associated users in the form of SMS or APP notifications. The ts_Viewer display processing module first imports from the auto_ts system the text data that have been confirmed to be correct and usable; the text is handled through the network server interface txt_ts, and only uniform text with uniform encoding can guarantee that the whole intelligent recognition runs smoothly and stably. The data are then transmitted through the encoding and checking port to the ts_txt interface of the large-screen system's network client; the network client ts_client performs data authorization and text confirmation, and the ts_Viewer display processing module restores the state by reading the history files stored by the network client ts_client and by loading and parsing the ts_txt graph, the txt_data graph provided by the auto_server extension module and the like, and provides the restored state to srt_server for deep processing of the key data. The ts_Viewer display processing module can display the processing progress and results of order tasks and associate the feedback data generated in the process with the auto_ts real-time database. On a large screen, ts_Viewer can perform routine operations such as data sorting, order searching, conditional scheduling and cost and bonus statistics, and it offers display schemes adapted to different display terminals, with fast response and rich order task data. When a task processing progress graph is opened, the processing data of every stage of the speech recognition progress can be looked up from the layout graph and a chart or table can be generated directly, and multi-terminal information sharing is achieved through a universal data format. The ts_Viewer display terminal automatically feeds auto_ts platform data back into the real-time ts database, and when a platform technician or an order customer browses an order, a status data analysis graph pops up automatically.
The intelligent speech subtitle production system further comprises a new-order waiting task module connected to the auto_ts intelligent production system, and the ts_controller display processing module loads a layout of the production data of the intelligent subtitle production platform and loads the real-time data received from auto_ts.
The new-order waiting task module operates in the following three ways:
(I) a new order is clicked in the layout, the auto_ts intelligent speech-to-text server module is loaded to complete automatic task matching, all reference data and format requirements in the order together with key information specified by the client, such as the required speed grade, are read, the state is restored to the current task queue, and an order task processing progress image is displayed;
(II) the server completes the intelligent subtitle processing result data; a dialog box pops up in which manual or machine-driven intelligent checking and proofreading of the data files is selected; the data are loaded so that the speech and text time axes are processed synchronously; finally, the current graph is updated with the new task processing progress data;
(III) after processing, checking, synchronization and format conversion, the data are packaged and submitted to the client, and an order completion message is sent to remind the client to download the result data from the platform.
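Taken together, ways (I) to (III) move an order through a fixed sequence of states. The following sketch is a hypothetical simplification: the stage names and the linear transition rule are chosen for illustration and are not prescribed by the description above.

    STAGES = ["new", "matched", "recognized", "proofread", "synchronized", "packaged", "delivered"]

    def advance(order):
        # move an order one stage along the pipeline described in (I)-(III)
        current = STAGES.index(order["stage"])
        if current + 1 < len(STAGES):
            order["stage"] = STAGES[current + 1]
        if order["stage"] == "delivered":
            print("order %s: completion message sent, result data ready for download" % order["id"])
        return order

    order = {"id": "A-001", "stage": "new"}
    while order["stage"] != "delivered":
        advance(order)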
The auto_ts display of the present invention uses two sets of boundary processing. The general access point uses a frame-accurate processing mode, which suits strict programs, but for education, children's learning, emotional drama and the like it is clearly too rigid, so the second processing mode is a relaxed mode that extends the relevant passage appropriately to ensure good visual feedback and responsiveness. The second intelligent recognition buffer of the system is not placed directly in the interactive display but is created in the system's main processing module and then drawn into the ts_controller window by the system's main management group using bit-block transfer, which solves the problem that a graphics system carrying multi-terminal, multi-type tasks could not otherwise be displayed on a large screen. Meanwhile, most of the audio formats supplied by clients are mp3, mov, mpeg and the like, but srt_server needs them converted to wav for further deep recognition; when the pre-processed file is loaded, type conversion is usually done with a function provided by the order-file reading runtime library, which is very inefficient. After repeated iterative upgrades, audio2wav is now highly efficient; the srt_server end uses it for the basic format conversion task of intelligent speech recognition processing, and using audio2wav brings a stable improvement in system performance. The invention can display not only conventional industry files such as wav and mp3 files, but can also bind BMP and JPEG images, ppt, pages, numbers, mp4 and attachment files, and the ts_Viewer display processing module can carry out deep data mining and automatic adaptation on the bound information, and prompts for the recognized order. Moving the mouse over a selected node shows a preview of the result precision, double-clicking opens the final result file in a new window, and a file can also be dragged directly onto the window to open the project file with checkable result precision. Each recognition task line and character in the graphical display system is calculated before intelligent processing to determine whether it is visible under the current fully automatic recognition classification; invisible intelligent recognition is not sent for task correction, which achieves classification and extension of intelligent recognition and so improves processing and platform running speed. For dialects, foreign languages and songs, which are usually recognized by atat_srt as intelligent tasks, the system can process the data in srt_srt_txt with deep language- and type-specific handling; the txt srt-txt module completes the feature conversion and then regresses on the recognition result to complete the special task processing, which avoids the risk of high failure rates from recognizing with multiple models, greatly eases system operation, and reduces the workload of roughly one to one and a half workers. The invention can also directly open or download order processing status data, bind the mobile phone or mailbox of a designated message contact by double-clicking from the layout chart, directly display or view the progress data files on other system platforms, and browse and view historical data and display it on the auto_ts large screen.
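The audio2wav converter itself is not disclosed, so the sketch below uses the widely available ffmpeg command-line tool as a stand-in for the same preprocessing step: client files in mp3, mov, mpeg and similar formats are reduced to a mono 16 kHz WAV track before recognition. The output directory and sample rate are assumptions for the example.

    import subprocess
    from pathlib import Path

    def to_wav(src, dst_dir="wav_cache"):
        # extract a mono 16 kHz WAV track from an mp3/mov/mpeg input for recognition
        Path(dst_dir).mkdir(parents=True, exist_ok=True)
        dst = str(Path(dst_dir) / (Path(src).stem + ".wav"))
        subprocess.run(
            ["ffmpeg", "-y",        # overwrite an existing output file
             "-i", src,             # input file supplied by the client
             "-vn",                 # drop any video stream (e.g. for mov/mpeg input)
             "-ac", "1",            # mix down to mono
             "-ar", "16000",        # 16 kHz sampling, a common front end for speech recognition
             dst],
            check=True,
        )
        return dst

    # Example: wav_path = to_wav("order_123.mov")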
During operation, tasks with a specified date are selected from the menu and the corresponding historical data file version nodes are chosen. Operations such as cancelling or suspending a task can also be performed in the ts_Viewer graph: the speech recognition or proofreading of an order can be cancelled or suspended by right-clicking in the graph, and the corresponding operations are recorded in the database and displayed and updated in real time.
The invention has been described above with reference to the accompanying drawings. It is obvious that the invention is not limited to the embodiments described above; adopting insubstantial modifications of the inventive method concept and technical solution, or applying the inventive concept and solution directly to other applications without modification, both fall within the scope of protection of the invention.

Claims (10)

1. An intelligent film and television subtitle production method and system, characterized by comprising the following steps: (S1) data collection, in which the network server srt_server connects to the ts database of the production order system and collects file state data of the auto_ts intelligent production system; (S2) data storage, in which the network client ts_client connects to the network server Ex_ts, receives the real-time data sent by Ex_ts and stores it locally; (S3) data processing, in which the Power_ts intelligent production management control terminal receives the system data change notification and the system administrator carries out the next step of order scheduling, the data processing covering speech recognition, speech-text synchronization, text proofreading and format conversion.
2. The intelligent film and television subtitle production method and system according to claim 1, characterized in that the intelligent speech subtitle production system comprises a network server srt_server connected to the speech recognition system; a network client ts_client connected to the client's local system; a ts_controller display processing module connected to the offline management system; a ts_staff extension module connected to the offline production system; and a ts_Viewer display processing module connected to the offline supervisory display system; the network server srt_server and the network client ts_client are both connected to the auto_ts intelligent production system, and the ts_controller display processing module is connected to the network client ts_client, the srt_server module and the ts_controller module respectively.
3. The intelligent film and television subtitle production method and system according to claim 2, characterized in that the network server srt_server collects the instantaneous state of the platform production system together with the order data and passes the processing results of the orders to the data receiving module to complete deep intelligent classification and data feature extraction, and the order classification and identification extension module provides the application program with fine-grained data classification and extraction of packet files of preliminary data information.
4. The intelligent film and television subtitle production method and system according to claim 2, characterized in that the network client ts_client serves as the application end through which a subtitle user submits data and receives the subtitle result, and submits real-time order data to the intelligent subtitle production platform auto_ts.
5. The intelligent film and television subtitle production method and system according to claim 2, characterized in that the intelligent speech subtitle production system further comprises a ts_server module connected to the auto_ts intelligent production system, which receives and stores key data, receives data attribute information from auto_ts, records and stores the various processing information of the intelligent processing states, and writes the data to a hard disk for use by the auto_ts and ts_Viewer application programs.
6. The intelligent film and television subtitle production method and system according to claim 4, characterized in that the network client ts_client can be used as a web page, an applet, an APP or a PC application.
7. The intelligent film and television subtitle production method and system according to claim 2, characterized in that the step of connecting the network server srt_server and the network client ts_client comprises: (a) the network client ts_client application auto_ts logs in and attempts to connect to the server's subtitle production service host; (b) the network server srt_server application starts, connects to the auto_ts system database, and the server creates a listening thread that waits for client network connections; (c) once the network client ts_client has connected, the network server srt_server creates a thread to wait for order data, the network client ts_client sends the audio/video file or additional reference files, and the data in the current buffer is sent to the network client ts_client; (d) the network client ts_client stores the received data in the adata directory under names such as adata/profiles/1 and adata/profiles/year-month-day/hour-minute; the system reads attributes of the file such as size and duration, generates order data and writes it into the auto_ts database, returns the display data needed by the client and waits for the user to complete payment; after the network server srt_server receives the payment success message, it reads the auto_ts database to obtain the device ID of the person responsible on the subtitle production platform, sends a new-order processing notification, exits the connection thread and closes the connection.
8. The intelligent film and television subtitle production method and system according to claim 2, characterized in that the ts_Viewer display processing module provides user data message interaction, statistics, historical data, task progress preview and manual adjustment of task requirements through communication with the user; it performs primary multi-level recognition processing or single-level or multi-level manual proofreading on order tasks that need checking, puts the correspondingly bound project requirements into a processing state, accesses the ts_server system to import real-time data, and the auto_ts platform sends real-time task progress messages to the associated users in the form of SMS or APP notifications.
9. The intelligent film and television subtitle production method and system according to claim 2, characterized in that the intelligent speech subtitle production system further comprises a new-order waiting task module connected to the auto_ts intelligent production system, and the ts_controller display processing module loads a layout of the production data of the intelligent subtitle production platform and loads the real-time data received from auto_ts.
10. The intelligent film and television subtitle production method and system according to claim 9, characterized in that the new-order waiting task module operates in the following three ways:
(I) a new order is clicked in the layout, the auto_ts intelligent speech-to-text server module is loaded to complete automatic task matching, all reference data and format requirements in the order together with key information specified by the client, such as the required speed grade, are read, the state is restored to the current task queue, and an order task processing progress image is displayed;
(II) the server completes the intelligent subtitle processing result data; a dialog box pops up in which manual or machine-driven intelligent checking and proofreading of the data files is selected; the data are loaded so that the speech and text time axes are processed synchronously; finally, the current graph is updated with the new task processing progress data;
(III) after processing, checking, synchronization and format conversion, the data are packaged and submitted to the client, and an order completion message is sent to remind the client to download the result data from the platform.
CN202110728604.8A 2021-06-29 2021-06-29 Intelligent movie subtitle making method and system Pending CN113329192A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110728604.8A CN113329192A (en) 2021-06-29 2021-06-29 Intelligent movie subtitle making method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110728604.8A CN113329192A (en) 2021-06-29 2021-06-29 Intelligent movie subtitle making method and system

Publications (1)

Publication Number Publication Date
CN113329192A true CN113329192A (en) 2021-08-31

Family

ID=77425167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110728604.8A Pending CN113329192A (en) 2021-06-29 2021-06-29 Intelligent movie subtitle making method and system

Country Status (1)

Country Link
CN (1) CN113329192A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001043021A2 (en) * 1999-12-07 2001-06-14 Entricom, Inc. Telecommunications order entry, tracking and management system
JP2004336668A (en) * 2003-05-12 2004-11-25 National Institute Of Information & Communication Technology Administrative server for caption creation and distributed caption program production system
KR20050075471A (en) * 2004-01-15 2005-07-21 에스케이 텔레콤주식회사 Method and system for providing caption service to mobile phone
US20050210511A1 (en) * 2004-03-19 2005-09-22 Pettinato Richard F Real-time media captioning subscription framework for mobile devices
US20070136777A1 (en) * 2005-12-09 2007-06-14 Charles Hasek Caption data delivery apparatus and methods
CN101296325A (en) * 2007-04-27 2008-10-29 新奥特硅谷视频技术有限责任公司 Subtitle generator, subtitle broadcasting system and method
US7992183B1 (en) * 2007-11-09 2011-08-02 Google Inc. Enabling users to create, to edit and/or to rate online video captions over the web
CN101656838A (en) * 2008-08-19 2010-02-24 新奥特(北京)视频技术有限公司 Broadcasting-line automation caption producing and broadcasting system of television broadcast station
CN101551664A (en) * 2009-05-04 2009-10-07 大道计算机技术(上海)有限公司 Big screen commanding and dispatching system and realizing method thereof
CN102045296A (en) * 2009-10-21 2011-05-04 Tcl集团股份有限公司 System for realizing network media caption-playing based on network protocol and method thereof
CN104079838A (en) * 2014-07-08 2014-10-01 丽水桉阳生物科技有限公司 Character generator with financial data caption making and playing function
CN105245917A (en) * 2015-09-28 2016-01-13 徐信 System and method for generating multimedia voice caption
CN109257547A (en) * 2018-09-21 2019-01-22 南京邮电大学 The method for generating captions of Chinese online audio-video

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张歆 (Zhang Xin): "智能文稿唱词实验系统的设计与应用" [Design and application of an intelligent script and lyric caption experimental system], 《现代电视技术》 (Modern Television Technology) *
张歆 (Zhang Xin): "智能语音技术在春晚后期字幕制作中的探索与应用" [Exploration and application of intelligent speech technology in post-production subtitle making for the Spring Festival Gala], 《现代电视技术》 (Modern Television Technology) *

Similar Documents

Publication Publication Date Title
CN103414949B (en) A kind of multimedia edit system based on intelligent television and method
CN109474843A (en) The method of speech control terminal, client, server
US7107284B1 (en) Method of generating user customized document incorporating at least a portion of discovery information recorded in the system of record database in data warehouse environment
CN111405224B (en) Online interaction control method and device, storage medium and electronic equipment
CN109274913A (en) A kind of video intelligent slice clipping method and system
EP4297030A2 (en) Polling questions for a conference call discussion
US11837219B2 (en) Creation of a minute from a record of a teleconference
CN108460120A (en) Data saving method and device, terminal equipment and storage medium
WO2010073695A1 (en) Edited information provision device, edited information provision method, program, and storage medium
CN111324480A (en) Large host transaction fault positioning system and method
CN108228843B (en) Internet-based lecture note compression transmission and restoration method
CN113672748A (en) Multimedia information playing method and device
US20230247068A1 (en) Production tools for collaborative videos
CN115599524A (en) Data lake system based on cooperative scheduling processing of streaming data and batch data
CN111339357A (en) Recommendation method and device based on live user behaviors
CN111401028A (en) Automatic comparison method and device for RPS software version of nuclear power station
CN111507754A (en) Online interaction method and device, storage medium and electronic equipment
US11755181B2 (en) Populating answers to polling questions based on initial responses
CN113055278A (en) Mail filing processing method and device
CN113570335A (en) Customized furniture intelligent visual system and production process
CN109325778A (en) Intelligent customer service system for unified management and method by all kinds of means
CN113329192A (en) Intelligent movie subtitle making method and system
CN116628967A (en) Intelligent factory digital twin system
CN114007221B (en) Creation platform of 5G message facing enterprise
CN111625616B (en) Enterprise-level data management system capable of mass storage

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination

TA01 Transfer of patent application right
Effective date of registration: 20220118
Address after: Room 314, building 1, No. 50, Shunxi South Road, Renhe Town, Shunyi District, Beijing 101300
Applicant after: Around (Beijing) Advertising Technology Co.,Ltd.
Address before: 100089 floor 7-a668, floor 7, No. 28, information road, Haidian District, Beijing
Applicant before: Beijing hot hand Technology Co.,Ltd.

TA01 Transfer of patent application right
Effective date of registration: 20240122
Address after: 2190, 2nd Floor, Building 20, Shuangqiao (Shuangqiao Dairy Factory), Chaoyang District, Beijing, 100020
Applicant after: Beijing hot hand Technology Co.,Ltd.
Country or region after: China
Address before: Room 314, building 1, No. 50, Shunxi South Road, Renhe Town, Shunyi District, Beijing 101300
Applicant before: Around (Beijing) Advertising Technology Co.,Ltd.
Country or region before: China