CN103605710B - Distributed audio/video processing apparatus and processing method - Google Patents
Distributed audio/video processing apparatus and processing method
- Publication number
- CN103605710B CN103605710B CN201310558626.XA CN201310558626A CN103605710B CN 103605710 B CN103605710 B CN 103605710B CN 201310558626 A CN201310558626 A CN 201310558626A CN 103605710 B CN103605710 B CN 103605710B
- Authority
- CN
- China
- Prior art keywords
- video
- audio
- video data
- processing unit
- fragment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The present invention provides a distributed audio/video file processing system, comprising: an input processing unit for receiving a source video file, processing the source video file to obtain audio and video data, dividing the audio and video data into sequentially ordered audio and video data fragments respectively, and distributing the resulting audio and video data fragments to the corresponding audio/video data processing units for processing according to an allocation rule; several video data processing units, each for processing the divided video data fragments; several audio data processing units, each for processing the divided audio data fragments; an output processing unit for processing and outputting the processed audio/video data fragments; and a scheduling unit for coordinating the work of the input processing unit, the several audio/video data processing units, and the output processing unit. The invention further provides a distributed audio/video file processing method and a distributed audio/video file processing apparatus.
Description
Technical field
The present invention relates to an apparatus and method for processing data using a computer or data processing device, and more particularly to an apparatus and method for processing audio/video files using distributed computers or data processing devices.
Technical background
With the development of networks and of cultural industries, audio and video resources have become extremely abundant, and the demand for audio/video file processing has also grown rapidly.
The general flow of audio/video file processing is as follows: first, the audio/video file to be processed is demultiplexed into a video frame sequence and an audio frame sequence; the video frame sequence and the audio frame sequence are then decoded into RAW-format and PCM-format data respectively; the RAW-format and PCM-format data are processed; the processed RAW-format and PCM-format data are then encoded into the required audio frame sequence and video frame sequence; finally, the audio frame sequence and video frame sequence are packaged into the required file format.
The above processing is performed by a computer, or by a data processing apparatus composed of computers, and these existing computers or data processing apparatuses process files using their local software and hardware resources. Audio/video file processing is computationally intensive and consumes a great deal of a processing apparatus's computing power and storage resources. With the steady increase in high-resolution audio/video files and in processing demand, the bottleneck of single-machine audio/video file processing has become increasingly prominent: single-machine processing is slow and prone to system crashes. Even with very highly configured computers, users find it difficult to guarantee processing speed and stability, and in particular cannot satisfy large-batch processing tasks with demanding time requirements.
In view of the above problems in the prior art, the present invention provides a distributed processing system. It uses multiple computers or processing units to achieve parallel processing, greatly reducing the time needed for processing while reducing the processing load on each system and the likelihood of a crash. Thanks to the comprehensive monitoring mechanism employed, processing reliability is very high and quality requirements can be fully met.
Summary of the invention
A first aspect of the present invention provides a distributed audio/video file processing system, comprising: an input processing unit for receiving a source video file, processing the source video file to obtain video data and audio data, dividing the video data and the audio data into sequentially ordered video data fragments and audio data fragments respectively, and distributing the resulting video data fragments and audio data fragments to the corresponding video data processing units and audio data processing units for processing according to an allocation rule; several video data processing units, each for processing the divided video data fragments; several audio data processing units, each for processing the divided audio data fragments; an output processing unit for processing and outputting the processed video data fragments and audio data fragments; and a scheduling unit for coordinating the work of the input processing unit, the several video data processing units, the several audio data processing units, and the output processing unit.
Preferably, after dividing the video data and the audio data into sequentially ordered video data fragments and audio data fragments respectively, the input processing unit computes the serial number of each video data fragment and each audio data fragment modulo the number of video data processing units and audio data processing units respectively, and distributes each video data fragment and audio data fragment to the corresponding video data processing unit or audio data processing unit for processing according to the result of the modulo operation.
Preferably, the input processing unit has a first monitoring module, a first processing module, and a first transport module; each video data processing unit has a second monitoring module, a second processing module, and a second transport module; each audio data processing unit has a third monitoring module, a third processing module, and a third transport module; and the output processing unit has a fourth monitoring module, a fourth processing module, and a fourth transport module. The first, second, third, and fourth monitoring modules are each communicatively connected with the scheduling unit, receive relevant instructions from the scheduling unit, and report the running status of their respective processing units to the scheduling unit. The first transport module is communicatively connected with each second transport module and each third transport module, and sends the corresponding video data fragments and audio data fragments to be processed to each second transport module and each third transport module respectively. The fourth transport module is communicatively connected with each second transport module and each third transport module, and receives the processed video data fragments and audio data fragments.
Preferably, the input processing unit demultiplexes the received source file to obtain a video sequence and an audio sequence, and divides the video sequence and the audio sequence respectively.
Preferably, after judging that all processed video data fragments and audio data fragments have been received from each video data processing unit and each audio data processing unit, the output processing unit merges the received video data fragments and audio data fragments and packages them in a predetermined format.
Preferably, the scheduling unit and the first, second, third, and fourth monitoring modules communicate using address information obtained from a system configuration file.
Preferably, the first, second, third, and fourth monitoring modules each periodically send the running status information of their respective processing units to the scheduling unit; the scheduling unit maintains the system configuration file according to the running status information, and sends the updated system configuration file to each of the first, second, third, and fourth monitoring modules.
Preferably, the contents of the system configuration file include the quantity, physical addresses, running statuses, and working directories of the video data processing units, and the quantity, physical addresses, running statuses, and working directories of the audio data processing units.
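A configuration file with those contents might look like the following sketch, here represented as a Python structure; the field names, addresses, and layout are assumptions for illustration, not the patent's actual file format.

```python
# Illustrative shape of the system configuration file's contents:
# per-unit physical address, running status, and working directory.

system_config = {
    "video_units": [
        {"address": "10.0.0.11", "status": "running", "workdir": "/work/v0"},
        {"address": "10.0.0.12", "status": "running", "workdir": "/work/v1"},
    ],
    "audio_units": [
        {"address": "10.0.0.21", "status": "idle", "workdir": "/work/a0"},
    ],
}

# The unit counts needed by the modulo allocation rule fall out directly:
num_video_units = len(system_config["video_units"])
num_audio_units = len(system_config["audio_units"])
```

Because the scheduling unit redistributes this file whenever statuses change, every monitoring module can derive the current unit counts and addresses from it without any other discovery mechanism.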
Preferably, when an error occurs in the processing of one of the video data processing units or one of the audio data processing units, its corresponding second or third monitoring module reports the error to the scheduling unit; the scheduling unit decides whether to re-execute the failed processing or to ignore the error.
Preferably, the input processing unit rejoins the failed video data fragments and/or audio data fragments into the remaining not-yet-sent video data fragments and/or audio data fragments and re-serializes them, then computes the new serial numbers modulo the number of video data processing units and/or audio data processing units, and reassigns the failed video data fragments and/or audio data fragments to the corresponding video data processing units and/or audio data processing units for processing according to the result of the modulo operation.
Preferably, the input processing unit obtains the video frame count, audio frame count, and audio/video coding formats of the demultiplexed source video file, and sends the video frame count, audio frame count, and audio/video coding formats to the corresponding video data processing units and audio data processing units.
Preferably, the input processing unit sends the number of divided video data fragments and the number of divided audio data fragments of the source file to the output processing unit.
Preferably, the input processing unit inserts into each divided video data fragment and audio data fragment the source file information of the fragment, its serial number among all video or audio data fragments of the source file, and its file type information.
Preferably, the input processing unit inserts the source file information, the serial number among all video or audio data fragments of the source file, and the file type information into the file names of the divided video data fragments and audio data fragments.
Preferably, the output processing unit builds a reception mapping table for the source file according to the number of divided video data fragments and the number of divided audio data fragments, and, using the source file information, the serial number among all video or audio data fragments of the source file, and the file type information that the input processing unit inserted into the divided video data fragments and audio data fragments, marks each processed data fragment received as received in the reception mapping table. When the output processing unit judges that all video data fragments and audio data fragments in the reception mapping table are marked as received, it integrates the processed video data fragments and the processed audio data fragments to obtain the processed audio/video file. When the output processing unit judges that, after a certain time, some video data fragments and/or audio data fragments in the reception mapping table are still marked as not received, it sends missing-fragment information for the corresponding divided video data fragments and/or divided audio data fragments to the scheduling unit.
Preferably, the reception mapping table also contains the number of video data fragments and the number of audio data fragments received so far; only after the output processing unit judges that the number of video data fragments and the number of audio data fragments in the reception mapping table are consistent with the number of video data fragments and the number of audio data fragments sent by the input processing unit does it judge whether all video data fragments and audio data fragments in the reception mapping table are marked as received.
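The reception mapping table described above can be sketched as a small bookkeeping structure; the class and method names are assumptions for illustration.

```python
# Reception mapping table: track which fragment serial numbers have
# arrived, against the expected counts sent by the input unit.

class ReceptionMap:
    def __init__(self, n_video, n_audio):
        self.expected = {"video": n_video, "audio": n_audio}
        self.received = {"video": set(), "audio": set()}

    def mark_received(self, kind, serial):
        self.received[kind].add(serial)

    def complete(self):
        """True once every expected fragment of both kinds is marked."""
        return all(len(self.received[k]) == self.expected[k]
                   for k in ("video", "audio"))

    def missing(self, kind):
        """Serial numbers still marked as not received."""
        return sorted(set(range(self.expected[kind])) - self.received[kind])

rmap = ReceptionMap(n_video=3, n_audio=2)
for serial in (0, 2):
    rmap.mark_received("video", serial)
```

After the timeout, `missing()` would yield exactly the serial numbers reported to the scheduling unit as missing-fragment information.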
Preferably, when the scheduling unit receives the missing-fragment information, it instructs the input processing unit to compute the serial numbers of the missing video data fragments and/or audio data fragments modulo the current number of video data processing units and/or audio data processing units, to reassign each missing video data fragment and/or audio data fragment to the corresponding video data processing unit and/or audio data processing unit for processing according to the result of the modulo operation, and to send the reprocessed video data fragments and/or audio data fragments to the output processing unit.
Preferably, when the scheduling unit receives missing-fragment information for video data fragments, it instructs the input processing unit to renumber all missing video data fragments in order, to compute the new numbers modulo the current number of video data processing units, to reassign each missing video data fragment to the corresponding video data processing unit for processing according to the result of the modulo operation, and to send the reprocessed video data fragments to the output processing unit. When the scheduling unit receives missing-fragment information for audio data fragments, it instructs the input processing unit to renumber all missing audio data fragments in order, to compute the new numbers modulo the current number of audio data processing units, to reassign each missing audio data fragment to the corresponding audio data processing unit for processing according to the result of the modulo operation, and to send the reprocessed audio data fragments to the output processing unit.
Another aspect of the present invention provides an audio/video file processing method, comprising: an input processing step of receiving a source video file through an input processing unit, processing the source video file to obtain video data and audio data, dividing the video data and the audio data into sequentially ordered video data fragments and audio data fragments respectively, and distributing the resulting video data fragments and audio data fragments to the corresponding video data processing units and audio data processing units for processing according to an allocation rule; a video data processing step of processing the divided video data fragments using several video data processing units respectively; an audio data processing step of processing the divided audio data fragments using several audio data processing units respectively; an output processing step of processing and outputting the processed video data fragments and audio data fragments through an output processing unit; and a scheduling step of coordinating, through a scheduling unit, the work of the input processing unit, the several video data processing units, the several audio data processing units, and the output processing unit.
Preferably, in the input processing step, after the video data and the audio data are divided into sequentially ordered video data fragments and audio data fragments respectively, the serial number of each video data fragment and each audio data fragment is computed modulo the number of video data processing units and audio data processing units respectively, and each video data fragment and audio data fragment is distributed to the corresponding video data processing unit or audio data processing unit for processing according to the result of the modulo operation.
Preferably, the method includes a demultiplexing step of demultiplexing the received source file to obtain a video sequence and an audio sequence, and a dividing step of dividing the video sequence and the audio sequence respectively.
Preferably, in the output processing step, after judging that all processed video data fragments and audio data fragments have been received from each video data processing unit and each audio data processing unit, the received video data fragments and audio data fragments are merged and packaged in a predetermined format.
Preferably, in the scheduling step, communication uses address information obtained from a system configuration file.
Preferably, in the scheduling step, the running status information of each processing unit is periodically sent to the scheduling unit, and the scheduling unit maintains the system configuration file according to the running status information and sends the updated system configuration file.
Preferably, the contents of the system configuration file include the quantity, physical addresses, running statuses, and working directories of the video data processing units, and the quantity, physical addresses, running statuses, and working directories of the audio data processing units.
Preferably, when an error occurs in the processing of one of the video data processing units in the video data processing step, or in the processing of one of the audio data processing units in the audio data processing step, the error is reported to the scheduling unit; the scheduling unit decides whether to re-execute the failed processing or to ignore the error.
Preferably, the input processing unit rejoins the failed video data fragments and/or audio data fragments into the remaining not-yet-sent video data fragments and/or audio data fragments and re-serializes them, then computes the new serial numbers modulo the number of video data processing units and/or audio data processing units, and reassigns the failed video data fragments and/or audio data fragments to the corresponding video data processing units and/or audio data processing units for processing according to the result of the modulo operation.
Preferably, in the input processing step, the video frame count, audio frame count, and audio/video coding formats of the demultiplexed source video file are obtained, and the video frame count, audio frame count, and audio/video coding formats are sent to the corresponding video data processing units and audio data processing units.
Preferably, in the input processing step, division information comprising the number of divided video data fragments and the number of divided audio data fragments of the source file is sent to the output processing unit.
Preferably, in the input processing step, the input processing unit inserts into each divided video data fragment and audio data fragment the source file information of the fragment, its serial number among all video or audio data fragments of the source file, and its file type information.
Preferably, in the output processing step, the output processing unit builds a reception mapping table for the source file according to the number of divided video data fragments and the number of divided audio data fragments, and, using the source file information, the serial number among all video or audio data fragments of the source file, and the file type information that the input processing unit inserted into the divided video data fragments and audio data fragments, marks each processed data fragment received as received in the reception mapping table. When the output processing unit judges that all video data fragments and audio data fragments in the reception mapping table are marked as received, it integrates the processed video data fragments and the processed audio data fragments to obtain the processed audio/video file. When the output processing unit judges that, after a certain time, some video data fragments and/or audio data fragments in the reception mapping table are still marked as not received, it sends missing-fragment information for the corresponding divided video data fragments and/or divided audio data fragments to the scheduling unit.
Preferably, the reception mapping table also contains the number of video data fragments and the number of audio data fragments received so far; only after the output processing unit judges that the number of video data fragments and the number of audio data fragments in the reception mapping table are consistent with the number of video data fragments and the number of audio data fragments sent by the input processing unit does it judge whether all video data fragments and audio data fragments in the reception mapping table are marked as received.
Preferably, when the scheduling unit receives the missing-fragment information, it instructs the input processing unit to compute the serial numbers of the missing video data fragments and/or audio data fragments modulo the current number of video data processing units and/or audio data processing units, to reassign each missing video data fragment and/or audio data fragment to the corresponding video data processing unit and/or audio data processing unit for processing according to the result of the modulo operation, and to send the reprocessed video data fragments and/or audio data fragments to the output processing unit.
Preferably, when the scheduling unit receives missing-fragment information for video data fragments, it instructs the input processing unit to renumber all missing video data fragments in order, to compute the new numbers modulo the current number of video data processing units, to reassign each missing video data fragment to the corresponding video data processing unit for processing according to the result of the modulo operation, and to send the reprocessed video data fragments to the output processing unit. When the scheduling unit receives missing-fragment information for audio data fragments, it instructs the input processing unit to renumber all missing audio data fragments in order, to compute the new numbers modulo the current number of audio data processing units, to reassign each missing audio data fragment to the corresponding audio data processing unit for processing according to the result of the modulo operation, and to send the reprocessed audio data fragments to the output processing unit.
The present invention also provides a distributed audio/video file processing apparatus, comprising: several first servers, each for processing video data fragments; several second servers, each for processing audio data fragments; a third server for processing and outputting the processed video data fragments and audio data fragments; and a fourth server for receiving a source video file, processing the source video file to obtain video data and audio data, dividing the video data and the audio data into sequentially ordered video data fragments and audio data fragments respectively, distributing the resulting video data fragments and audio data fragments to the corresponding first servers and second servers for processing according to an allocation rule, and coordinating the work among the several first servers, the several second servers, and the third server.
Preferably, after dividing the video data and the audio data into sequentially ordered video data fragments and audio data fragments respectively, the fourth server computes the serial number of each video data fragment and each audio data fragment modulo the number of first servers and second servers respectively, and distributes each video data fragment and audio data fragment to the corresponding first server or second server for processing according to the result of the modulo operation.
Preferably, each first server runs only one video data processing process, and each second server runs only one audio data processing process.
Preferably, the fourth server demultiplexes the received source file to obtain a video sequence and an audio sequence, and divides the video sequence and the audio sequence respectively.
Preferably, after judging that all processed video data fragments and audio data fragments have been received from each first server and each second server, the third server merges the received video data fragments and audio data fragments and packages them in a predetermined format.
Preferably, the fourth server obtains the video frame count, audio frame count, and audio/video coding formats of the demultiplexed source video file, and sends the video frame count, audio frame count, and audio/video coding formats to the corresponding first servers and second servers.
Preferably, the fourth server sends division information comprising the number of divided video data fragments and the number of divided audio data fragments of the source file to the third server.
Preferably, the fourth server inserts the source file information, the serial number among all video or audio data fragments of the source file, and the file type information into the file names of the divided video data fragments and audio data fragments.
Preferably, the third server builds a reception mapping table for the source file according to the number of divided video data fragments and the number of divided audio data fragments, and, using the source file information, the serial number among all video or audio data fragments of the source file, and the file type information that the fourth server inserted into the divided video data fragments and audio data fragments, marks each processed data fragment received as received in the reception mapping table. When the third server judges that all video data fragments and audio data fragments in the reception mapping table are marked as received, it integrates the processed video data fragments and the processed audio data fragments to obtain the processed audio/video file. When the third server judges that, after a certain time, some video data fragments and/or audio data fragments in the reception mapping table are still marked as not received, it sends missing-fragment information for the corresponding divided video data fragments and/or divided audio data fragments to the fourth server.
Preferably, the reception mapping table also contains the number of video data fragments and the number of audio data fragments received so far; only after the third server judges that the number of video data fragments and the number of audio data fragments in the reception mapping table are consistent with the number of video data fragments and the number of audio data fragments sent by the fourth server does it judge whether all video data fragments and audio data fragments in the reception mapping table are marked as received.
Preferably, when the fourth server receives the missing-fragment information, it computes the serial numbers of the missing video data fragments and/or audio data fragments modulo the current number of first servers and/or second servers, reassigns each missing video data fragment and/or audio data fragment to the corresponding first server and/or second server for processing according to the result of the modulo operation, and sends the reprocessed video data fragments and/or audio data fragments to the third server.
Preferably, when the fourth server receives missing-fragment information for video data fragments, it renumbers all missing video data fragments in order, computes the new numbers modulo the current number of first servers, reassigns each missing video data fragment to the corresponding first server for processing according to the result of the modulo operation, and sends the reprocessed video data fragments to the third server. When the fourth server receives missing-fragment information for audio data fragments, it renumbers all missing audio data fragments in order, computes the new numbers modulo the current number of second servers, reassigns each missing audio data fragment to the corresponding second server for processing according to the result of the modulo operation, and sends the reprocessed audio data fragments to the third server.
The distributed processing system and processing method of the present invention solve the speed bottleneck of single-machine processing and can greatly shorten the time needed for audio/video file processing. Because reliable task allocation and error correction mechanisms are established, the reliability of the processing results can be guaranteed. At the same time, failures such as deadlocks, which frequently occur when a single machine's processing load is too high, can be effectively avoided. Its scalability is excellent and its configuration flexible, making it especially suitable for processing extremely large or numerous audio/video files.
Brief description of the drawings
Fig. 1 is a structural block diagram of the distributed processing system according to an embodiment of the present invention;
Fig. 2 is a structural block diagram of the input processing module of the distributed processing system according to an embodiment of the present invention;
Fig. 3 is a structural block diagram of the output processing module of the distributed processing system according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the processing flow of the distributed processing system according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the Map file of the distributed processing system according to an embodiment of the present invention;
Fig. 6 is a flowchart of processing step S5 of the distributed processing system according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of the message file of the distributed processing system according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of the divided audio and video files of the distributed processing system according to an embodiment of the present invention;
Fig. 9 is a video file processing flowchart in processing step S8 of the distributed processing system according to an embodiment of the present invention;
Fig. 10 is an audio file processing flowchart in processing step S8 of the distributed processing system according to an embodiment of the present invention;
Fig. 11 is a processing flowchart of the output processing unit in processing step S9 of the distributed processing system according to an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of the reception mapping table of the distributed processing system according to an embodiment of the present invention.
Embodiment
The present invention is illustrated below with reference to the embodiments shown in the accompanying drawings. The disclosed embodiments are to be considered in all respects as illustrative and not restrictive. The scope of the present invention is not limited by the description of the above embodiments; it is indicated only by the scope of the claims, and includes all variations having the same meaning as, and falling within the scope of, the claims.
Fig. 1 is a structural block diagram of the distributed processing system. As shown in Fig. 1, the distributed system of the present embodiment includes a dispatch module (Dispatcher module) 1, an input processing unit 2, several video processing units 3 and 4, several audio processing units 5 and 6, an output processing unit 7, a watcher module (Watcher module) 8 and a client module (Client module) 9. The dispatch module 1 coordinates the operation of all parts of the whole system. The input processing unit 2 includes a monitor module (Monitor module) 21, an input processing module (Ingress module) 22 and a transport module (Offer module) 23; preferably, the monitor module 21, the input processing module 22 and the transport module 23 communicate via message queues. Each video processing unit 3 (4) includes a monitor module (Monitor module) 31 (41), a video processing module (VP module) 32 (42) and a transport module (Offer module) 33 (43); preferably, the monitor module 31 (41), the video processing module 32 (42) and the transport module 33 (43) also communicate via message queues. Each audio processing unit 5 (6) includes a monitor module (Monitor module) 51 (61), an audio processing module (AP module) 52 (62) and a transport module (Offer module) 53 (63); preferably, the monitor module 51 (61), the audio processing module 52 (62) and the transport module 53 (63) also communicate via message queues. The output processing unit 7 includes a monitor module (Monitor module) 71, an output processing module (Egress module) 72 and a transport module (Offer module) 73; preferably, the monitor module 71, the output processing module 72 and the transport module 73 also communicate via message queues. The watcher module 8 shares memory with the dispatch module 1 and obtains from it all the information held by the dispatch module 1; the watcher module 8 sends the obtained information to the client module 9, which displays it to the user through a graphical interface. The dispatch module 1, the input processing unit 2 and the watcher module (Watcher module) 8 may share one physical machine (server). Each video processing unit 3 or 4 may occupy a physical machine (server) of its own, i.e. the monitor module 31, the video processing module 32 and the transport module 33 may share one physical machine (server), and the monitor module 41, the video processing module 42 and the transport module 43 may share another physical machine (server); each such physical machine (server) may run only one video processing process (VP process) at a time. Likewise, each audio processing unit 5 or 6 may occupy a physical machine (server) of its own, i.e. the monitor module 51, the audio processing module 52 and the transport module 53 may share one physical machine (server), and the monitor module 61, the audio processing module 62 and the transport module 63 may share another physical machine (server); each such physical machine (server) may run only one audio processing process (AP process) at a time. The output processing unit 7 may also occupy a physical machine (server) of its own, i.e. the monitor module 71, the output processing module 72 and the transport module 73 may share one physical machine (server).
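As noted above, the modules within each processing unit preferably communicate through message queues. A minimal, hypothetical sketch of that intra-unit wiring follows; the module behavior, queue names and the sentinel convention are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch: a processing module consuming work items from an
# inbox queue and emitting results to an outbox queue, as a transport
# module would then forward them. upper() stands in for real A/V work.
import queue
import threading

def processing_module(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """Consume fragments from the inbox, 'process' them, emit to the outbox."""
    while True:
        fragment = inbox.get()
        if fragment is None:          # sentinel: shut down
            outbox.put(None)
            return
        outbox.put(fragment.upper())  # stand-in for real audio/video work

def run_unit(fragments):
    """Feed fragments through one worker thread and collect the results."""
    inbox, outbox = queue.Queue(), queue.Queue()
    worker = threading.Thread(target=processing_module, args=(inbox, outbox))
    worker.start()
    for fragment in fragments:
        inbox.put(fragment)
    inbox.put(None)
    results = []
    while (item := outbox.get()) is not None:
        results.append(item)
    worker.join()
    return results
```

A queue between modules decouples their speeds, which matches the patent's goal of keeping ingest, processing and transport from blocking one another.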
The dispatch module 1 is communicatively connected with the monitor module 21 in the input processing unit 2, the monitor modules (monitor modules 31 and 41) in the video processing units (video processing units 3 and 4), the monitor modules (monitor modules 51 and 61) in the audio processing units (audio processing units 5 and 6) and the monitor module 71 in the output processing unit 7, and thereby coordinates the operation of the input processing unit 2, the video processing units 3 and 4, the audio processing units 5 and 6, the output processing unit 7 and the other parts of the whole system. The transport module 23 in the input processing unit 2 is communicatively connected with the transport modules (transport modules 33 and 43) in the video processing units (video processing units 3 and 4) and the transport modules (transport modules 53 and 63) in the audio processing units (audio processing units 5 and 6), and transmits the corresponding information and data to each of them. The transport module 73 in the output processing unit 7 is communicatively connected with the transport modules (transport modules 33 and 43) in the video processing units (video processing units 3 and 4) and the transport modules (transport modules 53 and 63) in the audio processing units (audio processing units 5 and 6), and receives the corresponding information and data from each of them.
Fig. 2 is a structural block diagram of the input processing module 22. As shown in Fig. 2, the input processing module 22 includes a decapsulation module 221, an input data processing module 222 and a data storage module 223. The decapsulation module 221 decapsulates the source video files received by the input processing unit 2, the input data processing module 222 processes the audio/video files, and the data storage module 223 stores the audio/video files and related information. The decapsulation module 221 includes an audio/video file format judging unit 2211, a decapsulation selecting unit 2212 and several decapsulation units 2213, 2214, 2215, .... The decapsulation units (2213, 2214, 2215, ...) handle different formats, and the decapsulation selecting unit 2212 selects, according to the judgment result of the audio/video file format judging unit 2211, the matching decapsulation unit to decapsulate the source video file, so that the decapsulation module 221 can decapsulate files of different formats. The input data processing module 222 has a segmentation module 2221 and an allocation module 2222. The segmentation module 2221 divides the decapsulated audio/video file into multiple audio data fragments and video data fragments and numbers the audio data fragments and the video data fragments sequentially. The allocation module 2222 takes the sequence number of each segmented fragment modulo the number of audio processing units or video processing units, and thereby determines the audio processing unit or video processing unit corresponding to each audio data fragment or video data fragment. The input processing module 22 also obtains the encapsulation format information of the source video file and sends this information through the transport modules 23, 33, 43, 53 and 63 to the output processing unit 7.
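The format judgment and decapsulation-unit selection described above can be sketched as a simple dispatch table. The byte signatures and handler bodies below are illustrative assumptions, not real container parsing:

```python
# Hypothetical sketch of format judgment (unit 2211) and demuxer selection
# (unit 2212): detect the container format, then dispatch to the matching
# decapsulation routine. Signatures and handlers are toy stand-ins.
def demux_mpegts(data: bytes) -> dict:
    return {"container": "mpegts", "payload": data}

def demux_mp4(data: bytes) -> dict:
    return {"container": "mp4", "payload": data}

DEMUXERS = {"mpegts": demux_mpegts, "mp4": demux_mp4}

def detect_format(data: bytes) -> str:
    # MPEG-TS packets begin with sync byte 0x47; MP4 files carry an 'ftyp'
    # box near the start. Simplified checks for illustration only.
    if data[:1] == b"\x47":
        return "mpegts"
    if data[4:8] == b"ftyp":
        return "mp4"
    raise ValueError("unsupported container format")

def decapsulate(data: bytes) -> dict:
    return DEMUXERS[detect_format(data)](data)
```

The same table-driven selection would apply symmetrically on the output side, where the encapsulation selecting unit picks a muxer from the received format information.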
Fig. 3 is a structural block diagram of the output processing module 72. As shown in Fig. 3, the output processing module 72 includes an encapsulation module 721, an output data processing module 722 and a storage module 723. The storage module 723 stores the audio files and video files processed by the video processing units 3 and 4 and the audio processing units 5 and 6. The output data processing module 722 processes the audio files and video files processed by the video processing units 3 and 4 and the audio processing units 5 and 6, and the audio files and video files processed by the output data processing module 722 are delivered to the encapsulation module 721.
The encapsulation module 721 includes an encapsulation format selecting unit 7211 and several encapsulation units 7212, 7213, 7214, ... with different encapsulation formats. The encapsulation format selecting unit 7211 selects the matching encapsulation unit from the encapsulation units (7212, 7213, 7214, ...) according to the encapsulation format information received by the transport module 73, so that the encapsulation module 721 can perform encapsulation according to the requirements of different encapsulation formats.
Fig. 4 is a schematic diagram of the processing flow of the distributed processing system of the present invention. The processing flow of the distributed processing system is described below with reference to Fig. 4. The operator powers on each physical machine (server) and starts the distributed processing system (step S1). The dispatch module 1 reads the system files (step S2). Preferably, in step S2 the dispatch module 1 reads a system configuration file (for example tvmccd.cfg) in a specified directory, obtains configuration items such as <Input>, <Server> and <Port> for the process, and at the same time reads the corresponding files (for example the tvmccd.par file and logo files) from the directory specified by the <Input> item (for example opt/tvmccd/ingress/Dispatcher/Input). From the content of the corresponding file (for example tvmccd.par) a map file (Map file, for example tvmccd.map) associating source video file names with file IDs is generated. Fig. 5 is a schematic diagram of a Map file of the distributed processing system of the present invention; the correspondence between source video file names and file IDs can be queried from the Map file. There may be several logo files, or none at all. The system configuration file records the configuration information of the whole distributed system, such as the IP address of the computer where each module resides, the port numbers used, the location of the working directory, the numbers of video processing and audio processing modules, and the task allocation method. The PAR file records the file name of each file to be processed, the file name after processing, and parameters such as the coding format, bit rate, frame rate and resolution of the processed audio/video file. The LOGO files record content such as watermarks and captions that the user wants to add to the video, together with parameters such as their size and position. Like the dispatch module 1, the other modules in the system also read a system configuration file from their respective specified directories, obtain from it information such as the communication addresses of the other modules in the system, and use this information to establish connections with the modules they need to communicate with. The dispatch module 1 establishes communication connections with the monitor module 21 of the input processing unit 2, the monitor modules (31 and 41) of the video processing units (3 and 4), the monitor modules (51 and 61) of the audio processing units and the monitor module 71 of the output processing unit 7 (step S3). Preferably, in step S3 the dispatch module 1 sets up a Socket listener (the listening port Dispatcher_Port is recorded in the system configuration file, for example tvmccd.cfg) and waits for the connection requests of the monitor modules. Each processing unit sends a Socket connection request to the dispatch module 1 at the address given in the system configuration file; once the dispatch module 1 hears the Socket connection request of a monitor module, it accepts the connection request and establishes the Socket link, and at the same time sends the corresponding files to the corresponding monitor module (monitor module 21, 31, 41, 51, 61 or 71). For example, the tvmccd.par and tvmccd.map files are sent to every monitor module, and the logo files are sent to the monitor modules (31 and 41) of the video processing units (3 and 4).
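The Socket setup of step S3 might look like the following sketch, in which the dispatcher accepts connection requests from monitor modules and replies with a file name. The handshake payload, function names and use of an ephemeral port are assumptions for illustration:

```python
# Hypothetical sketch of the step-S3 handshake: the dispatcher listens,
# each monitor module connects and announces itself, the dispatcher
# replies with the name of a file it would transmit (e.g. tvmccd.par).
import socket
import threading  # used to run server and client concurrently in the test

def make_listener(port: int = 0) -> socket.socket:
    """Bind the dispatcher's listening socket (port 0 = ephemeral, for demo)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen()
    return srv

def serve_monitors(srv: socket.socket, expected: int) -> list:
    """Accept `expected` monitor connections; reply with a file name each."""
    names = []
    for _ in range(expected):
        conn, _addr = srv.accept()
        with conn:
            names.append(conn.recv(1024).decode())
            conn.sendall(b"tvmccd.par")  # stand-in for the real file payload
    return names

def monitor_connect(port: int, name: str) -> bytes:
    """A monitor module's side: request a connection, announce itself."""
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(name.encode())
        return cli.recv(1024)
```

In the real system the port would come from Dispatcher_Port in tvmccd.cfg rather than being ephemeral.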
The transport module 23 of the input processing unit 2 establishes communication connections with the transport modules (33 and 43) of the video processing units (3 and 4) and the transport modules (53 and 63) of the audio processing units (5 and 6); at the same time the transport module 73 of the output processing unit 7 establishes communication connections with the transport modules (33 and 43) of the video processing units (3 and 4) and the transport modules (53 and 63) of the audio processing units (5 and 6) (step S4). The input processing unit 2 then starts processing the source video files (step S5).
Fig. 6 is a flow chart of step S5. As shown in Fig. 6, the input processing module 22 of the input processing unit 2 sends information such as the start time and the name of the source-video-file processing process to the monitor module 21 (step S51). The input processing module 22 reads the system configuration file (for example the tvmccd.cfg file) in the specified directory and obtains the corresponding configuration items such as <Source>, <Processed>, <Failed>, <Output> and <Send> (step S52). The input processing module 22 reads the source video files in order from the <Source> directory (step S53), and the decapsulation module 221 of the input processing module 22 decapsulates the source video files one by one (step S54). If a source video file is decapsulated successfully (step S55: yes), the corresponding audio and video frame sequences are obtained, the successfully decapsulated source video file is moved from the <Source> directory to the <Processed> directory, and the decapsulated audio and video frame sequences are stored in the storage module 223 (step S56). The input data processing module 222 of the input processing module 22 processes the audio and video frame sequences in the storage module 223, obtains their related information (such as coding information), and writes the obtained related information (such as coding information) into information files (Info files) (step S57).
Fig. 7 is a schematic diagram of the format of an information file. As shown in Fig. 7, an information file contains the source file ID, the file size, the coding type and so on. In the present embodiment, video information files are named in the form VI-<file ID>.info and also contain information such as the video bit rate, video height and video width; audio information files are named in the form AI-<file ID>.info and also contain information such as the audio bit rate, sample rate and channel number. In step S57, an info file being written is first placed under the <Output> directory; once it has been written completely and confirmed to be error-free, it is moved to the <Send> directory. The input data processing module 222 of the input processing module 22 then divides the video frame sequence into multiple video data fragments (video files) in units of GOPs (groups of pictures) and the audio frame sequence into multiple audio data fragments (audio files) in units of GOAs (the audio data units corresponding to GOPs) (step S58). For example, in an MPEG file each GOP starts with an I frame, which may be followed by P frames and B frames that together with it constitute the GOP. Since only I frames carry the data of a complete picture, while P frames and B frames carry only the changes relative to other frames, segmenting at GOP boundaries keeps the P frames and B frames of each GOP in the same file as their I frame and avoids the situation where their data cannot be recovered because the complete picture data of the corresponding I frame is not in the same GOP file. Fig. 8 is a schematic diagram of the format of a GOP or GOA file. As shown in Fig. 8, each GOP file and GOA file after segmentation contains information such as the ID of its source file, the sequence number of this GOP or GOA file among all the GOP or GOA files of the source file and the number of frames it contains, followed by the frame data part. The input data processing module 222 names the segmented GOP and GOA files according to a fixed rule: the file name contains the file ID of the source file corresponding to this file, the sequence number of this file among all the GOP or GOA files of the source file, and the file type (GOP file or GOA file) (step S59). For example, a file may be named F01-1.gop, where "F01" is the file ID of the source file, "-1" indicates that the sequence number of this file among the segmented files is 1, and the file suffix ".gop" indicates that this file is a GOP file; the file name alone thus determines that this is the 1st GOP file of the source file whose file ID is F01. The input data processing module 222 allocates the video files and audio files according to a certain allocation rule. For example, the input data processing module 222 queries the numbers of audio and video processing units in the system configuration file and takes the sequence number of each GOP or GOA file modulo the number of video or audio processing units, thereby determining to which video or audio processing unit the GOP or GOA file is to be distributed (step S510). Suppose there are 9 GOP files with sequence numbers 1-9 and, as shown in Fig. 1, there are currently 2 video processing units 3 and 4. The input data processing module 222 takes the sequence number of each GOP file in turn modulo 2 (the number of video processing units); files whose result is 1 are distributed to video processing unit 3 and files whose result is 0 are distributed to video processing unit 4. In this way files 1, 3, 5, 7 and 9 are assigned to video processing unit 3, and files 2, 4, 6 and 8 are assigned to video processing unit 4. If there are currently 4 video processing units VP1-VP4, the sequence number of each GOP file is taken modulo 4 (the number of video processing units); files whose result is 1 are distributed to VP1, files whose result is 2 to VP2, files whose result is 3 to VP3 and files whose result is 0 to VP4. In this way files 1, 5 and 9 are assigned to VP1, files 2 and 6 to VP2, files 3 and 7 to VP3, and files 4 and 8 to VP4. Likewise, the audio processing unit to which each GOA file is distributed is determined by the same rule. After all GOP and GOA files have been allocated, the input processing module 22 sends the allocation result by message to the transport module 23 of the input processing unit 2. The input data processing module 222 of the input processing module 22 writes the generated GOP and GOA files into the <Output> directory, moves each GOP or GOA file to the <Send> directory once its disk write has completed and been confirmed error-free, and moves the processed source file from the <Source> directory to the <Processed> directory (step S511), while writing the total numbers of GOP and GOA files into a total file (step S512). In step S512, the total file being written is first placed under the <Output> directory; once it has been written completely and confirmed to be error-free, it is moved to the <Send> directory.
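The write-then-move discipline used throughout these steps (write under <Output>, confirm the disk write is error-free, then move to <Send>) might be sketched as follows. The directory names come from the configuration items; the read-back verification is an illustrative assumption about how "confirmed error-free" could be implemented:

```python
# Hypothetical sketch of the <Output> -> <Send> publishing rule: a file
# only appears under <Send> once it has been fully written and verified,
# so downstream transport modules never pick up a partial file.
import os
import shutil

def publish(data: bytes, name: str, output_dir: str, send_dir: str) -> str:
    tmp_path = os.path.join(output_dir, name)
    with open(tmp_path, "wb") as f:
        f.write(data)
    with open(tmp_path, "rb") as f:    # verify the completed write
        assert f.read() == data
    final_path = os.path.join(send_dir, name)
    shutil.move(tmp_path, final_path)  # move only after verification
    return final_path
```

The design point is atomic handoff: consumers watch <Send>, so a crash mid-write leaves at worst a stale file under <Output>, never a truncated file in the consumption path.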
The transport module 23 of the input processing unit 2 receives the allocation result for each GOP and GOA file sent by the input processing module 22 and, according to this result, sends the GOP and GOA files together with the total file to the corresponding audio/video processing units (step S513). In step S514, the file names (including the names of the source file and of the GOP and GOA files), the file ID, the number of video frames, the number of audio frames, the video coding format and the audio coding format are sent to the monitor module 21. If the decapsulation of a source video file fails (step S55: no), the input processing unit 2 moves the source video file whose decapsulation failed from the <Source> directory to the <Failed> directory (step S515), and at the same time sends the ID of the failed source video file to the monitor module 21 (step S516). The error information received by the monitor module 21 is sent through the watcher module 8 to the client module 9. The user can then enter a command through the client module 9 to decide whether to subject the source video file to a new round of processing, or to simply abandon any further processing of that source video file.
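Steps S58 to S510 above can be sketched end to end: split the frame sequence into GOPs at I frames, name each fragment <file ID>-<sequence number>.gop, and choose a processing unit by taking the sequence number modulo the unit count. Frame types are modeled as plain strings here; real GOP parsing is of course format-specific:

```python
# Hypothetical sketch of GOP segmentation (S58), fragment naming (S59)
# and modulo-based allocation (S510). Units are numbered 1..unit_count.
def split_gops(frames):
    """Each GOP starts at an I frame; its P/B frames stay with that I frame."""
    gops, current = [], []
    for frame in frames:
        if frame == "I" and current:
            gops.append(current)
            current = []
        current.append(frame)
    if current:
        gops.append(current)
    return gops

def name_fragments(file_id, gops):
    """Name fragments F01-1.gop, F01-2.gop, ... per the rule in step S59."""
    return [f"{file_id}-{i}.gop" for i in range(1, len(gops) + 1)]

def assign_unit(seq: int, unit_count: int) -> int:
    """Remainder 1 maps to the first unit, remainder 0 to the last unit."""
    r = seq % unit_count
    return r if r != 0 else unit_count
```

With 2 units this reproduces the example in the text: fragments 1, 3, 5, 7 and 9 land on the first unit and 2, 4, 6 and 8 on the second.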
Returning to Fig. 4, in step S6 the monitor module 21 of the input processing unit 2 sends the related information, such as the file names, file ID, number of video frames, number of audio frames, video coding format and audio coding format, to the dispatch module 1, while the transport module 23 of the input processing unit 2 sends the info files, GOP and GOA files, total files and other data under the <Send> directory to the corresponding video processing units 3 (4) and audio processing units 5 (6) (step S6).
In step S7, the dispatch module 1 sends the related information, such as the number of video frames, number of audio frames, video coding format and audio coding format, to the corresponding video processing units 3 and 4, audio processing units 5 and 6 and output processing unit 7.
In step S8, the video processing module 32 (42) of the video processing unit 3 (4) processes the GOP files (video files) according to the related information, such as the number of video frames and the video coding format, received by the monitor module 31 (41) and the video files, info files and other information received by the transport module 33 (43); at the same time the audio processing unit 5 (6) processes the GOA files (audio files) according to the related information, such as the number of audio frames and the audio coding format, received by the monitor module 51 (61) and the audio files, info files and other information received by the transport module 53 (63), and the processed audio/video files are sent to the output processing unit 7.
Fig. 9 is a flow chart of the video file processing in step S8. In the present embodiment the video file processing flow of video processing unit 3 is identical to that of video processing unit 4, so the video file processing flow is described only for video processing unit 3. As shown in Fig. 9, the video processing module 32 of the video processing unit 3 sends the start time of the video file processing process and the ID information of the received video files (for example the files F01-1.gop, F01-3.gop, F01-5.gop, etc. and F02-1.gop, F02-3.gop, etc.) to the monitor module 31 (step S811). The monitor module 31 can feed the start time and the video file IDs back to the dispatch module 1 so that the system can monitor the state of the processing process of the video processing unit 3. The video processing module 32 reads the system configuration file (tvmccd.cfg) in the specified directory, obtains configuration items such as <Source>, <Output> and <Send> for the processing process of the video processing unit 3 (step S812), and reads the related information (info files) and video files (GOP files) from the <Source> directory (step S813). If there is a total file under the <Source> directory, the video processing module 32 moves the total file to the <Send> directory. Then, the video processing module 32 selects, according to the video coding format information received by the monitor module 31, the corresponding decoder to decode the video files (step S814). If decoding succeeds (step S815: yes), the video processing module 32 carries out predetermined processing on the decoded video data (step S816). The predetermined processing can be, for example, customized adjustment of the video frame rate, addition of scrolling information to the video, or merging of different audio/video files. The video processing module 32 encodes the processed video data according to the parameter requirements for the processed files that the monitor module 31 received from the dispatch module 1, obtains the processed video files (GOP files), writes the video files to be output into the <Output> directory and, once the disk write has finished and been verified error-free, moves them to the <Send> directory, from which they are delivered by the transport module 33 to the output processing unit 7 (step S817).
If decoding fails (step S815: no) and the number n of decoding attempts of the video file does not exceed a predetermined threshold a (step S818: no), the decoding count n is incremented by 1 (step S819) and the flow returns to step S814 to decode again. If the number n of decoding attempts of the video file exceeds the predetermined threshold a (step S818: yes), the ID information of the video file whose decoding failed is sent to the monitor module 31 (step S8110). The monitor module 31 feeds the ID information of the failed video file back to the dispatch module 1. The dispatch module 1 can then choose to have the input processing unit 2 send the failed video file to the video processing unit 3 again, or to have the input processing unit 2 rejoin the failed video file to the remaining video files not yet sent by the transport module 23 of the input processing unit 2, renumber them sequentially, and distribute the failed video file to the corresponding video processing unit according to the aforementioned allocation rule. Alternatively, the dispatch module 1 can choose to simply ignore the decoding error and output the error information to the user.
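The retry logic of steps S814 to S8110 might be sketched as below: decoding is attempted up to a predetermined threshold of attempts, after which the fragment ID is handed back for reporting. The callable interface, exception type and exact retry count are illustrative assumptions:

```python
# Hypothetical sketch of the decode-retry loop in Fig. 9: retry on a
# decode error (S819 -> S814) until the attempt count reaches the
# threshold, then surface the failing fragment's ID (S8110).
def decode_with_retry(fragment_id: str, decode_fn, threshold: int):
    """Return (ok, result); ok is False once the attempt limit is reached."""
    attempts = 0
    while attempts < threshold:
        attempts += 1
        try:
            return True, decode_fn(fragment_id)
        except ValueError:
            continue           # decode error: try again
    return False, fragment_id  # report the erring fragment ID
```

On failure the dispatcher, as described above, can resend the fragment, requeue it under a new sequence number, or ignore the error and notify the user.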
Fig. 10 is a flow chart of the audio file processing in step S8. In the present embodiment the audio file processing flow of audio processing unit 5 is identical to that of audio processing unit 6, so the audio file processing flow is described only for audio processing unit 5. As shown in Fig. 10, the audio processing module 52 of the audio processing unit 5 sends the start time of the audio file processing process and the ID information of the received audio files (for example the files F01-1.goa, F01-3.goa, F01-5.goa, etc. and F02-1.goa, F02-3.goa, etc.) to the monitor module 51 (step S821). The monitor module 51 can feed the start time and the audio file IDs back to the dispatch module 1 so that the system can monitor the state of the processing process of the audio processing unit 5. The audio processing module 52 reads the system configuration file (tvmccd.cfg) in the specified directory, obtains configuration items such as <Source>, <Output> and <Send> for the processing process of the audio processing unit 5 (step S822), and reads the related information (info files) and audio files (GOA files) from the <Source> directory (step S823). If there is a total file under the <Source> directory, the audio processing module 52 moves the total file to the <Send> directory. Then, the audio processing module 52 selects, according to the audio coding format information that the monitor module 51 received from the dispatch module 1, the corresponding decoder to decode the audio files (step S824). If decoding succeeds (step S825: yes), the audio processing module 52 carries out predetermined processing on the decoded audio data (step S826). The predetermined processing can be, for example, automatic volume adjustment, customized volume adjustment or sound channel processing. The audio processing module 52 encodes the processed audio data according to the parameter requirements for the processed files that the monitor module 51 received from the dispatch module 1, obtains the processed audio files (GOA files), writes the audio files to be output into the <Output> directory and, once the disk write has finished and been verified error-free, moves them to the <Send> directory, from which they are delivered by the transport module 53 to the output processing unit 7 (step S827).
If decoding fails (step S825: no) and the number m of decoding attempts of the audio file does not exceed a predetermined threshold b (step S828: no), the decoding count m is incremented by 1 (step S829) and the flow returns to step S824 to decode again. If the number m of decoding attempts of the audio file exceeds the predetermined threshold b (step S828: yes), the ID information of the audio file whose decoding failed is sent to the monitor module 51 (step S8210). The monitor module 51 feeds the ID information of the failed audio file back to the dispatch module 1. The dispatch module 1 can then choose to have the input processing unit 2 send the failed audio file to the audio processing unit 5 again, or to have the input processing unit 2 rejoin the failed audio file to the remaining audio files not yet sent by the transport module 23 of the input processing unit 2, renumber them sequentially, and distribute the failed audio file to the corresponding audio processing unit according to the aforementioned allocation rule. Alternatively, the dispatch module 1 can choose to simply ignore the decoding error and output the error information to the user.
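Among the predetermined audio processing operations mentioned above is automatic or customized volume adjustment. A toy sketch of a volume gain applied to 16-bit PCM samples with clipping follows; the sample format and gain handling are illustrative assumptions, not the patent's specific method:

```python
# Hypothetical sketch of customized volume adjustment: scale each 16-bit
# PCM sample by a gain factor and clip to the representable range.
def adjust_volume(samples, gain: float, limit: int = 32767):
    """Scale 16-bit PCM samples by `gain`, clipping to [-limit-1, limit]."""
    out = []
    for s in samples:
        v = int(round(s * gain))
        v = max(-limit - 1, min(limit, v))  # avoid integer overflow/wrap
        out.append(v)
    return out
```

Because each GOA fragment is self-contained, such per-sample operations parallelize naturally across the audio processing units.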
Returning to Fig. 4, in step S9 the output processing unit 7 processes the video files from the video processing units 3 (4) and the audio files from the audio processing units 5 (6) according to the related information that the monitor module 71 received from the dispatch module 1, and obtains the processed audio-video files.
Figure 11 is the processing flowchart of the output processing unit 7 in step S9. As shown in Figure 11, the output processing module 72 of the output processing unit 7 sends the startup time of the output processing unit 7 and the name of its processing process to the monitoring module 71 of the output processing unit 7 (step S91). The output processing module 72 reads the configuration file (for example tvmccd.cfg) in the designated directory, obtains the configuration items of the process, such as <Source> (source item), <Output> (output item) and <Finished> (finished item) (step S92), and monitors the write_close events of the <Source> directory. When a write_close event arrives, it obtains the filename of the closed file and records the time of the latest write_close event (step S93). In this way, the newest file arriving in the <Source> directory can be monitored. If the manifest file sent by the transport module 23 is received among the files transferred to the transport module 73 by the transport modules 33, 43, 53 and/or 63, the module reads from the manifest file the total number of audio files and the total number of video files, i.e. the total counts of GOP files and GOA files; meanwhile the output data processing module 722 obtains the audio/video files (the GOP and GOA files) (step S94). According to the total counts of GOP and GOA files in the manifest file, the output data processing module 722 generates a reception mapping table of GOP and GOA files for each source file (step S95); the structure of the reception mapping table is shown in Figure 12. The received total counts of audio files and video files, together with the generated reception mapping table, may be stored in the memory module 723 of the output processing module 72. Figure 12 is a schematic diagram of the reception mapping table.
As can be seen from Figure 12, the reception mapping table records the file ID of the source file and all GOP and GOA files of that source file, and reserves a status flag for each GOP and GOA file. The initial status flag is 0, meaning that the GOP or GOA file has not been received. The output data processing module 722 scans the received GOP and GOA files and sets the status flag of each one to 1 in the reception mapping table, indicating that the GOP or GOA file has been received (step S96). For example, suppose the output data processing module 722 has received a manifest file with file ID F01, showing that file F01 has 9 GOP files F01-1.gop through F01-9.gop and 9 GOA files F01-1.goa through F01-9.goa. Accordingly, the output data processing module 722 creates a reception mapping table as shown in Figure 12 for the source file with file ID F01. It then scans the received GOP and GOA files for the parts belonging to this source file. According to the naming rule used by the input processing unit when splitting files, the output data processing module 722 scans for .gop or .goa files whose names begin with F01; from such a name it can determine that the file belongs to source file F01, F01-1.gop being the first GOP file of that source file. In this way, all received GOP and GOA files of source file F01 can be marked as received in the reception mapping table. Since the write_close events of the <Source> directory are monitored, the output data processing module 722 can detect newly received GOP and GOA files and mark them as received in the reception mapping table. The reception mapping table also records the counts of received GOP and GOA files; each time a GOP or GOA file is received, the corresponding file count is incremented by 1 (step S97).
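The reception mapping table of Figure 12 and the marking and counting of steps S96-S97 can be sketched roughly as follows. The class and field names are illustrative assumptions; a real implementation would persist the table in the memory module rather than keep it only as a dict:

```python
class ReceptionTable:
    """Per-source-file reception mapping table (Figure 12 sketch)."""

    def __init__(self, file_id, n_gop, n_goa):
        self.file_id = file_id
        # one status flag per fragment: 0 = not received, 1 = received
        self.flags = {f"{file_id}-{i}.gop": 0 for i in range(1, n_gop + 1)}
        self.flags.update({f"{file_id}-{i}.goa": 0 for i in range(1, n_goa + 1)})
        self.received = 0              # running count of received fragments (S97)
        self.expected = n_gop + n_goa  # totals from the manifest file

    def mark(self, filename):
        """Mark a fragment seen via a write_close event as received (S96)."""
        if filename in self.flags and self.flags[filename] == 0:
            self.flags[filename] = 1
            self.received += 1

    def complete(self):
        """Steps S98-S99: counts match the manifest and every flag is set."""
        return self.received == self.expected and all(self.flags.values())
```

A duplicate write_close event for an already-marked fragment leaves the counts unchanged, which keeps the count comparison of step S98 reliable.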
The output data processing module 722 compares the counts of GOP and GOA files recorded in the reception mapping table with the counts recorded in the manifest file (step S98). If the counts match (step S98: Yes), the output data processing module 722 scans the reception mapping table to check whether the status flag of every GOP and GOA file is "received" (step S99). If the received file counts differ from the manifest information, some GOP or GOA files have not yet been received.
If the output data processing module 722 determines that all GOP and GOA files of the source video file have been received (step S99: Yes), the package module 721 of the output processing module 72 performs encapsulation (step S910). If encapsulation succeeds (step S911: Yes), the package module 721 outputs the successfully encapsulated new audio-video file to the transport module 73 of the output processing unit 7 (step S912). If encapsulation fails (step S911: No), the output data processing module 722 checks whether the re-encapsulation attempt count M exceeds the predetermined value B (step S917). If M does not exceed B (step S917: No), M is incremented by 1 (step S918) and the package module 721 attempts encapsulation again. If M exceeds B (step S917: Yes), the IDs of the audio and video files that failed encapsulation are sent to the monitoring module 71 of the output processing unit 7 (step S919), and the monitoring module 71 feeds the encapsulation error information and those file IDs back to the scheduler module 1 (step S920). The scheduler module 1 then decides, as required, whether to process the source video file again in a new round or to abandon reprocessing it.
If the output data processing module 722 determines that not all audio and video files (GOP and GOA files) of the source video file have been received (step S96: No), it checks whether the time N spent waiting for the unreceived audio/video files exceeds the predetermined threshold A (step S913). If N does not exceed the threshold A (step S913: No), it continues to wait for the missing audio/video files. If N exceeds the threshold A (step S913: Yes), the output data processing module 722 checks the reception mapping table of the audio/video files, obtains the IDs of the unreceived audio files and/or video files, and sends them to the monitoring module 71 of the output processing unit 7 (step S914). For example, as shown in Figure 12, the output data processing module 722 finds by checking the reception mapping table that two files of Tvm.mp4, F01-4.goa and F01-6.gop, have still not arrived after the waiting time exceeds the threshold A, so it sends the file IDs of the missing files to the monitoring module 71 of the output processing unit 7. The waiting time may be counted from the moment the output processing module 72 reads the manifest file of the source file, or from the moment the reception mapping table of the source file is created. The monitoring module 71 feeds the reception error information and the IDs of the unreceived audio and/or video files back to the scheduler module 1 (step S915). According to this feedback, the scheduler module 1 can order the input processing unit 2 to redistribute the files to audio processing units and/or video processing units according to the aforementioned allocation rule (step S916), i.e. the file's sequence number modulo the number of audio or video processing units. Consequently, if the number of processing units has changed since the first distribution, the file may be assigned to a processing unit different from the one it went to the first time. For example, suppose the system has 4 video processing units VP1-VP4 when F01-7.gop is first distributed, and the file is assigned to VP3. The output processing unit does not receive the file and asks the input processing unit 2 to process it again. Meanwhile VP2 has left the system because of a fault, leaving 3 video processing units VP1, VP3 and VP4 in the system. The allocation rule of the input processing unit 2 therefore becomes file sequence number modulo 3: files whose result is 1 are assigned to VP1, files whose result is 2 to VP3, and files whose result is 0 to VP4. Under this rule, F01-7.gop is reassigned to VP1.
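The sequence-number-modulo-unit-count rule of step S916, including the change of target unit when the set of live units shrinks, can be sketched as below. The residue-to-unit mapping (result 1 to the first unit, ..., result 0 to the last) is inferred from the VP1/VP3/VP4 example above and is otherwise an assumption:

```python
def assign_unit(seq_no, units):
    """Map a fragment sequence number to one of the currently live units.

    Residue 1 -> units[0], residue 2 -> units[1], ..., residue 0 -> units[-1],
    i.e. seq_no modulo len(units), shifted so residue 1 is the first unit.
    """
    return units[(seq_no - 1) % len(units)]

# First distribution: 4 video units, F01-7.gop (sequence number 7) -> VP3.
first = assign_unit(7, ["VP1", "VP2", "VP3", "VP4"])
# After VP2 fails, only 3 units remain and the same file lands on VP1.
second = assign_unit(7, ["VP1", "VP3", "VP4"])
```

Because the unit count enters the modulus, a redistributed fragment can land on a different unit than it did originally, exactly as in the F01-7.gop example.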
Step S916 may also use the following scheme: each time the input processing unit 2 receives a GOP or GOA file that needs reprocessing, it assigns the file a new reprocessing sequence number and allocates it by that number. For example, suppose the input processing unit 2 successively receives the IDs of 3 GOP files needing reprocessing: F01-7.gop, F03-2.gop, F02-4.gop. Each time the distribution module 2222 of the input processing unit 2 receives a GOP file to be reprocessed, it increments the reprocessing sequence counter it maintains and assigns the new number to the newly received file. Thus F01-7.gop, F03-2.gop and F02-4.gop are assigned sequence numbers 1, 2 and 3. Suppose the system now has 2 video processing units VP1-VP2. The distribution module 2222 takes F01-7.gop's sequence number 1 modulo 2, obtaining 1, and assigns it to VP1; F03-2.gop's sequence number 2 modulo 2 gives 0, so it is assigned to VP2; F02-4.gop's sequence number 3 modulo 2 gives 1, so it is assigned to VP1. The distribution module 2222 then sends this allocation information to the transport module 23, which sends the files to the corresponding video processing units. GOA files needing reprocessing are allocated in the same way. GOP files and GOA files to be reprocessed are numbered separately, with sequence numbers independent of each other.
The input processing module, the video processing modules, the audio processing modules and the output processing module periodically report their working status, current tasks and completed tasks to their respective monitoring modules through message queues, and also write this content to their respective log files. Each monitoring module periodically sends the IP address of its processing unit, the system load, the network condition, the process status of the unit, and the above information received from the processing module of the unit to the scheduler module 1. The monitoring module 8 thus periodically shares this information with the scheduler module 1, and after processing by the client module 9 it is displayed to the user in a graphical interface.
The scheduler module 1 maintains the configuration file using the information sent by each monitoring module. In the configuration file each physical machine is assigned a status attribute; when a machine temporarily leaves the system because of power loss or disconnection, its status attribute is set to unavailable. When a new physical machine joins the system, the user instructs the scheduler module 1 to add the machine's information to the configuration file. After the information in the configuration file changes, the scheduler module 1 assigns it a new version number and immediately sends it to each monitoring module.
If the scheduler module 1 loses power or stops responding during system operation, the client module 9 can no longer obtain data from the monitoring module 8; after this situation has lasted a certain time, the client module 9 prompts the user to troubleshoot, for example with a dialog box or an alarm sound. After the fault is cleared, the scheduler module 1 re-reads the configuration file and establishes connections with each monitoring module, and each monitoring module retransmits, according to its own log, the information that was not successfully delivered to the scheduler module 1.
If a monitoring module stops working, the scheduler module 1 concludes that the module has failed after receiving no information from it for a certain time. The cause may be a physical fault such as power loss or disconnection, or the module's process may have stopped responding. The scheduler module 1 checks whether the connection to the monitoring module's physical machine is normal (for example with the ping command); if it is abnormal, the fault is physical and the user is notified through the client module 9. If the connection is normal, the module's process is unresponsive; the scheduler retries after a certain time, and if communication still fails after a certain number of retries, the user is notified through the client module 9.
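The scheduler's two-stage diagnosis (is the host reachable? if so, does the module respond after a few retries?) can be sketched as below. The probes are injected as callables because the patent only names the ping command for the reachability check; all other names are assumptions:

```python
def diagnose(host_reachable, module_responds, retries=3):
    """Classify a silent monitoring module: "physical", "process" or "ok".

    host_reachable: callable, e.g. wraps a ping to the module's machine.
    module_responds: callable, one communication attempt with the module.
    """
    if not host_reachable():
        return "physical"          # power loss or disconnection
    for _ in range(retries):       # connection normal: retry before concluding
        if module_responds():
            return "ok"
    return "process"               # host up but the module process is hung
```

On "physical" or "process" the caller would notify the user via the client module 9, matching the flow above.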
If the physical machine hosting a processing unit loses power, then after power is restored the processing unit's monitoring module reinitializes and re-establishes connections with the other modules. If the powered-down machine hosts the input processing unit, a video processing unit or an audio processing unit, files whose processing was completed have already been moved from the <Source> directory to directories such as <Processed> or <failed>; therefore, after recovery, processing simply resumes by reading files from the <Source> directory. If the powered-down machine hosts the output processing unit, then after initialization the output processing module scans the <Source> directory, reads the manifest files under the directory, builds the reception mapping tables for the audio/video files, and fills in the reception mapping tables according to the GOP and GOA files under <Source>. While scanning, it also monitors newly received files; after completing these operations for all files under the <Source> directory, it continues the processing of steps S93-S917. If the physical machine hosting a processing unit loses its network connection, then when the connection is restored the scheduler module 1 sends it the latest configuration file, the unit's monitoring module queries its log, and the unit's transport module is instructed to transfer the files under the <Send> directory to the designated processing units.
If the process of a monitoring module stops responding and is restarted, it first establishes communication with the other modules, then queries its log for the time of the last communication with each module and asks the scheduler module 1 and the corresponding processing and transport modules to retransmit the information generated after that time. A monitoring module attaches the version number of its configuration file whenever it sends information to the scheduler module 1, so if the scheduler module 1 finds that the configuration version in the information is outdated, it resends the new configuration file. The monitoring module performs normal operations on the retransmitted information it receives.
As described above, the input processing module, the video processing modules, the audio processing modules and the output processing module periodically report their working status to their respective monitoring modules through message queues. Therefore, when the process of some module stops responding for more than a certain time, the corresponding monitoring module detects the anomaly; if the process does not recover within a certain time, the monitoring module either prompts the user to restart the module's process or restarts it itself. After restarting, these modules read files from their respective working directories and resume processing.
In the present embodiment, the physical machines are connected by a LAN, and the address of each module is the IP address of its host machine plus a port number. However, the physical machines may also be connected in other ways, such as a wide area network or a high-speed bus, as long as each module can be given a corresponding address representation.
In addition, when the system of the present invention is used to process a large number of small video files, the step of splitting the audio/video files may be omitted, each audio processing module and video processing module handling the audio or video portion of one source file alone.
In the present embodiment, the distributed system is used to process audio-video files, but after the functions of the corresponding modules are changed, the system can also be used to process other kinds of data, provided that this split-process-merge workflow does not affect the correctness of the processed data.
Claims (37)
1. A distributed audio-video file processing system, comprising:
an input processing unit for receiving a source video file, processing the source video file to obtain video data and audio data, dividing the video data and the audio data in order into video data fragments and audio data fragments respectively, and distributing the video data fragments and audio data fragments obtained by the division to corresponding video data processing units/audio data processing units for processing according to a certain allocation rule;
several video data processing units, each for processing the divided video data fragments;
several audio data processing units, each for processing the divided audio data fragments;
an output processing unit for processing and outputting the processed video data fragments and audio data fragments;
a scheduling unit for coordinating the work of the input processing unit, the several video data processing units, the several audio data processing units and the output processing unit; wherein
the input processing unit sends the number of video data fragments and the number of audio data fragments into which the source file is divided to the output processing unit;
the input processing unit inserts into each divided video data fragment and audio data fragment the source file information of the fragment, its sequence number among all video or audio data fragments of the source file, and its file type information;
the output processing unit establishes a reception mapping table of the source file according to the number of divided video data fragments and the number of divided audio data fragments, marks each received processed data fragment as received in the reception mapping table according to the source file information, the sequence number among all video or audio data fragments of the source file and the file type information inserted by the input processing unit into the divided video data fragments and audio data fragments, and counts the number of received video data fragments and the number of received audio data fragments;
the reception mapping table further includes the number of received video data fragments and the number of received audio data fragments; only after judging that the numbers of video data fragments and audio data fragments in the reception mapping table are consistent with the numbers of video data fragments and audio data fragments sent by the input processing unit does the output processing unit judge whether all video data fragments and audio data fragments in the reception mapping table are marked;
wherein, when a new audio data processing unit and/or video data processing unit joins, the user instructs the scheduling unit to add the unit's information to the configuration file, and the scheduling unit updates the version number of the configuration file and sends the updated configuration file to each unit of the system.
2. The audio-video file processing system according to claim 1, wherein:
after dividing the video data and the audio data in order into video data fragments and audio data fragments respectively, the input processing unit takes the sequence number of each video data fragment and audio data fragment modulo the number of video data processing units and audio data processing units respectively, and according to the results of the modulo operation distributes each video data fragment and audio data fragment to the corresponding video data processing unit and audio data processing unit for processing.
3. The audio-video file processing system according to claim 1, wherein:
the input processing unit has a first monitoring module, a first processing module and a first transport module;
each video data processing unit has a second monitoring module, a second processing module and a second transport module;
each audio data processing unit has a third monitoring module, a third processing module and a third transport module;
the output processing unit has a fourth monitoring module, a fourth processing module and a fourth transport module; wherein
the first, second, third and fourth monitoring modules are each communicatively connected with the scheduling unit, receive relevant instructions from the scheduling unit, and each report the running status of their processing unit to the scheduling unit; the first transport module is communicatively connected with each second transport module and each third transport module, and sends the video data fragments and audio data fragments to be processed to the corresponding second transport modules and third transport modules respectively; the fourth transport module is communicatively connected with each second transport module and each third transport module, and receives the processed video data fragments and audio data fragments.
4. The audio-video file processing system according to claim 3, wherein:
the input processing unit decapsulates the received source file to obtain a video sequence and an audio sequence, and divides the video sequence and the audio sequence respectively.
5. The audio-video file processing system according to claim 1, wherein: after judging that all processed video data fragments and audio data fragments have been received from the video data processing units and audio data processing units, the output processing unit merges the received video data fragments and audio data fragments and encapsulates them in a predetermined format.
6. The audio-video file processing system according to claim 3, wherein:
the scheduling unit and the first, second, third and fourth monitoring modules communicate using address information obtained from a configuration file.
7. The audio-video file processing system according to claim 6, wherein:
the first, second, third and fourth monitoring modules each periodically send the running status information of their processing unit to the scheduling unit;
the scheduling unit maintains the system configuration file according to the running status information, and sends the updated configuration file to the first, second, third and fourth monitoring modules respectively.
8. The audio-video file processing system according to claim 6, wherein:
the content of the configuration file includes the number, physical addresses, running status and working directories of the video data processing units, and the number, physical addresses, running status and working directories of the audio data processing units.
9. The audio-video file processing system according to claim 6, wherein:
when the processing of one of the video data processing units and audio data processing units makes an error, its corresponding second or third monitoring module reports the error to the scheduling unit;
the scheduling unit decides to re-execute the erroneous processing or to ignore the error.
10. The audio-video file processing system according to claim 9, wherein:
the input processing unit rejoins the erroneous video data fragment and/or audio data fragment to the remaining unsent video data fragments and/or audio data fragments and renumbers them, then takes the sequence numbers obtained by this renumbering modulo the number of video data processing units and/or audio data processing units, and according to the results of the modulo operation redistributes the erroneous video data fragment and/or audio data fragment to the corresponding video data processing unit and/or audio data processing unit for processing.
11. The audio-video file processing system according to claim 1, wherein:
the input processing unit obtains the video frame count, the audio frame count and the audio/video coding format after decapsulating the source video file, and sends the video frame count, audio frame count and audio/video coding format to the corresponding video data processing units and audio data processing units.
12. The audio-video file processing system according to claim 1, wherein:
the input processing unit inserts the source file information, the sequence number among all video or audio data fragments of the source file, and the file type information into the filenames of the divided video data fragments and audio data fragments.
13. The audio-video file processing system according to claim 1, wherein:
when the output processing unit judges that all video data fragments and audio data fragments in the reception mapping table are marked as received, it integrates the processed video data fragments and the processed audio data fragments to obtain the processed audio-video file;
when the output processing unit judges that, after a certain time, some video data fragments and audio data fragments in the reception mapping table are still marked as not received, it sends the missing-fragment information of the corresponding divided video data fragments and/or divided audio data fragments to the scheduling unit.
14. The audio-video file processing system according to claim 13, wherein:
when the scheduling unit receives the missing-fragment information, it instructs the input processing unit to take the sequence numbers of the missing video data fragments and/or audio data fragments modulo the current number of video data processing units and/or audio data processing units, redistribute each missing video data fragment and/or audio data fragment to the corresponding video data processing unit and/or audio data processing unit for processing according to the results of the modulo operation, and send the reprocessed video data fragments and/or audio data fragments to the output processing unit.
15. The audio-video file processing system according to claim 13, wherein:
when the scheduling unit receives the missing-fragment information of video data fragments, it instructs the input processing unit to renumber all the missing video data fragments in order, take the numbers modulo the current number of video data processing units, redistribute each missing video data fragment to the corresponding video data processing unit for processing according to the results of the modulo operation, and send the reprocessed video data fragments to the output processing unit;
when the scheduling unit receives the missing-fragment information of audio data fragments, it instructs the input processing unit to renumber all the missing audio data fragments in order, take the numbers modulo the current number of audio data processing units, redistribute each missing audio data fragment to the corresponding audio data processing unit for processing according to the results of the modulo operation, and send the reprocessed audio data fragments to the output processing unit.
16. a kind of distributed tones video file processing method, including:
Input processing step, source video file is received by input processing unit, and processing acquisition is carried out to the source video file
Video data and voice data, and the video data and the voice data are divided into video data in order respectively respectively
After fragment and audio data fragment, and video data segment and voice data piece according to obtained by certain allocation rule by segmentation
Section distributes to corresponding video data processing element and voice data processing unit is handled;
Video data process step, is carried out using several video data processing elements to video data segment after singulated respectively
Processing;
Voice data process step, is carried out using several voice data processing units to audio data fragment after singulated respectively
Processing;
Export process step, by export processing unit to the video data segment and audio data fragment after processing at
Manage and export;
Scheduling steps, the input processing unit, several video data processing elements, the number are coordinated by scheduling unit
Individual voice data processing unit and the work for exporting processing unit;
In the input processing step, audio by the quantity of the video data segment of source file after singulated and after singulated
The segmentation information of the quantity of data slot is sent to the output processing unit;
The input processing unit inserts the source file of the fragment into the video data segment and audio data fragment after segmentation
Information, serial number and file type information in all videos or audio data fragment of the source file;
the output processing unit establishes a reception mapping table for the source file according to the number of video data fragments and the number of audio data fragments after division; using the source file information, the serial number among all video or audio data fragments of the source file, and the file type information inserted by the input processing unit, it marks each received processed data fragment as received in the reception mapping table, and counts the number of video data fragments and the number of audio data fragments already received;
the reception mapping table also records the number of video data fragments and the number of audio data fragments already received; only after the output processing unit determines that the numbers of video data fragments and audio data fragments recorded in the reception mapping table are consistent with the numbers sent by the input processing unit does it check whether all video data fragments and audio data fragments in the reception mapping table are marked as received;
wherein, when a new audio processing unit and/or video processing unit joins the system, the user instructs the scheduling unit to add the information of the unit to the configuration file; the scheduling unit updates the version number of the configuration file and sends the updated configuration file to each unit of the system.
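The reception mapping table described in claim 16 can be sketched as follows. This is a minimal in-memory illustration; the class name `ReceptionTable` and its methods are hypothetical and not named in the patent, which leaves the table's concrete data structure unspecified.

```python
class ReceptionTable:
    """Tracks which divided fragments of one source file have been received."""

    def __init__(self, num_video, num_audio):
        # Expected fragment counts come from the input unit's segmentation info.
        self.expected = {"video": num_video, "audio": num_audio}
        self.received = {"video": set(), "audio": set()}

    def mark_received(self, kind, serial):
        # Mark one processed fragment (identified by serial number) as received.
        self.received[kind].add(serial)

    def counts(self):
        # Running totals of received video and audio fragments.
        return {k: len(v) for k, v in self.received.items()}

    def is_complete(self):
        # Check whether every expected fragment is marked as received.
        return all(len(self.received[k]) == self.expected[k] for k in self.expected)


table = ReceptionTable(num_video=3, num_audio=2)
for serial in range(3):
    table.mark_received("video", serial)
for serial in range(2):
    table.mark_received("audio", serial)
print(table.is_complete())  # True once every expected fragment is marked
```

In this sketch the completeness check is only meaningful once the expected totals from the segmentation information have arrived, mirroring the claim's requirement that the counts be verified against the input unit's numbers before the per-fragment marks are examined.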
17. The audio-video file processing method according to claim 16, characterised in that the input processing step comprises:
after the video data and the audio data are divided in order into video data fragments and audio data fragments respectively, performing a modulo operation on the serial number of each video data fragment and audio data fragment by the number of video data processing units and the number of audio data processing units respectively, and distributing, according to the result of the modulo operation, each video data fragment and audio data fragment to the corresponding video data processing unit and audio data processing unit for processing.
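The modulo assignment of claim 17 can be sketched as follows; `assign` is a hypothetical helper name, not from the patent.

```python
def assign(fragment_serials, num_units):
    """Map each fragment serial number to a processing unit index via modulo."""
    return {serial: serial % num_units for serial in fragment_serials}


# Example: 7 video fragments spread over 3 video data processing units.
mapping = assign(range(7), 3)
print(mapping)  # {0: 0, 1: 1, 2: 2, 3: 0, 4: 1, 5: 2, 6: 0}
```

The same function serves both streams: the serial numbers are taken modulo the video-unit count for video fragments and modulo the audio-unit count for audio fragments, giving a simple round-robin spread.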
18. The audio-video file processing method according to claim 16, characterised in that the input processing step comprises:
a decapsulation step of decapsulating the received source file to obtain a video sequence and an audio sequence; and
a division step of dividing the video sequence and the audio sequence, respectively.
19. The audio-video file processing method according to claim 16, characterised in that: in the output processing step, after it is determined that all processed video data fragments and audio data fragments have been received from each video data processing unit and each audio data processing unit, the received video data fragments and audio data fragments are merged and encapsulated according to a predetermined format.
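The merge-in-order step of claim 19 can be sketched as below. This assumes each processed fragment carries its serial number; the final container encapsulation is format-specific and elided, and `merge_fragments` is a hypothetical helper.

```python
def merge_fragments(fragments):
    """Concatenate processed fragments into one stream, ordered by serial number."""
    ordered = sorted(fragments, key=lambda f: f["serial"])
    return b"".join(f["data"] for f in ordered)


# Fragments may arrive out of order from different processing units.
parts = [
    {"serial": 2, "data": b"C"},
    {"serial": 0, "data": b"A"},
    {"serial": 1, "data": b"B"},
]
print(merge_fragments(parts))  # b'ABC'
```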
20. The audio-video file processing method according to any one of claims 16 to 19, characterised in that:
in the scheduling step, communication is carried out using address information obtained from a system configuration file.
21. The audio-video file processing method according to claim 20, characterised in that:
in the scheduling step, each processing unit periodically sends its running-state information to the scheduling unit; the scheduling unit maintains the system configuration file according to the running-state information and sends the updated system configuration file.
22. The audio-video file processing method according to claim 20, characterised in that:
the contents of the system configuration file include the number, physical addresses, running states, and working directories of the video data processing units, and the number, physical addresses, running states, and working directories of the audio data processing units.
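The configuration file of claim 22, together with the version bump on unit addition required by claim 16, might be sketched as follows. The JSON-style layout, field names, and the `add_unit` helper are all assumptions; the patent specifies the fields conceptually but not a concrete format.

```python
import json

# Hypothetical layout: the patent names the contents (count, physical address,
# running state, working directory) but does not prescribe a file format.
config = {
    "version": 5,
    "video_units": [
        {"address": "192.168.1.11", "status": "running", "workdir": "/data/video/u1"},
        {"address": "192.168.1.12", "status": "running", "workdir": "/data/video/u2"},
    ],
    "audio_units": [
        {"address": "192.168.1.21", "status": "running", "workdir": "/data/audio/u1"},
    ],
}


def add_unit(cfg, kind, unit):
    """Add a new processing unit's info and bump the configuration version number."""
    cfg[kind + "_units"].append(unit)
    cfg["version"] += 1
    return json.dumps(cfg)  # serialized form to send to each unit of the system


add_unit(config, "audio",
         {"address": "192.168.1.22", "status": "running", "workdir": "/data/audio/u2"})
print(config["version"], len(config["audio_units"]))  # 6 2
```

The unit counts required by claim 22 are implicit here as the lengths of the per-kind lists.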
23. The audio-video file processing method according to claim 20, characterised in that:
in the video data processing step, when the processing of one of the video data processing units fails, or, in the audio data processing step, when the processing of one of the audio data processing units fails, the error is reported to the scheduling unit;
the scheduling unit decides whether to re-execute the failed processing or to ignore the error.
24. The audio-video file processing method according to claim 23, characterised in that:
the input processing unit re-adds the failed video data fragments and/or audio data fragments to the remaining unsent video data fragments and/or audio data fragments and serializes them again; the serial numbers obtained by the re-serialization are then subjected to a modulo operation by the number of video data processing units and/or audio data processing units, and, according to the result of the modulo operation, the failed video data fragments and/or audio data fragments are redistributed to the corresponding video data processing units and/or audio data processing units for processing.
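The re-serialization of claim 24 can be sketched as below. The `redistribute` helper and the dict representation of a fragment are hypothetical; the point is that failed fragments rejoin the unsent pool, get fresh consecutive serial numbers, and are then assigned by the same modulo rule.

```python
def redistribute(failed, unsent, num_units):
    """Re-add failed fragments to the unsent pool, renumber, then assign by modulo."""
    pool = unsent + failed
    for new_serial, frag in enumerate(pool):
        frag["serial"] = new_serial             # serialize again, in order
        frag["unit"] = new_serial % num_units   # modulo picks the target unit
    return pool


unsent = [{"name": "v5"}, {"name": "v6"}]
failed = [{"name": "v2"}]
pool = redistribute(failed, unsent, 2)
print([(f["name"], f["unit"]) for f in pool])  # [('v5', 0), ('v6', 1), ('v2', 0)]
```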
25. The audio-video file processing method according to claim 16, characterised in that:
in the input processing step, the video frame count, the audio frame count, and the audio-video coding format of the decapsulated source video file are obtained, and the video frame count, the audio frame count, and the audio-video coding format are sent to the corresponding video data processing units and audio data processing units.
26. The audio-video file processing method according to claim 16, characterised in that:
when the output processing unit determines that all video data fragments and audio data fragments in the reception mapping table are marked as received, it integrates the processed video data fragments and the processed audio data fragments to obtain the processed audio-video file;
when, after a certain time, the output processing unit determines that the reception mapping table still contains video data fragments and/or audio data fragments marked as not received, it sends missing-fragment information about the corresponding divided video data fragments and/or divided audio data fragments to the scheduling unit.
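The missing-fragment detection of claim 26 amounts to a set difference between the expected serial numbers and those marked as received. A minimal self-contained sketch (the `missing_fragments` helper is hypothetical, and the timeout that triggers the check is elided):

```python
def missing_fragments(expected, received):
    """Serial numbers expected but not yet marked as received, per stream type."""
    return {
        kind: sorted(set(range(expected[kind])) - received[kind])
        for kind in expected
    }


# Expected counts come from the segmentation information; received serials
# are the marks accumulated in the reception mapping table.
expected = {"video": 4, "audio": 3}
received = {"video": {0, 1, 3}, "audio": {0, 1, 2}}
print(missing_fragments(expected, received))  # {'video': [2], 'audio': []}
```

The non-empty entries of this result are what would be reported to the scheduling unit as missing-fragment information.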
27. The audio-video file processing method according to claim 26, characterised in that:
when the scheduling unit receives the missing-fragment information, it instructs the input processing unit to perform a modulo operation on the serial numbers of the missing video data fragments and/or audio data fragments by the current number of video data processing units and/or audio data processing units, to redistribute, according to the result of the modulo operation, each missing video data fragment and/or audio data fragment to the corresponding video data processing unit and/or audio data processing unit for processing, and to send the reprocessed video data fragments and/or audio data fragments to the output processing unit.
28. The audio-video file processing method according to claim 26, characterised in that:
when the scheduling unit receives missing-fragment information for video data fragments, it instructs the input processing unit to renumber all missing video data fragments in order, to perform a modulo operation on the new numbers by the current number of video data processing units, to redistribute, according to the result of the modulo operation, each missing video data fragment to the corresponding video data processing unit for processing, and to send the reprocessed video data fragments to the output processing unit;
when the scheduling unit receives missing-fragment information for audio data fragments, it instructs the input processing unit to renumber all missing audio data fragments in order, to perform a modulo operation on the new numbers by the current number of audio data processing units, to redistribute, according to the result of the modulo operation, each missing audio data fragment to the corresponding audio data processing unit for processing, and to send the reprocessed audio data fragments to the output processing unit.
29. A distributed audio-video file processing apparatus, comprising:
a plurality of first servers, each for processing video data fragments;
a plurality of second servers, each for processing audio data fragments;
a third server for processing and outputting the processed video data fragments and audio data fragments; and
a fourth server for receiving a source video file, processing the source video file to obtain video data and audio data, dividing the video data and the audio data in order into video data fragments and audio data fragments respectively, distributing, according to a predetermined allocation rule, the divided video data fragments and audio data fragments to the corresponding first servers and second servers for processing, and coordinating the operation of the plurality of first servers, the plurality of second servers, and the third server; wherein,
the fourth server sends, to the third server, segmentation information comprising the number of video data fragments and the number of audio data fragments into which the source file is divided;
the fourth server inserts the source file information, the serial number among all video or audio data fragments of the source file, and the file type information into the file names of the divided video data fragments and audio data fragments;
the third server establishes a reception mapping table for the source file according to the number of video data fragments and the number of audio data fragments after division; using the source file information, the serial number among all video or audio data fragments of the source file, and the file type information inserted by the fourth server, it marks each received processed data fragment as received in the reception mapping table, and counts the number of video data fragments and the number of audio data fragments already received;
the reception mapping table also records the number of video data fragments and the number of audio data fragments already received; only after the third server determines that the numbers of video data fragments and audio data fragments recorded in the reception mapping table are consistent with the numbers sent by the fourth server does it check whether all video data fragments and audio data fragments in the reception mapping table are marked as received;
wherein, when a new audio processing unit and/or video processing unit joins the system, the user instructs the scheduling unit to add the information of the unit to the configuration file; the scheduling unit updates the version number of the configuration file and sends the updated configuration file to each unit of the system.
30. The apparatus according to claim 29, characterised in that:
after dividing the video data and the audio data in order into video data fragments and audio data fragments respectively, the fourth server performs a modulo operation on the serial number of each video data fragment and audio data fragment by the number of first servers and the number of second servers respectively, and distributes, according to the result of the modulo operation, each video data fragment and audio data fragment to the corresponding first server and second server for processing.
31. The apparatus according to claim 29, characterised in that:
each first server runs only one video data processing process; and
each second server runs only one audio data processing process.
32. The apparatus according to claim 29, characterised in that:
the fourth server decapsulates the received source file to obtain a video sequence and an audio sequence, and divides the video sequence and the audio sequence, respectively.
33. The apparatus according to claim 29, characterised in that: after determining that all processed video data fragments and audio data fragments have been received from each first server and each second server, the third server merges the received video data fragments and audio data fragments and encapsulates them according to a predetermined format.
34. The apparatus according to claim 29, characterised in that:
the fourth server obtains the video frame count, the audio frame count, and the audio-video coding format of the decapsulated source video file, and sends the video frame count, the audio frame count, and the audio-video coding format to the corresponding first servers and second servers.
35. The apparatus according to claim 29, characterised in that:
when the third server determines that all video data fragments and audio data fragments in the reception mapping table are marked as received, it integrates the processed video data fragments and the processed audio data fragments to obtain the processed audio-video file;
when, after a certain time, the third server determines that the reception mapping table still contains video data fragments and/or audio data fragments marked as not received, it sends missing-fragment information about the corresponding divided video data fragments and/or divided audio data fragments to the fourth server.
36. The apparatus according to claim 35, characterised in that:
when the fourth server receives the missing-fragment information, it performs a modulo operation on the serial numbers of the missing video data fragments and/or audio data fragments by the current number of first servers and/or second servers, redistributes, according to the result of the modulo operation, each missing video data fragment and/or audio data fragment to the corresponding first server and/or second server for processing, and sends the reprocessed video data fragments and/or audio data fragments to the third server.
37. The apparatus according to claim 35, characterised in that:
when the fourth server receives missing-fragment information for video data fragments, it renumbers all missing video data fragments in order, performs a modulo operation on the new numbers by the current number of first servers, redistributes, according to the result of the modulo operation, each missing video data fragment to the corresponding first server for processing, and sends the reprocessed video data fragments to the third server;
when the fourth server receives missing-fragment information for audio data fragments, it renumbers all missing audio data fragments in order, performs a modulo operation on the new numbers by the current number of second servers, redistributes, according to the result of the modulo operation, each missing audio data fragment to the corresponding second server for processing, and sends the reprocessed audio data fragments to the third server.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310558626.XA CN103605710B (en) | 2013-11-12 | 2013-11-12 | A distributed audio-video processing apparatus and processing method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310558626.XA CN103605710B (en) | 2013-11-12 | 2013-11-12 | A distributed audio-video processing apparatus and processing method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103605710A CN103605710A (en) | 2014-02-26 |
CN103605710B true CN103605710B (en) | 2017-10-03 |
Family
ID=50123933
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310558626.XA Expired - Fee Related CN103605710B (en) | 2013-11-12 | 2013-11-12 | A distributed audio-video processing apparatus and processing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103605710B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103838878B (en) * | 2014-03-27 | 2017-03-01 | 无锡天脉聚源传媒科技有限公司 | A kind of distributed tones processing system for video and processing method |
CN105224291B (en) * | 2015-09-29 | 2017-12-08 | 北京奇艺世纪科技有限公司 | A kind of data processing method and device |
CN105354242A (en) * | 2015-10-15 | 2016-02-24 | 北京航空航天大学 | Distributed data processing method and device |
CN105407360A (en) * | 2015-10-29 | 2016-03-16 | 无锡天脉聚源传媒科技有限公司 | Data processing method and device |
CN105354058A (en) * | 2015-10-29 | 2016-02-24 | 无锡天脉聚源传媒科技有限公司 | File updating method and apparatus |
CN105357229B (en) * | 2015-12-22 | 2019-12-13 | 深圳市科漫达智能管理科技有限公司 | Video processing method and device |
CN110635864A (en) * | 2019-10-09 | 2019-12-31 | 中国联合网络通信集团有限公司 | Parameter decoding method, device, equipment and computer readable storage medium |
CN111372011B (en) * | 2020-04-13 | 2022-07-22 | 杭州友勤信息技术有限公司 | KVM high definition video decollator |
CN114900718A (en) * | 2022-07-12 | 2022-08-12 | 深圳市华曦达科技股份有限公司 | Multi-region perception automatic multi-subtitle realization method, device and system |
CN116108492B (en) * | 2023-04-07 | 2023-06-30 | 安羚科技(杭州)有限公司 | Laterally expandable data leakage prevention system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101098483A (en) * | 2007-07-19 | 2008-01-02 | 上海交通大学 | Video cluster transcoding system using the group-of-pictures structure as the parallel processing unit |
CN101098260A (en) * | 2006-06-29 | 2008-01-02 | 国际商业机器公司 | Distributed device monitoring and management method, device and system |
CN101141627A (en) * | 2007-10-23 | 2008-03-12 | 深圳市迅雷网络技术有限公司 | Storage system and method for streaming media files |
CN102739799A (en) * | 2012-07-04 | 2012-10-17 | 合一网络技术(北京)有限公司 | Distributed communication method in a distributed application |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8296358B2 (en) * | 2009-05-14 | 2012-10-23 | Hewlett-Packard Development Company, L.P. | Method and system for journaling data updates in a distributed file system |
- 2013-11-12: CN CN201310558626.XA patent/CN103605710B/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
CN103605710A (en) | 2014-02-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103605710B (en) | A distributed audio-video processing apparatus and processing method | |
CN103605709B (en) | A distributed audio-video processing apparatus and processing method | |
CN103838878B (en) | A distributed audio-video processing system and processing method | |
US9473378B1 (en) | Method for transmitting packet-based media data having header in which overhead is minimized | |
CN102265535A (en) | Method and apparatus for streaming multiple scalable coded video content to client devices at different encoding rates | |
EP2675129B1 (en) | Streaming media service processing method | |
CN102882829A (en) | Transcoding method and system | |
CN104092719B (en) | Document transmission method, device and distributed cluster file system | |
CN100559876C (en) | Information-transmission apparatus and information transferring method | |
CN104782133A (en) | Method and apparatus for media data delivery control | |
CN102591964A (en) | Implementation method and device for data reading-writing splitting system | |
CN110601903B (en) | Data processing method and device based on message queue middleware | |
CN101447856A (en) | High-capacity file transmission method | |
CN102377685A (en) | Subscription message sending system and subscription message sending method | |
CN104580158A (en) | Distributed platform file and content distribution method and distributed platform file and content distribution system | |
CN104158909B (en) | A distributed media processing method and system | |
CN107070535A (en) | A method for providing a global integrated satellite broadcast service | |
CN109756552A (en) | A kind of passenger information system message distributing method and device and passenger information system | |
CN108306852A (en) | A kind of message-oriented middleware system and method based on simple binary coding | |
CN103905843B (en) | Distributed audio/video processing device and method for continuous I-frame circumvention | |
CN105635802A (en) | Transmission method of digital media data and device | |
CN102970251A (en) | Networking method and networking device | |
CN105763375A (en) | Data packet transmission method, receiving method and microwave station | |
CN103634229B (en) | A kind of Inter-chip communication method and control device | |
CN102510398B (en) | Request concurrent processing method and device, and server |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
PE01 | Entry into force of the registration of the contract for pledge of patent right |
Denomination of invention: A distributed audio and video processing device and processing method Effective date of registration: 20210104 Granted publication date: 20171003 Pledgee: Inner Mongolia Huipu Energy Co.,Ltd. Pledgor: TVMINING (BEIJING) MEDIA TECHNOLOGY Co.,Ltd. Registration number: Y2020990001527 |
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20171003 Termination date: 20211112 |