CN103605710A - Distributed audio and video processing device and distributed audio and video processing method - Google Patents


Info

Publication number
CN103605710A
CN103605710A (application CN201310558626.XA)
Authority
CN
China
Prior art keywords
data fragment
video
voice data
processing unit
video data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310558626.XA
Other languages
Chinese (zh)
Other versions
CN103605710B (en)
Inventor
Wu Yue (武悦)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TVMining Beijing Media Technology Co Ltd
Original Assignee
TVMining Beijing Media Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TVMining Beijing Media Technology Co Ltd
Priority to CN201310558626.XA
Publication of CN103605710A
Application granted
Publication of CN103605710B
Expired - Fee Related

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60: Information retrieval; Database structures therefor; File system structures therefor of audio data

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides a distributed audio and video file processing system comprising an input processing unit, multiple video data processing units, multiple audio data processing units, an output processing unit and a dispatching unit. The input processing unit receives and processes source video files to obtain audio and video data; the audio and video data are segmented into audio and video data fragments, which are dispatched to the corresponding audio and video data processing units for processing according to a certain dispatching rule. The video data processing units each process the segmented video data fragments; the audio data processing units each process the segmented audio data fragments; the output processing unit processes and outputs the processed audio and video data fragments; and the dispatching unit coordinates the operation of the input processing unit, the audio and video data processing units and the output processing unit. The invention further provides a distributed audio and video file processing method and a distributed audio and video file processing device.

Description

Distributed audio/video processing apparatus and processing method
Technical field
The present invention relates to an apparatus and method for processing data using a computer or data processing device, and in particular to an apparatus and method for processing audio/video files using distributed computers or data processing devices.
Technical background
With the development of networks and cultural industries, audio and video resources have become abundant, and the demand for processing audio/video files has grown rapidly.
The general workflow of audio/video file processing is as follows: first, the audio/video file to be processed is decapsulated (demultiplexed) into a video frame sequence and an audio frame sequence; the video frame sequence and the audio frame sequence are then decoded into RAW-format and PCM-format data respectively; the RAW-format and PCM-format data are processed; the RAW-format and PCM-format data are then re-encoded into audio and video frame sequences of the required format; finally, the audio and video frame sequences are encapsulated into the required file format.
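The five-stage workflow above can be sketched as a toy pipeline (demux, decode, process, encode, mux). All function bodies below are illustrative stand-ins operating on integers; the patent does not specify concrete codecs or container formats.

```python
# Toy sketch of the pipeline: demux -> decode -> process -> encode -> mux.
# Every stage is a placeholder, not a real codec.

def demux(container):
    """Decapsulate the container into video and audio frame sequences."""
    return container["video"], container["audio"]

def decode(frames):
    """Stand-in for decoding to RAW (video) or PCM (audio) data."""
    return [f * 2 for f in frames]

def process(samples):
    """Stand-in for the actual per-frame processing work."""
    return [s + 1 for s in samples]

def encode(samples):
    """Stand-in for re-encoding into the required target format."""
    return [s // 2 for s in samples]

def mux(video, audio):
    """Stand-in for encapsulating frame sequences into a file format."""
    return {"video": video, "audio": audio}

def transcode(container):
    v, a = demux(container)
    return mux(encode(process(decode(v))), encode(process(decode(a))))

result = transcode({"video": [1, 2], "audio": [3]})
```

In the distributed system described below, the decode/process/encode stages in the middle are what get farmed out per fragment, while demuxing stays in the input unit and muxing in the output unit.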
The above processing is completed by a computer or a data processing apparatus composed of computers, and existing computers or data processing apparatuses rely on local software and hardware resources to process files. Audio/video file processing is computationally intensive and consumes a great deal of computing power and storage. With the continuing growth of high-resolution audio/video files and of processing demands, the bottleneck of single-machine processing has become increasingly prominent: single-machine processing is slow and prone to system crashes. Even a very highly configured computer can hardly guarantee processing speed and stability, and in particular cannot meet large-batch processing tasks with strict time requirements.
In view of the above problems in the prior art, the present invention provides a distributed processing system. Parallel processing is realized with multiple computers or processing devices, which greatly reduces the time required for processing, reduces the processing load on the system, and lowers the possibility of system crashes. Owing to a complete monitoring system, the reliability of processing is very high and quality requirements can be fully met.
Summary of the invention
A first aspect of the present invention provides a distributed audio/video file processing system, comprising: an input processing unit for receiving a source video file, processing the source video file to obtain video data and audio data, segmenting the video data and the audio data in order into video data fragments and audio data fragments respectively, and allocating the resulting video data fragments and audio data fragments to corresponding video data processing units and audio data processing units for processing according to a certain allocation rule; several video data processing units, each for processing the segmented video data fragments; several audio data processing units, each for processing the segmented audio data fragments; an output processing unit for processing and outputting the processed video data fragments and audio data fragments; and a scheduling unit for coordinating the work of the input processing unit, the several video data processing units, the several audio data processing units and the output processing unit.
Preferably, after the input processing unit segments the video data and the audio data in order into video data fragments and audio data fragments respectively, the serial number of each video data fragment and audio data fragment is taken modulo the number of video data processing units or audio data processing units, and each fragment is allocated to the corresponding processing unit for processing according to the result of the modulo operation.
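The preferred allocation rule is a plain round-robin by modulo: fragment serial number mod unit count selects the unit. A minimal sketch, with illustrative names:

```python
# Allocation rule sketch: serial number modulo the number of processing
# units decides which unit handles each fragment.

def dispatch(fragment_count, unit_count):
    """Map each fragment serial number (0-based) to a unit index."""
    return {n: n % unit_count for n in range(fragment_count)}

# With 10 video fragments and 3 video processing units:
# fragments 0, 3, 6, 9 go to unit 0; 1, 4, 7 to unit 1; 2, 5, 8 to unit 2.
assignment = dispatch(10, 3)
```

The same rule is applied independently to audio fragments against the number of audio data processing units, so the two fragment streams are balanced separately.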
Preferably, the input processing unit has a first monitoring module, a first processing module and a first transmission module; each video data processing unit has a second monitoring module, a second processing module and a second transmission module; each audio data processing unit has a third monitoring module, a third processing module and a third transmission module; and the output processing unit has a fourth monitoring module, a fourth processing module and a fourth transmission module. The first, second, third and fourth monitoring modules are each communicatively connected with the scheduling unit, receive instructions from the scheduling unit, and report the running status of their respective processing units to the scheduling unit. The first transmission module is communicatively connected with each second transmission module and each third transmission module, and sends the video data fragments and audio data fragments to be processed to the corresponding second transmission modules and third transmission modules respectively. The fourth transmission module is communicatively connected with each second transmission module and each third transmission module, and receives the processed video data fragments and audio data fragments.
Preferably, the input processing unit decapsulates the received source file to obtain a video sequence and an audio sequence, and segments the video sequence and the audio sequence respectively.
Preferably, when the output processing unit determines that all processed video data fragments and audio data fragments have been received from each video data processing unit and each audio data processing unit, it merges the received video data fragments and audio data fragments and encapsulates them in a predetermined format.
Preferably, the scheduling unit and the first, second, third and fourth monitoring modules communicate using address information obtained from a system configuration file.
Preferably, the first, second, third and fourth monitoring modules periodically send the running-state information of their respective processing units to the scheduling unit; the scheduling unit maintains the system configuration file according to the running-state information and sends the updated configuration file to the first, second, third and fourth monitoring modules respectively.
Preferably, the system configuration file contains the quantity, physical addresses, running status and working directories of the video data processing units and the quantity, physical addresses, running status and working directories of the audio data processing units.
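A configuration file of this kind might look as follows. The field names, JSON layout, addresses and paths are all assumptions for illustration; the patent only lists what the file must contain (quantity, physical address, running status and working directory per unit type), and the quantity is implied here by the list lengths.

```python
# Illustrative system configuration file; every concrete value is made up.
import json

config_text = """
{
  "video_units": [
    {"address": "192.168.0.11", "status": "running", "workdir": "/work/v0"},
    {"address": "192.168.0.12", "status": "running", "workdir": "/work/v1"}
  ],
  "audio_units": [
    {"address": "192.168.0.21", "status": "running", "workdir": "/work/a0"}
  ]
}
"""

config = json.loads(config_text)
video_unit_count = len(config["video_units"])  # quantity of video units
audio_unit_count = len(config["audio_units"])  # quantity of audio units
```

The scheduling unit would rewrite and redistribute such a file as units join, fail or recover, which is how the modulo dispatch rule always works against the current unit counts.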
Preferably, when the processing of one of the video data processing units or audio data processing units fails, its corresponding second or third monitoring module reports the error to the scheduling unit; the scheduling unit determines whether to re-execute the failed processing or to ignore the error.
Preferably, the input processing unit adds the failed video data fragments and/or audio data fragments back into the remaining video data fragments and/or audio data fragments that have not yet been sent and re-serializes them; the re-serialized serial numbers are again taken modulo the number of video data processing units and/or audio data processing units, and according to the modulo result the failed video data fragments and/or audio data fragments are re-allocated to the corresponding video data processing units and/or audio data processing units for processing.
Preferably, the input processing unit obtains the video frame count, the audio frame count and the audio/video encoding formats after decapsulation of the source video file, and sends the video frame count, audio frame count and audio/video encoding formats to the corresponding video data processing units and audio data processing units.
Preferably, the input processing unit sends the number of video data fragments and the number of audio data fragments obtained by segmenting the source file to the output processing unit.
Preferably, the input processing unit inserts into each segmented video data fragment and audio data fragment the source file information of that fragment, its serial number among all video or audio data fragments of the source file, and the file type information of the source file.
Preferably, the input processing unit inserts the source file information, the serial number among all video or audio data fragments of the source file, and the file type information into the filename of each segmented video data fragment and audio data fragment.
Preferably, the output processing unit establishes a reception map table for the source file according to the number of segmented video data fragments and the number of segmented audio data fragments; according to the source file information, the serial number among all video or audio data fragments of the source file, and the file type information inserted into each fragment by the input processing unit, each processed data fragment received is marked as received in the reception map table. When the output processing unit determines that all video data fragments and audio data fragments in the reception map table are marked as received, the processed video data fragments and audio data fragments are integrated to obtain the processed audio/video file. When the output processing unit determines that, after a certain time, some video data fragments or audio data fragments in the reception map table are still marked as not received, missing-fragment information for the corresponding segmented video data fragments and/or audio data fragments is sent to the scheduling unit.
Preferably, the reception map table also records the number of video data fragments and the number of audio data fragments received so far; only after the output processing unit determines that these counts are consistent with the numbers of video data fragments and audio data fragments sent by the input processing unit does it judge whether all video data fragments and audio data fragments in the reception map table are marked as received.
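The reception map table logic above can be sketched as a small bookkeeping class: mark fragments as they arrive, compare received counts against the counts announced by the input unit, and report the serial numbers still missing after the timeout. Class and method names are illustrative.

```python
# Sketch of the output unit's reception map table.

class ReceptionMap:
    def __init__(self, video_total, audio_total):
        # Expected fragment counts, as announced by the input processing unit.
        self.expected = {"video": video_total, "audio": audio_total}
        # Serial numbers marked as received, per fragment type.
        self.received = {"video": set(), "audio": set()}

    def mark(self, kind, serial):
        """Mark one processed fragment as received."""
        self.received[kind].add(serial)

    def complete(self):
        """Counts match and every expected serial number is marked received."""
        return all(
            len(self.received[k]) == self.expected[k]
            and self.received[k] == set(range(self.expected[k]))
            for k in ("video", "audio")
        )

    def missing(self, kind):
        """Serial numbers to report to the scheduling unit after the timeout."""
        return sorted(set(range(self.expected[kind])) - self.received[kind])

m = ReceptionMap(video_total=3, audio_total=2)
for i in (0, 1, 2):
    m.mark("video", i)
m.mark("audio", 0)
```

Here `m.complete()` is still false because audio fragment 1 has not arrived, and `m.missing("audio")` identifies it for re-dispatch.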
Preferably, when the scheduling unit receives the missing-fragment information, it instructs the input processing unit to take the serial numbers of the missing video data fragments and/or audio data fragments modulo the current number of video data processing units and/or audio data processing units, to re-allocate each missing video data fragment and/or audio data fragment to the corresponding video data processing unit and/or audio data processing unit for processing according to the modulo result, and to send the re-processed video data fragments and/or audio data fragments to the output processing unit.
Preferably, when the scheduling unit receives missing-fragment information for video data fragments, it instructs the input processing unit to renumber all missing video data fragments in order, to take the new numbers modulo the current number of video data processing units, to re-allocate each missing video data fragment to the corresponding video data processing unit for processing according to the modulo result, and to send the re-processed video data fragments to the output processing unit; when the scheduling unit receives missing-fragment information for audio data fragments, it instructs the input processing unit to renumber all missing audio data fragments in order, to take the new numbers modulo the current number of audio data processing units, to re-allocate each missing audio data fragment to the corresponding audio data processing unit for processing according to the modulo result, and to send the re-processed audio data fragments to the output processing unit.
Another aspect of the present invention provides an audio/video file processing method, comprising: an input processing step of receiving a source video file by an input processing unit, processing the source video file to obtain video data and audio data, segmenting the video data and the audio data in order into video data fragments and audio data fragments respectively, and allocating the resulting video data fragments and audio data fragments to corresponding video data processing units and audio data processing units for processing according to a certain allocation rule; a video data processing step of processing the segmented video data fragments using several video data processing units; an audio data processing step of processing the segmented audio data fragments using several audio data processing units; an output processing step of processing and outputting the processed video data fragments and audio data fragments by an output processing unit; and a scheduling step of coordinating, by a scheduling unit, the work of the input processing unit, the several video data processing units, the several audio data processing units and the output processing unit.
Preferably, in the input processing step, after the video data and the audio data are segmented in order into video data fragments and audio data fragments respectively, the serial number of each video data fragment and audio data fragment is taken modulo the number of video data processing units or audio data processing units, and each fragment is allocated to the corresponding processing unit for processing according to the modulo result.
Preferably, a decapsulation step decapsulates the received source file to obtain a video sequence and an audio sequence, and a segmentation step segments the video sequence and the audio sequence respectively.
Preferably, in the output processing step, when it is determined that all processed video data fragments and audio data fragments have been received from each video data processing unit and each audio data processing unit, the received video data fragments and audio data fragments are merged and encapsulated in a predetermined format.
Preferably, in the scheduling step, communication uses address information obtained from a system configuration file.
Preferably, in the scheduling step, the running-state information of each processing unit is periodically sent to the scheduling unit; the scheduling unit maintains the system configuration file according to the running-state information and sends out the updated configuration file.
Preferably, the system configuration file contains the quantity, physical addresses, running status and working directories of the video data processing units and the quantity, physical addresses, running status and working directories of the audio data processing units.
Preferably, in the video data processing step, when the processing of one of the video data processing units fails, or, in the audio data processing step, when the processing of one of the audio data processing units fails, the error is reported to the scheduling unit; the scheduling unit determines whether to re-execute the failed processing or to ignore the error.
Preferably, the input processing unit adds the failed video data fragments and/or audio data fragments back into the remaining video data fragments and/or audio data fragments that have not yet been sent and re-serializes them; the re-serialized serial numbers are again taken modulo the number of video data processing units and/or audio data processing units, and according to the modulo result the failed video data fragments and/or audio data fragments are re-allocated to the corresponding video data processing units and/or audio data processing units for processing.
Preferably, in the input processing step, the video frame count, the audio frame count and the audio/video encoding formats after decapsulation of the source video file are obtained and sent to the corresponding video data processing units and audio data processing units.
Preferably, in the input processing step, segmentation information consisting of the number of video data fragments and the number of audio data fragments obtained by segmenting the source file is sent to the output processing unit.
Preferably, in the input processing step, the input processing unit inserts into each segmented video data fragment and audio data fragment the source file information of that fragment, its serial number among all video or audio data fragments of the source file, and the file type information of the source file.
Preferably, in the output processing step, the output processing unit establishes a reception map table for the source file according to the number of segmented video data fragments and the number of segmented audio data fragments; according to the source file information, the serial number among all video or audio data fragments of the source file, and the file type information inserted into each fragment by the input processing unit, each processed data fragment received is marked as received in the reception map table. When the output processing unit determines that all video data fragments and audio data fragments in the reception map table are marked as received, the processed video data fragments and audio data fragments are integrated to obtain the processed audio/video file. When the output processing unit determines that, after a certain time, some video data fragments or audio data fragments in the reception map table are still marked as not received, missing-fragment information for the corresponding segmented video data fragments and/or audio data fragments is sent to the scheduling unit.
Preferably, the reception map table also records the number of video data fragments and the number of audio data fragments received so far; only after the output processing unit determines that these counts are consistent with the numbers of video data fragments and audio data fragments sent by the input processing unit does it judge whether all video data fragments and audio data fragments in the reception map table are marked as received.
Preferably, when the scheduling unit receives the missing-fragment information, it instructs the input processing unit to take the serial numbers of the missing video data fragments and/or audio data fragments modulo the current number of video data processing units and/or audio data processing units, to re-allocate each missing video data fragment and/or audio data fragment to the corresponding video data processing unit and/or audio data processing unit for processing according to the modulo result, and to send the re-processed video data fragments and/or audio data fragments to the output processing unit.
Preferably, when the scheduling unit receives missing-fragment information for video data fragments, it instructs the input processing unit to renumber all missing video data fragments in order, to take the new numbers modulo the current number of video data processing units, to re-allocate each missing video data fragment to the corresponding video data processing unit for processing according to the modulo result, and to send the re-processed video data fragments to the output processing unit; when the scheduling unit receives missing-fragment information for audio data fragments, it instructs the input processing unit to renumber all missing audio data fragments in order, to take the new numbers modulo the current number of audio data processing units, to re-allocate each missing audio data fragment to the corresponding audio data processing unit for processing according to the modulo result, and to send the re-processed audio data fragments to the output processing unit.
The present invention also provides a distributed audio/video file processing apparatus, comprising: several first servers, each for processing video data fragments; several second servers, each for processing audio data fragments; a third server for processing and outputting the processed video data fragments and audio data fragments; and a fourth server for receiving a source video file, processing the source video file to obtain video data and audio data, segmenting the video data and the audio data in order into video data fragments and audio data fragments respectively, allocating the resulting video data fragments and audio data fragments to the corresponding first servers and second servers for processing according to a certain allocation rule, and coordinating the work among the several first servers, the several second servers and the third server.
Preferably, after the fourth server segments the video data and the audio data in order into video data fragments and audio data fragments respectively, the serial number of each video data fragment and audio data fragment is taken modulo the number of first servers or second servers, and each fragment is allocated to the corresponding first server or second server for processing according to the modulo result.
Preferably, each first server runs only one video data processing process, and each second server runs only one audio data processing process.
Preferably, the fourth server decapsulates the received source file to obtain a video sequence and an audio sequence, and segments the video sequence and the audio sequence respectively.
Preferably, when the third server determines that all processed video data fragments and audio data fragments have been received from each first server and each second server, it merges the received video data fragments and audio data fragments and encapsulates them in a predetermined format.
Preferably, the fourth server obtains the video frame count, the audio frame count and the audio/video encoding formats after decapsulation of the source video file, and sends the video frame count, audio frame count and audio/video encoding formats to the corresponding first servers and second servers.
Preferably, the fourth server sends segmentation information consisting of the number of video data fragments and the number of audio data fragments obtained by segmenting the source file to the third server.
Preferably, the fourth server inserts the source file information, the serial number among all video or audio data fragments of the source file, and the file type information into the filename of each segmented video data fragment and audio data fragment.
Preferably, the third server establishes a reception map table for the source file according to the number of segmented video data fragments and the number of segmented audio data fragments; according to the source file information, the serial number among all video or audio data fragments of the source file, and the file type information inserted into each fragment by the fourth server, each processed data fragment received is marked as received in the reception map table. When the third server determines that all video data fragments and audio data fragments in the reception map table are marked as received, the processed video data fragments and audio data fragments are integrated to obtain the processed audio/video file. When the third server determines that, after a certain time, some video data fragments or audio data fragments in the reception map table are still marked as not received, missing-fragment information for the corresponding segmented video data fragments and/or audio data fragments is sent to the fourth server.
Preferably, the reception map table also records the number of video data fragments and the number of audio data fragments received so far; only after the third server determines that these counts are consistent with the numbers of video data fragments and audio data fragments sent by the fourth server does it judge whether all video data fragments and audio data fragments in the reception map table are marked as received.
Preferably, when the fourth server receives the missing-fragment information, it takes the serial numbers of the missing video data fragments and/or audio data fragments modulo the current number of first servers and/or second servers, re-allocates each missing video data fragment and/or audio data fragment to the corresponding first server and/or second server for processing according to the modulo result, and sends the re-processed video data fragments and/or audio data fragments to the third server.
Preferably, when the fourth server receives missing-fragment information for video data fragments, it renumbers all missing video data fragments in order, takes the new numbers modulo the current number of first servers, re-allocates each missing video data fragment to the corresponding first server for processing according to the modulo result, and sends the re-processed video data fragments to the third server; when the fourth server receives missing-fragment information for audio data fragments, it renumbers all missing audio data fragments in order, takes the new numbers modulo the current number of second servers, re-allocates each missing audio data fragment to the corresponding second server for processing according to the modulo result, and sends the re-processed audio data fragments to the third server.
The distributed processing system and processing method of the present invention eliminate the speed bottleneck of single-machine processing and can greatly shorten the time required to process audio and video files. Because a reliable task allocation mechanism and an error correction mechanism are established, the reliability of the processing results is guaranteed, and faults such as deadlock, which occur frequently when a single machine is overloaded, are effectively avoided. The system is highly extensible and flexible in configuration, and is particularly suitable for processing extremely large files or large numbers of audio and video files.
Brief Description of the Drawings
Fig. 1 is a structural block diagram of the distributed processing system according to an embodiment of the present invention;
Fig. 2 is a structural block diagram of the input processing module of the distributed processing system according to an embodiment of the present invention;
Fig. 3 is a structural block diagram of the output processing module of the distributed processing system according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the processing flow of the distributed processing system according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the structure of the Map file of the distributed processing system according to an embodiment of the present invention;
Fig. 6 is a flow chart of processing step S5 of the distributed processing system according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the structure of the information file of the distributed processing system according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of the structure of the divided audio and video files of the distributed processing system according to an embodiment of the present invention;
Fig. 9 is a flow chart of the video file processing in processing step S8 of the distributed processing system according to an embodiment of the present invention;
Fig. 10 is a flow chart of the audio file processing in processing step S8 of the distributed processing system according to an embodiment of the present invention;
Fig. 11 is a flow chart of the processing of the output processing unit in processing step S9 of the distributed processing system according to an embodiment of the present invention;
Fig. 12 is a schematic diagram of the structure of the reception mapping table of the distributed processing system according to an embodiment of the present invention.
Detailed Description of the Embodiments
The present invention is described below with reference to the embodiments illustrated in the accompanying drawings. The embodiments disclosed here are to be considered in all respects as illustrative and not restrictive. The scope of the present invention is defined not by the above description of the embodiments but by the appended claims, and embraces all modifications within the meaning and scope equivalent to the claims.
Fig. 1 is a structural block diagram of the distributed processing system. As shown in Fig. 1, the distributed system of the present embodiment comprises a scheduler module (Dispatcher module) 1, an input processing unit 2, several video processing units 3 and 4, several audio processing units 5 and 6, an output processing unit 7, a watcher module (Watcher module) 8 and a client module (Client module) 9. The scheduler module 1 coordinates the operation of all parts of the system. The input processing unit 2 comprises a monitoring module (Monitor module) 21, an input processing module (Ingress module) 22 and a transport module (Offer module) 23; preferably, the monitoring module 21, the input processing module 22 and the transport module 23 communicate via message queues. Each video processing unit 3 (4) comprises a monitoring module (Monitor module) 31 (41), a video processing module (VP module) 32 (42) and a transport module (Offer module) 33 (43); preferably, these modules likewise communicate via message queues. Each audio processing unit 5 (6) comprises a monitoring module (Monitor module) 51 (61), an audio processing module (AP module) 52 (62) and a transport module (Offer module) 53 (63); preferably, these modules likewise communicate via message queues. The output processing unit 7 comprises a monitoring module (Monitor module) 71, an output processing module (Egress module) 72 and a transport module (Offer module) 73; preferably, these modules likewise communicate via message queues. The watcher module 8 shares memory with the scheduler module 1 and thereby obtains all the information in the scheduler module 1; it sends the obtained information to the client module 9, which displays it to the user
in a graphical interface. The scheduler module 1, the input processing unit 2 and the watcher module (Watcher module) 8 may share one physical machine (server). Each video processing unit 3 or 4 may occupy a physical machine (server) of its own, that is, the monitoring module 31, the video processing module 32 and the transport module 33 may share one physical machine (server), while the monitoring module 41, the video processing module 42 and the transport module 43 may share another, and each physical machine (server) runs only one video processing process (VP process) at a time. Likewise, each audio processing unit 5 or 6 may occupy a physical machine (server) of its own, that is, the monitoring module 51, the audio processing module 52 and the transport module 53 may share one physical machine (server), while the monitoring module 61, the audio processing module 62 and the transport module 63 may share another, and each physical machine (server) runs only one audio processing process (AP process) at a time. The output processing unit 7 may also occupy a physical machine (server) of its own, that is, the monitoring module 71, the output processing module 72 and the transport module 73 may share one physical machine (server).
The scheduler module 1 is communicatively connected with the monitoring module 21 in the input processing unit 2, the monitoring modules (31 and 41) in the video processing units (3 and 4), the monitoring modules (51 and 61) in the audio processing units (5 and 6) and the monitoring module 71 in the output processing unit 7, and thereby coordinates the operation of the input processing unit 2, the video processing units 3 and 4, the audio processing units 5 and 6, the output processing unit 7 and the other parts of the system. The transport module 23 in the input processing unit 2 is communicatively connected with the transport modules (33 and 43) in the video processing units (3 and 4) and the transport modules (53 and 63) in the audio processing units (5 and 6), and transmits the corresponding information and data to them. The transport module 73 in the output processing unit 7 is communicatively connected with the transport modules (33 and 43) in the video processing units (3 and 4) and the transport modules (53 and 63) in the audio processing units (5 and 6), and receives the corresponding information and data from them.
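The intra-unit message-queue communication mentioned above can be sketched as follows: within one processing unit, the modules exchange work items over queues instead of calling each other directly. This is a minimal illustration; the thread function, queue names and the `None` shutdown sentinel are assumptions, not details from the patent.

```python
# Sketch: a processing module consumes items from an input queue (fed by the
# monitoring side) and places results on an output queue (read by the
# transport side), decoupling the three modules of one unit.
import queue
import threading

in_q, out_q = queue.Queue(), queue.Queue()

def processing_module():
    while True:
        item = in_q.get()
        if item is None:                # shutdown sentinel
            out_q.put(None)
            return
        out_q.put(("processed", item))  # hand the result to the transport side

t = threading.Thread(target=processing_module)
t.start()
in_q.put("F01-1.gop")                   # monitoring side enqueues a fragment
in_q.put(None)
t.join()
```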
Fig. 2 is a structural block diagram of the input processing module 22. As shown in Fig. 2, the input processing module 22 comprises a decapsulation module 221, an input data processing module 222 and a data storage module 223. The decapsulation module 221 decapsulates the source video file received by the input processing unit 2, the input data processing module 222 processes the audio and video files, and the data storage module 223 stores the audio and video files and related information. The decapsulation module 221 comprises an audio/video file format judging unit 2211, a decapsulation selection unit 2212 and several decapsulation units 2213, 2214, 2215 and so on. The decapsulation units handle different formats, and the decapsulation selection unit 2212 selects from among them, according to the judgment of the audio/video file format judging unit 2211, the decapsulation unit corresponding to the source video file, so that the decapsulation module 221 can decapsulate files of different formats. The input data processing module 222 has a division module 2221 and a distribution module 2222. The division module 2221 divides the decapsulated audio and video file into a plurality of audio data fragments and video data fragments and assigns sequence numbers to the audio data fragments and the video data fragments respectively; the distribution module 2222 determines the audio processing unit and the video processing unit corresponding to each audio data fragment and video data fragment by taking the sequence number of each divided fragment modulo the number of audio processing units or video processing units,
respectively. The input processing module 22 also obtains the encapsulation format information of the source video file and sends this information to the output processing unit 7 through the transport modules 23, 33, 43, 53 and 63.
Fig. 3 is a structural block diagram of the output processing module 72. As shown in Fig. 3, the output processing module 72 comprises a package module 721, an output data processing module 722 and a storage module 723. The storage module 723 stores the audio files and video files processed by the video processing units 3 and 4 and the audio processing units 5 and 6; the output data processing module 722 processes these audio files and video files and delivers the processed audio files and video files to the package module 721.
The package module 721 comprises an encapsulation format selection unit 7211 and several encapsulation units 7212, 7213, 7214 and so on, each with a different encapsulation format. The encapsulation format selection unit 7211 selects the corresponding encapsulation unit from among the encapsulation units according to the encapsulation format information received by the transport module 73, so that the package module 721 can encapsulate according to the requirements of different encapsulation formats.
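The format-selection idea in the package module (and, symmetrically, in the decapsulation module of Fig. 2) amounts to a lookup from a format identifier to a handler. A minimal sketch, in which the format names and the trivial "muxer" functions are illustrative assumptions rather than formats named in the patent:

```python
# Sketch: the encapsulation-format selection unit looks the received format
# string up in a table of encapsulation units and delegates to the match.

def mux_mp4(av_data):
    return b"mp4:" + av_data

def mux_ts(av_data):
    return b"ts:" + av_data

def mux_mkv(av_data):
    return b"mkv:" + av_data

ENCAPSULATION_UNITS = {"mp4": mux_mp4, "ts": mux_ts, "mkv": mux_mkv}

def encapsulate(fmt, av_data):
    try:
        unit = ENCAPSULATION_UNITS[fmt]   # selection by received format info
    except KeyError:
        raise ValueError(f"no encapsulation unit for format {fmt!r}")
    return unit(av_data)
```

Registering new formats then only means adding an entry to the table, which matches the extensibility the description claims.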
Fig. 4 is a schematic diagram of the processing flow of the distributed processing system of the present invention. The processing flow of the distributed processing system is described below with reference to Fig. 4. The operator powers on each physical machine (server) and starts the distributed processing system (step S1). The scheduler module 1 reads the system files (step S2). Preferably, in step S2 the scheduler module 1 reads the system configuration file (for example, tvmccd.cfg) from a specified directory, obtains configuration items of this process such as <Input>, <Server> and <Port>, and at the same time reads the corresponding files (for example, the tvmccd.par file and the logo files) from the directory specified by the <Input> item (for example, opt/tvmccd/ingress/Dispatcher/Input). According to the content of the corresponding file (for example, tvmccd.par), it generates the mapping table file between source video file names and file IDs (the Map file, for example tvmccd.map). Fig. 5 is a schematic diagram of an example of the Map file of the distributed processing system of the present invention; the correspondence between a source video file name and its file ID can be queried from the Map file. There may be several logo files, or none at all. The system configuration file records the configuration information of the whole distributed system, such as the IP address of the computer hosting each module, the port numbers used, the location of the working directories, the numbers of video processing and audio processing modules, and the task allocation method. The PAR file records the name of the file to be processed, the name of the processed file, and parameters of the processed audio and video file such as coding format, bit rate, frame rate and resolution. The LOGO files record the content of the user watermarks, captions and the like to be added to the video, together with parameters such as their size and position. Like the scheduler module 1, every other module in the system also reads the system configuration file from its own specified directory, obtains from it information such as the communication addresses of the modules it needs to communicate with, and establishes connections with them. The scheduler module 1 establishes communication connections with the monitoring module 21 of the input processing unit 2, the monitoring modules (31 and 41) of the video processing units (3 and 4), the monitoring modules (51 and 61) of the audio processing units and the monitoring module 71 of the output processing unit 7 (step S3). Preferably, in step S3 the scheduler module 1 sets up a Socket and listens (the listening port Dispatcher_Port is given in the system configuration file, for example tvmccd.cfg) for the connection requests of the monitoring modules. Each processing unit sends a Socket connection request to the scheduler module 1 at the address given in the system configuration file; once
the scheduler module 1 hears the Socket connection request of a monitoring module, it accepts the connection request, establishes the Socket link, and at the same time sends the corresponding files to the corresponding monitoring module (monitoring module 21, 31, 41, 51, 61 or 71). For example, the tvmccd.par and tvmccd.map files are sent to every monitoring module, while the logo files are sent to the monitoring modules (31 and 41) of the video processing units (3 and 4).
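The Map file just described is essentially a two-column table from source video file name to file ID. A hedged sketch of writing and querying such a table follows; the one-entry-per-line, tab-separated layout is an assumption for illustration, since the patent only shows the structure schematically in Fig. 5.

```python
# Sketch of a tvmccd.map-style mapping table: written once from the PAR-file
# content, then queried by other modules to resolve file names to file IDs.

def write_map(path, name_to_id):
    with open(path, "w") as f:
        for name, fid in name_to_id.items():
            f.write(f"{fid}\t{name}\n")     # assumed line format: ID <tab> name

def read_map(path):
    mapping = {}
    with open(path) as f:
        for line in f:
            fid, name = line.rstrip("\n").split("\t", 1)
            mapping[name] = fid
    return mapping
```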
The transport module 23 of the input processing unit 2 establishes communication connections with the transport modules (33 and 43) of the video processing units (3 and 4) and the transport modules (53 and 63) of the audio processing units (5 and 6); at the same time the transport module 73 of the output processing unit 7 establishes communication connections with the transport modules (33 and 43) of the video processing units (3 and 4) and the transport modules (53 and 63) of the audio processing units (5 and 6) (step S4). The input processing unit 2 then starts processing the source video file (step S5).
Fig. 6 is a flow chart of step S5. As shown in Fig. 6, the input processing module 22 of the input processing unit 2 sends information such as the start time and name of the source-video-file processing process to the monitoring module 21 (step S51). The input processing module 22 reads the system configuration file (for example, the tvmccd.cfg file) from the specified directory and obtains configuration items such as <Source>, <Processed>, <Failed>, <Output> and <Send> (step S52). The input processing module 22 reads the source video files in order from the <Source> directory (step S53) and decapsulates each source video file in turn with the decapsulation module 221 of the input processing module 22 (step S54). If decapsulation of a source video file succeeds (step S55: yes), the corresponding audio and video frame sequences are obtained, the successfully decapsulated source video file is moved from the <Source> directory to the <Processed> directory, and the decapsulated audio and video frame sequences are stored in the storage module 223 (step S56). The input data processing module 222 of the input processing module 22 processes the audio and video frame sequences in the storage module 223, obtains their related information (such as coding information), and writes the obtained related information into an information file (info file) (step S57). Fig. 7 is a schematic diagram of the format of the information file. As shown in Fig. 7, the information file contains the source file ID, the file size, the coding type and so on. In the present embodiment, a video information file is named in the form VI-<file ID>.info and also contains information such as the video bit rate, video height and video width; an audio information file is named in the form AI-<file ID>.info and also contains information such as the audio bit rate, sampling rate and channel number. In step S57, the info file is written into the <Output> directory while it is being written, and is moved to the <Send> directory once writing is complete and verified. The input data processing module 222 of the input processing module 22 then divides the video frame sequence into a plurality of video data fragments (video files) in units of GOPs (groups of pictures), and the audio frame sequence into a plurality of audio data fragments (audio files) in units of GOAs, the audio data units corresponding to the GOPs (step S58). For an MPEG file, for example, each GOP starts with an I frame, which may be followed by P frames and B frames. Since only the I frame carries the data of a complete picture, while P frames and B frames carry only the changes relative to other frames, keeping the frames of each GOP segment together in one file prevents P-frame and B-frame data from ending up in a different GOP file from their corresponding I frame and so becoming unable to reconstruct a complete picture. Fig.
8 is a schematic diagram of the format of a GOP or GOA file. As shown in Fig. 8, each GOP file and GOA file after division contains its source file ID, the sequence number of this GOP or GOA file among all the GOP or GOA files of the source file, information such as the number of frames it contains, and the frame data itself. The input data processing module 222 names the divided GOP and GOA files according to a fixed rule: the file name contains the file ID of the corresponding source file, the sequence number of this file among all the GOP or GOA files of the source file, and the file type (GOP file or GOA file) (step S59). For example, a file may be named F01-1.gop, where "F01" is the file ID of the source file, "1" indicates that its sequence number among the divided files is 1, and the suffix ".gop" indicates that it is a GOP file; from the file name alone it can thus be determined that this file is the 1st GOP file of the source file whose file ID is F01. The input data processing module 222 distributes the video files and audio files according to a fixed allocation rule: for example, it queries the numbers of audio and video processing units in the system configuration file and takes the sequence number of each GOP or GOA file modulo the number of video or audio processing units, thereby determining which video or audio processing unit each GOP or GOA file is assigned to (step S510). For example, suppose there are 9 GOP files with sequence numbers 1-9 and, as shown in Fig. 1, there are currently 2 video processing units 3 and 4. The input data processing module 222 takes each GOP file's sequence number modulo 2 (the number of video processing units): the files whose result is 1 are assigned to video processing unit 3, and the files whose result is 0 are assigned to video processing unit 4. Thus files 1, 3, 5, 7 and 9 are assigned to video processing unit 3, and files 2, 4, 6 and 8 are assigned to
video processing unit 4. If there are instead 4 video processing units VP1-VP4, each GOP file's sequence number is taken modulo 4 (the number of video processing units): the files whose result is 1 are assigned to VP1, result 2 to VP2, result 3 to VP3 and result 0 to VP4. Thus files 1, 5 and 9 are assigned to VP1, files 2 and 6 to VP2, files 3 and 7 to VP3, and files 4 and 8 to VP4. The GOA files are distributed to the audio processing units by the same rule. After every GOP and GOA file has been assigned, the input processing module 22 sends the allocation result in a message to the transport module 23 of the input processing unit 2. The input data processing module 222 of the input processing module 22 writes the generated GOP and GOA files into the <Output> directory, moving each GOP or GOA file to the <Send> directory once it has been completely written to disk and verified; after processing is finished, the source file is moved from the <Source> directory to the <Processed> directory (step S511), and at the same time the total numbers of GOPs and GOAs are written into a total file (step S512). The total file is likewise written into the <Output> directory while it is being written, and is moved to the <Send> directory once writing is complete and verified. The transport module 23 of the input processing unit 2 receives the allocation result for each GOP and GOA file from the input processing module 22 and, according to this result, sends the GOP and GOA files together with the total file to the corresponding audio/video processing units (step S513). In step S514, the file names (including the names of the source file and of the GOP and GOA files), the file ID, the video frame count, the audio frame count, the video coding format and the audio coding format are sent to the monitoring module 21. If decapsulation of a source video file fails (step S55:
no), the input processing unit 2 moves the failed source video file from the <Source> directory to the <Failed> directory (step S515) and at the same time sends the ID of the failed source video file to the monitoring module 21 (step S516). The watcher module 8 sends the error information received from the monitoring module 21 to the client module 9. The user can enter a command through the client module 9 to decide whether to process the source video file again in a new round, or to discard it.
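The naming convention of step S59 and the allocation rule of step S510 can be sketched together as follows, assuming (as in the worked examples above) that a modulo result of 0 maps to the last processing unit. The helper names are illustrative.

```python
# Sketch: parse "F01-1.gop"-style fragment names and assign each fragment to
# a processing unit by taking its sequence number modulo the unit count.

def parse_fragment_name(filename):
    stem, kind = filename.rsplit(".", 1)    # "F01-1", "gop" or "goa"
    file_id, seq = stem.split("-")
    return file_id, int(seq), kind

def allocate(filenames, n_units):
    assignment = {}
    for name in filenames:
        _, seq, _ = parse_fragment_name(name)
        r = seq % n_units
        assignment[name] = r if r != 0 else n_units  # result 0 -> last unit
    return assignment
```

With 9 fragments and 2 units this reproduces the example in the text: odd sequence numbers go to the first unit and even ones to the second.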
Returning to Fig. 4, in step S6 the monitoring module 21 of the input processing unit 2 sends related information such as the file names, file ID, video frame count, audio frame count, video coding format and audio coding format to the scheduler module 1, while at the same time the transport module 23 of the input processing unit 2 sends data such as the info files, GOP and GOA files and total file under the <Send> directory to the corresponding video processing units 3 (4) and audio processing units 5 (6) (step S6).
In step S7, the scheduler module 1 sends related information such as the video frame count, audio frame count, video coding format and audio coding format to the corresponding video processing units 3 and 4, audio processing units 5 and 6 and output processing unit 7.
In step S8, the video processing module 32 (42) of each video processing unit 3 (4) processes the GOP files (video files) according to the related information, such as the video frame count and video coding format, received by the monitoring module 31 (41) and the information, such as the video files and info files, received by the transport module 33 (43); at the same time each audio processing unit 5 (6) processes the GOA files (audio files) according to the related information, such as the audio frame count and audio coding format, received by the monitoring module 51 (61) and the information, such as the audio files and info files, received by the transport module 53 (63); the processed audio and video files are then sent to the output processing unit 7.
Fig. 9 is a flow chart of the video file processing in step S8. In the present embodiment the video file processing flow of video processing unit 3 is identical to that of video processing unit 4, so the flow is described only for video processing unit 3. As shown in Fig. 9, the video processing module 32 of the video processing unit 3 sends the start time of the video file processing process and the IDs of the received video files (for example, it has received the files F01-1.gop, F01-3.gop, F01-5.gop, etc. and F02-1.gop, F02-3.gop, etc.) to the monitoring module 31 (step S811). The monitoring module 31 can feed the start time and the video file IDs back to the scheduler module 1, so that the system can monitor the processing progress of the video processing unit 3. The video processing module 32 reads the system configuration file (tvmccd.cfg) from the specified directory, obtains the configuration items of the processing process of the video processing unit 3 such as <Source>, <Output> and <Send> (step S812), and reads the related information (info file) and video files (GOP files) from the <Source> directory (step S813). If a total file is present under the <Source> directory, the video processing module 32 moves it to the <Send> directory. The video processing module 32 then selects the corresponding decoder according to the video coding format information received by the monitoring module 31 and decodes the video file (step S814). If decoding succeeds (step S815: yes), the video processing module 32 performs the predetermined processing on the decoded video data (step S816). The predetermined processing may be customized adjustment of the video frame rate, addition
of rolling-caption information to the video, merging of different audio and video files, and the like. The video processing module 32 encodes the processed video data according to the parameter requirements of the file being processed, which the monitoring module 31 received from the scheduler module 1, obtains the processed video file (GOP file), writes the output video file (GOP file) into the <Output> directory, moves the file to the <Send> directory once writing to disk is complete and verified, and delivers it to the output processing unit 7 through the transport module 33 (step S817).
If decoding fails (step S815: no) and the decoding count n of this video file does not exceed the predetermined threshold a (step S818: no), n is incremented by 1 (step S819) and the flow returns to step S814 to decode again; if the decoding count n of this video file exceeds the predetermined threshold a (step S818: yes), the ID of the video file that failed to decode is sent to the monitoring module 31 (step S8110). The monitoring module 31 feeds the ID of the failed video file back to the scheduler module 1, which may either have the input processing unit 2 send the failed video file to the video processing unit 3 again, or have the input processing unit 2 put the failed video file back among the remaining video files not yet sent by the transport module 23 of the input processing unit 2, renumber them, and then distribute the failed video file to the corresponding video processing unit for processing according to the fixed allocation rule described above. Alternatively, the scheduler module 1 may simply ignore the decoding error and output the error information to the user.
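The bounded retry loop of steps S814/S818/S819 (and its audio counterpart S824/S828/S829) can be sketched as follows. The decoder callable and the error-reporting callback are assumed stand-ins; the patent does not specify how decoding failure is signalled, so a raised exception is used here for illustration.

```python
# Sketch of the retry logic in Fig. 9: decode, and on failure retry until the
# retry count n exceeds the threshold a, then report the fragment ID.

def decode_with_retries(fragment_id, decoder, threshold_a, report_error):
    n = 0
    while True:
        try:
            return decoder(fragment_id)     # step S814: attempt to decode
        except Exception:
            if n >= threshold_a:            # step S818: n exceeds threshold a
                report_error(fragment_id)   # step S8110: notify monitoring
                return None
            n += 1                          # step S819: count and retry
```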
Fig. 10 is a flow chart of the audio file processing in step S8. In the present embodiment the audio file processing flow of audio processing unit 5 is identical to that of audio processing unit 6, so the flow is described only for audio processing unit 5. As shown in Fig. 10, the audio processing module 52 of the audio processing unit 5 sends the start time of the audio file processing process and the IDs of the received audio files (for example, it has received the files F01-1.goa, F01-3.goa, F01-5.goa, etc. and F02-1.goa, F02-3.goa, etc.) to the monitoring module 51 (step S821). The monitoring module 51 can feed the start time and the audio file IDs back to the scheduler module 1, so that the system can monitor the processing progress of the audio processing unit 5. The audio processing module 52 reads the system configuration file (tvmccd.cfg) from the specified directory, obtains the configuration items of the processing process of the audio processing unit 5 such as <Source>, <Output> and <Send>
(step S822), and reads the related information (info file) and audio files (GOA files) from the <Source> directory (step S823). If a total file is present under the <Source> directory, the audio processing module 52 moves it to the <Send> directory. The audio processing module 52 then selects the corresponding decoder according to the audio coding format information that the monitoring module 51 received from the scheduler module 1 and decodes the audio file (step S824). If decoding succeeds (step S825: yes), the audio processing module 52 performs the predetermined processing on the decoded audio data (step S826). The predetermined processing may be automatic volume adjustment, customized volume adjustment, channel processing and the like. The audio processing module 52 encodes the processed audio data according to the parameter requirements of the file being processed, which the monitoring module 51 received from the scheduler module 1, obtains the processed audio file (GOA file), writes the output audio file (GOA file) into the <Output> directory, moves the file to the <Send> directory once writing to disk is complete and verified, and delivers it to the output processing unit 7 through the transport module 53 (step S827).
If decoding fails (step S825: no) and the decoding attempt count m of this audio file does not exceed the predetermined threshold b (step S828: no), m is incremented by 1 (step S829) and the flow returns to step S824 to decode again. If the decoding attempt count m exceeds the predetermined threshold b (step S828: yes), the ID information of the audio file that failed to decode is sent to the monitoring module 51 (step S8210). The monitoring module 51 feeds the ID information of the failed audio file back to the scheduler module 1. The scheduler module 1 may then have the input processing unit 2 resend the failed audio file to audio processing unit 5, or have the input processing unit 2 rejoin the failed file to the remaining audio files not yet sent by its transport module 23, renumber them, and redistribute the file to the corresponding audio processing unit according to the aforementioned allocation rule. Alternatively, the scheduler module 1 may simply ignore the decoding error and output an error message to the user.
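The bounded retry of steps S824-S8210 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `decode` callable and the default threshold value are assumptions.

```python
# Hypothetical sketch of the bounded decode-retry loop (steps S824-S8210).
# decode() and the default threshold b are illustrative stand-ins.
def decode_with_retry(goa_file, decode, b=3):
    """Try to decode an audio fragment up to b times before giving up."""
    m = 0
    while m < b:
        try:
            return decode(goa_file)   # step S824: attempt decoding
        except ValueError:            # step S825: decoding failed
            m += 1                    # step S829: count the attempt
    # step S8210: give up; caller reports the fragment ID to monitoring module 51
    return None
```

A `None` result corresponds to the branch where the monitoring module is notified and the scheduler decides whether to resend, redistribute, or ignore the fragment.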
Returning to Fig. 4, in step S9 the output processing unit 7 processes the video files from video processing unit 3 (or 4) and the audio files from audio processing unit 5 (or 6) according to the relevant information its monitoring module 71 has received from the scheduler module 1, obtaining the processed audio/video file.
Figure 11 is the flow chart of the processing performed by the output processing unit 7 in step S9. As shown in Figure 11, the output processing module 72 of the output processing unit 7 sends the start time of the output processing unit 7 and the name of its processing progress to the monitoring module 71 of the output processing unit 7 (step S91). The output processing module 72 reads the system configuration file (such as tvmccd.cfg) in the designated directory, obtains the configuration items of the processing progress, for example the <Source> item, the <Output> item and the <Finished> item (step S92), and monitors the write_close events of the <Source> directory; when a write_close event arrives, it obtains the name of the file that was closed and records the time of the latest write_close event (step S93). In this way, the newest files arriving in the <Source> directory can be monitored. When a total-information file, transferred by transport module 23 through transport modules 33, 43, 53 and/or 63 to transport module 73, is received, the total information of the audio files and of the video files is read from it, such as the total information of the GOP files and of the GOA files; meanwhile the output data processing module 722 obtains the audio/video files themselves (such as the GOP and GOA files) (step S94). According to the total information of the GOP and GOA files in the total-information file, the output data processing module 722 generates a reception mapping table of GOP and GOA files for each source file (step S95). The structure of the reception mapping table is shown in Figure 12, and the received total information of the audio files and video files, as well as the generated reception mapping tables, can all be stored in the memory module 723 of the output processing module 72. Figure 12 is a structural schematic diagram of the reception mapping table. As can be seen from Figure 12, the reception mapping table records the file ID of the source file and all GOP and GOA files of this source file, and reserves a status flag for each GOP and GOA file. The initial status flag is 0, indicating that the GOP or GOA file has not been received. The output data processing module 722 scans the GOP and GOA files already received and sets the status flag of each such file in the reception mapping table to 1, indicating that it has been received (step S96). For example, suppose the output data processing module 722 has received the total-information file whose file ID is F01, and this file indicates that source file F01 has 9 GOP files F01-1.gop~F01-9.gop and 9 GOA files F01-1.goa~F01-9.goa. Accordingly, the output data processing module 722 establishes a reception mapping table for the source file with file ID F01 as shown in Figure 12. Then, the output data processing module 722 scans the received GOP and GOA files for those belonging to this source file. According to the naming rule used by the input processing unit when splitting the file, a .gop or .goa file whose name begins with F01 is judged to belong to source file F01, and F01-1.gop is the first GOP file of this source file. In this way, all received GOP and GOA files of source file F01 can be marked as received in the reception mapping table. Because the write_close events of the <Source> directory are monitored, the output data processing module 722 can also detect newly arriving GOP and GOA files and mark them as received in the reception mapping table. The reception mapping table also contains count information of the received GOP and GOA files; each time a GOP or GOA file is received, the corresponding file count is incremented by 1 (step S97).
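The reception mapping table of steps S95-S97 can be sketched minimally as follows. The class name, constructor signature and helper methods are illustrative assumptions; only the recorded fields (source file ID, per-fragment status flag initialized to 0, received-file count) follow the description above.

```python
# Hypothetical sketch of the reception mapping table (steps S95-S97, Figure 12).
# Names and method signatures are illustrative, not the patent's API.
class ReceptionMap:
    def __init__(self, file_id, n_gop, n_goa):
        self.file_id = file_id
        # status flag: 0 = not received, 1 = received (Figure 12)
        self.status = {f"{file_id}-{i}.gop": 0 for i in range(1, n_gop + 1)}
        self.status.update({f"{file_id}-{i}.goa": 0 for i in range(1, n_goa + 1)})

    def mark_received(self, name):   # step S96: flip the status flag to 1
        if name in self.status:
            self.status[name] = 1

    def received_count(self):        # step S97: count of received fragments
        return sum(self.status.values())

    def missing(self):               # consulted when reporting in step S914
        return [n for n, flag in self.status.items() if flag == 0]
```

For the F01 example above, `ReceptionMap("F01", 9, 9)` would start with 18 fragments flagged 0, and `missing()` would shrink as GOP and GOA files arrive.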
The output data processing module 722 compares the counts of GOP and GOA files recorded in the reception mapping table with the counts recorded in the total-information file (step S98). When the counts are consistent (step S98: yes), the output data processing module 722 scans the reception mapping table to check whether the status flag of every GOP and GOA file is set to received (step S99). When the count of received files differs from the information in the total-information file, some GOP or GOA files have not yet been received.
If the output data processing module 722 judges that all GOP and GOA files of the source video file have been received (step S99: yes), encapsulation processing is carried out in the package module 721 of the output processing module 72 (step S910). If encapsulation succeeds (step S911: yes), the package module 721 outputs the successfully encapsulated new audio/video file to the transport module 73 of the output processing unit 7 (step S912). If encapsulation fails (step S911: no), the output data processing module 722 judges whether the number of re-encapsulation attempts M exceeds the predetermined value B (step S917). If M does not exceed B (step S917: no), M is incremented by 1 (step S918) and the package module 721 attempts encapsulation again. If M exceeds B (step S917: yes), the ID information of the audio and video files that failed to encapsulate is sent to the monitoring module 71 of the output processing unit 7 (step S919), and the monitoring module 71 feeds the encapsulation error information and the ID information of those audio and video files back to the scheduler module 1 (step S920). The scheduler module 1 then decides, as required, whether to carry out a new round of processing on the source video file or to simply abandon it.
If the output data processing module 722 judges that not all audio and video files (GOP and GOA files) of the source video file have been received (step S99: no), it judges whether the time N spent waiting for the missing audio/video files exceeds the predetermined threshold A (step S913). If N does not exceed A (step S913: no), it continues to wait for the missing files. If N exceeds A (step S913: yes), the output data processing module 722 checks the reception mapping table of the audio/video files, obtains the ID information of the unreceived audio files and/or video files, and sends it to the monitoring module 71 of the output processing unit 7 (step S914). For example, as shown in Figure 12, suppose that after the waiting time has exceeded the threshold A, the output data processing module 722 finds, by checking the reception mapping table of source file Tvm.mp4, that the two files F01-4.goa and F01-6.gop have still not arrived; it then sends the file IDs of the missing files to the monitoring module 71 of the output processing unit 7. The waiting time may be counted from the moment the output processing module 72 reads the total-information file of a given source file, or from the moment the reception mapping table of that source file is established. The monitoring module 71 feeds the reception error information of the audio/video files and the ID information of the unreceived audio and/or video files back to the scheduler module 1 (step S915). According to the reception error information and the ID information of the unreceived audio and/or video files fed back by the monitoring module 71, the scheduler module 1 can order the input processing unit 2 to redistribute the missing files to audio processing units and/or video processing units according to the aforementioned allocation rule (step S916), that is, to take the sequence number of each file modulo the number of audio or video processing units. Consequently, if the number of processing units has changed since the first distribution, a file may be assigned to a different processing unit than the first time. For example, suppose that when F01-7.gop was first distributed there were 4 video processing units VP1-VP4 in the system, and the file was assigned to VP3. Not having received this file, the output processing unit requires the input processing unit 2 to process it again. By now, VP2 has left the system because of a fault, leaving 3 video processing units VP1, VP3 and VP4. The allocation rule of the input processing unit 2 therefore becomes the file sequence number modulo 3: files whose result is 1 go to VP1, files whose result is 2 go to VP3, and files whose result is 0 go to VP4. Under this rule, F01-7.gop is reallocated to VP1.
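The modulo allocation rule of step S916 can be sketched as follows. The function name is illustrative; the mapping from modulo results to units (result 1 to the first unit, 2 to the second, 0 to the last) follows the example above and is equivalent to indexing with (sequence number - 1) mod n.

```python
# Hypothetical sketch of the modulo allocation rule (step S916).
def allocate(seq_no, units):
    """Map a fragment sequence number onto one of the available units.

    Result 1 of seq_no % len(units) goes to the first unit, 2 to the
    second, ..., and 0 to the last, i.e. index (seq_no - 1) mod n.
    """
    return units[(seq_no - 1) % len(units)]
```

With 4 units, fragment 7 (7 mod 4 = 3) lands on the third unit; after one unit drops out, the same fragment (7 mod 3 = 1) lands on the first, matching the F01-7.gop example.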
At step S916 the following alternative may also be adopted: each time the input processing unit 2 receives a GOP or GOA file that needs reprocessing, it assigns the file a new reprocessing sequence number and distributes it by that number. For example, suppose the input processing unit 2 successively receives three GOP files needing reprocessing, with IDs F01-7.gop, F03-2.gop and F02-4.gop. Each time the distribution module 2222 of the input processing unit 2 receives a GOP file needing reprocessing, it increments the reprocessing sequence number it maintains by 1 and assigns it to the newly received file; thus F01-7.gop, F03-2.gop and F02-4.gop are assigned sequence numbers 1, 2 and 3 respectively. Supposing there are now 2 video processing units VP1-VP2 in the system, the distribution module 2222 takes the sequence number 1 of F01-7.gop modulo 2, obtaining 1, and assigns it to VP1; takes the sequence number 2 of F03-2.gop modulo 2, obtaining 0, and assigns it to VP2; and takes the sequence number 3 of F02-4.gop modulo 2, obtaining 1, and assigns it to VP1. The distribution module 2222 then sends this assignment information to the transport module 23, which sends the files to the corresponding video processing units. If the input processing unit 2 receives a GOA file needing reprocessing, it is distributed in the same way. GOP files and GOA files needing reprocessing are numbered separately, with sequence numbers independent of each other.
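The alternative numbering scheme above can be sketched as follows. The class name and method are illustrative assumptions; what follows the text is the pair of independent counters for GOP and GOA files and the sequence-number-modulo-unit-count rule.

```python
# Hypothetical sketch of the alternative rule of step S916: reprocessed GOP
# and GOA fragments receive fresh sequence numbers from independent per-type
# counters, then go to units by sequence number modulo unit count.
from itertools import count

class Redistributor:
    def __init__(self, units):
        self.units = units
        # GOP and GOA numbering are independent of each other
        self.counters = {"gop": count(1), "goa": count(1)}

    def assign(self, filename):
        kind = filename.rsplit(".", 1)[1]        # "gop" or "goa"
        seq = next(self.counters[kind])          # next reprocessing number
        # result 1 -> first unit, ..., 0 -> last unit, as in the example
        return self.units[(seq - 1) % len(self.units)]
```

Replaying the example with units VP1 and VP2 reproduces the assignments in the text: F01-7.gop to VP1, F03-2.gop to VP2, F02-4.gop to VP1.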
The input processing module, video processing modules, audio processing modules and output processing module regularly report their working status, current task, completed tasks and so on to their respective monitoring modules through message queues, and also record the same information in their respective log files. Each monitoring module regularly sends the IP address, system load, network condition and process status of its processing unit, together with the information received from the processing module of that unit, to the scheduler module 1. The supervision module 8 regularly shares this information with the scheduler module 1, and after processing it is displayed to the user in a graphical interface by the client module 9.
The scheduler module 1 maintains the configuration file using the information sent by each monitoring module. In the configuration file a status attribute is assigned to each physical machine; when a machine temporarily leaves the system because of power or network failure, its status attribute is set to unavailable. When a new physical machine joins the system, the user instructs the scheduler module 1 to add the machine's information to the configuration file. Whenever the information in the configuration file changes, the scheduler module 1 gives it a new version number and immediately sends it to each monitoring module.
If the scheduler module 1 loses power or stops responding while the system is running, the client module 9 can no longer obtain data from the supervision module 8; after this situation has lasted for a certain time, the client module 9 prompts the user to troubleshoot, for example with a prompt box or an alarm sound. After the fault is cleared, the scheduler module 1 re-reads the configuration file and reconnects to each monitoring module, and each monitoring module, according to its own log, retransmits the information that it failed to send to the scheduler module 1.
If a monitoring module stops working, the scheduler module 1 judges that the module has failed once it has received no information from it for a certain time. The cause may be a physical fault caused by power or network failure, or the module's process may have stopped responding. The scheduler module 1 tests whether the connection to the physical machine of that monitoring module is normal (for example with the ping command); if it is not, the fault is physical, and the user is prompted through the client module 9. If the connection is normal, the module's process has stopped responding; the scheduler module waits a certain time and tries again, and if communication still cannot be established after a certain number of attempts, the user is again prompted through the client module 9.
If the physical machine hosting a processing unit loses power, then after power is restored the processing unit reinitializes and its monitoring module reconnects to the other modules. If the powered-down machine hosts the input processing unit, a video processing unit or an audio processing unit, files whose processing had completed will already have been moved from the <Source> directory to directories such as <Processed> or <failed>; therefore, after recovery it suffices to resume processing by reading files from the <Source> directory. If the powered-down machine hosts the output processing unit, then after initialization the output processing module scans the <Source> directory, reads the total-information files under it, establishes the reception mapping tables of the audio and video files, and fills in the reception mapping tables according to the GOP and GOA files present in the <Source> directory. During the scan it also monitors newly arriving files, and after all files under the <Source> directory have undergone these operations it continues with the processing of steps S93~S917. If the physical machine hosting a processing unit loses its network connection, then when the connection recovers the scheduler module 1 sends it the latest configuration file, and the monitoring module of that unit queries its log and instructs the transport module of that unit to transfer the files under the <Send> directory to the designated processing units.
If the process of a monitoring module is restarted after becoming unresponsive, it first re-establishes communication with the other modules, then queries its log for the last communication time with each module, and asks the scheduler module 1 and the corresponding processing and transport modules to retransmit the information from after that time. Whenever a monitoring module sends information to the scheduler module 1, it attaches the version number of its configuration file; if the scheduler module 1 finds that the configuration file referenced in the information is out of date, it resends the new configuration file. The monitoring module carries out normal processing on the retransmitted information it receives.
As mentioned above, the input processing module, video processing modules, audio processing modules and output processing module regularly report their working status to their respective monitoring modules through message queues. Therefore, when the process of a module has been unresponsive for more than a certain time, the corresponding monitoring module notices the anomaly; if the process does not recover within a further waiting period, the monitoring module either reminds the user to restart the process or restarts it by itself. After restarting, the module resumes processing by reading files from its own working directory.
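The heartbeat check described above can be sketched as follows. All names and the timeout value are illustrative assumptions; the idea is simply that a monitoring module tracks the last report time of each processing module and flags any module that has been silent longer than the allowed period.

```python
# Hypothetical sketch of the heartbeat-based liveness check: modules report
# through a message queue; a module silent past the timeout is flagged for
# restart. Names and the default timeout are illustrative.
import time

class Watchdog:
    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.last_report = {}

    def report(self, module):
        """Record that a module just posted its status to the message queue."""
        self.last_report[module] = time.monotonic()

    def stalled(self, now=None):
        """Return the modules whose process should be restarted."""
        now = time.monotonic() if now is None else now
        return [m for m, t in self.last_report.items() if now - t > self.timeout]
```

A monotonic clock is used here so that the silence measurement is unaffected by wall-clock adjustments on the host machine.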
In the present embodiment, the physical machines are connected through a local area network, and the address of each module is the IP address of its host machine plus a port number; however, the physical machines may also be connected in other ways, such as a wide area network or a high-speed bus, as long as each module can be given a corresponding address representation.
In addition, when the system of the present invention is used to process a large number of small video files, the step of splitting the audio/video files may be omitted, letting each audio processing module and each video processing module process the entire audio or video part of one source file on its own.
In the present embodiment the distributed system is used to process audio/video files, but it may also be used to process other types of data, as long as the flow of splitting the data into blocks, processing them, and merging them again does not affect the result for that data, after the corresponding modules of the system are adapted accordingly.

Claims (46)

1. A distributed audio/video file processing system, comprising:
an input processing unit, for receiving a source video file, processing the source video file to obtain video data and audio data, dividing the video data and the audio data in order into video data fragments and audio data fragments respectively, and, according to a certain allocation rule, distributing the video data fragments and audio data fragments obtained by the division to corresponding video data processing units and audio data processing units for processing;
several video data processing units, each for processing the divided video data fragments;
several audio data processing units, each for processing the divided audio data fragments;
an output processing unit, for processing and outputting the processed video data fragments and audio data fragments; and
a scheduling unit, for coordinating the work of the input processing unit, the several video data processing units, the several audio data processing units and the output processing unit.
2. The audio/video file processing system according to claim 1, characterized in that:
the input processing unit divides the video data and the audio data in order into video data fragments and audio data fragments respectively, takes the serial number of each video data fragment and audio data fragment modulo the number of video data processing units and audio data processing units respectively, and, according to the result of the modulo operation, distributes each video data fragment and audio data fragment to the corresponding video data processing unit and audio data processing unit for processing.
3. The audio/video file processing system according to claim 1, characterized in that:
the input processing unit has a first monitoring module, a first processing module and a first transport module;
each video data processing unit has a second monitoring module, a second processing module and a second transport module;
each audio data processing unit has a third monitoring module, a third processing module and a third transport module;
the output processing unit has a fourth monitoring module, a fourth processing module and a fourth transport module; wherein,
the first, second, third and fourth monitoring modules are each communicatively connected with the scheduling unit, receive relevant instructions from the scheduling unit, and report the running status of their respective processing units to the scheduling unit; the first transport module is communicatively connected with each second transport module and each third transport module, and sends the video data fragments and audio data fragments to be processed to the respective second transport modules and third transport modules; the fourth transport module is communicatively connected with each second transport module and each third transport module, and receives the processed video data fragments and audio data fragments.
4. The audio/video file processing system according to claim 3, characterized in that:
the input processing unit decapsulates the received source file to obtain a video sequence and an audio sequence, and performs division processing on the video sequence and the audio sequence respectively.
5. The audio/video file processing system according to claim 1, characterized in that: when the output processing unit judges that all processed video data fragments and audio data fragments have been received from each video data processing unit and each audio data processing unit, it merges the received video data fragments and audio data fragments and encapsulates them according to a predetermined format.
6. The audio/video file processing system according to claim 3, characterized in that:
the scheduling unit and the first, second, third and fourth monitoring modules communicate using address information obtained from a system configuration file.
7. The audio/video file processing system according to claim 6, characterized in that:
the first, second, third and fourth monitoring modules each regularly send the running status information of their respective processing units to the scheduling unit;
the scheduling unit maintains the system configuration file according to the running status information, and sends the updated system configuration file to the first, second, third and fourth monitoring modules respectively.
8. The audio/video file processing system according to claim 6, characterized in that:
the content of the system configuration file includes the quantity, physical addresses, running status and working directories of the video data processing units and the quantity, physical addresses, running status and working directories of the audio data processing units.
9. The audio/video file processing system according to claim 6, characterized in that:
when the processing of one of the video data processing units or audio data processing units makes an error, its corresponding second or third monitoring module reports the error to the scheduling unit;
the scheduling unit determines whether to re-execute the erroneous processing or to ignore the error.
10. The audio/video file processing system according to claim 9, characterized in that:
the input processing unit rejoins the erroneous video data fragment and/or audio data fragment to the remaining video data fragments and/or audio data fragments that have not been sent and renumbers them, takes the serial numbers obtained by the renumbering modulo the number of video data processing units and/or audio data processing units, and, according to the result of the modulo operation, redistributes the erroneous video data fragment and/or audio data fragment to the corresponding video data processing unit and/or audio data processing unit for processing.
11. The audio/video file processing system according to claim 1, characterized in that:
the input processing unit obtains the video frame count, audio frame count and audio/video coding formats after decapsulating the source video file, and sends the video frame count, audio frame count and audio/video coding formats to the corresponding video data processing units and audio data processing units.
12. The audio/video file processing system according to any one of claims 1 to 11, characterized in that:
the input processing unit sends the quantity of video data fragments and the quantity of audio data fragments obtained by dividing the source file to the output processing unit.
13. The audio/video file processing system according to claim 12, characterized in that:
the input processing unit inserts into each divided video data fragment and audio data fragment the source file information of the fragment, its serial number among all video or audio data fragments of the source file, and the file type information.
14. The audio/video file processing system according to claim 13, characterized in that:
the input processing unit inserts the source file information, the serial number among all video or audio data fragments of the source file and the file type information into the filename of each divided video data fragment and audio data fragment.
15. The audio/video file processing system according to claim 13, characterized in that:
the output processing unit establishes a reception mapping table for the source file according to the quantity of divided video data fragments and the quantity of divided audio data fragments, and, according to the source file information, the serial number among all video or audio data fragments of the source file and the file type information inserted into each divided video data fragment and audio data fragment by the input processing unit, marks each received processed data fragment as received in the reception mapping table;
when the output processing unit judges that all video data fragments and audio data fragments in the reception mapping table are marked as received, it integrates the processed video data fragments and the processed audio data fragments to obtain the processed audio/video file;
when the output processing unit judges that, after a certain time, there are still video data fragments or audio data fragments in the reception mapping table marked as not received, it sends the absence information of the corresponding divided video data fragments and/or divided audio data fragments to the scheduling unit.
16. The audio/video file processing system according to claim 15, characterized in that:
the reception mapping table also contains the quantity of received video data fragments and the quantity of received audio data fragments;
only after the output processing unit judges that the quantity of video data fragments and the quantity of audio data fragments in the reception mapping table are consistent with the quantity of video data fragments and the quantity of audio data fragments sent by the input processing unit does it judge whether all video data fragments and audio data fragments in the reception mapping table are marked as received.
17. The audio/video file processing system according to claim 15, characterized in that:
when the scheduling unit receives the absence information, it instructs the input processing unit to take the serial numbers of the missing video data fragments and/or audio data fragments modulo the current number of video data processing units and/or audio data processing units, redistribute each missing video data fragment and/or audio data fragment, according to the result of the modulo operation, to the corresponding video data processing unit and/or audio data processing unit for processing, and send the reprocessed video data fragments and/or audio data fragments to the output processing unit.
18. The audio/video file processing system according to claim 15, characterized in that:
when the scheduling unit receives absence information of video data fragments, it instructs the input processing unit to renumber all missing video data fragments in order, take the new numbers modulo the current number of video data processing units, redistribute each missing video data fragment, according to the result of the modulo operation, to the corresponding video data processing unit for processing, and send the reprocessed video data fragments to the output processing unit;
when the scheduling unit receives absence information of audio data fragments, it instructs the input processing unit to renumber all missing audio data fragments in order, take the new numbers modulo the current number of audio data processing units, redistribute each missing audio data fragment, according to the result of the modulo operation, to the corresponding audio data processing unit for processing, and send the reprocessed audio data fragments to the output processing unit.
19. A distributed audio/video file processing method, comprising:
an input processing step of receiving a source video file by an input processing unit, processing the source video file to obtain video data and audio data, dividing the video data and the audio data in order into video data fragments and audio data fragments respectively, and, according to a certain allocation rule, distributing the video data fragments and audio data fragments obtained by the division to corresponding video data processing units and audio data processing units for processing;
a video data processing step of processing the divided video data fragments using several video data processing units respectively;
an audio data processing step of processing the divided audio data fragments using several audio data processing units respectively;
an output processing step of processing and outputting the processed video data fragments and audio data fragments by an output processing unit; and
a scheduling step of coordinating, by a scheduling unit, the work of the input processing unit, the several video data processing units, the several audio data processing units and the output processing unit.
20. The audio/video file processing method according to claim 19, characterized in that:
In the input processing step, after the video data and the audio data are divided in order into video data fragments and audio data fragments respectively, the serial number of each video data fragment and each audio data fragment is taken modulo the number of video data processing units and audio data processing units respectively, and each video data fragment and audio data fragment is distributed, according to the modulo result, to the corresponding video data processing unit or audio data processing unit for processing.
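The allocation rule of claims 19 and 20 amounts to a round-robin by modulo on the fragment serial number. A minimal sketch in Python (function and variable names here are illustrative, not taken from the patent):

```python
def allocate_fragments(num_fragments, num_units):
    """Map each serially numbered fragment to a processing unit by
    taking its serial number modulo the number of units (claim 20)."""
    assignment = {unit: [] for unit in range(num_units)}
    for serial in range(num_fragments):
        assignment[serial % num_units].append(serial)
    return assignment

# 7 video fragments spread across 3 video data processing units:
# unit 0 gets fragments 0, 3, 6; unit 1 gets 1, 4; unit 2 gets 2, 5.
print(allocate_fragments(7, 3))
```

The same function serves both media types; only the unit count differs between the video and audio pools.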
21. The audio/video file processing method according to claim 19, characterized in that it further comprises:
A decapsulation step: decapsulating the received source file to obtain a video sequence and an audio sequence;
A segmentation step: dividing the video sequence and the audio sequence respectively.
22. The audio/video file processing method according to claim 19, characterized in that: in the output processing step, upon determining that all processed video data fragments and audio data fragments have been received from each video data processing unit and each audio data processing unit, the received video data fragments and audio data fragments are merged and encapsulated in a predetermined format.
23. The audio/video file processing method according to any one of claims 19 to 21, characterized in that:
In the scheduling step, communication is performed using address information obtained from a system configuration file.
24. The audio/video file processing method according to claim 22, characterized in that:
In the scheduling step, each processing unit periodically sends its running state information to the scheduling unit, and the scheduling unit maintains the system configuration file according to the running state information and distributes the updated system configuration file.
25. The audio/video file processing method according to claim 22, characterized in that:
The system configuration file records the number, physical addresses, running status, and working directories of the video data processing units, and the number, physical addresses, running status, and working directories of the audio data processing units.
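Claim 25 specifies only what the system configuration file must carry, not its format. One possible shape, sketched as a Python structure (all field names and addresses are invented for illustration):

```python
# Hypothetical layout of the system configuration file from claim 25.
# The patent prescribes only the content: per-pool unit count,
# physical address, running status, and working directory.
system_config = {
    "video_units": [
        {"address": "10.0.0.11", "status": "running", "workdir": "/work/video/0"},
        {"address": "10.0.0.12", "status": "running", "workdir": "/work/video/1"},
    ],
    "audio_units": [
        {"address": "10.0.0.21", "status": "running", "workdir": "/work/audio/0"},
    ],
}

# The unit counts used by the modulo allocation rule fall out of the lists,
# so a configuration update (claim 24) automatically changes the allocation.
num_video_units = len(system_config["video_units"])
num_audio_units = len(system_config["audio_units"])
```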
26. The audio/video file processing method according to claim 22, characterized in that:
In the video data processing step, when an error occurs in the processing of one of the video data processing units, or, in the audio data processing step, when an error occurs in the processing of one of the audio data processing units, the error is reported to the scheduling unit;
The scheduling unit then determines whether to re-execute the failed processing or to ignore the error.
27. The audio/video file processing method according to claim 25, characterized in that:
The input processing unit adds the failed video data fragments and/or audio data fragments back into the remaining, not yet sent video data fragments and/or audio data fragments and serializes them again, takes the new serial numbers so obtained modulo the number of video data processing units and/or audio data processing units, and redistributes the failed video data fragments and/or audio data fragments, according to the modulo result, to the corresponding video data processing units and/or audio data processing units for processing.
28. The audio/video file processing method according to claim 19, characterized in that:
In the input processing step, the video frame count, audio frame count, and audio/video encoding format are obtained after the source audio/video file is decapsulated, and are sent to the corresponding video data processing units and audio data processing units.
29. The audio/video file processing method according to claim 19, characterized in that:
In the input processing step, segmentation information of the source file, namely the number of video data fragments and the number of audio data fragments after division, is sent to the output processing unit.
30. The audio/video file processing method according to claim 28, characterized in that:
In the input processing step, the input processing unit inserts into each divided video data fragment and audio data fragment the source file information of that fragment, its serial number among all video or audio data fragments of the source file, and the file type information of the source file.
31. The audio/video file processing method according to claim 28, characterized in that:
In the output processing step, the output processing unit builds a reception mapping table for the source file according to the number of video data fragments and the number of audio data fragments after division, and, according to the source file information, the serial number among all video or audio data fragments of the source file, and the file type information that the input processing unit inserted into each divided video data fragment and audio data fragment, marks each processed data fragment received as received in the reception mapping table;
When the output processing unit determines that all video data fragments and audio data fragments in the reception mapping table are marked as received, the processed video data fragments and the processed audio data fragments are integrated to obtain the processed audio/video file;
When the output processing unit determines that, after a certain time, video data fragments or audio data fragments in the reception mapping table are still marked as not received, it sends missing-fragment information for the corresponding divided video data fragments and/or audio data fragments to the scheduling unit.
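The reception mapping table of claim 31 is, in effect, a per-fragment received-flag plus a completeness check and a missing-fragment report. A sketch in Python, with invented class and method names; the patent defines only the behavior:

```python
class ReceptionMap:
    """Sketch of the per-source-file reception mapping table (claim 31):
    one received-flag per expected video and audio fragment."""

    def __init__(self, num_video_fragments, num_audio_fragments):
        # Sizes come from the segmentation info sent by the input unit (claim 29).
        self.flags = {
            "video": [False] * num_video_fragments,
            "audio": [False] * num_audio_fragments,
        }

    def mark_received(self, kind, serial):
        # 'kind' and 'serial' are read from the information the input
        # processing unit inserted into each fragment (claim 30).
        self.flags[kind][serial] = True

    def complete(self):
        return all(self.flags["video"]) and all(self.flags["audio"])

    def missing(self):
        """Fragments still unmarked after the timeout; reported to the
        scheduling unit as missing-fragment information."""
        return {kind: [i for i, got in enumerate(f) if not got]
                for kind, f in self.flags.items()}
```

Claim 32 additionally keeps running received-counts as a cheap precheck before scanning every flag; `complete()` here scans directly for brevity.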
32. The audio/video file processing method according to claim 30, characterized in that:
The reception mapping table also records the number of video data fragments and the number of audio data fragments received so far;
Only after the output processing unit determines that the number of video data fragments and the number of audio data fragments in the reception mapping table are consistent with the number of video data fragments and the number of audio data fragments sent by the input processing unit does it judge whether all video data fragments and audio data fragments in the reception mapping table are marked as received.
33. The audio/video file processing method according to claim 30, characterized in that:
When the scheduling unit receives the missing-fragment information, it instructs the input processing unit to take the serial numbers of the missing video data fragments and/or audio data fragments modulo the current number of video data processing units and/or audio data processing units, redistribute each missing video data fragment and/or audio data fragment, according to the modulo result, to the corresponding video data processing unit and/or audio data processing unit for reprocessing, and send the reprocessed video data fragments and/or audio data fragments to the output processing unit.
34. The audio/video file processing method according to claim 30, characterized in that:
When the scheduling unit receives missing-fragment information for video data fragments, it instructs the input processing unit to renumber all missing video data fragments consecutively in order, take each new number modulo the current number of video data processing units, redistribute each missing video data fragment, according to the modulo result, to the corresponding video data processing unit for reprocessing, and send the reprocessed video data fragments to the output processing unit;
When the scheduling unit receives missing-fragment information for audio data fragments, it instructs the input processing unit to renumber all missing audio data fragments consecutively in order, take each new number modulo the current number of audio data processing units, redistribute each missing audio data fragment, according to the modulo result, to the corresponding audio data processing unit for reprocessing, and send the reprocessed audio data fragments to the output processing unit.
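The recovery path of claims 33 and 34 renumbers the missing fragments consecutively before reapplying the modulo rule, so retries spread evenly over however many units are currently available. A sketch (names are illustrative; renumbering in ascending original serial order is an assumption consistent with "numbered in order again"):

```python
def redistribute_missing(missing_serials, num_units):
    """Renumber missing fragments consecutively, then assign each new
    number modulo the current unit count (claim 34's recovery rule)."""
    plan = {unit: [] for unit in range(num_units)}
    for new_number, original_serial in enumerate(sorted(missing_serials)):
        plan[new_number % num_units].append(original_serial)
    return plan

# Fragments 2, 4 and 9 went missing; two video units remain available.
print(redistribute_missing([4, 9, 2], 2))  # {0: [2, 9], 1: [4]}
```

Renumbering matters when units have failed: reusing the original serial numbers (claim 33) can leave the surviving units unevenly loaded, whereas fresh consecutive numbers always balance the retry batch.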
35. A distributed audio/video file processing apparatus, comprising:
Several first servers, each used to process video data fragments;
Several second servers, each used to process audio data fragments;
A third server, used to process and output the processed video data fragments and audio data fragments;
A fourth server, used to receive a source audio/video file, process the source file to obtain video data and audio data, divide the video data and the audio data in order into video data fragments and audio data fragments respectively, distribute, according to a predetermined allocation rule, the resulting video data fragments and audio data fragments to the corresponding first servers and second servers for processing, and coordinate the work among the several first servers, the several second servers, and the third server.
36. The apparatus according to claim 34, characterized in that:
After dividing the video data and the audio data in order into video data fragments and audio data fragments respectively, the fourth server takes the serial number of each video data fragment and each audio data fragment modulo the number of first servers and second servers respectively, and distributes each video data fragment and audio data fragment, according to the modulo result, to the corresponding first server or second server for processing.
37. The apparatus according to claim 34, characterized in that:
Each first server runs only one video data processing process;
Each second server runs only one audio data processing process.
38. The apparatus according to claim 34, characterized in that:
The fourth server decapsulates the received source file to obtain a video sequence and an audio sequence, and divides the video sequence and the audio sequence respectively.
39. The apparatus according to claim 34, characterized in that: upon determining that all processed video data fragments and audio data fragments have been received from each first server and each second server, the third server merges the received video data fragments and audio data fragments and encapsulates them in a predetermined format.
40. The apparatus according to claim 34, characterized in that:
The fourth server obtains the video frame count, audio frame count, and audio/video encoding format after decapsulating the source audio/video file, and sends them to the corresponding first servers and second servers.
41. The apparatus according to claim 34, characterized in that:
The fourth server sends segmentation information of the source file, namely the number of video data fragments and the number of audio data fragments after division, to the third server.
42. The apparatus according to claim 39, characterized in that:
The fourth server inserts the source file information, the serial number among all video or audio data fragments of the source file, and the file type information into the file names of the divided video data fragments and audio data fragments.
43. The apparatus according to claim 39, characterized in that:
The third server builds a reception mapping table for the source file according to the number of video data fragments and the number of audio data fragments after division, and, according to the source file information, the serial number among all video or audio data fragments of the source file, and the file type information that the fourth server inserted into each divided video data fragment and audio data fragment, marks each processed data fragment received as received in the reception mapping table;
When the third server determines that all video data fragments and audio data fragments in the reception mapping table are marked as received, the processed video data fragments and the processed audio data fragments are integrated to obtain the processed audio/video file;
When the third server determines that, after a certain time, video data fragments or audio data fragments in the reception mapping table are still marked as not received, it sends missing-fragment information for the corresponding divided video data fragments and/or audio data fragments to the fourth server.
44. The apparatus according to claim 41, characterized in that:
The reception mapping table also records the number of video data fragments and the number of audio data fragments received so far;
Only after the third server determines that the number of video data fragments and the number of audio data fragments in the reception mapping table are consistent with the number of video data fragments and the number of audio data fragments sent by the fourth server does it judge whether all video data fragments and audio data fragments in the reception mapping table are marked as received.
45. The apparatus according to claim 41, characterized in that:
When the fourth server receives the missing-fragment information, it takes the serial numbers of the missing video data fragments and/or audio data fragments modulo the current number of first servers and/or second servers, redistributes each missing video data fragment and/or audio data fragment, according to the modulo result, to the corresponding first server and/or second server for reprocessing, and sends the reprocessed video data fragments and/or audio data fragments to the third server.
46. The apparatus according to claim 41, characterized in that:
When the fourth server receives missing-fragment information for video data fragments, it renumbers all missing video data fragments consecutively in order, takes each new number modulo the current number of first servers, redistributes each missing video data fragment, according to the modulo result, to the corresponding first server for reprocessing, and sends the reprocessed video data fragments to the third server;
When the fourth server receives missing-fragment information for audio data fragments, it renumbers all missing audio data fragments consecutively in order, takes each new number modulo the current number of second servers, redistributes each missing audio data fragment, according to the modulo result, to the corresponding second server for reprocessing, and sends the reprocessed audio data fragments to the third server.
CN201310558626.XA 2013-11-12 2013-11-12 Distributed audio and video processing device and processing method Expired - Fee Related CN103605710B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310558626.XA CN103605710B (en) 2013-11-12 2013-11-12 Distributed audio and video processing device and processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310558626.XA CN103605710B (en) 2013-11-12 2013-11-12 Distributed audio and video processing device and processing method

Publications (2)

Publication Number Publication Date
CN103605710A true CN103605710A (en) 2014-02-26
CN103605710B CN103605710B (en) 2017-10-03

Family

ID=50123933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310558626.XA Expired - Fee Related CN103605710B (en) 2013-11-12 2013-11-12 Distributed audio and video processing device and processing method

Country Status (1)

Country Link
CN (1) CN103605710B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101098483A (en) * 2007-07-19 2008-01-02 上海交通大学 Video cluster transcoding system using image group structure as parallel processing element
CN101098260A (en) * 2006-06-29 2008-01-02 国际商业机器公司 Distributed equipment monitor management method, equipment and system
CN101141627A (en) * 2007-10-23 2008-03-12 深圳市迅雷网络技术有限公司 Storage system and method of stream media file
US20100293137A1 (en) * 2009-05-14 2010-11-18 Boris Zuckerman Method and system for journaling data updates in a distributed file system
CN102739799A (en) * 2012-07-04 2012-10-17 合一网络技术(北京)有限公司 Distributed communication method in distributed application


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103838878B (en) * 2014-03-27 2017-03-01 Distributed audio and video processing system and processing method
CN105224291A (en) * 2015-09-29 2016-01-06 北京奇艺世纪科技有限公司 A kind of data processing method and device
CN105224291B (en) * 2015-09-29 2017-12-08 北京奇艺世纪科技有限公司 A kind of data processing method and device
CN105354242A (en) * 2015-10-15 2016-02-24 北京航空航天大学 Distributed data processing method and device
CN105354058A (en) * 2015-10-29 2016-02-24 无锡天脉聚源传媒科技有限公司 File updating method and apparatus
CN105407360A (en) * 2015-10-29 2016-03-16 无锡天脉聚源传媒科技有限公司 Data processing method and device
CN105357229A (en) * 2015-12-22 2016-02-24 深圳市科漫达智能管理科技有限公司 Video processing method and device
CN105357229B (en) * 2015-12-22 2019-12-13 深圳市科漫达智能管理科技有限公司 Video processing method and device
CN110635864A (en) * 2019-10-09 2019-12-31 中国联合网络通信集团有限公司 Parameter decoding method, device, equipment and computer readable storage medium
CN111372011A (en) * 2020-04-13 2020-07-03 杭州友勤信息技术有限公司 KVM high definition video decollator
CN114900718A (en) * 2022-07-12 2022-08-12 深圳市华曦达科技股份有限公司 Multi-region perception automatic multi-subtitle realization method, device and system
CN116108492A (en) * 2023-04-07 2023-05-12 安羚科技(杭州)有限公司 Laterally expandable data leakage prevention system

Also Published As

Publication number Publication date
CN103605710B (en) 2017-10-03

Similar Documents

Publication Publication Date Title
CN103605710A (en) Distributed audio and video processing device and distributed audio and video processing method
CN103605709A (en) Distributed audio and video processing device and distributed audio and video processing method
CN103838878A (en) Distributed type audio and video processing system and processing method
CN100559876C (en) Information-transmission apparatus and information transferring method
CN110554930B (en) Data storage method and related equipment
CN103677701B (en) The method and system of large-size screen monitors simultaneous display
CN101447856A (en) High-capacity file transmission method
CN101141197A (en) Software download method
CN111858050B (en) Server cluster hybrid deployment method, cluster management node and related system
CN102833585A (en) System and method for transmitting ubiquitous terminal video
CN107070535A Method for providing a globally integrated satellite broadcast service
CN103905843B (en) Distributed audio/video processing device and method for continuous frame-I circumvention
CN105976164B (en) Service data emergency switching system and processing method
CN102739650A (en) File transmission system combining broadcast and television net and internet
CN102077186A (en) Methods and systems for transmitting disk images
CN100515028C (en) Unified updating management system for set-top box downloading document
US9218238B2 (en) Contents data recording apparatus and contents data recording method
CN115776486A (en) Electronic file exchange method and system based on exchange node hierarchical grouping
CN102523485B (en) Information distribution method and system
CN102006223A (en) Data transmission method, device and system between cards, board card and distributed system
KR101338666B1 (en) Video server device and synchronization control method
CN101500069B (en) Control method, apparatus and system for digital television receiving terminal of different model number
CN103534681A (en) Method, device and system for deploying application process
CN112732660A (en) Intervention type file transmission method, device and system
CN102541699B (en) Failover information management device, failover control method and storage processing device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A distributed audio and video processing device and processing method

Effective date of registration: 20210104

Granted publication date: 20171003

Pledgee: Inner Mongolia Huipu Energy Co.,Ltd.

Pledgor: TVMINING (BEIJING) MEDIA TECHNOLOGY Co.,Ltd.

Registration number: Y2020990001527

PE01 Entry into force of the registration of the contract for pledge of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20171003

Termination date: 20211112

CF01 Termination of patent right due to non-payment of annual fee