CN109874027A - A kind of low delay educational surgery demonstration live broadcasting method and its system - Google Patents
Abstract
The present invention relates to the technical field of telemedicine and proposes a low-delay surgical teaching live broadcasting method and system. The system comprises an operating room terminal, a live broadcast server, and observation room clients. By performing data processing through a multithreaded asynchronous buffering mechanism, each data unit of the program runs efficiently and concurrently: the overall live-streaming task is split into multiple subtasks processed in parallel on multiple threads, reducing the total task output time and achieving the beneficial effect of lower live broadcast delay.
Description
Technical field
The present invention relates to the technical field of telemedicine, and in particular to a low-delay surgical teaching live broadcasting method and system.
Background technique
With the development of the Internet and communication technology, telemedicine has also advanced. To meet the continuously growing demand for surgical teaching and surgery relay, remote surgical teaching technology has emerged. A surgical teaching live broadcast system streams the surgeon's operating procedure, together with the video data of the various medical devices in the operating room, and presents it truthfully and promptly to interns or observers, thereby achieving the purpose of teaching or academic exchange. Live broadcast delay is a critical system parameter: the lower the delay, the more faithfully the real-time condition of the operation is fed back to the observing clients, which has an obvious positive effect on intraoperative voice communication between the observation side and the operating room when discussing the surgical content. Traditional surgical teaching live broadcasting based on RTMP (Real-Time Messaging Protocol) video streams has a reference delay that fluctuates roughly in the 1000 ms to 2000 ms range, which is excessively high.
Summary of the invention
In view of the deficiencies of the prior art, the present invention proposes a low-delay surgical teaching live broadcast system, solving the problem of excessively high delay in existing surgical teaching live broadcasting.
The technical solution of the present invention is realized as follows:
A low-delay surgical teaching live broadcasting method comprises: Step 1, the acquisition module of the operating room terminal acquires first audio/video data. Step 2, the data processing module of the operating room terminal processes the first audio/video data to obtain second audio/video data; this processing includes format conversion and frame rate filtering. Step 3, the encoding module of the operating room terminal encodes the second audio/video data to obtain audio/video data packets; the encoding module adds hardware encoding support, enables zero-latency encoding mode, and enables a non-B-frame encoding strategy. Step 4, the operating room terminal sends the audio/video data packets to the live broadcast server.

The multithreaded asynchronous buffering control module of the operating room terminal processes steps 1-4 in parallel on multiple threads.
Step 5, the live broadcast server, according to live streaming requests received from observing clients, copies the received audio/video data packets and distributes them to the corresponding observing clients.

Step 6, the decoding module of the observing client decodes the received audio/video data packets to obtain third audio/video data. Step 7, the synchronization module of the observation room terminal performs audio/video synchronization control on the third audio/video data to obtain fourth audio/video data. Step 8, the display module of the observation room terminal renders and displays the fourth audio/video data; no data format conversion is needed before the fourth audio/video data is rendered.
Further, the acquisition module in step 1 adds a multithreaded asynchronous buffering mechanism to data acquisition.

Further, the multithreaded asynchronous buffering mechanism specifically separates the high-cost operations of the acquisition thread into multiple threads; data exchanged between two different threads is temporarily buffered in memory, and unprocessed data is buffered directly into a multithread-shared linked-list memory so that thread resources are released promptly.

Further, the encoding module in step 3 enlarges the encoding timestamp precision by scaling up the video frame rate units.

Further, the display module in step 8 hands the fourth audio/video data to the SDL framework display layer for rendering, where video data is rendered into the window of a specified window handle, and audio data is written into the speaker buffer for sound playback.
The invention also provides a low-delay surgical teaching live broadcast system, comprising an operating room terminal, a live broadcast server, and observation room clients.

The operating room terminal comprises:

an acquisition module for acquiring first audio/video data;

a data processing module that processes the first audio/video data to obtain second audio/video data, the processing including format conversion and frame rate filtering;

an encoding module that encodes the second audio/video data to obtain audio/video data packets, the encoding module adding hardware encoding support, enabling zero-latency encoding mode, and enabling a non-B-frame encoding strategy;

a sending module that sends the audio/video data packets to the live broadcast server; and

a first multithread control module that controls the tasks of the other modules of the operating room terminal to be processed in parallel on multiple threads.
The live broadcast server comprises a live transmission control module that, according to live streaming requests received from observing clients, copies the received audio/video data packets and distributes them to the corresponding observing clients.

The observing client comprises:

a decoding module that decodes the received audio/video data packets to obtain third audio/video data;

a synchronization module that performs audio/video synchronization control on the third audio/video data to obtain fourth audio/video data; and

a display module that renders and displays the fourth audio/video data, no data format conversion being needed before the fourth audio/video data is rendered.
Further, the acquisition module adds a multithreaded asynchronous buffering mechanism to data acquisition.

Further, the multithreaded asynchronous buffering mechanism specifically separates the high-cost operations of the acquisition thread into multiple threads; data exchanged between two different threads is temporarily buffered in memory, and unprocessed data is buffered directly into a multithread-shared linked-list memory so that thread resources are released promptly.

Further, the encoding module enlarges the encoding timestamp precision by scaling up the video frame rate units.

Further, the display module hands the fourth audio/video data to the SDL framework display layer for rendering, where video data is rendered into the window of a specified window handle, and audio data is written into the speaker buffer for sound playback.
The low-delay surgical teaching live broadcasting method and system proposed by the present invention perform data processing through a multithreaded asynchronous buffering mechanism so that each data unit of the program runs efficiently and concurrently; the overall live-streaming task is split into multiple subtasks processed in parallel on multiple threads, reducing the total task output time and achieving the beneficial effect of lower live broadcast delay.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a processing flow chart of one embodiment of the low-delay surgical teaching live broadcasting method of the present invention;

Fig. 2 is the single-thread data processing flow chart of a traditional surgical teaching live broadcasting method;

Fig. 3 is the data processing flow chart of one embodiment of the method using the multithreaded asynchronous buffering mechanism;

Fig. 4 is the program flow chart by which one embodiment of the method selects the encoding mode;

Fig. 5 compares the timestamp calculation flow of the method of the present invention with that of a traditional surgical teaching live broadcasting method;

Fig. 6 is a schematic diagram of the spin lock mechanism of one embodiment of the method;

Fig. 7 is the test equipment structure diagram of the low-delay surgical teaching live broadcast system of the present invention;

Fig. 8 is a test result chart of the low-delay surgical teaching live broadcast system of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
A low-delay surgical teaching live broadcasting method comprises: Step 1, the acquisition module of the operating room terminal acquires first audio/video data. Step 2, the data processing module of the operating room terminal processes the first audio/video data to obtain second audio/video data; this processing includes format conversion and frame rate filtering. Step 3, the encoding module of the operating room terminal encodes the second audio/video data to obtain audio/video data packets; the encoding module adds hardware encoding support, enables zero-latency encoding mode, and enables a non-B-frame encoding strategy. Step 4, the operating room terminal sends the audio/video data packets to the live broadcast server.

The multithreaded asynchronous buffering control module of the operating room terminal processes steps 1-4 in parallel on multiple threads.
Step 5, the live broadcast server, according to live streaming requests received from observing clients, copies the received audio/video data packets and distributes them to the corresponding observing clients.

Step 6, the decoding module of the observing client decodes the received audio/video data packets to obtain third audio/video data. Step 7, the synchronization module of the observation room terminal performs audio/video synchronization control on the third audio/video data to obtain fourth audio/video data. Step 8, the display module of the observation room terminal renders and displays the fourth audio/video data; no data format conversion is needed before the fourth audio/video data is rendered.
Further, the acquisition module in step 1 adds a multithreaded asynchronous buffering mechanism to data acquisition.

Further, the multithreaded asynchronous buffering mechanism specifically separates the high-cost operations of the acquisition thread into multiple threads; data exchanged between two different threads is temporarily buffered in memory, and unprocessed data is buffered directly into a multithread-shared linked-list memory so that thread resources are released promptly.

Further, the encoding module in step 3 enlarges the encoding timestamp precision by scaling up the video frame rate units.

Further, the display module in step 8 hands the fourth audio/video data to the SDL framework display layer for rendering, where video data is rendered into the window of a specified window handle, and audio data is written into the speaker buffer for sound playback.
As shown in Fig. 1, we describe the process in which the system acquires camera picture data, processes and encodes it, packages it, and sends it to the network server.

For example, packaging one video network packet can be regarded as one overall task, in which video data acquisition, frame rate filtering, picture-in-picture, scaling, video encoding, and packaging are its subtasks. Each subtask proceeds in arrow order, and the number shown with each unit represents its time consumption.
The detailed process is as follows:

Step 1. The system captures camera picture data through the dshow (DirectShow) input device of the FFmpeg framework. Because camera frame rates differ, each frame of camera output takes a certain time; calculated at 25 fps, the output interval between consecutive camera frames is 40 ms.

Step 2. Since the default frame rates of the connected camera devices vary, ranging from about 10 fps to 60 fps, the data delay seen by the processing layer keeps changing. Frame rate filtering is a technique that unifies the video frame rate of the output: it thins out video frames in the camera data stream after acquisition to equalize the frame output times. This system uses a default 40 ms video stream interval as the filtering rule: when the captured camera data arrives too fast, frames within the same interval other than the one closest to the tail of the interval (40 ms) are discarded.
Algorithm description: let the acquisition stream interval be M and P_ be the timestamp of a given moment in the acquired stream; let the normal stream interval be N and P be the corresponding timestamp in the normalized stream; let the total frame count at that moment be C. Then:

P = C*N;

keep the frame if P_ == P || (P - P_) < M; (note that M is the average time interval)

If the expression above is false, the frame needs to be discarded. Frame rate filtering efficiently equalizes the output interval of the video stream, stabilizes system resource usage, and keeps the picture fluent. Because the output time is uniform, it also stabilizes the picture frame rate when the device driver is unstable, so that the filtered frame rate is not made uneven by driver instability.
After frame rate filtering, picture-in-picture processing for multiple video devices can be performed: picture-in-picture overlays a small picture onto the large picture for display.

Picture scaling stretches or compresses a picture that is too large or too small to a suitable size.

The frame rate filtering, picture-in-picture, and picture scaling above all consume system time; processing these subtasks for one frame is estimated at 20 ms.
Step 3. After the camera data has been acquired and passed through frame rate filtering, picture-in-picture, and scaling, it is all in an uncompressed format, stored and operated on as YUV420P. Uncompressed data occupies considerable space, so it must be compressed for network transmission. The video encoding subtask compresses the uncompressed data into a compressed format; the system currently uses the mainstream H.264 compression algorithm. This subtask is estimated at 10 ms.
Taking single-threaded execution of the video task as an example, one video packet output time ≈ 70 ms can be calculated. If the high-cost subtask units above are all broken out into multithreaded tasks, the task unit timing diagram of Fig. 3 is obtained, from which one video packet output time ≈ 40 ms. Since the task units run in parallel across threads, the total task time depends on the longest unit task time. Processing through the multithreaded asynchronous buffering mechanism greatly saves time overhead between the unit modules and reduces delay.
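The 70 ms versus 40 ms figures follow directly from the per-frame subtask costs given above (40 ms acquisition interval, 20 ms filtering/PiP/scaling, 10 ms encoding); a minimal sketch of the serial-versus-parallel accounting, with function names of our own choosing:

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

// Single-threaded pipeline: subtasks run back to back, so the packet
// output period is the sum of the unit costs.
int serial_time_ms(const std::vector<int>& subtask_ms) {
    return std::accumulate(subtask_ms.begin(), subtask_ms.end(), 0);
}

// Multithreaded pipeline: units run in parallel, so the packet output
// period is bounded by the slowest unit.
int parallel_time_ms(const std::vector<int>& subtask_ms) {
    return *std::max_element(subtask_ms.begin(), subtask_ms.end());
}
```

With the costs from the text, `serial_time_ms({40, 20, 10})` gives 70 ms and `parallel_time_ms({40, 20, 10})` gives 40 ms.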
Further, a multithreaded asynchronous buffering mechanism is added to camera data acquisition and audio data acquisition: camera or microphone data is stored asynchronously into a thread buffering queue by multiple threads.

There are two data acquisition modes: active and passive. In active mode the program actively requests data from the driver layer; in passive mode the program obtains data by listening for driver-layer data callbacks. Both modes share a problem: if the acquisition thread performs high-cost operations after fetching driver-layer data (such as data format conversion or encoding), the driver layer will buffer or drop data, which in turn causes delay.

The specific strategy of the multithreaded asynchronous buffering mechanism: data exchanged between two different threads is temporarily buffered in memory, and the high-cost operations of the acquisition thread are separated into multiple threads. This allows the acquisition-layer functions to be read, or called back, promptly.
Step 1. In active-mode acquisition, camera data is fetched into the acquisition thread through the dshow functions of the FFmpeg framework; the data is buffered directly into the multithread-shared linked-list memory without processing, the thread resources are released promptly, and the next dshow read is issued immediately. This prevents the dshow function from being called too late, which would leave the underlying data with nowhere to accumulate and ultimately lose data.

Step 2. Passive-mode acquisition is similar: after a passive callback notification is received, no high-cost operation is done inside the callback. The data is stored directly into the multithread-shared linked-list memory, thread resources are released promptly, and the next callback notification is awaited.

The multithreaded asynchronous buffering mechanism effectively prevents high-cost operations in the acquisition thread and ensures that the driver layer neither buffers nor loses data. For example, this scheme acquires camera driver-layer data using the dshow facility of the FFmpeg framework; if data were acquired too slowly, the FFmpeg framework layer would report that the buffer is full and that subsequent data will be dropped.
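A minimal sketch of such an asynchronous buffer between an acquisition thread and a worker thread, assuming a mutex-protected std::list as a stand-in for the "multithread linked-list memory" (the class name and Frame type are ours; the real system's FFmpeg integration is not shown):

```cpp
#include <cassert>
#include <condition_variable>
#include <cstddef>
#include <list>
#include <mutex>
#include <vector>

// Raw frame placeholder; the real system would hold YUV420P buffers.
using Frame = std::vector<unsigned char>;

class AsyncFrameBuffer {
public:
    // Called by the acquisition thread: O(1) enqueue with no heavy work,
    // so the thread can immediately issue the next driver read/callback.
    void push(Frame f) {
        {
            std::lock_guard<std::mutex> lk(mu_);
            frames_.push_back(std::move(f));
        }
        cv_.notify_one();  // passive notification of the consumer
    }

    // Called by a worker thread (format conversion, encoding, ...);
    // blocks until a frame is available.
    Frame pop() {
        std::unique_lock<std::mutex> lk(mu_);
        cv_.wait(lk, [this] { return !frames_.empty(); });
        Frame f = std::move(frames_.front());
        frames_.pop_front();
        return f;
    }

    std::size_t size() {
        std::lock_guard<std::mutex> lk(mu_);
        return frames_.size();
    }

private:
    std::mutex mu_;
    std::condition_variable cv_;
    std::list<Frame> frames_;  // the "linked-list memory" between threads
};
```

The acquisition thread only pays the cost of one enqueue per frame; all expensive work happens in the consumer threads, which is the point of the mechanism.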
Further, encoding module delay optimization.

For the RTMP protocol video stream, our video encoding uses H.264; encoder optimization proceeds from the following points:

(1) Add hardware encoding support. Since enabling hardware encoding effectively improves encoding efficiency and saves CPU usage, a hardware encoding strategy effectively reduces unit task time. The specific strategy is to check whether the machine supports hardware encoding: if so, hardware encoding is enabled; otherwise software encoding is used, as shown in Fig. 4.

Hardware decoding is the scheme proposed by graphics chip manufacturers for decoding video streams with GPU resources, as opposed to "soft" decoding, the traditional scheme in which the CPU does the decoding work. Its advantages are high efficiency, low power consumption, and low heat; its disadvantages are weaker feature support (filters, subtitles, etc.), larger limitations (for example, some PC power-saving functions such as CnQ fail after hardware decoding is enabled), and more complex setup: the hardware must have a decoding module, with matching drivers, a suitable player, and correct player configuration, and hardware decoding cannot be enabled if any of these is missing. The mainstream hardware decoding schemes are released by Intel, AMD-ATI, and Nvidia.
For video encoding and decoding there are generally two modes:

1. Software. Data is processed with conventional software codecs such as x264 and x265. The advantage is flexibility, since they can be customized as needed; the disadvantage is slower speed.

2. Hardware. Encoding and decoding use APIs provided by the hardware chip manufacturer, with the codecs integrated at the hardware level. The advantage is speed; the disadvantages are platform dependence and inflexibility.
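The fallback decision of point (1) can be sketched as follows. This is a simplified stand-in: availability is passed in as a plain list, whereas a real implementation would probe the machine (for example by querying FFmpeg's encoder registry), and the names h264_nvenc/h264_qsv/h264_amf/libx264 are common FFmpeg H.264 encoder names used here for illustration only:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Pick an H.264 encoder, mirroring the decision flow of Fig. 4:
// preferred hardware encoders are tried in order, and libx264
// (software encoding) is the guaranteed fallback.
std::string choose_h264_encoder(const std::vector<std::string>& available) {
    const std::vector<std::string> hw = {"h264_nvenc", "h264_qsv", "h264_amf"};
    for (const auto& name : hw) {
        if (std::find(available.begin(), available.end(), name) != available.end())
            return name;  // hardware encoding supported: enable it
    }
    return "libx264";     // otherwise fall back to software encoding
}
```

On a machine whose probe reports only software codecs, the function degrades gracefully to libx264.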
(2) Enable the encoder's zero-latency mode. In the FFmpeg framework, the libx264 encoder does not enable the zero-latency encoding mode by default. Enabling it reduces the number of video frames buffered inside the encoder. By default libx264 buffers around 20+ video frames; calculated for 25 fps video, the buffering time is 40 × 20 = 800 ms of delay. After zero latency is enabled, the buffer is reduced to 1 frame and the delay drops to 40 ms. (FFmpeg is an open-source program suite that can record, convert, and stream digital audio and video, licensed under the LGPL or GPL. It provides a complete solution for recording, converting, and streaming audio/video; the ffmpeg tool it provides can be used for format conversion, decoding, or live encoding from a TV card.)

(3) Enable the non-B-frame encoding strategy. Video frame types are I-frames (intra-coded frames), B-frames (bidirectionally predicted interpolated frames), and P-frames (forward-predicted frames). Because of the characteristics of each coded frame type, removing B-frame encoding prevents the decoder from buffering P-frames internally, improving encoding timeliness.
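The 800 ms → 40 ms figures of point (2) are just buffered-frame count times frame interval; a minimal sketch of that accounting (in a real FFmpeg/libx264 setup the corresponding switches would commonly be the `tune=zerolatency` preset and `bframes=0`, named here as an assumption rather than as this system's verified configuration):

```cpp
#include <cassert>

// Encoder-side buffering delay in ms: number of frames held inside
// the encoder multiplied by the frame interval (1000 / fps).
int encoder_buffer_delay_ms(int buffered_frames, int fps) {
    return buffered_frames * (1000 / fps);
}
```

At 25 fps this gives 800 ms for the default ~20-frame buffer and 40 ms once zero-latency mode shrinks the buffer to a single frame.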
Further, delay is optimized by improving timestamp precision.

Accurate timestamp calculation is an important parameter for ensuring video fluency and timeliness. As the program goes from video stream acquisition through format conversion, then encoding, then output stream assembly, each link has its own timestamp calculation; frequent conversion between high-precision and low-precision timestamps inevitably loses precision, causing video frames to be output at the wrong position in time, which appears as unstable picture intervals and delay. We handle the precision-loss problem with the following strategy:

As shown in Fig. 5, using a 30 fps video stream as an example, the steps before optimization are as follows:

Step 1. When camera data is acquired, the system computes video frame timestamps at nanosecond precision.

Step 2. In the video encoding subtask, video timestamps are by default calculated in frame rate units (1:30); performing this timestamp conversion is certain to lose precision.

Step 3. In the video packet packaging subtask, time precision is in milliseconds: the RTMP protocol packages video stream timestamps at 1:1000 precision, and converting the low 1:30 precision up to this higher precision does not itself lose precision.

The timestamps are thus converted from nanoseconds to frame rate units and finally to milliseconds, and the intermediate precision drop causes the loss.

Having encountered this problem, we repair it by amplifying the timestamp precision of the encoder subtask: the video frame rate units are scaled up by a factor of 1,000,000,000. After the fix, the video frame rate is amplified to nanosecond scale and then converted to milliseconds, ensuring that timestamp conversion only ever steps down from high precision.
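The loss and its fix can be made concrete with integer time-base rescaling, a simplified stand-in for the timestamp conversions described above (function names are ours; 64-bit overflow handling is omitted for brevity):

```cpp
#include <cassert>
#include <cstdint>

// Rescale a timestamp between time bases (ticks per second), with
// integer truncation -- the source of the precision loss described above.
int64_t rescale(int64_t ts, int64_t from_ticks_per_s, int64_t to_ticks_per_s) {
    return ts * to_ticks_per_s / from_ticks_per_s;
}

// Before the fix: nanoseconds -> frame-rate units (fps per second) -> ms.
// The drop to coarse frame units discards sub-frame timing.
int64_t ts_via_frame_units(int64_t ts_ns, int64_t fps) {
    int64_t in_frames = rescale(ts_ns, 1000000000, fps);  // precision lost here
    return rescale(in_frames, fps, 1000);
}

// After the fix: the frame-rate time base is amplified by 1e9, so the
// conversion chain only ever steps down from high precision.
int64_t ts_via_amplified_units(int64_t ts_ns, int64_t fps) {
    int64_t in_amplified = rescale(ts_ns, 1000000000, fps * 1000000000);
    return rescale(in_amplified, fps * 1000000000, 1000);
}
```

For a frame at 50 ms (50,000,000 ns) in a 30 fps stream, the unfixed chain rounds down to 33 ms while the amplified chain preserves 50 ms, which is exactly the positional drift the text describes.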
Further, delay is optimized through the inter-thread communication mechanism.

When unit tasks run on multiple threads, data exchange between tasks is unavoidable, and fast data exchange is a necessary link in reducing data output delay. There are two general approaches to inter-thread data exchange: passive notification and active polling.

For example, suppose two task units are started. With multithreading the two units start simultaneously, but the data of task unit 2 depends on the result data of task unit 1, so task unit 2 must wait some time before it can proceed.

In active polling mode, task 2 must repeatedly query the result state of task 1 at some polling interval; since it does not know when task 1 actually finishes producing the data, the data it obtains carries a delay.

In passive notification mode, task 2 simply continues its work after being woken directly by a message initiated by task 1; this mode is timelier than polling.

The program uses this technique extensively in multithreaded data exchange to ensure data timeliness. Specifically, it uses a spin lock mechanism implemented with the C++11 atomic type std::atomic_flag; the spin lock has an obvious advantage in high-frequency multithreaded data exchange.

As shown in Fig. 6, thread 1 proactively notifies thread 2 in time, and thread 2 is passively notified in time; the case of threads 3 and 4 can be simplified to the same two-thread model, with a similar principle. Thread design can generally estimate the time-consuming operations, and a dependent thread's time is typically shorter than that of the thread producing the original data.
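A minimal sketch of the std::atomic_flag spin lock named above (the lock itself is standard C++11; how the real system wraps it around its data exchange is not shown):

```cpp
#include <atomic>
#include <cassert>
#include <thread>  // used by the two-thread usage example

// Spin lock built on the C++11 atomic flag: a waiting thread busy-waits
// instead of sleeping in the kernel, which keeps hand-off latency low
// for the short, high-frequency critical sections described in the text.
class SpinLock {
public:
    void lock() {
        while (flag_.test_and_set(std::memory_order_acquire)) {
            // spin until the holder clears the flag
        }
    }
    void unlock() { flag_.clear(std::memory_order_release); }

private:
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
};
```

Usage follows the two-thread model of Fig. 6: both threads take the lock around a shared update, and the acquire/release ordering makes the producer's result visible to the consumer as soon as the flag is cleared.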
Further, delay is optimized by improving rendering.

Decoded video data still incurs delay in the process of being shown on the display. This output delay has two main parts: data format conversion delay and display-layer rendering delay. This scheme reduces rendering delay by taking "no data format conversion" as the preferred principle. For example, rendering the conventional yuv420p video data format through a Direct3D 9 mode requires converting the format to rgb32 before delivery to the rendering object; depending on picture size, this conversion adds roughly 5 ms to 30 ms of delay. Rendering through SDL 2.0, by contrast, can render the SDL_PIXELFORMAT_IYUV format directly, saving the delay caused by format conversion.
A low-delay surgical teaching live broadcast system comprises an operating room terminal, a live broadcast server, and observation room clients.

The operating room terminal comprises:

an acquisition module for acquiring first audio/video data;

a data processing module that processes the first audio/video data to obtain second audio/video data, the processing including format conversion and frame rate filtering;

an encoding module that encodes the second audio/video data to obtain audio/video data packets, the encoding module adding hardware encoding support, enabling zero-latency encoding mode, and enabling a non-B-frame encoding strategy;

a sending module that sends the audio/video data packets to the live broadcast server; and

a first multithread control module that controls the tasks of the other modules of the operating room terminal to be processed in parallel on multiple threads.
The live broadcast server comprises a live transmission control module that, according to live streaming requests received from observing clients, copies the received audio/video data packets and distributes them to the corresponding observing clients.

The observing client comprises:

a decoding module that decodes the received audio/video data packets to obtain third audio/video data;

a synchronization module that performs audio/video synchronization control on the third audio/video data to obtain fourth audio/video data; and

a display module that renders and displays the fourth audio/video data, no data format conversion being needed before the fourth audio/video data is rendered.
Further, the acquisition module adds a multithreaded asynchronous buffering mechanism to data acquisition.

Further, the multithreaded asynchronous buffering mechanism specifically separates the high-cost operations of the acquisition thread into multiple threads; data exchanged between two different threads is temporarily buffered in memory, and unprocessed data is buffered directly into a multithread-shared linked-list memory so that thread resources are released promptly.

Further, the encoding module enlarges the encoding timestamp precision by scaling up the video frame rate units.

Further, the display module hands the fourth audio/video data to the SDL framework display layer for rendering, where video data is rendered into the window of a specified window handle, and audio data is written into the speaker buffer for sound playback.
Functional overview of the surgery teaching live broadcast system: the system connects to the video signals of all kinds of medical devices in the operating room and forwards the multi-channel video stream signals through the live streaming server to multiple observation terminals for real-time viewing. The system can capture the important video signals of all types of medical instruments in the operating room, such as the DSA device picture, the endoscope, the X-ray machine, and the downward-view signal of the shadowless lamp during the procedure, and can synchronously capture the voice of the operating surgeon and the voice signals of the entire operating room. The system can further support access to the hospital's HIS/PACS/LIS system data, fuse it with the operating room signals, and transmit it in real time over the network to system nodes such as teaching centers, offices and central stations to realize live broadcasting, support voice interaction between the points, and meet teaching requirements.
The system of the present embodiment is tested in the environment shown in Fig. 7. Test video stream parameters: RTMP protocol encapsulation, video bitrate 8000 kbps, resolution 1920*1080, frame rate 30 fps. The test network environment is a 100 Mbps local area network. Test terminal configuration: CPU i7 7700; graphics card 1050Ti 4G; memory 8G; system Win10. Fig. 8 shows the delay test results of the present embodiment.
The surgery demonstration system in the present embodiment is a software system developed in C/C++, with the advantages of cross-platform support and multi-terminal deployment, fully supporting multiple systems such as Linux, Windows, macOS, Android and iOS.
The system in the present embodiment adopts a commercial-SDK-level development approach and offers rich interface extensibility: the raw audio/video data can be modified in any module along any path, giving a high degree of secondary developability and maintainability.
The system in the present embodiment has commercial-grade stability, supporting uninterrupted 7x24h live streaming.
The foregoing is merely preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (10)
1. A low-latency surgery teaching live broadcasting method, the method comprising:
Step 1: an acquisition module of an operating room terminal collects first audio/video data;
Step 2: a data processing module of the operating room terminal processes the first audio/video data to obtain second audio/video data, the processing of the first audio/video data by the data processing module including format conversion and frame rate filtering;
Step 3: a coding module of the operating room terminal encodes the second audio/video data to obtain audio/video data packets, the coding module adding hardware encoding support, enabling a zero-latency encoding mode and applying a no-B-frame encoding strategy;
Step 4: the operating room terminal sends the audio/video data packets to a live streaming server;
wherein a multithreaded asynchronous buffer control module of the operating room terminal causes steps 1-4 to be processed concurrently on multiple cores;
Step 5: the live streaming server, according to a received live streaming request of an observation room client, copies the received audio/video data packets and distributes them to the observation room client requesting the live stream;
Step 6: a decoder module of the observation room client decodes the received audio/video data packets to obtain third audio/video data;
Step 7: a synchronization module of the observation room terminal performs audio-video synchronization control on the third audio/video data to obtain fourth audio/video data;
Step 8: a display module of the observation room terminal renders and displays the fourth audio/video data, the fourth audio/video data requiring no data format conversion before being rendered.
2. The method of claim 1, wherein in step 1 the acquisition module adds a multithreaded asynchronous buffering mechanism to data acquisition.
3. The method of claim 2, wherein the multithreaded asynchronous buffering mechanism specifically separates the time-consuming operations of the acquisition thread into multiple threads, temporarily buffers in memory the data that must be exchanged between two different threads, buffers unprocessed data directly into a multithread linked-list memory, and releases thread resources promptly.
4. The method of claim 1, wherein, in step 2, the coding module amplifies the video frame rate scale to increase the precision of encoding timestamps.
5. The method of claim 1, wherein in step 8 the display module puts the fourth audio/video data into an SDL framework display layer for rendering and display, wherein video data is rendered and displayed onto the window handle of a specified window, and audio data is written into a loudspeaker buffer for sound playback.
6. A low-latency surgery teaching live broadcast system, comprising an operating room terminal, a live streaming server and an observation room client;
the operating room terminal comprising:
an acquisition module for collecting first audio/video data;
a data processing module for performing data processing on the first audio/video data to obtain second audio/video data, the processing of the first audio/video data by the data processing module including format conversion and frame rate filtering;
a coding module for encoding the second audio/video data to obtain audio/video data packets, the coding module adding hardware encoding support, enabling a zero-latency encoding mode and applying a no-B-frame encoding strategy;
a sending module for sending the audio/video data packets to the live streaming server; and
a first multithread control module for controlling the tasks of the other modules of the operating room terminal so that they are processed concurrently on multiple cores;
the live streaming server comprising a live transmission control module for copying, according to a received live streaming request of the observation room client, the received audio/video data packets and distributing them to the observation room client requesting the live stream;
the observation room client comprising:
a decoder module for decoding the received audio/video data packets to obtain third audio/video data;
a synchronization module for performing audio-video synchronization control on the third audio/video data to obtain fourth audio/video data;
a display module for rendering and displaying the fourth audio/video data, the fourth audio/video data requiring no data format conversion before being rendered.
7. The system of claim 6, wherein the acquisition module adds a multithreaded asynchronous buffering mechanism to data acquisition.
8. The system of claim 7, wherein the multithreaded asynchronous buffering mechanism specifically separates the time-consuming operations of the acquisition thread into multiple threads, temporarily buffers in memory the data that must be exchanged between two different threads, buffers unprocessed data directly into a multithread linked-list memory, and releases thread resources promptly.
9. The system of claim 6, wherein the coding module amplifies the video frame rate scale to increase the precision of encoding timestamps.
10. The system of claim 6, wherein the display module puts the fourth audio/video data into an SDL framework display layer for rendering and display, wherein video data is rendered and displayed onto the window handle of a specified window, and audio data is written into a loudspeaker buffer for sound playback.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910181769.0A CN109874027A (en) | 2019-03-11 | 2019-03-11 | A kind of low delay educational surgery demonstration live broadcasting method and its system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109874027A true CN109874027A (en) | 2019-06-11 |
Family
ID=66920136
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910181769.0A Pending CN109874027A (en) | 2019-03-11 | 2019-03-11 | A kind of low delay educational surgery demonstration live broadcasting method and its system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109874027A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111526466A (en) * | 2020-04-30 | 2020-08-11 | 成都千立网络科技有限公司 | Real-time audio signal processing method for sound amplification system |
CN111641796A (en) * | 2020-06-10 | 2020-09-08 | 广东盛利医疗科技有限公司 | System and method for remote operation guidance and teaching |
CN113573080A (en) * | 2021-06-28 | 2021-10-29 | 北京百度网讯科技有限公司 | Live broadcast recording method and device, electronic equipment and storage medium |
CN113824973A (en) * | 2021-08-04 | 2021-12-21 | 杭州星犀科技有限公司 | Multi-platform direct-push plug flow method, system, electronic device and storage medium |
CN113965768A (en) * | 2021-09-10 | 2022-01-21 | 北京达佳互联信息技术有限公司 | Live broadcast room information display method and device, electronic equipment and server |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1996258A (en) * | 2006-12-28 | 2007-07-11 | 武汉虹旭信息技术有限责任公司 | Method for implementing dynamic memory pool |
CN101470665A (en) * | 2007-12-27 | 2009-07-01 | Tcl集团股份有限公司 | Method and system for internal memory management of application system without MMU platform |
CN101626500A (en) * | 2009-07-31 | 2010-01-13 | 北京大学深圳研究生院 | Method and device for controlling video frame rate |
CN103227919A (en) * | 2013-03-29 | 2013-07-31 | 苏州皓泰视频技术有限公司 | Scalable video coding (SVC) method based on multi-core processor Tilera |
CN103327417A (en) * | 2013-07-11 | 2013-09-25 | 亿览在线网络技术(北京)有限公司 | Method and device for directly broadcasting real-time long-distance audio and video frequency |
CN103500120A (en) * | 2013-09-17 | 2014-01-08 | 北京思特奇信息技术股份有限公司 | Distributed cache high-availability processing method and system based on multithreading asynchronous double writing |
CN104598563A (en) * | 2015-01-08 | 2015-05-06 | 北京京东尚科信息技术有限公司 | High concurrency data storage method and device |
CN105378652A (en) * | 2013-12-24 | 2016-03-02 | 华为技术有限公司 | Method and apparatus for allocating thread shared resource |
CN105939312A (en) * | 2015-08-26 | 2016-09-14 | 杭州迪普科技有限公司 | Data transmission method and device |
CN106230841A (en) * | 2016-08-04 | 2016-12-14 | 深圳响巢看看信息技术有限公司 | A kind of video U.S. face and the method for plug-flow in real time in network direct broadcasting based on terminal |
CN106325980A (en) * | 2015-06-30 | 2017-01-11 | 中国石油化工股份有限公司 | Multi-thread concurrent system |
CN106658030A (en) * | 2016-12-30 | 2017-05-10 | 上海寰视网络科技有限公司 | Method and device for playing composite video comprising single-path audio and multipath videos |
CN106844041A (en) * | 2016-12-29 | 2017-06-13 | 华为技术有限公司 | The method and internal storage management system of memory management |
CN107026856A (en) * | 2017-03-30 | 2017-08-08 | 上海七牛信息技术有限公司 | The optimization method and optimization system of a kind of network plug-flow quality |
CN108234977A (en) * | 2018-01-12 | 2018-06-29 | 京东方科技集团股份有限公司 | A kind of video broadcasting method and display system |
CN109151762A (en) * | 2018-10-19 | 2019-01-04 | 海南易乐物联科技有限公司 | A kind of the asynchronous process system and processing method of high concurrent acquisition data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190611 |