CN108632679B - Multimedia data transmission method and view networked terminal - Google Patents


Info

Publication number
CN108632679B
CN108632679B (application CN201710802232.2A)
Authority
CN
China
Prior art keywords
frame, data, networked terminals, view, coding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710802232.2A
Other languages
Chinese (zh)
Other versions
CN108632679A (en)
Inventor
王艳辉
朱道彦
杨春晖
谭智东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan Shilian Communication Technology Co.,Ltd.
Original Assignee
Visionvera Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd filed Critical Visionvera Information Technology Co Ltd
Priority to CN201710802232.2A priority Critical patent/CN108632679B/en
Publication of CN108632679A publication Critical patent/CN108632679A/en
Application granted granted Critical
Publication of CN108632679B publication Critical patent/CN108632679B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64784Data processing by the network
    • H04N21/64792Controlling the complexity of the content stream, e.g. by dropping packets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/643Communication protocols
    • H04N21/6437Real-time Transport Protocol [RTP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

An embodiment of the invention provides a multimedia data transmission method, applied in view networking and involving a view networked terminal and a view networked server. The view networked terminal obtains multimedia data, which includes image data and audio data; the image data consists of multiple groups of pictures (GOPs), each composed of at least one I-frame and at least one P-frame. The view networked terminal obtains the ratio of I-frames to P-frames in the GOP; if the ratio is less than or equal to a preset ratio threshold, the view networked terminal encodes the current image data according to that ratio while also encoding the audio data, and then sends the encoded image data and encoded audio data to the view networked server. The embodiment of the invention solves the packet loss, stuttering and delay that frequently occur when multimedia data is transmitted in the prior art, and improves the real-time performance of multimedia data transmission.

Description

Multimedia data transmission method and view networked terminal
Technical field
The present invention relates to the technical field of view networking, and in particular to a multimedia data transmission method and a view networked terminal.
Background
With the rapid development of network technology, application scenarios such as video conferencing, video surveillance and live network broadcasting have become widespread in users' daily life, work and study.
In these application scenarios, multimedia data such as image data and audio data must be transmitted in real time, and before transmission the multimedia data must first be encoded. In the current encoding scheme, the ratio of I-frames to P-frames in the image data is set to a fixed value; the image data is encoded according to this fixed ratio while the audio data is also encoded, and after encoding the image data and audio data are sent to a server using a transport protocol.
However, because a fixed I/P frame ratio is set during encoding, whenever the actual proportion of I-frames is higher than the fixed ratio, the surplus I-frames must be discarded, which causes stuttering, delay and even frame loss. Moreover, the existing transport protocols are the IP-based RTP (Real-time Transport Protocol) and RTSP (Real-Time Streaming Protocol), which are built on top of the traditional UDP/TCP transport layer and therefore cannot meet the requirement of high real-time performance.
Summary of the invention
In view of the above problems, embodiments of the present invention propose a multimedia data transmission method and a corresponding view networked terminal.
To solve the above problems, an embodiment of the invention discloses a multimedia data transmission method applied in view networking and involving a view networked terminal and a view networked server.
The method includes:
the view networked terminal obtains multimedia data; the multimedia data includes image data and audio data; the image data consists of multiple groups of pictures (GOPs), each composed of at least one I-frame and at least one P-frame;
the view networked terminal obtains the ratio of I-frames to P-frames in the GOP;
if the ratio is less than or equal to a preset ratio threshold, the view networked terminal encodes the current image data according to that ratio while also encoding the audio data;
the view networked terminal sends the encoded image data and the encoded audio data to the view networked server.
Preferably, the multimedia data is collected by an image capture device and an audio capture device, and the step of the view networked terminal obtaining the multimedia data includes:
the view networked terminal obtains the image data from the image capture device and obtains the audio data from the audio capture device.
Preferably, the method further includes:
if the ratio is greater than the preset ratio threshold, the view networked terminal encodes the current image data according to the preset ratio threshold while also encoding the audio data.
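The two-branch encoding decision above — use the measured I/P ratio when it is at or below the preset threshold, otherwise fall back to the threshold — can be sketched as follows. This is an illustrative sketch only; the function name, arguments and example threshold are assumptions and not part of the patented implementation.

```python
def choose_encoding_ratio(i_frames, p_frames, ratio_threshold):
    """Return the I/P ratio to use when encoding the current group of pictures.

    If the measured ratio of I-frames to P-frames is at or below the preset
    threshold, encode at the measured ratio; otherwise encode at the threshold
    itself, so surplus I-frames are not simply dropped.
    """
    if p_frames <= 0:
        raise ValueError("a group of pictures must contain at least one P-frame")
    measured = i_frames / p_frames
    return measured if measured <= ratio_threshold else ratio_threshold

# Example: GOPs measured against an assumed threshold of 0.5
print(choose_encoding_ratio(2, 8, 0.5))  # 0.25 <= 0.5, encode at 0.25
print(choose_encoding_ratio(6, 8, 0.5))  # 0.75 > 0.5, clamp to 0.5
```

The point of the dynamic branch is that the encoder follows the measured ratio whenever possible and only clamps when the GOP is unusually I-frame heavy.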
Preferably, the view networked terminal includes a view networking protocol stack, and each view networking protocol has a one-to-one corresponding data type;
the step of the view networked terminal sending the encoded image data and the encoded audio data to the view networked server includes:
the view networked terminal sends the encoded image data and the encoded audio data to the view networked server using the view networking protocol corresponding to the image data type.
Preferably, the view networked terminal includes a first chip and a second chip; the first chip is used to encode the image data, and the second chip is used to encode the audio data.
Correspondingly, an embodiment of the invention discloses a view networked terminal, applied in view networking and exchanging data with a view networked server.
The view networked terminal includes:
a multimedia data obtaining module, configured to obtain multimedia data; the multimedia data includes image data and audio data; the image data consists of multiple GOPs, each composed of at least one I-frame and at least one P-frame;
a ratio obtaining module, configured to obtain the ratio of I-frames to P-frames in the GOP;
an encoding module, configured to, when the ratio is less than or equal to a preset ratio threshold, encode the current image data according to that ratio while also encoding the audio data;
a sending module, configured to send the encoded image data and the encoded audio data to the view networked server.
Preferably, the multimedia data is collected by an image capture device and an audio capture device, and the view networked terminal further includes:
a receiving module, configured to obtain the image data from the image capture device and obtain the audio data from the audio capture device.
Preferably, when the ratio is greater than the preset ratio threshold, the encoding module is further configured to encode the current image data according to the preset ratio threshold while also encoding the audio data.
Preferably, the view networked terminal includes a view networking protocol stack, and each view networking protocol has a one-to-one corresponding data type;
the sending module is further configured to send the encoded image data and the encoded audio data to the view networked server using the view networking protocol corresponding to the image data type.
Preferably, the view networked terminal further includes a first chip and a second chip; the first chip is used to encode the image data, and the second chip is used to encode the audio data.
Embodiments of the present invention include the following advantages:
In an embodiment of the invention, the view networked terminal calculates the ratio of I-frames to P-frames in the image data of the collected multimedia data and compares the calculated ratio with a preset ratio threshold. If the calculated ratio is less than or equal to the preset threshold, the image data is encoded according to the current I/P frame ratio while the audio data in the multimedia data is also encoded, and the encoded image data and audio data are then sent to the view networked server. In this way, dynamic configuration of the I/P frame ratio for view networking encoding is achieved, which solves the packet loss, stuttering and delay that frequently occur in the prior art when multimedia data encoded with a fixed I/P frame ratio is transmitted, and improves the real-time performance of multimedia data transmission.
Moreover, the view networked terminal integrates a view networking protocol stack, so that when multimedia data is packaged, a view networking packet header — containing the view networking MAC address and the command — can be added, and transmission is established over raw sockets. Data transmission and command transmission can therefore be handled independently and efficiently, ensuring accurate and timely delivery of both the multimedia data and the signaling.
Brief description of the drawings
Fig. 1 is a networking schematic diagram of view networking according to the invention;
Fig. 2 is a schematic diagram of the hardware structure of a node server according to the invention;
Fig. 3 is a schematic diagram of the hardware structure of an access switch according to the invention;
Fig. 4 is a schematic diagram of the hardware structure of an Ethernet protocol conversion gateway according to the invention;
Fig. 5 is a flowchart of the steps of an embodiment of the multimedia data transmission method of the invention;
Fig. 6 is a structural block diagram of an embodiment of the view networked terminal of the invention.
Detailed description of the embodiments
To make the above objectives, features and advantages of the present invention clearer and easier to understand, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
View networking is an important milestone in network development. It is a real-time network capable of real-time transmission of high-definition video, pushing numerous Internet applications toward high-definition, face-to-face video.
View networking uses real-time high-definition video switching technology to integrate dozens of services — video, voice, pictures, text, communication and data, such as high-definition video conferencing, video surveillance, intelligent monitoring analysis, emergency command, digital broadcast television, delayed television, online education, live broadcasting, video on demand, television mail, personal video recording (PVR), intranet (self-managed) channels, intelligent video broadcast control and information publishing — into a single system platform, achieving high-definition-quality video playback through a television or computer.
To help those skilled in the art better understand the embodiments of the present invention, view networking is introduced below.
Some of the technologies applied in view networking are as follows:
Network Technology
The network technology innovation of view networking improves traditional Ethernet to cope with the potentially enormous video traffic on the network. Unlike pure network packet switching (Packet Switching) or circuit switching (Circuit Switching), view networking technology uses packet switching to meet streaming-media requirements. View networking technology has the flexibility, simplicity and low cost of packet switching while also providing the quality and security guarantees of circuit switching, achieving whole-network switched virtual circuits and seamless connection of data formats.
Switching Technology
View networking adopts the two advantages of Ethernet — asynchrony and packet switching — and, while remaining fully compatible, eliminates Ethernet's defects, providing end-to-end seamless connection across the whole network, reaching user terminals directly and directly carrying IP data packets. User data requires no format conversion anywhere in the network. View networking is a more advanced form of Ethernet and a real-time exchange platform; it can achieve whole-network, large-scale, high-definition real-time video transmission that the current Internet cannot, pushing numerous network video applications toward high definition and unification.
Server Technology
Unlike traditional servers, the server technology of view networking and the unified video platform performs streaming-media transmission on a connection-oriented basis; its data-handling capacity is independent of traffic and communication time, and a single network layer can carry both signaling and data. For voice and video services, the complexity of streaming-media processing on the view networking unified video platform is much lower than that of data processing, and its efficiency is improved a hundredfold or more over a traditional server.
Storage Technology
To accommodate media content of vast capacity and enormous traffic, the ultra-high-speed storage technology of the unified video platform uses a state-of-the-art real-time operating system. The program information in a server instruction is mapped to specific hard-disk space; media content no longer passes through the server but is delivered instantly and directly to the user terminal, with a typical user waiting time of less than 0.2 seconds. Optimized sector distribution greatly reduces the mechanical seek movement of the hard-disk head; resource consumption is only 20% of that of an IP Internet system of the same grade, while the concurrent traffic generated is more than 3 times that of a traditional disk array, and overall efficiency is improved more than 10-fold.
Network Security Technology
Through means such as an independent per-service permission system and complete isolation of devices and user data, the structural design of view networking thoroughly eradicates, at the structural level, the network security problems that plague the Internet. It generally requires no antivirus software or firewall, blocks attacks by hackers and viruses, and provides users with a structurally worry-free secure network.
Service Innovation Technology
The unified video platform fuses services with transmission: whether for a single user, a private-line user or an entire network, connection is only a one-time automatic linkup. User terminals, set-top boxes or PCs connect directly to the unified video platform and obtain a rich variety of multimedia video services. The unified video platform uses a menu-like table configuration mode in place of traditional complex application programming, so that complex applications can be realized with very little code, enabling "endless" new service innovation.
The networking of view networking is as follows:
View networking has a centrally controlled network structure. The network can be of the tree, star, ring or other type, but on this basis a centralized control node is required in the network to control the entire network.
As shown in Fig. 1, view networking is divided into two parts: an access network and a metropolitan area network.
The devices in the access network part can mainly be divided into 3 classes: node servers, access switches, and terminals (including various set-top boxes, encoding boards, storage devices, etc.). A node server is connected to access switches; an access switch can be connected to multiple terminals and can connect to Ethernet.
The node server is the node that performs centralized control in the access network and can control the access switches and terminals. The node server can be directly connected to an access switch or directly connected to a terminal.
Similarly, the devices in the metropolitan area network part can also be divided into 3 classes: metropolitan area servers, node switches, and node servers. A metropolitan area server is connected to node switches, and a node switch can be connected to multiple node servers.
Here, the node server is the node server of the access network part; that is, the node server belongs both to the access network part and to the metropolitan area network part.
The metropolitan area server is the node that performs centralized control in the metropolitan area network and can control the node switches and node servers. A metropolitan area server can be directly connected to a node switch or directly connected to a node server.
It can be seen that the entire view networking network is a hierarchically and centrally controlled network structure, and the networks controlled under a node server or a metropolitan area server can have various structures such as tree, star and ring.
Figuratively speaking, the access network part can form a unified video platform (the part within the dashed circle), and multiple unified video platforms can form view networking; the unified video platforms can be interconnected through metropolitan area and wide area view networking.
Classification of view networking devices
1.1 The devices in the view networking of the embodiment of the invention can mainly be divided into 3 classes: servers, switches (including Ethernet gateways), and terminals (including various set-top boxes, encoding boards, storage devices, etc.). View networking as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 The devices in the access network part can mainly be divided into 3 classes: node servers, access switches (including Ethernet gateways), and terminals (including various set-top boxes, encoding boards, storage devices, etc.).
The specific hardware structure of each access network device is as follows:
Node server:
As shown in Fig. 2, it mainly includes a network interface module 201, a switching engine module 202, a CPU module 203 and a disk array module 204.
Packets arriving from the network interface module 201, the CPU module 203 and the disk array module 204 all enter the switching engine module 202. The switching engine module 202 performs a lookup in the address table 205 on each incoming packet to obtain the packet's routing information, and stores the packet in the queue of the corresponding packet buffer 206 according to that routing information; if the queue of the packet buffer 206 is nearly full, the packet is discarded. The switching engine module 202 polls all packet buffer queues and forwards from a queue when the following conditions are met: 1) the port's send cache is not full; 2) the queue's packet counter is greater than zero. The disk array module 204 mainly implements control of the hard disks, including operations such as initialization and reading/writing of the hard disks. The CPU module 203 is mainly responsible for protocol processing with the access switches and terminals (not shown), configuration of the address table 205 (including the downlink protocol packet address table, the uplink protocol packet address table and the data packet address table), and configuration of the disk array module 204.
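The enqueue-and-poll behaviour of the switching engine described above can be modelled as a minimal sketch. The queue capacity and function names here are assumptions for illustration; the patent only specifies "nearly full" for the drop condition and the two forwarding conditions.

```python
from collections import deque

QUEUE_LIMIT = 8  # illustrative capacity; the text only says "nearly full"

def enqueue(queue, packet):
    """Store a packet in its buffer queue per its routing information,
    dropping it when the queue is (nearly) full."""
    if len(queue) >= QUEUE_LIMIT:
        return False  # packet discarded
    queue.append(packet)
    return True

def poll_and_forward(queue, send_cache_has_room):
    """Polling rule: forward one packet only when (1) the egress port's send
    cache is not full and (2) the queue's packet counter is greater than zero."""
    if send_cache_has_room and len(queue) > 0:
        return queue.popleft()
    return None
```

A usage example: `poll_and_forward(q, True)` yields the head packet when the queue is non-empty, and `None` otherwise, mirroring the two polled conditions.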
Access switch:
As shown in Fig. 3, it mainly includes network interface modules (a downstream network interface module 301 and an uplink network interface module 302), a switching engine module 303 and a CPU module 304.
Packets (upstream data) arriving from the downstream network interface module 301 enter the packet detection module 305. The packet detection module 305 checks whether the destination address (DA), source address (SA), packet type and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id) and the packet enters the switching engine module 303; otherwise the packet is discarded. Packets (downlink data) arriving from the uplink network interface module 302 enter the switching engine module 303, as do packets arriving from the CPU module 304. The switching engine module 303 performs a lookup in the address table 306 on each incoming packet to obtain its routing information. If a packet entering the switching engine module 303 is going from a downstream network interface toward an uplink network interface, it is stored in the queue of the corresponding packet buffer 307 in combination with its stream identifier (stream-id); if the queue of the packet buffer 307 is nearly full, the packet is discarded. If a packet entering the switching engine module 303 is not going from a downstream network interface toward an uplink network interface, it is stored in the data packet queue of the corresponding packet buffer 307 according to its routing information; if the queue of the packet buffer 307 is nearly full, the packet is discarded.
The switching engine module 303 polls all packet buffer queues, and in the embodiment of the invention two cases are distinguished:
If the queue is going from a downstream network interface toward an uplink network interface, forwarding takes place when the following conditions are met: 1) the port's send cache is not full; 2) the queue's packet counter is greater than zero; 3) a token generated by the rate control module is obtained;
If the queue is not going from a downstream network interface toward an uplink network interface, forwarding takes place when the following conditions are met: 1) the port's send cache is not full; 2) the queue's packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304 and, at programmable intervals, generates tokens for all packet buffer queues going from downstream network interfaces toward uplink network interfaces, so as to control the code rate of upstream forwarding.
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the rate control module 308.
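The third forwarding condition for downstream-to-uplink queues — holding a token from the rate control module — can be modelled as a simple token counter. This is an illustrative sketch under stated assumptions: the real rate control module is hardware configured by the CPU module, and the class and method names here are invented.

```python
class RateControl:
    """Token source for upstream-bound queues: one token is granted per
    programmable interval, and a downstream-to-uplink queue may forward
    only while it holds a token (illustrative model of module 308)."""

    def __init__(self):
        self.tokens = 0

    def tick(self):
        # Called once per programmable interval to generate a token.
        self.tokens += 1

    def try_forward(self, send_cache_has_room, queue_count):
        # All three conditions from the text: send cache has room,
        # the queue is non-empty, and a token is available.
        if send_cache_has_room and queue_count > 0 and self.tokens > 0:
            self.tokens -= 1
            return True
        return False
```

Spacing the `tick()` calls sets the upstream code rate: however full the queue, at most one packet is released per interval.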
Ethernet protocol conversion gateway:
As shown in Fig. 4, it mainly includes network interface modules (a downstream network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409 and a MAC removing module 410.
Data packets arriving from the downstream network interface module 401 enter the packet detection module 405. The packet detection module 405 checks whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, view networking destination address DA, view networking source address SA, view networking data packet type and packet length of the data packet meet the requirements; if so, a corresponding stream identifier (stream-id) is allocated, the MAC removing module 410 then strips the MAC DA, MAC SA and length or frame type (2 bytes), and the packet enters the corresponding receive cache; otherwise the packet is discarded.
The downstream network interface module 401 checks the send cache of the port; if a packet is present, it learns the Ethernet MAC DA of the corresponding terminal from the packet's view networking destination address DA, adds the terminal's Ethernet MAC DA, the Ethernet protocol conversion gateway's MAC SA and the Ethernet length or frame type, and sends the packet.
The functions of the other modules in the Ethernet protocol conversion gateway are similar to those of the access switch.
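The work of the MAC removing module 410 and MAC adding module 409 described above amounts to stripping and prepending a 14-byte Ethernet header (6-byte MAC DA, 6-byte MAC SA, and the 2-byte length/frame type). A minimal sketch with invented function names, omitting the DA-to-MAC lookup table the gateway would consult:

```python
ETH_HEADER_LEN = 14  # 6-byte MAC DA + 6-byte MAC SA + 2-byte length/frame type

def strip_ethernet_header(frame: bytes) -> bytes:
    """MAC removing module: drop MAC DA, MAC SA and the 2-byte length/frame
    type, leaving the bare view networking packet."""
    return frame[ETH_HEADER_LEN:]

def add_ethernet_header(packet: bytes, terminal_mac: bytes,
                        gateway_mac: bytes, frame_type: bytes) -> bytes:
    """MAC adding module: prepend the terminal's Ethernet MAC DA (as learned
    from the packet's view networking DA), the gateway's MAC SA and the
    length/frame type before sending."""
    return terminal_mac + gateway_mac + frame_type + packet
```

The two functions are inverses: framing a packet and stripping the frame returns the original view networking packet.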
Terminal:
It mainly includes a network interface module, a service processing module and a CPU module. For example, a set-top box mainly includes a network interface module, a video/audio encoding/decoding engine module and a CPU module; an encoding board mainly includes a network interface module, a video encoding engine module and a CPU module; a storage device mainly includes a network interface module, a CPU module and a disk array module.
1.3 The devices in the metropolitan area network part can mainly be divided into 2 classes: node switches and metropolitan area servers. A node switch mainly includes a network interface module, a switching engine module and a CPU module; a metropolitan area server mainly includes a network interface module, a switching engine module and a CPU module.
2. View networking data packet definition
2.1 Access network data packet definition
The data packet of the access network mainly includes the following parts: destination address (DA), source address (SA), reserved bytes, payload (PDU), and CRC.
As shown in the table below, the data packet of the access network mainly includes the following parts:
DA | SA | Reserved | Payload | CRC
Where:
The destination address (DA) consists of 8 bytes. The first byte indicates the type of the data packet (such as various protocol packets, multicast data packets, unicast data packets, etc.), with up to 256 possibilities; the second to sixth bytes are the metropolitan area network address; and the seventh and eighth bytes are the access network address.
The source address (SA) also consists of 8 bytes and is defined in the same way as the destination address (DA).
The reserved bytes consist of 2 bytes.
The payload part has different lengths depending on the type of the datagram: 64 bytes for various protocol packets, and 32 + 1024 = 1056 bytes for single-group unicast data packets; it is of course not restricted to these 2 kinds.
The CRC consists of 4 bytes, and its calculation method follows the standard Ethernet CRC algorithm.
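The access network packet layout above (8-byte DA, 8-byte SA, 2 reserved bytes, variable payload, 4-byte CRC) can be sketched as follows. Python's `zlib.crc32` stands in for the standard Ethernet CRC here (it uses the same CRC-32 polynomial), and the byte-order and CRC-coverage details beyond the text are assumptions.

```python
import struct
import zlib

def build_access_packet(da: bytes, sa: bytes, payload: bytes) -> bytes:
    """Assemble an access network packet: 8-byte DA, 8-byte SA, 2 reserved
    bytes, variable-length payload, then a 4-byte CRC over all preceding
    fields (zlib's CRC-32 as a stand-in for the Ethernet CRC)."""
    if len(da) != 8 or len(sa) != 8:
        raise ValueError("DA and SA each consist of 8 bytes")
    body = da + sa + b"\x00\x00" + payload
    return body + struct.pack(">I", zlib.crc32(body))

def parse_access_packet(packet: bytes):
    """Split a packet back into (DA, SA, payload), verifying the CRC."""
    body, crc = packet[:-4], struct.unpack(">I", packet[-4:])[0]
    if zlib.crc32(body) != crc:
        raise ValueError("CRC mismatch")
    return body[:8], body[8:16], body[18:]  # skip the 2 reserved bytes
```

Per the text, the first DA byte carries the packet type, bytes 1-5 the metropolitan area network address, and bytes 6-7 the access network address; this sketch treats the 8 bytes opaquely.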
2.2 Metropolitan Area Network (MAN) packet definitions
The topology of Metropolitan Area Network (MAN) is pattern, may there is 2 kinds, connection even of more than two kinds, i.e. node switching between two equipment It can all can exceed that 2 kinds between machine and node server, node switch and node switch, node switch and node server Connection.But the metropolitan area net address of metropolitan area network equipment is uniquely, to close to accurately describe the connection between metropolitan area network equipment System, introduces parameter in embodiments of the present invention: label, uniquely to describe a metropolitan area network equipment.
(Multi-Protocol Label Switch, multiprotocol label are handed over by the definition of label and MPLS in this specification Change) label definition it is similar, it is assumed that between equipment A and equipment B there are two connection, then data packet from equipment A to equipment B just There are 2 labels, data packet also there are 2 labels from equipment B to equipment A.Label is divided into label, outgoing label, it is assumed that data packet enters The label (entering label) of equipment A is 0x0000, and the label (outgoing label) when this data packet leaves equipment A may reform into 0x0001.The networking process of Metropolitan Area Network (MAN) is to enter network process under centralized control, also means that address distribution, the label of Metropolitan Area Network (MAN) Distribution be all to be dominated by metropolitan area server, node switch, node server be all passively execute, this point with The label distribution of MPLS is different, and the distribution of the label of MPLS is the result that interchanger, server are negotiated mutually.
As shown in the table, the data packet of Metropolitan Area Network (MAN) mainly includes following sections:
DA SA Reserved Label Payload CRC
That is: destination address (DA), source address (SA), reserved bytes (Reserved), label, payload (PDU), and CRC. The format of the label may follow this definition: the label is 32 bits, of which the high 16 bits are reserved and only the low 16 bits are used; its position is between the reserved bytes and the payload of the data packet.
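Under the definitions above (a 32-bit label field whose high 16 bits are reserved, plus per-device incoming/outgoing labels as in the MPLS-like example), the label handling can be sketched as follows; network byte order for the on-wire field is an assumption here:

```python
import struct

LABEL_MASK = 0x0000FFFF  # only the low 16 bits of the 32-bit field are used

def pack_label(value: int) -> bytes:
    """Encode a 16-bit label into the 32-bit on-wire field (high 16 bits zero)."""
    if not 0 <= value <= 0xFFFF:
        raise ValueError("label must fit in 16 bits")
    return struct.pack(">I", value & LABEL_MASK)

def unpack_label(field: bytes) -> int:
    """Decode the 32-bit label field, ignoring the reserved high 16 bits."""
    (raw,) = struct.unpack(">I", field)
    return raw & LABEL_MASK

class LabelSwitch:
    """Per-device incoming-label -> outgoing-label table.

    In the MAN described above this table would be populated by the
    metropolitan area server; the device only looks labels up.
    """
    def __init__(self):
        self._table = {}

    def learn(self, in_label: int, out_label: int) -> None:
        self._table[in_label] = out_label

    def swap(self, in_label: int) -> int:
        return self._table[in_label]
```

For example, a packet entering device A with incoming label 0x0000 leaves it with outgoing label 0x0001 after one `swap` lookup.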
Based on the above characteristics of view networking, one of the core concepts of the embodiments of the present invention is proposed: the view networked terminal calculates the ratio of I frames to P frames of the image data in the collected multimedia data, and then compares the calculated ratio with a preset ratio threshold. If the calculated ratio is less than or equal to the preset ratio threshold, the image data is encoded according to the current ratio of I frames to P frames, while the audio data in the multimedia data is also encoded; the encoded image data and audio data are then sent to the view networking server.
Referring to Fig. 5, a step flow chart of an embodiment of a multimedia data transmission method of the present invention is shown. The method can be applied in view networking and involves a view networked terminal and a view networking server.
In a concrete implementation, the view networked terminal can be applied to application scenarios that require instantaneous transmission of images and audio, such as video conferencing, video monitoring, network live broadcast, and on-demand video streaming. The view networked terminal can connect a camera and a microphone for collecting multimedia data such as image data and audio data, can also collect image data and audio data through a built-in camera and microphone, and can also connect a television set for playing multimedia data such as image data and audio data. The view networked terminal runs an operating system, such as LINUX.
In the embodiments of the present invention, in application scenarios such as video monitoring, network live broadcast, and on-demand video streaming, the view networked terminal is a device with an external data source, i.e. the view networked terminal collects multimedia data such as image data and audio data and sends it to the view networking server.
However, in a video conference, at least two view networked terminals are needed, i.e. a first view networked terminal and a second view networked terminal, which serve as external data sources for each other: the first view networked terminal collects first multimedia data and sends it to the second view networked terminal, which plays the first multimedia data; meanwhile, the second view networked terminal collects second multimedia data and sends it to the first view networked terminal, which plays the second multimedia data.
Since the communication between terminals is continuous, the first multimedia data collected by the first view networked terminal and the second multimedia data collected by the second view networked terminal are continuous.
The method can specifically include the following steps:
Step 501, the view networked terminal obtains multimedia data; the multimedia data includes image data and audio data; the image data is composed of multiple groups of pictures, and each group of pictures is composed of at least one I frame and at least one P frame;
H264 is a new-generation coding standard famous for high compression, high quality, and support for streaming media over multiple networks. The theoretical foundation of H264 encoding is this: statistical results over a period of images show that among adjacent pictures within a few frames, generally only about 10% of the pixels differ, luminance differences vary by no more than 2%, and chroma differences vary by only 1% or less. Therefore, for a segment of frames with little change, a complete picture frame A can be encoded first; the subsequent frame B does not encode the full image but only writes the difference from frame A, so that the size of frame B is only about 1/10 of a full frame or smaller. If frame C after frame B changes little, we can continue to encode C with reference to B, and so on. We call such a segment of images a sequence (a sequence is a segment of data with the same characteristics). When some image differs greatly from the images before it and cannot be generated with reference to the preceding frames, we end the previous sequence and start the next sequence: we generate a complete frame A1 for this image, and the subsequent images are generated with reference to A1, writing only their differences from A1.
Three kinds of frames are defined in the H264 protocol: the fully encoded frame is the I frame; the frame encoded as the difference from a preceding I frame is the P frame; and there is also a kind of frame encoded with reference to both preceding and following frames, the B frame.
The core algorithms used by H264 are intra-frame compression and inter-frame compression: intra-frame compression is the algorithm that generates I frames, and inter-frame compression is the algorithm that generates B frames and P frames.
Explanation of the sequence
In H264, images are organized in units of sequences. A sequence is the data stream after a segment of images is encoded; it starts with an I frame and ends at the next I frame.
The first image of a sequence is called an IDR image (Instantaneous Decoding Refresh image), and IDR images are all I-frame images. H.264 introduces the IDR image for decoder resynchronization: when the decoder decodes an IDR image, it immediately clears the reference frame list, outputs or discards all decoded data, searches for the parameter sets again, and starts a new sequence. In this way, if a gross error occurred in the previous sequence, a chance of resynchronization is obtained here. Images after an IDR image are never decoded using the data of images before the IDR.
A sequence is just the segment of data stream generated after encoding a segment of images with little content difference. When motion changes little, a sequence can be very long, because little motion change means the picture content changes very little, so one I frame can be encoded, followed by P frames and B frames throughout. When motion changes a lot, a sequence is short, for example containing one I frame and 3 or 4 P frames.
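The sequence rule above — a new sequence begins at each I frame and runs until just before the next — can be sketched under the simplifying assumption that a stream is represented as a string of frame-type letters:

```python
def split_into_sequences(frames):
    """Split a frame-type string such as 'IPPPIPP' into sequences.

    Each sequence begins with an I frame, per the rule that a
    sequence runs from one I frame to just before the next.
    """
    sequences = []
    for f in frames:
        if f == "I" or not sequences:
            sequences.append([])
        sequences[-1].append(f)
    return sequences
```

A low-motion stream yields one long sequence; frequent scene changes yield many short ones, matching the description above.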
Explanation of the three kinds of frames
I frame: intra-coded frame. An I frame is a key frame, which can be understood as a complete retention of this frame's picture; because it contains the complete picture, only this frame's data is needed to complete decoding.
I frame features:
1. It is a full-frame compressed coded frame. It performs JPEG compression coding on the full-frame image information and transmits it;
2. When decoding, a complete image can be reconstructed using only the data of the I frame;
3. The I frame describes the details of the image background and the moving subject;
4. An I frame is generated without reference to other pictures;
5. The I frame is the reference frame of the P frames and B frames (its quality directly influences the quality of every subsequent frame in the same group);
6. The I frame is the basic frame (first frame) of the frame group GOP; there is only one I frame in a group;
7. An I frame does not need to consider motion vectors;
8. The amount of data occupied by an I frame is relatively large.
P frame: forward-predictive-coded frame. A P frame represents the difference between this frame and a preceding key frame (or P frame). When decoding, the previously cached picture must be superimposed with the difference defined by this frame to generate the final picture. In other words it is a difference frame: a P frame carries no complete picture data, only the data of its difference from the previous frame's picture.
Prediction and reconstruction of the P frame: a P frame uses an I frame as its reference frame; the predicted value and motion vector of a "point" of the P frame are found in the I frame, and the prediction difference and motion vector are transmitted together. At the receiving end, the predicted value of the P-frame "point" is found in the I frame according to the motion vector and added to the difference value to obtain the P-frame "point" sample value, so that the complete P frame can be obtained.
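The receiver-side rule just described — predicted value plus transmitted difference gives the sample — can be sketched in one dimension. Real H.264 motion compensation is block-based and two-dimensional with sub-pixel vectors, so this is only an illustration of the arithmetic:

```python
def reconstruct_p_frame(reference, motion_vectors, residuals):
    """Toy 1-D P-frame reconstruction: sample = reference[pos + mv] + residual.

    `reference` is the decoded I frame, `motion_vectors` gives the offset
    into it for each sample, and `residuals` holds the transmitted
    prediction differences.
    """
    out = []
    for pos, (mv, diff) in enumerate(zip(motion_vectors, residuals)):
        out.append(reference[pos + mv] + diff)
    return out
```

Only the motion vectors and residuals cross the wire; the reference frame is already present at the decoder, which is why P frames are so much smaller than I frames.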
P frame features:
1. A P frame is a coded frame 1-2 frames behind an I frame;
2. A P frame uses the method of motion compensation to transmit its difference and motion vector (prediction error) relative to the preceding I or P frame;
3. When decoding, the complete P-frame image can only be reconstructed after summing the predicted value from the I frame and the prediction error;
4. A P frame belongs to forward-predictive inter-frame coding. It refers only to the nearest preceding I frame or P frame;
5. A P frame can be the reference frame of a following P frame, and can also be the reference frame of the B frames before and after it;
6. Since the P frame is a reference frame, it may cause the diffusion of decoding errors;
7. Since what is transmitted is the difference, the compression of P frames is relatively high.
B frame: bidirectional predictive interpolation coded frame. A B frame is a bidirectional difference frame, i.e. what the B frame records is the difference between this frame and the frames before and after it. In simple terms, to decode a B frame, not only must the preceding cached picture be obtained, but the following picture must also be decoded; the final picture is obtained by superimposing the preceding and following pictures with this frame's data. B frames have a high compression ratio, but the CPU works harder when decoding.
Prediction and reconstruction of the B frame
A B frame uses the preceding I or P frame and the following P frame as reference frames: the predicted value of a B-frame "point" and two motion vectors are found, and the prediction difference and motion vectors are transmitted. At the receiving end, the predicted value is found (calculated) in the two reference frames according to the motion vectors and summed with the difference to obtain the B-frame "point" sample value, so that the complete B frame can be obtained.
B frame features
1. A B frame is predicted by the preceding I or P frame and the following P frame;
2. What a B frame transmits is its prediction error and motion vectors relative to the preceding I or P frame and the following P frame;
3. A B frame is a bidirectional predictive coded frame;
4. B frames have the highest compression ratio, because they only reflect the change of the moving subject between the reference frames, so the prediction is comparatively accurate;
5. A B frame is not a reference frame and will not cause the diffusion of decoding errors.
It should be noted that in the prior art, the I, B, and P frames are defined artificially according to the needs of the compression algorithm; they are all real physical frames. In general, the compression ratio of an I frame is 7, that of a P frame is 20, and that of a B frame can reach 50. It can be seen that using B frames saves a large amount of space, and the saved space can be used to store more I frames; in this way, at the same bit rate, better image quality can be provided.
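Taking the quoted figures (I ≈ 7, P ≈ 20, B ≈ 50) as purely illustrative compression ratios, a quick calculation shows how substituting B frames for P frames shrinks a GOP at the same frame count, freeing bits for more I frames as the text argues:

```python
# Per-frame-type compression ratios quoted in the text (illustrative only).
RATIO = {"I": 7, "P": 20, "B": 50}

def gop_size(pattern, raw_frame_size=1.0):
    """Relative coded size of a GOP: each frame costs raw_size / ratio."""
    return sum(raw_frame_size / RATIO[f] for f in pattern)
```

With these numbers, `gop_size("IBBPBB")` is about 0.27 of a raw frame versus about 0.39 for `gop_size("IPPPPP")`, so at a fixed bit rate the B-frame pattern leaves room for extra I frames.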
Explanation of the compression algorithm
The compression method of H264:
1. Grouping: divide several frames of images into one group (GOP, Group of Pictures), i.e. one sequence; to limit the effect of motion change, the number of frames should not be too large.
2. Defining frames: define each frame image in every group as one of three types, i.e. I frame, B frame, and P frame;
3. Predicting frames: with the I frame as the basic frame, predict the P frame from the I frame, then predict the B frame from the I frame and the P frame;
4. Data transmission: finally, store and transmit the I-frame data and the predicted difference information.
Intra-frame (Intraframe) compression is also called spatial compression (Spatial compression). When compressing one frame of image, only the data of this frame is considered, without considering the redundancy between adjacent frames; this is actually similar to static picture compression. Lossy compression algorithms are generally used within a frame. Since intra-frame compression encodes one complete image, the result can be decoded and displayed independently. Intra-frame compression is similar to JPEG coding and generally cannot reach very high compression.
The principle of inter-frame (Interframe) compression is that the data of several adjacent frames have great correlation; in other words, two successive frames vary little. That is, continuous video has redundancy between its adjacent frames; according to this characteristic, compressing the redundancy between adjacent frames can further increase the amount of compression. Inter-frame compression is also called temporal compression (Temporal compression): it compresses by comparing the data between different frames on the time axis. Inter-frame compression is generally lossless. The frame-differencing (Frame differencing) algorithm is a typical temporal compression method: by comparing this frame with its adjacent frame and recording only the difference between them, the data volume can be greatly reduced.
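The frame-differencing idea above can be sketched minimally over 1-D pixel lists, assuming lossless recording of changed samples as (index, value) pairs:

```python
def frame_diff(prev, curr):
    """Record only the pixels that changed, as (index, new_value) pairs."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr)) if p != c]

def apply_diff(prev, diff):
    """Rebuild the current frame from the previous frame plus the diff."""
    out = list(prev)
    for i, v in diff:
        out[i] = v
    return out
```

When two frames are nearly identical, the diff is far smaller than the frame itself, which is exactly the redundancy inter-frame compression exploits.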
Compression is divided into lossy (Lossy) compression and lossless (Lossless) compression. Lossless compression means that the data before compression and after decompression are completely consistent; most lossless compression uses the RLE run-length encoding algorithm. Lossy compression means that the decompressed data is inconsistent with the data before compression: during compression, some image or audio information to which the human eye and ear are insensitive is lost, and the lost information is unrecoverable. Almost all high-compression algorithms use lossy compression; only in this way can the goal of a low data rate be reached. The amount of data lost is related to the compression ratio: the smaller the compression ratio, the more data is lost, and the worse the effect after decompression generally is. In addition, certain lossy compression algorithms use repeated compression, which can cause additional data loss.
From the foregoing description it can be seen that in the prior art, during the compression of images, the ratio of I frames to P frames is fixed. In this case, if there is little screen switching, for example a live broadcast of someone standing on a dais giving a speech, then fewer I frames are needed, and encoding resources can be saved and coding efficiency improved. But if there is a lot of screen switching, for example a live broadcast of someone dancing, then more I frames are needed; if the collected image data continues to be compressed according to the fixed I/P frame ratio, data loss will occur during encoding because there are not enough I frames, which in turn causes the decoded images to show screen artifacts or stuttering.
In view of the above situation, in the embodiments of the present invention, it can first be judged whether there is a lot of picture switching in the collected image data. Specifically, after obtaining the multimedia data, the view networked terminal extracts the groups of pictures in the image data, where each group of pictures is composed of at least one I frame and at least one P frame. If the number of I frames is large, it means there is a lot of screen switching in the image data, and at this time the ratio of I frames to P frames needs to be calculated.
It should be noted that because the B frame is not a reference frame and will not cause the diffusion of decoding errors, B frames do not need to be considered in the embodiments of the present invention.
In a preferred embodiment of the present invention, the multimedia data is collected by an image capture device and an audio capture device; the step of the view networked terminal obtaining the multimedia data includes:
The view networked terminal obtains the image data from the image capture device, and obtains the audio data from the audio capture device.
Specifically, the view networked terminal can be directly connected to image capture devices such as cameras and audio capture devices such as microphones, so as to directly acquire image data and audio data; it can also obtain image data and audio data collected by other devices by connecting to them. For example, a laptop collects image data and audio data and then sends the image data and audio data to the view networked terminal.
Step 502, the view networked terminal obtains the ratio of I frames to P frames in the group of pictures;
Specifically, the quantity of I frames and the quantity of P frames in the group of pictures are obtained, and then the ratio of I frames to P frames is calculated.
Step 503, if the ratio is less than or equal to a preset ratio threshold, the view networked terminal encodes the current image data according to the ratio, while the audio data is encoded;
In practical applications, when image data is collected, it almost never happens that every collected frame is an I frame; that is, a group of pictures will not be 100% I frames. When encoding the image data, the more I frames there are, the greater the load on the encoder, but also the better the picture quality. Therefore, in order to balance picture quality against encoder load, a ratio threshold of I frames to P frames can be preset in the embodiments of the present invention, for example an I/P ratio of 4:1. If the calculated ratio of I frames to P frames is less than the preset ratio threshold, encoding is performed according to the current ratio of I frames to P frames. For example, the preset I/P ratio is 4:1 and the I/P ratio in the collected image data is 3:1, which is less than the preset ratio threshold; then the collected image data is encoded according to the I/P ratio of 3:1.
In a preferred embodiment of the present invention, if the ratio is greater than the preset ratio threshold, the view networked terminal encodes the current image data according to the preset ratio threshold, while the audio data is encoded.
Specifically, if the ratio of I frames to P frames in the collected image data is greater than the preset I/P ratio threshold, the collected images are encoded according to the preset ratio threshold. For example, the I/P ratio in the collected image data is 5:1, which is greater than the preset 4:1 ratio threshold; then the collected images are encoded according to an I/P ratio of 4:1.
Theoretically, any frame of an image can be encoded as an I frame, but this is not done in practical applications. So when the ratio of I frames to P frames in the collected image data is greater than the preset I/P ratio threshold, frames that would originally be I frames are forcibly encoded as P frames, until the ratio of I frames to P frames in the collected image data is no greater than the preset I/P ratio threshold.
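Steps 502-503 and the forced I-to-P demotion just described can be sketched as follows. The list-of-type-letters representation, the 4:1 default threshold, and the choice to demote the latest I frames first are illustrative assumptions; B frames are ignored in the ratio, consistent with the note above:

```python
def ip_ratio(frames):
    """Ratio of I frames to P frames in a group of pictures (step 502).

    B frames are not counted; if there are no P frames the ratio is
    treated as infinite so that demotion is always triggered.
    """
    i, p = frames.count("I"), frames.count("P")
    return float("inf") if p == 0 else i / p

def enforce_ip_ratio(frames, max_ratio=4.0):
    """Step 503: keep the measured ratio when it is within the preset
    threshold; otherwise forcibly encode I frames as P frames (latest
    first) until the threshold holds. The group's first frame — its key
    frame — is never demoted."""
    frames = list(frames)
    for idx in range(len(frames) - 1, 0, -1):
        if ip_ratio(frames) <= max_ratio:
            break
        if frames[idx] == "I":
            frames[idx] = "P"
    return frames
```

A group already within the threshold passes through unchanged (it is encoded at its current ratio); an I-heavy group is trimmed down to the 4:1 limit.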
In a preferred embodiment of the present invention, the view networked terminal includes a first chip and a second chip; the first chip is used to encode the image data, and the second chip is used to encode the audio data.
That is, in the embodiments of the present invention the view networked terminal includes two chips: a first chip for encoding image data, and a second chip for encoding audio data. In other words, the view networked terminal encodes the image data and the audio data separately.
Step 504, the view networked terminal sends the encoded image data and the encoded audio data to the view networking server.
After the view networked terminal finishes encoding the image data and audio data, the multimedia data needs to be packaged into data packets that can be transmitted on the link.
In a preferred embodiment of the present invention, the view networked terminal includes a view networking protocol stack, and every kind of view networking protocol has a one-to-one corresponding data type;
The step of the view networked terminal sending the encoded image data and the encoded audio data to the view networking server includes:
The view networked terminal uses the view networking protocol corresponding to the image data type to send the encoded image data and the encoded audio data to the view networking server.
Specifically, the view networking protocol stack includes the following major classes of MP (Message Process, the view networking information processing protocol): 1. remote video conference protocol; 2. video monitoring protocol; 3. network live broadcast protocol; 4. on-demand video stream protocol; 5. multimedia forwarding protocol; 6. subtitle protocol; 7. menu protocol (set in the terminal); 8. audio protocol. Each major class may also include sub-classes. Each major class of MP has a one-to-one corresponding data transmission; for example, the remote video conference protocol corresponds to transmitting the multimedia data of a video conference, and the video monitoring protocol corresponds to transmitting video monitoring data.
Therefore, when packaging the multimedia data, the view networked terminal needs to know which major class of MP to use for packaging, and the type of MP to be used needs to be sent in advance to the view networked terminal by the view networking server connected with it. Specifically, the view networking server can be directly connected to terminal devices such as computers; the user can set, through such a terminal device, the data type of the view networked terminal in the view networking server; the view networking server then sends the data type of the view networked terminal to the view networked terminal. In this way, when packaging the multimedia data, the view networked terminal can use the view networking protocol corresponding to the data type.
After the multimedia data is packaged, a packet header needs to be added to the data packet. In the embodiments of the present invention, the packet header contains a view networking MAC address and a command. This is mainly because MP is based on raw sockets, i.e. the raw socket is called directly at the protocol layer to process data at the data link layer, and the layer-2 MAC address is transmitted directly; such commands can omit the IP layer and are more concise and efficient. The command is simply the specific thing each MP is to execute.
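A sketch of assembling such a layer-2 packet header is below. The 6-byte MAC fields follow Ethernet convention, but the 2-byte command field and the command numbering are assumptions for illustration — the excerpt does not fix them:

```python
import struct

# Hypothetical command codes, one per MP major class (numbering assumed).
CMD_VIDEO_CONFERENCE = 0x0001
CMD_VIDEO_MONITORING = 0x0002

def build_packet(dst_mac: bytes, src_mac: bytes, command: int, payload: bytes) -> bytes:
    """Assemble a layer-2 frame: dst MAC, src MAC, 2-byte command, payload.

    With a raw socket this frame would be handed directly to the data
    link layer, skipping the IP layer as the text describes.
    """
    if len(dst_mac) != 6 or len(src_mac) != 6:
        raise ValueError("MAC addresses must be 6 bytes")
    return dst_mac + src_mac + struct.pack(">H", command) + payload
```

On Linux such a frame could be sent through an `AF_PACKET` raw socket bound to the interface, which is what "based on raw sockets" suggests here.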
After receiving the packaged data packet, if the MAC address has also accessed the view network, the view networking server sends the data packet directly to the destination address according to the MAC address; if the MAC address has not accessed the view network, the server first needs to send the data packet to the view networking protocol conversion server, which is used to convert between Internet packet headers and view networking packet headers; the protocol conversion server then sends the data packet to the destination address.
In the embodiments of the present invention, the view networked terminal calculates the ratio of I frames to P frames of the image data in the collected multimedia data, then compares the calculated ratio with the preset ratio threshold; if the calculated ratio is less than or equal to the preset ratio threshold, the image data is encoded according to the current ratio of I frames to P frames, while the audio data in the multimedia data is also encoded, and the encoded image data and audio data are then sent to the view networking server. In this way, dynamic configuration of the I/P frame ratio in view networking coding is realized, which solves the phenomena of packet loss, stuttering, and delay that frequently occur in the prior art during the transmission of multimedia data because of the fixed configuration of the I/P frame ratio.
Moreover, the view networked terminal integrates the view networking protocol stack, so that the view networking protocol packet header can be added when packaging the multimedia data; the packet header contains the view networking MAC address and the command, and transmission is established on raw sockets. Therefore, data transceiving and command transceiving can be processed independently and efficiently, ensuring the accurate and timely transmission of multimedia data and command protocols.
It should be noted that, for simplicity of description, the method embodiments are stated as a series of action combinations, but those skilled in the art should understand that the embodiments of the present invention are not limited by the described sequence of actions, because according to the embodiments of the present invention, some steps may be performed in other sequences or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Referring to Fig. 6, a structural block diagram of an embodiment of a view networked terminal of the present invention is shown; the view networked terminal is applied in view networking and performs data interaction with a view networking server. The view networked terminal can specifically include the following modules:
a multimedia data obtaining module 601, configured to obtain multimedia data; the multimedia data includes image data and audio data; the image data is composed of multiple groups of pictures, and each group of pictures is composed of at least one I frame and at least one P frame;
a ratio obtaining module 602, configured to obtain the ratio of I frames to P frames in the group of pictures;
an encoding module 603, configured to, when the ratio is less than or equal to a preset ratio threshold, encode the current image data according to the ratio while the audio data is encoded;
a sending module 604, configured to send the encoded image data and the encoded audio data to the view networking server.
In a preferred embodiment of the present invention, the multimedia data is collected by an image capture device and an audio capture device; the view networked terminal further includes:
a receiving module, configured to obtain the image data from the image capture device and obtain the audio data from the audio capture device.
In a preferred embodiment of the present invention, when the ratio is greater than the preset ratio threshold, the encoding module is further configured to encode the current image data according to the preset ratio threshold while the audio data is encoded.
In a preferred embodiment of the present invention, the view networked terminal includes a view networking protocol stack, and every kind of view networking protocol has a one-to-one corresponding data type;
The sending module is further configured to use the view networking protocol corresponding to the image data type to send the encoded image data and the encoded audio data to the view networking server.
In a preferred embodiment of the present invention, the view networked terminal further includes a first chip and a second chip, the first chip being used to encode the image data, and the second chip being used to encode the audio data.
As for the view networked terminal embodiment, since it is basically similar to the method embodiment, the description is comparatively simple; for relevant parts, reference may be made to the partial explanation of the method embodiment.
All the embodiments in this specification are described in a progressive manner; the highlights of each embodiment are its differences from the other embodiments, and the same or similar parts between the embodiments can be referred to each other.
It should be understood by those skilled in the art that the embodiments of the present invention can be provided as a method, an apparatus, or a computer program product. Therefore, the embodiments of the present invention can take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical memory, etc.) containing computer-usable program code.
The embodiments of the present invention are described with reference to the flowcharts and/or block diagrams of the method, terminal device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal device produce a device for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions can also be stored in a computer-readable memory capable of guiding a computer or another programmable data processing terminal device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce a manufacture including a command device, which realizes the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions can also be loaded onto a computer or another programmable data processing terminal device, so that a series of operation steps are executed on the computer or other programmable terminal device to produce computer-implemented processing; thus the instructions executed on the computer or other programmable terminal device provide steps for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Although the preferred embodiment of the embodiment of the present invention has been described, once a person skilled in the art knows bases This creative concept, then additional changes and modifications can be made to these embodiments.So the following claims are intended to be interpreted as Including preferred embodiment and fall into all change and modification of range of embodiment of the invention.
Finally, it is to be noted that, herein, relational terms such as first and second and the like be used merely to by One entity or operation are distinguished with another entity or operation, without necessarily requiring or implying these entities or operation Between there are any actual relationship or orders.Moreover, the terms "include", "comprise" or its any other variant meaning Covering non-exclusive inclusion, so that process, method, article or terminal device including a series of elements not only wrap Those elements are included, but also including other elements that are not explicitly listed, or further includes for this process, method, article Or the element that terminal device is intrinsic.In the absence of more restrictions, being wanted by what sentence "including a ..." limited Element, it is not excluded that there is also other identical elements in process, method, article or the terminal device for including the element.
The method of multimedia data transmission and the view networking terminal provided by the present invention have been described in detail above. Specific examples have been used herein to illustrate the principles and implementations of the invention, and the description of the above embodiments is intended only to aid in understanding the method of the invention and its core concept. Meanwhile, for those skilled in the art, changes may be made to the specific implementation and scope of application in accordance with the idea of the invention. In conclusion, the content of this specification should not be construed as limiting the present invention.

Claims (8)

1. A method of multimedia data transmission, characterized in that the method is applied in a view network and involves a view networking terminal and a view networking server; the method comprises:
the view networking terminal obtaining multimedia data, wherein the multimedia data comprises image data and audio data, the image data is composed of a plurality of groups of pictures, and each group of pictures is composed of at least one I frame and at least one P frame;
the view networking terminal obtaining the ratio of I frames to P frames in the group of pictures;
if the ratio is less than or equal to a preset ratio threshold, the view networking terminal encoding the current image data according to the ratio, while encoding the audio data;
if the ratio is greater than the preset ratio threshold, the view networking terminal encoding the I frames in the current image data as P frames according to the preset ratio threshold, while encoding the audio data; and
the view networking terminal sending the encoded image data and the encoded audio data to the view networking server.
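The ratio test in claim 1 can be sketched in code. The following is a hypothetical illustration only: the threshold value, the frame representation, and the helper names `gop_ip_ratio` and `encode_gop` are assumptions for illustration, not part of the patent.

```python
RATIO_THRESHOLD = 0.25  # preset ratio threshold (assumed value)

def gop_ip_ratio(gop):
    """Ratio of I frames to P frames in one group of pictures."""
    i_frames = sum(1 for f in gop if f["type"] == "I")
    p_frames = sum(1 for f in gop if f["type"] == "P")
    return i_frames / p_frames if p_frames else float("inf")

def encode_gop(gop):
    """Return the frame types actually encoded for this GOP."""
    ratio = gop_ip_ratio(gop)
    if ratio <= RATIO_THRESHOLD:
        # Ratio acceptable: encode the image data according to its own I/P ratio.
        return [f["type"] for f in gop]
    # Ratio too high: re-encode surplus I frames as P frames so the
    # encoded GOP does not exceed the preset ratio threshold.
    p_count = sum(1 for f in gop if f["type"] == "P")
    max_i = max(1, int(RATIO_THRESHOLD * p_count))
    out, i_seen = [], 0
    for f in gop:
        if f["type"] == "I":
            i_seen += 1
            out.append("I" if i_seen <= max_i else "P")
        else:
            out.append(f["type"])
    return out
```

Under this reading, a GOP whose I/P ratio already satisfies the threshold is passed through unchanged, while an I-heavy GOP has its extra I frames demoted to P frames, which reduces bandwidth on the view network at the cost of fewer random-access points.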
2. The method according to claim 1, characterized in that the multimedia data is collected by an image capture device and an audio capture device; the step of the view networking terminal obtaining the multimedia data comprises:
the view networking terminal obtaining the image data from the image capture device, and obtaining the audio data from the audio capture device.
3. The method according to claim 1 or 2, characterized in that the view networking terminal comprises a view networking protocol stack, wherein each view networking protocol has a one-to-one corresponding data type;
the step of the view networking terminal sending the encoded image data and the encoded audio data to the view networking server comprises:
the view networking terminal sending the encoded image data and the encoded audio data to the view networking server using the view networking protocol corresponding to the data type of the image data.
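The one-to-one correspondence between data types and view networking protocols in claim 3 can be sketched as a lookup table. The data-type keys, the protocol identifiers, and the `select_protocol` helper below are illustrative assumptions; the patent does not name the actual protocols.

```python
# Hypothetical one-to-one mapping of data types to view networking protocols.
PROTOCOL_BY_DATA_TYPE = {
    "image": "vn_video_protocol",  # assumed protocol name
    "audio": "vn_audio_protocol",  # assumed protocol name
}

def select_protocol(data_type):
    """Each view networking protocol corresponds one-to-one to a data type."""
    return PROTOCOL_BY_DATA_TYPE[data_type]
```

A strict one-to-one mapping means the lookup is unambiguous in both directions: a data type selects exactly one protocol, and each protocol carries exactly one data type.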
4. The method according to claim 1, characterized in that the view networking terminal comprises a first chip and a second chip, the first chip being configured to encode the image data, and the second chip being configured to encode the audio data.
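The division of labour in claim 4, one chip encoding image data while a second chip encodes audio data, can be mimicked in software with two threads. The encoders below are toy placeholders standing in for the chips; all function names here are assumptions for illustration.

```python
import threading

def encode_image(frames):
    # Stands in for the first chip (image encoding).
    return [f.upper() for f in frames]

def encode_audio(samples):
    # Stands in for the second chip (audio encoding).
    return [s * 2 for s in samples]

def encode_in_parallel(frames, samples):
    """Run image and audio encoding concurrently, as two chips would."""
    results = {}
    t1 = threading.Thread(target=lambda: results.setdefault("img", encode_image(frames)))
    t2 = threading.Thread(target=lambda: results.setdefault("aud", encode_audio(samples)))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results["img"], results["aud"]
```

Dedicating separate hardware to each stream lets image and audio encoding proceed without contending for the same processing resources, which is presumably the motivation for the two-chip design.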
5. A view networking terminal, characterized in that the view networking terminal is applied in a view network and performs data interaction with a view networking server; the view networking terminal comprises:
a multimedia data obtaining module, configured to obtain multimedia data, wherein the multimedia data comprises image data and audio data, the image data is composed of a plurality of groups of pictures, and each group of pictures is composed of at least one I frame and at least one P frame;
a ratio obtaining module, configured to obtain the ratio of I frames to P frames in the group of pictures;
an encoding module, configured to, when the ratio is less than or equal to a preset ratio threshold, encode the current image data according to the ratio, while encoding the audio data;
the encoding module being further configured to, when the ratio is greater than the preset ratio threshold, encode the I frames in the current image data as P frames according to the preset ratio threshold, while encoding the audio data; and
a sending module, configured to send the encoded image data and the encoded audio data to the view networking server.
6. The view networking terminal according to claim 5, characterized in that the multimedia data is collected by an image capture device and an audio capture device; the view networking terminal further comprises:
a receiving module, configured to obtain the image data from the image capture device, and obtain the audio data from the audio capture device.
7. The view networking terminal according to claim 5 or 6, characterized in that the view networking terminal comprises a view networking protocol stack, wherein each view networking protocol has a one-to-one corresponding data type;
the sending module being further configured to send the encoded image data and the encoded audio data to the view networking server using the view networking protocol corresponding to the data type of the image data.
8. The view networking terminal according to claim 5, characterized in that the view networking terminal further comprises a first chip and a second chip, the first chip being configured to encode the image data, and the second chip being configured to encode the audio data.
CN201710802232.2A 2017-09-07 2017-09-07 A kind of method that multi-medium data transmits and a kind of view networked terminals Active CN108632679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710802232.2A CN108632679B (en) 2017-09-07 2017-09-07 A kind of method that multi-medium data transmits and a kind of view networked terminals

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710802232.2A CN108632679B (en) 2017-09-07 2017-09-07 A kind of method that multi-medium data transmits and a kind of view networked terminals

Publications (2)

Publication Number Publication Date
CN108632679A CN108632679A (en) 2018-10-09
CN108632679B true CN108632679B (en) 2019-11-01

Family

ID=63705796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710802232.2A Active CN108632679B (en) 2017-09-07 2017-09-07 A kind of method that multi-medium data transmits and a kind of view networked terminals

Country Status (1)

Country Link
CN (1) CN108632679B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111083520B (en) * 2018-10-19 2021-08-17 杭州海康威视系统技术有限公司 Method and apparatus for storing video data
CN110035297B (en) * 2019-03-08 2021-05-14 视联动力信息技术股份有限公司 Video processing method and device
CN110278195B (en) * 2019-05-23 2020-12-11 视联动力信息技术股份有限公司 Data transmission method and device based on video network
CN113099308B (en) * 2021-03-31 2023-10-27 聚好看科技股份有限公司 Content display method, display equipment and image collector

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7046729B2 (en) * 2002-08-27 2006-05-16 Ess Technology, Inc. Bit rate control for motion compensated video compression system
CN106303693B (en) * 2015-05-25 2019-02-05 视联动力信息技术股份有限公司 A kind of method and device of video data decoding
CN105791260A (en) * 2015-11-30 2016-07-20 武汉斗鱼网络科技有限公司 Network self-adaptive stream media service quality control method and device
CN106713947A (en) * 2016-12-13 2017-05-24 飞狐信息技术(天津)有限公司 Method and device for reducing live broadcasting time delay and standstill as well as live broadcasting system

Also Published As

Publication number Publication date
CN108632679A (en) 2018-10-09

Similar Documents

Publication Publication Date Title
CN109194982B (en) Method and device for transmitting large file stream
CN108881927A (en) A kind of video data synthetic method and device
CN108632679B (en) A kind of method that multi-medium data transmits and a kind of view networked terminals
CN109756789B (en) Method and system for processing packet loss of audio and video data packet
CN108063745B (en) A kind of video call method and its system based on Android device
CN109462761A (en) A kind of video encoding/decoding method and device
CN109788232A (en) A kind of summary of meeting recording method of video conference, device and system
CN108881135A (en) It is a kind of based on view networking information transferring method, device and system
CN108574816B (en) Video networking terminal and communication method and device based on video networking terminal
CN111147859A (en) Video processing method and device
CN108881958A (en) A kind of multimedia data stream packaging method and device
CN109302384B (en) Data processing method and system
CN110769179B (en) Audio and video data stream processing method and system
CN110769297A (en) Audio and video data processing method and system
CN110661992A (en) Data processing method and device
CN111212255B (en) Monitoring resource obtaining method and device and computer readable storage medium
CN108989831A (en) A kind of network REC method and apparatus of multi-code stream
CN110086773B (en) Audio and video data processing method and system
CN110324667B (en) Novel video stream playing method and system
CN108574819B (en) A kind of terminal device and a kind of method of video conference
CN110445761A (en) A kind of video drawing stream method and device
CN110149497A (en) A kind of view networked data transmission method, apparatus, system and readable storage medium storing program for executing
CN110460790A (en) A kind of abstracting method and device of video frame
CN108965744A (en) A kind of method of video image processing and device based on view networking
CN108965993A (en) A kind of coding/decoding method and device of multi-path video stream

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100000 Dongcheng District, Beijing, Qinglong Hutong 1, 1103 house of Ge Hua building.

Applicant after: Video Link Power Information Technology Co., Ltd.

Address before: 100000 Beijing Dongcheng District Qinglong Hutong 1 Song Hua Building A1103-1113

Applicant before: BEIJING VISIONVERA INTERNATIONAL INFORMATION TECHNOLOGY CO., LTD.

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201230

Address after: 571924 building C07, Zone C, Hainan Ecological Software Park, hi tech Industrial Demonstration Zone, old town, Haikou City, Hainan Province

Patentee after: Hainan Shilian Communication Technology Co.,Ltd.

Address before: 100000 Dongcheng District, Beijing, Qinglong Hutong 1, 1103 house of Ge Hua building.

Patentee before: VISIONVERA INFORMATION TECHNOLOGY Co.,Ltd.
