CN108632679A - Multimedia data transmission method and video networking terminal - Google Patents

Multimedia data transmission method and video networking terminal

Info

Publication number
CN108632679A
CN108632679A (application CN201710802232.2A)
Authority
CN
China
Prior art keywords
data
frames
video networking terminal
coding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710802232.2A
Other languages
Chinese (zh)
Other versions
CN108632679B (en)
Inventor
Wang Yanhui (王艳辉)
Zhu Daoyan (朱道彦)
Yang Chunhui (杨春晖)
Tan Zhidong (谭智东)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan Shilian Communication Technology Co.,Ltd.
Original Assignee
Beijing Visionvera International Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Visionvera International Information Technology Co Ltd filed Critical Beijing Visionvera International Information Technology Co Ltd
Priority to CN201710802232.2A priority Critical patent/CN108632679B/en
Publication of CN108632679A publication Critical patent/CN108632679A/en
Application granted granted Critical
Publication of CN108632679B publication Critical patent/CN108632679B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/647Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N21/64784Data processing by the network
    • H04N21/64792Controlling the complexity of the content stream, e.g. by dropping packets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/643Communication protocols
    • H04N21/6437Real-time Transport Protocol [RTP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

An embodiment of the present invention provides a multimedia data transmission method that is applied in a video network and involves a video networking terminal and a video networking server. The video networking terminal obtains multimedia data, which includes image data and audio data; the image data consists of multiple groups of pictures, and each group of pictures consists of at least one I frame and at least one P frame. The video networking terminal obtains the ratio of I frames to P frames in the group of pictures. If that ratio is less than or equal to a preset ratio threshold, the video networking terminal encodes the current image data according to that ratio and encodes the audio data at the same time. The video networking terminal then sends the encoded image data and the encoded audio data to the video networking server. The embodiment of the present invention addresses the frequent packet loss, stuttering, and delay that multimedia data suffers during transmission in the prior art and improves the real-time performance of multimedia data transmission.

Description

Multimedia data transmission method and video networking terminal
Technical field
The present invention relates to the field of video networking technology, and in particular to a multimedia data transmission method and a video networking terminal.
Background art
With the rapid development of network technology, application scenarios such as video conferencing, video surveillance, and live network broadcasting have become widespread in users' daily life, work, and study.
In these application scenarios, multimedia data such as image data and audio data must be transmitted in real time. Before transmission, the multimedia data must first be encoded. In existing encoding schemes, the ratio of I frames to P frames in the image data is set to a fixed value; the image data is encoded according to this fixed ratio while the audio data is also encoded, and after encoding is complete the encoded image data and audio data are sent to a server over a transport protocol.
However, because the I-to-P frame ratio is fixed during encoding, surplus I frames must be discarded whenever the actual proportion of I frames exceeds the fixed ratio, which leads to stuttering, delay, and even frame loss. Moreover, the existing transport protocol is the IP-based RTP/RTSP (Real-time Transport Protocol / Real-Time Streaming Protocol), which is built on top of the conventional UDP/TCP transport layer and therefore cannot meet the requirement of high real-time performance.
Summary of the invention
In view of the above problems, an embodiment of the present invention proposes a multimedia data transmission method and a corresponding video networking terminal.
To solve the above problems, an embodiment of the invention discloses a multimedia data transmission method; the method is applied in a video network and involves a video networking terminal and a video networking server.
The method includes:
the video networking terminal obtains multimedia data, the multimedia data including image data and audio data, wherein the image data consists of multiple groups of pictures and each group of pictures consists of at least one I frame and at least one P frame;
the video networking terminal obtains the ratio of I frames to P frames in the group of pictures;
if the ratio is less than or equal to a preset ratio threshold, the video networking terminal encodes the current image data according to the ratio and encodes the audio data at the same time;
the video networking terminal sends the encoded image data and the encoded audio data to the video networking server.
Preferably, the multimedia data is captured by an image capture device and an audio capture device; the step of the video networking terminal obtaining the multimedia data includes:
the video networking terminal obtaining the image data from the image capture device and obtaining the audio data from the audio capture device.
Preferably, the method further includes:
if the ratio is greater than the preset ratio threshold, the video networking terminal encodes the current image data according to the preset ratio threshold and encodes the audio data at the same time.
Preferably, the video networking terminal includes a video networking protocol stack in which each video networking protocol corresponds one-to-one to a data type;
the step of the video networking terminal sending the encoded image data and the encoded audio data to the video networking server includes:
the video networking terminal sending the encoded image data and the encoded audio data to the video networking server using the video networking protocol corresponding to the type of the image data.
Preferably, the video networking terminal includes a first chip and a second chip, the first chip being used to encode the image data and the second chip being used to encode the audio data.
Correspondingly, an embodiment of the invention discloses a video networking terminal; the video networking terminal is applied in a video network and exchanges data with a video networking server.
The video networking terminal includes:
a multimedia data acquisition module, configured to obtain multimedia data, the multimedia data including image data and audio data, wherein the image data consists of multiple groups of pictures and each group of pictures consists of at least one I frame and at least one P frame;
a ratio acquisition module, configured to obtain the ratio of I frames to P frames in the group of pictures;
an encoding module, configured to encode the current image data according to the ratio when the ratio is less than or equal to a preset ratio threshold, and to encode the audio data at the same time;
a sending module, configured to send the encoded image data and the encoded audio data to the video networking server.
Preferably, the multimedia data is captured by an image capture device and an audio capture device; the video networking terminal further includes:
a receiving module, configured to obtain the image data from the image capture device and the audio data from the audio capture device.
Preferably, when the ratio is greater than the preset ratio threshold, the encoding module is further configured to encode the current image data according to the preset ratio threshold and to encode the audio data at the same time.
Preferably, the video networking terminal includes a video networking protocol stack in which each video networking protocol corresponds one-to-one to a data type;
the sending module is further configured to send the encoded image data and the encoded audio data to the video networking server using the video networking protocol corresponding to the type of the image data.
Preferably, the video networking terminal further includes a first chip and a second chip, the first chip being used to encode the image data and the second chip being used to encode the audio data.
Embodiments of the present invention have the following advantages:
In an embodiment of the present invention, the video networking terminal calculates the ratio of I frames to P frames in the image data of the captured multimedia data and compares the calculated ratio with a preset ratio threshold. If the calculated ratio is less than or equal to the preset ratio threshold, the image data is encoded according to the current I-to-P frame ratio while the audio data in the multimedia data is also encoded, and the encoded image data and audio data are then sent to the video networking server. In this way the I/P frame ratio used for video networking encoding is configured dynamically, which solves the frequent packet loss, stuttering, and delay that multimedia data suffers during transmission in the prior art because of the fixed I/P frame ratio, and improves the real-time performance of multimedia data transmission.
Moreover, the video networking terminal integrates a video networking protocol stack, so a video networking protocol header can be added when the multimedia data is packetized. The header carries the video networking MAC address and a command, and transmission is established on a raw socket, so data transmission and command exchange can be handled efficiently and independently, ensuring accurate and timely delivery of both multimedia data and signaling.
Description of the drawings
Fig. 1 is a networking schematic diagram of a video network according to the present invention;
Fig. 2 is a hardware structure diagram of a node server according to the present invention;
Fig. 3 is a hardware structure diagram of an access switch according to the present invention;
Fig. 4 is a hardware structure diagram of an Ethernet protocol conversion gateway according to the present invention;
Fig. 5 is a flow chart of the steps of an embodiment of a multimedia data transmission method according to the present invention;
Fig. 6 is a structural block diagram of an embodiment of a video networking terminal according to the present invention.
Detailed description of the embodiments
To make the above objects, features, and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Video networking is an important milestone in network development. It is a real-time network that can transmit high-definition video in real time, pushing numerous Internet applications toward high-definition video and bringing high definition face to face.
Video networking uses real-time high-definition video switching technology to integrate dozens of video, voice, picture, text, communication, and data services into one system platform, including high-definition video conferencing, video surveillance, intelligent monitoring and analysis, emergency command, digital broadcast television, delayed television, online education, live broadcasting, video on demand (VOD), television mail, personal video recording (PVR), intranet (self-run) channels, intelligent video broadcast control, and information publishing, and realizes high-definition video playback through a television set or a computer.
To help those skilled in the art better understand the embodiments of the present invention, video networking is introduced below.
Some of the technologies applied in video networking are as follows:
Network technology (Network Technology)
The network technology innovation of video networking improves traditional Ethernet to cope with the potentially huge video traffic on the network. Unlike pure network packet switching (Packet Switching) or circuit switching (Circuit Switching), video networking technology uses packet switching to satisfy streaming requirements. Video networking technology combines the flexibility, simplicity, and low cost of packet switching with the quality and security guarantees of circuit switching, achieving a seamless connection of network-wide switched virtual circuits and data formats.
Switching technology (Switching Technology)
Video networking uses the two advantages of Ethernet, asynchrony and packet switching, and eliminates Ethernet's defects while remaining fully compatible with it. It provides end-to-end seamless connection across the whole network, reaches user terminals directly, and directly carries IP data packets; user data requires no format conversion anywhere in the network. Video networking is a more advanced form of Ethernet and a real-time exchange platform: it can realize the network-wide, large-scale, high-definition real-time video transmission that the Internet currently cannot, and pushes numerous Internet video applications toward high definition and unification.
Server technology (Server Technology)
Unlike traditional servers, the streaming-media transmission of the video networking server technology and the unified video platform is built on a connection-oriented basis; its data-handling capacity is independent of traffic and communication time, and a single network layer can carry both signaling and data transmission. For voice and video services, streaming-media processing on the video networking and unified video platform is much simpler than data processing, and efficiency is improved by a hundred times or more over traditional servers.
Storage technology (Storage Technology)
The ultra-high-speed storage technology of the unified video platform uses a state-of-the-art real-time operating system to accommodate massive-capacity, super-high-traffic media content. Programme information in a server instruction is mapped to specific hard-disk space; media content no longer passes through the server but is delivered to the user terminal instantly, with a typical user waiting time of less than 0.2 seconds. The optimized sector distribution greatly reduces the mechanical seek movement of the hard-disk head; resource consumption is only 20% of that of an IP Internet system of the same grade, yet more than three times the concurrent traffic of a traditional disk array is produced and overall efficiency is improved by more than ten times.
Network security technology (Network Security Technology)
The structural design of video networking, through measures such as per-service independent permission, complete isolation of devices and user data, and the like, structurally eradicates the network security problems that plague the Internet. It generally needs no antivirus software or firewall, blocks hacker and virus attacks, and provides users with a structurally worry-free secure network.
Service innovation technology (Service Innovation Technology)
The unified video platform merges services with transmission: whether for a single user, a private-network user, or an entire network, any service is only one automatic connection. The user terminal, set-top box, or PC connects directly to the unified video platform and obtains a rich variety of multimedia video services. The unified video platform replaces traditional complicated application programs with a "menu-style" table configuration, can realize complex applications with very little code, and enables "endless" new service innovation.
The networking of the video network is as follows:
The video network is a centrally controlled network structure. The network can be of tree, star, ring, or similar type, but on that basis a centralized control node is required in the network to control the whole network.
As shown in Fig. 1, the video network is divided into two parts: an access network and a metropolitan area network.
The devices in the access network part can mainly be divided into three classes: node servers, access switches, and terminals (including various set-top boxes, encoding boards, memories, etc.). The node server is connected to access switches, and an access switch can be connected to multiple terminals and can be connected to Ethernet.
The node server is the node that performs the centralized control function in the access network and can control the access switches and terminals. The node server can be connected directly to an access switch or directly to a terminal.
Similarly, the devices in the metropolitan area network part can also be divided into three classes: metropolitan area servers, node switches, and node servers. The metropolitan area server is connected to node switches, and a node switch can be connected to multiple node servers.
Here, the node server is the node server of the access network part; that is, the node server belongs both to the access network part and to the metropolitan area network part.
The metropolitan area server is the node that performs the centralized control function in the metropolitan area network and can control the node switches and node servers. The metropolitan area server can be connected directly to a node switch or directly to a node server.
It can be seen that the whole video network is a layered, centrally controlled network structure, and the network controlled under a node server or metropolitan area server can have various structures such as tree, star, or ring.
Figuratively speaking, the access network part can form a unified video platform (the part inside the dashed circle), and multiple unified video platforms can form the video network; the unified video platforms can be interconnected through metropolitan area and wide area video networking.
Classification of video networking devices
1.1 The devices in the video network of the embodiment of the present invention can mainly be divided into three classes: servers, switches (including Ethernet gateways), and terminals (including various set-top boxes, encoding boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 The devices of the access network part can mainly be divided into three classes: node servers, access switches (including Ethernet gateways), and terminals (including various set-top boxes, encoding boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
Node server:
As shown in Fig. 2, the node server mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204;
Packets arriving from the network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202. The switching engine module 202 performs a lookup in the address table 205 for each incoming packet to obtain the packet's routing information, and stores the packet into the queue of the corresponding packet buffer 206 according to that routing information; if the queue of the packet buffer 206 is nearly full, the packet is discarded. The switching engine module 202 polls all packet buffer queues and forwards a packet if the following conditions are met: 1) the port send cache is not full; 2) the queue's packet counter is greater than zero. The disk array module 204 mainly implements control of the hard disks, including initialization and read/write operations; the CPU module 203 is mainly responsible for protocol processing with the access switches and terminals (not shown), for configuring the address table 205 (including the downlink protocol packet address table, the uplink protocol packet address table, and the data packet address table), and for configuring the disk array module 204.
Access switch:
As shown in Fig. 3, the access switch mainly includes network interface modules (a downlink network interface module 301 and an uplink network interface module 302), a switching engine module 303, and a CPU module 304;
A packet (uplink data) arriving at the downlink network interface module 301 enters the packet detection module 305. The packet detection module 305 checks whether the destination address (DA), source address (SA), data packet type, and packet length of the packet meet the requirements; if so, a corresponding stream identifier (stream-id) is assigned and the packet enters the switching engine module 303, otherwise the packet is discarded. A packet (downlink data) arriving at the uplink network interface module 302 enters the switching engine module 303, and a data packet from the CPU module 304 also enters the switching engine module 303. The switching engine module 303 performs a lookup in the address table 306 for each incoming packet to obtain the packet's routing information. If a packet entering the switching engine module 303 is going from a downlink network interface toward an uplink network interface, it is stored into the queue of the corresponding packet buffer 307 in combination with the stream identifier (stream-id); if that queue is nearly full, the packet is discarded. If a packet entering the switching engine module 303 is not going from a downlink network interface toward an uplink network interface, it is stored into the data packet queue of the corresponding packet buffer 307 according to its routing information; if that queue is nearly full, the packet is discarded.
The switching engine module 303 polls all packet buffer queues, and two cases are distinguished in the embodiment of the present invention:
If the queue is going from a downlink network interface toward an uplink network interface, a packet is forwarded when the following conditions are met: 1) the port send cache is not full; 2) the queue's packet counter is greater than zero; 3) a token generated by the rate control module has been obtained.
If the queue is not going from a downlink network interface toward an uplink network interface, a packet is forwarded when the following conditions are met: 1) the port send cache is not full; 2) the queue's packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304 and, at programmable intervals, generates tokens for all the packet buffer queues going from downlink network interfaces toward uplink network interfaces, so as to control the bit rate of upstream forwarding.
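A token-generation sketch for the third forwarding condition above; the interval and bucket size are illustrative values only:

```python
import time

class RateController:
    """Issues tokens at a programmable interval for the queues that go from a
    downlink network interface toward an uplink network interface; forwarding
    from such a queue also requires taking a token, which caps the upstream
    bit rate."""

    def __init__(self, interval_s=0.001, bucket_size=32):
        self.interval_s = interval_s
        self.bucket_size = bucket_size
        self.tokens = 0
        self.last_refill = time.monotonic()

    def _refill(self):
        elapsed = time.monotonic() - self.last_refill
        new_tokens = int(elapsed / self.interval_s)
        if new_tokens:
            self.tokens = min(self.bucket_size, self.tokens + new_tokens)
            self.last_refill += new_tokens * self.interval_s

    def try_take(self):
        self._refill()
        if self.tokens > 0:
            self.tokens -= 1
            return True   # third forwarding condition satisfied
        return False      # hold the packet until a token is generated
```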
The CPU module 304 is mainly responsible for protocol processing with the node server, for configuring the address table 306, and for configuring the rate control module 308.
Ethernet protocol conversion gateway:
As shown in Fig. 4, the Ethernet protocol conversion gateway mainly includes network interface modules (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC removing module 410.
A data packet arriving at the downlink network interface module 401 enters the packet detection module 405. The packet detection module 405 checks whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video networking destination address DA, video networking source address SA, video networking data packet type, and packet length of the data packet meet the requirements; if so, a corresponding stream identifier (stream-id) is assigned, the MAC DA, MAC SA, and length or frame type (2 bytes) are then stripped by the MAC removing module 410, and the packet enters the corresponding order cache; otherwise the packet is discarded.
The downlink network interface module 401 checks the send cache of the port; if a packet is present, it learns the Ethernet MAC DA of the corresponding terminal according to the packet's video networking destination address DA, adds the terminal's Ethernet MAC DA, the MAC SA of the Ethernet protocol conversion gateway, and the Ethernet length or frame type, and sends the packet.
The functions of the other modules in the Ethernet protocol conversion gateway are similar to those of the access switch.
Terminal:
A terminal mainly includes a network interface module, a service processing module, and a CPU module. For example, a set-top box mainly includes a network interface module, a video/audio encoding and decoding engine module, and a CPU module; an encoding board mainly includes a network interface module, a video encoding engine module, and a CPU module; a memory mainly includes a network interface module, a CPU module, and a disk array module.
1.3 The devices of the metropolitan area network part mainly include node servers, node switches, and metropolitan area servers. A node switch mainly includes a network interface module, a switching engine module, and a CPU module; a metropolitan area server mainly consists of a network interface module, a switching engine module, and a CPU module.
2. Video networking data packet definition
2.1 Access network data packet definition
A data packet of the access network mainly includes the following parts: destination address (DA), source address (SA), reserved bytes, payload (PDU), and CRC.
As shown in the table below, a data packet of the access network mainly includes the following parts:
| DA | SA | Reserved | Payload | CRC |
Where:
the destination address (DA) consists of 8 bytes; the first byte indicates the type of the data packet (for example various protocol packets, multicast data packets, unicast data packets, etc.), with up to 256 possibilities; the second to sixth bytes are the metropolitan area network address; and the seventh and eighth bytes are the access network address;
the source address (SA) also consists of 8 bytes and is defined in the same way as the destination address (DA);
the reserved field consists of 2 bytes;
the payload part has a different length depending on the type of the datagram: 64 bytes for the various protocol packets and 32 + 1024 = 1056 bytes for single-group unicast data packets, though it is of course not restricted to these two cases;
the CRC consists of 4 bytes, and its computation follows the standard Ethernet CRC algorithm.
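For illustration, the following sketch packs an access network data packet with the field sizes above; the byte order, the SA type byte, and the example addresses are assumptions rather than part of the disclosure:

```python
import struct
import zlib

def build_access_packet(pkt_type, dst_metro, dst_access,
                        src_metro, src_access, payload):
    """Pack an access-network packet: DA(8) | SA(8) | Reserved(2) | payload | CRC(4).
    The first DA byte is the packet type, bytes 2-6 the metropolitan area
    address, bytes 7-8 the access network address; SA is laid out the same way."""
    da = struct.pack(">B5s2s", pkt_type, dst_metro, dst_access)
    sa = struct.pack(">B5s2s", pkt_type, src_metro, src_access)
    body = da + sa + b"\x00\x00" + payload
    crc = struct.pack(">I", zlib.crc32(body) & 0xFFFFFFFF)  # standard Ethernet CRC-32
    return body + crc

# A protocol packet carries a 64-byte payload; a single-group unicast
# packet carries 32 + 1024 = 1056 bytes.
protocol_pkt = build_access_packet(0x01,
                                   b"\x00\x01\x02\x03\x04", b"\x10\x20",
                                   b"\x00\x01\x02\x03\x05", b"\x10\x21",
                                   b"\x00" * 64)
```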
2.2 Metropolitan area network data packet definition
The topology of the metropolitan area network is a graph, and there may be two or even more connections between two devices; that is, there can be more than two connections between a node switch and a node server, between two node switches, or between a node switch and a metropolitan area server. However, the metropolitan area network address of a metropolitan area network device is unique, so in order to accurately describe the connection relationships between metropolitan area network devices, a parameter is introduced in the embodiment of the present invention: the label, which uniquely describes a metropolitan area network device.
The definition of a label in this specification is similar to the label definition of MPLS (Multi-Protocol Label Switching). Suppose there are two connections between device A and device B; then a data packet travelling from device A to device B has two labels, and a data packet from device B to device A also has two labels. Labels are divided into in-labels and out-labels: suppose the label of a data packet entering device A (the in-label) is 0x0000; the label when this data packet leaves device A (the out-label) may become 0x0001. The admission procedure of the metropolitan area network is an admission process under centralized control, which means that the address allocation and label allocation of the metropolitan area network are both dominated by the metropolitan area server, while the node switches and node servers execute passively. This differs from MPLS label allocation, in which the labels are the result of mutual negotiation between switches and servers.
As shown in the table below, a data packet of the metropolitan area network mainly includes the following parts:
| DA | SA | Reserved | Label | Payload | CRC |
namely the destination address (DA), source address (SA), reserved bytes (Reserved), label, payload (PDU), and CRC. The format of the label can be defined as follows: the label is 32 bits wide, of which the high 16 bits are reserved and only the low 16 bits are used, and its position is between the reserved bytes and the payload of the data packet.
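A corresponding sketch for the metropolitan area packet; the byte order is again an assumption:

```python
import struct
import zlib

def build_metro_packet(da, sa, label, payload):
    """Pack a metropolitan-area packet:
    DA(8) | SA(8) | Reserved(2) | Label(4) | payload | CRC(4).
    Only the low 16 bits of the 32-bit label carry data; the high 16 bits
    stay reserved (zero). `da` and `sa` are the 8-byte addresses defined for
    the access network packet above."""
    label_field = struct.pack(">I", label & 0xFFFF)
    body = da + sa + b"\x00\x00" + label_field + payload
    return body + struct.pack(">I", zlib.crc32(body) & 0xFFFFFFFF)

# e.g. the in-label 0x0000 on entry to device A may be swapped to the
# out-label 0x0001 when the packet leaves device A.
```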
Based on the above characteristics of video networking, one of the core concepts of the embodiments of the present invention is proposed: the video networking terminal calculates the ratio of I frames to P frames in the image data of the captured multimedia data and compares the calculated ratio with a preset ratio threshold; if the calculated ratio is less than or equal to the preset ratio threshold, the image data is encoded according to the current I-to-P frame ratio while the audio data in the multimedia data is also encoded, and the encoded image data and audio data are then sent to the video networking server.
Referring to Fig. 5, a flow chart of the steps of an embodiment of a multimedia data transmission method of the present invention is shown. The method can be applied in a video network and involves a video networking terminal and a video networking server.
In a specific implementation, the video networking terminal can be used in application scenarios that require real-time transmission of images and audio, such as video conferencing, video surveillance, live network broadcasting, and on-demand video streaming. The video networking terminal may be connected to a camera and a microphone to capture multimedia data such as image data and audio data, may capture image data and audio data through a built-in camera and microphone, and may also be connected to a television set to play multimedia data such as image data and audio data. The video networking terminal runs an operating system such as LINUX.
In the embodiment of the present invention, in application scenarios such as video surveillance, live network broadcasting, and on-demand video streaming, the video networking terminal is a device that supplies a source to the outside; that is, the video networking terminal captures multimedia data such as image data and audio data and sends it to the video networking server.
In a video conference, however, at least two video networking terminals are required, namely a first video networking terminal and a second video networking terminal, which serve as external sources for each other: the first video networking terminal captures first multimedia data and sends it to the second video networking terminal, which plays the first multimedia data; at the same time, the second video networking terminal captures second multimedia data and sends it to the first video networking terminal, which plays the second multimedia data.
Since the communication between the terminals is continuous, the capture of the first multimedia data by the first video networking terminal and of the second multimedia data by the second video networking terminal is also continuous.
The method may specifically include the following steps:
Step 501: the video networking terminal obtains multimedia data; the multimedia data includes image data and audio data; the image data consists of multiple groups of pictures, and each group of pictures consists of at least one I frame and at least one P frame.
H264 is a new-generation coding standard known for its high compression, high quality, and support for streaming media over many kinds of networks. The theoretical basis of H264 for encoding is as follows: statistics on images within a period of time show that, among adjacent pictures, generally only about 10% of the pixels differ, luminance differences change by no more than 2%, and chroma differences change by only about 1%. Therefore, for a run of pictures with little change, one complete picture frame A can be encoded first; subsequent B frames do not encode the whole picture but only the differences from frame A, so the size of a B frame is only about 1/10 of a full frame or even smaller. If the C frame after the B frame also changes little, it can continue to be encoded with reference to B, and so on. Such a run of images is called a sequence (a sequence is one block of data with the same characteristics). When some image differs greatly from the images before it and can no longer be generated with reference to the preceding frames, the previous sequence is ended and a new sequence begins: a complete frame A1 is generated for this image, and the subsequent images are generated with reference to A1, writing only their differences from A1.
Three kinds of frames are defined in the H264 protocol: a fully encoded frame is an I frame; a frame that encodes only the differences with reference to a preceding I frame is a P frame; and a frame encoded with reference to both the preceding and the following frames is a B frame.
The core algorithms used by H264 are intra-frame compression and inter-frame compression: intra-frame compression is the algorithm that generates I frames, and inter-frame compression is the algorithm that generates B frames and P frames.
Explanation of sequences
In H264, images are organized in units of sequences. A sequence is the data stream produced by encoding a segment of images; it begins with an I frame and ends at the next I frame.
The first image of a sequence is called an IDR image (Instantaneous Decoding Refresh image), and IDR images are always I-frame images. H.264 introduces IDR images for decoder resynchronization: when the decoder decodes an IDR image, it immediately clears the reference frame list, outputs or discards all decoded data, searches for the parameter sets again, and starts a new sequence. In this way, if a serious error occurred in the previous sequence, an opportunity for resynchronization is obtained here. Images after an IDR image are never decoded using data from images before the IDR image.
A sequence is therefore the stream segment produced by encoding a run of images whose content does not differ too much. When there is little motion, a sequence can be very long, because little motion means that the picture content changes very little, so one I frame can be encoded and then P frames and B frames throughout. When there is a lot of motion, a sequence may be short, for example containing one I frame and only three or four P frames.
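For illustration, a sketch of how frames could be grouped into sequences on this principle; the pixel-difference measure and the 10% threshold only mirror the rough statistics quoted above:

```python
def frame_difference(prev, cur):
    # Fraction of pixels that differ between two equal-length pixel sequences.
    changed = sum(1 for a, b in zip(prev, cur) if a != b)
    return changed / max(len(cur), 1)

def split_into_sequences(frames, change_threshold=0.10):
    """Group frames into sequences (GOPs): whenever a frame differs from the
    previous one by more than the threshold, close the current sequence and
    open a new one with an I frame; otherwise append the frame as a P frame."""
    sequences, current, prev = [], [], None
    for frame in frames:
        if prev is None or frame_difference(prev, frame) > change_threshold:
            if current:
                sequences.append(current)
            current = [("I", frame)]       # fully encoded key frame
        else:
            current.append(("P", frame))   # only the difference is encoded
        prev = frame
    if current:
        sequences.append(current)
    return sequences
```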
Explanation of the three kinds of frames
I frame: an intra-coded frame. The I frame is a key frame and can be understood as a complete preservation of this frame's picture; because it contains the complete picture, only the data of this frame is needed for decoding.
I frame characteristics:
1. It is a full-frame compression-coded frame, which JPEG-compresses and transmits the full-frame image information;
2. At decoding time a complete image can be reconstructed using only the data of the I frame;
3. The I frame describes the details of the image background and of moving subjects;
4. I frames are generated without reference to other pictures;
5. The I frame is the reference frame for P frames and B frames (its quality directly affects the quality of every subsequent frame in the same group);
6. The I frame is the basic frame (first frame) of a frame group (GOP), and there is only one I frame in a group;
7. I frames do not need to consider motion vectors;
8. The amount of data occupied by an I frame is relatively large.
P frame: a forward-predicted coded frame. A P frame represents the difference between this frame and a preceding key frame (or P frame); at decoding time the difference defined by this frame must be superimposed on the previously cached picture to produce the final picture. In other words, it is a difference frame: a P frame carries no complete picture data, only the data of its differences from the picture of the previous frame.
Prediction and reconstruction of P frames: a P frame uses an I frame as its reference frame; the predicted value and motion vector of "a given point" of the P frame are found in the I frame, and the prediction difference is transmitted together with the motion vector. At the receiving end, the predicted value of that point of the P frame is located in the I frame according to the motion vector and added to the difference to obtain the sample value of that point, so that the complete P frame can be obtained.
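For illustration, a sketch of the receiver-side reconstruction just described; the block size, clipping, and frame representation are assumptions:

```python
def reconstruct_p_block(i_frame, motion_vector, residual, x, y, size=8):
    """Decoder-side reconstruction of one P-frame block: fetch the predicted
    block from the reference I frame at the motion-compensated position and
    add the transmitted prediction difference. `i_frame` is a 2-D list of
    luma samples."""
    dx, dy = motion_vector
    block = []
    for row in range(size):
        out_row = []
        for col in range(size):
            predicted = i_frame[y + dy + row][x + dx + col]
            sample = predicted + residual[row][col]
            out_row.append(max(0, min(255, sample)))  # clip to the 8-bit range
        block.append(out_row)
    return block
```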
P frame characteristics:
1. A P frame is a coded frame that follows the I frame at a distance of 1 to 2 frames;
2. A P frame transmits, using the motion-compensation method, its difference and motion vector (prediction error) with respect to the preceding I or P frame;
3. At decoding time, the predicted value from the I frame must be summed with the prediction error before the complete P-frame image can be reconstructed;
4. P frames belong to forward-predicted inter-frame coding; they refer only to the nearest preceding I or P frame;
5. A P frame can be the reference frame of a following P frame and can also be the reference frame of the B frames before and after it;
6. Because a P frame is a reference frame, it may cause decoding errors to propagate;
7. Because only differences are transmitted, the compression of P frames is relatively high.
B frame: a bidirectionally predicted interpolation-coded frame. A B frame is a bidirectional difference frame; that is, a B frame records the differences between this frame and both the preceding and the following frames. Put simply, to decode a B frame, not only the cached preceding picture but also the following picture must be decoded, and the final picture is obtained by superimposing the preceding and following pictures with the data of this frame. B frames give a high compression ratio, but the CPU works harder during decoding.
Prediction and reconstruction of B frames:
A B frame uses the preceding I or P frame and the following P frame as reference frames; the predicted value of "a given point" of the B frame and two motion vectors are found, and the prediction difference and the motion vectors are transmitted. The receiving end finds (computes) the predicted value in the two reference frames according to the motion vectors and sums it with the difference to obtain the sample value of that point of the B frame, so that the complete B frame can be obtained.
B frame characteristics:
1. B frames are predicted from the preceding I or P frame and the following P frame;
2. What a B frame transmits is its prediction error and motion vectors with respect to the preceding I or P frame and the following P frame;
3. B frames are bidirectionally predicted coded frames;
4. B frames have the highest compression ratio, because they reflect only the changes of the moving subject between the reference frames, so prediction is more accurate;
5. B frames are not reference frames and therefore do not cause decoding errors to propagate.
It should be noted that, in the prior art, the I, B, and P frames are defined artificially according to the needs of the compression algorithm; they are all real physical frames. In general, the compression ratio of an I frame is 7, that of a P frame is 20, and that of a B frame can reach 50. It can be seen that using B frames saves a great deal of space, and the saved space can be used to store more I frames, so that at the same bit rate a better picture quality can be provided.
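A small worked example of the space saving implied by those compression ratios; the raw frame size and the GOP patterns are assumed for illustration:

```python
RAW_FRAME_BYTES = int(1920 * 1080 * 1.5)     # one uncompressed YUV 4:2:0 frame (assumed)
COMPRESSION = {"I": 7, "P": 20, "B": 50}     # ratios quoted above

def gop_bytes(pattern):
    return sum(RAW_FRAME_BYTES / COMPRESSION[f] for f in pattern)

ip_only = gop_bytes("I" + "P" * 14)            # 15-frame group without B frames
with_b  = gop_bytes("I" + "BBP" * 4 + "PP")    # same length with B frames inserted
saving  = 1 - with_b / ip_only
print(f"{ip_only:.0f} B vs {with_b:.0f} B -> {saving:.0%} saved")
```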
Explanation of the compression algorithm
The compression method of H264:
1. Grouping: several frames of images are grouped into one group (GOP, Group of Pictures), i.e. one sequence; to cope with motion, the number of frames per group should not be too large.
2. Frame definition: each frame image within every group is defined as one of three types, namely I frame, B frame, or P frame;
3. Frame prediction: the I frame is used as the basic frame, the P frame is predicted from the I frame, and the B frame is then predicted from the I frame and the P frame;
4. Data transmission: finally, the I-frame data and the difference information of the predictions are stored and transmitted.
Intra-frame (Intraframe) compression, also called spatial compression (Spatial compression), considers only the data of the current frame when compressing it, without considering redundancy between adjacent frames; it is essentially similar to still-image compression. Intra-frame compression generally uses a lossy compression algorithm. Since intra-frame compression encodes a complete image, the image can be decoded and displayed independently. Intra-frame compression is similar to JPEG coding and generally does not achieve a very high compression ratio.
The principle of inter-frame (Interframe) compression is that the data of adjacent frames are strongly correlated; in other words, the information of two consecutive frames changes little. That is, consecutive video frames carry redundancy between adjacent frames, and by compressing away this redundancy the amount of compression can be further increased and the compressed size reduced. Inter-frame compression is also called temporal compression (Temporal compression); it compresses by comparing data between different frames along the time axis. Inter-frame compression is generally lossless. The frame-differencing (Frame differencing) algorithm is a typical temporal compression method: by comparing this frame with adjacent frames and recording only the difference between this frame and its adjacent frame, the amount of data can be greatly reduced.
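For illustration, a minimal frame-differencing sketch on flat pixel sequences:

```python
def frame_delta(prev, cur):
    """Frame-differencing sketch: record only the pixels of this frame that
    differ from the adjacent (previous) frame, as (index, new_value) pairs."""
    return [(i, v) for i, (p, v) in enumerate(zip(prev, cur)) if p != v]

def apply_delta(prev, delta):
    # Rebuild the current frame from the previous frame plus the recorded differences.
    frame = list(prev)
    for i, v in delta:
        frame[i] = v
    return frame
```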
Compression is either lossy (Lossy) or lossless (Lossless). With lossless compression, the data before compression and after decompression are completely identical; most lossless compression uses the RLE run-length encoding algorithm. Lossy compression means that the data after decompression are not identical to the data before compression: during compression some image or audio information to which human eyes and ears are insensitive is lost, and the lost information cannot be recovered. Almost all high-compression algorithms use lossy compression, because only in this way can a low data rate be reached. The amount of data lost is related to the degree of compression: the more the data is compressed, the more data is lost and the worse the result after decompression generally is. In addition, some lossy compression algorithms compress repeatedly, which causes additional data loss.
As described above, in the prior art the ratio of I frames to P frames is fixed while the image is being compressed. In this case, if there is little scene switching, for example a live broadcast of someone standing at a podium giving a speech, few I frames are needed, so encoding with the fixed ratio saves coding resources and improves coding efficiency. But if there is a lot of scene switching, for example a live broadcast of someone dancing, more I frames are needed; if the captured image data continues to be compressed according to the fixed I/P frame ratio, data is lost in encoding because there are not enough I frames, and the decoded image then shows artifacts or stuttering.
In view of this, in the embodiment of the present invention it is first judged whether the captured image data contains a lot of scene switching. Specifically, after obtaining the multimedia data, the video networking terminal extracts the groups of pictures in the image data, where a group of pictures consists of at least one I frame and at least one P frame. If the number of I frames is large, there is a lot of scene switching in the image data, and the ratio of I frames to P frames then needs to be calculated.
It should be noted that, because B frames are not reference frames and do not cause decoding errors to propagate, B frames do not need to be considered in the embodiment of the present invention.
In a preferred embodiment of the present invention, the multimedia data is captured by an image capture device and an audio capture device, and the step of the video networking terminal obtaining the multimedia data includes:
the video networking terminal obtaining the image data from the image capture device and obtaining the audio data from the audio capture device.
Specifically, the video networking terminal may be connected directly to an image capture device such as a camera and an audio capture device such as a microphone so as to obtain the image data and audio data directly, or it may obtain image data and audio data captured by other devices through a connection to those devices; for example, a laptop computer may capture the image data and audio data and then send them to the video networking terminal.
Step 502: the video networking terminal obtains the ratio of I frames to P frames in the group of pictures.
Specifically, the number of I frames and the number of P frames in the group of pictures are obtained, and the ratio of I frames to P frames is then calculated.
Step 503: if the ratio is less than or equal to a preset ratio threshold, the video networking terminal encodes the current image data according to the ratio and encodes the audio data at the same time.
In practical applications, it almost never happens that every captured frame is an I frame; that is, a group of pictures is never 100% I frames. When image data is encoded, the more I frames there are, the heavier the load on the encoder, but also the better the picture quality. Therefore, to balance picture quality against encoder load, a ratio threshold of I frames to P frames can be preset in the embodiment of the present invention, for example an I-to-P ratio of 4:1. If the calculated ratio of I frames to P frames is less than the preset ratio threshold, encoding is performed according to the current I-to-P ratio. For example, if the preset I-to-P ratio is 4:1 and the ratio of I frames to P frames in the captured image data is 3:1, which is below the preset threshold, the captured image data is encoded with an I-to-P ratio of 3:1.
In a preferred embodiment of the present invention, if the ratio is greater than the preset ratio threshold, the video networking terminal encodes the current image data according to the preset ratio threshold and encodes the audio data at the same time.
Specifically, if the ratio of I frames to P frames in the captured image data is greater than the preset I-to-P ratio threshold, the captured images are encoded according to the preset ratio threshold. For example, if the ratio of I frames to P frames in the captured image data is 5:1, which exceeds the preset 4:1 threshold, the captured images are encoded with an I-to-P ratio of 4:1.
In theory any frame of the image could be encoded as an I frame, but this is not done in practice. Therefore, when the ratio of I frames to P frames in the captured image data exceeds the preset I-to-P ratio threshold, frames that would originally have been I frames are forcibly encoded as P frames until the ratio of I frames to P frames in the captured image data no longer exceeds the preset threshold, as sketched below.
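Putting steps 502 and 503 together, the following sketch measures the I-to-P ratio of a group of pictures, compares it with the preset threshold, and forcibly re-marks surplus I frames as P frames when the threshold is exceeded; the frame-type representation is an assumption made for illustration:

```python
PRESET_RATIO = 4.0   # preset I:P threshold, e.g. the 4:1 example above

def choose_encoding_ratio(gop):
    """`gop` is a list of frame-type strings ("I" or "P"). If the measured
    I:P ratio does not exceed the preset threshold, the group is encoded with
    that ratio; otherwise surplus I frames are re-marked as P frames (keeping
    the first, basic I frame) until the threshold is satisfied."""
    i_count = gop.count("I")
    p_count = gop.count("P")
    ratio = i_count / max(p_count, 1)
    if ratio <= PRESET_RATIO:
        return gop, ratio                       # encode with the measured ratio
    adjusted = list(gop)
    for idx in range(len(adjusted) - 1, 0, -1): # never touch the basic frame at index 0
        if i_count / max(p_count, 1) <= PRESET_RATIO:
            break
        if adjusted[idx] == "I":
            adjusted[idx] = "P"
            i_count -= 1
            p_count += 1
    return adjusted, PRESET_RATIO
```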
In a preferred embodiment of the present invention, the video networking terminal includes a first chip and a second chip, the first chip being used to encode the image data and the second chip being used to encode the audio data.
In the embodiment of the present invention, the video networking terminal therefore contains two chips: a first chip for encoding the image data and a second chip for encoding the audio data. That is, the video networking terminal encodes the image data and the audio data separately.
Step 504: the video networking terminal sends the encoded image data and the encoded audio data to the video networking server.
After it has finished encoding the image data and the audio data, the video networking terminal needs to packetize the multimedia data into data packets that can be transmitted on the link.
In a preferred embodiment of the present invention, the video networking terminal includes a video networking protocol stack in which each video networking protocol corresponds one-to-one to a data type;
the step of the video networking terminal sending the encoded image data and the encoded audio data to the video networking server includes:
the video networking terminal sending the encoded image data and the encoded audio data to the video networking server using the video networking protocol corresponding to the type of the image data.
Specifically, the video networking protocol stack includes the following major classes of MP (Message Process, the video networking message processing protocol): 1. remote video conference protocol; 2. video surveillance protocol; 3. network live-broadcast protocol; 4. on-demand video stream protocol; 5. multimedia forwarding protocol; 6. subtitle protocol; 7. menu protocol (set on the terminal); 8. audio protocol; and each major class may also contain sub-classes. Each major MP class corresponds one-to-one to a kind of data transmission: for example, the remote video conference protocol transmits the multimedia data of a video conference, and the video surveillance protocol transmits video surveillance data, as sketched below.
Therefore, when packetizing the multimedia data, the video networking terminal needs to know which major MP class to use for packetization, and the type of MP to be used must be sent in advance to the video networking terminal by the video networking server connected to it. Specifically, the video networking server may be directly connected to a terminal device such as a computer; a user can set the data type of the video networking terminal on the video networking server through that terminal device, and the video networking server then sends the data type of the video networking terminal to the video networking terminal. In this way, when packetizing the multimedia data, the video networking terminal can use the video networking protocol corresponding to the data type.
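For illustration, a possible mapping from the data type configured on the server to the MP class used for packetization; the constant names are hypothetical:

```python
# Hypothetical names for the major MP classes listed above; the numbering
# follows that list but is otherwise an assumption.
MP_PROTOCOLS = {
    "video_conference": 1,   # remote video conference protocol
    "video_monitoring": 2,   # video surveillance protocol
    "live_broadcast":   3,   # network live-broadcast protocol
    "vod_stream":       4,   # on-demand video stream protocol
    "multimedia_relay": 5,   # multimedia forwarding protocol
    "subtitle":         6,   # subtitle protocol
    "menu":             7,   # menu protocol (set on the terminal)
    "audio":            8,   # audio protocol
}

def select_mp_protocol(data_type):
    # The data type itself is pushed to the terminal by the video networking server.
    return MP_PROTOCOLS[data_type]
```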
After the multimedia data has been packetized, a packet header needs to be added to the data packet. In the embodiment of the present invention, the header carries the video networking MAC address and a command. This is mainly because MP is based on a raw socket: the protocol layer handles the data link layer directly, raw sockets are called directly to process the data, and the layer-2 MAC address is transmitted directly, so the commands can omit the IP layer and are more concise and efficient. The command is simply the specific action each MP is to perform.
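For illustration, a sketch of a layer-2 transmission over a Linux raw socket as described above; the EtherType value and the placement of the command field are assumptions, not part of the disclosure:

```python
import socket
import struct

VNET_ETHERTYPE = 0x88B5   # an experimental EtherType used here as a stand-in

def send_vnet_frame(ifname, dst_mac, src_mac, command, body):
    """Send a video networking packet directly at layer 2 over a Linux
    AF_PACKET raw socket, so no IP layer is involved. The command is carried
    as a 2-byte field right after the Ethernet header."""
    header = dst_mac + src_mac + struct.pack(">H", VNET_ETHERTYPE)
    frame = header + struct.pack(">H", command) + body
    with socket.socket(socket.AF_PACKET, socket.SOCK_RAW) as sock:
        sock.bind((ifname, 0))
        sock.send(frame)

# e.g. send_vnet_frame("eth0", b"\xaa\xbb\xcc\xdd\xee\xff",
#                      b"\x11\x22\x33\x44\x55\x66", 0x0001, encoded_payload)
```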
Depending on networked server after receiving packed data packet, if the addresses mac have also been accessed regarding networking, take Data packet is sent directly to destination address by business device according to the addresses mac, if the addresses mac, which are not accessed, regards networking, service Device also needs to first send data packets to turns server depending on networking association, wherein turns server depending on networking association and is used for internet packet Head is converted with depending on networking packet header, and then turning server depending on networking association sends data packets to destination address.
In the embodiment of the present invention, the video networking terminal calculates the ratio of I frames to P frames of the image data in the collected multimedia data and compares the calculated ratio with a preset ratio threshold. If the calculated ratio is less than or equal to the preset ratio threshold, the image data is encoded according to the current ratio of I frames to P frames, the audio data in the multimedia data is encoded at the same time, and the encoded image data and audio data are then sent to the video networking server. In this way, dynamic configuration of the I-frame/P-frame ratio for video networking encoding is achieved, which solves the problems in the prior art of frequent packet loss, stuttering and delay during the transmission of multimedia data caused by a fixed configuration of the I-frame/P-frame ratio.
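A minimal sketch of this encoding decision is shown below: count I frames and P frames in the current group of pictures, compare the I:P ratio with a preset threshold, and keep the current ratio only when it does not exceed the threshold (otherwise encode at the preset ratio, as in the preferred embodiment above). The threshold value and the encoder hooks are illustrative assumptions.

```c
#include <stdio.h>

#define PRESET_RATIO_THRESHOLD 0.25  /* assumed value: at most one I frame per four P frames */

/* Hypothetical encoder entry points. */
static void encode_video(double i_to_p_ratio) { printf("video encoded at I:P ratio %.3f\n", i_to_p_ratio); }
static void encode_audio(void)                { printf("audio encoded\n"); }

void encode_gop(int i_frames, int p_frames)
{
    double ratio = (p_frames > 0) ? (double)i_frames / p_frames : 1.0;

    if (ratio <= PRESET_RATIO_THRESHOLD)
        encode_video(ratio);                    /* keep the current I:P ratio      */
    else
        encode_video(PRESET_RATIO_THRESHOLD);   /* fall back to the preset ratio   */

    encode_audio();                             /* audio is encoded in either case */
}

int main(void)
{
    encode_gop(1, 9);   /* ratio 0.111 <= threshold: keep current ratio   */
    encode_gop(3, 6);   /* ratio 0.5   >  threshold: encode at threshold  */
    return 0;
}
```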
Moreover, the video networking terminal integrates a video networking protocol stack, so that a video networking protocol packet header can be added when the multimedia data is packaged. The packet header carries the video networking MAC address and a command, and transmission is established on raw sockets. Data transceiving and command transceiving can therefore be handled independently and efficiently, which ensures the accurate and timely transmission of multimedia data and command protocols.
It should be noted that, for simplicity of description, the method embodiments are expressed as a series of action combinations. However, those skilled in the art should understand that the embodiments of the present invention are not limited by the described order of actions, because according to the embodiments of the present invention, certain steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Referring to Fig. 6, a structural block diagram of an embodiment of a video networking terminal of the present invention is shown. The video networking terminal is applied in a video network and performs data interaction with a video networking server. The video networking terminal may specifically include the following modules (a schematic composition of these modules is sketched after the list):
a multimedia data acquisition module 601, configured to acquire multimedia data, the multimedia data including image data and audio data, wherein the image data consists of a plurality of groups of pictures and each group of pictures consists of at least one I frame and at least one P frame;
a ratio acquisition module 602, configured to acquire the ratio of I frames to P frames in the group of pictures;
a coding module 603, configured to, when the ratio is less than or equal to a preset ratio threshold, encode the current image data according to the ratio and encode the audio data at the same time; and
a sending module 604, configured to send the encoded image data and the encoded audio data to the video networking server.
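The schematic composition below shows one way modules 601-604 could cooperate in a terminal; the data structures, callbacks and the threshold value are illustrative assumptions rather than the apparatus itself.

```c
#include <stdio.h>

typedef struct { int i_frames, p_frames; int has_audio; } media_t;

typedef struct {
    media_t (*acquire)(void);                       /* module 601: acquisition      */
    double  (*ratio_of)(const media_t *m);          /* module 602: I:P ratio        */
    void    (*encode)(const media_t *m, double r);  /* module 603: coding           */
    void    (*send)(const media_t *m);              /* module 604: sending          */
} vn_terminal_t;

static media_t acquire(void)              { media_t m = {1, 7, 1}; return m; }
static double  ratio_of(const media_t *m) { return m->p_frames ? (double)m->i_frames / m->p_frames : 1.0; }
static void    encode(const media_t *m, double r) { printf("encode I:P=%.2f, audio=%d\n", r, m->has_audio); }
static void    send_pkt(const media_t *m) { (void)m; printf("sent to video networking server\n"); }

int main(void)
{
    const double threshold = 0.25;                  /* assumed preset ratio threshold */
    vn_terminal_t t = { acquire, ratio_of, encode, send_pkt };

    media_t m = t.acquire();
    double r = t.ratio_of(&m);
    t.encode(&m, r <= threshold ? r : threshold);   /* coding module's decision */
    t.send(&m);
    return 0;
}
```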
In a preferred embodiment of the present invention, the multimedia data is collected by an image collecting device and an audio collecting device; the video networking terminal further includes:
a receiving module, configured to acquire the image data from the image collecting device and acquire the audio data from the audio collecting device.
In a preferred embodiment of the present invention, when the ratio is greater than the preset ratio threshold, the coding module is further configured to encode the current image data according to the preset ratio threshold and to encode the audio data at the same time.
In a preferred embodiment of the present invention, the video networking terminal includes a video networking protocol stack, and each video networking protocol has a one-to-one corresponding data type;
the sending module is further configured to send the encoded image data and the encoded audio data to the video networking server using the video networking protocol corresponding to the data type of the image data.
In a preferred embodiment of the present invention, the video networking terminal further includes a first chip and a second chip, the first chip being configured to encode the image data and the second chip being configured to encode the audio data.
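One way to picture this two-chip arrangement is the following sketch, in which POSIX threads stand in for the two hardware encoder chips so the image and audio streams are encoded in parallel; all names are assumptions and this is not the patent's implementation (compile with -lpthread).

```c
#include <pthread.h>
#include <stdio.h>

static void *image_chip_encode(void *arg) {    /* first chip: image data  */
    (void)arg;
    printf("chip 1: image data encoded at the configured I:P ratio\n");
    return NULL;
}

static void *audio_chip_encode(void *arg) {    /* second chip: audio data */
    (void)arg;
    printf("chip 2: audio data encoded\n");
    return NULL;
}

int main(void)
{
    pthread_t img, aud;
    pthread_create(&img, NULL, image_chip_encode, NULL);
    pthread_create(&aud, NULL, audio_chip_encode, NULL);
    pthread_join(img, NULL);
    pthread_join(aud, NULL);
    printf("both streams ready to be packaged and sent\n");
    return 0;
}
```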
Since the video networking terminal embodiment is basically similar to the method embodiment, its description is relatively brief; for relevant details, reference may be made to the corresponding parts of the description of the method embodiment.
Each embodiment in this specification is described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, an apparatus, or a computer program product. Therefore, the embodiments of the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, terminal device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal device produce an apparatus for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing terminal device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that realizes the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or another programmable data processing terminal device, so that a series of operation steps are executed on the computer or other programmable terminal device to produce computer-implemented processing, and the instructions executed on the computer or other programmable terminal device thereby provide steps for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as covering the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
Finally, it should be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device including a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article, or terminal device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or terminal device that includes that element.
The method of multimedia data transmission and the video networking terminal provided by the present invention have been introduced in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A method of multimedia data transmission, characterized in that the method is applied in a video network and involves a video networking terminal and a video networking server; the method comprises:
the video networking terminal acquiring multimedia data, the multimedia data comprising image data and audio data, wherein the image data consists of a plurality of groups of pictures and each group of pictures consists of at least one I frame and at least one P frame;
the video networking terminal acquiring the ratio of I frames to P frames in the group of pictures;
if the ratio is less than or equal to a preset ratio threshold, the video networking terminal encoding the current image data according to the ratio and encoding the audio data at the same time; and
the video networking terminal sending the encoded image data and the encoded audio data to the video networking server.
2. The method according to claim 1, characterized in that the multimedia data is collected by an image collecting device and an audio collecting device; the step of the video networking terminal acquiring the multimedia data comprises:
the video networking terminal acquiring the image data from the image collecting device and acquiring the audio data from the audio collecting device.
3. The method according to claim 1, characterized in that the method further comprises:
if the ratio is greater than the preset ratio threshold, the video networking terminal encoding the current image data according to the preset ratio threshold and encoding the audio data at the same time.
4. The method according to claim 1, 2 or 3, characterized in that the video networking terminal includes a video networking protocol stack, and each video networking protocol has a one-to-one corresponding data type;
the step of the video networking terminal sending the encoded image data and the encoded audio data to the video networking server comprises:
the video networking terminal sending the encoded image data and the encoded audio data to the video networking server using the video networking protocol corresponding to the data type of the image data.
5. The method according to claim 1, characterized in that the video networking terminal includes a first chip and a second chip, the first chip being configured to encode the image data and the second chip being configured to encode the audio data.
6. A video networking terminal, characterized in that the video networking terminal is applied in a video network and performs data interaction with a video networking server; the video networking terminal comprises:
a multimedia data acquisition module, configured to acquire multimedia data, the multimedia data comprising image data and audio data, wherein the image data consists of a plurality of groups of pictures and each group of pictures consists of at least one I frame and at least one P frame;
a ratio acquisition module, configured to acquire the ratio of I frames to P frames in the group of pictures;
a coding module, configured to, when the ratio is less than or equal to a preset ratio threshold, encode the current image data according to the ratio and encode the audio data at the same time; and
a sending module, configured to send the encoded image data and the encoded audio data to the video networking server.
7. The video networking terminal according to claim 6, characterized in that the multimedia data is collected by an image collecting device and an audio collecting device; the video networking terminal further comprises:
a receiving module, configured to acquire the image data from the image collecting device and acquire the audio data from the audio collecting device.
8. The video networking terminal according to claim 6, characterized in that, when the ratio is greater than the preset ratio threshold, the coding module is further configured to encode the current image data according to the preset ratio threshold and to encode the audio data at the same time.
9. The video networking terminal according to claim 6, 7 or 8, characterized in that the video networking terminal includes a video networking protocol stack, and each video networking protocol has a one-to-one corresponding data type;
the sending module is further configured to send the encoded image data and the encoded audio data to the video networking server using the video networking protocol corresponding to the data type of the image data.
10. The video networking terminal according to claim 6, characterized in that the video networking terminal further includes a first chip and a second chip, the first chip being configured to encode the image data and the second chip being configured to encode the audio data.
CN201710802232.2A 2017-09-07 2017-09-07 A kind of method that multi-medium data transmits and a kind of view networked terminals Active CN108632679B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710802232.2A CN108632679B (en) 2017-09-07 2017-09-07 A kind of method that multi-medium data transmits and a kind of view networked terminals

Publications (2)

Publication Number Publication Date
CN108632679A true CN108632679A (en) 2018-10-09
CN108632679B CN108632679B (en) 2019-11-01

Family

ID=63705796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710802232.2A Active CN108632679B (en) 2017-09-07 2017-09-07 A kind of method that multi-medium data transmits and a kind of view networked terminals

Country Status (1)

Country Link
CN (1) CN108632679B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040042548A1 (en) * 2002-08-27 2004-03-04 Divio, Inc. Bit rate control for motion compensated video compression system
CN105791260A (en) * 2015-11-30 2016-07-20 武汉斗鱼网络科技有限公司 Network self-adaptive stream media service quality control method and device
CN106303693A (en) * 2015-05-25 2017-01-04 北京视联动力国际信息技术有限公司 A kind of method and device of video data decoding
CN106713947A (en) * 2016-12-13 2017-05-24 飞狐信息技术(天津)有限公司 Method and device for reducing live broadcasting time delay and standstill as well as live broadcasting system

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020078391A1 (en) * 2018-10-19 2020-04-23 杭州海康威视系统技术有限公司 Method and apparatus for storing video data
CN110035297A (en) * 2019-03-08 2019-07-19 视联动力信息技术股份有限公司 Method for processing video frequency and device
CN110035297B (en) * 2019-03-08 2021-05-14 视联动力信息技术股份有限公司 Video processing method and device
CN110278195A (en) * 2019-05-23 2019-09-24 视联动力信息技术股份有限公司 A kind of data transmission method and device based on view networking
CN110278195B (en) * 2019-05-23 2020-12-11 视联动力信息技术股份有限公司 Data transmission method and device based on video network
CN113099308A (en) * 2021-03-31 2021-07-09 聚好看科技股份有限公司 Content display method, display equipment and image collector
CN113099308B (en) * 2021-03-31 2023-10-27 聚好看科技股份有限公司 Content display method, display equipment and image collector

Also Published As

Publication number Publication date
CN108632679B (en) 2019-11-01

Similar Documents

Publication Publication Date Title
CN109194982B (en) Method and device for transmitting large file stream
CN109756789B (en) Method and system for processing packet loss of audio and video data packet
CN108632679B (en) A kind of method that multi-medium data transmits and a kind of view networked terminals
CN108063745B (en) A kind of video call method and its system based on Android device
CN109788232A (en) A kind of summary of meeting recording method of video conference, device and system
CN109462761A (en) A kind of video encoding/decoding method and device
CN108881135A (en) It is a kind of based on view networking information transferring method, device and system
CN108574816B (en) Video networking terminal and communication method and device based on video networking terminal
CN111147859A (en) Video processing method and device
CN109743285A (en) A kind of method and system obtaining PCTV resource
CN109302384B (en) Data processing method and system
CN110769179B (en) Audio and video data stream processing method and system
CN110769297A (en) Audio and video data processing method and system
CN110661992A (en) Data processing method and device
CN111212255B (en) Monitoring resource obtaining method and device and computer readable storage medium
CN108989831A (en) A kind of network REC method and apparatus of multi-code stream
CN110086773B (en) Audio and video data processing method and system
CN110324667B (en) Novel video stream playing method and system
CN108574819B (en) A kind of terminal device and a kind of method of video conference
CN110445761A (en) A kind of video drawing stream method and device
CN110430168A (en) A kind of method and apparatus of data compression
CN109788222A (en) A kind of processing method and processing device regarding networked video
CN108965744A (en) A kind of method of video image processing and device based on view networking
CN110460790A (en) A kind of abstracting method and device of video frame
CN108965914A (en) A kind of video data handling procedure and device based on view networking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100000 Dongcheng District, Beijing, Qinglong Hutong 1, 1103 house of Ge Hua building.

Applicant after: Video Link Power Information Technology Co., Ltd.

Address before: 100000 Beijing Dongcheng District Qinglong Hutong 1 Song Hua Building A1103-1113

Applicant before: BEIJING VISIONVERA INTERNATIONAL INFORMATION TECHNOLOGY CO., LTD.

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201230

Address after: 571924 building C07, Zone C, Hainan Ecological Software Park, hi tech Industrial Demonstration Zone, old town, Haikou City, Hainan Province

Patentee after: Hainan Shilian Communication Technology Co.,Ltd.

Address before: 100000 Dongcheng District, Beijing, Qinglong Hutong 1, 1103 house of Ge Hua building.

Patentee before: VISIONVERA INFORMATION TECHNOLOGY Co.,Ltd.