CN109547727A - Data cache method and device - Google Patents
- Publication number
- CN109547727A (application CN201811203720.2A)
- Authority
- CN
- China
- Prior art keywords
- memory
- memory block
- data packet
- block
- networked terminals
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/231—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
- H04N21/23103—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion using load balancing strategies, e.g. by placing or distributing content on different disks, different memories or different servers
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/147—Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The present invention provides a data caching method and device, applied in view networking. The method includes: a view networking terminal calls a preset memory application function of the operating system to apply for a set quantity of memory blocks as a memory pool for data caching; the view networking terminal receives, based on the view networking protocol, data packets issued by the view networking server over the downlink communication link configured for the terminal; and the view networking terminal allocates the target memory blocks required by a data packet from the memory pool and caches the data packet into those target memory blocks. With the present invention, the memory application function of the operating system is called only once, at the start, to apply for a relatively large amount of memory as the memory pool; subsequent data caching allocates memory blocks directly from the pool, so the operating system's memory application function need not be called dynamically and frequently. This avoids the memory faults (core dumps) that frequent calls can produce and improves the stability of the system.
Description
Technical field
The present invention relates to the field of view networking technologies, and more particularly to a data caching method and a data caching device.
Background technique
With the rapid development of network technology, two-way communication services such as video conferencing, video teaching and video telephony have become widespread in users' daily life, work and study. During communication, data can be transmitted between the terminals of the two parties. After receiving data, a terminal first caches the data and performs the corresponding processing, and then decodes and displays it.
In the prior art, every time a terminal receives data, it calls the operating system to apply for memory of the size required by that data. The operating system therefore has to be called frequently, and frequent calls can cause system errors, reducing the stability of the system.
Summary of the invention
In view of the above problems, embodiments of the present invention are proposed to provide a data caching method and a corresponding data caching device that overcome, or at least partly solve, the above problems.
To solve the above problems, an embodiment of the invention discloses a data caching method applied in view networking, where the view networking includes a view networking terminal and a view networking server. The method includes:
the view networking terminal calls a preset memory application function of the operating system to apply for a set quantity of memory blocks as a memory pool for data caching;
the view networking terminal receives, based on the view networking protocol, data packets issued by the view networking server over the downlink communication link configured for the view networking terminal;
the view networking terminal allocates the target memory blocks required by a data packet from the memory pool, and caches the data packet into the target memory blocks.
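The pre-allocation step above can be sketched in C. This is a minimal illustration, not the patent's implementation; the block size, block count and all identifiers (`mem_pool`, `pool_init`, `pool_block`) are assumptions. The point it shows is that `malloc` (the operating system's memory application function) is called exactly once, up front:

```c
#include <stdlib.h>

#define BLOCK_SIZE  2048   /* assumed capacity of one memory block */
#define BLOCK_COUNT 1024   /* assumed "set quantity" of memory blocks */

typedef struct {
    unsigned char *base;   /* one contiguous region from a single malloc */
    size_t alloc_ptr;      /* index of the block the allocation pointer points to */
} mem_pool;

/* A single call to the memory application function creates the whole
 * pool; later packet caching draws blocks from it and never calls the
 * operating system again. */
int pool_init(mem_pool *p) {
    p->base = malloc((size_t)BLOCK_SIZE * BLOCK_COUNT);
    p->alloc_ptr = 0;
    return p->base != NULL;
}

/* Address of the i-th block inside the pool. */
unsigned char *pool_block(const mem_pool *p, size_t i) {
    return p->base + i * BLOCK_SIZE;
}
```

All later allocations are then pointer arithmetic inside this region, which is what removes the frequent system calls the background section complains about.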
Preferably, the memory pool has an allocation pointer and the memory blocks in the memory pool are arranged in order. The step of the view networking terminal allocating the target memory blocks required by the data packet from the memory pool includes: the view networking terminal judges whether, starting from the memory block currently pointed to by the allocation pointer and proceeding in order, the number of free memory blocks in the memory pool is greater than or equal to the number of memory blocks required by the data packet; when the judgment result is yes, the view networking terminal allocates the required target memory blocks in order, starting from the memory block currently pointed to by the allocation pointer; when the judgment result is no, the view networking terminal points the allocation pointer back to the top memory block of the memory pool and then allocates the required target memory blocks in order, starting from the memory block currently pointed to.
Preferably, the step of allocating the required target memory blocks in order from the memory block currently pointed to by the allocation pointer includes: the view networking terminal judges whether, starting from the memory block currently pointed to by the allocation pointer and proceeding in order, the number of remaining free memory blocks is greater than or equal to the number of memory blocks required by the data packet; when the judgment result is yes, the view networking terminal allocates, in order from the memory block currently pointed to, free memory blocks of the required number as the target memory blocks for the data packet; when the judgment result is no, the view networking terminal returns to this judging step and executes it again.
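The allocation-pointer logic just described can be sketched as follows. All names (`pool`, `free_run`, `pool_alloc`) and the per-block free flags are assumptions for illustration, and where the patent's terminal loops back and re-judges until enough blocks are released, this sketch simply returns -1:

```c
#define NBLOCKS 8   /* tiny pool just for illustration */

typedef struct {
    int free_flag[NBLOCKS];  /* 1 = block is free (assumed bookkeeping) */
    int alloc_ptr;           /* block the allocation pointer points to */
} pool;

/* Are there `need` consecutive free blocks starting at `from`? */
static int free_run(const pool *p, int from, int need) {
    if (from + need > NBLOCKS) return 0;   /* run goes past the last block */
    for (int i = from; i < from + need; i++)
        if (!p->free_flag[i]) return 0;    /* a block in the run is in use */
    return 1;
}

/* Returns the index of the first target block, or -1 if the pool is too
 * busy (the patent instead re-judges until blocks are released). */
int pool_alloc(pool *p, int need) {
    int start = p->alloc_ptr;
    if (!free_run(p, start, need)) {
        start = 0;                         /* point back to the top block */
        if (!free_run(p, start, need)) return -1;
    }
    for (int i = start; i < start + need; i++)
        p->free_flag[i] = 0;               /* mark the target blocks as used */
    p->alloc_ptr = start + need;           /* next block after the last target */
    return start;
}
```

Because packets are released oldest-first (see below in the claims), blocks near the top of the pool tend to be free again by the time the pointer wraps, which is what makes the simple "jump back to the top" step workable.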
Preferably, the memory pool has an allocation pointer and the memory blocks in the memory pool are arranged in order. After the step of the view networking terminal allocating the target memory blocks required by the data packet from the memory pool and caching the data packet into the target memory blocks, the method further includes: the view networking terminal points the allocation pointer to the next memory block after the last target memory block.
Preferably, the method further includes: the view networking terminal extracts data packets from the memory pool in order of caching time from earliest to latest, and releases the memory blocks used to cache each extracted data packet.
In another aspect, an embodiment of the invention also discloses a data caching device applied in view networking, where the view networking includes a view networking terminal and a view networking server. The view networking terminal includes:
an application module, configured to call a preset memory application function of the operating system to apply for a set quantity of memory blocks as a memory pool for data caching;
a receiving module, configured to receive, based on the view networking protocol, data packets issued by the view networking server over the downlink communication link configured for the view networking terminal;
an allocation module, configured to allocate the target memory blocks required by a data packet from the memory pool, and to cache the data packet into the target memory blocks.
Preferably, the allocation module includes: a judging unit, configured to judge whether, starting from the memory block currently pointed to by the allocation pointer and proceeding in order, the number of free memory blocks in the memory pool is greater than or equal to the number of memory blocks required by the data packet; and a memory allocation unit, configured to allocate the required target memory blocks in order from the memory block currently pointed to by the allocation pointer when the judging unit's result is yes, and, when the result is no, to point the allocation pointer to the top memory block of the memory pool and then allocate the required target memory blocks in order from the memory block currently pointed to.
Preferably, the memory allocation unit is specifically configured to: judge whether, starting from the memory block currently pointed to by the allocation pointer and proceeding in order, the number of remaining free memory blocks is greater than or equal to the number of memory blocks required by the data packet; when the result is yes, allocate, in order from the memory block currently pointed to, free memory blocks of the required number as the target memory blocks for the data packet; and when the result is no, return to this judging step and execute it again.
Preferably, the memory pool has an allocation pointer and the memory blocks in the memory pool are arranged in order. The view networking terminal further includes: an adjustment module, configured to point the allocation pointer to the next memory block after the last target memory block once the allocation module has allocated the target memory blocks required by the data packet from the memory pool and cached the data packet into them.
Preferably, the view networking terminal further includes: a release module, configured to extract data packets from the memory pool in order of caching time from earliest to latest, and to release the memory blocks used to cache each extracted data packet.
In the embodiments of the present invention, a view networking terminal calls a preset memory application function of the operating system to apply for a set quantity of memory blocks as a memory pool for data caching; the terminal receives, based on the view networking protocol, data packets issued by the view networking server over the downlink communication link configured for the terminal; and the terminal allocates the target memory blocks required by each data packet from the memory pool and caches the packet into those blocks. It follows that in the embodiments of the invention the operating system's memory application function is called only once, at the start, to apply for a relatively large amount of memory as the memory pool; subsequent data caching allocates memory blocks directly from the pool, so the operating system need not be called dynamically and frequently. This avoids the memory faults (core dumps) that frequent calls to the memory application function can produce, and improves the stability of the system.
Brief description of the drawings
Fig. 1 is a networking schematic diagram of the view networking of the present invention;
Fig. 2 is a hardware structural diagram of a node server of the present invention;
Fig. 3 is a hardware structural diagram of an access switch of the present invention;
Fig. 4 is a hardware structural diagram of an Ethernet protocol conversion gateway of the present invention;
Fig. 5 is a flow chart of the steps of a data caching method of Embodiment 1 of the present invention;
Fig. 6 is a structural block diagram of a data caching device of Embodiment 2 of the present invention.
Specific embodiment
To make the above objects, features and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
View networking is an important milestone in network development. It is a real-time network that can realize real-time transmission of high-definition video, pushing numerous Internet applications toward high-definition video and high-definition face-to-face communication.
View networking uses real-time high-definition video switching technology to merge dozens of required services onto a single network platform, such as high-definition video conferencing, video monitoring, intelligent monitoring analysis, emergency command, digital broadcast television, delayed television, online teaching, live broadcasting, VOD on demand, television mail, personal video recording (PVR), self-managed intranet channels, intelligent video broadcast control and information publication, covering video, voice, picture, text, communication and data services, and realizes high-definition quality video playback through a television or a computer.
To help those skilled in the art better understand the embodiments of the present invention, view networking is introduced below.
Some of the technologies applied in view networking are as follows:
Network technology (Network Technology)
The network technology innovation of view networking improves traditional Ethernet to cope with the potentially huge video traffic on the network. Unlike simple network packet switching (Packet Switching) or circuit switching (Circuit Switching), view networking technology uses packet switching to meet streaming media demands. View networking technology has the flexibility, simplicity and low price of packet switching while also possessing the quality and security guarantees of circuit switching, realizing the seamless connection of network-wide switched virtual circuits and data formats.
Switching technology (Switching Technology)
View networking adopts the two advantages of Ethernet, asynchrony and packet switching, and eliminates Ethernet's defects under the premise of full compatibility. It provides network-wide end-to-end seamless connection, reaches user terminals directly, and directly carries IP data packets; user data needs no format conversion anywhere in the whole network. View networking is a more advanced form of Ethernet and a real-time exchange platform; it can realize the network-wide large-scale high-definition real-time video transmission that the present Internet cannot achieve, pushing numerous network video applications toward high definition and unification.
Server technology (Server Technology)
Different from traditional servers, the streaming media transmission of the server technology of view networking and the unified video platform is built on a connection-oriented basis. Its data-handling capability is independent of traffic and communication time, and a single network layer can transmit both signaling and data. For voice and video services, the complexity of streaming media processing by view networking and the unified video platform is much lower than that of data processing, and the efficiency is improved by a hundred times or more compared with traditional servers.
Storage technology (Storage Technology)
To adapt to media content of vast capacity and super-high traffic, the ultra-high-speed storage technology of the unified video platform adopts the most advanced real-time operating system. The program information in the server instruction is mapped to a specific hard disk space, and the media content no longer passes through the server but is delivered directly to the user terminal in an instant, with a typical user waiting time of less than 0.2 seconds. The optimized sector distribution greatly reduces the mechanical movement of hard disk head seeking; resource consumption accounts for only 20% of that of an IP Internet system of the same grade, yet it generates concurrent traffic more than 3 times that of a traditional disk array, with overall efficiency improved by 10 times or more.
Network security technology (Network Security Technology)
The structural design of view networking thoroughly eradicates, from the structure itself, the network security problems that trouble the Internet, through means such as an independent per-service license system and complete isolation of equipment and user data. It generally needs no antivirus programs or firewalls, blocks attacks by hackers and viruses, and provides users with a structurally worry-free secure network.
Service innovation technology (Service Innovation Technology)
The unified video platform fuses services and transmission together: whether for a single user, a private-network user or an aggregate of one network, each is only a single automatic connection. The user terminal, set-top box or PC connects directly to the unified video platform and obtains rich and colorful multimedia video services of various forms. The unified video platform uses a "menu-style" table schema to replace traditional complex application programming; complex applications can be realized with very little code, achieving "endless" new service innovation.
The networking of view networking is as follows:
View networking is a centrally controlled network structure. The network can be of tree, star, ring or a similar type, but on that basis a centralized control node is needed in the network to control the whole network.
As shown in Fig. 1, view networking is divided into an access network part and a metropolitan area network part.
The devices of the access network part can mainly be divided into 3 classes: node servers, access switches, and terminals (including various set-top boxes, encoding boards, memories, etc.). A node server is connected to access switches, and an access switch can be connected to multiple terminals and can connect to Ethernet.
The node server is the node that performs the centralized control function in the access network and can control the access switches and terminals. A node server can be directly connected to an access switch, or directly connected to a terminal.
Similarly, the devices of the metropolitan area network part can also be divided into 3 classes: metropolitan area servers, node switches, and node servers. A metropolitan area server is connected to node switches, and a node switch can be connected to multiple node servers.
Here, the node server is the node server of the access network part; that is, the node server belongs both to the access network part and to the metropolitan area network part.
The metropolitan area server is the node that performs the centralized control function in the metropolitan area network and can control the node switches and node servers. A metropolitan area server can be directly connected to a node switch, or directly connected to a node server.
It can be seen that the whole view networking network is a hierarchical, centrally controlled network structure, and the networks controlled under the node servers and metropolitan area servers can have various structures such as tree, star and ring.
Visually speaking, the access network part can form a unified video platform (the part within the dashed circle), and multiple unified video platforms can form the view networking; the unified video platforms can be interconnected through the metropolitan area and wide area view networking.
1. Classification of view networking devices
1.1 The devices in the view networking of the embodiment of the present invention can mainly be divided into 3 classes: servers, switches (including Ethernet protocol conversion gateways), and terminals (including various set-top boxes, encoding boards, memories, etc.). View networking as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 The devices of the access network part can mainly be divided into 3 classes: node servers, access switches (including Ethernet protocol conversion gateways), and terminals (including various set-top boxes, encoding boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
Node server:
As shown in Fig. 2, the node server mainly includes a network interface module 201, a switching engine module 202, a CPU module 203 and a disk array module 204.
The packets coming in from the network interface module 201, the CPU module 203 and the disk array module 204 all enter the switching engine module 202. The switching engine module 202 performs a lookup operation on the address table 205 for an incoming packet to obtain the packet's routing information, and stores the packet into the queue of the corresponding packet buffer 206 according to that routing information; if the queue of the packet buffer 206 is nearly full, the packet is discarded. The switching engine module 202 polls all packet buffer queues and forwards a packet if the following conditions are met: 1) the port send cache is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control over the hard disks, including operations such as initialization and reading/writing of the hard disks. The CPU module 203 is mainly responsible for protocol processing with the access switches and terminals (not shown in the figure), for configuring the address table 205 (including the downlink protocol packet address table, the uplink protocol packet address table and the data packet address table), and for configuring the disk array module 204.
Access switch:
As shown in Fig. 3, the access switch mainly includes network interface modules (a downlink network interface module 301 and an uplink network interface module 302), a switching engine module 303 and a CPU module 304.
Packets (uplink data) coming in from the downlink network interface module 301 enter the packet detection module 305. The packet detection module 305 detects whether the destination address (DA), source address (SA), data packet type and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id) and the packet enters the switching engine module 303, otherwise the packet is discarded. Packets (downlink data) coming in from the uplink network interface module 302 enter the switching engine module 303, and packets coming in from the CPU module 304 also enter the switching engine module 303. The switching engine module 303 performs a lookup operation on the address table 306 for an incoming packet to obtain the packet's routing information. If a packet entering the switching engine module 303 is going from a downlink network interface toward an uplink network interface, the packet is stored in the queue of the corresponding packet buffer 307 in combination with its stream identifier (stream-id); if the queue of that packet buffer 307 is nearly full, the packet is discarded. If a packet entering the switching engine module 303 is not going from a downlink network interface toward an uplink network interface, the packet is stored in the data packet queue of the corresponding packet buffer 307 according to the packet's routing information; if the queue of that packet buffer 307 is nearly full, the packet is discarded.
The switching engine module 303 polls all packet buffer queues, which in the embodiment of the present invention falls into two cases:
If the queue is going from a downlink network interface toward an uplink network interface, a packet is forwarded when the following conditions are met: 1) the port send cache is not full; 2) the queue packet counter is greater than zero; 3) a token generated by the rate control module is obtained.
If the queue is not going from a downlink network interface toward an uplink network interface, a packet is forwarded when the following conditions are met: 1) the port send cache is not full; 2) the queue packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304 and, at a programmable interval, generates tokens for all the packet buffer queues going from downlink network interfaces toward uplink network interfaces, to control the bitrate of uplink forwarding.
The CPU module 304 is mainly responsible for protocol processing with the node server, for configuring the address table 306, and for configuring the rate control module 308.
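The three forwarding conditions for downlink-to-uplink queues amount to a simple token gate, which can be illustrated as follows. This is a simplified model under assumed names (`uplink_queue`, `rate_tick`, `forward_one`), not the switch's actual logic:

```c
/* State of one downlink->uplink packet buffer queue (assumed fields). */
typedef struct {
    int send_cache_full;  /* 1 = the port send cache is full   (condition 1) */
    int pkt_count;        /* queue packet counter               (condition 2) */
    int tokens;           /* tokens from the rate control module (condition 3) */
} uplink_queue;

/* Called by the rate control module at a programmable interval. */
void rate_tick(uplink_queue *q) { q->tokens++; }

/* Forward one packet only when all three conditions hold; consuming a
 * token on each forward is what caps the uplink bitrate. */
int forward_one(uplink_queue *q) {
    if (q->send_cache_full || q->pkt_count <= 0 || q->tokens <= 0)
        return 0;
    q->pkt_count--;
    q->tokens--;
    return 1;
}
```

Since tokens arrive at a fixed programmable rate, a queue can never forward faster than the rate-control interval allows, regardless of how many packets are waiting.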
Ethernet protocol conversion gateway:
As shown in Fig. 4, the Ethernet protocol conversion gateway mainly includes network interface modules (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409 and a MAC removing module 410.
Data packets coming in from the downlink network interface module 401 enter the packet detection module 405. The packet detection module 405 detects whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, view networking destination address DA, view networking source address SA, view networking data packet type and packet length of the data packet meet the requirements; if so, a corresponding stream identifier (stream-id) is allocated, the MAC DA, MAC SA and length or frame type (2 bytes) are stripped by the MAC removing module 410, and the packet enters the corresponding receive cache; otherwise the packet is discarded.
The downlink network interface module 401 detects the send cache of the port; if there is a packet, it learns the Ethernet MAC DA of the corresponding terminal according to the view networking destination address DA of the packet, adds the Ethernet MAC DA of the terminal, the MAC SA of the Ethernet protocol conversion gateway and the Ethernet length or frame type, and sends the packet.
The functions of the other modules in the Ethernet protocol conversion gateway are similar to those of the access switch.
Terminal:
A terminal mainly includes a network interface module, a service processing module and a CPU module. For example, a set-top box mainly includes a network interface module, a video/audio encoding and decoding engine module and a CPU module; an encoding board mainly includes a network interface module, a video encoding engine module and a CPU module; and a memory mainly includes a network interface module, a CPU module and a disk array module.
1.3 The devices of the metropolitan area network part can mainly be divided into 3 classes: node servers, node switches and metropolitan area servers. A node switch mainly includes a network interface module, a switching engine module and a CPU module; a metropolitan area server mainly includes a network interface module, a switching engine module and a CPU module.
2. View networking data packet definition
2.1 Access network data packet definition
The data packet of the access network mainly includes the following parts: destination address (DA), source address (SA), reserved bytes, payload (PDU) and CRC.
As shown in the table below, the data packet of the access network mainly includes the following parts:
DA | SA | Reserved | Payload | CRC |
Where:
the destination address (DA) consists of 8 bytes; the first byte indicates the type of the data packet (such as various protocol packets, multicast data packets, unicast data packets, etc.), with up to 256 possibilities; the second to the sixth bytes are the metropolitan area network address; and the seventh and eighth bytes are the access network address;
the source address (SA) also consists of 8 bytes and is defined in the same way as the destination address (DA);
the reserved bytes consist of 2 bytes;
the payload part has a different length according to the type of the data packet: it is 64 bytes for various protocol packets and 32 + 1024 = 1056 bytes for unicast or multicast data packets, and is of course not restricted to these 2 kinds;
the CRC consists of 4 bytes, and its calculation method follows the standard Ethernet CRC algorithm.
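The fixed-size fields above can be written down as a packed C struct to make the byte offsets concrete. The struct and field names are illustrative only; the payload is variable-length and the 4-byte CRC follows it, so neither is part of the struct:

```c
#include <stddef.h>
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t da[8];        /* byte 0: packet type; bytes 1-5: metro address;
                             bytes 6-7: access network address */
    uint8_t sa[8];        /* 8 bytes, same layout as DA */
    uint8_t reserved[2];  /* 2 reserved bytes */
    /* variable-length payload follows, then a 4-byte standard
       Ethernet CRC over the packet */
} access_net_header;
#pragma pack(pop)
```

With this layout the fixed header is 18 bytes, so a 64-byte protocol payload gives a 86-byte packet including the CRC.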
2.2 Metropolitan Area Network (MAN) packet definitions
The topology of Metropolitan Area Network (MAN) is pattern, may there is 2 kinds, connection even of more than two kinds, i.e. node switching between two equipment
It can all can exceed that 2 kinds between machine and node server, node switch and node switch, node switch and node server
Connection.But the metropolitan area net address of metropolitan area network equipment is uniquely, to close to accurately describe the connection between metropolitan area network equipment
System, introduces parameter in embodiments of the present invention: label, uniquely to describe a metropolitan area network equipment.
(Multi-Protocol Label Switch, multiprotocol label are handed over by the definition of label and MPLS in this specification
Change) label definition it is similar, it is assumed that between equipment A and equipment B there are two connection, then data packet from equipment A to equipment B just
There are 2 labels, data packet also there are 2 labels from equipment B to equipment A.Label is divided into label, outgoing label, it is assumed that data packet enters
The label (entering label) of equipment A is 0x0000, and the label (outgoing label) when this data packet leaves equipment A may reform into
0x0001.The networking process of Metropolitan Area Network (MAN) is to enter network process under centralized control, also means that address distribution, the label of Metropolitan Area Network (MAN)
Distribution be all to be dominated by metropolitan area server, node switch, node server be all passively execute, this point with
The label distribution of MPLS is different, and the distribution of the label of MPLS is the result that interchanger, server are negotiated mutually.
As shown in the table below, a MAN data packet mainly includes the following parts:
DA | SA | Reserved | Label | Payload | CRC |
namely the destination address (DA), source address (SA), reserved bytes (Reserved), label, payload (PDU), and CRC. The format of the label can follow this definition: the label is 32 bits, of which the high 16 bits are reserved and only the low 16 bits are used; its position lies between the reserved bytes and the payload of the data packet.
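The 32-bit label layout described above (high 16 bits reserved, low 16 bits in use) can be made concrete with two small helpers; `label_get` and `label_make` are hypothetical names for illustration only:

```c
#include <stdint.h>

/* Extract the usable low 16 bits of the 32-bit label field;
 * the high 16 bits are reserved per the definition above. */
static inline uint16_t label_get(uint32_t field) {
    return (uint16_t)(field & 0xFFFFu);
}

/* Build a 32-bit label field with the reserved high 16 bits cleared. */
static inline uint32_t label_make(uint16_t label) {
    return (uint32_t)label;
}
```

With these, the earlier incoming/outgoing example reads naturally: a packet entering device A might carry `label_make(0x0000)` and leave carrying `label_make(0x0001)`.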
Embodiment one
The data caching method of the embodiment of the present invention can be applied in the view networking. The view networking may include view networked terminals and a view networked server (specifically, the node server described above); one view networked server can be accessed by multiple view networked terminals. The data caching method of the embodiment of the present invention can be applied specifically in a view networked terminal. View networked terminals are the operational entry and landing devices of the view networking, the actual participants in or servers of view networking services; they can be various set-top boxes, streaming media gateways, storage gateways, media synthesizers, encoding boards, etc. A view networked terminal needs to register with the view networked server before it can carry out regular services.
Referring to Fig. 5, there is shown a flow chart of the steps of a data caching method according to embodiment one of the present invention.
The data caching method of the embodiment of the present invention may comprise the following steps:
Step 501: the view networked terminal applies for a set quantity of memory blocks by calling a preset memory application function of the operating system, as a memory pool for data caching.
The data caching method of the embodiment of the present invention can be applied in communication processes based on the view networking, such as video conferencing, video teaching, video telephony, publishing a live stream, watching a live stream, etc.
Taking a video conference as an example: during the communication process of a video conference, the sender's view networked terminal (e.g., the speaking party) can encapsulate captured video data into video data packets based on the view networking protocol, and send the video data packets to the view networked server through the view networking. A video data packet may include information about the sender's view networked terminal, information about the receiver's view networked terminal, and so on. The view networked server can, based on the view networking protocol, deliver the video data packet through the view networking to the receiver's view networked terminal (e.g., an attending party of the conference). After receiving the video data packet, the receiver's view networked terminal first caches it and performs related processing, such as handling out-of-order arrival and packet loss; after processing, it decodes the video data packet and displays the corresponding video.
The embodiment of the present invention mainly describes the process by which the receiver's view networked terminal caches data packets after receiving them from the view networked server. Therefore, the view networked terminal involved in the embodiment of the present invention is the receiver's view networked terminal.
Without pre-processing, every time the view networked terminal received a data packet issued by the view networked server, it would call the preset memory application function of the operating system to apply for the amount of memory the current data packet requires, in order to cache that packet. However, this approach leads to frequent calls to the operating system's memory application function, which makes it easy for the memory application function itself to produce a memory fault (core dump), reducing the stability of the system.
Therefore, in view of the above problem, the embodiment of the present invention proposes a scheme of pre-creating a memory pool. At the beginning, for example after the view networked terminal starts up, the view networked terminal can call the preset memory application function of the operating system to apply for a set quantity of memory blocks, as a memory pool for data caching. In other words, the view networked terminal of the embodiment of the present invention applies for the set quantity of memory blocks before receiving any data packets, and uses these memory blocks to create a memory pool, which is then used to cache subsequently received data packets. As for the detailed process of calling the memory application function to apply for memory blocks, those skilled in the art can handle it based on practical experience; the embodiment of the present invention does not discuss this in detail.
As for the memory application function, those skilled in the art can select any suitable memory application function based on practical experience; the embodiment of the present invention imposes no restriction on this. For example, the memory application function can be the kmalloc function, the vmalloc function, the __get_free_page function, the malloc family of functions, the alloca function, etc. As for the set quantity, those skilled in the art can select any suitable value based on practical experience; the embodiment of the present invention likewise imposes no restriction on this. For example, the set quantity can be 100, 150, 200, etc.
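To make the idea of step 501 concrete, the memory pool might be created once at start-up as a single allocation carved into equal blocks. This is only a sketch under assumed parameters (100 blocks of 128 KB, matching the examples in this embodiment); `mem_pool`, `pool_init`, and the field names are illustrative, not from the original:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define NUM_BLOCKS 100              /* assumed "set quantity" */
#define BLOCK_SIZE (128u * 1024u)   /* assumed block size: 128 KB */

struct mem_pool {
    uint8_t *base;                  /* one contiguous allocation for all blocks */
    bool     in_use[NUM_BLOCKS];    /* occupancy flag per block */
    int      alloc_ptr;             /* index of the block the allocation pointer points to */
};

/* Call the OS memory application function once at start-up,
 * instead of once per received packet. Returns 0 on success. */
int pool_init(struct mem_pool *p) {
    p->base = malloc((size_t)NUM_BLOCKS * BLOCK_SIZE);
    if (!p->base)
        return -1;
    for (int i = 0; i < NUM_BLOCKS; i++)
        p->in_use[i] = false;
    p->alloc_ptr = 0;               /* start at the top block of the pool */
    return 0;
}
```

Inside a kernel-space terminal the single `malloc` call would be replaced by kmalloc/vmalloc, but the one-time-allocation structure is the same.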
Step 502: the view networked terminal receives a data packet that the view networked server issues, based on the view networking protocol, over the downstream communication link configured for the view networked terminal.
In a preferred embodiment, the view networked server can, based on the view networking protocol, issue a data packet to the view networked terminal (i.e., the receiver's view networked terminal) over the downstream communication link configured for it. The data packet can specifically be one that the sender's view networked terminal sent to the view networked server. The data packet can be a video data packet, an audio data packet, etc.
In practical applications, the view networking is a network with centralized control functions, including a master control server and subordinate network devices, the latter including terminals. One of the core ideas of the view networking is that the master control server notifies the switching devices to configure their tables for the downstream communication link of the current service, and the transmission of data packets is then based on the configured tables.
That is, the communication method in the view networking includes:
The master control server configures the downstream communication link of the current service.
The data packets of the current service sent by the source terminal (e.g., the sender's view networked terminal) are transmitted to the target terminal (e.g., the receiver's view networked terminal) according to the downstream communication link.
In the embodiment of the present invention, configuring the downstream communication link of the current service includes: notifying the switching devices involved in the downstream communication link of the current service to configure their tables.
Furthermore, transmitting according to the downstream communication link includes: after receiving a data packet, the switching devices query the configured tables and transmit the data packet through the corresponding ports.
In a concrete implementation, the services include unicast communication services and multicast communication services. That is, whether for multicast communication or unicast communication, the above core idea of configuring tables and looking up tables can realize communication in the view networking.
As mentioned above, the view networking includes an access network part. In the access network, the master control server is the node server, and the subordinate network devices include access switches and terminals.
For unicast communication services in the access network, the step in which the master control server configures the downstream communication link of the current service may include the following sub-steps:
Sub-step S11: the master control server obtains the downstream communication link information of the current service according to the service request protocol packet initiated by the source terminal; the downstream communication link information includes the downstream communication port information of the master control server and the access switches participating in the current service.
Sub-step S12: according to the downstream communication port information of the master control server, the master control server sets, in its internal data packet address table, the downstream ports to which the data packets of the current service are directed; and according to the downstream communication port information of the access switches, it sends port configuration commands to the corresponding access switches.
Sub-step S13: according to the port configuration commands, the access switches set, in their internal data packet address tables, the downstream ports to which the data packets of the current service are directed.
For multicast communication services (e.g., video conferencing) in the access network, the step in which the master control server obtains the downstream communication link information of the current service may include the following sub-steps:
Sub-step S21: the master control server obtains the service request protocol packet initiated by the target terminal to apply for the multicast communication service; the service request protocol packet includes the service type information, the service content information, and the access network address of the target terminal. The service content information includes a service number.
Sub-step S22: the master control server extracts the access network address of the source terminal according to the service number in a preset content-address mapping table.
Sub-step S23: the master control server obtains the multicast address corresponding to the source terminal and assigns it to the target terminal; and, according to the service type information and the access network addresses of the source terminal and the target terminal, obtains the communication link information of the current multicast service.
Step 503: the view networked terminal allocates, in the memory pool, the target memory blocks the data packet requires, and caches the data packet into the target memory blocks.
After receiving the data packet issued by the view networked server, the view networked terminal allocates the memory blocks the data packet requires from the pre-created memory pool.
In a concrete implementation, the data packets received by the view networked terminal are based on the view networking; these packets have the characteristic of being handled in the order received (first received, first processed). Therefore, the embodiment of the present invention can adopt a scheme in which the memory blocks in the memory pool are allocated in order and released in order.
Accordingly, the memory blocks in the memory pool of the embodiment of the present invention can be arranged in order. For example, if the memory pool includes 100 memory blocks, these 100 memory blocks can be numbered and arranged from top to bottom as 1 to 100, where memory block 1 is the top memory block of the pool and memory block 100 is the bottom memory block.
In a preferred embodiment, step 503 may include the following sub-steps:
Sub-step A1: the view networked terminal judges whether, in the memory pool, the number of free memory blocks in downward order starting from the memory block the allocation pointer currently points to is greater than or equal to the number of memory blocks the data packet requires. If so, sub-step A2 is executed; if not, sub-step A3 is executed.
The data packet received by the view networked terminal may include information such as its size. The view networked terminal can determine the number of memory blocks the data packet requires from the size of the packet. For example, if the size of the received data packet is 1 MB and each memory block can store 128 KB of data, the data packet requires 8 memory blocks.
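The block count in this example is simply a ceiling division of the packet size by the block size; a one-line helper (illustrative name) makes this explicit:

```c
#include <stddef.h>

/* Number of memory blocks a packet of `packet_size` bytes needs,
 * given blocks of `block_size` bytes each (ceiling division). */
size_t blocks_needed(size_t packet_size, size_t block_size) {
    return (packet_size + block_size - 1) / block_size;
}
```

For the figures above, `blocks_needed(1 MB, 128 KB)` yields 8; a packet even one byte larger would need 9 blocks.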
The view networked terminal can set an allocation pointer for the memory pool in advance; the allocation pointer indicates the starting position for allocating memory blocks. The view networked terminal can therefore judge whether the number of free memory blocks in downward order, starting from the memory block the allocation pointer currently points to, is greater than or equal to the number of memory blocks the data packet requires.
For example, suppose the memory pool includes the 100 memory blocks numbered 1 to 100. If the memory block the allocation pointer currently points to is memory block 93, then the number of free memory blocks in downward order starting from that block is 8, and the data packet received above requires 8 memory blocks; it can thus be judged that the number of free memory blocks in downward order starting from the block the allocation pointer currently points to is equal to the number of memory blocks the packet requires. If instead the memory block the allocation pointer currently points to is memory block 98, then the number of free memory blocks in downward order starting from that block is 3, while the data packet received above requires 8 memory blocks; it can thus be judged that the number of free memory blocks in downward order starting from the block the allocation pointer currently points to is less than the number of memory blocks the packet requires.
Sub-step A2: when the judgment result is yes, the view networked terminal allocates, starting from the memory block the allocation pointer currently points to and in downward order, the target memory blocks the data packet requires.
If the number of free memory blocks in downward order starting from the block the allocation pointer currently points to is greater than or equal to the number of memory blocks the packet requires, there is no need to adjust the position of the allocation pointer: the target memory blocks the packet requires can be allocated in downward order starting from the block the allocation pointer currently points to.
In a preferred embodiment, the step of allocating, starting from the memory block the allocation pointer currently points to and in downward order, the target memory blocks the data packet requires may include:
a1: the view networked terminal judges whether the number of remaining free memory blocks in downward order, starting from the memory block the allocation pointer currently points to, is greater than or equal to the number of memory blocks the packet requires. If so, a2 is executed; if not, execution returns to a1.
Although the number of memory blocks in downward order starting from the block the allocation pointer currently points to is greater than or equal to the number the packet requires, each of these blocks may actually be free or may still be occupied. Therefore, the embodiment of the present invention does not allocate from them directly, but performs a further judgment: whether the number of genuinely free memory blocks among them is greater than or equal to the number of memory blocks the packet requires.
a2: when the judgment result is yes, the view networked terminal allocates, starting from the memory block the allocation pointer currently points to and in downward order, free memory blocks equal in number to the memory blocks the packet requires, as the target memory blocks the data packet requires.
If the number of remaining free memory blocks in downward order starting from the block the allocation pointer currently points to is greater than or equal to the number of memory blocks the packet requires, the view networked terminal can allocate, starting from that block and in downward order, free memory blocks equal in number to the blocks the packet requires, as the target memory blocks the data packet requires.
For example, if the memory block the allocation pointer currently points to is memory block 93, the number of memory blocks in downward order starting from that block is 8, the data packet received above requires 8 memory blocks, and all of the remaining 8 blocks, memory block 93 to memory block 100, are free, it can be judged that the number of remaining free memory blocks in downward order starting from the block the allocation pointer currently points to is equal to the number of memory blocks the packet requires. Therefore, starting from memory block 93, the 8 free blocks from memory block 93 to memory block 100 can be allocated in downward order as the target memory blocks the packet requires.
If the memory block the allocation pointer currently points to is memory block 93, the number of memory blocks in downward order starting from that block is 8, and the data packet received above requires 8 memory blocks, but among the remaining 8 blocks only the 6 blocks from memory block 93 to memory block 98 are free while memory block 99 and memory block 100 are occupied, it can be judged that the number of remaining free memory blocks in downward order starting from the block the allocation pointer currently points to is less than the number of memory blocks the packet requires. Considering that data packets are released in the order they were cached (first cached, first released), the occupied memory blocks may be released after some time. Therefore, in this situation the embodiment of the present invention can wait for a period of time and return to a1, until the number of remaining free memory blocks in downward order starting from the block the allocation pointer currently points to is greater than or equal to the number of memory blocks the packet requires, and then allocate the free memory blocks according to the process of a2.
Sub-step A3: when the judgment result is no, the view networked terminal points the allocation pointer at the top memory block of the memory pool, and allocates, starting from the memory block the allocation pointer currently points to and in downward order, the target memory blocks the data packet requires.
If the number of free memory blocks in downward order starting from the block the allocation pointer currently points to is less than the number of memory blocks the packet requires, the position of the allocation pointer can be adjusted: the allocation pointer is pointed at the top memory block of the memory pool, and the target memory blocks the packet requires are then allocated in downward order starting from the memory block the allocation pointer currently points to (namely the top memory block).
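Sub-steps A1 through A3 can be sketched as a single standalone allocation routine. This is a simplified illustration under assumptions not stated in the original: blocks are 0-indexed, the requested count never exceeds the pool size, and instead of waiting for occupied blocks to be released (as in a1) the routine reports failure so the caller can retry later; `pool_alloc` and its parameter names are hypothetical:

```c
#include <stdbool.h>

#define POOL_BLOCKS 100   /* assumed pool size */

/* Allocate `need` consecutive blocks starting at the allocation pointer.
 * Returns the index of the first allocated block, or -1 if any block in
 * the run is still occupied (the described scheme would wait and retry). */
int pool_alloc(bool in_use[POOL_BLOCKS], int *ptr, int need) {
    /* A3: not enough blocks below the pointer -> wrap to the top block. */
    if (*ptr + need > POOL_BLOCKS)
        *ptr = 0;
    int start = *ptr;
    /* a1: verify every block in the run is genuinely free. */
    for (int i = start; i < start + need; i++)
        if (in_use[i])
            return -1;
    /* A2/a2: allocate sequentially and advance the pointer past the run. */
    for (int i = start; i < start + need; i++)
        in_use[i] = true;
    *ptr = start + need;   /* next block after the last target block */
    return start;
}
```

Note how the wrap in the first `if` mirrors sub-step A3, while the occupancy scan mirrors the further judgment of a1.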
As for the detailed process in sub-step A3 of allocating, starting from the memory block the allocation pointer currently points to and in downward order, the target memory blocks the data packet requires, it is substantially similar to sub-step A2; refer in particular to the associated description of sub-step A2. The embodiment of the present invention does not discuss this in detail again here.
In the embodiment of the present invention, after the step in which the view networked terminal allocates in the memory pool the target memory blocks the data packet requires and caches the data packet into the target memory blocks, the method may further include: the view networked terminal points the allocation pointer at the next memory block after the last target memory block. That is, after allocating memory blocks for a received data packet, the view networked terminal also updates where the allocation pointer points, so that subsequent allocation can proceed correctly. For example, when a data packet is received this time, if the memory block the allocation pointer points to before allocation is memory block 50 and the blocks allocated for the packet are memory block 50 to memory block 57, the view networked terminal points the allocation pointer at memory block 58 after the allocation.
After caching the received data packets, the view networked terminal can also extract the data packets from the memory pool in order of caching time from earliest to latest, and release the memory blocks that cached the extracted packets. The view networked terminal can then, for example, decode and display the extracted data packets, thereby realizing data transmission between view networked terminals.
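Because packets are extracted oldest-first, blocks are freed in the same order they were allocated. A minimal release sketch (with the same illustrative occupancy-flag representation as above; `pool_release` is a hypothetical name) simply clears the flags of the run of blocks that held the extracted packet:

```c
#include <stdbool.h>

/* Mark the `count` consecutive blocks starting at `first_block` as free
 * again, making them available for reallocation by the pool. */
void pool_release(bool in_use[], int first_block, int count) {
    for (int i = first_block; i < first_block + count; i++)
        in_use[i] = false;
}
```

In the waiting scenario of a1 above, it is exactly this release of an older packet's blocks that eventually lets a blocked allocation proceed.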
In the embodiment of the present invention, the memory application function of the operating system can be called only once at the beginning to apply for a larger piece of memory as the memory pool; in subsequent data caching, memory blocks are allocated directly from the memory pool, so there is no need to dynamically and frequently call the operating system's memory application function. This avoids the memory faults the memory application function can produce when called frequently, and improves the stability of the system.
It should be noted that for simple description, therefore, it is stated as a series of action groups for embodiment of the method
It closes, but those skilled in the art should understand that, embodiment of that present invention are not limited by the describe sequence of actions, because according to
According to the embodiment of the present invention, some steps may be performed in other sequences or simultaneously.Secondly, those skilled in the art also should
Know, the embodiments described in the specification are all preferred embodiments, and the related movement not necessarily present invention is implemented
Necessary to example.
Embodiment two
Referring to Fig. 6, there is shown a structural block diagram of a data caching device according to embodiment two of the present invention. The data caching device of the embodiment of the present invention can be applied in the view networking, which includes view networked terminals and a view networked server.
The data caching device of the embodiment of the present invention can be applied specifically in a view networked terminal, and may include the following modules located in the view networked terminal:
an application module 601, configured to apply for a set quantity of memory blocks by calling a preset memory application function of the operating system, as a memory pool for data caching;
a receiving module 602, configured to receive a data packet that the view networked server issues, based on the view networking protocol, over the downstream communication link configured for the view networked terminal;
an allocation module 603, configured to allocate, in the memory pool, the target memory blocks the data packet requires, and cache the data packet into the target memory blocks.
In a preferred embodiment, the allocation module includes: a judging unit, configured to judge whether, in the memory pool, the number of free memory blocks in downward order starting from the memory block the allocation pointer currently points to is greater than or equal to the number of memory blocks the data packet requires; and a memory allocation unit, configured to, when the judging unit's result is yes, allocate the target memory blocks the data packet requires in downward order starting from the memory block the allocation pointer currently points to, and, when the judging unit's result is no, point the allocation pointer at the top memory block of the memory pool and allocate the target memory blocks the data packet requires in downward order starting from the memory block the allocation pointer currently points to.
In a preferred embodiment, the memory allocation unit is specifically configured to: judge whether the number of remaining free memory blocks in downward order, starting from the memory block the allocation pointer currently points to, is greater than or equal to the number of memory blocks the data packet requires; when the judgment result is yes, allocate, starting from the memory block the allocation pointer currently points to and in downward order, free memory blocks equal in number to the memory blocks the packet requires, as the target memory blocks the data packet requires; and when the judgment result is no, return to the step of judging whether the number of remaining free memory blocks in downward order, starting from the memory block the allocation pointer currently points to, is greater than or equal to the number of memory blocks the data packet requires.
In a preferred embodiment, the memory pool has an allocation pointer and the memory blocks in the memory pool are arranged in order; the view networked terminal further includes: an adjustment module, configured to, after the allocation module allocates in the memory pool the target memory blocks the data packet requires and caches the data packet into the target memory blocks, point the allocation pointer at the next memory block after the last target memory block.
In a preferred embodiment, the view networked terminal further includes: a release module, configured to extract the data packets from the memory pool in order of caching time from earliest to latest, and release the memory blocks that cached the extracted data packets.
In the embodiment of the present invention, the memory application function of the operating system can be called only once at the beginning to apply for a larger piece of memory as the memory pool; in subsequent data caching, memory blocks are allocated directly from the memory pool, so there is no need to dynamically and frequently call the operating system's memory application function. This avoids the memory faults the memory application function can produce when called frequently, and improves the stability of the system.
As for the device embodiment, since it is basically similar to the method embodiment, the description is relatively simple; for relevant points, refer to the partial description of the method embodiment.
The embodiments in this specification are described in a progressive manner; each embodiment highlights its differences from the other embodiments, and the same or similar parts between the embodiments can be referred to mutually.
Those skilled in the art should understand that the embodiments of the present invention can be provided as a method, a device, or a computer program product. Therefore, the embodiments of the present invention can take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical memory, etc.) containing computer-usable program code.
The embodiments of the present invention are described with reference to the flowcharts and/or block diagrams of the methods, terminal devices (systems), and computer program products according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal device produce a device for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or another programmable data processing terminal device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce a manufactured article including an instruction device, which realizes the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or another programmable data processing terminal device, so that a series of operation steps are executed on the computer or other programmable terminal device to produce computer-implemented processing; the instructions executed on the computer or other programmable terminal device thereby provide steps for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
Although the preferred embodiments of the present invention have been described, those skilled in the art, once they learn of the basic inventive concept, can make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications that fall within the scope of the embodiments of the present invention.
Finally, it should also be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device including a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article, or terminal device. In the absence of further limitation, an element defined by the sentence "including a ..." does not exclude the existence of other identical elements in the process, method, article, or terminal device that includes the element.
The data caching method and data caching device provided by the present invention have been introduced above in detail. Specific examples are used herein to illustrate the principles and implementation of the present invention, and the explanation of the above embodiments is only intended to help in understanding the method of the present invention and its core concept. Meanwhile, for those skilled in the art, there will be changes in the specific implementation and scope of application according to the concept of the present invention. In conclusion, the content of this specification should not be understood as a limitation on the present invention.
Claims (10)
1. A data caching method, characterized in that the method is applied in a view network, the view network comprising a view networked terminal and a view networked server, the method comprising:
the view networked terminal requesting a set number of memory blocks by calling a preset memory application function of the operating system, as a memory pool for data caching;
the view networked terminal receiving a data packet issued by the view networked server, based on the view networking protocol, over a downlink communication link configured for the view networked terminal;
the view networked terminal allocating, in the memory pool, the target memory blocks required by the data packet, and caching the data packet into the target memory blocks.
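The pool setup in claim 1 — one up-front request to the operating system, carved into a fixed number of equal blocks — can be sketched as follows. All names here (`MemoryPool`, `BLOCK_SIZE`, `BLOCK_COUNT`) are illustrative assumptions, not identifiers from the patent; a real terminal would make the single allocation through an OS-level call such as `malloc`.

```python
BLOCK_SIZE = 1024   # bytes per memory block (assumed value)
BLOCK_COUNT = 8     # preset number of blocks requested at startup (assumed value)

class MemoryPool:
    """One up-front allocation, carved into fixed-size blocks (sketch)."""
    def __init__(self, block_count=BLOCK_COUNT, block_size=BLOCK_SIZE):
        self._backing = bytearray(block_count * block_size)  # single request
        self.block_size = block_size
        # Zero-copy views over the backing buffer, one per block, in order.
        self.blocks = [memoryview(self._backing)[i * block_size:(i + 1) * block_size]
                       for i in range(block_count)]
        self.free = [True] * block_count   # free/used flag per block
        self.alloc_ptr = 0                 # allocation pointer (see claim 2)

pool = MemoryPool()
print(len(pool.blocks))  # 8
```

The point of the design is that allocation cost is paid once at startup; caching a packet later only flips free/used flags instead of calling the allocator per packet.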
2. The method according to claim 1, characterized in that the memory pool has an allocation pointer and the memory blocks in the memory pool are arranged in sequence, and the step of the view networked terminal allocating, in the memory pool, the target memory blocks required by the data packet comprises:
the view networked terminal judging whether, in the memory pool, the number of free memory blocks in downward sequence starting from the memory block currently pointed to by the allocation pointer is greater than or equal to the number of memory blocks required by the data packet;
if the judgment result is yes, the view networked terminal allocating the target memory blocks required by the data packet in downward sequence starting from the memory block currently pointed to by the allocation pointer;
if the judgment result is no, the view networked terminal pointing the allocation pointer to the topmost memory block of the memory pool, and then allocating the target memory blocks required by the data packet in downward sequence starting from the memory block currently pointed to by the allocation pointer.
3. The method according to claim 2, characterized in that the step of allocating the target memory blocks required by the data packet in downward sequence starting from the memory block currently pointed to by the allocation pointer comprises:
the view networked terminal judging whether the number of remaining free memory blocks in downward sequence starting from the memory block currently pointed to by the allocation pointer is greater than or equal to the number of memory blocks required by the data packet;
if the judgment result is yes, the view networked terminal allocating, in downward sequence starting from the memory block currently pointed to by the allocation pointer, free memory blocks equal in number to the memory blocks required by the data packet, as the target memory blocks required by the data packet;
if the judgment result is no, the view networked terminal returning to execute the step of judging whether the number of remaining free memory blocks in downward sequence starting from the memory block currently pointed to by the allocation pointer is greater than or equal to the number of memory blocks required by the data packet.
4. The method according to claim 1 or 2, characterized in that the memory pool has an allocation pointer, the memory blocks in the memory pool are arranged in sequence, and after the step of the view networked terminal allocating, in the memory pool, the target memory blocks required by the data packet and caching the data packet into the target memory blocks, the method further comprises:
the view networked terminal pointing the allocation pointer to the next memory block after the last target memory block.
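Claims 2 through 4 describe a downward scan from the allocation pointer, a wrap back to the topmost block when too few free blocks remain below the pointer, and a pointer advance past the last block allocated. A minimal sketch under those claims (all names are illustrative; claim 3's "return and re-judge" loop, which presumably waits for blocks to be released, is simplified here to a single failure return):

```python
def allocate(free, alloc_ptr, needed):
    """Return (target_block_indices, new_alloc_ptr), or (None, alloc_ptr).

    free      -- free/used flag per block, in pool order (illustrative model)
    alloc_ptr -- index of the block the allocation pointer currently points to
    needed    -- number of memory blocks the data packet requires
    """
    # First pass scans downward from the current pointer; if the free blocks
    # below it are insufficient, the pointer wraps to the topmost block
    # (index 0) and the scan repeats, as claim 2 specifies.
    for start in (alloc_ptr, 0):
        candidates = [i for i in range(start, len(free)) if free[i]]
        if len(candidates) >= needed:
            targets = candidates[:needed]
            for i in targets:
                free[i] = False           # mark the target blocks as used
            # Claim 4: point the allocation pointer past the last target block.
            return targets, targets[-1] + 1
    return None, alloc_ptr                # simplification of claim 3's retry loop

free = [True] * 8
targets, ptr = allocate(free, 5, 2)   # pointer at block 5, packet needs 2 blocks
print(targets, ptr)                   # [5, 6] 7
```

A second call with the pointer near the bottom of the pool, e.g. `allocate(free, 7, 2)`, would find only one free block below index 7, wrap to the top, and return blocks `[0, 1]` with the pointer moved to 2.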
5. The method according to claim 1, characterized by further comprising:
the view networked terminal extracting data packets from the memory pool in chronological order of caching, from earliest to latest, and releasing the memory blocks used to cache the extracted data packets.
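Claim 5's release policy is first-in, first-out: the earliest-cached packet is extracted first and its blocks are returned to the pool. A sketch of that policy (the FIFO queue and every name here are illustrative assumptions, not structures named in the patent):

```python
from collections import deque

free = [True] * 8
cached = deque()               # (packet_id, block_indices) pairs, oldest first

def cache(packet_id, block_indices):
    """Mark the packet's target blocks used and remember the caching order."""
    for i in block_indices:
        free[i] = False
    cached.append((packet_id, block_indices))

def extract_oldest():
    """Extract the earliest-cached packet and release its memory blocks."""
    packet_id, block_indices = cached.popleft()   # earliest-cached packet
    for i in block_indices:
        free[i] = True                            # blocks become reusable
    return packet_id

cache("pkt-1", [0, 1])
cache("pkt-2", [2])
print(extract_oldest())   # pkt-1
print(free[:3])           # [True, True, False]
```

Combined with the wrap-around allocation of claims 2-4, this FIFO release tends to free blocks near the top of the pool just before the allocation pointer wraps back to them.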
6. A data caching device, characterized in that the device is applied in a view network, the view network comprising a view networked terminal and a view networked server, the view networked terminal comprising:
an application module, configured to request a set number of memory blocks by calling a preset memory application function of the operating system, as a memory pool for data caching;
a receiving module, configured to receive a data packet issued by the view networked server, based on the view networking protocol, over a downlink communication link configured for the view networked terminal;
an allocation module, configured to allocate, in the memory pool, the target memory blocks required by the data packet, and cache the data packet into the target memory blocks.
7. The device according to claim 6, characterized in that the allocation module comprises:
a judging unit, configured to judge whether, in the memory pool, the number of free memory blocks in downward sequence starting from the memory block currently pointed to by the allocation pointer is greater than or equal to the number of memory blocks required by the data packet;
a memory allocation unit, configured to, when the judgment result of the judging unit is yes, allocate the target memory blocks required by the data packet in downward sequence starting from the memory block currently pointed to by the allocation pointer; and, when the judgment result of the judging unit is no, point the allocation pointer to the topmost memory block of the memory pool and allocate the target memory blocks required by the data packet in downward sequence starting from the memory block currently pointed to by the allocation pointer.
8. The device according to claim 7, characterized in that the memory allocation unit is specifically configured to:
judge whether the number of remaining free memory blocks in downward sequence starting from the memory block currently pointed to by the allocation pointer is greater than or equal to the number of memory blocks required by the data packet;
when the judgment result is yes, allocate, in downward sequence starting from the memory block currently pointed to by the allocation pointer, free memory blocks equal in number to the memory blocks required by the data packet, as the target memory blocks required by the data packet;
when the judgment result is no, return to execute the step of judging whether the number of remaining free memory blocks in downward sequence starting from the memory block currently pointed to by the allocation pointer is greater than or equal to the number of memory blocks required by the data packet.
9. The device according to claim 6 or 7, characterized in that the memory pool has an allocation pointer, the memory blocks in the memory pool are arranged in sequence, and the view networked terminal further comprises:
an adjustment module, configured to, after the allocation module allocates, in the memory pool, the target memory blocks required by the data packet and caches the data packet into the target memory blocks, point the allocation pointer to the next memory block after the last target memory block.
10. The device according to claim 6, characterized in that the view networked terminal further comprises:
a release module, configured to extract data packets from the memory pool in chronological order of caching, from earliest to latest, and release the memory blocks used to cache the extracted data packets.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811203720.2A CN109547727B (en) | 2018-10-16 | 2018-10-16 | Data caching method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109547727A true CN109547727A (en) | 2019-03-29 |
CN109547727B CN109547727B (en) | 2021-12-17 |
Family
ID=65843958
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811203720.2A Active CN109547727B (en) | 2018-10-16 | 2018-10-16 | Data caching method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109547727B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111404986A (en) * | 2019-12-11 | 2020-07-10 | 杭州海康威视系统技术有限公司 | Data transmission processing method, device and storage medium |
CN112988609A (en) * | 2019-12-02 | 2021-06-18 | 杭州海康机器人技术有限公司 | Data processing method, device, storage medium and client |
CN115118685A (en) * | 2022-08-30 | 2022-09-27 | 无锡沐创集成电路设计有限公司 | Data packet processing method, device, system, electronic device and medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101853215A (en) * | 2010-06-01 | 2010-10-06 | 恒生电子股份有限公司 | Memory allocation method and device |
CN103176855A (en) * | 2013-03-15 | 2013-06-26 | 中兴通讯股份有限公司 | Message exchange handling method and device |
CN107329833A (en) * | 2017-07-03 | 2017-11-07 | 郑州云海信息技术有限公司 | Method and apparatus for implementing contiguous memory using a linked list |
CN108573011A (en) * | 2017-11-20 | 2018-09-25 | 北京视联动力国际信息技术有限公司 | Terminal device display method and apparatus |
- 2018-10-16 CN CN201811203720.2A patent/CN109547727B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101853215A (en) * | 2010-06-01 | 2010-10-06 | 恒生电子股份有限公司 | Memory allocation method and device |
CN103176855A (en) * | 2013-03-15 | 2013-06-26 | 中兴通讯股份有限公司 | Message exchange handling method and device |
US20160103718A1 (en) * | 2013-03-15 | 2016-04-14 | Zte Corporation | Method and apparatus for message interactive processing |
CN107329833A (en) * | 2017-07-03 | 2017-11-07 | 郑州云海信息技术有限公司 | Method and apparatus for implementing contiguous memory using a linked list |
CN108573011A (en) * | 2017-11-20 | 2018-09-25 | 北京视联动力国际信息技术有限公司 | Terminal device display method and apparatus |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112988609A (en) * | 2019-12-02 | 2021-06-18 | 杭州海康机器人技术有限公司 | Data processing method, device, storage medium and client |
CN112988609B (en) * | 2019-12-02 | 2023-05-02 | 杭州海康机器人股份有限公司 | Data processing method, device, storage medium and client |
CN111404986A (en) * | 2019-12-11 | 2020-07-10 | 杭州海康威视系统技术有限公司 | Data transmission processing method, device and storage medium |
CN115118685A (en) * | 2022-08-30 | 2022-09-27 | 无锡沐创集成电路设计有限公司 | Data packet processing method, device, system, electronic device and medium |
CN115118685B (en) * | 2022-08-30 | 2022-11-25 | 无锡沐创集成电路设计有限公司 | Data packet processing method, device, system, electronic device and medium |
Also Published As
Publication number | Publication date |
---|---|
CN109547727B (en) | 2021-12-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108881798B (en) | Cross-view-networking conference method and system using a bridge server | |
CN108121588B (en) | Method for accessing external resources and view networking access server | |
CN108632559B (en) | Video data processing method and device | |
CN109743595A (en) | Terminal data synchronization method and device | |
CN109474715A (en) | Resource allocation method and device based on view networking | |
CN108307212B (en) | File sequencing method and device | |
CN108243343B (en) | Node distribution statistics method and server based on view networking | |
CN108234190A (en) | Management method and system for view networked devices | |
CN109246486A (en) | Framing method and device | |
CN109547731A (en) | Video conference display method and system | |
CN109963109A (en) | Video conference processing method and system | |
CN110062295A (en) | File resource acquisition method and system | |
CN110049346A (en) | Live video streaming method and system | |
CN109788369A (en) | Terminal control method and device | |
CN109547727A (en) | Data caching method and device | |
CN110266638A (en) | Information processing method, device and storage medium | |
CN109729184A (en) | View networking service processing method and apparatus | |
CN109491783A (en) | Memory usage acquisition method and system | |
CN109309803A (en) | Camera remote control method and device | |
CN109842630A (en) | Video processing method and device | |
CN110519549A (en) | Conference terminal list acquisition method and system | |
CN110266577A (en) | Tunnel establishment method and view networking system | |
CN110022500A (en) | Packet loss processing method and device | |
CN109617766A (en) | Heartbeat processing method and apparatus | |
CN109819133A (en) | Photo acquisition method and device based on view networking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||