CN110460790A - Method and device for extracting video frames - Google Patents
- Publication number
- CN110460790A (application CN201810410173.9A)
- Authority
- CN
- China
- Prior art keywords
- video frame
- video
- specified time
- data
- determined
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- H04N5/4448—Receiver circuitry for the reception of television signals according to analogue transmission standards for frame-grabbing
Abstract
An embodiment of the invention provides a method and a device for extracting video frames. The method includes: obtaining an extraction request for a target video frame, the extraction request carrying a specified time; reading, according to the specified time, a first video frame from the corresponding position of the video data, and obtaining a first display time label from the first video frame; and, according to the result of comparing the specified time with the first display time label, determining the first video frame to be the target video frame, or searching for a second video frame before or after the first video frame and determining it to be the target video frame. When the target video frame is extracted, the search can therefore jump directly to the position in the video data corresponding to the specified time, rather than scanning forward from the start frame of the video. This overcomes the problem that the later the playback time of the required video frame, the longer the search takes, and thereby solves the long extraction time and low efficiency of existing video frame extraction.
Description
Technical field
The present invention relates to the field of video networking technology, and in particular to a video frame extraction method and a video frame extraction device.
Background technique
With the rapid development of network technology, video networking technology is increasingly used in many technical fields. Video networking uses real-time high-definition video switching technology to play high-definition video on televisions or computers.
To show the content of a video, a video platform usually extracts one frame image from the video and displays it on a website or a client page. The general extraction method has to start from the start frame of the video and search forward in sequence until the video frame corresponding to the required playback time is found. However, video networking commonly involves large files whose duration exceeds ten hours and whose size exceeds 10 GB. The later the playback time of the required video frame, the longer the search takes, so the existing video frame extraction method suffers from long extraction time and low efficiency.
Summary of the invention
In view of the above problems, embodiments of the present invention are proposed to provide a video frame extraction method, and a corresponding video frame extraction device, that overcome the above problems or at least partly solve them.
According to one aspect of the present invention, a video frame extraction method is provided. The method is applied in a video network and includes:
obtaining an extraction request for a target video frame, the extraction request carrying a specified time for video frame extraction;
reading, according to the specified time, a first video frame from the corresponding position of the video data, and obtaining a first display time label from the first video frame;
according to the result of comparing the specified time with the first display time label, determining the first video frame to be the target video frame, or searching for a second video frame before or after the first video frame and determining it to be the target video frame.
Optionally, determining the first video frame to be the target video frame, or searching for a second video frame before or after the first video frame and determining it to be the target video frame, according to the result of comparing the specified time with the first display time label, includes:
if the difference between the specified time and the first display time label is within a set threshold range, determining the first video frame to be the target video frame.
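This threshold comparison can be sketched as follows. The threshold value of 40 ms (one frame interval at 25 fps) is an assumption for illustration; the patent itself only speaks of a "set threshold range":

```python
def is_target_frame(specified_time_ms, first_pts_ms, threshold_ms=40):
    """Treat the first video frame as the target frame when the gap between
    the specified time and its display time label falls within the threshold.
    threshold_ms = 40 (one frame at 25 fps) is an assumed value: a gap below
    one frame interval means no closer frame can exist."""
    return abs(specified_time_ms - first_pts_ms) <= threshold_ms
```

With such a tolerance, a frame whose display time label is 20 ms away from the specified time is accepted directly, while a frame 2 seconds away triggers the search for a second video frame described below.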
Optionally, determining the first video frame to be the target video frame, or searching for a second video frame before or after the first video frame and determining it to be the target video frame, according to the result of comparing the specified time with the first display time label, includes:
if the difference between the specified time and the first display time label exceeds the set threshold range, determining a second display time label according to the difference and the first display time label;
searching for a first picture group corresponding to the second display time label;
determining the head video frame of the first picture group to be the second video frame.
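One plausible reading of these steps, assuming a hypothetical index that maps each picture group's (group of pictures, GOP) starting display time to its byte position, since neither structure is specified by the patent:

```python
def second_display_time(specified_time_ms, first_pts_ms):
    # The second display time label is derived from the first label and the
    # difference; shifting the label by the full difference lands on the
    # specified time itself (one plausible reading of the claim).
    return first_pts_ms + (specified_time_ms - first_pts_ms)

def find_picture_group(gop_index, target_pts_ms):
    """gop_index: list of (start_pts_ms, byte_offset) pairs sorted by
    start_pts_ms. Return the byte offset of the picture group whose start
    time is the greatest one not after target_pts_ms; its head (key) frame
    can then be decoded on its own."""
    offset = gop_index[0][1]
    for start_pts, byte_off in gop_index:
        if start_pts <= target_pts_ms:
            offset = byte_off
        else:
            break
    return offset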
Optionally, if the video data belongs to video stream data in the video network, reading the first video frame from the corresponding position of the video data according to the specified time includes:
opening a playing service in the video network;
obtaining the video stream data corresponding to the specified time;
determining the head video frame of the video stream data to be the first video frame.
Optionally, if the video data belongs to local data, reading the first video frame from the corresponding position of the video data according to the specified time includes:
obtaining a bit-rate parameter of the video data;
calculating the corresponding position of the video data according to the specified time and the bit-rate parameter;
reading a second picture group from the corresponding position of the video data, and determining the head video frame of the second picture group to be the first video frame.
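For local data, the position calculation reduces to simple arithmetic under a roughly constant bit rate, which is the implicit assumption that makes time-to-byte mapping possible:

```python
def corresponding_position(specified_time_s, bitrate_bps):
    """Approximate byte position of the data played at specified_time_s,
    assuming a roughly constant bit rate: seconds x bits-per-second gives
    bits, and dividing by 8 gives bytes."""
    return int(specified_time_s * bitrate_bps) // 8
```

For example, ten seconds into an 8 Mbit/s file lands about 10 MB in; the picture group read there then supplies the first video frame.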
Optionally, the extraction request carries video location information, and the method further includes:
judging, according to the video location information, whether the video data belongs to local data or to video stream data in the video network.
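The patent does not say how the video location information is inspected; one hypothetical rule, consistent with the RTMP example given later in the description, is to look at the URL scheme:

```python
def is_video_stream(video_location):
    # Hypothetical rule: a streaming URL scheme (RTMP is the example the
    # description gives) indicates video stream data in the network, while
    # anything else is treated as a local file path.
    return video_location.lower().startswith(("rtmp://", "rtsp://"))
```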
According to another aspect of the present invention, a video frame extraction device is provided. The device is applied in a video network and includes:
a request module, configured to obtain an extraction request for a target video frame, the extraction request carrying a specified time for video frame extraction;
a label obtaining module, configured to read a first video frame from the corresponding position of the video data according to the specified time, and to obtain a first display time label from the first video frame;
a video frame determining module, configured to determine the first video frame to be the target video frame, or to search for a second video frame before or after the first video frame and determine it to be the target video frame, according to the result of comparing the specified time with the first display time label.
Optionally, the video frame determining module includes:
a first determining submodule, configured to determine the first video frame to be the target video frame if the difference between the specified time and the first display time label is within the set threshold range.
Optionally, the video frame determining module includes:
a label determining submodule, configured to determine a second display time label according to the difference and the first display time label if the difference between the specified time and the first display time label exceeds the set threshold range;
a picture group searching submodule, configured to search for a first picture group corresponding to the second display time label;
a second determining submodule, configured to determine the head video frame of the first picture group to be the second video frame.
Optionally, if the video data belongs to video stream data in the video network, the label obtaining module includes:
a service opening submodule, configured to open a playing service in the video network;
a stream data obtaining submodule, configured to obtain the video stream data corresponding to the specified time;
a third determining submodule, configured to determine the head video frame of the video stream data to be the first video frame.
Optionally, if the video data belongs to local data, the label obtaining module includes:
a parameter obtaining submodule, configured to obtain a bit-rate parameter of the video data;
a position calculating submodule, configured to calculate the corresponding position of the video data according to the specified time and the bit-rate parameter;
a fourth determining submodule, configured to read a second picture group from the corresponding position of the video data, and to determine the head video frame of the second picture group to be the first video frame.
Optionally, the extraction request carries video location information, and the device further includes:
a judgment module, configured to judge, according to the video location information, whether the video data belongs to local data or to video stream data in the video network.
Embodiments of the present invention have the following advantages:
In the embodiments of the present invention, an extraction request for a target video frame is obtained, the request carrying a specified time; a first video frame is read from the corresponding position of the video data according to the specified time, and a first display time label is obtained from the first video frame; and, according to the result of comparing the specified time with the first display time label, the first video frame is determined to be the target video frame, or a second video frame is searched for before or after the first video frame and determined to be the target video frame. When the target video frame is extracted, the search can therefore jump directly to the position in the video data corresponding to the specified time, instead of scanning forward from the start frame of the video. This overcomes the problem that the later the playback time of the required video frame, the longer the search takes, and thereby solves the long extraction time and low efficiency of existing video frame extraction.
Brief description of the drawings
Fig. 1 is a networking schematic diagram of a video network according to an embodiment of the invention;
Fig. 2 is a hardware structural diagram of a node server according to an embodiment of the invention;
Fig. 3 is a hardware structural diagram of an access switch according to an embodiment of the invention;
Fig. 4 is a hardware structural diagram of an Ethernet protocol conversion gateway according to an embodiment of the invention;
Fig. 5 is a flow chart of the steps of a video frame extraction method according to an embodiment of the invention;
Fig. 6 is a schematic diagram of a video frame extraction process according to an embodiment of the invention;
Fig. 7 is a flow chart of the steps of a video frame extraction method according to another embodiment of the invention;
Fig. 8 is a structural block diagram of a video frame extraction device according to an embodiment of the invention.
Detailed description of the embodiments
To make the above objectives, features and advantages of the present invention clearer and more comprehensible, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
To enable those skilled in the art to better understand the embodiments of the present invention, video networking is introduced below:
Video networking is an important milestone in network development. It is a real-time network that can realize real-time transmission of high-definition video, pushing numerous Internet applications toward high-definition, face-to-face video.
Video networking uses real-time high-definition video switching technology to integrate the required services, such as high-definition video conferencing, video monitoring, intelligent monitoring analysis, emergency command, digital broadcast television, delayed TV, web-based instruction, live broadcasting, video on demand (VOD), TV mail, personal video recording (PVR), intranet (self-managed) channels, intelligent video broadcast control, information publication and dozens of other video, voice, picture, text, communication and data services, into one system platform, and realizes high-definition video playback through televisions or computers.
Some of the technologies applied in video networking are described below:
Network technology (Network Technology)
The network technology innovation of video networking improves traditional Ethernet to cope with the potentially huge video traffic on the network. Unlike pure network packet switching (Packet Switching) or network circuit switching (Circuit Switching), video networking technology uses packet switching to satisfy streaming demand. Video networking technology has the flexibility, simplicity and low cost of packet switching while also providing the quality and security guarantees of circuit switching, realizing a seamless combination of whole-network switched virtual circuits and data formats.
Switching technology (Switching Technology)
Video networking adopts the two advantages of Ethernet, asynchrony and packet switching, and eliminates Ethernet's defects on the premise of full compatibility. It offers end-to-end seamless connection of the whole network, reaches user terminals directly, and directly carries IP data packets. User data requires no format conversion anywhere in the network. Video networking is a more advanced form of Ethernet: it is a real-time exchange platform that can realize whole-network, large-scale, high-definition real-time video transmission that the present Internet cannot achieve, pushing numerous network video applications toward high definition and unification.
Server technology (Server Technology)
Different from traditional servers, the streaming media transmission of the server technology in video networking and the unified video platform is built on a connection-oriented basis. Its data handling capacity is independent of traffic and communication time, and a single network layer can transmit both signaling and data. For voice and video services, the complexity of streaming media processing on the video networking and unified video platform is much lower than that of data processing, and efficiency is improved a hundredfold or more over traditional servers.
Storage technology (Storage Technology)
The ultra-high-speed storage technology of the unified video platform adopts the most advanced real-time operating system in order to adapt to vast-capacity, super-flow media content. The program information in the server instruction is mapped to specific hard disk space, the media content no longer passes through the server but is delivered directly to the user terminal instantly, and the typical user waiting time is less than 0.2 second. The optimized sector distribution greatly reduces the mechanical movement of hard disk head seeking; resource consumption is only 20% of that of an IP Internet of the same grade, yet the concurrent flow generated is more than three times that of a traditional disk array, and overall efficiency is improved more than tenfold.
Network security technology (Network Security Technology)
The structural design of video networking thoroughly eradicates, from the structure itself, the network security problems that trouble the Internet, through such means as separate permission management for each service and complete isolation of equipment and user data. It generally needs no antivirus programs or firewalls, blocks the attacks of hackers and viruses, and provides users with a structurally carefree, secure network.
Service innovation technology (Service Innovation Technology)
The unified video platform fuses services and transmission together; whether for a single user, a private-line user or a whole network, connecting is only one automatic connection. The user terminal, set-top box or PC attaches directly to the unified video platform to obtain a rich variety of multimedia video services. The unified video platform adopts a "menu" table schema to replace traditional complicated application programming, so that very little code can realize complicated applications, achieving "endless" new service innovation.
The networking of video networking is described below:
Video networking is a centrally controlled network structure, which may be of the tree, star, ring or other type, but on this basis a centralized control node is needed in the network to control the whole network.
As shown in Fig. 1, video networking is divided into two parts: an access network and a metropolitan area network.
The equipment of the access network part can be mainly divided into three classes: node servers, access switches, and terminals (including various set-top boxes, encoding boards, memories, etc.). A node server is connected with access switches; an access switch can be connected with multiple terminals and can connect to Ethernet.
The node server is the node that performs centralized control in the access network and can control the access switches and terminals. The node server can be directly connected with access switches, and can also be directly connected with terminals.
Similarly, the equipment of the metropolitan area network part can also be divided into three classes: metropolitan area servers, node switches, and node servers. A metropolitan area server is connected with node switches, and a node switch can be connected with multiple node servers.
The node server here is the node server of the access network part; that is, the node server belongs both to the access network part and to the metropolitan area network part.
The metropolitan area server is the node that performs centralized control in the metropolitan area network and can control the node switches and node servers. The metropolitan area server can be directly connected to node switches, and can also be directly connected to node servers.
It can be seen that the whole video networking network is a layered, centrally controlled network structure, and the network controlled under a node server or metropolitan area server can have various structures such as tree, star and ring.
Figuratively speaking, the access network part can form a unified video platform (the part within the dashed circle), and multiple unified video platforms can form the video network; each unified video platform can be interconnected through the metropolitan area and wide area video network.
Classification of video networking equipment
1.1 The equipment in the video network of the embodiments of the present invention can be mainly divided into three classes: servers, switches (including Ethernet protocol conversion gateways), and terminals (including various set-top boxes, encoding boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 The equipment of the access network part can be mainly divided into three classes: node servers, access switches (including Ethernet protocol conversion gateways), and terminals (including various set-top boxes, encoding boards, memories, etc.).
The specific hardware structures of each access network device are as follows:
Node server:
As shown in Fig. 2, the node server mainly includes a network interface module 201, a switching engine module 202, a CPU module 203 and a disk array module 204.
The packets coming in from the network interface module 201, the CPU module 203 and the disk array module 204 all enter the switching engine module 202. The switching engine module 202 performs a lookup in the address table 205 on each incoming packet to obtain the packet's navigation information, and stores the packet in the queue of the corresponding packet buffer 206 according to that navigation information; if the queue of the packet buffer 206 is nearly full, the packet is discarded. The switching engine module 202 polls all packet buffer queues and forwards a packet if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly realizes control over the hard disks, including initialization, reading and writing; the CPU module 203 is mainly responsible for protocol processing with the access switches and terminals (not shown in the figure), for configuring the address table 205 (including the downlink protocol packet address table, the uplink protocol packet address table and the data packet address table), and for configuring the disk array module 204.
Access switch:
As shown in Fig. 3, the access switch mainly includes a network interface module (downlink network interface module 301, uplink network interface module 302), a switching engine module 303 and a CPU module 304.
A packet (uplink data) coming in from the downlink network interface module 301 enters the packet detection module 305. The packet detection module 305 detects whether the destination address (DA), source address (SA), data packet type and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id) and the packet enters the switching engine module 303, otherwise the packet is discarded. A packet (downlink data) coming in from the uplink network interface module 302 enters the switching engine module 303; a data packet coming in from the CPU module 304 also enters the switching engine module 303. The switching engine module 303 performs a lookup in the address table 306 on each incoming packet to obtain the packet's navigation information. If a packet entering the switching engine module 303 is going from a downlink network interface toward an uplink network interface, the packet is stored in the queue of the corresponding packet buffer 307 in combination with the stream identifier (stream-id); if the queue of that packet buffer 307 is nearly full, the packet is discarded. If a packet entering the switching engine module 303 is not going from a downlink network interface toward an uplink network interface, the data packet is stored in the queue of the corresponding packet buffer 307 according to the packet's navigation information; if the queue of that packet buffer 307 is nearly full, the packet is discarded.
The switching engine module 303 polls all packet buffer queues, in two cases in the embodiments of the present invention:
if the queue is going from a downlink network interface toward an uplink network interface, a packet is forwarded when the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero; 3) a token generated by the rate control module is obtained;
if the queue is not going from a downlink network interface toward an uplink network interface, a packet is forwarded when the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304 and, at programmable intervals, generates tokens for the packet buffer queues going from all downlink network interfaces toward uplink network interfaces, so as to control the code rate of upstream forwarding.
The CPU module 304 is mainly responsible for protocol processing with the node server, for configuring the address table 306, and for configuring the rate control module 308.
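The two polling cases above amount to a single predicate, sketched here for illustration (the function and argument names are not from the patent):

```python
def may_forward(send_buffer_full, queue_count, is_upstream, tokens=0):
    """A queue heading from a downlink toward an uplink interface additionally
    needs a token from the rate control module; any other queue needs only a
    non-full port send buffer and a non-empty queue."""
    basic = (not send_buffer_full) and queue_count > 0
    if is_upstream:
        return basic and tokens > 0
    return basic
```

The token condition is what lets the rate control module throttle upstream code rate without affecting downstream traffic.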
Ethernet protocol conversion gateway:
As shown in Fig. 4, the Ethernet protocol conversion gateway mainly includes a network interface module (downlink network interface module 401, uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409 and a MAC removing module 410.
A data packet coming in from the downlink network interface module 401 enters the packet detection module 405. The packet detection module 405 detects whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video networking destination address DA, video networking source address SA, video networking data packet type and packet length of the data packet meet the requirements; if so, a corresponding stream identifier (stream-id) is allocated, the MAC DA, MAC SA and length or frame type (2 bytes) are stripped by the MAC removing module 410, and the packet enters the corresponding receive buffer; otherwise the packet is discarded.
The downlink network interface module 401 detects the send buffer of the port. If there is a packet, it learns the Ethernet MAC DA of the corresponding terminal according to the packet's video networking destination address DA, adds the Ethernet MAC DA of the terminal, the MAC SA of the Ethernet protocol conversion gateway and the Ethernet length or frame type, and sends the packet.
The functions of the other modules in the Ethernet protocol conversion gateway are similar to those of the access switch.
Terminal:
A terminal mainly includes a network interface module, a service processing module and a CPU module. For example, a set-top box mainly includes a network interface module, a video/audio encoding and decoding engine module and a CPU module; an encoding board mainly includes a network interface module, a video encoding engine module and a CPU module; a memory mainly includes a network interface module, a CPU module and a disk array module.
1.3 The equipment of the metropolitan area network part can be mainly divided into two classes: node switches and metropolitan area servers. A node switch mainly includes a network interface module, a switching engine module and a CPU module; a metropolitan area server mainly includes a network interface module, a switching engine module and a CPU module.
2. Video networking data packet definition
2.1 Access network data packet definition
The data packet of the access network mainly includes the following parts: destination address (DA), source address (SA), reserved bytes, payload (PDU) and CRC.
As shown in the following table, the data packet of the access network mainly includes the following parts:
DA | SA | Reserved | Payload | CRC |
Wherein:
the destination address (DA) consists of 8 bytes; the first byte indicates the type of the data packet (such as various protocol packets, multicast data packets, unicast data packets, etc.), with up to 256 possibilities; the second to the sixth bytes are the metropolitan area network address; and the seventh and eighth bytes are the access network address;
the source address (SA) also consists of 8 bytes and is defined identically to the destination address (DA);
the reserved bytes consist of 2 bytes;
the payload part has different lengths according to the type of the datagram: 64 bytes for various protocol packets, or 32 + 1024 = 1056 bytes for single-group unicast data packets, and is of course not restricted to the above two kinds;
the CRC consists of 4 bytes, and its calculation method follows the standard Ethernet CRC algorithm.
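The access network layout above can be packed as follows. This is a sketch, not the patent's implementation; `zlib.crc32` is used as a stand-in for the standard Ethernet CRC the text mentions:

```python
import struct
import zlib

def pack_access_packet(da, sa, payload):
    """Lay out an access network packet:
    DA (8 bytes) | SA (8 bytes) | Reserved (2 bytes) | Payload | CRC (4 bytes).
    zlib.crc32 stands in for the standard Ethernet CRC."""
    assert len(da) == 8 and len(sa) == 8
    body = da + sa + b"\x00\x00" + payload
    return body + struct.pack(">I", zlib.crc32(body))
```

A 64-byte protocol packet thus occupies 8 + 8 + 2 + 64 + 4 = 86 bytes on the wire.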
2.2 Metropolitan area network data packet definition
The topology of the metropolitan area network is a graph, and there may be two or even more kinds of connection between two devices; that is, there may be more than two connections between a node switch and a node server, between two node switches, or between two node servers. However, the metropolitan area network address of a metropolitan area network device is unique. To describe the connection relationships between metropolitan area network devices accurately, a parameter is introduced in the embodiments of the present invention: the label, which uniquely describes a metropolitan area network device.
The definition of the label in this specification is similar to the label definition of MPLS (Multi-Protocol Label Switching): supposing there are two connections between device A and device B, a data packet from device A to device B then has two labels, and a data packet from device B to device A also has two labels. Labels are divided into in-labels and out-labels: supposing the label of a data packet entering device A (the in-label) is 0x0000, the label of this data packet when it leaves device A (the out-label) may become 0x0001. The networking process of the metropolitan area network is a network-entry process under centralized control, which means that both the address allocation and the label allocation of the metropolitan area network are dominated by the metropolitan area server, while the node switches and node servers merely execute them passively. This differs from the label allocation of MPLS, where label allocation is the result of mutual negotiation between switches and servers.
As shown in the following table, the data packet of the metropolitan area network mainly includes the following parts:
DA | SA | Reserved | Label | Payload | CRC |
That is, destination address (DA), source address (SA), reserved bytes (Reserved), label, payload (PDU) and CRC. The format of the label can follow the definition below: the label is 32 bits, of which the high 16 bits are reserved and only the low 16 bits are used; it is positioned between the reserved bytes and the payload of the data packet.
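The 32-bit label field described above can be encoded as a small helper, shown here purely for illustration:

```python
def pack_label(label_value):
    """The label field is 32 bits: the high 16 bits are reserved (zero here)
    and only the low 16 bits carry the label value, e.g. an out-label of
    0x0001 for a packet leaving device A."""
    assert 0 <= label_value <= 0xFFFF
    return label_value.to_bytes(4, "big")
```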
Based on the above characteristics of video networking, one of the core concepts of the embodiments of the present invention is proposed: following the protocol of video networking, the extraction function for video frames is realized.
Referring to Fig. 5, a flow chart of the steps of a video frame extraction method according to an embodiment of the present invention is shown. The method can be applied in a video network and may specifically include the following steps:
Step 501: obtaining an extraction request for a target video frame.
The extraction request is a request to extract a video frame from a video. It can be a request sent through the video network to a video networking frame extraction server, a request for local video data, or any other applicable request, which is not limited by the embodiments of the present invention.
The extraction request carries a specified time for video frame extraction, where the specified time is the time of the target video frame to be extracted, i.e., the time at which the specified target video frame appears during video playback. For example, in a video with a playing duration of 60 minutes, the video frame appearing at 29 minutes 30 seconds can be specified for extraction.
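The direct-seek behavior this example aims at can be illustrated with a stand-in command built for the ffmpeg CLI (ffmpeg is not part of the patent, and the file names here are hypothetical):

```python
def frame_extract_cmd(video_path, specified_time, out_image):
    """Build an ffmpeg command that seeks straight to the specified time
    (-ss before -i) and grabs a single frame, instead of decoding forward
    from the start frame of the video."""
    return ["ffmpeg", "-ss", specified_time, "-i", video_path,
            "-frames:v", "1", "-y", out_image]
```

For the 60-minute example, `frame_extract_cmd("movie.mp4", "00:29:30", "frame.jpg")` seeks to 29 minutes 30 seconds and writes one frame.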
In the embodiments of the present invention, the extraction request is directed at the target video frame and carries the specified time of the target video frame.
In addition, it should be noted that in the embodiments of the present invention each step can be executed on a client and/or a server: the steps can all be executed by the client, or all by the server, or partly on the client and partly on the server.
Step 502: reading, according to the specified time, a first video frame from the corresponding position of the video data, and obtaining a first display time label from the first video frame.
In embodiments of the present invention, according to specified time, the corresponding position in video data is first determined, from the corresponding position
Video frame is read, the first video frame is denoted as.The display time of video frame can determine that the display time marks according to display time label
Note (also referred to as Presentation Time Stamp) can be got from the data of video frame, for example, the PTS (Presentation of video frame
Time Stamp, Presentation Time Stamp).
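As an illustrative sketch (not part of the claimed method), a PTS expressed in a stream's time base can be converted to seconds for comparison with the specified time. The 90 kHz time base used here is the value commonly used in MPEG transport streams and is an assumption:

```python
from fractions import Fraction

# Convert a presentation time stamp (PTS) to seconds using the
# stream's time base. MPEG transport streams commonly use a
# 90 kHz clock, i.e. a time base of 1/90000 (an assumption here).
def pts_to_seconds(pts: int, time_base: Fraction = Fraction(1, 90000)) -> float:
    return float(pts * time_base)

# A frame whose PTS is 159_300_000 ticks on a 90 kHz clock is
# displayed at 1770 s, i.e. 29 min 30 s into the video.
print(pts_to_seconds(159_300_000))  # 1770.0
```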
In the embodiment of the present invention, optionally, if the video data belongs to video stream data in view networking, one implementation of reading the first video frame from the corresponding position of the video data according to the specified time may include:
Sub-step S1: open the playing service in view networking;
Sub-step S2: obtain the video stream data corresponding to the specified time;
Sub-step S3: determine the head video frame of the video stream data as the first video frame.
The playing service refers to the on-demand process transmitted via view networking. For example, if the video address is an RTMP (Real Time Messaging Protocol) address, the RTMP stream is opened and a connection is established with the view networking frame-extraction server that provides the video, so as to obtain the video data.
Video stream data refers to video data transmitted in view networking in a streaming manner, composed of successive data packets. Each transmission does not have to start from the first data packet of the entire video; instead, transmission can start from the data packet corresponding to the specified time as needed.
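A minimal sketch of this idea, with a hypothetical sorted list of packet start times in seconds: transmission begins at the packet whose span covers the specified time rather than at packet zero.

```python
from bisect import bisect_right

def first_packet_index(packet_start_times, specified_time):
    """Return the index of the data packet whose span covers the
    specified time, assuming packet_start_times is sorted ascending."""
    i = bisect_right(packet_start_times, specified_time) - 1
    return max(i, 0)

# Packets starting every 2 seconds; the packet covering t = 7.5 s is index 3.
starts = [0.0, 2.0, 4.0, 6.0, 8.0]
print(first_packet_index(starts, 7.5))  # 3
```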
The view networking frame-extraction server finds the video stream data corresponding to the specified time. Since the specified time is specified as needed, after decoding a data packet of the video stream data, the display time of the resulting head video frame does not necessarily correspond exactly to the specified time. The head video frame of the video stream data is first determined as the first video frame; if the difference between the first display time label of this video frame and the specified time is less than a set threshold, this video frame can also serve as the target video frame.
For example, in the schematic diagram of the video frame extraction process shown in Fig. 6, a command, i.e., the extraction request, is sent via view networking to the view networking frame-extraction server. The frame-extraction server determines the head video frame of the video stream data according to the specified time and determines it as the first video frame. If the subsequent comparison of the specified time with the first display time label shows that the display time of the first video frame is not close enough to the specified time, the view networking frame-extraction server then determines a second video frame according to a second display time label.
In the embodiment of the present invention, optionally, if the video data belongs to local data, one implementation of reading the first video frame from the corresponding position of the video data according to the specified time includes:
Sub-step S4: obtain the bit-rate parameters of the video data;
Sub-step S5: calculate the corresponding position of the video data according to the specified time and the bit-rate parameters;
Sub-step S6: read a second picture group from the corresponding position of the video data, and determine the head video frame of the second picture group as the first video frame.
If the video data belongs to local data, the bit-rate parameters of the video data are obtained, including the bit rate of the video and the bit rate of the audio. According to the specified time and the bit-rate parameters, the position in the video data corresponding to the specified time is calculated, so that a picture group can be read directly from that position in the file without decoding the video data; it is denoted as the second picture group. Here, a picture group is a group of consecutive pictures. For example, in MPEG (Moving Picture Experts Group) coding, a GOP (Group of Pictures) is a group of consecutive I, P, and B pictures. MPEG coding divides pictures (i.e., frames) into three types, I, P, and B, where I is an intra-coded frame, P is a forward-predicted frame, and B is a bidirectionally interpolated frame. Simply put, an I frame is a complete picture, while P frames and B frames record changes relative to the I frame; without the I frame, the P frames and B frames cannot be decoded. The head video frame of a picture group can stand alone as a picture, but the subsequent video frames cannot be decoded individually, so the head video frame of the second picture group is determined as the first video frame.
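For a constant-bit-rate local file, the byte position corresponding to the specified time can be estimated from the combined video and audio bit rates. The following sketch assumes bit rates given in bits per second and ignores container overhead:

```python
def byte_offset(specified_time_s: float,
                video_bitrate_bps: int,
                audio_bitrate_bps: int) -> int:
    """Estimate the byte offset of the specified time in a CBR file:
    total bits elapsed, divided by 8 bits per byte."""
    total_bps = video_bitrate_bps + audio_bitrate_bps
    return int(specified_time_s * total_bps // 8)

# 10 s into a stream of 800 kbps video + 128 kbps audio:
# (800_000 + 128_000) * 10 / 8 = 1_160_000 bytes.
print(byte_offset(10, 800_000, 128_000))  # 1160000
```

For variable bit rate, this estimate is only approximate, which is exactly why the method then compares display time labels and searches nearby picture groups.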
Step 503: according to the comparison result of the specified time and the first display time label, determine the first video frame as the target video frame, or search for a second video frame before or after the first video frame and determine it as the target video frame.
The position of the video data determined according to the specified time may not be exact: when the video stream data is obtained according to the specified time, the display time of the head video frame of the video stream data is not necessarily the specified time; and when the corresponding position of the video data is calculated according to the specified time and the bit-rate parameters, if the bit rate is variable, the calculated position may not correspond exactly to the specified time either, and only the head video frame of the picture group can be taken. Therefore, the display time of the first video frame does not necessarily coincide with the specified time.
In the embodiment of the present invention, before comparing the specified time with the first display time label, the specified time needs to be converted into a corresponding display time label and then compared with the first display time label, or the first display time label needs to be converted into a corresponding display time and then compared with the specified time.
In the embodiment of the present invention, the comparison result includes a time difference, which can be the difference between the display time label converted from the specified time and the first display time label, or the difference between the display time converted from the first display time label and the specified time. According to the comparison result, it can be judged whether the display time of the first video frame is close to the specified time: if so, the first video frame can be determined as the target video frame; if not, a second video frame is searched for before or after the first video frame, until a video frame whose display time is close to the specified time is found and determined as the target video frame.
In the embodiment of the present invention, optionally, one implementation of determining the first video frame as the target video frame according to the comparison result of the specified time and the first display time label, or searching for a second video frame before or after the first video frame and determining it as the target video frame, may include:
if the difference between the specified time and the first display time label falls within a set threshold range, determining the first video frame as the target video frame.
The difference between the specified time and the first display time label can be the difference between the display time label converted from the specified time and the first display time label, or the difference between the display time converted from the first display time label and the specified time. If the difference falls within the set threshold range, for example a range of minus one second to plus one second, the display time of the first video frame is close to the specified time and meets the requirement, so the first video frame is determined as the target video frame.
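The threshold comparison described above can be sketched as follows. The ±1 second window matches the example in the text, and both times are assumed to already be expressed in seconds:

```python
def is_target(specified_time_s: float, display_time_s: float,
              threshold_s: float = 1.0) -> bool:
    """A frame qualifies as the target when its display time lies
    within the set threshold range around the specified time."""
    return abs(specified_time_s - display_time_s) < threshold_s

print(is_target(1770.0, 1770.4))  # True: within the +/- 1 s range
print(is_target(1770.0, 1775.0))  # False: 5 s away from the specified time
```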
In the embodiment of the present invention, optionally, another implementation of determining the first video frame as the target video frame according to the comparison result of the specified time and the first display time label, or searching for a second video frame before or after the first video frame and determining it as the target video frame, includes:
if the difference between the specified time and the first display time label exceeds the set threshold range, determining a second display time label according to the difference and the first display time label;
searching for a first picture group corresponding to the second display time label;
determining the head video frame of the first picture group as the second video frame.
The difference between the specified time and the first display time label can be the difference between the display time label converted from the specified time and the first display time label, or the difference between the display time converted from the first display time label and the specified time. If the difference exceeds the set threshold range, for example minus one second to plus one second, the display time of the first video frame is not close to the specified time and does not meet the requirement.
The second display time label then needs to be determined according to the difference and the first display time label. If the difference is negative, the display time of the first video frame is after the specified time, so the label is moved earlier by the magnitude of the difference to determine the second display time label; if the difference is positive, the display time of the first video frame is before the specified time, so the label is moved later by the magnitude of the difference to determine the second display time label.
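Taking the difference as the specified time minus the first display time (the sign convention implied above, stated here as an assumption), both cases reduce to shifting the first label by the signed difference:

```python
def second_display_time(first_display_s: float, specified_s: float) -> float:
    """Shift the first display time by the signed difference:
    a negative difference moves the label earlier, a positive one later."""
    diff = specified_s - first_display_s
    return first_display_s + diff  # lands on the specified time itself

# First frame displayed at 1775 s but 1770 s was requested:
# diff = -5, so the second display time label is 5 s earlier.
print(second_display_time(1775.0, 1770.0))  # 1770.0
```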
The picture group corresponding to the second display time label is searched for and denoted as the first picture group, and the head video frame of the first picture group is determined as the second video frame. In a specific implementation, after the second video frame is determined, it can be determined as the target video frame according to the comparison result of the specified time and the display time label of the second video frame, or the search continues before and after the second video frame until a video frame meeting the requirement is found and determined as the target video frame.
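Combining the pieces, the overall search could look like the following sketch. The picture groups are modeled as a hypothetical sorted list of (head display time, frame id) pairs, and the initial jump may land on the wrong group; a real implementation would read GOP boundaries from the container instead.

```python
def find_target_frame(gops, first_index, specified_s, threshold_s=1.0):
    """gops: sorted list of (head_display_time_s, frame_id) pairs.
    first_index: the picture group reached by the initial (possibly
    inexact) jump. Walk group by group toward the specified time until
    the head frame's display time falls within the threshold range."""
    i = first_index
    for _ in range(len(gops)):           # bound the walk to avoid cycling
        start, frame_id = gops[i]
        diff = specified_s - start
        if abs(diff) < threshold_s:
            return frame_id              # display time close enough
        if diff > 0 and i < len(gops) - 1:
            i += 1                       # frame too early: move later
        elif diff < 0 and i > 0:
            i -= 1                       # frame too late: move earlier
        else:
            break                        # reached the ends of the video
    return gops[i][1]                    # best effort

gops = [(0.0, 'f0'), (2.0, 'f1'), (4.0, 'f2'), (6.0, 'f3'), (8.0, 'f4')]
# Initial jump landed on the group at 2.0 s, but 6.3 s was requested:
print(find_target_frame(gops, 1, 6.3))  # f3
```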
In the embodiment of the present invention, an extraction request for a target video frame is obtained, where the extraction request carries a specified time; according to the specified time, a first video frame is read from the corresponding position of the video data, and a first display time label is obtained from the first video frame; according to the comparison result of the specified time and the first display time label, the first video frame is determined as the target video frame, or a second video frame is searched for before or after the first video frame and determined as the target video frame. In this way, when extracting the target video frame, the search can jump directly to the position in the video data corresponding to the specified time, without sequentially searching for the target video frame from the start frame of the video. This overcomes the problem that the later the play time of the required video frame, the longer it takes to find it, and solves the problems of long time consumption and low efficiency in video frame extraction.
Referring to Fig. 7, a step flow chart of a method for extracting video frames according to another embodiment of the present invention is shown. The method can be applied in view networking and can specifically include the following steps:
Step 601: obtain an extraction request for a target video frame.
Step 602: according to the video location information, judge whether the video data belongs to local data or to video stream data in view networking.
In the embodiment of the present invention, the extraction request carries video location information, which includes the network address or local directory of the video file, etc.; the embodiment of the present invention is not limited in this respect. For example, if the video address is an RTMP address, the video data belongs to video stream data in view networking; if the video address is a local directory, the video data belongs to local data.
Step 603: according to the specified time, read a first video frame from the corresponding position of the video data, and obtain a first display time label from the first video frame.
Step 604: according to the comparison result of the specified time and the first display time label, determine the first video frame as the target video frame, or search for a second video frame before or after the first video frame and determine it as the target video frame.
In the embodiment of the present invention, an extraction request for a target video frame is obtained, where the extraction request carries a specified time; according to the video location information, it is judged whether the video data belongs to local data or to video stream data in view networking; according to the specified time, a first video frame is read from the corresponding position of the video data, and a first display time label is obtained from the first video frame; according to the comparison result of the specified time and the first display time label, the first video frame is determined as the target video frame, or a second video frame is searched for before or after the first video frame and determined as the target video frame. In this way, when extracting the target video frame, the search can jump directly to the position in the video data corresponding to the specified time, without sequentially searching for the target video frame from the start frame of the video. This overcomes the problem that the later the play time of the required video frame, the longer it takes to find it, and solves the problems of long time consumption and low efficiency in video frame extraction.
It should be noted that, for simplicity of description, the method embodiments are stated as a series of action combinations, but those skilled in the art should understand that the embodiments of the present invention are not limited by the described sequence of actions, because according to the embodiments of the present invention, some steps may be performed in other sequences or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Referring to Fig. 8, a structural block diagram of a device for extracting video frames according to one embodiment of the present invention is shown; the device may specifically include:
a request module 701, configured to obtain an extraction request for a target video frame, where the extraction request carries the specified time for video frame extraction;
a label obtaining module 702, configured to read a first video frame from the corresponding position of the video data according to the specified time, and obtain a first display time label from the first video frame;
a video frame determining module 703, configured to determine the first video frame as the target video frame according to the comparison result of the specified time and the first display time label, or to search for a second video frame before or after the first video frame and determine it as the target video frame.
In one embodiment of the present invention, the video frame determining module includes:
a first determining submodule, configured to determine the first video frame as the target video frame if the difference between the specified time and the first display time label falls within a set threshold range.
In one embodiment of the present invention, the video frame determining module includes:
a label determining submodule, configured to determine a second display time label according to the difference and the first display time label if the difference between the specified time and the first display time label exceeds the set threshold range;
a picture group searching submodule, configured to search for a first picture group corresponding to the second display time label;
a second determining submodule, configured to determine the head video frame of the first picture group as the second video frame.
In one embodiment of the present invention, if the video data belongs to video stream data in view networking, the label obtaining module includes:
a service opening submodule, configured to open the playing service in view networking;
a stream data obtaining submodule, configured to obtain the video stream data corresponding to the specified time;
a third determining submodule, configured to determine the head video frame of the video stream data as the first video frame.
In one embodiment of the present invention, if the video data belongs to local data, the label obtaining module includes:
a parameter obtaining submodule, configured to obtain the bit-rate parameters of the video data;
a position calculating submodule, configured to calculate the corresponding position of the video data according to the specified time and the bit-rate parameters;
a fourth determining submodule, configured to read a second picture group from the corresponding position of the video data, and determine the head video frame of the second picture group as the first video frame.
In one embodiment of the present invention, the extraction request carries video location information, and the device further includes:
a judging module, configured to judge, according to the video location information, whether the video data belongs to local data or to video stream data in view networking.
In the embodiment of the present invention, an extraction request for a target video frame is obtained, where the extraction request carries a specified time; according to the specified time, a first video frame is read from the corresponding position of the video data, and a first display time label is obtained from the first video frame; according to the comparison result of the specified time and the first display time label, the first video frame is determined as the target video frame, or a second video frame is searched for before or after the first video frame and determined as the target video frame. In this way, when extracting the target video frame, the search can jump directly to the position in the video data corresponding to the specified time, without sequentially searching for the target video frame from the start frame of the video. This overcomes the problem that the later the play time of the required video frame, the longer it takes to find it, and solves the problems of long time consumption and low efficiency in video frame extraction.
For the device embodiment, since it is basically similar to the method embodiment, the description is relatively simple; for relevant parts, refer to the description of the method embodiment.
All the embodiments in this specification are described in a progressive manner; each embodiment highlights its differences from the other embodiments, and the same or similar parts between the embodiments can be referred to each other.
It should be understood by those skilled in the art that the embodiments of the present invention can be provided as a method, a device, or a computer program product. Therefore, the embodiments of the present invention can take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical memory, etc.) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, terminal device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal device produce a device for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or other programmable data processing terminal device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device realizes the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions can also be loaded onto a computer or other programmable data processing terminal device, so that a series of operation steps are executed on the computer or other programmable terminal device to produce computer-implemented processing, and the instructions executed on the computer or other programmable terminal device thus provide steps for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Although the preferred embodiments of the present invention have been described, once a person skilled in the art knows the basic creative concept, additional changes and modifications can be made to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
Finally, it should also be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another entity or operation, without necessarily requiring or implying any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device including a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements intrinsic to such a process, method, article, or terminal device. In the absence of further restrictions, an element limited by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or terminal device including that element.
The method for extracting video frames and the device for extracting video frames provided by the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementation of the present invention; the above description of the embodiments is merely intended to help understand the method of the present invention and its core concept. Meanwhile, for those skilled in the art, changes can be made to the specific implementation and application scope according to the idea of the present invention. In conclusion, the content of this specification should not be construed as limiting the present invention.
Claims (10)
1. A method for extracting video frames, characterized in that the method is applied in view networking, and the method comprises:
obtaining an extraction request for a target video frame, the extraction request carrying a specified time for video frame extraction;
according to the specified time, reading a first video frame from the corresponding position of video data, and obtaining a first display time label from the first video frame;
according to the comparison result of the specified time and the first display time label, determining the first video frame as the target video frame, or searching for a second video frame before or after the first video frame and determining it as the target video frame.
2. The method according to claim 1, characterized in that determining the first video frame as the target video frame according to the comparison result of the specified time and the first display time label, or searching for a second video frame before or after the first video frame and determining it as the target video frame, comprises:
if the difference between the specified time and the first display time label falls within a set threshold range, determining the first video frame as the target video frame.
3. The method according to claim 1, characterized in that determining the first video frame as the target video frame according to the comparison result of the specified time and the first display time label, or searching for a second video frame before or after the first video frame and determining it as the target video frame, comprises:
if the difference between the specified time and the first display time label exceeds the set threshold range, determining a second display time label according to the difference and the first display time label;
searching for a first picture group corresponding to the second display time label;
determining the head video frame of the first picture group as the second video frame.
4. The method according to claim 1, characterized in that, if the video data belongs to video stream data in view networking, reading the first video frame from the corresponding position of the video data according to the specified time comprises:
opening the playing service in view networking;
obtaining the video stream data corresponding to the specified time;
determining the head video frame of the video stream data as the first video frame.
5. The method according to claim 1, characterized in that, if the video data belongs to local data, reading the first video frame from the corresponding position of the video data according to the specified time comprises:
obtaining the bit-rate parameters of the video data;
calculating the corresponding position of the video data according to the specified time and the bit-rate parameters;
reading a second picture group from the corresponding position of the video data, and determining the head video frame of the second picture group as the first video frame.
6. The method according to claim 4 or 5, characterized in that the extraction request carries video location information, and the method further comprises:
according to the video location information, judging whether the video data belongs to local data or to video stream data in view networking.
7. A device for extracting video frames, characterized in that the device is applied in view networking, and the device comprises:
a request module, configured to obtain an extraction request for a target video frame, the extraction request carrying a specified time for video frame extraction;
a label obtaining module, configured to read a first video frame from the corresponding position of video data according to the specified time, and obtain a first display time label from the first video frame;
a video frame determining module, configured to determine the first video frame as the target video frame according to the comparison result of the specified time and the first display time label, or to search for a second video frame before or after the first video frame and determine it as the target video frame.
8. The device according to claim 7, characterized in that the video frame determining module comprises:
a first determining submodule, configured to determine the first video frame as the target video frame if the difference between the specified time and the first display time label falls within a set threshold range.
9. The device according to claim 7, characterized in that the video frame determining module comprises:
a label determining submodule, configured to determine a second display time label according to the difference and the first display time label if the difference between the specified time and the first display time label exceeds the set threshold range;
a picture group searching submodule, configured to search for a first picture group corresponding to the second display time label;
a second determining submodule, configured to determine the head video frame of the first picture group as the second video frame.
10. The device according to claim 7, characterized in that, if the video data belongs to video stream data in view networking, the label obtaining module comprises:
a service opening submodule, configured to open the playing service in view networking;
a stream data obtaining submodule, configured to obtain the video stream data corresponding to the specified time;
a third determining submodule, configured to determine the head video frame of the video stream data as the first video frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810410173.9A CN110460790A (en) | 2018-05-02 | 2018-05-02 | A kind of abstracting method and device of video frame |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110460790A true CN110460790A (en) | 2019-11-15 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111147954A (en) * | 2019-12-30 | 2020-05-12 | 北京奇艺世纪科技有限公司 | Thumbnail extraction method and device |
CN111405288A (en) * | 2020-03-19 | 2020-07-10 | 北京字节跳动网络技术有限公司 | Video frame extraction method and device, electronic equipment and computer readable storage medium |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101675427A (en) * | 2007-05-02 | 2010-03-17 | 微软公司 | Iteratively locating a position corresponding to a desired seek time |
CN103260092A (en) * | 2012-02-17 | 2013-08-21 | 三星电子株式会社 | Method and apparatus for seeking a frame in multimedia contents |
CN103491387A (en) * | 2012-06-14 | 2014-01-01 | 深圳市快播科技有限公司 | System, on-demand unicast terminal and method for video positioning |
CN103544977A (en) * | 2012-07-16 | 2014-01-29 | 三星电子(中国)研发中心 | Device and method for locating videos on basis of touch control |
CN104216959A (en) * | 2014-08-21 | 2014-12-17 | 浙江宇视科技有限公司 | TS (transport stream) file positioning method and device |
CN104394474A (en) * | 2014-11-25 | 2015-03-04 | 苏州航天系统工程有限公司 | Stream media quick locating on-demand playing method |
CN104581436A (en) * | 2015-01-28 | 2015-04-29 | 青岛海信宽带多媒体技术有限公司 | Video frame positioning method and device |
CN104618798A (en) * | 2015-02-12 | 2015-05-13 | 北京清源新创科技有限公司 | Playing time control method and device for Internet live video |
US9110576B1 (en) * | 2014-03-20 | 2015-08-18 | Lg Electronics Inc. | Display device and method for controlling the same |
CN104994433A (en) * | 2015-06-30 | 2015-10-21 | 上海帝联信息科技股份有限公司 | Method and device for providing video file |
CN105302352A (en) * | 2014-07-30 | 2016-02-03 | 西安司坤电子科技有限公司 | Method for seeking within a VBR-format MP3 file |
CN105704527A (en) * | 2016-01-20 | 2016-06-22 | 努比亚技术有限公司 | Terminal and video frame positioning method for a terminal |
CN105959310A (en) * | 2016-07-01 | 2016-09-21 | 北京小米移动软件有限公司 | Frame positioning method and device |
CN106101867A (en) * | 2016-07-20 | 2016-11-09 | 深圳芯智汇科技有限公司 | Method for improving FLV video seeking speed and positioning accuracy |
CN106131660A (en) * | 2016-07-15 | 2016-11-16 | 青岛海信宽带多媒体技术有限公司 | Video positioning and playback method and device |
CN107979621A (en) * | 2016-10-24 | 2018-05-01 | 杭州海康威视数字技术股份有限公司 | Video file storage and positioned playback method and device |
2018
- 2018-05-02: Application CN201810410173.9A filed in China (CN); publication CN110460790A; status: Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110072143A (en) | Video stream decoding method and device | |
CN109788232A (en) | Video conference minutes recording method, device and system | |
CN108965224A (en) | Video-on-demand method and device | |
CN109996086A (en) | View networking service status query method and device | |
CN110267099A (en) | View-networking-based data transmission method and view networking terminal | |
CN109889420A (en) | Method and device for business processing | |
CN108965986A (en) | Video recording and broadcasting method, device and system | |
CN110324580A (en) | View-networking-based surveillance video playback method and device | |
CN110351506A (en) | Video recording method and device, electronic equipment and readable storage medium | |
CN108881948A (en) | Method and system for round-robin monitoring video over view networking | |
CN109284265A (en) | Data storage method and system | |
CN110049346A (en) | Live video streaming method and system | |
CN108307212A (en) | File ordering method and device | |
CN109246135A (en) | Streaming media data acquisition method and system | |
CN108881818A (en) | Video data transmission method and device | |
CN108881819A (en) | Audio data transmission method and device | |
CN109491783A (en) | Memory usage acquisition method and system | |
CN108965930A (en) | Video data processing method and device | |
CN110505107A (en) | Monitoring method and view networking management system | |
CN110460790A (en) | Video frame extraction method and device | |
CN110086773A (en) | Audio and video data processing method and system | |
CN108574819B (en) | Terminal device and video conference method | |
CN110263030(A) (en) | View-networking-based data acquisition method and device | |
CN109218302A (en) | Method and device for sending view networking data packets | |
CN108965744A (en) | View-networking-based video image processing method and device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |