CN110677724A - Dual-mode data splicing method and device and terminal equipment - Google Patents

Dual-mode data splicing method and device and terminal equipment

Info

Publication number
CN110677724A
CN110677724A
Authority
CN
China
Prior art keywords
video data
interval
splicing
cache
splicing point
Prior art date
Legal status
Granted
Application number
CN201910957803.9A
Other languages
Chinese (zh)
Other versions
CN110677724B (en)
Inventor
陈永安
Current Assignee
TP Link Technologies Co Ltd
Original Assignee
TP Link Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by TP Link Technologies Co Ltd filed Critical TP Link Technologies Co Ltd
Priority to CN201910957803.9A priority Critical patent/CN110677724B/en
Publication of CN110677724A publication Critical patent/CN110677724A/en
Application granted granted Critical
Publication of CN110677724B publication Critical patent/CN110677724B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/632Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing using a connection between clients on a wide area network, e.g. setting up a peer-to-peer communication via Internet for retrieving video segments from the hard-disk of other client devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44004Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video buffer management, e.g. video decoder buffer or video display buffer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application belongs to the technical field of video switching and provides a dual-mode data splicing method, a device, and terminal equipment. The method first connects in a first video mode, in which video can be acquired and played at any time; it then searches a second cache region, under a second video mode, for a splicing point matching the first cache region, and plays the video data after the splicing point in the second video mode. This solves the problem that a remote video cannot switch seamlessly while playing video at any time under the premise of correspondingly reducing operation cost, achieving a balance between user experience and operation cost.

Description

Dual-mode data splicing method and device and terminal equipment
Technical Field
The application belongs to the technical field of video switching, and particularly relates to a dual-mode data splicing method and device and terminal equipment.
Background
In the prior art, remote video playback for security products uses two connection modes: P2P and relay. The P2P mode establishes a direct connection between two terminals on different local area networks by means of NAT traversal technology, while the relay mode establishes a connection by transferring data through a third-party server. In the P2P mode, NAT traversal may fail to penetrate, and the establishment process may take a long time, which degrades the user experience. The relay mode ensures that video can still be played when a P2P connection cannot be established, but its disadvantage is that the third-party server generates traffic cost and increases operation cost. Therefore, the remote video of an existing security product cannot play video at any time while correspondingly reducing operation cost, cannot switch seamlessly, and cannot strike a good balance between user experience and operation cost.
Disclosure of Invention
The embodiments of the application provide a dual-mode data splicing method, a dual-mode data splicing device, and terminal equipment, which can solve the problem that seamless switching cannot be achieved when the remote video of a security product plays video at any time under the premise of correspondingly reducing operation cost.
In a first aspect, an embodiment of the present application provides a dual-mode data splicing method, including:
acquiring first video data in a first cache region corresponding to a first video connection mode; wherein the first video data comprises a plurality of first video data packets;
acquiring second video data in a second cache region corresponding to a second video connection mode; wherein the second video data comprises a plurality of second video data packets;
determining a first target video data packet in the plurality of first video data packets as a first splicing point of the first cache region, and a second target video data packet in the plurality of second video data packets as a second splicing point of the second cache region; wherein the first target video data packet matches the second target video data packet;
stopping writing new video data into the first cache region;
and in the second video connection mode, writing the video data after the second splicing point of the second cache region after the first splicing point of the first cache region.
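The five steps of the first aspect can be sketched as follows. This is an illustrative Python sketch, not the patented implementation: plain lists stand in for the relay (first) and P2P (second) cache regions, packets match by value, and the helper names are assumptions.

```python
# Hypothetical sketch of the claimed splicing flow: find a matching packet in
# both buffers, stop writing to the first buffer, then append the second
# buffer's data after its splice point to the first buffer's splice point.

def find_splice_points(first_buf, second_buf):
    """Return (i, j) such that first_buf[i] matches second_buf[j], or None."""
    second_ids = {pkt: j for j, pkt in enumerate(second_buf)}
    for i, pkt in enumerate(first_buf):
        if pkt in second_ids:
            return i, second_ids[pkt]
    return None

def splice(first_buf, second_buf):
    points = find_splice_points(first_buf, second_buf)
    if points is None:
        return None
    i, j = points
    # Writing of new relay data stops here; the P2P data after the second
    # splice point is written after the first splice point.
    return first_buf[: i + 1] + second_buf[j + 1 :]

# Relay buffer holds seconds 5-20, P2P buffer holds seconds 7-22 (fig. 3, case 1).
first = list(range(5, 21))
second = list(range(7, 23))
print(splice(first, second))  # spliced stream 5..22 with no gap or overlap
```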
In a second aspect, an embodiment of the present application provides a dual-mode data splicing apparatus, including:
the first obtaining module is used for obtaining first video data in a first cache region corresponding to a first video connection mode; wherein the first video data comprises a plurality of first video data packets;
the second obtaining module is used for obtaining second video data in a second cache region corresponding to the second video connection mode; wherein the second video data comprises a plurality of second video data packets;
the splicing module is used for acquiring a first target video data packet from the plurality of first video data packets as a first splicing point of the first cache region, and acquiring a second target video data packet from the plurality of second video data packets as a second splicing point of the second cache region; wherein the first target video data packet matches the second target video data packet;
the suspension module is used for stopping writing new video data into the first cache region;
and the writing module is used for writing, in the second video connection mode, the video data after the second splicing point of the second cache region after the first splicing point of the first cache region.
In a third aspect, an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the method according to any one of the preceding claims.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the method according to any one of the preceding claims.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the dual-mode data splicing method described in any one of the above first aspects.
It is understood that, for the beneficial effects of the second aspect through the fifth aspect, reference may be made to the related description of the first aspect, which is not repeated here.
Compared with the prior art, the embodiments of the application have the following advantages: in the first video connection mode, although a large amount of traffic is consumed, video can be acquired and played at any time; then, in the second video connection mode, a splicing point matching the first cache region is searched for in the second cache region, the video data after the splicing point is played through the second video connection mode, and the first video connection mode is disconnected. This solves the problem that seamless switching cannot be achieved when the remote video plays video at any time under the premise of correspondingly reducing operation cost, achieving a balance between user experience and operation cost.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
FIG. 1 is a flow chart illustrating an implementation of a dual-mode data splicing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of another implementation of the dual-mode data splicing method provided by the embodiment of the present application;
FIG. 3 is a schematic diagram of a splicing process provided by an embodiment of the present application;
FIG. 4 is a flowchart illustrating yet another implementation of the dual-mode data splicing method according to the embodiment of the present application;
FIG. 5 is a flowchart illustrating yet another implementation of the dual-mode data splicing method according to the embodiment of the present application;
FIG. 6 is a schematic structural diagram of a dual-mode data splicing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "comprises" and "comprising," and any variations thereof, in the description and claims of this application and the drawings described above, are intended to cover non-exclusive inclusions. For example, a process, method, or system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," and "third," etc. are used to distinguish between different objects and are not used to describe a particular order.
Example one
The application provides a dual-mode data splicing method, which can be applied to terminal devices such as a security far-end video, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA) and the like, and the embodiment of the application does not limit the specific types of the terminal devices.
As shown in fig. 1, the present embodiment provides a dual-mode data splicing method, including:
s101, acquiring first video data in a first cache region corresponding to a first video connection mode; wherein the first video data comprises a plurality of first video data packets;
s102, acquiring second video data in a second cache region corresponding to a second video connection mode; wherein the second video data comprises a plurality of second video data packets;
in application, the first video connection mode and the second video connection mode include, but are not limited to, a relay connection mode and a Peer-to-Peer (P2P) in a server, where the first video connection mode is defined as the relay connection mode, and requires an additional server for assistance, and data flowing through the server incurs traffic cost, which increases operation cost. The first buffer is a relay buffer and is configured to buffer first video data input in the first video connection mode, where the first buffer may buffer one or more first video data, and may also buffer data, such as first audio data and a static load (PAT data), input in the first video connection mode, depending on a buffer size of the first buffer, which is not limited to this. The video data packaging format is not limited, and the output code stream (MPEG2-TS) packaging format is taken as an example for the description in this embodiment. The minimum processing unit of the MPEG2-TS is a PACKET (PACKET) and has 188 Bytes (Bytes). In this embodiment, the PACKET (PACKET) is specifically the first video PACKET.
In application, the second video connection mode is defined as the P2P connection mode, in which data is transmitted directly between network users or devices without passing through an additional server and without requiring domain name resolution, so no server traffic is consumed. However, NAT traversal may fail to penetrate, and the establishment process may take a long time, which degrades the user experience. The second cache region, the second video data, and the second video data packets are analogous to the first cache region, the first video data, and the first video data packets, and will not be discussed in detail.
S103, determining a first target video data packet in the plurality of first video data packets as a first splicing point of the first cache region, and determining a second target video data packet in the plurality of second video data packets as a second splicing point of the second cache region; wherein the first target video data packet matches the second target video data packet;
in the application, when the first video data and the second video data are connected with the playing video, the time for consuming the data is different, and the data content of the first video data and the data content of the second video data may be consistent or partially consistent, which is not limited herein. Specifically, when the video is played through the security remote, the security remote works in two modes (relay, P2P) at the same time within a certain period of time, and the two video data acquired through different modes can be mutually overlaid in content. Therefore, in a plurality of first video data packets of the first video data, there is a splice point that matches one or more second video data packets in the second video data.
As shown in fig. 2, in an embodiment, each of the first video data packets carries a corresponding first unique identification number, and each of the second video data packets carries a corresponding second unique identification number; step S103 includes:
s201, searching a corresponding first unique identification number in the first video data according to a second unique identification number carried by a second video data packet at the head or the tail of the second cache region;
s202, if the corresponding first unique identification number is found, determining a first video data packet corresponding to the first unique identification number as a first target video data packet as a first splicing point of the first cache region, and determining a second video data packet corresponding to the second unique identification number as a second target video data packet as a second splicing point of the second cache region.
In application, the first unique identification number carried by each first video data packet is unique within the first video data, and the second unique identification number carried by each second video data packet is likewise unique within the second video data; the first video data and the second video data contain identical video data packets in their data content. Therefore, the corresponding first unique identification number can be searched for according to the second unique identification number; if the identification numbers are consistent, the corresponding first video data packet is taken as the first target video data packet, and the second video data packet corresponding to the second unique identification number is determined as the second target video data packet.
In application, the head video data in the first cache region is continually being consumed while new data is written at the tail, so the head and tail video data of the first cache region are constantly changing; the same is true of the head and tail data in the second cache region. Specifically, if the video data in the second cache region covers 7-22S and the video data in the first cache region covers 5-20S, then the head video data packet of the second cache region is at 7S and the tail video data packet is at 22S. In the process of searching for the splicing point, 7S can find a corresponding splicing point within 5-20S, while 22S cannot; therefore 7S serves as both the first splicing point and the second splicing point in the two cache regions, as shown in the first case of fig. 3, without limitation.
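Steps S201-S202 can be sketched as follows: take the unique identification number carried by the head (or tail) packet of the second cache region and look it up among the first cache region's packets. The (id, payload) tuple shape is a hypothetical stand-in; the patent does not specify a concrete packet structure.

```python
# Sketch: probe the second buffer's head packet first, then its tail packet,
# and search the first buffer for a packet carrying the same unique ID.

def find_matching_splice(first_pkts, second_pkts):
    first_ids = {uid: idx for idx, (uid, _) in enumerate(first_pkts)}
    for probe in (0, -1):  # head of the second buffer, then its tail
        uid, _ = second_pkts[probe]
        if uid in first_ids:
            j = probe % len(second_pkts)
            return first_ids[uid], j  # (first splice point, second splice point)
    return None  # no match: caller retries (see the retry embodiment below)

# First buffer covers seconds 5-20, second covers 7-22: the head packet (7S)
# of the second buffer matches in the first buffer; the tail packet (22S) does not.
first = [(t, b"video") for t in range(5, 21)]
second = [(t, b"video") for t in range(7, 23)]
print(find_matching_splice(first, second))  # (2, 0)
```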
In an embodiment, after step S202, the method includes:
if the corresponding first unique identification number is not found, searching the corresponding first unique identification number in the first video data at preset intervals according to a second unique identification number carried by a second video data packet at the head or the tail of the second cache region;
if the corresponding first unique identification number is not found within the preset times, determining a first video data packet at the tail part in the first video data as a first target video data packet as a first splicing point of the first cache region, and determining a second video data packet at the tail part in the second video data as a second target video data packet as a second splicing point of the second cache region;
determining that the first target video data packet matches the second target video data packet.
In application, suppose that in the relay connection mode the first video data in the first cache region covers 5-20S, and in the P2P connection mode the second video data in the second cache region covers 3-22S; then neither the head second video data packet (3S) nor the tail second video data packet (22S) of the second cache region can find a corresponding splicing point within 5-20S of the first cache region, as shown in the fourth case of fig. 3. After a preset interval, for example 5S, the search is performed again. By then the head data of the first video data has been continuously consumed and new tail data written, so the first video data may cover 10-25S while the second video data covers 8-27S, and a corresponding splicing point still cannot be found in the first cache region. After repeating a preset number of times, for example 5 times, the first video data may cover 30-45S and the second video data 28-47S; 45S is then directly used as the new first splicing point and 47S as the new second splicing point for splicing. Under poor network conditions, the gaps between received video data are large in both the relay connection mode and the P2P connection mode; therefore, when no splicing point is found before the timeout, splicing is performed directly from the tail (45S) of the first video data. Although direct splicing may momentarily disorder the data, the previously played picture already exhibited frame tearing and corruption because of the poor network, so this is acceptable and constitutes a preferred implementation for balancing user experience and operation cost.
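The timed retry with tail fallback described above can be sketched as follows. The search callback, buffer accessors, retry count, and interval are illustrative assumptions; only the retry-then-splice-at-tail behaviour comes from the text.

```python
# Sketch: re-run the splice-point search every `interval` seconds up to
# `max_tries` times; on timeout, fall back to the tail packet of each buffer.
import time

def splice_with_fallback(get_first, get_second, search, max_tries=5, interval=0.0):
    first = second = None
    for _ in range(max_tries):
        first, second = get_first(), get_second()
        points = search(first, second)
        if points is not None:
            return points
        time.sleep(interval)  # let both buffers consume/refill before retrying
    # Timed out: splice directly at the tail of each buffer. This may briefly
    # disorder frames, which the text accepts under poor network conditions.
    return len(first) - 1, len(second) - 1

def no_overlap(first, second):  # stand-in search that never finds a match
    return None

first_buf = list(range(30, 46))   # 30-45S after several retries (example above)
second_buf = list(range(28, 48))  # 28-47S
print(splice_with_fallback(lambda: first_buf, lambda: second_buf, no_overlap))
# (15, 19): splice at 45S in the first buffer and 47S in the second
```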
In other embodiments, the purpose of repeating the preset number of times when no splicing point is found the first time is as follows: the rates at which video data is consumed and received in the relay connection mode and the P2P connection mode are not consistent. For example, the first video data in the first cache region may cover 5-20S while the second video data in the second cache region covers 3-22S in the P2P connection mode, the P2P connection mode handling video data faster than the relay connection mode; after repeating twice, the first video data covers 15-30S and the second video data 17-36S, and a corresponding splicing point can then be found.
In other embodiments, there may be a case where the first video data in the first buffer is 5-20S in the relay connection mode, and the second video data in the second buffer is 21-23S in the P2P connection mode, as shown in the fifth case of fig. 3 specifically, or a case where the second video data in the second buffer is 3-5S as shown in the sixth case of fig. 3 specifically, and the specific operation steps are as described above, which will not be discussed in detail.
In another embodiment, the step S101 includes: acquiring all first initial data in the first cache region; wherein the first initial data comprises the first video data;
acquiring the first video data packet according to the first identification number; wherein all of the first video data packets constitute the first video data.
In application, the first initial data is all the data cached in the first cache region, including the first video data, the first audio data, PAT data, and so on; each kind of data is composed of corresponding data packets, and during splicing the video data may be spliced using the first video data in the first cache region. Specifically, the first video data packets in the first cache region all carry a corresponding first identification number, such as 1, while the first audio data packets carry a corresponding first audio identification number, such as 2. In practical application, only the video data packets marked 1 need to be acquired, and these packets constitute the first video data. The purpose is that the corresponding splicing point in the second cache region is easier to query through video data packets: the probability that video data packets overlap is high, and the probability of mis-splicing unrelated data as matching data is less than one in ten thousand.
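The filtering of video packets out of the mixed initial data can be sketched as follows. The dictionary record shape is a hypothetical stand-in, and the type tags 1 (video) and 2 (audio) follow the example in the paragraph above.

```python
# Sketch: keep only the packets whose identification number marks them as
# video; the surviving packets constitute the first video data.

def extract_video_data(initial_data):
    VIDEO_ID = 1  # tag used for video packets in the example above
    return [pkt for pkt in initial_data if pkt["id"] == VIDEO_ID]

cache = [
    {"id": 1, "seq": 0, "payload": b"v0"},  # video
    {"id": 2, "seq": 0, "payload": b"a0"},  # audio: excluded
    {"id": 1, "seq": 1, "payload": b"v1"},  # video
]
print([p["payload"] for p in extract_video_data(cache)])  # [b'v0', b'v1']
```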
In another embodiment, the above-mentioned first unique identification number includes a first identification number and a first serial number, and before step S101, the method further includes:
judging whether all the acquired first serial numbers in the first video data are continuous serial numbers or not;
if all the first serial numbers are not consecutive, determining that the current first video data is missing packets;
and discarding the current first video data, and re-acquiring the first video data of the first cache region.
In application, the first video data includes a plurality of first video data packets, each carrying a corresponding first serial number. Under standard conditions, all the first serial numbers in the acquired first video data should be consecutive, such as 0, 1, …, N. Therefore, if it is determined that the first serial numbers in the first video data are not all consecutive, it is judged that the current first video data is missing packets; the current first video data is discarded, the first video data in the first cache region is acquired again, and the above steps are repeated.
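The serial-number continuity check can be sketched as follows; the fetch callback standing in for re-reading the cache region, and the bounded number of attempts, are assumptions.

```python
# Sketch: the serial numbers should run 0, 1, ..., N with no gap; otherwise
# discard the current first video data and re-acquire it from the cache region.

def sequence_is_continuous(seqs):
    return all(b - a == 1 for a, b in zip(seqs, seqs[1:]))

def fetch_valid_video_data(fetch, max_attempts=3):
    for _ in range(max_attempts):
        data = fetch()
        if sequence_is_continuous([pkt["seq"] for pkt in data]):
            return data
        # Gap detected: discard this copy and re-acquire from the cache region.
    return None

print(sequence_is_continuous([0, 1, 2]))  # True
print(sequence_is_continuous([0, 1, 3]))  # False: packet 2 is missing
print(fetch_valid_video_data(lambda: [{"seq": 0}, {"seq": 1}]))
```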
The corresponding second unique identification number includes a second identification number and a second serial number; their function and usage are consistent with those of the first identification number and the first serial number, and will not be discussed further.
In an embodiment, before step S101, the method includes:
establishing and storing an association relation between each first video data packet and a first unique identification number;
before step S102, the method includes:
and establishing and storing an association relation between each second video data packet and a second unique identification number.
In application, the association relationship is one-to-one: one first video data packet corresponds to one first unique identification number. The association relationship can be preset and stored in the security remote end, without limitation. The association relationship and storage location for the second data packets and second unique identification numbers are consistent with those for the first data packets and first unique identification numbers, and will not be discussed further.
Step S104, stopping writing new video data into the first cache region;
in application, after the first splicing point and the second splicing point are found, writing of new video data into the first cache region is suspended, and consumption of the first video data in the first cache region is suspended. The video data after the first splicing point in the first cache region is written in by the second video connection mode, and in the splicing process, the video data in the first cache region is temporarily consumed, so that the video data at the head part in the first cache region can be prevented from being continuously consumed, the video data packet of the first splicing point is consumed in the splicing process, and the possibility of data disorder caused in the splicing process is reduced.
In other embodiments, stopping the writing of new video data into the first cache region also includes stopping the writing of new video data into the second cache region, in order to ensure that the video data after the second splicing point in the second cache region is correctly written into the first cache region. Likewise, suspending consumption of the video data in the second cache region during splicing prevents the head video data of the second cache region from being continuously consumed to the point where the video data packet at the second splicing point is consumed mid-splice, further reducing the possibility of data disorder during splicing.
Step S105, in the second video connection mode, writing video data after the second splicing point in the second buffer area after the first splicing point in the first buffer area.
In an application, the video data after the second splicing point includes, but is not limited to, the video data starting from the second splicing point; it may also be the video data starting from some video data packet after the second splicing point, without limitation. Correspondingly, the new video data is written starting from the first video data packet after the first splicing point, but is not limited to immediately after the first splicing point. After the video data after the second splicing point of the second cache region has been written after the first splicing point of the first cache region, new data can continue to be written at the tail of the second cache region, ensuring the continuity of video data playback.
As shown in fig. 4, in an embodiment, step S105 includes:
s401, acquiring a first cache interval from the first splicing point to the tail part of first video data;
s402, acquiring a second cache interval from the second splicing point to the tail part of second video data;
s403, determining a new first splicing point according to the first cache interval, and determining a new second splicing point according to the second cache interval;
s404, in the second video connection mode, writing video data behind the new second splicing point of the second cache region after the new first splicing point of the first cache region.
In application, the first cache interval and the second cache interval are both intervals from the splicing point to the tail of the video data. A new first splicing point and a new second splicing point can be determined from these intervals accordingly, and in the second video connection mode the video data behind the new second splicing point of the second buffer area is written after the new first splicing point of the first buffer area, achieving seamless switching when the remote video is played.
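The determination of the new splicing points, covering the three cases elaborated in the following paragraphs, can be sketched as follows (a hypothetical model in which each buffer is a sorted list of packet timestamps in seconds; all function and variable names are assumptions):

```python
def resplice(first_buf, second_buf):
    """Sketch of step S403: the two buffers share a matched splicing
    point, so comparing their tails is equivalent to comparing the cache
    intervals. Grow or trim the first buffer so that the new splicing
    point is its tail; return the updated first buffer, the new splicing
    point, and any discarded (invalid) packets."""
    discarded = []
    if second_buf[-1] > first_buf[-1]:
        # Second cache interval larger: append the extra second interval
        # area to the tail of the first video data (steps S502-S505).
        first_buf = first_buf + [t for t in second_buf if t > first_buf[-1]]
    elif second_buf[-1] < first_buf[-1]:
        # Second cache interval smaller: first-buffer packets past the
        # second buffer's tail form the first interval area and are
        # treated as invalid data (steps S601-S603 plus the discard).
        discarded = [t for t in first_buf if t > second_buf[-1]]
        first_buf = [t for t in first_buf if t <= second_buf[-1]]
    # Equal intervals fall through unchanged. In every case the new first
    # and second splicing points coincide at the second buffer's tail.
    return first_buf, second_buf[-1], discarded
```

With the fig. 3 numbers (first buffer 5-20S against a second buffer of 7-22S or 7-17S), this reproduces the first and second cases described in the paragraphs that follow.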
Referring to fig. 5, in one embodiment, step S403 includes:
s501, judging whether the second cache interval is larger than the first cache interval;
s502, if the second cache interval is larger than the first cache interval, acquiring a second interval area which is more than the first cache interval in the second cache interval;
s503, writing the second video data in the second interval area into the tail part of the first video data in the first cache area to form new first video data;
s504, determining the tail part of the new first video data as a new first splicing point;
and S505, determining the second video data tail as a new second splicing point.
In a specific application, the first buffer area and the second buffer area can each be regarded as holding corresponding first video data and second video data. Referring to fig. 3, in the relay connection mode the first video data in the first buffer area spans 5-20S, and in the P2P connection mode the second video data in the second buffer area spans 7-22S, with 7S serving as both the first splicing point and the second splicing point. The first cache interval in the first buffer area is therefore 7-20S, i.e. a 13S interval, and the second cache interval in the second buffer area is 7-22S, i.e. a 15S interval, so the second cache interval is determined to be larger than the first cache interval. The extra second interval area is the 20-22S interval. After the 20-22S second video data in the second buffer area is written to the tail (20S) of the first video data in the first buffer area, new first video data (5-22S) is formed; the tail of the new first video data (22S) then serves as the new first splicing point, the tail of the second video data (22S) is determined as the new second splicing point, and in the P2P connection mode the video data after the new second splicing point (22S) of the second buffer area is written after the new first splicing point (22S) of the first buffer area, as shown in the first case of fig. 3. The purpose is that, if data were written into the first buffer area directly using 7S as the splicing point while the video data in the first buffer area is being consumed from the head (5S) onward, the video data packet at the first splicing point (7S) might be consumed during the splicing process, causing the data splicing to become disordered.
In other applications, if the second cache interval is equal to the first cache interval, the tail of the first video data is determined as the new first splicing point, the tail of the second video data is determined as the new second splicing point, and in the second video connection mode the video data behind the new second splicing point of the second buffer area is written after the new first splicing point of the first buffer area.
In a specific application, referring to fig. 3, if the first video data in the first buffer area spans 5-20S and, in the P2P connection mode, the second video data in the second buffer area spans 7-20S (not shown in the figure), with 7S serving as both the first splicing point and the second splicing point, then the first cache interval in the first buffer area is 7-20S, i.e. 13S, and the second cache interval in the second buffer area is also 7-20S, i.e. 13S, so the second cache interval is determined to be equal to the first cache interval. The tail of the first video data (20S) is then taken as the new first splicing point, the tail of the second video data (20S) is determined as the new second splicing point, and in the P2P connection mode the video data behind the new second splicing point (20S) of the second buffer area is written after the new first splicing point (20S) of the first buffer area. The purpose, as before, is to avoid writing data into the first buffer area with 7S as the splicing point while the first buffer area is being consumed from its head (5S), which could consume the packet at the first splicing point (7S) mid-splice and disorder the data splicing.
As shown in fig. 5, in an embodiment, after step S501, the method includes:
s601, if the second cache interval is smaller than the first cache interval, acquiring a first position in the first video data corresponding to the tail of the second video data;
s602, determining the first position as a new first splicing point;
and S603, determining the tail part of the second video data as a new second splicing point.
In application, referring to fig. 3, in the relay connection mode the first video data in the first buffer area spans 5-20S, and in the P2P connection mode the second video data in the second buffer area spans 7-17S. Accordingly, the tail video data (17S) of the second buffer area corresponds to the first position (17S) in the first video data (5-20S), and this first position (17S) is determined as the new first splicing point; the tail of the second video data (17S) is determined as the new second splicing point. In the P2P connection mode, the video data after the new second splicing point (17S) of the second buffer area is written after the new first splicing point (17S) of the first buffer area, as shown in the second case of fig. 3. The purpose, again, is to avoid writing data with 7S as the splicing point while the first buffer area is being consumed from its head (5S), which could consume the packet at the first splicing point (7S) mid-splice and disorder the data splicing.
In an embodiment, after step S601, the method includes:
taking an interval from the new first splicing point to the tail part of the first video data in the first cache interval as a first interval area;
judging the first video data in the first interval area as invalid data;
discarding the invalid data.
In application, referring to fig. 3, in the relay connection mode the first video data in the first buffer area spans 5-20S, and in the P2P connection mode the second video data in the second buffer area spans 7-17S. Accordingly, the interval from the new first splicing point (17S) to the tail of the first video data (20S) is taken as the first interval area (17-20S). Since, in the second video connection mode, new video data starting after the new second splicing point (17S) is written after the new first splicing point (17S) in the first buffer area, the first video data in the first interval area (17-20S) no longer needs to be played at the remote end; it is therefore determined to be invalid data and is discarded, as shown in the second case of fig. 3. This avoids confusion between the two streams of video data during playback and prevents picture corruption when the data after the new first splicing point (17S) in the first buffer area continues to be consumed. If, instead, the splice had to wait for video data beyond the tail of the first video data (20S), the user would have to wait until the tail data of the second buffer area under the P2P connection mode reached or exceeded 20S, which prolongs the splicing time and degrades the user's video-watching experience.
In other embodiments, in the relay connection mode the first video data in the first buffer area spans 5-20S, and in the P2P connection mode the second video data in the second buffer area spans 3-17S. Accordingly, the splicing point is known to be 17S, i.e. the new first splicing point is 17S, and the tail video data of the second buffer area is also 17S, i.e. the new second splicing point is unchanged at 17S. Similarly, the interval from the new first splicing point (17S) to the tail of the first video data (20S) is taken as the first interval area (17-20S); since the second cache interval contains only the single video data packet at the new second splicing point (17S), the second cache interval is smaller than the first cache interval. In the P2P connection mode, new video data starting from the new second splicing point (17S) is written after the new first splicing point (17S) in the first buffer area, so the first video data in the first interval area (17-20S) no longer needs to be played at the remote end; it is determined to be invalid data and is discarded, as shown in the third case of fig. 3. At this time, the P2P connection mode writes the video data after the new second splicing point (17S) into the first buffer area after the new first splicing point (17S). The first video data in the first buffer area is consumed from the head (5S) up to the new first splicing point (17S), and this consumption takes a certain amount of time, which allows the splicing to proceed smoothly; when the video data at the new first splicing point (17S) is played, no picture corruption occurs.
In this embodiment, the application first makes the video available for acquisition and playback at any time in the relay connection mode, then searches the second buffer area under the P2P connection mode for a splicing point matching the first buffer area, and plays the video data after the splicing point through the P2P connection mode. This solves the problem that, under the premise of correspondingly reducing operating cost, seamless switching cannot be achieved while the remote video is being played at any time, thereby striking a balance between user experience and operating cost.
Example two
As shown in fig. 6, the present application further provides a dual-mode data splicing device 100, comprising:
a first obtaining module 10, configured to obtain first video data in a first cache region corresponding to a first video connection mode; wherein the first video data comprises a plurality of first video data packets;
a second obtaining module 20, configured to obtain second video data in a second cache region corresponding to a second video connection mode; wherein the second video data comprises a plurality of second video data packets;
a splicing module 30, configured to obtain a first target video data packet from the plurality of first video data packets as a first splicing point of the first buffer, and obtain a second target video data packet from the plurality of second video data packets as a second splicing point of the second buffer; wherein the first target video data packet matches the second target video data packet;
a stopping module 40, configured to stop writing new video data into the first buffer;
a writing module 50, configured to write, in the second video connection mode, video data after the second splicing point of the second buffer area after the first splicing point of the first buffer area.
In an embodiment, each of the first video data packets carries a corresponding first unique identification number, and each of the second video data packets carries a corresponding second unique identification number; the splicing module 30 is also configured to:
searching a corresponding first unique identification number in the first video data according to a second unique identification number carried by a second video data packet at the head or the tail of the second cache region;
if the corresponding first unique identification number is found, determining a first video data packet corresponding to the first unique identification number as a first target video data packet as a first splicing point of the first cache region, and determining a second video data packet corresponding to the second unique identification number as a second target video data packet as a second splicing point of the second cache region.
In one embodiment, the splicing module 30 is further configured to:
if the corresponding first unique identification number is not found, searching the corresponding first unique identification number in the first video data at preset intervals according to a second unique identification number carried by a second video data packet at the head or the tail of the second cache region;
if the corresponding first unique identification number is not found within the preset times, determining a first video data packet at the tail part in the first video data as a first target video data packet as a first splicing point of the first cache region, and determining a second video data packet at the tail part in the second video data as a second target video data packet as a second splicing point of the second cache region;
determining that the first target video data packet matches the second target video data packet.
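The lookup performed by the splicing module, together with its retry-and-fallback behavior, can be sketched as follows (the retry count, retry interval, and all names are illustrative assumptions, not values from the patent):

```python
import time


def find_by_uid(first_packets, probe_uid):
    """Return the index of the first-buffer packet carrying probe_uid,
    or None when no packet matches."""
    for i, (uid, _payload) in enumerate(first_packets):
        if uid == probe_uid:
            return i
    return None


def choose_splice_point(get_first_packets, probe_uid, retries=3, interval_s=0.01):
    """Retry the lookup a preset number of times at a preset interval.
    If no match appears within the preset number of attempts, fall back
    to the tail packet of the first buffer as the splicing point (the
    last-resort pairing described above)."""
    for _ in range(retries):
        idx = find_by_uid(get_first_packets(), probe_uid)
        if idx is not None:
            return idx
        time.sleep(interval_s)
    return len(get_first_packets()) - 1  # tail packet as the splicing point
```

The same tail-packet fallback would be applied to the second buffer, so that the pair of tail packets is treated as matched.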
In one embodiment, the write module 50 is further configured to:
acquiring a first cache interval from the first splicing point to the tail part of first video data;
acquiring a second cache interval from the second splicing point to the tail part of second video data;
determining a new first splicing point according to the first cache interval, and determining a new second splicing point according to the second cache interval;
and under the second video connection mode, writing video data behind the new second splicing point of the second cache region after the new first splicing point of the first cache region.
In one embodiment, the write module 50 is further configured to:
judging whether the second cache interval is larger than the first cache interval or not;
if the second cache interval is larger than the first cache interval, acquiring a second interval area which is more than the first cache interval in the second cache interval;
writing the second video data in the second interval area into the tail part of the first video data in the first cache area to form new first video data;
determining the new first video data tail as a new first splicing point;
and determining the second video data tail as a new second splicing point.
In one embodiment, the write module 50 is further configured to:
if the second cache interval is smaller than the first cache interval, acquiring a first position in the first video data corresponding to the tail of the second video data;
determining the first location as a new first splice point;
determining the second video data tail as a new second splice point.
In one embodiment, the write module 50 is further configured to:
taking an interval from the new first splicing point to the tail part of the first video data in the first cache interval as a first interval area;
judging the first video data in the first interval area as invalid data;
discarding the invalid data.
In one embodiment, the dual-mode data splicing device 100 further comprises:
the first establishing module is used for establishing and storing the association relationship between each first video data packet and one first unique identification number;
and the second establishing module is used for establishing and storing the association relationship between each second video data packet and one second unique identification number.
In other embodiments, the first unique identification number comprises a first identification number and a first serial number, and the second unique identification number comprises a second identification number and a second serial number; the dual-mode data splicing device further comprises:
a third obtaining module, configured to obtain all first initial data in the first cache region and all second initial data in the second cache region; wherein the first initial data comprises the first video data and the second initial data comprises the second video data;
a fourth obtaining module, configured to obtain the first video data packet according to the first identification number, and obtain the second video data packet according to the second identification number; wherein all of the first video data packets constitute the first video data and all of the second video data packets constitute the second video data;
the judging module is used for judging whether all the acquired first serial numbers in the first video data are continuous serial numbers or not and judging whether all the acquired second serial numbers in the second video data are continuous serial numbers or not;
a determining module, configured to determine that the current first video data is missing if all the first sequence numbers are not consecutive sequence numbers, and determine that the current second video data is missing if all the second sequence numbers are not consecutive sequence numbers;
and the discarding module is used for discarding the current first video data and reacquiring the first video data of the first cache region, and discarding the current second video data and reacquiring the second video data of the second cache region.
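The continuity check performed by the judging module can be sketched as follows (function and parameter names are assumptions):

```python
def sequence_complete(seq_numbers):
    """True when the packet sequence numbers form one consecutive run,
    meaning no packet of the acquired video data is missing."""
    return all(b - a == 1 for a, b in zip(seq_numbers, seq_numbers[1:]))
```

A buffer whose sequence numbers jump is judged to have missing data; the corresponding video data is then discarded and re-acquired (the re-acquisition itself is not shown here).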
In this embodiment, the video can first be acquired and played at any time in the first video connection mode; the second buffer area under the second video connection mode is then searched for a splicing point matching the first buffer area, and the video data after the splicing point is played through the second video connection mode. This solves the problem that, under the premise of correspondingly reducing operating cost, seamless switching cannot be achieved while the remote video is being played at any time, thereby striking a balance between user experience and operating cost.
EXAMPLE III
An embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the method of any one of the foregoing embodiments is implemented.
An embodiment of the present application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the method of any one of the foregoing embodiments is implemented.
The present application provides a computer program product which, when run on a terminal device, causes the terminal device to execute the dual-mode data splicing method of any one of the embodiments of the first aspect described above.
Fig. 7 is a schematic diagram of a terminal device 80 according to an embodiment of the present application. As shown in fig. 7, the terminal device 80 of this embodiment includes: a processor 803, a memory 801 and a computer program 802 stored in the memory 801 and executable on the processor 803. The processor 803 implements the steps in the various method embodiments described above, such as the steps S101 to S105 shown in fig. 1, when executing the computer program 802. Alternatively, the processor 803 realizes the functions of the modules/units in the above-described device embodiments when executing the computer program 802.
Illustratively, the computer program 802 may be partitioned into one or more modules/units that are stored in the memory 801 and executed by the processor 803 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 802 in the terminal device 80. For example, the computer program 802 may be divided into a first obtaining module, a second obtaining module, a splicing module, a suspending module, and a writing module, and each module has the following specific functions:
the first obtaining module is used for obtaining first video data in a first cache region corresponding to a first video connection mode; wherein the first video data comprises a plurality of first video data packets;
the second obtaining module is used for obtaining second video data in a second cache region corresponding to the second video connection mode; wherein the second video data comprises a plurality of second video data packets;
a splicing module, configured to determine a first target video data packet in the plurality of first video data packets as a first splicing point of the first buffer, and determine a second target video data packet in the plurality of second video data packets as a second splicing point of the second buffer; wherein the first target video data packet matches the second target video data packet;
the suspending module is used for stopping the writing of new video data into the first cache region;
and the writing module is used for writing video data behind the second splicing point of the second cache region into the first cache region in the second video connection mode.
The terminal device 80 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 803 and the memory 801. Those skilled in the art will appreciate that fig. 7 is merely an example of the terminal device 80 and does not constitute a limitation on it; the terminal device may include more or fewer components than shown, combine certain components, or use different components. For example, it may also include input and output devices, network access devices, buses, and the like.
The Processor 803 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 801 may be an internal storage unit of the terminal device 80, such as a hard disk or internal memory of the terminal device 80. The memory 801 may also be an external storage device of the terminal device 80, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the terminal device 80. In one embodiment, the memory 801 may include both an internal storage unit and an external storage device of the terminal device 80. The memory 801 is used to store the computer program and the other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/communication terminal and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/communication terminal are merely illustrative, and for example, the division of the modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods in the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in each jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals, in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A dual-mode data splicing method, comprising:
acquiring first video data in a first cache region corresponding to a first video connection mode; wherein the first video data comprises a plurality of first video data packets;
acquiring second video data in a second cache region corresponding to a second video connection mode; wherein the second video data comprises a plurality of second video data packets;
determining a first target video data packet in the plurality of first video data packets as a first splicing point of the first cache region, and determining a second target video data packet in the plurality of second video data packets as a second splicing point of the second cache region; wherein the first target video data packet matches the second target video data packet;
stopping writing new video data into the first cache region;
and under the second video connection mode, writing video data behind a second splicing point of the second cache region behind the first splicing point of the first cache region.
2. The dual-mode data splicing method according to claim 1, wherein each of said first video data packets carries a corresponding first unique identification number, and each of said second video data packets carries a corresponding second unique identification number;
the determining a first target video data packet in the first video data packets as a first splicing point of the first buffer area and a second target video data packet in the second video data packets as a second splicing point of the second buffer area includes:
searching a corresponding first unique identification number in the first video data according to a second unique identification number carried by a second video data packet at the head or the tail of the second cache region;
if the corresponding first unique identification number is found, determining a first video data packet corresponding to the first unique identification number as a first target video data packet as a first splicing point of the first cache region, and determining a second video data packet corresponding to the second unique identification number as a second target video data packet as a second splicing point of the second cache region.
3. The dual-mode data splicing method as claimed in claim 2, wherein after searching for the corresponding first unique identification number in the first video data according to the second unique identification number carried by the second video data packet at the head or the tail of the second buffer area, the method comprises:
if the corresponding first unique identification number is not found, searching the corresponding first unique identification number in the first video data at preset intervals according to a second unique identification number carried by a second video data packet at the head or the tail of the second cache region;
if the corresponding first unique identification number is not found within the preset times, determining a first video data packet at the tail part in the first video data as a first target video data packet as a first splicing point of the first cache region, and determining a second video data packet at the tail part in the second video data as a second target video data packet as a second splicing point of the second cache region;
determining that the first target video data packet matches the second target video data packet.
4. The dual-mode data splicing method according to claim 2, wherein the writing, in the second video connection mode, video data behind the second splicing point of the second cache region after the first splicing point of the first cache region comprises:
acquiring a first cache interval from the first splicing point to the tail of the first video data;
acquiring a second cache interval from the second splicing point to the tail of the second video data;
determining a new first splicing point according to the first cache interval, and determining a new second splicing point according to the second cache interval; and
in the second video connection mode, writing video data behind the new second splicing point of the second cache region after the new first splicing point of the first cache region.
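A minimal sketch of the two tail intervals of claim 4, with assumed names; each interval runs from a splicing point to the tail of its buffer, and the new splicing points of claims 5 to 7 are derived from them:

```python
def tail_intervals(first_data, second_data, first_sp, second_sp):
    """Claim 4 sketch (assumed names): the interval from each splicing
    point to the tail of its buffer, modeled as list slices."""
    first_interval = first_data[first_sp:]      # first splicing point .. tail
    second_interval = second_data[second_sp:]   # second splicing point .. tail
    return first_interval, second_interval
```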
5. The dual-mode data splicing method according to claim 4, wherein the determining a new first splicing point according to the first cache interval, and determining a new second splicing point according to the second cache interval comprises:
determining whether the second cache interval is larger than the first cache interval;
if the second cache interval is larger than the first cache interval, acquiring, in the second cache interval, a second interval area that exceeds the first cache interval;
writing the second video data in the second interval area to the tail of the first video data in the first cache region to form new first video data;
determining the tail of the new first video data as the new first splicing point; and
determining the tail of the second video data as the new second splicing point.
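The larger-second-interval branch of claim 5 might look like the following sketch (assumed names; buffers modeled as Python lists, splicing points as indices):

```python
def splice_when_second_longer(first_data, second_data, first_sp, second_sp):
    """Claim 5 sketch (assumed names): when the second tail interval is
    longer, write its surplus (the second interval area) onto the tail
    of the first data; both new splicing points become the tails."""
    first_interval = first_data[first_sp:]
    second_interval = second_data[second_sp:]
    assert len(second_interval) > len(first_interval)
    surplus = second_interval[len(first_interval):]   # second interval area
    new_first_data = first_data + surplus             # new first video data
    new_first_sp = len(new_first_data) - 1            # tail of new first data
    new_second_sp = len(second_data) - 1              # tail of second data
    return new_first_data, new_first_sp, new_second_sp
```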
6. The dual-mode data splicing method according to claim 5, wherein the determining whether the second cache interval is larger than the first cache interval comprises:
if the second cache interval is smaller than the first cache interval, acquiring a first position in the first video data that corresponds to the tail of the second video data;
determining the first position as the new first splicing point; and
determining the tail of the second video data as the new second splicing point.
7. The dual-mode data splicing method according to claim 6, wherein after the acquiring, when the second cache interval is smaller than the first cache interval, a first position in the first video data that corresponds to the tail of the second video data, the method comprises:
taking the interval from the new first splicing point to the tail of the first video data in the first cache interval as a first interval area;
determining the first video data in the first interval area to be invalid data; and
discarding the invalid data.
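The smaller-second-interval branch of claims 6 and 7 can be sketched similarly (assumed names); the first buffer's data past the new first splicing point is the first interval area, treated as invalid and discarded:

```python
def splice_when_second_shorter(first_data, second_data, first_sp, second_sp):
    """Claims 6-7 sketch (assumed names): when the second tail interval
    is shorter, locate the position in the first data matching the
    second data's tail, and drop everything after it as invalid."""
    second_interval = second_data[second_sp:]
    # First position in the first data corresponding to the second tail.
    new_first_sp = first_sp + len(second_interval) - 1
    # Claim 7: the first interval area beyond this point is invalid data.
    new_first_data = first_data[:new_first_sp + 1]
    new_second_sp = len(second_data) - 1              # tail of second data
    return new_first_data, new_first_sp, new_second_sp
```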
8. The dual-mode data splicing method according to any one of claims 1 to 7, wherein before the obtaining first video data in a first cache region corresponding to a first video connection mode, the method comprises:
establishing and storing an association between each first video data packet and its first unique identification number;
and before the obtaining second video data in a second cache region corresponding to a second video connection mode, the method comprises:
establishing and storing an association between each second video data packet and its second unique identification number.
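The association of claim 8 amounts to assigning each arriving packet a monotonically increasing identifier and storing the mapping; a sketch with assumed names:

```python
import itertools

class PacketIndexer:
    """Claim 8 sketch (assumed names): assign each video data packet a
    unique identification number on arrival and store the association,
    so splicing points can later be matched across the two buffers."""

    def __init__(self):
        self._next_uid = itertools.count(1)   # monotonically increasing UIDs
        self.uid_to_packet = {}               # stored association

    def register(self, packet):
        uid = next(self._next_uid)
        self.uid_to_packet[uid] = packet
        return uid
```

One indexer per connection mode would maintain the per-buffer associations the claims describe.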
9. A dual-mode data splicing device, comprising:
a first obtaining module, configured to obtain first video data in a first cache region corresponding to a first video connection mode, wherein the first video data comprises a plurality of first video data packets;
a second obtaining module, configured to obtain second video data in a second cache region corresponding to a second video connection mode, wherein the second video data comprises a plurality of second video data packets;
a splicing module, configured to determine a first target video data packet among the plurality of first video data packets as a first splicing point of the first cache region, and determine a second target video data packet among the plurality of second video data packets as a second splicing point of the second cache region, wherein the first target video data packet matches the second target video data packet;
a suspension module, configured to stop writing new video data into the first cache region; and
a writing module, configured to write, in the second video connection mode, video data behind the second splicing point of the second cache region into the first cache region.
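The modules of the claim 9 device can be modeled as methods on a single class. In this sketch all names are assumed, and packet payload equality stands in for the unique identification number matching of claim 2:

```python
class DualModeSplicer:
    """Claim 9 sketch (assumed names): the obtaining, splicing,
    suspension, and writing modules modeled over two list buffers."""

    def __init__(self):
        self.first_buffer = []     # mode-1 cache region
        self.second_buffer = []    # mode-2 cache region
        self.writing_first = True  # suspension module state

    def acquire(self, first_packets, second_packets):
        # First and second obtaining modules.
        self.first_buffer.extend(first_packets)
        self.second_buffer.extend(second_packets)

    def find_splice(self):
        # Splicing module: match the head of the second buffer against
        # the first buffer (payload equality as a stand-in for UIDs).
        for i, pkt in enumerate(self.first_buffer):
            if self.second_buffer and pkt == self.second_buffer[0]:
                return i, 0
        return None

    def switch(self):
        # Suspension + writing modules: stop writing to the first
        # buffer and append mode-2 data behind the splicing point.
        points = self.find_splice()
        if points is None:
            return self.first_buffer
        self.writing_first = False
        first_sp, second_sp = points
        return (self.first_buffer[:first_sp + 1]
                + self.second_buffer[second_sp + 1:])
```

Usage: feed both buffers, then call `switch()` to obtain the spliced stream across the mode change.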
10. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1 to 8 when executing the computer program.
CN201910957803.9A 2019-10-10 2019-10-10 Dual-mode data splicing method and device and terminal equipment Active CN110677724B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910957803.9A CN110677724B (en) 2019-10-10 2019-10-10 Dual-mode data splicing method and device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910957803.9A CN110677724B (en) 2019-10-10 2019-10-10 Dual-mode data splicing method and device and terminal equipment

Publications (2)

Publication Number Publication Date
CN110677724A true CN110677724A (en) 2020-01-10
CN110677724B CN110677724B (en) 2022-02-18

Family

ID=69081302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910957803.9A Active CN110677724B (en) 2019-10-10 2019-10-10 Dual-mode data splicing method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN110677724B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1805445A (en) * 2006-01-12 2006-07-19 北京邮电大学 Method of seamless switching for transmission of mobile stream media
CN101146319A (en) * 2006-09-11 2008-03-19 华为技术有限公司 A gateway for supporting seamless switching multi-mode terminal and its method
CN101227745A (en) * 2008-02-02 2008-07-23 华为软件技术有限公司 System, apparatus and method for switching network of mobile multimedia business
CN101395851A (en) * 2006-02-28 2009-03-25 汤姆逊许可公司 Seamless handover method and system
US20110239262A1 (en) * 2008-12-12 2011-09-29 Huawei Technologies Co., Ltd. Channel switching method, channel switching device, and channel switching system
US20140028779A1 (en) * 2012-07-30 2014-01-30 Kabushiki Kaisha Toshiba Video transmitting apparatus and video transmitting method
CN105745961A (en) * 2013-11-20 2016-07-06 三菱电机株式会社 Wireless communication system, transmitter, receiver, and communication terminal


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHOU, Hao: "Research on a Fast Handover Mechanism in Underground Broadband Mobile Communication Networks", China Master's Theses Full-text Database, Information Science and Technology Series *

Also Published As

Publication number Publication date
CN110677724B (en) 2022-02-18

Similar Documents

Publication Publication Date Title
CN107566786B (en) Method and device for acquiring monitoring video and terminal equipment
WO2017185697A1 (en) Bandwidth sharing method and apparatus
US20140244854A1 (en) Content Streaming Between Devices
CN110535961B (en) Resource acquisition method and device, electronic equipment and storage medium
CN104902017A (en) Remote interaction method of multi-screen synchronous display supporting QoS
EP3253065B1 (en) Streaming media resource downloading method and apparatus, and terminal device
WO2022111027A1 (en) Video acquisition method, electronic device, and storage medium
CN110691331B (en) Conference demonstration method and device based on Bluetooth mesh technology and terminal equipment
CN113395353A (en) File downloading method and device, storage medium and electronic equipment
JP2023510833A (en) Message processing method, device and electronic equipment
EP4297416A1 (en) Angle-of-view switching method, apparatus and system for free angle-of-view video, and device and medium
CN110677724B (en) Dual-mode data splicing method and device and terminal equipment
US20170180468A1 (en) Method, electronic device and non-transitory computer-readable storage medium for connecting P2P network node
CN105721392B (en) A kind of method, apparatus and system for recommending application
CN112486825B (en) Multi-lane environment architecture system, message consumption method, device, equipment and medium
WO2024067114A1 (en) Screen projection method and apparatus, electronic device and storage medium
CN111581560B (en) Page display method and device, electronic equipment and storage medium
CN113542856A (en) Reverse playing method, device, equipment and computer readable medium for online video
CN114584822B (en) Synchronous playing method and device, terminal equipment and storage medium
CN113031895A (en) Screen projection control method and device and electronic equipment
CN110917625B (en) Game equipment display method and device, electronic equipment and storage medium
CN112040328A (en) Data interaction method and device and electronic equipment
CN115114051B (en) Node communication method, device, equipment and storage medium
CN113076195B (en) Object shunting method and device, readable medium and electronic equipment
CN113115074B (en) Video jamming processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant