CN106375793B - Video structured information superposition method, user terminal and superposition system
Info
- Publication number: CN106375793B (application CN201610763495.2A)
- Authority: CN (China)
- Prior art keywords: video, information, video frame, structured information, structured
- Prior art date
- Legal status: Active (assumed by the source page; not a legal conclusion)
Classifications
- H04N21/254: Selective content distribution; management at additional data server, e.g. shopping server, rights management server
- H04N21/4351: Client devices; processing of additional data, involving reassembling additional data, e.g. rebuilding an executable program from recovered modules
- H04N21/44: Client devices; processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream
- H04N5/265: Studio circuits; mixing
Abstract
The invention provides a method, a user terminal and a system for overlaying video structured information. The method comprises the following steps: a user terminal receives at least one video frame, the video frame being sent synchronously by a video source server to the user terminal and to an algorithm server; the user terminal also receives structured information sent by the algorithm server, the structured information being generated by the algorithm server from a monitoring target identified in the video frame; finally, the user terminal superimposes the custom information onto the video frame according to the timestamp of the video frame, the video timestamp in the structured information, and the position coordinates of the custom information. In the embodiments of the invention the algorithm server is used only to generate the structured information and does not perform the superposition; the superposition is executed by the user terminal. This increases the superposition speed of the structured information, ensures the real-time performance of video stream display, and reduces the software and hardware requirements on the superposition system.
Description
Technical Field
The invention relates to the technical field of video processing, in particular to a method, a user terminal and a system for overlaying video structured information.
Background
At present, with the rapid development of video processing technology, video overlay techniques can meet different users' requirements for custom information in video: certain custom information is overlaid at a specified position in particular video frames, and the video carrying the custom information is then presented to the user. For example, when a user terminal subscribes to a certain topic, the topic information must be overlaid onto the video and the resulting video sent to the corresponding user terminal.
The related art provides a method for superimposing video structured information that works roughly as follows: a video source server sends a video stream to an algorithm server; the algorithm server superimposes the structured information it produces directly onto the corresponding images in the video; the video carrying the structured information is re-encoded and published to a streaming media server; finally, the user terminal pulls the video stream from the streaming media server and plays it. This approach requires an additional streaming media server, which raises the software and hardware cost of the system. Moreover, when processing real-time video the algorithm server must both generate the structured information and superimpose it onto the video; this involves a large amount of computation and must run in real time, i.e. at least as fast as the video frame rate, which an ordinary computer can hardly achieve. Since multiple video channels must also be handled simultaneously, a dedicated server is required to complete these tasks in parallel.
In the process of implementing the invention, the inventor found at least the following problem in the related art: the algorithm server must generate the structured information for the video stream and, at the same time, superimpose that information at specific positions in the corresponding video frames, which concentrates a heavy processing load on the algorithm server and places high demands on the configuration of the superposition system.
Disclosure of Invention
In view of this, an object of the embodiments of the present invention is to provide a method, a user terminal and a system for overlaying video structured information, so as to increase the overlay speed of the structured information, thereby both ensuring the real-time performance of video stream display and reducing the software and hardware requirements on the overlay system.
In a first aspect, an embodiment of the present invention provides a method for overlaying structured information of a video, where the method includes:
a user terminal receives at least one video frame, where the video frame is sent synchronously by a video source server to the user terminal and to an algorithm server;
the user terminal receives structured information sent by the algorithm server, where the structured information is generated by the algorithm server according to a monitoring target in the video frame and includes a video timestamp, custom information corresponding to the monitoring target, and position coordinates of the custom information;
and the user terminal superimposes the custom information onto the video frame according to the timestamp of the video frame, the video timestamp, and the position coordinates.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the user terminal receives the video frame and the structured information through different threads.
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where after the user terminal receives at least one video frame, the method further includes: caching all the video frames, decoding and surface-rendering the cached video frames one by one, and outputting the surface-rendered video frames frame by frame in surface-rendering order;
and after the user terminal receives the structured information sent by the algorithm server, the method further includes: sequentially storing the received pieces of structured information into a cache queue.
With reference to the second possible implementation manner of the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the superimposing, by the user terminal, of the custom information onto the video frame according to the timestamp of the video frame, the video timestamp and the position coordinates includes:
selecting the structured information corresponding to the video frame from the cache queue according to the timestamp of the video frame and the video timestamp;
extracting the custom information and the position coordinates from the structured information;
and superimposing the custom information at the position coordinates in the video frame.
With reference to the third possible implementation manner of the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where before the selecting of the structured information corresponding to the video frame from the cache queue according to the timestamp of the video frame and the video timestamp, the method further includes:
judging whether structured information exists in the cache queue and, if so, selecting the pieces of structured information one by one in the order in which they were stored in the cache queue and extracting the video timestamp from each piece.
With reference to the fourth possible implementation manner of the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the selecting of the structured information corresponding to the video frame from the cache queue according to the timestamp of the video frame and the video timestamp includes:
determining a processing manner for the structured information according to the video timestamp and the timestamp of the video frame;
and processing the structured information accordingly, where the processing manner is one of: taking the structured information as the structured information corresponding to the video frame, returning the structured information to the cache queue, or removing the structured information from the cache queue;
and when the processing manner is determined to be taking the structured information as the structured information corresponding to the video frame, or returning the structured information to the cache queue, stopping the selection of the next piece of structured information and selecting the corresponding structured information for the next frame after the video frame.
With reference to the fifth possible implementation manner of the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where the determining of the processing manner for the structured information according to the video timestamp and the timestamp of the video frame includes:
when the video timestamp matches the timestamp of the video frame, determining that the structured information is to be taken as the structured information corresponding to the video frame;
and when the video timestamp does not match the timestamp of the video frame, judging whether the video timestamp is greater than the timestamp of the video frame; if so, determining that the structured information is to be returned to the cache queue, and if not, determining that the structured information is to be removed from the cache queue.
In a second aspect, an embodiment of the present invention further provides a user terminal, where the user terminal includes:
a video frame receiving module, configured to receive at least one video frame, where the video frame is sent synchronously by a video source server to the user terminal and to an algorithm server;
a structured information receiving module, configured to receive structured information sent by the algorithm server, where the structured information is generated by the algorithm server according to a monitoring target in the video frame and includes a video timestamp, custom information corresponding to the monitoring target, and position coordinates of the custom information;
and an information superposition module, configured to superimpose the custom information onto the video frame according to the timestamp of the video frame, the video timestamp, and the position coordinates.
With reference to the second aspect, an embodiment of the present invention provides a first possible implementation manner of the second aspect, where the user terminal receives the video frame and the structured information through different threads.
With reference to the second aspect, an embodiment of the present invention provides a second possible implementation manner of the second aspect, where the user terminal further includes:
a video frame processing module, configured to cache the video frames, decode and surface-render the cached video frames one by one, and output the surface-rendered video frames frame by frame in surface-rendering order;
and a structured information caching module, configured to sequentially store the received pieces of structured information into a cache queue.
With reference to the second possible implementation manner of the second aspect, an embodiment of the present invention provides a third possible implementation manner of the second aspect, where the information superposition module includes:
a selecting unit, configured to select the structured information corresponding to the video frame from the cache queue according to the timestamp of the video frame and the video timestamp;
an extracting unit, configured to extract the custom information and the position coordinates from the structured information;
and a superposition unit, configured to superimpose the custom information at the position coordinates in the video frame.
With reference to the third possible implementation manner of the second aspect, an embodiment of the present invention provides a fourth possible implementation manner of the second aspect, where the information superposition module further includes:
a judging unit, configured to judge whether structured information exists in the cache queue and, if so, to select the pieces of structured information one by one in the order in which they were stored in the cache queue and to extract the video timestamp from each piece.
With reference to the fourth possible implementation manner of the second aspect, an embodiment of the present invention provides a fifth possible implementation manner of the second aspect, where the selecting unit includes:
a processing manner determining subunit, configured to determine a processing manner for the structured information according to the video timestamp and the timestamp of the video frame;
and a structured information selecting subunit, configured to process the structured information accordingly, where the processing manner is one of: taking the structured information as the structured information corresponding to the video frame, returning the structured information to the cache queue, or removing the structured information from the cache queue; and, when the processing manner is determined to be taking the structured information as the structured information corresponding to the video frame or returning it to the cache queue, to stop the selection of the next piece of structured information and select the corresponding structured information for the next frame after the video frame.
With reference to the fifth possible implementation manner of the second aspect, an embodiment of the present invention provides a sixth possible implementation manner of the second aspect, where the processing manner determining subunit is specifically configured to: when the video timestamp matches the timestamp of the video frame, determine that the structured information is to be taken as the structured information corresponding to the video frame; and when the video timestamp does not match the timestamp of the video frame, judge whether the video timestamp is greater than the timestamp of the video frame and, if so, determine that the structured information is to be returned to the cache queue, otherwise determine that it is to be removed from the cache queue.
In a third aspect, an embodiment of the present invention further provides a system for superimposing video structured information, where the system includes a video source server, an algorithm server and a user terminal; the video source server is communicatively connected to the algorithm server and to the user terminal, and the user terminal is communicatively connected to the algorithm server;
the video source server is configured to send at least one video frame synchronously to the user terminal and to the algorithm server;
the algorithm server is configured to receive the video frame, identify a monitoring target in the video frame, generate structured information corresponding to the video frame according to the monitoring target, and transmit the structured information to the user terminal, where the structured information includes a video timestamp, custom information corresponding to the monitoring target, and position coordinates of the custom information;
and the user terminal is configured to receive the video frame and the structured information sent by the algorithm server, and to superimpose the custom information onto the video frame according to the timestamp of the video frame, the video timestamp, and the position coordinates.
In the method, the user terminal and the system for overlaying video structured information provided by the embodiments of the invention, the user terminal receives at least one video frame, the video frame being sent synchronously by the video source server to the user terminal and to the algorithm server; the user terminal also receives structured information sent by the algorithm server, generated by the algorithm server from the monitoring target it identifies in the video frame; finally, the user terminal superimposes the custom information onto the video frame according to the timestamp of the video frame, the video timestamp in the structured information, and the position coordinates of the custom information. In the embodiments of the invention the video source server first transmits the real-time video stream, in the form of video frames, synchronously to the algorithm server and to the user terminal, and the algorithm server is used only to generate the structured information corresponding to each video frame. The user terminal then receives both the video stream and the per-frame structured information and superimposes each piece of structured information onto its corresponding video frame. Throughout the superposition of the video structured information, the generation of the structured information is thus separated from its superposition: the algorithm server does not superimpose the structured information, the superposition is executed by the user terminal, and the superposition work is distributed across many user terminals. The amount of superposition processing at each user terminal is therefore greatly reduced and the superposition speed of the structured information is increased, which ensures the real-time performance of video stream display and lowers the software and hardware requirements on the superposition system.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied by figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be regarded as limiting its scope; for those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a method for superimposing video structured information according to an embodiment of the present invention;
Fig. 2 is a flow chart of a preferred method for superimposing video structured information according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a user terminal according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a video structured information overlay system according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of another video structured information overlay system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
In the related art, the algorithm server must generate the structured information for the video stream and at the same time superimpose it at specific positions in the corresponding video frames; because the algorithm server handles a large amount of data and must keep real-time video display fluent, it has to guarantee the superposition speed, which places high demands on the configuration of the video information superposition system. Based on this, the embodiments of the present invention provide an overlay method, a user terminal and an overlay system for video structured information, which are described below through embodiments.
As shown in Fig. 1, an embodiment of the present invention provides a method for overlaying structured information of a video, comprising steps S102 to S106:
Step S102: a user terminal receives at least one video frame, the video frame being sent synchronously by a video source server to the user terminal and to an algorithm server;
Step S104: the user terminal receives structured information sent by the algorithm server, the structured information being generated by the algorithm server according to a monitoring target in the video frame and comprising a video timestamp, custom information corresponding to the monitoring target, and position coordinates of the custom information;
Step S106: the user terminal superimposes the custom information onto the video frame according to the timestamp of the video frame, the video timestamp, and the position coordinates.
The algorithm server generates the structured information corresponding to each video frame as follows. After receiving the real-time video stream sent by the video source server, the algorithm server decodes the video frames one by one and passes the decoded frames (YUV data) to the algorithm, which detects whether a monitoring target is present in each decoded frame image; if so, it outputs the algorithm result, i.e. the structured information corresponding to that video frame. Here the monitoring target is one of a set of predefined target types, such as a human face or a vehicle; the video timestamp is the timestamp of the video frame; the custom information corresponding to the monitoring target can be preset according to the target type and may include a textual description of the target, the position information of the target, and so on; and the position coordinates of the custom information may be the coordinates of the monitoring target in the video frame image, or coordinates preset according to the type of the monitoring target.
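To make the shape of this per-frame output concrete, the following is a minimal sketch of what such a structured-information record could look like; the class and field names are illustrative assumptions, not the patent's actual data format.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class StructuredInfo:
    """Hypothetical per-frame algorithm output: timestamp, custom info, coordinates."""
    video_timestamp: int                 # PTS of the analysed frame, unique per frame
    custom_info: str                     # e.g. a textual description of the monitoring target
    position: Tuple[int, int, int, int]  # (x, y, width, height) where the info is drawn

# Example: a face detected in the frame whose PTS is 90000
info = StructuredInfo(video_timestamp=90000,
                      custom_info="face: subscribed target",
                      position=(120, 80, 64, 64))
```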
In the embodiments provided by the invention, the video source server first transmits the real-time video stream, in the form of video frames, synchronously to the algorithm server and to the user terminal; the algorithm server is used only to generate the structured information corresponding to each video frame; the user terminal then receives both the video stream and the per-frame structured information and superimposes each piece of structured information onto its corresponding video frame. In the whole process of superimposing the video structured information, the generation of the structured information is separated from its superposition: the algorithm server does not superimpose the structured information, the superposition is executed by the user terminal, and the superposition work is distributed across many user terminals. The amount of superposition processing at each user terminal is thus greatly reduced and the superposition speed of the structured information is increased, which ensures the real-time performance of video stream display and reduces the software and hardware requirements on the superposition system.
In addition, the structured information generated by the algorithm server can be transmitted to the user terminal either directly, or via a separately deployed message server: the algorithm server publishes the structured information to the message server, the message server forwards it to the user terminal, and the user terminal overlays the custom information in the structured information at the specified position in the corresponding video frame and displays the resulting video stream in real time. The same real-time video source is processed by the algorithm server, the structured information output for each frame is published to the message server in real time, and the user terminal OCX control subscribes to the structured information on the message server and synchronously superimposes it onto the corresponding video frame images, thereby realizing real-time overlay of custom information on the video; the superposition process must solve the synchronization, real-time performance and accuracy of the video and the superimposed information.
When processing a video frame, the algorithm server uses the frame timestamp as the unique identifier of that frame's structured information; the structured information contains the exact position at which the custom information is to be displayed in that frame image (and this position changes from frame to frame), and is then published to the message server or sent directly to the user terminal. While playing the real-time stream, the user terminal OCX subscribes to the structured information from the message server and synchronously superimposes each piece of structured information onto the corresponding video frame image according to each frame's timestamp, thereby realizing real-time overlay of custom information. Subscription here means that after the user terminal subscribes to a certain topic on the message server, the message server pushes the content published under that topic to the subscribing user terminal.
Further, in order to increase the speed at which the user terminal receives the video frames and the structured information, and hence the speed at which it superimposes the structured information, the user terminal receives the video frames and the structured information through different threads: it receives the video frames sent by the video source server through a first thread, receives the structured information corresponding to each video frame sent by the algorithm server through a second thread, and superimposes the received structured information at the designated position in the corresponding video frame through the second thread.
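A minimal sketch of this two-thread reception is given below; the iterables standing in for the video source server and the algorithm/message server, and the StructuredInfo type from the earlier sketch, are assumptions for illustration.

```python
import threading
from collections import deque
from queue import Queue

frame_buffer = Queue()   # video frames cached by the first thread
info_queue = deque()     # structured information cached by the second thread

def video_thread(video_source):
    """First thread: pull the real-time stream from the video source server."""
    for timestamp, encoded_frame in video_source:
        frame_buffer.put((timestamp, encoded_frame))

def info_thread(info_source):
    """Second thread: receive/subscribe to per-frame structured information."""
    for info in info_source:          # each item is a StructuredInfo record
        info_queue.append(info)

# threading.Thread(target=video_thread, args=(video_source,), daemon=True).start()
# threading.Thread(target=info_thread, args=(info_source,), daemon=True).start()
```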
Further, since the algorithm server needs a certain amount of time to generate and deliver the structured information, the user terminal will, for any given video frame, receive the frame and its corresponding structured information with a certain time difference. Based on this, after the user terminal receives at least one video frame, the method further includes: caching all the video frames, decoding and surface-rendering the cached video frames one by one, and outputting the surface-rendered video frames frame by frame in surface-rendering order;
and after the user terminal receives the structured information sent by the algorithm server, the method further includes: sequentially storing the received pieces of structured information into a cache queue.
Specifically, the user terminal caches the received video frames for a preset caching time, outputs the cached frames frame by frame in caching order, decodes them one by one, surface-renders the decoded frames, outputs the surface-rendered frames frame by frame in surface-rendering order, and superimposes the structured information onto the output, surface-rendered frames.
In the embodiments provided by the invention, the received video frames are cached and the received structured information is stored in a cache queue, and the structured information corresponding to each surface-rendered video frame is then looked up in the cache queue by the frame's timestamp. This compensates for the delay introduced by the algorithm server's computation, so that the structured information is still superimposed onto the correct video frame, and the fluency and real-time performance of the video stream are preserved.
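The sketch below shows one way such a fixed playout buffer could be realized; the 2-second figure used later in the description is taken only as an example value, and the helper names are assumptions.

```python
import time
from collections import deque

BUFFER_SECONDS = 2.0                 # example value; chosen to cover end-to-end processing time
pending_frames = deque()             # (arrival_time, frame) pairs awaiting display

def on_frame_received(frame):
    pending_frames.append((time.monotonic(), frame))

def frames_ready_for_rendering():
    """Yield, oldest first, the frames whose buffering delay has elapsed."""
    while pending_frames and time.monotonic() - pending_frames[0][0] >= BUFFER_SECONDS:
        yield pending_frames.popleft()[1]
```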
Specifically, the step in which the user terminal superimposes the custom information onto the video frame according to the timestamp of the video frame, the video timestamp and the position coordinates includes:
when a surface-rendered video frame is received, selecting the structured information corresponding to the video frame from the cache queue according to the timestamp of the video frame and the video timestamp in the structured information;
extracting the custom information and the position coordinates from the structured information;
and superimposing the custom information at the position coordinates in the video frame.
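As a concrete illustration of the last step, the following sketch draws the custom information at the reported coordinates using OpenCV; the patent does not prescribe any particular drawing API, so both the use of OpenCV and the record layout from the earlier sketch are assumptions.

```python
import cv2

def overlay_custom_info(frame_bgr, info):
    """Draw info.custom_info at info.position on a decoded BGR frame."""
    x, y, w, h = info.position
    cv2.rectangle(frame_bgr, (x, y), (x + w, y + h), (0, 255, 0), 2)   # box the monitoring target
    cv2.putText(frame_bgr, info.custom_info, (x, max(y - 8, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)          # label just above the box
    return frame_bgr
```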
Further, before the structured information corresponding to the video frame is selected from the cache queue according to the timestamp of the video frame and the video timestamp, the method further includes:
judging whether structured information exists in the cache queue; if so, selecting the pieces of structured information one by one in the order in which they were stored in the cache queue and extracting the video timestamp from each piece; if not, displaying the video frame directly.
The selecting of the structured information corresponding to the video frame from the cache queue according to the timestamp of the video frame and the video timestamp includes:
determining a processing manner for the structured information according to the video timestamp and the timestamp of the video frame;
and processing the structured information accordingly, where the processing manner is one of: taking the structured information as the structured information corresponding to the video frame, returning the structured information to the cache queue, or removing the structured information from the cache queue;
and when the processing manner is determined to be taking the structured information as the structured information corresponding to the video frame, or returning the structured information to the cache queue, stopping the selection of the next piece of structured information and selecting the corresponding structured information for the next frame after the video frame.
Specifically, when a surface-rendered video frame is received, it is taken as the current frame to be overlaid and its timestamp is extracted, and the user terminal judges whether structured information exists in the cache queue. If not, the video frame is displayed directly. If so, the pieces of structured information are selected one by one, in the order in which they were stored in the cache queue, as the current operation object, and the video timestamp of the current operation object is extracted; the processing manner for the current operation object is then determined from its video timestamp and the timestamp of the current frame to be overlaid. Pieces of structured information keep being selected from among those not yet examined until either the current operation object is determined to be the structured information corresponding to the video frame, or its video timestamp is determined to be greater than the timestamp of the current frame to be overlaid.
Specifically, the determining of the processing manner for the structured information according to the video timestamp and the timestamp of the video frame includes:
when the video timestamp matches the timestamp of the video frame, determining that the structured information is to be taken as the structured information corresponding to the video frame;
and when the video timestamp does not match the timestamp of the video frame, judging whether the video timestamp is greater than the timestamp of the video frame; if so, determining that the structured information is to be returned to the cache queue, and if not, determining that the structured information is to be removed from the cache queue.
When several pieces of structured information are stored in the cache queue, they are taken out in the order in which they were stored, and the video timestamp in each piece is compared with the timestamp of the current frame to be overlaid. Three cases arise:
(1) when the video timestamp in the structured information equals the timestamp of the current frame to be overlaid, the structured information is taken as the structured information corresponding to that video frame, no further structured information is selected from the cache queue, and the next frame becomes the next frame to be overlaid;
(2) when the video timestamp in the structured information is greater than the timestamp of the current frame to be overlaid, the structured information is returned to the cache queue, no further structured information is selected from the cache queue, and the next frame becomes the next frame to be overlaid;
(3) when the video timestamp in the structured information is smaller than the timestamp of the current frame to be overlaid, the structured information is removed from the cache queue, the next piece of structured information is selected from the cache queue, and its video timestamp is compared with the timestamp of the current frame to be overlaid; this continues until a video timestamp equal to or greater than the timestamp of the current frame is found, or the cache queue is empty.
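The three cases can be written out as a short matching routine; the sketch below reuses the info_queue deque and StructuredInfo record from the earlier sketches and is an illustration, not the patent's implementation.

```python
def match_structured_info(frame_pts, info_queue):
    """Return the StructuredInfo whose video timestamp equals frame_pts, or None."""
    while info_queue:
        info = info_queue.popleft()
        if info.video_timestamp == frame_pts:   # case (1): superimpose it onto this frame
            return info
        if info.video_timestamp > frame_pts:    # case (2): it belongs to a later frame,
            info_queue.appendleft(info)         #           so return it to the head of the queue
            return None
        # case (3): video_timestamp < frame_pts -> stale record, discard and keep scanning
    return None                                 # queue empty: display the frame without overlay
```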
In the embodiments provided by the invention, the pieces of structured information are selected one by one in the order in which they were stored in the cache queue, the video timestamp in each piece is compared with the timestamp of the current frame to be overlaid, and the processing manner is determined from that comparison, so that the structured information is superimposed onto the corresponding video frame quickly and accurately.
Specifically, Fig. 2 shows the flow of a preferred method for superimposing video structured information, which proceeds as follows:
S10: the user terminal receives the video frames sent by the video source server, i.e. it pulls the real-time video stream from the video source server through a first thread;
S11: the user terminal caches the received video frames for a preset caching time, for example 2 seconds, the caching time being determined by the processing time of the whole pipeline;
S20: meanwhile, the user terminal receives the structured information sent by the algorithm server or the message server, i.e. it subscribes to the structured information of each video frame from the algorithm server or the message server through a second thread;
S21: the received structured information is continuously stored into a cache queue;
S12: the user terminal then decodes and surface-renders the cached video frames through the first thread, i.e. each cached video frame is sent to the decoding module in turn and the decoded frame data is output frame by frame; after a frame of YUV data is output, memory (background) surface rendering is performed;
Memory (background) surface rendering refers to double buffering, a common video display technique: an object matching the screen drawing area is created in memory, the image is first drawn onto this in-memory object, and the finished image is then copied to the screen in a single operation, which greatly increases the drawing speed;
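For illustration, the same double-buffered draw-then-flip pattern can be reproduced with pygame as sketched below; pygame is an assumption used only to show the technique, not the OCX control actually described.

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((1280, 720), pygame.DOUBLEBUF)  # front (display) surface
back = pygame.Surface((1280, 720))                               # in-memory back surface

def present(frame_surface, draw_overlay=None):
    back.blit(frame_surface, (0, 0))      # 1. draw the decoded frame off-screen
    if draw_overlay is not None:
        draw_overlay(back)                # 2. draw the custom information off-screen
    screen.blit(back, (0, 0))             # 3. copy the finished image to the front surface...
    pygame.display.flip()                 # ...and flip it to the display in one step
```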
S13: the user terminal also checks, through the first thread, whether structured information exists in the cache queue;
S14: when structured information exists in the cache queue, pieces of structured information are selected from the structured-information cache queue one by one, in order to find the element whose timestamp equals the YUV PTS of the current video frame;
(a decoded video frame is usually in YUV data format; PTS means Presentation Time Stamp and indicates when a decoded video frame is to be displayed, so the YUV PTS is the timestamp of the YUV video frame;)
S15: it is judged whether the video timestamp in the selected structured information equals the timestamp of the current frame to be overlaid;
S16: if it does, the structured information is taken as the structured information corresponding to the video frame and is superimposed onto the current frame to be overlaid, no further structured information is selected from the cache queue, and the next frame becomes the next frame to be overlaid;
S17: if it does not, it is judged whether the video timestamp in the structured information is greater than the timestamp of the current frame to be overlaid;
S18: if it is greater, the structured information is returned to the cache queue, no further structured information is selected from the cache queue, and the next frame becomes the next frame to be overlaid;
S19: if it is smaller, the structured information is removed from the cache queue and step S14 is executed again: the next piece of structured information is selected from the cache queue and its video timestamp is compared with the timestamp of the current frame to be overlaid, until a timestamp equal to or greater than that of the current frame is found or the cache queue is empty;
S110: when no structured information exists in the cache queue, or once the structured information has been superimposed onto the corresponding video frame, the memory rendering surface is flipped (Flip) to the front-end view window, i.e. the plain video frame or the frame carrying the structured information is displayed; Flip means switching the memory rendering surface to the front surface (representing the display device).
In this method for superimposing video structured information, the video source server first transmits the real-time video stream, in the form of video frames, synchronously to the algorithm server and to the user terminal, and the algorithm server is used only to generate the structured information corresponding to each video frame. The user terminal then receives both the video stream and the per-frame structured information and superimposes each piece of structured information onto its corresponding video frame. The generation of the structured information is thus separated from its superposition: the algorithm server does not superimpose the structured information, the superposition is executed by the user terminal, and the superposition work is distributed across many user terminals, greatly reducing the superposition processing load of each user terminal and increasing the superposition speed of the structured information, which ensures the real-time performance of video stream display and reduces the software and hardware requirements on the superposition system. Furthermore, the received video frames are cached, the received structured information is stored in a cache queue, and the structured information corresponding to each surface-rendered video frame is looked up in the cache queue by the frame's timestamp, which compensates for the delay of the algorithm server's computation, keeps the structured information accurately aligned with the corresponding video frames, and preserves the fluency and real-time performance of the video stream. Finally, the pieces of structured information are selected in the order in which they were stored in the cache queue, the video timestamp in each piece is compared with the timestamp of the current frame to be overlaid, and the processing manner is determined from that comparison, so that the structured information is superimposed onto the corresponding video frames quickly and accurately.
An embodiment of the present invention further provides a user terminal; as shown in Fig. 3, the user terminal includes:
a video frame receiving module 202, configured to receive at least one video frame, where the video frame is sent synchronously by a video source server to the user terminal and to an algorithm server;
a structured information receiving module 204, configured to receive structured information sent by the algorithm server, where the structured information is generated by the algorithm server according to a monitoring target in the video frame and includes a video timestamp, custom information corresponding to the monitoring target, and position coordinates of the custom information;
and an information superposition module 206, configured to superimpose the custom information onto the video frame according to the timestamp of the video frame, the video timestamp, and the position coordinates.
In the embodiments provided by the invention, as described above for the method, the video source server transmits the real-time video stream synchronously to the algorithm server and to the user terminal in the form of video frames; the algorithm server only generates the structured information corresponding to each video frame; and the user terminal receives both the video stream and the per-frame structured information and superimposes each piece onto its corresponding frame. The generation and the superposition of the structured information are thus separated, the superposition is executed by the user terminal and distributed across many user terminals, the superposition processing load of each user terminal is greatly reduced, the superposition speed is increased, the real-time performance of video stream display is ensured, and the software and hardware requirements on the superposition system are reduced.
Further, in order to increase the speed at which the user terminal receives the video frames and the structured information, and hence the speed at which it superimposes the structured information, the user terminal receives the video frames and the structured information through different threads.
Further, since the algorithm server needs a certain amount of time to generate and deliver the structured information, the user terminal receives a video frame and its corresponding structured information with a certain time difference; based on this, the user terminal further includes:
a video frame processing module, configured to cache the video frames, decode and surface-render the cached video frames one by one, and output the surface-rendered video frames frame by frame in surface-rendering order;
and a structured information caching module, configured to sequentially store the received pieces of structured information into a cache queue.
Further, the information superposition module 206 includes: a selecting unit, configured to select the structured information corresponding to the video frame from the cache queue according to the timestamp of the video frame and the video timestamp;
an extracting unit, configured to extract the custom information and the position coordinates from the structured information;
and a superposition unit, configured to superimpose the custom information at the position coordinates in the video frame.
Further, the information superposition module 206 also includes: a judging unit, configured to judge whether structured information exists in the cache queue and, if so, to select the pieces of structured information one by one in the order in which they were stored in the cache queue and to extract the video timestamp from each piece.
Further, the selecting unit includes: a processing manner determining subunit, configured to determine a processing manner for the structured information according to the video timestamp and the timestamp of the video frame;
and a structured information selecting subunit, configured to process the structured information accordingly, where the processing manner is one of: taking the structured information as the structured information corresponding to the video frame, returning the structured information to the cache queue, or removing the structured information from the cache queue; and, when the processing manner is determined to be taking the structured information as the structured information corresponding to the video frame or returning it to the cache queue, to stop the selection of the next piece of structured information and select the corresponding structured information for the next frame after the video frame.
Further, the processing manner determining subunit is specifically configured to: when the video timestamp matches the timestamp of the video frame, determine that the structured information is to be taken as the structured information corresponding to the video frame; and when the video timestamp does not match the timestamp of the video frame, judge whether the video timestamp is greater than the timestamp of the video frame and, if so, determine that the structured information is to be returned to the cache queue, otherwise determine that it is to be removed from the cache queue.
In the user terminal provided by the invention, the user terminal receives both the video stream and the structured information corresponding to each video frame and superimposes each piece of structured information onto its corresponding video frame; the superposition is executed by the user terminal and distributed across many user terminals, so the superposition processing load of each user terminal is greatly reduced and the superposition speed is increased, which ensures the real-time performance of video stream display and reduces the software and hardware requirements on the superposition system. Furthermore, the received video frames are cached, the received structured information is stored in a cache queue, and the structured information corresponding to each surface-rendered frame is looked up in the cache queue by the frame's timestamp, which compensates for the delay of the algorithm server's computation, keeps the structured information accurately aligned with the corresponding frames, and preserves the fluency and real-time performance of the video stream. The pieces of structured information are selected in the order in which they were stored in the cache queue, the video timestamp in each piece is compared with the timestamp of the current frame to be overlaid, and the processing manner is determined from that comparison, so that the structured information is superimposed onto the corresponding frames quickly and accurately.
An embodiment of the present invention further provides a system for superimposing video structured information. As shown in fig. 4, the system includes a video source server 11, an algorithm server 22 and a user terminal 33, wherein the video source server 11 is communicatively connected with the algorithm server 22 and the user terminal 33 respectively, and the user terminal 33 is communicatively connected with the algorithm server 22;
The video source server 11 is configured to send at least one video frame to the user terminal 33 and the algorithm server 22 synchronously;
The algorithm server 22 is configured to receive the video frame, identify a monitoring target in the video frame, generate structured information corresponding to the video frame according to the monitoring target, and transmit the structured information to the user terminal 33, where the structured information includes a video timestamp, custom information corresponding to the monitoring target, and position coordinates of the custom information;
The user terminal 33 is configured to receive the video frame and the structured information sent by the algorithm server 22, and superimpose the custom information onto the video frame according to the timestamp of the video frame, the video timestamp, and the position coordinates.
Further, as shown in fig. 5, the above-mentioned superimposing system may further include a message server 44, which is communicatively connected with the algorithm server 22 and the user terminal 33 respectively. The algorithm server 22 sends the structured information to the message server 44, the message server 44 forwards the structured information to the user terminal 33, and the user terminal 33 overlays the custom information in the structured information at the specified position in the corresponding video frame and displays the video stream with the custom information overlaid in real time. The same real-time video source is processed by the algorithm server 22, the structured information output for each frame is published to the message server 44 in real time, and meanwhile the user terminal 33 (for example, through an OCX control) subscribes to the structured information on the message server 44 and synchronously superimposes it onto the corresponding video frame image, thereby realizing the superimposition of custom information on real-time video.
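As a purely illustrative sketch of this publish/subscribe flow, the message server can be stood in for by a minimal in-process broker; the class and function names below are assumptions, and an actual deployment would use a real message-queue product rather than this stand-in.

```python
import queue

class MessageBroker:
    """Minimal stand-in for the message server 44: one topic, many subscribers."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self) -> "queue.Queue":
        q = queue.Queue()
        self._subscribers.append(q)
        return q

    def publish(self, structured_info: dict) -> None:
        # Fan the structured information out to every subscribed user terminal.
        for q in self._subscribers:
            q.put(structured_info)

# Algorithm-server side: publish one record per analysed frame.
def publish_frame_results(broker: MessageBroker, structured_infos):
    for info in structured_infos:
        broker.publish(info)

# User-terminal side: move received records into the local cache queue.
def drain_subscription(sub: "queue.Queue", cache_queue, max_items: int = 100):
    for _ in range(max_items):
        try:
            cache_queue.append(sub.get_nowait())
        except queue.Empty:
            break
```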
Based on the above analysis, in the system for superimposing video structured information provided by the embodiment of the present invention, the video source server 11 transmits the real-time video stream, in the form of video frames, to the algorithm server 22 and the user terminal 33 synchronously; the algorithm server 22 is used only to generate the structured information corresponding to each video frame; and the user terminal 33 then receives the video stream together with the structured information corresponding to each video frame and superimposes each piece of structured information onto its corresponding video frame. Throughout the superimposing process, the generation of structured information and the superimposing of structured information are separated: the algorithm server 22 does not need to perform the superimposing, which is instead carried out by the user terminals 33 and thus distributed across a plurality of user terminals 33. The amount of superimposition data processed by any single user terminal 33 is therefore greatly reduced and the superimposition speed is improved, which ensures the real-time performance of video stream display and lowers the software and hardware configuration requirements of the superposition system.
The user terminal provided by the embodiment of the present invention may be specific hardware on a device, or software or firmware installed on a device. The implementation principle and technical effect of the user terminal and the overlay system provided by the embodiment of the present invention are the same as those of the foregoing method embodiment; for brevity, where a detail is not mentioned in the user terminal and overlay system embodiments, reference may be made to the corresponding content in the foregoing method embodiment. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the system, the user terminal and the units described above may all refer to the corresponding processes in the foregoing method embodiments, and are not described again here.
In the embodiments provided in the present invention, it should be understood that the disclosed user terminal and method can be implemented in other manners. The above-described embodiments of the user terminal are merely illustrative, and for example, the division of the unit is only one logical division, and there may be other divisions when actually implementing, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of the user terminal or unit through some communication interfaces, and may be in an electrical, mechanical or other form.
the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
in addition, functional units in the embodiments provided by the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
the functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, or readily conceive of changes, or make equivalent substitutions for some of their technical features, within the technical scope disclosed by the present invention; such modifications, changes or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the present invention, and are all intended to be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (7)
1. A method for superimposing video structured information, the method comprising:
A user terminal receives at least one video frame, wherein the video frame is synchronously sent to the user terminal and an algorithm server by a video source server;
Caching all the video frames, decoding and surface rendering the cached video frames one by one, and outputting the video frames subjected to surface rendering frame by frame according to the surface rendering sequence;
The user terminal receives structured information sent by the algorithm server, the structured information is generated by the algorithm server according to a monitoring target in the video frame, and the structured information comprises a video timestamp, custom information corresponding to the monitoring target and position coordinates of the custom information;
Sequentially storing the received pieces of structured information into a cache queue;
Judging whether structured information exists in the cache queue, and if so, selecting pieces of structured information one by one according to the order in which they were stored in the cache queue, and extracting the video timestamp from the structured information;
Determining a processing mode for the structured information according to the video timestamp and the timestamp of the video frame; and processing the structured information correspondingly by using the processing mode, wherein the processing mode comprises: using the structured information as the structured information corresponding to the video frame, returning the structured information to the cache queue, or removing the structured information from the cache queue; and when the processing mode is determined to be using the structured information as the structured information corresponding to the video frame, or returning the structured information to the cache queue, terminating the operation of selecting the next piece of structured information, and selecting corresponding structured information for the next video frame;
Selecting the structured information corresponding to the video frame from the cache queue according to the timestamp of the video frame and the video timestamp; extracting the custom information and the position coordinates from the structured information; and superimposing the custom information at the position coordinates in the video frame.
2. The method of claim 1, wherein the video frame and the structured information are received by the user terminal through different threads, respectively.
3. The method of claim 1, wherein determining the processing mode for the structured information according to the video timestamp and the timestamp of the video frame comprises:
when the video timestamp is consistent with the timestamp of the video frame, determining that the processing mode for the structured information is to use the structured information as the structured information corresponding to the video frame;
and when the video timestamp is not consistent with the timestamp of the video frame, judging whether the video timestamp is greater than the timestamp of the video frame; if so, determining that the processing mode for the structured information is to return the structured information to the cache queue, and if not, determining that the processing mode for the structured information is to remove the structured information from the cache queue.
4. A user terminal, characterized in that the user terminal comprises:
The video frame receiving module is used for receiving at least one video frame, and the video frame is synchronously sent to the user terminal and the algorithm server by the video source server;
The video frame processing module is used for caching the video frames, decoding and surface rendering the cached video frames one by one, and outputting the video frames subjected to surface rendering frame by frame according to the surface rendering sequence;
The structured information receiving module is used for receiving structured information sent by the algorithm server, wherein the structured information is generated by the algorithm server according to a monitoring target in the video frame, and the structured information comprises a video timestamp, custom information corresponding to the monitoring target, and position coordinates of the custom information;
The structured information caching module is used for sequentially storing the received pieces of structured information into a cache queue;
The information superposition module is used for judging whether structured information exists in the cache queue, and if so, selecting pieces of structured information one by one according to the order in which they were stored in the cache queue, and extracting the video timestamp from the structured information; determining a processing mode for the structured information according to the video timestamp and the timestamp of the video frame; and processing the structured information correspondingly by using the processing mode, wherein the processing mode comprises: using the structured information as the structured information corresponding to the video frame, returning the structured information to the cache queue, or removing the structured information from the cache queue; when the processing mode is determined to be using the structured information as the structured information corresponding to the video frame, or returning the structured information to the cache queue, terminating the operation of selecting the next piece of structured information and selecting corresponding structured information for the next video frame; selecting the structured information corresponding to the video frame from the cache queue according to the timestamp of the video frame and the video timestamp; extracting the custom information and the position coordinates from the structured information; and superimposing the custom information at the position coordinates in the video frame.
5. The user terminal according to claim 4, wherein the user terminal receives the video frame and the structured information through different threads, respectively.
6. The user terminal according to claim 4, wherein the processing mode determining subunit is configured to: when the video timestamp is consistent with the timestamp of the video frame, determine that the processing mode for the structured information is to use the structured information as the structured information corresponding to the video frame; and when the video timestamp is not consistent with the timestamp of the video frame, judge whether the video timestamp is greater than the timestamp of the video frame; if so, determine that the processing mode for the structured information is to return the structured information to the cache queue, and if not, determine that the processing mode for the structured information is to remove the structured information from the cache queue.
7. A system for superimposing video structured information, the system comprising: a video source server, an algorithm server and a user terminal, wherein the video source server is communicatively connected with the algorithm server and the user terminal respectively, and the user terminal is communicatively connected with the algorithm server;
the video source server is used for synchronously sending at least one video frame to the user terminal and the algorithm server;
caching all the video frames, decoding and surface rendering the cached video frames one by one, and outputting the video frames subjected to surface rendering frame by frame according to the surface rendering sequence;
The algorithm server is used for receiving the video frame, identifying a monitoring target in the video frame, generating structured information corresponding to the video frame according to the monitoring target, and transmitting the structured information to the user terminal, wherein the structured information comprises a video timestamp, custom information corresponding to the monitoring target, and position coordinates of the custom information;
Sequentially storing the received pieces of structured information into a cache queue;
Judging whether structured information exists in the cache queue, and if so, selecting pieces of structured information one by one according to the order in which they were stored in the cache queue, and extracting the video timestamp from the structured information;
Determining a processing mode for the structured information according to the video timestamp and the timestamp of the video frame; and processing the structured information correspondingly by using the processing mode, wherein the processing mode comprises: using the structured information as the structured information corresponding to the video frame, returning the structured information to the cache queue, or removing the structured information from the cache queue; when the processing mode is determined to be using the structured information as the structured information corresponding to the video frame, or returning the structured information to the cache queue, terminating the operation of selecting the next piece of structured information, and selecting corresponding structured information for the next video frame;
The user terminal is used for receiving the video frame and the structured information sent by the algorithm server, and selecting the structured information corresponding to the video frame from the cache queue according to the timestamp of the video frame and the video timestamp; extracting the custom information and the position coordinates from the structured information; and superimposing the custom information at the position coordinates in the video frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610763495.2A CN106375793B (en) | 2016-08-29 | 2016-08-29 | video structured information superposition method, user terminal and superposition system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106375793A CN106375793A (en) | 2017-02-01 |
CN106375793B true CN106375793B (en) | 2019-12-13 |
Family
ID=57901445
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610763495.2A Active CN106375793B (en) | 2016-08-29 | 2016-08-29 | video structured information superposition method, user terminal and superposition system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106375793B (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107832402B * | 2017-11-01 | 2021-08-03 | 武汉烽火众智数字技术有限责任公司 | Dynamic display system and method for video structured results |
CN111064984B (en) * | 2018-10-16 | 2022-02-08 | 杭州海康威视数字技术股份有限公司 | Intelligent information superposition display method and device for video frame and hard disk video recorder |
CN109309817A (en) * | 2018-11-14 | 2019-02-05 | 北京东方国信科技股份有限公司 | The display methods and device of monitor video recognition of face OSD |
WO2020097857A1 (en) * | 2018-11-15 | 2020-05-22 | 北京比特大陆科技有限公司 | Media stream processing method and apparatus, storage medium, and program product |
CN109862396A (en) * | 2019-01-11 | 2019-06-07 | 苏州科达科技股份有限公司 | A kind of analysis method of video code flow, electronic equipment and readable storage medium storing program for executing |
CN111832366B (en) * | 2019-04-22 | 2024-04-02 | 富联精密电子(天津)有限公司 | Image recognition apparatus and method |
CN110087042B (en) * | 2019-05-08 | 2021-07-09 | 深圳英飞拓智能技术有限公司 | Face snapshot method and system for synchronizing video stream and metadata in real time |
CN111294666B (en) * | 2019-07-04 | 2022-07-01 | 杭州萤石软件有限公司 | Video frame transmission method and method, device and system for determining video frame transmission delay |
CN110874424A (en) * | 2019-09-23 | 2020-03-10 | 北京旷视科技有限公司 | Data processing method and device, computer equipment and readable storage medium |
CN110647173B (en) * | 2019-10-10 | 2020-12-29 | 四川赛狄信息技术股份公司 | Video tracking system and method |
CN111629264B (en) * | 2020-06-01 | 2021-07-27 | 复旦大学 | Web-based separate front-end image rendering method |
CN111698546B (en) * | 2020-06-29 | 2023-02-03 | 平安国际智慧城市科技股份有限公司 | Video structured result transmission method and device, terminal equipment and storage medium |
US11683453B2 (en) * | 2020-08-12 | 2023-06-20 | Nvidia Corporation | Overlaying metadata on video streams on demand for intelligent video analysis |
CN112235598B (en) * | 2020-09-27 | 2022-09-20 | 深圳云天励飞技术股份有限公司 | Video structured processing method and device and terminal equipment |
CN112203113B (en) * | 2020-12-07 | 2021-05-25 | 北京沃东天骏信息技术有限公司 | Video stream structuring method and device, electronic equipment and computer readable medium |
CN112584073B (en) * | 2020-12-24 | 2022-08-02 | 杭州叙简科技股份有限公司 | 5G-based law enforcement recorder distributed assistance calculation method |
CN112950951B (en) * | 2021-01-29 | 2023-05-02 | 浙江大华技术股份有限公司 | Intelligent information display method, electronic device and storage medium |
CN114051120A (en) * | 2021-10-26 | 2022-02-15 | 远光软件股份有限公司 | Video alarm method, device, storage medium and electronic equipment |
CN113923507B (en) * | 2021-12-13 | 2022-07-22 | 北京蔚领时代科技有限公司 | Low-delay video rendering method and device for Android terminal |
CN114327708A (en) * | 2021-12-22 | 2022-04-12 | 惠州市德赛西威智能交通技术研究院有限公司 | Method, system and storage medium for processing interaction information between accelerated vehicle and mobile terminal |
CN114900727A (en) * | 2022-05-11 | 2022-08-12 | 上海哔哩哔哩科技有限公司 | Video stream processing method and device |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1852431A (en) * | 2005-12-23 | 2006-10-25 | 华为技术有限公司 | System and method for realizing video frequency information sharing |
CN101593346A (en) * | 2009-07-06 | 2009-12-02 | 中国人民解放军总装备部军械技术研究所 | Integrated general target video image acquisition, recognition and tracking device |
CN102647618A (en) * | 2012-04-28 | 2012-08-22 | 深圳市华鼎视数字移动电视有限公司 | Method and system for interaction with television programs |
CN102957966A (en) * | 2011-08-19 | 2013-03-06 | 深圳市快播科技有限公司 | Player and method for embedding time in video frames of player |
CN103635954A (en) * | 2011-02-08 | 2014-03-12 | 隆沙有限公司 | A system to augment a visual data stream based on geographical and visual information |
CN103646546A (en) * | 2013-11-23 | 2014-03-19 | 安徽蓝盾光电子股份有限公司 | An intelligent traffic system with a large-scale vehicle passing-forbidding function |
CN104113727A (en) * | 2013-04-17 | 2014-10-22 | 华为技术有限公司 | Monitoring video playing method, device and system |
CN105072460A (en) * | 2015-07-15 | 2015-11-18 | 中国科学技术大学先进技术研究院 | Information annotation and association method, system and device based on VCE |
CN105812722A (en) * | 2014-12-30 | 2016-07-27 | 航天信息股份有限公司 | Grain transportation monitoring method and system |
Also Published As
Publication number | Publication date |
---|---|
CN106375793A (en) | 2017-02-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106375793B (en) | video structured information superposition method, user terminal and superposition system | |
WO2017193576A1 (en) | Video resolution adaptation method and apparatus, and virtual reality terminal | |
WO2018036456A1 (en) | Method and device for tracking and recognizing commodity in video image and displaying commodity information | |
CN111078070B (en) | PPT video barrage play control method, device, terminal and medium | |
CN107027068B (en) | Rendering method, decoding method, and method and device for playing multimedia data stream | |
EP3913924B1 (en) | 360-degree panoramic video playing method, apparatus, and system | |
CN110505471B (en) | Head-mounted display equipment and screen acquisition method and device thereof | |
CN111107411B (en) | Distributed cross-node video synchronization method and system | |
CN104980500B (en) | A kind of information display method and terminal | |
US9680899B2 (en) | Method for synchronizing a rich media action with an audiovisual change, corresponding device and computer software, method for generating a rich media presentation and corresponding computer software | |
CN105933783A (en) | Bullet screen play method and device and terminal equipment | |
CN103677701B (en) | The method and system of large-size screen monitors simultaneous display | |
CN111078078B (en) | Video playing control method, device, terminal and computer readable storage medium | |
CN110475156B (en) | Method and device for calculating video delay value | |
CN103686413A (en) | Auxiliary display method and device | |
US10685642B2 (en) | Information processing method | |
CN112243137A (en) | Live broadcast interface updating method, device, server and system | |
CN111683267A (en) | Method, system, device and storage medium for processing media information | |
CN111954022B (en) | Video playing method and device, electronic equipment and readable storage medium | |
CN105898379A (en) | Method for establishing hyperlink of video image and server | |
CN112019906A (en) | Live broadcast method, computer equipment and readable storage medium | |
CN111835988B (en) | Subtitle generation method, server, terminal equipment and system | |
CN110072144B (en) | Image splicing processing method, device and equipment and computer storage medium | |
CN107995538B (en) | Video annotation method and system | |
CN110198457B (en) | Video playing method and device, system, storage medium, terminal and server thereof |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| PP01 | Preservation of patent right | Effective date of registration: 20220726; granted publication date: 20191213 |