CN114071246B - Media augmented reality tag method, computer device and storage medium


Info

Publication number
CN114071246B
CN114071246B
Authority
CN
China
Prior art keywords
tag
stream
media
server
label
Prior art date
Legal status
Active
Application number
CN202010744754.3A
Other languages
Chinese (zh)
Other versions
CN114071246A (en)
Inventor
洪家明
程之龙
李鑫
Current Assignee
Hytera Communications Corp Ltd
Original Assignee
Hytera Communications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Hytera Communications Corp Ltd
Priority to CN202010744754.3A
Publication of CN114071246A
Application granted
Publication of CN114071246B

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60 - Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/63 - Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STBs; Communication protocols; Addressing
    • H04N 21/643 - Communication protocols
    • H04N 21/6437 - Real-time Transport Protocol [RTP]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 - Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 - Assembly of content; Generation of multimedia applications
    • H04N 21/854 - Content authoring
    • H04N 21/8547 - Content authoring involving timestamps for synchronizing content
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 - Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 - Assembly of content; Generation of multimedia applications
    • H04N 21/858 - Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
    • H04N 21/8586 - Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot by using a URL

Abstract

The application discloses a media augmented reality tagging method, a computer device, and a storage medium. The method includes: associating a tag stream with at least one media stream by means of a uniform resource locator and a timestamp, where the timestamp determines the time at which the tag stream is displayed and the uniform resource locator is used to locate the source of the media stream; and sending the tag stream to a media server so that the media server sends the media stream and the tag stream to a display terminal, which displays them, the media stream and the tag stream sharing a transmission port; or sending the tag stream to the display terminal so that the display terminal displays the media stream and the tag stream, the media stream and the tag stream using different transmission ports. In this way, no change to the media stream is involved, the tag stream has good timeliness and strong independence, and the adaptability of the tag stream is enhanced.

Description

Media augmented reality tag method, computer device and storage medium
Technical Field
The present application relates to the field of multimedia technologies, and in particular, to a media augmented reality tagging method, a computer device, and a computer-readable storage medium.
Background
The Real-time Transport Protocol (RTP) is a network transport protocol, a transport-layer protocol for multimedia data streams over the Internet; it defines a standard packet format for delivering audio and video over the Internet. The Real-time Transport Control Protocol (RTCP) is the companion protocol of RTP and is commonly used together with it. At present, RTP and RTCP are widely used in streaming media systems, where tags are added directly into the video; tags added in this way have poor timeliness and poor independence.
Disclosure of Invention
The media augmented reality tagging method, computer device, and storage medium provided by the present application involve no change to the media stream; the tag stream has good timeliness and strong independence, and the adaptability of the tag stream is enhanced.
In order to solve the above technical problem, one technical scheme adopted by the present application is to provide a media augmented reality tagging method. The method includes the following steps: associating a tag stream with at least one media stream by means of a uniform resource locator and a timestamp, where the timestamp determines the time at which the tag stream is displayed and the uniform resource locator is used to locate the source of the media stream; and sending the tag stream to a media server so that the media server sends the media stream and the tag stream to a display terminal, which displays them, the media stream and the tag stream sharing a transmission port; or sending the tag stream to the display terminal so that the display terminal displays the media stream and the tag stream, the media stream and the tag stream using different transmission ports.
In order to solve the above technical problem, another technical scheme adopted by the present application is to provide a media augmented reality tagging method. The method includes the following steps: receiving a tag stream originating from a tag server and forwarded by a media server, together with a media stream sent by the media server, where the tag stream is associated with at least one media stream through a uniform resource locator and a timestamp, the time at which the tag stream is displayed is determined by the timestamp, the uniform resource locator is used to locate the source of the media stream, and the tag stream and the media stream share a transmission port; or receiving a tag stream from a tag server and a media stream sent by a media server, where the tag stream is associated with at least one media stream through a uniform resource locator and a timestamp, the time at which the tag stream is displayed is determined by the timestamp, the uniform resource locator is used to locate the source of the media stream, and the tag stream and the media stream use different transmission ports; and displaying, by the display terminal, the contents of the tag stream and the media stream.
In order to solve the above technical problem, another technical scheme adopted by the present application is to provide a computer device. The device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the media augmented reality tagging method described above when executing the computer program.
In order to solve the above technical problem, another technical scheme adopted by the present application is to provide a computer-readable storage medium. The computer-readable storage medium stores a computer program that, when executed by a processor, implements the media augmented reality tagging method described above.
The beneficial effects of the present application are as follows. A tag stream is associated with at least one media stream through a uniform resource locator and a timestamp, where the timestamp determines the time at which the tag stream is displayed and the uniform resource locator is used to locate the source of the media stream. The tag stream is sent to a media server, which sends the media stream and the tag stream to a display terminal for display, the two streams sharing a transmission port; or the tag stream is sent directly to the display terminal, which displays the media stream and the tag stream, the two streams using different transmission ports. By associating the tag stream with the media stream, no change to the media stream is involved; the tag stream has good timeliness and strong independence, can be transmitted together with the media stream over the same transmission port without conflict, and its adaptability is enhanced. The tag server can also transmit the tag stream directly to the display terminal, which increases transmission efficiency; even if the media server suffers problems such as reception failure, the display terminal can still receive and display the tag stream.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1A is a first schematic flow diagram of a first embodiment of the media augmented reality tagging method provided by the present application;
FIG. 1B is a first schematic flow diagram of a second embodiment of the media augmented reality tagging method provided by the present application;
FIG. 2A is a schematic diagram of the transmission of the media stream and the tag stream of FIG. 1A;
FIG. 2B is a schematic diagram of the transmission of the media stream and the tag stream of FIG. 1B;
FIG. 3A is a second schematic flow diagram of the first embodiment of the media augmented reality tagging method provided by the present application;
FIG. 3B is a second schematic flow diagram of the second embodiment of the media augmented reality tagging method provided by the present application;
FIG. 4A is a schematic diagram of the message body of a tag RTP packet in the media augmented reality tagging method provided by the present application;
FIG. 4B is a schematic diagram of the header structure of a tag RTP packet in the media augmented reality tagging method provided by the present application;
FIG. 5 is a schematic diagram of the message body of an RTCP packet in the media augmented reality tagging method provided by the present application;
FIG. 6A is a first schematic flow diagram of a third embodiment of the media augmented reality tagging method provided by the present application;
FIG. 6B is a second schematic flow diagram of the third embodiment of the media augmented reality tagging method provided by the present application;
FIG. 7A is a schematic diagram of a first example of inserting a tag RTP packet in the media augmented reality tagging method provided by the present application;
FIG. 7B is a schematic diagram of a second example of inserting a tag RTP packet in the media augmented reality tagging method provided by the present application;
FIG. 8A is a schematic diagram of a first embodiment of inserting tags and transmitting control messages in the media augmented reality tagging method provided by the present application;
FIG. 8B is a schematic diagram of a second embodiment of inserting tags and transmitting control messages in the media augmented reality tagging method provided by the present application;
FIGS. 9A and 9B are schematic flow diagrams of updating tags in the media augmented reality tagging method provided by the present application;
FIG. 10 is a schematic plan view of tag position calculation in the media augmented reality tagging method provided by the present application;
FIG. 11A is a first schematic flow diagram of a first embodiment on the display terminal side of the media augmented reality tagging method provided by the present application;
FIG. 11B is a first schematic flow diagram of a second embodiment on the display terminal side of the media augmented reality tagging method provided by the present application;
FIG. 11C is a second schematic flow diagram of the first embodiment on the display terminal side of the media augmented reality tagging method provided by the present application;
FIG. 11D is a second schematic flow diagram of the second embodiment on the display terminal side of the media augmented reality tagging method provided by the present application;
FIG. 12 is a schematic structural diagram of a computer device provided by the present application;
FIG. 13 is a schematic structural diagram of a computer-readable storage medium provided by the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without inventive effort fall within the scope of protection of the present application.
Tags may be added to media, such as dynamic advertising tags in advertising systems or scene advertising tags embedded in television broadcasts. The inventors of the present application found through long study that, in the prior art, tags are carried in SEI (Supplemental Enhancement Information) units of the video, so that obtaining a tag requires decoding down to the H.264 layer. At the same time, the adaptability is insufficient: not all streams have SEI units, and not all streams are video streams. Moreover, adding tags into the video itself gives poor timeliness, poor independence, and poor adaptability; inserting and editing tags is difficult, and multi-stream association is weak. In order to solve the above technical problems, the present application provides the following embodiments.
A first embodiment of the media augmented reality tagging method provided in the present application, referring to FIG. 1A and FIG. 2A, may include the following steps:
S11: Associate the tag stream with at least one media stream by means of a uniform resource locator and a timestamp, where the time at which the tag stream is displayed is determined by the timestamp and the uniform resource locator is used to locate the source of the media stream.
The tag server receives the media stream sent by the media server and associates the tag stream with at least one media stream through a timestamp; the time at which the tag stream is displayed is determined by the timestamp, and the uniform resource locator is used to locate the source of the media stream. The timestamp may be an absolute timestamp and may determine the particular frame of the media stream on which the tag stream should be displayed. RTP (Real-time Transport Protocol) timestamps of different media streams may grow at different rates and may carry independent random offsets. RTP timestamps can be used to synchronize different media streams, while NTP (Network Time Protocol) timestamps resolve the random offset of RTP timestamps; a timestamp pair, comprising an RTP timestamp and an NTP timestamp, establishes the correspondence between the two and is used to synchronize the media stream and the tag stream. The uniform resource locator (URL) used to locate the source of a media stream is a compact representation of the location and access method of a resource available on the Internet and may represent the address of a standard Internet resource. In the present application, the URL includes the source IP (Internet Protocol) address and port, the destination IP address and port, the synchronization source identifier, and the type (payload) of the media stream; the particular media stream associated with the tag stream can be determined through the URL. Because the tag stream is associated with the media stream through the uniform resource locator and the timestamp, with real-time synchronization against the timestamps of the media stream, no change to any existing encoding or encapsulation of the media stream is required, and the tag stream can be loosely coupled to specific frame data of the media stream.
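By way of non-limiting illustration of how such a timestamp pair might be used (the patent describes the association abstractly; the function names, the 90 kHz video clock rate, and the matching tolerance below are assumptions added for this sketch), an (RTP, NTP) pair can map a frame's RTP timestamp onto wall-clock time so that a tag can be matched to a specific media frame:

```python
# Sketch only, not the patent's implementation: converting an RTP timestamp
# to NTP wall-clock time via a known (RTP, NTP) timestamp pair, then matching
# a tag's display time to a media frame. Clock rate and tolerance are assumed.

def rtp_to_wallclock(rtp_ts: int, pair_rtp: int, pair_ntp: float,
                     clock_rate: int = 90_000) -> float:
    """Map a 32-bit RTP timestamp to NTP wall-clock seconds."""
    delta = (rtp_ts - pair_rtp) & 0xFFFFFFFF
    if delta > 0x7FFFFFFF:        # difference went negative across the wrap
        delta -= 1 << 32
    return pair_ntp + delta / clock_rate

def tag_matches_frame(tag_ntp: float, frame_rtp: int, pair_rtp: int,
                      pair_ntp: float, tolerance: float = 0.020) -> bool:
    """True if the tag's display time falls on this frame's wall-clock time."""
    frame_ntp = rtp_to_wallclock(frame_rtp, pair_rtp, pair_ntp)
    return abs(frame_ntp - tag_ntp) <= tolerance
```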
The tag stream defined in the present application may be related to the media stream or unrelated to it. A tag may be text, voice, a picture, a communication, a video, and so on. It may also carry intelligent speech recognition, voiceprint recognition, vehicle recognition, face recognition results, video structured information, and the like. A tag may describe fixed-position equipment, buildings, and the like, or objects moving around a fixed position at a specific point in time, such as devices, videos, telephones, personnel, or vehicles; it may also be data, information, intelligence, alarms, and so on. Manually added tags may also be applied, such as circling actions, curves, or polygons. The media stream of the present application may be a video stream, an audio stream, a data stream, or the like. The association between the tag stream and the media stream can be made across media: for example, tracking a certain group of targets, associating voice or video at the same moment, or associating several video devices covering nearby regions.
The tag stream contains a plurality of tags, the tags including at least one association tag, and the association tag includes a uniform resource locator for locating a source of the tag stream.
Referring to FIG. 4A, transmission of the tag stream is based on RTP. A tag stream may contain a plurality of tags, each encapsulated as a tag RTP packet for transmission. The encapsulation format of the tag stream is defined as the TLV format (Type-Length-Value), comprising a tag field (Type/Tag), a length field (Length), and a content field (Value). The tag data is packaged as a series of TLV-structured entities.
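A minimal sketch of such a TLV encapsulation follows (the patent does not fix the field widths; the 1-byte type, 2-byte big-endian length, and the type numbers in the example are assumptions made for illustration):

```python
# Sketch only (field sizes assumed): encoding and decoding tag entries in a
# TLV (Type-Length-Value) layout with a 1-byte type and 2-byte length.
import struct

def tlv_encode(entries: list) -> bytes:
    """Pack (type, value) pairs into a contiguous TLV byte string."""
    out = bytearray()
    for t, value in entries:
        out += struct.pack("!BH", t, len(value)) + value
    return bytes(out)

def tlv_decode(buf: bytes) -> list:
    """Walk the buffer and recover the (type, value) pairs."""
    entries, i = [], 0
    while i < len(buf):
        t, length = struct.unpack_from("!BH", buf, i)
        i += 3
        entries.append((t, buf[i:i + length]))
        i += length
    return entries

# Example: an association tag (type 1, hypothetical) holding an association
# URL, followed by a content tag (type 2, hypothetical) with a text description.
body = tlv_encode([(1, b"rtp://192.0.2.10:5004?ssrc=3735928559&payload=96"),
                   (2, b"suspect vehicle, white SUV")])
assert tlv_decode(body)[0][0] == 1
```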
The tag stream carries a plurality of tag RTP packets. Each tag RTP packet may include at least one association tag and may further include a plurality of associated data information tags, the associated data information tags including at least one of a content tag, a location tag, or a control tag carrying data related to the association tag.
A tag RTP packet may include at least one association tag whose URL, containing a scheme (or protocol), a server name (or IP address), a path, and a file name, is used to locate the source of the media stream. Each file on the Internet has a unique URL containing information that indicates the location of the file and how a browser should handle it; the source of the media stream is located, and the file identified, via the URL. The association tag may include a timestamp and an association URL; the media stream can be associated through the timestamp and URL, that is, the tag stream is associated with the media stream by means of the association tag. The header structure (RTP header) of a tag RTP packet also contains a timestamp that can be associated with the media stream; the timestamp in the association tag may be ignored in favor of the timestamp in the RTP header of the tag RTP packet. The association URL may express the associated transmission endpoint; for example, the URL may be file://e:/abc.rtp, or it may denote a port, a web address, and so on. In the present application, the URL includes the source IP address and port of the media stream, the destination IP address and port, the synchronization source identifier, and the type (payload); the particular media stream associated with the tag stream can be determined through the URL. Because of the correspondence between source IP and port and destination IP and port, the URL can express the source information of an RTP packet, whether that is a media RTP packet in the media stream or a tag RTP packet in the tag stream.
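One possible concrete form of such an association URL is sketched below (the rtp:// scheme, the path layout, and the query parameter names are assumptions; the patent specifies only which fields the URL carries):

```python
# Sketch only (scheme and parameter names assumed): one way to encode the
# fields the patent lists for the association URL -- source IP and port,
# destination IP and port, synchronization source identifier, payload type.
from urllib.parse import urlparse, parse_qs

def make_association_url(src_ip: str, src_port: int, dst_ip: str,
                         dst_port: int, ssrc: int, payload: int) -> str:
    return (f"rtp://{src_ip}:{src_port}/{dst_ip}:{dst_port}"
            f"?ssrc={ssrc}&payload={payload}")

def parse_association_url(url: str) -> dict:
    u = urlparse(url)
    q = parse_qs(u.query)
    dst_ip, dst_port = u.path.lstrip("/").split(":")
    return {"src_ip": u.hostname, "src_port": u.port,
            "dst_ip": dst_ip, "dst_port": int(dst_port),
            "ssrc": int(q["ssrc"][0]), "payload": int(q["payload"][0])}

# Round trip: the URL pins down exactly one media stream.
url = make_association_url("192.0.2.10", 5004, "198.51.100.7", 6000,
                           3735928559, 96)
assert parse_association_url(url)["payload"] == 96
```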
Referring to FIG. 4A, besides at least one association tag, a tag RTP packet may further include a plurality of associated data information tags, which include at least one of a content tag, a location tag, or a control tag carrying data related to the association tag; the associated data information tags may also comprise all three, as well as other related data tags. The plurality of associated data information tags may be tags related to the association tag or tags unrelated to it, such as content information, location information, or environment information of the association tag; identification tags such as media structured tags, face recognition and license plate behavior identification tags, or voiceprint identification tags; tags of devices associated within a regional range (which may be nearby people or equipment); environment information association tags; service association tags; and so on. The plurality of associated data information tags can be set according to the specific situation, and the present application places no limitation on them.
The content tag may carry the tag's content information and may include a content type, a text description, and structured data, where the structured data is the information data of the tag. The location tag may represent the tag's position information and may include an absolute position and a relative position: the absolute position may be position information acquired through GPS (Global Positioning System), while the relative position may represent the tag's horizontal or vertical angle relative to a camera or sphere, the camera being, for example, a video camera. The control tag may include control information for the tag, for example call control and interface invocation; for instance, when the tag represents a mobile device, call control may be exercised on the tag through the invoking interface.
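Expressed as plain data structures, the three kinds of tags described above might look as follows (a sketch only; the field names are assumptions, since the patent describes these tags in prose rather than as a schema):

```python
# Sketch only (field names assumed): the content, location, and control tags
# carried as associated data information of an association tag.
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class ContentTag:
    content_type: str              # e.g. "text", "picture", "alarm"
    text: str                      # human-readable description
    structured: dict = field(default_factory=dict)  # information data of the tag

@dataclass
class LocationTag:
    gps: Optional[Tuple[float, float]] = None   # absolute position (lat, lon)
    h_angle: Optional[float] = None             # horizontal angle vs. the camera
    v_angle: Optional[float] = None             # vertical angle vs. the camera

@dataclass
class ControlTag:
    command: str                   # e.g. call control or an interface invocation
    url: str                       # locates the tag stream or media stream source
```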
S12: and sending the tag stream to the media server, so that the media server sends the media stream and the tag stream to the display terminal, and the display terminal displays the media stream and the tag stream, and the media stream and the tag stream share a transmission port.
The tag server sends the associated tag stream to the media server, so that the media server can send the media stream and the tag stream to the display terminal for display, when the media server sends the media stream and the tag stream to the display terminal, the transmission paths are the same, the media stream and the tag stream are the same RTP stream, the media stream and the tag stream share the transmission port, and a communication channel can be shared. This transmission mode may be referred to as a companion stream transmission mode, and the transmission of the media stream and the tag stream may be based on RTP, and the tag stream may have a separate communication channel or may share a communication channel with the media stream. The tag stream and the media stream are associated, the change of the media stream is not involved, the timeliness of the tag stream is good, the independence is strong, the tag stream can be transmitted together with the media stream, the transmission ports are the same, the collision is avoided, and the adaptability of the tag stream is enhanced. The tag stream and the media stream may be real-time streams, and when the real-time streams are sent to a display terminal for display, media augmented reality (Augmented Reality, abbreviated as AR) tags may be implemented.
The present application further provides a second embodiment of the media augmented reality tagging method; referring to FIG. 1B and FIG. 2B, it includes the following steps:
S21: Associate the tag stream with at least one media stream by means of a uniform resource locator and a timestamp, where the time at which the tag stream is displayed is determined by the timestamp and the uniform resource locator is used to locate the source of the media stream. For the details of this step, reference may be made to the corresponding process in the foregoing embodiment, which is not repeated here.
S22: Send the tag stream to the display terminal so that the display terminal displays the media stream and the tag stream, the media stream and the tag stream using different transmission ports.
The tag server sends the tag stream associated with the media stream to the display terminal, the media server sends the media stream to the display terminal, and the display terminal displays the media stream and the tag stream according to the associated timestamps. Here the media stream and the tag stream are different RTP streams; their transmission ports differ, and so do their transmission paths. Transmission of both streams is based on RTP, each with its own separate communication channel; this mode may be referred to as the independent stream transmission mode. Because the tag stream is associated with, rather than embedded in, the media stream, the media stream is unchanged, and the tag stream has good timeliness and strong independence. Transmitting the tag stream and the media stream separately prevents conflict between them and also enhances the adaptability of the tag stream. Sending the tag stream directly to the display terminal improves transmission efficiency, and even if the media server suffers problems such as reception failure, the display terminal can still receive and display the tag stream.
In another embodiment, referring to FIG. 3A and FIG. 3B, after step S12 or step S22, the media augmented reality tagging method further includes the following step:
S13: Receive a control message sent by the display terminal, where the control message is transmitted backward along the transmission path of the tag stream and includes a control command for controlling the operation of the tag stream.
The display terminal may send a control message to the tag server. The control message may concern a tag, which may be a tag stored in the tag server, or may carry a control command for controlling the operation of the tag stream. The display terminal may also send the control message to the media server, which forwards it to the tag server. The tag server receives the control message sent from the display terminal.
Referring to FIG. 5, transmission of control messages is based on RTCP (Real-time Transport Control Protocol). A control message may contain a plurality of control tags, each carrying a control command and a uniform resource locator; the uniform resource locator locates the source of the media stream or the tag stream, and the control command controls the source tag stream or media stream identified by the uniform resource locator, for example an operation on the tag stream. The control message is transmitted backward along the transmission path of the tag stream, that is, toward the source of the tag stream or the media stream.
Referring to FIG. 8A and FIG. 8B, the tag server may consist of a plurality of node tag servers. The current node tag server determines whether the control message contains a control tag addressed to it, where the control tag includes a control command for controlling the operation of a tag. If the control message contains a control tag of the current node tag server, the control tag is taken out, and the current node tag server can execute the control command on the tag. If the control message contains no control tag of the current node tag server, the control message is forwarded to the node tag server preceding the current one, and back-to-source control continues toward the source of the uniform resource locator.
The tag server stores at least one of a synchronization source identifier (SSRC) and a uniform resource locator of the tag stream; likewise, the current node tag server stores at least one of a synchronization source identifier and a uniform resource locator of a tag stream. On receiving a control message, the current node tag server can judge from the synchronization source identifier or the uniform resource locator whether the message contains a control tag addressed to it; if it does not, the current node forwards the control message according to the synchronization source identifier or the uniform resource locator. When the tag server stores a synchronization source identifier together with its corresponding transmission endpoint, which may be an IP address and port, the current node can forward the control message according to the synchronization source identifier and that endpoint.
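The back-to-source handling described above might be sketched as follows (the class, its fields, and the message layout are assumptions introduced for illustration; the patent defines only the judge, take-out, and forward behavior):

```python
# Sketch only (class, fields, and message layout assumed): a node tag server
# takes out the control tags addressed to it and forwards the rest backward
# toward the source identified by SSRC or uniform resource locator.

class NodeTagServer:
    def __init__(self, local_ssrcs: set, routes: dict):
        self.local_ssrcs = local_ssrcs  # SSRCs of tag streams this node inserted
        self.routes = routes            # SSRC -> upstream (IP, port) endpoint

    def execute(self, ctl: dict) -> None:
        # Run the control command on a tag this node owns (placeholder).
        print(f"executing {ctl['command']} on tag stream {ctl['ssrc']}")

    def forward(self, endpoint: tuple, ctls: list) -> None:
        # Send the remaining control tags to the previous-node tag server.
        print(f"forwarding {len(ctls)} control tag(s) to {endpoint}")

    def handle_control_message(self, ctls: list) -> None:
        remaining = []
        for ctl in ctls:
            if ctl["ssrc"] in self.local_ssrcs:
                self.execute(ctl)        # control tag of the current node
            else:
                remaining.append(ctl)    # keep tracing back toward the source
        if remaining:
            self.forward(self.routes[remaining[0]["ssrc"]], remaining)
```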
In this embodiment, the media augmented reality tagging method provided in the present application may further include inserting tag RTP packets, updating tag RTP packets, and the like; a tag RTP packet may be inserted before the original tag stream is sent, or after it has been sent. A third embodiment of the media augmented reality tagging method provided in the present application, referring to FIG. 6A and FIG. 6B, further includes the following steps before step S12 or step S22.
Referring to FIG. 6A, before step S12, the method may further include the following step:
S110: Insert a tag RTP packet into the RTP stream and merge the tag RTP packet with the RTP stream, where, after insertion, the tag RTP packet takes the sequence number of the original RTP packet at the insertion position, and the sequence numbers of the RTP packet at the insertion position and of all original RTP packets after it are increased by the number of tag RTP packets inserted.
Referring to FIG. 7A, the tag server may insert the tag stream into the media stream, so that the tag stream and the media stream are transmitted as the same RTP stream. One or more tag RTP packets may be inserted and merged with the RTP stream. After a tag RTP packet is inserted into the RTP stream, it takes the sequence number of the original RTP packet at the insertion position, and the sequence numbers of the RTP packet at the insertion position and of all original RTP packets after it are increased by the number of tag RTP packets inserted. For example, in the companion stream transmission mode, if a tag RTP packet is inserted into the original media RTP stream at the media RTP packet with sequence number 3, the inserted tag RTP packet receives sequence number 3, the sequence numbers of the media RTP packets before the insertion position are unchanged, and the sequence numbers of the media RTP packets from the insertion position onward are increased by one. Because the sequence numbers are reordered after tag RTP packets are inserted into the RTP stream, packet loss can be detected through the sequence numbers: after receiving the tag stream or the media stream, the display terminal can detect the loss of tag RTP packets or media RTP packets by checking them.
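The renumbering rule can be stated compactly in code (a sketch with simplified packet objects and no 16-bit sequence wraparound handling; it applies equally to inserting new tag RTP packets into a tag stream in the independent stream mode described below):

```python
# Sketch only: insert packets at a position and renumber as described above.
# Inserted packets take over the sequence numbers starting at the insertion
# position; every original packet from that position onward is shifted by the
# number of packets inserted.

def insert_packets(stream: list, position: int, new_packets: list) -> list:
    k = len(new_packets)
    for pkt in stream[position:]:
        pkt["seq"] += k                  # shift originals at/after the position
    base = stream[position]["seq"] - k   # original number at the position
    for i, pkt in enumerate(new_packets):
        pkt["seq"] = base + i            # inserted packets fill the freed range
    return stream[:position] + new_packets + stream[position:]

# Example from the text: one tag packet inserted at sequence number 3.
media = [{"seq": s, "kind": "media"} for s in range(1, 6)]
merged = insert_packets(media, 2, [{"kind": "tag"}])
assert [p["seq"] for p in merged] == [1, 2, 3, 4, 5, 6]
```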
Referring to FIG. 6B, before step S22, the method may further include the following step:
S120: Insert a new tag RTP packet into the tag stream and merge it with the tag stream, where, after insertion, the new tag RTP packet takes the sequence number of the original tag RTP packet at the insertion position, and the sequence numbers of the original tag RTP packet at the insertion position and of all original tag RTP packets after it are increased by the number of new tag RTP packets inserted.
A new tag stream may be added to the tag stream, which is transported as an RTP stream separate from the media stream. A new tag RTP packet may be added to the tag stream, and the sequence numbers are reordered after insertion: the new tag RTP packet takes the sequence number of the original tag RTP packet at the insertion position, the sequence numbers of the tag RTP packets before the insertion position are unchanged, and the sequence numbers of the tag RTP packets after the insertion position are each increased by one. Referring to FIG. 7B, for example, if a new tag RTP packet is added at the position with sequence number 3 in the original tag stream, the new tag RTP packet receives sequence number 3, the tag RTP packet originally numbered 3 becomes 4, and the one originally numbered 4 becomes 5.
Several new tag RTP packets may also be inserted at once: the sequence numbers of the tag RTP packets before the insertion position are unchanged, and the sequence numbers of the tag RTP packets after the insertion position are increased by the number of new tag RTP packets inserted. The sequence numbers of the new tag RTP packets are allocated within the freed range, between the sequence number of the original tag RTP packet at the insertion position and that number plus the count of inserted packets. For example, if 2 new tag RTP packets are added at the position with sequence number 3 in the original tag stream, the inserted packets are numbered from 3 onward, receiving sequence numbers 3 and 4, and the sequence numbers of the original tag RTP packets from the insertion position onward are increased by 2. Because the sequence numbers are reordered after tag RTP packets are inserted into the tag stream, packet loss can be detected through the sequence numbers: after receiving the tag stream, the display terminal can detect lost tag RTP packets by checking them.
In step S110 or step S120, insertion of tag RTP packets for the same media stream need not happen at a single device or node; tag RTP packets for the same media stream may even be processed in different regions. In the present application, insertion of the tag stream may follow a distributed scheme: a plurality of tag servers insert tag RTP packets separately, insert them serially, and transmit the tag stream, the media stream, or control messages in parallel among themselves. For example, a tag server may store the synchronization source identifier or uniform resource locator of each inserted tag. When a tag RTP packet is inserted, it can be associated with the media stream through the uniform resource locator and the timestamp, and the tag server may also store, for the association, the synchronization source identifier or uniform resource locator corresponding to the media stream into which the tag stream or tag RTP packet is inserted.
There may be a plurality of tag servers; they may serve the same type of tag or different types, and each may insert into the tag stream separately. Referring to FIG. 8A and FIG. 8B, for example, tag server 1 may insert tags such as behavior, scene recognition, and alarm tags; tag server 2 may insert moving-object association tags; and tag server 3 may insert static-location or device association tags. The plurality of tag servers may transmit in parallel, and the tag stream may be sent to the media server or the display terminal.
With a plurality of tag servers, tag streams may be inserted in a distributed manner, either in parallel or in series. Referring to FIG. 8A and FIG. 8B, while the tag servers transmit in parallel, tags may also be inserted serially along the chain of parallel-transmitting tag servers: for example, tag server 4 may insert its tag stream into the stream being transmitted at tag server 2, which merges the inserted tag stream with the original tag stream and transmits the result to tag server 3 at the next node, and finally to the media server or the display terminal.
When a new tag stream is inserted, bandwidth is reserved in advance according to the transmission rate of the tag stream before insertion, the tag stream is transmitted at a predetermined rate, and the volume of tag stream transmission is governed by flow control. The tag stream is transmitted at the predetermined rate with the bandwidth reserved throughout transmission. After the tag stream is inserted, during packaging and encapsulation, bandwidth is reserved according to the transmission rate of the original tag stream, transmission of the tag stream is flow-controlled, and a smoothing measure keeps the tag stream at the predetermined rate; this prevents the instantaneous tag stream traffic from growing too large and prevents the packet loss that an oversized burst of tag stream traffic would cause.
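A token bucket is one common way to realize such smoothing (offered as a hedged sketch, not the patent's stated mechanism; the rate and burst parameters are placeholders):

```python
# Sketch only: pace tag stream packets at a predetermined rate so that an
# instantaneous burst cannot exceed the reserved bandwidth and cause loss.
import time

class TokenBucketPacer:
    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate = rate_bps / 8.0       # refill rate, bytes per second
        self.capacity = burst_bytes      # maximum burst size
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def send(self, packet: bytes, sock_send) -> None:
        """Block until the bucket holds enough tokens, then send the packet."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= len(packet):
                self.tokens -= len(packet)
                sock_send(packet)
                return
            time.sleep((len(packet) - self.tokens) / self.rate)
```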
In other embodiments, a new tag stream may be added to an original tag stream; referring to FIG. 4B, when the new and original tag streams are packaged, the new tag stream may be inserted into the original tag stream using the RTP header extension mode, and some data may be appended according to the characteristics of the code stream; a processing procedure that does not recognize the appended data can treat it simply as code stream padding. An RTP packet consists of two parts: the header structure (RTP header) and the message body (RTP body). In the RTP header structure, the fixed header may be followed by a header extension; its presence is indicated by the X (extension) bit of the header, while the P bit analogously indicates appended padding. When the extension bit is set, a header extension follows the fixed header, in the format shown in FIG. 4B; the header extension may carry the newly inserted tag stream, so that the new tag stream is inserted into the original tag stream in the header extension mode.
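For concreteness, the standard RFC 3550 layout relied on here can be packed as follows (an illustrative sketch; the patent gives no byte-level code, and the function and parameter names are invented):

```python
# Sketch only: a minimal RTP fixed header with the X bit set, followed by a
# header extension (16-bit profile field, 16-bit length in 32-bit words)
# carrying tag data, then the payload.
import struct

def rtp_packet_with_extension(seq: int, timestamp: int, ssrc: int,
                              payload_type: int, ext_profile: int,
                              ext_data: bytes, payload: bytes) -> bytes:
    assert len(ext_data) % 4 == 0        # extension is counted in 32-bit words
    b0 = (2 << 6) | (1 << 4)             # version 2, X (extension) bit set
    b1 = payload_type & 0x7F             # marker bit clear
    header = struct.pack("!BBHII", b0, b1, seq, timestamp, ssrc)
    ext_header = struct.pack("!HH", ext_profile, len(ext_data) // 4)
    return header + ext_header + ext_data + payload
```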
Referring to FIG. 6A and FIG. 6B, step S130 may further be included after step S110 or step S120, as follows.
S130: Send the tag stream to the media server or the display terminal according to a pose change of the network camera or a state change of the tag stream.
The tag server stores a plurality of tags. According to changes of the network camera, it can send the tags within the camera's field of view to the media server, or send the tags within the camera's field of view to the display terminal; a tag stream whose state has changed may also be sent to the display terminal. Referring to FIG. 9A, FIG. 9B, and FIG. 10, step S130 may include the following steps:
S131: Select tags using the position of the network camera as the center of a circle and the maximum field-of-view range of the network camera as the radius.
The position of a network camera (IP Camera, IPC) can be obtained through GPS (Global Positioning System) positioning, and the network camera has a predetermined field-of-view range. Tags, which are stored in the tag server, are selected using the GPS position of the network camera as the center of a circle and the maximum field-of-view range of the network camera as the radius. The associated data information tags within a tag may include a location tag holding an absolute position and a relative position, where the absolute position may be position information acquired through GPS and the relative position may represent the tag's horizontal or vertical angle relative to the network camera. The tags are screened according to the GPS position information in their location tags.
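A minimal sketch of this screening step follows (the haversine distance helper and the dictionary layout of a tag are assumptions added for illustration; the patent specifies only the circle test itself):

```python
# Sketch only: keep the tags whose absolute GPS position falls inside the
# circle centered on the camera's GPS position with radius equal to the
# camera's maximum field-of-view range.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in meters."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def select_tags(camera_gps, radius_m, tags):
    """Screen tags against circular region B around the camera."""
    return [t for t in tags
            if haversine_m(*camera_gps, *t["gps"]) <= radius_m]
```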
For example, referring to FIG. 10, an arbitrary region a may contain a network camera IPC and tags C. The absolute positions of the IPC and of a tag C can be obtained through GPS, which yields the relative position of the tag C with respect to the IPC, that is, its horizontal or vertical angle relative to the IPC. Let the maximum field-of-view radius of the IPC be R; taking R as the radius and the GPS position of the IPC as the center gives a circular region B, and, because the state of the IPC changes, the current field of view of the IPC may be a smaller region B1. The tags C in the region a can then be screened against the circular region B: tags C inside region B are selected, and tags C outside region B are not.
S132: Judge whether the pose of the network camera has changed.
A pose change of the network camera may be a change in its viewing direction or orientation; it may be an angular change of the camera, where the angle may change in any direction in space, for example in three-dimensional space; or it may be a change in the focal length of the camera, among others.
If the pose of the network camera has changed, step S133 is executed; if it has not changed, step S134 is executed.
S133: Perform position calculation for all tags within a first predetermined time, and send the tags within the field of view after the pose change of the network camera.
If the pose of the network camera changes, that is, if any of the direction, angle, or focal length of the IPC changes, position calculation is performed for all tags within the first predetermined time, where all tags may be all tags screened by the IPC or all tags sent by the tag server. For example, if the IPC pose changes, the positions of the tags screened by the IPC are calculated within 3 seconds. When the direction, angle, or focal length of the IPC changes, position calculation on the screened tags yields the tags within the current field of view after the change; these tags can be sent to the display terminal for display. Only the tags within the field of view are sent, which reduces the data volume of the transmitted tags and increases transmission efficiency. One possible in-view test is sketched after step S134 below.
S134: Perform position calculation for all tags within a second predetermined time, and carousel the tags within the second predetermined time, the second predetermined time being longer than the first predetermined time.
If the pose of the network camera has not changed, that is, if the direction, angle, and focal length of the IPC are unchanged, position calculation for all tags is performed within a second predetermined time, and the tags are displayed in carousel within that time, the second predetermined time being longer than the first. All tags may be the tags screened by the IPC or all tags transmitted by the tag server. In the carousel, the tags in the current field of view of the IPC may be displayed in turn: clockwise or counterclockwise across the field of view, in the order of their timestamps, or in the order of their sequence numbers. The carousel may also mean sending all the selected tags to the display terminal for display at a predetermined time, or sending them in turn to the tag server; all the screened tags may be transmitted cyclically in sequence within the predetermined time. For example, if the IPC pose has not changed, position calculation is performed on the tags screened by the IPC within 3 minutes; the calculation yields all tags in the current field of view, which can be sent to the display terminal, and the sent tags are displayed in turn within the 3 minutes. This helps reduce the data volume of the transmitted tags and increases transmission efficiency.
After step S131, the method may further include the following steps:
S142: Judge whether the state of any tag in the tag stream has changed.
A change of the tag stream may be the addition or deletion of a tag, a change in the shape or position of the tag stream, or a state change of a tag in the tag stream; the present application places no limitation on how a tag may change. If the state of a tag has changed, step S143 is executed; if no tag has changed, step S144 is executed. Step S142 and step S132 may be performed simultaneously or judged separately; no order between them is imposed, and either may be executed together with or independently of the other.
S143: Perform position calculation on the changed tags.
If the state of a tag changes, the position of the tag whose state has changed can be calculated, and the tag whose position or state has changed can be sent through the media server to the display terminal, or sent to the display terminal directly. All tags here may be all tags screened by the IPC. For example, if the state of a tag within the current field of view of the IPC changes, the position of the changed tag is calculated; if the tag is not within the current field of view, the changed tag is not sent, and if position calculation places it within the field of view, the changed tag is sent to the display terminal for display. Only the tags whose state has changed within the current field of view of the IPC are sent, which helps reduce the data volume of the transmitted tags and increases transmission efficiency.
A user can subscribe to and listen for specified tag types at the display terminal according to personal preference. When the user specifies a tag type at the display terminal, the display terminal can send a control message for that tag type to the media server to be forwarded to the tag server, or send the control message directly to the tag server; the tag server then sends tags of the specified type to the display terminal for display, according to the terminal's subscribed tag types and the received control message.
When the tag server has screened the tags to be sent, it encapsulates the stored tags into tag RTP packets. In the companion stream transmission mode, the tag RTP packets can be inserted into the original media RTP stream and transmitted to the display terminal for display, the inserted tag RTP packets and the original RTP stream forming the same RTP stream; the tag server sends the media stream with the inserted tag RTP packets to the media server, and the media server sends the received stream on to the display terminal for display. In the independent stream transmission mode, the tag server can form the tag RTP packets into an independent tag RTP stream and send it to the display terminal for display, the media RTP stream and the tag RTP stream being different RTP streams.
S144: Execute step S132.
If no tag state has changed among all the tags, the above step S132 is executed, namely the step of judging whether the pose of the network camera has changed. Reference may be made to step S132 above; details are not repeated here.
A fifth embodiment of the media augmented reality tagging method, referring to FIG. 11A, may include the following steps:
S510: The display terminal receives a tag stream originating from a tag server and forwarded by a media server, together with a media stream sent by the media server, where the tag stream is associated with at least one media stream through a uniform resource locator and a timestamp, the time at which the tag stream is displayed is determined by the timestamp, the uniform resource locator is used to locate the source of the media stream, and the tag stream and the media stream share a transmission port.
S511: the display terminal displays the contents of the tag stream and the media stream.
For the details of these steps, reference may be made to the corresponding processes in the foregoing embodiments, which are not repeated here.
Another implementation of this embodiment is provided; referring to FIG. 11B, it includes the following steps:
S520: The display terminal receives a tag stream from a tag server and a media stream sent by the media server, where the tag stream is associated with at least one media stream through a uniform resource locator and a timestamp, the time at which the tag stream is displayed is determined by the timestamp, the uniform resource locator is used to locate the source of the media stream, and the tag stream and the media stream use different transmission ports.
S521: the display terminal displays the contents of the tag stream and the media stream.
For the details of this step, reference may be made to the display-terminal-side processes of the media augmented reality tagging method in the foregoing embodiments, which are not repeated here.
In other embodiments, referring to FIG. 11C and FIG. 11D, after step S511 or step S521, the method may further include the following step:
S512: The display terminal sends a control message to the tag server, where the control message is transmitted backward along the transmission path of the tag stream and includes a control command for controlling the operation of a tag.
For the details of this step, reference may be made to the display terminal processes in the foregoing embodiments, which are not repeated here.
Referring to FIG. 12, the present application provides a computer device. In this embodiment, the computer device 800 includes a processor 810, a memory 820, and a computer program stored in the memory 820 and executable on the processor 810. The processor 810, when executing the computer program, implements the media augmented reality tagging method of the above embodiments.
In this embodiment, the processor 810 may also be referred to as a CPU (Central Processing Unit). The processor 810 may be an integrated circuit chip having signal processing capabilities. The processor 810 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The general-purpose processor may be a microprocessor, or the processor 810 may be any conventional processor or the like.
For the method of the above embodiments, which may exist in the form of a computer program, the present application proposes a computer-readable storage medium; referring to FIG. 13, the computer-readable storage medium 900 of this embodiment stores a computer program 910 that can be executed by a processor to implement the media augmented reality tagging method of the above embodiments.
The computer readable storage medium 900 of this embodiment may be a medium that may store program instructions, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disc, or may be a server that stores the program instructions, and the server may send the stored program instructions to other devices for execution, or may also self-execute the stored program instructions.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed between components may be implemented through certain ports, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices. Alternatively, they may be implemented in program code executable by computing devices, so that the code may be stored in a storage device and executed by computing devices, or the modules or steps may be fabricated individually as integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Accordingly, the present application is not limited to any specific combination of hardware and software.
The foregoing description covers only embodiments of the present application and is not intended to limit the scope of the patent. Any equivalent structure or equivalent process derived from the description of the present application, whether used directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the present application.

Claims (9)

1. A method of media augmented reality tagging, the method comprising:
associating a tag stream with at least one media stream through a uniform resource locator and a timestamp, wherein the time at which the tag stream is displayed is determined by the timestamp, the uniform resource locator is used for locating the source of the media stream, and the timestamp is used for synchronizing the media stream and the tag stream so as to determine the specific frame picture of the media stream on which the tag stream is displayed;
sending the tag stream to a media server, so that the media server sends the media stream and the tag stream to a display terminal and the display terminal displays the media stream and the tag stream according to the associated timestamp, wherein the media stream and the tag stream share a transmission port; or
sending the tag stream to the display terminal, so that the display terminal displays the media stream and the tag stream according to the associated timestamp, wherein the transmission ports of the media stream and the tag stream are different; and
receiving control information sent by the display terminal, wherein the control information is transmitted in reverse along the transmission path of the tag stream and includes a control command for controlling the operation of the tag stream; the tag server comprises a plurality of node tag servers, and the current node tag server judges whether the control information includes a control tag of the current node tag server, wherein the control tag comprises a control command for controlling the operation of the tag stream; if the control information contains the control tag of the current node tag server, the control tag is extracted; and if the control information does not contain the control tag of the current node tag server, the control information is forwarded to the previous node tag server of the current node tag server.
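For illustration only, the per-node routing of control information described in claim 1 can be sketched in Python as follows; the class NodeTagServer and the dict-based control information layout are assumptions made solely for this sketch:

from typing import Optional

class NodeTagServer:
    def __init__(self, node_id: str, previous_node: Optional["NodeTagServer"]):
        self.node_id = node_id
        self.previous_node = previous_node  # next hop toward the tag source

    def handle_control_info(self, control_info: dict) -> None:
        """Extract this node's control tag if present; otherwise forward
        the control information to the previous node tag server."""
        control_tag = control_info.get(self.node_id)
        if control_tag is not None:
            self.apply(control_tag)  # command controlling the tag stream
        elif self.previous_node is not None:
            self.previous_node.handle_control_info(control_info)

    def apply(self, control_tag: dict) -> None:
        print(f"{self.node_id}: executing {control_tag}")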
2. The method according to claim 1, wherein
the transmission of the tag stream is based on a real-time transport protocol, and a plurality of tag real-time transport protocol packets are arranged in the tag stream; each of the tag real-time transport protocol packets includes associated data information of a plurality of tags, and the associated data information of each tag includes at least one of a content tag, a location tag, or a control tag.
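A possible in-memory layout for the tag real-time transport protocol packet of claim 2, sketched in Python; all field names are assumed for illustration, and the claim does not prescribe this representation:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TagInfo:
    content_tag: Optional[str] = None     # what the tag displays
    location_tag: Optional[tuple] = None  # where the tag is anchored
    control_tag: Optional[dict] = None    # command controlling the tag stream

@dataclass
class TagRtpPacket:
    seq: int        # RTP sequence number
    timestamp: int  # timestamp associating the packet with a frame
    ssrc: int       # synchronization source identifier
    tags: list = field(default_factory=list)  # several TagInfo entries per packet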
3. The method according to claim 1, wherein
the current node tag server stores at least one of a synchronization source identifier and the uniform resource locator of the tag stream; and
the current node tag server receives the control information and forwards the control information according to the synchronization source identifier or the uniform resource locator.
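An illustrative sketch of the forwarding in claim 3, assuming a routing table keyed by synchronization source identifier or uniform resource locator; the table layout and addresses below are hypothetical:

ROUTES = {
    # ssrc-or-url          -> upstream (host, port) the tag stream came from
    "ssrc:0x1234abcd":     ("198.51.100.7", 5004),
    "rtsp://cam1/stream1": ("198.51.100.8", 5004),
}

def forward_control_info(key: str, control_info: bytes, send) -> bool:
    """Look up the upstream hop by SSRC or URL and forward the control
    information; returns False when no route is stored for this stream."""
    route = ROUTES.get(key)
    if route is None:
        return False
    send(route, control_info)
    return True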
4. The method according to claim 1, wherein
before the sending of the tag stream to the media server or the display terminal, the method comprises:
inserting a new tag real-time transport protocol packet into the tag stream and merging it with the tag stream, wherein, after the insertion, the new tag real-time transport protocol packet adopts the sequence number of the original tag real-time transport protocol packet at the insertion position, and the sequence numbers of the original tag real-time transport protocol packet at the insertion position and of all original tag real-time transport protocol packets following it are each increased by the number of new tag real-time transport protocol packets inserted.
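The sequence-number adjustment of claim 4 can be sketched as follows; the list-of-dicts packet representation is an assumption for illustration only:

def insert_tag_packets(stream: list, position: int, new_packets: list) -> list:
    """Insert new tag RTP packets at the given position: the new packets
    take over the sequence numbers starting at the insertion point, and
    every original packet from that point on is shifted by the number of
    packets inserted."""
    base_seq = stream[position]["seq"]
    for i, pkt in enumerate(new_packets):
        pkt["seq"] = base_seq + i        # new packets adopt seq at insertion point
    for pkt in stream[position:]:
        pkt["seq"] += len(new_packets)   # originals shifted by count inserted
    return stream[:position] + new_packets + stream[position:]

# Example: inserting 2 packets before seq 11 in [10, 11, 12] yields
# sequence numbers 10, 11, 12, 13, 14, matching the claim's renumbering.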
5. The method according to claim 1, wherein the sending of the tag stream to the media server or the display terminal further comprises:
sending the tag stream to the media server or the display terminal according to a pose change of the network camera or a state change of the tag stream, which includes:
screening tags by taking the position of the network camera as the center of a circle and the maximum field of view of the network camera as the radius;
judging whether the pose of the network camera has changed;
if the pose of the network camera has changed, calculating the positions of all the tags within a first preset time, and sending the tags within the field of view after the pose of the network camera has changed; and
if the pose of the network camera has not changed, calculating the positions of all the tags within a second preset time and performing carousel of the tags within the second preset time, wherein the second preset time is longer than the first preset time.
6. The method according to claim 5, wherein
after the screening of the tags by taking the position of the network camera as the center of a circle and the maximum field of view of the network camera as the radius, the method further comprises:
judging whether the states of the tags have changed; and
if the states of the tags have changed, calculating the positions of the changed tags.
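A minimal Python sketch of the screening and scheduling in claims 5 and 6, under the assumption that tags carry planar x/y coordinates and that the two preset times are supplied by the caller; none of these names come from the claims themselves:

import math

def screen_tags(tags, cam_pos, max_view_radius):
    """Keep only tags within the circle centred on the network camera
    whose radius is the camera's maximum field of view."""
    cx, cy = cam_pos
    return [t for t in tags
            if math.hypot(t["x"] - cx, t["y"] - cy) <= max_view_radius]

def schedule_recompute(pose_changed: bool, states_changed: bool,
                       first_preset: float, second_preset: float) -> float:
    """Return how soon tag positions must be recomputed; per claim 5 the
    second preset time is longer than the first."""
    assert second_preset > first_preset
    if pose_changed or states_changed:
        return first_preset   # recompute promptly, send tags now in view
    return second_preset      # periodic recompute plus tag carousel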
7. A method of media augmented reality tagging, the method comprising:
receiving a tag stream, which a media server relays from a tag server, and a media stream sent by the media server, wherein the tag stream is associated with at least one media stream through a uniform resource locator and a timestamp, the time at which the tag stream is displayed is determined by the timestamp, the uniform resource locator is used for locating the source of the media stream, the timestamp is used for synchronizing the media stream and the tag stream so as to determine the specific frame picture of the media stream on which the tag stream is displayed, and the tag stream and the media stream share a transmission port; or
receiving a tag stream from the tag server and a media stream sent by the media server, wherein the tag stream is associated with at least one media stream through the uniform resource locator and the timestamp, the time at which the tag stream is displayed is determined by the timestamp, the uniform resource locator is used for locating the source of the media stream, the timestamp is used for synchronizing the media stream and the tag stream so as to determine the specific frame picture of the media stream on which the tag stream is displayed, and the transmission ports of the tag stream and the media stream are different;
displaying the contents of the tag stream and the media stream according to the associated timestamp; and
sending control information to the tag server, wherein the control information is transmitted in reverse along the transmission path of the tag stream and includes a control command for controlling the operation of the tag stream; the tag server comprises a plurality of node tag servers, so that the current node tag server judges whether the control information includes a control tag of the current node tag server, wherein the control tag comprises a control command for controlling the operation of the tag stream; if the control information contains the control tag of the current node tag server, the control tag is extracted; and if the control information does not contain the control tag of the current node tag server, the control information is forwarded to the previous node tag server of the current node tag server.
8. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the method of any one of claims 1-7.
9. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1-7.
CN202010744754.3A 2020-07-29 2020-07-29 Media augmented reality tag method, computer device and storage medium Active CN114071246B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010744754.3A CN114071246B (en) 2020-07-29 2020-07-29 Media augmented reality tag method, computer device and storage medium


Publications (2)

Publication Number Publication Date
CN114071246A CN114071246A (en) 2022-02-18
CN114071246B (en) 2024-04-16

Family

ID=80226806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010744754.3A Active CN114071246B (en) 2020-07-29 2020-07-29 Media augmented reality tag method, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN114071246B (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6130841B2 (en) * 2012-09-07 2017-05-17 日立マクセル株式会社 Receiver
NL2016051B1 (en) * 2015-12-30 2017-07-12 Rollin Video Tech B V Live-stream video advertisement system
US11693827B2 (en) * 2016-12-29 2023-07-04 Microsoft Technology Licensing, Llc Syncing and propagation of metadata changes across multiple endpoints
CN110915180B (en) * 2017-05-16 2022-06-28 瑞典爱立信有限公司 Low-latency media ingestion system, apparatus and method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012248096A (en) * 2011-05-30 2012-12-13 Brother Ind Ltd Program of terminal device, terminal device, and control method of terminal device
CN104540007A (en) * 2014-12-15 2015-04-22 百度在线网络技术(北京)有限公司 Method for playing streaming media data and method and device for providing streaming media labels
CN108476324A (en) * 2015-10-08 2018-08-31 皇家Kpn公司 Area-of-interest in the video frame of enhanced video stream
WO2019168780A1 (en) * 2018-02-27 2019-09-06 Thin Film Electronics Asa System and method for providing augmented reality experience to objects using wireless tags
WO2019174429A1 (en) * 2018-03-15 2019-09-19 高新兴科技集团股份有限公司 Video map engine system
CN109858967A (en) * 2019-01-30 2019-06-07 上海极链网络科技有限公司 One kind launching advertising method, system and electronic equipment towards more video media platforms

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Key Technologies of Augmented Reality Browsers for Mobile Terminals; Li Dan; Outstanding Master's Theses; full text *

Also Published As

Publication number Publication date
CN114071246A (en) 2022-02-18

Similar Documents

Publication Publication Date Title
MX2021010448A (en) Method for processing live-streaming interaction video and server.
US7587507B2 (en) Media recording functions in a streaming media server
US20110320629A1 (en) Stream media server, client terminal and method and system for downloading stream media
CN104243920B (en) A kind of image split-joint method and device encapsulated based on basic flow video data
CN112073142B (en) Automatic parking method and system for vehicle
US9124909B1 (en) Metadata for compressed video streams
CN103686432A (en) Screen sharing method and system based on video network
US20080010382A1 (en) Method, system, and computer-readable medium to render repeatable data objects streamed over a network
US11070327B2 (en) Method and apparatus for re-transmitting MMT packet and method and apparatus for requesting MMT packet re-transmission
CN103155584A (en) Method for synchronizing multimedia flows and corresponding device
CN103797810A (en) Synchronized wireless display devices
KR20160079074A (en) Content presentation for mpeg media transport
CN107787586B (en) Method and apparatus for transmitting and receiving signal in multimedia system
JP2001309348A (en) Method and device for processing user request utilizing upstream channel in interactive multimedia contents service
WO2022021145A1 (en) Media augmented reality labeling method, computer device, and storage medium
CN114071246B (en) Media augmented reality tag method, computer device and storage medium
CN106664444B (en) Method and apparatus for receiving media packet in multimedia system
JP2002502169A (en) Method and system for client-server interaction in conversational communication
CN101316161B (en) Synchronous indication method and system for distributed video
CN104284239A (en) Video playing method and device, video playing client side and multimedia server
US11659259B1 (en) Video streaming systems and methods
CN101288286B (en) Method and devices for the transfer of a data flow from a data source to a data sink
CN104320386A (en) Real-time streaming transmission protocol based data sending and receiving method and corresponding device thereof
CN111314629B (en) OSD information superposition method and device
CN110324578B (en) Monitoring video processing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant