CN114710472A - AR video call processing method and device and communication equipment - Google Patents

AR video call processing method and device and communication equipment

Info

Publication number
CN114710472A
Authority
CN
China
Prior art keywords
terminal
video data
data
network equipment
network device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011492958.9A
Other languages
Chinese (zh)
Inventor
严砥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Communications Ltd Research Institute
Priority to CN202011492958.9A
Publication of CN114710472A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/10 Architectures or entities
    • H04L65/1063 Application servers providing network services

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The embodiment of the invention discloses a processing method and apparatus for an Augmented Reality (AR) video call, and a communication device. The method comprises the following steps: a network device receives video data from a terminal through a core network device, where the terminal is either of two terminals that have established a Voice over Long-Term Evolution (VoLTE) video connection; and the network device adds AR data to the video data to generate AR video data and sends the AR video data to the two terminals respectively.

Description

AR video call processing method and device and communication equipment
Technical Field
The invention relates to the technical field of Augmented Reality (AR), and in particular to a processing method and apparatus for an AR video call, and a communication device.
Background
At present, many AR video call capabilities are implemented by the terminal device itself: for example, a mobile phone analyzes the original video content captured by its camera, adds AR elements, integrates them with the original video content, and loads the integrated video into the media channel of the video call.
In this AR video call mode, the operator only provides the bearer channel and does not provide the AR video service, so it cannot obtain revenue beyond traffic revenue, which is unfavorable for the operator's transition from a channel provider to a service provider. In addition, existing AR video call schemes are all based on Voice over IP (VoIP); there is no AR video call scheme combined with Voice over Long-Term Evolution (VoLTE).
Disclosure of Invention
In order to solve the existing technical problem, embodiments of the present invention provide a processing method and apparatus for an AR video call, and a communication device.
In order to achieve the above purpose, the technical solution of the embodiment of the present invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a method for processing an AR video call, where the method includes:
the network equipment receives video data from a terminal through core network equipment; the terminal is either of two terminals that have established a VoLTE video connection;
and the network equipment adds AR data to the video data to generate AR video data, and respectively sends the AR video data to the two terminals.
In the foregoing solution, before the network device receives video data from a terminal through a core network device, the method further includes:
the network device performs information interaction with the core network device, where the interacted information is used by the core network device to configure Initial Filter Criteria (iFC) information, and the iFC information is used to trigger the network device to execute an AR video service function.
In the above scheme, before the network device receives the video data from the terminal through the core network device, the method further includes:
and the network equipment receives, through the core network equipment, the capability information of either of the two terminals, wherein the capability information represents relevant parameters with which that terminal supports the AR function.
In the foregoing solution, the receiving, by the network device, the capability information of any one of the two terminals through the core network device includes:
the network equipment receives the capability information of any one of the two terminals through the core network equipment by a Session Initiation Protocol (SIP) message; wherein the SIP message is used to establish a VoLTE video connection between the two terminals.
In the foregoing solution, before the network device adds the AR data to the video data, the method further includes: the network equipment receives AR data from a first terminal through the core network equipment; the first terminal is any one of the two terminals.
In the foregoing solution, before the network device adds the AR data to the video data, the method further includes: the network equipment receives first information from a first terminal through the core network equipment, where mapping relations between a plurality of pieces of information and AR data are stored in the network equipment; the first terminal is either of the two terminals;
and the network equipment determines AR data corresponding to the first information.
In the foregoing solution, the method further includes: the network device receives a first instruction from a terminal through the core network device, where the first instruction is used to start or stop the sending of the AR video data by the network device, or the first instruction is used to regenerate the AR video data.
In a second aspect, an embodiment of the present invention further provides a method for processing an augmented reality AR video call, where the method includes:
a first terminal sends video data and receives video data from a second terminal; the video data is sent to the network equipment through the core network equipment; and the video data is sent after the first terminal and the second terminal have established a VoLTE video connection;
the first terminal receives AR video data from the network equipment, and the AR video data is generated by adding AR data to the video data by the network equipment.
In the foregoing solution, before the first terminal receives the AR video data from the network device, the method further includes: and the first terminal sends capability information to the network equipment through the core network equipment, wherein the capability information represents the relevant parameters of the first terminal for supporting the AR function.
In the foregoing solution, the sending, by the first terminal, the capability information to the network device through the core network device includes: the first terminal sends the capability information of the first terminal through the core network equipment by an SIP message; wherein the SIP message is used to establish a VoLTE video connection between the first terminal and the second terminal.
In the foregoing solution, before the first terminal receives the AR video data from the network device, the method further includes: and the first terminal sends AR data to the network equipment through the core network equipment.
In the foregoing solution, before the first terminal receives the AR video data from the network device, the method further includes: and the first terminal sends first information to the network equipment through the core network equipment, wherein the first information is used for indicating corresponding AR data.
In the foregoing solution, the method further includes: the first terminal sends a first instruction to the network device through the core network device, where the first instruction is used to start or stop the network device from sending the AR video data, or the first instruction is used to regenerate the AR video data.
In a third aspect, an embodiment of the present invention further provides an apparatus for processing an AR video call, where the apparatus includes a first communication unit and a data processing unit; wherein:
the first communication unit is used for receiving video data from a terminal through core network equipment; the terminal is either of two terminals that have established a VoLTE video connection;
the data processing unit is used for adding AR data to the video data to generate AR video data;
the first communication unit is further configured to send the AR video data generated by the data processing unit to the two terminals, respectively.
In the above scheme, the first communication unit is further configured to perform information interaction with the core network device before receiving video data from the terminal through the core network device, where the interacted information is used by the core network device to configure iFC information, and the iFC information is used to trigger the network device to execute an AR video service function.
In the foregoing solution, the first communication unit is further configured to receive, through the core network device, capability information of either of the two terminals before receiving video data from the terminal through the core network device, where the capability information represents relevant parameters with which that terminal supports the AR function.
In the above scheme, the first communication unit is configured to receive, through an SIP message and via the core network device, capability information of any one of the two terminals; wherein the SIP message is used to establish a VoLTE video connection between the two terminals.
In the foregoing solution, the first communication unit is further configured to receive, through the core network device, AR data from a first terminal before the data processing unit adds the AR data to the video data; the first terminal is either of the two terminals.
In the above scheme, the apparatus further comprises a storage unit;
the first communication unit is further configured to receive, through the core network device, first information from a first terminal before the data processing unit adds AR data to the video data;
the storage unit is used for storing the mapping relation between a plurality of information and the AR data;
and the data processing unit is used for determining AR data corresponding to the first information, adding the AR data to the video data and generating AR video data.
In the foregoing solution, the first communication unit is further configured to receive, through the core network device, a first instruction from a terminal, where the first instruction is used to start or stop the sending of the AR video data by the network device, or the first instruction is used to regenerate the AR video data.
In a fourth aspect, an embodiment of the present invention further provides an apparatus for processing an AR video call, where the apparatus includes a first sending unit and a first receiving unit; wherein:
the first sending unit is used for sending video data;
the first receiving unit is used for receiving video data from a second terminal, and is further used for receiving AR video data from the network device, where the AR video data is generated by the network device adding AR data to the video data;
the video data is sent to the network device through the core network device, and is sent after the first terminal and the second terminal establish a VoLTE video connection.
In the foregoing solution, the first receiving unit is further configured to send, before receiving the AR video data from the network device, capability information to the network device through the core network device, where the capability information represents a relevant parameter of the first terminal for supporting an AR function.
In the foregoing solution, the first sending unit is further configured to send, through an SIP message, capability information of the first terminal through the core network device; wherein the SIP message is used to establish a VoLTE video connection between the first terminal and the second terminal.
In the foregoing solution, the first sending unit is further configured to send, to the network device, the AR data through the core network device.
In the foregoing solution, the first sending unit is further configured to send first information to the network device through the core network device, where the first information is used to indicate corresponding AR data.
In the above scheme, the first sending unit is further configured to send a first instruction to the network device through the core network device, where the first instruction is used to start or terminate sending of the AR video data by the network device, or the first instruction is used to regenerate the AR video data.
In a fifth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method in the first aspect of the embodiment of the present invention; alternatively, the program is adapted to carry out the steps of the method according to the second aspect of the embodiments of the invention when executed by a processor.
In a sixth aspect, an embodiment of the present invention further provides a communication device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the steps of the method according to the first aspect of the embodiment of the present invention; alternatively, the processor implements the steps of the method according to the second aspect of the embodiment of the present invention when executing the program.
The embodiment of the invention provides a processing method and apparatus for an AR video call, and a communication device. The method includes: the network equipment receives video data from a terminal through core network equipment; the terminal is either of two terminals that have established a VoLTE video connection; and the network equipment adds AR data to the video data to generate AR video data and sends the AR video data to the two terminals respectively. With the technical solution of the embodiment of the invention, the AR video call is realized based on VoLTE, that is, the AR function is used during a VoLTE video call; and because the AR function is combined with the operator's VoLTE video call service, the operator changes from a channel provider to a service provider and can obtain service revenue in addition to traffic revenue.
Drawings
Fig. 1 is a schematic structural diagram of a processing system for an AR video call according to an embodiment of the present invention;
fig. 2 is a first flowchart illustrating a processing method of an AR video call according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating a second method for processing an AR video call according to an embodiment of the present invention;
fig. 4 is an interaction flow diagram of a processing method of an AR video call according to an embodiment of the present invention;
fig. 5 is a first schematic structural diagram illustrating a processing apparatus for an AR video call according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a second configuration of the processing apparatus for AR video call according to the embodiment of the present invention;
fig. 7 is a schematic diagram of a hardware structure of a communication device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a schematic structural diagram of a processing system for an AR video call according to an embodiment of the present invention; as shown in fig. 1, the system may include: terminals capable of performing a video call, an IP Multimedia Subsystem (IMS) core network, and a network device; the network device is a device capable of performing an AR function, for example, adding AR data to video data to generate AR video data. Illustratively, the network device may be a MultiMedia Telephony (MMTEL) server or an MMTEL/AR platform. Of course, the network device in this embodiment may also be another network device capable of implementing the above functions. In fig. 1, only two terminals (terminal A and terminal B) are taken as an example for description, and the number of terminals is not limited to two. In addition, an access network such as a base station is also included between the terminals and the IMS core network, which is not shown in the figure.
Optionally, the core network in the system is not limited to be an IMS core network, and may also be a core network of another communication system or communication network.
In this embodiment, as shown in fig. 1, a video call may be performed between terminal A and terminal B through the access network and the IMS core network. When the AR video service function needs to be performed, the network device can obtain the video data of terminal A and terminal B through the core network device, add AR data to the video data of terminal A and/or terminal B to generate AR video data, and then send the AR video data to terminal A and terminal B respectively through the core network device, thereby implementing the AR video service based on VoLTE.
Based on the above system example, the following embodiments of the present invention are proposed.
The embodiment of the invention provides a processing method of an AR video call, which is applied to network equipment. Fig. 2 is a first flowchart illustrating a processing method of an AR video call according to an embodiment of the present invention; as shown in fig. 2, the method includes:
step 101: the network equipment receives video data from a terminal through core network equipment; the terminal is either of two terminals that have established a VoLTE video connection;
step 102: and the network equipment adds AR data to the video data to generate AR video data, and respectively sends the AR video data to the two terminals.
In this embodiment, when two terminals have established a VoLTE video connection, a network device may obtain video data of each terminal through a core network device, where the video data is original video data, and for example, the video data is image data acquired by an image acquisition component of the terminal in real time.
The network device receives video data from the terminal through the core network device, where the video data may be video data of one of the two terminals that have established the VoLTE video connection, or may be video data of each of the two terminals. For example, if AR data needs to be added to video data of a certain terminal, the network device may obtain the video data of the terminal through the core network device. If the AR data needs to be added to the video data of both the two terminals, the network device may obtain the respective video data of the two terminals through the core network device.
In this embodiment, the network device may add the AR data to the video data of at least one of the two terminals as needed, so as to generate AR video data, and then send the AR video data to the terminals through the core network device, respectively.
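The following is a minimal, illustrative sketch (not part of the claimed embodiment) of what "adding AR data to the video data" can look like for a single decoded frame: a simple alpha blend of an RGBA overlay onto an RGB frame. The function name, the frame/overlay layout and the blending rule are assumptions made purely for illustration.

```python
# Illustrative sketch only: the embodiment does not prescribe how the AR data
# is composited; this simply alpha-blends an RGBA overlay onto one RGB frame.
import numpy as np

def add_ar_to_frame(frame_rgb: np.ndarray, overlay_rgba: np.ndarray,
                    top: int, left: int) -> np.ndarray:
    """Blend an RGBA AR overlay onto an HxWx3 video frame at (top, left).

    Assumes the overlay fits entirely inside the frame.
    """
    out = frame_rgb.copy()
    h, w = overlay_rgba.shape[:2]
    region = out[top:top + h, left:left + w].astype(np.float32)
    alpha = overlay_rgba[:, :, 3:4].astype(np.float32) / 255.0
    blended = alpha * overlay_rgba[:, :, :3].astype(np.float32) + (1.0 - alpha) * region
    out[top:top + h, left:left + w] = blended.astype(np.uint8)
    return out
```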
As to how the network device determines the AR data to be added: in one embodiment, before the network device adds AR data to the video data, the method further includes: the network equipment receives AR data from a first terminal through the core network equipment; the first terminal is either of the two terminals. In another embodiment, before the network device adds AR data to the video data, the method further comprises: the network equipment receives first information from a first terminal through the core network equipment, where mapping relations between a plurality of pieces of information and AR data are stored in the network equipment; the first terminal is either of the two terminals; and the network equipment determines the AR data corresponding to the first information.
In this embodiment, as a first implementation manner, the network device may receive, through the core network device, AR data from the first terminal. Illustratively, a plurality of pieces of AR data are stored in the first terminal in advance, the user selects the AR data to be added through an operation, and after the AR data is determined, the first terminal sends the AR data to the network device through the core network device. As a second implementation manner, a plurality of pieces of AR data are stored in advance in the network device, and the first terminal displays a legend, pattern or template of the AR data through its interactive interface; the user selects the legend, pattern or template of the AR data to be added through an operation, and the first terminal then sends first information corresponding to that AR data to the network device through the core network device, where the first information may be an identifier or a serial number corresponding to the AR data; the network device searches its stored data according to the first information and determines the AR data corresponding to the first information. In this way there is no requirement on the terminal's capability: upgrading and expanding the AR data only needs to be done on the network device rather than on each terminal, so the scheme is flexibly extensible.
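As an illustration of the second implementation manner above, the network-side lookup can be as simple as a table keyed by the first information. This is only a sketch; the catalog contents, the identifier format and the function name are assumptions, not something defined by the embodiment.

```python
# Illustrative sketch only: catalog keys, file names and the resolve function
# are assumptions used to show the "first information -> AR data" mapping.
from typing import Optional

AR_CATALOG = {
    "ar-template-001": "overlays/arrow_red.png",
    "ar-template-002": "overlays/circle_blue.png",
    "ar-template-003": "overlays/text_banner.png",
}

def resolve_ar_data(first_information: str) -> Optional[str]:
    """Return the AR data (here, an overlay file) mapped to the first information."""
    return AR_CATALOG.get(first_information)
```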
In this embodiment, the network device may receive the AR data or the first information through the core network device via an SIP message; or, the network device may also receive the AR data or the first information through a new data channel via the core network device, where the new data channel is synchronized with a video channel in which the video data is located.
Optionally, the first terminal may send a first request to the network device through the core network device, where the first request is used to request that AR data be added to the video data; the first request may include the AR data or the first information. Illustratively, the first request is carried in a SIP MESSAGE or a SIP INFO message.
In this embodiment, the network device may add the AR data to the video data of the local terminal or to the video data of the opposite terminal based on the indication of the terminal. Optionally, in addition to the AR data or the first information, the first request may further include indication information corresponding to the local video data or the peer video data. Illustratively, an indicator bit in the first request may be used to indicate the local video data or the peer video data.
Optionally, in addition to the AR data or the first information and the indication information corresponding to the local video data or the peer video data, the first request may further include position information for adding the AR data to the video data, so that the network device may add the AR data at the corresponding position of the video data based on the indicated position information.
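A hedged sketch of what a first request body could look like when carried in a SIP MESSAGE or SIP INFO message. The JSON body format and all field names (ar_ref, target, position) are assumptions made for illustration; the embodiment only states which pieces of information the first request may carry.

```python
# Illustrative sketch only: the body format and field names are assumptions;
# the embodiment only says the first request may carry the AR data (or the
# first information), a local/peer indication and optional position information.
import json

def build_first_request_body(first_information: str,
                             target: str = "peer",         # "local" or "peer" video data
                             position: tuple = (0.4, 0.6)  # normalized (x, y) in the frame
                             ) -> str:
    payload = {
        "ar_ref": first_information,                 # which AR data to add
        "target": target,                            # whose video data it is added to
        "position": {"x": position[0], "y": position[1]},
    }
    return json.dumps(payload)

# Example: ask the network side to add template 002 to the peer's video data.
body = build_first_request_body("ar-template-002", target="peer")
```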
In some optional embodiments of the present invention, before the network device receives the video data from the terminal via the core network device, the method further includes: the network equipment and the core network equipment perform information interaction, the interactive information is used for configuring iFC information by the core network equipment, and the iFC information is used for triggering the network equipment to execute an AR video service function.
In this embodiment, before executing the processing method for the AR video call in steps 101 to 102, the network device first performs information interaction with the core network device so that iFC information is configured in the core network device. The configured iFC information is used to trigger the network device to execute the AR video service function: after the iFC information is configured, the core network device can trigger access to the network device, for example, trigger access to the MMTEL/AR platform, or trigger the MMTEL server to support the AR video service function, so that the network device executes the AR video service function. Illustratively, after receiving the first request, the core network device triggers sending the first request to the network device and triggers sending the video data of the two terminals to the network device, and the network device adds the AR data to the video data, thereby generating the AR video data.
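For illustration, the trigger logic that the configured iFC information encodes can be sketched as below. Real iFC records are provisioned in the subscriber profile of the IMS network rather than in application code, so the dictionary layout and the application-server URI here are purely hypothetical.

```python
# Illustrative sketch only: real iFC data lives in the subscriber profile of
# the IMS core; this merely shows the trigger logic the embodiment relies on.
IFC_AR_VIDEO = {
    "priority": 10,
    "trigger_points": [
        {"condition": "SIP method", "value": "INVITE"},
        {"condition": "SDP media", "value": "video"},
    ],
    "application_server": "sip:mmtel-ar.as.example.com",  # hypothetical AS address
}

def matches_ar_ifc(sip_method: str, has_video_media: bool) -> bool:
    """Return True if the session should be routed to the MMTEL/AR platform."""
    return sip_method == "INVITE" and has_video_media
```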
In some optional embodiments of the present invention, before the network device receives the video data from the terminal via the core network device, the method further includes: and the network equipment receives the capability information of any one of the two terminals through the core network equipment, wherein the capability information represents the relevant parameters of any terminal for supporting the AR function.
In this embodiment, before the network device executes the processing method for the AR video call as in steps 101 to 102, it needs to acquire the capability information of the terminal, that is, the network device and the terminal perform capability negotiation.
Optionally, the receiving, by the network device, capability information of any one of the two terminals through the core network device includes: the network equipment receives the capability information of either of the two terminals through a SIP message via the core network equipment; wherein the SIP message is used to establish the VoLTE video connection between the two terminals.
For example, in the process of establishing a video call between the two terminals according to the IMS protocol, the first terminal may add the capability information to a header of the SIP INVITE message, so that the core network device and the network device know that the first terminal supports the AR function and know the relevant parameters of the supported AR function. Optionally, the capability information may include information indicating that the terminal supports the AR function; further, the capability information may also include relevant parameter information of the supported AR function. In one example, whether the terminal supports the AR function may be indicated by a specific bit in the SIP message header, for example, "1" indicates that the AR function is supported and "0" indicates that it is not supported. In yet another example, the relevant parameter information of the AR function supported by the terminal may also be indicated through certain fields in the SIP message, where the relevant parameter information may include supported AR data channels, supported AR data formats, and the like. Of course, the relevant parameter information in the embodiment of the present invention is not limited to the above examples.
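A sketch of how a terminal could advertise its AR capability in the SIP INVITE used to set up the VoLTE video call. The header name "X-AR-Capability" and its parameter syntax are assumptions; the embodiment only requires that the capability information (a support flag plus relevant parameters such as supported data channels and formats) be carried in the SIP message.

```python
# Illustrative sketch only: the header name and parameter syntax are assumed.
def add_ar_capability_header(invite_headers: dict,
                             supports_ar: bool = True,
                             data_channels: str = "sip-message,data-channel",
                             formats: str = "png,gltf") -> dict:
    """Return a copy of the INVITE headers with an AR capability header added."""
    headers = dict(invite_headers)
    flag = "1" if supports_ar else "0"   # "1": AR supported, "0": not supported
    headers["X-AR-Capability"] = f"{flag};channels={data_channels};formats={formats}"
    return headers

headers = add_ar_capability_header({"From": "sip:terminalA@ims.example.com",
                                    "To": "sip:terminalB@ims.example.com"})
```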
In some optional embodiments of the invention, the method comprises: the network equipment receives a first instruction from a terminal through the core network equipment, wherein the first instruction is used for starting or stopping the network equipment from sending the AR video data, or the first instruction is used for regenerating the AR video data.
In this embodiment, the terminal may instruct the network device, through the first instruction, to start or stop sending the AR video data, or to regenerate the AR video data. Illustratively, the terminal may send the first instruction through a SIP message or a Dual-Tone Multi-Frequency (DTMF) signal.
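As a small illustration of the first instruction, one possible DTMF mapping is sketched below; the digit assignments and command names are assumptions, since the embodiment only states that the instruction can be carried in a SIP message or as a DTMF signal and may start or stop the sending of the AR video data or request regeneration.

```python
# Illustrative sketch only: the digit-to-command mapping is an assumption.
from typing import Optional

FIRST_INSTRUCTION_BY_DTMF = {
    "1": "start",       # start sending the AR video data
    "2": "stop",        # stop sending the AR video data
    "3": "regenerate",  # regenerate the AR video data
}

def parse_first_instruction(dtmf_digit: str) -> Optional[str]:
    """Map a received DTMF digit to the corresponding first instruction, if any."""
    return FIRST_INSTRUCTION_BY_DTMF.get(dtmf_digit)
```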
Based on the foregoing embodiment, the embodiment of the present invention further provides a processing method for an AR video call, where the method is applied to a terminal. Fig. 3 is a second flowchart illustrating a processing method of an AR video call according to an embodiment of the present invention; as shown in fig. 3, the method includes:
step 201: a first terminal sends video data and receives video data from a second terminal; the video data is sent to the network equipment through the core network equipment; and the video data is sent after a VoLTE video connection is established between the first terminal and the second terminal;
step 202: the first terminal receives AR video data from the network equipment, and the AR video data is generated by adding AR data to the video data by the network equipment.
In this embodiment, the first terminal establishes a VoLTE video connection with the second terminal. The first terminal sends video data to the second terminal, and meanwhile, the second terminal sends the video data to the first terminal; in the process of sending the video data, the core network equipment can receive the video data of the two terminals and send the video data of any one of the two terminals to the network equipment. In an exemplary embodiment, if AR data needs to be added to video data of a certain terminal, the network device may obtain the video data of the terminal through the core network device. If the AR data needs to be added to the video data of both the two terminals, the network device may obtain the respective video data of the two terminals through the core network device.
In this embodiment, the network device may add the AR data to the video data of at least one of the two terminals according to a requirement, so as to generate the AR video data, and then send the AR video data to the terminals through the core network devices, respectively, that is, the first terminal may receive the AR video data sent from the network device through the core network devices.
In this embodiment, the first terminal (i.e. any one of the two terminals that have established the VoLTE video connection) may trigger the AR function according to the user requirement, that is, the first terminal triggers to add AR data in the video data. In one embodiment, before the first terminal receives the AR video data from the network device, the method further comprises: and the first terminal sends AR data to the network equipment through the core network equipment. In another embodiment, before the first terminal receives the AR video data from the network device, the method further includes: and the first terminal sends first information to the network equipment through the core network equipment, wherein the first information is used for indicating corresponding AR data.
In this embodiment, as a first implementation manner, a plurality of pieces of AR data are stored in the first terminal in advance, the user selects the AR data to be added through an operation, and after the AR data is determined, the first terminal sends the AR data to the network device through the core network device. As a second implementation manner, a plurality of pieces of AR data are stored in advance in the network device, and the first terminal displays a legend, pattern or template of the AR data through its interactive interface; the user selects the legend, pattern or template of the AR data to be added through an operation, and the first terminal then sends first information corresponding to that AR data to the network device through the core network device, where the first information may be an identifier or a serial number corresponding to the AR data; the network device searches its stored data according to the first information and determines the AR data corresponding to the first information. In this way there is no requirement on the terminal's capability: upgrading and expanding the AR data only needs to be done on the network device rather than on each terminal, so the scheme is flexibly extensible.
In this embodiment, the first terminal may send the AR data or the first information through the core network device by using an SIP protocol; or, the first terminal may also send the AR data or the first information through a new data channel via the core network device, where the new data channel is synchronized with a video channel in which the video data is located.
Optionally, the first terminal may send a first request to the network device through the core network device, where the first request is used to request that AR data be added to the video data; the first request may include the AR data or the first information. Illustratively, the first request is carried in a SIP MESSAGE or a SIP INFO message.
In this embodiment, the network device may add the AR data to the video data of the local terminal or to the video data of the opposite terminal based on the indication of the terminal. Optionally, in addition to the AR data or the first information, the first request may further include indication information corresponding to the local video data or the peer video data. Illustratively, an indicator bit in the first request may be used to indicate the local video data or the peer video data.
Optionally, the first request may include, in addition to the AR data or the first information and indication information corresponding to the local video data or the peer video data, location information for adding the AR data to the video data, so that the network device may add the AR data at a corresponding location of the video data based on the indicated location information.
In some optional embodiments of the invention, before the first terminal receives the AR video data from the network device, the method further comprises: and the first terminal sends capability information to the network equipment through the core network equipment, wherein the capability information represents the relevant parameters of the first terminal for supporting the AR function.
In this embodiment, before executing the processing method of the AR video call in steps 201 to 202, the first terminal needs to perform capability negotiation with the network device to inform the network device whether the first terminal supports the AR function.
Optionally, the sending, by the first terminal, the capability information to the network device through the core network device includes: the first terminal sends the capability information of the first terminal through the core network equipment by an SIP message; wherein the SIP message is used to establish a VoLTE video connection between the first terminal and the second terminal.
For example, in the process of establishing a video call between the two terminals according to the IMS protocol, the first terminal may add the capability information to a header of the SIP INVITE message, so that the core network device and the network device know that the first terminal supports the AR function and know the relevant parameters of the supported AR function. Optionally, the capability information may include information indicating that the terminal supports the AR function; further, the capability information may also include relevant parameter information of the supported AR function. In one example, whether the terminal supports the AR function may be indicated by a specific bit in the SIP message header, for example, "1" indicates that the AR function is supported and "0" indicates that it is not supported. In yet another example, the relevant parameter information of the AR function supported by the terminal may also be indicated through certain fields in the SIP message, where the relevant parameter information may include supported AR data channels, supported AR data formats, and the like. Of course, the relevant parameter information in the embodiment of the present invention is not limited to the above examples.
In some optional embodiments of the invention, the method further comprises: the first terminal sends a first instruction to the network device through the core network device, where the first instruction is used to start or stop the network device from sending the AR video data, or the first instruction is used to regenerate the AR video data.
In this embodiment, the terminal may instruct the network device, through the first instruction, to start or stop sending the AR video data, or to regenerate the AR video data. Illustratively, the terminal may send the first instruction through a SIP message or a DTMF signal.
The following describes a processing method of an AR video call according to an embodiment of the present invention with reference to a specific example. In this example, a network device is taken as an example of an MMTEL/AR platform.
Fig. 4 is an interaction flow diagram of a processing method of an AR video call according to an embodiment of the present invention; as shown in fig. 4, the method includes:
step 300: the MMTEL/AR platform carries out information interaction with an IMS core network, the interactive information is used for configuring iFC information by the IMS core network, and the iFC information is used for triggering the MMTEL/AR platform to execute an AR video service function.
Step 301: terminal A and terminal B establish a VoLTE video connection, and during the establishment of the VoLTE video connection, the two terminals perform capability negotiation with the MMTEL/AR platform.
Step 302: terminal B needs to add AR data to the original video data of terminal A; the MMTEL/AR platform receives the original video data sent by terminal A and the AR data that terminal B wants to add.
In one implementation, a plurality of pieces of AR data are stored in terminal B in advance, the user selects the AR data to be added through an operation, and terminal B sends the selected AR data directly to the MMTEL/AR platform through the core network device. In another implementation, a plurality of pieces of AR data are stored in the MMTEL/AR platform, and terminal B sends the first information corresponding to the AR data to the MMTEL/AR platform through the core network device: terminal B displays, through its interactive interface, the legends, patterns or templates corresponding to the AR data stored in the MMTEL/AR platform, the user selects the legend, pattern or template corresponding to the AR data to be added through an operation, and the selection (that is, the first information) is then sent to the MMTEL/AR platform through the core network device.
The AR data or the first information can be transmitted through a SIP message or through a new data channel.
Step 303: the MMTEL/AR platform adds the AR data to the original video data to generate AR video data.
Step 304: the MMTEL/AR platform sends the AR video data to terminal A, and terminal A can present both the original video data sent by the local terminal and the AR video data.
Step 305: the MMTEL/AR platform sends the AR video data back to terminal B so that the user of terminal B can check the AR effect; terminal B simultaneously presents the original video data sent by the opposite terminal (that is, terminal A) and the AR video data; optionally, terminal B may also present the video data of the local terminal.
In this embodiment, the MMTEL/AR platform may send the AR video data to terminal A or terminal B through the Hypertext Transfer Protocol (HTTP) or through a new data channel.
Optionally, before sending the AR video data to terminal A, the MMTEL/AR platform may send a notification message to terminal A to notify it that the AR video data is ready; likewise, before sending the AR video data to terminal B, the MMTEL/AR platform may send a notification message to terminal B to notify it that the AR video data is ready.
Optionally, the method further comprises step 306: terminal B may send a first instruction to the MMTEL/AR platform through a SIP message or DTMF signaling; the first instruction may start or stop the sending of the AR video data to terminal A by the MMTEL/AR platform, or stop sending the AR video data back to terminal B; alternatively, if user B is not satisfied with the AR effect, the first instruction may request the MMTEL/AR platform to regenerate the AR video data.
The AR video call processing method is applicable to various rich-video-call scenarios, such as telemedicine, remote education, remote equipment control, and the like.
For example, user A and user B make a video call through terminal A and terminal B, respectively, and user B wants to indicate to user A, through the image, at which position in the image some processing should be performed. To make the position clearer, user B wants to add AR data to the original video data of terminal A. Terminal B may, on one hand, indicate in the SIP message that the AR data is to be added to the video data of terminal A (that is, the peer device) and, on the other hand, carry the AR data or the identifier corresponding to the AR data in the SIP message, and the like.
The embodiment of the invention also provides a processing apparatus for an AR video call. Fig. 5 is a first schematic structural diagram illustrating a processing apparatus for an AR video call according to an embodiment of the present invention; as shown in fig. 5, the apparatus includes: a first communication unit 41 and a data processing unit 42; wherein:
the first communication unit 41 is configured to receive video data from a terminal through a core network device; the terminal is either of two terminals that have established a VoLTE video connection;
the data processing unit 42 is configured to add AR data to the video data to generate AR video data;
the first communication unit 41 is further configured to send the AR video data generated by the data processing unit 42 to the two terminals, respectively.
In some optional embodiments of the present invention, the first communication unit 41 is further configured to perform information interaction with the core network device before receiving video data from the terminal through the core network device, where the interacted information is used by the core network device to configure iFC information, and the iFC information is used to trigger the network device to execute an AR video service function.
In some optional embodiments of the present invention, the first communication unit 41 is further configured to receive, through the core network device, capability information of either of the two terminals before receiving the video data from the terminal through the core network device, where the capability information represents relevant parameters with which that terminal supports the AR function.
In some optional embodiments of the present invention, the first communication unit 41 is configured to receive, through an SIP message, capability information of any one of the two terminals through the core network device; wherein the SIP message is used to establish a VoLTE video connection between the two terminals.
In some optional embodiments of the present invention, the first communication unit 41 is further configured to receive, through the core network device, AR data from a first terminal before the data processing unit 42 adds the AR data to the video data; the first terminal is either of the two terminals.
In some optional embodiments of the invention, the apparatus further comprises a storage unit 43;
the first communication unit 41 is further configured to receive, through the core network device, first information from a first terminal before the data processing unit 42 adds AR data to the video data;
the storage unit 43 is configured to store mapping relationships between multiple pieces of information and AR data;
the data processing unit 42 is configured to determine AR data corresponding to the first information, add the AR data to the video data, and generate AR video data.
In some optional embodiments of the present invention, the first communication unit 41 is further configured to receive, through the core network device, a first instruction from a terminal, where the first instruction is used to start or stop the sending of the AR video data by the network device, or the first instruction is used to regenerate the AR video data.
In the embodiment of the present invention, the data processing unit 42 in the processing apparatus for the AR video call may in practice be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), a Micro Control Unit (MCU), or a Field-Programmable Gate Array (FPGA); the first communication unit 41 in the processing apparatus for the AR video call may in practice be implemented by a communication module (including a basic communication suite, an operating system, a communication module, a standardized interface, a protocol, etc.) and a transceiver antenna; the storage unit 43 in the processing apparatus for the AR video call may in practice be implemented by a memory.
The embodiment of the invention also provides a processing apparatus for an AR video call. Fig. 6 is a schematic structural diagram of a second composition of the processing apparatus for an AR video call according to the embodiment of the present invention; as shown in fig. 6, the apparatus includes: a first sending unit 51 and a first receiving unit 52; wherein:
the first sending unit 51 is configured to send video data;
the first receiving unit 52 is configured to receive video data from a second terminal, and is further configured to receive AR video data from the network device, where the AR video data is generated by the network device adding AR data to the video data;
the video data is sent to the network device through the core network device, and is sent after the first terminal and the second terminal establish a VoLTE video connection.
In some optional embodiments of the present invention, the first receiving unit 52 is further configured to send, before receiving the AR video data from the network device, capability information to the network device through the core network device, where the capability information represents a relevant parameter that the first terminal supports an AR function.
In some optional embodiments of the present invention, the first sending unit 51 is further configured to send, through an SIP message, the capability information of the first terminal through the core network device; wherein the SIP message is used to establish a VoLTE video connection between the first terminal and the second terminal.
In some optional embodiments of the present invention, the first sending unit 51 is further configured to send, to the network device, AR data via the core network device.
In some optional embodiments of the present invention, the first sending unit 51 is further configured to send, to the network device through the core network device, first information, where the first information is used to indicate corresponding AR data.
In some optional embodiments of the present invention, the first sending unit 51 is further configured to send a first instruction to the network device through the core network device, where the first instruction is used to start or terminate sending, by the network device, the AR video data, or the first instruction is used to regenerate the AR video data.
In the embodiment of the present invention, the first sending unit 51 and the first receiving unit 52 in the processing apparatus for AR video call can be implemented by a communication module (including a basic communication suite, an operating system, a communication module, a standardized interface, a standardized protocol, etc.) and a transceiver antenna in practical application.
It should be noted that: in the processing apparatus for AR video call provided in the foregoing embodiment, when performing processing for AR video call, only the division of the above program modules is exemplified, and in practical applications, the processing may be distributed to be completed by different program modules according to needs, that is, the internal structure of the apparatus is divided into different program modules, so as to complete all or part of the processing described above. In addition, the processing apparatus for the AR video call and the processing method for the AR video call provided in the above embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiments and are not described herein again.
The embodiment of the present invention further provides a communication device, which may specifically be the network device or the terminal in the foregoing embodiments. Fig. 7 is a schematic hardware configuration diagram of a communication device according to an embodiment of the present invention. As shown in fig. 7, the communication device includes a memory 62, a processor 61, and a computer program stored in the memory 62 and executable on the processor 61. When the processor 61 executes the program, it implements the steps of the processing method for the AR video call applied to the network device according to the embodiment of the present invention; alternatively, when the processor 61 executes the program, it implements the steps of the processing method for the AR video call applied to the terminal according to the embodiment of the present invention.
The communication device may further include a network interface 63. Optionally, the various components in the communication device are coupled together by a bus system 64. It will be appreciated that the bus system 64 is used to enable communications among the components. The bus system 64 includes a power bus, a control bus, and a status signal bus in addition to the data bus. For clarity of illustration, however, the various buses are labeled as bus system 64 in FIG. 7.
It will be appreciated that the memory 62 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferromagnetic Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory can be Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 62 described in connection with the embodiments of the invention is intended to comprise, without being limited to, these and any other suitable types of memory.
The method disclosed in the above embodiments of the present invention may be applied to the processor 61, or implemented by the processor 61. The processor 61 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 61. The processor 61 described above may be a general purpose processor, a DSP, or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. Processor 61 may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed by the embodiment of the invention can be directly implemented by a hardware decoding processor, or can be implemented by combining hardware and software modules in the decoding processor. The software modules may be located in a storage medium located in the memory 62, and the processor 61 reads the information in the memory 62 and performs the steps of the aforementioned method in conjunction with its hardware.
In an exemplary embodiment, the communication device may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), FPGAs, general purpose processors, controllers, MCUs, microprocessors, or other electronic components for performing the aforementioned methods.
In an exemplary embodiment, the present invention further provides a computer readable storage medium, such as a memory 62 comprising a computer program, which is executable by a processor 61 of a communication device to perform the steps of the aforementioned method. The computer readable storage medium can be Memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface Memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
The computer-readable storage medium provided by the embodiment of the present invention stores thereon a computer program, which when executed by a processor, implements the steps of the processing method applied to the AR video call in the network device according to the embodiment of the present invention; or, the program, when executed by the processor, implements the steps of the processing method applied to the AR video call in the terminal according to the embodiment of the present invention.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only one logical function division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments may be completed by program instructions instructing the relevant hardware; the foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic disk or an optical disc, or various other media capable of storing program code.
Alternatively, if the above integrated unit of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic disk or an optical disc, or various other media that can store program code.
The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that can readily occur to a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (17)

1. A processing method for an AR video call, the method comprising:
a network device receives video data from a terminal through a core network device; the terminal is either of two terminals that have established a Voice over Long-Term Evolution (VoLTE) video connection; and
the network device adds AR data to the video data to generate AR video data, and sends the AR video data to the two terminals respectively.
2. The method of claim 1, wherein before the network device receives the video data from the terminal through the core network device, the method further comprises:
the network device exchanges information with the core network device, the exchanged information being used by the core network device to configure initial Filter Criteria (iFC) information, and the iFC information being used to trigger the network device to execute an AR video service function.
3. The method according to claim 1 or 2, wherein before the network device receives the video data from the terminal through the core network device, the method further comprises:
the network device receives, through the core network device, capability information of either of the two terminals, the capability information indicating parameters related to that terminal's support of the AR function.
4. The method of claim 3, wherein the receiving, by the network device through the core network device, of the capability information of either of the two terminals comprises:
the network device receives the capability information of either of the two terminals through the core network device in a Session Initiation Protocol (SIP) message; wherein the SIP message is used to establish the VoLTE video connection between the two terminals.
5. The method of claim 1, wherein before the network device adds the AR data to the video data, the method further comprises:
the network device receives the AR data from a first terminal through the core network device; the first terminal is either of the two terminals.
6. The method of claim 1, wherein before the network device adds the AR data to the video data, the method further comprises:
the network device receives first information from a first terminal through the core network device, mapping relationships between a plurality of pieces of information and AR data being stored in the network device; the first terminal is either of the two terminals; and
the network device determines the AR data corresponding to the first information.
7. The method according to claim 1 or 2, wherein the method further comprises:
the network device receives a first instruction from a terminal through the core network device, wherein the first instruction is used to instruct the network device to start or stop sending the AR video data, or the first instruction is used to instruct the network device to regenerate the AR video data.
8. A processing method for an AR video call, the method comprising:
a first terminal sends video data and receives video data from a second terminal; the video data is sent to a network device through a core network device; and the video data is sent after the first terminal and the second terminal establish a VoLTE video connection; and
the first terminal receives AR video data from the network device, the AR video data being generated by the network device by adding AR data to the video data.
9. The method of claim 8, wherein before the first terminal receives the AR video data from the network device, the method further comprises:
the first terminal sends capability information to the network device through the core network device, the capability information indicating parameters related to the first terminal's support of the AR function.
10. The method of claim 9, wherein the sending, by the first terminal, of the capability information to the network device through the core network device comprises:
the first terminal sends the capability information of the first terminal through the core network device in an SIP message; wherein the SIP message is used to establish the VoLTE video connection between the first terminal and the second terminal.
11. The method of claim 8, wherein before the first terminal receives the AR video data from the network device, the method further comprises:
the first terminal sends AR data to the network device through the core network device.
12. The method of claim 8, wherein before the first terminal receives the AR video data from the network device, the method further comprises:
the first terminal sends first information to the network device through the core network device, the first information being used to indicate corresponding AR data.
13. The method of claim 8, further comprising:
the first terminal sends a first instruction to the network device through the core network device, wherein the first instruction is used to instruct the network device to start or stop sending the AR video data, or the first instruction is used to instruct the network device to regenerate the AR video data.
14. An apparatus for processing an AR video call, the apparatus comprising a first communication unit and a data processing unit, wherein:
the first communication unit is configured to receive video data from a terminal through a core network device; the terminal is either of two terminals that have established a VoLTE video connection;
the data processing unit is configured to add AR data to the video data to generate AR video data; and
the first communication unit is further configured to send the AR video data generated by the data processing unit to the two terminals respectively.
15. An apparatus for processing an AR video call, the apparatus comprising a first sending unit and a first receiving unit, wherein:
the first sending unit is configured to send video data;
the first receiving unit is configured to receive video data from a second terminal, and is further configured to receive AR video data from a network device, the AR video data being generated by the network device by adding AR data to the video data; and
the video data is sent to the network device through a core network device, and the video data is sent after the first terminal and the second terminal establish a VoLTE video connection.
16. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7; alternatively, the program is adapted to carry out the steps of the method of any one of claims 8 to 13 when executed by a processor.
17. A communication device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method of any one of claims 1 to 7 are implemented when the program is executed by the processor; alternatively, the processor implements the steps of the method of any one of claims 8 to 13 when executing the program.
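Claims 3-4 and 9-10 above carry a terminal's AR capability information in the SIP message that establishes the VoLTE video connection. The claims do not fix a concrete encoding; the sketch below assumes, purely for illustration, a custom SDP attribute "a=ar-capability:" and shows how the network side might extract it.

# Illustrative only: "a=ar-capability:" is an assumed SDP attribute, not one
# defined by this publication or by any SIP/SDP standard.

def extract_ar_capability(sdp_body: str) -> dict:
    """Parse hypothetical AR-capability parameters out of an SDP body."""
    capability = {}
    for line in sdp_body.splitlines():
        if line.startswith("a=ar-capability:"):
            # Example: "a=ar-capability:overlay=1;max-resolution=1280x720"
            for item in line.split(":", 1)[1].split(";"):
                key, _, value = item.partition("=")
                capability[key.strip()] = value.strip()
    return capability


example_sdp = (
    "v=0\r\n"
    "m=video 49170 RTP/AVP 96\r\n"
    "a=ar-capability:overlay=1;max-resolution=1280x720\r\n"
)
print(extract_ar_capability(example_sdp))   # {'overlay': '1', 'max-resolution': '1280x720'}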
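Claims 6-7 and 12-13 describe the network device selecting AR data from stored mappings based on "first information" sent by a terminal, and reacting to a "first instruction" that starts, stops, or regenerates the AR video data. The sketch below is an assumed, simplified state holder; the mapping keys and instruction strings are illustrative only.

# The mapping keys ("sticker:crown", "scene:beach") and the instruction strings
# ("start", "stop", "regenerate") are assumptions made for this sketch.

AR_DATA_BY_INFO = {
    "sticker:crown": b"<crown-overlay>",
    "scene:beach": b"<beach-background>",
}


class ArSessionController:
    """Holds the AR state of one VoLTE video call on the network side."""

    def __init__(self) -> None:
        self.sending_ar = True        # whether AR video data is currently being sent
        self.current_ar_data = None   # AR data selected for this call, if any

    def on_first_information(self, first_info: str) -> None:
        """Look up the AR data that corresponds to the received first information."""
        self.current_ar_data = AR_DATA_BY_INFO.get(first_info)

    def on_first_instruction(self, instruction: str) -> None:
        """Start or stop sending AR video data, or trigger regeneration."""
        if instruction == "start":
            self.sending_ar = True
        elif instruction == "stop":
            self.sending_ar = False
        elif instruction == "regenerate":
            self.current_ar_data = None   # placeholder: cause the AR video data to be rebuilt


controller = ArSessionController()
controller.on_first_information("sticker:crown")
controller.on_first_instruction("stop")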
CN202011492958.9A 2020-12-16 2020-12-16 AR video call processing method and device and communication equipment Pending CN114710472A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011492958.9A CN114710472A (en) 2020-12-16 2020-12-16 AR video call processing method and device and communication equipment

Publications (1)

Publication Number Publication Date
CN114710472A true CN114710472A (en) 2022-07-05

Family

ID=82167384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011492958.9A Pending CN114710472A (en) 2020-12-16 2020-12-16 AR video call processing method and device and communication equipment

Country Status (1)

Country Link
CN (1) CN114710472A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105721821A (en) * 2016-04-01 2016-06-29 宇龙计算机通信科技(深圳)有限公司 Video calling method and device
CN106803921A (en) * 2017-03-20 2017-06-06 深圳市丰巨泰科电子有限公司 Instant audio/video communication means and device based on AR technologies
CN106911688A (en) * 2017-02-21 2017-06-30 中国联合网络通信集团有限公司 Voice business realizing method and device based on IMS
WO2018018698A1 (en) * 2016-07-29 2018-02-01 宇龙计算机通信科技(深圳)有限公司 Augmented reality information processing method, device and system
WO2019019927A1 (en) * 2017-07-27 2019-01-31 腾讯科技(深圳)有限公司 Video processing method, network device and storage medium
CN109636922A (en) * 2018-08-28 2019-04-16 亮风台(上海)信息科技有限公司 A kind of method and apparatus of the content of augmented reality for rendering
CN109740476A (en) * 2018-12-25 2019-05-10 北京琳云信息科技有限责任公司 Instant communication method, device and server
CN110266992A (en) * 2019-06-24 2019-09-20 苏芯物联技术(南京)有限公司 A kind of long-distance video interactive system and method based on augmented reality
CN110650081A (en) * 2019-08-22 2020-01-03 南京洁源电力科技发展有限公司 Virtual reality instant messaging method

Similar Documents

Publication Publication Date Title
US11336702B2 (en) Interaction information transmission method and apparatus
EP2107714A1 (en) Method and system for realizing multimedia color ring service and multimedia color inspire service
US9723032B2 (en) Data communication
US8351906B2 (en) Calling methods and systems for video phone
KR20050025365A (en) A method and a apparatus of transmitting multimedia signal with divide for mobile phone
US20210092164A1 (en) Method for internet protocol based multimedia subsystem registration and device, communication device, and storage medium
US8903905B2 (en) Method and system for automatically storing a communication session
US20230353673A1 (en) Call processing method, call processing apparatus, and related device
RU2665303C2 (en) Multimedia subsystem on basis of internet protocol (ims) and method and device for configuring service in ims
CN105704684B (en) Method, device, server and system for implementing color ring back tone
CN113966011A (en) Call establishment method, device, equipment and storage medium
CN111131753B (en) Conference processing method and conference management platform server
CN109842590A (en) A kind of processing method that surveying task, device and computer readable storage medium
CN102024310A (en) Alarm processing method of video monitoring system and video monitoring front-end equipment
CN114710472A (en) AR video call processing method and device and communication equipment
CN110650254A (en) Information transmission method, information reception method, terminal, and storage medium
CN112543298A (en) Video conference establishing method and system based on vehicle-mounted terminal, storage medium and vehicle-mounted terminal
CN107295493B (en) Information reporting method, device, terminal and computer readable storage medium
CN101800974A (en) Method for processing task request of mobile equipment and user agent application server
CN114979384B (en) Audio and video conference call method and device and electronic equipment
CN115842808A (en) Call interaction method, device, network node and storage medium
CN115348641A (en) Information acquisition method and device, communication equipment and storage medium
CN117176861A (en) Service realization method, device, network equipment, terminal and storage medium
CN115941761A (en) Method, equipment and storage medium for establishing communication and data channel
CN111447334A (en) Call method, device, phone terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination