WO2023016177A1 - Call processing method, apparatus and system (一种呼叫处理方法、装置及系统) - Google Patents

Call processing method, apparatus and system

Info

Publication number
WO2023016177A1
Authority
WO
WIPO (PCT)
Prior art keywords: media, endpoint, network element, unified media function
Prior art date
Application number
PCT/CN2022/105394
Other languages
English (en)
French (fr)
Inventor
林宏达
黄锴
李拓
欧文军
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP22855160.2A (published as EP4351103A4)
Publication of WO2023016177A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/1066 Session management
    • H04L65/1069 Session establishment or de-establishment
    • H04L65/10 Architectures or entities
    • H04L65/1016 IP multimedia subsystem [IMS]
    • H04L65/102 Gateways
    • H04L65/1033 Signalling gateways
    • H04L65/104 Signalling gateways in the network
    • H04L65/1101 Session protocols
    • H04L65/1104 Session initiation protocol [SIP]

Definitions

  • The present application relates to the field of communications technologies, and in particular to a call processing method, apparatus, and system.
  • The IP multimedia subsystem (IMS) is an IP-based network that provides a general network architecture for multimedia services.
  • The IMS network uses the session initiation protocol (SIP) as its main signaling protocol, so that operators can provide end-to-end all-IP multimedia services to users.
  • SIP uses the session description protocol (SDP) and the offer/answer (OA) mechanism to complete media negotiation among session participants, covering audio and video codecs, host addresses, network transport protocols, and so on.
  • The IMS network includes the following media processing network elements: the IMS access media gateway (IMS-AGW), the multimedia resource function processor (MRFP), the transition gateway (TrGW), and the IP multimedia media gateway (IM-MGW).
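The offer/answer negotiation described above can be illustrated with a minimal sketch. The codec names and RTP payload-type numbers below are common illustrative values, and the function is a simplification in the spirit of RFC 3264, not an API defined by this application:

```python
def answer_codecs(offered, supported):
    """Return the offered codecs that the answerer also supports,
    preserving the offerer's preference order (RFC 3264 style)."""
    return [codec for codec in offered if codec in supported]

# Caller's offer: audio codecs in preference order as (name, payload type).
offer = [("AMR-WB", 96), ("AMR", 97), ("PCMU", 0)]

# Codecs the called side supports.
supported = {("AMR", 97), ("PCMU", 0)}

answer = answer_codecs(offer, supported)
print(answer)  # [('AMR', 97), ('PCMU', 0)]
```

The answer is the intersection of both capability sets in the offerer's order; the same OA exchange also settles host addresses and transport parameters.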
  • The above media processing network elements can only be directly controlled by specific media control network elements; for example, the IMS-AGW is controlled by the proxy call session control function (P-CSCF) network element.
  • The media control network element completes media negotiation with other network elements or terminal devices through SIP signaling and SDP, and then controls the media processing network element to perform the actual media processing and media data transmission.
  • The basic call flow and the media negotiation flow of the IMS are realized through interaction between the calling side and the called side. Because IMS is a half-call model, the calling side and the called side logically involve two IMS-AGWs regardless of how the two sides access the IMS network, so two media contexts must be established. This causes multi-hop media forwarding (for example, real-time transport protocol (RTP) media is forwarded twice), which wastes resources. Moreover, the media processing capabilities provided by different media processing network elements in the IMS network are limited, and handling different media types may require several media processing network elements, resulting in media detours that cannot meet the low-latency requirements of future strongly interactive services such as extended reality (XR).
  • Embodiments of the present application provide a call processing method, apparatus, and system.
  • The call processing method is based on the IMS network architecture and is executed by a unified media function network element, which provides media access capabilities on the access side, reduces media detours, and lowers the media transmission delay, helping the IMS network architecture support delay-sensitive services such as extended reality.
  • An embodiment of the present application provides a call processing method that is executed by a unified media function network element.
  • The unified media function network element receives a first media connection request message from a first access media control network element and creates a first media endpoint for a first terminal device according to the first media connection request message.
  • The unified media function network element receives a second media connection request message from a second access media control network element and creates a second media endpoint for a second terminal device according to the second media connection request message.
  • The unified media function network element transmits media data between the first terminal device and the second terminal device through the first media endpoint and the second media endpoint.
  • The unified media function network element transmits the media data between the first terminal device and the second terminal device directly, without forwarding by other intermediate network elements, which reduces media detours, lowers the media transmission delay, and helps support delay-sensitive services such as extended reality on the IMS network architecture.
  • The unified media function network element creates a session media processing context according to the first media connection request message, and the session media processing context includes the first media endpoint and the second media endpoint.
  • The unified media function network element creates a first media stream according to the first media connection request message, and creates a second media stream according to the second media connection request message.
  • The session media processing context also includes the first media stream and the second media stream.
  • The unified media function network element sends the resource identifier of the first media endpoint to the first access media control network element, and sends the resource identifier of the second media endpoint to the second access media control network element.
  • The unified media function network element can expose a unified media resource identifier (URL) that is accessible to all network elements that need it within the IMS trust domain, thereby enabling media resource sharing.
  • The unified media function network element associates the first media endpoint with the second media endpoint, so that media data can be transmitted between the first terminal device and the second terminal device through the two endpoints.
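The endpoint creation and association steps above can be sketched as a small data model. All class and method names here are illustrative assumptions for exposition, not the application's actual interface:

```python
# Sketch of a session media processing context holding two media
# endpoints that are associated so media flows directly between
# the two terminals through the unified media function.

class Endpoint:
    def __init__(self, owner):
        self.owner = owner   # which terminal this endpoint serves
        self.peer = None     # associated endpoint, if any

class Context:
    """Session media processing context (Context) holding Endpoints."""
    def __init__(self):
        self.endpoints = []

    def create_endpoint(self, owner):
        ep = Endpoint(owner)
        self.endpoints.append(ep)
        return ep

    def associate(self, a, b):
        a.peer, b.peer = b, a

    def relay(self, src, payload):
        """Forward media received on `src` to its associated endpoint."""
        if src.peer is None:
            raise RuntimeError("endpoint not associated")
        return (src.peer.owner, payload)

ctx = Context()
ep1 = ctx.create_endpoint("terminal-1")   # first media connection request
ep2 = ctx.create_endpoint("terminal-2")   # second media connection request
ctx.associate(ep1, ep2)

# RTP from terminal 1 reaches terminal 2 in a single forwarding hop.
assert ctx.relay(ep1, "rtp-frame") == ("terminal-2", "rtp-frame")
```

Because both endpoints live in one context on one network element, no second media context (and no second forwarding hop) is needed.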
  • The unified media function network element receives an interworking media connection request message from an interworking border control function network element, and creates an interworking media endpoint for an interworking device according to the interworking media connection request message.
  • The unified media function network element transmits media data between the first terminal device and the interworking device through the first media endpoint and the interworking media endpoint, or transmits media data between the second terminal device and the interworking device through the second media endpoint and the interworking media endpoint.
  • In this way, the unified media function network element can be deployed on the interworking side to provide interworking-side media capabilities and reduce media detours.
  • The unified media function network element receives a media redirection indication message from an application server, and creates a media redirection endpoint for the resource or device indicated by the media redirection indication message.
  • The unified media function network element transmits media data between the first terminal device and the resource or device indicated by the media redirection indication message through the first media endpoint and the media redirection endpoint, and/or transmits media data between the second terminal device and that resource or device through the second media endpoint and the media redirection endpoint.
  • The unified media function network element can provide a unified media resource identifier (URL) and a non-OA media control interface (including but not limited to a media gateway control protocol interface, a service interface, and the like), thereby avoiding media renegotiation, decoupling business logic from the OA state, and avoiding possible service conflicts and media collisions.
  • The resource indicated by the media redirection indication message is a playback resource.
  • The unified media function network element plays an announcement to the first terminal device through the first media endpoint and the media redirection endpoint, or plays an announcement to the second terminal device through the second media endpoint and the media redirection endpoint.
  • The unified media function network element receives a playback stop indication message from the application server and, according to the playback stop indication message, disassociates the first media endpoint from the media redirection endpoint, or disassociates the second media endpoint from the media redirection endpoint.
  • The unified media function network element directly releases the playback resource when playback is completed, without modifying the user's media connection information, which helps avoid media renegotiation.
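The playback flow above amounts to attaching and later detaching a media redirection endpoint, leaving the terminals' own media information untouched. A minimal sketch, with all names being illustrative assumptions:

```python
# Playback sketch: the application server attaches an announcement
# endpoint to terminal 1's media endpoint; on the playback stop
# indication, the unified media function simply disassociates it
# and restores the call association - no SDP renegotiation occurs.

class Endpoint:
    def __init__(self, name):
        self.name = name
        self.peer = None

def associate(a, b):
    a.peer, b.peer = b, a

def disassociate(a, b):
    if a.peer is b:
        a.peer = b.peer = None

ep1 = Endpoint("terminal-1")
ep2 = Endpoint("terminal-2")
associate(ep1, ep2)

# Application server triggers playback toward terminal 1.
tone = Endpoint("announcement")
associate(ep1, tone)            # terminal 1 now hears the announcement
assert ep1.peer.name == "announcement"

# Playback stop indication: release only the playback resource.
disassociate(ep1, tone)
associate(ep1, ep2)             # original call media restored
assert ep1.peer.name == "terminal-2" and ep2.peer.name == "terminal-1"
```

The terminals' endpoints themselves are never recreated or renegotiated; only the association on the unified media function changes.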
  • The unified media function network element creates a third media endpoint for a third terminal device according to the media redirection indication message.
  • When the application server triggers a media redirection service such as call forwarding, the unified media function network element can create a new media endpoint according to the control information; the whole process does not need to update the media information of the first terminal device or the second terminal device, which helps avoid media renegotiation.
  • The unified media function network element deletes the first media endpoint or the second media endpoint according to a media connection cancel message.
  • The unified media function network element can thus refresh media endpoints without updating the media information of the first terminal device or the second terminal device.
  • When the first media endpoint is to be associated with the media redirection endpoint, the unified media function network element changes the association relationship between the first media endpoint and the second media endpoint to an association between the first media endpoint and the media redirection endpoint.
  • When the media information needs to be updated, the application server only needs to modify the media connection on the unified media function network element directly through the interface, without going through a complicated media renegotiation process, thereby avoiding possible service conflicts and media collisions.
  • An embodiment of the present application provides another call processing method, executed by a first access media control network element.
  • The first access media control network element sends a first media connection request message to the unified media function network element, where the first media connection request message is used to request creation of a first media endpoint for the first terminal device.
  • The first access media control network element receives a first media connection response from the unified media function network element, where the first media connection response includes the resource identifier of the first media endpoint.
  • the unified media function network element can send the unified media resource identifier to the control network element in the IMS trust domain, which is beneficial to the sharing of media resources in the IMS network.
  • The first access media control network element sends a session request message to the second access media control network element, where the session request message includes the resource identifier and media information of the first media endpoint.
  • the first access media control network element receives a session response message from the second access media control network element, where the session response message includes the resource identifier and media information of the second media endpoint created for the second terminal device.
  • the unified media resource identifier can be transmitted in the IMS trust domain, which is beneficial to the sharing of media resources in the IMS network.
  • The first access media control network element sends a first media update indication message to the unified media function network element.
  • The first media update indication message is used to instruct the unified media function network element to change the association relationship between the first media endpoint and the second media endpoint to an association between the first media endpoint and the media redirection endpoint.
  • An embodiment of the present application provides another call processing method, executed by a second access media control network element.
  • The second access media control network element sends a second media connection request message to the unified media function network element, where the second media connection request message is used to request creation of a second media endpoint for the second terminal device.
  • The second access media control network element receives a second media connection response from the unified media function network element, where the second media connection response includes the resource identifier of the second media endpoint.
  • the unified media function network element can send the unified media resource identifier to the control network element in the IMS trust domain, which is beneficial to the sharing of media resources in the IMS network.
  • the second access media control network element receives a session request message from the first access media control network element, where the session request message includes the resource identifier and media information of the first media endpoint.
  • the second access media control network element sends an association instruction message to the unified media function network element, where the association instruction message is used to instruct the unified media function network element to associate the first media endpoint with the second media endpoint.
  • the second access media control network element sends a session response message to the first access media control network element, where the session response message includes the resource identifier and media information of the second media endpoint created for the second terminal device.
  • The second access media control network element sends a second media update indication message to the unified media function network element.
  • The second media update indication message is used to instruct the unified media function network element to change the association relationship between the second media endpoint and the first media endpoint to an association between the second media endpoint and the media redirection endpoint.
  • the embodiment of the present application provides a call processing device, where the device includes a communication module and a processing module.
  • the communication module is used for receiving a first media connection request message from the first access media control network element, and the first media connection request message is used for requesting to create a first media endpoint for the first terminal device.
  • the processing module is configured to create a first media endpoint for the first terminal device according to the first media connection request message.
  • the communication module is further configured to receive a second media connection request message from the second access media control network element, where the second media connection request message is used to request to create a second media endpoint for the second terminal device.
  • the processing module is further configured to create a second media endpoint for the second terminal device according to the second media connection request message.
  • the communication module is also used for transmitting media data between the first terminal device and the second terminal device through the first media endpoint and the second media endpoint.
  • the embodiment of the present application provides another call processing device, which includes a communication module and a processing module.
  • the communication module is used to send a first media connection request message to the unified media function network element, and the first media connection request message is used to request to create a first media endpoint for the first terminal device.
  • the communication module is further configured to receive a first media connection response from the unified media function network element, where the first media connection response includes the resource identifier of the first media endpoint.
  • the embodiment of the present application provides another call processing device, which includes a communication module and a processing module.
  • the communication module is used to send a second media connection request message to the unified media function network element, and the second media connection request message is used to request to create a second media endpoint for the second terminal device.
  • the communication module is also used to receive a second media connection response from the unified media function network element, where the second media connection response includes the resource identifier of the second media endpoint.
  • An embodiment of the present application provides another call processing apparatus.
  • The call processing apparatus may be a device, or a chip or circuit disposed in a device.
  • The call processing apparatus includes units and/or modules for executing the call processing method provided in the above first to third aspects and any possible design thereof, and therefore achieves the beneficial effects of the call processing method provided in the first to third aspects and any possible design thereof.
  • Embodiments of the present application provide a computer-readable storage medium that includes a program or instructions; when the program or instructions are run on a computer, the computer executes the method in any one of the first to third aspects and any possible implementation thereof.
  • An embodiment of the present application provides a computer program or computer program product including code or instructions; when the code or instructions are run on a computer, the computer executes the method in any one of the first to third aspects and any possible implementation thereof.
  • An embodiment of the present application provides a chip or chip system, including at least one processor and an interface, where the interface and the at least one processor are interconnected through a line, and the at least one processor is used to run computer programs or instructions to perform the method in any one of the first to third aspects and any possible implementation thereof.
  • the interface in the chip may be an input/output interface, a pin or a circuit, and the like.
  • The chip system in the above aspect may be a system on chip (SoC) or a baseband chip, where the baseband chip may include a processor, a channel encoder, a digital signal processor, a modem, and an interface module.
  • the chip or the chip system described above in the present application further includes at least one memory, and instructions are stored in the at least one memory.
  • The memory may be a storage unit inside the chip, such as a register or a cache, or a storage unit external to the chip, such as a read-only memory or a random access memory.
  • In the above aspects, the session media processing context is called a Context, the media endpoint is called an Endpoint (or, alternatively, a Connector), and the media stream is called a Stream.
  • FIG. 1 is a schematic diagram of an IMS network architecture;
  • FIG. 2 is a schematic diagram of a network architecture provided by an embodiment of the present application;
  • FIG. 3 is a schematic flow diagram of an IMS basic call and media negotiation;
  • FIG. 4 is a schematic diagram of an IMS basic call flow and the media connection topology after the media negotiation process is completed;
  • FIG. 5 is a schematic flow diagram of IMS playback;
  • FIG. 6 is a schematic flow diagram of IMS call forwarding;
  • FIG. 7 is a schematic diagram of a media collision;
  • FIG. 8 is a schematic diagram of the media resource model defined by the H.248 protocol;
  • FIG. 9 is a schematic flow diagram of a call processing method provided by an embodiment of the present application;
  • FIG. 10 is a schematic diagram of a session media processing context provided by an embodiment of the present application;
  • FIG. 11 is a schematic flow diagram of implementing media redirection after the call processing method provided by an embodiment of the present application executes the basic call processing flow;
  • FIG. 12 is a schematic flow diagram of the application of the call processing method provided by an embodiment of the present application in an interworking scenario;
  • FIG. 13 is a schematic flow diagram of the application of the call processing method provided by an embodiment of the present application in the scenario of establishing a basic audio call between calling and called parties;
  • FIG. 14 is a schematic diagram of a media connection topology of an audio call provided by an embodiment of the present application;
  • FIG. 15 is a schematic diagram of a media connection topology of a video call provided by an embodiment of the present application;
  • FIG. 16 is a schematic flow diagram of the application of the call processing method provided by an embodiment of the present application in a playback service scenario;
  • FIG. 17 is a schematic diagram of a media connection topology for playing a video color ring back tone (CRBT) provided by an embodiment of the present application;
  • FIG. 18 is a schematic flow diagram of the application of the call processing method provided by an embodiment of the present application in a call forwarding service scenario;
  • FIG. 19 is a schematic diagram of a media connection topology of a call forwarding service provided by an embodiment of the present application;
  • FIG. 20 is a schematic diagram of a call processing apparatus provided by an embodiment of the present application;
  • FIG. 21 is a schematic diagram of another call processing apparatus provided by an embodiment of the present application;
  • FIG. 22 is a schematic diagram of another call processing apparatus provided by an embodiment of the present application;
  • FIG. 23 is a schematic diagram of another call processing apparatus provided by an embodiment of the present application;
  • FIG. 24 is a schematic diagram of another call processing apparatus provided by an embodiment of the present application;
  • FIG. 25 is a schematic diagram of another call processing apparatus provided by an embodiment of the present application.
  • Words such as "exemplary" or "for example" are used to present examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present application shall not be interpreted as more preferred or advantageous than other embodiments or designs. Rather, such words are intended to present related concepts in a concrete manner.
  • Determining B according to A does not mean determining B only according to A; B may also be determined according to A and/or other information.
  • The IP multimedia subsystem (IMS) is an IP-based network that provides a general network architecture for multimedia services.
  • The IMS network uses the session initiation protocol (SIP) as its main signaling protocol, so that operators can provide end-to-end all-IP multimedia services to users.
  • SIP uses the session description protocol (SDP) and the offer/answer (OA) mechanism to complete media negotiation among session participants, covering audio and video codecs, host addresses, network transport protocols, and so on.
  • FIG. 1 is a schematic diagram of an IMS network architecture.
  • the IMS network architecture can be divided into access side, center side and interworking side.
  • FIG. 1 only shows network elements related to media processing in the IMS network.
  • Media processing network elements can only be directly controlled by specific media control network elements.
  • The media control network element completes media negotiation with other network elements or terminal devices through SIP signaling and SDP, and then controls the media processing network element to perform the actual media processing and media data transmission.
  • The H.248 gateway control protocol is usually used between the media control network element and the media processing network element to manage the media processing nodes and their connection relationships.
  • The terminal equipment involved in the embodiments of the present application may also be referred to as a terminal, and may be a device with a wireless transceiver function; it may be deployed on land (indoors or outdoors, handheld or vehicle-mounted), on water (such as on ships), or in the air (such as on aircraft, balloons, or satellites).
  • the terminal device may be user equipment (user equipment, UE), where the UE includes a handheld device, a vehicle-mounted device, a wearable device, or a computing device with a wireless communication function.
  • the UE may be a mobile phone (mobile phone), a tablet computer or a computer with a wireless transceiver function.
  • The terminal device may also be a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, or a wireless terminal in industrial control, unmanned driving, telemedicine, a smart grid, a smart city, a smart home, and so on.
  • the device for realizing the function of the terminal may be a terminal; it may also be a device capable of supporting the terminal to realize the function, such as a chip system, and the device may be installed in the terminal.
  • the system-on-a-chip may be composed of chips, or may include chips and other discrete devices.
  • The technical solutions provided in the embodiments of the present application are described by taking as an example the case where the device for realizing the functions of the terminal is the terminal itself.
  • The access side includes the IMS access media gateway (IMS-AGW), the proxy call session control function (P-CSCF) network element, and so on.
  • IMS-AGW is a media processing network element on the access side, which is used to implement media access proxy and forwarding of terminal equipment, network address translation (network address translation, NAT) traversal, audio codec conversion and other functions.
  • P-CSCF is used to control the IMS-AGW and complete media processing on the access side.
  • The center side includes the multimedia resource function processor (MRFP), the multimedia resource function controller (MRFC), the home subscriber server (HSS), the application server (AS), the interrogating call session control function (I-CSCF) network element, the serving call session control function (S-CSCF) network element, and so on.
  • the MRFP is a media processing network element on the center side, which is used to provide functions such as playing announcements and collecting numbers, and voice conferencing.
  • MRFC is used to control MRFP to complete functions such as announcement and number collection, voice conference, etc.
  • the interworking side includes network elements of the interconnection border control function (IBCF), transition gateway (transition gateway, TrGW), IP multimedia gateway (IP multimedia media gateway, IM-MGW), media gateway control function (media gateway control function) , MGCF) network elements, etc.
  • the TrGW and the IM-MGW are media processing network elements on the interworking side, and are used to implement media interworking between the IMS network and other networks.
  • the TrGW is used to realize intercommunication between the IMS network and other IP network media planes, including media NAT, quality of service (QoS) control, codec conversion, etc.
  • the IM-MGW is used to realize intercommunication of the media plane between the IMS network and other non-IP networks (such as the public switched telephone network (PSTN) or the public land mobile network (PLMN)).
  • the IBCF is used to control the TrGW to complete media interworking control with other IP domains.
  • the MGCF is used to control the IM-MGW to complete the media interworking control with the non-IP domain.
  • the basic call flow and the media negotiation flow in the IMS network are realized by the interaction between the calling side and the called side.
  • since IMS is a half-call model, regardless of whether the calling terminal device and the called terminal device access the IMS network through the same IMS-AGW, there are logically two IMS-AGWs, one on the calling side and one on the called side, and each needs to establish its own media context. For example, the calling IMS-AGW creates a media context for receiving real-time transport protocol (RTP) media from the calling terminal device and forwarding the RTP media.
  • the called IMS-AGW creates another media context for receiving the RTP media forwarded by the calling IMS-AGW, and forwarding the RTP media to the called terminal device. That is to say, when the RTP media of the calling terminal device is transmitted to the called terminal device through the IMS network, at least two hops are required, which leads to multi-hop forwarding of the media and wastes resources.
  • both the calling side and the called side may trigger media redirection, resulting in media collision.
  • for example, the calling side triggers announcement playing while the called side triggers forwarding, and each uses an UPDATE message to complete its media redirection.
  • different media processing network elements in the IMS network provide limited media processing capabilities, and processing services of different media types may need to pass through multiple media processing network elements.
  • the media processing network element on the access side of the IMS provides limited functions. Functions such as announcement playing and audio/video conferencing need to be provided by the media processing network element on the center side, resulting in media detours and large service delays, which cannot meet the low-latency requirements of strongly interactive services such as extended reality (XR).
  • an IMS multimedia telephony application server (MMTEL AS) is used to provide traditional telecommunication services
  • a video ring back tone AS is used to provide video ring back tone services.
  • when the MMTEL AS and the video ring back tone AS are connected in series, their media resources cannot be shared among the multiple ASs. Each AS must apply for independent media resources, resulting in wasted media resources, service conflicts, and media collisions.
  • an embodiment of the present application provides a call processing method, which is based on an IMS network architecture and executed by a newly added unified media function (UMF) network element.
  • UMF provides media access capabilities through the access side, reduces media detours, and reduces media service transmission delays, which is conducive to supporting the needs of delay-sensitive services such as extended reality for IMS network architecture.
  • the call processing method provided in the embodiment of the present application is applied to a network architecture as shown in FIG. 2 .
  • the network architecture shown in FIG. 2 includes the UMF newly added in the embodiment of the present application.
  • UMF can also provide the media processing functions required by future new services, such as data channel services.
  • UMF can be deployed on the access side, center side or interworking side at the same time, and all network elements in the IMS trust domain can control UMF.
  • UMF is used to provide rich media processing capabilities, reduce media detours, and meet the demands of low-latency sensitive services such as XR in the future.
  • the network architecture shown in Figure 2 also includes AS, MGCF, IBCF, I-CSCF, S-CSCF, P-CSCF, HSS, and terminal equipment.
  • each network element or device has functions the same as or similar to those of the corresponding network elements or devices in the existing IMS network, and future media capability evolution is also supported.
  • the media function control in the embodiment of this application (for example, when the P-CSCF controls the UMF to perform basic call establishment) adopts a non-OA mechanism, and controls the UMF through a media gateway control protocol (such as the H.248 protocol) interface or a service interface.
  • FIG. 2 focuses on media services, and does not show network elements related to non-media services.
  • CSCF generally does not participate in media services, so the control relationship between CSCF and UMF is not shown in Figure 2, but this embodiment of the application does not limit it.
  • FIG. 3 is a schematic flow diagram of an IMS basic call and media negotiation.
  • the basic call and media negotiation process is realized by interaction among terminal equipment, IMS-AGW, P-CSCF, and CSCF/AS. It should be noted that since the IMS is a half-call model, regardless of whether the calling party and the called party access through the same physical IMS network, their call processing is completed by two different logical entities.
  • the logical mobile originated (MO) side in Figure 3 includes the calling terminal equipment, calling IMS-AGW, calling P-CSCF, and calling CSCF/AS, and the mobile terminated (MT) side includes the called terminal device, called IMS-AGW, called P-CSCF, and called CSCF/AS.
  • the basic call and media negotiation process shown in Figure 3 includes:
  • the calling terminal device carries its own media information (SDP) through the initial INVITE message, and the media information is SDP Offer.
  • the calling P-CSCF receives the INVITE message, records and stores the media information of the calling terminal equipment. And, the calling P-CSCF instructs the calling IMS-AGW to allocate the output media endpoint O2, and the corresponding local media information is SDP(O2).
  • the calling P-CSCF replaces the calling media information with SDP (O2), and then transmits it to subsequent network elements through SIP signaling. For example, the SIP signaling is sent by the calling P-CSCF to the calling CSCF/AS, then sent by the calling CSCF/AS to the called CSCF/AS, and then sent by the called CSCF/AS to the called P-CSCF.
  • the called P-CSCF receives the INVITE message, records and stores the calling side media information. And, the called P-CSCF instructs the called IMS-AGW to allocate the output media endpoint T2, and the corresponding local media information is SDP(T2). The called P-CSCF then sends the SDP (T2) to the called terminal equipment.
  • the called terminal device carries media information through the first reliable 1XX (183 or 180) response message, and the media information is SDP Answer.
  • the called P-CSCF receives the 1XX response message, and updates the remote media information of T2 with the media information of the called terminal device. And, the called P-CSCF instructs the called IMS-AGW to allocate the input media endpoint T1, the corresponding local media information is SDP(T1), and the corresponding remote media information is the previously saved calling media information SDP(O2).
  • the called P-CSCF instructs the called IMS-AGW to associate T1 and T2, and replaces the called SDP with the SDP (T1), and then transmits it to the subsequent network element through SIP signaling.
  • the SIP signaling is sent by the called P-CSCF to the called CSCF/AS, then sent by the called CSCF/AS to the calling CSCF/AS, and then sent by the calling CSCF/AS to the calling P-CSCF.
  • the called terminal device may carry the SDP Answer through the 200 (INVITE) response.
  • the calling P-CSCF receives the called SDP Answer (T1), and updates the remote media information of O2 to SDP (T1).
  • the calling P-CSCF instructs the calling IMS-AGW to allocate the input media endpoint O1, the corresponding local media information is SDP(O1), and the corresponding remote media information is the previously saved media information of the calling terminal device.
  • the calling P-CSCF instructs the calling IMS-AGW to associate O1 and O2, and replaces the calling SDP with SDP (O1), and then transmits it to the calling terminal device through SIP signaling.
  • FIG. 4 is a schematic diagram of a basic IMS call flow and a media connection topology after the media negotiation flow is completed.
  • the calling IMS-AGW on the access side creates two media endpoints O1 and O2, and the called IMS-AGW on the access side creates two media endpoints T1 and T2. It can be seen that when the media data of the calling terminal device is transmitted to the called terminal device, at least two hops are required.
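  • the two-hop topology above can be sketched as a toy model (illustrative only; the class and function names are not from the patent, and each gateway context counts as one forwarding hop):

```python
# Toy model of the Figure 4 media topology: each IMS-AGW holds two media
# endpoints, and media from the calling device crosses both gateways.

class MediaContext:
    """One gateway-internal media context linking an inbound and an outbound endpoint."""
    def __init__(self, gateway, inbound, outbound):
        self.gateway = gateway
        self.inbound = inbound    # endpoint receiving the media
        self.outbound = outbound  # endpoint forwarding the media

def media_path(contexts):
    """Return the ordered endpoint traversal and the number of gateway hops."""
    path = []
    for ctx in contexts:
        path.extend([ctx.inbound, ctx.outbound])
    return path, len(contexts)

calling_agw = MediaContext("calling IMS-AGW", "O1", "O2")
called_agw = MediaContext("called IMS-AGW", "T1", "T2")
path, hops = media_path([calling_agw, called_agw])
print(path)  # ['O1', 'O2', 'T1', 'T2']
print(hops)  # 2
```

This makes the resource cost concrete: every extra context on the path adds one more forwarding hop for the RTP media.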
  • the media negotiation can be completed through the following two processes:
  • Method 1: the called side carries the SDP Offer in the first reliable 18X (183 or 180) response message, and the calling party replies with the SDP Answer through the PRACK message to complete the media negotiation.
  • Method 2: the called party carries the SDP Offer in the 200 (INVITE) response message, and the calling party replies with the SDP Answer through the ACK message to complete the media negotiation.
  • the calling side and the called side will perform call release.
  • the specific implementation method refers to the description in the existing protocol standard, and will not be repeated here.
  • media renegotiation: in addition to the basic call, media services in the IMS network include a large number of supplementary services, which require media renegotiation after a media negotiation has been completed. Typical media renegotiation scenarios include:
  • announcement playing: for services such as CRBT that continue after the announcement, one party in the session needs to be connected to the multimedia resource function (MRF) to play the announcement. After the announcement is completed, media renegotiation is required to restore the session's media connection relationship.
  • media renegotiation services such as announcement and forwarding in the IMS network are usually triggered by services on the AS.
  • the AS cannot directly control the media processing network elements. Therefore, under the existing IMS network architecture, the AS completes the media negotiation (or media renegotiation) with the media control network element (such as the MRFC) through SIP signaling, and updates the media information on the media processing network element (such as the MRFP), thereby enabling media redirection.
  • Fig. 5 is a schematic flow chart of an IMS announcement.
  • the announcement process is described by taking the announcement process of the CRBT service as an example.
  • MRFC and MRFP are combined in Fig. 5, collectively referred to as MRF, and the internal interactions between MRFC and MRFP are not listed in the figure.
  • the basic call and media negotiation process between the calling side and the called side is omitted in the playback process shown in FIG. 5 (see FIG. 4 for the specific process).
  • the equipment and network elements on the calling side in Figure 5 are not described.
  • the calling side shown in Figure 5 includes the calling terminal equipment, IMS-AGW, P-CSCF, AS, and other equipment and network elements.
  • the playback process shown in Figure 5 includes:
  • the AS sends an UPDATE message to the calling side, and the UPDATE message carries the MRF media information.
  • the remote media information on the calling side is updated to the MRF, the media negotiation between the calling side and the MRF is completed, and a media connection is established.
  • the AS needs to re-establish the media connection between the calling side and the called side, and the media redirection can be completed through the following two processes:
  • media redirection is completed through the UPDATE message before the call is established: the media redirection is completed through the UPDATE/200 messages between the AS and the calling side, the remote media information on the calling side is updated to the called SDP, and then the call between the calling and called parties is established.
  • the media redirection is completed through the reINVITE message: the AS first completes the call establishment between the calling party and the called party. Then the AS sends a reINVITE message (without SDP) to the calling party (or the called party), and the calling side and the called side complete the media redirection of the calling and called parties through 200/ACK messages.
  • Fig. 6 is a schematic flow chart of IMS forwarding.
  • the forwarding process involves the calling terminal device, the called terminal device, and the forwarding terminal device. It should be noted that according to the IMS half-call model, the called terminal device and the forwarding terminal device should correspond to different IMS logical network elements; for simplicity of description, no distinction is made in FIG. 6 (that is, it is assumed that the called terminal device and the forwarding terminal device use the same IMS-AGW, P-CSCF, and AS). It should also be noted that the forwarding process shown in Figure 6 only covers the scenario where forwarding occurs after the calling and called parties have completed media negotiation, which is not limited here.
  • the forwarding process shown in Figure 6 includes:
  • after the AS triggers forwarding, it sends an initial INVITE message to the forwarding terminal device through the P-CSCF, and the initial INVITE message carries the calling side media information (SDP Offer).
  • the forwarding terminal device sends a 1XX (usually 183 or 180) response message to the AS through the P-CSCF, and the 1XX response message carries the SDP Answer.
  • the media redirection can be completed through the following two processes:
  • the media redirection is completed through the reINVITE message.
  • media renegotiation involves a complex end-to-end signaling process.
  • the business AS must deeply participate in the SIP renegotiation OA process, perceive the OA state, understand the SDP content, and decide to adopt different processes according to the OA state.
  • the renegotiation OA process is usually irrelevant to business logic, which leads to the coupling of business logic and media negotiation.
  • the unilateral renegotiation initiated by the service AS may cause media renegotiation oscillations. For example, in an announcement scenario, after the announcement is over, the AS updates the remote media information on the calling side to the called SDP through an UPDATE message, and the calling party replies with an SDP Answer in 200 (UPDATE).
  • this SDP Answer should be the same as the SDP carried by the calling side in the initial INVITE message; otherwise, the AS must complete the unilateral negotiation with the called side again through an UPDATE/200 message.
  • if the SDP Answers returned by the caller and the callee in the 200 (UPDATE) messages do not converge, a media renegotiation oscillation will occur, the media negotiation between the caller and the callee cannot be completed, and the call cannot be established normally.
  • FIG. 7 is a schematic diagram of media collision.
  • the calling side and the called side both trigger media renegotiation and simultaneously send UPDATE (or reINVITE) messages to the opposite end, resulting in media collision.
  • FIG. 8 is a media resource model defined by the H.248 protocol.
  • the media resource model includes terminals and contexts.
  • the terminal is a logical entity, which is used to send and receive one or more media, and the media type may be a narrowband time slot, an analog line, or an IP-based RTP stream.
  • the terminal can send video streams, audio streams, message streams, etc., as shown in Figure 8.
  • a terminal belongs to one and only one context.
  • the context represents the connection relationship between terminals, and it describes the topology relationship between terminals and the parameters of media mixing/exchange.
  • the H.248 protocol also defines a corresponding protocol interface, and the control network element manages internal media resources and connection relationships through the protocol interface.
  • the protocol interface includes but is not limited to: the Add command (used to add a terminal to a context), the Modify command (used to modify a terminal's attributes, events, etc.), the Subtract command (used to delete a terminal from a context and return the terminal's statistical state), the Move command (used to move a terminal from one context to another), and the Notify command (used to notify the control network element of detected events).
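  • as a rough illustration of how a control network element drives this model, the sketch below mimics the Add/Modify/Subtract/Move commands against an in-memory context; this is a simplified assumption for illustration, not a real H.248 protocol stack:

```python
# Simplified in-memory model of the H.248 context/terminal resource model.

class Context:
    def __init__(self, context_id):
        self.context_id = context_id
        self.terminations = {}            # terminal id -> properties

    def add(self, term_id, properties=None):   # H.248 Add: add a terminal to the context
        self.terminations[term_id] = dict(properties or {})

    def modify(self, term_id, properties):     # H.248 Modify: change a terminal's attributes
        self.terminations[term_id].update(properties)

    def subtract(self, term_id):               # H.248 Subtract: remove and return final state
        return self.terminations.pop(term_id)

    def move(self, term_id, target):           # H.248 Move: shift a terminal to another context
        target.terminations[term_id] = self.terminations.pop(term_id)

c1, c2 = Context("C1"), Context("C2")
c1.add("T1", {"stream": "audio"})
c1.modify("T1", {"codec": "AMR"})
c1.move("T1", c2)
print(sorted(c2.terminations))  # ['T1']
```

The Notify direction (gateway reporting events back to the controller) is omitted here since it flows the other way.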
  • take the Kurento Media Server (KMS) as an example: the application server (AS), acting as the media control network element in the IMS network, controls the KMS through the Kurento protocol.
  • Kurento protocol adopts JSON-RPC interface based on websocket. Similar to the H.248 protocol, KMS also provides a corresponding media resource model, using the Kurento protocol to describe the management of media resources and their connection relationships.
  • the KMS media resource model adopts object-oriented thinking and abstracts media resources into various objects.
  • the most important media resource objects are Media Pipeline and Media Elements.
  • KMS adopts the pipeline architecture to describe media services, and the Media Pipeline object represents the media processing pipeline, which is used to describe the connection topology relationship between media processing objects, which is equivalent to the context in the H.248 protocol.
  • Media Elements represent media processing component objects. According to the functions they provide, media processing components are divided into Endpoints, Filters, Hubs, etc. Endpoints mainly provide the sending and receiving of media streams, which is equivalent to the terminals in H.248. Filters mainly provide some media processing capabilities, such as audio and video codecs.
  • KMS can provide rich media processing capabilities through Filters and has good scalability. Similar to the H.248 protocol, Kurento protocol describes the management of media resources (including creation, modification, deletion), event subscription, and connection topology of media resources. The format is JSON (or other formats can also be used).
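  • a Kurento-style JSON-RPC request over the WebSocket interface can be sketched as follows; the `create` method and the MediaPipeline/WebRtcEndpoint object names follow the Kurento model described above, while the exact parameter layout and the placeholder pipeline id are simplified assumptions:

```python
# Sketch of Kurento-style JSON-RPC 2.0 requests for creating media resources.
import json

def jsonrpc_request(req_id, method, params):
    """Serialize one JSON-RPC 2.0 request as it would travel over the websocket."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

# Create a Media Pipeline, then an Endpoint inside it.
create_pipeline = jsonrpc_request(1, "create", {"type": "MediaPipeline"})
create_endpoint = jsonrpc_request(2, "create", {
    "type": "WebRtcEndpoint",
    "constructorParams": {"mediaPipeline": "pipeline-id-from-reply-1"},
})
print(json.loads(create_pipeline)["method"])  # create
```

The server's reply to the first request would carry the pipeline's identifier, which the second request references when constructing the endpoint.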
  • FIG. 9 is a schematic flowchart of a call processing method provided by an embodiment of the present application.
  • the call processing method is executed by the unified media function network element UMF, including the following steps:
  • a unified media function network element receives a first media connection request message from a first access media control network element.
  • the first access media control network element in the embodiment of the present application is the access media control network element of the calling side (for example, the calling P-CSCF).
  • the first media connection request message is used to request to create a first media endpoint for the first terminal device.
  • the first terminal device is, for example, the calling terminal device, and the first media connection request message is used to request the creation of a media endpoint for the calling terminal device.
  • the first media connection request message is determined by the first access media control network element according to the call establishment request message from the first terminal device.
  • the call establishment request message may be, for example, an INVITE message in a basic call flow.
  • the INVITE message carries media information (SDP) of the first terminal device.
  • the first access media control network element receives the INVITE message, acquires the media information of the first terminal device, and generates a first media connection request message according to the media information of the first terminal device.
  • the first access media control network element acquires the IP address information, port information, and protocol type information in each m-line of the SDP. It then includes, in the first media connection request message, the IP address information, port information, and protocol type information of each m-line in the SDP of the first terminal device.
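  • the per-m-line extraction can be sketched with a minimal SDP parser; this is a hypothetical helper for illustration (real SDP handling also covers media-level c-lines and many other attributes), with example addresses and ports:

```python
# Minimal sketch: pull IP, port, and protocol type out of each m-line of an
# SDP body, using the session-level c-line for the address.

def parse_sdp_media(sdp_text):
    ip = None
    media = []
    for line in sdp_text.splitlines():
        if line.startswith("c="):    # e.g. c=IN IP4 192.0.2.10
            ip = line.split()[-1]
        elif line.startswith("m="):  # e.g. m=audio 49170 RTP/AVP 0
            _, port, proto = line.split()[:3]
            media.append({"ip": ip, "port": int(port), "protocol": proto})
    return media

offer = """v=0
c=IN IP4 192.0.2.10
m=audio 49170 RTP/AVP 0
m=video 51372 RTP/AVP 96"""
media = parse_sdp_media(offer)
print(media)
```

Each entry of the result corresponds to one m-line, which is the granularity at which the first media connection request message carries the information.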
  • the unified media function network element creates a first media endpoint for the first terminal device according to the first media connection request message.
  • the media endpoint in the embodiment of the present application is similar to the terminal in the media resource model shown in FIG. 8 , and is a logical entity for sending and receiving one or more media.
  • a media endpoint (Endpoint) is used to identify a quintuple of an IP connection, including the IP address, port and transmission type of the local end and the remote end respectively.
  • the local IP address and port of the Endpoint refer to the IP address and port of the unified media function network element.
  • the remote IP address and port of the Endpoint refer to the IP address and port of the destination of the connection.
  • the local IP address and port of the first media endpoint are the IP address and port of the UMF
  • the remote IP address and port of the first media endpoint refer to IP address and port of the first terminal device.
  • transmission types may include, but are not limited to, RTP over user datagram protocol (UDP), or data channel over UDP/datagram transport layer security (DTLS)/stream control transmission protocol (SCTP), and so on.
  • different media endpoints are distinguished by the resource identifiers of the media endpoints, for example, by a uniform resource locator (URL).
  • the resource identifier of the media endpoint is EndpointURL, and the format of EndpointURL is: endpoints/ ⁇ endpoint_id ⁇ .
  • endpoints means that the URL is EndpointURL, and ⁇ endpoint_id ⁇ is the number of Endpoint, which is dynamically allocated at runtime. It can be understood that when the UMF creates the first media endpoint, it also creates the resource identifier of the first media endpoint.
  • the resource identifier of the first media endpoint is, for example, Endpoint1URL.
  • the unified media function network element also creates the first media stream for the first terminal device according to the first media connection request message.
  • the media stream in the embodiment of the present application is similar to the audio stream in the media resource model shown in FIG. 8 , and includes detailed information of the media stream, for example, the media stream includes audio and video codec information, and the like.
  • one Endpoint can have multiple Streams, for example, multiple video streams on one Endpoint in a multi-party call, multiple channels on a datachannel, etc.
  • different media streams are distinguished by resource identifiers of the media streams.
  • the resource identifier of a media stream is StreamURL
  • the format of StreamURL is: streams/ ⁇ stream_id ⁇ .
  • streams indicates that the URL is StreamURL, and ⁇ stream_id ⁇ is the number of Stream, which is dynamically allocated at runtime. It can be understood that when the UMF creates the first media stream, it also creates the resource identifier of the first media stream. For example, the resource identifier of the first media stream is Stream1URL.
  • the unified media function network element also creates a session media processing context (Context) according to the first media connection request message.
  • the session media processing context can be regarded as a logical container for recording media endpoints and media streams.
  • the session media processing context is an RTP processing entity or a datachannel processing entity.
  • a Context can have multiple Endpoints, and an Endpoint can have multiple Streams.
  • different session media processing contexts are distinguished by resource identifiers of session media processing contexts.
  • the resource identifier of the session media processing context is ContextURL
  • the format of ContextURL is: ⁇ umf_domain ⁇ /contexts/ ⁇ context_id ⁇ .
  • ⁇ umf_domain ⁇ is the domain name of UMF
  • contexts indicates that the URL is ContextURL
  • ⁇ context_id ⁇ is the number of Context, which is dynamically allocated at runtime.
  • the UMF after the UMF creates the first media endpoint for the first terminal device, it sends a media connection response message to the first access media control network element.
  • the media connection response message includes the resource identifier of the first media endpoint and the resource identifier of the first media stream.
  • the first access media control network element may send a call setup request message (for example, an INVITE message) to a subsequent network element.
  • the INVITE message includes the SDP Offer generated by the first access media control network element based on information such as the remote IP address, remote port, and audio/video codec type of the first media endpoint, and also includes a newly added extended SIP header field (such as a Local-Media header field) carrying the resource identifier of the first media endpoint.
  • the INVITE message is forwarded by intermediate network elements such as the CSCF/AS on the calling and called sides, where the Local-Media header field is transparently transmitted by the intermediate network elements.
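  • the Local-Media handling can be sketched as follows, using a simplified message dictionary rather than a real SIP stack; the Local-Media header name follows the example in the text, and the endpoint URL is an example value:

```python
# Sketch: the calling P-CSCF puts the first media endpoint's resource
# identifier into the extended Local-Media header of the outgoing INVITE;
# intermediate CSCF/AS elements pass the header through untouched.

def build_invite(sdp_offer, endpoint_url):
    return {"method": "INVITE",
            "headers": {"Content-Type": "application/sdp",
                        "Local-Media": endpoint_url},
            "body": sdp_offer}

def forward(message):
    # Intermediate network element: transparently transmit all headers,
    # including the extended Local-Media header field.
    return dict(message, headers=dict(message["headers"]))

invite = build_invite("v=0 ...", "umf.example.com/contexts/1000/endpoints/5000")
relayed = forward(invite)
print(relayed["headers"]["Local-Media"])
```

The called side can then read the endpoint identifier straight out of the relayed header, which is what allows it to address the same UMF context.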
  • the unified media function network element receives the second media connection request message from the second access media control network element.
  • the second access media control network element in the embodiment of the present application is the access media control network element of the called side (for example, the called P-CSCF).
  • the second media connection request message is used to request to create a second media endpoint for the second terminal device.
  • the second terminal device is, for example, the called terminal device, and the second media connection request message is used to request the creation of a media endpoint for the called terminal device.
  • the second media connection request message is determined by the second access media control network element according to the INVITE message.
  • the second access media control network element receives the INVITE message described in step 902, obtains the resource identifier of the first media endpoint from the Local-Media header field, and obtains information such as the remote IP address, remote port, and audio/video codec type of the first media endpoint from the INVITE message.
  • the second access media control network element determines that the second media connection request message includes information such as the remote IP address, remote port, and audio/video codec type of the first media endpoint, and sends the second media connection request message to the UMF.
  • the unified media function network element creates a second media endpoint for the second terminal device according to the second media connection request message.
  • the second media endpoint is similar to the first media endpoint described in the foregoing embodiments, and is also a logical entity for sending and receiving one or more kinds of media.
  • the local IP address and port of the second media endpoint are the IP address and port of the unified media function network element
  • the remote IP address and port of the second media endpoint are the IP address and port of the second terminal device.
  • the resource identifier of the second media endpoint is similar to the resource identifier of the first media endpoint.
  • the resource identifier of the second media endpoint is Endpoint2URL.
  • the unified media function network element further creates a second media stream for the second terminal device according to the second media connection request message, and creates a resource identifier of the second media stream.
  • for example, the resource identifier of the second media stream is Stream2URL.
  • FIG. 10 is a schematic diagram of a session media processing context provided by an embodiment of the present application.
  • the session media processing context includes a first media endpoint and a second media endpoint, and a first media stream and a second media stream.
  • the resource identifier of the session media processing context is ContextURL
  • the format of ContextURL is: ⁇ umf_domain ⁇ /contexts/ ⁇ context_id ⁇ .
  • the ContextURL is umf.huawei.com/contexts/1000.
  • the URL of Endpoint1 is umf.huawei.com/contexts/1000/endpoints/5000.
  • the Endpoint2URL is umf.huawei.com/contexts/1000/endpoints/6000.
  • the unified media function network element sends the resource identifier of the first media endpoint to the first access media control network element, and sends the resource identifier of the second media endpoint to the second access media control network element.
  • the UMF can send the EndpointURL (or StreamURL, ContextURL) to all network elements in the IMS trust domain, so that all network elements in the IMS trust domain can directly control the UMF. This makes it possible to share the media resources of the calling side and the called side under the IMS half-call model: logically only one UMF is needed, media forwarding only needs to be done once, and media resources are saved.
  • UMF passes the media resource URL.
  • the UMF sends Endpoint1URL, Endpoint2URL, Stream1URL, and Stream2URL to the access media control network elements on the MO/MT side, which can transparently transmit these URLs to multiple ASs. This facilitates sharing media resources between the network elements on the MO/MT side and multiple ASs, reduces the number of media forwarding hops, avoids media service conflicts and media collisions when multiple ASs are connected in series, and reduces the waste of media resources.
  • the interface provided by UMF to IMS network elements can adopt various control protocols, including but not limited to H.248 media gateway control protocol, RESTful API, RPC, etc.
  • the unified media function network element transmits the media data between the first terminal device and the second terminal device through the first media endpoint and the second media endpoint.
  • the unified media function network element creates a first media endpoint and a second media endpoint, and creates a first media stream and a second media stream for transmitting media data between the first terminal device and the second terminal device.
  • the unified media function network element associates the first media endpoint with the second media endpoint.
  • the session media processing context shown in FIG. 10 also includes an association (association) between the first media endpoint and the second media endpoint, which is used to describe the association relationship between the media endpoints.
  • the first media endpoint and the second media endpoint are associated through association.
  • one inbound stream may be associated with multiple outbound streams (i.e., copying).
  • multiple inbound streams may be associated with one outbound stream (i.e., mixing).
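The copy and mix relationships between inbound and outbound streams can be sketched with a toy model; the class and stream names are illustrative only:

```python
from collections import defaultdict

class AssociationTable:
    """Toy model of stream associations inside a session media context."""

    def __init__(self):
        # inbound stream -> list of outbound streams it feeds
        self.out_of = defaultdict(list)

    def associate(self, inbound: str, outbound: str):
        self.out_of[inbound].append(outbound)

    def fanout(self, inbound: str):
        """One inbound associated with several outbound = copying."""
        return self.out_of[inbound]

    def sources_of(self, outbound: str):
        """Several inbound associated with one outbound = mixing."""
        return [i for i, outs in self.out_of.items() if outbound in outs]

t = AssociationTable()
t.associate("stream1", "stream2")  # plain forwarding
t.associate("stream1", "stream3")  # copy: stream1 fans out to two streams
t.associate("stream4", "stream2")  # mix: stream2 now has two sources
```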
  • the embodiment of the present application provides a call processing method.
  • the call processing method is based on an IMS network architecture and executed by a newly added unified media function network element.
  • unified media function network elements provide media access capabilities on the access side, reducing media detours and media transmission delays, which helps the IMS network architecture support delay-sensitive services such as extended reality (XR).
  • the UMF transmits the URL to the control network element in the IMS trust domain, and realizes media resource sharing between the calling side and the called side under the IMS half-call model.
  • Multiple control network elements can control the session media processing context of the same UMF network element. Logically, only one UMF is needed, and media forwarding only needs to be done once, which also saves media resources.
  • FIG. 11 is a schematic flow diagram of implementing media redirection after the call processing method provided in the embodiment of the present application executes a basic call processing flow.
  • the media redirection process may also be performed, including the following steps:
  • the unified media function network element receives a media redirection indication message from an application server.
  • the unified media function network element has established the first media endpoint and the second media endpoint and associated the first media endpoint and the second media endpoint after executing the basic call processing flow according to the steps in the embodiment in FIG. 9 .
  • the access media control network element also needs to control the UMF to implement media redirection.
  • when the called party rings, the called-side AS starts the CRBT (color ring back tone) service process and sends a media redirection indication message to the UMF.
  • the media redirection indication message is used to request the creation of a media redirection endpoint for the resource or device indicated by the message.
  • the resource indicated by the media redirection instruction message is, for example, an audio playback resource
  • the audio playback resource is used to play an audio to a designated media endpoint, thereby playing an audio to a designated terminal device.
  • the device indicated by the media redirection indication message is, for example, a third terminal device, that is, a new terminal device other than the first terminal device and the second terminal device.
  • the third terminal device may perform media redirection with the first terminal device (that is, the second terminal device no longer maintains a media connection with the first terminal device), or with the second terminal device, which is not limited in this embodiment.
  • the unified media function network element creates a media redirection endpoint for the media redirection instruction message according to the media redirection instruction message.
  • the unified media function network element creates a media redirection endpoint for the resources indicated by the media redirection indication message according to the resources indicated by the media redirection indication message.
  • the resource indicated by the media redirection indication message is the playback resource.
  • the UMF creates a media redirection endpoint for the voice ringtone.
  • the playback resource is a video ring back tone
  • the UMF creates a first media redirection endpoint (ie, an audio endpoint) and a second media redirection endpoint (ie, a video endpoint) for the video ringback tone.
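A minimal sketch of the endpoint-creation choice above (one redirection endpoint for a voice ring back tone, an audio plus a video endpoint for a video ring back tone); the function and resource-type names are illustrative assumptions:

```python
def create_redirect_endpoints(resource_type: str) -> list:
    """Return the media redirection endpoints the UMF would create.

    'audio_rbt' -> one audio endpoint (voice ring back tone)
    'video_rbt' -> a first (audio) and a second (video) redirection endpoint
    """
    if resource_type == "audio_rbt":
        return [{"kind": "audio"}]
    if resource_type == "video_rbt":
        return [{"kind": "audio"}, {"kind": "video"}]
    raise ValueError(f"unknown playback resource type: {resource_type}")

endpoints = create_redirect_endpoints("video_rbt")
```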
  • the unified media function network element associates the first media endpoint with the media redirection endpoint according to the media redirection instruction message.
  • the playback resource is a resource stored in the UMF and can be called by the UMF
  • the unified media function network element plays the announcement to the first terminal device through the first media endpoint and the media redirection endpoint.
  • the UMF plays a voice ringtone to the first terminal device.
  • the unified media function network element associates the second media endpoint with the media redirection endpoint according to the media redirection instruction message.
  • the unified media function network element plays the sound to the second terminal device through the second media endpoint and the media redirection endpoint.
  • the AS only needs to directly modify the media connection of the UMF through a non-OA media control interface (e.g., a service interface).
  • the unified media function network element changes the association of the first media endpoint from the second media endpoint to the media redirection endpoint.
  • the unified media function network element changes the association of the second media endpoint from the first media endpoint to the media redirection endpoint.
  • UMF only needs to modify the association between media endpoints, and does not need to update the media information on the terminal side. Therefore, it does not need to execute the media redirection process on the terminal side, avoiding possible service conflicts and media collisions.
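The association update described above can be sketched with a toy model; the function and endpoint names are illustrative assumptions, not part of the UMF interface:

```python
def redirect(associations: dict, endpoint: str, new_peer: str) -> dict:
    """Re-point `endpoint` at `new_peer` without touching terminal-side media.

    `associations` maps each endpoint to its current peer endpoint.
    Only the UMF-internal association changes; no SDP renegotiation occurs.
    """
    associations = dict(associations)
    associations[endpoint] = new_peer
    return associations

# Basic call: endpoint1 <-> endpoint2; an announcement redirects endpoint1.
assoc = {"endpoint1": "endpoint2", "endpoint2": "endpoint1"}
assoc = redirect(assoc, "endpoint1", "redirect_endpoint")  # play announcement
assoc = redirect(assoc, "endpoint1", "endpoint2")          # re-associate after
```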
  • the UMF deployed on the access side directly provides announcement, which is beneficial to reduce media detour and delay.
  • when the first terminal device and the second terminal device still have a service to continue after the announcement is completed, the UMF re-associates the first media endpoint and the second media endpoint according to the indication information of the first access media control network element and the second access media control network element, so that the media data of the service continued after the announcement is transmitted through the first media endpoint and the second media endpoint.
  • the unified media function network element creates a media redirection endpoint for the device indicated by the media redirection indication message according to the device indicated by the media redirection indication message.
  • the device indicated by the media redirection indication message is the third terminal device.
  • the third terminal device is a device corresponding to the forwarding service, that is, the first terminal device/the second terminal device may trigger the forwarding service and forward the service to the third terminal device.
  • the unified media function network element creates a third media endpoint for the third terminal device according to the media redirection indication message.
  • the unified media function network element associates the first media endpoint with the third media endpoint according to the media redirection instruction message.
  • the unified media function network element transmits the media data between the first terminal device and the third terminal device through the first media endpoint and the third media endpoint.
  • the unified media function network element also receives a media connection cancel message from the second access media control network element. Wherein, the media connection cancel message is used to request to delete the second media endpoint.
  • the UMF when the first media endpoint performs media redirection, the UMF will delete the second media endpoint that has established a media connection with the first media endpoint according to the media connection cancel message. Then, the UMF creates a third media endpoint for the third terminal device according to the media redirection indication message, and associates the first media endpoint with the third media endpoint.
  • the unified media function network element updates the association relationship between the first media endpoint and the second media endpoint. That is to say, in the above forwarding process, the AS only needs to directly modify the media connection of the UMF through the non-OA media control interface, without going through the complicated media renegotiation process, avoiding possible service conflicts and media collisions. It should be noted that in the above forwarding process, when the codec information of the terminal device changes, only unilateral negotiation is required (for example, only the codec information needs to be negotiated), and media negotiation between the first terminal device and the third terminal device is not required, avoiding possible problems such as media negotiation oscillation and terminal compatibility. When the IP address and/or port of the terminal device changes, there is no need to modify the media information of the opposite end, which likewise avoids such problems.
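Under the stated assumptions, the forwarding sequence above reduces to three UMF-internal operations; the sketch below models them on a simple context dictionary (all names are illustrative):

```python
def forward_call(context: dict, old_ep: str, kept_ep: str, new_ep: str) -> dict:
    """Model the UMF-side forwarding steps:
    1. delete the old endpoint and its associations (media connection cancel),
    2. create the new endpoint for the forwarded-to terminal,
    3. associate the kept endpoint with the new one.
    No media renegotiation with the kept terminal is needed.
    """
    context = {"endpoints": set(context["endpoints"]),
               "assoc": dict(context["assoc"])}
    context["endpoints"].discard(old_ep)      # step 1: delete old endpoint
    context["assoc"].pop(old_ep, None)        #         and its association
    context["endpoints"].add(new_ep)          # step 2: create new endpoint
    context["assoc"][kept_ep] = new_ep        # step 3: associate both ways
    context["assoc"][new_ep] = kept_ep
    return context

ctx = {"endpoints": {"endpoint1", "endpoint2"},
       "assoc": {"endpoint1": "endpoint2", "endpoint2": "endpoint1"}}
ctx = forward_call(ctx, old_ep="endpoint2", kept_ep="endpoint1",
                   new_ep="endpoint3")
```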
  • the unified media function network element associates the second media endpoint with the third media endpoint according to the media redirection instruction message.
  • the unified media function network element transmits the media data between the second terminal device and the third terminal device through the second media endpoint and the third media endpoint.
  • the unified media function network element also receives a media connection cancel message from the first access media control network element. Wherein, the media connection cancel message is used to request to delete the first media endpoint.
  • the UMF when the second media endpoint performs media redirection, the UMF will delete the first media endpoint that has established a media connection with the second media endpoint according to the media connection cancel message. Then, the UMF creates a third media endpoint for the third terminal device according to the media redirection indication message, and associates the second media endpoint with the third media endpoint.
  • the unified media function network element updates the association relationship between the first media endpoint and the second media endpoint. That is to say, in the above forwarding process, the AS only needs to directly modify the media connection of the UMF through the non-OA media control interface, and does not need to go through the complicated media renegotiation process, avoiding possible service conflicts and media collisions.
  • the unified media function network element transmits the media data between the first terminal device and the resource or device indicated by the media redirection indication message through the first media endpoint and the media redirection endpoint, and/or transmits the media data between the second terminal device and that resource or device through the second media endpoint and the media redirection endpoint.
  • in the announcement service, for the first terminal device on the calling side, the unified media function network element associates the first media endpoint with the media redirection endpoint (the announcement resource endpoint), and plays the announcement to the first terminal device through the first media endpoint.
  • the unified media function network element associates the second media endpoint with the media redirection endpoint, and plays the sound to the second terminal device through the second media endpoint.
  • the application server sends an instruction message to stop the announcement to the unified media function network element.
  • the unified media function network element disassociates the first media endpoint from the media redirection endpoint, or disassociates the second media endpoint from the media redirection endpoint according to the playback stop indication message.
  • the first terminal device and the second terminal device may still have a service to continue after the announcement; in that case, the UMF disassociates the first media endpoint from the media redirection endpoint according to the stop-announcement indication message, and re-associates the first media endpoint with the second media endpoint.
  • the unified media function network element associates the first media endpoint with the media redirection endpoint (the third media endpoint), and transmits the media data between the first terminal device and the third terminal device (the forwarded-to party) through the first media endpoint and the third media endpoint.
  • there are also some services in the IMS network that play an announcement and perform media redirection at the same time. For example, in the call waiting service, user A is talking with user B when another user C calls user A. User A can then hold two associated sessions: the first is a normal call, and the second is on hold. The AS plays the announcement to the held party (user B or user C), and the status of the two sessions can be switched at any time.
  • the unified media function network element creates the first media endpoint and the second media endpoint, and associates the first media endpoint with the second media endpoint.
  • the unified media function network element creates a media redirection endpoint (including the media endpoint corresponding to the playback resource and the third media endpoint).
  • the application server controls the UMF to transmit media data between different media endpoints according to the session state. For example, when user A is talking with user B, the access media control network element and the application server control the UMF to transmit the media data between user A and user B through the first media endpoint and the second media endpoint, and use the playback resource The corresponding media endpoint and the third media endpoint play audio to user C.
  • the embodiment of the present application provides a call processing method.
  • the call processing method is based on an IMS network architecture and executed by a newly added unified media function network element.
  • the unified media function network element executes the basic call flow under the control of the control network element in the IMS trust domain, it can also execute service flows that require media redirection (such as announcement service and forwarding service, etc.).
  • when performing the media redirection process, the application server only needs to directly modify the media connection of the unified media function network element through the non-OA media control interface, without going through the complicated media renegotiation process, avoiding issues such as possible service conflicts and media collisions.
  • FIG. 12 is a schematic flowchart of the application of the call processing method provided in the embodiment of the present application in the intercommunication scenario, including the following steps:
  • the unified media function network element receives an interworking media connection request message from the interworking border control function network element.
  • the interworking media connection request message is used to request to create an interworking media endpoint for the interworking device.
  • the UMF provides the IP media interworking capability, and the interworking devices are devices in other IP domains, for example, terminal devices in other IP domains.
  • the interworking media connection request message is determined by the interworking border control function network element according to the media information of the interworking device.
  • the interworking border control function network element carries the media information of the interworking device in the interworking media connection request message, and sends the interworking media connection request message to the UMF.
  • the unified media function network element creates an interworking media endpoint for the interworking device according to the interworking media connection request message.
  • when the interworking media connection request message is used to request the establishment of a media connection for a basic call, the unified media function network element creates an interworking media endpoint for the interworking device according to the media information of the interworking device in the interworking media connection request message.
  • the unified media function network element associates the media endpoint in the UMF with the interworking media endpoint.
  • the UMF creates an interworking media endpoint according to the interworking media connection request message, and associates the first media endpoint with the interworking media endpoint.
  • for the specific implementation, refer to the implementation of creating the first media endpoint/second media endpoint and associating the first media endpoint with the second media endpoint in the embodiment in FIG. 9, which will not be repeated here.
  • when the interworking media connection request message is used to request the establishment of a media connection for media redirection, the unified media function network element creates an interworking media endpoint for the interworking device according to the media information of the interworking device in the interworking media connection request message, and disassociates the media endpoints that have already been negotiated in the UMF.
  • the UMF creates an interworking media endpoint according to the interworking media connection request message, and associates the first media endpoint with the interworking media endpoint. In addition, the UMF disassociates the first media endpoint from the second media endpoint.
  • the unified media function network element transmits the media data between the first terminal device and the interworking device through the first media endpoint and the interworking media endpoint, or transmits the media data between the second terminal device and the interworking device through the second media endpoint and the interworking media endpoint.
  • the first media endpoint is associated with an interworking media endpoint.
  • the unified media function network element transmits the media data between the first terminal device and the interworking device through the first media endpoint and the interworking media endpoint.
  • the interworking media endpoint is associated with the second media endpoint.
  • the unified media function network element transmits the media data between the second terminal device and the interworking device through the second media endpoint and the interworking media endpoint.
  • the UMF transmits the media data between the terminal device on the calling side and the interworking device of the forwarded-to party.
  • FIG. 13 is a schematic flowchart of the application of the call processing method provided by the embodiment of the present application in the scenario of calling and called audio basic call establishment.
  • the calling and called audio basic call establishment scenario includes the MO side and the MT side. The MO side includes the first terminal equipment UE1, the first access media control network element P-CSCF1, the unified media function network element UMF, and the first application server CSCF1/AS1; the MT side includes the second terminal equipment UE2, the second access media control network element P-CSCF2, and the second application server CSCF2/AS2.
  • the calling and called audio basic call setup process shown in Figure 13 includes the following steps:
  • UE1 sends an INVITE message to P-CSCF1, carrying UE1's media information SDP (UE1).
  • the P-CSCF1 at the calling side receives the INVITE message, and sends a first media connection request message to the UMF, requesting to create a first media endpoint.
  • the P-CSCF carries the following content in the first media connection request message according to the SDP of UE1:
  • the UMF receives the first media connection request message from P-CSCF1, and creates a session media processing context, a first media endpoint and a first media stream according to the first media connection request message. Specifically divided into the following three situations:
  • P-CSCF1 receives the first media connection response message from UMF, and forwards the INVITE message to subsequent network elements.
  • the INVITE message includes an SDP offer generated according to information such as the remote IP/port/codec type of endpoint1, and an extended SIP header field Local-Media is added at the same time, and the Local-Media carries context1 and endpoint1.
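The extended Local-Media header above could be handled as sketched below; the value syntax (`context;endpoint`) is an assumed serialization for illustration, not the format defined by this description:

```python
def add_local_media(headers: dict, context: str, endpoint: str) -> dict:
    """Attach the extended Local-Media header carrying the UMF resource IDs."""
    headers = dict(headers)
    headers["Local-Media"] = f"{context};{endpoint}"
    return headers

def parse_local_media(headers: dict):
    """Return (context, endpoint) from the Local-Media header, if present."""
    value = headers.get("Local-Media")
    return tuple(value.split(";")) if value else None

# P-CSCF1 adds the header before forwarding the INVITE into the trust domain.
hdrs = add_local_media({"Call-ID": "abc123"}, "context1", "endpoint1")
```

Intermediate elements (CSCF1, AS1, CSCF2, AS2) would forward such a header untouched, which is what makes `parse_local_media` usable at the called side.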
  • intermediate network elements such as CSCF1, AS1, CSCF2, and AS2 will forward the INVITE message and transparently transmit the Local-Media header field.
  • the specific transmission mode is not shown in this example.
  • the P-CSCF2 at the called side receives the INVITE message, and obtains context1 from the Local-Media header field. P-CSCF2 sends a second media connection request message to UMF, instructing UMF to create Endpoint and Stream under context1.
  • the processing of P-CSCF2 is similar to that of P-CSCF1 at the calling side; the difference is that the Context does not need to be created again, which will not be repeated here.
  • the UMF receives the second media connection request message from P-CSCF2, and creates a second media endpoint and a second media stream according to the second media connection request message.
  • the specific process is similar to the creation of the first media endpoint and the first media stream by UMF. For example, UMF creates Endpoint and Stream in context1, assuming endpoint2 and stream2.
  • P-CSCF2 receives the second media connection response message from UMF, and sends an INVITE message to UE2.
  • the INVITE message includes the SDP offer generated according to information such as the local IP/port/codec type of endpoint2.
  • the INVITE message does not include the Local-Media header field.
  • border network elements such as the P-CSCF, MGCF, and IBCF need to delete the Local-Media header field when sending SIP messages to non-IMS domains.
  • the S-CSCF may delete the Local-Media header field when triggering the third-party AS.
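A border element's handling of the header could be sketched as follows; the header name is from the description, while the function and the boolean flag are assumed minimal forms:

```python
def strip_local_media_if_leaving_trust_domain(headers: dict,
                                              to_ims: bool) -> dict:
    """Delete the Local-Media header field when the SIP message leaves the
    IMS trust domain (e.g. at a P-CSCF, MGCF, or IBCF boundary).
    Messages staying inside the trust domain keep the header."""
    headers = dict(headers)
    if not to_ims:
        headers.pop("Local-Media", None)
    return headers

out = strip_local_media_if_leaving_trust_domain(
    {"Local-Media": "context1;endpoint1", "Call-ID": "abc123"},
    to_ims=False,
)
```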
  • UE2 sends a second media connection response message (for example, 183 message or 180 message) to P-CSCF2, and the second media connection response message carries UE2's SDP answer.
  • a second media connection response message for example, 183 message or 180 message
  • P-CSCF2 receives the second media connection response message and obtains the SDP answer.
  • P-CSCF2 sends a message to UMF, and the message carries UE2's media information (SDP answer), which is used to indicate that the remote media information of endpoint2 is UE2's media information.
  • SDP answer UE2's media information
  • P-CSCF2 queries all Streams under endpoint1 and endpoint2, and establishes associations for Streams of the same media type. For example, P-CSCF2 finds that the Stream under endpoint1 is stream1, the Stream under endpoint2 is stream2, and the media types are the same.
  • the P-CSCF2 sends a first association establishment instruction message to the UMF to establish the association from stream1 to stream2.
  • the P-CSCF2 sends a second association establishment instruction message to the UMF to establish an association from stream2 to stream1.
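The two association-establishment instruction messages above (stream1→stream2 and stream2→stream1) follow from pairing Streams of the same media type; a toy sketch with illustrative names:

```python
def pair_streams_by_type(ep1_streams: dict, ep2_streams: dict) -> list:
    """Pair Streams of the same media type across two endpoints and return
    the associations to establish: one per direction, modeling the first
    and second association establishment instruction messages."""
    assocs = []
    for s1, media_type1 in ep1_streams.items():
        for s2, media_type2 in ep2_streams.items():
            if media_type1 == media_type2:
                assocs += [(s1, s2), (s2, s1)]
    return assocs

# P-CSCF2 finds stream1 under endpoint1 and stream2 under endpoint2,
# both of media type audio:
assocs = pair_streams_by_type({"stream1": "audio"}, {"stream2": "audio"})
```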
  • P-CSCF2 sends a 1XX response message to P-CSCF1.
  • the 1XX response message includes: the SDP Answer generated by P-CSCF2 based on the local IP/port of endpoint2 and the codec type information returned by UE2, and the extended SIP header field Local-Media, context1 and endpoint2 are carried in Local-Media.
  • intermediate network elements such as CSCF1, AS1, CSCF2, and AS2 will forward the 1XX response message and transparently transmit the Local-Media header field.
  • the specific transmission mode is not shown in this example.
  • P-CSCF1 receives the 1XX response message, and modifies the local IP and port of endpoint1 to the IP and port of the SDP answer in the response message. Then send a 1XX response message to UE1, and delete the Local-Media header field at the same time.
  • Instances such as Context, Endpoint, and Stream can only be modified and deleted by the creator of the instance, and other control nodes can only be queried.
  • UMF restricts the operation of control nodes on internal resource instances by recording the creator's identity or using Token and other mechanisms.
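One way such a restriction could look is sketched below, using creator tracking (a Token-based variant would be analogous); the class and node names are purely illustrative:

```python
class ResourceGuard:
    """Toy model: only the creator of a Context/Endpoint/Stream instance may
    modify or delete it; other control nodes may only query."""

    def __init__(self):
        self.creator = {}  # resource path -> identity of the creating node

    def create(self, resource: str, node: str):
        self.creator[resource] = node

    def may_modify(self, resource: str, node: str) -> bool:
        """Modification/deletion is restricted to the recorded creator."""
        return self.creator.get(resource) == node

    def may_query(self, resource: str, node: str) -> bool:
        """Any control node may query an existing resource."""
        return resource in self.creator

g = ResourceGuard()
g.create("contexts/1000/endpoints/5000", "P-CSCF1")
```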
  • FIG. 14 is a schematic diagram of a media connection topology of an audio call provided in an embodiment of the present application.
  • the voice stream between the first terminal equipment UE1 on the calling side and the second terminal equipment UE2 on the called side is transmitted through the first media endpoint endpoint1 and the second media endpoint endpoint2 created by the UMF, and through the association between endpoint1 and endpoint2.
  • FIG. 15 is a schematic diagram of a media connection topology of a video call provided in an embodiment of the present application.
  • the media connection topology in Fig. 15 has two more media endpoints (endpoint3 and endpoint4).
  • the voice stream between the first terminal equipment UE1 on the calling side and the second terminal equipment UE2 on the called side is transmitted through endpoint1 and endpoint3 created by the UMF and the association between endpoint1 and endpoint3.
  • the video stream between the first terminal equipment UE1 on the calling side and the second terminal equipment UE2 on the called side is transmitted through endpoint2 and endpoint4 and the association between endpoint2 and endpoint4 created by UMF.
  • the UMF replaces the original IMS-AGW to provide media services on the access side, which enriches the media service capabilities on the access side.
  • UMF provides UMI URL and non-OA media control interface (such as service interface), and by extending the IMS signaling process, UMF transmits UMI URL to realize UMF media resource sharing.
  • FIG. 16 is a schematic flowchart of the application of the call processing method provided in the embodiment of the present application in a voice playback service scenario.
  • the announcement service scenario includes the MO side and the MT side. The MO side includes the first terminal equipment UE1, the first access media control network element P-CSCF1, the unified media function network element UMF, and the first application server CSCF1/AS1; the MT side includes the second terminal equipment UE2, the second access media control network element P-CSCF2, and the second application server CSCF2/AS2.
  • the playback process shown in Figure 16 includes the following steps:
  • Steps 1 to 12 are the same as steps 1 to 12 of the audio basic call flow shown in Figure 13.
  • the calling and called P-CSCFs respectively instruct UMF to create Context, Endpoint, and Stream.
  • assuming that the announcement process described in Figure 16 takes place in a video call,
  • the calling side and the called side each need to create two Endpoints: endpoint1 and endpoint2 on the calling side, and endpoint3 and endpoint4 on the called side, as shown in Figure 15.
  • the called side rings, and AS2 on the called side receives a 180 response, starts the video ring back tone service processing, and sends a play instruction message to the UMF, instructing the UMF to play the video ring back tone (including the voice stream and the video stream) to the user connected to endpoint1/endpoint2.
  • UMF creates the media endpoint corresponding to the playback resource, and creates an association between endpoint1/endpoint2 and the media endpoint corresponding to the playback resource respectively.
  • the UMF only needs to update the associations of endpoint1/endpoint2 during the playing of the video ring back tone, and does not need to update the media information on the UE side, so the media renegotiation process is not required.
  • AS2 sends a 180 response message to the P-CSCF1 on the calling side.
  • the 180 response message carries the SDP answer.
  • the specific implementation method is similar to the corresponding steps in the basic call flow shown in Figure 13, and will not be repeated here.
  • the P-CSCF1 on the calling side sends a 180 response message to the first terminal equipment UE1.
  • the 180 response message carries the SDP answer.
  • the specific implementation method is similar to the corresponding steps in the basic call flow shown in Figure 13, and will not be repeated here.
  • the called party picks up the hook, and the second terminal equipment UE2 sends a 200 (INVITE) response message to the P-CSCF2.
  • the P-CSCF2 sends the 200 (INVITE) response message to the application server AS2.
  • the specific implementation method is similar to the corresponding steps in the basic call flow shown in FIG. 13 , and will not be repeated here.
  • AS2 receives the 200 (INVITE) response message, and sends a stop playing instruction message to the UMF.
  • the stop playing indication message indicates that the UMF no longer plays the sound to UE1 through endpoint1/endpoint2.
  • AS2 sends a 200 (INVITE) response message to the P-CSCF1 on the calling side.
  • P-CSCF1 sends a 200 (INVITE) response message to UE1.
  • UE1 sends an ACK message to UE2 on the called side.
  • each network element in the IMS forwards the ACK message to UE2, as shown by 22-24 in FIG. 16 .
  • FIG. 17 is a schematic diagram of a media connection topology for playback of a video CRBT provided in an embodiment of the present application.
  • the UMF plays the sound (including voice and video) to the first terminal device UE1 on the calling side through the media endpoint corresponding to the sound playing resource.
  • when the media information needs to be updated, the AS only needs to directly modify the media connection of the UMF through the non-OA media control interface, without going through the complicated media renegotiation process, which avoids issues such as possible service conflicts and media collisions.
  • the UMF deployed on the access side provides announcements, which helps reduce media detours and service delays.
  • FIG. 18 is a schematic flowchart of the call processing method provided in the embodiment of the present application applied to a forwarding service scenario.
  • the forwarding service scenario includes the first terminal equipment UE1, the first access media control network element P-CSCF1, the unified media function network element UMF, the first application server CSCF1/AS1, the second terminal equipment UE2, the second access The media control network element P-CSCF2, the second application server CSCF2/AS2, the third terminal equipment UE3, the third access media control network element P-CSCF3 and the third application server CSCF3/AS3.
  • a specific forwarding service scenario assumes that UE1 calls UE2 and, after media negotiation is completed, forwarding to UE3 is triggered.
  • the forwarding procedure shown in FIG. 18 includes the following steps:
  • AS2 on the called side sends a CANCEL message to P-CSCF2 to cancel the call from UE1 to UE2.
  • P-CSCF2 receives the CANCEL message and instructs the UMF to delete the previously created second media endpoint endpoint2 and all of its associations (for example, the association between endpoint2 and endpoint1).
  • AS2 sends an INVITE message to P-CSCF3 of the forwarding destination, carrying UE1's SDP offer and Local-Media header field.
  • P-CSCF3 receives the INVITE message, and P-CSCF3 instructs UMF to create a third media endpoint endpoint3 for UE3.
  • the UMF creates the URL of endpoint3 according to the third media connection request message from P-CSCF3, and sends the URL of endpoint3 to P-CSCF3.
  • P-CSCF3 sends an INVITE message to UE3.
  • UE3 sends a 1XX response message to P-CSCF3, and the 1XX response message carries the SDP answer.
  • P-CSCF3 instructs UMF to update the remote media information of endpoint3.
  • the specific implementation is similar to the corresponding steps in the basic call flow shown in FIG. 13 .
  • P-CSCF3 sends a 1XX response message to AS2, and the 1XX response message carries the SDP answer.
  • AS1/AS2 needs to instruct the UMF to update the calling-side media information, which specifically includes the following steps:
  • AS2 sends an UPDATE message to P-CSCF1 on the calling side, and the UPDATE message carries the SDP offer.
  • the IP address, port number, and other parameters in the SDP offer are determined according to the local IP address and port number of endpoint3, and the codec information is generated according to the SDP answer fed back by UE3.
  • P-CSCF1 receives the UPDATE message, and modifies the parameters such as the IP address and port number in the SDP offer to the information corresponding to endpoint1. P-CSCF1 sends an UPDATE message to UE1.
  • UE1 sends a 200 response message to P-CSCF1, and the 200 response message carries the SDP answer.
  • P-CSCF1 receives the 200 response message, updates the remote IP address and port number of endpoint1 according to the IP address and port in the SDP answer, and updates the codec information of endpoint1 according to the codec information in the SDP answer.
  • P-CSCF1 forwards the 200 response message to AS2, and modifies the IP address and port in the SDP answer to the IP address and port of endpoint1.
  • after receiving the 200 response message, AS2 terminates it.
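The forwarding steps above (deleting endpoint2, creating endpoint3, and then updating only the remote media information of the endpoints) can be sketched as UMF-side bookkeeping. This is a minimal sketch: the class, method names, and the resource URL format are illustrative assumptions, not the actual UMF interface.

```python
# Minimal sketch of UMF endpoint bookkeeping during call forwarding.
# Class, method names, and URL format are illustrative assumptions.

class UMF:
    def __init__(self):
        self.endpoints = {}        # name -> {"local": ..., "remote": ...}
        self.associations = set()  # frozenset({ep_a, ep_b})

    def create_endpoint(self, name, local_addr):
        self.endpoints[name] = {"local": local_addr, "remote": None}
        return "umf://context1/" + name  # hypothetical resource URL

    def delete_endpoint(self, name):
        # Remove the endpoint and every association it participates in
        # (e.g. endpoint2 and the endpoint1<->endpoint2 association).
        self.endpoints.pop(name, None)
        self.associations = {a for a in self.associations if name not in a}

    def associate(self, a, b):
        self.associations.add(frozenset((a, b)))

    def update_remote(self, name, remote_addr):
        # Non-OA update: only the endpoint's remote media info changes;
        # no SDP renegotiation between calling and called side is needed.
        self.endpoints[name]["remote"] = remote_addr


# Forwarding scenario: UE1 -> UE2 established, then forwarded to UE3.
umf = UMF()
umf.create_endpoint("endpoint1", ("10.0.0.1", 30000))
umf.create_endpoint("endpoint2", ("10.0.0.1", 30002))
umf.associate("endpoint1", "endpoint2")

# CANCEL towards UE2: delete endpoint2 and its associations.
umf.delete_endpoint("endpoint2")
# INVITE towards UE3: create endpoint3 and re-associate.
umf.create_endpoint("endpoint3", ("10.0.0.1", 30004))
umf.associate("endpoint1", "endpoint3")
umf.update_remote("endpoint3", ("203.0.113.9", 40000))
```

Note that endpoint1, the endpoint facing UE1, is never recreated; only its peer changes, which is why the calling side sees no renegotiation.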
  • FIG. 19 is a schematic diagram of a media connection topology of a forwarding service provided in an embodiment of the present application.
  • the UMF deletes the second media endpoint endpoint2 originally associated with the first media endpoint endpoint1, and creates a new third media endpoint endpoint3.
  • the voice stream between the first terminal device UE1 and the third terminal device UE3 is transmitted through the first media endpoint endpoint1 and the third media endpoint endpoint3 created by the UMF and the association between endpoint1 and endpoint3.
  • when the media information needs to be updated, the AS only needs to modify the media connection on the UMF directly through a non-OA-negotiated interface, and does not need to go through the complicated media renegotiation procedure, avoiding possible issues such as service conflicts and media collisions.
  • when the media IP address or port of the calling side changes, only the remote IP address and port of the media endpoint inside the UMF need to be modified, and media renegotiation between the calling side and the called side is not required, avoiding possible media renegotiation oscillation and terminal compatibility issues.
  • the unified media processing network element, the first access media control network element, and the second access media control network element may use a hardware structure, a software module, or a hardware structure plus a software module to implement the above functions. Whether one of the above functions is performed in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
  • each functional module in each embodiment of the present application can be integrated into one processor, or can exist physically separately, or two or more modules can be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or in the form of software function modules.
  • a call processing device 2000 provided by the embodiment of the present application is used to realize the function of the unified media function network element in the above method embodiment.
  • the apparatus may be a device, an apparatus in a device, or an apparatus that can be used in combination with a device.
  • the device may be a system on a chip.
  • the system-on-a-chip may consist of chips, or may include chips and other discrete devices.
  • the call processing apparatus 2000 includes at least one processor 2020, configured to realize the function of the unified media function network element in the call processing method provided in the embodiment of the present application.
  • the processor 2020 creates the first media endpoint for the first terminal device according to the first media connection request message, and details are not described here.
  • the apparatus 2000 may also include at least one memory 2030 for storing program instructions and/or data.
  • the memory 2030 is coupled to the processor 2020 .
  • the coupling in the embodiments of the present application is an indirect coupling or a communication connection between devices, units or modules, which may be in electrical, mechanical or other forms, and is used for information exchange between devices, units or modules.
  • Processor 2020 may cooperate with memory 2030 .
  • Processor 2020 may execute program instructions stored in memory 2030 . At least one of the at least one memory may be included in the processor.
  • the device 2000 may further include a communication interface 2010, which may be, for example, a transceiver, an interface, a bus, a circuit, or a device capable of implementing a transceiver function.
  • the communication interface 2010 is used to communicate with other devices through a transmission medium, so that the devices used in the device 2000 can communicate with other devices.
  • the processor 2020 uses the communication interface 2010 to send and receive data, and is used to implement the method performed by the unified media function network element described in the embodiments corresponding to FIG. 9 to FIG. 19.
  • the specific connection medium among the communication interface 2010, the processor 2020, and the memory 2030 is not limited.
  • the memory 2030, the processor 2020, and the communication interface 2010 are connected through the bus 2040.
  • the bus is represented by a thick line in FIG. 20; the connection mode between other components is merely a schematic illustration and is not limited thereto.
  • the bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 20, but this does not mean that there is only one bus or one type of bus.
  • the call processing apparatus 2100 includes at least one processor 2120, configured to implement the function of the first access media control network element in the method provided in the embodiment of the present application.
  • the processor 2120 invokes the communication interface 2110 to send a first media connection request message to the unified media function network element.
  • the first media connection request message is used to request to create a first media endpoint for the first terminal device.
  • the apparatus 2100 may also include at least one memory 2130 for storing program instructions and/or data.
  • the memory 2130 is coupled to the processor 2120 .
  • the coupling in the embodiments of the present application is an indirect coupling or a communication connection between devices, units or modules, which may be in electrical, mechanical or other forms, and is used for information exchange between devices, units or modules.
  • Processor 2120 may cooperate with memory 2130 .
  • Processor 2120 may execute program instructions stored in memory 2130 . At least one of the at least one memory may be included in the processor.
  • the device 2100 may further include a communication interface 2110, which may be, for example, a transceiver, an interface, a bus, a circuit, or a device capable of implementing a transceiver function.
  • the communication interface 2110 is used to communicate with other devices through a transmission medium, so that the devices used in the device 2100 can communicate with other devices.
  • the processor 2120 uses the communication interface 2110 to send and receive data, and is used to implement the method performed by the first access medium control network element described in the embodiments corresponding to FIG. 9 to FIG. 19 .
  • a specific connection medium among the communication interface 2110, the processor 2120, and the memory 2130 is not limited.
  • the memory 2130, the processor 2120, and the communication interface 2110 are connected through the bus 2140.
  • the bus is represented by a thick line in FIG. 21; the connection mode between other components is merely a schematic illustration and is not limited thereto.
  • the bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 21, but this does not mean that there is only one bus or one type of bus.
  • the apparatus may be a device, an apparatus in a device, or an apparatus that can be used in combination with a device. The apparatus may be a system on a chip.
  • the call processing apparatus 2200 includes at least one processor 2220, configured to implement the function of the second access media control network element in the method provided in the embodiment of the present application.
  • the processor 2220 may invoke the communication interface 2210 to send a second media connection request message to the unified media function network element, and the second media connection request message is used to request to create a second media endpoint for the second terminal device.
  • for details, refer to the detailed descriptions in the method examples; details are not repeated here.
  • the apparatus 2200 may also include at least one memory 2230 for storing program instructions and/or data.
  • the memory 2230 is coupled to the processor 2220 .
  • the coupling in the embodiments of the present application is an indirect coupling or a communication connection between devices, units or modules, which may be in electrical, mechanical or other forms, and is used for information exchange between devices, units or modules.
  • Processor 2220 may cooperate with memory 2230 .
  • Processor 2220 may execute program instructions stored in memory 2230 . At least one of the at least one memory may be included in the processor.
  • the device 2200 may further include a communication interface 2210, which may be, for example, a transceiver, an interface, a bus, a circuit, or a device capable of implementing a transceiver function.
  • the communication interface 2210 is used to communicate with other devices through a transmission medium, so that the devices used in the device 2200 can communicate with other devices.
  • the processor 2220 uses the communication interface 2210 to send and receive data, and is used to implement the method performed by the second access media control network element described in the embodiments corresponding to FIG. 9 to FIG. 19 .
  • the specific connection medium among the communication interface 2210, the processor 2220, and the memory 2230 is not limited.
  • the memory 2230, the processor 2220, and the communication interface 2210 are connected through the bus 2240.
  • the bus is represented by a thick line in FIG. 22; the connection mode between other components is merely a schematic illustration and is not limited thereto.
  • the bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 22, but this does not mean that there is only one bus or one type of bus.
  • the processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • a general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in connection with the embodiments of the present application may be directly performed by a hardware processor, or performed by a combination of hardware and software modules in the processor.
  • the memory may be a non-volatile memory, such as HDD or SSD, and may also be a volatile memory, such as RAM.
  • the memory may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • the memory in the embodiment of the present application may also be a circuit or any other device capable of implementing a storage function, and is used for storing program instructions and/or data.
  • the call processing device may be a unified media processing network element, or a device in the unified media processing network element, or a device that can be matched with the unified media processing network element.
  • the call processing apparatus may include modules in one-to-one correspondence for performing the methods/operations/steps/actions described in the examples corresponding to FIG. 9 to FIG. 19; the modules may be implemented by hardware circuits, by software, or by hardware circuits combined with software.
  • the device may include a communication module 2301 and a processing module 2302.
  • the communication module 2301 is configured to receive a first media connection request message from a first access media control network element.
  • the processing module 2302 is configured to create a first media endpoint for the first terminal device according to the first media connection request message.
  • the call processing device may be the first access media control network element, or may be a device in the first access media control network element, Or it is a device that can be matched and used with the first access media control network element.
  • the call processing apparatus may include modules in one-to-one correspondence for performing the methods/operations/steps/actions described in the examples corresponding to FIG. 9 to FIG. 19; the modules may be implemented by hardware circuits, by software, or by hardware circuits combined with software.
  • the device may include a communication module 2401 and a processing module 2402. Exemplarily, the communication module 2401 is configured to send the first media connection request message to the unified media function network element.
  • the processing module 2402 is configured to instruct the unified media function network element to create the first media endpoint. For details, refer to the detailed description in the examples in FIG. 9 to FIG. 19 , and details are not repeated here.
  • the call processing device may be the second access media control network element, or a device in the second access media control network element, Or it is a device that can be matched and used with the second access media control network element.
  • the call processing apparatus may include modules in one-to-one correspondence for performing the methods/operations/steps/actions described in the examples corresponding to FIG. 9 to FIG. 19; the modules may be implemented by hardware circuits, by software, or by hardware circuits combined with software.
  • the device may include a communication module 2501 and a processing module 2502.
  • the communication module 2501 is configured to send the second media connection request message to the unified media function network element.
  • the processing module 2502 is configured to instruct the unified media function network element to create the second media endpoint.
  • the technical solutions provided by the embodiments of the present application may be fully or partially implemented by software, hardware, firmware or any combination thereof.
  • when implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • when the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application are generated in whole or in part.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, a terminal device, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital video disc (digital video disc, DVD)), or a semiconductor medium.
  • the various embodiments may refer to each other; for example, the methods and/or terms in the method embodiments may refer to each other, the functions and/or terms in the apparatus embodiments may refer to each other, and the functions and/or terms between the apparatus embodiments and the method embodiments may refer to each other.


Abstract

Embodiments of this application provide a call processing method, apparatus, and system. The call processing method is based on the IMS network architecture and is executed by a newly added unified media function (UMF) network element. The unified media function network element provides media access capability on the access side, reducing media detours and media service transmission delay, which helps the IMS network architecture meet the requirements of delay-sensitive services such as extended reality (XR). In addition, by extending the IMS signaling procedures, the UMF passes URLs to the control network elements in the IMS trust domain, enabling media resource sharing between the calling side and the called side under the IMS half-call model. Multiple control network elements can control the session media processing context of the same UMF network element. When the calling and called parties access through the same physical UMF, only one UMF is needed logically and the media is forwarded only once, which also saves media resources.

Description

Call processing method, apparatus, and system
This application claims priority to Chinese Patent Application No. 202110929881.5, filed with the China National Intellectual Property Administration on August 13, 2021 and entitled "Call processing method, apparatus, and system", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of communication technologies, and in particular, to a call processing method, apparatus, and system.
Background
The IP multimedia subsystem (IMS) is a general network architecture that provides multimedia services over IP-based networks. The IMS network uses the session initiation protocol (SIP) as its main signaling protocol, enabling operators to provide end-to-end all-IP multimedia services to users. SIP uses the session description protocol (SDP) and the offer/answer (OA) mechanism to complete media negotiation among session participants, for example negotiation of audio/video codecs, host addresses, and network transport protocols.
The current IMS network includes the following media processing network elements: the IMS access media gateway (IMS-AGW), the multimedia resource function processor (MRFP), the transition gateway (TrGW), and the IP multimedia media gateway (IM-MGW). Under the IMS network architecture, these media processing network elements can be directly controlled only by specific media control network elements; for example, the IMS-AGW is controlled by the proxy call session control function (P-CSCF) network element. A media control network element completes media negotiation with other network elements or terminal devices through SIP signaling and the SDP protocol, and then controls the media processing network element to perform the actual media processing and media data transmission. The IMS basic call procedure and media negotiation procedure are implemented through interaction between the calling side and the called side. Because IMS is a half-call model, regardless of whether the calling side and the called side access the IMS network through the same IMS-AGW, logically there are two IMS-AGWs on the calling side and the called side, and two media contexts need to be established. This causes multi-hop media forwarding (for example, real-time transport protocol (RTP) media is forwarded twice), wasting resources. In addition, different media processing network elements in the IMS network provide limited media processing capabilities, and services of different media types may need to traverse multiple media processing network elements; the resulting media detours cannot meet the low-latency requirements of future strongly interactive services such as extended reality (XR).
Therefore, under the IMS half-call model, how to avoid multi-hop media forwarding and reduce service processing delay is a problem to be solved.
Summary
Embodiments of this application provide a call processing method, apparatus, and system. The call processing method is based on the IMS network architecture and is executed by a unified media function network element, which provides media access capability on the access side, reducing media detours and media service transmission delay and helping the IMS network architecture meet the requirements of delay-sensitive services such as extended reality.
According to a first aspect, an embodiment of this application provides a call processing method, which is executed by a unified media function network element. The unified media function network element receives a first media connection request message from a first access media control network element and creates a first media endpoint for a first terminal device according to the first media connection request message; the unified media function network element receives a second media connection request message from a second access media control network element and creates a second media endpoint for a second terminal device according to the second media connection request message. The unified media function network element transmits media data between the first terminal device and the second terminal device through the first media endpoint and the second media endpoint. With this method, the unified media function network element transmits the media data between the first terminal device and the second terminal device directly, without forwarding by other intermediate network elements, which reduces media detours and media service transmission delay and helps the IMS network architecture meet the requirements of delay-sensitive services such as extended reality.
In a possible design, the unified media function network element creates a session media processing context according to the first media connection request message, and the session media processing context includes the first media endpoint and the second media endpoint.
In a possible design, the unified media function network element creates a first media stream according to the first media connection request message, and creates a second media stream according to the second media connection request message. The session media processing context further includes the first media stream and the second media stream.
In a possible design, the unified media function network element sends the resource identifier of the first media endpoint to the first access media control network element, and sends the resource identifier of the second media endpoint to the second access media control network element. With this method, the unified media function network element can distribute unified media resource identifier URLs, which can be accessed by any network element in the IMS trust domain that needs them, enabling media resource sharing.
In a possible design, the unified media function network element associates the first media endpoint with the second media endpoint. With this method, the first media endpoint and the second media endpoint can transmit the media data between the first terminal device and the second terminal device.
In a possible design, the unified media function network element receives an interworking media connection request message from an interconnection border control function network element, and creates an interworking media endpoint for an interworking device according to the interworking media connection request message. The unified media function network element transmits media data between the first terminal device and the interworking device through the first media endpoint and the interworking media endpoint, or transmits media data between the second terminal device and the interworking device through the second media endpoint and the interworking media endpoint. With this method, the unified media function network element can be deployed on the interworking side to provide interworking-side media capability, reducing media detours.
In a possible design, the unified media function network element receives a media redirection indication message from an application server, and creates a media redirection endpoint for the media redirection indication message according to the message. The unified media function network element transmits media data between the first terminal device and the resource or device indicated by the media redirection indication message through the first media endpoint and the media redirection endpoint, and/or transmits media data between the second terminal device and the resource or device indicated by the media redirection indication message through the second media endpoint and the media redirection endpoint. With this method, the unified media function network element can provide unified media resource identifier URLs and non-OA media control interfaces (including but not limited to media-gateway-control-protocol-like interfaces and service-based interfaces), thereby avoiding media renegotiation, decoupling service logic from the OA state, and avoiding possible service conflicts and media collisions.
In a possible design, the resource indicated by the media redirection indication message is an announcement playing resource.
In a possible design, the unified media function network element plays an announcement to the first terminal device through the first media endpoint and the media redirection endpoint, or plays an announcement to the second terminal device through the second media endpoint and the media redirection endpoint. With this method, a unified media function network element deployed on the access side can play announcements to terminal devices directly, reducing media detours and helping reduce media service transmission delay.
In a possible design, the unified media function network element receives a stop playing indication message from the application server and, according to the message, disassociates the first media endpoint from the media redirection endpoint, or disassociates the second media endpoint from the media redirection endpoint. With this method, the unified media function network element releases the announcement playing resource directly when playback completes, without modifying the user's media connection information, which helps avoid media renegotiation.
In a possible design, the unified media function network element creates a third media endpoint for a third terminal device according to the media redirection indication message. With this method, when the application server triggers a media redirection service such as call forwarding, the unified media function network element can create a new media endpoint according to the control information; the whole process does not require updating the media information of the first terminal device or the second terminal device, which helps avoid media renegotiation.
In a possible design, the unified media function network element deletes the first media endpoint or the second media endpoint according to a media connection cancel message. With this method, the unified media function network element can refresh media endpoints without updating the media information of the first terminal device or the second terminal device.
In a possible design, when the first media endpoint is associated with the media redirection endpoint, the unified media function network element changes the association between the first media endpoint and the second media endpoint into an association between the first media endpoint and the media redirection endpoint; or, when the second media endpoint is associated with the media redirection endpoint, the unified media function network element changes the association between the second media endpoint and the first media endpoint into an association between the second media endpoint and the media redirection endpoint. With this method, when media information needs to be updated, the application server only needs to modify the media connections of the unified media function network element directly through an interface, without going through a complicated media renegotiation procedure, avoiding possible issues such as service conflicts and media collisions.
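The session media processing context described in the designs above can be sketched as a simple data model: one context holds the media endpoints and streams of a session, and an association swap implements media redirection without OA renegotiation. The names Context, Endpoint, and Stream follow the terms used later in this application; everything else is an illustrative assumption, not the actual UMF implementation.

```python
# Illustrative data model for a session media processing context.
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    name: str
    local: tuple          # (ip, port) facing the terminal or peer
    remote: tuple = None  # filled in from the peer's media information

@dataclass
class Context:
    endpoints: dict = field(default_factory=dict)
    streams: list = field(default_factory=list)
    associations: set = field(default_factory=set)

    def add_endpoint(self, ep):
        self.endpoints[ep.name] = ep

    def associate(self, a, b):
        self.associations.add(frozenset((a, b)))

    def redirect(self, kept, old, new):
        # Change kept<->old into kept<->new, e.g. play an announcement to
        # UE1 by swapping endpoint2 for a media redirection endpoint.
        self.associations.discard(frozenset((kept, old)))
        self.associations.add(frozenset((kept, new)))

ctx = Context()
ctx.add_endpoint(Endpoint("endpoint1", ("10.0.0.1", 30000)))
ctx.add_endpoint(Endpoint("endpoint2", ("10.0.0.1", 30002)))
ctx.add_endpoint(Endpoint("announce", ("10.0.0.1", 30004)))
ctx.associate("endpoint1", "endpoint2")
ctx.redirect("endpoint1", old="endpoint2", new="announce")
```

The key property is that endpoint1 itself is untouched by the redirect, so the terminal behind it never sees a renegotiation.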
According to a second aspect, an embodiment of this application provides another call processing method, which is executed by a first access media control network element. The first access media control network element sends a first media connection request message to the unified media function network element, where the first media connection request message is used to request creation of a first media endpoint for a first terminal device. The first access media control network element receives a first media connection response from the unified media function network element, where the first media connection response includes the resource identifier of the first media endpoint. With this method, the unified media function network element can send unified media resource identifiers to control network elements in the IMS trust domain, which facilitates media resource sharing in the IMS network.
In a possible design, the first access media control network element sends a session request message to the second access media control network element, where the session request message includes the resource identifier and media information of the first media endpoint. The first access media control network element receives a session response message from the second access media control network element, where the session response message includes the resource identifier and media information of the second media endpoint created for the second terminal device. The unified media resource identifier can be transferred within the IMS trust domain, which facilitates media resource sharing in the IMS network.
In a possible design, when the unified media function network element creates a media redirection endpoint for a media redirection indication message, the first access media control network element sends a first media update indication message to the unified media function network element. The first media update indication message instructs the unified media function network element to change the association between the first media endpoint and the second media endpoint into an association between the first media endpoint and the media redirection endpoint.
According to a third aspect, an embodiment of this application provides another call processing method, which is executed by a second access media control network element. The second access media control network element sends a second media connection request message to the unified media function network element, where the second media connection request message is used to request creation of a second media endpoint for a second terminal device. The second access media control network element receives a second media connection response from the unified media function network element, where the second media connection response includes the resource identifier of the second media endpoint. With this method, the unified media function network element can send unified media resource identifiers to control network elements in the IMS trust domain, which facilitates media resource sharing in the IMS network.
In a possible design, the second access media control network element receives a session request message from the first access media control network element, where the session request message includes the resource identifier and media information of the first media endpoint. The second access media control network element sends an association indication message to the unified media function network element, where the association indication message instructs the unified media function network element to associate the first media endpoint with the second media endpoint. The second access media control network element sends a session response message to the first access media control network element, where the session response message includes the resource identifier and media information of the second media endpoint created for the second terminal device.
In a possible design, when the unified media function network element creates a media redirection endpoint for a media redirection indication message, the second access media control network element sends a second media update indication message to the unified media function network element. The second media update indication message instructs the unified media function network element to change the association between the second media endpoint and the first media endpoint into an association between the second media endpoint and the media redirection endpoint.
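The request/response exchanges of the second and third aspects can be sketched as simple message handlers on the UMF side: each access media control network element requests an endpoint and receives its resource identifier, and the called side then indicates the association. The message field names and identifier format are assumptions for illustration only, not a defined protocol.

```python
# Sketch of the media connection request/response exchange between the
# access media control network elements (P-CSCF1/P-CSCF2) and the UMF.
# Field names and the resource identifier format are illustrative.

def handle_media_connection_request(endpoints, request):
    """UMF side: create a media endpoint and answer with its resource id."""
    name = "endpoint%d" % (len(endpoints) + 1)
    endpoints[name] = {"terminal": request["terminal"]}
    return {"resource_id": "umf://ctx1/" + name}

def handle_association_indication(associations, a_id, b_id):
    """UMF side: associate two endpoints named by their resource ids."""
    associations.add(frozenset((a_id, b_id)))

endpoints, associations = {}, set()

# P-CSCF1 requests endpoint1 for UE1; the response carries its resource
# identifier, which is then passed on in the session request message.
resp1 = handle_media_connection_request(endpoints, {"terminal": "UE1"})
# P-CSCF2 requests endpoint2 for UE2, then indicates the association.
resp2 = handle_media_connection_request(endpoints, {"terminal": "UE2"})
handle_association_indication(associations,
                              resp1["resource_id"], resp2["resource_id"])
```

Because both control network elements address the same UMF context through these identifiers, one logical UMF suffices when the calling and called parties share a physical UMF.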
According to a fourth aspect, an embodiment of this application provides a call processing apparatus, which includes a communication module and a processing module. The communication module is configured to receive a first media connection request message from a first access media control network element, where the first media connection request message is used to request creation of a first media endpoint for a first terminal device. The processing module is configured to create the first media endpoint for the first terminal device according to the first media connection request message. The communication module is further configured to receive a second media connection request message from a second access media control network element, where the second media connection request message is used to request creation of a second media endpoint for a second terminal device. The processing module is further configured to create the second media endpoint for the second terminal device according to the second media connection request message. The communication module is further configured to transmit media data between the first terminal device and the second terminal device through the first media endpoint and the second media endpoint.
For other functions implemented by the communication module and the processing module, refer to the first aspect; details are not repeated here.
According to a fifth aspect, an embodiment of this application provides another call processing apparatus, which includes a communication module and a processing module. The communication module is configured to send a first media connection request message to the unified media function network element, where the first media connection request message is used to request creation of a first media endpoint for a first terminal device. The communication module is further configured to receive a first media connection response from the unified media function network element, where the first media connection response includes the resource identifier of the first media endpoint.
For other functions implemented by the communication module and the processing module, refer to the second aspect; details are not repeated here.
According to a sixth aspect, an embodiment of this application provides another call processing apparatus, which includes a communication module and a processing module. The communication module is configured to send a second media connection request message to the unified media function network element, where the second media connection request message is used to request creation of a second media endpoint for a second terminal device. The communication module is further configured to receive a second media connection response from the unified media function network element, where the second media connection response includes the resource identifier of the second media endpoint.
For other functions implemented by the communication module and the processing module, refer to the third aspect; details are not repeated here.
According to a seventh aspect, an embodiment of this application provides another call processing apparatus, which may be a device or a chip or circuit provided in a device. The call processing apparatus includes units and/or modules for performing the call processing methods provided in the first to third aspects and any possible design thereof, and can therefore also achieve the beneficial effects of the call processing methods provided in the first to third aspects and any possible design thereof.
According to an eighth aspect, an embodiment of this application provides a computer-readable storage medium, which includes a program or instructions. When the program or instructions are run on a computer, the computer is caused to perform the method in any one of the first to third aspects and their possible implementations.
According to a ninth aspect, an embodiment of this application provides a computer program or computer program product, which includes code or instructions. When the code or instructions are run on a computer, the computer is caused to perform the method in any one of the first to third aspects and their possible implementations.
According to a tenth aspect, an embodiment of this application provides a chip or chip system, which includes at least one processor and an interface. The interface and the at least one processor are interconnected through a line, and the at least one processor is configured to run a computer program or instructions to perform the method described in any one of the first to third aspects and their possible implementations.
The interface in the chip may be an input/output interface, a pin, a circuit, or the like.
The chip system in the above aspects may be a system on chip (SOC), or a baseband chip or the like, where the baseband chip may include a processor, a channel encoder, a digital signal processor, a modem, an interface module, and the like.
In a possible implementation, the chip or chip system described above further includes at least one memory, and the at least one memory stores instructions. The memory may be a storage unit inside the chip, for example a register or a cache, or may be a storage unit of the chip (for example, a read-only memory or a random access memory).
In a possible implementation, the session media processing context in the above aspects is called a Context, the media endpoint in the above aspects is called an Endpoint, and the media stream in the above aspects is called a Stream.
In a possible implementation, the media endpoint in the above aspects may also be called a Connector.
Brief Description of Drawings
FIG. 1 is a schematic diagram of an IMS network architecture;
FIG. 2 is a schematic diagram of a network architecture provided in an embodiment of this application;
FIG. 3 is a schematic flowchart of an IMS basic call and media negotiation;
FIG. 4 is a schematic diagram of a media connection topology after the IMS basic call procedure and media negotiation procedure are completed;
FIG. 5 is a schematic flowchart of IMS announcement playing;
FIG. 6 is a schematic flowchart of IMS call forwarding;
FIG. 7 is a schematic diagram of a media collision;
FIG. 8 is a schematic diagram of the media resource model defined by the H.248 protocol;
FIG. 9 is a schematic flowchart of a call processing method provided in an embodiment of this application;
FIG. 10 is a schematic diagram of a session media processing context provided in an embodiment of this application;
FIG. 11 is a schematic flowchart of implementing media redirection after the call processing method provided in an embodiment of this application performs the basic call processing procedure;
FIG. 12 is a schematic flowchart of the call processing method provided in an embodiment of this application applied to an interworking scenario;
FIG. 13 is a schematic flowchart of the call processing method provided in an embodiment of this application applied to a calling/called audio basic call establishment scenario;
FIG. 14 is a schematic diagram of a media connection topology of an audio call provided in an embodiment of this application;
FIG. 15 is a schematic diagram of a media connection topology of a video call provided in an embodiment of this application;
FIG. 16 is a schematic flowchart of the call processing method provided in an embodiment of this application applied to an announcement playing service scenario;
FIG. 17 is a schematic diagram of a media connection topology for video CRBT announcement playing provided in an embodiment of this application;
FIG. 18 is a schematic flowchart of the call processing method provided in an embodiment of this application applied to a call forwarding service scenario;
FIG. 19 is a schematic diagram of a media connection topology of a call forwarding service provided in an embodiment of this application;
FIG. 20 is a schematic diagram of a call processing apparatus provided in an embodiment of this application;
FIG. 21 is a schematic diagram of another call processing apparatus provided in an embodiment of this application;
FIG. 22 is a schematic diagram of another call processing apparatus provided in an embodiment of this application;
FIG. 23 is a schematic diagram of another call processing apparatus provided in an embodiment of this application;
FIG. 24 is a schematic diagram of another call processing apparatus provided in an embodiment of this application;
FIG. 25 is a schematic diagram of another call processing apparatus provided in an embodiment of this application.
Detailed Description
In the embodiments of this application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of this application should not be construed as being preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present related concepts in a specific manner.
In the embodiments of this application, the terms "first", "second", and the like are used only for the purpose of description and cannot be understood as indicating or implying relative importance or implicitly specifying the number of indicated technical features. Therefore, a feature defined by "first" or "second" may explicitly or implicitly include one or more of such features.
It should be understood that the terms used in the description of the various examples herein are intended only to describe specific examples and are not intended to be limiting. As used in the description of the various examples and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that in the embodiments of this application, the sequence numbers of the processes do not imply an order of execution; the order of execution of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
It should also be understood that determining B according to A does not mean determining B only according to A; B may also be determined according to A and/or other information.
It should also be understood that the term "include" (also "includes", "including", "comprises", and/or "comprising"), when used in this specification, specifies the presence of the stated features, integers, steps, operations, elements, and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The technical solutions in the embodiments of this application are described below with reference to the accompanying drawings.
The IP multimedia subsystem (IMS) is a general network architecture that provides multimedia services over IP-based networks. The IMS network uses the session initiation protocol (SIP) as its main signaling protocol, enabling operators to provide end-to-end all-IP multimedia services to users. SIP uses the session description protocol (SDP) and the offer/answer (OA) mechanism to complete media negotiation among session participants, for example negotiation of audio/video codecs, host addresses, and network transport protocols.
FIG. 1 is a schematic diagram of an IMS network architecture. The IMS network architecture can be divided into an access side, a central side, and an interworking side. For ease of description, FIG. 1 shows only the network elements related to media processing in the IMS network. Under the IMS network architecture, a media processing network element can be directly controlled only by a specific media control network element. For example, the media control network element completes media negotiation with other network elements or terminal devices through SIP signaling and the SDP protocol, and then controls the media processing network element to perform the actual media processing and media data transmission. The H.248 gateway control protocol is usually used between a media control network element and a media processing network element to manage media processing nodes and their connection relationships.
A terminal device in the embodiments of this application may also be called a terminal, and may be a device with a wireless transceiver function. It may be deployed on land, including indoors or outdoors, handheld or vehicle-mounted; on water (for example, on a ship); or in the air (for example, on an aircraft, a balloon, or a satellite). The terminal device may be user equipment (UE), where the UE includes a handheld device, a vehicle-mounted device, a wearable device, or a computing device with a wireless communication function. For example, the UE may be a mobile phone, a tablet computer, or a computer with a wireless transceiver function. The terminal device may also be a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in telemedicine, a wireless terminal in a smart grid, a wireless terminal in a smart city, a wireless terminal in a smart home, and so on. In the embodiments of this application, the apparatus for implementing the functions of a terminal may be a terminal, or may be an apparatus capable of supporting the terminal in implementing those functions, for example a chip system, and the apparatus may be installed in the terminal. In the embodiments of this application, a chip system may consist of chips, or may include chips and other discrete devices. In the technical solutions provided in the embodiments of this application, the description takes the apparatus for implementing the functions of a terminal being a terminal as an example.
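The SDP offer/answer negotiation mentioned above can be illustrated with a minimal audio offer. The addresses, port, and payload types below are example values only, and the codec-selection rule shown (first offered codec the answerer also supports) is a simplified sketch of the mechanism.

```python
# Minimal illustration of SDP offer/answer codec negotiation.
# Addresses, port, and payload types are example values only.

offer_sdp = """v=0
o=- 1 1 IN IP4 192.0.2.10
s=-
c=IN IP4 192.0.2.10
t=0 0
m=audio 49170 RTP/AVP 96 8
a=rtpmap:96 AMR-WB/16000
a=rtpmap:8 PCMA/8000
"""

def offered_codecs(sdp):
    """Collect the codec names declared in a=rtpmap lines."""
    return [line.split()[1].split("/")[0]
            for line in sdp.splitlines()
            if line.startswith("a=rtpmap:")]

def answer(offered, supported):
    """Simplified answer rule: first offered codec the answerer supports."""
    for codec in offered:
        if codec in supported:
            return codec
    return None

codecs = offered_codecs(offer_sdp)
chosen = answer(codecs, supported={"PCMA", "PCMU"})
```

Here the offerer proposes AMR-WB and PCMA; an answerer supporting only PCMA/PCMU selects PCMA, and the answer SDP would carry that single codec back.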
The access side includes the IMS access media gateway (IMS-AGW), the proxy call session control function (P-CSCF) network element, and the like. The IMS-AGW is the access-side media processing network element, which implements functions such as media access proxying and forwarding for terminal devices, network address translation (NAT) traversal, and audio codec conversion. The P-CSCF controls the IMS-AGW to complete access-side media processing.
The central side includes the multimedia resource function processor (MRFP), the multimedia resource function controller (MRFC), the home subscriber server (HSS), the application server (AS), the interrogating call session control function (I-CSCF) network element, the serving call session control function (S-CSCF) network element, and the like. The MRFP is the central-side media processing network element, which provides functions such as announcement playing and digit collection and audio conferencing. The MRFC controls the MRFP to complete announcement playing and digit collection, audio conferencing, and other functions.
The interworking side includes the interconnection border control function (IBCF) network element, the transition gateway (TrGW), the IP multimedia media gateway (IM-MGW), the media gateway control function (MGCF) network element, and the like. The TrGW and the IM-MGW are interworking-side media processing network elements, which implement media interworking between the IMS network and other networks. For example, the TrGW implements media-plane interworking between the IMS network and other IP networks, including media NAT, quality of service (QoS) control, codec conversion, and the like. The IM-MGW implements media-plane interworking between the IMS network and other non-IP networks (for example, the public switched telephone network (PSTN) and the public land mobile network (PLMN)). The IBCF controls the TrGW to complete media interworking control with other IP domains. The MGCF controls the IM-MGW to complete media interworking control with non-IP domains.
The basic call procedure and media negotiation procedure in the IMS network are implemented through interaction between the calling side and the called side. On the one hand, because IMS is a half-call model, regardless of whether the calling terminal device and the called terminal device access the IMS network through the same IMS-AGW, logically the calling side and the called side are two IMS-AGWs, and two media contexts need to be established. For example, the calling IMS-AGW creates one media context to receive real-time transport protocol (RTP) media from the calling terminal device and forward it, and the called IMS-AGW creates another media context to receive the RTP media forwarded by the calling IMS-AGW and forward it to the called terminal device. In other words, when the RTP media of the calling terminal device is transmitted to the called terminal device through the IMS network, it passes through at least two hops, resulting in multi-hop media forwarding and wasted resources.
On the other hand, under the IMS half-call model, both the calling side and the called side may trigger media redirection, causing a media collision. For example, the calling side triggers announcement playing and the called side triggers call forwarding, and both complete media redirection through UPDATE messages before call establishment. When the calling side triggers announcement playing and the called side triggers forwarding at the same time, a media collision occurs in the system.
Furthermore, different media processing network elements in the IMS network provide limited media processing capabilities, and services of different media types may need to traverse multiple media processing network elements. For example, the media processing network element on the IMS access side provides limited functions, and functions such as announcement playing and audio/video conferencing must be provided by the central-side media processing network element, causing media detours and large service delays that cannot meet the low-latency requirements of future strongly interactive services such as extended reality (XR).
In addition, as IMS provides more and more media-related services, multiple ASs need to be chained to provide different services. For example, the IMS multimedia telephony application server (MMTEL AS) provides traditional telecom services, and a video color ring back tone (CRBT) AS provides the video CRBT service. However, when the MMTEL AS and the video CRBT AS are chained, their media resources cannot be shared among multiple ASs; each AS has to request independent media resources, which wastes media resources and leads to service conflicts and media collisions.
Therefore, under the IMS half-call model, how to avoid multi-hop media forwarding, reduce service processing delay, and avoid service conflicts and media collisions is a problem to be solved.
To solve the above problems, an embodiment of this application provides a call processing method. The call processing method is based on the IMS network architecture and is executed by a newly added unified media function (UMF) network element. The UMF provides media access capability on the access side, reducing media detours and media service transmission delay, which helps the IMS network architecture meet the requirements of delay-sensitive services such as extended reality.
The call processing method provided in the embodiments of this application is applied to the network architecture shown in FIG. 2. The network architecture shown in FIG. 2 includes the UMF newly added in the embodiments of this application. Functionally, besides integrating the media processing network elements in the current IMS network architecture, such as the IMS-AGW, MRFP, and TrGW, the UMF can also provide media processing functions needed by future new services, such as a data channel server. The UMF can be deployed on the access side, the central side, or the interworking side at the same time, and all network elements in the IMS trust domain can control the UMF. The UMF provides rich media processing capabilities, reduces media detours, and meets the requirements of future low-latency-sensitive services such as XR. The network architecture shown in FIG. 2 also includes the AS, MGCF, IBCF, I-CSCF, S-CSCF, P-CSCF, HSS, and terminal devices; the functions of these network elements and devices are similar to those of the corresponding network elements and devices in the existing IMS network, and they also support future media capability evolution. For example, media function control in the embodiments of this application (for example, when the P-CSCF controls the UMF to perform basic call establishment) uses a non-OA mechanism, controlling the UMF through interfaces such as a media gateway control protocol (for example, H.248) interface or a service-based interface, thereby avoiding media renegotiation when executing media services and decoupling the IMS signaling plane (including OA negotiation) from media capabilities, which facilitates future media capability evolution. It should be noted that the network architecture shown in FIG. 2 focuses on media services and does not show network elements unrelated to media services. The CSCF generally does not participate in media services, so the control relationship between the CSCF and the UMF is not shown in FIG. 2, but this is not limited in the embodiments of this application.
The related concepts involved in the embodiments of this application are described below.
1. IMS basic call and media negotiation: FIG. 3 is a schematic flowchart of an IMS basic call and media negotiation. The basic call and media negotiation procedure is implemented through interaction among the terminal devices, the IMS-AGW, the P-CSCF, and the CSCF/AS. It should be noted that because IMS is a half-call model, regardless of whether the calling and called parties access through the same physical IMS network, their call processing is completed by two different logical entities. That is, the logical mobile originated (MO) side in FIG. 3 includes the calling terminal device, calling IMS-AGW, calling P-CSCF, and calling CSCF/AS, and the mobile terminated (MT) side includes the called terminal device, called IMS-AGW, called P-CSCF, and called CSCF/AS. The basic call and media negotiation procedure shown in FIG. 3 includes:
A. The calling terminal device carries its own media information (SDP) in the initial INVITE message; this media information is the SDP offer.
B. The calling P-CSCF receives the INVITE message, and records and stores the media information of the calling terminal device. In addition, the calling P-CSCF instructs the calling IMS-AGW to allocate an output media endpoint O2, whose local media information is SDP(O2). The calling P-CSCF replaces the calling media information with SDP(O2) and passes it to subsequent network elements through SIP signaling. For example, the SIP signaling is sent by the calling P-CSCF to the calling CSCF/AS, then by the calling CSCF/AS to the called CSCF/AS, and then by the called CSCF/AS to the called P-CSCF.
C. The called P-CSCF receives the INVITE message, and records and stores the calling-side media information. In addition, the called P-CSCF instructs the called IMS-AGW to allocate an output media endpoint T2, whose local media information is SDP(T2). The called P-CSCF then sends SDP(T2) to the called terminal device.
D. The called terminal device carries its media information in the first reliable 1XX (183 or 180) response message; this media information is the SDP answer.
E. The called P-CSCF receives the 1XX response message and updates the remote media information of T2 to the media information of the called terminal device. In addition, the called P-CSCF instructs the called IMS-AGW to allocate an input media endpoint T1, whose local media information is SDP(T1) and whose remote media information is the previously saved calling-side media information SDP(O2).
F. The called P-CSCF instructs the called IMS-AGW to associate T1 with T2, replaces the called SDP with SDP(T1), and passes it to subsequent network elements through SIP signaling. For example, the SIP signaling is sent by the called P-CSCF to the called CSCF/AS, then by the called CSCF/AS to the calling CSCF/AS, and then by the calling CSCF/AS to the calling P-CSCF. If the called terminal device does not support the reliable 1XX response transmission mechanism defined in RFC 3262, the called terminal device may carry the SDP answer in the 200 (INVITE) response.
G. The calling P-CSCF receives the called party's SDP answer (T1) and updates the remote media information of O2 to SDP(T1). The calling P-CSCF instructs the calling IMS-AGW to allocate an input media endpoint O1, whose local media information is SDP(O1) and whose remote media information is the previously saved media information of the calling terminal device.
H. The calling P-CSCF instructs the calling IMS-AGW to associate O1 with O2, replaces the calling SDP with SDP(O1), and passes it to the calling terminal device through SIP signaling.
In summary, after the media negotiation process is completed, the calling terminal device and the called terminal device can transmit RTP media, as shown in FIG. 4. FIG. 4 is a schematic diagram of the media connection topology after the IMS basic call procedure and media negotiation procedure are completed. In FIG. 4, the calling IMS-AGW on the access side (calling) creates two media endpoints O1 and O2, and the called IMS-AGW on the access side (called) creates two media endpoints T1 and T2. It can be seen that the media data of the calling terminal device passes through at least two hops when transmitted to the called terminal device.
Optionally, if the calling side does not carry media information in the initial INVITE message (slow-start procedure), media negotiation can be completed through either of the following two procedures:
Method 1: The called side carries the SDP offer in the first reliable 18X (183 or 180) response message, and the calling party replies with the SDP answer in a PRACK message to complete media negotiation.
Method 2: The called party carries the SDP offer in the 200 (INVITE) response message, and the calling party replies with the SDP answer in an ACK message to complete media negotiation.
Optionally, after the call between the calling terminal device and the called terminal device ends, the calling side and the called side perform call release. For the specific implementation, refer to the descriptions in existing protocol standards; details are not repeated here.
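As a back-of-the-envelope check of the hop count described above, the following sketch compares the legacy half-call topology of FIG. 4 (two IMS-AGWs between the terminals) with a single shared UMF. The path lists are simplified assumptions drawn from the figures; each intermediate node forwards the media once.

```python
# Count RTP forwarding hops between the calling and called terminals.
# Every intermediate node on the path forwards the media once.

legacy_path = ["UE1", "calling IMS-AGW", "called IMS-AGW", "UE2"]  # FIG. 4
umf_path = ["UE1", "UMF", "UE2"]  # single shared UMF in this application

def forwarding_hops(path):
    """Number of intermediate forwarding nodes between the endpoints."""
    return len(path) - 2

legacy_hops = forwarding_hops(legacy_path)
umf_hops = forwarding_hops(umf_path)
```

This is the quantitative content of the claim that when the calling and called parties access through the same physical UMF, the media needs to be forwarded only once instead of twice.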
2. Media renegotiation: Besides basic calls, media services in the IMS network include a large number of supplementary services, which require media renegotiation after one round of media negotiation has already been completed. Typical media renegotiation includes, for example:
(1) Announcement playing: for example, for services such as CRBT that continue after playback, one party in the session must first be connected to the multimedia resource function MRF for announcement playing, and after playback completes, media renegotiation is needed to restore the media connection relationships in the session.
(2) Call forwarding: the calling party and the original called party have completed media negotiation; after forwarding is triggered, a media connection must be re-established between the calling party and the forwarding destination.
Media renegotiation services such as announcement playing and call forwarding in the IMS network are usually triggered by services on the AS. However, the AS cannot directly control the media processing network elements. Therefore, under the existing IMS network architecture, the AS completes media negotiation (or media renegotiation) with a media control network element (such as the MRFC) through SIP signaling and updates the media information on the media processing network element (such as the MRFP), thereby implementing media redirection.
图5为一种IMS放音的流程示意图。该放音流程以彩铃业务放音流程为例进行描述。应注意,图5中将MRFC与MRFP合并,统称为MRF,MRFC与MRFP之间的内部交互未在图中列出。还应注意,图5所示的放音流程中省略了主叫侧和被叫侧之间的基本呼叫与媒体协商流程(具体流程可以参考图3)。还应注意,为了描述简单,图5中主叫侧的设备和网元未展开描述,图5所示的主叫侧包括主叫终端设备、IMS-AGW、P-CSCF和AS等设备和网元。图5所示的放音流程包括:
A.AS向主叫侧发送UPDATE消息,该UPDATE消息携带MRF媒体信息。
B.主叫侧的远端媒体信息更新为MRF,完成主叫侧与MRF之间的媒体协商,建立媒体连接。
C.被叫摘机后,AS需要让主叫侧与被叫侧重新建立媒体连接,则可通过以下两种流程完成媒体重定向:
通话建立前通过UPDATE消息完成媒体重定向:AS与主叫侧之间先通过UPDATE/200消息完成媒体重定向,将主叫侧的远端媒体更新为被叫的SDP,再完成主被叫的通话建立。
通话建立后通过reINVITE消息完成媒体重定向:AS先完成主被叫的通话建立。然后AS向主叫(也可以是被叫)发送reINVITE消息(不携带SDP),主叫侧和被叫侧再通过200/ACK消息交互完成主被叫的媒体重定向。
图6为一种IMS前转的流程示意图。该前转流程中包括主叫终端设备,被叫终端设备和前转终端设备。应注意,根据IMS半呼叫模型,被叫终端设备和前转终端设备应该对应不同的IMS逻辑网元。为了描述简单,图6中未做区分(即假设被叫终端设备和前转终端设备采用相同的IMS-AGW、P-CSCF和AS)。还应注意,图6所示的前转流程仅针对主被叫已经完成媒体协商后发生前转的场景,实际业务中也有主被叫未完成媒体协商即发生前转的场景(例如无条件前转场景),在此不作限定。图6所示的前转流程包括:
A.AS触发前转后,通过P-CSCF向前转终端设备发送初始INVITE消息,该初始INVITE消息携带主叫侧媒体信息(SDP Offer)。
B.前转终端设备通过P-CSCF向AS发送1XX(通常是183或180)响应消息,该1XX响应消息携带SDP Answer。AS收到1XX响应消息后,与放音场景类似,则可通过以下两种流程完成媒体重定向:
通话建立前通过UPDATE消息完成媒体重定向。
通话建立后通过reINVITE消息完成媒体重定向。
图6中完成媒体重定向的具体实现方式与放音流程类似,在此不再赘述。
其中,媒体重协商涉及复杂的端到端信令流程。一方面,业务AS必须深度参与SIP重协商OA过程,感知OA状态,理解SDP内容,并根据OA状态决定采用不同的流程。但是,重协商OA过程通常与业务逻辑无关,由此导致业务逻辑与媒体协商耦合。另一方面,业务AS发起的单边重协商可能引起媒体重协商震荡。例如,在放音场景中,放音结束后,AS通过UPDATE消息将主叫侧的远端媒体信息更新为被叫的SDP,主叫在200(UPDATE)中回复SDP Answer,此SDP Answer应该与主叫侧在初始INVITE消息中携带的SDP相同,否则,AS就必须通过UPDATE/200消息再次与被叫侧完成单边协商。极端场景下,如果主被叫在200(UPDATE)中回复的SDP Answer均与上次不同,将形成媒体重协商震荡,主被叫的媒体协商无法完成,导致无法正常建立呼叫。
3、媒体碰撞:通信的一方通过UPDATE(或reINVITE)向对端发送SDP offer,在未收到对端响应的SDP Answer前,收到对端的SDP offer。例如,图7为一种媒体碰撞的示意图。其中,主叫侧和被叫侧IMS均触发媒体重协商,同时向对端发送UPDATE(或reINVITE)消息,导致媒体碰撞。
4、基于媒体网关控制协议的媒体资源模型:现网中对媒体资源的控制,除了基于SDP协议的OA机制,还有基于媒体网关控制协议(如H.248)的媒体资源模型。图8为H.248协议定义的媒体资源模型,该媒体资源模型包括终端(terminal)和上下文(context)。其中,终端是逻辑实体,用于发送和接收一种或多种媒体,媒体类型可以是窄带的时隙、模拟线,也可以是基于IP的RTP流等。例如,终端可以发送视频流(video stream)、音频流(audio stream)和信息流(message stream)等,如图8所示。应注意,一个终端属于且只能属于一个上下文。上下文表示终端间的连接关系,它描述了终端之间的拓扑关系及媒体混合/交换的参数。基于图8所示的媒体资源模型,H.248协议还定义了对应的协议接口,控制网元通过协议接口管理内部的媒体资源及连接关系。其中,协议接口包括但不限于:Add命令(用于在上下文中新增终端)、Modify命令(用于修改一个终端的属性、事件等)、Subtract命令(用于从上下文中删除终端,并且返回终端的统计状态)、Move命令(用于将一个终端从一个上下文移到另一个上下文中)、Notify命令(用于将检测到的事件通知控制网元)。
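按上文对H.248媒体资源模型的描述,可以用一段示意性的Python代码草绘"终端(termination)属于且只能属于一个上下文(context)"的归属关系,以及Add、Move两条命令的语义。其中的类名与方法名均为本示例的假设,并非H.248协议的正式接口定义。

```python
class Context:
    """上下文:表示终端间的连接关系。"""
    def __init__(self, context_id):
        self.context_id = context_id
        self.terminations = set()

class MediaGateway:
    def __init__(self):
        self.contexts = {}
        self._next_ctx = 0

    def add(self, context_id, termination):
        """Add命令:在上下文中新增终端;context_id为None时先新建上下文。"""
        if context_id is None:
            self._next_ctx += 1
            context_id = self._next_ctx
            self.contexts[context_id] = Context(context_id)
        self.contexts[context_id].terminations.add(termination)
        return context_id

    def move(self, src_id, dst_id, termination):
        """Move命令:将终端从一个上下文移到另一个上下文。"""
        self.contexts[src_id].terminations.discard(termination)
        self.contexts[dst_id].terminations.add(termination)

gw = MediaGateway()
c1 = gw.add(None, "T1")   # 新建上下文并加入终端T1
gw.add(c1, "T2")          # 在同一上下文中加入T2
c2 = gw.add(None, "T3")
gw.move(c1, c2, "T2")     # T2从c1移到c2,保证一个终端只属于一个上下文
```

该草绘只体现归属关系与命令语义,未涉及H.248中终端属性、事件等细节。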
5、基于服务化接口的媒体资源模型:业界开源媒体服务器大多提供了服务化接口,如REST API或RPC,用于控制媒体处理网元,典型的如Kurento。其中,Kurento的逻辑架构包括Kurento媒体服务器(Kurento Media Server,KMS)、应用服务器(application server,AS)等模块。其中,KMS相当于IMS中的媒体处理网元,AS相当于IMS网络中的媒体控制网元,AS通过Kurento protocol控制KMS。Kurento protocol采用基于WebSocket的JSON-RPC接口。与H.248协议类似,KMS内部也提供了对应的媒体资源模型,通过Kurento protocol来描述媒体资源的管理及其连接关系。KMS媒体资源模型采用了面向对象的思想,将媒体资源抽象为各种对象,最主要的媒体资源对象是Media Pipeline和Media Elements。KMS采用流水线架构描述媒体业务,Media Pipeline对象表示媒体处理流水线,用于描述媒体处理对象之间的连接拓扑关系,相当于H.248协议中的context。Media Elements表示媒体处理组件对象,根据其提供的功能,媒体处理组件又分为Endpoints、Filters、Hubs等。Endpoints主要提供媒体流的收发,相当于H.248中的终端。Filters主要提供一些媒体处理能力,如音视频编解码。KMS可通过Filters提供丰富的媒体处理能力,具备很好的扩展性。与H.248协议类似,Kurento protocol描述了媒体资源的管理(包括创建、修改、删除)、事件订阅以及媒体资源的连接拓扑关系,其格式为JSON(或者也可以采用其他格式)。
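Kurento protocol采用基于WebSocket的JSON-RPC接口,下面用Python草绘一条"create"请求消息的构造(未真正建立WebSocket连接,请求编号、构造参数等取值仅作示意)。

```python
import json

def make_create_request(req_id, media_type, constructor_params=None):
    """构造一条JSON-RPC 2.0格式的create请求,用于在KMS中创建媒体资源对象。"""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "create",
        "params": {
            "type": media_type,                      # 例如MediaPipeline
            "constructorParams": constructor_params or {},
        },
    })

# 示例:请求创建一条媒体处理流水线(相当于H.248中的context)
msg = make_create_request(1, "MediaPipeline")
```

实际使用时,该JSON文本经WebSocket发送给KMS,KMS在响应中返回所创建对象的标识,后续的invoke等操作即引用该标识。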
下面对本申请实施例的方法进行详细的描述。
图9为本申请实施例提供的一种呼叫处理方法的流程示意图。该呼叫处理方法由统一媒体功能网元UMF所执行,包括以下步骤:
901,统一媒体功能网元接收来自第一接入媒体控制网元的第一媒体连接请求消息。
其中,本申请实施例中的第一接入媒体控制网元为主叫侧的接入媒体控制网元(例如为主叫P-CSCF)。第一媒体连接请求消息用于请求为第一终端设备创建第一媒体端点。第一终端设备例如为主叫终端设备,则第一媒体连接请求消息用于请求为主叫终端设备创建针对主叫终端设备的媒体端点。
一种实现方式中,第一媒体连接请求消息是由第一接入媒体控制网元根据来自第一终端设备的呼叫建立请求消息确定的。其中,呼叫建立请求消息例如可以是基本呼叫流程中的INVITE消息。INVITE消息中携带第一终端设备的媒体信息(SDP)。第一接入媒体控制网元接收该INVITE消息,获取第一终端设备的媒体信息,并根据第一终端设备的媒体信息生成第一媒体连接请求消息。例如,第一接入媒体控制网元根据第一终端设备的SDP,针对SDP中的每个m行,获取每个m行中的IP地址信息、端口信息和协议类型信息。第一接入媒体控制网元根据每个m行中的IP地址信息、端口信息和协议类型信息,确定第一媒体连接请求消息中包括第一终端设备的SDP中每个m行中的IP地址信息、端口信息和协议类型信息。
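上述"针对SDP中的每个m行提取端口与协议类型"的处理,可以用一段示意性的Python代码草绘。应注意,SDP中的IP地址实际位于c行,此处按会话级c行简化处理;函数名及返回结构均为本示例的假设。

```python
def parse_sdp_media(sdp_text):
    """针对SDP中的每个m行,提取媒体类型、端口、协议类型及会话级IP地址。"""
    ip = None
    media = []
    for line in sdp_text.splitlines():
        if line.startswith("c="):
            ip = line.split()[-1]                 # 形如 c=IN IP4 192.0.2.1
        elif line.startswith("m="):
            kind, port, proto = line[2:].split()[:3]  # 形如 m=audio 49170 RTP/AVP 0
            media.append({"media": kind, "ip": ip,
                          "port": int(port), "protocol": proto})
    return media

sdp = "v=0\nc=IN IP4 192.0.2.1\nm=audio 49170 RTP/AVP 0\nm=video 51372 RTP/AVP 96\n"
info = parse_sdp_media(sdp)
```

第一接入媒体控制网元即可据此将每个m行对应的IP地址、端口和协议类型填入第一媒体连接请求消息。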
902,统一媒体功能网元根据第一媒体连接请求消息创建针对第一终端设备的第一媒体端点。
其中,本申请实施例中的媒体端点类似于图8所示的媒体资源模型中的终端,为一种逻辑实体,用于发送和接收一种或多种媒体。一个媒体端点(Endpoint)用于标识一个IP连接的五元组,包括本端和远端分别的IP地址、端口及传输类型。其中,Endpoint的本端IP地址和端口是指统一媒体功能网元的IP地址和端口。Endpoint的远端IP地址和端口是指连接的目的方的IP地址和端口。例如,第一媒体端点为针对第一终端设备创建的媒体端点,则第一媒体端点的本端IP地址和端口为UMF的IP地址和端口,第一媒体端点的远端IP地址和端口是指第一终端设备的IP地址和端口。传输类型可以包括但不限于RTP over用户数据报协议(user datagram protocol,UDP)或数据通道(data channel)over UDP/数据包传输层安全性协议(datagram transport layer security,DTLS)/流控制传输协议(stream control transmission protocol,SCTP)等。
一种实现方式中,不同的媒体端点通过媒体端点的资源标识来区分,例如,通过统一资源定位符(uniform resource locator,URL)来区分。媒体端点的资源标识为EndpointURL,EndpointURL的格式为:endpoints/{endpoint_id}。其中,endpoints表示该URL为EndpointURL,{endpoint_id}为Endpoint的编号,在运行时动态分配。可以理解,UMF创建第一媒体端点时,也创建第一媒体端点的资源标识。其中,第一媒体端点的资源标识例如为Endpoint1URL。
可选的,统一媒体功能网元还根据第一媒体连接请求消息创建针对第一终端设备的第一媒体流。其中,本申请实施例中的媒体流类似于图8所示的媒体资源模型中的音频流等,包括了媒体流的详细信息,例如媒体流包括音视频的编解码信息等。其中,一个Endpoint上可以有多个Stream,例如,多方通话中一个Endpoint上的视频多流,datachannel上的多个通道等。其中,不同的媒体流通过媒体流的资源标识来区分。例如,媒体流的资源标识为StreamURL,StreamURL的格式为:streams/{stream_id}。其中,streams表示该URL为StreamURL,{stream_id}为Stream的编号,在运行时动态分配。可以理解,UMF创建第一媒体流时,也创建第一媒体流的资源标识。例如,第一媒体流的资源标识为Stream1URL。
可选的,统一媒体功能网元还根据第一媒体连接请求消息创建会话媒体处理上下文(Context)。其中,会话媒体处理上下文可以视为逻辑上的一种容器,用于记录媒体端点和媒体流。例如,会话媒体处理上下文为一个RTP处理实体或一个datachannel处理实体。一个Context可以有多个Endpoint,一个Endpoint可以有多条Stream。其中,不同的会话媒体处理上下文通过会话媒体处理上下文的资源标识来区分。例如,会话媒体处理上下文的资源标识为ContextURL,ContextURL的格式为:{umf_domain}/contexts/{context_id}。其中,{umf_domain}为UMF的域名,contexts表示该URL为ContextURL,{context_id}为Context的编号,在运行时动态分配。
可选的,UMF创建针对第一终端设备的第一媒体端点后,向第一接入媒体控制网元发送媒体连接响应消息。该媒体连接响应消息中包括第一媒体端点的资源标识和第一媒体流的资源标识。第一接入媒体控制网元接收媒体连接响应消息后,可以向后续网元发送呼叫建立请求消息(例如INVITE消息)。该INVITE消息包括第一接入媒体控制网元根据第一媒体端点的远端IP地址、远端端口和音视频编解码类型等信息生成的SDP Offer,还包括新增的扩展SIP头域(如Local-Media头域),携带第一媒体端点的资源标识。该INVITE消息通过主被叫侧的CSCF/AS等中间网元转发,其中,Local-Media头域被中间网元透传。
903,统一媒体功能网元接收来自第二接入媒体控制网元的第二媒体连接请求消息。
其中,本申请实施例中的第二接入媒体控制网元为被叫侧的接入媒体控制网元(例如为被叫P-CSCF)。第二媒体连接请求消息用于请求为第二终端设备创建第二媒体端点。第二终端设备例如为被叫终端设备,则第二媒体连接请求消息用于请求为被叫终端设备创建针对被叫终端设备的媒体端点。
一种实现方式中,第二媒体连接请求消息是由第二接入媒体控制网元根据INVITE消息确定的。例如,第二接入媒体控制网元收到步骤902中描述的INVITE消息,从Local-Media头域中获取第一媒体端点的资源标识,并从INVITE消息中获取第一媒体端点的远端IP地址、远端端口和音视频编解码类型等信息。第二接入媒体控制网元确定第二媒体连接请求消息包括第一媒体端点的远端IP地址、远端端口和音视频编解码类型等信息,并向UMF发送第二媒体连接请求消息。
904,统一媒体功能网元根据第二媒体连接请求消息创建针对第二终端设备的第二媒体端点。
其中,第二媒体端点与前文实施例中描述的第一媒体端点类似,也是一种逻辑实体,用于发送和接收一种或多种媒体。其中,第二媒体端点的本端IP地址和端口为统一媒体功能网元的IP地址和端口,第二媒体端点的远端IP地址和端口为第二终端设备的IP地址和端口。一种实现方式中,第二媒体端点的资源标识与第一媒体端点的资源标识类似。例如,第二媒体端点的资源标识为Endpoint2URL。
可选的,统一媒体功能网元还根据第二媒体连接请求消息创建针对第二终端设备的第二媒体流,以及创建第二媒体流的资源标识。具体创建方式参考步骤902中对应的描述,在此不再赘述。例如,第二媒体流的资源标识为Stream2URL。
一种实现方式中,统一媒体功能网元在创建第二媒体端点和第二媒体流时,可以在步骤902中描述的会话媒体处理上下文中创建,而无需新建一个会话媒体处理上下文。例如,图10为本申请实施例提供的一种会话媒体处理上下文的示意图。该会话媒体处理上下文包括第一媒体端点和第二媒体端点,以及第一媒体流和第二媒体流。其中,会话媒体处理上下文的资源标识为ContextURL,ContextURL的格式为:{umf_domain}/contexts/{context_id}。假设{umf_domain}为umf.huawei.com,Context的编号为1000,则ContextURL为umf.huawei.com/contexts/1000。假设第一媒体端点的编号为5000,则Endpoint1URL为umf.huawei.com/contexts/1000/endpoints/5000。假设第二媒体端点的编号为6000,则Endpoint2URL为umf.huawei.com/contexts/1000/endpoints/6000。
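上述资源标识的拼接规则可以用一段示意性的Python代码草绘,并复现本段给出的取值示例(其中的域名umf.huawei.com及编号1000、5000、6000均取自上文的假设)。

```python
def context_url(umf_domain, context_id):
    """按格式 {umf_domain}/contexts/{context_id} 生成ContextURL。"""
    return f"{umf_domain}/contexts/{context_id}"

def endpoint_url(ctx_url, endpoint_id):
    """按格式 .../endpoints/{endpoint_id} 在ContextURL下生成EndpointURL。"""
    return f"{ctx_url}/endpoints/{endpoint_id}"

ctx = context_url("umf.huawei.com", 1000)
ep1 = endpoint_url(ctx, 5000)   # 第一媒体端点
ep2 = endpoint_url(ctx, 6000)   # 第二媒体端点
```

StreamURL的拼接方式与此类似,只是路径段为streams/{stream_id}。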
一种实现方式中,统一媒体功能网元向第一接入媒体控制网元发送第一媒体端点的资源标识,并向第二接入媒体控制网元发送第二媒体端点的资源标识。也就是说,UMF可以向IMS信任域中所有网元发送EndpointURL(或StreamURL、ContextURL),以使IMS信任域中所有网元能够直接控制UMF。从而使得在IMS半呼叫模型下共享主叫侧与被叫侧的媒体资源成为可能,在逻辑上只需要一个UMF,媒体转发只需要一次即可,同时也节省了媒体资源。
其中,UMF传递媒体资源URL。例如,UMF向MO/MT侧的接入媒体控制网元发送Endpoint1URL、Endpoint2URL、Stream1URL和Stream2URL,MO/MT侧的接入媒体控制网元可以向多个AS透传上述URL,从而有利于实现MO/MT侧网元以及多个AS之间共享媒体资源,减少媒体转发跳数,避免多AS串接时的媒体业务冲突和媒体碰撞,减少媒体资源浪费。其中,UMF对IMS网元提供的接口,可以采用各种控制协议,包括但不限于H.248媒体网关控制协议、RESTful API、RPC等。
905,统一媒体功能网元通过第一媒体端点和第二媒体端点传输第一终端设备与第二终端设备之间的媒体数据。
其中,统一媒体功能网元创建第一媒体端点和第二媒体端点,并创建第一媒体流和第二媒体流,用于传输第一终端设备和第二终端设备之间的媒体数据。一种实现方式中,统一媒体功能网元关联第一媒体端点与第二媒体端点。例如,图10所示的会话媒体处理上下文还包括第一媒体端点与第二媒体端点的关联(association),用于描述媒体端点之间的关联关系。例如,第一媒体端点和第二媒体端点通过association关联。其中,对于媒体端点传输的媒体流,一个入流可以关联多个出流(即复制),多个入流可以关联一个出流(即混流)。其中,若入流和出流的编解码属性不同,则需要添加编解码转换组件。
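上述"一个入流可以关联多个出流(复制),多个入流可以关联一个出流(混流)"的关联关系,可以用一段示意性的Python代码草绘。其中的类名、方法名均为本示例的假设,媒体包以字符串代替。

```python
from collections import defaultdict

class MediaContext:
    """草绘会话媒体处理上下文中媒体流之间的association。"""
    def __init__(self):
        self.assoc = defaultdict(set)      # 入流 -> 出流集合

    def associate(self, in_stream, out_stream):
        self.assoc[in_stream].add(out_stream)

    def forward(self, in_stream, packet):
        """将入流收到的媒体包复制到所有关联的出流,返回(出流, 包)列表。"""
        return [(out, packet) for out in sorted(self.assoc[in_stream])]

ctx = MediaContext()
ctx.associate("stream1", "stream2")   # UE1方向 -> UE2方向
ctx.associate("stream2", "stream1")   # UE2方向 -> UE1方向
```

若入流与出流的编解码属性不同,则如正文所述,还需在转发路径上插入编解码转换组件,本草绘未体现这一点。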
本申请实施例提供一种呼叫处理方法,该呼叫处理方法基于IMS网络架构,由新增的统一媒体功能网元所执行。统一媒体功能网元通过接入侧提供媒体接入能力,减少媒体迂回,降低媒体业务传输时延,有利于支持扩展现实等时延敏感业务对IMS网络架构的需求。并且UMF向IMS信任域中的控制网元传递URL,在IMS半呼叫模型下实现主叫侧与被叫侧的媒体资源共享。多个控制网元可对同一个UMF网元的会话媒体处理上下文进行控制,在逻辑上只需要一个UMF,媒体转发只需要一次即可,同时也节省了媒体资源。
一种示例中,图11为本申请实施例提供的呼叫处理方法在执行基本呼叫处理流程后,实现媒体重定向的流程示意图。其中,统一媒体功能网元与IMS信任域中的控制网元完成基本呼叫处理流程后,还可能执行媒体重定向流程,包括以下步骤:
1101,统一媒体功能网元接收来自应用服务器的媒体重定向指示消息。
其中,统一媒体功能网元按照图9实施例中的步骤执行基本呼叫处理流程后,已建立第一媒体端点和第二媒体端点,以及关联第一媒体端点和第二媒体端点。当IMS网络中还存在大量补充业务(例如放音业务和前转业务等)时,接入媒体控制网元还需要控制UMF实现媒体重定向。例如,当被叫振铃时,被叫AS启动彩铃业务处理流程,向UMF发送媒体重定向指示消息,该媒体重定向指示消息用于请求为媒体重定向指示消息指示的资源或设备创建媒体重定向端点。其中,媒体重定向指示消息指示的资源例如为放音资源,该放音资源用于向指定的媒体端点放音,从而向指定的终端设备放音。媒体重定向指示消息指示的设备例如为第三终端设备,即为除第一终端设备和第二终端设备之外的新的终端设备。该第三终端设备可以与第一终端设备进行媒体重定向(即第二终端设备不再与第一终端设备建立媒体连接),也可以与第二终端设备进行媒体重定向,本实施例不作限定。
1102,统一媒体功能网元根据媒体重定向指示消息创建针对媒体重定向指示消息的媒体重定向端点。
一种实现方式中,统一媒体功能网元根据媒体重定向指示消息指示的资源,创建针对媒体重定向指示消息指示的资源的媒体重定向端点。其中,媒体重定向指示消息指示的资源为放音资源。例如,当放音资源为语音彩铃时,则UMF创建针对该语音彩铃的媒体重定向端点。当放音资源为视频彩铃时,则UMF创建针对该视频彩铃的第一媒体重定向端点(即音频端点)和第二媒体重定向端点(即视频端点)。
可选的,当媒体重定向指示消息用于指示第一媒体端点执行媒体重定向时,统一媒体功能网元根据该媒体重定向指示消息,关联第一媒体端点和媒体重定向端点。本申请实施例中假设放音资源为UMF中存储的资源,可以被UMF调用,则统一媒体功能网元通过第一媒体端点和媒体重定向端点向第一终端设备放音。例如,UMF向第一终端设备播放语音彩铃。
可选的,当媒体重定向指示消息用于指示第二媒体端点执行媒体重定向时,统一媒体功能网元根据该媒体重定向指示消息,关联第二媒体端点和媒体重定向端点。统一媒体功能网元通过第二媒体端点和媒体重定向端点向第二终端设备放音。
应注意,上述放音过程中,AS只需要通过非OA的媒体控制接口(例如服务化接口)直接修改UMF的媒体连接。例如,当第一媒体端点与媒体重定向端点关联时,统一媒体功能网元将第一媒体端点与所述第二媒体端点的关联关系变更为所述第一媒体端点与所述媒体重定向端点的关联关系。又例如,当第二媒体端点与媒体重定向端点关联时,统一媒体功能网元将第二媒体端点与所述第一媒体端点的关联关系变更为所述第二媒体端点与所述媒体重定向端点的关联关系。也就是说,UMF只需要修改媒体端点之间的association,不需要更新终端侧的媒体信息,因此不需要执行终端侧的媒体重定向流程,避免了可能的业务冲突和媒体碰撞等问题。并且,上述放音过程中,直接由部署在接入侧的UMF提供放音,有利于减少媒体迂回,降低时延。
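上述"只需修改媒体端点之间的association、不更新终端侧媒体信息"的非OA媒体重定向,可以用一段示意性的Python代码草绘:放音开始时将endpoint1的关联对端改为放音端点,放音结束后再恢复。其中的端点名与函数名均为本示例的假设。

```python
def redirect(associations, endpoint, new_peer):
    """将endpoint现有的关联对端替换为new_peer,返回被替换的旧对端。"""
    old_peer = associations.get(endpoint)
    associations[endpoint] = new_peer
    return old_peer

# 基本呼叫建立后的关联关系:endpoint1 <-> endpoint2
assoc = {"endpoint1": "endpoint2", "endpoint2": "endpoint1"}

old = redirect(assoc, "endpoint1", "announcement_ep")   # 放音开始:改接放音端点
restored = redirect(assoc, "endpoint1", old)            # 放音结束:恢复原关联
```

整个过程不触及终端侧的SDP,因此无需执行终端侧的媒体重协商流程。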
可选的,当放音完成后第一终端设备和第二终端设备还存在放音后接续的业务时,UMF还根据第一接入媒体控制网元和第二接入媒体控制网元的指示信息,将重新关联第一媒体端点和第二媒体端点,从而通过第一媒体端点和第二媒体端点传输第一终端设备和第二终端设备在放音后接续的业务中的媒体数据。
一种实现方式中,统一媒体功能网元根据媒体重定向指示消息指示的设备,创建针对媒体重定向指示消息指示的设备的媒体重定向端点。例如,媒体重定向指示消息指示的设备为第三终端设备。其中,第三终端设备为前转业务对应的设备,即第一终端设备/第二终端设备可以触发前转业务,前转至第三终端设备。统一媒体功能网元根据媒体重定向指示消息创建针对第三终端设备的第三媒体端点。
可选的,当媒体重定向指示消息用于指示第一媒体端点执行媒体重定向时,统一媒体功能网元根据该媒体重定向指示消息,关联第一媒体端点和第三媒体端点。统一媒体功能网元通过第一媒体端点和第三媒体端点传输第一终端设备和第三终端设备之间的媒体数据。可选的,当媒体重定向指示消息用于指示第一媒体端点执行媒体重定向时,统一媒体功能网元还接收来自第二接入媒体控制网元的媒体连接取消消息。其中,媒体连接取消消息用于请求删除第二媒体端点。也就是说,当第一媒体端点执行媒体重定向时,UMF将根据媒体连接取消消息,删除已与第一媒体端点建立媒体连接的第二媒体端点。然后,UMF根据媒体重定向指示消息创建针对第三终端设备的第三媒体端点,以及关联第一媒体端点和第三媒体端点。
其中,当第一媒体端点与第三媒体端点关联时,统一媒体功能网元更新第一媒体端点与第二媒体端点之间的关联关系。也就是说,上述前转过程中,AS只需要通过非OA的媒体控制接口直接修改UMF的媒体连接,不需要通过复杂的媒体重协商流程,避免了可能的业务冲突和媒体碰撞等问题。应注意,上述前转过程中,当终端设备的编解码信息发生变化时,只需要单边协商(例如,只需要协商编解码信息),不需要第一终端设备和第三终端设备进行媒体协商,避免了可能的媒体协商震荡和终端兼容性等问题。当终端设备的IP地址和/或端口变化时,也不需要修改对端的媒体信息,避免了可能的媒体协商震荡和终端兼容性等问题。
可选的,当媒体重定向指示消息用于指示第二媒体端点执行媒体重定向时,统一媒体功能网元根据该媒体重定向指示消息,关联第二媒体端点和第三媒体端点。统一媒体功能网元通过第二媒体端点和第三媒体端点传输第二终端设备和第三终端设备之间的媒体数据。可选的,当媒体重定向指示消息用于指示第二媒体端点执行媒体重定向时,统一媒体功能网元还接收来自第一接入媒体控制网元的媒体连接取消消息。其中,媒体连接取消消息用于请求删除第一媒体端点。也就是说,当第二媒体端点执行媒体重定向时,UMF将根据媒体连接取消消息,删除已与第二媒体端点建立媒体连接的第一媒体端点。然后,UMF根据媒体重定向指示消息创建针对第三终端设备的第三媒体端点,以及关联第二媒体端点和第三媒体端点。
其中,当第二媒体端点与第三媒体端点关联时,统一媒体功能网元更新第一媒体端点与第二媒体端点之间的关联关系。也就是说,上述前转过程中,AS只需要通过非OA的媒体控制接口直接修改UMF的媒体连接,不需要通过复杂的媒体重协商流程,避免了可能的业务冲突和媒体碰撞等问题。
1103,统一媒体功能网元通过所述第一媒体端点和媒体重定向端点传输第一终端设备与媒体重定向指示消息指示的资源或设备之间的媒体数据,和/或,统一媒体功能网元通过第二媒体端点和媒体重定向端点传输第二终端设备与媒体重定向指示消息指示的资源或设备之间的媒体数据。
一种实现方式中,在放音业务中,对于主叫侧的第一终端设备,统一媒体功能网元关联第一媒体端点和媒体重定向端点(放音资源端点),并通过第一媒体端点向第一终端设备放音。可选的,对于被叫侧的第二终端设备,统一媒体功能网元关联第二媒体端点和媒体重定向端点,并通过第二媒体端点向第二终端设备放音。
可选的,当放音完成后,应用服务器向统一媒体功能网元发送停止放音指示消息。统一媒体功能网元根据停止放音指示消息,取消关联第一媒体端点和媒体重定向端点,或者,取消关联第二媒体端点与媒体重定向端点。例如,对于主叫侧的第一终端设备,放音完成后第一终端设备和第二终端设备还可能存在放音后的接续业务,则UMF根据停止放音指示消息,取消关联第一媒体端点和媒体重定向端点,并重新关联第一媒体端点和第二媒体端点。
一种实现方式中,在前转业务中,统一媒体功能网元关联第一媒体端点和媒体重定向端点(第三媒体端点),并通过第一媒体端点和第三媒体端点传输第一终端设备和前转方的第三终端设备之间的媒体数据。
一种实现方式中,IMS网络中也有一些业务是放音的同时有媒体重定向。例如呼叫等待业务中,用户A与用户B正在通话,此时另一个用户C呼叫用户A。则用户A可以关联两路会话,第一路正常通话,第二路是保持状态。由AS向被保持方(用户B或用户C)放音,两路会话的状态可以随时切换。在这种情况下,统一媒体功能网元创建第一媒体端点和第二媒体端点,并关联第一媒体端点和第二媒体端点。统一媒体功能网元创建媒体重定向端点(包括放音资源对应的媒体端点和第三媒体端点)。应用服务器根据会话状态,控制UMF根据会话状态在不同的媒体端点之间传输媒体数据。例如,当用户A与用户B正在通话时,接入媒体控制网元和应用服务器控制UMF通过第一媒体端点和第二媒体端点传输用户A和用户B之间的媒体数据,并通过放音资源对应的媒体端点和第三媒体端点向用户C放音。
本申请实施例提供一种呼叫处理方法,该呼叫处理方法基于IMS网络架构,由新增的统一媒体功能网元所执行。当统一媒体功能网元在IMS信任域中的控制网元的控制下执行基本呼叫流程后,还可以执行需要进行媒体重定向的业务流程(如放音业务和前转业务等)。在执行媒体重定向流程时,应用服务器只需要通过非OA的媒体控制接口直接修改统一媒体功能网元的媒体连接即可,不需要通过复杂的媒体重协商流程,避免了可能的业务冲突和媒体碰撞等问题。
一种示例中,图9和图11所示的呼叫处理方法可以应用于互通场景中。其中,互通场景是指UMF部署在互通侧,实现IMS与其它网络的媒体互通。其中,图12为本申请实施例提供的呼叫处理方法应用于互通场景中的流程示意图,包括以下步骤:
1201,统一媒体功能网元接收来自互通边界控制功能网元的互通媒体连接请求消息。
其中,互通媒体连接请求消息用于请求为互通设备创建互通媒体端点。其中,UMF提供IP媒体互通能力,互通设备为其他IP域中的设备,例如为其他IP域中的终端设备。
一种实现方式中,互通媒体连接请求消息是由互通边界控制功能网元根据互通设备的媒体信息确定的。例如,互通边界控制功能网元将互通设备的媒体信息携带在互通媒体连接请求消息中,并向UMF发送互通媒体连接请求消息。
1202,统一媒体功能网元根据互通媒体连接请求消息创建针对互通设备的互通媒体端点。
一种实现方式中,当互通媒体连接请求消息用于请求创建基本呼叫的媒体连接时,统一媒体功能网元根据互通媒体连接请求消息中的互通设备的媒体信息,创建针对互通设备的互通媒体端点。可选的,统一媒体功能网元关联UMF中的媒体端点和互通媒体端点。例如,当互通媒体连接请求消息用于请求创建第一终端设备和互通设备之间的基本呼叫的媒体连接时,UMF根据互通媒体连接请求消息,创建互通媒体端点,并关联第一媒体端点和互通媒体 端点。具体实现方式,参考图9实施例中UMF创建第一媒体端点/第二媒体端点,以及关联第一媒体端点和第二媒体端点的具体实现方式,在此不再赘述。
一种实现方式中,当互通媒体连接请求消息用于请求创建媒体重定向的媒体连接时,统一媒体功能网元根据互通媒体连接请求消息中的互通设备的媒体信息,创建针对互通设备的互通媒体端点,并取消关联UMF中已进行媒体协商的媒体端点。例如,当互通媒体连接请求消息用于请求创建第一终端设备和互通设备之间的媒体重定向的媒体连接时,UMF根据互通媒体连接请求消息,创建互通媒体端点,并关联第一媒体端点和互通媒体端点。并且,UMF取消关联第一媒体端点和第二媒体端点。具体实现方式,参考图11实施例中UMF创建媒体重定向端点,以及关联第一媒体端点和媒体重定向端点,或者关联第二媒体端点和媒体重定向端点的具体实现方式,在此不再赘述。
1203,统一媒体功能网元通过第一媒体端点和互通媒体端点传输第一终端设备与互通设备之间的媒体数据,或者,统一媒体功能网元通过第二媒体端点和互通媒体端点传输第二终端设备与互通设备之间的媒体数据。
一种实现方式中,当IMS用户呼叫其他IP域用户时,第一媒体端点关联互通媒体端点。统一媒体功能网元通过第一媒体端点和互通媒体端点传输第一终端设备与互通设备之间的媒体数据。另一种实现方式中,当其他IP域用户呼叫IMS用户时,互通媒体端点关联第二媒体端点。统一媒体功能网元通过第二媒体端点和互通媒体端点传输第二终端设备与互通设备之间的媒体数据。
可选的,当UMF创建的互通媒体端点为媒体重定向端点,且媒体重定向流程为前转流程时,UMF通过主叫侧的媒体端点(例如为第一媒体端点)和互通媒体端点传输主叫侧的终端设备和前转方的互通设备之间的媒体数据。
下面对图9至图12所示的呼叫处理方法应用于具体场景中的应用实施例进行详细的描述。
一种示例中,图13为本申请实施例提供的呼叫处理方法应用于主被叫音频基本呼叫建立场景中的流程示意图。该主被叫音频基本呼叫建立场景中包括MO侧和MT侧,MO侧包括第一终端设备UE1、第一接入媒体控制网元P-CSCF1、统一媒体功能网元UMF和第一应用服务器CSCF1/AS1,MT侧包括第二终端设备UE2、第二接入媒体控制网元P-CSCF2和第二应用服务器CSCF2/AS2。图13所示的主被叫音频基本呼叫建立流程包括以下步骤:
1、UE1向P-CSCF1发送INVITE消息,携带UE1的媒体信息SDP(UE1)。
2、主叫侧P-CSCF1接收INVITE消息,并向UMF发送第一媒体连接请求消息,请求创建第一媒体端点。其中,P-CSCF1根据UE1的SDP,在第一媒体连接请求消息中携带如下内容:
(1)对SDP中的每个m行,生成一个Endpoint,根据m行中IP、端口、协议类型对应地填充到Endpoint中的远端IP、端口和传输类型字段中。
(2)针对m行中的每个媒体流,在对应的Endpoint下创建一个Stream,同时将SDP中的负载类型(payload type)值,各个a行中的编解码信息等填充到Stream中。
3、UMF接收来自P-CSCF1的第一媒体连接请求消息,并根据第一媒体连接请求消息,创建会话媒体处理上下文,第一媒体端点和第一媒体流。具体分为以下三种情况:
情况一:若第一媒体连接请求消息中的context_id为空,UMF创建一个Context。UMF再创建endpoint1和stream1。具体实现方式,参考图9实施例中对应的描述,在此不再赘述。
情况二:若第一媒体连接请求消息中携带了Endpoint和Stream,UMF为每个Endpoint和Stream分别分配ID。并且,UMF还为每个Endpoint分配本地IP和端口,并填充到对应的Endpoint的本地IP和端口字段中。
情况三:若第一媒体连接请求消息中未携带Endpoint和Stream,UMF根据自身支持的媒体类型(如音频,视频,data channel等),为每种媒体类型创建一个Endpoint,并且为每个Endpoint分配对应的IP和端口,填充到Endpoint的本地IP和端口字段中。UMF在每个Endpoint下创建一个Stream,为Stream分配一个payload type值,将自身支持的所有编解码类型填充到Stream中。UMF将Endpoint和Stream填充到Context中,然后向P-CSCF1发送第一媒体连接响应消息。本示例中假设UMF返回的Context、Endpoint、Stream的ID分别为context1、endpoint1和stream1。
4、P-CSCF1接收来自UMF的第一媒体连接响应消息,并向后续网元转发INVITE消息。其中,该INVITE消息中包括根据endpoint1的远端IP/端口/编解码类型等信息生成的SDP offer,同时增加一个扩展SIP头域Local-Media,Local-Media中携带context1和endpoint1。
其中,CSCF1、AS1、CSCF2、AS2等中间网元将转发INVITE消息,并透传Local-Media头域。具体的传输方式本示例中未示出。
5、被叫侧P-CSCF2接收INVITE消息,从Local-Media头域中获取context1。P-CSCF2向UMF发送第二媒体连接请求消息,指示UMF在context1下创建Endpoint和Stream。其中,P-CSCF2的处理过程与主叫侧P-CSCF1的处理过程类似,区别在于不用重新创建Context,在此不再赘述。
6、UMF接收来自P-CSCF2的第二媒体连接请求消息,并根据第二媒体连接请求消息创建第二媒体端点和第二媒体流。具体处理过程与UMF创建第一媒体端点和第一媒体流类似,例如,UMF在context1中创建Endpoint和Stream,假设为endpoint2和stream2。
7、P-CSCF2接收来自UMF的第二媒体连接响应消息,并向UE2发送INVITE消息。其中,该INVITE消息中包括根据endpoint2的本地IP/端口/编解码类型等信息生成的SDP offer。并且相较于P-CSCF2向UMF发送的第二媒体连接响应消息,该INVITE消息中不包括Local-Media头域。应注意,为增强安全性,新增的Local-Media头域只在IMS信任域内传递,P-CSCF,MGCF,IBCF等边界网元在向非IMS域发送SIP消息时,需要删除Local-Media头域。可选的,IMS域如果部署了第三方的AS,则S-CSCF在触发第三方的AS时可删除Local-Media头域。
8、UE2向P-CSCF2发送1XX响应消息(例如为183消息或180消息),该1XX响应消息中携带UE2的SDP answer。
9、P-CSCF2接收1XX响应消息,获取SDP answer。P-CSCF2向UMF发送消息,该消息中携带UE2的媒体信息(SDP answer),用于指示endpoint2的远端媒体信息为UE2的媒体信息。其中,P-CSCF2查询endpoint1和endpoint2下的所有Stream,为相同媒体类型的Stream建立关联。例如,P-CSCF2查询得到endpoint1下的Stream为stream1,endpoint2下的Stream为stream2,且媒体类型相同。
10、P-CSCF2向UMF发送第一关联建立指示消息,建立stream1到stream2的关联。
11、P-CSCF2向UMF发送第二关联建立指示消息,建立stream2到stream1的关联。
12、P-CSCF2向P-CSCF1发送1XX响应消息,该1XX响应消息中包括:P-CSCF2根据endpoint2的本地IP/端口,以及UE2返回的编解码类型信息生成的SDP Answer,以及扩展SIP头域Local-Media,Local-Media中携带context1和endpoint2。
其中,CSCF1、AS1、CSCF2、AS2等中间网元将转发1XX响应消息,并透传Local-Media头域。具体的传输方式本示例中未示出。
13、P-CSCF1接收1XX响应消息,将响应消息中SDP answer的IP和端口修改为endpoint1的本地IP和端口。然后向UE1发送1XX响应消息,同时删除Local-Media头域。
其中,后续流程与现有协议中的基本呼叫处理流程的实现相同,在此不再赘述。
应注意,为避免多个控制节点操作同一个媒体资源导致的冲突,对同一个媒体资源的操作可采取如下原则:
(1)Context、Endpoint、Stream等实例只能由实例的创建者修改和删除,其它控制节点只能查询。
(2)UMF通过记录创建者标识,或者采用Token等机制,限制控制节点对内部资源实例的操作。
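上述"实例只能由创建者修改和删除、其它控制节点只能查询"的原则,可以通过记录创建者标识进行校验,下面用一段示意性的Python代码草绘。其中的类名、字段名与函数名均为本示例的假设。

```python
class Resource:
    """草绘UMF内部的资源实例(Context/Endpoint/Stream),记录创建者标识。"""
    def __init__(self, owner, value):
        self.owner = owner
        self.value = value

def modify(resource, requester, new_value):
    """仅允许创建者修改;其它控制节点的修改请求被拒绝,返回是否成功。"""
    if requester != resource.owner:
        return False              # 拒绝:请求方不是创建者
    resource.value = new_value
    return True

ep = Resource(owner="P-CSCF1", value="ip:port-A")
ok_as = modify(ep, "AS2", "ip:port-B")         # 非创建者,修改失败
ok_owner = modify(ep, "P-CSCF1", "ip:port-B")  # 创建者,修改成功
```

正文提到的Token机制思路类似:创建时下发Token,后续操作校验Token而非创建者标识。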
一种实现方式中,图14为本申请实施例提供的一种音频呼叫的媒体连接拓扑的示意图。其中,主叫侧的第一终端设备UE1和被叫侧的第二终端设备UE2之间的语音流通过UMF创建的第一媒体端点endpoint1和第二媒体端点endpoint2以及endpoint1和endpoint2之间的association传输。
一种实现方式中,图15为本申请实施例提供的一种视频呼叫的媒体连接拓扑的示意图。相较于图14,图15中的媒体连接拓扑多了两个媒体端点(endpoint3和endpoint4)。其中,主叫侧的第一终端设备UE1和被叫侧的第二终端设备UE2之间的语音流通过UMF创建的endpoint1和endpoint3以及endpoint1和endpoint3之间的association传输。主叫侧的第一终端设备UE1和被叫侧的第二终端设备UE2之间的视频流通过UMF创建的endpoint2和endpoint4以及endpoint2和endpoint4之间的association传输。
可见,本示例中的主被叫音频基本呼叫建立的流程中,UMF代替原有的IMS-AGW提供接入侧的媒体业务,丰富了接入侧的媒体业务能力。UMF提供统一媒体资源标识URL和非OA的媒体控制接口(如服务化接口),并通过扩展IMS信令流程,传递统一媒体资源标识URL,实现UMF媒体资源共享。通过上述优化措施,可达成在IMS半呼叫模型下,逻辑上只需要一个UMF,媒体转发只需要一次即可,降低业务传输时延。
一种示例中,图16为本申请实施例提供的呼叫处理方法应用于放音业务场景中的流程示意图。该放音业务场景中包括MO侧和MT侧,MO侧包括第一终端设备UE1、第一接入媒体控制网元P-CSCF1、统一媒体功能网元UMF和第一应用服务器CSCF1/AS1,MT侧包括第二终端设备UE2、第二接入媒体控制网元P-CSCF2和第二应用服务器CSCF2/AS2。图16所示的放音流程包括以下步骤:
1~12同图13所示的音频基本呼叫流程的步骤1~12,主被叫P-CSCF分别指示UMF创建Context、Endpoint、Stream。其中,由于图16所描述的放音流程为视频呼叫,主叫侧和被叫侧需要各创建2个Endpoint:主叫侧为endpoint1和endpoint2,被叫侧为endpoint3和endpoint4,如图15所示。
13、被叫侧振铃,被叫侧的AS2接收180响应,启动视频彩铃业务处理,向UMF发送放音指示消息,指示UMF向endpoint1/endpoint2连接的用户播放视频彩铃(包括语音流和视频流)。例如,UMF创建放音资源对应的媒体端点,并创建endpoint1/endpoint2分别与放音资源对应的媒体端点之间的association。其中,视频彩铃播放过程中UMF只需要更新endpoint1/endpoint2的关联关系(association),不需要更新UE侧的媒体信息,因此无需媒体重协商流程。
14、AS2向主叫侧的P-CSCF1发送180响应消息,该180响应消息携带SDP answer,具体实现方式与图13所示的基本呼叫流程中对应的步骤类似,在此不再赘述。
其中,本示例中CSCF1/CSCF2以及主叫侧的AS1等其它网元的处理过程参考现有协议中放音流程对应的操作,在此不再赘述。
15、主叫侧的P-CSCF1向第一终端设备UE1发送180响应消息,该180响应消息携带SDP answer,具体实现方式与图13所示的基本呼叫流程中对应的步骤类似,在此不再赘述。
16、被叫摘机,第二终端设备UE2向P-CSCF2发送200(INVITE)响应消息。
17、P-CSCF2向应用服务器AS2发送该200(INVITE)响应消息,具体实现方式与图13所示的基本呼叫流程中对应的步骤类似,在此不再赘述。
18、AS2接收200(INVITE)响应消息,向UMF发送停止放音指示消息。其中,停止放音指示消息指示UMF不再通过endpoint1/endpoint2向UE1放音。
19、AS2向主叫侧的P-CSCF1发送200(INVITE)响应消息。
20、P-CSCF1向UE1发送200(INVITE)响应消息。
21、UE1向被叫侧的UE2发送ACK消息。
并且,IMS中的各网元向UE2转发ACK消息,如图16中的22-24所示。
一种实现方式中,图17为本申请实施例提供的一种视频彩铃放音的媒体连接拓扑的示意图。其中,UMF通过放音资源对应的媒体端点向主叫侧的第一终端设备UE1放音(包括语音和视频)。
可见,本示例中的视频彩铃放音流程中,当需要更新媒体信息时,AS只需要通过非OA协商的接口直接修改UMF的媒体连接,不需要通过复杂的媒体重协商流程,避免了可能的业务冲突和媒体碰撞等问题。并且由部署在接入侧的UMF提供放音,有利于减少媒体迂回,降低业务时延。
一种示例中,图18为本申请实施例提供的呼叫处理方法应用于前转业务场景中的流程示意图。该前转业务场景中包括第一终端设备UE1、第一接入媒体控制网元P-CSCF1、统一媒体功能网元UMF、第一应用服务器CSCF1/AS1、第二终端设备UE2、第二接入媒体控制网元P-CSCF2、第二应用服务器CSCF2/AS2、第三终端设备UE3、第三接入媒体控制网元P-CSCF3和第三应用服务器CSCF3/AS3。具体的前转业务场景假设为UE1呼叫UE2,完成媒体协商后,UE1触发前转到UE3。应注意,第三应用服务器CSCF3/AS3仅用于透传消息,不涉及对消息的操作,故图18中未示出第三应用服务器CSCF3/AS3。图18所示的前转流程包括以下步骤:
1、被叫侧的AS2向P-CSCF2发送CANCEL消息,取消UE1到UE2的呼叫。
2、P-CSCF2接收CANCEL消息,指示UMF删除已创建的第二媒体端点endpoint2及其对应的所有association(例如包括endpoint2和endpoint1之间的association)。
3、AS2向前转目的方的P-CSCF3发送INVITE消息,携带UE1的SDP offer及Local-Media头域。
4、P-CSCF3接收INVITE消息,P-CSCF3指示UMF创建针对UE3的第三媒体端点endpoint3。
5、UMF根据来自P-CSCF3的第三媒体连接请求消息创建endpoint3的URL,并向P-CSCF3发送endpoint3的URL。
6、P-CSCF3向UE3发送INVITE消息。
7、UE3向P-CSCF3发送1XX响应消息,该1XX响应消息携带SDP answer。
8、P-CSCF3指示UMF更新endpoint3的远端媒体信息,具体实现方式与图13所示的基本呼叫流程中对应的步骤类似。
9、P-CSCF3向AS2发送1XX响应消息,该1XX响应消息携带SDP answer。
其中,当AS2接收1XX响应消息后,若SDP answer中的编解码信息无变化,则无需将该SDP answer发送给主叫侧,后续处理流程与基本呼叫流程类似。
其中,若SDP answer中的编解码信息有变化,例如,UE1与UE2协商的编解码为G.711,UE1与UE3协商的编解码为G.729,则AS1/AS2需要指示UMF更新主叫侧的媒体信息。具体包括以下步骤:
10、AS2向主叫侧的P-CSCF1发送UPDATE消息,该UPDATE消息携带SDP offer。其中,SDP offer中的IP地址、端口号等参数是根据endpoint3的本地IP和端口号确定的,编解码信息是根据UE3反馈的SDP answer生成的。
11、P-CSCF1接收UPDATE消息,将SDP offer中的IP地址、端口号等参数修改为endpoint1对应的信息。P-CSCF1向UE1发送UPDATE消息。
12、UE1向P-CSCF1发送200响应消息,该200响应消息携带SDP answer。
13、P-CSCF1接收200响应消息,根据SDP answer中的IP地址和端口更新endpoint1的远端IP地址和端口号,并根据SDP answer中的编解码信息更新endpoint1的编解码信息。
14、P-CSCF1向AS2转发200响应消息,将SDP answer中的IP地址和端口修改为endpoint1的IP地址和端口。
其中,AS2接收该200响应消息后,终结该200响应消息。
可以理解,与IMS现有流程不同的是,若主叫侧的UE1返回的媒体IP和端口号发生变化,UMF只需要更新endpoint1对应的远端IP地址和端口号,不需要与被叫侧进行媒体重协商,从而避免了媒体重协商震荡问题。
一种实现方式中,图19为本申请实施例提供的一种前转业务的媒体连接拓扑的示意图。其中,第一终端设备UE1前转至第三终端设备UE3时,UMF删除原本与第一媒体端点endpoint1关联的第二媒体端点endpoint2,并创建新的第三媒体端点endpoint3。其中,第一终端设备UE1和第三终端设备UE3之间的语音流通过UMF创建的第一媒体端点endpoint1和第三媒体端点endpoint3以及endpoint1和endpoint3之间的association传输。
可见,本示例中的前转流程中,当需要更新媒体信息时,AS只需要通过非OA协商的接口直接修改UMF的媒体连接,不需要通过复杂的媒体重协商流程,避免了可能的业务冲突和媒体碰撞等问题。当主叫侧的媒体IP或端口发生变化时,只需要修改UMF内部的媒体端点的远端IP和端口,不需要主叫侧和被叫侧进行媒体重协商,避免了可能的媒体重协商震荡和终端兼容性等问题。
为了实现上述本申请实施例提供的方法中的各功能,统一媒体处理网元、第一接入媒体控制网元和第二接入媒体控制网元可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,另外,在本申请各个实施例中的各功能模块可以集成在一个处理器中,也可以是单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
如图20所示为本申请实施例提供的一种呼叫处理装置2000,用于实现上述方法实施例中统一媒体功能网元的功能。该装置可以是设备,也可以是设备中的装置,或者能够和设备匹配使用的装置。其中,该装置可以为芯片系统。本申请实施例中,芯片系统可以由芯片构成,也可以包含芯片和其他分立器件。呼叫处理装置2000包括至少一个处理器2020,用于实现本申请实施例提供的呼叫处理方法中统一媒体功能网元的功能。示例性地,处理器2020根据第一媒体连接请求消息创建针对第一终端设备的第一媒体端点,此处不做赘述。
装置2000还可以包括至少一个存储器2030,用于存储程序指令和/或数据。存储器2030和处理器2020耦合。本申请实施例中的耦合是装置、单元或模块之间的间接耦合或通信连接,可以是电性,机械或其它的形式,用于装置、单元或模块之间的信息交互。处理器2020可能和存储器2030协同操作。处理器2020可能执行存储器2030中存储的程序指令。所述至少一个存储器中的至少一个可以包括于处理器中。
装置2000还可以包括通信接口2010,该通信接口例如可以是收发器、接口、总线、电路或者能够实现收发功能的装置。其中,通信接口2010用于通过传输介质和其它设备进行通信,从而用于装置2000中的装置可以和其它设备进行通信。处理器2020利用通信接口2010收发数据,并用于实现图9至图19对应的实施例中所述的统一媒体功能网元所执行的方法,具体参见方法示例中的详细描述,此处不做赘述。
本申请实施例中不限定上述通信接口2010、处理器2020以及存储器2030之间的具体连接介质。本申请实施例在图20中以存储器2030、处理器2020以及通信接口2010之间通过总线2040连接,总线在图20中以粗线表示,其它部件之间的连接方式,仅是进行示意性说明,并不引以为限。所述总线可以分为地址总线、数据总线、控制总线等。为便于表示,图20中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
如图21所示为本申请实施例提供的另一种呼叫处理装置2100,用于实现上述方法实施例中第一接入媒体控制网元的功能。该装置可以是设备,也可以是设备中的装置,或者能够和设备匹配使用的装置。其中,该装置可以为芯片系统。呼叫处理装置2100包括至少一个处理器2120,用于实现本申请实施例提供的方法中第一接入媒体控制网元的功能。示例性地,处理器2120调用通信接口2110向统一媒体功能网元发送第一媒体连接请求消息,第一媒体连接请求消息用于请求为第一终端设备创建第一媒体端点,具体参见方法示例中的详细描述,此处不做赘述。
装置2100还可以包括至少一个存储器2130,用于存储程序指令和/或数据。存储器2130和处理器2120耦合。本申请实施例中的耦合是装置、单元或模块之间的间接耦合或通信连接,可以是电性,机械或其它的形式,用于装置、单元或模块之间的信息交互。处理器2120可能和存储器2130协同操作。处理器2120可能执行存储器2130中存储的程序指令。所述至少一个存储器中的至少一个可以包括于处理器中。
装置2100还可以包括通信接口2110,该通信接口例如可以是收发器、接口、总线、电路或者能够实现收发功能的装置。其中,通信接口2110用于通过传输介质和其它设备进行通信,从而用于装置2100中的装置可以和其它设备进行通信。处理器2120利用通信接口2110收发数据,并用于实现图9至图19对应的实施例中所述的第一接入媒体控制网元所执行的方法。
本申请实施例中不限定上述通信接口2110、处理器2120以及存储器2130之间的具体连接介质。本申请实施例在图21中以存储器2130、处理器2120以及通信接口2110之间通过总线2140连接,总线在图21中以粗线表示,其它部件之间的连接方式,仅是进行示意性说明,并不引以为限。所述总线可以分为地址总线、数据总线、控制总线等。为便于表示,图 21中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
如图22所示为本申请实施例提供的另一种呼叫处理装置2200,用于实现上述方法实施例中第二接入媒体控制网元的功能。该装置可以是设备,也可以是设备中的装置,或者能够和设备匹配使用的装置。其中,该装置可以为芯片系统。呼叫处理装置2200包括至少一个处理器2220,用于实现本申请实施例提供的方法中第二接入媒体控制网元的功能。示例性地,处理器2220可以调用通信接口2210向统一媒体功能网元发送第二媒体连接请求消息,第二媒体连接请求消息用于请求为第二终端设备创建第二媒体端点,具体参见方法示例中的详细描述,此处不做赘述。
装置2200还可以包括至少一个存储器2230,用于存储程序指令和/或数据。存储器2230和处理器2220耦合。本申请实施例中的耦合是装置、单元或模块之间的间接耦合或通信连接,可以是电性,机械或其它的形式,用于装置、单元或模块之间的信息交互。处理器2220可能和存储器2230协同操作。处理器2220可能执行存储器2230中存储的程序指令。所述至少一个存储器中的至少一个可以包括于处理器中。
装置2200还可以包括通信接口2210,该通信接口例如可以是收发器、接口、总线、电路或者能够实现收发功能的装置。其中,通信接口2210用于通过传输介质和其它设备进行通信,从而用于装置2200中的装置可以和其它设备进行通信。处理器2220利用通信接口2210收发数据,并用于实现图9至图19对应的实施例中所述的第二接入媒体控制网元所执行的方法。
本申请实施例中不限定上述通信接口2210、处理器2220以及存储器2230之间的具体连接介质。本申请实施例在图22中以存储器2230、处理器2220以及通信接口2210之间通过总线2240连接,总线在图22中以粗线表示,其它部件之间的连接方式,仅是进行示意性说明,并不引以为限。所述总线可以分为地址总线、数据总线、控制总线等。为便于表示,图22中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
在本申请实施例中,处理器可以是通用处理器、数字信号处理器、专用集成电路、现场可编程门阵列或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。
在本申请实施例中,存储器可以是非易失性存储器,比如HDD或SSD等,还可以是volatile memory,例如RAM。存储器是能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。本申请实施例中的存储器还可以是电路或者其它任意能够实现存储功能的装置,用于存储程序指令和/或数据。
如图23所示为本申请实施例提供的另一种呼叫处理装置2300。该呼叫处理装置可以是统一媒体处理网元,也可以是统一媒体处理网元中的装置,或者是能够和统一媒体处理网元匹配使用的装置。一种设计中,该呼叫处理装置可以包括执行图9至图19对应的示例中所描述的方法/操作/步骤/动作所一一对应的模块,该模块可以是硬件电路,也可是软件,也可以是硬件电路结合软件实现。一种设计中,该装置可以包括通信模块2301和处理模块2302。示例性地,通信模块2301用于接收来自第一接入媒体控制网元的第一媒体连接请求消息。处理模块2302用于根据第一媒体连接请求消息创建针对第一终端设备的第一媒体端点。具体参见图9至图19示例中的详细描述,此处不做赘述。
如图24所示为本申请实施例提供的另一种呼叫处理装置2400,该呼叫处理装置可以是第一接入媒体控制网元,也可以是第一接入媒体控制网元中的装置,或者是能够和第一接入媒体控制网元匹配使用的装置。一种设计中,该呼叫处理装置可以包括执行图9至图19对应的示例中所描述的方法/操作/步骤/动作所一一对应的模块,该模块可以是硬件电路,也可是软件,也可以是硬件电路结合软件实现。一种设计中,该装置可以包括通信模块2401和处理模块2402。示例性地,通信模块2401用于向统一媒体功能网元发送第一媒体连接请求消息。处理模块2402用于指示统一媒体功能网元创建第一媒体端点。具体参见图9至图19示例中的详细描述,此处不做赘述。
如图25所示为本申请实施例提供的另一种呼叫处理装置2500,该呼叫处理装置可以是第二接入媒体控制网元,也可以是第二接入媒体控制网元中的装置,或者是能够和第二接入媒体控制网元匹配使用的装置。一种设计中,该呼叫处理装置可以包括执行图9至图19对应的示例中所描述的方法/操作/步骤/动作所一一对应的模块,该模块可以是硬件电路,也可是软件,也可以是硬件电路结合软件实现。一种设计中,该装置可以包括通信模块2501和处理模块2502。示例性地,通信模块2501用于向统一媒体功能网元发送第二媒体连接请求消息。处理模块2502用于指示统一媒体功能网元创建第二媒体端点。具体参见图9至图19示例中的详细描述,此处不做赘述。
本申请实施例提供的技术方案可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、网络设备、终端设备或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机可以存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,数字视频光盘(digital video disc,DVD))、或者半导体介质等。
在本申请实施例中,在无逻辑矛盾的前提下,各实施例之间可以相互引用,例如方法实施例之间的方法和/或术语可以相互引用,例如装置实施例之间的功能和/或术语可以相互引用,例如装置实施例和方法实施例之间的功能和/或术语可以相互引用。
显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。

Claims (32)

  1. 一种呼叫处理方法,其特征在于,包括:
    统一媒体功能网元接收来自第一接入媒体控制网元的第一媒体连接请求消息,所述第一媒体连接请求消息用于请求为第一终端设备创建第一媒体端点;
    所述统一媒体功能网元根据所述第一媒体连接请求消息创建针对所述第一终端设备的第一媒体端点;
    所述统一媒体功能网元接收来自第二接入媒体控制网元的第二媒体连接请求消息,所述第二媒体连接请求消息用于请求为第二终端设备创建第二媒体端点;
    所述统一媒体功能网元根据所述第二媒体连接请求消息创建针对所述第二终端设备的第二媒体端点;
    所述统一媒体功能网元通过所述第一媒体端点和所述第二媒体端点传输所述第一终端设备与所述第二终端设备之间的媒体数据。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    所述统一媒体功能网元向所述第一接入媒体控制网元发送所述第一媒体端点的资源标识,并向所述第二接入媒体控制网元发送所述第二媒体端点的资源标识。
  3. 根据权利要求1或2所述的方法,其特征在于,所述方法还包括:
    所述统一媒体功能网元关联所述第一媒体端点与所述第二媒体端点。
  4. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    所述统一媒体功能网元接收来自互通边界控制功能网元的互通媒体连接请求消息,所述互通媒体连接请求消息用于请求为互通设备创建互通媒体端点;
    所述统一媒体功能网元根据所述互通媒体连接请求消息创建针对所述互通设备的互通媒体端点;
    所述统一媒体功能网元通过所述第一媒体端点和所述互通媒体端点传输所述第一终端设备与所述互通设备之间的媒体数据,或者,所述统一媒体功能网元通过所述第二媒体端点和所述互通媒体端点传输所述第二终端设备与所述互通设备之间的媒体数据。
  5. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    所述统一媒体功能网元接收来自应用服务器的媒体重定向指示消息,所述媒体重定向指示消息用于请求为所述媒体重定向指示消息指示的资源或设备创建媒体重定向端点;
    所述统一媒体功能网元根据所述媒体重定向指示消息创建针对所述媒体重定向指示消息的媒体重定向端点;
    所述统一媒体功能网元通过所述第一媒体端点和所述媒体重定向端点传输所述第一终端设备与媒体重定向指示消息指示的资源或设备之间的媒体数据,和/或,所述统一媒体功能网元通过所述第二媒体端点和所述媒体重定向端点传输所述第二终端设备与所述媒体重定向指示消息指示的资源或设备之间的媒体数据。
  6. 根据权利要求5所述的方法,其特征在于,所述媒体重定向指示消息指示的资源为放音资源。
  7. 根据权利要求6所述的方法,其特征在于,所述方法还包括:
    所述统一媒体功能网元通过所述第一媒体端点和所述媒体重定向端点向所述第一终端设备放音,和/或,所述统一媒体功能网元通过所述第二媒体端点和所述媒体重定向端点向所述第二终端设备放音。
  8. 根据权利要求6或7所述的方法,其特征在于,所述方法还包括:
    所述统一媒体功能网元接收来自所述应用服务器的停止放音指示消息;
    所述统一媒体功能网元根据所述停止放音指示消息,取消关联所述第一媒体端点与所述媒体重定向端点,或者,取消关联所述第二媒体端点与所述媒体重定向端点。
  9. 根据权利要求5所述的方法,其特征在于,所述统一媒体功能网元根据所述媒体重定向指示消息创建针对所述媒体重定向指示消息的媒体重定向端点,包括:
    所述统一媒体功能网元根据所述媒体重定向指示消息创建针对第三终端设备的第三媒体端点。
  10. 根据权利要求9所述的方法,其特征在于,所述方法还包括:
    所述统一媒体功能网元接收媒体连接取消消息,所述媒体连接取消消息用于请求删除所述第一媒体端点或所述第二媒体端点;
    所述统一媒体功能网元根据所述媒体连接取消消息删除所述第一媒体端点或所述第二媒体端点。
  11. 根据权利要求5所述的方法,其特征在于,所述方法还包括:
    当所述第一媒体端点与所述媒体重定向端点关联时,所述统一媒体功能网元将所述第一媒体端点与所述第二媒体端点的关联关系变更为所述第一媒体端点与所述媒体重定向端点的关联关系;或者,
    当所述第二媒体端点与所述媒体重定向端点关联时,所述统一媒体功能网元将所述第二媒体端点与所述第一媒体端点的关联关系变更为所述第二媒体端点与所述媒体重定向端点的关联关系。
  12. 一种呼叫处理方法,其特征在于,包括:
    第一接入媒体控制网元向统一媒体功能网元发送第一媒体连接请求消息,所述第一媒体连接请求消息用于请求为第一终端设备创建第一媒体端点;
    所述第一接入媒体控制网元接收来自所述统一媒体功能网元的第一媒体连接响应,所述第一媒体连接响应包括所述第一媒体端点的资源标识。
  13. 根据权利要求12所述的方法,其特征在于,所述方法还包括:
    所述第一接入媒体控制网元向第二接入媒体控制网元发送会话请求消息,所述会话请求消息包括所述第一媒体端点的资源标识和媒体信息;
    所述第一接入媒体控制网元接收来自所述第二接入媒体控制网元的会话响应消息,所述会话响应消息包括针对第二终端设备创建的第二媒体端点的资源标识和媒体信息。
  14. 根据权利要求12或13所述的方法,其特征在于,所述方法还包括:
    当所述统一媒体功能网元创建针对媒体重定向指示消息的媒体重定向端点时,所述第一接入媒体控制网元向所述统一媒体功能网元发送第一媒体更新指示消息,所述第一媒体更新指示消息用于指示所述统一媒体功能网元将所述第一媒体端点与所述第二媒体端点的关联关系变更为所述第一媒体端点与所述媒体重定向端点的关联关系。
  15. 一种呼叫处理方法,其特征在于,包括:
    第二接入媒体控制网元向统一媒体功能网元发送第二媒体连接请求消息,所述第二媒体连接请求消息用于请求为第二终端设备创建第二媒体端点;
    所述第二接入媒体控制网元接收来自所述统一媒体功能网元的第二媒体连接响应,所述第二媒体连接响应包括所述第二媒体端点的资源标识。
  16. 根据权利要求15所述的方法,其特征在于,所述方法还包括:
    所述第二接入媒体控制网元接收来自所述第一接入媒体控制网元的会话请求消息,所述会话请求消息包括所述第一媒体端点的资源标识和媒体信息;
    所述第二接入媒体控制网元向所述统一媒体功能网元发送关联指示消息,所述关联指示消息用于指示所述统一媒体功能网元关联所述第一媒体端点和所述第二媒体端点;
    所述第二接入媒体控制网元向所述第一接入媒体控制网元发送会话响应消息,所述会话响应消息包括针对第二终端设备创建的第二媒体端点的资源标识和媒体信息。
  17. 根据权利要求15或16所述的方法,其特征在于,所述方法还包括:
    当所述统一媒体功能网元创建针对媒体重定向指示消息的媒体重定向端点时,所述第二接入媒体控制网元向所述统一媒体功能网元发送第二媒体更新指示消息,所述第二媒体更新指示消息用于指示所述统一媒体功能网元将所述第二媒体端点与所述第一媒体端点的关联关系变更为所述第二媒体端点与所述媒体重定向端点的关联关系。
  18. 一种呼叫处理装置,其特征在于,用于实现如权利要求1至17中任一项所述的方法。
  19. 一种呼叫处理装置,包括处理器和存储器,所述存储器和所述处理器耦合,所述处理器用于执行权利要求1至17中任一项所述的方法。
  20. 一种计算机可读存储介质,包括指令,当其在计算机上运行时,使得计算机执行权利要求1至17中任一项所述的方法。
  21. 一种呼叫处理方法,其特征在于,包括:
    第一接入媒体控制网元向统一媒体功能网元发送第一媒体连接请求消息,所述第一媒体连接请求消息用于请求为第一终端设备创建第一媒体端点;
    所述统一媒体功能网元根据所述第一媒体连接请求消息创建针对所述第一终端设备的第一媒体端点;
    所述统一媒体功能网元向所述第一接入媒体控制网元发送第一媒体连接响应,所述第一媒体连接响应包括所述第一媒体端点的资源标识;
    第二接入媒体控制网元向所述统一媒体功能网元发送第二媒体连接请求消息,所述第二媒体连接请求消息用于请求为第二终端设备创建第二媒体端点;
    所述统一媒体功能网元根据所述第二媒体连接请求消息创建针对所述第二终端设备的第二媒体端点;
    所述统一媒体功能网元向所述第二接入媒体控制网元发送第二媒体连接响应,所述第二媒体连接响应包括所述第二媒体端点的资源标识;
    所述统一媒体功能网元通过所述第一媒体端点和所述第二媒体端点传输所述第一终端设备与所述第二终端设备之间的媒体数据。
  22. 根据权利要求21所述的方法,其特征在于,所述方法还包括:
    所述统一媒体功能网元接收来自互通边界控制功能网元的互通媒体连接请求消息,所述互通媒体连接请求消息用于请求为互通设备创建互通媒体端点;
    所述统一媒体功能网元根据所述互通媒体连接请求消息创建针对所述互通设备的互通媒体端点;
    所述统一媒体功能网元通过所述第一媒体端点和所述互通媒体端点传输所述第一终端设备与所述互通设备之间的媒体数据,或者,所述统一媒体功能网元通过所述第二媒体端点和所述互通媒体端点传输所述第二终端设备与所述互通设备之间的媒体数据。
  23. 根据权利要求21所述的方法,其特征在于,所述方法还包括:
    所述统一媒体功能网元接收来自应用服务器的媒体重定向指示消息,所述媒体重定向指示消息用于请求为所述媒体重定向指示消息指示的资源或设备创建媒体重定向端点;
    所述统一媒体功能网元根据所述媒体重定向指示消息创建针对所述媒体重定向指示消息的媒体重定向端点;
    所述统一媒体功能网元通过所述第一媒体端点和所述媒体重定向端点传输所述第一终端设备与媒体重定向指示消息指示的资源或设备之间的媒体数据,和/或,所述统一媒体功能网元通过所述第二媒体端点和所述媒体重定向端点传输所述第二终端设备与所述媒体重定向指示消息指示的资源或设备之间的媒体数据。
  24. 根据权利要求23所述的方法,其特征在于,所述媒体重定向指示消息指示的资源为放音资源。
  25. 根据权利要求24所述的方法,其特征在于,所述方法还包括:
    所述统一媒体功能网元通过所述第一媒体端点和所述媒体重定向端点向所述第一终端设备放音,和/或,所述统一媒体功能网元通过所述第二媒体端点和所述媒体重定向端点向所述第二终端设备放音。
  26. 根据权利要求24或25所述的方法,其特征在于,所述方法还包括:
    所述统一媒体功能网元接收来自所述应用服务器的停止放音指示消息;
    所述统一媒体功能网元根据所述停止放音指示消息,取消关联所述第一媒体端点与所述媒体重定向端点,或者,取消关联所述第二媒体端点与所述媒体重定向端点。
  27. 根据权利要求23所述的方法,其特征在于,所述统一媒体功能网元根据所述媒体重定向指示消息创建针对所述媒体重定向指示消息的媒体重定向端点,包括:
    所述统一媒体功能网元根据所述媒体重定向指示消息创建针对第三终端设备的第三媒体端点。
  28. 根据权利要求27所述的方法,其特征在于,所述方法还包括:
    所述统一媒体功能网元接收媒体连接取消消息,所述媒体连接取消消息用于请求删除所述第一媒体端点或所述第二媒体端点;
    所述统一媒体功能网元根据所述媒体连接取消消息删除所述第一媒体端点或所述第二媒体端点。
  29. 根据权利要求23所述的方法,其特征在于,所述方法还包括:
    当所述第一媒体端点与所述媒体重定向端点关联时,所述统一媒体功能网元将所述第一媒体端点与所述第二媒体端点的关联关系变更为所述第一媒体端点与所述媒体重定向端点的关联关系;或者,
    当所述第二媒体端点与所述媒体重定向端点关联时,所述统一媒体功能网元将所述第二媒体端点与所述第一媒体端点的关联关系变更为所述第二媒体端点与所述媒体重定向端点的关联关系。
  30. 根据权利要求21所述的方法,其特征在于,所述方法还包括:
    所述第一接入媒体控制网元向第二接入媒体控制网元发送会话请求消息,所述会话请求消息包括所述第一媒体端点的资源标识和媒体信息;
    所述第二接入媒体控制网元向所述统一媒体功能网元发送关联指示消息,所述关联指示消息用于指示所述统一媒体功能网元关联所述第一媒体端点和所述第二媒体端点;
    所述第二接入媒体控制网元向所述第一接入媒体控制网元发送会话响应消息,所述会话响应消息包括针对第二终端设备创建的第二媒体端点的资源标识和媒体信息。
  31. 根据权利要求21或23所述的方法,其特征在于,所述方法还包括:
    当所述统一媒体功能网元创建针对媒体重定向指示消息的媒体重定向端点时,所述第一接入媒体控制网元向所述统一媒体功能网元发送第一媒体更新指示消息,所述第一媒体更新指示消息用于指示所述统一媒体功能网元将所述第一媒体端点与所述第二媒体端点的关联关系变更为所述第一媒体端点与所述媒体重定向端点的关联关系。
  32. 根据权利要求21或23所述的方法,其特征在于,所述方法还包括:
    当所述统一媒体功能网元创建针对媒体重定向指示消息的媒体重定向端点时,所述第二接入媒体控制网元向所述统一媒体功能网元发送第二媒体更新指示消息,所述第二媒体更新指示消息用于指示所述统一媒体功能网元将所述第二媒体端点与所述第一媒体端点的关联关系变更为所述第二媒体端点与所述媒体重定向端点的关联关系。
PCT/CN2022/105394 2021-08-13 2022-07-13 一种呼叫处理方法、装置及系统 WO2023016177A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22855160.2A EP4351103A4 (en) 2021-08-13 2022-07-13 CALL PROCESSING METHOD, APPARATUS AND SYSTEM

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110929881.5A CN115842807A (zh) 2021-08-13 2021-08-13 一种呼叫处理方法、装置及系统
CN202110929881.5 2021-08-13

Publications (1)

Publication Number Publication Date
WO2023016177A1 true WO2023016177A1 (zh) 2023-02-16



Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108353072A (zh) * 2015-11-09 2018-07-31 诺基亚通信公司 web实时通信场景中的增强媒体平面优化
US20200204595A1 (en) * 2017-06-16 2020-06-25 Telefonaktiebolaget Lm Ericsson (Publ) Media protection within the core network of an ims network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108353072A (zh) * 2015-11-09 2018-07-31 诺基亚通信公司 web实时通信场景中的增强媒体平面优化
US20180316732A1 (en) * 2015-11-09 2018-11-01 Nokia Solutions And Networks Oy Enhanced media plane optimization in web real time communication scenarios
US20200204595A1 (en) * 2017-06-16 2020-06-25 Telefonaktiebolaget Lm Ericsson (Publ) Media protection within the core network of an ims network

Non-Patent Citations (2)

Title
"3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; 3G security; Lawful interception architecture and functions (Release 14)", 3GPP STANDARD ; TECHNICAL SPECIFICATION ; 3GPP TS 33.107, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. SA WG3, no. V14.2.0, 16 June 2017 (2017-06-16), Mobile Competence Centre ; 650, route des Lucioles ; F-06921 Sophia-Antipolis Cedex ; France , pages 1 - 329, XP051298748 *
See also references of EP4351103A4

Also Published As

Publication number Publication date
EP4351103A1 (en) 2024-04-10
CN115842807A (zh) 2023-03-24
EP4351103A4 (en) 2024-04-10

Similar Documents

Publication Publication Date Title
US7359725B2 (en) Push-to-talk apparatus and method for communication between an application server and media resource function processor
CN102347952B (zh) Interactive media session establishment system, method, and apparatus based on the IP multimedia subsystem
US20060256748A1 (en) System and method for interworking between IMS network and H.323 network
US9686709B2 (en) Method, apparatus and system for guaranteeing QoS of communication service in NAT scenario
EP2247031B1 (en) Implementation method, system and device for ims monitoring
WO2012000347A1 (zh) Method, apparatus, and system for cross-platform conference convergence
WO2008000188A1 (fr) Method and system for realizing media stream interaction, media gateway controller, and media gateway
WO2006073488A1 (en) Method for remotely controlling media devices via a communication network
WO2009059559A1 (fr) Multimedia session call control method and application server
US20110116495A1 (en) Method and apparatus for inter-device session transfer between internet protocol (ip) multimedia subsystem (ims) and h.323 based clients
WO2015127793A1 (zh) Recording method, voice exchange device, recording server, and recording system
WO2015096302A1 (zh) NAT traversal method, proxy server, and system based on SIP media capability renegotiation
US8320363B2 (en) Implementation method, system and device of IMS interception
WO2007019777A1 (fr) Session establishment method and session control node
WO2009089797A1 (fr) Method for implementing ringback tone and/or multimedia ringback tone service and for producing an early multimedia SDP request
WO2009121284A1 (zh) Method, system, and gateway for providing intelligent services
WO2023016177A1 (zh) Call processing method, apparatus, and system
WO2023005316A1 (zh) Communication method, signaling control network element, media control network element, and communication system
WO2007115472A1 (fr) Method for realizing multisession, and system and device for realizing multisession
WO2009121310A1 (zh) Gateway selection method, system, and device
WO2023016172A1 (zh) Call processing method, apparatus, and system
CN105391876A (zh) Method and apparatus for providing media services for a call
WO2020192435A1 (zh) Method and application server for playing multimedia ringing and ringback tones
WO2009030171A1 (fr) Media service implementation method, communication system, and related devices
WO2006111062A1 (fr) Calling method between terminals of a packet multimedia communication system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22855160

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022855160

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2022855160

Country of ref document: EP

Effective date: 20240104

NENP Non-entry into the national phase

Ref country code: DE