WO2024067309A1 - Location guidance processing method and apparatus, storage medium, and electronic device - Google Patents

Location guidance processing method and apparatus, storage medium, and electronic device

Info

Publication number
WO2024067309A1
WO2024067309A1 (PCT/CN2023/120163, CN2023120163W)
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
video
audio
media
user
Prior art date
Application number
PCT/CN2023/120163
Other languages
English (en)
Chinese (zh)
Inventor
靳彬
韩晶晶
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Publication of WO2024067309A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02: Services making use of location information

Definitions

  • the embodiments of the present disclosure relate to the field of communications, and in particular, to a location guidance processing method, device, storage medium, and electronic device.
  • the location information of the calling and called terminals is provided through call signaling before the call is answered, but the terminal location information cannot be provided in real time during the call.
  • the existing audio and video communication system cannot provide a real-time location information service. For example, during an express delivery, the recipient cannot accurately guide the courier's delivery route in real time. Therefore, the existing communication system, which only uses the signaling channel to transmit location information, is increasingly unable to meet the real-time communication requirements of the 5G era.
  • the embodiments of the present disclosure provide a location guidance processing method, device, storage medium and electronic device to at least solve the problem in the related art that terminal location information cannot be provided during a call.
  • a location guidance processing method, comprising: during an audio or video call between a first terminal and a second terminal, receiving a target location guidance request initiated by the first terminal through an established data channel, wherein the target location guidance request carries destination information; releasing the audio and video call of the first terminal, and anchoring audio and video media for the second terminal; and performing target location guidance processing on the second terminal based on the destination information through a video media channel established with the second terminal.
  • a position guidance processing device comprising:
  • a receiving module configured to receive a target location guidance request initiated by the first terminal through an established data channel during an audio or video call between the first terminal and the second terminal, wherein the target location guidance request carries destination information;
  • an anchoring module configured to release the audio and video call of the first terminal and anchor audio and video media for the second terminal;
  • the location guidance module is configured to perform target location guidance processing on the second terminal based on the destination information through a video media channel established with the second terminal.
  • a computer-readable storage medium in which a computer program is stored, wherein the computer program is configured to execute the steps of any of the above method embodiments when running.
  • an electronic device including a memory and a processor, wherein the memory stores a computer program, and the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
  • FIG. 1 is a hardware structure block diagram of a computer terminal of a location guidance processing method according to an embodiment of the present disclosure;
  • FIG. 2 is a flow chart of a location guidance processing method according to an embodiment of the present disclosure;
  • FIG. 3 is a block diagram of a location information control system according to the present embodiment;
  • FIG. 4 is a flow chart of location information collection, learning, and guidance according to the present embodiment;
  • FIG. 5 is a flow chart of using audio and video channels to learn location information according to this embodiment;
  • FIG. 6 is a flow chart of using a data channel and an audio and video channel to learn location information according to this embodiment;
  • FIG. 7 is a flow chart of user A initiating location information guidance using a data channel and an audio and video channel according to this embodiment;
  • FIG. 8 is a flow chart of location information guidance initiated by user B using a data channel and an audio and video channel according to this embodiment;
  • FIG. 9 is a flow chart of user A using audio and video channels to perform manual location information guidance and automatic location information learning according to this embodiment;
  • FIG. 10 is a flow chart of user A using a data channel and an audio and video channel to perform manual location information guidance and automatic location information learning according to this embodiment;
  • FIG. 11 is a block diagram of a location guidance processing device according to an embodiment of the present disclosure.
  • FIG. 1 is a hardware structure block diagram of a computer terminal of the location guidance processing method of the embodiment of the present disclosure.
  • the computer terminal may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include but is not limited to a processing device such as a microprocessor (MCU) or a programmable logic device) and a memory 104 configured to store data; the computer terminal may also include a transmission device 106 with communication functions and an input/output device 108.
  • FIG. 1 is only for illustration and does not limit the structure of the above-mentioned computer terminal.
  • the computer terminal may also include more or fewer components than those shown in FIG. 1 , or have a configuration different from that shown in FIG. 1 .
  • the memory 104 may be configured to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the location guidance processing method in the embodiment of the present disclosure.
  • the processor 102 executes various functional applications and location guidance processing by running the computer program stored in the memory 104, that is, implementing the above method.
  • the memory 104 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 104 may further include a memory remotely arranged relative to the processor 102, and these remote memories may be connected to the computer terminal via a network. Examples of the above-mentioned network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the transmission device 106 is configured to receive or send data via a network.
  • Specific examples of the above-mentioned network may include a wireless network provided by a communication provider of a computer terminal.
  • the transmission device 106 includes a network adapter (Network Interface Controller, referred to as NIC), which can be connected to other network devices through a base station so as to communicate with the Internet.
  • the transmission device 106 can be a radio frequency (Radio Frequency, referred to as RF) module, which is configured to communicate with the Internet wirelessly.
  • FIG. 2 is a flow chart of the location guidance processing method according to an embodiment of the present disclosure. As shown in FIG. 2 , the process includes the following steps:
  • Step S202 during the audio or video call between the first terminal and the second terminal, receiving a target location guidance request initiated by the first terminal through the established data channel, wherein the target location guidance request carries destination information;
  • step S202 may specifically include: receiving the target location guidance request initiated by the first terminal through a data channel established with the first terminal; or receiving a target location guidance request initiated by the second terminal through a data channel established with the second terminal.
  • Step S204 releasing the audio and video call of the first terminal, and anchoring the audio and video media to the second terminal;
  • anchoring the audio and video media for the second terminal may specifically include: applying for audio and video anchoring media for the second terminal, sending the audio and video anchoring media to the second terminal; and transferring the second terminal from the audio and video call to the audio and video anchoring media.
  • Step S206 performing target location guidance processing on the second terminal based on the destination information through the video media channel established with the second terminal.
  • the above-mentioned step S206 may specifically include: receiving the route video image transmitted in real time by the second terminal through the data channel established with the second terminal; comparing the route video image transmitted in real time by the second terminal with a preset geographic location information library to obtain the video location information of the second terminal; determining a recommended route based on the video location information of the second terminal and the destination information; and guiding the second terminal to the target location according to the recommended route through the data channel established with the second terminal. Further, audio and video prompt content may be generated according to the recommended route, and the second terminal may be guided to the target location based on the audio and video prompt content through the data channel established with the second terminal.
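The comparison-and-guidance steps of S206 can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes a toy geographic location information library of landmark feature vectors and inter-landmark distances (all names and values are hypothetical), matches a frame's features to the library by nearest neighbour, and computes a recommended route with a shortest-path search.

```python
import math
import heapq

# Hypothetical geographic location library: landmark -> feature vector and
# neighbouring landmarks with walking distances (illustrative values only).
GEO_LIBRARY = {
    "gate":   {"feature": (0.9, 0.1), "edges": {"lobby": 30.0}},
    "lobby":  {"feature": (0.5, 0.5), "edges": {"gate": 30.0, "unit_3": 45.0}},
    "unit_3": {"feature": (0.1, 0.9), "edges": {"lobby": 45.0}},
}

def locate(frame_feature):
    """Match a route-video frame's feature vector against the library
    (nearest neighbour) to obtain the terminal's video location."""
    return min(GEO_LIBRARY, key=lambda n: math.dist(frame_feature, GEO_LIBRARY[n]["feature"]))

def recommend_route(start, destination):
    """Dijkstra over the location graph; returns the landmark list of the route."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, dist in GEO_LIBRARY[node]["edges"].items():
            heapq.heappush(queue, (cost + dist, nxt, path + [nxt]))
    return []

position = locate((0.85, 0.15))            # the frame looks most like the gate
route = recommend_route(position, "unit_3")
prompts = [f"Proceed to {n}" for n in route[1:]]  # audio/video prompt content
```

In a real system the feature matching would be a video-recognition model and the graph would come from the learned location information database; the structure of the loop, however, follows the step order of S206.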
  • the method also includes: establishing an audio and video channel with the collection terminal through the IMS system, or establishing a data channel and an audio and video channel with the first terminal through the IMS system; receiving a control instruction or an audio and video call request through the audio and video channel, and collecting geographic location information according to the control instruction or the audio and video call request; and storing the geographic location information in the geographic location information library. Further, the method includes: receiving a call request or a control instruction from the collection terminal; determining that the collection terminal belongs to a whitelist for location information learning, and performing audio and video media anchoring on the collection terminal; and receiving audio and video from the collection terminal, and performing location information learning based on the audio and video to obtain the geographic location information.
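The whitelist check and learning flow described above can be condensed into a small sketch; the phone number, field names, and return values are assumptions for illustration, and media anchoring is reduced to a comment:

```python
# Hypothetical whitelist of numbers allowed to feed the learning center.
LEARNING_WHITELIST = {"+8613800000001"}

def handle_collection_call(caller, geo_library, media_frames):
    """Collection-center sketch: reject non-whitelisted callers, otherwise
    learn landmark features from the anchored audio/video media."""
    if caller not in LEARNING_WHITELIST:
        return "rejected"
    # Audio and video media anchoring would happen here; this sketch only
    # models the learning step that follows it.
    for frame in media_frames:
        geo_library[frame["landmark"]] = frame["feature"]
    return "learned"
```

A caller outside the whitelist never reaches the anchoring or learning stage, which mirrors the ordering of the steps in the method above.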
  • the 3GPP R16 specification defines the entire process of negotiation and creation of the IMS data channel between the terminal device and the Internet, which has the advantages of security, practicality and stability.
  • This embodiment provides a location information control system based on the IMS data channel and audio and video media channel architecture, including: a location information collection and learning center and a location information guidance center.
  • The location information collection and learning center is responsible for communicating with users, collecting location information data, performing automatic identification and auxiliary correction, and saving the results to the location information database.
  • The location information guidance center is responsible for communicating with users' various handheld devices (mobile phones, AR devices, etc.) and providing users with real-time route guidance based on the generated location information database.
  • the IMS data channel is created separately from the audio and video channels.
  • Figure 3 is a block diagram of a location information control system according to this embodiment, as shown in Figure 3, including:
  • The location information collection and learning center establishes data channels and audio and video channels with end users through the IMS system, receives control instructions or call requests, and performs logical control processing. Functionally, it includes the telephone call function of the call entity and the audio and video recognition function of the media entity, and mainly generates and saves geographic location information database information.
  • the Location Information Collection Learning Center includes the following functions:
  • The telephone call function of the call entity, responsible for receiving external voice/video calls, text messages, and other media, and for sending and receiving media streams;
  • The audio and video recognition function of the media entity, responsible for recognizing the voice media stream of the call and converting it into text information, and for performing feature recognition on the video media stream of the call, including road sign markings, street sign recognition, etc., to generate the data required for the geographic location information database.
  • The location information guidance center establishes data channels and audio and video channels with end users through the IMS system, receives control instructions or call requests, and performs logical control processing. Functionally, it includes the telephone call function of the call entity, the audio and video recognition/marking function of the media entity, the 2D/3D scene rendering function of the media entity, etc., providing immersive user experience services.
  • the Location Information Guidance Center includes the following functions:
  • The telephone call function of the call entity, responsible for receiving external voice/video calls, text messages, and other media, and for sending and receiving media streams;
  • The audio and video recognition/marking function of the media entity, responsible for recognizing the media stream of the call and converting it into text information, and for performing feature recognition and comparison on the video media stream of the call, including road sign markings and street entity sign recognition, marking key geographic locations on the map and transmitting them to the user terminal through the video stream.
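The marking function can be illustrated with a minimal sketch: landmarks recognized in the current video frame are looked up in a hypothetical geographic library, and matches become map markers to be overlaid on the stream. All identifiers and coordinates here are illustrative assumptions, not part of the patent.

```python
def mark_key_locations(frame_landmarks, geo_library):
    """Guidance-center sketch: turn recognized frame landmarks that exist in
    the geographic library into map markers for the user's video stream."""
    markers = []
    for name, (x, y) in frame_landmarks.items():
        if name in geo_library:  # only known key locations get marked
            markers.append({"label": geo_library[name], "pos": (x, y)})
    return markers

# One recognized street sign is in the library, the poster is not.
markers = mark_key_locations(
    {"street_sign_42": (120, 80), "unknown_poster": (300, 40)},
    {"street_sign_42": "Huaming Road"},
)
```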
  • a target location guidance request initiated by the first terminal is received through an established data channel, wherein the target location guidance request carries destination information; the audio or video call of the first terminal is released, and audio or video media is anchored to the second terminal; and target location guidance processing is performed on the second terminal based on the destination information through a video media channel established with the second terminal.
  • FIG4 is a flow chart of location information collection learning and guidance according to this embodiment, as shown in FIG4 , including:
  • S401 user terminal equipment, including mobile phones or handheld AR devices, etc.
  • The location information collection, learning, and guidance system, including the call entity, media entity, and business application entity, establishes data channels and audio and video channels with the end user through the IMS system.
  • Logical control processing is performed by receiving control instructions or call requests. After receiving a call request or control instruction, the system identifies whether data channel processing is required, establishes a data channel with the user's terminal device, performs audio and video media anchoring, and carries out location information collection and learning or real-time location guidance.
  • SBC/P-CSCF access control entity
  • Session control entity I/S-CSCF (Interrogating/Serving Call Session Control Function): provides registration authentication, session control, call routing, and other basic functions in the IMS network for multi-application terminals, and can trigger calls to the call entity. There are no special requirements on it in the present disclosure.
  • HSS Home Subscriber Server
  • Call entity: as a signaling-plane control network element of the multi-application system, it undertakes the IMS call management capability; as a capability network element for multi-application access, it provides external exposure of communication capabilities, which includes:
  • Call entities can implement call control of audio and video calls and transparent data channels, as well as application for media service resources, by providing open interfaces.
  • manage media entities including but not limited to the application, modification, and deletion of transparent data channels, the application, modification, and deletion of audio and video resources, and the application, modification, and deletion of voice recognition and video recognition capabilities;
  • Media entity: as a media-plane control network element of the multi-application system, it undertakes IMS media service capabilities, specifically including:
  • the media entity receives application data from the application entity and forwards it to the terminal through the data channel; or the terminal sends application data to the media entity through the data channel, and the media entity extracts the application data and forwards it to the application entity.
  • Business application entity: based on the IMS system, it provides the control and processing logic of the location information collection, learning, and guidance system. It connects with the call entity, obtains session event information from the call entity, and controls the session according to the specific business logic, including but not limited to modifying the media path of the session and anchoring the session media to the media entity; it connects with the media entity, sends application data to the terminal through the data channel, and receives application data and business control instructions from the terminal side through the data channel of the media entity.
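The media entity's data-channel forwarding role can be sketched as a simple relay. The JSON message shape is an assumption for illustration, since the patent does not define an application-data format:

```python
import json

def media_entity_forward(raw_dc_message, to_application_entity):
    """Media-entity sketch: extract application data from a data-channel
    message and forward it to the business application entity."""
    payload = json.loads(raw_dc_message)
    to_application_entity(payload)

# The application entity is modeled as a callback collecting messages.
received = []
media_entity_forward('{"type": "guide_request", "destination": "Building 3"}', received.append)
```

The reverse direction (application entity to terminal) is symmetric: the media entity serializes the payload and writes it to the terminal's data channel.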
  • User A dials the learning platform number, carrying audio and video media, which is routed to the calling entity through the IMS core network.
  • the calling entity reports the call start event to the business application entity, which determines from the called party information that the called number is a location information learning number, and issues a media anchoring indication.
  • the calling entity applies for media from the media entity.
  • the media entity returns the media to the calling entity; the calling entity sends a response message carrying the media to user A, and notifies the business application entity of the success of the media anchoring. In this way, the audio and video channel between user A and the media entity is established.
  • After receiving the notification of successful media anchoring, the business application entity sends a location information learning indication, and a prompt tone is played to user A: "Start learning, please move."
  • FIG5 is a flow chart of using audio and video channels to learn location information according to this embodiment, as shown in FIG5,
  • This embodiment is based on the premise that the user's mobile phone number has been set as a whitelist in the location information collection and learning center.
  • Step S501 the mobile phone initiates a video call to the location information collection and learning center, sending INVITE signaling to the SBC of the IMS system;
  • Step S502 the SBC forwards the INVITE message to the S-CSCF;
  • Step S503 the S-CSCF forwards the INVITE request to calling entity-A through routing;
  • Step S504 the call start event is reported to the service application entity-A;
  • Step S505 the service application entity-A determines that the called number is a location information learning number, and then issues a media anchoring instruction;
  • Step S506 the user SDP is carried to apply for media plane media;
  • Step S507 media entity-A returns the media plane media;
  • Step S508 a 200 OK message carrying the media plane media is sent in response to the S-CSCF;
  • Step S509 the 200 OK response message is forwarded to the SBC;
  • Step S510 UE-A receives the 200 OK response message, and the media negotiation is completed;
  • Step S511 a media anchoring success notification message is sent to the service application entity-A;
  • Steps S512-S513 media entity-A receives the location information learning instruction and plays a prompt tone to A: "Start learning, please move"; it then replies with a location information learning response.
  • Step S514 The location information collection center receives the video image of the user terminal in real time, analyzes the video stream information in real time, automatically identifies video feature information, and learns the location path.
  • media entity-A recognizes the user's voice and converts it into an instruction to save the learned map path, and the learning is completed.
  • media entity-A plays a prompt tone to the user: "The learning result has been saved successfully; location information learning is now complete."
  • User A dials the learning platform number, carrying audio and video media and data channel media, which are routed to the calling entity through the IMS core network.
  • the calling entity reports the call start event to the business application entity, which determines that the called party is the location information learning number and issues a media anchoring instruction.
  • the calling entity applies for all media from the media entity.
  • After the media entity returns the corresponding media to the calling entity, the calling entity sends a response message carrying this media to user A, and notifies the business application entity of the successful media anchoring.
  • User A is guided through the data channel to download and update, or install, the location sharing application; this application is then used to trigger the establishment of the application data channel.
  • FIG6 is a flow chart of using a data channel and an audio and video channel to learn position information according to this embodiment, as shown in FIG6 , including:
  • This embodiment uses a data channel, and the user terminal can use the application to perform richer interaction and information transmission methods to learn location information.
  • Step S601 the mobile phone initiates a call to the location information collection and learning center with video media and data channel media, sending INVITE signaling to the SBC of the IMS system;
  • Step S602 the SBC forwards the INVITE message to the S-CSCF;
  • Step S603 the S-CSCF forwards the INVITE request to calling entity-A through routing;
  • Step S604 the call start event is reported to the service application entity-A;
  • Step S605 the service application entity-A determines that the called number is a location information learning number, and then issues a media anchoring instruction;
  • Step S606 all of the user's SDP is carried to apply for media;
  • Step S607 media entity-A returns the media plane media;
  • Step S608 a 200 OK message carrying the media plane media is sent in response to the S-CSCF;
  • Step S609 the 200 OK response message is forwarded to the SBC;
  • Step S610 UE-A receives the 200 OK response message, and the media negotiation is completed;
  • Step S611 a media anchoring success notification message is sent to the service application entity-A;
  • Steps S612-S613 user terminal A updates, or downloads and installs, the location sharing application through the data channel, and then uses the application to establish the application data channel.
  • Steps S614-S616 user terminal A initiates a Reinvite to calling entity-A.
  • Step S617 the application data channel establishment event is reported to the business application entity-A.
  • Steps S618-S619 the user SDP is carried to apply for media plane media, and the media plane media response is successfully received.
  • Steps S620-S622 a 200 OK response message carrying the media plane media is replied to user terminal A, and the application data channel negotiation is completed.
  • Step S623 business application entity-A receives a notification of successful establishment of the application data channel and sends a location information learning indication.
  • Steps S624-S625 media entity-A receives the location information learning instruction and sends an instruction such as "Start learning, please move" through the application data channel.
  • The instruction is not limited to voice, video, text, or other forms; media entity-A then replies with a location information learning response.
  • Step S626 The location information collection center receives the video image of the user terminal in real time, analyzes the video stream information in real time, automatically identifies video feature information, and learns the location path.
  • media entity-A recognizes the user's voice and converts it into an instruction to save the learned map path, and the learning is completed.
  • Steps S627-S628 business application entity-A receives the learning completion notification and responds.
  • media entity-A responds to the user through the application data channel that the learning result has been successfully saved. At this point, location information learning is complete.
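For illustration, the SDP offer for such a call, carrying audio, video, and an IMS data channel m-section, might look roughly like the fragment below. Addresses, ports, payload types, and the dcmap stream id are placeholders; the data-channel attributes follow the SDP negotiation style of RFC 8864, with exact requirements set by the 3GPP specifications and operator profile rather than by this patent.

```text
v=0
o=- 0 0 IN IP4 198.51.100.1
s=-
c=IN IP4 198.51.100.1
t=0 0
m=audio 49152 RTP/AVP 96
a=rtpmap:96 AMR-WB/16000
m=video 49154 RTP/AVP 97
a=rtpmap:97 H264/90000
m=application 10001 UDP/DTLS/SCTP webrtc-datachannel
a=sctp-port:5000
a=max-message-size:1024
a=dcmap:0
```

The INVITE in step S601 would carry an offer of this shape; the 200 OK in steps S608-S610 answers it, completing negotiation of both the audio/video media and the data channel.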
  • User A initiates the process of using data channels and audio and video channels for location information guidance.
  • a typical scenario is that user A and user B are on a call, and user B asks about the expected destination.
  • User A's application actively triggers the establishment of an application data channel with the network side.
  • After user A enters the destination in the application, user A clicks the "Start Target Location Guidance" button to initiate a target location guidance request to the business application entity.
  • the business application entity releases user A's call, then applies to the media entity for anchored media and sends it to user B, completing the call transfer of user B to the media entity.
  • After receiving the notification of successful media anchoring, the business application entity initiates target location guidance.
  • FIG. 7 is a flowchart of user A initiating the use of data channels and audio and video channels for location information guidance according to this embodiment, as shown in Figure 7, including:
  • user A interacts by establishing a data channel through the call entity, media entity and business application entity on this side, and transfers user B to the location information guidance center, thereby completing the location information guidance and reaching the destination.
  • Step S701 User A and user B establish a video call.
  • Steps S702-S703 The application of user terminal A triggers the establishment of an application data channel with the network side, thereby completing the establishment of the data channel.
  • Steps S704-S705 After user A inputs the destination information, user A clicks the "Start Target Location Guidance" button to send a target location guidance request to the business application entity.
  • Step S710 The calling entity receives an instruction to anchor the media of user B.
  • Steps S711-S712 the media entity receives the media request for applying for the media plane, and replies with a media plane media response to the calling entity.
  • Steps S713-S714 The calling entity sends a Reinvite with the media plane SDP to user B, and user B replies 200 OK with its own SDP.
  • Steps S715-S716 interact with the media entity to update the user B media and return the media plane media to complete the media negotiation.
  • Step S717 reply ACK to user B.
  • Step S718 The service application entity receives the media anchoring success notification message and starts sending target location guidance notifications.
  • the media entity receives the target location guidance instructions and, based on the video scene transmitted in real time by user B, performs feature recognition and recommended-route guidance to the destination, for example through voice notification, map marking, and 2D/3D scene display.
  • Steps S722-S723, end the call after completing the target location guidance.
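The business application entity's handling of the "Start Target Location Guidance" request in the flow above can be sketched as follows. The message format and callback names are assumptions for illustration, not defined by the patent; only the order of actions (release user A's call, anchor user B's media, then guide) comes from the flow.

```python
import json

def on_data_channel_message(message, release_call, anchor_peer, start_guidance):
    """Business-application sketch: on a guidance request from user A's
    application data channel, release A's leg, anchor B, start guidance."""
    request = json.loads(message)
    if request.get("type") != "start_target_location_guidance":
        return False  # not a guidance request; ignore
    release_call("A")                       # release user A's call leg
    anchor_peer("B")                        # transfer user B to anchored media
    start_guidance(request["destination"])  # guide user B to the destination
    return True

# Model the three downstream actions as callbacks recording what happened.
log = []
on_data_channel_message(
    json.dumps({"type": "start_target_location_guidance", "destination": "Building 3"}),
    lambda u: log.append(("release", u)),
    lambda u: log.append(("anchor", u)),
    lambda d: log.append(("guide", d)),
)
```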
  • User B initiates the process of using the data channel and the audio and video channel for location information guidance.
  • a typical scenario is that user A and user B are on a call, and user B asks about the expected destination.
  • user B's application triggers the establishment of an application data channel with the network side.
  • After user B enters the destination in the application, user B clicks the "Start Target Location Guidance" button to initiate a target location guidance request to the business application entity.
  • the business application entity will release the call of user A, then apply to the media entity for anchored media and send it to user B, completing the call transfer between user B and the media entity.
  • the business application entity initiates target location guidance.
  • FIG. 8 is a flow chart of user B initiating the use of data channels and audio and video channels for location information guidance according to this embodiment, as shown in Figure 8, including:
  • user B interacts by establishing a data channel through the call entity, media entity and business application entity on this side, and transfers the user himself to the location information guidance center, thereby completing the location information guidance and reaching the destination.
  • Step S801 User A establishes a video call with user B
  • Steps S802-S803 The application of user terminal B triggers the establishment of an application data channel with the network side, thereby completing the establishment of the data channel.
  • Steps S804-S805 After user B inputs the destination information obtained from user A, user B clicks the "Start Target Location Guidance" button to send a target location guidance request to the business application entity.
  • Step S810 The calling entity receives an instruction to anchor the media of user B.
  • Steps S811-S812 the media entity receives the media request for applying for the media plane, and replies with a media plane media response to the calling entity.
  • Steps S815-S816 interact with the media entity to update the user B media and return the media plane media to complete the media negotiation.
  • Step S817 reply ACK to user B.
  • Step S818 The service application entity receives the media anchoring success notification message and starts sending target location guidance notifications.
  • the media entity receives the target location guidance instructions and, based on the video scene transmitted in real time by user B, performs feature recognition and recommended-route guidance to the destination, for example through voice notification, map marking, and 2D/3D scene display.
  • Steps S822-S823, end the call after completing the target location guidance.
  • FIG. 9 is a flow chart of user A using audio and video channels for manual guidance of location information and automatic learning of location information according to this embodiment, as shown in Figure 9, including:
  • user A uses the local call entity, media entity, and business application entity to establish an audio and video channel.
  • the location information collection center simultaneously performs voice recognition and video feature information recognition to generate a geographic location information database; thus, manual location information guidance and location information learning are synchronized.
  • This diagram omits IMS system network element devices such as the SBC and S-CSCF.
  • Steps S901-S907 User terminal A and user terminal B start to establish a video call.
  • User terminal A initiates an Invite request to user terminal B. After media negotiation is completed, user terminal B rings.
  • Step S908 User B picks up and answers, and the calling entity receives the 200 OK message.
  • Step S909 the business application entity receives a response event notification.
  • Step S910 The calling entity receives an instruction to anchor the media of user A and user B to the media entity.
  • Steps S911-S913 The calling entity first sends an ACK, then initiates a Reinvite request to the called user B, and performs media anchoring according to user B's media carried in the 200 OK.
  • Steps S914-S915 The media entity receives the media interface request and replies with the media interface response to the calling entity.
  • Steps S916-S917 User A receives the Update message and performs a media update according to user A's media carried in the 200 OK.
  • Steps S918-S919 the media entity receives the media request for updating the user, and responds to the calling entity with the media interface.
  • Steps S920-S921. ACK messages are sent to user A and user B in turn to complete signaling and media negotiation, and user A and user B start a video call.
  • Step S922 The service application entity receives a notification of successful media anchoring.
  • Step S923 when user A and user B transmit video stream and voice stream in real time for interaction, the location information collection center analyzes the video stream in real time, performs voice recognition and video feature information recognition synchronously, and generates a map location information library.
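The synchronized learning in step S923 can be sketched as pairing recognized speech events (place names user A speaks) with video feature events recognized within a time window, accumulating entries of a map location information library. The event shapes, the time window, and the function name are illustrative assumptions; real systems would feed ASR and vision-model outputs here.

```python
# Hedged sketch of synchronized voice + video feature learning (step S923).
# Speech events are (timestamp, place name); video events are
# (timestamp, set of recognized features). Shapes are assumptions.

def learn_locations(speech_events, video_events, window=2.0):
    """Associate each spoken place name with video features recognized
    within `window` seconds of the utterance."""
    library = []
    for t, name in speech_events:
        features = set()
        for vt, feats in video_events:
            if abs(vt - t) <= window:
                features |= feats
        library.append({"name": name, "features": features})
    return library

speech = [(10.0, "Main Gate"), (30.0, "Lobby")]
video = [(9.5, {"arch"}), (11.0, {"red sign"}), (29.0, {"glass door"})]
lib = learn_locations(speech, video)
```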
  • User A uses the data channel and the audio and video channel to perform manual location information guidance and automatic location information learning processes.
  • After user A establishes a video call with user B, user A's application triggers the establishment of an application data channel with the network side.
  • User A enters the destination in the application and clicks the "Start Target Location Guidance" button to initiate a manual target location guidance request to the service application entity.
  • The call entity, media entity, and service application entity cooperate to transfer the media of user A and user B to the network side for media anchoring.
  • After receiving the notification of successful media anchoring, the business application entity transmits a manual location guidance instruction through the application data channel, in forms not limited to voice, such as text instructions and video.
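A minimal sketch of such a data-channel instruction follows, assuming a JSON message shape; the field names and the set of `kind` values are assumptions, since the text only specifies that the instruction may take forms such as voice, text, or video.

```python
import json

# Hedged sketch of one manual location guidance instruction pushed over the
# application data channel after media anchoring succeeds. Message shape is
# an assumption for illustration.

def build_manual_guidance(kind, payload, destination):
    """Serialize one manual location guidance instruction."""
    if kind not in {"voice", "text", "video"}:
        raise ValueError("unsupported guidance form")
    return json.dumps({
        "type": "manual_location_guidance",
        "kind": kind,
        "payload": payload,
        "destination": destination,
    })

msg = build_manual_guidance("text", "Turn left at the lobby", "Building B")
```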
  • FIG. 10 is a flow chart of user A using the data channel and the audio and video channels for manual location information guidance and automatic location information learning according to this embodiment. As shown in FIG. 10, the flow includes:
  • Network element devices of the IMS system, such as the SBC and S-CSCF, are omitted in the figure.
  • Step S1001 User terminal A and user terminal B complete establishing a video call.
  • Steps S1002-S1003 the application of user terminal A triggers the establishment of an application data channel with the network side, thereby completing the establishment of the data channel.
  • Steps S1004-S1005 After user A inputs the destination information, user A clicks the "Start Manual Guidance to Target Location" button to send a manual guidance request to the business application entity.
  • Step S1006 The calling entity receives an instruction to anchor the media of user A and user B to the media entity.
  • Steps S1007-S1008 The media entity receives the request to apply for media interface media and replies with the media interface media to the calling entity.
  • Steps S1009-S1010 User B receives the Reinvite message and performs a media update according to user B's media carried in the 200 OK.
  • Steps S1011-S1012 the media entity receives the media update request of user B, and replies with the media interface to the calling entity.
  • Step S1013 Send an ACK message to user B to complete the media anchoring of user B.
  • Steps S1014-S1015 User A receives the Reinvite message and performs a media update according to user A's media carried in the 200 OK.
  • Steps S1016-S1017 The media entity receives the media update request of user A and replies with the media interface media to the calling entity.
  • Step S1018 Send an ACK message to user A to complete the media anchoring of user A.
  • Step S1019 The service application entity receives a notification of successful media anchoring.
  • Step S1020 The business application entity sends an instruction to start manual location information guidance to the application program through the data channel.
  • Step S1021 when user A and user B transmit video stream and voice stream in real time for interaction, the location information collection center analyzes the video stream in real time, performs voice recognition and video feature information recognition synchronously, and generates a map location information library.
  • Steps S1022-S1024 after the manual location information guidance is completed, when user B releases the call, the location information collection center synchronously saves the learned map location information.
  • FIG. 11 is a block diagram of the position guidance processing device according to an embodiment of the present disclosure. As shown in FIG. 11 , the device includes:
  • The receiving module 112 is configured to receive, during the audio and video call between the first terminal and the second terminal, a target location guidance request initiated by the first terminal through the established data channel, wherein the target location guidance request carries destination information;
  • The anchoring module 114 is configured to release the audio and video call of the first terminal and anchor the audio and video media of the second terminal;
  • the location guidance module 116 is configured to perform target location guidance processing on the second terminal based on the destination information through a video media channel established with the second terminal.
  • the location guidance module 116 includes:
  • a receiving submodule configured to receive a route video image transmitted in real time by the second terminal through a data channel established with the second terminal;
  • a comparison submodule configured to compare the route video image transmitted in real time by the second terminal with a preset geographic location information library to obtain the video location information of the second terminal;
  • a determination submodule configured to determine a recommended route according to the video location information of the second terminal and the destination information
  • the location guidance submodule is configured to provide target location guidance to the second terminal according to the recommended route through a data channel established with the second terminal.
  • the location guidance submodule is further configured to generate audio and video prompt content according to the recommended route; and provide target location guidance to the second terminal based on the audio and video prompt content through a data channel established with the second terminal.
  • the anchoring module 114 is further configured to apply for audio and video anchoring media for the second terminal, send the audio and video anchoring media to the second terminal, and transfer the second terminal from the audio and video call to the audio and video anchoring media.
  • the receiving module 112 is further configured to receive a target location guidance request initiated by the first terminal through a data channel established with the first terminal; and receive a target location guidance request initiated by the second terminal through a data channel established with the second terminal.
  • the device further comprises:
  • An establishment module configured to establish an audio and video channel with the acquisition terminal through the IMS system, or to establish a data channel and an audio and video channel with the first terminal through the IMS system;
  • a collection module configured to receive a control instruction or an audio and video call request through the audio and video channel, and collect geographic location information according to the control instruction or the audio and video call request;
  • the geographical location information is stored in the geographical location information library.
  • the acquisition module is also configured to receive a call request or control instruction from the acquisition terminal; determine that the acquisition terminal belongs to a whitelist for location information learning, and perform audio and video media anchoring on the acquisition terminal; receive audio and video from the acquisition terminal, and perform location information learning based on the audio and video to obtain geographic location information.
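The modules of FIG. 11 described above can be illustrated as a simple pipeline. Only the module numbers and roles follow the text; the class internals, method names, and data shapes are assumptions for illustration.

```python
# Illustrative wiring of the FIG. 11 device. Module numbers follow the
# description; all internals are assumed, not specified by the disclosure.

class ReceivingModule:            # module 112
    def receive_request(self, data_channel):
        """Receive a target location guidance request carrying destination info."""
        return data_channel["request"]

class AnchoringModule:            # module 114
    def anchor(self, second_terminal):
        """Release the first terminal's call and anchor the second terminal's media."""
        return {"terminal": second_terminal, "anchored": True}

class LocationGuidanceModule:     # module 116
    def guide(self, anchored_media, destination):
        """Perform target location guidance toward the destination."""
        return f"guiding {anchored_media['terminal']} to {destination}"

class PositionGuidanceDevice:
    def __init__(self):
        self.receiving = ReceivingModule()
        self.anchoring = AnchoringModule()
        self.guidance = LocationGuidanceModule()

    def handle(self, data_channel, second_terminal):
        request = self.receiving.receive_request(data_channel)
        media = self.anchoring.anchor(second_terminal)
        return self.guidance.guide(media, request["destination"])

device = PositionGuidanceDevice()
result = device.handle({"request": {"destination": "Building B"}}, "terminal-B")
```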
  • An embodiment of the present disclosure further provides a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the steps of any of the above method embodiments when running.
  • the above-mentioned computer-readable storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a mobile hard disk, a magnetic disk or an optical disk, and other media that can store computer programs.
  • The embodiment of the present disclosure also provides an electronic device, including a memory and a processor, wherein the memory stores a computer program;
  • the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
  • the electronic device may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
  • The modules or steps of the above-mentioned embodiments of the present disclosure can be implemented by a general-purpose computing device. They can be concentrated on a single computing device or distributed over a network of multiple computing devices. They can be implemented by program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be executed in a different order than described here. Alternatively, they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. In this way, the embodiments of the present disclosure are not limited to any specific combination of hardware and software.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Embodiments of the present disclosure include a position guidance processing method and apparatus, a storage medium, and an electronic apparatus. The method comprises: during an audio and video call between a first terminal and a second terminal, receiving, through an established data channel, a target position guidance request initiated by the first terminal, the target position guidance request carrying destination information; releasing the audio and video call of the first terminal, and performing audio and video media anchoring on the second terminal; and, through a video media channel established with the second terminal, performing target position guidance processing on the second terminal on the basis of the destination information.
PCT/CN2023/120163 2022-09-26 2023-09-20 Procédé et appareil de traitement de guidage de position, support de stockage et appareil électronique WO2024067309A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211187875.8 2022-09-26
CN202211187875.8A CN117768834A (zh) 2022-09-26 2022-09-26 位置指导处理方法、装置、存储介质及电子装置

Publications (1)

Publication Number Publication Date
WO2024067309A1 true WO2024067309A1 (fr) 2024-04-04

Family

ID=90320673

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/120163 WO2024067309A1 (fr) 2022-09-26 2023-09-20 Procédé et appareil de traitement de guidage de position, support de stockage et appareil électronique

Country Status (2)

Country Link
CN (1) CN117768834A (fr)
WO (1) WO2024067309A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103512582A (zh) * 2013-09-16 2014-01-15 惠州华阳通用电子有限公司 一种基于音频通信的辅助导航方法
CN103618991A (zh) * 2013-11-06 2014-03-05 四川长虹电器股份有限公司 基于移动通信终端设备的位置分享和导航方法及其系统
US20140297178A1 (en) * 2013-01-24 2014-10-02 Tencent Technology (Shenzhen) Company Limited Navigation method, device for navigation and navigation system
CN104296770A (zh) * 2014-10-14 2015-01-21 广东翼卡车联网服务有限公司 终端转发社交软件接收的位置信息的方法、系统及终端
KR20170059716A (ko) * 2015-11-23 2017-05-31 권형석 음성통화만으로 실시간 위치정보를 제공할 수 있는 시스템 및 방법.


Also Published As

Publication number Publication date
CN117768834A (zh) 2024-03-26

Similar Documents

Publication Publication Date Title
US10038772B2 (en) Communication systems and methods
CN113709190B (zh) 业务设置方法和装置、存储介质及电子设备
US9602553B2 (en) Method, apparatus, and system for implementing VOIP call in cloud computing environment
US20150022619A1 (en) System and method for sharing multimedia content using a television receiver during a voice call
CN113785555A (zh) 使用i/o用户设备集合提供通信服务
CN108156634B (zh) 业务处理方法、装置及系统
CN101194443A (zh) 利用终端性能版本来执行组合业务的终端、方法以及系统
EP2974159B1 (fr) Procédé, dispositif et système permettant une communication vocale
CN105704684B (zh) 一种彩铃的实现方法、装置、服务器及系统
CN113660449A (zh) 手势通信方法、装置、存储介质及电子装置
WO2018001294A1 (fr) Procédé de communication basé sur un facilitateur de système de communication de groupe (gcse), et serveur
WO2024067309A1 (fr) Procédé et appareil de traitement de guidage de position, support de stockage et appareil électronique
EP4380296A1 (fr) Procédé et appareil de paiement de commande, et support de stockage, dispositif et système
WO2022142492A1 (fr) Procédé de partage d'écran pour des terminaux mobiles, terminal mobile et support de stockage
CN117412254A (zh) 视频通话控制方法、通信设备以及存储介质
CN107852577B (zh) 一种补充业务实现方法、终端设备和ims服务器
US8588394B2 (en) Content switch for enhancing directory assistance
US20120271959A1 (en) Feature set based content communications systems and methods
US8804928B2 (en) System and method for allowing virtual private network users to obtain presence status and/or location of others on demand
EP3111717B1 (fr) Procédé et système de communication d'ims utilisant des conditions préalables
KR20060010071A (ko) 음성통화 중 파일 전송 시스템 및 그 방법
WO2023015987A1 (fr) Procédé et système de mise en œuvre d'une interprétation simultanée durant un appel, et support de stockage
WO2024108900A1 (fr) Procédé et appareil de vérification de signature électronique
TWI640925B (zh) 支援多元app接取ict網路服務之方法及系統
CN108616485B (zh) 一种基于融合设备的通信方法和设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23870541

Country of ref document: EP

Kind code of ref document: A1