CN114245060B - Path processing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN114245060B
Authority
CN
China
Prior art keywords: call, cooperative, state, voice data, function
Prior art date
Legal status
Active
Application number
CN202210184795.0A
Other languages
Chinese (zh)
Other versions
CN114245060A
Inventor
Liu Nengbin (刘能宾)
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202210184795.0A
Publication of CN114245060A
Application granted
Publication of CN114245060B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/14: Systems for two-way working
    • H04N 7/141: Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/147: Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • H04N 5/00: Details of television systems
    • H04N 5/76: Television signal recording
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/70: Reducing energy consumption in communication networks in wireless communication networks

Abstract

The application discloses a path processing method, apparatus, device, and storage medium, belonging to the field of terminal technologies. The method includes the following steps: if the cooperative call function is switched from a closed state to an open state, detecting the state of the call recording function; if the call recording function is detected to be in the open state, closing a first use case corresponding to the call recording function so as to close a first path and a second path, and opening a second use case and a third use case corresponding to the cooperative call function so as to open the second path and a third path; and sending call uplink voice data to the far-end call device through the third path, sending call downlink voice data acquired through the second path to the second device for playing, and mixing the call uplink voice data sent through the third path with the call downlink voice data acquired through the second path to obtain call recording data. The method can avoid path conflicts when the cooperative call and call recording are used at the same time, and can ensure that call voice data is acquired normally.

Description

Path processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a method, an apparatus, a device, and a storage medium for path processing.
Background
With the rapid development of terminal technology, the multi-screen cooperation technology is widely applied. Multi-screen cooperation means that after a first device (such as a mobile phone) is connected to a second device (such as a tablet computer), the screen of the first device is mirrored in the interface of the second device. The user can then make the first device execute a corresponding function by operating the mirrored screen of the first device displayed in the interface of the second device.
In a multi-screen cooperation scenario, if the first device is in a call with a far-end call device, collection and playback of the call voice can be switched to the second device, which is referred to as a cooperative call. Specifically, when the first device and the second device perform a cooperative call, the microphone of the second device collects the call voice of the local user and sends it to the first device, which forwards it to the far-end call device; the far-end call device sends the call voice of the far-end user to the first device, which forwards it to the second device for playback through the speaker of the second device.
However, when the first device and the second device are performing a cooperative call, if call recording is started on the first device, acquisition of the call voice for the cooperative call may conflict with acquisition of the call voice for the recording, so that neither function may obtain the call voice normally.
Disclosure of Invention
The application provides a path processing method, apparatus, device, storage medium, and program product, which can avoid path conflicts when a cooperative call and call recording are used at the same time and ensure that both the cooperative call and the call recording can acquire call voice data normally. The technical scheme is as follows:
In a first aspect, a path processing method is provided and applied to a first device. In the method, if the cooperative call function is switched from a closed state to an open state, the state of the call recording function is detected. If the call recording function is detected to be in the open state, a first use case corresponding to the call recording function is closed, and a second use case and a third use case corresponding to the cooperative call function are opened, where the first use case is used for opening a first path and a second path, the second use case is used for opening the second path, and the third use case is used for opening the third path. Then, call uplink voice data is sent through the third path to the far-end call device in a call with the first device, call downlink voice data obtained through the second path is sent to the second device for playing, and the call uplink voice data sent through the third path is mixed with the call downlink voice data obtained through the second path to obtain call recording data.
When the first device and the second device are in a multi-screen coordination state, a screen picture of the first device is displayed on an interface of the second device. In this case, the user may cause the first device to execute a corresponding function by operating the screen of the first device displayed in the interface of the second device.
The cooperative call function indicates that the second device, which is in a multi-screen cooperation state with the first device, collects and plays the call voice. The first device can enable the cooperative call function so as to collect and play call voice through the second device.
The first path is used for acquiring the call uplink voice data collected by the first device, the second path is used for acquiring the call downlink voice data sent to the first device by the far-end call device, and the third path is used for sending the call uplink voice data, which the second device sends to the first device, on to the far-end call device.
In the application, when the cooperative call function and the call recording function are both in the on state, the first use case corresponding to the call recording function is not opened; only the second use case and the third use case corresponding to the cooperative call function are opened. The first device can acquire, through the second path opened by the second use case, the call downlink voice data sent to it by the far-end call device and send that data to the second device for playing, and can send, through the third path opened by the third use case, the call uplink voice data sent to it by the second device on to the far-end call device, thereby realizing the cooperative call. Further, the first device can mix the call uplink voice data sent through the third path with the call downlink voice data acquired through the second path to obtain call recording data, thereby realizing call recording. Therefore, the path conflict that would otherwise arise when the cooperative call and call recording are used at the same time is avoided through a simple processing flow, both functions can acquire call voice data normally, and the logic is easy to understand and maintain.
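The use-case switching described above can be sketched in code. The sketch below is illustrative only: the class and method names (UseCase, PathManager, on_coop_call_enabled) and the numeric identifiers are assumptions for the example, not names from the patent.

```python
# Hypothetical sketch of the first-aspect path handling.
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    paths: tuple        # paths this use case opens while it is active
    active: bool = False


class PathManager:
    """Tracks which audio paths are open based on the active use cases."""

    def __init__(self):
        # Use case 1 (call recording) opens paths 1 and 2;
        # use cases 2 and 3 (cooperative call) open paths 2 and 3.
        self.use_cases = {
            1: UseCase("call_recording", (1, 2)),
            2: UseCase("coop_downlink", (2,)),
            3: UseCase("coop_uplink", (3,)),
        }

    def open_paths(self):
        """Return the set of paths opened by the active use cases."""
        opened = set()
        for uc in self.use_cases.values():
            if uc.active:
                opened.update(uc.paths)
        return opened

    def on_coop_call_enabled(self, recording_on: bool):
        # If call recording is already on, close use case 1 first so that
        # its paths do not conflict with the cooperative-call use cases;
        # recording is then served by mixing the path-3 uplink data with
        # the path-2 downlink data.
        if recording_on:
            self.use_cases[1].active = False
        self.use_cases[2].active = True
        self.use_cases[3].active = True
```

The key point the sketch shows is that path 2 is shared: closing use case 1 before opening use cases 2 and 3 avoids two owners of the same path.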
Optionally, after the state of the call recording function is detected upon the cooperative call function switching from the closed state to the open state, if the call recording function is detected to be in the closed state, the second use case and the third use case are opened; the call uplink voice data is then sent to the far-end call device through the third path, and the call downlink voice data acquired through the second path is sent to the second device for playing.
When the cooperative call function is switched from the closed state to the open state, if the first device detects that the call recording function is in the closed state, the first use case corresponding to the call recording function is not currently open. The second use case and the third use case corresponding to the cooperative call function can therefore be opened directly, and the cooperative call can then acquire call voice data normally through the second path opened by the second use case and the third path opened by the third use case.
Optionally, if the cooperative call function is switched from the on state to the off state, the state of the call recording function is detected to determine whether to open the first use case corresponding to the call recording function. If the call recording function is detected to be in the closed state, the second use case and the third use case are closed. If the call recording function is detected to be in the on state, the second use case and the third use case are closed, the first use case is opened so as to open the first path and the second path, and the call uplink voice data acquired through the first path is mixed with the call downlink voice data acquired through the second path to obtain call recording data, so that normal call recording can be guaranteed.
Optionally, if the call recording function is switched from the closed state to the open state, the state of the cooperative call function is detected to determine which use cases need to be opened or closed. If the cooperative call function is detected to be in the closed state, the first use case is opened, and the call uplink voice data acquired through the first path is mixed with the call downlink voice data acquired through the second path to obtain call recording data.
If the cooperative call function is detected to be in the open state, the operation of opening the first use case is not executed. With the cooperative call function on, the first device can realize call recording directly through the second use case and the third use case corresponding to the cooperative call function: it mixes the call downlink voice data acquired through the second path opened by the second use case with the call uplink voice data sent through the third path opened by the third use case to obtain call recording data. The first device therefore need not open the first use case in this case.
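The mixing of uplink and downlink voice data mentioned above can be sketched as a simple PCM mix. This is a generic illustration under the assumption of signed 16-bit samples with saturating addition; the function name and sample format are not specified by the patent.

```python
# Illustrative mixing of one uplink frame and one downlink frame of
# signed 16-bit PCM into call recording data.
import array


def mix_frames(uplink: array.array, downlink: array.array) -> array.array:
    """Mix two equal-length blocks of signed 16-bit PCM samples,
    clamping each sum to the int16 range to avoid wrap-around."""
    if len(uplink) != len(downlink):
        raise ValueError("frames must have the same length")
    mixed = array.array("h")
    for u, d in zip(uplink, downlink):
        s = u + d
        # Saturate rather than overflow: int16 range is [-32768, 32767].
        mixed.append(max(-32768, min(32767, s)))
    return mixed
```

In the patent's terms, the uplink frame would come from the third path and the downlink frame from the second path, and the mixed output is the call recording data.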
Optionally, if the call recording function is switched from the on state to the off state, the state of the cooperative call function is detected. If the cooperative call function is detected to be in the closed state, the first use case is open, so it needs to be closed at this time to end the call recording. If the first device detects that the cooperative call function is in the open state, which indicates that the first use case is not open, the operation of closing the first use case is not executed; instead, the mixing of the call uplink voice data sent through the third path with the call downlink voice data obtained through the second path is stopped.
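The four state transitions described in the paragraphs above (cooperative call on/off, call recording on/off) can be collected into one small controller. The sketch is an assumption-laden illustration: the class name, the reconcile step, and the use-case numbering follow the conventions of this document rather than any actual implementation.

```python
# Hedged sketch of the state transitions in the first aspect.
class CallPathController:
    """Decides which use cases are active from the two function states."""

    def __init__(self):
        self.coop_on = False        # cooperative call function state
        self.recording_on = False   # call recording function state
        self.active_use_cases = set()

    def set_coop(self, on: bool):
        self.coop_on = on
        self._reconcile()

    def set_recording(self, on: bool):
        self.recording_on = on
        self._reconcile()

    def _reconcile(self):
        if self.coop_on:
            # Cooperative call on: use cases 2 and 3 serve the call, and
            # recording (if on) is served by mixing the path-3 uplink
            # with the path-2 downlink, so use case 1 stays closed.
            self.active_use_cases = {2, 3}
        elif self.recording_on:
            # Cooperative call off, recording on: use case 1 opens
            # paths 1 and 2 for recording.
            self.active_use_cases = {1}
        else:
            self.active_use_cases = set()
```

Note that turning recording off while the cooperative call is on leaves use cases 2 and 3 untouched, matching the paragraph above: only the mixing stops.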
In a second aspect, a path processing apparatus is provided, which has a function of implementing the behavior of the path processing method in the first aspect. The path processing device comprises at least one module, and the at least one module is used for realizing the path processing method provided by the first aspect.
In a third aspect, a path processing device is provided. The device includes a processor and a memory. The memory is used for storing a program that supports the device in executing the path processing method provided in the first aspect, and for storing data used to implement that method. The processor is configured to execute the program stored in the memory. The path processing device may further include a communication bus for establishing a connection between the processor and the memory.
In a fourth aspect, a computer-readable storage medium is provided, which has instructions stored therein, which when run on a computer, cause the computer to perform the path processing method of the first aspect described above.
In a fifth aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the path processing method of the first aspect.
The technical effects obtained by the second, third, fourth and fifth aspects are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described herein again.
Drawings
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application;
Fig. 2 is a block diagram of a software system of a terminal according to an embodiment of the present application;
Fig. 3 is a schematic interface diagram in a first multi-screen collaborative scene according to an embodiment of the present application;
Fig. 4 is a schematic interface diagram of a tablet computer according to an embodiment of the present application;
Fig. 5 is a schematic interface diagram of a mobile phone according to an embodiment of the present application;
Fig. 6 is a schematic interface diagram of another tablet computer according to an embodiment of the present application;
Fig. 7 is a schematic interface diagram of another mobile phone according to an embodiment of the present application;
Fig. 8 is a schematic diagram of a multi-screen collaborative system according to an embodiment of the present application;
Fig. 9 is a schematic interface diagram in a second multi-screen collaborative scene according to an embodiment of the present application;
Fig. 10 is a schematic interface diagram in a third multi-screen collaborative scene according to an embodiment of the present application;
Fig. 11 is a schematic interface diagram in a fourth multi-screen collaborative scene according to an embodiment of the present application;
Fig. 12 is a schematic interface diagram in a fifth multi-screen collaborative scene according to an embodiment of the present application;
Fig. 13 is a schematic diagram of a use case and path according to an embodiment of the present application;
Fig. 14 is a flowchart of a path processing method according to an embodiment of the present application;
Fig. 15 is a flowchart of another path processing method according to an embodiment of the present application;
Fig. 16 is a flowchart of a call recording process according to an embodiment of the present application;
Fig. 17 is a schematic diagram of another use case and path according to an embodiment of the present application;
Fig. 18 is a schematic diagram of yet another use case and path according to an embodiment of the present application;
Fig. 19 is a schematic structural diagram of a path processing apparatus according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that reference to "a plurality" in this application means two or more. In the description of the present application, "/" means "or" unless otherwise stated; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, for the convenience of clearly describing the technical solutions of the present application, the terms "first", "second", and the like are used to distinguish identical or similar items having substantially the same functions and effects. Those skilled in the art will appreciate that the terms "first", "second", etc. do not limit quantity, order, or importance.
Reference throughout this application to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. Furthermore, the terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The following describes a terminal according to an embodiment of the present application.
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application. Referring to fig. 1, the terminal 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation to the terminal 100. In other embodiments of the present application, terminal 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the terminal 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses repeatedly. If the processor 110 needs to reuse the instructions or data, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thereby improves system efficiency.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the terminal 100. The charging management module 140 may also supply power to the terminal 100 through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the terminal 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication and the like applied to the terminal 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the terminal 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
The terminal 100 implements a display function through the GPU, the display screen 194, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The terminal 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, and the application processor, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the terminal 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving files of music, video, etc. in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the terminal 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (e.g., audio data, a phonebook, etc.) created during use of the terminal 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The terminal 100 can implement audio functions, such as music playing, recording, etc., through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be attached to or detached from the terminal 100 by inserting it into or pulling it out of the SIM card interface 195. The terminal 100 may support 1 or N SIM card interfaces, where N is an integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time, and the types of the multiple cards may be the same or different. The SIM card interface 195 is also compatible with different types of SIM cards and may be compatible with external memory cards. The terminal 100 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the terminal 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the terminal 100 and cannot be separated from the terminal 100.
Next, a software system of the terminal 100 will be explained.
The software system of the terminal 100 may adopt a hierarchical architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. In the embodiment of the present application, an Android (Android) system with a layered architecture is taken as an example to exemplarily explain a software system of the terminal 100.
Fig. 2 is a block diagram of a software system of the terminal 100 according to an embodiment of the present application. Referring to fig. 2, the layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into an application layer, an application framework layer, an Android runtime and system library layer, an extension layer, and a kernel layer from top to bottom.
The application layer may include a series of application packages. As shown in fig. 2, the application package may include applications such as multi-screen collaboration, camera, gallery, calendar, call, map, call recording, WLAN, bluetooth, short message, etc. The multi-screen cooperative application program is used for starting a multi-screen cooperative function.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions. As shown in fig. 2, the application Framework layer may include an Audio Framework (Audio Framework), a distributed mobile sensing platform (DMSDP), a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The audio framework is responsible for outputting playback data, collecting recording data, managing audio transactions as a whole, and the like. The DMSDP provides functional support for the multi-screen coordination process; for example, it can implement a cooperative call during multi-screen coordination. The window manager is used to manage window programs; it can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like. The content provider is used to store and retrieve data and make the data accessible to applications; the data may include videos, images, audio, calls made and received, browsing history and bookmarks, a phone book, and the like. The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system can be used to build the display interface of an application, and the display interface may consist of one or more views, such as a view for displaying a short message notification icon, a view for displaying text, and a view for displaying pictures. The phone manager is used to provide the communication functions of the terminal 100, such as management of call states (including connected, disconnected, and the like). The resource manager provides various resources to applications, such as localized strings, icons, pictures, layout files, and video files. The notification manager enables an application to display notification information in the status bar; it can be used to convey notification-type messages that disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to give notice that a download is complete, provide a message alert, and the like.
The notification manager may also be a notification that appears in the form of a chart or scrollbar text at the top status bar of the system, such as a notification of a background running application. The notification manager may also be a notification that appears on the screen in the form of a dialog window, such as prompting a text message in a status bar, sounding a prompt tone, vibrating the electronic device, flashing an indicator light, etc.
The Android runtime includes a core library and a virtual machine, and is responsible for the scheduling and management of the Android system. The core library has two parts: one part is the function libraries that the java language needs to call, and the other part is the core library of Android. The application layer and the application framework layer run in the virtual machine, and the virtual machine executes the java files of these two layers as binary files. The virtual machine performs functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system layer may include a plurality of functional modules, such as: Audio Services (Audio Services), a surface manager (surface manager), Media Libraries (Media Libraries), three-dimensional graphics processing libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), etc. The audio service is both the maker of audio policies, responsible for decisions such as the policy for switching audio devices and the policy for volume adjustment, and the executor of those policies, responsible for managing input and output stream devices and for processing and transmitting audio stream data. The surface manager is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications. The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files, and may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG. The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, composition, layer processing, and the like. The 2D graphics engine is a drawing engine for two-dimensional drawing.
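The policy-maker/executor split of the audio service described above can be illustrated with a minimal sketch. The device names and the priority order below are illustrative assumptions for the sketch, not Android's actual audio-policy tables:

```python
# Minimal sketch of an audio-policy decision: choose the output device for a
# stream by priority among the currently connected devices. Device names and
# priorities are illustrative assumptions.
PRIORITY = ["remote_collab", "bluetooth_sco", "wired_headset", "speaker"]

def select_output_device(connected):
    """Return the highest-priority connected device; the speaker is the fallback."""
    for dev in PRIORITY:
        if dev in connected:
            return dev
    return "speaker"

print(select_output_device({"speaker", "wired_headset"}))   # wired_headset
print(select_output_device({"speaker", "remote_collab"}))   # remote_collab
```

Once the policy maker has chosen a device, the executor side would open or re-route the corresponding input/output streams accordingly.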
The extension layer may also be referred to as a Hardware Abstraction Layer (HAL); it encapsulates the kernel drivers, provides interfaces upward, and hides the implementation details of the lower layers. The extension layer connects upward to the Android runtime and the framework, and downward to the drivers. The extension layer may include an audio hardware abstraction layer (Audio HAL), which is responsible for interaction with the audio hardware devices.
The kernel layer is a layer between hardware and software. It may include a display driver, a camera driver, an audio driver, a sensor driver, a Pulse Code Modulation (PCM) device, and the like. The PCM device may also be referred to as a PCM Device.
The following describes an application scenario related to the embodiment of the present application, taking multi-screen coordination between a mobile phone and a tablet computer as an example.
The mobile phone can communicate with a far-end call device when making a call or receiving an incoming call. The call may include a voice call, a video call, and the like, which is not limited in this embodiment of the application. For example, the call may be a call established through a carrier service, such as a 2G call, a 3G call, a 4G call, or a 5G call.
When the mobile phone and the tablet computer are in the multi-screen cooperative state, if the mobile phone is in a call with the far-end call device, the tablet computer mirrors the call interface of the mobile phone, as shown in fig. 3. In this case, collection and playback of the call voice can be switched to the tablet computer; that is, a cooperative call can be performed. In addition, during the call between the mobile phone and the far-end call device, if the user clicks the recording button 31 in the call interface shown in fig. 3, call recording can be performed.
At present, the path opened by the use case that the mobile phone uses for a cooperative call conflicts with the path opened by the use case that it uses for call recording. Therefore, in the related art, conflict handling is performed for the various scenarios of cooperative call and call recording, with conflict-handling logic designed separately for each scenario. That is, different conflict-handling logic is designed according to the different orders in which the cooperative call and the call recording are opened and closed, so as to open or close a given path and resolve the path conflict. However, because there are many path-conflict scenarios and the implementation is complicated, problems easily arise.
Therefore, the embodiment of the present application optimizes the existing path conflict handling and provides a simple path processing method: when the cooperative call function is switched from the closed state to the open state, the corresponding use case is selectively closed and reopened according to the detected state of the call recording function. This avoids the path conflict problem while keeping the logic easy to understand and maintain.
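The simplified conflict handling just described can be sketched as follows. The use-case names and the close/reopen order are illustrative assumptions derived from the description, not the literal implementation:

```python
# Sketch of the simplified conflict handling: instead of per-scenario conflict
# logic, react only to the cooperative-call switch transition, closing and
# reopening the recording use case around it so the two paths never conflict.
# Use-case names are illustrative assumptions.
class PathManager:
    def __init__(self):
        self.open_use_cases = []          # ordered list of currently open paths

    def _open(self, uc):
        if uc not in self.open_use_cases:
            self.open_use_cases.append(uc)

    def _close(self, uc):
        if uc in self.open_use_cases:
            self.open_use_cases.remove(uc)

    def on_collab_call_enabled(self, recording_active):
        """Cooperative call function switches from closed to open."""
        if recording_active:
            self._close("record_use_case")    # release the conflicting path first
        self._open("collab_call_use_case")
        if recording_active:
            self._open("record_use_case")     # reopen recording on the new path

mgr = PathManager()
mgr._open("record_use_case")                  # call recording already running
mgr.on_collab_call_enabled(recording_active=True)
print(mgr.open_use_cases)   # ['collab_call_use_case', 'record_use_case']
```

Because the decision is made at a single transition point, no separate logic is needed for each possible opening/closing order.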
Several possible connection modes of multi-screen coordination are described below.
1. The connection is established via bluetooth.
For example, if the user wants the mobile phone to cooperate with the tablet computer, the Bluetooth of both the mobile phone and the tablet computer may be turned on first. Then, the user manually turns on the multi-screen cooperative function in the mobile phone. For example, the user may find the multi-screen cooperation switch in the interface of the mobile phone through the path "settings" - "more connections" - "multi-screen cooperation", and set the switch to the on state to start the multi-screen cooperative function of the mobile phone.
Referring to the interface diagram of the tablet computer shown in fig. 4, as shown in (a) of fig. 4, the user slides down a notification panel from the status bar of the tablet computer, and the notification panel includes a "multi-screen collaboration" option 41. The user clicks the "multi-screen collaboration" option 41, and the tablet computer, in response to the user's trigger operation on the option, displays a first prompt window. The first prompt window includes first operation prompt information instructing the user how to operate to implement multi-screen collaboration. For example, as shown in fig. 4 (b), the first operation prompt information may read: "1. Turn on the Bluetooth of your mobile phone and bring it close to this device; after the mobile phone is found, tap 'Connect'. 2. After connection, you can operate the mobile phone on the tablet computer to share data between devices." The user can then perform the corresponding operation according to the first operation prompt information, for example, bringing the mobile phone close to the tablet computer.
In one example, referring to the interface schematic diagram of the mobile phone shown in fig. 5, when the mobile phone finds the tablet computer while approaching it, the mobile phone displays a second prompt window, as shown in fig. 5 (a). The second prompt window includes the prompt content "whether to establish a cooperative connection with the found device", together with a "connect" option 51 and a "cancel" option 52. When the user clicks the "connect" option 51, indicating that the user confirms that the cooperative connection is to be established, the mobile phone responds to the trigger operation on the "connect" option 51 and establishes the cooperative connection with the tablet computer through Bluetooth. When the user clicks the "cancel" option 52, indicating that the user does not want to establish the cooperative connection, the mobile phone responds to the trigger operation on the "cancel" option 52 and does not perform the operation of establishing the cooperative connection. In another example, when the mobile phone finds the tablet computer while approaching it, the second prompt window may not be displayed, and the cooperative connection with the tablet computer is established automatically through Bluetooth.
By way of example and not limitation, while the mobile phone and the tablet computer are establishing the cooperative connection through Bluetooth, the mobile phone may further display a third prompt window indicating that connection is in progress, so as to show the progress of establishing the cooperative connection; for example, the third prompt window shown in (b) in fig. 5 may be displayed. Optionally, the third prompt window includes a "cancel" option, so that the user can cancel the connection at any time if desired.
2. The connection is established by scanning a code.
For example, the user may find a "scan connection" button in the interface of the tablet computer through the path "my mobile phone" - "immediate connection" - "scan connection". When the user clicks the button, the tablet computer, in response to the trigger operation, displays a two-dimensional code for establishing the cooperative connection, for example the two-dimensional code shown in fig. 6. Optionally, the tablet computer may further display second operation prompt information for prompting the user how to operate to implement multi-screen coordination; for example, as shown in fig. 6, the second operation prompt information may be "scan the code with a mobile browser to connect".
In one example, referring to the interface schematic diagram of the mobile phone shown in fig. 7, a user may enter an interface with a "scan" option displayed in a browser (or smart vision) of the mobile phone, for example, may enter an interface of the browser shown in fig. 7 (a), where a "scan" option 71 is displayed. The user can click the "scan" option 71, and the mobile phone starts the camera in response to the triggering operation of the user on the "scan" option 71, and displays the code scanning interface shown in (b) in fig. 7, so that the user can align the camera with the two-dimensional code displayed by the tablet computer to perform code scanning operation.
In one example, after the mobile phone successfully scans the code, it sends a request for establishing the cooperative connection to the tablet computer. After receiving the request, the tablet computer may display a fourth prompt window that includes prompt information asking whether the user agrees to establish the cooperative connection; for example, the prompt information may include the content "device xx requests to establish a cooperative connection with this device; do you agree?", together with an "agree" option and a "reject" option. When the user clicks the "agree" option, indicating that the mobile phone is allowed to establish the cooperative connection with the tablet computer, the tablet computer responds to the trigger operation on the "agree" option and establishes the cooperative connection with the mobile phone. When the user clicks the "reject" option, indicating that the mobile phone is not allowed to establish the cooperative connection with the tablet computer, the tablet computer responds to the trigger operation on the "reject" option and notifies the mobile phone that establishment of the cooperative connection has failed.
It should be noted that the above description takes as an example the user opening the two-dimensional code in the tablet computer through the path "my mobile phone" - "immediate connection" - "scan connection". Alternatively, the two-dimensional code may be opened through other paths. For example, as shown in fig. 4 (b), the first prompt window includes, in addition to the first operation prompt information, the prompt content "if this device cannot be found, you can connect by scanning a code", where the phrase "scanning a code" is tappable. The user can click this content in the first prompt window, and the tablet computer, in response to the trigger operation, displays the two-dimensional code shown in fig. 6. The user can then scan the two-dimensional code displayed by the tablet computer with the mobile phone, thereby establishing the cooperative connection by scanning the code.
3. The connection is established in a touch (NFC tap) manner.
The user can start the NFC and multi-screen cooperative function in both the mobile phone and the tablet computer. Then, the user touches the NFC region on the back of the mobile phone (usually located around the camera on the back of the mobile phone) to the NFC region of the tablet computer (usually located in the lower right corner region of the tablet computer), and the mobile phone and the tablet computer respond to the touch operation of the user and establish the cooperative connection through NFC. Optionally, before the cooperative connection is established through NFC, the tablet computer and the mobile phone may further prompt the user whether to agree to establish the cooperative connection, and after the user agrees to establish the cooperative connection, the mobile phone and the tablet computer perform an operation of establishing the cooperative connection. In one example, when the mobile phone and the tablet computer successfully establish the cooperative connection, the mobile phone may further remind the user by vibrating or ringing.
It should be noted that the several possible connection manners above are all described by taking wireless connections as examples. In another embodiment, the connection may also be implemented in a wired manner, for example through a Type-C to high-definition multimedia interface (HDMI) cable, which is not limited in this embodiment of the present application.
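The user-confirmation flow shared by the connection modes above (device found, user confirms, connecting with the option to cancel midway) can be sketched as a small state function; the state names are illustrative assumptions:

```python
# Sketch of the connect-request / user-confirmation flow common to the
# Bluetooth, scan-code and NFC-touch modes. States mirror the prompt windows
# described above; names are illustrative assumptions.
def collab_connect(user_confirms, user_cancels_midway=False):
    state = "device_found"        # second prompt window: confirm connection?
    if not user_confirms:
        return "idle"             # user tapped "cancel"
    state = "connecting"          # third prompt window: connection in progress
    if user_cancels_midway:
        return "idle"             # user cancelled during connection
    return "connected"

print(collab_connect(True))          # connected
print(collab_connect(False))         # idle
print(collab_connect(True, True))    # idle
```

The same three outcomes apply whether the request arrived over Bluetooth discovery, a scanned code, or an NFC tap; only the transport differs.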
A multi-screen cooperative system related to the path processing method provided in the embodiment of the present application is described below.
Fig. 8 is a schematic diagram of a multi-screen coordination system according to an embodiment of the present application. Referring to fig. 8, the multi-screen collaborative system may include a first device 801 and a second device 802. The first device 801 and the second device 802 may communicate through a wired connection or a wireless connection.
The first device 801 and the second device 802 may both be terminals, which may be terminals as described above with respect to the embodiments of fig. 1-2. For example, the terminal may be a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), a television, and the like, which is not limited in this embodiment.
The first device 801 and the second device 802 may perform multi-screen cooperation, and the first device 801 and the second device 802 may perform multi-screen cooperation in multiple possible manners, for example, may perform multi-screen cooperation in a manner of bluetooth, code scanning, touch-and-touch, and the like, which are described in detail above, specifically refer to the relevant descriptions of fig. 4 to fig. 7, and are not described again in this embodiment of the present application.
When the first device 801 and the second device 802 are in a multi-screen coordination state, as shown in fig. 9, a screen of the first device 801 may be displayed on an interface of the second device 802. In this way, the user can operate the screen of the first device 801 displayed by the second device 802 in the second device 802 according to the requirement, so that the first device 801 executes the corresponding function.
The first device 801 and the second device 802 may be different types of terminals, and may also be the same type of terminals, which is not limited in this embodiment of the present application. For example, both of them may be terminals such as a mobile phone or a tablet computer.
In a possible implementation manner, the screen size of the first device 801 is smaller than that of the second device 802. Thus, when the small screen and the large screen are in multi-screen coordination, the screen picture of the small screen is displayed as a window on the interface of the large screen, allowing the user to operate the small screen's picture from the interface of the large screen and improving the operation experience. For example, the first device 801 is a mobile phone and the second device 802 is a tablet computer or a television; alternatively, the first device 801 is a tablet computer and the second device 802 is a television. Of course, the screen size of the first device 801 may also be larger than that of the second device 802; for example, the first device 801 is a tablet computer and the second device 802 is a mobile phone.
When the first device 801 and the second device 802 are in the multi-screen cooperative state, if the first device 801 communicates with a far-end communication device, the cooperative communication function may be started to switch to the second device 802 to collect and play the communication voice. In addition, during the call of the first device 801, a call recording function may be activated to record the call voice.
The path processing method provided by the embodiment of the present application is applied to a scene in which the first device 801 and the second device 802 perform multi-screen coordination. In this case, by executing the path processing method provided in this embodiment of the present application, when the first device 801 and the second device 802 are in the multi-screen cooperative state, the opening and closing of the corresponding use case may be dynamically adjusted according to the opening and closing of the cooperative call function and the call recording function, so as to obtain call voice data through a suitable path, and ensure normal operation of the cooperative call and the call recording.
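The idea of obtaining call voice data through a path that depends on the two function switches can be sketched as a simple mapping. The mapping below is an illustrative assumption about the behavior described, not the literal logic of the embodiment:

```python
# Sketch: given the on/off state of the cooperative call function and the call
# recording function, decide which use case supplies the call voice data.
# The mapping and use-case names are illustrative assumptions.
def voice_data_path(collab_on, recording_on):
    if collab_on:
        # The collab use case carries the call voice; recording, if active,
        # taps the same path rather than opening a conflicting one.
        return "collab_call_use_case"
    if recording_on:
        return "record_use_case"
    return "default_call_path"

print(voice_data_path(True, True))     # collab_call_use_case
print(voice_data_path(False, True))    # record_use_case
print(voice_data_path(False, False))   # default_call_path
```

Routing both functions through a single active path is what removes the per-scenario conflict handling.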
The path processing method provided in the embodiments of the present application is explained in detail below.
The path processing method provided by the embodiment of the application is applied to the first device, and the first device and the second device can perform multi-screen cooperation. For example, as shown in fig. 9, when the first device 801 and the second device 802 are in a multi-screen cooperative state, a screen of the first device 801 is displayed on an interface of the second device 802. In this case, the user can cause the first device 801 to execute a corresponding function by operating the screen of the first device 801 displayed in the interface of the second device 802.
The first device and the second device may implement multi-screen cooperation through multiple possible manners, for example, the multi-screen cooperation may be implemented through bluetooth, code scanning, touch-and-touch, and the like, which are described in detail above, and reference may be specifically made to the relevant descriptions of fig. 4 to fig. 7, which is not described again in this embodiment of the present application.
The first device is a device capable of making a call. The first device may have one or more SIM cards installed therein, and the first device may use any one of the one or more SIM cards to perform a call. For example, in a case where the first device has only one SIM card installed, the first device may directly use the SIM card to perform a call, for example, make a call using the SIM card, or receive an incoming call using the SIM card. In the case that the first device has multiple SIM cards, the first device may use one of the multiple SIM cards to communicate, for example, to make a call or receive an incoming call using the one SIM card.
Next, a call scenario, a cooperative call scenario, and a call recording scenario involved in the first device will be described.
A call scenario:
As an example, in a case where the first device and the second device are not in the multi-screen coordination state, the user may operate directly on the first device to cause it to start a call with the far-end call device. For example, the user may directly make a call or receive an incoming call on the first device to cause it to start the call; for details, refer to the related art, which is not described in this application.
As another example, when the first device and the second device are in the multi-screen coordination state, the user may directly operate in the first device to cause the first device to start talking with the far-end talking device, or the user may operate in the second device to cause the first device to start talking with the far-end talking device.
Several possible operating modes will be described below, taking as an example the initiation of a call by dialing a telephone number in the first device.
In a first possible operation mode, if the user wants to make a call in the first device, the operation can be performed directly on the first device.
For example, as shown in fig. 9, when the first device 801 and the second device 802 are in the multi-screen cooperative state, the first device 801 and the second device 802 synchronously display the main interface of the first device 801. If the user wants to make a call, the user may click the call icon 91 in the main interface of the first device 801 to open the dialing interface of the first device 801; at this time, as shown in fig. 10, the first device 801 and the second device 802 synchronously display the dialing interface of the first device 801. The user can then perform a dialing operation in the dialing interface of the first device 801 to make a call in the first device 801 and start the call. In the case where the first device 801 has only one SIM card installed, the user can directly perform the dialing operation in the dialing interface of the first device 801, thereby making the call with that SIM card. Alternatively, in the case where the first device 801 has multiple SIM cards installed, as shown in fig. 10, an icon 92 for each of the multiple SIM cards is displayed in the dialing interface of the first device 801; the user may select the icon 92 of one SIM card in the dialing interface of the first device 801 and then perform the dialing operation there, so that the selected SIM card is used in the first device 801 to make the call.
In a second possible operation manner, if a user wants to make a call in a first device, the operation may be performed on a second device in a multi-screen coordination state with the first device.
For example, as shown in fig. 9, when the first device 801 and the second device 802 are in a multi-screen cooperative state, the first device 801 and the second device 802 synchronously display a main interface of the first device 801, and if a user wants to make a call, the user may click the icon 91 for making a call in the main interface of the first device 801 displayed on the second device 802 to open a dialing interface of the first device 801, at this time, as shown in fig. 10, the first device 801 and the second device 802 synchronously display the dialing interface of the first device 801. The user can then perform a dialing operation in the dialing interface of the first device 801 displayed on the second device 802 to enable a telephone call to be dialed in the first device 801 to start a call. In the case that only one SIM card is installed in the first device 801, the user may directly perform a dialing operation in the dialing interface of the first device 801 displayed on the second device 802, so that the user may use the SIM card in the first device 801 to make a call. Alternatively, in the case where the first device 801 has multiple SIM cards, as shown in fig. 10, an icon 92 of each of the multiple SIM cards is displayed in the dialing interface of the first device 801, and the user may select an icon 92 of one SIM card in the dialing interface of the first device 801 displayed on the second device 802, and then perform a dialing operation on the dialing interface of the first device 801 displayed on the second device 802, so that the user can dial a call using the selected SIM card in the first device 801.
In a third possible operation manner, if a user wants to make a call in the first device, the operation may be performed on both the first device and the second device in a multi-screen coordination state with the first device.
For example, as shown in fig. 9, when the first device 801 and the second device 802 are in a multi-screen cooperative state, the first device 801 and the second device 802 synchronously display a main interface of the first device 801, and if a user wants to make a call, the user may click the icon 91 for making a call in the main interface of the first device 801 displayed on the second device 802 to open a dialing interface of the first device 801, at this time, as shown in fig. 10, the first device 801 and the second device 802 synchronously display the dialing interface of the first device 801. The user can then perform a dialing operation in the dialing interface of the first device 801 to dial a telephone in the first device 801 to start a call. In the case where the first device 801 has only one SIM card installed therein, the user can directly perform a dialing operation in the dialing interface of the first device 801, and thus make a call using the SIM card in the first device 801. Alternatively, in the case where the first device 801 has multiple SIM cards installed, as shown in fig. 10, an icon 92 of each of the multiple SIM cards is displayed in the dialing interface of the first device 801, and the user may select an icon 92 of one SIM card in the dialing interface of the first device 801 and then perform a dialing operation on the dialing interface of the first device 801, so that the selected SIM card is used in the first device 801 to make a call.
For another example, as shown in fig. 9, when the first device 801 and the second device 802 are in a multi-screen cooperative state, the first device 801 and the second device 802 synchronously display a main interface of the first device 801, and if a user wants to make a call, the user may click the icon 91 for making a call in the main interface of the first device 801 to open a dialing interface of the first device 801, at this time, as shown in fig. 10, the first device 801 and the second device 802 synchronously display the dialing interface of the first device 801. The user can then perform a dialing operation in the dialing interface of the first device 801 displayed on the second device 802 to enable a telephone call to be dialed in the first device 801 to start a call. In the case that only one SIM card is installed in the first device 801, the user may directly perform a dialing operation in the dialing interface of the first device 801 displayed on the second device 802, so that the user may use the SIM card in the first device 801 to make a call. Alternatively, in the case where the first device 801 has multiple SIM cards, as shown in fig. 10, an icon 92 of each of the multiple SIM cards is displayed in the dialing interface of the first device 801, and the user may select an icon 92 of one SIM card in the dialing interface of the first device 801 displayed on the second device 802, and then perform a dialing operation on the dialing interface of the first device 801 displayed on the second device 802, so that the user can dial a call using the selected SIM card in the first device 801.
Several possible operation modes will be described below by taking as an example the case where the first device receives an incoming call to start a call.
In a first possible operation manner, when the first device has an incoming call, if the user wants to answer the incoming call in the first device, the operation may be performed directly on the first device.
For example, when the first device 801 and the second device 802 are in the multi-screen coordination state, if there is an incoming call to the first device 801, as shown in fig. 11, the first device 801 and the second device 802 may synchronously display an incoming call interface of the first device 801. If the user wants to answer the incoming call, the user can click the answer button 93 in the incoming call interface of the first device 801 to answer the incoming call in the first device 801 to start the call. In the case that the first device 801 is only equipped with one SIM card, the incoming call in the first device 801 is the incoming call for the SIM card, so that the incoming call is answered by using the SIM card in the first device 801; alternatively, in a case where the first device 801 has multiple SIM cards installed, an incoming call in the first device 801 is an incoming call for one of the multiple SIM cards, and thus the incoming call is answered by using the one SIM card in the first device 801.
In a second possible operation manner, when there is an incoming call in the first device, if a user wants to answer the incoming call in the first device, the operation may be performed on the second device in the multi-screen coordination state with the first device.
For example, when the first device 801 and the second device 802 are in the multi-screen coordination state, if there is an incoming call to the first device 801, as shown in fig. 11, the first device 801 and the second device 802 may synchronously display an incoming call interface of the first device 801. If the user wants to answer the incoming call, the user can click the answer button 93 in the incoming call interface of the first device 801 displayed on the second device 802, so as to answer the incoming call in the first device 801 to start the call. In the case that the first device 801 is only equipped with one SIM card, the incoming call in the first device 801 is the incoming call for the SIM card, so that the incoming call is answered by using the SIM card in the first device 801; alternatively, in a case where the first device 801 has multiple SIM cards installed, the incoming call in the first device 801 is an incoming call for one of the multiple SIM cards, and thus the incoming call is answered by using the one SIM card in the first device 801.
A collaborative call scenario:
when the first device and the second device are in a multi-screen coordination state, the first device and the second device both comprise cooperative call switches, and the cooperative call switches in the first device and the second device are synchronously turned on or turned off. The cooperative call switch is used for indicating whether the second device collects and plays call voice during the call of the first device, that is, whether the cooperative call function is started during the call of the first device. That is, when the cooperative call switch is turned on, the cooperative call function is turned on during the call of the first device; when the cooperative call switch is turned off, the cooperative call function is turned off during the call of the first device.
For example, when the first device 801 and the second device 802 are in a multi-screen cooperative state, as shown in fig. 12, assuming that the first device 801 is a mobile phone and the second device 802 is a tablet computer, after the user pulls down the notification bar of the second device 802 in the second device 802, the notification bar of the second device 802 may display a content of "cooperative to mobile phone", where the notification bar may further include a cooperative call switch 94 for switching a call voice to the tablet computer, and the user may open or close the cooperative call switch 94 in the second device 802 according to a requirement to indicate whether to open the cooperative call function in a call process of the first device 801.
Or, when the first device and the second device are in the multi-screen cooperative state, assuming that the first device is a mobile phone and the second device is a tablet computer, the user may pull down a notification bar of the first device in the first device, the notification bar of the first device may display a prompt content of "cooperative to tablet computer", the notification bar may further include a cooperative call switch for switching a call voice to the tablet computer, and the user may turn on or turn off the cooperative call switch in the first device according to a requirement to indicate whether to turn on the cooperative call function in a call process of the first device.
The above description is only given by taking an example that the cooperative call switch is disposed in the pull-down notification bar of the first device and the second device, and in practical applications, the cooperative call switch may be disposed in other interfaces of the first device and the second device, which is not limited in this embodiment of the application.
When the cooperative call switch is turned on, the cooperative call function is turned on in the call process of the first device to perform cooperative call, that is, call voice is collected and played through the second device in the call process of the first device. Specifically, when the first device and the second device perform a cooperative call, a microphone of the second device collects a call voice of a local user and sends the call voice to the first device, and the first device sends the call voice to a far-end call device; the far-end communication equipment sends the communication voice of the far-end user to the first equipment, the first equipment sends the communication voice to the second equipment, and the communication voice is played through a loudspeaker of the second equipment.
When the cooperative call switch is turned off, the cooperative call function is turned off during the call of the first device, that is, no cooperative call is performed, and the first device still collects and plays the call voice itself. Specifically, the microphone of the first device collects the call voice of the local-end user and sends it to the far-end call device; the far-end call device sends the call voice of the far-end user to the first device, and the call voice is played by the speaker or receiver of the first device.
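The routing rule described above can be summarized in a brief sketch (a toy Python model; the function name and device labels are illustrative, not identifiers from the actual implementation):

```python
def route_call_audio(cooperative_switch_on: bool) -> dict:
    """Model where call voice is captured and played, depending on the
    cooperative call switch: the second device's microphone and speaker
    when the switch is on, the first device's own devices when it is off."""
    if cooperative_switch_on:
        return {"capture": "second_device_mic",
                "playback": "second_device_speaker"}
    return {"capture": "first_device_mic",
            "playback": "first_device_speaker_or_receiver"}
```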
Several possible turn-on and turn-off scenarios for the cooperative call function are described below:
in a first possible case, the first device and the second device are in a multi-screen cooperative state, but the first device has not yet made a call with the far-end call device, and at this time, the cooperative call switch may be turned on or turned off. And then, the first device starts to communicate with the far-end communication device, and at the moment, whether the cooperative communication function is started or not can be selected according to the state of the cooperative communication switch.
In this case, the first device performs multi-screen coordination with the second device, and then performs a call with the remote call device.
In some embodiments, when the first device and the second device start to perform multi-screen coordination, the first device and the second device may default to turn on the cooperative call switch, that is, when the first device and the second device establish a multi-screen cooperative connection, the first device and the second device may automatically turn on the cooperative call switch. Of course, when the first device and the second device start to perform multi-screen coordination, the first device and the second device may also turn off the cooperative call switch by default.
In other embodiments, during the multi-screen collaboration between the first device and the second device, the cooperative call switch may be triggered to turn on in association with certain specific modes; for example, if the user plays audio or video in the second device, the first device and the second device may automatically turn on the cooperative call switch synchronously. Of course, during the multi-screen coordination of the first device and the second device, the cooperative call switch may also be triggered to turn off in association with other specific modes; for example, if the user plays audio or video in the first device, the first device and the second device may automatically turn off the cooperative call switch synchronously.
In still other embodiments, during multi-screen coordination between the first device and the second device, the cooperative call switch in the first device or the second device may be manually turned on by a user, and after the cooperative call switch in one of the first device and the second device is manually turned on by the user, the cooperative call switch in the other device is also automatically turned on in synchronization. Of course, in the process of performing multi-screen coordination between the first device and the second device, the cooperative call switch in the first device or the second device may also be manually turned off by the user, and after the cooperative call switch in one of the first device and the second device is manually turned off by the user, the cooperative call switch in the other device may also be automatically turned off in synchronization.
In still other embodiments, the on and off of the cooperative talk switch may also be determined by the user's act of making or receiving an incoming call. Specifically, in the process of multi-screen coordination between the first device and the second device, if a user operates a screen of the first device displayed by the second device to make or receive a call on the first device, both the first device and the second device turn on a coordination call switch. Or, if a user directly makes a call or answers an incoming call on the first device in the process of multi-screen coordination between the first device and the second device, both the first device and the second device turn off the coordination call switch.
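The on/off rules enumerated in the embodiments above can be collected into one small state model (hypothetical Python; the event names are labels invented for this sketch, not real API identifiers):

```python
class CooperativeCallSwitch:
    """Toy model of the synchronized cooperative call switch: turning the
    switch on or off via any event updates both devices, mirroring the
    synchronous on/off behavior described above."""

    def __init__(self, default_on: bool = True):
        # Default state when multi-screen coordination starts (either
        # default is possible per the embodiments above).
        self.first_on = self.second_on = default_on

    def _set(self, on: bool) -> None:
        # Both devices always mirror each other.
        self.first_on = self.second_on = on

    def handle_event(self, event: str) -> None:
        # Events that turn the switch on in both devices.
        if event in ("manual_on_first", "manual_on_second",
                     "play_media_on_second", "dial_via_second_screen"):
            self._set(True)
        # Events that turn the switch off in both devices.
        elif event in ("manual_off_first", "manual_off_second",
                       "play_media_on_first", "dial_directly_on_first"):
            self._set(False)
```

Either device's event updates both switches, which is the synchronization property the embodiments above share.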
Under the first possible situation, if the cooperative call switch is turned on in the process of performing multi-screen cooperation between the first device and the second device, the cooperative call function is turned on when the first device subsequently starts to communicate with the remote call device; and if the cooperative call switch is closed in the process of multi-screen cooperation between the first device and the second device, the cooperative call function will not be started when the first device subsequently starts to communicate with the far-end call device.
That is to say, when the first device and the second device are in the multi-screen cooperative state and the first device starts to communicate with the far-end call device: if the cooperative call switch is in the open state at that moment, the first device may start the cooperative call function to perform a cooperative call, that is, the second device performs the collection and playing of the call voice during the call of the first device; if the cooperative call switch is in the closed state at that moment, the first device does not start the cooperative call function, that is, no cooperative call is performed, and the first device still performs the collection and playing of the call voice.
In a second possible scenario, the first device is not in the multi-screen coordination state with the second device, but the first device starts to talk with the far-end talking device. And then, the first device establishes multi-screen cooperative connection with the second device in the call process, when the first device and the second device are in a multi-screen cooperative state, the cooperative call switch can be turned on or off, and the first device selects whether to turn on the cooperative call function according to the state of the cooperative call switch.
In this case, the first device first makes a call with the far-end call device, and then performs multi-screen cooperation with the second device during the call.
In some embodiments, the first device establishes multi-screen coordination with the second device during a call. If the first device and the second device turn on the cooperative call switch by default when multi-screen coordination starts, the first device may turn on the cooperative call function as soon as multi-screen coordination with the second device begins, that is, switch to the second device for the collection and playing of call voice. Conversely, if the first device and the second device turn off the cooperative call switch by default when multi-screen coordination starts, the first device does not turn on the cooperative call function when multi-screen coordination begins, and the first device continues to collect and play the call voice itself.
In a third possible case, the first device and the second device are in a multi-screen cooperative state, and the first device communicates with the far-end call device, at this time, the state of the cooperative call switch may be switched, and the first device switches the state of the cooperative call function when the state of the cooperative call switch is switched.
In some embodiments, during the multi-screen collaboration between the first device and the second device, the cooperative call switch may be triggered to turn on in association with certain specific modes; for example, if a user plays audio or video in the second device, the first device and the second device may automatically and synchronously switch the cooperative call switch from the off state to the on state. If the first device detects that the cooperative call switch is switched from the off state to the on state during the call, the cooperative call function is switched from the off state to the on state to perform a cooperative call, that is, the first device switches to the second device for the collection and playing of call voice.
Of course, in the process of performing multi-screen coordination between the first device and the second device, the cooperative call switch may also be triggered to turn off in association with other specific modes; for example, if the user plays audio or video in the first device, the first device and the second device may automatically and synchronously switch the cooperative call switch from the on state to the off state. If the first device detects that the cooperative call switch is switched from the on state to the off state during the call, the cooperative call function is switched from the on state to the off state to end the cooperative call, that is, the first device switches back to collecting and playing the call voice itself.
In other embodiments, in the process of performing multi-screen coordination between the first device and the second device, a user may manually open a coordination call switch in the first device or the second device, and after the coordination call switch in one of the first device and the second device is manually opened by the user, the coordination call switch in the other device is automatically and synchronously opened, and at this time, the coordination call switch in the first device and the second device is switched from a closed state to an open state. If the first device detects that the cooperative call switch is switched from the closed state to the open state in the call process, the cooperative call function is switched from the closed state to the open state to perform cooperative call, that is, the first device is switched to the second device to perform call voice acquisition and play.
Of course, in the process of performing multi-screen coordination between the first device and the second device, the user may also manually turn off the cooperative call switch in the first device or the second device, and after the cooperative call switch in one of the two devices is manually turned off by the user, the cooperative call switch in the other device is also automatically turned off in synchronization. At this moment, the cooperative call switches in the first device and the second device are switched from the open state to the closed state. If the first device detects that the cooperative call switch is switched from the open state to the closed state during the call, the cooperative call function is switched from the open state to the closed state to end the cooperative call, that is, the first device switches back to collecting and playing the call voice itself.
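The switch-state transitions described in this third case map to function-state actions roughly as follows (illustrative sketch; the action strings are placeholders invented here):

```python
def on_switch_changed(old_on: bool, new_on: bool) -> str:
    """Return the action the first device takes when the cooperative call
    switch changes state during a call: off-to-on starts the cooperative
    call, on-to-off ends it, anything else is a no-op."""
    if not old_on and new_on:
        return "start_cooperative_call"   # switch capture/playback to second device
    if old_on and not new_on:
        return "end_cooperative_call"     # switch back to the first device itself
    return "no_change"
```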
Call recording scene:
as an example, in a case that the first device is not in the multi-screen coordination state with the second device, if the first device starts to talk with the far-end talking device, the first device displays a talking interface. The call interface may include a recording button, and during a call of the first device, a user may start or close a call recording function in the first device by operating the recording button in the call interface. Of course, the user may also turn on or turn off the call recording function in the call process of the first device in other manners, which is not limited in this embodiment of the application.
As another example, when the first device and the second device are in the multi-screen coordination state, if the first device and the far-end call device perform a call, the first device and the second device may synchronously display a call interface of the first device. The call interface may include a recording button, and during a call of the first device, a user may start or close a call recording function by operating the recording button. Of course, the user may also turn on or turn off the call recording function in the call process of the first device in other manners, which is not limited in this embodiment of the application.
Several possible operation modes will be described below by taking as an example the case of turning on or off the call recording function by operating the recording button in the call interface.
In a first possible operation manner, if a user wants to start a call recording function during a call of the first device, the call recording function may be directly performed on the first device.
For example, as shown in fig. 12, when the first device 801 and the second device 802 are in a multi-screen coordination state, if the first device 801 communicates with a far-end call device, the first device 801 and the second device 802 synchronously display a call interface of the first device 801, where the call interface includes the recording button 95. The user may click the recording button 95 in the call interface of the first device 801 to start the call recording function, and at this time, the first device 801 may record the call to obtain call recording data.
In a second possible operation manner, if a user wants to start a call recording function during a call of a first device, the call recording function may be implemented by executing an operation on a second device that performs multi-screen coordination with the first device.
For example, as shown in fig. 12, when the first device 801 and the second device 802 are in a multi-screen coordination state, if the first device 801 communicates with a far-end call device, the first device 801 and the second device 802 synchronously display a call interface of the first device 801, where the call interface includes the recording button 95. The user may click the recording button 95 in the call interface of the first device 801 displayed by the second device 802 to start the call recording function, and at this time, the first device 801 may record the call to obtain call recording data.
In a third possible operation manner, if the user wants to turn off the call recording function during the call of the first device, the user may directly perform an operation on the first device.
For example, as shown in fig. 12, when the first device 801 and the second device 802 are in a multi-screen coordination state, if the first device 801 communicates with a far-end call device, the first device 801 and the second device 802 synchronously display a call interface of the first device 801, where the call interface includes the recording button 95. After the user starts the call recording function by clicking the recording button 95 in the call interface of the first device 801 or by clicking the recording button 95 in the call interface of the first device 801 displayed by the second device 802, the user may click the recording button 95 in the call interface of the first device 801 again to close the call recording function, and at this time, the first device 801 may end the call recording and no longer obtain the call recording data.
In a fourth possible operation manner, if a user wants to close the call recording function during a call of the first device, the operation may be performed on the second device performing multi-screen coordination with the first device.
For example, as shown in fig. 12, when the first device 801 and the second device 802 are in a multi-screen coordination state, if the first device 801 communicates with a far-end call device, the first device 801 and the second device 802 synchronously display a call interface of the first device 801, where the call interface includes the recording button 95. After the user starts the call recording function by clicking the recording button 95 in the call interface of the first device 801 or by clicking the recording button 95 in the call interface of the first device 801 displayed by the second device 802, the user may click the recording button 95 again in the call interface of the first device 801 displayed by the second device 802 to close the call recording function, and at this time, the first device 801 may end the call recording and no longer acquire the call recording data.
A description will be given below of a use case relating to the call recording function and the cooperative call function.
In the process of a call of the first device, the collection of call uplink voice data and the playing of call downlink voice data are involved. The call uplink voice data refers to the call voice data of the local-end user collected, during the call of the first device, by the first device or by the second device performing a cooperative call with the first device; the call uplink voice data needs to be sent to the far-end call device. The call downlink voice data refers to the call voice data of the far-end user that the first device receives from the far-end call device during the call; the call downlink voice data needs to be played on the first device or on the second device performing the cooperative call with the first device.
In the case that the first device does not perform a cooperative call with the second device, the microphone (Mic) of the first device collects the call voice of the local-end user and sends it to the far-end call device; the far-end call device sends the call voice of the far-end user to the first device, and the first device plays it through its speaker (Speaker) or receiver.
In the case that the first device does not perform a cooperative call with the second device, the call uplink and downlink voice data flow between the audio chip of the first device and its audio devices (including but not limited to the microphone, speaker, and receiver), where the audio chip may be, for example, an ADSP (Audio DSP). Specifically, the microphone of the first device collects the call voice of the local-end user and sends it to the audio chip of the first device, and the audio chip of the first device sends the call voice to the far-end call device through the modem processor of the first device. Meanwhile, the audio chip of the first device receives, through the modem processor of the first device, the call voice of the far-end user sent by the far-end call device, and sends the call voice to the speaker or receiver of the first device for playing.
In the case that the first device performs a cooperative call with the second device, the microphone of the second device collects the call voice of the local-end user and sends it to the first device, and the first device sends it to the far-end call device; the far-end call device sends the call voice of the far-end user to the first device, the first device sends it to the second device, and the call voice is played through the speaker of the second device.
Under the condition that the first device and the second device carry out cooperative call, uplink and downlink voice data of the call are circulated between an Audio chip of the first device and the Audio HAL. Specifically, the microphone of the second device collects the call voice of the local user, and sends the call voice to the first device. After receiving the call voice, the Audio HAL of the first device sends the call voice to the Audio chip of the first device, and the Audio chip of the first device sends the call voice to the far-end call device through the modem processor of the first device. Meanwhile, the Audio chip of the first device receives the call voice of the remote user sent by the remote call device through the modem processor of the first device, and sends the call voice to the Audio HAL of the first device, and the Audio HAL of the first device sends the call voice to the second device, and the call voice is played by a loudspeaker of the second device.
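The hop-by-hop flow just described can be written out as an illustrative model (all node names are labels for this sketch, not real component identifiers):

```python
# Uplink: local user's voice travels from the second device's microphone,
# through the first device's Audio HAL, audio chip, and modem processor,
# to the far-end call device.
UPLINK_FLOW = ["second_device_mic", "first_device_audio_hal",
               "first_device_audio_chip", "first_device_modem",
               "far_end_device"]

# Downlink: far-end user's voice travels the reverse route, ending at the
# second device's speaker.
DOWNLINK_FLOW = ["far_end_device", "first_device_modem",
                 "first_device_audio_chip", "first_device_audio_hal",
                 "second_device_speaker"]

def next_hop(flow: list, node: str):
    """Return the node that receives the voice data after `node`."""
    i = flow.index(node)
    return flow[i + 1] if i + 1 < len(flow) else None
```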
In the process of the cooperative call between the first device and the second device, the first device and the second device synchronously display a call interface of the first device, which may also be referred to as a User Interface (UI). If the user clicks the recording button of the call interface, call recording can be carried out so as to obtain call recording data.
As an example, three use cases (usecase) for acquiring call voice data are provided in the Audio HAL, each use case being used to open a corresponding path (Path) in the audio chip, and the path is used to acquire call voice data.
As shown in Table 1 below, these three use cases may include the Incall Record Uplink and Downlink use case, the Incall Record Downlink use case, and the Incall Record Uplink use case. The Incall Record Uplink and Downlink use case is used for opening an incall-rec-uplink-and-downlink path, and the incall-rec-uplink-and-downlink path is used for acquiring call uplink and downlink voice data; the Incall Record Downlink use case is used for opening an incall-rec-downlink path, and the incall-rec-downlink path is used for acquiring call downlink voice data; the Incall Record Uplink use case is used for opening an incall-rec-uplink path, and the incall-rec-uplink path is used for acquiring call uplink voice data.
TABLE 1
Use case                            Opened path                       Acquired call voice data
Incall Record Uplink and Downlink   incall-rec-uplink-and-downlink    call uplink and downlink voice data
Incall Record Downlink              incall-rec-downlink               call downlink voice data
Incall Record Uplink                incall-rec-uplink                 call uplink voice data
In the embodiments of the present application, the three use cases are described by taking the above Table 1 as an example, and the above Table 1 does not limit the embodiments of the present application.
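The correspondence in Table 1 can be captured in a small mapping (Python sketch; the helper function is illustrative and simply inspects the path names):

```python
# Mapping from Audio HAL use case to the audio-chip path it opens
# (names follow Table 1 above).
USE_CASE_PATHS = {
    "Incall Record Uplink and Downlink": "incall-rec-uplink-and-downlink",
    "Incall Record Downlink": "incall-rec-downlink",
    "Incall Record Uplink": "incall-rec-uplink",
}

def data_acquired(use_case: str) -> set:
    """Which directions of call voice data the use case's path acquires,
    derived here simply from the path name."""
    path = USE_CASE_PATHS[use_case]
    dirs = set()
    if "uplink" in path:
        dirs.add("uplink")
    if "downlink" in path:
        dirs.add("downlink")
    return dirs
```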
Optionally, as shown in fig. 13, the three use cases may open their corresponding paths through an audio routing module (Audio Route) in the audio chip to obtain call voice data. Specifically, any one of the three use cases may open its corresponding path in the audio routing module; after acquiring the call voice data, the path in the audio routing module sends the call voice data to a PCM device in the kernel layer, and the use case reads the call voice data from the PCM device.
Referring to fig. 13, the audio routing module includes two paths, namely an incall-rec-downlink path and an incall-rec-uplink path, and the incall-rec-uplink-and-downlink path is implemented by simultaneously opening the incall-rec-uplink path and the incall-rec-downlink path.
That is, when the Incall Record Uplink and Downlink use case is opened, the incall-rec-uplink path and the incall-rec-downlink path in the audio routing module are both opened, the call uplink voice data and the call downlink voice data are simultaneously acquired, and both are sent to the PCM device.
When the Incall Record Downlink use case is opened, the incall-rec-downlink path in the audio routing module is opened, the call downlink voice data is acquired and sent to the PCM device, and the Incall Record Downlink use case can read the call downlink voice data from the PCM device.
When the Incall Record Uplink use case is opened, the incall-rec-uplink path in the audio routing module is opened, the call uplink voice data is acquired and sent to the PCM device, and the Incall Record Uplink use case can read the call uplink voice data from the PCM device.
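The use case / path / PCM-device interaction described above resembles a producer/consumer pattern; a toy model (all class names hypothetical, not from the actual HAL) might look like:

```python
import queue

class PcmDevice:
    """Toy stand-in for the kernel-layer PCM device: paths write voice
    frames into it and the use case reads them out."""
    def __init__(self):
        self._frames = queue.Queue()
    def write(self, frame):
        self._frames.put(frame)
    def read(self):
        return self._frames.get_nowait()

class AudioRoutePath:
    """Illustrative path in the audio routing module: once opened, it
    forwards each acquired call voice frame to the PCM device."""
    def __init__(self, name: str, pcm: PcmDevice):
        self.name, self.pcm, self.is_open = name, pcm, False
    def open(self):
        self.is_open = True
    def acquire(self, frame):
        if self.is_open:
            self.pcm.write((self.name, frame))
```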
In this case, referring to Table 1 above, the cooperative call function needs to be implemented through the above-mentioned Incall Record Downlink use case to acquire the call downlink voice data, and the call recording function needs to be implemented through the above-mentioned Incall Record Uplink and Downlink use case to acquire the call uplink voice data and the call downlink voice data.
In summary, the call recording function and the cooperative call function both need to acquire call voice data, and the acquisition of the call voice data is implemented through a use case in the Audio HAL in the first device. The use case in the Audio HAL in the first device may open a path in an Audio chip in the first device, thereby acquiring call voice data.
The call recording function and the cooperative call function correspond to different use cases. When the call recording function or the cooperative call function is started, the Audio HAL may open a corresponding use case to open a corresponding path in the Audio chip through the opened use case to acquire the required call voice data.
Specifically, the call recording function corresponds to a first use case in the Audio HAL. The first use case is used for opening a first path and a second path in the audio chip, the first path is used for acquiring the call uplink voice data collected by the first device, and the second path is used for acquiring the call downlink voice data sent to the first device by the far-end call device. For example, the first use case may be the above-mentioned Incall Record Uplink and Downlink use case, the first path may be the above-mentioned incall-rec-uplink path, and the second path may be the above-mentioned incall-rec-downlink path.
When the call recording function is started, the first device can open the first example to open the first channel and the second channel, so as to obtain the call uplink voice data through the first channel and obtain the call downlink voice data through the second channel, and then perform sound mixing processing on the call uplink voice data obtained through the first channel and the call downlink voice data obtained through the second channel to obtain call recording data, so as to realize call recording.
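The mixing step is not specified in detail here; a minimal sketch, assuming 16-bit PCM samples and simple additive mixing with clipping (an assumption, not the embodiment's stated algorithm), could be:

```python
def mix_for_recording(uplink: list, downlink: list) -> list:
    """Mix call uplink and downlink voice samples into one recording
    stream: pad the shorter stream with silence, sum per sample, and
    clip to the signed 16-bit PCM range."""
    n = max(len(uplink), len(downlink))
    up = uplink + [0] * (n - len(uplink))
    down = downlink + [0] * (n - len(downlink))
    return [max(-32768, min(32767, a + b)) for a, b in zip(up, down)]
```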
The cooperative call function corresponds to a second use case and a third use case in the Audio HAL. The second use case is used for opening the second path in the audio chip, the third use case is used for opening a third path in the audio chip, and the third path is used for sending the call uplink voice data, sent by the second device to the first device, to the far-end call device. For example, the second use case may be the above-mentioned Incall Record Downlink use case, the second path may be the above-mentioned incall-rec-downlink path, and the third path may be an incall-music-delivery path.
When the first device starts the cooperative call function, the second use case can be opened to open the second channel, the third use case can be opened to open the third channel, the call uplink voice data are sent to the far-end call device through the third channel, the call downlink voice data are obtained through the second channel, and the call downlink voice data obtained through the second channel are sent to the second device to be played, so that the cooperative call is realized.
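Combining the correspondences above, the set of audio-chip paths the first device needs open follows from which functions are enabled (illustrative sketch; path names are taken from the text):

```python
def paths_to_open(recording_on: bool, cooperative_on: bool) -> set:
    """Audio-chip paths required for the enabled functions: recording
    needs the uplink and downlink record paths; cooperative call needs
    the downlink record path plus the delivery path for uplink voice."""
    paths = set()
    if recording_on:
        paths |= {"incall-rec-uplink", "incall-rec-downlink"}
    if cooperative_on:
        paths |= {"incall-rec-downlink", "incall-music-delivery"}
    return paths
```

Note that both functions need the incall-rec-downlink path, which is why their concurrent use has to be coordinated.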
Next, the overall flow of the path processing method provided in the embodiment of the present application will be described.
In the call process, the first device can control the opening and closing of the second use case and the third use case corresponding to the cooperative call function and of the first use case corresponding to the call recording function according to the on/off states of the cooperative call function and the call recording function, so that call voice data is acquired normally while path conflicts are avoided.
The following describes a processing flow when the first device detects that the cooperative call function is switched from the open state to the closed state, or detects that the cooperative call function is switched from the closed state to the open state.
Fig. 14 is a flowchart of a path processing method according to an embodiment of the present application. Referring to fig. 14, the method includes the steps of:
step 1401: the first device detects that the cooperative call function is switched from the closed state to the open state.
The first device can start the cooperative call function to switch to the second device to collect and play the call voice. The first device may start the cooperative call function in multiple possible manners, which have been described in detail in the above cooperative call scenario, and this is not described in detail in this embodiment of the present application.
For example, when the first device starts a call with the far-end call device in response to a received call operation, if it detects that the cooperative call switch is in the on state, the cooperative call function may be switched from the off state to the on state. Or, if the first device establishes a multi-screen cooperative connection with the second device during a call with the far-end call device, and detects that the cooperative call switch is in the on state when multi-screen cooperation with the second device is successfully started, the cooperative call function may be switched from the off state to the on state. Or, if the first device detects that the cooperative call switch is switched from the off state to the on state during a call with the far-end call device, the cooperative call function may be switched from the off state to the on state.
Step 1402: the first device detects whether the call recording function is in an open state.
If the first device detects that the cooperative call function is switched from the closed state to the open state, it needs to detect the state of the call recording function, so as to determine, according to that state, which use cases need to be opened and closed.
The first device, in the case of detecting that the call recording function is in the on state, performs steps 1403 and 1404 as follows. The first device, upon detecting that the call recording function is in the off state, directly performs step 1404 as follows.
Step 1403: the first device closes a first use case corresponding to the call recording function so as to close the first access and the second access.
The first case is for opening the first and second passages. That is, opening the first instance opens the first path and the second path; the first example of closing is that the first path and the second path are closed. The first channel is used for acquiring the uplink voice data of the call collected by the first equipment, and the second channel is used for acquiring the downlink voice data of the call sent to the first equipment by the far-end call equipment.
If the cooperative call function is switched from the closed state to the open state, the call recording function is already in the open state, which indicates that the first use case corresponding to the call recording function is already opened currently, so that at this time, step 1403 needs to be executed to close the first use case first, and then step 1404 needs to be executed to open the second use case and the third use case corresponding to the cooperative call function. In this way, under the condition that the cooperative call function and the call recording function are both in the on state, the first use case corresponding to the call recording function is not opened, and only the second use case and the third use case corresponding to the cooperative call function are opened. And then, normal acquisition of call voice data by cooperation of call and call recording can be realized through the second path opened by the second case and the third path opened by the third case.
If the cooperative call function is switched from the closed state to the open state, the call recording function is in the closed state, which indicates that the first use case corresponding to the call recording function is not opened currently, so that the step 1403 does not need to be executed at this time, and the step 1404 is directly executed to open the second use case and the third use case corresponding to the cooperative call function.
Step 1404: and the first equipment opens the second use case and the third use case corresponding to the cooperative call function so as to open the second path and the third path.
The second example is for opening the second path and the third example is for opening the third path. That is, opening the second example, i.e., opening the second path; the second example is closed, i.e. the second path is closed. The third example is opened, namely a third passage is opened; the third example is closed, namely the third passage is closed. And the third path is used for sending the call uplink voice data sent by the second equipment to the first equipment to the far-end call equipment.
Step 1405: the first device sends the call uplink voice data to the far-end call device through the third channel, and sends the call downlink voice data acquired by the second channel to the second device for playing.
The first device sends the call uplink voice data to the far-end call device through the third channel, and sends the call downlink voice data acquired by the second channel to the second device for playing, so that the cooperative call is realized.
If the call recording function is in the off state when the cooperative call function is switched from the off state to the on state, the first device implements the cooperative call function by executing step 1405 without executing step 1406.
If the call recording function is already in the on state when the cooperative call function is switched from the off state to the on state, the first device further executes the following step 1406 to implement the call recording function.
Step 1406: and the first equipment performs sound mixing processing on the call uplink voice data sent through the third channel and the call downlink voice data acquired through the second channel to obtain call recording data.
And the first equipment performs sound mixing processing on the call uplink voice data sent through the third channel and the call downlink voice data acquired through the second channel to obtain call recording data, so that call recording is realized.
It is worth mentioning that, in the embodiment of the present application, under the condition that both the cooperative call function and the call recording function are in the on state, the first device does not open the first use case corresponding to the call recording function, and only opens the second use case and the third use case corresponding to the cooperative call function. The first device can acquire the call downlink voice data sent to the first device by the far-end call device through the second path opened by the second use case, send the call downlink voice data acquired by the second path to the second device for playing, and send the call uplink voice data sent to the first device by the second device to the far-end call device through the third path opened by the third use case, so that cooperative call is realized. And the first device can perform sound mixing processing on the call uplink voice data sent through the third channel and the call downlink voice data acquired through the second channel to obtain call recording data, so that call recording is realized. Therefore, the problem of channel conflict when the cooperative call and the call recording are used at the same time can be solved through a simple processing flow, the cooperative call and the call recording can be ensured to be normally acquired to call voice data, and the logic is easy to understand and maintain.
It should be noted that, if the call recording function is in the closed state when the cooperative call function is switched from the closed state to the open state, the first device does not perform call recording, that is, it does not perform audio mixing processing on the call uplink voice data sent through the third path and the call downlink voice data acquired through the second path.
Subsequently, if the first device detects that the call recording function is switched from the closed state to the open state while the cooperative call function is in the open state, it performs audio mixing processing on the call uplink voice data sent through the third path and the call downlink voice data acquired through the second path to obtain call recording data, thereby realizing call recording.
If the first device then detects that the call recording function is switched from the open state to the closed state while the cooperative call function is in the open state, the first device no longer records the call, that is, it no longer performs audio mixing processing on the call uplink voice data sent through the third path and the call downlink voice data acquired through the second path.
The first device can start or close the call recording function in the call process. The first device may turn on or turn off the call recording function in multiple possible manners, which have been described in detail in the above call recording scenario, and this is not described in detail in this embodiment of the present application.
For example, when the call recording function is in the closed state, if the first device detects that the user clicks a recording button in the call interface of the first device, or clicks a recording button in the call interface of the first device displayed by the second device performing multi-screen cooperation with the first device, the call recording function is switched from the closed state to the open state. When the call recording function is in the open state, if the first device detects the same operation, the call recording function is switched from the open state to the closed state.
Further, if the first device detects that the cooperative call function is switched from the open state to the closed state, it detects the state of the call recording function, so as to determine, according to that state, whether to open the first use case corresponding to the call recording function. If the first device detects that the call recording function is in the closed state, it closes the second use case and the third use case. If the first device detects that the call recording function is in the open state, it closes the second use case and the third use case, opens the first use case to open the first path and the second path, and performs audio mixing processing on the call uplink voice data acquired through the first path and the call downlink voice data acquired through the second path to obtain call recording data, so that normal call recording is guaranteed.
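The cooperative-call transitions above (steps 1401 to 1404 plus the reverse switch-off direction) can be sketched as a small state handler. This is a minimal Python illustration, not the actual Audio HAL interface; the class name PathController and the use-case labels are assumptions:

```python
class PathController:
    """Tracks which use cases are open so that the first use case (call
    recording) and the second/third use cases (cooperative call) are
    never open at the same time."""

    def __init__(self):
        self.recording_on = False    # state of the call recording function
        self.open_use_cases = set()  # use cases currently open in the HAL

    def on_cooperative_call(self, enabled):
        if enabled:
            # Step 1403: if recording is on, close the first use case first,
            # releasing the first and second paths.
            if self.recording_on:
                self.open_use_cases.discard("first")
            # Step 1404: open the cooperative-call use cases.
            self.open_use_cases.update({"second", "third"})
        else:
            # Reverse direction: close the cooperative use cases, then
            # reopen the first use case only if recording must continue.
            self.open_use_cases -= {"second", "third"}
            if self.recording_on:
                self.open_use_cases.add("first")
```

With recording on, switching the cooperative call on leaves only the second and third use cases open; switching it back off restores the first use case so that recording continues through the first and second paths.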
The first device may close the cooperative call function to switch back to the first device to collect and play the call voice. The first device may close the cooperative call function in multiple possible manners, which have been described in detail in the above cooperative call scenario, and this is not described in detail in this embodiment of the present application.
For example, when the cooperative call function is already turned on by the first device during a call with the far-end call device, if the first device disconnects the multi-screen cooperative connection with the second device, the first device may switch the cooperative call function from the on state to the off state. Or, if the first device detects that the cooperative call switch is switched from the on state to the off state in the process of communicating with the far-end call device, the first device may switch the cooperative call function from the on state to the off state.
The following describes a processing flow when the first device detects that the call recording function is switched from the on state to the off state, or detects that the call recording function is switched from the off state to the on state.
Fig. 15 is a flowchart of a path processing method according to an embodiment of the present application. Referring to fig. 15, the method includes the steps of:
step 1501: the first device detects that the call recording function is switched from the closed state to the open state.
The first device can start a call recording function in the call process. The first device may start the call recording function in multiple possible manners, which have been described in detail in the above call recording scenario, and this is not described in detail in this embodiment of the present application.
For example, when the call recording function is in the closed state, if it is detected that the user clicks a recording button in the call interface of the first device or clicks a recording button in the call interface of the first device displayed by the second device performing multi-screen coordination with the first device, the call recording function is switched from the closed state to the open state.
Step 1502: the first device detects whether the cooperative call function is in an open state.
If the first device detects that the call recording function is switched from the closed state to the open state, it needs to detect the state of the cooperative call function, so as to determine, according to that state, which use cases need to be opened and closed.
The first device does not execute the operation of opening the first use case corresponding to the call recording function when it detects that the cooperative call function is in the open state. In that case, the first device can directly realize call recording through the second use case and the third use case corresponding to the cooperative call function. That is, the first device can directly perform audio mixing processing on the call downlink voice data acquired through the second path opened by the second use case and the call uplink voice data sent through the third path opened by the third use case to obtain call recording data, so the first device does not need to open the first use case at this time.
In order to normally perform call recording in the case where the first device detects that the cooperative call function is in the off state, the first device may perform steps 1503 and 1504 as follows.
Step 1503: and if the first equipment detects that the cooperative call function is in a closed state, opening the first example to open a first channel and a second channel.
When the call recording function is switched from the closed state to the open state, if the cooperative call function is in the closed state, the first device needs to open the first use case to acquire the call uplink voice data through the first path opened by the first use case, and acquire the call downlink voice data through the second path opened by the first use case.
Step 1504: and the first equipment performs sound mixing processing on the call uplink voice data acquired by the first channel and the call downlink voice data acquired by the second channel to obtain call recording data.
And the first equipment performs sound mixing processing on the call uplink voice data acquired by the first channel and the call downlink voice data acquired by the second channel to obtain call recording data, so that call recording is realized.
Further, if the first device detects that the call recording function is switched from the on state to the off state, the state of the cooperative call function is detected. If the first device detects that the cooperative call function is in an open state, which indicates that the first case is not opened currently, the operation of closing the first case is not executed at this time, and at this time, the voice mixing processing of the call uplink voice data sent through the third channel and the call downlink voice data obtained through the second channel is stopped, so as to stop call recording. If the first device detects that the cooperative call function is in a closed state, it indicates that the first use case is opened, and therefore the first use case needs to be closed at this time to end call recording.
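The recording-side transitions (the flow of fig. 15 plus the reverse switch-off direction) can likewise be sketched as follows. This is an illustrative Python sketch; the class name RecordController is an assumption:

```python
class RecordController:
    """Decides whether the first use case must be opened or closed when
    the call recording function toggles, based on the cooperative state."""

    def __init__(self, cooperative_on=False):
        self.cooperative_on = cooperative_on
        self.first_use_case_open = False

    def on_recording(self, enabled):
        if enabled:
            # Step 1503: open the first use case only when the cooperative
            # call function is off; otherwise the second and third use
            # cases already supply both voice streams for mixing.
            if not self.cooperative_on:
                self.first_use_case_open = True
        else:
            # Recording stops: the first use case was opened only in the
            # non-cooperative case, so it is closed only in that case.
            if not self.cooperative_on:
                self.first_use_case_open = False
```

This mirrors the text above: with the cooperative call on, toggling recording never touches the first use case; only the mixing is started or stopped.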
For ease of understanding, the above-described path processing method will be exemplified below with reference to the software system shown in fig. 2 and the flowchart of the path processing procedure shown in fig. 16.
Referring to fig. 2, the software system of the first device may include a call application, a call recording application, and a multi-screen cooperative application in an application layer, an Audio framework and DMSDP in an application framework layer, an Audio service in a system layer, an Audio HAL (which may be, for example, a Primary Audio HAL) in an extension layer, and a PCM device in a kernel layer. In addition, the first device further includes an audio chip, and the audio chip may communicate with the far-end call device, for example, through the modem processor: the modem processor sends the call uplink voice data to the far-end call device, and receives the call downlink voice data sent by the far-end call device.
Referring to fig. 16, the path processing procedure may include steps 1601 to 1612 as follows.
Step 1601: and when the call application program detects the operation of making a call or answering the call on the call interface, the Audio service is indicated to call the Audio HAL to establish a call with the far-end call equipment through the Audio frame.
When the call application program detects an operation of making a call or answering a call on a call interface (i.e., an incall UI interface), it determines that the first device needs to communicate with the remote call device, and thus, a call establishment procedure can be started. At this time, the call application may instruct the Audio service to call the Audio HAL through the Audio frame, so that the Audio HAL starts a voice call (voice call) flow to implement call establishment.
Step 1602: the Audio HAL instructs the Audio chip to establish a call with the far-end telephony device.
After the Audio HAL starts the voice call flow, it instructs the audio chip to select the sound production device and perform route switching, so as to implement call establishment. For the voice call flow, reference may be made to the related art, which is not described in detail in the embodiments of the present application.
After the first device establishes a call with the remote communication device through the steps 1601 and 1602, the communication status is changed from idle to in-call.
Step 1603: and the multi-screen cooperative application program monitors the call state when the first equipment and the second equipment are in the multi-screen cooperative state, and detects the state of the cooperative call switch if the call state is switched from idle to call.
The multi-screen cooperative application program can continuously monitor the call state of the first device in the multi-screen cooperative process, and if the call state is monitored to be converted from idle (idle) to in-call (offhook), it is determined that the first device starts to call with the far-end call device, and at this time, the multi-screen cooperative application program can detect the state of the cooperative call switch. If the cooperative call switch is in an open state, the cooperative call function needs to be opened in the call process so as to carry out cooperative call; if the cooperative call switch is in the off state, the cooperative call function does not need to be started in the call process, that is, the cooperative call is not performed.
When the cooperative call switch is in the on state, the user is allowed to establish the cooperative call in the call process, and when the cooperative call switch is in the off state, the user is not allowed to establish the cooperative call in the call process. The first device may turn on and turn off the cooperative call switch in a plurality of possible manners, which have been described in detail in the above cooperative call scenario, and this is not described in detail in this embodiment of the present application.
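The idle-to-offhook monitoring in steps 1603 and 1604 reduces to a small decision function. This is an illustrative Python sketch; the callback name, state constants, and returned action labels are assumptions:

```python
IDLE, OFFHOOK = "idle", "offhook"

def on_call_state_changed(old_state, new_state, cooperative_switch_on):
    """Return the action the multi-screen cooperative application takes
    when the monitored call state changes (simplified decision sketch)."""
    if old_state == IDLE and new_state == OFFHOOK:
        if cooperative_switch_on:
            # Ask DMSDP to establish the cooperative call (step 1607).
            return "send_first_switch_instruction"
        # Step 1604: keep watching the cooperative call switch.
        return "keep_monitoring_switch"
    return "no_action"
```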
Step 1604: and under the condition that the multi-screen cooperative application program detects that the cooperative call switch is in the closed state, continuously detecting the state of the cooperative call switch.
And the multi-screen cooperative application program does not establish the cooperative call under the condition that the cooperative call switch is closed, and the cooperative call function is in a closed state at the moment.
It is to be noted that, in other embodiments, when the multi-screen cooperative application monitors that the call state is changed from idle to call, if it is detected that the cooperative call switch is in the on state, the multi-screen cooperative application may send a first switch instruction to the DMSDP to instruct the DMSDP to establish the cooperative call.
Step 1605: and when the call recording application program detects the call recording operation, the Audio service is indicated to call the Audio HAL for call recording through the Audio framework.
The call recording operation is an operation for instructing to record a call, for example, the call recording operation may be a click operation of a user on call recording in the call interface.
When the call recording application program calls the Audio HAL to record the call, the Audio HAL can determine that the call recording function is switched from the closed state to the open state.
Step 1606: when the call recording function is switched from the closed state to the open state, the Audio HAL detects the state of the cooperative call function, and if the cooperative call function is detected to be in the closed state, the Audio HAL opens the first use case corresponding to the call recording function so as to create a first Stream corresponding to the first use case in the Audio HAL, and when the first Stream is created, the Audio HAL opens the first path and the second path in the Audio chip.
Under the condition, the microphone of the first device collects the call uplink voice data and sends the call uplink voice data to the audio chip, and the audio chip receives the call uplink voice data through the first channel and sends the call uplink voice data to the far-end call device. The far-end communication equipment sends the communication downlink voice data to the audio chip, and the audio chip receives the communication downlink voice data through the second channel and sends the communication downlink voice data to a loudspeaker or a receiver of the first equipment for playing. And the audio chip performs audio mixing processing on the call uplink voice data and the call downlink voice data to obtain call recording data, and sends the call recording data to the PCM equipment. The first Stream in the Audio HAL reads the call record data from the PCM device and then sends the call record data to the call record application. Therefore, call recording can be realized.
For example, as shown in fig. 17, the audio chip has an audio routing module therein, and the audio routing module has a first path and a second path therein. When the call recording function is switched from a closed state to an open state, if the Audio HAL detects that the cooperative call recording function is in the closed state, the Audio HAL opens a first use case corresponding to the call recording function so as to open a first path and a second path in the Audio routing module. The audio chip performs audio mixing processing on the call uplink voice data acquired by the first channel in the audio routing module and the call downlink voice data acquired by the second channel in the audio routing module to obtain call recording data, and sends the call recording data to the PCM equipment. The first Stream corresponding to the first instance in the Audio HAL may read the call record data from the PCM device.
Step 1607: when the multi-screen cooperative application program detects that the cooperative call switch is switched from the closed state to the open state, and under the condition that the monitored call state of the first device is in call, a first switching instruction is sent to the DMSDP to indicate the DMSDP to establish the cooperative call.
The first switching instruction is used for instructing the DMSDP to establish a collaborative call, that is, instructing to switch the call voice to the second device, so that the second device collects and plays the call voice.
Step 1608: after receiving a first switching instruction sent by a multi-screen collaborative application program, the DMSDP calls an Audio HAL to establish a collaborative call if the call state monitored by the DMSDP is determined to be in a call.
The DMSDP can also continuously monitor the call state of the first device in the multi-screen coordination process. After receiving the first switching instruction, the DMSDP determines that the multi-screen collaborative application program indicates that a collaborative call needs to be established. At this time, DMSDP determines whether to establish a cooperative call according to the call state monitored by DMSDP.
Optionally, when the call state monitored by the DMSDP is idle, the DMSDP determines that the received first switching instruction is inconsistent with the call state it monitors, and does not establish the cooperative call. When the call state monitored by the DMSDP is in-call, the DMSDP determines that the received first switching instruction is consistent with the call state it monitors, and calls the Audio HAL to establish the cooperative call.
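The consistency check in step 1608 can be sketched as a one-line predicate. This is an illustrative Python sketch; the function name and the string labels are assumptions:

```python
def should_establish_cooperative_call(monitored_state, instruction):
    """DMSDP acts on a first switching instruction only when its own
    monitored call state agrees that a call is in progress (offhook)."""
    return instruction == "first_switch" and monitored_state == "offhook"
```

The point of the check is that the instruction from the multi-screen cooperative application and DMSDP's own view of the call state must agree before any paths are opened.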
When DMSDP calls Audio HAL to establish the cooperative call, the Audio HAL can determine that the cooperative call function is switched from a closed state to an open state.
Step 1609: the Audio HAL detects a state of the call recording function when the cooperative call function is switched from a closed state to an open state, and closes a first use case corresponding to the call recording function to disconnect a first Stream corresponding to a first use case in the Audio HAL if the call recording function is detected to be in the open state, and closes a first path and a second path in the Audio chip when the first Stream is disconnected. And then the Audio HAL opens a second use case and a third use case corresponding to the cooperative call function so as to create a second Stream corresponding to the second use case and a third Stream corresponding to the third use case in the Audio HAL, wherein the second Stream opens a second passage in the Audio chip when being created, and the third Stream opens a third passage in the Audio chip when being created, so as to realize the establishment of the cooperative call.
Under the circumstance, in the process of the cooperative call between the first device and the second device, the DMSDP receives call uplink voice data which is sent by the second device and collected by the second device, the DMSDP sends the call uplink voice data to the Audio HAL, a third Stream in the Audio HAL writes the call uplink voice data into the PCM device, the PCM device sends the call uplink voice data to the Audio chip, and the Audio chip sends the call uplink voice data to the far-end call device through the third path. Meanwhile, the Audio chip receives call downlink voice data sent by the far-end call device through the second channel, the call downlink voice data are sent to the PCM device, a second Stream in the Audio HAL reads the call downlink voice data from the PCM device, the call downlink voice data are sent to the DMSDP, and the DMSDP sends the call downlink voice data to the second device and plays the call downlink voice data by the second device. Thus, the cooperative call is realized.
When the call recording function is in the on state, the Audio HAL performs Audio mixing processing on the call upstream voice data (i.e., the call upstream voice data sent through the third path) received by the third Stream from the DMSDP and the call downstream voice data (i.e., the call downstream voice data acquired through the second path) read by the second Stream from the PCM device, so as to obtain call recording data. The Audio HAL sends the call record data to the call record application. Therefore, call recording is realized.
For example, as shown in fig. 18, the audio chip has an audio routing module therein, and the audio routing module has a first path, a second path, and a third path therein. When the Audio HAL switches the cooperative call function from the closed state to the open state, if it is detected that the call recording function is in the open state, the Audio HAL closes the first use case corresponding to the call recording function to close the first path and the second path in the Audio routing module. Then, the Audio HAL opens the second use case and the third use case corresponding to the cooperative call function to open the second path and the third path in the Audio routing module.
The third Stream corresponding to the third use case in the Audio HAL writes the call uplink voice data into the PCM device, and the PCM device sends it to the third path in the audio routing module in the audio chip. Meanwhile, the audio chip sends the call downlink voice data acquired through the second path in the audio routing module to the PCM device, and the second Stream corresponding to the second use case in the Audio HAL reads the call downlink voice data from the PCM device. The Audio HAL then performs audio mixing processing on the call uplink voice data and the call downlink voice data to obtain call recording data.
It is to be noted that, in other embodiments, when the collaborative call function is switched from the off state to the on state, if it is detected that the call recording function is in the off state, the Audio HAL directly opens the second use case and the third use case to create a second Stream corresponding to the second use case and create a third Stream corresponding to the third use case in the Audio HAL, the second Stream will open the second path in the Audio chip when created, and the third Stream will open the third path in the Audio chip when created, so as to establish the collaborative call.
In this case, during the cooperative call between the first device and the second device, the DMSDP receives the call uplink voice data collected and sent by the second device and forwards it to the Audio HAL. The third Stream in the Audio HAL writes the call uplink voice data into the PCM device, the PCM device sends it to the audio chip, and the audio chip sends it to the far-end call device through the third path. Meanwhile, the audio chip receives the call downlink voice data sent by the far-end call device through the second path and sends it to the PCM device; the second Stream in the Audio HAL reads the call downlink voice data from the PCM device and sends it to the DMSDP, which sends it to the second device for playback.
While the cooperative call function is in the on state, if the Audio HAL detects that the call recording function is switched from the off state to the on state, the Audio HAL mixes the call uplink voice data received by the third Stream from the DMSDP with the call downlink voice data read by the second Stream from the PCM device to obtain call recording data, and sends the call recording data to the call recording application.
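During the cooperative call just described, each audio-frame cycle routes the uplink one way, the downlink the other, and optionally mixes the two for recording. The following sketch is illustrative only; the function and parameter names are our own, not from the patent:

```python
def cooperative_call_frame(uplink_from_dmsdp, downlink_from_chip, recording_on):
    """One audio-frame cycle during a cooperative call (illustrative sketch).

    uplink_from_dmsdp:  samples the third Stream received from the DMSDP,
                        to be written out through the third path.
    downlink_from_chip: samples the second Stream read from the PCM device,
                        having arrived through the second path.
    Returns (to_far_end, to_second_device, recording_frame), where
    recording_frame is None while call recording is off.
    """
    to_far_end = uplink_from_dmsdp           # third path: forward the uplink
    to_second_device = downlink_from_chip    # second path: return the downlink
    recording_frame = None
    if recording_on:
        # HAL-side mixing stands in for the in-chip mixing of the normal case.
        recording_frame = [max(-32768, min(32767, u + d))
                           for u, d in zip(uplink_from_dmsdp, downlink_from_chip)]
    return to_far_end, to_second_device, recording_frame
```

The key point the sketch makes is that recording reuses the second and third paths already open for the cooperative call, so no extra path (and no conflict with the first use case) is needed.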
Step 1610: if the multi-screen cooperative application detects that the cooperative call switch is switched from the on state to the off state, it sends a second switching instruction to the DMSDP to instruct closing of the cooperative call.
The second switching instruction instructs the DMSDP to close the cooperative call, that is, to switch the call voice back to the first device so that the first device collects and plays the call voice.
If the multi-screen cooperative application detects that the cooperative call switch is switched from the on state to the off state, indicating that the user wants the first device to collect and play the call voice, the application may instruct closing of the cooperative call.
It should be noted that the embodiment of the present application describes, merely as an example, the multi-screen cooperative application sending the second switching instruction to the DMSDP upon detecting that the cooperative call switch is switched from the on state to the off state. The application may also send the second switching instruction to the DMSDP under other conditions; for example, it may do so to instruct closing of the cooperative call when the multi-screen cooperation between the first device and the second device is disconnected, or when the first device hangs up the call.
Step 1611: and after receiving a second switching instruction sent by the multi-screen cooperative application program, the DMSDP calls the Audio HAL to close the cooperative call.
After receiving the second switching instruction, the DMSDP determines that the multi-screen cooperative application indicates the cooperative call should end. The DMSDP therefore calls the Audio HAL to close the cooperative call.
When the DMSDP calls the Audio HAL to close the cooperative call, the Audio HAL can determine that the cooperative call function is switched from the open state to the closed state.
Step 1612: when the cooperative call function is switched from the on state to the off state, the Audio HAL detects the state of the call recording function. If the call recording function is detected to be in the on state, the Audio HAL closes the second use case and the third use case to destroy the second Stream corresponding to the second use case and the third Stream corresponding to the third use case in the Audio HAL; when destroyed, the second Stream closes the second path in the audio chip and the third Stream closes the third path. The Audio HAL then opens the first use case to create a first Stream corresponding to the first use case in the Audio HAL; when created, the first Stream opens the first path and the second path in the audio chip.
In this case, the microphone of the first device collects the call uplink voice data and sends it to the audio chip; the audio chip receives the call uplink voice data through the first path and sends it to the far-end call device. The far-end call device sends the call downlink voice data to the audio chip, and the audio chip receives it through the second path and sends it to a speaker or receiver of the first device for playback. The audio chip also mixes the call uplink voice data with the call downlink voice data to obtain call recording data and sends the call recording data to the PCM device. The first Stream in the Audio HAL reads the call recording data from the PCM device and then sends it to the call recording application.
It should be noted that in some embodiments, if the call recording application detects a call recording operation, it instructs the Audio service, through the Audio framework, to call the Audio HAL to perform call recording. In this case, upon determining that the call recording function is switched from the off state to the on state, the Audio HAL may detect the state of the cooperative call function.
If the Audio HAL detects that the cooperative call function is in the on state, it does not open the first use case; instead, it mixes the call uplink voice data received by the third Stream from the DMSDP with the call downlink voice data read by the second Stream from the PCM device to obtain call recording data, and then sends the call recording data to the call recording application.
If the Audio HAL detects that the cooperative call function is in the off state, it opens the first use case to create a first Stream corresponding to the first use case in the Audio HAL; when created, the first Stream opens the first path and the second path in the audio chip. In this case, the microphone of the first device collects the call uplink voice data and sends it to the audio chip; the audio chip receives it through the first path and sends it to the far-end call device. The far-end call device sends the call downlink voice data to the audio chip, and the audio chip receives it through the second path and sends it to a speaker or receiver of the first device for playback. The audio chip also mixes the call uplink voice data with the call downlink voice data to obtain call recording data and sends the call recording data to the PCM device. The first Stream in the Audio HAL reads the call recording data from the PCM device and then sends it to the call recording application.
In other embodiments, if the call recording application detects a call-recording-end operation, it instructs the Audio service, through the Audio framework, to call the Audio HAL to end call recording. In this case, upon determining that the call recording function is switched from the on state to the off state, the Audio HAL may detect the state of the cooperative call function. If the cooperative call function is in the on state, the Audio HAL does not close the first use case; it simply stops mixing the call uplink voice data received by the third Stream from the DMSDP with the call downlink voice data read by the second Stream from the PCM device. If the cooperative call function is in the off state, the Audio HAL closes the first use case to destroy the first Stream corresponding to the first use case in the Audio HAL; when destroyed, the first Stream closes the first path and the second path in the audio chip. The call-recording-end operation is an operation instructing the end of call recording; for example, it may be a user's click on a call recording control in the call interface while the first device is recording a call, or the first device hanging up the call.
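Taken together, steps 1610-1612 and the recording toggles just described form a small use-case/path state machine. The sketch below is our own illustration: the class name, method names, and the use-case-to-path mapping are inferred from the text, not defined by the patent.

```python
class PathController:
    """Sketch of the Audio HAL use-case/path arbitration described above.

    Assumed use-case-to-path mapping, inferred from the text:
      first  use case -> paths 1 and 2 (local mic uplink + downlink, in-chip mixing)
      second use case -> path 2        (downlink from the far-end call device)
      third  use case -> path 3        (uplink forwarded from the second device)
    """

    USE_CASE_PATHS = {"first": {1, 2}, "second": {2}, "third": {3}}

    def __init__(self):
        self.cooperative_call = False
        self.call_recording = False
        self.open_use_cases = set()

    @property
    def open_paths(self):
        """Union of the paths opened by the currently open use cases."""
        paths = set()
        for use_case in self.open_use_cases:
            paths |= self.USE_CASE_PATHS[use_case]
        return paths

    def set_cooperative_call(self, on):
        if on and not self.cooperative_call:
            if self.call_recording:
                # Close the first use case to avoid a conflict on path 2.
                self.open_use_cases.discard("first")
            self.open_use_cases |= {"second", "third"}
        elif not on and self.cooperative_call:
            self.open_use_cases -= {"second", "third"}
            if self.call_recording:
                # Fall back to in-chip mixing through the first use case.
                self.open_use_cases.add("first")
        self.cooperative_call = on

    def set_call_recording(self, on):
        if on and not self.call_recording:
            if not self.cooperative_call:
                self.open_use_cases.add("first")
            # With the cooperative call on, recording reuses paths 2 and 3
            # via HAL-side mixing, so no use case changes.
        elif not on and self.call_recording:
            if not self.cooperative_call:
                self.open_use_cases.discard("first")
            # With the cooperative call on, only the HAL-side mixing stops.
        self.call_recording = on
```

Whichever function toggles second decides whether the first use case (in-chip mixing) or the second and third use cases (HAL-side mixing) carry the recording, which is exactly the conflict-avoidance rule the embodiments describe.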
Fig. 19 is a schematic structural diagram of a path processing apparatus according to an embodiment of the present application, where the apparatus and a first device are in a multi-screen coordination state. The apparatus may be implemented by software, hardware or a combination of both as part or all of a computer device, which may be a terminal as shown in the embodiments of fig. 1-2. Referring to fig. 19, the apparatus includes: a first detection module 1901, a first open module 1902 and a first call recording module 1903.
A first detecting module 1901, configured to detect a state of a call recording function if the cooperative call function is switched from a closed state to an open state, where the cooperative call function is used to instruct a second device in a multi-screen cooperative state with the apparatus to collect and play call voices;
a first opening module 1902, configured to close a first use case corresponding to the call recording function and open a second use case and a third use case corresponding to the collaborative call function if it is detected that the call recording function is in an open state, where the first use case is used to open a first path and a second path, the second use case is used to open a second path, and the third use case is used to open a third path, the first path is used to obtain call uplink voice data collected by the apparatus, the second path is used to obtain call downlink voice data sent to the apparatus by a far-end call device that makes a call with the apparatus, and the third path is used to send the call uplink voice data sent to the apparatus by the second device to the far-end call device;
the first call recording module 1903 is configured to send the call uplink voice data to the far-end call device through the third path, send the call downlink voice data obtained by the second path to the second device for playing, and perform sound mixing processing on the call uplink voice data sent by the third path and the call downlink voice data obtained by the second path to obtain call recording data.
Optionally, the apparatus further comprises:
the second opening module is used for opening a second use case and a third use case if the call recording function is detected to be in a closed state;
and the cooperative call module is used for sending the call uplink voice data to the far-end call device through the third path and sending the call downlink voice data acquired through the second path to the second device for playing.
Optionally, the apparatus further comprises:
the second detection module is used for detecting the state of the call recording function if the cooperative call function is switched from the open state to the closed state;
and the first closing module is used for closing the second use case and the third use case if the call recording function is detected to be in a closed state.
Optionally, the apparatus further comprises:
the third opening module is used for closing the second use case and the third use case and opening the first use case if the call recording function is detected to be in an opening state;
and the second call recording module is used for performing sound mixing processing on the call uplink voice data acquired through the first path and the call downlink voice data acquired through the second path to obtain call recording data.
Optionally, the apparatus further comprises:
the third detection module is used for detecting the state of the cooperative call function if the call recording function is switched from a closed state to an open state;
the fourth opening module is used for opening the first use case if the cooperative call function is detected to be in a closed state;
and the third call recording module is used for performing sound mixing processing on the call uplink voice data acquired through the first path and the call downlink voice data acquired through the second path to obtain call recording data.
Optionally, the apparatus further comprises:
and the fourth call recording module is used for performing sound mixing processing on the call uplink voice data sent through the third path and the call downlink voice data acquired through the second path to obtain call recording data if the cooperative call function is detected to be in the open state.
Optionally, the apparatus further comprises:
the fourth detection module is used for detecting the state of the cooperative call function if the call recording function is switched from the open state to the closed state;
and the second closing module is used for closing the first use case if the cooperative call function is detected to be in a closed state.
Optionally, the apparatus further comprises:
and the call recording stopping module is used for stopping the sound mixing processing on the call uplink voice data sent through the third path and the call downlink voice data acquired through the second path if the cooperative call function is detected to be in the open state.
In the embodiment of the present application, when both the cooperative call function and the call recording function are in the on state, the first use case corresponding to the call recording function is not opened; only the second use case and the third use case corresponding to the cooperative call function are opened. The call downlink voice data sent to the apparatus by the far-end call device is acquired through the second path opened by the second use case and sent to the second device for playback, and the call uplink voice data sent to the apparatus by the second device is sent to the far-end call device through the third path opened by the third use case, thereby realizing the cooperative call. Moreover, the call uplink voice data sent through the third path and the call downlink voice data acquired through the second path can be mixed to obtain call recording data, thereby realizing call recording. The embodiment of the present application can therefore avoid path conflicts when the cooperative call and call recording are used simultaneously, with a simple processing flow and logic that is easy to understand and maintain, and ensures that both functions can normally acquire call voice data.
It should be noted that: in the path processing apparatus provided in the foregoing embodiment, only the division of each functional module is illustrated in the foregoing, and in practical applications, the above function allocation may be completed by different functional modules as needed, that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the above described functions.
Each functional unit and module in the above embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used to limit the protection scope of the embodiments of the present application.
The embodiments of the path processing apparatus and the path processing method provided in the above embodiments belong to the same concept, and for specific working processes and technical effects brought by the units and modules in the above embodiments, reference may be made to the portions of the embodiments of the methods, which are not described herein again.
In the above embodiments, the implementation may be realized wholly or partly by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another via a wired connection (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or a wireless connection (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., Digital Versatile Disc (DVD)), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The above description is not intended to limit the present application to the particular embodiments disclosed, but rather, the present application is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present application.

Claims (11)

1. A path processing method applied to a first device, the method comprising:
if the cooperative call function is switched from a closed state to an open state, detecting the state of a call recording function, wherein the cooperative call function is used for indicating that a second device in a multi-screen cooperative state with the first device collects and plays call voice;
if the call recording function is detected to be in an open state, closing a first use case corresponding to the call recording function, and opening a second use case and a third use case corresponding to the cooperative call function, wherein the first use case is used for opening a first path and a second path, the second use case is used for opening the second path, the third use case is used for opening a third path, the first path is used for acquiring call uplink voice data collected by the first device, the second path is used for acquiring call downlink voice data sent to the first device by a far-end call device in a call with the first device, and the third path is used for sending the call uplink voice data sent to the first device by the second device to the far-end call device;
and sending the call uplink voice data to the far-end call device through the third path, sending the call downlink voice data acquired through the second path to the second device for playing, and performing sound mixing processing on the call uplink voice data sent through the third path and the call downlink voice data acquired through the second path to obtain call recording data.
2. The method as claimed in claim 1, wherein if the cooperative call function is switched from the off state to the on state, after detecting the state of the call recording function, the method further comprises:
if the call recording function is detected to be in a closed state, opening the second use case and the third use case;
and sending the call uplink voice data to the far-end call device through the third path, and sending the call downlink voice data acquired through the second path to the second device for playing.
3. The method of claim 1, wherein the method further comprises:
if the cooperative call function is switched from an open state to a closed state, detecting the state of the call recording function;
and if the call recording function is detected to be in a closed state, closing the second use case and the third use case.
4. The method as claimed in claim 3, wherein after detecting the state of the call recording function if the cooperative call function is switched from the on state to the off state, the method further comprises:
if the call recording function is detected to be in an open state, closing the second use case and the third use case, and opening the first use case;
and performing sound mixing processing on the call uplink voice data acquired through the first path and the call downlink voice data acquired through the second path to obtain the call recording data.
5. The method of any of claims 1-4, wherein the method further comprises:
if the call recording function is switched from a closed state to an open state, detecting the state of the cooperative call function;
if the cooperative call function is detected to be in a closed state, opening the first use case;
and performing sound mixing processing on the call uplink voice data acquired through the first path and the call downlink voice data acquired through the second path to obtain the call recording data.
6. The method as claimed in claim 5, wherein after detecting the state of the cooperative call function if the call recording function is switched from the off state to the on state, the method further comprises:
and if the cooperative call function is detected to be in an open state, carrying out sound mixing processing on the call uplink voice data sent through the third path and the call downlink voice data acquired through the second path to obtain call recording data.
7. The method of claim 5, wherein the method further comprises:
if the call recording function is switched from an open state to a closed state, detecting the state of the cooperative call function;
and if the cooperative call function is detected to be in a closed state, closing the first use case.
8. The method as claimed in claim 7, wherein after detecting the state of the cooperative call function if the call recording function is switched from the on state to the off state, the method further comprises:
and if the cooperative call function is detected to be in an open state, stopping performing sound mixing processing on the call uplink voice data sent through the third path and the call downlink voice data acquired through the second path.
9. A pathway processing apparatus, the apparatus comprising:
the first detection module is used for detecting the state of the call recording function if the cooperative call function is switched from a closed state to an open state, wherein the cooperative call function is used for indicating that a second device in a multi-screen cooperative state with the apparatus collects and plays call voice;
a first opening module, configured to close a first use case corresponding to the call recording function and open a second use case and a third use case corresponding to the cooperative call function if the call recording function is detected to be in an open state, wherein the first use case is used for opening a first path and a second path, the second use case is used for opening the second path, the third use case is used for opening a third path, the first path is used for acquiring call uplink voice data collected by the apparatus, the second path is used for acquiring call downlink voice data sent to the apparatus by a far-end call device in a call with the apparatus, and the third path is used for sending the call uplink voice data sent to the apparatus by the second device to the far-end call device;
and the first call recording module is used for sending the call uplink voice data to the far-end call device through the third path, sending the call downlink voice data acquired through the second path to the second device for playing, and performing sound mixing processing on the call uplink voice data sent through the third path and the call downlink voice data acquired through the second path to obtain call recording data.
10. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the method according to any one of claims 1-8.
11. A computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1-8.
CN202210184795.0A 2022-02-28 2022-02-28 Path processing method, device, equipment and storage medium Active CN114245060B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210184795.0A CN114245060B (en) 2022-02-28 2022-02-28 Path processing method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114245060A CN114245060A (en) 2022-03-25
CN114245060B true CN114245060B (en) 2022-07-05

Family

ID=80748283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210184795.0A Active CN114245060B (en) 2022-02-28 2022-02-28 Path processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114245060B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105429851A (en) * 2015-11-10 2016-03-23 河海大学 Multiplayer collaborative recording system and identification method based on instant communication
CN106385485A (en) * 2016-08-25 2017-02-08 广东欧珀移动通信有限公司 Call recording method, call recording device and mobile terminal
CN110781014A (en) * 2019-10-28 2020-02-11 苏州思必驰信息科技有限公司 Recording data multi-process distribution method and system based on Android device
CN112202961A (en) * 2020-10-29 2021-01-08 歌尔科技有限公司 Audio channel switching method and device and computer readable storage medium
CN113301525A (en) * 2021-05-07 2021-08-24 上海小鹏汽车科技有限公司 Call control method and device, electronic controller and vehicle
CN113923305A (en) * 2021-12-14 2022-01-11 荣耀终端有限公司 Multi-screen cooperative communication method, system, terminal and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140297755A1 (en) * 2013-04-02 2014-10-02 Research In Motion Limited Method and system for switching between collaborative applications


Also Published As

Publication number Publication date
CN114245060A (en) 2022-03-25

Similar Documents

Publication Publication Date Title
WO2023130991A1 (en) Collaborative calling method and apparatus, device, storage medium, and program product
CN113497909B (en) Equipment interaction method and electronic equipment
EP4192057A1 (en) Bluetooth communication method, wearable device, and system
JP7268275B2 (en) Method and electronic device for presenting video on electronic device when there is an incoming call
WO2021031865A1 (en) Call method and apparatus
CN111294884A (en) Communication terminal supporting dual-card dual-standby single-pass and data service switching method
WO2023088209A1 (en) Cross-device audio data transmission method and electronic devices
WO2023184825A1 (en) Video recording control method of electronic device, electronic device, and readable medium
EP4283454A1 (en) Card widget display method, graphical user interface, and related apparatus
CN114640747A (en) Call method, related device and system
CN115242994B (en) Video call system, method and device
CN114245060B (en) Path processing method, device, equipment and storage medium
CN113923305B (en) Multi-screen cooperative communication method, system, terminal and storage medium
CN115002820B (en) Call state monitoring method, device, equipment and storage medium
CN115550559A (en) Video picture display method, device, equipment and storage medium
CN115002821B (en) Call state monitoring method, device, equipment and storage medium
CN113543366A (en) Mobile terminal, call method thereof, call server and call system
CN114173315B (en) Bluetooth reconnection method and terminal equipment
EP4266164A1 (en) Display method and electronic device
WO2023236646A1 (en) Incoming call display method and electronic devices
US11973895B2 (en) Call method and apparatus
WO2023036001A1 (en) Call method and electronic device
WO2024067170A1 (en) Device management method and electronic device
CN115016871B (en) Multimedia editing method, electronic device and storage medium
CN116301541A (en) Method for sharing file, electronic device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant