WO2020233556A1 - A method for processing call content and an electronic device (一种通话内容处理方法和电子设备) - Google Patents

A method for processing call content and an electronic device

Info

Publication number
WO2020233556A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
interface
call
webpage
application
Application number
PCT/CN2020/090956
Other languages
English (en)
French (fr)
Inventor
丁宁
张子曰
曹林
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Publication of WO2020233556A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/725 Cordless telephones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2250/00 Details of telephonic subscriber devices
    • H04M 2250/12 Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion

Definitions

  • the embodiments of the present application relate to communication technologies, and in particular, to a method for processing call content and an electronic device.
  • the electronic device displays a first interface 101.
  • the first interface 101 is a text message dialogue interface, which may include text and images. The user can perform multi-touch operations on any area in the first interface 101.
  • in response to a user's multi-touch operation on a certain area (for example, the first area), the electronic device performs image processing on that area to extract the text in it (for example, "I heard movie A is good") and splits the text into multiple fields (for example, "heard", "movie A", and "good"). Then, the electronic device displays the second interface 102.
  • the second interface 102 includes multiple tags 1021, and each tag corresponds to a field. The user can click the search button 1022 after selecting one or more field labels (for example, selecting the field label of "Movie A").
  • the electronic device in response to the user clicking the search button 1022, the electronic device starts a browser application and displays an interface 103.
  • the interface 103 is an application interface of a browser.
  • the electronic device automatically fills the field of the field label selected by the user (for example, movie A) into the search box and performs a search, so as to obtain information related to the field.
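The prior-art flow above (recognize the text in the touched area, split it into field labels, then search the selected fields) can be sketched in Python. This is a simplified stand-in, not the actual implementation: the segmentation and the search endpoint below are placeholders.

```python
from urllib.parse import quote_plus

def split_into_fields(text):
    # Stand-in for OCR plus word segmentation: split on whitespace.
    # A real device would segment Chinese text with a proper tokenizer.
    return [field for field in text.split() if field]

def build_search_url(selected_fields):
    # Fill the selected field labels into the browser's search box.
    # The search endpoint below is a placeholder, not a real service.
    query = " ".join(selected_fields)
    return "https://search.example.com/?q=" + quote_plus(query)

fields = split_into_fields("heard movie-A is-good")
url = build_search_url([fields[1]])  # user selected the "movie-A" field label
```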
  • the prior art only recognizes displayed content to provide related information, whereas users expect richer information services.
  • the embodiments of the present application provide a call content processing method and electronic device, which can process the call content to provide information related to the call content.
  • a method for processing call content includes: when the electronic device is in a call connection state, the electronic device receives a first input; in response to the first input, the electronic device obtains at least one key of the call content Information; the electronic device displays a first interface according to the above key information, and the first interface includes a label corresponding to the key information.
  • the content of the call may include first voice data and second voice data.
  • the first voice data is voice data generated by the electronic device by collecting external sounds
  • the second voice data is voice data that the electronic device receives from other electronic devices connected to it in a call.
  • the key information may include text data of part of the call content, keywords in the call content, and webpage links related to the keywords.
  • the tags include any one or more of text tags, keyword tags, and information tags. Text tags correspond to text data; keyword tags correspond to keywords; information tags correspond to web links.
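As a rough illustration of the tag model described above, the sketch below maps each kind of key information to its tag type. The names and data structure are hypothetical; the patent does not prescribe any particular representation.

```python
from dataclasses import dataclass

# Tag kinds named in the description: text tags carry transcribed text,
# keyword tags carry extracted keywords, information tags carry web links.
TEXT, KEYWORD, INFO = "text", "keyword", "info"

@dataclass
class Tag:
    kind: str
    payload: str  # text data, a keyword, or a webpage link

def tags_from_key_info(text_data, keywords, links):
    """Build the first-interface tag list from the three kinds of key information."""
    tags = [Tag(TEXT, t) for t in text_data]
    tags += [Tag(KEYWORD, k) for k in keywords]
    tags += [Tag(INFO, l) for l in links]
    return tags
```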
  • the first input may be that the electronic device is folded or the electronic device is unfolded.
  • the key information can be obtained in different ways.
  • the electronic device sends the content of the call to the first server; the first server receives the content of the call sent by the electronic device and converts it into text data; then, the first server extracts keywords from the text data and sends the keywords to the second server to obtain webpage links related to the keywords; finally, the first server sends the webpage links, the text data, and/or the keywords to the electronic device.
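The two-server pipeline described above can be sketched as follows. `transcribe`, `extract_keywords`, and `lookup_links` are placeholder stubs standing in for the first server's speech-to-text step, its keyword extraction, and the query to the second (search) server; they are not part of the patent's disclosure.

```python
def transcribe(call_audio):
    # Placeholder for the first server's speech-to-text step.
    return call_audio["transcript"]

def extract_keywords(text):
    # Placeholder keyword extraction: treat capitalized tokens as keywords.
    return [w for w in text.split() if w[:1].isupper()]

def lookup_links(keywords):
    # Placeholder for querying the second (search) server for webpage links.
    return {k: f"https://search.example.com/?q={k}" for k in keywords}

def process_call_content(call_audio):
    """First-server pipeline: audio -> text -> keywords -> webpage links."""
    text = transcribe(call_audio)
    keywords = extract_keywords(text)
    links = lookup_links(keywords)
    # Returned to the electronic device as the "key information".
    return {"text": text, "keywords": keywords, "links": links}
```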
  • a user can obtain key information related to the call content while making a call.
  • the electronic device receives a third input to the information tag, and in response to the third input, the electronic device displays a web page associated with the information tag.
  • the electronic device can access the webpage associated with the information tag through the browser core, and display the webpage in a web view (Webview).
  • users can quickly access web pages related to the content of the call through the information tag. Similar to browsing a web page in a browser application, the user can operate the web page to access other web pages.
  • the electronic device may receive a fourth input to the webpage; in response to the fourth input, the electronic device may display other webpages.
  • the electronic device can record the webpage link of the webpage visited by the user.
  • the electronic device can jump to the webpage displayed at the end of the call, so that the user can continue to browse the webpage through the browser application after the call, which improves the user experience.
  • the electronic device can jump to the corresponding application interface when the call is ended.
  • this design specifically includes: the electronic device obtains the most recently recorded webpage link; it determines the application related to that webpage link; if the application is installed, the electronic device obtains the application link corresponding to the webpage link, starts the related application, and displays the corresponding application interface according to the application link; if the application is not installed, it starts the browser application and displays the corresponding webpage according to the webpage link.
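A minimal sketch of this end-of-call routing decision. All function and identifier names here are hypothetical, and the webpage-to-application mapping is supplied by the caller:

```python
def on_call_ended(visited_links, installed_apps, app_link_for):
    """Decide where to jump when the call ends (sketch of the design above).

    visited_links: webpage links recorded during the call, newest last.
    installed_apps: set of application identifiers installed on the device.
    app_link_for: maps a webpage link to (app_id, app_link), or None.
    """
    if not visited_links:
        return None
    last = visited_links[-1]               # most recently recorded webpage link
    mapping = app_link_for(last)           # find the application related to it
    if mapping and mapping[0] in installed_apps:
        app_id, app_link = mapping
        return ("app", app_id, app_link)   # start the related application
    return ("browser", last)               # fall back to the browser application
```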
  • an embodiment of the present application provides an electronic device that includes a display screen, a processor, and a memory for storing a computer program, where the computer program includes instructions that, when executed by the processor, cause the electronic device to perform the method according to any one of the first aspect.
  • the present application provides a computer storage medium including computer instructions, which when the computer instructions run on an electronic device, cause the electronic device to execute the method described in any one of the first aspect.
  • this application provides a computer program product, which when the computer program product runs on an electronic device, causes the electronic device to execute the method described in any one of the first aspect.
  • the present application provides a graphical user interface, which specifically includes a graphical user interface displayed when an electronic device executes any method as in the first aspect.
  • the electronic device described in the second aspect, the computer storage medium described in the third aspect, the computer program product described in the fourth aspect, and the graphical user interface described in the fifth aspect provided above are all used to execute the corresponding method provided above; for the beneficial effects that can be achieved, refer to the beneficial effects of the corresponding method provided above, which will not be repeated here.
  • FIG. 1 is a schematic diagram of a scene of a display content processing method provided by the prior art
  • FIG. 2 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • FIG. 3 is a software structure block diagram of an electronic device provided by an embodiment of the application.
  • FIG. 4 is a schematic flowchart of a call content processing method provided by an embodiment of the application.
  • FIG. 5 is a schematic diagram of a scene of another method for processing call content according to an embodiment of the application.
  • FIG. 6 is a schematic structural diagram of yet another electronic device provided by an embodiment of the application.
  • FIG. 7 is a schematic scenario diagram of another method for processing call content provided by an embodiment of the application.
  • FIG. 8 is a schematic diagram of a scene of another method for processing call content according to an embodiment of the application.
  • FIG. 9 is a schematic diagram of a scene of another method for processing call content according to an embodiment of the application.
  • FIG. 10 is a schematic diagram of a scene of another call content processing method provided by an embodiment of the application.
  • FIG. 11 is a schematic diagram of another scene of a call content processing method provided by an embodiment of the application.
  • FIG. 12 is a schematic diagram of a scene of another method for processing call content according to an embodiment of the application.
  • FIG. 13 is a schematic flowchart of another method for processing call content according to an embodiment of this application.
  • FIG. 14 is a schematic diagram of a scene of another method for processing call content according to an embodiment of the application.
  • the "user” in the embodiments of this application refers to a user who uses an electronic device.
  • "A and/or B" in the embodiments of the present application merely describes an association relationship between associated objects, indicating that three relationships may exist: A alone, both A and B, and B alone.
  • the character "/" in the embodiment of the present application generally indicates that the associated objects before and after are in an "or" relationship.
  • the method for processing call content provided by the embodiment of the present application may be applied to an electronic device.
  • the electronic device may be, for example, a mobile phone, a tablet computer (Tablet Personal Computer), a laptop computer (Laptop Computer), a digital camera, a personal digital assistant (PDA), a navigation device, a mobile Internet device (Mobile Internet Device, MID), a wearable device (Wearable Device), etc.
  • FIG. 2 shows a schematic diagram of the structure of the electronic device 100.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be coupled to the touch sensor 180K, charger, flash, camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to realize communication between the processor 110 and the audio module 170.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through an I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with the display screen 194, the camera 193 and other peripheral devices.
  • the MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 100.
  • the processor 110 and the display screen 194 communicate through a DSI interface to realize the display function of the electronic device 100.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and so on.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through the headphones. This interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present application is merely a schematic description, and does not constitute a structural limitation of the electronic device 100.
  • the electronic device 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive the wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110.
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, etc.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic wave radiation via the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • the display panel can adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the electronic device 100 can implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back from the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transfers the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats.
  • the electronic device 100 may include 1 or N cameras 193, and N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects the frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in a variety of encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can realize applications such as intelligent cognition of the electronic device 100, such as image recognition, face recognition, voice recognition, text understanding, and so on.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, at least one application program (such as a sound playback function, an image playback function, etc.) required by at least one function.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • the microphone 170C converts the collected sound signal into an electrical signal, which is received by the audio module 170 and then converted into an audio signal.
  • the audio module can convert the audio signal into an electrical signal, which is received by the speaker 170A and converted into a sound signal for output.
  • the speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also called an "earpiece," is used to convert audio electrical signals into sound signals.
  • when the electronic device 100 answers a call or plays a voice message, the voice can be heard by bringing the receiver 170B close to the ear.
  • the microphone 170C, also called a "mic," is used to convert sound signals into electrical signals.
  • when making a sound, the user can speak with the mouth close to the microphone 170C, inputting the sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals. In some other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 170D is used to connect wired earphones.
  • the earphone interface 170D may be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors.
  • the capacitive pressure sensor may include at least two parallel plates with conductive material.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations that act on the same touch location but have different touch operation strengths may correspond to different operation instructions.
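The mapping just described — same touch location, different pressure intensity, different operation instruction — can be sketched as follows. This is an illustrative sketch only: the threshold value and the instruction names are assumptions, not taken from the patent.

```python
# Hypothetical sketch: dispatch different operation instructions for
# touches at the same location but with different pressure intensities.
PRESSURE_THRESHOLD = 0.5  # normalized pressure; hypothetical cutoff

def dispatch_touch(location: str, pressure: float) -> str:
    """Return an operation instruction based on the touch pressure."""
    if pressure < PRESSURE_THRESHOLD:
        return f"preview:{location}"       # light press
    return f"context_menu:{location}"      # deep press
```

For example, a light press on an icon could trigger a preview while a deep press at the same location opens a context menu.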
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 100.
  • the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shake of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can use the magnetic sensor 180D to detect the opening and closing of the flip holster.
  • the electronic device 100 can detect the opening and closing of the flip according to the magnetic sensor 180D.
  • features such as automatic unlocking of the flip cover can then be set according to the detected open or closed state of the holster or flip.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and used in applications such as horizontal and vertical screen switching, pedometers and so on.
  • the distance sensor 180F is used to measure distance. The electronic device 100 can measure distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F to measure distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 can determine that there is no object near the electronic device 100.
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather-case mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, etc.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the electronic device 100 executes to reduce the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid abnormal shutdown of the electronic device 100 due to low temperature.
  • the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
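As a rough illustration of the temperature processing strategy described above, the sketch below chooses protection actions from the reported temperature; the threshold values and action names are assumptions, not taken from the patent.

```python
HIGH_TEMP_C = 45.0  # hypothetical upper threshold (degrees Celsius)
LOW_TEMP_C = 0.0    # hypothetical lower threshold

def thermal_policy(temp_c: float) -> list:
    """Pick thermal-protection actions for a reported temperature."""
    if temp_c > HIGH_TEMP_C:
        # reduce performance of the processor near the temperature sensor
        return ["throttle_processor"]
    if temp_c < LOW_TEMP_C:
        # heat the battery and boost its output voltage to avoid
        # abnormal shutdown caused by low temperature
        return ["heat_battery", "boost_battery_voltage"]
    return []
```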
  • the touch sensor 180K is also called a "touch panel."
  • the touch sensor 180K may be disposed on the display screen 194; the touch sensor 180K and the display screen 194 together form the touchscreen, also called a "touch-control screen."
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can obtain the vibration signal of the bone mass that vibrates when a person speaks.
  • the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal.
  • the bone conduction sensor 180M may also be provided in an earphone, combined into a bone conduction earphone.
  • the audio module 170 can parse the voice signal based on the vibration signal of the vibrating bone block of the voice obtained by the bone conduction sensor 180M, and realize the voice function.
  • the application processor may analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, and realize the heart rate detection function.
  • the button 190 includes a power button, a volume button, and so on.
  • the button 190 may be a mechanical button. It can also be a touch button.
  • the electronic device 100 may receive key input, and generate key signal input related to user settings and function control of the electronic device 100.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • touch operations applied to different applications can correspond to different vibration feedback effects.
  • for touch operations acting on different areas of the display screen 194, the motor 191 can also produce different vibration feedback effects.
  • different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to the SIM card.
  • the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the electronic device 100.
  • the electronic device 100 may support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards can be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 may also be compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present application takes a layered Android system as an example to illustrate the software structure of the electronic device 100.
  • FIG. 3 is a block diagram of the software structure of the electronic device 100 according to an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, from top to bottom, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides application programming interfaces (application programming interface, API) and programming frameworks for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a window manager, a content provider, a view system, a phone manager, a resource manager, and a notification manager.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include video, image, audio, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text and controls that display pictures.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device 100. For example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, etc.
  • the notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages and can disappear automatically after a short stay without user interaction.
  • the notification manager is used to notify the download completion, message reminder, etc.
  • the notification manager can also present notifications in the status bar at the top of the system in the form of a chart or scroll-bar text (for example, a notification of an application running in the background), or on the screen in the form of a dialog window. For example, a text message is prompted in the status bar, a prompt sound is played, the electronic device vibrates, or the indicator light flashes.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to realize 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • when a touch operation is received by the touch sensor 180K, the corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into original input events (including touch coordinates, time stamps of touch operations, etc.).
  • the original input events are stored in the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation being a single-tap operation whose corresponding control is the camera application icon as an example: the camera application calls the interface of the application framework layer to start the camera application, and then starts the camera driver by calling the kernel layer.
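The kernel-to-framework event flow above can be sketched roughly as below. The data structures and the coordinate-to-control lookup are illustrative assumptions, not Android's actual internals.

```python
import time

def make_raw_input_event(x: int, y: int) -> dict:
    # kernel layer: wrap the touch into a raw input event
    # (touch coordinates plus a timestamp)
    return {"x": x, "y": y, "timestamp": time.time()}

# framework layer: hypothetical mapping from coordinates to controls
CONTROLS = {(100, 200): "camera_app_icon"}

def handle_input_event(event: dict) -> str:
    # identify the control at the event's coordinates
    control = CONTROLS.get((event["x"], event["y"]))
    if control == "camera_app_icon":
        # the camera application would then start and, via the kernel
        # layer, start the camera driver so camera 193 can capture images
        return "start_camera_application"
    return "ignore"
```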
  • the camera 193 captures still images or videos.
  • FIG. 4 is a schematic flowchart of a method for processing call content according to an embodiment of the application. As shown in Figure 4, the method includes:
  • Step 401 When the electronic device is in a call connection state, the electronic device receives a first input.
  • the electronic device being in the call connection state means that the electronic device has established a call connection with other electronic devices.
  • the call may include: a voice call or a video call.
  • the first user uses an electronic device (eg, the first electronic device) to make a video call with other electronic devices (eg, the second electronic device) used by the second user.
  • the electronic device is in a call connection state with other electronic devices.
  • the electronic device displays a call interface (for example, interface 501).
  • the call interface may include a call service activation button (eg, button 5011) for starting the call service.
  • the electronic device receives the user's first input.
  • the first input is: the call service activation button (eg, button 5011) is touched.
  • the first input includes, but is not limited to, the foregoing manner.
  • the first input may be a signal sent by the other device for instructing the electronic device to start a call service.
  • the Bluetooth headset sends a signal to the electronic device to instruct the electronic device to start the call service.
  • the first input may be: unfolding or folding the electronic device.
  • the electronic device is a foldable mobile phone.
  • Step 402 In response to the first input, the electronic device starts a call service.
  • the call service refers to providing at least one piece of key information required by the user by processing the call content.
  • the call service includes, but is not limited to: uncommon word interpretation, sentence supplement, voice error correction, grammatical error correction, schedule management, translation, weather query, and navigation.
  • in response to the first input (button 5011 being touched), the electronic device starts the call service.
  • when starting the call service or in the process of starting the call service, the electronic device may display a notification message 5021 for notifying that the electronic device is starting the call service.
  • Step 403 The electronic device obtains at least one key information provided by the call service.
  • step 403 may specifically include steps 4031-4039:
  • Step 4031 The electronic device sends the first data to the first server.
  • the first data may include: first voice data and/or second voice data.
  • the application scenario shown in Figure 5(c) is: after the electronic device starts the call service, the second user asks, "Is there any good movie recently?"; the first user replies, "I heard that movie A (the name of a movie) is good."
  • the mobile communication module 150 of the electronic device receives the second voice data sent by another device (e.g., the second electronic device). After the second voice data is converted into an electrical signal by the audio module 170, the speaker 170A converts it into a sound signal for output (for example, "Is there any good movie recently?"). The electronic device may send the second voice data to the first server.
  • the first user approaches the microphone 170C to speak, and the first user's voice signal (for example, "I heard that movie A is good") is converted into an electrical signal by the microphone 170C, and then converted into the first voice data by the audio module 170.
  • the first voice data is voice data generated by the electronic device by collecting external sounds.
  • the electronic device sends the first voice data to the first server.
  • the first voice data corresponds to the content said by the first user
  • the second voice data corresponds to the content said by the second user. That is, the electronic device may send the first server the content of the call between the first user and the second user, i.e., the content of the call between the first electronic device and the second electronic device.
  • Step 4032 The first server receives the first data sent by the electronic device.
  • Step 4033 The first server generates second data according to the first data.
  • the generating of the second data according to the first data may include the following steps 40331 to 40334:
  • Step 40331 The first server converts the voice data in the first data into text data.
  • the electronic device sends the second voice data to the first server, and the first server converts the voice data into text data, obtaining the second text data: "Is there any good movie recently?"
  • the electronic device sends the first voice data to the first server, and the first server converts the voice data into text data, obtaining the first text data: "I heard that movie A is good."
  • Step 40332 The first server extracts keywords in the text data.
  • the keyword extracted by the first server may be: “movies”.
  • the keyword extracted by the first server may be: “movie A”.
  • Step 40333 The first server determines the second server.
  • the second server is used to provide key information related to the extracted keywords.
  • the first server may determine the second server according to the extracted keywords.
  • the first server may store one or more lists. As shown in Table 1, the list includes one or more keywords and the names of one or more servers corresponding to the keywords.
  • the first server matches the extracted keywords with the keywords in the list, and determines that the second server is the server corresponding to the keywords in the list that matches the extracted keywords.
  • the second server can be one server or multiple servers.
  • the first server determines that the second server is "Fandango”.
  • the first server may determine the second server according to the user's setting or the user's usage habits.
  • the keywords extracted by the first server are "buy”, “mobile phone” and “Huawei P30Pro”.
  • the servers corresponding to the above keywords are “Amazon” and “Taobao”.
  • the first server may determine that the second server is "Taobao”.
  • the server list also includes semantic types.
  • the first server may determine the server corresponding to a semantic type according to that semantic type.
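A minimal sketch of the keyword-to-server matching in step 40333 might look like the following. The table contents and the preference rule are illustrative assumptions that reuse the examples above; Table 1's actual contents are not reproduced here.

```python
# Illustrative server table mirroring the examples above.
SERVER_TABLE = {
    "movie": ["Fandango"],
    "mobile phone": ["Amazon", "Taobao"],
}
# hypothetical user setting: preferred server per keyword
USER_PREFERENCE = {"mobile phone": "Taobao"}

def determine_second_server(keywords: list):
    candidates = []
    for kw in keywords:
        for table_kw, servers in SERVER_TABLE.items():
            if table_kw in kw:  # extracted keyword matches a table keyword
                candidates.extend(servers)
    if not candidates:
        return None
    # when several servers match, fall back to the user's setting
    for kw in keywords:
        preferred = USER_PREFERENCE.get(kw)
        if preferred in candidates:
            return preferred
    return candidates[0]
```

With the examples above, the keyword "movies" would resolve to "Fandango", while "buy", "mobile phone", and "Huawei P30Pro" would resolve to "Taobao" via the user preference.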
  • Step 40334 The first server generates second data.
  • the second data may include keywords.
  • the second data may be data in JavaScript Object Notation (JSON) format.
  • JSON is a data format based on a subset of JavaScript syntax that provides a simple way to describe complex objects.
  • the form of the second data may be different according to the second server.
  • the second data may be: information: {"type": "mobile phone", "name": "Huawei P30Pro"}.
  • the second data may be: information: {"name": "movie A"}.
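The JSON-format second data of step 40334 could be assembled along these lines; the field names follow the two examples above, and the keyword handling is otherwise an assumption for illustration.

```python
import json

def build_second_data(keywords: list) -> str:
    """Pack extracted keywords into JSON-format second data."""
    info = {}
    if "mobile phone" in keywords:
        info["type"] = "mobile phone"
    if "Huawei P30Pro" in keywords:
        info["name"] = "Huawei P30Pro"
    elif "movie A" in keywords:
        info["name"] = "movie A"
    return json.dumps({"information": info})
```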
  • Step 4034 The first server sends a first processing request to the second server.
  • the first processing request includes the second data.
  • the first processing request is used to instruct the second server to provide information related to the second data.
  • Step 4035 The second server receives the first processing request sent by the first server.
  • Step 4036 In response to the first processing request, the second server sends third data to the first server.
  • the third data may include a link, such as a uniform resource locator (URL).
  • through the link, the electronic device can obtain information related to the keywords.
  • the information may include: product information, plot introduction, movie reviews, full text of poems, maps, singer introduction, news, hotel rankings, etc.
  • the third data may include a web link to a Taobao webpage.
  • when an electronic device accesses the webpage through the web link, it can obtain information related to Huawei P30Pro.
  • the third data may include the web link of the content introduction page of movie A on the Fandango webpage, such as {"web": www.fandango.com/movie-a/movie-overview}.
  • the electronic device accesses a webpage through the webpage link, it can obtain information related to Movie A.
  • Step 4037 The first server receives the third data sent by the second server.
  • Step 4038 The first server sends the fourth data to the electronic device.
  • the fourth data may include: keywords, text data and/or links.
  • the fourth data may further include: the name of the second server.
  • the fourth data may be:
  • Step 4039 The electronic device receives the fourth data sent by the first server.
  • the electronic device can thus acquire at least one piece of key information about the call content between the first electronic device and the second electronic device. It is understandable that, at this time, the at least one piece of key information provided by the call service is the fourth data.
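As a hypothetical illustration of the fourth data received in step 4039, the sketch below combines the pieces named above (text data, keyword, web link, and the second server's name). The field names and the sample payload are assumptions, not a format specified by the patent.

```python
import json

# hypothetical fourth data, combining the examples above
fourth_data = json.dumps({
    "text": "I heard that movie A is good",
    "keyword": "movie A",
    "web": "www.fandango.com/movie-a/movie-overview",
    "server": "Fandango",
})

def extract_key_information(data: str) -> tuple:
    """Pull out the fields the call service interface will render as labels."""
    d = json.loads(data)
    return d["keyword"], d["web"], d["server"]
```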
  • Step 404 The electronic device displays a call service interface according to the at least one key information.
  • the electronic device displays an interface 503.
  • the interface 503 includes a call interface (e.g., interface 5036) and a call service interface (e.g., interface 5037).
  • the electronic device displays the call interface in the display area 5031, and displays the call service interface in the display area 5032.
  • the electronic device can generate a text label, a keyword label and/or an information label according to at least one key information provided by the call service.
  • the call service interface may include text labels (for example, label 5033 "Are there any good movies recently?"), keyword labels (for example, label 5034 "Movie A"), and/or information labels (for example, labels 5035 "Movie Introduction" and "Reservation").
  • the text label is used to display text data.
  • the keyword tag is used to display keywords.
  • the information label is associated with the link.
  • the information tag may be located near a text tag or keyword tag related to it.
  • the electronic device may receive a third input of the user to the information tag.
  • the third input may be: the user touches the information label.
  • the electronic device can access the webpage associated with the information tag through the browser kernel.
  • the electronic device can display the web page (e.g., the introduction page of movie A, the booking page of movie A) in a web view.
  • the electronic device may launch a browser application to access the webpage associated with the information tag.
  • the electronic device may display the application interface of the browser application in the second display area.
  • the call service interface may also include a website label.
  • the website tag is used to indicate the server corresponding to the web page link, such as Taobao, Amazon, etc.
  • the application tag or website tag may be located near the information tag related thereto.
  • the electronic device can distinguish and display labels of different categories by different colors, shapes, sizes, etc.
  • Step 405 The electronic device determines whether the second input is received.
  • if the electronic device receives the second input, the electronic device executes step 407; if the electronic device does not receive the second input of the first user, the electronic device executes step 406.
  • the second input is used to instruct the electronic device to end the call service.
  • the electronic device ends the call service.
  • the second input may be: the user long presses the power button.
  • the second input includes, but is not limited to, the above methods.
  • the second input may be: unfolding or folding the electronic device.
  • the electronic device is a foldable mobile phone, and when the angle α formed by the first part of the electronic device and the second part of the electronic device is less than a predetermined threshold, the electronic device ends the call service.
  • the second input may be a signal sent by the other device to instruct the electronic device to close the call service.
  • Step 406 The electronic device judges whether the call ends.
  • if the call has ended, step 407 is executed; if the electronic device is still in the call state, step 403 is executed.
  • Step 407 The electronic device ends the call service.
  • when the first user or the second user hangs up, the first electronic device ends the call connection with the second electronic device.
  • the first electronic device determines that the call is over and ends the call service.
  • the electronic device displays an interface 504.
  • when the electronic device ends the call service, it stops acquiring the key information.
  • the method for processing call content can provide at least one piece of key information related to the call content by processing the call content in real time through a server.
  • the first user can obtain key information related to the content of the call while talking with the second user.
  • users can also quickly access web pages related to the call content by touching the information label.
  • the first display area 5031 for displaying the call interface may be smaller, and the second display area 5032 for displaying the call service interface may be larger.
  • the call interface may be displayed as an icon (for example, icon 701) floating.
  • the electronic device repeatedly executes step 403 and step 404, and the electronic device generates one label after another.
  • the first user conducts a video call with the second user.
  • the electronic device repeatedly performs step 403 and step 404 to sequentially generate tags A-1, B-1, A-2, B-2, B-3, A-3, B-4, and B-5.
  • A-1, A-2 and A-3 are related to what the first user said.
  • B-1, B-2, B-3, B-4, and B-5 are related to what the second user said.
  • the electronic device can arrange multiple tags in chronological order.
  • the new label (for example, label A-3) is located under the old label (for example, label B-3).
  • the electronic device can arrange the tags according to users.
  • the tags related to the second user's conversation (for example, tags B-1, B-2, and B-3) are arranged on the left;
  • the tags related to the first user's conversation (for example, tags A-1, A-2, and A-3) are arranged on the right.
  • the electronic device can scroll to display labels.
  • the old label disappears from above, and the new label appears from below, so that the user can view the labels related to the current conversation.
  • the electronic device can set the scrolling speed of the label according to the speech rate of the call, or the user can also specify the scrolling speed of the label.
  • the electronic device may stop scrolling in order to keep displaying one or more related labels.
  • alternatively, the electronic device may stop the scrolling display entirely.
  • the electronic device may also scroll quickly to display the tags related to the current conversation.
  • the electronic device may determine the arrangement direction and scroll direction of the tags according to the number of users participating in the call.
  • the electronic device sequentially generates tags E-1, C-1, D-1, E-2, and C-2.
  • C-1 and C-2 are related to the content said by the first user;
  • D-1 is related to the content said by the second user;
  • E-1 and E-2 are related to the content said by the third user.
  • the electronic device can display the labels of the first user in the upper third of the area, the labels of the second user in the middle third of the area, and the labels of the third user in the lower third of the area.
  • the new label is to the right of the old label.
  • the old label disappears from the left, and the new label appears from the right.
  • the new label can also be located to the left of the old label; the old label disappears from the right, and the new label appears from the left.
  • the label styles and display manners in the embodiments of the present application include but are not limited to those described in the above embodiment.
  • the label style can be a round bubble.
  • the electronic device can determine the display position of a tag. Exemplarily, as shown in FIG. 9, when "movie A" is mentioned multiple times in the call, the electronic device may display the tag related to movie A in the middle. For another example, the electronic device can determine the size of a tag. Exemplarily, as shown in FIG. 9, when "movie A" is mentioned more frequently in the call than "movie B", the size of the tag related to movie A determined by the electronic device is greater than the size of the tag related to movie B.
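The frequency-based sizing described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name, base size, and step constants are assumptions for the example.

```python
from collections import Counter

def layout_tags(keywords, base_size=14, step=4, max_size=28):
    """Derive a display size for each tag from how often its keyword was
    mentioned in the call: more frequent keywords get larger tags.
    Returns (keyword, size) pairs with the most frequent keyword first,
    e.g. so it can be placed in the middle of the view."""
    freq = Counter(keywords)
    layout = []
    for word, count in freq.most_common():
        size = min(base_size + (count - 1) * step, max_size)
        layout.append((word, size))
    return layout

# "movie A" is mentioned three times, so its tag is larger than "movie B"'s.
tags = layout_tags(["movie A", "movie B", "movie A", "ticket", "movie A"])
```

A real implementation would also map the sizes onto bubble shapes, colors, and transparency, as the surrounding text notes.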
  • the electronic device can also determine the display attributes such as the shape, transparency or color of the label.
  • in the foregoing, keywords, text data, and web links are taken as examples to illustrate the key information of the call content. It is understandable that, as shown in FIG. 10, the key information includes but is not limited to these.
  • FIG. 10 is a schematic diagram of a scene of a method for processing call content according to an embodiment of the application.
  • when the system language of the electronic device is a first language (for example, Chinese) and the call language is a second language (for example, English), the first server generates second data (for example, text data: "What's up?") according to the first data (for example, voice data: "What's up").
  • the first server may send a first processing request to the second server, the first processing request including the second data.
  • the first processing request may be used to instruct the second server to provide the first language translation of the second data (for example, the Chinese translation of "What's up?").
  • the second server sends third data (for example, text data: "How are you doing?") to the first server.
  • the first server sends fourth data to the electronic device, where the fourth data includes third data and second data.
  • according to the fourth data, the electronic device generates a call service interface (for example, displaying the text data "What's up?" and "How are you doing?").
  • the call content processing method in the embodiment of the present application includes, but is not limited to, providing data related to the second data by another server different from the first server.
  • the application scenario shown in Figure 10(b) is: Jack said: "Twinkle, twinkle, little star”.
  • the first server can provide sentence supplement services.
  • the first server may include a lyrics library.
  • the first server matches the second data (text data: "All the sky is little stars") with the lyrics in the lyrics library to obtain data related to the second data.
  • the data related to the second data may be: the subsequent lyrics of the lyrics involved in the text data, for example, "How I Wonder What You Are”.
  • the first server may include a poetry dictionary and/or a famous saying and sentence library to provide various sentence supplement services.
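The sentence-supplement service described above (matching recognized text against a lyrics, poetry, or famous-quote library and returning the following line) can be sketched as below; the function name and the corpus contents are hypothetical illustrations, not the patent's data.

```python
def supplement_sentence(text, corpus):
    """Look for a corpus line that matches the recognized text and return
    the line that follows it (the "subsequent lyrics"), or None if the
    text matches nothing in the library."""
    for lines in corpus:
        for i, line in enumerate(lines[:-1]):
            if text.strip().lower() == line.strip().lower():
                return lines[i + 1]
    return None

# A toy lyrics library; a poetry dictionary or quote library works the same way.
lyrics_library = [
    ["Twinkle, twinkle, little star", "How I wonder what you are"],
]
print(supplement_sentence("Twinkle, twinkle, little star", lyrics_library))
```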
  • the application scenario shown in Figure 10(c) is: Jack said "My major is mechanical major," but Jack mistakenly pronounced the character "械" (xiè) in "mechanical" as "jiè".
  • the first server can provide a pronunciation correction service, for example, sending the correct phonetic notation (xiè) of "械" to the electronic device.
  • the electronic device may also provide data related to the second data.
  • the application scenario shown in Figure 10(d) is: Tom asks "Are you free in the afternoon of April 3?"
  • the first server generates second data (for example, text data: "Are you free in the afternoon on April 3?”) based on the first data (for example, voice data: "Are you free in the afternoon on April 3?").
  • the first server sends the above-mentioned second data to the electronic device.
  • the internal memory 121 of the electronic device includes the user's schedule.
  • the electronic device acquires and displays data related to the second data according to the second data (for example, Jack's itinerary for the afternoon of April 3).
  • the foregoing takes as an example the case where the first data sent by the electronic device to the first server is voice data, and the first server processes the voice data to generate the second data.
  • alternatively, in the call content processing method of the embodiment of the present application, the electronic device may perform the conversion of voice data to text data (step 40331) and the extraction of keywords (step 40332), and send the keywords as the first data to the first server.
  • the electronic device sends the user's voice data and the voice data of other users to the first server.
  • the electronic device (such as the first electronic device) may send the voice data of the first user to the first server, and the electronic devices of other users (such as the second electronic device) may send the voice data of the second user to the first server.
  • the electronic device may record the webpage links of each webpage visited by the user.
  • the electronic device can start a browser application and jump to the web page displayed when the call service ends according to the recorded web page link.
  • FIG. 11 is a schematic diagram of a scene of a method for processing call content according to an embodiment of the application.
  • the user touches the first information label (for example, the information label "Book Ticket").
  • the first information tag is associated with the first web page (for example, the movie theater selection page under the booking page of movie A).
  • the electronic device accesses the first webpage through the browser kernel, and the electronic device displays the first webpage in a network view.
  • the first webpage may include one or more webpage tags. Similar to browsing a webpage in a browser application, the user can operate on the webpage tags in the webpage to access more webpages.
  • the first webpage includes a first webpage tag (e.g., the webpage tag "ABC Cinema"), and the first webpage tag is associated with a second webpage (e.g., the time selection page of ABC Cinema under the booking page of movie A).
  • the electronic device receives the user's fourth input to the first webpage.
  • the fourth input may be: touching the first webpage tag in the first webpage.
  • the electronic device accesses the second webpage through the browser kernel, and displays the second webpage in a webpage view.
  • the second webpage may also include one or more webpage tags, and the user can continue to operate on them to access the one or more webpages associated with those webpage tags.
  • the electronic device can record the webpage link of the webpage.
  • Table 2 the electronic device may store one or more historical access lists to record web links of one or more web pages visited by the user.
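The historical access list mentioned above can be sketched as a simple ordered record; the class name and URLs here are illustrative assumptions, not the patent's data structure.

```python
class HistoryList:
    """Record the webpage links visited during a call service, in order.
    The most recently recorded link identifies the webpage displayed
    when the call service ends."""

    def __init__(self):
        self._links = []

    def record(self, url):
        self._links.append(url)

    def latest(self):
        return self._links[-1] if self._links else None

history = HistoryList()
history.record("https://example.com/movie-a/booking/")             # first webpage
history.record("https://example.com/movie-a/booking/abc-cinema/")  # second webpage
```

After the call ends, `history.latest()` supplies the link used to restore the last-viewed webpage in the browser application.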
  • as shown in FIG. 11(d) and FIG. 11(f), when the call ends, the electronic device starts a browser application and displays the second webpage according to the most recently recorded webpage link of the second webpage.
  • the electronic device may display a notification message to remind the user that the electronic device is closing the call service, and ask the user whether to continue access.
  • the electronic device can start the browser application and jump to the webpage displayed at the end of the call.
  • the electronic device may not perform the above steps.
  • Figure 11(a) to Figure 11(f) illustrate another call content processing method provided in the embodiment of the application.
  • the browser application is launched after the call ends and jumps to the webpage displayed at the end of the call, so that the user can continue browsing the web coherently after the call ends, which improves the user experience.
  • the applications may include installation-free applications such as quick apps and mini programs.
  • the electronic device starts the application and jumps to the application interface corresponding to the webpage last visited by the user.
  • the method may further include:
  • Step 1301 the electronic device obtains the latest recorded webpage link.
  • the electronic device accesses the first webpage through the browser kernel, and the user can operate on the webpage tags in the webpage to access other webpages.
  • the newly recorded webpage link refers to the webpage link last visited by the user before ending the call service.
  • Step 1302 The electronic device determines whether an application related to the recently recorded webpage link is installed.
  • the electronic device can determine the related application according to the webpage link.
  • the application related to the webpage link may be a Taobao application.
  • if the electronic device has installed the application corresponding to the webpage, execute step 1304; if not, execute step 1303.
  • Step 1303 The electronic device starts the browser application, and jumps to the corresponding webpage according to the webpage link.
  • Step 1304 The electronic device sends the webpage link to the first server.
  • Step 1305 The first server receives the webpage link sent by the electronic device.
  • Step 1306 The first server sends a second processing request to the second server, where the second processing request includes the webpage link.
  • the second processing request is used to instruct the second server to return an application link corresponding to the webpage link.
  • Step 1307 The second server receives the second processing request sent by the first server.
  • Step 1308 In response to the second processing request, the second server sends an application link to the first server.
  • Step 1309 The first server receives the application link sent by the second server.
  • Step 1310 The first server sends the application link to the electronic device.
  • Step 1311. The electronic device receives the application link sent by the first server.
  • Step 1312. The electronic device starts the application, and jumps to the corresponding application interface according to the application link.
  • the electronic device can obtain the application link corresponding to the webpage link, and jump to the application interface corresponding to the webpage last visited by the user, so that the user can View related information through the app.
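The flow of steps 1301-1312 can be sketched as follows. This is a simplified illustration, not the patent's implementation: `resolve_app_link` stands in for the round trip through the first and second servers that maps a webpage link to an application link, and the app and domain names are assumptions.

```python
def app_for_link(url, installed_apps):
    """Step 1302: find an installed application related to the webpage link,
    here by matching the app's domain against the URL."""
    for app, domain in installed_apps.items():
        if domain in url:
            return app
    return None

def resume_after_call(latest_url, installed_apps, resolve_app_link):
    """Steps 1301-1312: after the call ends, open the native application for
    the last-visited webpage when it is installed, otherwise fall back to
    the browser with the recorded webpage link."""
    app = app_for_link(latest_url, installed_apps)
    if app is None:
        return ("browser", latest_url)           # step 1303
    app_link = resolve_app_link(latest_url)      # steps 1304-1311 (server round trip)
    return (app, app_link)                       # step 1312

installed = {"amazon": "amazon.com"}
resolve = lambda url: url.replace("https://www.amazon.com", "amazon:/")
```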
  • the electronic device may directly send the second processing request to the second server without going through the first server, and the second server may directly send the application link to the electronic device.
  • the electronic device may directly send a webpage link to the first server without going through the second server, and the first server may directly send the application link to the electronic device.
  • the first server stores a comparison table, and the comparison table includes a web page link and a link address of an application interface corresponding to the web page link, that is, an application link. The first server determines the application link corresponding to the webpage link according to the comparison table, and sends the application link to the electronic device.
  • the electronic device may also determine the corresponding application link according to the web page link.
  • the link in the third data includes a webpage link of a webpage and an application link of an application interface corresponding to the webpage link.
  • the electronic device may determine the application link corresponding to the recently recorded webpage link according to the recently recorded webpage link, the webpage link and the application link included in the third data.
  • the third data is: ⁇ "web": www.amazon.com/phones/huawei/p30-pro/; "app”: "amazon://phones/huawei/p30-pro/” ⁇ .
  • the electronic device compares the most recently recorded webpage link (for example: www.amazon.com/phones/huawei/p30-pro/spec/) with the webpage link in the third data (www.amazon.com/phones/huawei/p30-pro/), and appends the added field (spec/) as a suffix to the application link in the third data (amazon://phones/huawei/p30-pro/) to obtain the application link corresponding to the most recently recorded webpage link (amazon://phones/huawei/p30-pro/spec/). It should be noted that when this method is adopted, the webpage link and the application link need to maintain the same or similar suffix form.
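The suffix-transfer method just described can be sketched as below, using the example links from the text; the function name is an assumption for the illustration.

```python
def app_link_for(recorded_web_link, third_data):
    """Derive the app link for the most recently recorded webpage link by
    transferring the extra suffix from the known web link in the third data
    onto the known app link. Assumes the webpage link and application link
    keep the same suffix form, as the text requires."""
    base_web = third_data["web"]
    base_app = third_data["app"]
    if not recorded_web_link.startswith(base_web):
        return None  # the recorded link does not belong to this mapping
    suffix = recorded_web_link[len(base_web):]   # e.g. "spec/"
    return base_app + suffix

third_data = {
    "web": "www.amazon.com/phones/huawei/p30-pro/",
    "app": "amazon://phones/huawei/p30-pro/",
}
print(app_link_for("www.amazon.com/phones/huawei/p30-pro/spec/", third_data))
# amazon://phones/huawei/p30-pro/spec/
```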
  • the electronic device can start multiple corresponding applications and jump to the corresponding application interface. Or the electronic device can start a browser to display corresponding multiple webpage interfaces through multiple windows. Or the electronic device can also start the corresponding application and the browser to display the corresponding interface respectively.
  • an exemplary description will be given below in conjunction with the application scenario shown in FIG. 14.
  • the electronic device displays a call service interface 1410.
  • the user touches the first information tag (the information tag "book ticket"), and the first information tag is associated with the first web page (the movie theater selection page under the booking page of movie A).
  • the electronic device accesses the first webpage through the browser kernel, and the electronic device displays the webpage interface 1411 of the first webpage.
  • the electronic device receives the fourth input.
  • the fourth input may be: touching the first webpage tag in the first webpage.
  • in response to a touch operation on the first webpage tag (the webpage tag "ABC Cinema") in the first webpage, the electronic device accesses the second webpage (the time selection page under the booking page of movie A) through the browser kernel and displays the web interface 1412 of the second webpage.
  • the electronic device may also display a call service home button 1401, a return button 1402 and/or a delete button 1403.
  • the return button is used to instruct the electronic device to display the webpage visited by the last user.
  • when the electronic device displays the second webpage 1412 and the user touches the return button 1402, in response to the touch operation on the return button, the electronic device displays the webpage interface 1411 visited previously.
  • the delete button is used to instruct the electronic device to end the display of the web page interface and display the call service interface.
  • in response to a touch operation on the delete button 1403, the electronic device ends the display of the web interface and displays the call service interface in the second display area.
  • the call service home button 1401 is used to instruct the electronic device to display the call service interface.
  • in response to a touch operation on the call service home button 1401, the electronic device displays the call service interface 1410 together with the web interface 1412 in the second display area.
  • the electronic device displays the third information tag in the second display area.
  • at the end of the call, the second display area includes the second webpage 1412 (the time selection page under the booking page of movie A) and the fourth webpage 1413 (the purchase page of the Mate 20 Pro). Assuming the electronic device has installed the first application (Fandango) corresponding to the second webpage but has not installed the application (Amazon) corresponding to the fourth webpage, the electronic device starts the first application and jumps to the application interface 1414 corresponding to the second webpage, and starts the browser and jumps to the webpage 1415 corresponding to the fourth webpage.
  • the embodiment of the application discloses an electronic device, including: a display screen; a processor; a memory; one or more sensors; an application program; a computer program and a communication module.
  • the above devices can be connected through one or more communication buses.
  • the one or more computer programs are stored in the foregoing memory and configured to be executed by the one or more processors; the one or more computer programs include instructions, and the instructions may be used to execute the various steps in the foregoing embodiments.
  • the foregoing processor may specifically be the processor 110 shown in FIG. 1; the foregoing memory may specifically be the internal memory 121 and/or the external memory 120 shown in FIG. 1; the foregoing display screen may specifically be the display screen 194 shown in FIG. 1; the foregoing sensor may specifically be one or more sensors in the sensor module 180 shown in FIG. 1; and the foregoing communication module may be the mobile communication module 150 and/or the wireless communication module 160. The embodiment of the present application does not impose any restrictions on this.
  • the embodiment of the application also provides a graphical user interface (GUI), which is the graphical user interface displayed by the electronic device when performing the foregoing method.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium can be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).


Abstract

Embodiments of this application provide a call content processing method and an electronic device. The method includes: when the electronic device is in a call-connected state, the electronic device receives a first input; in response to the first input, the electronic device obtains at least one piece of key information of the call content; and the electronic device displays a first interface according to the key information, where the first interface includes a tag corresponding to the key information. The embodiments of this application can process call content and provide at least one piece of key information related to the call content.

Description

A call content processing method and electronic device
This application claims priority to the Chinese patent application No. 201910416825.4, filed with the China National Intellectual Property Administration on May 20, 2019 and entitled "Call content processing method and electronic device", which is incorporated herein by reference in its entirety.
Technical Field
The embodiments of this application relate to communication technologies, and in particular, to a call content processing method and an electronic device.
Background
At present, with the development of technology, information is becoming increasingly important in people's lives, and more and more electronic devices provide information services to supply users with the information they need.
As shown in FIG. 1, taking a mobile phone as an example, one prior-art implementation of an information service is as follows:
As shown in FIG. 1(a), the electronic device displays a first interface 101. The first interface 101 is a text message dialogue interface, which may include text and images. The user can perform a multi-touch operation on any area in the first interface 101.
As shown in FIG. 1(b), in response to the user's multi-touch operation on a certain area (for example, a first area), the electronic device performs image processing on that area, extracts the text in it (for example, "heard movie A is good"), and splits the text into multiple fields (for example, "heard", "movie A", and "good"). The electronic device then displays a second interface 102. The second interface 102 includes multiple tags 1021, each corresponding to one field. After selecting one or more field tags (for example, the field tag "movie A"), the user can tap the search button 1022.
As shown in FIG. 1(c), in response to the user tapping the search button 1022, the electronic device launches a browser application and displays an interface 103, which is the browser's application interface. The electronic device then automatically fills the field of the selected field tag (for example, movie A) into the search box and performs a search, thereby obtaining information related to that field.
The prior art recognizes displayed content to provide information related to it; users expect richer information services.
Summary
The embodiments of this application provide a call content processing method and an electronic device, which can process call content to provide information related to the call content.
In a first aspect, an embodiment of this application provides a call content processing method, including: when the electronic device is in a call-connected state, the electronic device receives a first input; in response to the first input, the electronic device obtains at least one piece of key information of the call content; and the electronic device displays a first interface according to the key information, where the first interface includes a tag corresponding to the key information.
The call content may include first voice data, second voice data, or both the first voice data and the second voice data. The first voice data is voice data generated by the electronic device by capturing external sound, and the second voice data is voice data received by the electronic device from another electronic device connected to it in the call. The key information may include text data of part of the call content, keywords in the call content, and webpage links related to those keywords. A tag is any one or more of a text tag, a keyword tag, and an information tag: a text tag corresponds to text data, a keyword tag corresponds to a keyword, and an information tag corresponds to a webpage link. The first input may be the electronic device being folded or the electronic device being unfolded.
The key information can be obtained in different ways. In one possible way, the electronic device sends the call content to a first server; the first server receives the call content sent by the first electronic device and converts it into text data; the first server then extracts keywords from the text data and sends the keywords to a second server to obtain webpage links related to the keywords; finally, the first server sends the webpage links, the text data, and/or the keywords to the electronic device.
With the call content processing method provided in the embodiments of this application, the user can obtain key information related to the call content while on the call.
In a possible design, the electronic device receives a third input on an information tag, and in response to the third input, displays the webpage associated with the information tag. The electronic device can access the webpage associated with the information tag through a browser kernel and display it in a web view (Webview). The information tag thus gives the user quick access to webpages related to the call content. Similar to browsing in a browser application, the user can operate on the webpage to access other webpages: the electronic device can receive a fourth input on the webpage and, in response to the fourth input, display another webpage.
In a possible design, the electronic device can record the webpage links of the webpages visited by the user. When the call ends, using the recorded webpage links, the electronic device can jump to the webpage displayed at the end of the call, so that the user can continue browsing through a browser application after the call ends, improving the user experience.
In another possible design, if the electronic device has a related application installed, it can jump to the corresponding application interface when the call ends. This design specifically includes: the electronic device obtains the most recently recorded webpage link; determines the application related to that link; if the application is installed, obtains the application link corresponding to the webpage link, starts the related application, and displays the corresponding application interface according to the application link; if the application is not installed, starts the browser application and displays the corresponding webpage according to the webpage link.
In a second aspect, an embodiment of this application provides an electronic device including a display screen, a processor, and a memory for storing a computer program, where the computer program includes instructions that, when executed by the processor, cause the electronic device to perform the method of any implementation of the first aspect.
In a third aspect, this application provides a computer storage medium including computer instructions that, when run on an electronic device, cause the electronic device to perform the method of any implementation of the first aspect.
In a fourth aspect, this application provides a computer program product that, when run on an electronic device, causes the electronic device to perform the method of any implementation of the first aspect.
In a fifth aspect, this application provides a graphical user interface, specifically the graphical user interface displayed by the electronic device when performing the method of any implementation of the first aspect.
It can be understood that the electronic device of the second aspect, the computer storage medium of the third aspect, the computer program product of the fourth aspect, and the graphical user interface of the fifth aspect are all used to perform the corresponding methods provided above; therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods provided above, which are not repeated here.
Brief Description of Drawings
FIG. 1 is a schematic diagram of a scene of a display content processing method provided in the prior art;
FIG. 2 is a schematic structural diagram of an electronic device according to an embodiment of this application;
FIG. 3 is a block diagram of the software structure of an electronic device according to an embodiment of this application;
FIG. 4 is a schematic flowchart of call content processing according to an embodiment of this application;
FIG. 5 is a schematic diagram of a scene of another call content processing method according to an embodiment of this application;
FIG. 6 is a schematic structural diagram of another electronic device according to an embodiment of this application;
FIG. 7 is a schematic diagram of a scene of another call content processing method according to an embodiment of this application;
FIG. 8 is a schematic diagram of a scene of another call content processing method according to an embodiment of this application;
FIG. 9 is a schematic diagram of a scene of another call content processing method according to an embodiment of this application;
FIG. 10 is a schematic diagram of a scene of another call content processing method according to an embodiment of this application;
FIG. 11 is a schematic diagram of a scene of another call content processing method according to an embodiment of this application;
FIG. 12 is a schematic diagram of a scene of another call content processing method according to an embodiment of this application;
FIG. 13 is a schematic flowchart of another call content processing method according to an embodiment of this application;
FIG. 14 is a schematic diagram of a scene of another call content processing method according to an embodiment of this application.
Detailed Description
It should be noted that descriptions such as "first" and "second" in the embodiments of this application are used to distinguish different messages, devices, modules, and so on; they do not indicate an order, nor do they limit "first" and "second" to being different types.
Unless otherwise specified, "user" in the embodiments of this application refers to the user of the electronic device.
The term "A and/or B" in the embodiments of this application merely describes an association relationship between associated objects, indicating that three relationships may exist: A alone, both A and B, and B alone. In addition, the character "/" in the embodiments of this application generally indicates an "or" relationship between the associated objects before and after it.
Some procedures described in the embodiments of this application include multiple operations appearing in a specific order, but it should be clearly understood that these operations may be performed out of the order in which they appear herein or in parallel. The operation numbers, such as 101 and 102, are merely used to distinguish different operations, and the numbers themselves do not represent any execution order. In addition, these procedures may include more or fewer operations, and these operations may be performed in order or in parallel.
The call content processing method provided in the embodiments of this application can be applied to an electronic device. Exemplarily, the electronic device may be a mobile phone, a tablet personal computer, a laptop computer, a digital camera, a personal digital assistant (PDA), a navigation device, a mobile Internet device (MID), or a wearable device.
FIG. 2 shows a schematic structural diagram of the electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in this embodiment of this application does not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or use a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent devices or integrated in one or more processors.
The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory can hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instructions or data again, it can call them directly from the memory, avoiding repeated access, reducing the waiting time of the processor 110, and thus improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses and may be separately coupled to the touch sensor 180K, a charger, a flash, the camera 193, and so on through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to implement the touch function of the electronic device 100.
The I2S interface can be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses and may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit audio signals to the wireless communication module 160 through the I2S interface to implement answering calls through a Bluetooth headset.
The PCM interface can also be used for audio communication, sampling, quantizing, and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus that converts the data to be transmitted between serial and parallel communication. In some embodiments, the UART interface is typically used to connect the processor 110 to the wireless communication module 160. For example, the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 170 may transmit audio signals to the wireless communication module 160 through the UART interface to implement playing music through a Bluetooth headset.
The MIPI interface can be used to connect the processor 110 to peripheral devices such as the display screen 194 and the camera 193. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like. In some embodiments, the processor 110 communicates with the camera 193 through the CSI interface to implement the shooting function of the electronic device 100, and communicates with the display screen 194 through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface can be configured by software, either as a control signal or as a data signal. In some embodiments, the GPIO interface may be used to connect the processor 110 to the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and so on. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 conforms to the USB standard specification and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, to transfer data between the electronic device 100 and peripheral devices, or to connect headphones to play audio through them. This interface may also be used to connect other electronic devices, such as AR devices.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment of this application are merely illustrative and do not constitute a structural limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may also adopt interface connection modes different from those in the foregoing embodiment, or a combination of multiple interface connection modes.
The charging management module 140 is configured to receive charging input from a charger, which may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive the charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive wireless charging input through the wireless charging coil of the electronic device 100. While charging the battery 142, the charging management module 140 may also supply power to the electronic device through the power management module 141.
The power management module 141 is configured to connect the battery 142 and the charging management module 140 to the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, the wireless communication module 160, and so on. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, and battery health (leakage, impedance). In some other embodiments, the power management module 141 may also be disposed in the processor 110. In still other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and so on.
The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 can be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization; for example, the antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antennas may be used in combination with tuning switches.
The mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and so on. The mobile communication module 150 can receive electromagnetic waves via the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 can also amplify signals modulated by the modem processor and convert them into electromagnetic waves for radiation via the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 may be disposed in the same device as at least some modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator modulates the low-frequency baseband signal to be sent into a medium-high-frequency signal. The demodulator demodulates the received electromagnetic wave signal into a low-frequency baseband signal and then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor, which outputs a sound signal through an audio device (not limited to the speaker 170A and the receiver 170B) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and disposed in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), the global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 can also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves for radiation via the antenna 2.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 100 implements the display function through the GPU, the display screen 194, the application processor, and so on. The GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor, and performs mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel, which may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light emitting diode (AMOLED), a flex light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light emitting diodes (QLED), and so on. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 can implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and so on.
The ISP is used to process data fed back by the camera 193. For example, when taking a photo, the shutter opens, light is transmitted through the lens to the camera photosensitive element, the light signal is converted into an electrical signal, and the camera photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithm optimization on the image's noise, brightness, and skin tone, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be disposed in the camera 193.
The camera 193 is used to capture still images or videos. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal and then passes the electrical signal to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency-point energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so the electronic device 100 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example, the transmission mode between neurons in the human brain, it quickly processes input information and can also continuously learn by itself. Applications such as intelligent cognition of the electronic device 100, for example, image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, saving files such as music and videos in the external memory card.
The internal memory 121 may be used to store computer-executable program code, where the executable program code includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system and applications required by at least one function (such as a sound playback function and an image playback function). The data storage area may store data created during use of the electronic device 100 (such as audio data and a phone book). In addition, the internal memory 121 may include a high-speed random access memory and may also include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). By running instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor, the processor 110 performs the various functional applications and data processing of the electronic device 100.
The electronic device 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and so on.
The audio module 170 is used to convert digital audio information into an analog audio signal output and to convert an analog audio input into a digital audio signal. The audio module 170 can also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110. In other embodiments, the microphone 170C converts a captured sound signal into an electrical signal, which is received by the audio module 170 and converted into an audio signal. In still other embodiments, the audio module can convert an audio signal into an electrical signal, which is received by the speaker 170A and converted into a sound signal for output.
The speaker 170A, also called the "horn", is used to convert an audio electrical signal into a sound signal. The electronic device 100 can be used to listen to music or hands-free calls through the speaker 170A.
The receiver 170B, also called the "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 100 answers a call or a voice message, the voice can be heard by bringing the receiver 170B close to the ear.
The microphone 170C, also called the "mic" or "mouthpiece", is used to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which besides capturing sound signals can also implement a noise reduction function. In still other embodiments, the electronic device 100 may be provided with three, four, or more microphones 170C to capture sound signals, reduce noise, identify sound sources, implement directional recording functions, and so on.
The headset jack 170D is used to connect wired headphones. The headset jack 170D may be the USB interface 130, or a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense pressure signals and can convert pressure signals into electrical signals. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates of conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the intensity of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation via the pressure sensor 180A, and can also calculate the touch position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation with an intensity below a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation with an intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
The gyroscope sensor 180B can be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 around three axes (that is, the x, y, and z axes) can be determined through the gyroscope sensor 180B. The gyroscope sensor 180B can be used for image stabilization during shooting. Exemplarily, when the shutter is pressed, the gyroscope sensor 180B detects the angle at which the electronic device 100 shakes, calculates from the angle the distance the lens module needs to compensate, and lets the lens counteract the shake of the electronic device 100 through reverse motion to implement image stabilization. The gyroscope sensor 180B can also be used for navigation and somatosensory gaming scenarios.
The barometric pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 100 calculates altitude from the pressure value measured by the barometric pressure sensor 180C to assist positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 can use the magnetic sensor 180D to detect the opening and closing of a flip leather case. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 can detect the opening and closing of the flip according to the magnetic sensor 180D, and then set features such as automatic unlocking on flip-open based on the detected opening/closing state of the leather case or the flip.
The acceleration sensor 180E can detect the magnitude of acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, it can detect the magnitude and direction of gravity. It can also be used to identify the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and similar applications.
The distance sensor 180F is used to measure distance. The electronic device 100 can measure distance by infrared or laser. In some embodiments, in a shooting scene, the electronic device 100 can use the distance sensor 180F to measure distance to implement fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 100 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 can determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear during a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G can also be used for automatic unlocking and screen locking in leather-case mode and pocket mode.
The ambient light sensor 180L is used to perceive ambient light brightness. The electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness. The ambient light sensor 180L can also be used to automatically adjust the white balance when taking photos, and can cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, to prevent accidental touches.
The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint photographing, fingerprint call answering, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing policy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J to lower power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown of the electronic device 100 caused by low temperature. In still other embodiments, when the temperature is below yet another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch screen". The touch sensor 180K is used to detect touch operations acting on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation can be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a position different from that of the display screen 194.
The bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone of the human vocal part. The bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, forming a bone conduction headset. The audio module 170 can parse out a voice signal based on the vibration signal of the vocal-part vibrating bone acquired by the bone conduction sensor 180M to implement a voice function. The application processor can parse heart rate information based on the blood pressure pulse signal acquired by the bone conduction sensor 180M to implement a heart rate detection function.
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备100通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备100采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备100中,不能和电子设备100分离。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的Android系统为例,示例性说明电子设备100的软件结构。
图3是本申请实施例的电子设备100的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图3所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图3所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备100的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
下面结合捕获拍照场景,示例性说明电子设备100软件以及硬件的工作流程。
当触摸传感器180K接收到触摸操作,相应的硬件中断被发给内核层。内核层将触摸操作加工成原始输入事件(包括触摸坐标,触摸操作的时间戳等信息)。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件,识别该输入事件所对应的控件。以该触摸操作是触摸单击操作,该单击操作所对应的控件为相机应用图标的控件为例,相机应用调用应用框架层的接口,启动相机应用,进而通过调用内核层启动摄像头驱动,通过摄像头193捕获静态图像或视频。
以下图4到图14所示的实施例提供的方法应用于前述各实施例提供的电子设备中。
图4为本申请实施例提供的一种通话内容处理方法的流程示意图。如图4所示,所述方法包括:
步骤401、当电子设备处于通话连接状态时,电子设备接收第一输入。
其中,电子设备处于通话连接状态是指所述电子设备与其他电子设备已建立通话连接。所述通话可以包括:语音通话或视频通话。示例性的,如图5(a)所示,第一用户使用电子设备(如,第一电子设备)与第二用户所使用的其他电子设备(如,第二电子设备)进行视频通话。此时,电子设备处于与其他电子设备的通话连接状态。电子设备显示通话界面(如,界面501)。所述通话界面可以包括用于启动通话服务的通话服务启动按钮(如,按钮5011)。电子设备接收用户的第一输入。所述第一输入为:所述通话服务启动按钮(如,按钮5011)被触摸。
需要说明的是,所述第一输入包括但不限于上述方式。例如,当所述电子设备与其他设备(例如,蓝牙耳机,智能手表)连接时,所述第一输入可以为:其他设备发送的用于指示电子设备启动通话服务的信号。示例性的,当用户长按蓝牙耳机的电源键时,蓝牙耳机向电子设备发送信号指示电子设备启动通话服务。又如,当所述电子设备可折叠时,所述第一输入可以为:展开或折叠所述电子设备。示例性的,如图6所示,电子设备为折叠式手机。当用户展开电子设备,使得电子设备的第一部分与电子设备的第二部分所呈的夹角ɑ大于预定的阈值时,电子设备启动通话服务。
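上述通过夹角阈值判断启动通话服务、以及后文结束通话服务的折叠判断,可以合并为如下示意代码(阈值数值为假设;中间区间保持原状态,以避免夹角在阈值附近抖动时反复切换):

```python
UNFOLD_THRESHOLD = 150.0  # 展开阈值(度),示例值
FOLD_THRESHOLD = 60.0     # 折叠阈值(度),示例值

def on_angle_changed(angle: float, service_running: bool) -> bool:
    """根据第一部分与第二部分的夹角决定通话服务状态:
    夹角大于展开阈值 -> 启动通话服务;
    夹角小于折叠阈值 -> 结束通话服务;
    介于两者之间 -> 保持原状态。"""
    if angle > UNFOLD_THRESHOLD:
        return True
    if angle < FOLD_THRESHOLD:
        return False
    return service_running
```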
步骤402、响应于所述第一输入,电子设备启动通话服务。
其中,通话服务是指通过处理通话内容,提供用户所需的至少一个关键信息的服务。所述通话服务包括但不限于:生僻词释义,语句补充,语音纠错,语法纠错,日程管理,翻译,天气查询,导航。
示例性的,如图5(b)所示,响应于所述第一输入(按钮5011被触摸),电子设备启动通话服务。
可选的,如图5(b)所示,在开始启动通话服务时或在启动通话服务的过程中,电子设备可以显示用于通知所述电子设备正在启动通话服务的通知消息5021。
步骤403、电子设备获取通话服务所提供的至少一个关键信息。
其中,步骤403具体可以包括步骤4031-4039:
步骤4031、电子设备向第一服务器发送第一数据。
示例性的,所述第一数据可以包括:第一语音数据和/或第二语音数据。
示例性的,如图5(c)所示的应用场景为:电子设备启动通话服务后,第二用户问道“最近有什么好看的电影吗”;第一用户答道:“听说电影A(某电影的名称)不错”。
电子设备(如,第一电子设备)的移动通信模块150接收其他设备(如,第二电子设备)发送的第二语音数据,该第二语音数据由音频模块170转换为电信号后,又由扬声器170A转换为声音信号输出(如,“最近有什么好看的电影吗”)。电子设备可以向第一服务器发送第二语音数据。
第一用户靠近麦克风170C发声,第一用户的声音信号(如,“听说电影A不错”)由麦克风170C转换为电信号后,又由音频模块170转换为第一语音数据。也就是说,所述第一语音数据为电子设备通过采集外部声音生成的语音数据。电子设备向第一服务器发送第一语音数据。
可以理解的是,第一语音数据对应于第一用户说的内容;第二语音数据对应于第二用户说的内容。也就是说,电子设备可以向第一服务器发送第一用户与第二用户的通话内容,也就是,第一电子设备与第二电子设备之间的通话内容。
步骤4032、第一服务器接收电子设备发送的第一数据。
步骤4033、第一服务器根据第一数据生成第二数据。
示例性的,所述根据第一数据生成第二数据可以包括以下步骤40331-步骤40334:
步骤40331、第一服务器将第一数据中的语音数据转化为文本数据。
示例性的,如图5(c)所示,电子设备将第二语音数据发送给第一服务器,第一服务器将该语音数据转化为文本数据,得到第二文本数据为“有什么好看的电影?”。类似地,电子设备将第一语音数据发送给第一服务器,第一服务器将该语音数据转化为文本数据,得到第一文本数据为:“听说电影A不错!”。
步骤40332、第一服务器提取文本数据中的关键词。
示例性的,如图5(c)所示,当文本数据为:“有什么好看的电影?”时,第一服务器提取的关键词可以为:“电影”。当文本数据为:“听说电影A不错!”时,第一服务器提取的关键词可以为:“电影A”。
需要说明的是,本申请实施例不限定语音识别、提取关键词的方法。本领域技术人员可以采用现有技术中的方法提取关键词。
步骤40333、第一服务器确定第二服务器。
其中,所述第二服务器用于提供与所述提取的关键词相关的关键信息。
示例性的,第一服务器可以根据提取的关键词确定第二服务器。例如,第一服务器可以存储一个或多个列表。如表1所示,所述列表包括一个或多个关键词和与该关键词对应的一个或多个服务器的名称。第一服务器将提取的关键词与列表中的关键词进行匹配,确定第二服务器为与提取的关键词相匹配的列表中的关键词所对应的服务器。第二服务器可以为一个服务器,也可以为多个服务器。
示例性的,如图5(c)所示,第一服务器提取的关键词为“电影A(一电影名称)”时,第一服务器确定第二服务器为“Fandango”。
需要说明的是,当列表中的关键词对应多个服务器时,第一服务器可以根据用户的设定或用户的使用习惯确定第二服务器。
示例性的,假设第一服务器提取的关键词为“买”、“手机”以及“华为P30Pro”。根据表1所示的服务器列表可知,与上述关键词对应的服务器为“亚马逊”和“淘宝”。假设用户使用“淘宝”应用的频率高于使用“亚马逊”应用的频率,则第一服务器可以确定第二服务器为“淘宝”。
可选的,服务器列表还包括语义类型。第一服务器可以根据所述语义类型确定与所述语义类型对应的服务器。
表1
序号 语义类型 关键词 服务器名称
1 购物类 “买”、“价格”、商品名 淘宝、亚马逊
2 视频类 视频名称 Youtube、腾讯视频
3 文学类 诗词、成语、作者名 华为
4 交通类 地理名称 谷歌地图、百度地图
5 订票类 “订”、“预约”、电影名称 Fandango
6 新闻类 “新闻”、“报道” BBC新闻
7 旅游类 酒店名、“假期” Booking
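步骤40333中"将提取的关键词与表1匹配、并按用户使用习惯在多个候选服务器之间取舍"的过程,可以用如下Python代码示意(表内容节选自表1,字典结构与取舍策略均为便于说明的假设):

```python
# 表1的简化版本:关键词 -> 候选服务器列表
SERVER_TABLE = {
    "买": ["淘宝", "亚马逊"],
    "价格": ["淘宝", "亚马逊"],
    "电影名称": ["Fandango"],
}

def pick_server(keywords, table, usage_freq):
    """汇总所有与关键词匹配的候选服务器,
    并返回其中用户使用频率最高的一个;无匹配时返回None。
    usage_freq: 服务器名 -> 使用频率。"""
    candidates = []
    for kw in keywords:
        candidates.extend(table.get(kw, []))
    if not candidates:
        return None
    return max(set(candidates), key=lambda s: usage_freq.get(s, 0))
```

例如,关键词含"买"且用户更常使用淘宝时,第二服务器被确定为"淘宝"。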
步骤40334、第一服务器生成第二数据。
所述第二数据可以包括关键词。所述第二数据可以为JavaScript对象表示法(JavaScript Object Notation,JSON)格式的数据。JSON是JavaScript的一个内置的语言特征,提供了一种简单的描述复杂对象的方法。所述第二数据的形式可以根据第二服务器的不同而不同。
示例性的,假设第一服务器提取的关键词为“买”、“手机”以及“华为P30Pro”,则第二数据可以为:information:{“type”:“mobile phone”;name:“Huawei P30Pro”}。
或者,如图5(c)所示,当关键词为“电影”和“电影A”时,第二数据可以为:information:{name:“movie A”}。
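步骤40334中第二数据的构造可以示意如下(函数名与字段组织方式为示例性假设,仅用于说明如何由关键词生成JSON形式的数据):

```python
def build_second_data(name, keyword_type=None):
    """由关键词构造第二数据:
    仅有名称时为 {"information": {"name": ...}};
    同时提取出类型时为 {"information": {"type": ..., "name": ...}}。"""
    if keyword_type:
        info = {"type": keyword_type, "name": name}
    else:
        info = {"name": name}
    return {"information": info}
```

例如,关键词为"电影A"时得到{"information": {"name": "movie A"}}形式的数据。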
步骤4034、第一服务器向第二服务器发送第一处理请求。其中,所述第一处理请求包括所述第二数据。所述第一处理请求用于指示第二服务器提供与所述第二数据相关的信息。
步骤4035、第二服务器接收第一服务器发送的第一处理请求。
步骤4036、响应于所述第一处理请求,第二服务器向第一服务器发送第三数据。
所述第三数据可以包括链接,如统一资源定位符(Uniform Resource Locator,URL)等。通过所述链接,电子设备可以获取与关键词相关的信息。所述信息可以包括:商品信息、剧情介绍、影评、诗词全文、地图、歌手介绍、新闻消息、酒店排行榜等。
示例性的,假设第二服务器接收的第一处理请求中包括的第二数据为:information:{“type”:“mobile phone”;name:“Huawei P30Pro”},则第三数据可以包括:淘宝网上华为P30Pro手机的介绍页的网页链接,如{“web”:www.taobao.com/phones/huawei/p30-pro/}。也就是说,当电子设备通过该网页链接访问网页时,可以获取与Huawei P30Pro相关的信息。
示例性的,如图5(c)所示,第二服务器接收的第一处理请求中包括的第二数据为:information:{name:“movie A”}时,第三数据可以包括:Fandango网页上Movie A的内容介绍页的网页链接,如{“web”:www.fandango.com/movie-a/movie-overview}。也就是说,当电子设备通过该网页链接访问网页时,可以获取与Movie A相关的信息。
步骤4037、第一服务器接收第二服务器发送的第三数据。
步骤4038、第一服务器向电子设备发送第四数据。
其中,所述第四数据可以包括:关键词,文本数据和/或链接。可选的,所述第四数据还可以包括:第二服务器的名称。
示例性的,第四数据可以为:
(原文此处以图片(PCTCN2020090956-appb-000001)形式给出第四数据的示例)
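结合步骤4038对第四数据字段(关键词,文本数据,链接,第二服务器的名称)的描述,原文以图片形式给出的第四数据大致可以是如下形式(以下字段名与取值均为根据上下文构造的假设性示例,并非原图内容):

```python
# 第四数据的一个假设性示例:包含文本数据、关键词、网页链接与服务器名称
fourth_data = {
    "text": "听说电影A不错!",
    "keywords": ["电影A"],
    "information": {
        "server": "Fandango",
        "web": "www.fandango.com/movie-a/movie-overview",
    },
}
```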
步骤4039、电子设备接收第一服务器发送的第四数据。
综上所述,电子设备可以获取第一电子设备与第二电子设备之间的通话内容的至少一个关键信息。可以理解的是,此时,通话服务所提供的至少一个关键信息即为第四数据。
步骤404、电子设备根据所述至少一个关键信息显示通话服务界面。
示例性的,如图5(c)所示,电子设备显示界面503。所述界面503包括通话界面(如,界面5036)和通话服务界面(如,界面5037)。电子设备在显示区域5031显示通话界面,在显示区域5032中显示通话服务界面。电子设备可以根据通话服务所提供的至少一个关键信息生成文本标签,关键词标签和/或信息标签。所述通话服务界面可以包括文本标签(如,标签5033“最近有什么好看的电影吗?”),关键词标签(如,标签5034“电影A”)和/或信息标签(如,标签5035“电影介绍”、“订座”)。其中,文本标签用于显示文本数据。关键词标签用于显示关键词。信息标签与链接相关联。所述信息标签可以位于与之相关的文本标签或关键词标签附近。
假设信息标签与网页链接相关联。电子设备可以接收用户对所述信息标签的第三输入。所述第三输入可以为:用户触摸所述信息标签。响应于对所述信息标签的第三输入,电子设备可以通过浏览器内核访问与所述信息标签相关联的网页。电子设备可以以网络视图(Webview)显示该网页(如,电影A的介绍页,电影A的订票页)。
或者,响应于对所述信息标签的第三输入,电子设备可以启动浏览器应用访问与所述信息标签相关联的网页。电子设备可以在第二显示区域显示所述浏览器应用的应用界面。
尽管图中未示出,所述通话服务界面还可以包括网站标签。所述网站标签用于表示网页链接所对应的服务器,如淘宝、亚马逊等。所述网站标签可以位于与之相关的信息标签的附近。
可选的,电子设备可以通过不同颜色,形状,大小等区分显示不同类别的标签。
步骤405、电子设备判断是否接收到第二输入。
如果电子设备接收到第二输入,则电子设备执行步骤407;如果电子设备未接收到第二输入,则电子设备执行步骤406。
其中,所述第二输入用于指示电子设备结束通话服务。电子设备响应于所述第二输入,结束所述通话服务。示例性的,所述第二输入可以为:用户长按电源键。
需要说明的是,所述第二输入包括但不限于上述方式。例如,所述第二输入可以为:展开或折叠所述电子设备。示例性的,如图6所示,电子设备为折叠式手机,当电子设备的第一部分与电子设备的第二部分所呈的夹角ɑ小于预定的阈值时,电子设备结束通话服务。又如,当所述电子设备与其他设备(例如,蓝牙耳机,智能手表)连接时,所述第二输入可以为:其他设备发送的用于指示电子设备关闭通话服务的信号。
步骤406、电子设备判断通话是否结束。
如果通话结束,则执行步骤407;如果电子设备仍处于通话状态,则执行步骤403。
步骤407、电子设备结束所述通话服务。
如图5(d)所示,当第一用户或第二用户挂断电话时,所述第一电子设备结束与所述第二电子设备的通话连接。第一电子设备判断通话结束,结束所述通话服务。电子设备显示界面504。
可以理解的是,电子设备结束所述通话服务后,电子设备结束所述关键信息的获取。
综上所述,本申请实施例提供的一种通话内容处理方法,通过服务器实时地处理通话内容,可以提供与通话内容相关的至少一个关键信息。第一用户可以一边与第二用户通话,一边获取与通话内容相关的关键信息。必要时,用户还可以通过触摸信息标签,快速访问与通话内容相关的网页。
可选的,如图7(a)所示,当第一用户与第二用户语音通话时,用于显示通话界面的第一显示区域5031可以较小,用于显示通话服务界面的第二显示区域5032可以较大。或者,如图7(b)所示,通话界面可以作为一个图标(如,图标701)悬浮显示。
可以理解的是,随着通话的进行电子设备反复执行步骤403和步骤404,电子设备生成一个又一个的标签。示例性的,第一用户与第二用户进行视频通话。电子设备反复执行步骤403和步骤404,依次生成标签A-1、B-1、A-2、B-2、B-3、A-3、B-4和B-5。其中,A-1、A-2和A-3与第一用户说的内容相关。B-1、B-2、B-3、B-4和B-5与第二用户说的内容相关。
可选的,电子设备可以按照时间顺序排列多个标签。示例性的,如图8(a)所示,新标签(如,标签A-3)位于旧标签(如,标签B-3)的下方。
可选的,电子设备可以按照用户排列所述标签。示例性的,如图8(a)所示,将与第二用户的对话相关的标签(如,标签B-1、B-2和B-3)排列在左侧;将与第一用户的对话相关的标签(如,标签A-1、A-2和A-3)排列在右侧。
可选的,电子设备可以滚动显示标签。示例性的,如图8(a)至图8(b)所示,旧标签从上方消失,新标签从下方出现,以便用户查看与当前对话相关的标签。
电子设备可以根据通话的语速设定标签的滚动速度,或者也可以由用户指定标签的滚动速度。可选的,当手指悬浮接近某标签时,电子设备可以停止滚动显示该标签以及与之相关的一个或多个标签。或者,当手指悬浮接近第二显示区域时,电子设备可以停止滚动显示。当手指移开时,电子设备快速滚动以显示与当前对话相关的标签。
可选的,电子设备可以根据参与通话的用户数量确定标签的排列方向和滚动方向。
如图8(a)至图8(b)所示,当只有两位用户(如,第一用户和第二用户)参与通话时,电子设备左右排列不同用户的标签,上下滚动显示标签;区别于此,如图8(c)和图8(d)所示,当三位以上用户(如,第一用户、第二用户以及第三用户)参与通话时,电子设备可以上下排列不同用户的标签,左右滚动显示标签。
具体地,电子设备依次生成标签E-1、C-1、D-1、E-2和C-2。其中,C-1和C-2与第一用户说的内容相关;D-1与第二用户说的内容相关;E-1和E-2与第三用户说的内容相关。
如图8(c)和8(d)所示,电子设备可以将第一用户的标签显示在上三分之一区域,第二用户的标签显示在中间三分之一区域,第三用户的标签显示在下三分之一的区域。新标签位于旧标签的右侧。旧标签从左侧消失,新标签从右侧出现。当然,新标签也可以位于旧标签的左侧;旧标签从右侧消失,新标签从左侧出现。
可以理解的是,本申请实施例中标签的样式,标签的显示包括但不限于上述实施例中所描述的方式。例如,如图9所示,标签的样式可以是圆形气泡。
电子设备可以确定标签的显示位置。示例性的,如图9所示,当在本次通话中多次提及“电影A”时,电子设备可以将与电影A相关的标签显示在中部。又如,电子设备可以确定标签的大小。示例性的,如图9所示,当在本次通话中提及“电影A”的频率高于提及“电影B”的频率时,电子设备确定的与电影A相关的标签的大小大于与电影B相关的标签的大小。
可以理解的是,电子设备还可以确定标签的形状,透明度或颜色等显示属性。
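上述"按提及频率确定标签大小"的思路可以示意如下(字号基数、步长与上限均为示例值,映射关系为假设):

```python
def label_size(mention_count, base=12, step=4, max_size=40):
    """提及次数越多,标签字号越大,并设置上限防止标签过大。"""
    return min(base + step * mention_count, max_size)
```

例如,通话中"电影A"被提及的次数多于"电影B"时,其标签尺寸相应更大。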
需要说明的是,上述实施例中以关键词,文本数据以及网页链接为例对通话内容的关键信息进行了说明,可以理解的是,如图10所示,所述关键信息包括但不限于此。
图10为本申请实施例提供的一种通话内容处理方法的场景示意图。
如图10(a)所示,当电子设备的系统语言为第一语言(例如,中文)而通话语言为第二语言(例如,英文)时,第一服务器根据所述第一数据(例如,语音数据:“What’s up”)生成第二数据(例如,文本数据:“What’s up?”)。第一服务器可以向第二服务器发送第一处理请求,所述第一处理请求包括第二数据。所述第一处理请求可以用于指示第二服务器提供第二数据的第一语言翻译(例如,“What’s up?”的中文翻译)。响应于上述第一处理请求,第二服务器向第一服务器发送第三数据(例如,文本数据:“近来过得如何?”)。第一服务器向电子设备发送第四数据,所述第四数据包括第三数据和第二数据。根据所述第四数据,电子设备生成通话服务界面(例如,显示文本数据“What’s up?”和“近来过得如何?”)。
需要说明的是,本申请实施例中的通话内容处理方法包括但不限于由与第一服务器不同的其他服务器提供与第二数据相关的数据。
例如,如图10(b)所示的应用场景为:Jack说道:“一闪一闪亮晶晶(Twinkle,twinkle,little star)”。第一服务器可以提供语句补充服务。具体地,第一服务器可以包括歌词库。第一服务器将所述第二数据(文本数据:“一闪一闪亮晶晶”)与所述歌词库中的歌词进行匹配以获取与所述第二数据相关的数据。例如,所述与第二数据相关的数据可以为:该文本数据所涉及的歌词的后续歌词,例如“漫天都是小星星(How I wonder what you are)”。可以理解的是,所述第一服务器可以包括诗词库和/或名言名句库等以提供多样的语句补充服务。
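上述语句补充服务中"与歌词库匹配并返回后续歌词"的过程可以示意如下(歌词库的内容与数据结构均为示例性假设):

```python
# 歌词库:每首歌按顺序存放歌词行
LYRICS_DB = [
    ["一闪一闪亮晶晶", "漫天都是小星星"],
]

def next_line(text):
    """在歌词库中查找 text,命中则返回其后一句歌词,否则返回 None。"""
    for song in LYRICS_DB:
        for i, line in enumerate(song[:-1]):
            if line == text:
                return song[i + 1]
    return None
```

诗词库、名言名句库的匹配可以采用同样的结构。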
又如,如图10(c)所示的应用场景为:Jack说道:“我的专业是机械专业”,但是Jack错误地将“械”读成了“jie”。第一服务器可以提供语音纠错服务,将“械”的正确读音(xie)发送给电子设备。
此外,本申请实施例中提供的通话内容处理方法也可以由电子设备提供与第二数据相关的数据。
如图10(d)所示的应用场景为:Tom问道“4月3号下午有空吗”。第一服务器根据所述第一数据(例如,语音数据:“4月3号下午有空吗”)生成第二数据(例如,文本数据:“4月3号下午有空吗?”)。然后,第一服务器将上述第二数据发送给电子设备。电子设备的内部存储器121包括用户的行程安排。电子设备根据所述第二数据获取并显示与所述第二数据相关的数据(例如,Jack的4月3日下午的行程安排)。
需要说明的是,上述实施例中,以电子设备向第一服务器发送的第一数据为语音数据,第一服务器对语音数据进行处理以生成第二数据为例进行了说明。替代性的,为了保护用户的隐私,本申请实施例中的通话内容处理方法可以由电子设备进行语音数据到文本数据的转换(步骤40331)和关键词的提取(步骤40332)。将所述关键词作为第一数据发送给第一服务器。
需要说明的是,上述实施例中,电子设备向第一服务器发送用户的语音数据和其他用户的语音数据。替代性地,也可以由电子设备(如第一电子设备)向第一服务器发送第一用户的语音数据,由其他用户的电子设备(如,第二电子设备)向第一服务器发送第二用户的语音数据。
可选的,当电子设备通过浏览器内核访问网页时,电子设备可以记录用户访问的各个网页的网页链接。当通话服务结束时,电子设备可以启动浏览器应用,根据所述记录的网页链接跳转到结束通话服务时显示的网页。
为了方便理解,下面结合具体的应用场景,对上述方法进行具体地说明。图11为本申请实施例提供的一种通话内容处理方法的场景示意图。
如图11(a)所示,用户触摸第一信息标签(如,信息标签“订票”)。第一信息标签与第一网页(如,电影A的订票页下的电影院选择页)相关联。如图11(b)所示,响应于对第一信息标签的第三输入,电子设备通过浏览器内核访问第一网页,电子设备以网络视图显示第一网页。电子设备记录所述第一网页的网页链接(www.fandango.com/ticket?movie="movieA")。可以理解的是,第一网页可以包括一个或多个网页标签,类似在浏览器应用中浏览网页,用户可以对网页中的网页标签进行操作以访问更多的网页。例如,第一网页包括第一网页标签(如,网页标签“ABC电影院”),所述第一网页标签与第二网页(如,电影A的订票页下ABC电影院的时间选择页)相关联。如图11(b)至图11(c)所示,电子设备接收用户对所述第一网页的第四输入。所述第四输入可以为:触摸所述第一网页中的第一网页标签。响应于对所述第一网页中的第一网页标签的触摸操作,电子设备通过浏览器内核访问第二网页,以网页视图显示第二网页。电子设备记录所述第二网页的网页链接(www.fandango.com/ticket?movie="movieA"&museum="ABC")。可以理解的是,所述第二网页还可以包括一个或多个网页标签,用户可以继续对所述第二网页中的一个或多个网页标签进行操作以访问与所述第二网页中的一个或多个网页标签相关联的网页。然后,类似地,电子设备可以记录该网页的网页链接。如下表2所示,电子设备可以存储一个或多个历史访问列表以记录用户访问的一个或多个网页的网页链接。如图11(d)和图11(f)所示,通话结束时,电子设备启动浏览器应用,根据最新记录的所述第二网页的网页链接显示所述第二网页。可选的,如图11(e)所示,当通话结束时,电子设备可以显示通知消息以提示用户电子设备正在关闭通话服务,询问用户是否继续访问。当用户选择继续访问时,电子设备可以启动浏览器应用并跳转至通话结束时显示的网页。当用户选择不继续访问时,电子设备可以不执行上述步骤。
表2
序号 网站链接
1 www.fandango.com/movie-a/movie-overview
2 www.fandango.com/ticket?movie="movieA"
3 www.fandango.com/ticket?movie="movieA"&museum="ABC"
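结合表2,"按访问顺序记录网页链接、通话结束时取最新一条用于恢复显示"的逻辑可以示意如下(类名与接口为示例性假设):

```python
class VisitHistory:
    """按访问顺序记录网页链接,通话结束时取最新一条用于恢复显示。"""
    def __init__(self):
        self._links = []

    def record(self, url):
        """每访问一个网页即追加其网页链接(对应表2的一行)。"""
        self._links.append(url)

    def latest(self):
        """返回最新记录的网页链接;没有记录时返回 None。"""
        return self._links[-1] if self._links else None

# 按表2的顺序记录三次访问
history = VisitHistory()
history.record('www.fandango.com/movie-a/movie-overview')
history.record('www.fandango.com/ticket?movie="movieA"')
history.record('www.fandango.com/ticket?movie="movieA"&museum="ABC"')
```

通话结束时,浏览器应用即根据 `history.latest()` 返回的链接跳转。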
综上所述,相比于图12(a)至图12(f)所示的、在通话结束后需要用户打开浏览器应用并找到相应的网页订票的应用场景,如图11(a)至图11(f)所示,本申请实施例中提供的又一种通话内容处理方法,通过记录网页链接,在通话结束后启动浏览器应用,跳转至通话结束时显示的网页,使得用户可以在通话结束后连贯地浏览网页,改善了用户的体验。
电子设备中安装有各种各样的应用。其中,所述应用可以包括快应用,小程序等免安装应用程序。例如,当用户安装有淘宝的应用时,相比于用浏览器应用访问淘宝的网页,用户可能更倾向于使用淘宝的应用查看相关的信息。因此,可选的,电子设备启动应用并跳转至与用户最后访问的网页相对应的应用界面。
具体地,如图13所示,在图4所示的步骤407之后(即,结束通话服务之后),当用户选择继续访问网站时(或者无需用户选择,电子设备可以自动跳转继续访问网站),所述方法还可以包括:
步骤1301、电子设备获取最新记录的网页链接。
如上述图11所示,响应于用户对第一信息标签的第三输入,电子设备通过浏览器内核访问第一网页,用户可以对网页中的网页标签进行操作访问其他网页。其中,所述最新记录的网页链接是指,结束通话服务之前,用户最后访问的网页链接。
步骤1302、电子设备判断是否安装了与所述最新记录的网页链接相关的应用。
可以理解的是,电子设备可以根据所述网页链接确定与之相关的应用。例如,网页链接为www.taobao.com/phones/huawei/p30-pro/时,与所述网页链接相关的应用可以为淘宝应用。
如果电子设备安装了与所述网页相应的应用,则执行步骤1304;如果电子设备未安装所述应用,则执行步骤1303;
步骤1303、电子设备启动浏览器应用,根据所述网页链接跳转至相应的网页。
步骤1304、电子设备向第一服务器发送所述网页链接。
步骤1305、第一服务器接收电子设备发送的所述网页链接。
步骤1306、第一服务器向第二服务器发送第二处理请求,所述第二处理请求包括所述网页链接。其中,所述第二处理请求用于指示第二服务器返回与所述网页链接相对应的应用链接。
步骤1307、第二服务器接收第一服务器发送的所述第二处理请求。
步骤1308、响应于所述第二处理请求,第二服务器向第一服务器发送应用链接。
步骤1309、第一服务器接收第二服务器发送的所述应用链接。
步骤1310、第一服务器向电子设备发送所述应用链接。
步骤1311、电子设备接收第一服务器发送的所述应用链接。
步骤1312、电子设备启动所述应用,根据所述应用链接跳转至相应的应用界面。
综上所述,当电子设备中安装有相应的应用时,电子设备可以获取与所述网页链接相对应的应用链接,跳转至与用户最后访问的网页相对应的应用界面,从而使得用户可以通过该应用查看相关的信息。
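图13所示"已安装相应应用则跳转应用界面,否则用浏览器打开网页"的分支,可以概括为如下示意代码(以链接的域名确定相关应用、以回调函数代表步骤1304至步骤1311的应用链接查询,均为便于说明的假设):

```python
def open_target(web_link, installed_apps, app_link_of):
    """决定网页链接的打开方式:
    若域名对应的应用已安装,返回 ('app', 应用链接);
    否则返回 ('browser', 网页链接),由浏览器应用打开。
    installed_apps: 域名 -> 已安装应用名;
    app_link_of: 网页链接 -> 应用链接 的查询函数。"""
    domain = web_link.split("/")[0]
    if installed_apps.get(domain):
        return ("app", app_link_of(web_link))
    return ("browser", web_link)
```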
替代性的,也可以不经由第一服务器,由电子设备直接向第二服务器发送第二处理请求,由第二服务器直接向电子设备发送所述应用链接。或者,也可以不经由第二服务器,电子设备直接向第一服务器发送网页链接,由第一服务器直接向电子设备发送所述应用链接。示例性的,第一服务器存储对照表,所述对照表中包括网页链接和与所述网页链接相对应的应用界面的链接地址、即应用链接。第一服务器根据对照表确定与所述网页链接相对应的应用链接,将所述应用链接发送给电子设备。
替代性的,也可以由电子设备根据所述网页链接确定相应的应用链接。示例性的,所述第三数据中的链接包括网页的网页链接和与所述网页链接相对应的应用界面的应用链接。电子设备可以根据最近记录的网页链接、第三数据中包括的网页链接和应用链接确定与所述最近记录的网页链接相对应的应用链接。例如,第三数据为:{“web”:www.amazon.com/phones/huawei/p30-pro/;“app”:“amazon://phones/huawei/p30-pro/”}。电子设备将最近记录的网页链接(例如:www.amazon.com/phones/huawei/p30-pro/spec/)和第三数据中的网页链接(www.amazon.com/phones/huawei/p30-pro/)进行比较,将增加的字段(spec/)作为后缀添加到第三数据中的应用链接(amazon://phones/huawei/p30-pro/)中,从而得到与所述最近记录的网页链接相对应的应用链接(amazon://phones/huawei/p30-pro/spec/)。需要说明的是,采用上述方法时,所述网页链接和所述应用链接需要保持相同或相似的后缀形式。
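上述"比较网页链接、将新增字段作为后缀拼接到应用链接"的做法可以用如下代码示意(正如前述,该方法要求网页链接和应用链接保持相同或相似的后缀形式;函数名为示例性假设):

```python
def derive_app_link(latest_web, base_web, base_app):
    """从最新记录的网页链接中取出相对第三数据中网页链接新增的字段,
    作为后缀拼接到第三数据中的应用链接之后;前缀不匹配时返回 None。"""
    if not latest_web.startswith(base_web):
        return None
    suffix = latest_web[len(base_web):]
    return base_app + suffix
```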
需要说明的是,如果通话结束时第二显示区域中包括多个网页界面,则电子设备可以启动多个与之相应的应用并跳转至相应的应用界面。或者电子设备可以启动浏览器通过多个窗口显示相应的多个网页界面。或者电子设备也可以启动相应的应用和浏览器分别显示相应的界面。为了方便理解,下面结合图14中所示的应用场景进行示例性地说明。
如图14(a)所示,电子设备显示通话服务界面1410。用户触摸第一信息标签(信息标签“订票”),第一信息标签与第一网页(电影A的订票页下的电影院选择页)相关联。如图14(b)所示,响应于用户对第一信息标签的触摸操作,电子设备通过浏览器内核访问第一网页,电子设备显示第一网页的网页界面1411。如图14(c)所示,电子设备接收第四输入。所述第四输入可以为:触摸第一网页中的第一网页标签。响应于对所述第一网页中的第一网页标签(网页标签“ABC电影院”)的触摸操作,电子设备通过浏览器内核访问第二网页(电影A的订票页下的时间选择页),电子设备显示第二网页的网页界面1412。
当电子设备显示网页界面时,电子设备还可以显示通话服务主页按钮1401,返回按钮1402和/或删除按钮1403。其中,所述返回按钮用于指示电子设备显示用户上一次访问的网页。例如,当电子设备显示第二网页1412时,用户触摸所述返回按钮1402,则响应于对所述返回按钮的触摸操作,电子设备显示用户上一次访问的网页界面1411。所述删除按钮用于指示电子设备结束所述网页界面的显示并显示通话服务界面。当用户触摸所述删除按钮1403时,电子设备结束所述网页界面的显示,在第二显示区域中显示通话服务界面。所述通话服务主页按钮1401用于指示电子设备显示通话服务界面。也就是说,当用户触摸所述通话服务主页按钮1401时,如图14(d)所示,电子设备在所述第二显示区域中与网页界面1412一同显示所述通话服务界面1410。如图14(d)至图14(e)所示,响应于用户对第三信息标签(“购买”标签)的操作,电子设备在所述第二显示区域中显示与所述第三信息标签相关联的第四网页的网页界面1413。如果通话结束时,所述第二显示区域包括多个网页界面,则电子设备可以分别根据与之相应的应用链接或网页链接,启动相应的应用或浏览器应用并跳转至相应的应用界面或网页界面。如图14(f)所示,如果通话结束时第二显示区域包括第二网页1412(电影A的订票页下的时间选择页)和第四网页1413(Mate20Pro的购买页),假设电子设备安装有与第二网页相对应的第一应用(Fandango),未安装与第四网页相对应的应用程序(Amazon),则电子设备启动第一应用跳转至与所述第二网页相对应的应用界面1414,并启动浏览器跳转至与所述第四网页相对应的网页1415。
本申请实施例公开了一种电子设备,包括:显示屏;处理器;存储器;一个或多个传感器;应用程序;一个或多个计算机程序以及通信模块。上述各器件可以通过一个或多个通信总线连接。其中,该一个或多个计算机程序被存储在上述存储器中并被配置为被该一个或多个处理器执行,该一个或多个计算机程序包括指令,上述指令可以用于执行上述实施例中的各个步骤。
示例性的,上述处理器具体可以为图1所示的处理器110,上述存储器具体可以为图1所示的内部存储器121和/或通过外部存储器接口120连接的外部存储器,上述显示屏具体可以为图1所示的显示屏194,上述传感器具体可以为图1所示的传感器模块180中的一个或多个传感器,上述通信模块具体可以为图1所示的移动通信模块150和/或无线通信模块160,本申请实施例对此不做任何限制。
另外,本申请实施例还提供一种电子设备上的图形用户界面(GUI),该图形用户界面具体包括电子设备在执行所述方法时显示的图形用户界面。
在上述实施例中,可以全部或部分地通过软件,硬件,固件或者其任意组合来实现。当使用软件程序实现时,可以全部或部分地以计算机程序产品的形式出现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。该可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,DVD)或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
以上所述,仅为本申请实施例的具体实施方式,但本申请实施例的保护范围并不局限于此,任何在本申请实施例揭露的技术范围内的变化或替换,都应涵盖在本申请实施例的保护范围之内。因此,本申请实施例的保护范围应以所述权利要求的保护范围为准。

Claims (37)

  1. 一种通话内容处理方法,其特征在于,包括:
    当第一电子设备处于与第二电子设备的通话连接状态时,所述第一电子设备接收第一输入;
    响应于所述第一输入,所述第一电子设备获取所述第一电子设备与所述第二电子设备之间的通话内容的至少一个关键信息;
    所述第一电子设备根据所述至少一个关键信息显示第一界面,所述第一界面包括与所述关键信息对应的标签。
  2. 根据权利要求1所述的方法,其特征在于,所述关键信息包括以下的任意一个或多个:
    所述通话内容的至少一部分对应的文本数据;
    所述通话内容中的关键词;和/或
    所述通话内容中的关键词相关的网页链接。
  3. 根据权利要求2所述的方法,其特征在于,所述标签包括文本标签,关键词标签和信息标签中的任意一种或多种;
    所述文本标签与所述文本数据对应;
    所述关键词标签与所述关键词对应;
    所述信息标签与所述网页链接对应。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,所述第一电子设备获取所述第一电子设备与所述第二电子设备之间的通话内容的至少一个关键信息,包括:
    所述第一电子设备向第一服务器发送所述通话内容;
    所述第一电子设备接收所述第一服务器发送的所述至少一个关键信息。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述第一界面包括所述信息标签,所述第一电子设备根据所述至少一个关键信息显示第一界面之后,还包括:
    所述第一电子设备接收对所述信息标签的第三输入;
    响应于对所述信息标签的所述第三输入,所述第一电子设备显示与所述信息标签相关联的网页。
  6. 根据权利要求5所述的方法,其特征在于,所述第一电子设备显示与所述信息标签关联的网页,包括:
    所述第一电子设备通过浏览器内核访问与所述信息标签相关联的所述网页;
    所述第一电子设备以网络视图(Webview)显示与所述信息标签相关联的所述网页。
  7. 根据权利要求5或6所述的方法,其特征在于,所述第一电子设备显示与所述信息标签关联的网页之后,还包括:
    所述第一电子设备接收对与所述信息标签相关联的所述网页的第四输入;
    响应于所述第四输入,所述第一电子设备显示其他网页。
  8. 根据权利要求7所述的方法,其特征在于,所述第一电子设备接收对与所述信息标签相关联的所述网页的第四输入之后,还包括
    所述第一电子设备记录其他网页的网页链接。
  9. 根据权利要求1-8任一项所述的方法,其特征在于,所述第一电子设备根据所述至少一个关键信息显示第一界面之后,还包括:
    所述第一电子设备接收第二输入;响应于所述第二输入,所述第一电子设备结束所述关键信息的获取。
  10. 根据权利要求1-9任一项所述的方法,其特征在于,所述第一电子设备根据所述至少一个关键信息显示第一界面之后,还包括
    当所述第一电子设备结束与所述第二电子设备的所述通话连接时,响应于所述通话连接的结束,所述第一电子设备结束所述关键信息的获取。
  11. 根据权利要求10所述的方法,其特征在于,所述第一电子设备结束所述关键信息的获取之后,还包括:
    所述第一电子设备获取所述第一电子设备最新记录的网页链接;
    所述第一电子设备启动浏览器应用,根据所述最新记录的网页链接显示与所述最新记录的网页链接对应的网页。
  12. 根据权利要求10所述的方法,其特征在于,所述第一电子设备结束所述关键信息的获取之后,还包括:
    所述第一电子设备获取所述第一电子设备最新记录的网页链接;
    所述第一电子设备根据所述最新记录的网页链接确定与所述最新记录的网页链接相关的应用;
    如果所述第一电子设备安装了所述应用,则所述第一电子设备获取与所述最新记录的网页链接对应的应用链接;所述第一电子设备启动所述应用,并根据所述应用链接显示与所述应用链接对应的应用界面;
    如果所述第一电子设备未安装所述应用,则所述第一电子设备启动浏览器应用,根据所述最新记录的网页链接显示与所述最新记录的网页链接对应的网页。
  13. 根据权利要求1-12任一项所述的方法,其特征在于,所述第一电子设备接收第一输入之前,还包括:
    所述第一电子设备显示第二界面,所述第二界面为通话界面。
  14. 根据权利要求13所述的方法,其特征在于,
    所述第一电子设备显示所述第一界面时,所述第二界面被缩小并与所述第一界面同时显示;或者
    所述第一电子设备显示所述第一界面时,所述第二界面缩小为一个图标悬浮显示。
  15. 根据权利要求1-14任一项所述的方法,其特征在于,
    所述通话内容包括第一语音数据和第二语音数据中的一个或两个;
    其中,所述第一语音数据为所述第一电子设备通过采集外部声音生成的语音数据;
    所述第二语音数据为所述第一电子设备从所述第二电子设备接收的语音数据。
  16. 根据权利要求1-15任一项所述的方法,其特征在于,
    所述第一输入为所述第一电子设备被折叠;或者
    所述第一输入为所述第一电子设备被展开。
  17. 根据权利要求16所述的方法,其特征在于,
    所述第一电子设备被折叠包括所述第一电子设备被折叠至所述第一电子设备的第一部分与所述第一电子设备的第二部分的夹角小于第一角度;或者
    所述第一电子设备被展开包括所述第一电子设备被展开至所述第一电子设备的第一部分与所述第一电子设备的第二部分的夹角大于第二角度。
  18. 一种处理通话内容的电子设备,其特征在于,包括:
    显示屏;
    处理器;
    存储器,用于存储计算机程序;
    所述计算机程序包括指令,当所述指令被所述处理器执行时,使得所述电子设备执行以下步骤:
    当所述电子设备处于与其他电子设备的通话连接状态时,接收第一输入;
    响应于所述第一输入,获取所述电子设备与所述其他电子设备之间的通话内容的至少一个关键信息;
    根据所述至少一个关键信息在所述显示屏上显示第一界面,所述第一界面包括与所述关键信息对应的标签。
  19. 根据权利要求18所述的电子设备,其特征在于,所述关键信息包括以下的任意一个或多个:
    所述通话内容的至少一部分对应的文本数据;
    所述通话内容中的关键词;和/或
    所述通话内容中的关键词相关的网页链接。
  20. 根据权利要求19所述的电子设备,其特征在于,所述标签包括文本标签,关键词标签和信息标签中的任意一种或多种;
    所述文本标签与所述文本数据对应;
    所述关键词标签与所述关键词对应;
    所述信息标签与所述网页链接对应。
  21. 根据权利要求18-20任一项所述的电子设备,其特征在于,所述获取所述电子设备与所述其他电子设备之间的通话内容的至少一个关键信息,包括:
    向第一服务器发送所述通话内容;
    接收所述第一服务器发送的所述至少一个关键信息。
  22. 根据权利要求18-21任一项所述的电子设备,其特征在于,
    当所述指令被所述处理器执行时,使得所述电子设备在执行所述根据所述至少一个关键信息显示第一界面之后,还执行以下步骤:
    接收对所述信息标签的第三输入;
    响应于对所述信息标签的所述第三输入,在所述显示屏上显示与所述信息标签相关联的网页。
  23. 根据权利要求22所述的电子设备,其特征在于,所述显示与所述信息标签关联的网页,包括:
    所述电子设备通过浏览器内核访问与所述信息标签相关联的所述网页;
    所述电子设备以网络视图(Webview)显示与所述信息标签相关联的所述网页。
  24. 根据权利要求22或23所述的电子设备,其特征在于,当所述指令被所述处理器执行时,使得所述电子设备在执行所述显示与所述信息标签关联的网页之后,还执行以下步骤:
    接收对与所述信息标签相关联的所述网页的第四输入;
    响应于所述第四输入在所述显示屏上显示其他网页。
  25. 根据权利要求24所述的电子设备,其特征在于,当所述指令被所述处理器执行时,使得所述电子设备在执行所述接收对与所述信息标签相关联的所述网页的第四输入之后,还执行步骤:
    记录其他网页的网页链接。
  26. 根据权利要求18-25任一项所述的电子设备,其特征在于,当所述指令被所述处理器执行时,使得所述电子设备在执行所述根据所述至少一个关键信息显示第一界面之后,还执行步骤:
    接收第二输入;
    响应于所述第二输入,结束所述关键信息的获取。
  27. 根据权利要求18-26任一项所述的电子设备,其特征在于,当所述指令被所述处理器执行时,使得所述电子设备在执行所述根据所述至少一个关键信息显示第一界面之后,还执行步骤:
    当所述电子设备结束与其他电子设备的所述通话连接时,响应于所述通话连接的结束,结束所述关键信息的获取。
  28. 根据权利要求27所述的电子设备,其特征在于,当所述指令被所述处理器执行时,使得所述电子设备在执行所述结束所述关键信息的获取之后,还执行步骤:
    获取电子设备最新记录的网页链接;
    启动浏览器应用,根据所述最新记录的网页链接在所述显示屏上显示与所述最新记录的网页链接对应的网页。
  29. 根据权利要求27所述的电子设备,其特征在于,当所述指令被所述处理器执行时,使得所述电子设备在执行所述结束所述关键信息的获取之后,还执行步骤:
    获取所述电子设备最新记录的网页链接;
    根据所述最新记录的网页链接确定与所述最新记录的网页链接相关的应用;
    如果所述电子设备安装了所述应用,则获取与所述最新记录的网页链接对应的应用链接;启动所述应用,并根据所述应用链接在所述显示屏上显示与所述应用链接对应的应用界面;
    如果所述电子设备未安装所述应用,则启动浏览器应用,根据所述最新记录的网页链接显示与所述最新记录的网页链接对应的网页。
  30. 根据权利要求18-29任一项所述的电子设备,其特征在于,当所述指令被所述处理器执行时,使得所述电子设备在执行所述接收第一输入之前,还执行步骤:
    在所述显示屏上显示第二界面,所述第二界面为通话界面。
  31. 根据权利要求30所述的电子设备,其特征在于,
    显示所述第一界面时,所述第二界面被缩小并与所述第一界面同时显示;或者
    显示所述第一界面时,所述第二界面缩小为一个图标悬浮显示。
  32. 根据权利要求18-31任一项所述的电子设备,其特征在于,
    所述通话内容包括第一语音数据和第二语音数据中的一个或两个;
    其中,所述第一语音数据为所述电子设备通过采集外部声音生成的语音数据;
    所述第二语音数据为所述电子设备从所述其他电子设备接收的语音数据。
  33. 根据权利要求18-32任一项所述的电子设备,其特征在于,
    所述第一输入为所述电子设备被折叠;或者
    所述第一输入为所述电子设备被展开。
  34. 根据权利要求33所述的电子设备,其特征在于,
    所述电子设备被折叠包括所述电子设备被折叠至所述电子设备的第一部分与所述电子设备的第二部分的夹角小于第一角度;或者
    所述电子设备被展开包括所述电子设备被展开至所述电子设备的第一部分与所述电子设备的第二部分的夹角大于第二角度。
  35. 一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,其特征在于,当所述指令在电子设备上运行时,使得所述电子设备执行如权利要求1-17中任一项所述的一种通话内容处理方法。
  36. 一种包含指令的计算机程序产品,其特征在于,当所述计算机程序产品在电子设备上运行时,使得所述电子设备执行如权利要求1-17中任一项所述的一种通话内容处理方法。
  37. 一种图形用户界面GUI,所述图形用户界面存储在电子设备中,所述电子设备包括显示屏、存储器、处理器,所述处理器用于执行存储在所述存储器中的计算机程序,其特征在于,所述图形用户界面包括电子设备在执行如权利要求1-17中任一项所述的方法时显示的图形用户界面。
PCT/CN2020/090956 2019-05-20 2020-05-19 一种通话内容处理方法和电子设备 WO2020233556A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910416825.4 2019-05-20
CN201910416825.4A CN111970401B (zh) 2019-05-20 2019-05-20 一种通话内容处理方法、电子设备和存储介质

Publications (1)

Publication Number Publication Date
WO2020233556A1 true WO2020233556A1 (zh) 2020-11-26

Family

ID=73357796

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/090956 WO2020233556A1 (zh) 2019-05-20 2020-05-19 一种通话内容处理方法和电子设备

Country Status (2)

Country Link
CN (1) CN111970401B (zh)
WO (1) WO2020233556A1 (zh)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113672152A (zh) * 2021-08-11 2021-11-19 维沃移动通信(杭州)有限公司 显示方法及装置
CN113761881A (zh) * 2021-09-06 2021-12-07 北京字跳网络技术有限公司 一种错别词识别方法及装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090232288A1 (en) * 2008-03-15 2009-09-17 Microsoft Corporation Appending Content To A Telephone Communication
CN105279202A (zh) * 2014-07-25 2016-01-27 中兴通讯股份有限公司 一种检索信息的方法及装置
CN105550235A (zh) * 2015-12-07 2016-05-04 小米科技有限责任公司 信息获取方法及装置
CN106713628A (zh) * 2016-12-14 2017-05-24 北京小米移动软件有限公司 查询移动终端内存储的信息的方法、装置及移动终端
CN106777320A (zh) * 2017-01-05 2017-05-31 珠海市魅族科技有限公司 通话辅助方法及装置
US20170230497A1 (en) * 2016-02-04 2017-08-10 Samsung Electronics Co., Ltd. Electronic device and method of voice command processing therefor
CN107547717A (zh) * 2017-08-01 2018-01-05 联想(北京)有限公司 信息处理方法、电子设备及计算机存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103379013B (zh) * 2012-04-12 2016-03-09 腾讯科技(深圳)有限公司 一种基于即时通信的地理信息提供方法和系统


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115309312A (zh) * 2021-04-21 2022-11-08 花瓣云科技有限公司 一种内容显示方法与电子设备
CN115268736A (zh) * 2021-04-30 2022-11-01 华为技术有限公司 界面切换方法及电子设备
CN113660375A (zh) * 2021-08-11 2021-11-16 维沃移动通信有限公司 通话方法、装置及电子设备
CN113660375B (zh) * 2021-08-11 2023-02-03 维沃移动通信有限公司 通话方法、装置及电子设备

Also Published As

Publication number Publication date
CN111970401B (zh) 2022-04-05
CN111970401A (zh) 2020-11-20

Similar Documents

Publication Publication Date Title
WO2021013158A1 (zh) 显示方法及相关装置
WO2020238356A1 (zh) 界面显示方法、装置、终端及存储介质
WO2020177622A1 (zh) Ui组件显示的方法及电子设备
WO2021063343A1 (zh) 语音交互方法及装置
CN110597512B (zh) 显示用户界面的方法及电子设备
WO2020253758A1 (zh) 一种用户界面布局方法及电子设备
WO2020233556A1 (zh) 一种通话内容处理方法和电子设备
WO2021082835A1 (zh) 启动功能的方法及电子设备
WO2020221063A1 (zh) 切换父页面和子页面的方法、相关装置
CN111669459B (zh) 键盘显示方法、电子设备和计算机可读存储介质
WO2022052776A1 (zh) 一种人机交互的方法、电子设备及系统
WO2020156230A1 (zh) 一种电子设备在来电时呈现视频的方法和电子设备
WO2021000841A1 (zh) 一种生成用户头像的方法及电子设备
WO2022068819A1 (zh) 一种界面显示方法及相关装置
CN109819306B (zh) 一种媒体文件裁剪的方法、电子设备和服务器
WO2020238759A1 (zh) 一种界面显示方法和电子设备
CN113961157A (zh) 显示交互系统、显示方法及设备
WO2022033432A1 (zh) 内容推荐方法、电子设备和服务器
CN113852714A (zh) 一种用于电子设备的交互方法和电子设备
WO2022057889A1 (zh) 一种对应用程序的界面进行翻译的方法及相关设备
WO2022022674A1 (zh) 应用图标布局方法及相关装置
WO2021196980A1 (zh) 多屏交互方法、电子设备及计算机可读存储介质
WO2021031862A1 (zh) 一种数据处理方法及其装置
WO2022135157A1 (zh) 页面显示的方法、装置、电子设备以及可读存储介质
CN116339568A (zh) 屏幕显示方法和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20810121

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20810121

Country of ref document: EP

Kind code of ref document: A1