CN112992135A - Electronic equipment and voice control display method - Google Patents

Electronic equipment and voice control display method

Info

Publication number
CN112992135A
Authority
CN
China
Prior art keywords
voice
processing chip
module
preset
processing module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911304749.4A
Other languages
Chinese (zh)
Inventor
黄亮
方攀
陈岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911304749.4A
Publication of CN112992135A
Legal status: Pending

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 52/00 Power management, e.g. TPC [Transmission Power Control], power saving or power classes
    • H04W 52/02 Power saving arrangements
    • H04W 52/0209 Power saving arrangements in terminal devices
    • H04W 52/0225 Power saving arrangements in terminal devices using monitoring of external events, e.g. the presence of a signal
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Abstract

The embodiment of the application provides an electronic device and a voice-controlled display method. The electronic device includes a display screen; a main processing chip electrically connected with the display screen and configured to control the display screen to enter a screen-off display mode when receiving a first operation instruction; and a co-processing chip electrically connected with the display screen and the main processing chip, the operation power consumption of the co-processing chip being less than that of the main processing chip. When the display screen enters the screen-off display mode, the main processing chip wakes up the co-processing chip and then enters a dormant state. The co-processing chip is configured to receive a voice signal and, when detecting that the voice signal contains a preset keyword, display the preset keyword on the display screen. Because the whole process is controlled by the low-power co-processing chip and the text is displayed in the screen-off display state, the interest of the electronic device can be improved while its power consumption is reduced.

Description

Electronic equipment and voice control display method
Technical Field
The present disclosure relates to computer technologies, and in particular, to an electronic device and a voice-controlled display method.
Background
With the development of electronic device technology, various electronic devices have become indispensable tools in people's life and work, and more functions can be supported by electronic devices. For example, the user may implement a call function, an online shopping function, a navigation function, a game function, an electronic book function, and the like through the electronic device.
However, as electronic devices support more and more functions, their power consumption keeps rising. How to reduce the power consumption of an electronic device and prolong its endurance time has become an urgent problem to be solved.
Disclosure of Invention
The embodiment of the application provides an electronic device and a voice-controlled display method, which can reduce the power consumption of the electronic device, display voice information on the display screen, and improve the interestingness of the electronic device.
In a first aspect, an embodiment of the present application provides an electronic device, including:
a display screen;
the main processing chip is electrically connected with the display screen and is used for controlling the display screen to enter a screen-off display mode when receiving a first operation instruction;
the co-processing chip is electrically connected with the display screen and the main processing chip, and the operation power consumption of the co-processing chip is less than that of the main processing chip;
when the display screen enters the screen-off display mode, the main processing chip wakes up the co-processing chip and then enters a dormant state; the co-processing chip is used for receiving a voice signal and displaying a preset keyword on the display screen when detecting that the voice signal contains the preset keyword.
In a second aspect, an embodiment of the present application provides a voice-controlled display method, which is applied to an electronic device, where the electronic device includes a main processing chip, a co-processing chip, and a display screen, and an operation power consumption of the co-processing chip is less than an operation power consumption of the main processing chip; the method comprises the following steps:
when receiving a first operation instruction, the main processing chip controls the display screen to enter a screen-off display mode, wakes up the co-processing chip, and enters a dormant state after waking up the co-processing chip;
the co-processing chip receives the voice signal and detects whether the voice signal contains a preset keyword or not;
and when the voice signal contains the preset keyword, the co-processing chip displays the preset keyword on the display screen.
In the electronic device provided by the embodiment of the application, a main processing chip and a co-processing chip are provided. When the main processing chip is in the dormant state, the co-processing chip can receive a voice signal and detect it; when it detects that the voice signal contains the preset keyword, the co-processing chip displays the keyword on the display screen. Because the operation power consumption of the co-processing chip is less than that of the main processing chip, performing the voice detection and displaying the preset keyword on the display screen with the low-power co-processing chip can effectively reduce the overall power consumption of the electronic device and thereby prolong its endurance time. Meanwhile, displaying the preset keyword recognized from the voice on the display screen can improve the interestingness of the electronic device.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a first structural schematic diagram of an electronic device according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a first structure of a co-processing chip of an electronic device according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of a second structure of a co-processing chip of an electronic device according to an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of a third structure of a co-processing chip of an electronic device according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a fourth structure of a co-processing chip of an electronic device according to an embodiment of the present application.
Fig. 6 is a fifth structural schematic diagram of a co-processing chip of an electronic device according to an embodiment of the present application.
Fig. 7 is a sixth schematic structural diagram of a co-processing chip of an electronic device according to an embodiment of the present application.
Fig. 8 is a schematic diagram of a seventh structure of a co-processing chip of an electronic device according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of a second electronic device according to an embodiment of the present application.
Fig. 10 is a third schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The embodiment of the application provides electronic equipment. The electronic device may be a smart phone, a smart watch, a tablet computer, or the like, or may be a game device, an AR (Augmented Reality) device, an automobile device, a data storage device, an audio playing device, a video playing device, a notebook computer, a desktop computing device, or the like, or may be a wearable electronic device such as an electronic helmet, electronic glasses, electronic clothing, or the like.
Referring to fig. 1, fig. 1 is a first structural schematic diagram of an electronic device 100 according to an embodiment of the present disclosure.
The electronic device 100 includes a main processing chip 10, a co-processing chip 20 and a display 30. The co-processing chip 20 is electrically connected to the main processing chip 10, and the display 30 is electrically connected to the co-processing chip 20 and the main processing chip 10, respectively. It will be appreciated that an electrical connection may be a direct connection, or an indirect connection, for example through a switch or another electronic component, as long as electrical signals can be transferred.
The main processing Chip 10 may serve as a main control SOC (System on Chip) of the electronic device 100. The main processing chip 10 may have integrated thereon a processor and a memory, such as a first processor which may perform data processing and a first memory which may store data, including a first operating system and an application program. The main processing chip 10 may run a first operating system and applications.
The co-processing chip 20 is a low power SOC. The operating power consumption of the co-processing chip 20 is less than the operating power consumption of the main processing chip 10. The co-processing chip 20 may also have integrated thereon a processor and a memory, such as a second processor that may perform data processing and a second memory that may store data, including a second operating system and application programs. The co-processing chip 20 may run a second operating system and applications.
When the main processing chip 10 receives the first operation instruction, it controls the display screen 30 to enter the screen-off mode or the screen-off display mode, wakes up the co-processing chip 20, and then enters the sleep state. While the main processing chip 10 is in the sleep state, the co-processing chip 20 can receive the voice signal transmitted by a peripheral component, such as a microphone, and detect the voice signal. When detecting that the voice signal contains the preset keyword, the co-processing chip 20 displays the preset keyword on the display screen 30.
Because the operation power consumption of the co-processing chip 20 is less than that of the main processing chip 10, the low-power co-processing chip 20 detects the voice signal and displays the preset keywords on the display screen 30, so that the overall power consumption of the electronic device 100 can be effectively reduced, the endurance time of the electronic device 100 can be further prolonged, and meanwhile, the preset keywords contained in the electronic device are displayed on the display screen 30 through voice recognition, so that the interestingness of the electronic device can be improved.
When the main processing chip 10 is in the sleep state, only some modules of the co-processing chip 20 are woken up, for example the voice detection module, the voice processing module and the coprocessor, while the other modules of the co-processing chip 20 remain in a deep sleep state. This further reduces the power consumption of the electronic device 100 and further prolongs its endurance time.
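By way of illustration only, the handoff between the two chips described above can be sketched in C as follows. This is a minimal sketch, not the patent's actual firmware; the function names display_enter_aod(), coprocessor_wake() and main_chip_enter_sleep() and the instruction type are invented stand-ins for platform-specific calls.

```c
#include <stdio.h>

/* Hypothetical platform hooks (names are illustrative only). */
static void display_enter_aod(void)      { puts("display: screen-off (always-on) display mode"); }
static void coprocessor_wake(void)       { puts("co-processing chip: woken up"); }
static void main_chip_enter_sleep(void)  { puts("main processing chip: sleeping"); }

/* First operation instruction, e.g. the user pressing the lock key. */
typedef enum { INSTR_NONE, INSTR_LOCK_SCREEN } instruction_t;

/* Handoff performed by the main processing chip 10. */
void main_chip_on_instruction(instruction_t instr)
{
    if (instr != INSTR_LOCK_SCREEN)
        return;
    display_enter_aod();      /* display screen 30 enters the screen-off display mode */
    coprocessor_wake();       /* wake the low-power co-processing chip 20             */
    main_chip_enter_sleep();  /* only then does the main chip enter the sleep state   */
}

int main(void)
{
    main_chip_on_instruction(INSTR_LOCK_SCREEN);
    return 0;
}
```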
Referring to fig. 2, fig. 2 is a schematic diagram illustrating a first structure of a co-processing chip of an electronic device according to an embodiment of the present disclosure. The co-processing chip 20 includes a voice processing module 21. The voice processing module 21 is used for performing audio data processing. The voice processing module 21 may be configured to perform keyword extraction on the audio data.
In some embodiments, voiceprint recognition may also be performed by the voice processing module 21, so that the voice control of the user is authenticated by the voiceprint recognition result of the voice processing module 21. The processing frequency of the voice processing module 21 can reach 400MHz, and the voice algorithm can be processed more efficiently.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating a second structure of a co-processing chip of an electronic device according to an embodiment of the present disclosure. The co-processing chip 20 comprises a voice processing module 21, a voice detection module 22 and a communication bus 23, wherein the voice processing module 21 is electrically connected with the voice detection module 22.
The communication bus 23 is, for example, NOC (Network on Chip). The voice processing module 21 may be electrically connected to the voice detection module 22 through a communication bus 23, so as to implement communication between the voice detection module 22 and the voice processing module 21.
The voice detection module 22 is an always-on module: whether the co-processing chip 20 is in the working state or the sleep state, the voice detection module 22 remains powered on and keeps detecting the input voice signal. The voice detection module 22 is configured to detect, when a voice signal is received, whether the amplitude of the voice signal is greater than a preset amplitude, and to wake up the voice processing module 21 when the amplitude is greater than the preset amplitude. The voice processing module 21 is configured to acquire, after switching from the dormant state to the working state, the voice data corresponding to the voice signal and to detect whether the voice data contains the preset keyword; when the preset keyword is detected, the co-processing chip 20 may display it on the display screen 30.
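The amplitude check that gates the wake-up can be illustrated with the C sketch below. It is only a sketch under assumptions: the threshold value PRESET_AMPLITUDE and the hook voice_processing_module_wake() are invented for illustration and are not values or APIs from the patent.

```c
#include <stdint.h>
#include <stdio.h>

#define PRESET_AMPLITUDE 2000   /* illustrative threshold, not a value taken from the patent */

/* Hypothetical wake hook for the voice processing module 21. */
static void voice_processing_module_wake(void) { puts("voice processing module: woken up"); }

/* Always-on voice detection module 22: compare the peak amplitude of one
 * frame of samples against the preset amplitude and wake the voice
 * processing module only when the threshold is exceeded. */
void voice_detection_on_frame(const int16_t *samples, int n)
{
    int32_t peak = 0;
    for (int i = 0; i < n; i++) {
        int32_t a = samples[i] < 0 ? -(int32_t)samples[i] : samples[i];
        if (a > peak)
            peak = a;
    }
    if (peak > PRESET_AMPLITUDE)
        voice_processing_module_wake();
    /* otherwise: stay in the low-power detection loop, nothing else is woken */
}

int main(void)
{
    int16_t quiet[] = { 10, -12, 8, -9 };
    int16_t loud[]  = { 150, -2500, 2400, -300 };
    voice_detection_on_frame(quiet, 4);  /* below threshold: no wake-up      */
    voice_detection_on_frame(loud, 4);   /* above threshold: wakes module 21 */
    return 0;
}
```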
In some embodiments, please refer to fig. 4, and fig. 4 is a schematic diagram illustrating a third structure of a co-processing chip of an electronic device according to an embodiment of the present disclosure. The co-processing chip 20 further includes a storage module 24, and the storage module 24 is electrically connected to the voice detection module 22 and the voice processing module 21. The memory module 24 may also be electrically connected to the communication bus 23 to enable communication with other modules.
The voice detection module 22 is further configured to trigger an interrupt signal to wake up the voice processing module 21 when detecting that the amplitude of the voice signal is greater than the preset amplitude; the voice processing module 21 is further configured to switch the operating mode of the storage module 24 from the exclusive mode to the bus mode and enter the operating state when the interrupt signal is detected. In the exclusive mode, the voice processing module 21 only responds to the interrupt signal of the voice detection module 22, and in the bus mode, the voice processing module 21 acquires the voice data in the storage module 24 for processing.
Before the voice processing module 21 is awakened, it does not process the voice data that the voice detection module 22 stores in the storage module 24; it only responds to an interrupt signal from the voice detection module 22. When the voice processing module 21 detects the interrupt signal sent by the voice detection module 22, the storage module 24 is switched from the exclusive mode to the bus mode through the register of the storage module 24, which effectively connects the voice processing module 21 to the storage module 24, and the voice processing module 21 then acquires the voice data in the storage module 24 for processing.
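The exclusive-mode to bus-mode switch can be pictured with the following C sketch. The register name mem_mode_reg, its bit values and the polling function are invented for this illustration; the patent does not specify the register layout.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative register model for the storage module 24; the register name
 * and values are invented for this sketch, not taken from the patent. */
#define MEM_MODE_EXCLUSIVE 0u   /* only the voice detection module may write */
#define MEM_MODE_BUS       1u   /* the module is visible on the NoC bus      */

static volatile uint32_t mem_mode_reg = MEM_MODE_EXCLUSIVE;
static volatile bool     vad_interrupt_pending = false;

/* Raised by the voice detection module 22 when the amplitude threshold is hit. */
void vad_raise_interrupt(void) { vad_interrupt_pending = true; }

/* Voice processing module 21: in the exclusive mode it reacts only to the
 * interrupt; on the interrupt it flips the storage module into bus mode and
 * starts pulling voice data over the bus. */
void voice_processing_poll(void)
{
    if (!vad_interrupt_pending)
        return;                      /* exclusive mode: ignore everything else */
    vad_interrupt_pending = false;
    mem_mode_reg = MEM_MODE_BUS;     /* connect module 21 <-> storage module 24 */
    printf("storage module: mode=%s, fetching voice data\n",
           mem_mode_reg == MEM_MODE_BUS ? "bus" : "exclusive");
}

int main(void)
{
    voice_processing_poll();  /* nothing pending: stays in exclusive mode */
    vad_raise_interrupt();
    voice_processing_poll();  /* interrupt seen: switch to bus mode       */
    return 0;
}
```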
Referring to fig. 4, in some embodiments, the voice detection module 22 is further configured to store the voice data in the storage module 24 when detecting that the amplitude of the voice signal is greater than the preset amplitude; the voice processing module 21 is further configured to obtain voice data from the storage module 24 after the sleep state is switched to the working state.
For example, when the voice detection module 22 receives a voice signal and detects that its amplitude is greater than the preset amplitude, it stores the voice data in the storage module 24 and at the same time wakes up the voice processing module 21. After switching from the dormant state to the working state, the voice processing module 21 acquires the voice data from the storage module 24 and detects whether it contains the preset keyword; when the preset keyword is detected, the co-processing chip 20 displays it on the display screen 30.
In some embodiments, the speech processing module 21 may include a DSP and a watchdog timer WDT1. The DSP performs the detection on the voice data, and WDT1 is a timer used to limit how long the DSP waits for new data. If the DSP detects that the voice data does not contain the preset keyword, it continues to acquire voice data from the storage module 24; if no voice data is acquired from the storage module 24 within a preset time period, the DSP enters the sleep state. Whenever the DSP acquires voice data from the storage module 24, it resets WDT1 to zero; otherwise WDT1 keeps counting, and when its count reaches the preset duration it triggers a sleep signal, upon which the DSP switches from the working state to the sleep state.
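The DSP/WDT1 interplay amounts to a simple watchdog loop, sketched below in C. The timeout value and the data-source stub storage_has_voice_data() are invented for the example.

```c
#include <stdbool.h>
#include <stdio.h>

#define WDT1_TIMEOUT_TICKS 5   /* the "preset duration"; illustrative value only */

static unsigned wdt1_ticks = 0;

/* Hypothetical data source: returns true if a new chunk of voice data is
 * available in the storage module 24. */
static bool storage_has_voice_data(int step) { return step < 3; }

/* One iteration of the DSP loop described above: new data resets WDT1,
 * otherwise WDT1 keeps counting; when it reaches the preset duration the
 * DSP switches from the working state to the sleep state. */
bool dsp_step(int step)   /* returns false once the DSP goes to sleep */
{
    if (storage_has_voice_data(step)) {
        wdt1_ticks = 0;                   /* data acquired: zero the watchdog */
        printf("step %d: processing voice data\n", step);
        return true;
    }
    if (++wdt1_ticks >= WDT1_TIMEOUT_TICKS) {
        printf("step %d: WDT1 timeout, DSP entering sleep state\n", step);
        return false;                     /* sleep signal triggered           */
    }
    return true;                          /* keep waiting for more data       */
}

int main(void)
{
    for (int step = 0; dsp_step(step); step++)
        ;
    return 0;
}
```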
It should be noted that, in a typical electronic device, the storage module 24 (e.g., a Memory) and the DSP are disposed in the same module, that is, the storage module 24 is typically disposed in the voice processing module 21, and the storage module 24 is only used for storing voice data, but not used for storing other data. In the co-processing chip 20 of the present application, the storage module 24 is independent, and the storage module 24 can be used for storing not only audio data but also other data, so that the sharing of the storage module 24 can be realized, and the utilization rate of the storage module 24 can be improved.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating a fourth structure of a co-processing chip of an electronic device according to an embodiment of the present disclosure. The co-processing chip 20 further comprises a co-processor 25, the co-processor 25 being electrically connected to the speech processing module 21. The voice processing module 21 is further configured to wake up the coprocessor 25 when it is detected that the voice data includes the preset keyword; the coprocessor 25 is configured to display the preset keyword on the display screen 30 after switching from the sleep state to the working state.
The coprocessor 25, such as a CPU, is used for control and data operations of the entire co-processing chip 20, for example initializing the co-processing chip 20 when it starts to operate and performing data operations while it runs. The CPU may be understood as the control core of the co-processing chip 20. The processing frequency of the CPU can reach 300MHz, and the CPU can be used for control and logic processing.
In some embodiments, the voice processing module 21 is further configured to perform voiceprint recognition on the voice signal to obtain voiceprint information when it is detected that the voice data corresponding to the voice signal includes a preset keyword; and when the voiceprint information is successfully matched with the preset voiceprint information, waking up the coprocessor 25; the coprocessor 25 is configured to execute an operation matched with the preset keyword after switching from the sleep state to the working state.
For example, when the voice processing module 21 detects that the voice data corresponding to the voice signal contains the preset keyword, it performs voiceprint recognition on the voice signal to obtain voiceprint information and matches it against the pre-stored preset voiceprint information. When the matching succeeds, the coprocessor 25 is woken up and can then perform the operation matched with the preset keyword.
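The keyword-plus-voiceprint gate can be sketched as below. This is a toy illustration: real voiceprint matching compares acoustic feature vectors, whereas the sketch uses string comparison, and all names and values are invented.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical reference data; a real voiceprint match would compare
 * feature vectors, not strings. */
static const char *PRESET_KEYWORD    = "hello";
static const char *PRESET_VOICEPRINT = "owner-voiceprint";

static void coprocessor_wake_and_execute(const char *kw)
{
    printf("coprocessor: woken, executing operation matched with \"%s\"\n", kw);
}

/* Voice processing module 21: only when both the keyword and the voiceprint
 * match is the coprocessor 25 woken up. */
void on_voice_data(const char *keyword, const char *voiceprint)
{
    if (strcmp(keyword, PRESET_KEYWORD) != 0)
        return;                                   /* no preset keyword: stay asleep */
    if (strcmp(voiceprint, PRESET_VOICEPRINT) != 0) {
        puts("voiceprint mismatch: coprocessor stays asleep");
        return;
    }
    coprocessor_wake_and_execute(keyword);
}

int main(void)
{
    on_voice_data("hello", "stranger-voiceprint");  /* rejected */
    on_voice_data("hello", "owner-voiceprint");     /* accepted */
    return 0;
}
```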
In some embodiments, please refer to fig. 6, and fig. 6 is a fifth structural diagram of a co-processing chip according to an embodiment of the present disclosure. The co-processing chip 20 further comprises an image processing module 26, and the image processing module 26 is electrically connected with the voice processing module 21; the voice processing module 21 is further configured to wake up the image processing module 26 when it is detected that the voice data includes the preset keyword; the image processing module 26 is configured to process the preset keyword after switching from the sleep state to the working state, and wake up the coprocessor 25 after processing the preset keyword; the coprocessor 25 is configured to switch the sleep state to the working state, and display the processed preset keyword on the display screen 30.
For example, the voice processing module 21 is further configured to wake up the image processing module 26 when it is detected that the voice data contains the preset keyword. The image processing module 26 processes the preset keyword, for example rendering it in a Song typeface, an artistic lettering style, a regular script, and the like. After the image processing module 26 has processed the preset keyword, it wakes up the coprocessor 25, and the coprocessor 25 displays the processed preset keyword on the display screen 30.
The image processing module 26, such as a VOP & DSC block, is used to perform image processing, such as composition of layers and rendering of images. The image processing module 26 supports MIPI-DSI output of image data over 4 lanes, 3-layer image composition, and VESA DSC compression. It should be noted that, in a typical electronic device, image rendering needs to be performed by a GPU (Graphics Processing Unit), whereas the co-processing chip 20 of the present application can perform image rendering directly with the image processing module 26; the GPU does not need to participate in image rendering while the co-processing chip 20 works, so the power consumption of the GPU can be saved and the power consumption of the electronic device 100 further reduced.
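A thin C sketch of this rendering path follows, purely for orientation: the style enum and both hook functions are invented placeholders, and the actual composition work done by the VOP & DSC hardware is reduced to a print statement.

```c
#include <stdio.h>

/* Illustrative font styles standing in for the Song typeface, artistic
 * lettering and regular script mentioned above. */
typedef enum { STYLE_SONG, STYLE_ART, STYLE_REGULAR } kw_style_t;

/* Hypothetical hooks for the image processing module 26 and coprocessor 25. */
static void image_module_compose(const char *kw, kw_style_t style)
{
    printf("image processing module: composing \"%s\" (style %d) without the GPU\n",
           kw, (int)style);
}
static void coprocessor_display(const char *kw)
{
    printf("coprocessor: showing rendered \"%s\" on the screen-off display\n", kw);
}

/* Flow once the voice processing module 21 has detected the keyword:
 * wake the image module, let it render the text, then wake the coprocessor. */
void display_keyword(const char *keyword, kw_style_t style)
{
    image_module_compose(keyword, style);  /* layer composition / rendering    */
    coprocessor_display(keyword);          /* coprocessor pushes it to display */
}

int main(void)
{
    display_keyword("love", STYLE_ART);
    return 0;
}
```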
In some embodiments, in order to further improve the interestingness of the electronic device, the voice processing module 21 is further configured to wake up the coprocessor 25 when the voice data contains the preset keyword, and, after waking up the coprocessor 25, to detect the receiving frequency of the voice signal and determine whether it is greater than a preset receiving frequency. When the receiving frequency of the voice signal is greater than the preset receiving frequency, the coprocessor 25, after switching from the sleep state to the working state, dynamically displays the preset keyword on the display screen 30.
For example, the preset receiving frequency may be a fixed value pre-stored in the storage module. When the voice processing module 21 detects that the voice data contains the preset keyword, it wakes up the coprocessor 25 and then checks whether the receiving frequency of the voice signal is greater than the preset receiving frequency; when it is, the coprocessor 25 displays the preset keyword dynamically, which further increases the interestingness of the electronic device.
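The frequency test reduces to a single comparison, sketched below; the threshold value and the two display hooks are assumptions made for the example, not values from the patent.

```c
#include <stdio.h>

#define PRESET_RECEIVING_FREQUENCY 3   /* keyword hits per window; illustrative value */

/* Hypothetical display hooks on the coprocessor 25. */
static void display_static(const char *kw)  { printf("static display:  %s\n", kw); }
static void display_dynamic(const char *kw) { printf("dynamic display: %s\n", kw); }

/* Coprocessor behaviour sketched above: if the keyword arrives more often
 * than the preset receiving frequency, show it with an animation. */
void show_keyword(const char *kw, unsigned hits_in_window)
{
    if (hits_in_window > PRESET_RECEIVING_FREQUENCY)
        display_dynamic(kw);
    else
        display_static(kw);
}

int main(void)
{
    show_keyword("love", 1);   /* occasional keyword: plain text          */
    show_keyword("love", 5);   /* rapid repetition: animated presentation */
    return 0;
}
```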
In some embodiments, please refer to fig. 7 and 8, where fig. 7 is a sixth structural schematic diagram of a co-processing chip of an electronic device according to an embodiment of the present disclosure, and fig. 8 is a seventh structural schematic diagram of the co-processing chip of the electronic device according to the embodiment of the present disclosure. In order to improve the accuracy of detecting the preset keyword in the voice signal, the main processing chip 10 may process the voice signal to obtain the preset keyword, which is as follows: the co-processing chip 20 further includes a communication interface module 27, and the communication interface module 27 is electrically connected to the voice processing module 21.
The voice detection module 22 is configured to receive a voice signal in a screen-off display mode, and detect whether the amplitude of the voice signal is greater than a preset amplitude; when the amplitude of the voice signal is detected to be larger than the preset amplitude, the voice processing module 21 is awakened; the voice processing module 21 is further configured to send a wake-up request to the main processing chip 10 through the communication interface module 27 after switching from the sleep state to the working state, and enter the sleep state after sending the voice signal to the main processing chip 10; the main processing chip 10 is further configured to enter a working state when receiving a wake-up request sent by the voice processing module 21, detect whether the voice signal includes a preset keyword, send the wake-up request to the voice processing module 21 through the communication interface module 27 when detecting that the voice signal includes the preset keyword, and enter a sleep state after sending the preset keyword to the voice processing module 21; the voice processing module 21 is further configured to enter a working state when receiving a wake-up request sent by the main processing chip 10, and receive the preset keyword; the voice processing module 21 is further configured to wake up the coprocessor 25 after receiving the preset keyword; the coprocessor 25 is configured to display the preset keyword on the display screen 30 after switching from the sleep state to the working state. The accuracy of detecting the preset keyword in the voice signal can be improved by detecting the preset keyword of the voice signal through the main processing chip 10.
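The back-and-forth between the two chips can be summarized with the C sketch below. It is a schematic model only: the message enum, payload format and function names are invented, and a real implementation would exchange these messages over the SPI/I2C slave interfaces of the communication interface module 27.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative message types exchanged over the communication interface
 * module 27; the enum and payloads are invented for this sketch. */
typedef enum { MSG_WAKE_WITH_VOICE, MSG_WAKE_WITH_KEYWORD, MSG_NONE } msg_type_t;

typedef struct {
    msg_type_t type;
    char       payload[32];   /* raw voice tag or detected keyword */
} ipc_msg_t;

static const char *PRESET_KEYWORD = "hello";

/* Co-processing chip side: forward the voice signal, then go back to sleep. */
ipc_msg_t cochip_forward_voice(const char *voice)
{
    ipc_msg_t m = { MSG_WAKE_WITH_VOICE, {0} };
    snprintf(m.payload, sizeof m.payload, "%s", voice);
    puts("co-processing chip: voice sent to main chip, entering sleep");
    return m;
}

/* Main processing chip side: run the heavier keyword detection, reply only
 * when the keyword is found, then sleep again. */
ipc_msg_t mainchip_handle(ipc_msg_t in)
{
    ipc_msg_t out = { MSG_NONE, {0} };
    if (in.type == MSG_WAKE_WITH_VOICE && strstr(in.payload, PRESET_KEYWORD)) {
        out.type = MSG_WAKE_WITH_KEYWORD;
        snprintf(out.payload, sizeof out.payload, "%s", PRESET_KEYWORD);
    }
    puts("main processing chip: detection done, entering sleep");
    return out;
}

int main(void)
{
    ipc_msg_t reply = mainchip_handle(cochip_forward_voice("say hello please"));
    if (reply.type == MSG_WAKE_WITH_KEYWORD)
        printf("co-processing chip: coprocessor displays \"%s\"\n", reply.payload);
    return 0;
}
```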
Wherein, the communication interface module 27 is used for communicating with the main processing chip 10. The communication interface module 27 may include an SPISLV interface and an I2CSLV interface, and both the SPISLV interface and the I2CSLV interface may be used for communicating with the main processing chip 10.
Wherein, the coprocessor 25 and the communication interface module 27 may belong to the same power domain. After the power of the coprocessor 25 is turned on, the communication interface module 27 may be activated to implement communication with the main processing chip 10.
In some embodiments, the co-processing chip 20 further includes a power management module electrically connected to the coprocessor 25. The power management module is configured to control the power supply of each module in the co-processing chip 20; for example, the voice processing module 21 is further configured to wake up the coprocessor 25, when it detects that the voice data contains the preset keyword, through the register corresponding to the coprocessor 25 in the power management module.
As shown in fig. 8, the voice detection module 22 includes a data conversion unit 221 and a voice detection unit 222. The data conversion unit 221 is configured to determine, when a voice signal is received, whether the voice signal is an analog signal; if it is an analog signal, the data conversion unit 221 converts it into a digital signal and transmits the digital signal to the voice detection unit 222; if it is already a digital signal, the data conversion unit 221 passes it directly to the voice detection unit 222. The voice detection unit 222 is configured to detect whether the amplitude of the received voice signal is greater than the preset amplitude and, when it is, to store the voice data in the storage module 24. The peripheral components may include microphones of different types that output voice signals in different formats: some types of microphone output analog signals, while others output digital signals.
The data conversion unit 221 may be, for example, an ADC (Analog-to-Digital Converter) or a Codec (coder-decoder). The voice detection unit 222 may be, for example, a Voice Activity Detection (VAD) unit for detecting whether the amplitude of the voice signal is greater than the preset amplitude, the preset amplitude being a value preset in the electronic device.
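The analog/digital branching in front of the amplitude check can be sketched as follows; the quantization stub, threshold value and function names are assumptions for the example and do not describe the actual ADC/Codec hardware.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PRESET_AMPLITUDE 2000   /* illustrative threshold */

typedef enum { SIG_ANALOG, SIG_DIGITAL } sig_kind_t;

/* Hypothetical ADC/Codec stub: quantizes an analog level to a 16-bit sample. */
static int16_t adc_convert(double analog_level) { return (int16_t)(analog_level * 32767.0); }

/* Data conversion unit 221 followed by voice detection unit 222: analog input
 * is digitized first, digital input is passed straight through, then the
 * amplitude is compared with the preset amplitude. */
bool detect_voice(sig_kind_t kind, double analog_level, int16_t digital_sample)
{
    int16_t sample = (kind == SIG_ANALOG) ? adc_convert(analog_level) : digital_sample;
    int32_t amp = sample < 0 ? -(int32_t)sample : sample;
    return amp > PRESET_AMPLITUDE;   /* true: store the data and wake module 21 */
}

int main(void)
{
    printf("analog mic, loud:  %d\n", detect_voice(SIG_ANALOG, 0.5, 0));
    printf("digital mic, soft: %d\n", detect_voice(SIG_DIGITAL, 0.0, 120));
    return 0;
}
```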
The electronic device provided by the embodiment of the present application is described next with a specific application scenario. As shown in fig. 7 and 8, when the electronic device is in the screen-off state, the main processing chip 10 is in the sleep state. A voice signal input by the user is picked up by the microphone of the electronic device and transmitted to the voice detection module 22. The data conversion unit 221 of the voice detection module 22 receives the voice signal; if it detects that the voice signal is an analog signal, it converts it into a digital signal and transmits it to the voice detection unit 222. The voice detection unit 222 detects whether the amplitude of the voice signal is greater than the preset amplitude; if so, the voice data is stored in the storage module 24, and at the same time the voice detection module 22 sends an interrupt signal to the voice processing module 21 to wake it up. After receiving the interrupt signal, the voice processing module 21 switches from the sleep state to the working state, acquires the voice data from the storage module 24, and uses a preset voice recognition algorithm to detect whether the voice data contains the preset keyword. If the preset keyword is detected, the voice processing module 21 wakes up the coprocessor 25 through the communication bus 23 to start the communication interface module 27, a wake-up request is sent to the main processing chip 10 through the communication interface module 27 to wake it up, and the coprocessor 25 displays the keyword on the display screen 30.
In some embodiments, the co-processing chip 20 is further configured to wake up the main processing chip 10 when receiving a second operation instruction, and to enter a sleep state after waking up the main processing chip 10. The main processing chip 10 is further configured to acquire the voice data after entering the working state and determine whether the voice data contains the preset keyword; when it does, the main processing chip 10 responds to the voice signal, and when it does not, the main processing chip 10 enters the dormant state.
For example, when the main processing chip 10 detects that the voice data contains the preset keyword, the main processing chip 10 controls a microphone and a vibration motor of the electronic device, for example causing the vibration motor to vibrate, so as to respond to the detection of the preset keyword by the co-processing chip 20 and increase the entertainment value of the electronic device in the screen-off state.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a second electronic device according to an embodiment of the present disclosure. In some embodiments, electronic device 100 may also include peripheral components 40. Both the main processing chip 10 and the co-processing chip 20 are electrically connected to the peripheral components 40.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a third electronic device 100 according to an embodiment of the present disclosure. The peripheral components 40 include a display screen 41 (such as the display screen 30), a touch circuit 42, a power amplifier circuit 43, a near field communication device 44, a bluetooth device 45, and an infrared device 46.
The touch circuit 42 is configured to detect a touch operation of a user and generate a corresponding touch signal according to the touch operation of the user, so as to implement a touch operation of the user on the electronic device 100. The touch circuit 42 may be disposed inside the display screen 41 of the electronic device 100, for example, in which case the display screen 41 of the electronic device 100 may be understood as a touch screen, that is, an integration of the display screen and the touch circuit. The power amplifier circuit 43 is configured to amplify the power of the audio signal to be output in the electronic device 100, so that the electronic device 100 outputs the audio signal to the outside. The near field communication device 44 is used for realizing near field communication between the electronic device 100 and other devices, the bluetooth device 45 is used for realizing bluetooth communication between the electronic device 100 and other devices, and the infrared device 46 is used for realizing infrared communication between the electronic device 100 and other devices.
The main processing chip 10 includes a communication port 10a, a display control port 10b, a touch port 10c, a power amplifier control port 10d, a near field communication port 10e, a bluetooth communication port 10f, and an infrared communication port 10 g.
The display control port 10b is electrically connected to the display screen 41 to transmit the image data generated by the main processing chip 10 to the display screen 41 when the electronic device 100 operates in the first mode.
The touch port 10c is electrically connected to the touch circuit 42, so that when the electronic device 100 operates in the first mode, the main processing chip 10 receives the touch signal generated by the touch circuit 42 through the touch port 10 c.
The power amplifier control port 10d is electrically connected to the power amplifier circuit 43, so that when the electronic device 100 operates in the first mode, the power amplifier control port 10d sends a power amplifier control signal to the power amplifier circuit 43, and the main processing chip 10 thereby controls the power amplifier circuit 43.
The near field communication port 10e is electrically connected to the near field communication device 44, so that when the electronic device 100 operates in the first mode, the near field communication port 10e sends a control signal to the near field communication device 44, and the main processing chip 10 controls the near field communication device 44.
The bluetooth communication port 10f is electrically connected to the bluetooth device 45, so that when the electronic device 100 operates in the first mode, the bluetooth communication port 10f sends a control signal to the bluetooth device 45, and the main processing chip 10 controls the bluetooth device 45.
The infrared communication port 10g is electrically connected to the infrared device 46, so that when the electronic device 100 operates in the first mode, the infrared communication port 10g sends a control signal to the infrared device 46, and the main processing chip 10 controls the infrared device 46.
The co-processing chip 20 includes a communication port 20a, a display control port 20b, a touch control port 20c, a power amplifier control port 20d, a near field communication port 20e, a bluetooth communication port 20f, and an infrared communication port 20 g.
The communication port 20a of the co-processing chip 20 is electrically connected with the communication port 10a of the main processing chip 10 to realize communication between the co-processing chip 20 and the main processing chip 10. The communication port 20a may be, for example, an SPISLV interface and an I2CSLV interface of the co-processing chip 20.
The display control port 20b is electrically connected to the display screen 41 (for example, through a switch) to transmit the image data generated by the co-processing chip 20 to the display screen 41 when the electronic device 100 operates in the second mode.
The touch port 20c is electrically connected to the touch circuit 42, so that when the electronic device 100 operates in the second mode, the co-processing chip 20 receives the touch signal generated by the touch circuit 42 through the touch port 20c.
The power amplifier control port 20d is electrically connected to the power amplifier circuit 43, so that when the electronic device 100 operates in the second mode, the power amplifier control port 20d sends a power amplifier control signal to the power amplifier circuit 43, and the co-processing chip 20 thereby controls the power amplifier circuit 43.
The near field communication port 20e is electrically connected to the near field communication device 44, so that when the electronic device 100 operates in the second mode, the near field communication port 20e sends a control signal to the near field communication device 44, and the co-processing chip 20 thereby controls the near field communication device 44.
The bluetooth communication port 20f is electrically connected to the bluetooth device 45, so that when the electronic device 100 operates in the second mode, the bluetooth communication port 20f sends a control signal to the bluetooth device 45, and the co-processing chip 20 thereby controls the bluetooth device 45.
The infrared communication port 20g is electrically connected to the infrared device 46, so that when the electronic device 100 operates in the second mode, the infrared communication port 20g sends a control signal to the infrared device 46, and the co-processing chip 20 controls the infrared device 46.
The embodiment of the application further provides a voice-controlled display method, which is applied to the electronic device, and the method includes:
101. when receiving a first operation instruction, the main processing chip controls the display screen to enter a screen-off display mode, wakes up the co-processing chip, and enters a dormant state after waking up the co-processing chip.
The first operation instruction may be a trigger instruction generated when the user presses the screen-lock key, or it may be a screen-off instruction generated when the user has not operated the electronic device for a long time while the screen is on; in the latter case the screen-off instruction serves as the first operation instruction. When the main processing chip detects that the electronic device has entered the screen-off display mode, it sends a wake-up instruction to the co-processing chip to wake it up, and enters the dormant state after waking up the co-processing chip.
After the electronic equipment enters the screen-off display mode, the main processing chip enters the dormant state and does not respond to the operation of the user any more. At this time, if the user operates the application program installed on the operating system of the co-processing chip, the co-processing chip directly responds and executes the corresponding control operation.
102. And the co-processing chip receives the voice signal and detects whether the voice signal contains preset keywords.
The voice signal received by the co-processing chip can be a voice signal detected by a microphone, and the co-processing chip judges whether the preset keyword is included according to the voice signal.
The preset keyword may be a word preset in the co-processing chip in advance, for example: "I", "love", "you", etc.
103. When detecting that the voice signal contains the preset keyword, the co-processing chip displays the preset keyword on the display screen.
When the voice signal contains the preset keyword, the co-processing chip controls the display screen to display the preset keyword in the screen-off display mode, which increases the interestingness of the electronic device in the screen-off display mode.
104. When it is detected that the voice signal does not contain the preset keyword, step 102 continues to be executed.
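Taken together, steps 101 to 104 form a simple listen-and-display loop, sketched below in C. The voice source, keyword and printed messages are illustrative stand-ins; the sketch only mirrors the control flow of the method, not an actual implementation.

```c
#include <stdio.h>
#include <string.h>

static const char *PRESET_KEYWORD = "love";

/* Hypothetical voice source: returns NULL when no more input is simulated. */
static const char *next_voice_signal(int i)
{
    static const char *samples[] = { "hello there", "i love you", NULL };
    return samples[i];
}

/* Steps 101-104 as a single loop on the co-processing chip: after the main
 * chip has set the screen-off display mode and gone to sleep (101), keep
 * receiving voice signals (102), display the preset keyword when it is
 * detected (103), otherwise continue listening (104). */
int main(void)
{
    puts("main chip: screen-off display mode set, co-chip woken, sleeping (101)");
    for (int i = 0; next_voice_signal(i) != NULL; i++) {
        const char *voice = next_voice_signal(i);              /* step 102 */
        if (strstr(voice, PRESET_KEYWORD) != NULL)
            printf("co-chip: displaying \"%s\" on the screen (103)\n",
                   PRESET_KEYWORD);
        /* else: keep listening (104) */
    }
    return 0;
}
```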
In the voice-controlled display method provided by the embodiment of the application, receiving the voice signal, detecting the preset keyword and displaying the preset keyword on the display screen are all handled by the low-power co-processing chip, so the overall power consumption of the electronic device can be effectively reduced and its endurance time prolonged. Meanwhile, displaying the preset keyword recognized from the voice on the display screen can improve the interestingness of the electronic device.
An embodiment of the present application further provides a storage medium, where a computer program is stored in the storage medium; when the computer program runs on a processor, the processor executes the voice-controlled display method according to the foregoing embodiments.
It should be noted that, all or part of the steps in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, which may include, but is not limited to: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The electronic device and the voice-controlled display method provided by the embodiment of the application are described in detail above. The principles and implementations of the present application are described herein using specific examples, which are presented only to aid in understanding the present application. Meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An electronic device, comprising:
a display screen;
the main processing chip is electrically connected with the display screen and is used for controlling the display screen to enter a screen-off display mode when receiving a first operation instruction;
the co-processing chip is electrically connected with the display screen and the main processing chip, and the operation power consumption of the co-processing chip is less than that of the main processing chip;
when the display screen enters the screen-off display mode, the main processing chip wakes up the co-processing chip and then enters a dormant state; the co-processing chip is used for receiving a voice signal and displaying a preset keyword on the display screen when detecting that the voice signal contains the preset keyword.
2. The electronic device of claim 1, wherein the co-processing chip comprises a voice detection module and a voice processing module, and the voice processing module is electrically connected to the voice detection module;
the voice detection module is used for receiving a voice signal in a screen-off display mode and detecting whether the amplitude of the voice signal is larger than a preset amplitude or not; when the amplitude of the voice signal is detected to be larger than the preset amplitude, the voice processing module is awakened;
the voice processing module is used for acquiring voice data corresponding to the voice signal after switching from a dormant state to a working state, and detecting whether the voice data contains the preset keywords.
3. The electronic device of claim 2, wherein the co-processing chip further comprises a memory module, the memory module being electrically connected to the voice detection module and the voice processing module;
the voice detection module is further used for storing the voice data into the storage module when the amplitude of the voice signal is detected to be larger than the preset amplitude;
the voice processing module is further used for acquiring the voice data from the storage module after the dormant state is switched to the working state.
4. The electronic device of claim 2, wherein the co-processing chip further comprises a co-processor electrically connected to the voice processing module;
the voice processing module is further used for waking up the coprocessor when the voice data is detected to contain the preset keywords;
the coprocessor is used for displaying the preset keywords on the display screen after the dormant state is switched to the working state.
5. The electronic device of claim 4, wherein:
the voice processing module is further used for carrying out voiceprint recognition on the voice signal to acquire voiceprint information when detecting that the voice data corresponding to the voice signal contains preset keywords;
the voice processing module is further used for awakening the coprocessor when the voiceprint information is successfully matched with preset voiceprint information;
and the coprocessor is also used for executing the operation matched with the preset keyword after the dormant state is switched to the working state.
6. The electronic device of claim 4, wherein the co-processing chip further comprises an image processing module electrically connected to the voice processing module and the co-processor;
the voice processing module is further used for awakening the image processing module when it is detected that the voice data contains the preset keyword;
the image processing module is used for processing the preset keywords after switching from a dormant state to a working state, and awakening the coprocessor after the preset keywords are processed;
and the coprocessor is used for displaying the processed preset keywords on the display screen after the dormant state is switched to the working state.
7. The electronic device of claim 1, wherein the co-processing chip further comprises a voice detection module, a voice processing module, and a communication interface module, the voice processing module being electrically connected to the communication interface module;
the voice detection module is used for receiving a voice signal in a screen-off display mode and detecting whether the amplitude of the voice signal is larger than a preset amplitude or not; when the amplitude of the voice signal is detected to be larger than the preset amplitude, the voice processing module is awakened;
the voice processing module is also used for sending a wake-up request to the main processing chip through the communication interface module after switching from the dormant state to the working state, and entering the dormant state after sending the voice signal to the main processing chip;
the main processing chip is further used for entering a working state when receiving a wake-up request sent by the voice processing module, detecting whether the voice signal contains a preset keyword, sending the wake-up request to the voice processing module through the communication interface module when detecting that the voice signal contains the preset keyword, and entering a dormant state after sending the preset keyword to the voice processing module;
the voice processing module is further used for entering a working state and receiving the preset keywords when receiving the awakening request sent by the main processing chip.
8. The electronic device of claim 7, wherein the co-processing chip further comprises a co-processor electrically connected to the voice processing module;
the voice processing module is also used for awakening the coprocessor after receiving the preset keywords;
the coprocessor is used for displaying the preset keywords on the display screen after the dormant state is switched to the working state.
9. The electronic device of any of claims 1-8, wherein:
the co-processing chip is also used for awakening the main processing chip when receiving a second operation instruction, and entering a dormant state after awakening the main processing chip;
the main processing chip is further used for acquiring the voice data after entering a working state and judging whether the voice data contains the preset keywords or not;
the main processing chip is also used for responding to the voice signal when the voice data contains the preset keyword; and entering a dormant state when the voice data does not contain the preset keyword.
10. A voice-controlled display method is applied to electronic equipment and is characterized in that the electronic equipment comprises a main processing chip, a co-processing chip and a display screen, and the operation power consumption of the co-processing chip is smaller than that of the main processing chip; the method comprises the following steps:
when receiving a first operation instruction, the main processing chip controls the display screen to enter a screen-off display mode, wakes up the co-processing chip, and enters a dormant state after waking up the co-processing chip;
the co-processing chip receives the voice signal and detects whether the voice signal contains a preset keyword or not;
and the co-processing chip displays the preset keyword on the display screen when the voice signal contains the preset keyword.
CN201911304749.4A 2019-12-17 2019-12-17 Electronic equipment and voice control display method Pending CN112992135A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911304749.4A CN112992135A (en) 2019-12-17 2019-12-17 Electronic equipment and voice control display method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911304749.4A CN112992135A (en) 2019-12-17 2019-12-17 Electronic equipment and voice control display method

Publications (1)

Publication Number Publication Date
CN112992135A true CN112992135A (en) 2021-06-18

Family

ID=76343679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911304749.4A Pending CN112992135A (en) 2019-12-17 2019-12-17 Electronic equipment and voice control display method

Country Status (1)

Country Link
CN (1) CN112992135A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110124375A1 (en) * 2008-04-07 2011-05-26 St-Ericsson Sa Mobile phone with low-power media rendering sub-system
US20140278443A1 (en) * 2012-10-30 2014-09-18 Motorola Mobility Llc Voice Control User Interface with Progressive Command Engagement
CN105388748A (en) * 2015-10-28 2016-03-09 广东欧珀移动通信有限公司 Method for displaying time by smart watch and smart watch
CN106775569A (en) * 2017-01-12 2017-05-31 环旭电子股份有限公司 Setting position prompt system and method
CN108337363A (en) * 2017-12-26 2018-07-27 努比亚技术有限公司 A kind of terminal puts out screen display control method, terminal
CN110427097A (en) * 2019-06-18 2019-11-08 华为技术有限公司 Voice data processing method, apparatus and system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023071506A1 (en) * 2021-10-29 2023-05-04 Oppo广东移动通信有限公司 Screen display method and apparatus, and storage medium and electronic device

Similar Documents

Publication Publication Date Title
CN107360327B (en) Speech recognition method, apparatus and storage medium
EP3552076B1 (en) Low-power ambient computing system with machine learning
US9153232B2 (en) Voice control device and voice control method
WO2017156925A1 (en) Unlock method and mobile terminal
KR101770932B1 (en) Always-on audio control for mobile device
US8736516B2 (en) Bluetooth or other wireless interface with power management for head mounted display
CN108712566B (en) Voice assistant awakening method and mobile terminal
US11366510B2 (en) Processing method for reducing power consumption and mobile terminal
CN111831099B (en) Electronic device
CN112987986B (en) Method, device, storage medium and electronic equipment for realizing game application
WO2021115211A1 (en) Electronic device and co-processing chip
WO2014117500A1 (en) Touch screen terminal and working method thereof
EP2617202A2 (en) Bluetooth or other wireless interface with power management for head mounted display
US11860708B2 (en) Application processor and mobile terminal
WO2021109882A1 (en) Application starting method and apparatus, and storage medium and electronic device
CN104142728A (en) Electronic device
WO2021115151A1 (en) Electronic device
CN110853644B (en) Voice wake-up method, device, equipment and storage medium
CN113031749A (en) Electronic device
CN109389977B (en) Voice interaction method and device
CN111292716A (en) Voice chip and electronic equipment
WO2022068544A1 (en) Voice wake-up method, electronic device, and chip system
CN112992135A (en) Electronic equipment and voice control display method
CN115223561A (en) Voice wake-up control method of handheld device and related device
CN111045738B (en) Electronic equipment control method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination