CN115775560A - Prompting method for wake-up response and display device - Google Patents


Info

Publication number
CN115775560A
Authority
CN
China
Prior art keywords: state, main processor, display, processor, indicator
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211447616.4A
Other languages
Chinese (zh)
Inventor
杨香斌
Current Assignee
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd
Priority to CN202211447616.4A
Publication of CN115775560A
Legal status: Pending

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/4401 Bootstrapping
    • G06F 9/4418 Suspend and resume; Hibernate and awake
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N 21/42203 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 Execution procedure of a spoken command


Abstract

The embodiments provide a prompting method for a wake-up response and a display device. The display device comprises a main processor, a sub-processor, and an indicator. The sub-processor first receives voice data input by a user; when the voice data is a wake-up word, it triggers the main processor to enter an on state and, at the same time, controls the indicator to turn on. When the voice data is not the wake-up word, the main processor is not triggered to enter the on state and the indicator is not turned on. With the prompting method of the embodiments of the present application, the sub-processor can judge whether received voice data is the wake-up word and, if so, immediately turn on the indicator as a response. Compared with prior-art schemes that respond only after wake-up confirmation, this process takes little time, so the user obtains a response without a long wait, improving the user experience.

Description

Prompting method for wake-up response and display device
This application is a divisional application of Chinese application No. 202110281719.7, filed on 2021-03-16 and entitled "Prompting method for wake-up response and display device".
Technical Field
The present application relates to the field of sound processing technologies, and in particular, to a method for prompting a wake-up response and a display device.
Background
With the continuous development of voice recognition technology and smart homes, voice recognition has become widely applied. When a smart television is in a standby state, a user can wake it up by voice, i.e., through a far-field voice instruction, so that the smart television enters a power-on state from the standby state.
The conventional smart-television wake-up process is as follows: first, the wake-up word is recognized from the user's voice, and then the wake-up word is input into a wake-up model for wake-up confirmation. To reduce power consumption, wake-up word recognition is typically handled by a small core with low power consumption, while the wake-up confirmation through the wake-up model is completed by the main chip. After the main chip confirms that the wake-up is successful, it controls the screen to light up, and the television enters the power-on state. If the smart television was mistakenly woken, it re-enters the standby state.
However, after the low-power small core recognizes the wake-up word, it must notify the main chip to start before the user's voice wake-up can be confirmed, and the display screen is lit only after confirmation succeeds. This whole process takes a long time, during which no prompt message is given. As a result, after inputting voice, the user must wait a long time for a response, which leads to a poor user experience.
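The latency problem can be made concrete with a toy model. The sketch below is purely illustrative: the stage timings, constant names, and function name are assumptions, not figures from this application; it only shows that in the conventional flow the first user-visible feedback arrives only after the main chip has booted and the wake-up model has confirmed.

```python
# Illustrative stage timings (assumed for this sketch, not from the application).
SMALL_CORE_RECOGNITION_S = 0.2   # wake-word spotting on the low-power small core
MAIN_CHIP_BOOT_S = 1.5           # main chip leaves standby and starts up
WAKE_MODEL_CONFIRM_S = 0.5       # wake-up model re-checks the utterance

def conventional_wake(is_wake_word: bool, model_confirms: bool):
    """Return (screen_lit, seconds_until_first_feedback) for the
    conventional flow, in which no prompt is shown before confirmation."""
    elapsed = SMALL_CORE_RECOGNITION_S
    if not is_wake_word:
        return False, elapsed                # small core rejects; stay in standby
    elapsed += MAIN_CHIP_BOOT_S              # notify and boot the main chip
    elapsed += WAKE_MODEL_CONFIRM_S          # confirmation via the wake-up model
    if not model_confirms:
        return False, elapsed                # false wake-up: back to standby
    return True, elapsed                     # only now does the screen light up

lit, wait = conventional_wake(True, True)
print(lit, round(wait, 1))
```

Under these assumed timings, the user waits through the full boot-plus-confirmation interval with no feedback at all; that silent interval is exactly the gap the indicator in this application is meant to fill.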
Disclosure of Invention
The present application provides a prompting method for a wake-up response and a display device, which are used to solve the problem that, in the existing wake-up scheme, no prompt is given during the long interval between the user's voice input and the wake-up confirmation, resulting in a poor user experience.
In a first aspect, the present embodiment provides a display device, comprising:
a main processor;
an indicator for indicating an on state of the main processor; and
a sub-processor configured to perform:
receiving voice data input by a user; when the voice data is a wake-up word, triggering the main processor to enter an on state and controlling the indicator to turn on;
and when the voice data is not the wake-up word, neither triggering the main processor to enter the on state nor controlling the indicator to turn on.
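The first-aspect behavior can be sketched as a small event handler. This is a minimal illustration under assumed names, not the application's implementation; in particular the class names and the wake-up word "hi tv" are invented for the sketch.

```python
class Indicator:
    """Prompt light indicating the main processor's on state."""
    def __init__(self):
        self.on = False
    def turn_on(self):
        self.on = True

class MainProcessor:
    def __init__(self):
        self.state = "standby"
    def enter_on_state(self):
        self.state = "on"

class SubProcessor:
    """Receives the user's voice data and reacts as in the first aspect."""
    def __init__(self, main_processor, indicator, wake_word="hi tv"):
        self.main_processor = main_processor
        self.indicator = indicator
        self.wake_word = wake_word

    def on_voice_data(self, text: str) -> None:
        if text == self.wake_word:
            # Wake-up word: trigger the main processor to the on state
            # and turn on the indicator at the same time.
            self.main_processor.enter_on_state()
            self.indicator.turn_on()
        # Not the wake-up word: neither the main processor nor the
        # indicator is activated.

mp, ind = MainProcessor(), Indicator()
sub = SubProcessor(mp, ind)
sub.on_voice_data("hi tv")
print(mp.state, ind.on)  # → on True
```

The key point of the design is that the indicator turns on in the same step as the trigger, so the user gets feedback before the main processor has finished any confirmation work.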
In a second aspect, the present embodiment provides a processor, comprising:
a main processor; and
a sub-processor configured to perform:
receiving voice data input by a user; when the voice data is a wake-up word, triggering the main processor to enter an on state and controlling an indicator to turn on, wherein the indicator is used to prompt the on state of the main processor;
and when the voice data is not the wake-up word, neither triggering the main processor to enter the on state nor controlling the indicator to turn on.
In a third aspect, this embodiment provides a method for prompting a wake-up response, applied to a sub-processor of a display device, where the display device further includes a main processor and an indicator, the indicator being used to indicate an on state of the main processor, the method comprising:
receiving voice data input by a user; when the voice data is a wake-up word, triggering the main processor to enter an on state and controlling the indicator to turn on;
and when the voice data is not the wake-up word, neither triggering the main processor to enter the on state nor controlling the indicator to turn on.
In a fourth aspect, this embodiment provides a method for prompting a wake-up response, applied to a main processor of a display device, where the display device further includes a sub-processor and an indicator, the indicator being used to indicate an on state of the main processor, the method comprising:
receiving a trigger instruction sent by the sub-processor and entering an on state, wherein the trigger instruction is generated when the sub-processor, after receiving voice data input by a user, judges that the voice data is a wake-up word, the sub-processor at the same time controlling the indicator to turn on;
and receiving the voice data, judging according to a wake-up model whether the voice data is a target wake-up word, and controlling the display screen to light up when the voice data is the target wake-up word.
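The main-processor side of the fourth aspect can be sketched as follows. The wake-up model is represented as any callable returning True or False; that interface, the function name, and the word "hi tv" are assumptions made for illustration only.

```python
def main_processor_on_trigger(voice_text, wake_model, target_wake_word="hi tv"):
    """After the sub-processor's trigger the main processor is already in
    the on state; it re-checks the voice data with the wake-up model and
    lights the display only if the target wake-up word is confirmed."""
    if wake_model(voice_text) and voice_text == target_wake_word:
        return "display_lit"       # confirmed: control the display screen to light up
    return "back_to_standby"       # false wake-up: return to the standby state

# A trivial stand-in wake-up model that accepts any non-empty utterance.
result = main_processor_on_trigger("hi tv", lambda t: bool(t))
print(result)  # → display_lit
```

Note the division of labor: the indicator was already turned on by the sub-processor, so even a false wake-up that ends in "back_to_standby" gave the user immediate feedback.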
The display device provided by the embodiments of the present application comprises a main processor, a sub-processor, and an indicator. The sub-processor first receives voice data input by a user; when the voice data is a wake-up word, it triggers the main processor to enter an on state and, at the same time, controls the indicator to turn on. When the voice data is not the wake-up word, the main processor is not triggered to enter the on state and the indicator is not turned on. With the prompting method of the embodiments of the present application, the sub-processor can judge whether received voice data is the wake-up word and, if so, immediately turn on the indicator as a response. Compared with prior-art schemes that respond only after wake-up confirmation, this process takes little time, so the user obtains a response without a long wait, improving the user experience.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 illustrates a usage scenario of a display device according to some embodiments;
fig. 2 illustrates a block diagram of a hardware configuration of the control apparatus 100 according to some embodiments;
fig. 3 illustrates a hardware configuration block diagram of the display apparatus 200 according to some embodiments;
FIG. 4 illustrates a software configuration diagram in the display device 200 according to some embodiments;
FIG. 5 illustrates an icon control interface display of an application in the display device 200, in accordance with some embodiments;
fig. 6 shows a hardware configuration diagram of a display device 200 according to some embodiments;
FIG. 7 illustrates a prompt method signaling diagram for a wake response according to some embodiments;
fig. 8 illustrates yet another wake response hinting method signaling diagram in accordance with some embodiments.
Detailed Description
To make the purpose and embodiments of the present application clearer, the exemplary embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described exemplary embodiments are only a part of the embodiments of the present application, not all of them.
It should be noted that the brief descriptions of the terms in the present application are only for the convenience of understanding the embodiments described below, and are not intended to limit the embodiments of the present application. These terms should be understood in their ordinary and customary meaning unless otherwise indicated.
The terms "first," "second," "third," and the like in the description and claims of this application and in the foregoing drawings are used for distinguishing between similar or analogous objects or entities and are not necessarily intended to limit the order or sequence in which they are presented unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
Fig. 1 is a schematic diagram of a usage scenario of a display device according to an embodiment. As shown in fig. 1, the display apparatus 200 is also in data communication with a server 400, and a user can operate the display apparatus 200 through the smart device 300 or the control device 100.
In some embodiments, the control apparatus 100 may be a remote controller. Communication between the remote controller and the display device includes at least one of infrared protocol communication, Bluetooth protocol communication, and other short-distance communication methods, and the remote controller controls the display device 200 in a wireless or wired manner. The user may control the display apparatus 200 by inputting user instructions through at least one of keys on the remote controller, voice input, control panel input, and the like.
In some embodiments, the smart device 300 may include any of a mobile terminal 300A, a tablet, a computer, a laptop, an AR/VR device, and the like.
In some embodiments, the smart device 300 may also be used to control the display device 200. For example, the display device 200 is controlled using an application program running on the smart device.
In some embodiments, the smart device 300 and the display device may also be used for communication of data.
In some embodiments, the display device 200 may also be controlled in manners other than by the control apparatus 100 and the smart device 300. For example, the user's voice commands may be received directly through a module configured inside the display device 200 for obtaining voice commands, or through a voice control apparatus provided outside the display device 200.
In some embodiments, the display device 200 is also in data communication with a server 400. The display device 200 may be communicatively connected through a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display apparatus 200. The server 400 may be one cluster or a plurality of clusters, and may include one or more types of servers.
In some embodiments, software steps executed by one step execution agent may migrate to another step execution agent in data communication therewith for execution as needed. Illustratively, software steps performed by the server may be migrated to be performed on a display device in data communication therewith, and vice versa, as desired.
Fig. 2 exemplarily shows a block diagram of a configuration of the control apparatus 100 according to an exemplary embodiment. As shown in fig. 2, the control apparatus 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 may receive a user's input operation instruction and convert it into an instruction that the display device 200 can recognize and respond to, serving as an interaction intermediary between the user and the display device 200.
In some embodiments, the communication interface 130 is used for external communication, and includes at least one of a WIFI chip, a bluetooth module, NFC, or an alternative module.
In some embodiments, the user input/output interface 140 includes at least one of a microphone, a touchpad, a sensor, a key, or an alternative module.
Fig. 3 illustrates a hardware configuration block diagram of the display apparatus 200 according to an exemplary embodiment.
In some embodiments, the display apparatus 200 includes at least one of a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, a user interface.
In some embodiments, the controller comprises a central processor, a video processor, an audio processor, a graphics processor, RAM, ROM, and a first interface to an nth interface for input/output.
In some embodiments, the display 260 includes a display screen component for displaying pictures and a driving component for driving image display; it receives image signals output from the controller and displays video content, image content, menu manipulation interfaces, user manipulation UI interfaces, and the like.
In some embodiments, the display 260 may be at least one of a liquid crystal display, an OLED display, and a projection display, and may also be a projection device and a projection screen.
In some embodiments, the tuner demodulator 210 receives broadcast television signals via wired or wireless reception and demodulates audio/video signals, as well as EPG data signals, from a plurality of wireless or wired broadcast television signals.
In some embodiments, communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi module, a bluetooth module, a wired ethernet module, and other network communication protocol chips or near field communication protocol chips, and an infrared receiver. The display apparatus 200 may establish transmission and reception of control signals and data signals with the control device 100 or the server 400 through the communicator 220.
In some embodiments, the detector 230 is used to collect signals of the external environment or interaction with the outside. For example, detector 230 includes a light receiver, a sensor for collecting the intensity of ambient light; alternatively, the detector 230 includes an image collector, such as a camera, which can be used to collect external environment scenes, attributes of the user, or user interaction gestures, or the detector 230 includes a sound collector, such as a microphone, which is used to receive external sounds.
In some embodiments, the external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, and the like. The interface may be a composite input/output interface formed by the plurality of interfaces.
In some embodiments, the controller 250 and the tuner demodulator 210 may be located in different separate devices; that is, the tuner demodulator 210 may also be located in a device external to the main device where the controller 250 is located, such as an external set-top box.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored in memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command for selecting a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the object may be any one of selectable objects, such as a hyperlink, an icon, or other operable control. The operations related to the selected object are: displaying an operation connected to a hyperlink page, document, image, or the like, or performing an operation of a program corresponding to the icon.
In some embodiments, the controller includes at least one of a Central Processing Unit (CPU), a video processor, an audio processor, a Graphic Processing Unit (GPU), a RAM Random Access Memory (RAM), a ROM (Read-Only Memory), a first interface to an nth interface for input/output, a communication Bus (Bus), and the like.
The CPU processor is used for executing operating system and application program instructions stored in the memory, and for executing various application programs, data, and contents according to interactive instructions received from external input, so as to finally display and play various audio-video contents. The CPU processor may include a plurality of processors, e.g., a main processor and one or more sub-processors.
In some embodiments, a graphics processor for generating various graphics objects, such as: at least one of an icon, an operation menu, and a user input instruction display figure. The graphic processor comprises an arithmetic unit, which performs operation by receiving various interactive instructions input by a user and displays various objects according to display attributes; the system also comprises a renderer for rendering various objects obtained based on the arithmetic unit, wherein the rendered objects are used for being displayed on a display.
In some embodiments, the video processor is configured to receive an external video signal and, according to the standard codec protocol of the input signal, perform at least one of video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis, so as to obtain a signal that can be displayed or played directly on the display device 200.
In some embodiments, the video processor includes at least one of a demultiplexing module, a video decoding module, an image synthesis module, a frame rate conversion module, a display formatting module, and the like. The demultiplexing module demultiplexes the input audio/video data stream. The video decoding module processes the demultiplexed video signal, including decoding and scaling. The image synthesis module superimposes and mixes the GUI signal, input by the user or generated by the graphics generator, with the scaled video image to generate an image signal for display. The frame rate conversion module converts the frame rate of the input video. The display formatting module converts the received video output signal after frame rate conversion into a signal conforming to the display format, such as an output RGB data signal.
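The module chain just described is a sequential pipeline. The sketch below only mirrors the ordering of the stages with string tags; the stage names are taken from the paragraph above, while everything else (function name, tag strings) is invented for illustration.

```python
def run_video_pipeline(av_stream: str) -> str:
    """Apply the stages in the order described: demultiplex, decode and
    scale, synthesize with the GUI, convert the frame rate, then format
    for the display (e.g. as RGB)."""
    stages = [
        lambda s: s + " -> demultiplexed",
        lambda s: s + " -> decoded+scaled",
        lambda s: s + " -> gui-synthesized",
        lambda s: s + " -> frame-rate-converted",
        lambda s: s + " -> rgb-formatted",
    ]
    for stage in stages:
        av_stream = stage(av_stream)
    return av_stream

print(run_video_pipeline("input-stream"))
```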
In some embodiments, the audio processor is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform at least one of noise reduction, digital-to-analog conversion, and amplification processing to obtain a sound signal that can be played in the speaker.
In some embodiments, a user may enter user commands on a Graphical User Interface (GUI) displayed on display 260, and the user input interface receives the user input commands through the Graphical User Interface (GUI). Alternatively, the user may input a user command by inputting a specific sound or gesture, and the user input interface receives the user input command by recognizing the sound or gesture through the sensor.
In some embodiments, a "user interface" is a media interface for interaction and information exchange between an application or operating system and a user that enables conversion between an internal form of information and a form that is acceptable to the user. A commonly used presentation form of the User Interface is a Graphical User Interface (GUI), which refers to a User Interface related to computer operations and displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in the display screen of the electronic device, where the control may include at least one of an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc. visual interface elements.
In some embodiments, user interface 280 is an interface that may be used to receive control inputs (e.g., physical keys on the body of the display device, or the like).
In some embodiments, the system of the display device may include a kernel, a command parser (shell), a file system, and applications. The kernel, shell, and file system together form the basic operating system structure that allows users to manage files, run programs, and use the system. After power-on, the kernel starts, activates kernel space, abstracts hardware, initializes hardware parameters, and operates and maintains virtual memory, the scheduler, signals, and inter-process communication (IPC). After the kernel starts, the shell and user applications are loaded. After being started, an application is compiled into machine code and forms a process.
Referring to fig. 4, in some embodiments, the system is divided into four layers, which are an Application (Applications) layer (abbreviated as "Application layer"), an Application Framework (Application Framework) layer (abbreviated as "Framework layer"), an Android runtime (Android runtime) and system library layer (abbreviated as "system runtime library layer"), and a kernel layer from top to bottom.
In some embodiments, at least one application program runs in the application program layer, and the application programs may be windows (Window) programs carried by an operating system, system setting programs, clock programs or the like; or may be an application developed by a third party developer. In particular implementations, the application packages in the application layer are not limited to the above examples.
The framework layer provides an application programming interface (API) and a programming framework for applications in the application layer. The application framework layer includes a number of predefined functions and acts as a processing center that decides the actions of the applications in the application layer. Through the API interface, an application can access system resources and obtain system services during execution.
As shown in fig. 4, in the embodiment of the present application, the application framework layer includes a manager (Managers), a Content Provider (Content Provider), and the like, where the manager includes at least one of the following modules: an Activity Manager (Activity Manager) is used for interacting with all activities running in the system; the Location Manager (Location Manager) is used for providing the system service or application with the access of the system Location service; a Package Manager (Package Manager) for retrieving various information related to an application Package currently installed on the device; a Notification Manager (Notification Manager) for controlling display and clearing of Notification messages; a Window Manager (Window Manager) is used to manage the icons, windows, toolbars, wallpapers, and desktop components on a user interface.
In some embodiments, the activity manager is used to manage the lifecycle of the various applications as well as general navigational fallback functions, such as controlling the exit, opening, and back operations of applications. The window manager is used to manage all window programs, such as obtaining the display screen size, judging whether there is a status bar, locking the screen, capturing the screen, and controlling changes of the display window (for example, shrinking the window, or displaying it with shake or distortion effects).
In some embodiments, the system runtime library layer provides support for the upper layer, i.e., the framework layer; when the framework layer is used, the Android operating system runs the C/C++ libraries included in the system runtime library layer to implement the functions that the framework layer needs to implement.
In some embodiments, the kernel layer is a layer between hardware and software. As shown in fig. 4, the kernel layer includes at least one of the following drivers: audio driver, display driver, Bluetooth driver, camera driver, WiFi driver, USB driver, HDMI driver, sensor drivers (such as a fingerprint sensor, temperature sensor, or pressure sensor), power driver, and so on.
In some embodiments, the display device may directly enter an interface of a preset video-on-demand program after being started, and the interface of the video-on-demand program may include at least a navigation bar 510 and a content display area located below the navigation bar 510, as shown in fig. 5, where content displayed in the content display area may change according to a change of a selected control in the navigation bar. The programs in the application program layer can be integrated in the video-on-demand program and displayed through one control of the navigation bar, and can also be further displayed after the application control in the navigation bar is selected.
In some embodiments, the display device may directly enter a display interface of a signal source selected last time after being started, or a signal source selection interface, where the signal source may be a preset video-on-demand program, or may be at least one of an HDMI interface and a live tv interface, and after a user selects a different signal source, the display may display content obtained from the different signal source.
With the continuous development of voice recognition technology and smart homes, the demands that human-computer interaction places on user experience keep rising; the distance of human-machine voice conversation is no longer limited to the near field, and far-field voice recognition is increasingly popular. One of the most widely used far-field voice functions is waking up a display device. When the display device is in a standby state, it can be woken through a far-field voice instruction, so that it enters a power-on state from the standby state.
The wake-up process of a display device is as follows: first, the wake-up word is recognized from the user's speech, and then the recognized word is fed into a wake-up model for confirmation. At present, standby wake-up of a display device relies on its main chip to recognize the wake-up word. However, the main chip has high computing power, and computing power is strongly correlated with power consumption. Therefore, if the main chip performs wake-up-word recognition before the device is actually woken, large power consumption is produced during the wake-up stage.
To address this problem, a low-power small core can first recognize the wake-up word; only after successful recognition does the small core pass the wake-up word to the main chip for confirmation by the wake-up model. After the main chip confirms that the wake-up succeeded, it controls the screen to light up and the device enters the power-on state. If the display device was woken by mistake, it returns to the standby state.
However, after the small core recognizes the wake-up word, the main chip must first be notified to power on before the user's voice wake-up can be confirmed, and only after that confirmation succeeds is the display screen lit. The whole process is long and provides no prompt message at any point. As a result, after speaking, the user must wait a long time for any response, which leads to a poor user experience.
To solve the above problem, the present application provides a display device; Fig. 6 shows a hardware configuration diagram of the display device according to an embodiment. When a user attempts to wake the display device of this embodiment, after voice data is input, the system converts the voice data into a voice text. If the voice text is a wake-up word, a response is obtained without a long wait.
The display apparatus 200 includes a sound collector 230A for collecting voice data input by a user. The sound collector 230A includes a signal receiving circuit, a signal processing circuit, and a signal output circuit. A voice recognition module may further be disposed between the sound collector and the processor 250A to convert the voice data input by the user into a voice text.
The display device also includes a processor 250A and an indicator 290; the processor 250A in turn includes a main processor 250A-1 and a sub-processor 250A-2. The main processor confirms the wake-up word using the wake-up model, and the sub-processor judges whether the voice text of the voice data input by the user is a wake-up word. The indicator indicates the current state of the main processor, which is either a sleep state or an on state. For example, the indicator being lit may indicate that the main processor is in the on state, and the indicator being off may indicate that the main processor is in the sleep state. Alternatively, a blinking indicator may indicate the on state, while a steadily lit (non-blinking) indicator indicates the sleep state. The embodiments of the present application only require that the indicator have two states distinguishable by the human eye, used respectively to indicate the sleep state and the on state of the main processor; which specific states are used is not limited by the present application.
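As a rough illustration only (not part of the disclosure), the indicator merely needs two human-distinguishable states mapped onto the two processor states. The names below ("off"/"on", `ProcessorState`, `indicator_state`) are hypothetical choices for the sketch, not anything the patent mandates:

```python
from enum import Enum

class ProcessorState(Enum):
    SLEEP = "sleep"
    ON = "on"

# Hypothetical mapping: the disclosure only requires two states the human
# eye can tell apart (e.g. off/on, or steady/blinking); "off"/"on" is one
# possible choice among several.
INDICATOR_FOR_STATE = {
    ProcessorState.SLEEP: "off",
    ProcessorState.ON: "on",
}

def indicator_state(processor_state: ProcessorState) -> str:
    """Return the indicator state that signals the given main-processor state."""
    return INDICATOR_FOR_STATE[processor_state]
```

Swapping the mapping for `{"SLEEP": "steady", "ON": "blinking"}` would satisfy the same requirement, which is why the claims speak only of a "first state" and a "second state".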
In some embodiments, computing power and power consumption of a processor are strongly correlated. The sub-processor is used only to recognize the wake-up word, so its computing-power requirement is low and its power consumption is correspondingly low. The main processor must run a larger wake-up model to confirm the wake-up from the voice data, so its computing-power requirement and power consumption are both higher. Accordingly, the main processor may be a System-on-a-Chip (SoC), and the sub-processor may be a Digital Signal Processing (DSP) chip or an ARM Cortex-M4 (an embedded processor developed by ARM, UK).
In the embodiments of the present application, the sub-processor may be integrated on the main processor, or may be physically separate from the main processor; this is not limited by the embodiments of the present application.
In some embodiments, the initial state of the display device is that the main processor is in the sleep state while the low-power sub-processor remains on. After receiving the voice text, the sub-processor judges whether the voice text is a wake-up word. If it is, the sub-processor triggers the main processor to enter the on state from the sleep state and, at the same time, controls the indicator to indicate that the current state of the main processor is the on state, thereby prompting the user that the wake-up attempt took effect. The display device of this embodiment can thus respond without waiting for the main processor to judge whether the voice text is the target wake-up word; the user waits less for a response, which improves the user experience.
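The first-stage behavior just described can be sketched as a single decision on the sub-processor. This is a minimal illustration under assumed names (`sub_processor_handle`, the returned dictionary keys); the patent does not specify an API:

```python
def sub_processor_handle(voice_text: str, wake_words: set) -> dict:
    """First-stage check on the low-power sub-processor (illustrative sketch).

    On a wake-word hit, the main processor is triggered and the indicator
    is switched to the on indication immediately, *before* the main
    processor's own confirmation, so the user gets fast feedback.
    """
    if voice_text in wake_words:
        return {"trigger_main": True, "indicator": "on"}
    # Not a wake word: the main processor stays asleep and the
    # indicator keeps showing the sleep state.
    return {"trigger_main": False, "indicator": "off"}
```

The key design point is that the indicator changes in the first stage, not the screen; the screen stays dark until the main processor's second-stage confirmation.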
In addition, when the main processor enters the on state from the sleep state, the display screen is not yet lit; the screen is lit only after the further wake-up confirmation succeeds.
In some embodiments, if the voice text is not a wake-up word, the main processor is not triggered to enter the on state from the sleep state, and the indicator continues to indicate that the current state of the main processor is the sleep state. The user then knows from the indicator that the current wake-up attempt was invalid.
In some embodiments, the indicator may be a conventional LED, in which case indicating that the main processor is in the on state means lighting the LED. The indicator may also be a marquee light, in which case indicating the on state means making the marquee blink.
Illustratively, the sub-processor receives the voice text "Hello, Hisense" and judges whether it is a wake-up word. If "Hello, Hisense" is a wake-up word, the main processor is triggered to enter the on state from the sleep state, and the LED is lit at the same time. If "Hello, Hisense" is not a wake-up word, the main processor is not triggered and remains in the sleep state; the LED is not lit and stays off. The user can therefore quickly tell from the state of the LED whether the wake-up attempt took effect.
In some embodiments, after the main processor has been triggered to enter the on state because the voice text is a wake-up word, the main processor receives the voice text from the sub-processor and inputs it into the wake-up model, which judges whether the voice text is the target wake-up word. If it is, the display screen is controlled to light up, and the home page is displayed after the screen lights. The wake-up model may be a pre-trained neural-network-based speech recognition model.
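A minimal sketch of this second-stage confirmation, with the pre-trained wake-up model stood in by an arbitrary callable (the function name and return shape are assumptions for illustration, not the patent's interface):

```python
def main_processor_confirm(voice_text: str, wake_model) -> dict:
    """Second-stage confirmation on the main processor (illustrative sketch).

    `wake_model` stands in for the pre-trained neural-network wake-up model;
    here it is any callable that returns True when the text is the target
    wake-up word.
    """
    if wake_model(voice_text):
        # Confirmed: light the screen and show the home page.
        return {"display": "on", "main_processor": "on"}
    # False wake-up: keep the screen dark, return to sleep, and switch
    # the indicator back to the sleep indication.
    return {"display": "off", "main_processor": "sleep", "indicator": "off"}
```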
In some embodiments, if the voice text is not the target wake-up word, the display screen is not lit; the main processor re-enters the sleep state from the on state and controls the indicator to indicate that the current state of the main processor is the sleep state.
Illustratively, the voice text "Hello, Hisense" is sent to the main processor, which judges according to the wake-up model whether "Hello, Hisense" is the target wake-up word. If it is, the display screen is controlled to light up; at the same time the LED may be turned off, or it may be kept lit. If "Hello, Hisense" is not the target wake-up word, the display screen is not lit, and the LED is turned off, prompting the user that the wake-up attempt was invalid. Turning off the LED gives the user feedback that the device was woken by mistake, and compared with lighting the screen and then turning it off again, it disturbs the user less.
An embodiment of the present application provides a method for prompting a wake-up response, as shown in the signaling diagram of Fig. 7, which includes the following steps:
Step 1: the sub-processor receives a voice text. When the voice text is a wake-up word, the sub-processor sends a trigger instruction to the main processor to bring the main processor from the sleep state into the on state, and sends a control instruction to the indicator so that the indicator indicates that the current state of the main processor is the on state.
Step 2: when the voice text is not a wake-up word, no trigger instruction is sent to the main processor and no control instruction is sent to the indicator. The main processor is not triggered into the on state, and the indicator still indicates that the current state of the main processor is the sleep state.
In this method embodiment, after the user inputs voice data, a response can be given without waiting for the main processor to judge whether the voice text corresponding to the voice data is the target wake-up word. The user waits less for a response, which improves the user experience.
Based on the foregoing method embodiment, an embodiment of the present application provides another wake-up response prompting method, as shown in the signaling diagram of Fig. 8, which includes the following steps:
Step 1: the sub-processor receives a voice text. When the voice text is a wake-up word, the sub-processor sends a trigger instruction to the main processor to bring the main processor from the sleep state into the on state, and sends a control instruction to the indicator so that the indicator indicates that the current state of the main processor is the on state.
Step 2: the sub-processor sends the voice text to the main processor. After receiving it, the main processor judges according to the wake-up model whether the voice text is the target wake-up word; if it is, the display screen is controlled to light up. If the voice text is not the target wake-up word, the display screen is not lit; the main processor returns from the on state to the sleep state, and the indicator is controlled to indicate that the current state of the main processor is the sleep state.
In this method embodiment, after the voice text is judged to be a wake-up word, the main processor is triggered from the sleep state into the on state while the indicator is controlled to indicate the on state. The main processor then further judges whether the voice text is the target wake-up word, i.e., whether the wake-up is genuine rather than a false wake-up. If the wake-up succeeds, the display screen is lit; if it is a false wake-up, the screen is not lit. This method embodiment therefore not only shortens the user's wait for a response, but also, in the case of a false wake-up, effectively prompts the user through the indicator's state that the wake-up attempt was a false one.
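The two stages of this method can be combined into one end-to-end sketch. Everything here (function name, event strings, the callable standing in for the wake-up model) is hypothetical scaffolding for illustration; in the device, the first stage runs on the sub-processor and the second on the main processor:

```python
def wake_flow(voice_text: str, wake_words: set, wake_model) -> list:
    """End-to-end sketch of the two-stage wake-up described above."""
    events = []
    if voice_text not in wake_words:
        # First stage fails: nothing is woken, no feedback changes.
        events.append("indicator: sleep state")
        return events
    # First stage succeeds: immediate feedback, screen still dark.
    events.append("indicator: on state")
    events.append("main processor: sleep -> on")
    if wake_model(voice_text):
        # Second stage succeeds: only now is the screen lit.
        events.append("display: lit, home page shown")
    else:
        # False wake-up: screen never lit, main processor back to sleep.
        events.append("main processor: on -> sleep")
        events.append("indicator: sleep state")
    return events
```

Note that in every path the display is lit at most once, and only after the second stage, which is the behavior the embodiment relies on to keep false wake-ups unobtrusive.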
The same or similar contents in the embodiments of the present application may be referred to each other, and the related embodiments are not described in detail.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and these modifications or substitutions do not depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (7)

1. A display device, comprising:
a display;
a main processor for controlling the screen of the display to be lit or black, wherein the main processor can be in one of a sleep state and an on state;
an indicator capable of presenting a first state or a second state, for indicating whether the main processor is currently in the on state or the sleep state;
a sound collector for receiving voice data input by a user;
a sub-processor for performing:
when the main processor is in the sleep state and the display is in a black-screen state, in response to received voice data, and when the voice data contains a wake-up word: triggering the main processor to enter the on state from the sleep state, controlling the indicator to present the first state, and meanwhile keeping the display in the black-screen state;
after triggering the main processor to enter the on state from the sleep state, the main processor is configured to perform:
judging whether the voice data contains a target wake-up word according to a wake-up model, and if so, controlling the display screen to light up.
2. The display device according to claim 1, wherein the judging whether the voice data contains the target wake-up word according to the wake-up model further comprises:
if not, keeping the display black, and controlling the indicator to switch from the first state to the second state, so as to indicate that the current state of the main processor has returned to the sleep state.
3. The display device according to claim 1, wherein one end of the sub-processor is communicatively connected to the sound collector, and the other end is communicatively connected to the main processor, wherein the sub-processor is always in an on state.
4. A display device, comprising:
a display;
the main processor is used for controlling the screen of the display to be bright or black;
an indicator capable of presenting a first state or a second state, for indicating whether the main processor has started running;
a sound collector for receiving voice data input by a user;
one end of the sub-processor is in communication connection with the sound collector, the other end of the sub-processor is in communication connection with the main processor, and the sub-processor is used for executing the following steps:
when the display device is in a standby state, in response to received voice data, judging whether the voice data contains a wake-up word, wherein when the display device is in the standby state, the main processor is in the sleep state and the indicator is in the second state;
if the wake-up word is contained, triggering the main processor to start running, so that the main processor secondarily judges whether the wake-up word is the target wake-up word;
after the main processor starts running, controlling the indicator to present the first state and keeping the display in the black-screen state; and controlling the display to light up only after the main processor determines that the wake-up word is the target wake-up word.
5. The display device according to claim 4, wherein the triggering the main processor to start running if the wake-up word is contained, so that the main processor secondarily judges whether the wake-up word is the target wake-up word, specifically comprises:
after the main processor starts running, controlling the indicator to present the first state and keeping the display in the black-screen state;
if the main processor determines that the wake-up word is the target wake-up word, controlling the display to light up;
if the main processor determines that the wake-up word is not the target wake-up word, switching the main processor back to the sleep state, controlling the indicator to present the second state, and keeping the display in the black-screen state.
6. The display apparatus according to claim 4, wherein the sub-processor is always in an on state when the display apparatus is in a standby state.
7. A prompting method of a wake-up response, applied to a main processor of a display device, the display device further comprising a display, a sub-processor, an indicator and a sound collector, wherein the indicator can present a first state or a second state for indicating whether the main processor is currently in an on state or a sleep state, the method comprising:
receiving a trigger instruction sent by the sub-processor and entering the on state from the sleep state while keeping the screen of the display black, wherein the trigger instruction is generated when the sub-processor, after receiving voice data input by a user, judges that the voice data contains a wake-up word, and the sub-processor meanwhile controls the indicator to be in the first state to indicate that the current state of the main processor is the on state;
receiving the voice data, judging whether the voice data contains a target wake-up word according to a wake-up model, and when the voice data contains the target wake-up word, controlling the display screen to light up and controlling the indicator to be in the second state;
and when the voice data does not contain the target wake-up word, controlling the indicator to be in a turned-off state to indicate that the current state of the main processor is the sleep state, while keeping the screen of the display black.
CN202211447616.4A 2021-03-16 2021-03-16 Prompting method of awakening response and display equipment Pending CN115775560A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211447616.4A CN115775560A (en) 2021-03-16 2021-03-16 Prompting method of awakening response and display equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110281719.7A CN113066490B (en) 2021-03-16 2021-03-16 Prompting method of awakening response and display equipment
CN202211447616.4A CN115775560A (en) 2021-03-16 2021-03-16 Prompting method of awakening response and display equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110281719.7A Division CN113066490B (en) 2021-03-16 2021-03-16 Prompting method of awakening response and display equipment

Publications (1)

Publication Number Publication Date
CN115775560A true CN115775560A (en) 2023-03-10

Family

ID=76560693

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202211447616.4A Pending CN115775560A (en) 2021-03-16 2021-03-16 Prompting method of awakening response and display equipment
CN202110281719.7A Active CN113066490B (en) 2021-03-16 2021-03-16 Prompting method of awakening response and display equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202110281719.7A Active CN113066490B (en) 2021-03-16 2021-03-16 Prompting method of awakening response and display equipment

Country Status (1)

Country Link
CN (2) CN115775560A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116185513A (en) * 2023-04-27 2023-05-30 北京大上科技有限公司 Screen locking system and method
WO2024193723A1 (en) * 2023-03-22 2024-09-26 海信视像科技股份有限公司 Terminal device, and standby wake-up method based on far-field voice

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113782021B (en) * 2021-09-14 2023-10-24 Vidaa(荷兰)国际控股有限公司 Display equipment and prompt tone playing method
CN114038458A (en) * 2021-11-08 2022-02-11 百度在线网络技术(北京)有限公司 Voice interaction method, device, equipment, storage medium and computer program product
CN114155854B (en) * 2021-12-13 2023-09-26 海信视像科技股份有限公司 Voice data processing method and device
WO2023160087A1 (en) * 2022-02-28 2023-08-31 海信视像科技股份有限公司 Prompting method for response state of voice instruction and display device

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI525532B (en) * 2015-03-30 2016-03-11 Yu-Wei Chen Set the name of the person to wake up the name for voice manipulation
US10467877B1 (en) * 2017-07-09 2019-11-05 Dsp Group Ltd. Monitor and a method for monitoring a baby in a vehicle
CN109032675B (en) * 2018-06-25 2024-06-28 北京集创北方科技股份有限公司 Screen unlocking method and device of terminal equipment and terminal equipment
KR102628211B1 (en) * 2018-08-29 2024-01-23 삼성전자주식회사 Electronic apparatus and thereof control method
WO2020073288A1 (en) * 2018-10-11 2020-04-16 华为技术有限公司 Method for triggering electronic device to execute function and electronic device
CN109725545A (en) * 2018-12-27 2019-05-07 广东美的厨房电器制造有限公司 Smart machine and its control method, computer readable storage medium
CN109903761A (en) * 2019-01-02 2019-06-18 百度在线网络技术(北京)有限公司 Voice interactive method, device and storage medium
CN109979438A (en) * 2019-04-04 2019-07-05 Oppo广东移动通信有限公司 Voice awakening method and electronic equipment
CN111862965B (en) * 2019-04-28 2024-10-15 阿里巴巴集团控股有限公司 Wake-up processing method and device, intelligent sound box and electronic equipment
CN110201334A (en) * 2019-06-10 2019-09-06 芜湖安佳捷汽车科技有限公司 A kind of long-range quick start system of fire fighting truck
CN112712803B (en) * 2019-07-15 2022-02-25 华为技术有限公司 Voice awakening method and electronic equipment
CN110827820B (en) * 2019-11-27 2022-09-27 北京梧桐车联科技有限责任公司 Voice awakening method, device, equipment, computer storage medium and vehicle
CN111200746B (en) * 2019-12-04 2021-06-01 海信视像科技股份有限公司 Method for awakening display equipment in standby state and display equipment
CN111522592A (en) * 2020-04-24 2020-08-11 腾讯科技(深圳)有限公司 Intelligent terminal awakening method and device based on artificial intelligence


Also Published As

Publication number Publication date
CN113066490A (en) 2021-07-02
CN113066490B (en) 2022-10-14

Similar Documents

Publication Publication Date Title
CN113066490B (en) Prompting method of awakening response and display equipment
CN112511882A (en) Display device and voice call-up method
CN114302238B (en) Display method and display device for prompt information in sound box mode
CN114302201B (en) Method for automatically switching on and off screen in sound box mode, intelligent terminal and display device
CN112764627B (en) Upgrade package installation progress display method and display device
CN113038048B (en) Far-field voice awakening method and display device
CN112860331B (en) Display equipment and voice interaction prompting method
CN112601109A (en) Audio playing method and display device
CN113556609B (en) Display device and startup picture display method
CN113542852B (en) Display device and control method for fast pairing with external device
CN113438553B (en) Display device awakening method and display device
CN113655936B (en) Display device and screen protection method
CN115150644B (en) Application awakening method of display device, mobile terminal and server
CN111901649B (en) Video playing method and display equipment
CN112616090B (en) Display equipment system upgrading method and display equipment
CN111970624B (en) Display device loudspeaker state detection method and display device
CN112492393A (en) Method for realizing MIC switch associated energy-saving mode and display equipment
CN113542882A (en) Method for awakening standby display device, display device and terminal
CN112885347A (en) Voice control method of display device, display device and server
CN112668546A (en) Video thumbnail display method and display equipment
CN113064691A (en) Display method and display equipment for starting user interface
CN113064534A (en) Display method and display equipment of user interface
CN113064515A (en) Touch display device and USB device switching method
CN114302197A (en) Voice separation control method and display device
CN112882780A (en) Setting page display method and display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination