WO2017162019A1 - Intelligent terminal control method and intelligent terminal - Google Patents

Intelligent terminal control method and intelligent terminal

Info

Publication number
WO2017162019A1
WO2017162019A1 (application PCT/CN2017/075846)
Authority
WO
WIPO (PCT)
Prior art keywords
user
target
voice
identity information
recognition result
Prior art date
Application number
PCT/CN2017/075846
Other languages
English (en)
French (fr)
Inventor
刘国华
Original Assignee
深圳市国华识别科技开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市国华识别科技开发有限公司
Priority to US16/087,618 (published as US20190104340A1)
Priority to EP17769296.9 (published as EP3422726A4)
Priority to JP2018549772 (published as JP2019519830A)
Publication of WO2017162019A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H04N21/4415Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/221Announcement of recognition results
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command

Definitions

  • the present application relates to the field of intelligent terminal control technologies, and in particular, to an intelligent terminal control method and an intelligent terminal.
  • an intelligent terminal control method and an intelligent terminal are provided.
  • An intelligent terminal control method includes:
  • the application request includes identity information of the user currently logged into the application
  • An intelligent terminal comprising a memory and a processor, wherein the memory stores computer readable instructions that, when executed by the processor, cause the processor to perform the following steps:
  • the application request includes identity information of the user currently logged into the application
  • FIG. 1 is a flowchart of a method for controlling an intelligent terminal in an embodiment
  • FIG. 2 is a flowchart of a method for controlling an intelligent terminal in another embodiment
  • FIG. 3 is a flowchart of a method for controlling an intelligent terminal in still another embodiment
  • FIG. 4 is a structural block diagram of an intelligent terminal in an embodiment
  • FIG. 5 is a structural block diagram of an intelligent terminal in another embodiment
  • FIG. 6 is a schematic diagram showing the internal structure of an intelligent terminal in an embodiment.
  • An intelligent terminal control method is provided for controlling an intelligent terminal installed with at least one application. As shown in FIG. 6, the intelligent terminal includes a processor, a non-volatile storage medium, a communication interface, a power interface, a memory, a voice collection device, an image acquisition device, a display screen, a speaker, and an input device, all connected through a system bus.
  • the storage medium of the smart terminal stores an operating system and also stores computer readable instructions.
  • the computer readable instructions are executed by the processor to enable the processor to implement an intelligent terminal control method.
  • At least one application is installed on the smart terminal, and the application runs in the environment provided by the operating system.
  • the processor provides computing and control capabilities to support operation of the entire intelligent terminal, and executes the flow of the intelligent terminal control method.
  • the memory in the intelligent terminal provides an environment for the operation of the intelligent terminal control system in the storage medium.
  • the network interface is used to connect to the network side device for network communication.
  • the display of the smart terminal can be a liquid crystal display or an electronic ink display.
  • the input device may be a touch screen covered on the display screen, or a button, a trackball or a touchpad provided on the smart terminal casing, or an external keyboard, a touchpad or a mouse.
  • the voice collection device may be a microphone that is provided by the smart terminal or an external microphone device.
  • the image acquisition device can be a camera that is provided by the smart terminal or an external camera.
  • the smart terminal can be a digital device such as a smart TV, a computer, a tablet computer, or a smart game machine. Such smart terminals usually have a large display screen so that multiple users can simultaneously watch video or share information. It can therefore be understood that the smart terminal control method and system can also be applied to a smartphone, an iPad, and similar devices that multiple users can view simultaneously.
  • the application installed in the smart terminal can be an application that comes with the system or a third-party application that the user downloads and installs.
  • the application can include instant messaging applications such as MSN, QQ, WeChat, Facebook, Fetion, etc., and can also include SMS, phone, E-mail, knowledge quiz and other applications.
  • a smart TV is taken as an example, and at least one instant messaging application of MSN, QQ or WeChat is installed on the smart TV.
  • the intelligent terminal control method in an embodiment includes the following steps:
  • S102 Receive an application request sent by an application installed on the smart terminal.
  • some applications run in the background according to the user's settings or usage. For example, after turning on the smart TV, the user may run instant messaging applications installed on it, such as MSN, QQ or WeChat, as well as an E-mail application. Typically, these applications move to the background to save resources when the user is not actively using them.
  • in such cases, an application request is sent to the user.
  • For example, when an application such as QQ receives a new message, it usually prompts the user by flashing an icon or by voice, and some smart terminals directly show the new message on the display. Therefore, when the new message is private and multiple users are watching the smart terminal at the same time, personal privacy is not easy to protect.
  • in this method, when the application issues an application request, the message prompt or display is not performed immediately.
  • the received application request must include the identity information of the user currently logged into the application. If the application sending the request is a system application or another application that can be used without logging in, the currently logged-in user is the default user of the smart terminal; that is, the user information is the default user information.
  • the default user can be set by the user.
  • the default user can be one or more.
  • the user identity information may include a username, a user ID, or similar data by which the smart terminal can uniquely identify the user.
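As a rough illustration of the request-plus-identity pattern described above, an application request can be modeled as a small structure carrying the logged-in user's identity. All names and fields here are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class AppRequest:
    """Hypothetical application request carrying the logged-in user's identity."""
    app_name: str   # e.g. "QQ"
    message: str    # the prompt to display
    user_id: str    # identity of the user currently logged into the app

DEFAULT_USER_ID = "default"  # used when the app requires no login

def request_identity(req: AppRequest) -> str:
    """Return the identity the terminal must match before displaying the request."""
    return req.user_id or DEFAULT_USER_ID

req = AppRequest(app_name="QQ", message="A has a new message", user_id="A")
print(request_identity(req))  # -> A
```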
  • S104 Collect a face image of the user in the target area according to the received application request.
  • the target area generally refers to the front area of the display screen of the smart terminal, and the range of the target area needs to be determined according to the collection angle of the image capturing device such as the camera.
  • the image collection device may be a device that is provided by the smart terminal, or may be an external device that is connected to the smart terminal through the connection port.
  • face images are collected for multiple users.
  • the image capturing device can acquire an image in a range of the target area by imaging, thereby identifying a face on the collected image.
  • the calibration module calibrates the recognized face, such as setting an identifier (such as a code) for the recognized face.
  • the calibration module can also be calibrated before face recognition.
  • the image capturing device performs motion tracking on the face recognized in the target area.
  • the calibration module clears the user's calibration, that is, stops using that user's calibration code.
  • when the image acquisition module detects a new user entering the target area, only the face image of that entering user is collected via tracking, and the newly recognized face is calibrated by the calibration module.
  • the user's facial feature information is extracted from the collected face image. Since different users have different facial features, the extracted feature information can be compared with the pre-stored facial features of each user to determine the user identity corresponding to the collected image.
  • the smart terminal first stores the common user identity information and the facial feature information that matches the user identity information, so as to identify the user identity information according to the collected face image.
  • the smart terminal may also store each user's identity information together with a matching face image, so that the collected face image can be compared with the pre-stored image; when the similarity exceeds a preset value, the two are considered the same, thereby identifying the user identity corresponding to the face image.
  • the common user identity information and the facial feature information matching the user identity information are stored on a cloud server or a remote server. Therefore, the smart terminal can acquire the related information from the cloud server or the remote server to complete the face recognition.
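The identity-matching step described above can be sketched as a nearest-neighbour comparison of an extracted feature vector against pre-stored per-user features. The similarity metric (cosine) and the threshold value are assumptions; the patent only requires that similarity exceed a preset value:

```python
import math

# Pre-stored per-user facial feature vectors (illustrative values).
STORED_FEATURES = {
    "A": [0.9, 0.1, 0.3],
    "B": [0.1, 0.8, 0.5],
}
THRESHOLD = 0.95  # assumed preset similarity value

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def identify(features):
    """Return the user id whose stored features are most similar, or None."""
    best_user, best_sim = None, 0.0
    for user, stored in STORED_FEATURES.items():
        sim = cosine(features, stored)
        if sim > best_sim:
            best_user, best_sim = user, sim
    return best_user if best_sim >= THRESHOLD else None
```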
  • S104 is to perform face image collection on multiple users in the target area of the smart terminal at the same time.
  • S104, S106, and S108 may be executed repeatedly in sequence: after collecting one user's face image, that user is identified and checked against the user identity information in the application request. If there is no match, S104, S106, and S108 continue until the determination in S108 is YES or all users in the target area have been checked.
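The repeated S104/S106/S108 loop can be sketched as follows; `identify` stands in for the recognition step and is passed in as a function:

```python
def find_target_user(faces_in_area, requested_id, identify):
    """Iterate over collected faces until one matches the request's user id.

    faces_in_area: face images collected one by one (S104)
    requested_id:  identity carried in the application request
    identify:      recognition function mapping a face to a user id (S106)
    """
    for face in faces_in_area:
        user_id = identify(face)      # S106: recognize identity
        if user_id == requested_id:   # S108: compare with the request
            return user_id            # matching user becomes the target user
    return None                       # no match: the request is not displayed
```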
  • the matching user is marked as a target user, and the application request is displayed.
  • marking the matching user as the target user mainly records that user's location information, which facilitates subsequent operations.
  • displaying the application request only in this case prevents disturbing the users currently watching and avoids leaking the logged-in user's information when the target user is not in the target area, thereby improving information security.
  • the application request can be displayed directly at a preset position on the smart terminal or announced by a voice prompt.
  • applications on the smart terminal other than the target application are controlled to be in silent or pause mode.
  • after the smart terminal displays the application request, it enters trajectory recognition mode to identify the motion trajectory of the target user's target part.
  • the target portion is the head.
  • the motion trajectory refers to the trajectory of the head swing, such as swinging left and right or swinging up and down.
  • the target part may also be a hand, and the motion trajectory may be the trajectory of the hand's swing or a static gesture formed when the hand comes to rest.
  • the collected motion trajectory is identified to output a motion trajectory recognition result.
  • a motion track library is predefined on the smart terminal or the server, and each motion track corresponds to a response command.
  • in this embodiment the target part is the head, so the motion track library defines a nodding track and a head-shaking track, allowing the collected head motion to be classified as a nod or a head shake.
  • the target part may also be a hand, and the motion track library may define trajectories as needed, such as left-right, up-down, and back-and-forth swings, or quickly recognizable character trajectories such as "W", "V", and the like.
  • a response operation corresponding to the motion trajectory is also stored, such as accepting the application request, rejecting the application request, and the like.
  • the response operation corresponding to the nodding track defined in the motion track library is to accept the application request, that is, to open or play the received new message; the head-shaking track corresponds to rejecting the request, that is, to not open or play the new message and to stop displaying the application request.
  • For example, when it is detected that user A is within the target area, the smart TV displays the QQ application request, such as "A has a new message; view it now?". The displayed request also shows an identifier reflecting the identity of the currently logged-in user, so when A sees or hears the request, he can nod if he wants to view the message.
  • after the smart TV detects A's nod, it determines that A agrees to display or play the new message, and does so. When A does not want to see personal messages in front of everyone, he can shake his head; after the smart TV detects the head shake, it determines that A declines, does not display or play the new message, and closes the display of the application request so as not to affect continued viewing of the video program.
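The motion track library and its response operations, as described above, amount to a lookup from recognized trajectory to action. A minimal sketch, with illustrative names:

```python
# Motion track library: each recognized trajectory maps to a response
# operation on the pending application request (names are illustrative).
TRACK_LIBRARY = {
    "nod": "accept",    # open or play the new message
    "shake": "reject",  # close the prompt, do not display the message
}

def respond(track_result, request):
    """Apply the response operation for a recognized motion trajectory."""
    action = TRACK_LIBRARY.get(track_result)
    if action == "accept":
        return f"display {request}"
    if action == "reject":
        return "close prompt"
    return "ignore"  # unrecognized trajectory: do nothing
```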
  • a version-upgrade request can also be answered with the above control method: when the user currently logged into the application (or the default user) is detected within the target area, the upgrade request is displayed, and the target user can then accept the upgrade by nodding or reject it by shaking the head.
  • the above control method can also be applied to the knowledge quiz process of the intelligent terminal.
  • the target user's motion trajectory is recognized, so the user's answer is obtained from the trajectory recognition result and the quiz proceeds to the next question.
  • the motion track or static gesture corresponding to each answer option may be defined in advance, so that the answer can be determined to be option "A" or "B" once the user makes the corresponding gesture or motion.
  • the storage module also stores the user's answer record.
  • if the logged-in user is not within the target area, the application request is not displayed, so as not to affect the current users' normal viewing.
  • In summary, when an application on the smart terminal issues an application request, the face information of users in the target area is first collected to determine whether the user logged into the application is within the target area. Only when that user is in the target area is the application request displayed, and the motion trajectory of the user's target part is then collected so that the request can be answered according to the trajectory recognition result. The user can respond without directly touching the smart terminal or using any other device; the operation is simple and information security is good.
  • the foregoing smart terminal control method further includes the following steps, as shown in FIG. 2.
  • S210 Collect a face image and a gesture image of the user in the target area.
  • the image acquisition module of the intelligent terminal performs image acquisition at regular intervals to collect the face images and gesture images of users within the target area.
  • S220 Identify user identity information according to the collected face image.
  • the identity information of the user can be determined according to the collected face image.
  • the collected face image is subjected to feature extraction, so that the extracted facial feature information is compared with the pre-stored facial feature information of each user to identify the identity information of the user.
  • the gesture image is a static gesture image.
  • the smart terminal or server also defines a gesture library to compare the captured gesture image with the gesture in the gesture library to output the gesture recognition result.
  • each gesture in the gesture library corresponds to a target operation; for example, an OK gesture corresponds to opening QQ, and a V gesture to opening WeChat. The target application logged in with the recognized user's identity can therefore be operated according to the gesture recognition result.
  • after collecting the face image and gesture image, the terminal can recognize the user's identity and gesture, thereby opening the QQ account that user is logged into. This operation likewise requires no contact with the smart terminal or a device such as a remote control, which is convenient for the user.
  • the user can only open the corresponding application logged in with his identity information and cannot open the target application logged in by other users, thereby further improving information security.
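A gesture library that scopes each operation to the recognized user's own login, matching the security property just described, might look like this (names and gestures are illustrative assumptions):

```python
# Each static gesture corresponds to a target application and operation.
GESTURE_LIBRARY = {"OK": ("QQ", "open"), "V": ("WeChat", "open")}

def dispatch(gesture, recognized_user, logged_in_apps):
    """Perform the gesture's operation only if this user is the one logged in.

    logged_in_apps maps each app name to the identity logged into it.
    """
    if gesture not in GESTURE_LIBRARY:
        return None
    app, op = GESTURE_LIBRARY[gesture]
    if logged_in_apps.get(app) != recognized_user:
        return None  # cannot operate an app logged in by another user
    return f"{op} {app} for {recognized_user}"
```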
  • the smart terminal control method further includes steps S250 to S270.
  • S250 Control the smart terminal to be in the silent mode.
  • the smart terminal can be set to the silent mode so as not to affect the user's operation on the target application. It can be understood that the smart terminal can be controlled to enter the silent mode by adjusting the volume of the smart terminal to 0 or suspending the application currently running on the smart terminal.
  • S260 Collect voice information of the target user.
  • when the user issues an instruction to perform a target operation on the target application, that user can be determined to be the target user. Voice information is then collected only from the target user for subsequent operations on the target application, which reduces the workload of the voice collection and recognition modules.
  • the target operation object and the target operation instruction are identified from the content of the voice information. For example, after user B opens QQ by gesture and needs to send voice information to friend C, B may say "send voice information to C" or simply say C's name.
  • the voice recognition module can determine from the received voice information that the target object is C and the target operation is sending voice information; the dialog window for C then pops up and the recording function is turned on.
  • the intelligent terminal can control the recording module to end recording according to the user's voice command, pause duration, or other actions, and then send or cancel sending the recording.
  • for example, the recording module may be controlled to end recording and send the voice message when the pause in the user's voice input exceeds a preset duration.
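The pause-based end-of-recording rule can be sketched as a simple silence-gap check; the 2-second threshold is an assumed value, since the patent leaves the preset duration unspecified:

```python
PAUSE_LIMIT = 2.0  # assumed preset duration, in seconds

def should_end_recording(last_voice_time, now):
    """End (and send) the recording once the silence gap exceeds PAUSE_LIMIT.

    last_voice_time: timestamp of the most recent detected voice input
    now:             current timestamp
    """
    return (now - last_voice_time) > PAUSE_LIMIT
```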
  • when the user directly speaks C's name, the smart terminal finds C in the buddy list and pops up C's dialog box. The user can then perform the corresponding operations, such as sending voice information or making a video call, by entering the corresponding voice commands, gestures, and so on.
  • the foregoing intelligent terminal control method further includes the following steps, as shown in FIG. 3.
  • the image acquisition module of the intelligent terminal performs image acquisition at regular intervals to collect the gesture images of users within the target area. In this embodiment, the face image need not be collected when performing gesture recognition.
  • here, the gesture recognition result mainly serves to turn on the voice recognition mode.
  • the gesture can be customized by the user; for example, a fist gesture may be defined to turn on the voice recognition mode.
  • when the smart terminal is controlled to open the voice recognition mode, it also enters silent mode, preventing sound emitted by the terminal from interfering with the voice collection module.
  • S340 Collect voice information of the user.
  • the voice collection module can confirm whether to end the collection of voice information according to the user's pause duration or gesture.
  • the smart terminal pre-stores user identity information and the matching voice feature information; the extracted voice features are compared with the pre-stored features to obtain the speaker's user identity.
  • the voice information is recognized to identify the target application and target operation. For example, if user D wants to enable QQ, D can say "turn on QQ" or "open QQ". From the voice features of the collected speech, it can be confirmed that the voice information was sent by D, that the target application is QQ, and that the target operation is to open it. The recognition result is output so that the interaction control module opens the QQ account D is logged into.
  • in this way, the target operation can be performed on the target application logged in with the identity information of the speaker.
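Parsing a recognized utterance such as "open QQ" into a target application and target operation can be sketched as simple verb-prefix matching; the verb list and app set are illustrative assumptions:

```python
KNOWN_APPS = {"QQ", "WeChat", "MSN"}          # assumed installed applications
OPEN_VERBS = {"open", "turn on", "start"}     # assumed command verbs

def parse_command(text):
    """Return (operation, app) for commands of the form '<verb> <app>'."""
    for verb in OPEN_VERBS:
        if text.lower().startswith(verb + " "):
            app = text[len(verb):].strip()
            if app in KNOWN_APPS:
                return ("open", app)
    return None  # unrecognized command
```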
  • a smart terminal 400 is provided.
  • the internal structure of the smart terminal 400 may correspond to the structure shown in FIG. 6, and each of the following modules may be implemented in whole or in part by software, hardware, or a combination thereof.
  • the smart terminal 400 includes a receiving module 402, an image collecting module 404, an identifying module 406, a determining module 408, a calibration module 410, a display module 412, and an interaction control module 414.
  • the receiving module 402 is configured to receive an application request sent by an application.
  • the application request contains the user identity information currently logged into the application.
  • the image acquisition module 404 is configured to collect a face image of a user within a target area.
  • the identification module 406 is configured to identify user identity information according to the collected face image.
  • the determining module 408 is configured to determine whether the identified user identity information matches the user identity information in the application request.
  • the calibration module 410 is configured to mark the matching user as the target user.
  • the display module 412 is configured to display the application request when the determination result of the determination module 408 is YES.
  • the image acquisition module 404 is further configured to collect a motion trajectory of a target part of the target user.
  • the identification module 406 is further configured to identify the motion trajectory and output a motion trajectory recognition result.
  • the interaction control module 414 is configured to perform a corresponding response operation on the corresponding application request according to the motion trajectory recognition result.
  • when an application on the smart terminal issues an application request, the smart terminal 400 first collects face information of users in the target area to determine whether the user logged into that application is within the target area. Only when that user is present is the application request displayed; the motion trajectory of the user's target body part is then collected so that the request is answered according to the trajectory recognition result. The user needs neither direct contact with the smart terminal nor any other device; the response operation is simple, and information security is good.
  • the smart terminal 400 further includes a voice collection module 416, as shown in FIG. 5.
  • the image acquisition module 404 is further configured to collect a face image and a gesture image of the user in the target area.
  • the identification module 406 is further configured to identify user identity information according to the collected face image, perform gesture recognition on the gesture image, and output a gesture recognition result.
  • the gesture recognition result is a target operation performed on the target application.
  • the interaction control module 414 is further configured to perform a target operation on the target application that logs in with the identified user identity information according to the gesture recognition result.
  • the interaction control module 414 is further configured to control the smart terminal to be in a silent mode.
  • the voice collection module 416 is configured to collect voice information of the target user.
  • the identification module 406 is further configured to identify the voice information and output a voice recognition result.
  • the interaction control module 414 is further configured to perform a corresponding operation on the target application according to the voice recognition result.
  • the image acquisition module 404 in the smart terminal 400 is further configured to collect a gesture image of a user in a target area.
  • the identification module 406 is further configured to recognize a user gesture according to the gesture image and output a gesture recognition result.
  • the gesture recognition result is to turn on the voice recognition mode.
  • the interaction control module 414 is further configured to control the smart terminal to turn on the voice recognition mode according to the gesture recognition result.
  • the voice collection module 416 is configured to collect voice information of the user.
  • the identification module 406 is further configured to extract voice feature information from the voice information to obtain the sender's user identity information, and to recognize the voice information and output the voice recognition result.
  • the result of the speech recognition is to perform a target operation on the target application.
  • the interaction control module 414 is further configured to perform a target operation on the target application that logs in with the identified user identity information according to the voice recognition result.
  • the smart terminal 400 further includes a storage module 418.
  • the storage module 418 is configured to store user identity information and facial feature information, voice feature information, and the like that match the user identity information.
  • the storage module 418 is disposed in the smart terminal or directly stored by using a memory in the smart terminal.
  • the storage module 418 is a cloud storage or a remote server.
  • an intelligent terminal includes a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, cause the processor to perform the steps of: receiving an application request issued by an application installed on the smart terminal, the application request containing identity information of the user currently logged into the application; collecting, in response to the application request, face images of users in the target area; identifying user identity information from the face images; determining whether the identified user identity information matches the user identity information in the application request; and if so, marking the matching user as the target user, displaying the application request, collecting the motion trajectory of the target user's target body part, recognizing the motion trajectory, and outputting the trajectory recognition result.
  • displaying the application request means displaying it at a preset position on the smart terminal or issuing a voice prompt. In one embodiment, when the voice prompt is issued, the computer readable instructions further cause the processor to perform: controlling applications on the smart terminal other than the target application to be in a mute or pause mode.
  • when it is determined that the identified user identity information does not match the user identity information in the application request, the computer readable instructions further cause the processor to execute: not displaying the application request.
  • in one embodiment, when the voice prompt is issued, the computer readable instructions further cause the processor to perform: controlling applications on the smart terminal other than the target application to be in a mute or pause mode.
  • when the computer readable instructions are executed by the processor, they further cause the processor to: pre-store the respective defined motion trajectories, and store response operations in one-to-one correspondence with those trajectories.
  • when the computer readable instructions are executed by the processor, they further cause the processor to: collect face images and gesture images of users within the target area; identify user identity information from the face images; perform gesture recognition on the gesture images and output a gesture recognition result, the gesture recognition result being a target operation to be performed on a target application; and perform the target operation on the target application logged in with the identified user identity information according to the gesture recognition result.
  • the computer readable instructions further cause the processor to perform: controlling the smart terminal to be in a mute mode; collecting voice information of the target user; recognizing the voice information and outputting the voice recognition result; and performing a corresponding operation on the target application according to the voice recognition result.
  • when the computer readable instructions are executed by the processor, they further cause the processor to: collect gesture images of users within the target area; recognize the user gesture from the gesture images and output a gesture recognition result, the gesture recognition result being to enable the voice recognition mode; control the smart terminal to enable the voice recognition mode according to the gesture recognition result; collect the user's voice information; extract voice feature information from the voice information to obtain the sender's user identity information; recognize the voice information and output a voice recognition result, the voice recognition result being a target operation to be performed on a target application; and perform the target operation on the target application logged in with the recognized user identity information according to the voice recognition result.
  • collecting the voice information of the user includes: confirming whether to end the collection of the voice information according to the pause duration or the gesture of the user.
  • when executed by the processor, the computer readable instructions further cause the processor to perform the step of storing user identity information and facial feature information that matches the user identity information;
  • identifying the user identity information from the face image then includes: extracting the user's facial feature information from the face image; and obtaining the user identity information that matches the extracted facial feature information.
  • the storage medium may be a nonvolatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (ROM), or a random access memory (RAM).
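The identity-matching step described in the bullets above (extracted facial features compared against each user's stored features, with the request shown only when the logged-in user is present) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the feature vectors, the cosine-similarity measure, and the threshold are all assumptions chosen for the example.

```python
# Hedged sketch: match extracted facial features against stored per-user
# features; display the application request only if the logged-in user is
# among the faces found in the target area.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identify_user(face_features, stored_features, threshold=0.9):
    """Return the user ID whose stored features best match, or None."""
    best_id, best_score = None, threshold
    for user_id, feats in stored_features.items():
        score = cosine_similarity(face_features, feats)
        if score >= best_score:
            best_id, best_score = user_id, score
    return best_id

def should_display(request_user_id, faces_in_target_area, stored_features):
    """Display the application request only if its logged-in user is present."""
    for feats in faces_in_target_area:
        if identify_user(feats, stored_features) == request_user_id:
            return True
    return False

stored = {"A": [0.9, 0.1, 0.3], "B": [0.2, 0.8, 0.5]}
print(should_display("A", [[0.88, 0.12, 0.31]], stored))  # A is watching -> True
print(should_display("B", [[0.88, 0.12, 0.31]], stored))  # B absent -> False
```

In a real system the feature vectors would come from a face-recognition model and the stored features could live on the terminal, a cloud store, or a remote server, as the storage module description above allows.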

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Social Psychology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Databases & Information Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Psychiatry (AREA)
  • Computer Hardware Design (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)
  • Telephone Function (AREA)

Abstract

This application discloses an intelligent terminal control method, comprising: receiving an application request issued by an application installed on the intelligent terminal, the application request containing identity information of the user currently logged into the application; collecting, in response to the application request, face images of users within a target area; identifying user identity information from the face images; determining whether the identified user identity information matches the user identity information in the application request; and if so, marking the matching user as the target user, displaying the application request, collecting a motion trajectory of a target body part of the target user, recognizing the motion trajectory, outputting a trajectory recognition result, and performing a corresponding response to the application request according to the trajectory recognition result. This application also provides an intelligent terminal.

Description

Intelligent terminal control method and intelligent terminal
This application claims priority to Chinese Patent Application No. 2016101739378, titled "Intelligent terminal control method and system, and intelligent terminal", filed with the Chinese Patent Office on March 24, 2016, the entire contents of which are incorporated herein by reference.
[Technical Field]
This application relates to the field of intelligent terminal control, and in particular to an intelligent terminal control method and an intelligent terminal.
[Background]
To meet users' diverse needs, more and more intelligent terminals such as smart TVs, computers, tablets, and smart game consoles come with applications installed (e.g. MSN, QQ, WeChat, and e-mail clients). These applications typically enter background operation after the terminal starts. When such an application receives a new message or issues an operation request, the user must respond through an input device such as a remote control, mouse, or keyboard; and when the terminal has multiple users, any of them can view the application's new messages or answer its operation requests. Because applications usually record or can reveal some of a user's personal information, this style of control neither protects user privacy nor is convenient to operate.
[Summary]
In accordance with various embodiments of this application, an intelligent terminal control method and an intelligent terminal are provided.
An intelligent terminal control method comprises:
receiving an application request issued by an application installed on the intelligent terminal, the application request containing identity information of the user currently logged into the application;
collecting, in response to the application request, face images of users within a target area;
identifying user identity information from the face images;
determining whether the identified user identity information matches the user identity information in the application request;
and if so,
marking the matching user as the target user and displaying the application request,
collecting a motion trajectory of a target body part of the target user,
recognizing the motion trajectory and outputting a trajectory recognition result, and
performing a corresponding response to the application request according to the trajectory recognition result.
An intelligent terminal comprises a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the following steps:
receiving an application request issued by an application installed on the intelligent terminal, the application request containing identity information of the user currently logged into the application;
collecting, in response to the application request, face images of users within a target area;
identifying user identity information from the face images;
determining whether the identified user identity information matches the user identity information in the application request;
and if so,
marking the matching user as the target user and displaying the application request,
collecting a motion trajectory of a target body part of the target user,
recognizing the motion trajectory and outputting a trajectory recognition result, and
performing a corresponding response to the application request according to the trajectory recognition result.
Details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will become apparent from the description, the drawings, and the claims.
[Brief Description of the Drawings]
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed in that description are briefly introduced below. Evidently, the drawings described below show only some embodiments of this application, and a person of ordinary skill in the art may obtain drawings of other embodiments from them without creative effort.
FIG. 1 is a flowchart of an intelligent terminal control method in one embodiment;
FIG. 2 is a flowchart of an intelligent terminal control method in another embodiment;
FIG. 3 is a flowchart of an intelligent terminal control method in yet another embodiment;
FIG. 4 is a structural block diagram of intelligent terminal control in one embodiment;
FIG. 5 is a structural block diagram of intelligent terminal control in another embodiment;
FIG. 6 is a schematic diagram of the internal structure of the intelligent terminal in one embodiment.
[Detailed Description]
To facilitate an understanding of this application, it is described more fully below with reference to the accompanying drawings, in which its preferred embodiments are shown. This application may, however, be implemented in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided so that the disclosure of this application will be understood more thoroughly and completely.
An intelligent terminal control method is provided for controlling an intelligent terminal on which at least one application is installed. In one embodiment, the internal structure of the intelligent terminal is as shown in FIG. 6 and includes a processor, a nonvolatile storage medium, a communication interface, a power interface, a memory, a voice collection device, an image collection device, a display screen, a speaker, and an input device connected through a system bus. The storage medium of the terminal stores an operating system as well as computer-readable instructions which, when executed by the processor, enable the processor to implement an intelligent terminal control method. At least one application is installed on the terminal and runs in the environment provided by the operating system. The processor provides computing and control capability, supports the operation of the entire terminal, and is configured to execute the flow of the control method. The memory in the terminal provides the runtime environment for the intelligent terminal control system in the storage medium. The network interface connects to network-side devices for network communication. The display screen may be a liquid-crystal screen, an e-ink screen, or the like. The input device may be a touch layer covering the display screen; keys, a trackball, or a touchpad on the terminal housing; or an external keyboard, touchpad, or mouse. The voice collection device may be the terminal's built-in microphone or an external microphone; the image collection device may be the terminal's built-in camera or an external camera.
The intelligent terminal may be a digital device such as a smart TV, computer, tablet, or smart game console. Such terminals usually have large display screens so that several users can watch video or share information at the same time. It will thus be understood that this control method and system are equally applicable to devices such as smartphones and iPads that several users can watch simultaneously. The applications installed on the terminal may be system applications or third-party applications downloaded by the user, and may include instant-messaging applications such as MSN, QQ, WeChat, Facebook, and Fetion, as well as SMS, phone, e-mail, and quiz applications. The description below uses a smart TV as an example, with at least one of the instant-messaging applications MSN, QQ, or WeChat installed on it.
As shown in FIG. 1, the intelligent terminal control method in one embodiment includes the following steps.
S102: receive an application request issued by an application installed on the intelligent terminal.
After the intelligent terminal starts, some applications enter background operation according to the user's settings or usage. For example, after turning on a smart TV, the user may run the instant-messaging applications installed on it, such as MSN, QQ, or WeChat, and e-mail applications. Normally, when the user is not using these applications, they run in the background to save resources. When such an application receives a new message or detects information such as a version upgrade or an exception, it issues an application request to the user. In conventional intelligent terminals, when an application such as QQ receives a new message, it usually signals the new message by a blinking icon or a voice prompt, and some terminals display the new message directly on the screen. When the new message is private and several users are watching the terminal at the same time, personal privacy is therefore hard to protect. In this embodiment, when an application issues an application request, no message prompt or display is produced immediately.
The received application request must contain the identity information of the user currently logged into the application. If the requesting application is a system application or another application that can be used without logging in, the current login user is taken to be the terminal's default user, i.e. the user information is the default user information. The default user can be set by the user, and there may be one or more default users. In this embodiment, the user identity information may include a user name, user ID, or the like by which the terminal can uniquely identify the user.
S104: collect, in response to the received application request, face images of users within the target area.
The target area usually refers to the region in front of the terminal's display screen; its extent is determined by the capture angle of the image collection device, e.g. the camera. The image collection device may be built into the terminal or be an external device attached through a connection port. When several users are in the target area, face images are collected for all of them. Specifically, the device captures images of the target area so that the faces in the captured images can be recognized. In this embodiment, the calibration module marks each recognized face, e.g. assigns it an identifier (such as a code); the marking may also be done before face recognition. The device tracks the motion of faces recognized in the target area: when a user is detected leaving, the calibration module clears that user's mark, i.e. stops using that user's marking code. When the image collection module detects a new user entering the target area, tracking is used to collect the face image of only the newly entered user; after that user's face is recognized, the calibration module marks the user.
S106: identify user identity information from the collected face images.
Specifically, the user's facial feature information is extracted from the collected face image. Since different users have different facial features, the extracted features can be compared with the pre-stored facial features of each user to determine the identity information corresponding to the collected face image. In this embodiment, the terminal first stores the identity information of frequent users along with the matching facial feature information so that identity can be determined from a collected face image. In other embodiments, the terminal may instead pre-store each user's identity information and a matching face image; a collected face image is then compared with the stored images, and when the similarity exceeds a preset value the two are taken to be the same person, thereby identifying the user. In another embodiment, the frequent users' identity information and matching facial features are stored on a cloud or remote server, from which the terminal obtains them to complete face recognition.
S108: determine whether the identified user identity information matches the user identity information in the application request.
Whether the user currently logged into the requesting application is within the target area is determined by checking whether the identified user identity information matches the identity information in the application request. For example, if the QQ account that A is logged into on the smart TV issues a request, face recognition is performed on the users within the TV's target area (the viewing region directly in front) to confirm whether A is there, i.e. whether A is watching the TV. If the identified identity information matches that in the request (i.e. the logged-in user is within the target area), S110 and the subsequent steps are executed; otherwise step S118 is executed. In this embodiment, S104 collects face images of several users in the terminal's target area at once. In other embodiments, S104, S106, and S108 may be executed repeatedly in sequence: one user's face image is collected, that user is identified, and the identity is compared with that in the request; if it does not match, S104, S106, and S108 are repeated until the result of S108 is positive or every user in the target area has been checked.
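The sequential variant of S104-S108 described above (check one user at a time, stop at the first match or after everyone has been examined) can be sketched as follows. The function and field names are hypothetical stand-ins for the image-collection and recognition modules, not the patent's implementation.

```python
# Hedged sketch of the per-user loop in S104-S108: recognize users in the
# target area one at a time until the logged-in user is found, or report
# that no one matches (which would lead to S118: do not display).
def recognize(face):
    # Stand-in for face-feature extraction and comparison (S106).
    return face.get("user_id")

def find_target_user(request_user_id, faces):
    """Sequentially recognize users; stop at the first match."""
    for face in faces:
        identified = recognize(face)
        if identified == request_user_id:
            return identified   # S108 positive: mark as target user (S110)
    return None                 # all users checked, no match (S118)

faces = [{"user_id": "B"}, {"user_id": "A"}]
print(find_target_user("A", faces))  # -> "A"
print(find_target_user("C", faces))  # -> None
```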
S110: mark the matching user as the target user and display the application request.
Marking the matching user as the target user mainly records the matching user's position information to facilitate subsequent operations. The application request is displayed only after the target user is confirmed to be within the target area; this avoids disturbing the current viewers and leaking the logged-in user's information when the target user is absent, which helps improve information security. The request can be displayed directly at a preset position on the terminal or announced by a voice prompt. When the request is announced by voice, the other applications on the terminal, apart from the target application, are simultaneously controlled to be in a mute or pause mode.
S112: collect the motion trajectory of the target user's target body part.
Once the terminal displays the application request, it enters trajectory recognition mode and recognizes the motion trajectory of the target user's target body part. In this embodiment the target part is the head, and the motion trajectory is the path of head movement, such as swinging left-right or up-down. In other embodiments the target part may be the hand, and the trajectory may be the path of hand movement or the static gesture formed when the hand finally comes to rest.
S114: recognize the collected motion trajectory and output a trajectory recognition result.
The collected motion trajectory is recognized and a trajectory recognition result is output. A trajectory library is predefined on the terminal or server, and each trajectory corresponds to a response instruction. In this embodiment the target part is the head, so the library defines a nodding trajectory and a head-shaking trajectory, allowing the collected head motion to be classified as a nod or a shake. In other embodiments where the target part is the hand, the library may define trajectories as needed, such as left-right, up-down, or forward-backward swings, or character trajectories that can be quickly recognized, such as "W", "V", or "✔".
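The trajectory library just described, with each predefined trajectory mapped one-to-one to a response instruction, can be sketched as below. The classification rule (dominant motion axis) and all names are illustrative assumptions; a real recognizer would work on tracked keypoints over time.

```python
# Hedged sketch of the trajectory library: a nod accepts the request,
# a head shake rejects it, as in the embodiment described in the text.
TRAJECTORY_ACTIONS = {
    "nod": "accept_request",    # open/play the new message
    "shake": "reject_request",  # dismiss the request display
}

def classify_head_trajectory(dy, dx):
    """Crudely classify a head trajectory by its dominant motion axis."""
    return "nod" if abs(dy) > abs(dx) else "shake"

def respond(dy, dx):
    return TRAJECTORY_ACTIONS[classify_head_trajectory(dy, dx)]

print(respond(dy=12.0, dx=2.5))  # mostly vertical motion -> "accept_request"
print(respond(dy=1.0, dx=9.0))   # mostly horizontal motion -> "reject_request"
```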
S116: perform the corresponding response to the application request according to the trajectory recognition result.
Along with each defined trajectory, the trajectory library stores the response operation in one-to-one correspondence with it, such as accepting or rejecting the application request.
In this embodiment, the library defines the response to a nodding trajectory as accepting the request, i.e. opening or playing the received new message; shaking the head corresponds to rejecting the request, i.e. neither opening nor playing the new message and ceasing to display the request. For example, when user A is detected in the target area, the smart TV displays QQ's request, such as "A has a new message. View it?". It will be understood that the displayed request likewise carries a mark reflecting the identity of the currently logged-in user. So when A sees or hears the request and wants to view the message, A can nod; once the TV captures the nod, it determines that A agrees to display or play the message and does so. When A does not want to view a personal message in front of everyone, A can shake their head; once the TV captures the shake, it determines that A declines, does not display or play the message, and closes the request display so as not to interrupt the users' viewing of the video program.
As another example, when some applications detect a new version, they usually issue a request asking whether to upgrade automatically. The control method above can answer this request in the same way: when the user currently logged into the application (or the default user) is detected in the target area, the upgrade request is displayed, and the target user can then nod to approve the upgrade or shake their head to decline it.
The control method above is equally applicable to a quiz session on the intelligent terminal. For example, after the quiz application starts, the target user's motion trajectory is recognized so that the user's answer is obtained from the trajectory recognition result and the quiz advances to the next question. Specifically, when a head shake is recognized, the answer is taken to be "no"; when a nod is recognized, the answer is taken to be "yes". Alternatively, trajectories or static gestures corresponding to the answer options can be predefined, so that the user's choice of "A", "B", and so on is determined once the corresponding gesture or trajectory is made. In this embodiment, the storage module also stores the user's answer records.
S118: do not display the application request.
When the target user is not within the target area, the application request is not displayed, so the current users' normal viewing is unaffected.
With the intelligent terminal control method above, when an application on the terminal issues an application request, the face information of users in the target area is collected first to determine whether the user logged into that application is present; the request is displayed only if so, and the motion trajectory of that user's target body part is then collected so that the request is answered according to the trajectory recognition result. The user can respond without touching the terminal or using any other device; operation is simple and information security is good.
In another embodiment, the intelligent terminal control method above further includes the following steps, as shown in FIG. 2.
S210: collect face images and gesture images of users within the target area.
The terminal's image collection module captures external images at regular intervals to collect the face and gesture images of users in the target area.
S220: identify user identity information from the collected face images.
The user's identity information can be determined from the collected face image. For example, features are extracted from the collected face image and the extracted facial features are compared with the pre-stored facial features of each user to identify the user.
S230: recognize the gesture image and output a gesture recognition result.
In this embodiment, the gesture image is a static gesture image. The terminal or server likewise defines a gesture library, and the collected gesture image is compared with the gestures in the library to output a gesture recognition result.
S240: perform, according to the gesture recognition result, the target operation on the target application logged in with the identified user identity information.
Specifically, each gesture in the library corresponds one-to-one to a target operation. For example, an OK gesture corresponds to opening QQ, and a V gesture to opening WeChat; the target application logged in with that user's identity can thus be operated according to the gesture recognition result. As another example, when user B wants to open the QQ account B is logged into on the smart TV, B only needs to make the OK gesture; after the image collection module captures B's face and gesture images, B's identity and gesture are recognized and the QQ account B is logged into is opened. This likewise requires no contact with the terminal and no device such as a remote control, which is convenient. Moreover, a user can only open the applications logged in with their own identity information, not the target applications other users are logged into, which further improves information security.
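The gesture-to-operation table described above (OK opens QQ, V opens WeChat), together with the restriction that a user may only operate applications logged in under their own identity, can be sketched as follows. The table contents and function names are illustrative assumptions.

```python
# Hedged sketch: dispatch a recognized static gesture to a target operation,
# but only on an application whose login belongs to the gesturing user.
GESTURE_OPERATIONS = {
    "OK": ("QQ", "open"),
    "V": ("WeChat", "open"),
}

def perform_gesture(user_id, gesture, sessions):
    """sessions maps application name -> logged-in user id."""
    if gesture not in GESTURE_OPERATIONS:
        return None
    app, op = GESTURE_OPERATIONS[gesture]
    if sessions.get(app) != user_id:
        return None  # cannot operate another user's login
    return f"{op} {app} for {user_id}"

sessions = {"QQ": "B", "WeChat": "A"}
print(perform_gesture("B", "OK", sessions))  # -> "open QQ for B"
print(perform_gesture("B", "V", sessions))   # WeChat is A's login -> None
```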
In this embodiment, the intelligent terminal control method above further includes steps S250 to S270.
S250: control the intelligent terminal to be in a mute mode.
While the user operates the target application, the terminal can be set to mute to avoid interfering with that operation. It will be understood that the terminal can be brought into mute mode by turning its volume down to 0 or by pausing the application currently running on it.
S260: collect the target user's voice information.
Once a user issues an instruction to perform a target operation on the target application, that user is determined to be the target user, and the voice information the user utters is collected for subsequent operations on the target application. Collecting voice information only from the target user reduces the workload of the voice collection and recognition modules.
S270: recognize the voice information and perform the corresponding operation on the target application according to the voice recognition result.
The content of the voice information is recognized to identify the target of the operation and the target operation instruction. For example, after user B opens QQ by gesture and needs to send a voice message to friend C, B can say "send a voice message to C", "send voice message to C", or simply C's name. The voice recognition module then identifies C as the target and sending a voice message as the target operation, pops up C's chat window, and starts recording. The terminal can end the recording and send it, or cancel sending, according to the user's voice instructions, pause duration, or other actions; for example, the recording can be ended and the voice message sent when the user's pause in voice input exceeds a preset duration. When the user simply says C's name, the terminal finds C in the friend list and opens C's dialog, after which the user can perform operations such as sending a voice message or starting a video call by giving further voice instructions or gestures.
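The command interpretation in S270 (recognized text matched against the contact list to find the target friend and the target operation) can be sketched as below. The phrase patterns are assumptions based only on the examples in the text ("send voice message to C", or just the friend's name); a real system would use proper intent parsing.

```python
# Hedged sketch of S270: map recognized voice text to (contact, operation).
def parse_voice_command(text, contacts):
    for name in contacts:
        if name in text:
            if "voice" in text or "send" in text:
                return (name, "send_voice_message")
            return (name, "open_dialog")  # just the friend's name was said
    return (None, None)

contacts = ["C", "Dave"]
print(parse_voice_command("send voice message to C", contacts))
print(parse_voice_command("C", contacts))
```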
With the intelligent terminal control method above, some basic operations can also be performed on the terminal's applications, which is convenient for the user; and since operations can only target applications the user is authorized for, information security is improved.
In yet another embodiment, the intelligent terminal control method above further includes the following steps, as shown in FIG. 3.
S310: collect gesture images of users within the target area.
The terminal's image collection module captures external images at regular intervals to collect the gesture images of users in the target area. In this embodiment, no face image needs to be collected for gesture recognition.
S320: recognize the user gesture from the collected gesture image and output a gesture recognition result.
When only the user's gesture image is recognized, the gesture recognition result is mainly enabling the voice recognition mode. The gesture can be user-defined; for example, a fist can be defined to enable the voice recognition mode.
S330: control the intelligent terminal to enable the voice recognition mode according to the gesture recognition result.
When the terminal is controlled to enable voice recognition, it is simultaneously brought into mute mode, so that sound from the terminal does not interfere with the voice collection module.
S340: collect the user's voice information.
Voice collection begins once the voice recognition mode is entered. The voice collection module can confirm whether to end the collection of voice information according to the user's pause duration or gesture.
S350: extract voice feature information from the collected voice information to obtain the sender's user identity information.
In this embodiment, the terminal pre-stores user identity information and the matching voice feature information; the extracted voice features are therefore compared with the pre-stored voice features to obtain the sender's user identity information.
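The speaker-identification step in S350 (extracted voice features compared with pre-stored per-user voice features) can be sketched as follows. Euclidean distance and the acceptance threshold are illustrative choices, not the patent's; real voice features would be embeddings from a speaker-recognition model.

```python
# Hedged sketch of S350: identify the sender by nearest stored voice-feature
# vector, within an illustrative distance threshold.
import math

def nearest_speaker(features, stored, max_dist=1.0):
    best_user, best_dist = None, max_dist
    for user_id, ref in stored.items():
        dist = math.dist(features, ref)  # Euclidean distance (Python 3.8+)
        if dist <= best_dist:
            best_user, best_dist = user_id, dist
    return best_user

stored = {"D": [1.0, 2.0], "E": [5.0, 5.0]}
print(nearest_speaker([1.1, 2.2], stored))  # close to D -> "D"
print(nearest_speaker([9.0, 9.0], stored))  # no one within range -> None
```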
S360: recognize the voice information and output a voice recognition result.
The voice information is recognized to identify the target application and the target operation. For example, if user D needs to open QQ, D can issue the voice message "turn on QQ" or "open QQ". From the voice features of the collected voice information, it can be confirmed that the voice information was sent by D, that the target application is QQ, and that the target operation is opening it. The recognition result is output so that the interaction control module opens the QQ account D is logged into.
S370: perform, according to the voice recognition result, the target operation on the target application logged in with the identified user identity information.
According to the recognized voice result, the target operation can be performed on the target application logged in with the identity information of the voice sender.
As shown in FIG. 4, in one embodiment an intelligent terminal 400 is provided. The internal structure of the terminal 400 may correspond to that shown in FIG. 6, and each of the modules below may be implemented in whole or in part by software or a combination thereof. The terminal 400 includes a receiving module 402, an image collection module 404, a recognition module 406, a determination module 408, a calibration module 410, a display module 412, and an interaction control module 414.
The receiving module 402 receives application requests issued by applications; an application request contains the identity information of the user currently logged into the application. The image collection module 404 collects face images of users within the target area. The recognition module 406 identifies user identity information from the collected face images. The determination module 408 determines whether the identified identity information matches the user identity information in the application request. The calibration module 410 marks the matching user as the target user. The display module 412 displays the application request when the determination result of the determination module 408 is positive. The image collection module 404 further collects the motion trajectory of the target user's target body part; the recognition module 406 further recognizes that trajectory and outputs a trajectory recognition result; and the interaction control module 414 performs the corresponding response to the application request according to the trajectory recognition result.
With the intelligent terminal 400 above, when an application on the terminal issues an application request, the face information of users in the target area is collected first to determine whether the user logged into that application is present; the request is displayed only if so, and the motion trajectory of that user's target body part is collected so that the request is answered according to the trajectory recognition result. The user can respond without touching the terminal or using any other device; operation is simple and information security is good.
In another embodiment, the intelligent terminal 400 further includes a voice collection module 416, as shown in FIG. 5. In this embodiment, the image collection module 404 further collects face and gesture images of users in the target area. The recognition module 406 further identifies user identity information from the face images, performs gesture recognition on the gesture images, and outputs a gesture recognition result, which is a target operation to be performed on a target application. The interaction control module 414 further performs the target operation on the target application logged in with the identified user identity information according to the gesture recognition result. In this embodiment, the interaction control module 414 further controls the terminal to be in mute mode; the voice collection module 416 collects the target user's voice information; the recognition module 406 further recognizes the voice information and outputs a voice recognition result; and the interaction control module 414 further performs the corresponding operation on the target application according to that result.
In another embodiment, the image collection module 404 in the intelligent terminal 400 further collects gesture images of users in the target area. The recognition module 406 further recognizes the user gesture from the gesture image and outputs a gesture recognition result, which is enabling the voice recognition mode. The interaction control module 414 further controls the terminal to enable the voice recognition mode according to that result. The voice collection module 416 collects the user's voice information. The recognition module 406 further extracts voice feature information from the voice information to obtain the sender's user identity information, recognizes the voice information, and outputs a voice recognition result, which is a target operation to be performed on a target application. The interaction control module 414 further performs the target operation on the target application logged in with the identified user identity information according to the voice recognition result.
In this embodiment, the intelligent terminal 400 further includes a storage module 418, which stores user identity information together with the matching facial feature information, voice feature information, and the like. In one embodiment, the storage module 418 is placed inside the terminal or storage uses the terminal's own memory directly; in another embodiment, the storage module 418 is a cloud storage or a remote server.
In one embodiment, an intelligent terminal includes a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the following steps: receiving an application request issued by an application installed on the terminal, the request containing the identity information of the user currently logged into the application; collecting, in response to the request, face images of users within a target area; identifying user identity information from the face images; determining whether the identified identity information matches the user identity information in the request; and if so, marking the matching user as the target user, displaying the request, collecting the motion trajectory of the target user's target body part, recognizing the trajectory and outputting a trajectory recognition result, and performing the corresponding response to the request according to that result. Here, displaying the application request means displaying it at a preset position on the terminal or issuing a voice prompt; and in one embodiment, when a voice prompt is issued, the computer-readable instructions further cause the processor to perform: controlling applications on the terminal other than the target application to be in a mute or pause mode.
In this embodiment, when the identified user identity information is determined not to match the user identity information in the application request, the computer-readable instructions further cause the processor to perform: not displaying the application request.
In one embodiment, when a voice prompt is issued, the computer-readable instructions further cause the processor to perform: controlling applications on the terminal other than the target application to be in a mute or pause mode.
In one embodiment, when executed by the processor, the computer-readable instructions further cause the processor to perform: pre-storing the respective defined motion trajectories, and storing response operations in one-to-one correspondence with those trajectories.
In one embodiment, when executed by the processor, the computer-readable instructions further cause the processor to perform the following steps: collecting face images and gesture images of users within the target area; identifying user identity information from the face images; performing gesture recognition on the gesture images and outputting a gesture recognition result, which is a target operation to be performed on a target application; and performing the target operation on the target application logged in with the identified user identity information according to the gesture recognition result.
In one embodiment, after the target operation is performed on the target application logged in with the identified user identity information according to the gesture recognition result, the computer-readable instructions further cause the processor to perform: controlling the terminal to be in mute mode; collecting the target user's voice information; recognizing the voice information and outputting a voice recognition result; and performing the corresponding operation on the target application according to that result.
In one embodiment, when executed by the processor, the computer-readable instructions further cause the processor to perform the following steps: collecting gesture images of users within the target area; recognizing the user gesture from the gesture images and outputting a gesture recognition result, which is enabling the voice recognition mode; controlling the terminal to enable the voice recognition mode according to that result; collecting the user's voice information; extracting voice feature information from the voice information to obtain the sender's user identity information; recognizing the voice information and outputting a voice recognition result, which is a target operation to be performed on a target application; and performing the target operation on the target application logged in with the identified user identity information according to the voice recognition result.
In one embodiment, collecting the user's voice information includes: confirming whether to end the collection of voice information according to the user's pause duration or gesture.
In one embodiment, when executed by the processor, the computer-readable instructions further cause the processor to perform the step of storing user identity information and the facial feature information matching it; identifying user identity information from the face image then includes: extracting the user's facial feature information from the face image, and obtaining the user identity information that matches the extracted facial features.
A person of ordinary skill in the art will understand that all or part of the flows of the method embodiments above can be completed by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, may include the flows of the method embodiments above. The storage medium may be a nonvolatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (ROM), or a random access memory (RAM).
The technical features of the embodiments above may be combined arbitrarily. For brevity, not all possible combinations of these features have been described; nevertheless, as long as a combination involves no contradiction, it should be considered within the scope of this specification.
The embodiments above express only several implementations of this application, and their description is relatively specific and detailed, but they are not therefore to be construed as limiting the scope of the patent. It should be noted that a person of ordinary skill in the art can make variations and improvements without departing from the concept of this application, all of which fall within its scope of protection. The protection scope of this patent is therefore subject to the appended claims.

Claims (20)

  1. An intelligent terminal control method, comprising:
    receiving an application request issued by an application installed on the intelligent terminal, the application request containing identity information of a user currently logged into the application;
    collecting, in response to the application request, face images of users within a target area;
    identifying user identity information from the face images;
    determining whether the identified user identity information matches the user identity information in the application request;
    and if so,
    marking the matching user as a target user and displaying the application request,
    collecting a motion trajectory of a target body part of the target user,
    recognizing the motion trajectory and outputting a trajectory recognition result, and
    performing a corresponding response to the application request according to the trajectory recognition result.
  2. The method according to claim 1, wherein, in determining whether the identified user identity information matches the user identity information in the application request, the application request is not displayed if the identified user identity information does not match the user identity information in the application request.
  3. The method according to claim 1, wherein displaying the application request comprises displaying the application request at a preset position on the intelligent terminal or issuing a voice prompt.
  4. The method according to claim 3, wherein, when the voice prompt is issued, applications on the intelligent terminal other than a target application are controlled to be in a mute or pause mode.
  5. The method according to claim 1, further comprising:
    pre-storing respective defined motion trajectories, and storing response operations in one-to-one correspondence with the motion trajectories.
  6. The method according to claim 1, further comprising:
    collecting face images and gesture images of users within the target area;
    identifying user identity information from the face images;
    performing gesture recognition on the gesture images and outputting a gesture recognition result, the gesture recognition result being a target operation to be performed on a target application; and
    performing the target operation on the target application logged in with the identified user identity information according to the gesture recognition result.
  7. The method according to claim 6, wherein, after the target operation is performed on the target application logged in with the identified user identity information according to the gesture recognition result, the method further comprises:
    controlling the intelligent terminal to be in a mute mode;
    collecting voice information of the target user;
    recognizing the voice information and outputting a voice recognition result; and
    performing a corresponding operation on the target application according to the voice recognition result.
  8. The method according to claim 1, further comprising:
    collecting gesture images of users within the target area;
    recognizing a user gesture from the gesture images and outputting a gesture recognition result, the gesture recognition result being to enable a voice recognition mode;
    controlling the intelligent terminal to enable the voice recognition mode according to the gesture recognition result;
    collecting voice information of the user;
    extracting voice feature information from the voice information to obtain identity information of the sender;
    recognizing the voice information and outputting a voice recognition result, the voice recognition result being a target operation to be performed on a target application; and
    performing the target operation on the target application logged in with the identified user identity information according to the voice recognition result.
  9. The method according to claim 8, wherein collecting the voice information of the user comprises: confirming whether to end the collection of voice information according to the user's pause duration or gesture.
  10. The method according to claim 1, further comprising storing user identity information and facial feature information matching the user identity information;
    wherein identifying user identity information from the face image comprises:
    extracting the user's facial feature information from the face image; and
    obtaining the user identity information matching the extracted facial feature information.
  11. An intelligent terminal, comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the following steps:
    receiving an application request issued by an application installed on the intelligent terminal, the application request containing identity information of a user currently logged into the application;
    collecting, in response to the application request, face images of users within a target area;
    identifying user identity information from the face images;
    determining whether the identified user identity information matches the user identity information in the application request;
    and if so,
    marking the matching user as a target user and displaying the application request,
    collecting a motion trajectory of a target body part of the target user,
    recognizing the motion trajectory and outputting a trajectory recognition result, and
    performing a corresponding response to the application request according to the trajectory recognition result.
  12. The intelligent terminal according to claim 11, wherein, when the identified user identity information is determined not to match the user identity information in the application request, the computer-readable instructions further cause the processor to perform: not displaying the application request.
  13. The intelligent terminal according to claim 11, wherein displaying the application request comprises displaying the application request at a preset position on the intelligent terminal or issuing a voice prompt.
  14. The intelligent terminal according to claim 13, wherein, when the voice prompt is issued, the computer-readable instructions further cause the processor to perform: controlling applications on the intelligent terminal other than a target application to be in a mute or pause mode.
  15. The intelligent terminal according to claim 11, wherein, when executed by the processor, the computer-readable instructions further cause the processor to perform: pre-storing respective defined motion trajectories, and storing response operations in one-to-one correspondence with the motion trajectories.
  16. The intelligent terminal according to claim 11, wherein, when executed by the processor, the computer-readable instructions further cause the processor to perform the following steps:
    collecting face images and gesture images of users within the target area;
    identifying user identity information from the face images;
    performing gesture recognition on the gesture images and outputting a gesture recognition result, the gesture recognition result being a target operation to be performed on a target application; and
    performing the target operation on the target application logged in with the identified user identity information according to the gesture recognition result.
  17. The intelligent terminal according to claim 16, wherein, after the target operation is performed on the target application logged in with the identified user identity information according to the gesture recognition result, the computer-readable instructions further cause the processor to perform:
    controlling the intelligent terminal to be in a mute mode;
    collecting voice information of the target user;
    recognizing the voice information and outputting a voice recognition result; and
    performing a corresponding operation on the target application according to the voice recognition result.
  18. The intelligent terminal according to claim 11, wherein, when executed by the processor, the computer-readable instructions further cause the processor to perform the following steps:
    collecting gesture images of users within the target area;
    recognizing a user gesture from the gesture images and outputting a gesture recognition result, the gesture recognition result being to enable a voice recognition mode;
    controlling the intelligent terminal to enable the voice recognition mode according to the gesture recognition result;
    collecting voice information of the user;
    extracting voice feature information from the voice information to obtain identity information of the sender;
    recognizing the voice information and outputting a voice recognition result, the voice recognition result being a target operation to be performed on a target application; and
    performing the target operation on the target application logged in with the identified user identity information according to the voice recognition result.
  19. The intelligent terminal according to claim 18, wherein collecting the voice information of the user comprises: confirming whether to end the collection of voice information according to the user's pause duration or gesture.
  20. The intelligent terminal according to claim 11, wherein, when executed by the processor, the computer-readable instructions further cause the processor to perform the step of storing user identity information and facial feature information matching the user identity information;
    wherein identifying user identity information from the face image comprises: extracting the user's facial feature information from the face image; and
    obtaining the user identity information matching the extracted facial feature information.
PCT/CN2017/075846 2016-03-24 2017-03-07 Intelligent terminal control method and intelligent terminal WO2017162019A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US16/087,618 US20190104340A1 (en) 2016-03-24 2017-03-07 Intelligent Terminal Control Method and Intelligent Terminal
EP17769296.9A EP3422726A4 (en) 2016-03-24 2017-03-07 METHOD FOR CONTROLLING INTELLIGENT TERMINAL AND INTELLIGENT TERMINAL
JP2018549772A JP2019519830A (ja) 2016-03-24 2017-03-07 スマート端末を制御する方法、及びスマート端末

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610173937.8A 2016-03-24 2016-03-24 Intelligent terminal control method and system, and intelligent terminal
CN201610173937.8 2016-03-24

Publications (1)

Publication Number Publication Date
WO2017162019A1 true WO2017162019A1 (zh) 2017-09-28

Family

ID=56625785

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/075846 WO2017162019A1 (zh) 2016-03-24 2017-03-07 Intelligent terminal control method and intelligent terminal

Country Status (5)

Country Link
US (1) US20190104340A1 (zh)
EP (1) EP3422726A4 (zh)
JP (1) JP2019519830A (zh)
CN (1) CN105872685A (zh)
WO (1) WO2017162019A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111402885A (zh) * 2020-04-22 2020-07-10 Interaction method and system based on voice and air-imaging technology
CN112015171A (zh) * 2019-05-31 2020-12-01 Smart speaker, and method, apparatus, and storage medium for controlling a smart speaker
CN113269124A (zh) * 2021-06-09 2021-08-17 Object recognition method, system, device, and computer-readable medium
CN113885710A (zh) * 2021-11-02 2022-01-04 Control method and control apparatus for intelligent devices, and intelligent system

Families Citing this family (32)

Publication number Priority date Publication date Assignee Title
CN105872685A (zh) 2016-03-24 2016-08-17 Intelligent terminal control method and system, and intelligent terminal
CN106648760A (zh) * 2016-11-30 2017-05-10 Terminal and method thereof for cleaning background applications based on face recognition
CN106681504B (zh) * 2016-12-20 2020-09-11 Terminal control method and apparatus
CN107679860A (zh) * 2017-08-09 2018-02-09 User authentication method, apparatus, device, and computer storage medium
CN107678288A (zh) * 2017-09-21 2018-02-09 Automatic control system and method for indoor intelligent devices
CN110096251B (zh) * 2018-01-30 2024-02-27 Interaction method and apparatus
KR102543656B1 (ko) * 2018-03-16 2023-06-15 Screen control method and electronic device supporting the same
CN108491709A (zh) * 2018-03-21 2018-09-04 Method and apparatus for recognizing permissions
CN110298218B (zh) * 2018-03-23 2022-03-04 Interactive fitness apparatus and interactive fitness system
CN108537029B (zh) * 2018-04-17 2023-01-24 Mobile terminal control method and apparatus, and mobile terminal
US20210224368A1 (en) * 2018-05-09 2021-07-22 Chao Fang Device control method and system
CN109067883B (zh) * 2018-08-10 2021-06-29 Information pushing method and apparatus
CN110175490B (zh) * 2018-09-21 2021-04-16 Game machine historical account analysis system
CN109065058B (zh) * 2018-09-30 2024-03-15 Voice communication method, apparatus, and system
CN109543569A (zh) * 2018-11-06 2019-03-29 Target recognition method and apparatus, visual sensor, and smart home system
CN109727596B (zh) * 2019-01-04 2020-03-17 Method for controlling a remote control, and remote control
CN110488616A (zh) * 2019-07-08 2019-11-22 Smart home control system and method based on the Internet of Things
CN113033266A (zh) * 2019-12-25 2021-06-25 Person motion trajectory tracking method, apparatus, and system, and electronic device
CN111580653A (zh) * 2020-05-07 2020-08-25 Intelligent interaction method and intelligent interactive desk
CN111901682A (zh) * 2020-07-30 2020-11-06 Television mode processing method and system based on automatic recognition, and television
US11899845B2 (en) * 2020-08-04 2024-02-13 Samsung Electronics Co., Ltd. Electronic device for recognizing gesture and method for operating the same
CN114529977A (zh) * 2020-11-02 2022-05-24 Gesture control method and apparatus for intelligent devices, and intelligent device
CN112270302A (zh) * 2020-11-17 2021-01-26 Limb control method and apparatus, and electronic device
CN112286122A (zh) * 2020-11-30 2021-01-29 Smart home control method, apparatus, terminal, and storage medium
CN112908321A (zh) * 2020-12-02 2021-06-04 Device control method and apparatus, storage medium, and electronic apparatus
CN112699739A (zh) * 2020-12-10 2021-04-23 Method for controlling a range hood by recognizing gestures based on a structured-light 3D camera
CN112905148B (zh) * 2021-03-12 2023-09-22 Voice broadcast control method and apparatus, storage medium, and electronic device
CN113076007A (zh) * 2021-04-29 2021-07-06 Display screen viewing-angle adjustment method, device, and storage medium
CN115877719A (zh) * 2021-08-25 2023-03-31 Control method for an intelligent terminal, and intelligent terminal
CN114363549B (zh) * 2022-01-12 2023-06-27 Intelligent script catwalk recording and processing method, apparatus, and system
CN114513380B (zh) * 2022-01-25 2024-01-16 Method and apparatus for controlling household appliances, household appliance, and storage medium
CN116596650B (zh) * 2023-07-17 2023-09-22 Bank physical-asset management system based on intelligent recognition technology

Citations (7)

Publication number Priority date Publication date Assignee Title
CN103824011A (zh) * 2014-03-24 2014-05-28 Information prompting method during security authentication, and electronic device
WO2015088141A1 (en) * 2013-12-11 2015-06-18 Lg Electronics Inc. Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances
CN104838336A (zh) * 2012-10-05 2015-08-12 Device-proximity-based data and user interaction
CN104978019A (zh) * 2014-07-11 2015-10-14 Browser display control method and electronic terminal
CN105045140A (zh) * 2015-05-26 2015-11-11 Method and apparatus for intelligently controlling controlled devices
CN105184134A (zh) * 2015-08-26 2015-12-23 Smart-watch-based information display method, and smart watch
CN105872685A (zh) * 2016-03-24 2016-08-17 Intelligent terminal control method and system, and intelligent terminal

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
JP2000099076A (ja) * 1998-09-25 2000-04-07 Execution environment setting device and method utilizing voice recognition
JP2000322358A (ja) * 1999-05-11 2000-11-24 Data display device, and recording medium storing a program for information display
KR100713281B1 (ko) * 2005-03-29 2007-05-04 Video display device with a program recommendation function according to emotional state, and control method thereof
AU2010221722A1 (en) * 2009-02-06 2011-08-18 Oculis Labs, Inc. Video-based privacy supporting system
KR20120051212A (ko) * 2010-11-12 2012-05-22 Method for recognizing a user gesture in a multimedia device, and multimedia device therefor
JP6070142B2 (ja) * 2012-12-12 2017-02-01 Mobile terminal, information processing method, and program
JP2015175983A (ja) * 2014-03-14 2015-10-05 Voice recognition device, voice recognition method, and program
JP6494926B2 (ja) * 2014-05-28 2019-04-03 Mobile terminal, gesture control program, and gesture control method
US9766702B2 (en) * 2014-06-19 2017-09-19 Apple Inc. User detection by a computing device
JP2016018264A (ja) * 2014-07-04 2016-02-01 Image forming apparatus, image forming method, and program
US20160057090A1 (en) * 2014-08-20 2016-02-25 Google Inc. Displaying private information on personal devices

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
CN104838336A (zh) * 2012-10-05 2015-08-12 Device-proximity-based data and user interaction
WO2015088141A1 (en) * 2013-12-11 2015-06-18 Lg Electronics Inc. Smart home appliances, operating method of thereof, and voice recognition system using the smart home appliances
CN103824011A (zh) * 2014-03-24 2014-05-28 Information prompting method during security authentication, and electronic device
CN104978019A (zh) * 2014-07-11 2015-10-14 Browser display control method and electronic terminal
CN105045140A (zh) * 2015-05-26 2015-11-11 Method and apparatus for intelligently controlling controlled devices
CN105184134A (zh) * 2015-08-26 2015-12-23 Smart-watch-based information display method, and smart watch
CN105872685A (zh) * 2016-03-24 2016-08-17 Intelligent terminal control method and system, and intelligent terminal

Non-Patent Citations (1)

Title
See also references of EP3422726A4 *

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN112015171A (zh) * 2019-05-31 2020-12-01 Smart speaker, and method, apparatus, and storage medium for controlling a smart speaker
CN111402885A (zh) * 2020-04-22 2020-07-10 Interaction method and system based on voice and air-imaging technology
CN113269124A (zh) * 2021-06-09 2021-08-17 Object recognition method, system, device, and computer-readable medium
CN113269124B (zh) * 2021-06-09 2023-05-09 Object recognition method, system, device, and computer-readable medium
CN113885710A (zh) * 2021-11-02 2022-01-04 Control method and control apparatus for intelligent devices, and intelligent system
CN113885710B (zh) * 2021-11-02 2023-12-08 Control method and control apparatus for intelligent devices, and intelligent system

Also Published As

Publication number Publication date
CN105872685A (zh) 2016-08-17
EP3422726A4 (en) 2019-08-07
US20190104340A1 (en) 2019-04-04
JP2019519830A (ja) 2019-07-11
EP3422726A1 (en) 2019-01-02

Similar Documents

Publication Publication Date Title
WO2017162019A1 (zh) Intelligent terminal control method and intelligent terminal
WO2019174090A1 (zh) Control method, apparatus, and device for screenshot file sharing, and computer storage medium
WO2017018683A1 (en) User terminal apparatus and controlling method thereof
WO2017113974A1 (zh) Voice processing method, apparatus, and terminal
WO2014017858A1 (en) User terminal apparatus and control method thereof
CN104503688A (zh) Control implementation method and apparatus for a smart hardware device
CN104992091A (zh) Method and apparatus for accessing a terminal
US11232180B2 (en) Face authentication system, face authentication method, biometrics authentication system, biometrics authentication method, and storage medium
WO2020226289A1 (en) Electronic apparatus, user terminal, and method of controlling the electronic apparatus and the user terminal
EP3410676B1 (en) Communication terminal, communication system, display control method, and program
WO2019161598A1 (zh) Interaction method, apparatus, and device for instant messaging and email, and storage medium
CN104363205A (zh) Application login method and apparatus
WO2017092398A1 (zh) Mobile terminal-based photographing method and mobile terminal
CN104899501A (zh) Display method, apparatus, and terminal for a conversation list
WO2022142330A1 (zh) Identity authentication method and apparatus, electronic device, and storage medium
CN107783715A (zh) Application launching method and apparatus
WO2020108024A1 (zh) Information interaction method and apparatus, electronic device, and storage medium
WO2012053875A2 (ko) Apparatus and system for transmitting and receiving data by means of fingerprint information
WO2013182073A1 (zh) Method, system, and storage medium for verifying file security
WO2016013693A1 (ko) Terminal device and control method of terminal device
WO2017069411A1 (ko) Method and device for releasing the security state of a security-processed object
WO2017039250A1 (en) Video communication device and operation thereof
CN107153621A (zh) Device identification method and apparatus
CN105159181B (zh) Control method and apparatus for a smart device
CN105846223A (zh) USB interface plug, control method, and apparatus

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018549772

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2017769296

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2017769296

Country of ref document: EP

Effective date: 20180928

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17769296

Country of ref document: EP

Kind code of ref document: A1