WO2019241920A1 - Terminal control method and device - Google Patents

Terminal control method and device

Info

Publication number
WO2019241920A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
expression
terminal
virtual
instruction
Prior art date
Application number
PCT/CN2018/091927
Other languages
French (fr)
Chinese (zh)
Inventor
张霞
Original Assignee
优视科技新加坡有限公司
优视科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 优视科技新加坡有限公司 and 优视科技有限公司
Priority to CN201880001138.XA (CN109496289A)
Priority to PCT/CN2018/091927 (WO2019241920A1)
Publication of WO2019241920A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/174 Facial expression recognition

Abstract

Disclosed in the embodiments of the present application are a terminal control method and device. In the method, in certain scenarios a user may issue a non-contact instruction, such as a motion, facial expression, or voice instruction, to a terminal, and upon receiving the non-contact instruction the terminal executes the corresponding operation. The described embodiments reduce touch interaction between the user and the terminal to a certain extent and improve the convenience of interaction.

Description

Terminal control method and device
Technical Field
The present application relates to the field of computer technology, specifically to the field of Internet technology, and in particular to a terminal control method and device.
Background
With the development of information technology, terminals offer increasingly rich functionality and are widely used in people's daily life and work. The convenience of human-computer interaction has become an important research topic and development direction.
In the prior art, controlling a terminal typically requires contact-based interaction between the user and the terminal. For example, a user taps an application (App) icon on a touch screen to launch the application, or slides a finger across the touch screen to unlock the terminal.
Given the prior art, a more convenient way of controlling a terminal is needed.
Summary of the Invention
The purpose of the present application is to provide a terminal control method and device that solve the problem of contact-based interaction being inconvenient for controlling a terminal in certain application scenarios.
In a first aspect, an embodiment of the present application provides a terminal control method, including:
receiving, by a terminal in a specific state, a non-contact instruction from a user; and
executing, according to the received non-contact instruction, a preset operation corresponding to the non-contact instruction.
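The two steps of the first aspect can be sketched as a simple dispatch table keyed by terminal state and recognized instruction. This is purely an illustrative sketch: the state names, instruction strings, and operation table below are hypothetical and not part of the claimed method.

```python
from typing import Callable, Dict, Tuple

# Hypothetical bindings: (terminal state, recognized non-contact instruction)
# -> preset operation. A real terminal would populate this from presets.
OPERATIONS: Dict[Tuple[str, str], Callable[[], str]] = {
    ("camera_preview", "voice:take photo"): lambda: "shutter released",
    ("emoji_panel", "expression:smile"): lambda: "show /smile emoji",
}

def handle_instruction(state: str, instruction: str) -> str:
    """Execute the preset operation corresponding to a non-contact
    instruction received while the terminal is in a specific state."""
    op = OPERATIONS.get((state, instruction))
    if op is None:
        return "no preset operation"  # instruction not bound in this state
    return op()
```

The same instruction can thus map to different operations (or none) depending on the state the terminal is in when it is received.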
In some embodiments, receiving, by the terminal in the specific state, the non-contact instruction from the user includes:
invoking, by the terminal in the specific state, an image acquisition unit to capture an expression feature of the user; and
determining the expression feature as the non-contact instruction.
In some embodiments, invoking, by the terminal in the specific state, the image acquisition unit to capture the expression feature of the user includes:
receiving, by the terminal, an expression capture trigger operation issued by the user on an instant messaging interface; and
upon receiving the expression capture trigger operation, invoking the image acquisition unit to capture the expression feature of the user.
In some embodiments, executing, according to the received non-contact instruction, the preset operation corresponding to the non-contact instruction includes:
searching, according to the expression feature, existing virtual expressions for a virtual expression matching the expression feature; and
displaying the found virtual expression to the user.
In some embodiments, displaying the found virtual expression to the user includes:
displaying the found virtual expression in a designated area of the terminal interface for the user to select.
In a second aspect, an embodiment of the present application further provides a terminal control device, including:
a receiving processing module, configured to receive a non-contact instruction from a user in a specific state; and
an execution module, configured to execute, according to the received non-contact instruction, a preset operation corresponding to the non-contact instruction.
In some embodiments, the receiving processing module is configured to invoke, in the specific state, an image acquisition unit to capture an expression feature of the user, and to determine the expression feature as the non-contact instruction.
In some embodiments, the receiving processing module is configured to receive an expression capture trigger operation issued by the user on an instant messaging interface and, upon receiving the expression capture trigger operation, to invoke the image acquisition unit to capture the expression feature of the user.
In some embodiments, the execution module is configured to search, according to the expression feature, existing virtual expressions for a virtual expression matching the expression feature, and to display the found virtual expression to the user.
In some embodiments, the execution module is configured to display the found virtual expression in a designated area of the terminal interface for the user to select.
With the terminal control method and device provided by the present application, in certain scenarios the user can issue non-contact instructions such as motions, expressions, or sounds to the terminal, and the terminal executes the corresponding operation upon receiving them. Such a control approach reduces contact-based interaction between the user and the terminal to a certain extent, thereby improving the convenience of interaction. Moreover, non-contact instructions do not require a wearable smart device bound to the terminal, which reduces the associated cost.
Specifically, in the scenario of sending an emoticon during social interaction, the terminal can capture the user's facial expression and, based on it, search existing virtual expressions and select one that matches. With the above method, the user no longer needs to scroll through pages of the virtual expression candidate area to find a virtual expression; instead, a more convenient interaction is available: the user simply makes the corresponding facial expression, and the terminal finds a matching virtual expression based on it. This reduces, to a certain extent, the interactive operations the user performs to find certain virtual expressions, makes the interaction more convenient, and shortens the time spent searching for virtual expressions.
Brief Description of the Drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the detailed description of the non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic diagram of the interaction between a user and a terminal according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a terminal control method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an expression candidate area according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of a terminal control method in a scenario of querying virtual expressions by facial expression according to an embodiment of the present application;
FIG. 5a is a schematic diagram of an expression control according to an embodiment of the present application;
FIG. 5b is a schematic diagram of the interaction in a scenario of querying virtual expressions by facial expression according to an embodiment of the present application;
FIG. 5c is a schematic diagram of sending an expression according to an embodiment of the present application;
FIG. 6a is a schematic diagram of one way of displaying virtual expressions found based on the user's expression according to an embodiment of the present application;
FIG. 6b is a schematic diagram of another way of displaying virtual expressions found based on the user's expression according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of a terminal control method in a scenario of controlling shooting by facial expression according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a terminal control device according to an embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are merely intended to explain the related invention rather than to limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.
It should be noted that, where no conflict arises, the embodiments of the present application and the features therein may be combined with one another. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
As described above, a user usually needs to control the terminal to perform certain operations through contact-based interaction, which may include, but is not limited to, taps, presses, drags, and slides performed on the terminal screen, on physical keys, or on an external device connected to the terminal.
In some cases, however, such contact-based interaction is inconvenient. For example, when a user takes a selfie with a tablet computer, the user must control shooting through the virtual shutter button displayed on the shooting interface; because the tablet's screen is large, operating the virtual shutter button is awkward and may change the shooting angle or introduce shake, degrading the photo. As another example, the instant messaging (IM) function on a mobile communication terminal provides users with a large number of virtual expressions (e.g., emoji). While chatting through the IM function, a user may want to use a particular virtual expression, but because there are many virtual expressions and the candidate area is limited, the user usually has to swipe through pages of the candidate area to find the desired one.
To this end, the embodiments of the present application provide a terminal control method that allows a user, in certain situations, to control the terminal to perform the corresponding operation through non-contact interaction, further improving the convenience of interaction.
Referring to FIG. 1, an interaction control manner between a user and a terminal according to an embodiment of the present application is shown. As shown in FIG. 1, the user can control the terminal through non-contact instructions. It should be understood that, while issuing a non-contact instruction, the user need not touch the terminal's touch screen, its physical keys, or an external device connected to it; of course, controlling the terminal by non-contact instruction should not be read as the user never touching the terminal at all. Specifically, in some implementations the user may hold the terminal and issue non-contact instructions through expressions, gestures, sounds, and the like; in other implementations the user and the terminal may be completely separate, with the user issuing non-contact instructions remotely, in which case the distance between them should not prevent the terminal from receiving the instructions.
It should be noted that, in the embodiments of the present application, a non-contact instruction is a contact-free interactive instruction in the form of an expression, a body movement, a sound, or the like, and it requires no additional external device associated with the terminal (e.g., a smart wearable device bound to the terminal) in order to control the terminal.
In general, the terminal has an image acquisition and/or sound acquisition function and may include, but is not limited to, mobile terminals such as mobile phones, tablet computers, notebook computers, smart watches, and cameras, or computers with image and sound acquisition functions; these are not enumerated exhaustively here. Of course, a terminal device that lacks built-in image or sound acquisition but achieves it through a connected external device should also be understood to fall within the scope of the terminal described in the embodiments of the present application.
In addition, among the above mobile terminals, those with communication functions, such as mobile phones, tablet computers, notebook computers, and smart watches, are collectively referred to as mobile communication terminals in the subsequent description of the embodiments of the present application.
Based on the architecture shown in FIG. 1, the technical solutions provided in the embodiments of the present application are described in detail below.
With continued reference to FIG. 2, a terminal control method provided in an embodiment of the present application is shown, which specifically includes the following steps:
Step S201: The terminal receives a non-contact instruction from the user in a specific state.
The specific state may be understood as a state in which the terminal is running a certain application or has enabled a certain function, such as the state in which a camera is framing but not yet shooting, or the state in which an IM application running on a mobile phone is displaying the expression candidate area. It should be noted that, in existing implementations, interaction between the user and a terminal in such a state is usually limited to contact-based interaction, which is inconvenient for the user to a certain extent.
In the embodiments of the present application, by contrast, the terminal can receive and recognize the user's non-contact instruction in the above specific state. In practice, the user may issue a non-contact instruction in several ways, e.g., through a facial expression, a body movement, or a sound. Accordingly, the terminal receives non-contact instructions through its acquisition units or components, such as a camera or microphone. Of course, the acquisition unit or component mentioned here may also be an external device, which is not specifically limited here.
The terminal recognizes non-contact instructions through a corresponding recognition function, which may be provided by the operating system running on the terminal or by an application (App) installed on it. In general, the recognition function may include at least one of: sound detection, natural-language semantic recognition, posture recognition, motion recognition, and facial feature recognition (which may further include expression recognition); this should not be construed as limiting the present application.
Step S203: According to the received non-contact instruction, a preset operation corresponding to the non-contact instruction is executed.
A non-contact instruction instructs the terminal to perform a corresponding operation. It should be understood that the operation corresponding to a non-contact instruction may be preset by the device manufacturer or a function provider, or set by the user. In one example, when the mobile phone is in photographing mode, the user can control it to take a picture by saying "take photo". In another example, the user can record the phrase "click" as an instruction to control the mobile phone to take a picture.
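The point that instruction-to-operation bindings may be factory-preset or user-defined can be illustrated with a small registry. This is a sketch only; the class, phrase strings, and operation names are assumptions for illustration, not the patent's implementation.

```python
class InstructionRegistry:
    """Maps recognized voice phrases to operations. Ships with factory
    presets and lets the user register additional custom phrases."""

    def __init__(self):
        # Factory-preset binding, e.g. set by the device manufacturer.
        self._bindings = {"take photo": "capture"}

    def register(self, phrase: str, operation: str) -> None:
        """User-defined binding, e.g. a phrase the user recorded."""
        self._bindings[phrase] = operation

    def resolve(self, phrase: str):
        """Return the operation bound to a phrase, or None if unbound."""
        return self._bindings.get(phrase)

registry = InstructionRegistry()
# The user records "click" as an additional shutter phrase.
registry.register("click", "capture")
```

Both "take photo" and "click" now resolve to the same capture operation, while unrecorded phrases resolve to nothing.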
Clearly, through the above steps, in certain scenarios the user can issue non-contact instructions such as motions, expressions, and sounds to the terminal, and the terminal executes the corresponding operation upon receiving them. Such a control approach reduces contact-based interaction between the user and the terminal to a certain extent, thereby improving the convenience of interaction. Moreover, non-contact instructions do not require a wearable smart device bound to the terminal, which reduces the associated cost.
For the above steps, the execution subject is usually the terminal itself, but in some embodiments it may also be an App running on the terminal. Moreover, the execution subject may change during execution: for example, the execution subject of step S201 may be the terminal itself while that of step S203 is an App running on the terminal. Of course, this depends on the actual application and should not be understood as limiting the present application.
The terminal control method in the embodiments of the present application is described in detail below with reference to specific application scenarios.
Scenario One
When users communicate by instant messaging, virtual expressions can make chatting more fun. Accordingly, the chat function on a mobile communication terminal, or a third-party IM application, provides rich virtual expressions for users. However, the virtual expression candidate area in the chat interface is limited in size, and when there are many virtual expressions it cannot display them all.
With continued reference to FIG. 3, a schematic diagram of the virtual expression candidate area in an IM application is shown. As shown in FIG. 3, the candidate area cannot display all virtual expressions at once. With the existing interaction method, the user has to swipe through pages of the candidate area to browse and find the virtual expressions not currently shown. Clearly, in this scenario the existing interaction method is inconvenient for the user, who spends a certain amount of time searching for a virtual expression.
To this end, with continued reference to FIG. 4, a terminal control method in this scenario is shown. As shown in FIG. 4, the method may specifically include the following steps:
Step S401: The terminal captures the user's expression feature in the expression candidate state.
It should be noted that the virtual expressions may include at least character expressions, picture expressions, or animated expressions. A virtual expression may be presented statically or dynamically, which is not specifically limited here.
In the embodiments of the present application, the expression candidate state may be understood as a state, triggered by the user, in which the terminal prepares to provide candidate virtual expressions to the user.
It should be understood that, for the above step S401, when the terminal is in the expression candidate state, it invokes an image acquisition unit to capture the user's expression feature and determines the captured expression feature as a non-contact instruction. The image acquisition unit may be built into the terminal or be an external camera connected to it.
Generally, the expression candidate state is triggered by an expression capture trigger operation issued by the user, of which there are several kinds:
In one possible implementation, the expression candidate state is triggered when the user goes to view the virtual expressions. Specifically, the user may tap the virtual expression candidate control in the IM interface (this tap may be regarded as an expression capture trigger operation); the IM interface then displays the virtual expression candidate area (the area shown in FIG. 3) and at the same time the terminal enters the expression candidate state.
In another possible implementation, the user may tap an expression capture control in the IM interface (similarly, this tap may also be regarded as an expression capture trigger operation); the expression capture control wakes the terminal's image acquisition unit (e.g., a camera) to capture the user's expression, and at this point the terminal may likewise be considered to be in the expression candidate state.
Therefore, the process by which the terminal, in the expression candidate state, invokes the image acquisition unit to capture the user's expression feature may be: the terminal receives the expression capture trigger operation issued by the user on the instant messaging interface and, upon receiving it, invokes the image acquisition unit to capture the user's expression feature.
Either of the above may trigger the expression candidate state, depending on the actual situation. For example, the user holding the phone and turning its front camera toward himself or herself may also be regarded as an expression capture trigger operation, as may the user unlocking the terminal. None of this should be construed as limiting the present application.
In the embodiments of the present application, the capture and recognition of the user's expression may be implemented by a facial feature recognition function, in particular an expression recognition function. The specific expression recognition function may be implemented with deep learning algorithms, neural network models, and other techniques, which are not described in further detail here.
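As a rough illustration of what an expression recognition function outputs, consider a toy classifier over normalized facial-landmark ratios. The patent leaves the recognizer to deep-learning models; the heuristic rules, feature names, and thresholds below are invented for this sketch and are not claimed by the application.

```python
def classify_expression(mouth_width: float, mouth_open: float,
                        eye_open: float) -> str:
    """Toy classifier over hypothetical normalized ratios in [0, 1].
    A real implementation would use a trained neural network over
    facial landmarks or image pixels."""
    if eye_open < 0.2:
        return "closed_eyes"   # eyes mostly shut
    if mouth_open > 0.5:
        return "laugh"         # mouth wide open
    if mouth_width > 0.6:
        return "smile"         # mouth stretched but closed
    return "neutral"
```

Whatever the recognizer's internals, its output is a category label like these, which the matching step below consumes.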
Step S403: According to the captured expression feature, a virtual expression matching the expression feature is found among the virtual expressions, so that it can be sent as an instant messaging message.
In this scenario, the expression feature obtained when the terminal captures the expression on the user's face may be regarded as a non-contact instruction, and the terminal's operation of finding a matching virtual expression among the virtual expressions, or of sending that virtual expression, may be regarded as executing the operation corresponding to the non-contact instruction.
As one possible implementation of the embodiments of the present application, a corresponding virtual expression may be found according to the category of the expression. Specifically, in practical applications, the user's expressions may be divided into different categories according to the state of the facial features, such as smiling, pouting, frowning, closing the eyes, and so on; expressions may, of course, also be divided according to emotion, such as angry, happy, and so on. It should be understood that the categories into which expressions are actually divided usually depend on the specific expression recognition function. In other words, the category to which the user's actual facial expression belongs can be identified through the expression recognition function.
In addition, virtual expressions are usually provided with corresponding identification information, such as "/smile", "/frown", "/glare", and so on (the expression identification information may usually be set by the device manufacturer or a third-party IM application provider). This is, of course, only one possible form of identification information and should not be construed as limiting the present application. Clearly, given the category to which the user's expression belongs and the identification information of the virtual expressions, the virtual expression matching the user's expression can be determined.
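As a minimal sketch of the category-based lookup just described, the identification strings could be kept in a simple table keyed by the recognized category. The category labels, identifier strings, and helper name below are illustrative assumptions, not part of the application.

```python
# Hypothetical identifier table; in practice it would be supplied by the
# device manufacturer or a third-party IM application provider.
VIRTUAL_EXPRESSION_IDS = {
    "smile": "/smile",
    "frown": "/frown",
    "glare": "/glare",
}

def find_matching_expression(category):
    """Map a recognized expression category to a virtual-expression identifier.

    Returns None when no virtual expression of that category exists.
    """
    return VIRTUAL_EXPRESSION_IDS.get(category)
```

With this shape, the lookup is a single dictionary access once the recognition function has produced a category label.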
As another possible implementation of the embodiments of the present application, a corresponding virtual expression may be found according to expression similarity. Specifically, the terminal may extract the user's expression features and compute the similarity between the user's expression features and the features of each virtual expression, thereby finding the virtual expression most similar to the user's expression. For example, based on the characteristics of the user's mouth, eyes, and face while laughing, the terminal may find similar virtual expressions such as a smile, a squinting smile, or a laugh.
The found virtual expression can be sent directly to other users as an instant messaging message.
For example, as shown in FIG. 5a, when a user is chatting on a mobile phone and wants to send an expression, the user can select the expression control in the current chat interface, whereupon the virtual expression candidate area is displayed in the current chat interface. At this point, the mobile phone is in the expression candidate state: its front camera is activated and collects the user's expression. As shown in FIG. 5b, supposing the phone collects and recognizes the user's facial expression as a "smile", the virtual expression identified as "/smile" can be found and selected among the virtual expressions. As shown in FIG. 5c, the found "smile" virtual expression can be sent directly to other users as an instant messaging message.
As can be seen from the above, in the scenario of sending expressions during social interaction, the terminal can collect the user's facial expression and, based on it, find and select from the existing virtual expressions the one that matches. With the above method of the present application, the user does not need to swipe through pages of the virtual expression candidate area to find a virtual expression; instead, a more convenient interaction can be used: the user simply makes the corresponding expression, and the terminal finds a matching virtual expression based on it. This reduces, to a certain extent, the interactive operations the user performs to find certain virtual expressions, making the interaction more convenient while also shortening the time spent searching for virtual expressions.
It should be noted that, in the above, the terminal recognizing the category to which the user's expression belongs may specifically mean that the terminal collects an image of the user's facial expression, extracts expression feature data, and determines the expression category indicated by that feature data, thereby determining the category of the user's expression. The process of extracting expression feature data may be carried out by a facial feature extraction algorithm that locates and extracts the facial features, texture regions, and predefined feature points; the specific process is not described in further detail here.
Once the category to which the user's expression belongs is determined, the identification information of the virtual expressions can be traversed in order to find the virtual expressions of the same category.
The terminal determining the similarity between the user's expression and a virtual expression may specifically mean that the terminal collects an image of the user's facial expression, extracts expression feature data, and computes the similarity between the extracted expression feature data and the feature data corresponding to the virtual expression. Similarity measures such as Euclidean distance or cosine distance may, of course, be used; no specific limitation is made here.
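The similarity-based lookup could be sketched as follows, assuming, purely for illustration, that both the user's expression features and each virtual expression's features are fixed-length numeric vectors; the application does not fix a feature format, and cosine similarity is only one of the measures it mentions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def most_similar(user_features, virtual_expressions):
    """Return the identifier of the virtual expression whose (hypothetical)
    feature vector is most similar to the user's extracted features."""
    return max(virtual_expressions,
               key=lambda ident: cosine_similarity(user_features,
                                                   virtual_expressions[ident]))

# Hypothetical catalog of virtual-expression feature vectors.
catalog = {
    "/smile": (0.9, 0.1, 0.2),
    "/laugh": (0.7, 0.6, 0.1),
    "/frown": (0.1, 0.2, 0.9),
}
```

Calling `most_similar((0.85, 0.15, 0.25), catalog)` would pick `/smile`, the entry closest in direction to the user's features.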
In the embodiments of the present application, the number of virtual expressions found may be one or more than one, and the processing differs accordingly:
In one feasible implementation, if exactly one virtual expression is found, it can be sent directly as an instant messaging message.
In another feasible implementation, more than one virtual expression may be selected. In this case, each of the found virtual expressions may be displayed to the user, who decides which one to send. Specifically, when more than one virtual expression is found, the presentation shown in FIG. 6a may be adopted: the virtual expressions matching the user's actual expression are displayed in a separate row above the virtual expression candidate area. If the user selects one of them, the selected virtual expression is sent as an instant messaging message and the row disappears.
In practical applications, the presentation shown in FIG. 6b may also be adopted: the virtual expressions matching the user's actual expression are displayed as a newly added layer over the position of the original virtual expression candidate area. If the user selects a virtual expression, the new layer disappears and the original virtual expression candidate area is displayed again.
Of course, the presentations in FIGS. 6a and 6b are merely examples; other presentations, such as pop-up windows or floating layers, may be adopted in practical applications, depending on the needs of the actual application.
Scenario Two
When a user takes a group photo with other users using a terminal with a shooting function, such as a mobile phone, camera, or tablet, the existing interaction methods may be inconvenient:
If the user holds the terminal to shoot, factors such as the terminal's size and shape may make it difficult to tap the virtual or physical shutter button, which may affect the shooting result. If the user shoots with a timer function, the preparation time the timer provides is usually fixed and cannot be adjusted by the user, which is inconvenient for group photos.
Although the user can resort to external devices such as selfie sticks or smart watches, using external devices undoubtedly increases the cost of shooting.
Therefore, in this scenario, an embodiment of the present application provides a terminal control method which, as shown in FIG. 7, may specifically include the following steps:
Step S701: The terminal collects and monitors the user's expression in a pre-shooting state.
The pre-shooting state may be regarded as a state in which the terminal has started framing but has not yet shot. Specifically, if the terminal is a mobile communication terminal such as a mobile phone or tablet, the pre-shooting state may be that the terminal has launched its built-in camera function or a third-party shooting app, so that the shooting interface is displayed on the screen but no shot has been taken; if the terminal is a camera, the pre-shooting state may be that the camera has started framing but the user has not yet pressed the shutter.
It should be noted that, when actually shooting, the user usually adjusts his or her facial expression to a state suitable for shooting and, generally, holds that expression for a period of time. Therefore, in this embodiment, the terminal monitoring the user's expression may be regarded as monitoring whether the user's expression changes within a specified time.
The monitoring of the user's expression can be implemented by a corresponding monitoring function. In one feasible implementation, the user's face may be located by a face recognition function, and the face's texture, lighting, preset anchor points, and so on may be monitored for changes, from which it is determined whether the user's expression has changed. This approach does not require an overly complex recognition algorithm.
Step S703: When it is detected that the expression has not changed within a set duration, a preset shooting operation is performed.
In this scenario, the non-contact instruction may be regarded as the photographed user's expression remaining unchanged within the set duration; correspondingly, the preset shooting operation performed by the terminal may be regarded as performing the operation corresponding to the non-contact instruction.
The preset shooting operation may include triggering the shutter, burst shooting, switching to video, and so on, and may be set according to the needs of the actual application.
The preset duration may be set to 1 to n seconds; it may be preset by the device manufacturer or a third-party shooting app provider as different time levels, or defined by the user. Once the terminal detects that the user's facial expression has not changed within the set time, the above shooting operation can be performed, thereby implementing non-contact interactive shooting control.
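A frame-based sketch of the stability check described above is given below. The flattened landmark tuples, the per-coordinate tolerance, and the frame-count threshold (standing in for the 1 to n second duration) are all illustrative assumptions; the embodiment does not prescribe a landmark format.

```python
def frames_until_trigger(frames, hold_frames, tolerance):
    """Return the index of the frame at which the shutter would fire, i.e. the
    first frame at which the landmark coordinates have stayed within
    `tolerance` of a reference frame for `hold_frames` consecutive frames.
    Returns None if the expression never holds long enough.

    Each frame is assumed to be a flat tuple of landmark coordinates.
    """
    reference = None
    stable = 0
    for i, frame in enumerate(frames):
        if reference is not None and \
                max(abs(a - b) for a, b in zip(frame, reference)) <= tolerance:
            stable += 1
        else:
            reference = frame  # expression changed: restart the countdown
            stable = 0
        if stable >= hold_frames:
            return i
    return None
```

Note that the sketch compares against a fixed reference frame, so slow drift would eventually reset the countdown; a production implementation would likely smooth the landmarks first.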
Clearly, through the above steps, a user taking a selfie or group photo can control shooting through an interaction of keeping the expression unchanged for a certain duration. This approach is not limited by the terminal's size and does not require the user to enable a timer function; the terminal merely has to be put into the pre-shooting state, which effectively improves the convenience of the interactive control. Moreover, the above method focuses on monitoring whether the user's expression changes within a certain duration; it does not need to recognize the specific expression the user makes, so no overly complex expression recognition algorithm is required.
In actual shooting with the above method, more than one user may be photographed. In that case, to implement shooting control, the expression of every photographed user whose face is toward the lens needs to remain unchanged for the set duration.
As one feasible implementation of the present application, after shooting, the terminal may provide the user with the images collected for a period before and/or after the shutter was triggered as the shooting result. For example, when the terminal detects that the user's expression has not changed within the set duration (say 1 s), it performs the shooting operation (the terminal automatically triggers the shutter). Supposing the moment the terminal automatically triggers the shutter is t, the shooting result provided to the user may be the image content captured by the terminal from time t-1 to time t, from time t to time t+1, or from time t-1 to time t+1.
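One way to realize the t-1 to t+1 result just described is to keep a short rolling buffer of preview frames and cut a clip around the trigger moment. The frame-rate arithmetic and the bounded-buffer size below are illustrative assumptions.

```python
from collections import deque

def clip_around_trigger(frames, trigger_index, fps,
                        pre_seconds=1.0, post_seconds=1.0):
    """Slice out the frames captured from (t - pre_seconds) to
    (t + post_seconds), where `trigger_index` is the frame at which the
    shutter fired. Clamps to the available frame range."""
    start = max(0, trigger_index - int(pre_seconds * fps))
    end = min(len(frames), trigger_index + int(post_seconds * fps) + 1)
    return frames[start:end]

# In a live pipeline, the pre-trigger half could be kept in a bounded buffer
# so that frames before the trigger are still available afterwards:
preview_buffer = deque(maxlen=30)  # e.g. one second of frames at 30 fps
```

The resulting slice can then be offered to the user to save as a photo (one frame) or a short video (the whole clip).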
The user can then select, from the above shooting result, the content to save (as either a photo or a video). Of course, this should not be construed as limiting the present application.
It should be noted that, in the above scenarios, the user's expression serves as the non-contact instruction. In fact, the above terminal control method of the embodiments of the present application is not limited to this: in some applications, the non-contact instructions issued by the user may be diverse; for example, the user may make a corresponding body movement while making a facial expression.
The body movements referred to here include, but are not limited to, the user's gestures, posture, and limb movements; they may be dynamic or static, and no specific limitation is made here.
Thus, in Scenario One above, in addition to finding a virtual expression based on the user's expression alone, the above method of the embodiments of the present application can also find the corresponding virtual expression according to the user's expression together with the body movement made at the same time. Specifically, in this application mode, the terminal collects the user's expression and body movement, and uses the collected expression features and movement features as the non-contact instruction issued by the user.
As for virtual expressions, some are composed of both the expression and the movement of an avatar. Accordingly, a virtual expression matching the user's expression features and movement features can be found among the virtual expressions. The method for finding it may be similar to the above: either determine the category to which the user's features (including expression features and movement features) belong and find virtual expressions of the same category, or find the virtual expression by comparing the similarity between the user's expression and movement and the virtual expressions; this is not described in further detail here.
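For avatar-style virtual expressions composed of both an expression and an action, the category-based variant of the lookup could key on the pair of recognized categories, falling back to expression-only entries. The category names and identifiers here are hypothetical.

```python
# Hypothetical catalog: keys are (expression category, action category);
# None as the action marks an expression-only virtual expression.
AVATAR_EXPRESSIONS = {
    ("smile", "wave"): "/smile_wave",
    ("smile", None): "/smile",
    ("angry", "fist"): "/angry_fist",
}

def find_avatar_expression(expression_category, action_category):
    """Prefer an exact (expression, action) match; fall back to an
    expression-only entry when the action has no dedicated expression."""
    return (AVATAR_EXPRESSIONS.get((expression_category, action_category))
            or AVATAR_EXPRESSIONS.get((expression_category, None)))
```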
In practical applications, the collection and recognition of the user's body movements can, of course, be implemented by various motion recognition models such as posture recognition, gesture recognition, and limb movement recognition.
In Scenario Two above, the user's expression combined with the user's body movement can likewise be used to implement shooting control of the terminal. Specifically, in one possible application mode, the terminal collects and monitors the user's expression features and movement features in the pre-shooting state and, when it detects that they have not changed within a set duration (or have changed only within a preset range), performs the preset shooting operation.
In another possible application mode, the user may control the terminal to trigger the shutter by keeping the expression unchanged for a certain time, while controlling the terminal to switch between different shooting modes through body movements such as gestures. For example, in the pre-shooting state, if the user extends an index finger and mimics pressing a camera shutter, the terminal takes a photo; as another example, in the pre-shooting state, if the user extends the left (or right) hand with the four fingers together and bent, the terminal shoots video. Of course, this is merely one possible example; the specifics will depend on the needs of the actual application.
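The gesture-to-mode switching just described could be sketched as a small dispatch table. The gesture labels and mode names are illustrative assumptions, since the embodiment leaves the concrete gesture set to the actual application.

```python
# Hypothetical gesture labels as produced by a gesture-recognition model.
GESTURE_MODES = {
    "index_finger_press": "photo",
    "four_fingers_bent": "video",
}

def shooting_mode(gesture, default="photo"):
    """Map a recognized gesture to a shooting mode, keeping the current
    default when the gesture is not a mode-switching one."""
    return GESTURE_MODES.get(gesture, default)
```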
Of course, the above should not be construed as limiting the present application.
The above is the terminal control method provided by the embodiments of the present application. Based on the same idea, the embodiments of the present application further provide a corresponding terminal control device.
Specifically, with continued reference to FIG. 8, a terminal control device provided in an embodiment of the present application is shown. The device includes:
a receiving processing module 801, configured to receive a non-contact instruction from a user in a specific state;
an execution module 802, configured to perform, according to the received non-contact instruction, a preset operation corresponding to the non-contact instruction.
The terminal control device shown in FIG. 8 can be applied in different scenarios.
Scenario One: terminal control through expressions during social interaction.
In this scenario, the specific state includes at least an expression candidate state, where the expression candidate state is used to provide virtual expressions to the user.
Further, the receiving processing module 801 is configured to call an image collection unit in the specific state to collect the user's expression features, and to determine the expression features as a non-contact instruction.
Further, the receiving processing module 801 is configured to receive an expression collection trigger operation issued by the user on an instant messaging interface and, upon receiving the expression collection trigger operation, to call the image collection unit to collect the user's expression features.
Further, the execution module 802 is configured to find, among the existing virtual expressions and according to the expression features, a virtual expression matching the expression features, and to display the found virtual expression to the user.
Further, the execution module 802 is configured to display the found virtual expression in a designated area of the terminal interface for the user to select.
Scenario Two: shooting control.
In this scenario, the specific state includes at least a pre-shooting state, where the pre-shooting state may be regarded as a state in which the terminal has started framing but has not yet shot.
The receiving processing module 801 is configured to collect and monitor the user's expression in the pre-shooting state.
The execution module 802 is configured to perform a preset shooting operation when it is detected that the expression has not changed within a set duration.
The device shown in FIG. 8 may, in practical applications, be implemented by a physical electronic device. Specifically, the terminal includes one or more processors, and a storage device configured to store one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the method described above.
The device shown in FIG. 8 may, in practical applications, also be embodied as a computer-readable storage medium. Specifically, a computer program is stored on the computer-readable storage medium, and the program, when executed by a processor, implements the method described above.
The embodiments in the present application are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made between them, and each embodiment focuses on its differences from the others. In particular, since the device, equipment, and medium embodiments are basically similar to the method embodiments, they are described relatively briefly; for relevant parts, reference may be made to the description of the method embodiments, which is not repeated here.
Thus far, specific embodiments of the subject matter have been described. Other embodiments fall within the scope of the appended claims. In some cases, the actions recited in the claims may be performed in a different order and still achieve desirable results. Moreover, the processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through a communication portion, and/or installed from a removable medium. When the computer program is executed by a central processing unit (CPU), the above functions defined in the method of the present application are performed. It should be noted that the computer-readable medium described in the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing.
More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wire, optical fiber cable, RF, and the like, or any suitable combination of the foregoing.
Computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including a receiving unit, a parsing unit, an information selection unit, and a generating unit. The names of these units do not, in some cases, constitute a limitation on the units themselves; for example, the receiving unit may also be described as "a unit that receives a user's web browsing request".
The above description is merely a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.

Claims (12)

  1. A terminal control method, comprising:
    receiving, by a terminal in a specific state, a contactless instruction from a user;
    executing, according to the received contactless instruction, a preset operation corresponding to the contactless instruction.
  2. The method according to claim 1, wherein the terminal receiving the contactless instruction from the user in the specific state comprises:
    invoking, by the terminal in the specific state, an image acquisition unit to collect expression features of the user;
    determining the expression features as the contactless instruction.
  3. The method according to claim 2, wherein the terminal invoking the image acquisition unit in the specific state to collect the expression features of the user comprises:
    receiving, by the terminal, an expression collection trigger operation issued by the user on an instant messaging interface;
    after the expression collection trigger operation is received, invoking the image acquisition unit to collect the expression features of the user.
  4. The method according to claim 2, wherein executing, according to the received contactless instruction, the preset operation corresponding to the contactless instruction comprises:
    searching, according to the expression features, existing virtual expressions for a virtual expression matching the expression features;
    displaying the found virtual expression to the user.
  5. The method according to claim 4, wherein displaying the found virtual expression to the user comprises:
    displaying the found virtual expression in a designated area of a terminal interface for the user to select.
  6. A terminal control device, comprising:
    a receiving processing module configured to receive a contactless instruction from a user in a specific state;
    an execution module configured to execute, according to the received contactless instruction, a preset operation corresponding to the contactless instruction.
  7. The device according to claim 6, wherein the receiving processing module is configured to invoke, in the specific state, an image acquisition unit to collect expression features of the user, and to determine the expression features as the contactless instruction.
  8. The device according to claim 7, wherein the receiving processing module is configured to receive an expression collection trigger operation issued by the user on an instant messaging interface and, after the expression collection trigger operation is received, to invoke the image acquisition unit to collect the expression features of the user.
  9. The device according to claim 7, wherein the execution module is configured to search, according to the expression features, existing virtual expressions for a virtual expression matching the expression features, and to display the found virtual expression to the user.
  10. The device according to claim 9, wherein the execution module is configured to display the found virtual expression in a designated area of a terminal interface for the user to select.
  11. An electronic device, comprising:
    one or more processors; and
    a storage device for storing one or more programs,
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1 to 5.
  12. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 5.
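The method of claims 1 to 5 amounts to a three-step pipeline: treat the user's captured expression as a contactless instruction, search the existing virtual expressions for matches, and display the matches in a designated area for the user to pick. The following Python fragment is a minimal illustrative sketch of that flow; the expression classifier is stubbed out, and all names (`VirtualEmoji`, `classify_expression`, `handle_trigger`) are hypothetical, not taken from the application.

```python
from dataclasses import dataclass

@dataclass
class VirtualEmoji:
    name: str            # label shown to the user
    expression_tag: str  # expression feature the emoji represents, e.g. "smile"

def classify_expression(frame: bytes) -> str:
    """Stand-in for the image acquisition/analysis unit of claim 2.

    A real implementation would run a facial-expression classifier on the
    captured frame; here the frame bytes are simply decoded as the tag.
    """
    return frame.decode() if frame else "neutral"

def find_matching_emojis(tag: str, library: list[VirtualEmoji]) -> list[VirtualEmoji]:
    """Claim 4: search existing virtual expressions for matches."""
    return [e for e in library if e.expression_tag == tag]

def handle_trigger(frame: bytes, library: list[VirtualEmoji]) -> list[str]:
    """End-to-end flow of claims 1-5, run after the user issues an
    expression-collection trigger operation on the messaging interface."""
    tag = classify_expression(frame)              # claim 2: expression feature = contactless instruction
    matches = find_matching_emojis(tag, library)  # claim 4: match against existing virtual expressions
    return [e.name for e in matches]              # claim 5: names to show in the designated selection area
```

For example, with a library of `grin`/`beam` tagged `"smile"` and `wink` tagged `"wink"`, a captured `"smile"` frame would surface `grin` and `beam` for selection.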
PCT/CN2018/091927 2018-06-20 2018-06-20 Terminal control method and device WO2019241920A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880001138.XA CN109496289A (en) 2018-06-20 2018-06-20 A kind of terminal control method and device
PCT/CN2018/091927 WO2019241920A1 (en) 2018-06-20 2018-06-20 Terminal control method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/091927 WO2019241920A1 (en) 2018-06-20 2018-06-20 Terminal control method and device

Publications (1)

Publication Number Publication Date
WO2019241920A1 true WO2019241920A1 (en) 2019-12-26

Family

ID=65713836

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/091927 WO2019241920A1 (en) 2018-06-20 2018-06-20 Terminal control method and device

Country Status (2)

Country Link
CN (1) CN109496289A (en)
WO (1) WO2019241920A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023124200A1 (en) * 2021-12-27 2023-07-06 北京荣耀终端有限公司 Video processing method and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101311882A (en) * 2007-05-23 2008-11-26 华为技术有限公司 Eye tracking human-machine interaction method and apparatus
JP2010273280A (en) * 2009-05-25 2010-12-02 Nikon Corp Imaging apparatus
CN105068662A (en) * 2015-09-07 2015-11-18 哈尔滨市一舍科技有限公司 Electronic device used for man-machine interaction
CN105103536A (en) * 2013-03-06 2015-11-25 日本电气株式会社 Imaging device, imaging method and program
TW201819126A (en) * 2016-11-18 2018-06-01 上銀科技股份有限公司 Non-contact gesture teaching robot enables the driving module to perform a motion instruction corresponding to a hand movement according to the user's hand movement

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104753766B (en) * 2015-03-02 2019-03-22 小米科技有限责任公司 Expression sending method and device
CN106454071A (en) * 2016-09-09 2017-02-22 捷开通讯(深圳)有限公司 Terminal and automatic shooting method based on gestures
CN107315488A (en) * 2017-05-31 2017-11-03 北京安云世纪科技有限公司 A kind of searching method of expression information, device and mobile terminal
CN107153496B (en) * 2017-07-04 2020-04-28 北京百度网讯科技有限公司 Method and device for inputting emoticons



Also Published As

Publication number Publication date
CN109496289A (en) 2019-03-19

Similar Documents

Publication Publication Date Title
CN103529934B (en) Method and apparatus for handling multiple input
WO2018000585A1 (en) Interface theme recommendation method, apparatus, terminal and server
WO2017036035A1 (en) Screen control method and device
WO2019153925A1 (en) Searching method and related device
KR102165818B1 (en) Method, apparatus and recovering medium for controlling user interface using a input image
JP7166294B2 (en) Audio processing method, device and storage medium
WO2019206243A1 (en) Material display method, terminal, and computer storage medium
CN110572716B (en) Multimedia data playing method, device and storage medium
EP3933570A1 (en) Method and apparatus for controlling a voice assistant, and computer-readable storage medium
US20230368461A1 (en) Method and apparatus for processing action of virtual object, and storage medium
CN112860169B (en) Interaction method and device, computer readable medium and electronic equipment
WO2022179331A1 (en) Photographing method and apparatus, mobile terminal, and storage medium
WO2018098968A9 (en) Photographing method, apparatus, and terminal device
CN112068762A (en) Interface display method, device, equipment and medium of application program
WO2019129101A1 (en) Photographing method and mobile electronic terminal
CN112764600B (en) Resource processing method, device, storage medium and computer equipment
CN113936697B (en) Voice processing method and device for voice processing
WO2024067468A1 (en) Interaction control method and apparatus based on image recognition, and device
WO2019241920A1 (en) Terminal control method and device
WO2022198821A1 (en) Method and apparatus for performing matching between human face and human body, and electronic device, storage medium and program
CN109977424A (en) A kind of training method and device of Machine Translation Model
CN112181228A (en) Display method and device for displaying
CN109976549B (en) Data processing method, device and machine readable medium
CN112667124A (en) Information processing method and device and information processing device
WO2020056948A1 (en) Method and device for data processing and device for use in data processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18923625

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18923625

Country of ref document: EP

Kind code of ref document: A1