WO2014106380A1 - Camera-based control method and mobile terminal - Google Patents

Camera-based control method and mobile terminal Download PDF

Info

Publication number
WO2014106380A1
Authority
WO
WIPO (PCT)
Prior art keywords
trajectory information
gesture
motion track
character
input device
Prior art date
Application number
PCT/CN2013/080683
Other languages
French (fr)
Chinese (zh)
Inventor
王晓晖
毛国红
文立夫
赵天涯
施驰
Original Assignee
深圳创维数字技术股份有限公司
深圳市创维软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳创维数字技术股份有限公司 and 深圳市创维软件有限公司
Publication of WO2014106380A1 publication Critical patent/WO2014106380A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means

Definitions

  • the present invention relates to the field of electronic technologies, and in particular, to a camera-based control method and a mobile terminal.
  • a technical problem to be solved by the embodiments of the present invention is to provide a camera-based control method and a mobile terminal, which can obtain information to be input by a user by recognizing a motion track.
  • an embodiment of the present invention provides a camera-based control method, including:
  • Performing the terminal control operation according to the motion trajectory information specifically includes: controlling generation of a character according to the motion trajectory information, or generating a gesture instruction according to the motion trajectory information, and performing a control operation on the application program according to the gesture instruction.
  • the trajectory input device used by the user is tracked by the camera, and the motion track information of the trajectory input device in the air is recorded, including:
  • the controlling the generation of characters according to the motion track information includes:
  • when the motion trajectory information is the motion trajectory information of an input character, the motion trajectory information is processed into two-dimensional data;
  • the generating a gesture instruction according to the motion trajectory information, and performing a control operation on the application according to the gesture instruction includes:
  • when the motion track information is recognized as the motion track information of a gesture, the motion track information is processed into a gesture instruction;
  • the application is operated by calling a corresponding function according to the gesture instruction.
  • the distance sensor is used to sense the distance between the trajectory input device used by the user and a target element; when the trajectory input device comes within the operating distance of the target element, a click operation is performed on the target element.
  • an embodiment of the present invention further provides a mobile terminal, including:
  • a tracking record module configured to track a track input device used by the user through the camera, and record motion track information of the track input device in the air;
  • a character processing module configured to control generating a character according to the motion track information
  • a gesture processing module configured to generate a gesture instruction according to the motion track information, and perform a control operation on the application according to the gesture instruction.
  • the tracking record module includes:
  • a tracking unit configured to track, by the camera, a track input device used by the user
  • a determining unit configured to determine whether the motion track information of the track input device in the air is valid motion track information
  • the record deletion unit is configured to record the motion track information when the determination is yes, otherwise delete the motion track information.
  • a first processing unit configured to process the motion track information into two-dimensional data when the motion track information is recognized as the motion track information of an input character;
  • a character generating unit configured to perform character recognition on the two-dimensional data, and generate at least one character sorted according to weights
  • a display unit configured to display the at least one character sorted by weight to the user.
  • the gesture processing module includes:
  • a second processing unit configured to process the motion track information into a gesture instruction when the motion track information is recognized as motion track information for making a gesture
  • the calling unit is configured to invoke the corresponding function to operate the application according to the gesture instruction. Optionally, the mobile terminal further includes:
  • a sensing module for sensing a distance between a trajectory input device used by a user and a target element by using a distance sensor
  • the operation module is configured to perform a click operation on the target element when the operation distance between the track input device and the target element is reached.
  • the trajectory input device, such as a finger, is tracked by the camera, and the trajectory information of the trajectory input device in the air is recorded to generate the information the user wants to input, so that the user is not restricted by various soft and hard keyboards when performing input and control, making the user's input and control more convenient and natural.
  • FIG. 1 is a schematic flowchart of a camera-based control method according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of another camera-based control method according to an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
  • FIG. 4 is a schematic structural diagram of a tracking record module according to an embodiment of the present invention;
  • FIG. 5 is a schematic structural diagram of a character processing module according to an embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of a gesture processing module according to an embodiment of the present invention.

Detailed description
  • Referring to FIG. 1, which is a schematic flowchart of a camera-based control method according to an embodiment of the present invention. As shown in FIG. 1, the method of the embodiment of the present invention includes the following steps:
  • the track input device may be a finger of the user, that is, the camera can detect the fingertip of the user and track the movement of the fingertip.
  • the user's finger needs to move within the range that the camera can capture, so that the camera can capture complete motion track information.
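As a rough illustration of this constraint, the recording step can be pictured as keeping only fingertip samples that fall inside the camera frame. This is a minimal sketch rather than the patent's implementation; the frame size, the sample format, and the `record_track` name are all assumed for illustration:

```python
def record_track(samples, frame_w=640, frame_h=480):
    """Keep only fingertip samples inside the camera's capture range.

    samples: list of (x, y) fingertip positions reported by the tracker.
    Positions outside the frame cannot form part of a complete motion
    track, so they are discarded.
    """
    track = []
    for x, y in samples:
        if 0 <= x < frame_w and 0 <= y < frame_h:
            track.append((x, y))
    return track

# A fingertip path that briefly leaves the frame on the right edge:
path = [(600, 200), (630, 210), (650, 215), (620, 220)]
print(record_track(path))  # → [(600, 200), (630, 210), (620, 220)]
```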
  • the mobile terminal may further determine whether the obtained motion track information is valid motion track information; when the determination is yes, the motion track information is recorded, otherwise the motion track information is deleted. The valid motion track information is the motion track information from the start of inputting a character to the end of inputting the character, or the motion track information from the start of making a gesture to the end of making the gesture. The deleted motion track information is invalid motion track information; for example, the motion track information recorded when the finger has just entered the camera's acquisition range is invalid motion track information.
  • the mobile terminal determines, according to the fingertip speed, whether the user's finger is performing a pen-up action or a pen-down action; the pen-up action is the preparatory action when input starts, and the pen-down action is the ending action after input is completed. The motion track information obtained before the pen-up and the motion track information obtained after the pen-down are both invalid motion track information.
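The speed-based validity check above can be sketched as follows. The patent gives no concrete criterion, so this sketch simply assumes that fast fingertip motion corresponds to the pen-up/pen-down transitions and that slower motion is the actual writing; the speed threshold, the units, and the `valid_segment` name are illustrative only:

```python
import math

def valid_segment(samples, speed_threshold=100.0):
    """Keep the fingertip samples between pen-up and pen-down.

    samples: list of (t, x, y) with t in seconds. Samples whose
    instantaneous speed exceeds the threshold are treated as pen-up /
    pen-down transitions and discarded as invalid motion track
    information; the remaining slow samples form the stroke.
    """
    valid = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        speed = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
        if speed <= speed_threshold:
            valid.append((t1, x1, y1))
    return valid

samples = [(0.0, 0, 0), (0.1, 50, 0), (0.2, 55, 0), (0.3, 60, 0), (0.4, 200, 0)]
print(valid_segment(samples))  # → [(0.2, 55, 0), (0.3, 60, 0)]
```

Here the fast initial and final movements (entering the frame, withdrawing the finger) are dropped, matching the statement that track information obtained before the pen-up and after the pen-down is invalid.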
  • performing a terminal control operation according to the motion track information specifically includes: controlling the generation of a character according to the motion track information, or generating a gesture instruction according to the motion track information and performing a control operation on the application according to the gesture instruction;
  • when the motion track information is recognized as the motion track information of an input character, the motion track information is processed into two-dimensional data; character recognition is then performed on the two-dimensional data, at least one character sorted by weight is generated, and finally the at least one character sorted by weight is displayed to the user.
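A hedged sketch of the "processed into two-dimensional data" step: the recorded air trajectory is projected onto a small bitmap that a character recognizer can consume. The grid resolution and the data layout are assumptions for illustration; the patent does not specify how the two-dimensional data is represented:

```python
def rasterize(track, grid=8):
    """Project an air-written trajectory onto a grid x grid bitmap: the
    two-dimensional data that character recognition then operates on."""
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    x0, y0 = min(xs), min(ys)
    w = (max(xs) - x0) or 1  # avoid division by zero for a single point
    h = (max(ys) - y0) or 1
    bitmap = [[0] * grid for _ in range(grid)]
    for x, y in track:
        col = min(int((x - x0) / w * grid), grid - 1)
        row = min(int((y - y0) / h * grid), grid - 1)
        bitmap[row][col] = 1
    return bitmap

# A two-point diagonal stroke lights up opposite corners of a 2x2 grid:
print(rasterize([(0, 0), (10, 10)], grid=2))  # → [[1, 0], [0, 1]]
```

Normalizing to a fixed grid also makes the recognizer independent of where in the camera frame the character was drawn.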
  • the process of recognizing the motion track information as the motion track information of an input character may be: first identifying whether the current motion track information is the motion track information of a pre-stored gesture, where the motion track information of a gesture may be the motion track information of a backward flick, a forward flick, or another hand gesture; if the current motion track information is not the motion track information of a pre-stored gesture, the current motion track information may be regarded as the motion track information of an input character.
  • when the user selects an input box on the mobile terminal so that it obtains the focus, input can begin. After the user moves a finger over the camera, a status light on the interface changes from red to green, indicating that the finger's movement is being recognized by the camera; the user can then draw a character above the camera. At the same time, the processed fingertip movement is displayed on the interface of the mobile terminal, and display of the camera feed can be toggled in the settings. When the user's finger stops moving or leaves the camera's acquisition range, the indicator light turns red, indicating that input has ended, and recognition is performed on the obtained motion track information. Finally, several characters matching the user's input can be displayed on the interface. The user may input one or more characters.
  • the mobile terminal judges the end of input by detecting a pause of the finger or the finger leaving the camera's acquisition range. When the user inputs multiple characters, the characters may form a word or a Chinese phrase; during input, the mobile terminal also needs to judge, according to the movement of the fingertip, whether the user has started writing a new character.
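End-of-input detection by a finger pause might look like the following sketch; the pause duration and the movement tolerance are invented values for illustration, and a finger leaving the capture range would simply stop new samples from arriving, which the caller can detect separately:

```python
def input_ended(samples, pause_secs=0.8, tolerance=3.0):
    """Return True when the fingertip has paused at its final position.

    samples: list of (t, x, y). Input is considered finished when the
    fingertip has stayed within `tolerance` pixels of its last position
    for at least `pause_secs` seconds.
    """
    if len(samples) < 2:
        return False
    t_end, x_end, y_end = samples[-1]
    pause_start = t_end
    for t, x, y in reversed(samples[:-1]):
        if abs(x - x_end) > tolerance or abs(y - y_end) > tolerance:
            break  # finger was still moving at this sample
        pause_start = t
    return (t_end - pause_start) >= pause_secs

# The finger settles near (51, 0) for 1.5 s, so input has ended:
print(input_ended([(0.0, 0, 0), (0.5, 50, 0), (1.0, 51, 0), (2.0, 51, 0)]))  # → True
```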
  • when the motion track information is recognized as the motion track information of a gesture, the motion track information is processed into a gesture instruction, and the corresponding function is called according to the gesture instruction to operate the application.
  • the process of recognizing the motion track information as the motion track information of a gesture may be: comparing the current motion track information with the motion track information of pre-stored gestures, and if they match, regarding the current motion track information as the motion track information of a gesture. For example, if the user makes a backward flick gesture above the camera, a page-down operation can be performed; if a forward flick gesture is made above the camera, a page-up operation can be performed; if a gesture of waving to the left is made above the camera, one character in the input box can be deleted; more gestures can be defined to implement more operations.
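A minimal sketch of this gesture branch: matching the current track against pre-stored gestures and dispatching the operations named above. Real matching would compare whole trajectories against stored templates; here the dominant direction of the net displacement stands in for that comparison, and the convention that image y-coordinates grow downward (so a backward flick moves the fingertip down in the frame) is purely an illustrative assumption:

```python
def classify_gesture(track):
    """Map a fingertip track to a pre-stored gesture name using the
    dominant direction of its net displacement (a stand-in for full
    trajectory matching)."""
    dx = track[-1][0] - track[0][0]
    dy = track[-1][1] - track[0][1]
    if abs(dy) >= abs(dx):
        return "flick_back" if dy > 0 else "flick_forward"
    return "wave_left" if dx < 0 else None

# Gesture instructions and the operations they invoke, as in the text:
ACTIONS = {
    "flick_back": "page down",
    "flick_forward": "page up",
    "wave_left": "delete one character",
}

print(ACTIONS[classify_gesture([(100, 100), (105, 220)])])  # → page down
```

A track that does not match any pre-stored gesture (`classify_gesture` returns `None`) would fall through to the character-input branch described earlier.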
  • the trajectory input device, such as a finger, is tracked by the camera, and the trajectory information of the trajectory input device in the air is recorded to generate the information the user wants to input, so that the user is not restricted by various soft and hard keyboards when performing input and control, making the user's input and control more convenient and natural.
  • Referring to FIG. 2, which is a schematic flowchart of another camera-based control method according to an embodiment of the present invention. As shown in FIG. 2, the method of the embodiment of the present invention includes the following steps:
  • the track input device may be a user's finger, that is, the camera can detect the user's fingertip and track the movement of the fingertip; the user's finger needs to move within the range that the camera can collect, so that the camera can capture the complete motion track. information.
  • Step S202: determining whether the motion track information of the trajectory input device in the air is valid motion track information.
  • the mobile terminal may further determine whether the obtained motion track information is valid motion track information; when the determination is yes, step S204 is performed, and when the determination is no, step S203 is performed.
  • Step S203: the motion track information is deleted; the deleted motion track information is invalid motion track information. For example, the motion track information recorded when the finger has just entered the camera's acquisition range is invalid motion track information. The mobile terminal determines, according to the fingertip speed, whether the user's finger is performing a pen-up action or a pen-down action; the pen-up action is the preparatory action when input starts, and the pen-down action is the ending action after input is completed. The motion track information obtained before the pen-up and the motion track information obtained after the pen-down are both invalid motion track information.
  • Step S204: when the judgment of step S202 is yes, the motion track information is recorded. The valid motion track information is the motion track information from the start of inputting a character to the end of inputting the character, or the motion track information from the start of making a gesture to the end of making the gesture.
  • Step S205: when the motion track information is the motion track information of an input character, the motion track information is processed into two-dimensional data.
  • the process of recognizing the motion track information as the motion track information of an input character may be: first identifying whether the current motion track information is the motion track information of a pre-stored gesture, where the motion track information of a gesture may be the motion track information of a backward flick, a forward flick, or another hand gesture; if the current motion track information is not the motion track information of a pre-stored gesture, the current motion track information may be regarded as the motion track information of an input character.
  • the input may be one or more characters, and multiple characters may form a word or a Chinese phrase. When multiple characters are input, the multiple characters contained in the motion track information are all processed into two-dimensional data.
  • Step S206: perform character recognition on the two-dimensional data, and generate at least one character sorted by weight.
  • the weight ordering is based on the frequency of the character in the lexicon and on the user's input habits.
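A sketch of weight ordering that combines the two signals named here, lexicon word frequency and the user's input habits. The linear combination and the `habit_weight` factor are assumptions for illustration; the patent only names the two inputs:

```python
def rank_candidates(candidates, lexicon_freq, user_counts, habit_weight=2.0):
    """Sort recognized candidates by a score that combines the character's
    frequency in the lexicon with how often this user has chosen it."""
    def score(ch):
        return lexicon_freq.get(ch, 0) + habit_weight * user_counts.get(ch, 0)
    return sorted(candidates, key=score, reverse=True)

# Visually similar candidates for one air-written stroke pattern:
freq = {"日": 90, "曰": 5, "目": 40}
habits = {"目": 30}  # this user frequently picks 目
print(rank_candidates(["日", "曰", "目"], freq, habits))  # → ['目', '日', '曰']
```

Personal habit lifts 目 above the globally more frequent 日, which is the behaviour the weighting is meant to capture.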
  • when the motion track information is recognized as the motion track information of a gesture, the motion track information is processed into a gesture instruction.
  • the process of identifying the motion track information as the motion track information of the gesture may be: comparing the current motion track information with the motion track information of the pre-stored gesture, and if the match, the current motion track information is regarded as Motion track information for making gestures.
  • a corresponding function is called according to the gesture instruction to operate the application. Specifically, if the user makes a backward flick gesture above the camera, a gesture instruction is generated to invoke a corresponding function to implement a page-down operation; if a forward flick gesture is made above the camera, a gesture instruction is generated to invoke a corresponding function to implement a page-up operation; if a gesture of waving to the left is made above the camera, a gesture instruction is generated to invoke a corresponding function to delete one character in the input box; more gestures can be defined to implement more operations.
  • the mobile terminal can also sense, through a distance sensor, the distance between the trajectory input device used by the user and a target element; when the trajectory input device comes within the operating distance of the target element, a click operation is performed on the target element.
  • the distance sensor can detect that a finger is close to a certain character, thereby enabling the user to select the character without clicking on the screen.
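The hover-to-select behaviour reduces to a threshold check on the sensed distance; the 10 mm operating distance below is an invented value for illustration, and a real implementation would trigger the click through the platform's input APIs:

```python
def hover_select(distance_mm, operating_distance_mm=10.0):
    """Return True (i.e. perform a click on the target element) once the
    trajectory input device is within the operating distance."""
    return distance_mm <= operating_distance_mm

# Successive proximity readings as the finger approaches a character:
clicks = [hover_select(d) for d in (35.0, 18.0, 9.0)]
print(clicks)  # → [False, False, True]
```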
  • the trajectory input device, such as a finger, is tracked by the camera, and the trajectory information of the trajectory input device in the air is recorded to generate the information the user wants to input, so that the user is not restricted by various soft and hard keyboards when performing input and control, making the user's input and control more convenient and natural.
  • FIG. 3 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
  • the mobile terminal includes: a tracking record module 10, a character processing module 20, and a gesture processing module 30.
  • the tracking and recording module 10 is configured to track a track input device used by a user by using a camera, and record motion track information of the track input device in the air;
  • the track input device may be a finger of the user, that is, the track record module 10 may detect the fingertip of the user through the camera and track the movement of the fingertip. The user's finger needs to move within the range that the camera can capture, so that the camera can capture complete motion track information.
  • the tracking record module 10 can also determine whether the obtained motion track information is valid motion track information; when the determination is yes, the motion track information is recorded, otherwise the motion track information is deleted. The valid motion track information is the motion track information from the start of inputting a character to the end of inputting the character, or the motion track information from the start of making a gesture to the end of making the gesture; the deleted motion track information is invalid motion track information. For example, the motion track information recorded when the finger has just entered the camera's acquisition range is invalid motion track information. The mobile terminal determines, according to the fingertip speed, whether the user's finger is performing a pen-up action or a pen-down action; the pen-up action is the preparatory action when input starts, and the pen-down action is the ending action after input is completed. The motion track information obtained before the pen-up and the motion track information obtained after the pen-down are both invalid motion track information.
  • the user can input one or more characters.
  • the tracking record module 10 can also determine the end of an input by detecting a pause of the finger or the finger leaving the camera's acquisition range; when the user inputs multiple characters, the multiple characters may form a word or a Chinese phrase. During input, the tracking record module 10 also needs to determine, according to the movement of the fingertip, whether the user has started writing a new character.
  • the character processing module 20 is configured to control the generation of a character according to the motion track information. Specifically, when the character processing module 20 recognizes that the motion track information is the motion track information of an input character, it processes the motion track information into two-dimensional data, performs character recognition on the two-dimensional data, generates at least one character sorted by weight, and finally displays the at least one character sorted by weight to the user.
  • the gesture processing module 30 is configured to generate a gesture instruction according to the motion track information, and perform a control operation on the application according to the gesture instruction;
  • when the gesture processing module 30 recognizes that the motion track information is the motion track information of a gesture, the motion track information is processed into a gesture instruction, and the corresponding function is invoked according to the gesture instruction to operate the application. For example, if the user makes a backward flick gesture above the camera, the gesture processing module 30 can implement a page-down operation; if a forward flick gesture is made above the camera, the gesture processing module 30 can implement a page-up operation; if a gesture of waving to the left is made above the camera, the gesture processing module 30 can implement an operation of deleting one character in the input box; more gestures can be defined to implement more operations.
  • the tracking record module 10 includes: a tracking unit 101, a determining unit 102, and a record deletion unit 103.
  • the tracking unit 101 is configured to track the trajectory input device used by the user through the camera; the trajectory input device may be a finger of the user, that is, the tracking unit 101 may detect the fingertip of the user through the camera and track the movement of the fingertip; The user's finger needs to move within the range that the camera can capture, so that the camera can capture complete motion track information.
  • the determining unit 102 is configured to determine whether the motion track information of the track input device in the air is valid motion track information
  • the determining unit 102 judges whether the obtained motion track information is valid motion track information, and notifies the record deletion unit 103 to perform the corresponding operation according to the judgment result.
  • the record deletion unit 103 is configured to record the motion track information when the determination is yes, and otherwise delete the motion track information;
  • when the determination is yes, the record deletion unit 103 records the motion track information; the valid motion track information is the motion track information from the start of inputting a character to the end of inputting the character, or the motion track information from the start of making a gesture to the end of making the gesture.
  • when the determination is no, the record deletion unit 103 deletes the motion track information; the deleted motion track information is invalid motion track information. For example, the motion track information recorded when the finger has just entered the camera's capture range is invalid motion track information. The mobile terminal determines, according to the fingertip speed, whether the user's finger is performing a pen-up action or a pen-down action; the pen-up action is the preparatory action when input starts, and the pen-down action is the ending action after input is completed. The motion track information obtained before the pen-up and the motion track information obtained after the pen-down are both invalid motion track information.
  • the character processing module 20 includes: a first processing unit 201, a character generating unit 202, and a display unit 203.
  • the first processing unit 201 is configured to process the motion track information into two-dimensional data when the motion track information is recognized as the motion track information of the input character;
  • the first processing unit 201 first identifies whether the current motion track information is the motion track information of a pre-stored gesture; the motion track information of a gesture may be the motion track information of a backward flick, a forward flick, or another hand gesture. If the current motion track information is not the motion track information of a pre-stored gesture, the first processing unit 201 regards the current motion track information as the motion track information of an input character.
  • the character generating unit 202 is configured to perform character recognition on the two-dimensional data, and generate at least one character sorted according to weights;
  • the character generating unit 202 performs character recognition on the two-dimensional data and generates at least one character sorted by weight, according to the frequency of the character in the lexicon and the user's input habits.
  • the display unit 203 is configured to display the at least one character sorted according to the weight to the user;
  • Referring to FIG. 6, which is a schematic structural diagram of the gesture processing module 30 of FIG. 3.
  • the gesture processing module 30 includes: a second processing unit 301, and a calling unit 302.
  • the second processing unit 301 is configured to process the motion track information into a gesture instruction when the motion track information is recognized as the motion track information of a gesture;
  • the process of recognizing the motion track information as the motion track information of a gesture may be: the second processing unit 301 compares the current motion track information with the motion track information of pre-stored gestures, and if they match, the second processing unit 301 regards the current motion track information as the motion track information of a gesture.
  • the calling unit 302 is configured to invoke a corresponding function to operate the application according to the gesture instruction
  • if the user makes a backward flick gesture above the camera, the calling unit 302 invokes the corresponding function according to the gesture instruction generated by the second processing unit 301 to implement a page-down operation; if a forward flick gesture is made above the camera, the calling unit 302 invokes the corresponding function to implement a page-up operation; if a gesture of waving to the left is made above the camera, the calling unit 302 invokes the corresponding function to implement an operation of deleting one character in the input box; more gestures can be defined to implement more operations.
  • the mobile terminal may further include a sensing module and an operation module;
  • the sensing module is configured to sense, by using a distance sensor, the distance between the trajectory input device used by the user and the target element;
  • the operation module is configured to perform a click operation on the target element when the trajectory input device comes within the operating distance of the target element;
  • the distance sensor in the sensing module can detect that a finger is close to a certain character, and then notify the operation module to select that character without a click on the screen.
  • the trajectory input device, such as a finger, is tracked by the camera, and the trajectory information of the trajectory input device in the air is recorded to generate the information the user wants to input, so that the user is not restricted by various soft and hard keyboards when performing input and control, making the user's input and control more convenient and natural.
  • the modules or units in the embodiments of the present invention may be implemented by a general-purpose integrated circuit, such as a CPU.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).

Abstract

Disclosed in an embodiment of the present invention are a camera-based control method and a mobile terminal. The method comprises the following steps: tracking a track input device used by a user through a camera, and recording motion track information of the track input device in the air; and carrying out, according to the motion track information, a terminal control operation, which specifically comprises: generating a character or a gesture instruction according to the motion track information, and carrying out the control operation on an application according to the gesture instruction. By using the camera-based control method and the mobile terminal in the present invention, information to be input by the user can be obtained through identification for the motion track, and more convenient and natural input and control of the user can be achieved.

Description

一种基于摄像头的控制方法和移动终端  Camera-based control method and mobile terminal
本申请要求于 2013年 1月 6日提交中国专利局、申请号为 201310003506.3 , 发明名称为 "一种基于摄像头的控制方法和移动终端"的中国专利申请的优先 权, 其全部内容通过引用结合在本申请中。 技术领域  This application claims priority to Chinese Patent Application No. 201310003506.3, entitled "A Camera-Based Control Method and Mobile Terminal", filed on January 6, 2013, the entire contents of which are incorporated by reference. In this application. Technical field
本发明涉及电子技术领域,尤其涉及一种基于摄像头的控制方法和移动终 端。 背景技术  The present invention relates to the field of electronic technologies, and in particular, to a camera-based control method and a mobile terminal. Background technique
人机交互方式一直是计算机研究的重点, 传统的输入方式有键盘、 鼠标、 触摸板、 手写板和遥控器等等。 对于移动终端而言, 移动终端上传统的虚拟软 键盘输入法在实际使用时较为不便, 因为移动终端屏幕的限制,使得用户通过 软键盘输入时容易出错。 发明内容  Human-computer interaction has always been the focus of computer research. Traditional input methods include keyboards, mice, touchpads, tablets, and remote controls. For the mobile terminal, the traditional virtual soft keyboard input method on the mobile terminal is inconvenient in actual use, because the limitation of the screen of the mobile terminal makes the user error-prone when inputting through the soft keyboard. Summary of the invention
A technical problem to be solved by the embodiments of the present invention is to provide a camera-based control method and a mobile terminal that can obtain the information a user wants to input by recognizing a motion trajectory.

To solve the above technical problem, an embodiment of the present invention provides a camera-based control method, comprising:

tracking, through a camera, a trajectory input device used by a user, and recording motion trajectory information of the trajectory input device in the air; and

performing a terminal control operation according to the motion trajectory information, which specifically comprises: generating a character according to the motion trajectory information, or generating a gesture instruction according to the motion trajectory information and controlling an application according to the gesture instruction.
The tracking, through a camera, a trajectory input device used by a user and recording motion trajectory information of the trajectory input device in the air comprises:

tracking, through the camera, the trajectory input device used by the user;

determining whether the motion trajectory information of the trajectory input device in the air is valid motion trajectory information; and

recording the motion trajectory information when the determination is yes, and otherwise deleting the motion trajectory information.

The generating a character according to the motion trajectory information comprises:

when the motion trajectory information is recognized as motion trajectory information of an inputted character, processing the motion trajectory information into two-dimensional data;

performing character recognition on the two-dimensional data and generating at least one character sorted by weight; and

displaying the at least one character sorted by weight to the user.
The generating a gesture instruction according to the motion trajectory information and controlling an application according to the gesture instruction comprises:

when the motion trajectory information is recognized as motion trajectory information of a gesture, processing the motion trajectory information into a gesture instruction; and

calling a corresponding function according to the gesture instruction to operate the application.

The method further comprises:

sensing, through a distance sensor, the distance between the trajectory input device used by the user and a target element; and

performing a click operation on the target element when the distance between the trajectory input device and the target element reaches an operating distance.
Correspondingly, an embodiment of the present invention further provides a mobile terminal, comprising:

a tracking and recording module, configured to track, through a camera, a trajectory input device used by a user, and record motion trajectory information of the trajectory input device in the air;

a character processing module, configured to generate a character according to the motion trajectory information; and

a gesture processing module, configured to generate a gesture instruction according to the motion trajectory information, and control an application according to the gesture instruction.

The tracking and recording module comprises:

a tracking unit, configured to track, through the camera, the trajectory input device used by the user;

a judging unit, configured to determine whether the motion trajectory information of the trajectory input device in the air is valid motion trajectory information; and

a record deletion unit, configured to record the motion trajectory information when the determination is yes, and otherwise delete the motion trajectory information.
The character processing module comprises:

a first processing unit, configured to process the motion trajectory information into two-dimensional data when the motion trajectory information is recognized as motion trajectory information of an inputted character;

a character generation unit, configured to perform character recognition on the two-dimensional data and generate at least one character sorted by weight; and

a display unit, configured to display the at least one character sorted by weight to the user.

The gesture processing module comprises:

a second processing unit, configured to process the motion trajectory information into a gesture instruction when the motion trajectory information is recognized as motion trajectory information of a gesture; and

a calling unit, configured to call a corresponding function according to the gesture instruction to operate the application.

The mobile terminal further comprises:

a sensing module, configured to sense, through a distance sensor, the distance between the trajectory input device used by the user and a target element; and

an operation module, configured to perform a click operation on the target element when the distance between the trajectory input device and the target element reaches an operating distance.
Implementing the embodiments of the present invention has the following beneficial effects:

The embodiments of the present invention track a trajectory input device, such as a user's finger, through a camera, record the motion trajectory information of the trajectory input device in the air, and generate the information the user wants to input. The user is thus freed from the constraints of soft and hard keyboards and can input and control in a simpler way, making input and control more convenient and natural.

Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention or the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.

FIG. 1 is a schematic flowchart of a camera-based control method according to an embodiment of the present invention;

FIG. 2 is a schematic flowchart of another camera-based control method according to an embodiment of the present invention;

FIG. 3 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;

FIG. 4 is a schematic structural diagram of a tracking and recording module according to an embodiment of the present invention;

FIG. 5 is a schematic structural diagram of a character processing module according to an embodiment of the present invention;

FIG. 6 is a schematic structural diagram of a gesture processing module according to an embodiment of the present invention.

Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

Referring to FIG. 1, which is a schematic flowchart of a camera-based control method according to an embodiment of the present invention, the method comprises the following steps.
S101: Track, through a camera, a trajectory input device used by a user, and record motion trajectory information of the trajectory input device in the air.

Specifically, the trajectory input device may be the user's finger; that is, the camera may detect the user's fingertip and track its movement. The finger must move within the range the camera can capture so that complete motion trajectory information can be collected. The mobile terminal may further determine whether the obtained motion trajectory information is valid: if so, it records the motion trajectory information; otherwise it deletes the information. Valid motion trajectory information is the trajectory from the start of character input to the end of character input, or from the start of a gesture to the end of the gesture; deleted trajectory information is invalid, for example the trajectory recorded when the finger has just entered the camera's capture range. The mobile terminal judges, from the fingertip speed, whether the user's finger performs a pen-lift action or a pen-drop action, the pen-lift action being the preparatory action before input starts and the pen-drop action being the finishing action after input is complete; the trajectory obtained before the pen-lift and the trajectory obtained after the pen-drop are both invalid.
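For illustration only, the validity filtering described above might be sketched as follows. The patent does not disclose an algorithm; the speed threshold, the (x, y, t) sample layout, and the function names are assumptions. A sharp speed spike is taken to mark the pen-lift and pen-drop actions, and only the trajectory between them is kept.

```python
import math


def fingertip_speeds(points):
    """Per-step fingertip speed for a list of (x, y, t) samples."""
    result = []
    for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
        dt = (t1 - t0) or 1e-6
        result.append(math.hypot(x1 - x0, y1 - y0) / dt)
    return result


def valid_segment(points, fast=200.0):
    """Keep only the samples between the pen-lift and pen-drop events.

    Samples before the first speed spike (preparation) and after the last
    one (finish) are treated as invalid trajectory information and dropped.
    """
    spikes = [i for i, v in enumerate(fingertip_speeds(points)) if v >= fast]
    if len(spikes) < 2:
        return list(points)  # no clear lift/drop pair: keep everything
    return points[spikes[0] + 1 : spikes[-1] + 1]
```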
S102: Perform a terminal control operation according to the motion trajectory information, which specifically comprises: generating a character according to the motion trajectory information, or generating a gesture instruction according to the motion trajectory information and controlling an application according to the gesture instruction.

Specifically, when the motion trajectory information is recognized as motion trajectory information of an inputted character, the motion trajectory information is processed into two-dimensional data, character recognition is performed on the two-dimensional data, at least one character sorted by weight is generated, and finally the at least one character sorted by weight is displayed to the user. The motion trajectory information may be recognized as character input as follows: first determine whether the current trajectory matches the trajectory of a prestored gesture, such as a downward wave, an upward wave, or another gesture; if it matches no prestored gesture, the current trajectory is treated as the trajectory of an inputted character. For example, when the user selects an input box on the mobile terminal so that it gains focus, the user can start inputting. After the user moves a finger above the camera, a status light on the interface changing from red to green indicates that the finger movement is being recognized, and the user can then trace a character above the camera. Meanwhile, the interface of the mobile terminal displays the processed fingertip trajectory; the user may also choose, in the settings, to display the picture captured by the camera on the interface. When the finger stops moving or leaves the camera's capture range, the indicator turns red, indicating that one input has ended; recognition is performed on the obtained trajectory, and several characters that may match the user's input are then displayed on the interface. The user may input one or more characters. For a single character, the mobile terminal judges the end of an input by detecting a pause of the finger or its departure from the camera's capture range; for multiple characters, which may form a word or a Chinese phrase, the mobile terminal must additionally judge from the fingertip movement whether the user has started writing a new character.
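The end-of-input detection by a finger pause might look like the following sketch; the frame count and the movement tolerance are invented for illustration, as the patent only states that a pause ends one input.

```python
def input_ended(recent_points, pause_frames=15, eps=2.0):
    """True once the fingertip has stayed (almost) still for pause_frames
    consecutive (x, y) samples, which the terminal treats as the end of
    one input."""
    if len(recent_points) < pause_frames + 1:
        return False
    tail = recent_points[-(pause_frames + 1):]
    return all(abs(x1 - x0) + abs(y1 - y0) <= eps
               for (x0, y0), (x1, y1) in zip(tail, tail[1:]))
```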
When the motion trajectory information is recognized as motion trajectory information of a gesture, the motion trajectory information is processed into a gesture instruction, and a corresponding function is called according to the gesture instruction to operate the application. The motion trajectory information may be recognized as a gesture by comparing the current trajectory with the trajectories of prestored gestures; if they match, the current trajectory is treated as a gesture trajectory. For example, a backward wave above the camera can trigger a page-down operation; a forward wave above the camera can trigger a page-up operation; a leftward wave above the camera can delete one character in the input box; and further gestures can be defined for further operations.
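The patent leaves the comparison with prestored gestures open. As a rough, purely illustrative stand-in (the swipe set, names, and displacement threshold are all assumptions), a trajectory could be matched against four wave gestures by its net displacement, falling back to character input when nothing matches:

```python
def classify_trajectory(points, min_swipe=80.0):
    """Match the trajectory against four prestored wave gestures; anything
    that matches none of them is treated as handwritten-character input."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if max(abs(dx), abs(dy)) < min_swipe:
        return "character"  # no stored gesture matched
    if abs(dx) >= abs(dy):
        return "swipe_left" if dx < 0 else "swipe_right"
    return "swipe_up" if dy < 0 else "swipe_down"
```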
The embodiment of the present invention tracks a trajectory input device, such as the user's finger, through a camera, records the motion trajectory information of the trajectory input device in the air, and generates the information the user wants to input, freeing the user from the constraints of soft and hard keyboards and making input and control simpler, more convenient, and more natural.

Referring to FIG. 2, which is a schematic flowchart of another camera-based control method according to an embodiment of the present invention, the method comprises the following steps.
S201: Track, through a camera, a trajectory input device used by a user.

The trajectory input device may be the user's finger; that is, the camera may detect the user's fingertip and track its movement. The finger must move within the range the camera can capture so that complete motion trajectory information can be collected.

S202: Determine whether the motion trajectory information of the trajectory input device in the air is valid motion trajectory information.

The mobile terminal may determine whether the obtained motion trajectory information is valid. If the determination is yes, step S204 is performed; if no, step S203 is performed.

S203: Delete the motion trajectory information.

When the determination in step S202 is no, the motion trajectory information is deleted; all deleted trajectory information is invalid, for example the trajectory recorded when the finger has just entered the camera's capture range. The mobile terminal judges, from the fingertip speed, whether the user's finger performs a pen-lift action or a pen-drop action, the pen-lift action being the preparatory action before input starts and the pen-drop action being the finishing action after input is complete; the trajectory obtained before the pen-lift and the trajectory obtained after the pen-drop are both invalid.

S204: Record the motion trajectory information.

When the determination in step S202 is yes, the motion trajectory information is recorded. Valid motion trajectory information is the trajectory from the start of character input to the end of character input, or from the start of a gesture to the end of the gesture.
S205: When the motion trajectory information is recognized as motion trajectory information of an inputted character, process the motion trajectory information into two-dimensional data.

The motion trajectory information may be recognized as character input as follows: first determine whether the current trajectory matches the trajectory of a prestored gesture, such as a downward wave, an upward wave, or another gesture; if it matches no prestored gesture, the current trajectory is treated as the trajectory of an inputted character. The input may consist of one or more characters, and multiple characters may form a word or a Chinese phrase; when the input consists of multiple characters, each character contained in the motion trajectory information is processed into two-dimensional data separately.

S206: Perform character recognition on the two-dimensional data and generate at least one character sorted by weight.
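Step S205 does not specify how the recorded trajectory becomes two-dimensional data. One simple possibility, sketched here with an invented grid size and function name, is to rasterize the sampled points onto a fixed-size bitmap that a character recognizer can consume:

```python
def to_bitmap(points, size=16):
    """Project (x, y) trajectory samples onto a size x size grid of 0/1
    cells, normalizing the bounding box of the stroke to the full grid."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w = max(max(xs) - min(xs), 1e-6)
    h = max(max(ys) - min(ys), 1e-6)
    grid = [[0] * size for _ in range(size)]
    for x, y in points:
        col = min(int((x - min(xs)) / w * size), size - 1)
        row = min(int((y - min(ys)) / h * size), size - 1)
        grid[row][col] = 1
    return grid
```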
The weight ordering is based on the word frequency of each character in the lexicon and on the user's input habits.

S207: Display the at least one character sorted by weight to the user.
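A minimal sketch of that weighting follows; the interpolation factor and the two frequency dictionaries are assumptions, since the patent only states that lexicon word frequency and the user's input habits decide the display order.

```python
def rank_candidates(candidates, lexicon_freq, user_freq, alpha=0.7):
    """Sort recognition candidates so that characters that are frequent in
    the lexicon and habitual for this user are displayed first."""
    def weight(ch):
        return (alpha * lexicon_freq.get(ch, 0.0)
                + (1 - alpha) * user_freq.get(ch, 0.0))
    return sorted(candidates, key=weight, reverse=True)
```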
S208: When the motion trajectory information is recognized as motion trajectory information of a gesture, process the motion trajectory information into a gesture instruction.

The motion trajectory information may be recognized as a gesture by comparing the current trajectory with the trajectories of prestored gestures; if they match, the current trajectory is treated as a gesture trajectory.

S209: Call a corresponding function according to the gesture instruction to operate the application.

Specifically, when the user makes a backward wave above the camera, a gesture instruction is generated to call the corresponding function for a page-down operation; a forward wave above the camera generates a gesture instruction for a page-up operation; a leftward wave above the camera generates a gesture instruction to delete one character in the input box; and further gestures can be defined for further operations.
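Step S209's call of "a corresponding function" suggests a dispatch table. The sketch below is illustrative only: the gesture names and the App methods are invented, as the patent names no concrete API.

```python
class App:
    """Toy application used only to demonstrate the dispatch pattern."""
    def __init__(self):
        self.log = []

    def page_down(self):
        self.log.append("page_down")

    def page_up(self):
        self.log.append("page_up")

    def delete_char(self):
        self.log.append("delete_char")


GESTURE_ACTIONS = {
    "wave_back":    App.page_down,    # backward wave -> page down
    "wave_forward": App.page_up,      # forward wave  -> page up
    "wave_left":    App.delete_char,  # leftward wave -> delete one character
}


def dispatch(gesture, app):
    """Call the function mapped to the gesture instruction, if any."""
    handler = GESTURE_ACTIONS.get(gesture)
    if handler is None:
        return False
    handler(app)
    return True
```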
The mobile terminal may also sense, through a distance sensor, the distance between the trajectory input device used by the user and a target element, and perform a click operation on the target element when the distance reaches the operating distance.

For example, the distance sensor can detect that a finger has approached a certain candidate character, so that the word can be selected without tapping the screen.
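A hedged sketch of the distance-sensor click follows; the 20 mm operating distance and the re-arming behavior are assumptions, since the patent only requires that a click fires when the operating distance is reached.

```python
class HoverClicker:
    """Fires one click when the finger first comes within the operating
    distance of a target element, then re-arms after the finger retreats,
    so that hovering in place does not produce repeated clicks."""

    def __init__(self, operating_mm=20.0):
        self.operating_mm = operating_mm
        self.armed = True

    def update(self, distance_mm):
        if self.armed and distance_mm <= self.operating_mm:
            self.armed = False
            return True  # perform the click on the target element
        if distance_mm > self.operating_mm:
            self.armed = True
        return False
```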
The embodiment of the present invention tracks a trajectory input device, such as the user's finger, through a camera, records the motion trajectory information of the trajectory input device in the air, and generates the information the user wants to input, freeing the user from the constraints of soft and hard keyboards and making input and control simpler, more convenient, and more natural.

Referring to FIG. 3, which is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention, the mobile terminal comprises a tracking and recording module 10, a character processing module 20, and a gesture processing module 30.
The tracking and recording module 10 is configured to track, through a camera, a trajectory input device used by a user, and record motion trajectory information of the trajectory input device in the air.

Specifically, the trajectory input device may be the user's finger; that is, the tracking and recording module 10 may detect the user's fingertip through the camera and track its movement. The finger must move within the range the camera can capture so that complete motion trajectory information can be collected. The tracking and recording module 10 may further determine whether the obtained motion trajectory information is valid: if so, it records the motion trajectory information; otherwise it deletes the information. Valid motion trajectory information is the trajectory from the start of character input to the end of character input, or from the start of a gesture to the end of the gesture; deleted trajectory information is invalid, for example the trajectory recorded when the finger has just entered the camera's capture range. The mobile terminal judges, from the fingertip speed, whether the user's finger performs a pen-lift action or a pen-drop action, the pen-lift action being the preparatory action before input starts and the pen-drop action being the finishing action after input is complete; the trajectory obtained before the pen-lift and the trajectory obtained after the pen-drop are both invalid.

The user may input one or more characters. When the user inputs a single character, the tracking and recording module 10 may judge the end of an input by detecting a pause of the finger or its departure from the camera's capture range; when the user inputs multiple characters, which may form a word or a Chinese phrase, the tracking and recording module 10 must additionally judge from the fingertip movement whether the user has started writing a new character.
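The module's judgment that a new character has begun is not detailed in the patent. One invented heuristic, shown only as a sketch (the gap threshold is an assumption), is to start a new character whenever the fingertip makes an unusually large jump between consecutive samples:

```python
import math


def split_characters(points, gap=120.0):
    """Split one multi-character (x, y) trajectory into per-character
    point lists, treating a large jump as the start of a new character."""
    chars, current = [], [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if math.hypot(x1 - x0, y1 - y0) > gap:
            chars.append(current)  # large jump: previous character finished
            current = []
        current.append((x1, y1))
    chars.append(current)
    return chars
```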
The character processing module 20 is configured to generate a character according to the motion trajectory information.

Specifically, when the character processing module 20 recognizes the motion trajectory information as motion trajectory information of an inputted character, it processes the motion trajectory information into two-dimensional data, performs character recognition on the two-dimensional data, generates at least one character sorted by weight, and finally displays the at least one character sorted by weight to the user.

The gesture processing module 30 is configured to generate a gesture instruction according to the motion trajectory information, and control an application according to the gesture instruction.

When the gesture processing module 30 recognizes the motion trajectory information as motion trajectory information of a gesture, it processes the motion trajectory information into a gesture instruction and calls a corresponding function according to the gesture instruction to operate the application. For example, when the user makes a backward wave above the camera, the gesture processing module 30 can perform a page-down operation; a forward wave above the camera triggers a page-up operation; a leftward wave above the camera deletes one character in the input box; and further gestures can be defined for further operations.
Referring to FIG. 4, which is a schematic structural diagram of the tracking and recording module 10 in FIG. 3, the tracking and recording module 10 comprises a tracking unit 101, a judging unit 102, and a record deletion unit 103.

The tracking unit 101 is configured to track, through the camera, the trajectory input device used by the user. The trajectory input device may be the user's finger; that is, the tracking unit 101 may detect the user's fingertip through the camera and track its movement. The finger must move within the range the camera can capture so that complete motion trajectory information can be collected.

The judging unit 102 is configured to determine whether the motion trajectory information of the trajectory input device in the air is valid motion trajectory information, and notifies the record deletion unit 103 to perform the corresponding operation according to the result of the determination.

The record deletion unit 103 is configured to record the motion trajectory information when the determination is yes, and otherwise delete the motion trajectory information.

When the judging unit 102 determines yes, the record deletion unit 103 records the motion trajectory information; valid motion trajectory information is the trajectory from the start of character input to the end of character input, or from the start of a gesture to the end of the gesture.

When the judging unit 102 determines no, the record deletion unit 103 deletes the motion trajectory information; all deleted trajectory information is invalid, for example the trajectory recorded when the finger has just entered the camera's capture range. The mobile terminal judges, from the fingertip speed, whether the user's finger performs a pen-lift action or a pen-drop action, the pen-lift action being the preparatory action before input starts and the pen-drop action being the finishing action after input is complete; the trajectory obtained before the pen-lift and the trajectory obtained after the pen-drop are both invalid.
Referring to FIG. 5, which is a schematic structural diagram of the character processing module 20 of FIG. 3, the character processing module 20 includes a first processing unit 201, a character generating unit 202, and a display unit 203.

The first processing unit 201 is configured to process the motion track information into two-dimensional data when the motion track information is recognized as the motion track information of an input character.

Specifically, the first processing unit 201 first checks whether the current motion track information matches the motion track information of a pre-stored gesture; a gesture's motion track information may correspond to a downward flick, an upward flick, or another hand gesture. If the current motion track information does not match any pre-stored gesture, the first processing unit 201 treats it as the motion track information of an input character.

The character generating unit 202 is configured to perform character recognition on the two-dimensional data and generate at least one character sorted by weight.

The character generating unit 202 performs character recognition on the two-dimensional data and generates the at least one weight-sorted character according to each character's word frequency in the lexicon and the user's input habits.

The display unit 203 is configured to display the at least one weight-sorted character to the user.
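The character pipeline above can be sketched as follows. The recognizer itself is replaced by a stand-in candidate list, since the patent does not specify a recognition algorithm; the weighting formula, function names, and example data are assumptions for illustration only.

```python
# Hedged sketch of the character processing pipeline: flatten a 3-D air
# trajectory into 2-D data, then rank recognizer candidates by a weight
# combining lexicon word frequency with the user's input habits. All
# names and the weighting formula are assumptions, not the patent's.

def to_2d(trajectory):
    """Drop the depth component of each (x, y, z) air-trajectory sample."""
    return [(x, y) for x, y, _ in trajectory]

def rank_candidates(candidates, lexicon_freq, user_history):
    """candidates: characters proposed by some recognizer.

    Assumed weight: lexicon frequency plus twice the number of times
    the user previously chose the character (their 'input habit').
    """
    def weight(ch):
        return lexicon_freq.get(ch, 0) + 2 * user_history.get(ch, 0)
    return sorted(candidates, key=weight, reverse=True)
```

For example, with candidates `["O", "0", "D"]`, a user who frequently picks the digit `0` would see it ranked first even if `O` is more frequent in the lexicon.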
Referring to FIG. 6, which is a schematic structural diagram of the gesture processing module 30 of FIG. 3, the gesture processing module 30 includes a second processing unit 301 and a calling unit 302.

The second processing unit 301 is configured to process the motion track information into a gesture instruction when the motion track information is recognized as the motion track information of a gesture.

The motion track information may be recognized as gesture motion track information as follows: the second processing unit 301 compares the current motion track information with the motion track information of the pre-stored gestures; if they match, the second processing unit 301 treats the current motion track information as gesture motion track information.

The calling unit 302 is configured to call the corresponding function according to the gesture instruction to operate the application.

Specifically, when the user makes a backward flick above the camera, the calling unit 302 calls the corresponding function according to the gesture instruction generated by the second processing unit 301 to page down; when a forward flick is made above the camera, the calling unit 302 calls the corresponding function to page up; when a leftward wave is made above the camera, the calling unit 302 calls the corresponding function to delete one character from the input box. Further gestures may be defined to implement additional operations.
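The gesture-to-function mapping above amounts to a dispatch table. The sketch below is a hypothetical illustration: the gesture identifiers, handler functions, and state shape are invented names, since the patent only specifies that each gesture instruction calls a corresponding function.

```python
# Sketch of the calling unit's gesture dispatch (hypothetical names):
# each recognized gesture instruction maps to a handler function that
# operates on the application, matching the examples in the text.

def page_down(state):
    state["page"] += 1          # backward flick: next page

def page_up(state):
    state["page"] -= 1          # forward flick: previous page

def delete_char(state):
    state["text"] = state["text"][:-1]   # leftward wave: delete a char

GESTURE_HANDLERS = {
    "flick_back": page_down,
    "flick_forward": page_up,
    "wave_left": delete_char,
}

def dispatch(gesture, state):
    """Call the function corresponding to the gesture instruction."""
    handler = GESTURE_HANDLERS.get(gesture)
    if handler:
        handler(state)
    return state
```

New gestures are supported by adding one entry to `GESTURE_HANDLERS`, mirroring the text's note that further gestures can implement additional operations.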
The mobile terminal may further include a sensing module and an operation module.

The sensing module is configured to sense, through a distance sensor, the distance between the trajectory input device used by the user and a target element.

The operation module is configured to perform a click operation on the target element when the distance between the trajectory input device and the target element reaches the operating distance.

For example, the distance sensor in the sensing module may detect that a finger has come close to a certain candidate character and then notify the operation module, so that the word is selected without tapping the screen.
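A minimal sketch of the sensing/operation pair described above follows. The 2 cm operating distance and the callback shape are assumptions, not values from the patent, which leaves the operating distance unspecified.

```python
# Minimal sketch of the proximity-click behavior: when a distance-sensor
# reading drops to or below the operating distance, a click is fired on
# the target element. The threshold and callback shape are assumptions.

OPERATING_DISTANCE_CM = 2.0   # assumed operating distance

def on_distance_reading(distance_cm, target, click):
    """Handle one distance-sensor reading.

    Calls click(target) and returns True when the trajectory input
    device is within the operating distance; otherwise returns False.
    """
    if distance_cm <= OPERATING_DISTANCE_CM:
        click(target)
        return True
    return False
```

In the word-selection example, `target` would be the candidate character the finger hovers over, and `click` would be the operation module's selection routine.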
In the embodiments of the present invention, a trajectory input device such as the user's finger is tracked by the camera, and the motion track information of the trajectory input device in the air is recorded to generate the information the user wants to input. This frees the user from the constraints of the various soft and hard keyboards, allowing input and control in a simpler way and making them more convenient and natural.
The modules or units in the embodiments of the present invention may be implemented by a general-purpose integrated circuit, such as a CPU (Central Processing Unit), or by an ASIC (Application Specific Integrated Circuit).
A person of ordinary skill in the art will understand that all or part of the processes of the above method embodiments may be carried out by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the flows of the method embodiments described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
What is disclosed above is merely a preferred embodiment of the present invention and of course cannot limit the scope of the claims; equivalent changes made according to the claims of the present invention therefore remain within the scope covered by the invention.

Claims

1. A camera-based control method, comprising:

tracking, by a camera, a trajectory input device used by a user, and recording motion track information of the trajectory input device in the air; and

performing a terminal control operation according to the motion track information, specifically comprising: controlling generation of a character according to the motion track information, or generating a gesture instruction according to the motion track information and performing a control operation on an application according to the gesture instruction.
2. The method according to claim 1, wherein the tracking, by a camera, a trajectory input device used by a user, and recording motion track information of the trajectory input device in the air comprises:

tracking, by the camera, the trajectory input device used by the user;

determining whether the motion track information of the trajectory input device in the air is valid motion track information; and

when the determination is yes, recording the motion track information, otherwise deleting the motion track information.
3. The method according to claim 2, wherein the controlling generation of a character according to the motion track information comprises:

when the motion track information is recognized as motion track information of an input character, processing the motion track information into two-dimensional data;

performing character recognition on the two-dimensional data and generating at least one character sorted by weight; and

displaying the at least one character sorted by weight to the user.
4. The method according to claim 2, wherein the generating a gesture instruction according to the motion track information and performing a control operation on an application according to the gesture instruction comprises:

when the motion track information is recognized as motion track information of a gesture, processing the motion track information into a gesture instruction; and

calling a corresponding function according to the gesture instruction to operate the application.
5. The method according to any one of claims 1 to 4, further comprising:

sensing, through a distance sensor, a distance between the trajectory input device used by the user and a target element; and

performing a click operation on the target element when the distance between the trajectory input device and the target element reaches an operating distance.
6. A mobile terminal, comprising:

a tracking and recording module, configured to track, by a camera, a trajectory input device used by a user, and record motion track information of the trajectory input device in the air;

a character processing module, configured to control generation of a character according to the motion track information; and

a gesture processing module, configured to generate a gesture instruction according to the motion track information and perform a control operation on an application according to the gesture instruction.
7. The mobile terminal according to claim 6, wherein the tracking and recording module comprises:

a tracking unit, configured to track, by the camera, the trajectory input device used by the user;

a judging unit, configured to determine whether the motion track information of the trajectory input device in the air is valid motion track information; and

a record deletion unit, configured to record the motion track information when the determination is yes, and otherwise delete the motion track information.
8. The mobile terminal according to claim 7, wherein the character processing module comprises:

a first processing unit, configured to process the motion track information into two-dimensional data when the motion track information is recognized as motion track information of an input character;

a character generating unit, configured to perform character recognition on the two-dimensional data and generate at least one character sorted by weight; and

a display unit, configured to display the at least one character sorted by weight to the user.
9. The mobile terminal according to claim 7, wherein the gesture processing module comprises:

a second processing unit, configured to process the motion track information into a gesture instruction when the motion track information is recognized as motion track information of a gesture; and

a calling unit, configured to call a corresponding function according to the gesture instruction to operate the application.
10. The mobile terminal according to any one of claims 6 to 9, further comprising:

a sensing module, configured to sense, through a distance sensor, a distance between the trajectory input device used by the user and a target element; and

an operation module, configured to perform a click operation on the target element when the distance between the trajectory input device and the target element reaches an operating distance.
PCT/CN2013/080683 2013-01-06 2013-08-02 Camera-based control method and mobile terminal WO2014106380A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310003506.3 2013-01-06
CN201310003506.3A CN103092343B (en) 2013-01-06 2013-01-06 A kind of control method based on photographic head and mobile terminal

Publications (1)

Publication Number Publication Date
WO2014106380A1 true WO2014106380A1 (en) 2014-07-10

Family

ID=48205015

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/080683 WO2014106380A1 (en) 2013-01-06 2013-08-02 Camera-based control method and mobile terminal

Country Status (2)

Country Link
CN (1) CN103092343B (en)
WO (1) WO2014106380A1 (en)


Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103092343B (en) * 2013-01-06 2016-12-28 深圳创维数字技术有限公司 A kind of control method based on photographic head and mobile terminal
CN104618566A (en) * 2013-11-04 2015-05-13 贵州广思信息网络有限公司 Control method for smart mobile phones
CN104951083A (en) * 2015-07-21 2015-09-30 石狮市智诚通讯器材贸易有限公司 Remote gesture input method and input system
CN107291215A (en) * 2016-04-01 2017-10-24 北京搜狗科技发展有限公司 A kind of body-sensing input message processing method and device
CN106406527A (en) * 2016-09-07 2017-02-15 传线网络科技(上海)有限公司 Input method and device based on virtual reality and virtual reality device
CN106778202A (en) * 2016-12-20 2017-05-31 北京小米移动软件有限公司 The unlocking method of terminal device, device and equipment
CN111090372B (en) * 2019-04-22 2021-11-05 广东小天才科技有限公司 Man-machine interaction method and electronic equipment
CN111090383B (en) * 2019-04-22 2021-06-25 广东小天才科技有限公司 Instruction identification method and electronic equipment
CN111913585A (en) * 2020-09-21 2020-11-10 北京百度网讯科技有限公司 Gesture recognition method, device, equipment and storage medium
CN112286411A (en) * 2020-09-30 2021-01-29 北京大米科技有限公司 Display mode control method and device, storage medium and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020071036A1 (en) * 2000-12-13 2002-06-13 International Business Machines Corporation Method and system for video object range sensing
CN102168396A (en) * 2011-03-18 2011-08-31 中铁第一勘察设计院集团有限公司 Real-time data acquisition and data processing integrated field measuring method of rail datum network
CN102854983A (en) * 2012-09-10 2013-01-02 中国电子科技集团公司第二十八研究所 Man-machine interaction method based on gesture recognition
CN103092343A (en) * 2013-01-06 2013-05-08 深圳创维数字技术股份有限公司 Control method based on camera and mobile terminal

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1881994A (en) * 2006-05-18 2006-12-20 北京中星微电子有限公司 Method and apparatus for hand-written input and gesture recognition of mobile apparatus
CN101354608A (en) * 2008-09-04 2009-01-28 中兴通讯股份有限公司 Method and system for implementing video input
CN101930282A (en) * 2009-06-27 2010-12-29 英华达(上海)电子有限公司 Mobile terminal and mobile terminal-based input method
CN102236409A (en) * 2010-04-30 2011-11-09 宏碁股份有限公司 Motion gesture recognition method and motion gesture recognition system based on image
CN101901052B (en) * 2010-05-24 2012-07-04 华南理工大学 Target control method based on mutual reference of both hands
US20110299737A1 (en) * 2010-06-04 2011-12-08 Acer Incorporated Vision-based hand movement recognition system and method thereof
CN102843473A (en) * 2012-08-31 2012-12-26 惠州Tcl移动通信有限公司 Mobile phone and distance sensing device thereof


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016056984A1 (en) * 2014-10-08 2016-04-14 Crunchfish Ab Communication device for improved sharing of content
US9930506B2 (en) 2014-10-08 2018-03-27 Crunchfish Ab Communication device for improved sharing of content

Also Published As

Publication number Publication date
CN103092343B (en) 2016-12-28
CN103092343A (en) 2013-05-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13870045

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.11.2015)

122 Ep: pct application non-entry in european phase

Ref document number: 13870045

Country of ref document: EP

Kind code of ref document: A1