WO2019174398A1 - Method, apparatus and terminal for simulating mouse operations using gestures - Google Patents

Method, apparatus and terminal for simulating mouse operations using gestures Download PDF

Info

Publication number
WO2019174398A1
WO2019174398A1 PCT/CN2019/072077 CN2019072077W WO2019174398A1
Authority
WO
WIPO (PCT)
Prior art keywords
gesture
user
event
operation event
mouse
Prior art date
Application number
PCT/CN2019/072077
Other languages
English (en)
French (fr)
Inventor
贺三元
Original Assignee
阿里巴巴集团控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 filed Critical 阿里巴巴集团控股有限公司
Publication of WO2019174398A1 publication Critical patent/WO2019174398A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Definitions

  • the embodiments of the present disclosure relate to the field of gesture recognition technologies, and in particular, to a method, an apparatus, and a terminal for simulating a mouse operation by using a gesture.
  • the embodiment of the present specification provides a method, a device and a terminal for simulating a mouse operation by using a gesture, and the technical solution is as follows:
  • a method for simulating a mouse operation using a gesture comprising:
  • the preset mapping set includes correspondences between at least one group of gesture operation events and mouse operation events, where the mouse operation events include at least a mouse click event and a mouse movement event
  • a mouse operation event corresponding to the gesture operation event of the user is triggered.
  • an apparatus for simulating a mouse operation using a gesture comprising:
  • An acquiring module configured to acquire gesture information obtained by the gesture collection device to collect a user gesture
  • An identification module configured to identify the gesture information, and obtain a gesture operation event of the user
  • a search module configured to search a preset mapping set according to the gesture operation event of the user, where the preset mapping set includes correspondences between at least one group of gesture operation events and mouse operation events, and the mouse operation events include at least a mouse click event and a mouse movement event;
  • the triggering module is configured to trigger a mouse operation event corresponding to the gesture operation event of the user if the gesture operation event of the user is found in the preset mapping set.
  • a terminal comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements any of the methods for simulating mouse operations using gestures provided by the embodiments of the present specification.
  • in the technical solutions provided by the embodiments of the present disclosure, gesture information obtained by a gesture collection device collecting a user gesture is acquired; the gesture information is recognized to obtain a gesture operation event of the user; a preset mapping set comprising correspondences between at least one group of gesture operation events and mouse operation events is searched according to the gesture operation event of the user; and if the gesture operation event of the user is found in the preset mapping set, the mouse operation event corresponding to the gesture operation event of the user is triggered, thereby realizing simulation of mouse operations using gestures and providing the user with a novel way of operating an intelligent terminal, which can satisfy user needs to a certain extent and improve the user experience.
  • no single embodiment of the present specification needs to achieve all of the above effects.
  • FIG. 1 is a schematic diagram of an application scenario of using a gesture to simulate a mouse operation according to an exemplary embodiment of the present disclosure
  • FIG. 2 is a flow chart of an embodiment of a method for simulating a mouse operation using a gesture according to an exemplary embodiment of the present disclosure
  • 3a, 3b, and 3c are schematic diagrams showing preset gestures according to an exemplary embodiment of the present specification
  • FIG. 4 is a block diagram of an embodiment of an apparatus for simulating a mouse operation using a gesture according to an exemplary embodiment of the present disclosure
  • FIG. 5 is a schematic diagram showing a more specific hardware structure of a terminal provided by an embodiment of the present specification.
  • FIG. 1 is a schematic diagram of an application scenario for simulating a mouse operation by using a gesture according to an exemplary embodiment of the present disclosure.
  • FIG. 1 includes an intelligent terminal 110 and an image capturing device 120.
  • the image capturing device 120 may collect gesture information for a user gesture (not shown in FIG. 1 ), and transmit the collected gesture information to the smart terminal 110.
  • the smart terminal 110 can execute the method for simulating mouse operations using gestures provided by the embodiments of the present specification, to determine the user gesture, determine the mouse operation event corresponding to the user gesture, and trigger that mouse operation event, thereby realizing operation of the smart terminal 110.
  • the specific action sequence may include: the user moves the mouse so that the mouse pointer is displayed on the display interface of the smart terminal 110; the user then moves the mouse so that the mouse pointer moves onto the "pause" control; finally, the user presses and releases the left mouse button, and when the left mouse button is released, the video pauses.
  • correspondingly, in the embodiment of the present specification, the user may first make, facing the image capturing device 120, a gesture for indicating that the mouse pointer is to be displayed on the display interface of the smart terminal 110
  • the smart terminal 110 can display the mouse pointer on the display interface according to the gesture; further, the user makes, facing the image capturing device 120, a gesture for indicating that the mouse pointer is to be moved on the display interface of the smart terminal 110, and the smart terminal 110 can move the mouse pointer on the display interface according to the gesture until the mouse pointer is moved onto the "pause" control; further, the user makes, facing the image capturing device 120, a gesture indicating that the left mouse button is pressed and released, and the smart terminal 110 can, according to the gesture, trigger the mouse pointer to click the "pause" control and pause video playback.
  • collecting the gesture information of the user gesture through the image capturing device 120 is merely an example.
  • in practical applications, the gesture information of the user gesture may also be collected by other devices, such as an infrared sensor, which is not limited in the embodiments of the present specification.
  • the arrangement of the image capturing device 120 and the smart terminal 110 illustrated in FIG. 1 is merely an example.
  • in practical applications, the smart terminal 110 may have a built-in camera or infrared sensor, which is not limited in the embodiments of the present specification.
  • FIG. 2 is a flowchart of an embodiment of a method for simulating a mouse operation by using a gesture according to an exemplary embodiment of the present disclosure.
  • based on the application scenario shown in FIG. 1, the method is applicable to the smart terminal 110 illustrated in FIG. 1 and includes the following steps:
  • Step 202 Acquire gesture information obtained by the gesture collection device to collect a user gesture.
  • based on the application scenario illustrated in FIG. 1, the image capturing device 120 serves as the gesture collection device; the gesture information obtained by the gesture collection device collecting the user gesture is then the user gesture image collected by the image capturing device 120.
  • the gesture collection device may also be an infrared sensor.
  • the gesture information obtained by the gesture acquisition device to collect the user gesture is an infrared sensing signal collected by the infrared sensor.
  • Step 204 Identify the gesture information to obtain a gesture operation event of the user.
  • some gestures may be defined based on the operation of the mouse in the actual application.
  • the defined gesture is referred to as a preset gesture.
  • three types of preset gestures may be defined, respectively for indicating that the mouse pointer is displayed on the display interface of the smart terminal 110, for indicating that the left mouse button is in a pressed state, and for indicating that the left mouse button is in an unpressed state
  • the preset gestures may at least include: a fist gesture (shown in FIG. 3a), a palm-open gesture (shown in FIG. 3b), and a single-finger-extended gesture (shown in FIG. 3c).
  • the palm open gesture is used to indicate that the mouse pointer is displayed on the display interface of the smart terminal 110
  • the fist gesture is used to indicate that the left button of the mouse is in a pressed state
  • the single finger straight gesture is used to indicate that the left button of the mouse is in an unpressed state.
  • the mouse operation event can be divided based on the type of mouse operation in the actual application. For example, at least two types of mouse operation events can be divided, namely, a mouse click event and a mouse movement event. Further, based on the operation characteristics of each type of mouse operation event, a correspondence between a mouse operation event and a gesture operation event is established.
  • for a mouse movement event, the operation characteristic is "the mouse moves"; based on this, a first gesture operation event for representing that the user's gesture has moved may be defined, and the first gesture operation event corresponds to the mouse movement event
  • for a mouse click event, the operation characteristic is "the left mouse button is pressed", which involves a transformation of the user's gesture; based on this, a second gesture operation event for representing that the user's gesture has transformed may be defined, and the second gesture operation event corresponds to the mouse click event.
  • based on the above preset gestures and the definitions of the first gesture operation event and the second gesture operation event, the gesture operation events exemplified in Table 1 below can be obtained:
  • by mapping the above gesture operation events to mouse movement events and mouse click events, a mapping from gesture operation events to existing mouse events can be realized, for example as shown in Table 2 below
  • Table 2: An example of the mapping relationship between gesture operation events and existing mouse events:
  • Single-finger-to-fist event | MouseDown (triggered when the left mouse button is pressed)
  • Single-finger or fist movement event | MouseOver (triggered when the mouse pointer passes over)
  • Fist-to-single-finger event | MouseUp (triggered when the left mouse button goes from pressed to released)
  • Single-finger or fist movement event | MouseOut (triggered when the mouse pointer moves out)
  • Single-finger or fist movement event | MouseMove (triggered when the mouse pointer moves)
  • by making preset gestures to realize the corresponding gesture operation events, the user can reuse existing mouse events, thereby achieving compatibility with the mouse events encapsulated inside existing controls.
  • the gesture operation events may further include: a palm-to-single-finger event, for indicating that the state of the mouse pointer is adjusted from the hover state to the working state, and a single-finger-to-palm event, for indicating that the state of the mouse pointer is adjusted from the working state to the hover state.
  • the currently acquired gesture information and the previously acquired gesture information may be separately identified to obtain the gesture currently made by the user and the gesture previously made by the user.
  • the gesture currently made by the user is referred to as a first gesture
  • the gesture previously made by the user is referred to as a second gesture.
  • it may first be determined whether the first gesture and the second gesture belong to the preset gestures; if yes, it is further determined whether the first gesture is the same as the second gesture; if they are the same, the physical displacement of the first gesture relative to the second gesture is determined, and if the physical displacement is greater than a preset threshold, a first gesture operation event for indicating that the user's gesture has moved from the position of the second gesture to the position of the first gesture is obtained; if the first gesture and the second gesture are different, a second gesture operation event for indicating that the user's gesture has transformed from the second gesture into the first gesture may be obtained.
  • if the recognized gesture does not belong to the preset gestures, the state of the mouse pointer may be set to the hover state.
  • the user's gesture region is first extracted from the user gesture image.
  • in practical applications, the user's gesture is usually placed in front of the user's body.
  • the gesture region can therefore be extracted from the user gesture image by utilizing the feature that the gesture region and the background region have different depth values.
  • specifically, a grayscale histogram of the image is computed from the depth values of the pixels; the grayscale histogram represents the number of pixels of each gray level in the image.
  • in the user gesture image, the area of the gesture region is small relative to the background region, and its gray values are small.
  • therefore, the gray levels may be searched in descending order for a gray value at which the number of pixels changes greatly, and the gray value found is used as the gray threshold for region segmentation; for example, if the gray threshold is 235, the user gesture image may be binarized according to this threshold.
  • in the resulting binary image, the region represented by the white pixels is the gesture region.
  • further, feature extraction is performed on the gesture region using a preset feature extraction algorithm.
  • the preset feature extraction algorithm may be a SIFT feature extraction algorithm, a shape feature extraction algorithm based on wavelets and relative moments, a model-based method, and the like.
  • the extracted features may include: a centroid of the gesture area, a feature vector of the gesture area, a number of fingers, and the like.
  • gestures are identified by the extracted features to determine the gestures made by the user.
  • the determined physical displacement can be converted into inches, and the physical displacement is then divided by the actual distance (also in inches) corresponding to each pixel on the screen of the smart terminal 110; the result is the number of pixels by which the mouse pointer moves.
  • Step 206 Search a preset mapping set according to a gesture operation event of the user, where the preset mapping set includes a correspondence between at least one set of gesture operation events and a mouse operation event, where the mouse operation event includes at least a mouse click event and a mouse movement event. .
  • Step 208 If a gesture operation event of the user is found in the preset mapping set, a mouse operation event corresponding to the gesture operation event of the user is triggered.
  • a mapping set may be preset, and the mapping set includes a correspondence between at least one set of gesture operation events and mouse operation events.
  • the mapping set may be as shown in Table 3 below:
  • Fist movement event: used to indicate that the left mouse button is pressed and the mouse moves
  • Single-finger movement event: used to indicate that the left mouse button is in an unpressed state and the mouse moves
  • Single-finger-to-fist event: used to indicate that the left mouse button is pressed
  • Fist-to-single-finger event: used to indicate that the left mouse button is released
  • Palm-to-fist event: used to indicate that the left mouse button is pressed
  • Fist-to-palm event: used to indicate that the left mouse button is released
  • Single-finger-to-palm event: used to indicate that the mouse pointer enters the hover state from the working state
  • Palm-to-single-finger event: used to indicate that the mouse pointer enters the working state from the hover state
  • based on the mapping set exemplified in Table 3 above, after the gesture operation event of the user is obtained, the mapping set may be searched according to the gesture operation event, and if the gesture operation event is found, the corresponding mouse operation event is triggered.
  • in the technical solution provided by the present invention, gesture information obtained by a gesture collection device collecting a user gesture is acquired; the gesture information is recognized to obtain a gesture operation event of the user; a preset mapping set comprising correspondences between at least one group of gesture operation events and mouse operation events is searched according to the gesture operation event of the user; and if the gesture operation event of the user is found in the preset mapping set, the mouse operation event corresponding to the gesture operation event of the user is triggered, thereby realizing simulation of mouse operations using gestures and providing the user with a novel way of operating an intelligent terminal, which can satisfy user needs to a certain extent and improve the user experience.
  • corresponding to the above method embodiments, the embodiment of the present specification further provides an apparatus for simulating mouse operations using gestures.
  • referring to FIG. 4, which shows a block diagram of an embodiment of such an apparatus according to an exemplary embodiment of the present disclosure, the apparatus may include: an acquisition module 41, a recognition module 42, a search module 43, and a trigger module 44, wherein:
  • the obtaining module 41 is configured to acquire gesture information obtained by the gesture collection device to collect a user gesture.
  • the identification module 42 can be configured to identify the gesture information to obtain a gesture operation event of the user;
  • the searching module 43 is configured to search, according to the gesture operation event of the user, a preset mapping set, where the preset mapping set includes a correspondence between at least one set of gesture operation events and a mouse operation event, where the mouse operation event is at least Including mouse click events, mouse movement events;
  • the triggering module 44 is configured to trigger a mouse operation event corresponding to the gesture operation event of the user if the gesture operation event of the user is found in the preset mapping set.
  • the gesture collection device is an image collection device
  • the gesture information is a user gesture image collected by the image collection device.
  • the identification module 42 can include (not shown in FIG. 4):
  • a region extraction submodule configured to extract a gesture area of the user in the user gesture image
  • a feature extraction sub-module configured to perform feature extraction on the gesture region by using a preset feature extraction algorithm
  • the feature recognition sub-module is configured to perform gesture recognition through the extracted features to obtain a gesture operation event of the user.
  • the gesture operation event of the user includes at least: a first gesture operation event for indicating that the gesture of the user is moved, and a second gesture operation event for indicating that the gesture of the user is changed.
  • the first gesture operation event corresponds to the mouse movement event
  • the second gesture operation event corresponds to the mouse click event.
  • the identification module 42 can include (not shown in FIG. 4):
  • a gesture recognition sub-module configured to respectively identify the currently acquired gesture information and the previously acquired gesture information, to obtain a first gesture currently made by the user and a second gesture previously made by the user ;
  • a first determining sub-module configured to determine whether the first gesture and the second gesture belong to a preset gesture
  • a second determining sub-module configured to determine whether the first gesture and the second gesture are the same if the first gesture and the second gesture belong to a preset gesture
  • a displacement determining submodule configured to determine a physical displacement of the first gesture relative to the second gesture if the first gesture is the same as the second gesture
  • a first determining submodule configured to: if the physical displacement is greater than a preset threshold, obtain a first gesture for indicating that the gesture of the user is moved from a location where the second gesture is located to a location where the first gesture is located Operational event
  • a second determining submodule configured to: if the first gesture is different from the second gesture, obtain a second gesture operation event for indicating that the user's gesture has transformed from the second gesture into the first gesture.
  • the preset gesture includes at least: a fist gesture, a palm open gesture, and a single-finger straight gesture.
  • the acquisition module 41, the identification module 42, the search module 43, and the trigger module 44 are four functionally independent modules, which can be simultaneously configured in the device as shown in FIG. 4, or can be separately configured in the device. Therefore, the structure shown in FIG. 4 should not be construed as limiting the embodiment of the present specification.
  • the embodiment of the present specification further provides a terminal, which at least includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the foregoing method for simulating mouse operations using gestures.
  • the method includes: acquiring gesture information obtained by the gesture collection device to collect a user gesture; identifying the gesture information to obtain a gesture operation event of the user; and searching for a preset mapping set according to the gesture operation event of the user, where the preset The mapping set includes a correspondence between the at least one set of gesture operation events and the mouse operation event, wherein the mouse operation event includes at least a mouse click event and a mouse movement event; if the user is found in the preset mapping set The gesture operation event triggers a mouse operation event corresponding to the gesture operation event of the user.
  • the gesture collection device is an image collection device
  • the gesture information is a user gesture image collected by the image collection device.
  • the gesture information is identified to obtain a gesture operation event of the user, including: extracting a gesture area of the user in the user gesture image; and performing the gesture area by using a preset feature extraction algorithm. Feature extraction; gesture recognition by the extracted features to obtain a gesture operation event of the user.
  • the gesture operation event of the user includes at least: a first gesture operation event for indicating that the gesture of the user is moved, and a second gesture operation event for indicating that the gesture of the user is changed.
  • the first gesture operation event corresponds to the mouse movement event
  • the second gesture operation event corresponds to the mouse click event.
  • the gesture information is recognized to obtain the gesture operation event of the user, including: recognizing the currently acquired gesture information and the previously acquired gesture information separately, to obtain a first gesture currently made by the user and a second gesture previously made by the user; determining whether the first gesture and the second gesture belong to the preset gestures, and if yes, determining whether the first gesture and the second gesture are the same; if they are the same, determining a physical displacement of the first gesture relative to the second gesture; if the physical displacement is greater than a preset threshold, obtaining a first gesture operation event for indicating that the user's gesture has moved from the position of the second gesture to the position of the first gesture; if they are different, obtaining a second gesture operation event for indicating that the user's gesture has transformed from the second gesture into the first gesture.
  • the preset gesture includes at least: a fist gesture, a palm open gesture, and a single-finger straight gesture.
  • FIG. 5 is a schematic diagram of a more specific hardware structure of a terminal provided by an embodiment of the present specification.
  • the terminal may include a processor 510, a memory 520, an input/output interface 530, a communication interface 540, and a bus 550.
  • the processor 510, the memory 520, the input/output interface 530, and the communication interface 540 implement a communication connection between the devices via the bus 550.
  • the processor 510 can be implemented by using a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solutions provided by the embodiments of the present specification.
  • the memory 520 can be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like.
  • the memory 520 can store the operating system and other applications.
  • when the technical solutions provided by the embodiments of the present specification are implemented by software or firmware, the related program code is saved in the memory 520 and is called and executed by the processor 510.
  • the input/output interface 530 is used to connect an input/output module to implement information input and output.
  • the input/output module can be configured as a component in the device (not shown in FIG. 5) or externally connected to the device to provide the corresponding functionality.
  • the input device may include a keyboard, a mouse, a touch screen, a microphone, various types of sensors, and the like, and the output device may include a display, a speaker, a vibrator, an indicator light, and the like.
  • the communication interface 540 is used to connect a communication module (not shown in FIG. 5) to implement communication interaction between the device and other devices.
  • the communication module can communicate by wired means (such as USB, network cable, etc.), or can communicate by wireless means (such as mobile network, WIFI, Bluetooth, etc.).
  • Bus 550 includes a path for transferring information between various components of the device, such as processor 510, memory 520, input/output interface 530, and communication interface 540.
  • although the above device only shows the processor 510, the memory 520, the input/output interface 530, the communication interface 540, and the bus 550, in a specific implementation the device may further include other components necessary for normal operation.
  • the above-mentioned devices may also include only the components necessary for implementing the embodiments of the present specification, and do not necessarily include all the components shown in the drawings.
  • the embodiment of the present specification further provides a computer readable storage medium having stored thereon a computer program, which is executed by a processor to implement the aforementioned method for simulating a mouse operation using a gesture.
  • the method includes: acquiring gesture information obtained by the gesture collection device to collect a user gesture; identifying the gesture information to obtain a gesture operation event of the user; and searching for a preset mapping set according to the gesture operation event of the user, where the preset The mapping set includes a correspondence between the at least one set of gesture operation events and the mouse operation event, wherein the mouse operation event includes at least a mouse click event and a mouse movement event; if the user is found in the preset mapping set The gesture operation event triggers a mouse operation event corresponding to the gesture operation event of the user.
  • Computer readable media include permanent and non-permanent, removable and non-removable media.
  • Information storage can be implemented by any method or technology.
  • the information can be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information accessible by a computing device.
  • as defined herein, computer readable media do not include transitory media, such as modulated data signals and carrier waves.
  • the embodiments of the present specification can be implemented by means of software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions of the embodiments of the present specification may, in essence, be embodied in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the methods described in the various embodiments, or parts of embodiments, of the present specification.
  • the system, device, module or unit illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function.
  • a typical implementation device is a computer, and the specific form of the computer may be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email transceiver device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
  • the various embodiments in the specification are described in a progressive manner, and the same or similar parts between the various embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments.
  • for the apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and for relevant parts reference may be made to the description of the method embodiments.
  • the apparatus embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separated, and when implementing the solutions of the embodiments of the present specification, the functions of the modules may be realized in one or more pieces of software and/or hardware. Some or all of the modules may also be selected according to actual needs to achieve the purpose of the solution of the embodiment. A person of ordinary skill in the art can understand and implement this without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method, apparatus, and terminal for simulating mouse operations using gestures, the method comprising: acquiring gesture information obtained by a gesture acquisition device collecting a user gesture (202); recognizing the gesture information to obtain a gesture operation event of the user (204); searching a preset mapping set according to the gesture operation event of the user, the preset mapping set comprising correspondences between at least one group of gesture operation events and mouse operation events, wherein the mouse operation events comprise at least a mouse click event and a mouse movement event (206); and if the gesture operation event of the user is found in the preset mapping set, triggering the mouse operation event corresponding to the gesture operation event of the user (208).

Description

Method, Apparatus and Terminal for Simulating Mouse Operations Using Gestures
Technical Field
The embodiments of the present specification relate to the field of gesture recognition technologies, and in particular, to a method, an apparatus, and a terminal for simulating mouse operations using gestures.
Background
With the development of information technology, intelligent terminals have become an indispensable part of people's lives, and users can perform a variety of operations through intelligent terminals. At present, when a user operates an intelligent terminal, the operation is usually performed with a mouse. In actual use, however, situations in which the mouse cannot be used will inevitably arise, for example, mouse failure or the mouse running out of power. For such situations, the prior art provides no contingency measures, so the user cannot operate the intelligent terminal, resulting in a poor user experience.
Summary
In view of the above technical problem, the embodiments of the present specification provide a method, an apparatus, and a terminal for simulating mouse operations using gestures. The technical solutions are as follows:
According to a first aspect of the embodiments of the present specification, a method for simulating mouse operations using gestures is provided, the method comprising:
acquiring gesture information obtained by a gesture acquisition device collecting a user gesture;
recognizing the gesture information to obtain a gesture operation event of the user;
searching a preset mapping set according to the gesture operation event of the user, the preset mapping set comprising correspondences between at least one group of gesture operation events and mouse operation events, wherein the mouse operation events comprise at least a mouse click event and a mouse movement event;
if the gesture operation event of the user is found in the preset mapping set, triggering the mouse operation event corresponding to the gesture operation event of the user.
According to a second aspect of the embodiments of the present specification, an apparatus for simulating mouse operations using gestures is provided, the apparatus comprising:
an acquisition module, configured to acquire gesture information obtained by a gesture acquisition device collecting a user gesture;
a recognition module, configured to recognize the gesture information to obtain a gesture operation event of the user;
a search module, configured to search a preset mapping set according to the gesture operation event of the user, the preset mapping set comprising correspondences between at least one group of gesture operation events and mouse operation events, wherein the mouse operation events comprise at least a mouse click event and a mouse movement event;
a trigger module, configured to trigger, if the gesture operation event of the user is found in the preset mapping set, the mouse operation event corresponding to the gesture operation event of the user.
According to a third aspect of the embodiments of the present specification, a terminal is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements any of the methods for simulating mouse operations using gestures provided by the embodiments of the present specification.
In the technical solutions provided by the embodiments of the present specification, gesture information obtained by a gesture acquisition device collecting a user gesture is acquired; the gesture information is recognized to obtain a gesture operation event of the user; a preset mapping set comprising correspondences between at least one group of gesture operation events and mouse operation events is searched according to the gesture operation event of the user; and if the gesture operation event of the user is found in the preset mapping set, the mouse operation event corresponding to the gesture operation event of the user is triggered. Simulating mouse operations with gestures is thereby realized, providing the user with a novel way of operating an intelligent terminal, which can satisfy user needs to a certain extent and improve the user experience.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the embodiments of the present specification.
Moreover, no single embodiment of the present specification needs to achieve all of the above effects.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present specification or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings in the following description are merely some of the embodiments recorded in the present specification, and a person of ordinary skill in the art may further derive other drawings from these drawings.
FIG. 1 is a schematic diagram of an application scenario of simulating mouse operations using gestures according to an exemplary embodiment of the present specification;
FIG. 2 is a flowchart of an embodiment of a method for simulating mouse operations using gestures according to an exemplary embodiment of the present specification;
FIG. 3a, FIG. 3b and FIG. 3c are schematic diagrams of preset gestures according to an exemplary embodiment of the present specification;
FIG. 4 is a block diagram of an embodiment of an apparatus for simulating mouse operations using gestures according to an exemplary embodiment of the present specification;
FIG. 5 is a schematic diagram of a more specific hardware structure of a terminal provided by an embodiment of the present specification.
Detailed Description
To enable a person skilled in the art to better understand the technical solutions in the embodiments of the present specification, the technical solutions in the embodiments of the present specification are described in detail below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are merely some rather than all of the embodiments of the present specification. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present specification shall fall within the scope of protection.
Referring to FIG. 1, which is a schematic diagram of an application scenario of simulating mouse operations using gestures according to an exemplary embodiment of the present specification: FIG. 1 includes an intelligent terminal 110 and an image acquisition device 120. In this application scenario, the image acquisition device 120 may collect gesture information for a user gesture (not shown in FIG. 1) and transmit the collected gesture information to the intelligent terminal 110; the intelligent terminal 110 may then execute the method for simulating mouse operations using gestures provided by the embodiments of the present specification, so as to determine the user gesture, determine the mouse operation event corresponding to the user gesture, and trigger that mouse operation event, thereby realizing operation of the intelligent terminal 110.
For example, suppose the user watches a video through the intelligent terminal 110 and, during viewing, wants to pause playback. If the user pauses the video by operating a mouse (not shown in FIG. 1), the specific sequence of actions may include: the user moves the mouse so that the mouse pointer is displayed on the display interface of the intelligent terminal 110; the user then moves the mouse so that the mouse pointer moves onto the "Pause" control; finally, the user presses and releases the left mouse button, and when the left mouse button is released, the video pauses.
Corresponding to the above sequence of actions for pausing video playback by operating a mouse, in the embodiments of the present specification, the user may first make, facing the image acquisition device 120, a gesture for indicating that the mouse pointer is to be displayed on the display interface of the intelligent terminal 110, and the intelligent terminal 110 may display the mouse pointer on the display interface according to the gesture. Further, the user makes, facing the image acquisition device 120, a gesture for indicating that the mouse pointer is to be moved on the display interface of the intelligent terminal 110, and the intelligent terminal 110 may move the mouse pointer on the display interface according to the gesture until the mouse pointer is moved onto the "Pause" control. Further, the user makes, facing the image acquisition device 120, a gesture for representing that the left mouse button is pressed and released, and the intelligent terminal 110 may, according to the gesture, trigger the mouse pointer to click the "Pause" control, thereby pausing video playback.
It should be noted that collecting the gesture information of the user gesture through the image acquisition device 120 is merely an example; in practical applications, the gesture information of the user gesture may also be collected by other devices, such as an infrared sensor, which is not limited in the embodiments of the present specification.
It should also be noted that the arrangement of the image acquisition device 120 and the intelligent terminal 110 illustrated in FIG. 1 is merely an example; in practical applications, the intelligent terminal 110 may have a built-in camera or infrared sensor, which is not limited in the embodiments of the present specification.
In the following, with reference to the application scenario shown in FIG. 1 above, the method for simulating mouse operations using gestures provided by the embodiments of the present specification is described through the embodiments below.
Referring to FIG. 2, which is a flowchart of an embodiment of a method for simulating mouse operations using gestures according to an exemplary embodiment of the present specification: building on the application scenario shown in FIG. 1 above, the method may be applied to the intelligent terminal 110 illustrated in FIG. 1 and comprises the following steps:
Step 202: Acquire gesture information obtained by a gesture acquisition device collecting a user gesture.
In the implementation of the present specification, based on the application scenario illustrated in FIG. 1, the image acquisition device 120 serves as the gesture acquisition device; the gesture information obtained by the gesture acquisition device collecting the user gesture is then the user gesture image collected by the image acquisition device 120.
In addition, as can be seen from the above description, the gesture acquisition device may also be an infrared sensor; correspondingly, the gesture information obtained by the gesture acquisition device collecting the user gesture is the infrared sensing signal collected by the infrared sensor.
Step 204: Recognize the gesture information to obtain a gesture operation event of the user.
It should first be explained that, in the embodiments of the present specification, to simulate mouse operations with gestures, some gestures may be defined based on the operations performed on a mouse in practical applications; for convenience of description, the defined gestures are referred to as preset gestures.
In an embodiment, three types of preset gestures may be defined, respectively used for indicating that the mouse pointer is displayed on the display interface of the intelligent terminal 110, for indicating that the left mouse button is in a pressed state, and for indicating that the left mouse button is in an unpressed state. For example, referring to FIG. 3a, FIG. 3b and FIG. 3c, which are schematic diagrams of preset gestures according to an exemplary embodiment of the present specification, the preset gestures may at least include: a fist gesture (shown in FIG. 3a), a palm-open gesture (shown in FIG. 3b), and a single-finger-extended gesture (shown in FIG. 3c). The palm-open gesture is used for indicating that the mouse pointer is displayed on the display interface of the intelligent terminal 110, the fist gesture is used for indicating that the left mouse button is in a pressed state, and the single-finger-extended gesture is used for indicating that the left mouse button is in an unpressed state.
Meanwhile, to simulate mouse operations with gestures, mouse operation events may be divided according to the types of mouse operations in practical applications; for example, at least two types of mouse operation events may be distinguished, namely mouse click events and mouse movement events. Further, based on the operation characteristics of each type of mouse operation event, correspondences between mouse operation events and gesture operation events are established. For example, for a mouse movement event, the operation characteristic is "the mouse moves"; based on this, a class of first gesture operation events for representing that the user's gesture has moved may be defined, and a first gesture operation event corresponds to a mouse movement event. For a mouse click event, the operation characteristic is "the left mouse button is pressed", which involves a transformation of the user's gesture; based on this, a class of second gesture operation events for representing that the user's gesture has transformed may be defined, and a second gesture operation event corresponds to a mouse click event.
Based on the above preset gestures and the above definitions of the first gesture operation event and the second gesture operation event, the gesture operation events exemplified in Table 1 below can be obtained:
Table 1
Figure PCTCN2019072077-appb-000001
As can be seen from Table 1 above, by mapping the above gesture operation events to mouse movement events and mouse click events, a mapping from gesture operation events to existing mouse events can be realized; for example, Table 2 below shows an example of the mapping relationship between gesture operation events and existing mouse events:
Table 2
Gesture operation event | Mouse event
Single-finger-to-fist event | MouseDown (triggered when the left mouse button is pressed)
Single-finger or fist movement event | MouseOver (triggered when the mouse pointer passes over)
Fist-to-single-finger event | MouseUp (triggered when the left mouse button goes from pressed to released)
Single-finger or fist movement event | MouseOut (triggered when the mouse pointer moves out)
Single-finger or fist movement event | MouseMove (triggered when the mouse pointer moves)
As can be seen from Table 2 above, in the embodiments of the present specification, by making preset gestures to realize the corresponding gesture operation events, the user can reuse existing mouse events, thereby achieving compatibility with the mouse events encapsulated inside existing controls.
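To make this reuse concrete, the following is a minimal sketch (not part of the patent text) of dispatching gesture operation events onto the existing mouse events of Table 2; the event-name strings and the inject_mouse_event callable are illustrative assumptions about the terminal's event API.

```python
# Sketch: dispatch recognized gesture operation events onto existing
# mouse events, mirroring Table 2. `inject_mouse_event` stands in for
# whatever event-injection API the terminal actually exposes.
GESTURE_TO_MOUSE_EVENTS = {
    "single_finger_to_fist": ["MouseDown"],   # left button pressed
    "fist_to_single_finger": ["MouseUp"],     # left button pressed then released
    # One movement event can fan out to several existing mouse events,
    # depending on what the pointer passes over, enters, or leaves.
    "single_finger_or_fist_move": ["MouseMove", "MouseOver", "MouseOut"],
}

def dispatch(gesture_event, inject_mouse_event):
    """Trigger the existing mouse events mapped to a gesture operation event.

    Returns False when the gesture event has no mapping, in which case the
    caller may, for example, put the mouse pointer into the hover state.
    """
    mouse_events = GESTURE_TO_MOUSE_EVENTS.get(gesture_event)
    if mouse_events is None:
        return False
    for name in mouse_events:
        inject_mouse_event(name)  # reuse the event encapsulated in the control
    return True
```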
In addition, besides the gesture operation events exemplified in Table 1 above, the gesture operation events may further include: a palm-to-single-finger event, for representing that the state of the mouse pointer is adjusted from the hover state to the working state; and a single-finger-to-palm event, for representing that the state of the mouse pointer is adjusted from the working state to the hover state.
It should be noted that when the mouse pointer is in the hover state, the mouse pointer cannot be moved on the display interface; if the mouse pointer needs to be moved, the state of the mouse pointer may first be adjusted from the hover state to the working state through a palm-to-single-finger event.
As can be seen from the above description, both the first gesture operation event and the second gesture operation event involve the difference between the gestures made by the user at two successive times (specifically: the gestures are the same but their relative position changes, or the gestures are different). Therefore, in the embodiments of the present specification, the currently acquired gesture information and the previously acquired gesture information may be recognized separately to obtain the gesture currently made by the user and the gesture previously made by the user. For convenience of description, the gesture currently made by the user is referred to as the first gesture, and the gesture previously made by the user is referred to as the second gesture.
Subsequently, it may first be determined whether the first gesture and the second gesture belong to the above preset gestures; if so, it is further determined whether the first gesture is the same as the second gesture. If they are the same, the physical displacement of the first gesture relative to the second gesture is further determined; if the physical displacement is greater than a preset threshold, a first gesture operation event for representing that the user's gesture has moved from the position of the second gesture to the position of the first gesture is obtained. If the first gesture is different from the second gesture, a second gesture operation event for representing that the user's gesture has transformed from the second gesture into the first gesture may be obtained.
It should be noted that in the above process, obtaining the first gesture operation event only when the physical displacement of the first gesture relative to the second gesture is greater than the preset threshold avoids erroneous operations caused by slight movements made by the user.
In addition, in the embodiments of the present specification, if the recognized gesture does not belong to the above preset gestures, the state of the mouse pointer may be set to the hover state.
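The decision flow just described can be summarized in a short sketch; it assumes recognition yields a gesture label plus a position for each frame, and the names PRESET_GESTURES and MOVE_THRESHOLD (and the threshold value) are illustrative, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

PRESET_GESTURES = {"fist", "palm_open", "single_finger"}  # FIG. 3a, 3b, 3c
MOVE_THRESHOLD = 0.5  # preset displacement threshold in inches (assumed value)

@dataclass
class Observation:
    gesture: str                   # recognized gesture label
    position: Tuple[float, float]  # position of the gesture region, e.g. its centroid

def derive_gesture_event(first: Observation, second: Observation) -> Optional[dict]:
    """Derive a gesture operation event from the current (first) and the
    previous (second) observation, following the decision flow of step 204."""
    # A gesture outside the preset set puts the pointer into the hover state.
    if first.gesture not in PRESET_GESTURES or second.gesture not in PRESET_GESTURES:
        return {"type": "set_hover_state"}
    if first.gesture == second.gesture:
        dx = first.position[0] - second.position[0]
        dy = first.position[1] - second.position[1]
        displacement = (dx * dx + dy * dy) ** 0.5
        if displacement > MOVE_THRESHOLD:  # ignore slight movements
            return {"type": "move", "gesture": first.gesture,
                    "from": second.position, "to": first.position}
        return None  # same gesture, negligible movement: no event
    # Different preset gestures: a transformation, i.e. a second gesture event.
    return {"type": "transform", "from": second.gesture, "to": first.gesture}
```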
In the following, taking the gesture information being a user gesture image as an example, the process of recognizing the gesture information is described. First, the user's gesture region is extracted from the user gesture image. For example, in practical applications, the user's gesture is usually placed in front of the user's body, so the gesture region can be extracted from the user gesture image by utilizing the feature that the gesture region and the background region have different depth values. Specifically, a grayscale histogram of the image is computed from the depth values of the pixels in the user gesture image; the grayscale histogram represents the number of pixels of each gray level in the image. Since, in the user gesture image, the area of the gesture region is small relative to the background region and its gray values are small, the gray levels in the histogram may be searched in descending order for a gray value at which the number of pixels changes greatly, and the gray value found is used as the gray threshold for region segmentation. For example, if the gray threshold is 235, the user gesture image may be binarized according to this gray threshold; in the resulting binary image, the region represented by the white pixels is the gesture region.
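Before moving on to feature extraction, this segmentation step can be sketched with NumPy; the sketch assumes the gesture image is a single-channel array whose gray levels encode depth, and the 1% jump criterion is an assumed reading of "the number of pixels changes greatly" (the threshold found plays the role of the example value 235).

```python
import numpy as np

def extract_gesture_region(gray: np.ndarray) -> np.ndarray:
    """Segment the gesture region from a single-channel user gesture image.

    Builds the grayscale histogram, scans gray levels from large to small
    for a level where the pixel count jumps sharply, and binarizes with
    that level as the threshold. True pixels in the returned mask play the
    role of the white pixels (the gesture region) in the text.
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    for level in range(254, 0, -1):  # from large gray values to small
        jump = abs(int(hist[level]) - int(hist[level + 1]))
        if jump > 0.01 * total:      # "changes greatly": assumed 1% criterion
            return gray >= level
    return gray >= 235               # fall back to the example threshold
```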
Further, feature extraction is performed on the gesture region using a preset feature extraction algorithm. For example, the preset feature extraction algorithm may be a SIFT feature extraction algorithm, a shape feature extraction algorithm based on wavelets and relative moments, a model-based method, or the like; the extracted features may include the centroid of the gesture region, the feature vector of the gesture region, the number of fingers, and so on.
Finally, gesture recognition is performed using the extracted features to determine the gesture made by the user.
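As one possible concrete form of these two steps, the sketch below uses OpenCV (cv2.SIFT_create is available in recent OpenCV releases) to compute the centroid and SIFT descriptors of the binary gesture mask, and then classifies with a deliberately simple nearest-template rule; the template matching is an illustrative stand-in for whatever classifier is actually used, and finger counting (also mentioned above) is omitted for brevity.

```python
import cv2
import numpy as np

def extract_features(mask: np.ndarray) -> dict:
    """Extract features named in the text from a binary gesture mask:
    the centroid of the gesture region and SIFT descriptors."""
    mask_u8 = mask.astype(np.uint8) * 255
    m = cv2.moments(mask_u8, binaryImage=True)
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"]) if m["m00"] else (0.0, 0.0)
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(mask_u8, None)  # may be None
    return {"centroid": centroid, "descriptors": descriptors}

def classify_gesture(features: dict, templates: dict) -> str:
    """Illustrative nearest-template matching: compare the mean descriptor
    against per-gesture template vectors (assumed to be precomputed)."""
    desc = features["descriptors"]
    if desc is None:
        return "unknown"
    v = desc.mean(axis=0)
    return min(templates, key=lambda g: np.linalg.norm(v - templates[g]))
```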
In the above description, for the scenario where the first gesture is the same as the second gesture, the specific process of determining the physical displacement of the first gesture relative to the second gesture may refer to descriptions in the prior art and is not detailed in the embodiments of the present specification.
Subsequently, the determined physical displacement may be converted into inches, and the physical displacement is then divided by the actual distance corresponding to each pixel on the screen of the intelligent terminal 110 (that actual distance also being in inches); the result is the number of pixels by which the mouse pointer moves.
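Written out, the conversion is a single division; the screen_dpi parameter below is an assumed way of expressing "the actual distance corresponding to each pixel".

```python
def displacement_to_pixels(displacement_inches: float, screen_dpi: float) -> int:
    """Convert a physical hand displacement (in inches) into the number of
    pixels the mouse pointer should move: with screen_dpi pixels per inch,
    each pixel corresponds to 1 / screen_dpi inches, and dividing the
    displacement by that per-pixel distance yields the pixel count."""
    inches_per_pixel = 1.0 / screen_dpi
    return round(displacement_inches / inches_per_pixel)

# Example: a 0.5 inch hand movement on a 96 DPI screen moves the pointer
# 0.5 / (1 / 96) = 48 pixels.
```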
Step 206: Search a preset mapping set according to the gesture operation event of the user, the preset mapping set comprising correspondences between at least one group of gesture operation events and mouse operation events, wherein the mouse operation events comprise at least a mouse click event and a mouse movement event.
Step 208: If the gesture operation event of the user is found in the preset mapping set, trigger the mouse operation event corresponding to the gesture operation event of the user.
Step 206 and step 208 above are described in detail below:
In the embodiments of the present specification, a mapping set may be preset, the mapping set comprising correspondences between at least one group of gesture operation events and mouse operation events. For example, following the above description, the mapping set may be as shown in Table 3 below:
Table 3
Gesture operation event | Mouse operation event
Fist movement event | Used to indicate that the left mouse button is pressed and the mouse moves
Single-finger movement event | Used to indicate that the left mouse button is in an unpressed state and the mouse moves
Single-finger-to-fist event | Used to indicate that the left mouse button is pressed
Fist-to-single-finger event | Used to indicate that the left mouse button is released
Palm-to-fist event | Used to indicate that the left mouse button is pressed
Fist-to-palm event | Used to indicate that the left mouse button is released
Single-finger-to-palm event | Used to indicate that the mouse pointer enters the hover state from the working state
Palm-to-single-finger event | Used to indicate that the mouse pointer enters the working state from the hover state
Based on the mapping set exemplified in Table 3 above, in the embodiments of the present specification, after the gesture operation event of the user is obtained, the mapping set may be searched according to that gesture operation event; if the gesture operation event is found, the corresponding mouse operation event is triggered.
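In code, steps 206 and 208 reduce to a dictionary lookup over the preset mapping set; the sketch below mirrors Table 3, with the mouse operation events represented as plain strings and the trigger callable standing in for the terminal's actual pointer logic.

```python
# The preset mapping set of Table 3, keyed by gesture operation event.
PRESET_MAPPING_SET = {
    "fist_move":             "left_button_pressed_and_pointer_moves",
    "single_finger_move":    "left_button_unpressed_and_pointer_moves",
    "single_finger_to_fist": "left_button_pressed",
    "fist_to_single_finger": "left_button_released",
    "palm_to_fist":          "left_button_pressed",
    "fist_to_palm":          "left_button_released",
    "single_finger_to_palm": "pointer_enters_hover_state",
    "palm_to_single_finger": "pointer_enters_working_state",
}

def handle_gesture_event(gesture_event, trigger):
    """Search the preset mapping set (step 206) and, if the gesture
    operation event is found, trigger the corresponding mouse operation
    event (step 208); `trigger` is supplied by the caller."""
    mouse_event = PRESET_MAPPING_SET.get(gesture_event)
    if mouse_event is not None:
        trigger(mouse_event)

# Usage: handle_gesture_event("palm_to_fist", trigger=print)
# prints "left_button_pressed".
```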
In the technical solution provided by the present invention, gesture information obtained by a gesture acquisition device collecting a user gesture is acquired; the gesture information is recognized to obtain a gesture operation event of the user; a preset mapping set comprising correspondences between at least one group of gesture operation events and mouse operation events is searched according to the gesture operation event of the user; and if the gesture operation event of the user is found in the preset mapping set, the mouse operation event corresponding to the gesture operation event of the user is triggered. Simulating mouse operations with gestures is thereby realized, providing the user with a novel way of operating an intelligent terminal, which can satisfy user needs to a certain extent and improve the user experience.
Corresponding to the above method embodiments, the embodiments of the present specification further provide an apparatus for simulating mouse operations using gestures. Referring to FIG. 4, which is a block diagram of an embodiment of an apparatus for simulating mouse operations using gestures according to an exemplary embodiment of the present specification, the apparatus may comprise: an acquisition module 41, a recognition module 42, a search module 43, and a trigger module 44 (a minimal code sketch of this pipeline follows the module list below), wherein:
the acquisition module 41 may be configured to acquire gesture information obtained by a gesture acquisition device collecting a user gesture;
the recognition module 42 may be configured to recognize the gesture information to obtain a gesture operation event of the user;
the search module 43 may be configured to search a preset mapping set according to the gesture operation event of the user, the preset mapping set comprising correspondences between at least one group of gesture operation events and mouse operation events, wherein the mouse operation events comprise at least a mouse click event and a mouse movement event;
the trigger module 44 may be configured to trigger, if the gesture operation event of the user is found in the preset mapping set, the mouse operation event corresponding to the gesture operation event of the user.
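The following is a minimal sketch of how these four modules might be wired into one pipeline; the class and method names (GestureMouseDevice, acquire, identify, find, fire) are illustrative assumptions, not names from the patent.

```python
# Illustrative sketch of the FIG. 4 module structure; each method body
# would be backed by the corresponding logic from the method embodiment.
class GestureMouseDevice:
    def __init__(self, acquisition, recognition, search, trigger):
        self.acquisition = acquisition  # module 41: obtain gesture information
        self.recognition = recognition  # module 42: information -> gesture operation event
        self.search = search            # module 43: look up the preset mapping set
        self.trigger = trigger          # module 44: fire the mapped mouse operation event

    def step(self) -> None:
        info = self.acquisition.acquire()
        gesture_event = self.recognition.identify(info)
        mouse_event = self.search.find(gesture_event)
        if mouse_event is not None:     # found in the preset mapping set
            self.trigger.fire(mouse_event)
```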
In an embodiment, the gesture acquisition device is an image acquisition device, and the gesture information is a user gesture image collected by the image acquisition device.
In an embodiment, the recognition module 42 may comprise (not shown in FIG. 4):
a region extraction submodule, configured to extract the user's gesture region from the user gesture image;
a feature extraction submodule, configured to perform feature extraction on the gesture region using a preset feature extraction algorithm;
a feature recognition submodule, configured to perform gesture recognition using the extracted features to obtain a gesture operation event of the user.
In an embodiment, the gesture operation events of the user at least include: a first gesture operation event for representing that the user's gesture has moved, and a second gesture operation event for representing that the user's gesture has transformed, wherein the first gesture operation event corresponds to the mouse movement event and the second gesture operation event corresponds to the mouse click event.
In an embodiment, the recognition module 42 may comprise (not shown in FIG. 4):
a gesture recognition submodule, configured to recognize the currently acquired gesture information and the previously acquired gesture information separately, to obtain a first gesture currently made by the user and a second gesture previously made by the user;
a first judgment submodule, configured to determine whether the first gesture and the second gesture belong to preset gestures;
a second judgment submodule, configured to determine, if the first gesture and the second gesture belong to the preset gestures, whether the first gesture is the same as the second gesture;
a displacement determination submodule, configured to determine, if the first gesture is the same as the second gesture, a physical displacement of the first gesture relative to the second gesture;
a first determination submodule, configured to obtain, if the physical displacement is greater than a preset threshold, a first gesture operation event for representing that the user's gesture has moved from the position of the second gesture to the position of the first gesture;
a second determination submodule, configured to obtain, if the first gesture is different from the second gesture, a second gesture operation event for representing that the user's gesture has transformed from the second gesture into the first gesture.
In an embodiment, the preset gestures at least include: a fist gesture, a palm-open gesture, and a single-finger-extended gesture.
It can be understood that the acquisition module 41, the recognition module 42, the search module 43, and the trigger module 44, as four functionally independent modules, may be configured in the apparatus simultaneously as shown in FIG. 4, or may each be configured in the apparatus separately; therefore, the structure shown in FIG. 4 should not be construed as limiting the solutions of the embodiments of the present specification.
In addition, for the implementation processes of the functions and roles of the modules in the above apparatus, reference may be made to the implementation processes of the corresponding steps in the above method, and details are not repeated here.
The embodiments of the present specification further provide a terminal, which at least comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the aforementioned method for simulating mouse operations using gestures. The method at least comprises: acquiring gesture information obtained by a gesture acquisition device collecting a user gesture; recognizing the gesture information to obtain a gesture operation event of the user; searching a preset mapping set according to the gesture operation event of the user, the preset mapping set comprising correspondences between at least one group of gesture operation events and mouse operation events, wherein the mouse operation events comprise at least a mouse click event and a mouse movement event; and if the gesture operation event of the user is found in the preset mapping set, triggering the mouse operation event corresponding to the gesture operation event of the user.
In an embodiment, the gesture acquisition device is an image acquisition device, and the gesture information is a user gesture image collected by the image acquisition device.
In an embodiment, recognizing the gesture information to obtain a gesture operation event of the user includes: extracting the user's gesture region from the user gesture image; performing feature extraction on the gesture region using a preset feature extraction algorithm; and performing gesture recognition using the extracted features to obtain a gesture operation event of the user.
In an embodiment, the gesture operation events of the user at least include: a first gesture operation event for representing that the user's gesture has moved, and a second gesture operation event for representing that the user's gesture has transformed, wherein the first gesture operation event corresponds to the mouse movement event and the second gesture operation event corresponds to the mouse click event.
In an embodiment, recognizing the gesture information to obtain a gesture operation event of the user includes: recognizing the currently acquired gesture information and the previously acquired gesture information separately, to obtain a first gesture currently made by the user and a second gesture previously made by the user; determining whether the first gesture and the second gesture belong to preset gestures, and if so, determining whether the first gesture is the same as the second gesture; if they are the same, determining a physical displacement of the first gesture relative to the second gesture, and if the physical displacement is greater than a preset threshold, obtaining a first gesture operation event for representing that the user's gesture has moved from the position of the second gesture to the position of the first gesture; and if they are different, obtaining a second gesture operation event for representing that the user's gesture has transformed from the second gesture into the first gesture.
In an embodiment, the preset gestures at least include: a fist gesture, a palm-open gesture, and a single-finger-extended gesture.
FIG. 5 shows a schematic diagram of a more specific hardware structure of a terminal provided by an embodiment of the present specification. The terminal may include: a processor 510, a memory 520, an input/output interface 530, a communication interface 540, and a bus 550, wherein the processor 510, the memory 520, the input/output interface 530, and the communication interface 540 are communicatively connected to one another within the device via the bus 550.
The processor 510 may be implemented using a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), one or more integrated circuits, or the like, and is configured to execute related programs to implement the technical solutions provided by the embodiments of the present specification.
The memory 520 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 520 may store an operating system and other applications. When the technical solutions provided by the embodiments of the present specification are implemented by software or firmware, the related program code is stored in the memory 520 and is called and executed by the processor 510.
The input/output interface 530 is used to connect input/output modules to realize information input and output. The input/output modules may be configured as components within the device (not shown in FIG. 5) or may be externally connected to the device to provide corresponding functions. Input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, and the like; output devices may include a display, a speaker, a vibrator, indicator lights, and the like.
The communication interface 540 is used to connect a communication module (not shown in FIG. 5) to realize communication interaction between this device and other devices. The communication module may communicate by wired means (for example, USB or network cable) or by wireless means (for example, mobile network, WIFI, or Bluetooth).
The bus 550 includes a path for transferring information between the components of the device (for example, the processor 510, the memory 520, the input/output interface 530, and the communication interface 540).
It should be noted that although the above device shows only the processor 510, the memory 520, the input/output interface 530, the communication interface 540, and the bus 550, in specific implementation the device may further include other components necessary for normal operation. In addition, a person skilled in the art can understand that the above device may also include only the components necessary for implementing the solutions of the embodiments of the present specification, and need not include all the components shown in the figure.
The embodiments of the present specification further provide a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the aforementioned method for simulating mouse operations using gestures. The method at least comprises: acquiring gesture information obtained by a gesture acquisition device collecting a user gesture; recognizing the gesture information to obtain a gesture operation event of the user; searching a preset mapping set according to the gesture operation event of the user, the preset mapping set comprising correspondences between at least one group of gesture operation events and mouse operation events, wherein the mouse operation events comprise at least a mouse click event and a mouse movement event; and if the gesture operation event of the user is found in the preset mapping set, triggering the mouse operation event corresponding to the gesture operation event of the user.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
From the description of the above implementations, a person skilled in the art can clearly understand that the embodiments of the present specification can be implemented by means of software plus a necessary general-purpose hardware platform. Based on such an understanding, the technical solutions of the embodiments of the present specification may, in essence or in the part contributing to the prior art, be embodied in the form of a software product; the computer software product may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the various embodiments, or in certain parts of the embodiments, of the present specification.
The systems, apparatuses, modules, or units illustrated in the above embodiments may specifically be implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a computer, and the specific form of the computer may be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email transceiver device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
The embodiments in the present specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the apparatus embodiments are substantially similar to the method embodiments, their description is relatively simple, and for relevant parts reference may be made to the descriptions of the method embodiments. The apparatus embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separate, and when the solutions of the embodiments of the present specification are implemented, the functions of the modules may be realized in one or more pieces of software and/or hardware. Some or all of the modules may also be selected according to actual needs to achieve the purpose of the solutions of the embodiments. A person of ordinary skill in the art can understand and implement this without creative effort.
The above descriptions are merely specific implementations of the embodiments of the present specification. It should be noted that a person of ordinary skill in the art may further make several improvements and refinements without departing from the principles of the embodiments of the present specification, and such improvements and refinements shall also fall within the scope of protection of the embodiments of the present specification.

Claims (13)

  1. A method for simulating mouse operations using gestures, the method comprising:
    acquiring gesture information obtained by a gesture acquisition device collecting a user gesture;
    recognizing the gesture information to obtain a gesture operation event of the user;
    searching a preset mapping set according to the gesture operation event of the user, the preset mapping set comprising correspondences between at least one group of gesture operation events and mouse operation events, wherein the mouse operation events comprise at least a mouse click event and a mouse movement event;
    if the gesture operation event of the user is found in the preset mapping set, triggering the mouse operation event corresponding to the gesture operation event of the user.
  2. The method according to claim 1, wherein the gesture acquisition device is an image acquisition device, and the gesture information is a user gesture image collected by the image acquisition device.
  3. The method according to claim 2, wherein recognizing the gesture information to obtain a gesture operation event of the user comprises:
    extracting the user's gesture region from the user gesture image;
    performing feature extraction on the gesture region using a preset feature extraction algorithm;
    performing gesture recognition using the extracted features to obtain a gesture operation event of the user.
  4. The method according to claim 1, wherein the gesture operation events of the user comprise at least: a first gesture operation event for representing that the user's gesture has moved, and a second gesture operation event for representing that the user's gesture has transformed;
    wherein the first gesture operation event corresponds to the mouse movement event, and the second gesture operation event corresponds to the mouse click event.
  5. The method according to claim 4, wherein recognizing the gesture information to obtain a gesture operation event of the user comprises:
    recognizing the currently acquired gesture information and the previously acquired gesture information separately, to obtain a first gesture currently made by the user and a second gesture previously made by the user;
    determining whether the first gesture and the second gesture belong to preset gestures, and if so, determining whether the first gesture is the same as the second gesture;
    if they are the same, determining a physical displacement of the first gesture relative to the second gesture; and if the physical displacement is greater than a preset threshold, obtaining a first gesture operation event for representing that the user's gesture has moved from the position of the second gesture to the position of the first gesture;
    if they are different, obtaining a second gesture operation event for representing that the user's gesture has transformed from the second gesture into the first gesture.
  6. The method according to claim 5, wherein the preset gestures comprise at least:
    a fist gesture, a palm-open gesture, and a single-finger-extended gesture.
  7. An apparatus for simulating mouse operations using gestures, the apparatus comprising:
    an acquisition module, configured to acquire gesture information obtained by a gesture acquisition device collecting a user gesture;
    a recognition module, configured to recognize the gesture information to obtain a gesture operation event of the user;
    a search module, configured to search a preset mapping set according to the gesture operation event of the user, the preset mapping set comprising correspondences between at least one group of gesture operation events and mouse operation events, wherein the mouse operation events comprise at least a mouse click event and a mouse movement event;
    a trigger module, configured to trigger, if the gesture operation event of the user is found in the preset mapping set, the mouse operation event corresponding to the gesture operation event of the user.
  8. The apparatus according to claim 7, wherein the gesture acquisition device is an image acquisition device, and the gesture information is a user gesture image collected by the image acquisition device.
  9. The apparatus according to claim 8, wherein the recognition module comprises:
    a region extraction submodule, configured to extract the user's gesture region from the user gesture image;
    a feature extraction submodule, configured to perform feature extraction on the gesture region using a preset feature extraction algorithm;
    a feature recognition submodule, configured to perform gesture recognition using the extracted features to obtain a gesture operation event of the user.
  10. The apparatus according to claim 7, wherein the gesture operation events of the user comprise at least: a first gesture operation event for representing that the user's gesture has moved, and a second gesture operation event for representing that the user's gesture has transformed;
    wherein the first gesture operation event corresponds to the mouse movement event, and the second gesture operation event corresponds to the mouse click event.
  11. The apparatus according to claim 10, wherein the recognition module comprises:
    a gesture recognition submodule, configured to recognize the currently acquired gesture information and the previously acquired gesture information separately, to obtain a first gesture currently made by the user and a second gesture previously made by the user;
    a first judgment submodule, configured to determine whether the first gesture and the second gesture belong to preset gestures;
    a second judgment submodule, configured to determine, if the first gesture and the second gesture belong to the preset gestures, whether the first gesture is the same as the second gesture;
    a displacement determination submodule, configured to determine, if the first gesture is the same as the second gesture, a physical displacement of the first gesture relative to the second gesture;
    a first determination submodule, configured to obtain, if the physical displacement is greater than a preset threshold, a first gesture operation event for representing that the user's gesture has moved from the position of the second gesture to the position of the first gesture;
    a second determination submodule, configured to obtain, if the first gesture is different from the second gesture, a second gesture operation event for representing that the user's gesture has transformed from the second gesture into the first gesture.
  12. The apparatus according to claim 11, wherein the preset gestures comprise at least:
    a fist gesture, a palm-open gesture, and a single-finger-extended gesture.
  13. A terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method according to any one of claims 1 to 6.
PCT/CN2019/072077 2018-03-12 2019-01-17 Method, apparatus and terminal for simulating mouse operations using gestures WO2019174398A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810200113.4A CN108446073A (zh) 2018-03-12 2018-03-12 Method, apparatus and terminal for simulating mouse operations using gestures
CN201810200113.4 2018-03-12

Publications (1)

Publication Number Publication Date
WO2019174398A1 true WO2019174398A1 (zh) 2019-09-19

Family

ID=63194033

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/072077 WO2019174398A1 (zh) 2018-03-12 2019-01-17 Method, apparatus and terminal for simulating mouse operations using gestures

Country Status (3)

Country Link
CN (1) CN108446073A (zh)
TW (1) TWI695311B (zh)
WO (1) WO2019174398A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108446073A (zh) 2018-03-12 2018-08-24 阿里巴巴集团控股有限公司 Method, apparatus and terminal for simulating mouse operations using gestures
CN111221406B (zh) * 2018-11-23 2023-10-13 杭州萤石软件有限公司 Information interaction method and apparatus
CN109696958A (zh) * 2018-11-28 2019-04-30 南京华捷艾米软件科技有限公司 Gesture control method and system based on depth-sensor gesture recognition
CN110221717A (zh) * 2019-05-24 2019-09-10 李锦华 Virtual mouse driving apparatus, and gesture recognition method and device for a virtual mouse
CN112068699A (zh) * 2020-08-31 2020-12-11 北京市商汤科技开发有限公司 Interaction method and apparatus, electronic device, and storage medium
CN112671972A (zh) * 2020-12-21 2021-04-16 四川长虹电器股份有限公司 Method for controlling mouse movement on a large-screen television with a mobile phone
CN114115536A (zh) * 2021-11-22 2022-03-01 北京字节跳动网络技术有限公司 Interaction method and apparatus, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102854983A (zh) * 2012-09-10 2013-01-02 中国电子科技集团公司第二十八研究所 Human-computer interaction method based on gesture recognition
CN103530613A (zh) * 2013-10-15 2014-01-22 无锡易视腾科技有限公司 Target person gesture interaction method based on a monocular video sequence
CN103926999A (zh) * 2013-01-16 2014-07-16 株式会社理光 Palm opening and closing gesture recognition method and apparatus, and human-computer interaction method and device
US20160253044A1 (en) * 2013-10-10 2016-09-01 Eyesight Mobile Technologies Ltd. Systems, devices, and methods for touch-free typing
CN108446073A (zh) 2018-03-12 2018-08-24 阿里巴巴集团控股有限公司 Method, apparatus and terminal for simulating mouse operations using gestures

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101339453B (zh) * 2008-08-15 2012-05-23 广东威创视讯科技股份有限公司 Simulated mouse input method based on an interactive input device
GB2474536B (en) * 2009-10-13 2011-11-02 Pointgrab Ltd Computer vision gesture based control of a device
CN107885316A (zh) * 2016-09-29 2018-04-06 阿里巴巴集团控股有限公司 Gesture-based interaction method and apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102854983A (zh) * 2012-09-10 2013-01-02 中国电子科技集团公司第二十八研究所 Human-computer interaction method based on gesture recognition
CN103926999A (zh) * 2013-01-16 2014-07-16 株式会社理光 Palm opening and closing gesture recognition method and apparatus, and human-computer interaction method and device
US20160253044A1 (en) * 2013-10-10 2016-09-01 Eyesight Mobile Technologies Ltd. Systems, devices, and methods for touch-free typing
CN103530613A (zh) * 2013-10-15 2014-01-22 无锡易视腾科技有限公司 Target person gesture interaction method based on a monocular video sequence
CN108446073A (zh) 2018-03-12 2018-08-24 阿里巴巴集团控股有限公司 Method, apparatus and terminal for simulating mouse operations using gestures

Also Published As

Publication number Publication date
TW201939260A (zh) 2019-10-01
CN108446073A (zh) 2018-08-24
TWI695311B (zh) 2020-06-01

Similar Documents

Publication Publication Date Title
WO2019174398A1 (zh) Method, apparatus and terminal for simulating mouse operations using gestures
US11592980B2 (en) Techniques for image-based search using touch controls
US10126824B2 (en) Generating a screenshot
CN112506340B (zh) Device control method and apparatus, electronic device, and storage medium
JP5802247B2 (ja) Information processing apparatus
CN102906671A (zh) Gesture input apparatus and gesture input method
WO2019062243A1 (zh) Touch operation recognition method and apparatus, and electronic device
WO2021097750A1 (zh) Human posture recognition method and apparatus, storage medium, and electronic device
CN104081307A (zh) Image processing apparatus, image processing method, and program
EP4030749B1 (en) Image photographing method and apparatus
CN108256071B (zh) Screen recording file generation method, apparatus, terminal, and storage medium
WO2015131590A1 (zh) Method and terminal for controlling black-screen gesture processing
US20170131785A1 (en) Method and apparatus for providing interface interacting with user by means of nui device
US20160140762A1 (en) Image processing device and image processing method
EP2899623A2 (en) Information processing apparatus, information processing method, and program
CN114360047A (zh) Hand-raising gesture recognition method and apparatus, electronic device, and storage medium
WO2015164518A1 (en) Depth-based mode switching for touchless gestural interfaces
WO2017143575A1 (zh) Method for retrieving the content of an image, portable electronic device, and graphical user interface
CN103744609B (zh) Data extraction method and apparatus
KR20200127928A (ko) Method and apparatus for recognizing a specific object in an image in an electronic device
CN110222576B (zh) Boxing action recognition method and apparatus, and electronic device
WO2023138546A1 (zh) Information processing method and apparatus, electronic device, and storage medium
CN111796701A (zh) Model training method, operation processing method, apparatus, storage medium, and device
US10114469B2 (en) Input method touch device using the input method, gesture detecting device, computer-readable recording medium, and computer program product
WO2023273071A1 (zh) Image processing method and apparatus, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19766808

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19766808

Country of ref document: EP

Kind code of ref document: A1