WO2018076720A1 - One-hand operation method and control system - Google Patents

One-hand operation method and control system

Info

Publication number
WO2018076720A1
Authority
WO
WIPO (PCT)
Prior art keywords
manipulation
touch
display screen
control
depth image
Prior art date
Application number
PCT/CN2017/089027
Other languages
French (fr)
Chinese (zh)
Inventor
黄源浩
刘龙
肖振中
许星
Original Assignee
深圳奥比中光科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳奥比中光科技有限公司 filed Critical 深圳奥比中光科技有限公司
Publication of WO2018076720A1 publication Critical patent/WO2018076720A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Definitions

  • the present invention relates to the field of electronic technology, and in particular to a one-handed control method and a control system.
  • adding a function key to the back of the mobile phone is one solution for one-handed operation, but it inevitably detracts from the appearance of the back of the phone, so this solution has not been accepted by users.
  • Another option is to add an additional touch panel to the back of the phone, so that a finger on the back can control the areas of the screen that cannot be reached with one hand.
  • However, this solution is costly and therefore cannot become a mainstream one-handed operation scheme.
  • In existing schemes that use a depth image for touch operation, the manipulation object cannot touch screen areas beyond its reach: the position of the manipulation object on the display screen is obtained by directly mapping its pixel coordinates in the acquired depth image onto the screen.
  • Because this direct pixel-coordinate mapping cannot perform touch operations on areas the manipulation object itself cannot reach, only simple functions such as page turning can be implemented, and one-handed control of a large screen is not well solved.
  • Moreover, existing depth-image touch schemes mostly place the depth camera on the display-screen side and obtain the manipulation object's position on the screen from the screen's depth image, preferably acquired at the moment the object contacts the screen.
  • This approach suits displays without a touch function. On a touch screen, however, a finger touching the display already generates a native touch event on the screen.
  • If the finger's position information is simultaneously used to touch another area of the display, two touch commands are generated and a touch conflict results, so the other areas of the display can no longer be touched.
  • Touching other areas then requires the touch object not to contact the display at all, which complicates the mapping used to obtain the object's position on the screen from the depth image and greatly weakens the experience.
  • The object of the present invention is to provide a one-handed control method and control system that solve the above problems of the prior art: one-handed control cannot be achieved well, and touching a display that itself has a touch function easily produces touch conflicts.
  • To this end, the present invention provides a one-handed control method in which the display screen and the control surface used for touch operation lie on different planes, comprising the following steps: S1: acquire a depth image of the control surface and of the manipulation object on it; S2: obtain from the depth image the first position of the manipulation object on the control surface; S3: locate the second position on the display screen from the first position of the manipulation object on the control surface; S4: recognize the predetermined manipulation action, determined by the shape and motion of the manipulation object, and convert it into a touch command to be executed; S5: perform the touch operation at the second position according to the touch command.
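Read as a per-frame loop, steps S1 to S5 can be sketched as follows. This is an illustrative sketch only; the function names and the gesture table are assumptions, not from the patent, and S1/S2 are represented here by their outputs (the first position and the recognized action).

```python
# Illustrative gesture-to-command table for step S4 (assumed entries).
GESTURE_COMMANDS = {"tap": "CLICK", "swipe_up": "SCROLL_UP"}

def process_frame(first_pos, action, to_screen):
    """One frame of the S1-S5 method (sketch).

    first_pos -- S2 result: object position on the control surface,
                 or None when no position is available this frame
    action    -- S4 input: recognized shape/motion of the object
    to_screen -- S3 mapping from control surface to display screen
    Returns (second_pos, command) to execute in S5, or None.
    """
    if first_pos is None:                    # nothing to position
        return None
    second_pos = to_screen(first_pos)        # S3: locate second position
    command = GESTURE_COMMANDS.get(action)   # S4: convert action to command
    if command is None:                      # not a predefined action
        return None
    return second_pos, command               # S5: execute command here
```

A caller would feed this function once per acquired depth frame, with `to_screen` built from the mapping step described further below.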
  • the manipulation method of the present invention may further have the following technical features:
  • the control surface comprises at least one control area, the at least one of the control areas being automatically delimited by the acquired depth image and the obtained position of the manipulation object on the control surface.
  • the automatically delineated control area is sized so that the manipulation object can easily reach it on the control surface during one-handed operation.
  • the display screen includes a near touch area that the manipulation object can easily reach, and a far touch area, outside the near touch area, that it cannot easily reach; in the near touch area the manipulation object uses the screen's own touch function for one-handed touch,
  • while in the far touch area the touch is performed at the second position obtained by the positioning.
  • the near touch area and the far touch area are automatically delimited according to the position of the manipulation object on the display screen.
  • the step of acquiring the depth image includes: S11: acquiring a first depth image that contains the control surface but not the manipulation object; S12: acquiring a second depth image containing both the control surface and the manipulation object; S13: obtaining a third depth image of the manipulation object alone from the second depth image and the first depth image.
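Steps S11 to S13 amount to a background subtraction on depth maps: the first depth image (surface only) acts as the background, and pixels of the second image that differ from it belong to the manipulation object. A minimal numpy sketch; the threshold value is an assumption:

```python
import numpy as np

def segment_object(first_depth, second_depth, tau=5.0):
    """S13: third depth image containing only the manipulation object.

    first_depth  -- S11: control surface without the object
    second_depth -- S12: control surface with the object
    tau          -- difference threshold in depth units (assumed value)
    Pixels that changed by more than tau keep their depth from the
    second image; all other pixels are zeroed out as background.
    """
    object_mask = np.abs(second_depth - first_depth) > tau
    return np.where(object_mask, second_depth, 0.0)
```

As the description notes later for the finger, segmenting out the object this way reduces the amount of computation in the recognition steps.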
  • obtaining the first position includes the following steps: S21: determining from the second depth image whether the manipulation object is in contact with the control surface and, if not, performing the next step; S22: obtaining from the third depth image the spatial position of the manipulation object in the coordinate system of the display screen, and taking the coordinates of the vertex of the manipulation object as the first position.
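One way to realize the contact test of S21 and the vertex extraction of S22 is to compare the object's depth against the bare-surface depth: the vertex is taken here as the object pixel closest to the control surface, and contact holds when that pixel is within a small tolerance of the surface. This is a sketch under assumptions (the tolerance value and the choice of nearest-to-surface pixel as "vertex" are not specified by the patent):

```python
import numpy as np

def contact_and_vertex(third_depth, surface_depth, eps=3.0):
    """S21/S22 sketch: contact test and vertex position.

    third_depth   -- depth image of the object only (0 = background)
    surface_depth -- depth image of the bare control surface
    eps           -- contact tolerance in depth units (assumed value)
    Returns (in_contact, (row, col)) where the vertex is the object
    pixel closest to the control surface, or (False, None) when no
    object is visible at all.
    """
    on_object = third_depth > 0
    if not on_object.any():
        return False, None
    # Gap between object and surface; non-object pixels are excluded.
    gap = np.where(on_object, surface_depth - third_depth, np.inf)
    idx = np.unravel_index(np.argmin(gap), gap.shape)
    return bool(gap[idx] < eps), (int(idx[0]), int(idx[1]))
```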
  • obtaining the second position includes the following steps: S31: establishing a mapping relationship between the control surface and the display screen; S32: obtaining the second position on the display screen from the mapping relationship and the first position.
  • a linear mapping relationship is established from the horizontal and vertical dimensions of the control surface and of the display screen.
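The linear mapping of S31/S32 only needs the horizontal and vertical dimensions of the two rectangles. The sketch below also lets the control area start at an offset on the control surface; all concrete sizes and offsets here are assumed example values:

```python
def make_mapping(area_origin, area_size, screen_size):
    """S31: build a linear map from a rectangular control area to the screen.

    area_origin -- (x0, y0) of the control area on the control surface
    area_size   -- (w, h) of the control area
    screen_size -- (W, H) of the display screen
    """
    (x0, y0), (w, h), (W, H) = area_origin, area_size, screen_size

    def to_screen(first_pos):
        """S32: second position on the display for a given first position."""
        x, y = first_pos
        return ((x - x0) * W / w, (y - y0) * H / h)

    return to_screen

# Example: a 50 x 80 control area starting at (5, 10) on the control
# surface maps onto a 720 x 1280 display.
to_screen = make_mapping((5.0, 10.0), (50.0, 80.0), (720.0, 1280.0))
```

Because the map is built once from the dimensions, repositioning per frame is just two multiplications, which is what makes the "quickly established" mapping mentioned below cheap at runtime.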
  • the present invention also provides a one-handed control system for performing the above touch method, including an image acquisition unit, a processor, a control surface, and a display screen, wherein the control surface and the display screen lie on different planes;
  • the image acquisition unit is configured to acquire a depth image of the control surface and of the manipulation object, together with the depth information of the manipulation object;
  • the processor includes an identification module, a positioning module, a conversion module, and an execution module; the identification module is configured to acquire the first position of the manipulation object on the control surface from the depth image and to identify the predetermined manipulation action of the manipulation object;
  • the positioning module is configured to locate, from the first position, the second position on the display screen that needs to be manipulated; the conversion module generates the corresponding touch instruction from the predefined manipulation action; the execution module executes the touch command at the second position, completing the touch operation on the display screen.
  • Compared with the prior art, the present invention uses a depth image to implement the touch operation in a one-handed control method.
  • The control surface used for manipulation and the display screen are arranged on different planes: the manipulation object completes the touch action on the control surface, the depth image yields the first position of the object on the control surface, and the first position is then mapped to the second position on the display screen.
  • Combined with the predetermined manipulation action, the touch operation is executed at the second position, so that when a user operates a large-screen electronic device with one hand, positioning on the display screen is completed entirely on the control surface, avoiding the touch conflict between this control and the screen's own touch function.
  • The manipulation object can stay in contact with the control surface at any time, which makes it easy to obtain its position on the control surface accurately and quickly.
  • The present invention can not only realize simple gesture operations such as page turning for a display screen without a touch function, but also handles one-handed touch well for touch screens:
  • areas that cannot be reached with one hand can be touched through the control surface, and the touch can be precise.
  • The control surface includes at least one control area, so that depth-image acquisition and manipulation adapt to different touch habits of the manipulation object, such as left-handed and right-handed use, with the control area automatically delineated from the depth image.
  • The control area is a region on the control surface delineated automatically from the position of the manipulation object and the depth image, so the manipulation object need not be confined to a preset area to perform the touch operation, which improves the experience.
  • The control area is sized so that the manipulation object can easily reach it during one-handed operation, and its delineated shape is not limited to a rectangle: it can follow the shape of the region the manipulation object
  • reaches most easily on the control surface, generally an irregular fan shape.
  • A hybrid control mode is implemented that uses both the screen's own touch function and the pointing-style control method, as follows:
  • in the near touch area the manipulation object uses the screen's own touch function for one-handed touch, while in the far touch area the touch is performed at the second position obtained by the positioning.
  • This hybrid control method compensates for the lower precision caused by the randomness of the manipulation object's movement and provides a better experience for the user.
  • the above-mentioned near touch area and far touch area are also automatically delineated according to the user's touch habits, to adapt to the user's habit of manipulation and to optimize the shape of the near touch area and the far touch area.
  • The second position on the display screen can be obtained quickly from the first position; with a linear mapping relationship, the control surface or control area can be taken as a regular shape such as a rectangle, and the mapping is established quickly from the horizontal and vertical dimensions.
  • FIG. 1 is a schematic structural view of a control system according to a first embodiment of the present invention;
  • FIG. 2 is a schematic structural diagram of a processor according to a first embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
  • FIG. 4 is a schematic rear view of a single hand operated mobile phone according to the third and fourth embodiments of the present invention.
  • FIG. 5 is a side view of one-handed control of the mobile phone according to the third and fourth embodiments of the present invention.
  • FIG. 6 is a schematic illustration of the hybrid manipulation of Embodiments 2 and 5 of the present invention.
  • FIG. 7 is a flowchart 1 of the operation of the fourth embodiment of the present invention.
  • FIG. 8 is a second flowchart of the operation of the fourth embodiment of the present invention.
  • Embodiment 1:
  • This embodiment provides a one-handed control system, as shown in Figure 1, including an image acquisition unit 1, a processor 2, a control surface 3 and a display screen 4;
  • the image acquisition unit 1 is configured to acquire the control surface 3 and the depth image of the manipulation object and the depth information of the manipulation object, and the control surface 3 and the display screen 4 are on different planes;
  • the processor 2 includes an identification module 21, a positioning module 22, a conversion module 23, and an execution module 24.
  • the identification module 21 is configured to acquire the first position of the manipulation object on the control surface 3 from the depth image, and to identify the predetermined manipulation action of the manipulation object;
  • the positioning module 22 is configured to position a second position on the display screen 4 to be manipulated according to the first position;
  • the conversion module 23 is configured to generate the corresponding touch command from the predefined manipulation action;
  • the execution module 24 is configured to perform a touch operation on the display screen 4 by executing a touch command at the second position;
  • the system acquires through the image acquisition unit the depth image of the control surface 3 and of the manipulation object, from which the position and motion of the manipulation object on the control surface 3 can be recognized and converted into the corresponding position on the display screen 4 and the corresponding instruction, so that manipulation of the device is realized.
  • Take the display screen of an e-book reader as an example: touch manipulation on the display itself affects the display effect. For instance, when pages are continually turned with a finger on the e-book screen, the hand inevitably blocks part of the screen and degrades the reading experience. Controlling with the manipulation object on the control surface alleviates this problem and enhances the user experience.
  • the image acquisition unit 1 herein is a depth camera based on the structured light or TOF principle, and generally includes a receiving unit and a transmitting unit.
  • Performing the manipulation on the control surface effectively avoids the problem of touch conflicts.
  • On the control surface, the manipulation object can stay in contact with the surface at any time, which makes it easy to obtain its position on the control surface accurately and quickly.
  • Embodiment 2:
  • This embodiment differs in that the touch operation is completed by combining the control surface 3 with the display screen 4, the display screen 4 being a touch screen.
  • the touch screen includes a near touch region 1 and a far touch region 2; in the near touch region 1 the manipulation object 16 performs touch control through the touch function of the touch screen itself,
  • while in the far touch region 2 the touch operation is completed by pointing through the control surface.
  • Touch control is still used in the near touch region 1 to ensure better precision, while in the far touch region
  • the first position on the control surface 3 is used, and the operation is executed according to the shape and motion of the manipulation object.
  • This embodiment is particularly suitable for a touch-enabled device such as a mobile phone that currently cannot be fully controlled with one hand because of its large screen.
  • Embodiment 3:
  • This embodiment provides an electronic device, which may be a mobile phone or a tablet, and includes a bus 5 for data transmission.
  • the bus 5 is divided into a data bus, an address bus, and a control bus.
  • Connected to the bus 5 are a CPU 6, a display 7, an IMU 8 (inertial measurement unit), a memory 9, a camera 11, a sound device 12, a network interface 13, and a mouse/keyboard 14.
  • the mouse and keyboard can be replaced by the display 7, and the IMU 8 is used for positioning, tracking, and similar functions.
  • the memory 9 is used to store an operating system, an application, etc., and can also be used to store temporary data during operation.
  • the image acquisition unit 1 of the control system provided on the electronic device corresponds to the camera 11; the processor 2 corresponds to the CPU 6, and may also be a separate processor;
  • the display screen 4 corresponds to the display 7.
  • the control surface is disposed on the back of the mobile phone, and is in a different plane from the display screen.
  • the camera 11 is typically secured to an electronic device.
  • Its imaging direction, however, differs greatly from that of the cameras of existing devices.
  • Existing cameras are typically front-facing or rear-facing, and such configurations cannot capture an image of the device itself.
  • The camera 11 can be configured in various ways: one is to rotate an existing camera by 90 degrees so that it images along the device; another is to attach the camera externally,
  • connecting it to the device as a whole through some fixing measure and via an interface such as USB.
  • Those skilled in the art can select the form of the camera on the electronic device according to the actual situation, without limitation.
  • the camera 11 of the present embodiment is a depth camera for acquiring a depth image of the target area.
  • FIG. 4 is a rear view of the phone operated with one hand.
  • A camera 11 is arranged at the top of the mobile phone and images downward from the top of the phone toward the bottom, so that it can capture the control surface on the back of the phone and the finger (manipulation object); and
  • FIG. 5 is a side view of one-handed operation of the mobile phone,
  • in which 17 is the first position on the control surface and 16 is the manipulation object.
  • the display 7 of the electronic device may or may not have a touch function.
  • When it has no touch function, control is performed only through the control surface; when it has a touch function, the manipulation object can touch the reachable areas of the display directly,
  • while control-surface manipulation is used for the areas that cannot be touched.
  • the camera 11 in this embodiment may also be a normal RGB camera for capturing RGB images; the camera 11 may also be an infrared camera for capturing infrared images; or may be a depth camera, such as based on the principle of structured light or based on the TOF principle. Depth camera, etc.
  • Depth images acquired with depth cameras are not affected by dark light, and can be measured even in the dark.
  • positioning and motion recognition using depth images is more accurate than RGB images. Therefore, in the following description, the depth camera and the depth image will be explained. However, the invention should not be limited to depth cameras.
  • Embodiment 4:
  • a one-handed control method as shown in Figures 4-5 and 7, includes the following steps:
  • the control surface 3 comprises at least one control region 15 , the at least one of which is automatically delimited by the acquired depth image and the position of the controlled object 16 on the control surface 3 .
  • the automatically delineated control area 15 is sized so that the manipulation object 16 can easily reach it on the control surface 3 during one-handed operation.
  • the contact point when the control object is in contact with the control surface 3 is determined.
  • all the points contacted on the control surface during this operation together delineate the manipulation area 15.
  • the control surface 3 includes at least one control area 15, so that depth-image acquisition and manipulation adapt to different touch habits of the manipulation object 16, such as left-handed and right-handed use, with the control area 15 automatically delineated from the depth image.
  • The control area 15 is a region on the control surface 3 delineated automatically from the position of the manipulation object 16 and the depth image, so the manipulation object 16 need not be confined to a fixed area to perform the touch operation.
  • The control area 15 is sized so that the manipulation object 16 can easily reach it during one-handed operation, and its delineated shape is not limited to a rectangle: it can follow the shape of the region the manipulation object 16 reaches most easily on the control surface 3, generally an irregular fan shape.
  • the acquired depth image includes other unrelated parts in addition to the control surface 3 and the finger.
  • the measurement range of the depth camera can be limited, that is, a certain threshold is set, and the depth information exceeding the threshold is removed.
  • the depth image acquired in this way will only contain the information on the back of the phone and the finger, which can reduce the amount of calculation of the recognition.
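Limiting the measurement range as just described can be done by zeroing every pixel beyond the threshold before any further processing, which shrinks the region later steps must examine. A one-line numpy sketch; the threshold value is an assumed example:

```python
import numpy as np

def limit_range(depth, max_depth=400.0):
    """Discard depth readings beyond max_depth (assumed value), so the
    image retains only the back of the phone and the finger."""
    return np.where(depth <= max_depth, depth, 0.0)
```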
  • the image segmentation method is used to obtain the depth image, and one method includes the following steps:
  • the manipulation object 16 of the present embodiment is a finger, and the depth image of the front end portion of the finger obtained by the background segmentation method in the above step can reduce the calculation amount at the time of modeling and increase the calculation speed.
  • acquiring the first location 17 includes the following steps:
  • the obtaining of the second location comprises the following steps:
  • a linear mapping relationship is established according to the horizontal and vertical dimensions of the control surface 3 and the display screen 4.
  • the second position on the display screen 4 can be obtained quickly from the first position 17; with a linear mapping relationship, the control surface 3 or the control area 15 can be taken as a relatively regular shape such as a rectangle, and the mapping is established quickly from the horizontal and vertical dimensions.
  • the positioning needs to be performed first, and the manipulation action may be a change of the shape of the finger, such as a change in the angle between the finger and the display screen 4, or an action of the finger, such as a click action.
  • the shape and motion recognition of the finger are prior art and will not be described in detail herein.
  • a click action is completed.
  • the shape of the finger and the manipulation command corresponding to the action need to be preset.
  • the processor converts it into a corresponding manipulation command.
  • Embodiment 5:
  • the display screen 4 is a touch screen with a touch function. As shown in FIG. 6, the display screen 4 includes a near touch area 1 that the manipulation object 16 can easily reach and, outside it, a far touch area 2 that is not easily reached; the manipulation object 16 performs one-handed touch through the touch function of the display screen 4 in the near touch area, and touches at the second position obtained by positioning in the far touch area.
  • the distinction between the near touch area 1 and the far touch area 2 can be automatically recognized and delineated by the system. That is, when the finger on the display-screen side is in contact with the touch display screen 4, the processor processes the signal fed back by the touch display screen 4 and executes the corresponding touch command; when the finger on the display-screen side is not in contact with the touch display screen 4, the depth camera recognizes the non-contact motion and feeds it back to the processor, which processes the depth image acquired by the depth camera and performs the touch operation through the second position.
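The routing just described is essentially a dispatcher: events from the touchscreen itself are handled in the near area, while the depth pipeline supplies positions for the far area. A sketch under assumptions (the event tuples, the `near_area` predicate, and the rule of ignoring out-of-area events are illustrative choices, not from the patent):

```python
def dispatch(event, near_area, to_screen):
    """Route one manipulation event to the right handler.

    event     -- ("touch", (x, y)) from the touchscreen itself, or
                 ("depth", first_pos) from the depth-camera pipeline
    near_area -- predicate: is a screen position in the near touch area?
    to_screen -- mapping from control-surface position to screen position
    Returns the screen position where the touch command should run,
    or None when the event should be ignored.
    """
    kind, payload = event
    if kind == "touch":
        # Near area: the screen's own touch function handles it.
        return payload if near_area(payload) else None
    if kind == "depth":
        pos = to_screen(payload)
        # Far area only, to avoid conflicting with the native touch.
        return None if near_area(pos) else pos
    return None
```

Keeping each input source confined to its own area is one way to realize the conflict avoidance this embodiment aims for.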
  • A hybrid control mode is implemented that uses both the screen's own touch function and the pointing-style control method, as follows:
  • the control object 16 performs one-hand touch using the touch function of the display screen 4 in the near touch area, and performs touch control in the second position obtained by positioning in the far touch area.
  • This hybrid control method can compensate for the low precision caused by the randomness of the manipulation of the object 16 and provide a better experience for the user.
  • the above-mentioned near touch area and far touch area are also automatically delineated according to the user's touch habits, to adapt to the user's habit of manipulation and to optimize the shape of the near touch area and the far touch area.

Abstract

The present invention discloses a one-hand operation method and an operation system, the one-hand operation method comprising the steps of: S1: Acquiring an operation surface and a depth image of an operation object on the operation surface; S2: Obtaining a first position of the operation object on the operation surface by means of the depth image; S3: Positioning a second position on a display screen according to the first position of the operation object on the operation surface; S4: According to a predetermined operation action determined by a shape and an action of the operation object, identifying and translating the predetermined operation action into a touch instruction which needs to be executed; S5: Performing a touch operation at a second position according to the touch instruction. Through the operation surface, the present invention can well realize the one-hand operation of a large-screen electronic device and avoid a touch control conflict.

Description

One-handed control method and control system

Technical field
The present invention relates to the field of electronic technology, and in particular to a one-handed control method and control system.
Background art
With the popularity of the mobile Internet and the ever higher functional demands on smart devices such as mobile phones, communication, Internet access, and video playback together form the most basic configuration of current electronic devices. Devices such as mobile phones have almost entered the era of large screens, putting one-handed operation out of reach; nevertheless, combining a larger screen with one-handed operation remains what users demand of their phones.
Taking a mobile phone as an example, adding a function key to the back of the phone is one solution for one-handed operation, but it inevitably detracts from the appearance of the back of the phone, so this solution has never been accepted by users. Another option is to add an extra touch screen to the back of the phone, so that finger manipulation on the back controls the areas of the screen that cannot be operated with one hand. However, this solution is costly and therefore cannot become a mainstream one-handed operation scheme.
Meanwhile, in current schemes that use a depth image for touch operation, the manipulation object cannot touch screen areas beyond its reach: the position of the manipulation object on the display screen is mapped directly from its pixel coordinates in the acquired depth image. Such direct pixel-coordinate mapping cannot perform touch operations on areas the manipulation object cannot reach, so only simple functions such as page turning can be implemented, and one-handed control of a large screen is not well solved.
Moreover, existing depth-image touch schemes mostly place a depth camera on the display-screen side and obtain the manipulation object's position on the screen from the screen's depth image, preferably acquired when the object contacts the screen. This method suits displays without a touch function. For a touch screen, however, a finger touching the display is recognized as a native touch on the screen; if the finger's position information is simultaneously used to touch another area of the display, two touch commands are generated and a touch conflict results, so other areas of the display can no longer be touched. Touching other areas then requires the touch object not to contact the display at all, which complicates the mapping used to obtain the object's position on the screen from the depth image and greatly weakens the experience.
The above background is disclosed only to assist in understanding the inventive concept and technical solution of the present invention; it does not necessarily belong to the prior art of this patent application. In the absence of clear evidence that the above content was published before the filing date of this application, the background should not be used to evaluate the novelty and inventiveness of this application.
Summary of the invention
The object of the present invention is to provide a one-handed manipulation method and manipulation system, so as to solve the technical problems in the prior art that one-handed manipulation cannot be achieved well and that touching a display screen which itself has a touch function easily causes touch conflicts.
To this end, the present invention provides a one-handed manipulation method in which the display screen and a control surface used for touch operations lie in different planes, comprising the following steps: S1: acquiring a depth image of the control surface and of a manipulation object on the control surface; S2: obtaining, from the depth image, a first position of the manipulation object on the control surface; S3: locating a second position on the display screen according to the first position of the manipulation object on the control surface; S4: recognizing the predetermined manipulation action determined by the shape and motion of the manipulation object, and converting it into a touch instruction to be executed; S5: executing a touch operation at the second position according to the touch instruction.
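As a rough orientation, steps S1-S5 can be read as a small pipeline. The sketch below is not part of the specification; every function name and the callable-based decomposition are assumptions made purely for illustration:

```python
def run_one_hand_touch(get_depth_image, locate_first_position,
                       map_to_screen, classify_action, execute):
    """Illustrative S1-S5 pipeline; each stage is supplied as a callable."""
    depth_image = get_depth_image()                  # S1: control surface + object
    first_pos = locate_first_position(depth_image)   # S2: position on control surface
    second_pos = map_to_screen(first_pos)            # S3: position on display screen
    instruction = classify_action(depth_image)       # S4: shape/motion -> touch command
    return execute(instruction, second_pos)          # S5: touch at second position
```

Any concrete system would replace the callables with real depth-camera capture, segmentation, mapping, and gesture-recognition code.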
Preferably, the manipulation method of the present invention may further have the following technical features:
The control surface comprises at least one manipulation region, and at least one manipulation region is automatically delimited based on the acquired depth image and the obtained position of the manipulation object on the control surface.
The automatically delimited manipulation region is sized so that the manipulation object can easily reach it on the control surface during one-handed operation.
The display screen comprises a near touch region that the manipulation object can easily touch and a far touch region, outside the near touch region, that it cannot easily touch. In the near touch region the manipulation object performs one-handed touch using the display screen's own touch function; in the far touch region, touch is performed at the second position obtained by the locating step.
The near touch region and the far touch region are automatically delimited according to the position of the manipulation object relative to the display screen.
The step of acquiring the depth image comprises: S11: acquiring a first depth image that contains the control surface but not the manipulation object; S12: acquiring a second depth image that contains both the control surface and the manipulation object; S13: obtaining a third depth image of the manipulation object from the second depth image and the first depth image.
Obtaining the first position comprises the following steps: S21: judging, from the second depth image, whether the manipulation object is in contact with the control surface, and if so, proceeding to the next step; S22: obtaining, from the third depth image, the spatial position of the manipulation object in the coordinate system of the display screen, and taking the vertex coordinates of the manipulation object as the first position.
Obtaining the second position comprises the following steps: S31: establishing a mapping relationship between the control surface and the display screen; S32: obtaining the second position on the display screen from the mapping relationship and the first position.
A linear mapping relationship is established according to the horizontal and vertical dimensions of the control surface and of the display screen.
In addition, the present invention further provides a one-handed manipulation system for performing the above touch method, comprising an image acquisition unit, a processor, a control surface and a display screen, wherein the control surface and the display screen lie in different planes;
the image acquisition unit is configured to acquire depth images of the control surface and of the manipulation object, as well as depth information of the manipulation object;
the processor comprises a recognition module, a locating module, a conversion module and an execution module. The recognition module is configured to obtain, from the depth image, the first position of the manipulation object on the control surface and to recognize predetermined manipulation actions of the manipulation object; the locating module is configured to locate, according to the first position, the second position on the display screen that is to be manipulated; the conversion module generates the corresponding touch instruction from the predefined manipulation action; and the execution module executes the touch instruction at the second position to complete the touch operation on the display screen.
Compared with the prior art, the advantageous effects of the present invention include:
The present invention uses depth images to implement touch operations and constitutes a one-handed manipulation method. In this method, the control surface used for manipulation and the display screen are arranged in different planes, and the manipulation object completes the touch operation on the control surface. The depth image is used to obtain the first position of the manipulation object on the control surface, from which the second position on the display screen is further derived; combined with a predetermined manipulation action, the touch operation is executed at the second position. Thus, when a user operates a large-screen electronic device with a touch-capable display, the manipulation can be completed on the control surface, which determines the position of the manipulation object on the display screen and avoids the touch conflict that otherwise arises between manipulation and touch.
Meanwhile, for regions of the display screen that are hard to reach, the user only needs to complete a simple predetermined manipulation action on the control surface to touch the display in the other plane. On the control surface, the manipulation object can remain in contact with the surface at all times, which facilitates accurate and fast acquisition of its position on the control surface.
Compared with the prior art, the present invention not only implements simple gesture operations such as page turning and going back on displays without a touch function, but also realizes one-handed touch well: objects that cannot be reached with one hand can be touched through corresponding operations on the control surface, and precise touch can be achieved.
In a preferred solution, the control surface comprises at least one manipulation region, to accommodate depth-image acquisition and manipulation under different touch habits of the manipulation object, such as left- and right-handed use. The manipulation region is automatically delimited from the depth image. In general, the manipulation region is a preset area of the control surface, automatically delimited using the depth image and the position of the manipulation object on the control surface, so the manipulation object is not required to operate within a fixed area to perform touch operations, which improves the manipulation experience. In particular, the manipulation region is sized to be easily reachable on the control surface during one-handed operation, so as to optimize the region; its shape is not limited to a rectangle and can be determined from the shape of the area most easily touched by the manipulation object on the control surface, generally an irregular sector.
On the basis of the above manipulation method, for a display screen that itself has a touch function, a hybrid manipulation mode is adopted that uses both the screen's own touch function and the pointing-type manipulation method: the manipulation object performs one-handed touch in the near touch region using the display screen's own touch function, and touches in the far touch region at the second position obtained by the locating step. This hybrid method compensates for the low precision caused by the high randomness of the manipulation object's movement and provides the user with a better experience. The near and far touch regions are likewise delimited automatically according to the user's touch habits, to suit the user's manner of operation and to optimize the shapes of the near and far touch regions.
A third depth image of the manipulation object is obtained from the first and second depth images, and the first position is then obtained from the third depth image. Since the third depth image contains only the manipulation object, acquiring the first position from it reduces the computational load of processing, increases computation speed, and improves the system's response time.
By establishing a mapping relationship between the control surface and the display screen, which lie in different planes, the second position on the display screen can be obtained quickly from the first position. A linear mapping can be used when the control surface or manipulation region has a fairly regular shape such as a rectangle, allowing the mapping to be established quickly from the horizontal and vertical dimensions.
Drawings
FIG. 1 is a schematic structural diagram of a manipulation system according to Embodiment 1 of the present invention;
FIG. 2 is a schematic structural diagram of a processor according to Embodiment 1 of the present invention;
FIG. 3 is a schematic structural diagram of an electronic device according to Embodiment 3 of the present invention;
FIG. 4 is a schematic rear view of a mobile phone operated with one hand according to Embodiments 3 and 4 of the present invention;
FIG. 5 is a schematic side view of a mobile phone operated with one hand according to Embodiments 3 and 4 of the present invention;
FIG. 6 is a schematic diagram of hybrid manipulation according to Embodiments 2 and 5 of the present invention;
FIG. 7 is a first manipulation flowchart of Embodiment 4 of the present invention;
FIG. 8 is a second manipulation flowchart of Embodiment 4 of the present invention.
Detailed description
The present invention is further described in detail below with reference to specific embodiments and the accompanying drawings. It should be emphasized that the following description is merely exemplary and is not intended to limit the scope of the invention or its applications.
Non-limiting and non-exclusive embodiments will be described with reference to the following drawings, in which like reference numerals denote like parts unless otherwise specified.
Embodiment 1:
This embodiment provides a one-handed manipulation system which, as shown in FIG. 1, comprises an image acquisition unit 1, a processor 2, a control surface 3 and a display screen 4;
the image acquisition unit 1 is configured to acquire depth images of the control surface 3 and of the manipulation object, as well as depth information of the manipulation object; the control surface 3 and the display screen 4 lie in different planes;
as shown in FIG. 2, the processor 2 comprises a recognition module 21, a locating module 22, a conversion module 23 and an execution module 24. The recognition module 21 is configured to obtain, from the depth image, the first position of the manipulation object on the control surface 3 and to recognize predetermined manipulation actions of the manipulation object; the locating module 22 is configured to locate, according to the first position, the second position on the display screen 4 that is to be manipulated; the conversion module 23 is configured to generate the corresponding touch instruction from the predefined manipulation action; and the execution module 24 is configured to execute the touch instruction at the second position to complete the touch operation on the display screen 4.
The system can acquire a depth image of the control surface 3 through the image acquisition unit, and can also acquire a depth image of the manipulation object in contact with the display screen 4. From this depth image, the position and action of the manipulation object on the control surface 3 can be recognized and converted into the corresponding position and instruction on the display screen 4, thereby realizing manipulation of the device. In this embodiment, consider the display of an e-book reader: touch manipulation on the display itself impairs the display effect; for example, when repeatedly turning pages with a finger on an e-book, the hand inevitably blocks part of the screen and degrades the reading experience. Manipulating with the manipulation object on the control surface instead alleviates this problem and improves the user experience.
The image acquisition unit 1 here is a depth camera based on structured light or on the TOF principle, generally composed of a receiving unit and a transmitting unit.
Using the control surface effectively avoids the problem of touch conflicts. Meanwhile, for regions of the display screen that are hard to reach, simply completing a predetermined manipulation action on the control surface suffices to touch the display in the other plane; on the control surface, the manipulation object can remain in contact with the surface at all times, facilitating accurate and fast acquisition of its position on the control surface.
Embodiment 2:
The difference from Embodiment 1 is that in this embodiment the touch operation is completed by the control surface 3 and the display screen 4 together, and the display screen 4 is a touch screen. As shown in FIG. 6, the touch screen comprises a near touch region ① and a far touch region ②; the manipulation object 16 completes touches within the near touch region ① using the touch screen's own touch function, while in the far touch region ② the touch operation is completed by pointing from the control surface 3.
Specifically: the near touch region ① can be reached with one hand, while the far touch region ② cannot. In the near touch region ① ordinary touch manipulation is still used to guarantee better precision, whereas in the far touch region ② the first position on the control surface 3 is used and the manipulation is executed according to the shape and action of the touching object. This embodiment is particularly suitable for touch-capable devices, such as mobile phones, whose large screens currently cannot be fully operated with one hand.
Embodiment 3:
This embodiment provides an electronic device, which may be a mobile phone or a tablet, comprising a bus 5 for data transmission. As shown in FIG. 3, the bus 5 is divided into a data bus, an address bus and a control bus, used respectively to transmit data, data addresses and control signals. Connected to the bus 5 are a CPU 6, a display 7, an IMU 8 (inertial measurement unit), a memory 9, a camera 11, a sound device 12, a network interface 13 and a mouse/keyboard 14. In some touch-capable electronic devices the mouse and keyboard are replaced by the display 7. The IMU is used for functions such as positioning and tracking. The memory 9 is used to store the operating system, applications and the like, and may also store temporary data generated during operation.
In the manipulation system provided on this electronic device, the image acquisition unit 1 corresponds to the camera 11; the processor 2 corresponds to the CPU 6 and may also be a separate processor; the display screen 4 corresponds to the display 7; and the control surface is arranged on the back of the mobile phone, in a plane different from that of the display screen.
As shown in FIG. 4, the camera 11 is generally fixed to the electronic device. In this embodiment, the imaging direction differs greatly from that of the cameras of existing devices. Existing cameras are generally front-facing or rear-facing, and such configurations cannot capture an image of the device itself. In this embodiment the camera 11 can be arranged in several ways: one is to use a rotating shaft, rotating the camera by 90 degrees so that it can image the device itself; another is an external camera, fixed to the device as a whole by some fastening means and connected through an interface such as USB. Those skilled in the art may choose how the camera is arranged on the electronic device according to the actual situation, without limitation.
In contrast to existing cameras, the camera 11 of this embodiment is a depth camera, used to acquire depth images of the target region.
FIG. 4 is a schematic rear view of the mobile phone being operated with one hand. A camera 11 is arranged at the top of the phone, with its capture direction running down the phone from top to bottom, so that it can capture images of the control surface on the back of the phone and of the finger (the manipulation object). FIG. 5 is a schematic side view of the phone being operated with one hand, in which 17 is the first position on the control surface and 16 is the manipulation object.
The display 7 of the electronic device may or may not have a touch function. When it has no touch function, manipulation can be performed solely through the control surface; when it has a touch function, manipulation can be performed directly in the screen regions the manipulation object can touch, while control-surface manipulation is used for the regions it cannot reach.
The camera 11 in this embodiment may also be an ordinary RGB camera for capturing RGB images, or an infrared camera for capturing infrared images, or a depth camera, such as one based on the structured-light principle or on the TOF principle.
Depth images acquired by a depth camera are unaffected by dim light and can be measured even in darkness; moreover, positioning and action recognition based on depth images are more accurate than with RGB images. The following description therefore uses a depth camera and depth images, but the invention should not be limited to depth cameras.
Embodiment 4:
A one-handed manipulation method, as shown in FIGS. 4-5 and FIG. 7, comprising the following steps:
S1: acquiring a depth image of the control surface 3 and of the manipulation object 16 on the control surface 3;
S2: obtaining, from the depth image, a first position 17 of the manipulation object 16 on the control surface 3;
S3: locating a second position on the display screen 4 according to the first position 17 of the manipulation object 16 on the control surface 3;
S4: recognizing the predetermined manipulation action determined by the shape and motion of the manipulation object 16, and converting it into a touch instruction to be executed;
S5: executing a touch operation at the second position according to the touch instruction.
The control surface 3 comprises at least one manipulation region 15, and at least one manipulation region 15 is automatically delimited from the acquired depth image and the obtained position of the manipulation object 16 on the control surface 3. The automatically delimited manipulation region 15 is sized so that the manipulation object 16 can easily reach it on the control surface 3 during one-handed operation. During automatic delimitation, the determination can be based on the contact points where the manipulation object touches the control surface 3: when touch operations are executed, all points contacted on the touch surface during the operation are taken as the manipulation region 15.
The control surface 3 comprises at least one manipulation region 15, to accommodate depth-image acquisition and manipulation under different touch habits of the manipulation object 16, such as left- and right-handed use. The manipulation region 15 is automatically delimited from the depth image. In general, the manipulation region 15 is a preset area of the control surface 3, automatically delimited using the depth image and the position of the manipulation object 16; the manipulation object 16 is not required to operate within a fixed area to perform touch operations, which improves the manipulation experience. In particular, the manipulation region 15 is sized to be easily reachable on the control surface 3 during one-handed operation, so as to optimize the region; its shape is not limited to a rectangle and can be determined from the shape of the area most easily touched by the manipulation object 16 on the control surface 3, generally an irregular sector.
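One way to read the automatic delimitation described above is to accumulate the contact points observed while the user operates and take the region they span as the manipulation region 15. The sketch below is only illustrative; the bounding-box simplification (the text allows irregular sector shapes) and all names are assumptions:

```python
def delimit_region(contact_points):
    """Delimit a manipulation region as the bounding box of all contact
    points (x, y) observed during one-handed operation -- a rectangular
    simplification of the irregular region described in the text."""
    xs = [p[0] for p in contact_points]
    ys = [p[1] for p in contact_points]
    return (min(xs), min(ys), max(xs), max(ys))  # (x0, y0, x1, y1)

def in_region(point, region):
    """True if a point lies inside the delimited manipulation region."""
    x0, y0, x1, y1 = region
    return x0 <= point[0] <= x1 and y0 <= point[1] <= y1
```

Because the region is derived from where the hand actually touched, a left-handed and a right-handed user would automatically get different regions.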
Besides the control surface 3 and the finger, the acquired depth image contains other irrelevant parts. The measurement range of the depth camera can then be limited, i.e., a threshold is set and depth information beyond the threshold is removed; a depth image acquired in this way contains only the back of the phone and the finger, which reduces the computation required for recognition. To further increase system speed and reduce computation, image segmentation is used when acquiring the depth image. One such method comprises the following steps:
S11: acquiring a first depth image that contains the control surface 3 but not the manipulation object 16;
S12: acquiring a second depth image that contains both the control surface 3 and the manipulation object 16;
S13: obtaining a third depth image of the manipulation object 16 from the second depth image and the first depth image.
The manipulation object 16 of this embodiment is a finger. The depth image of the fingertip region obtained by the above background-segmentation steps reduces the computation required for modeling and increases computation speed.
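The range limiting and background segmentation just described can be sketched as follows. Depth images are modeled as 2-D lists of depth values in millimetres, and the particular threshold and tolerance values are illustrative assumptions, not values from the specification:

```python
def clip_range(depth_img, max_depth):
    """Remove depth information beyond the camera's limited measurement
    range (the thresholding step described in the text)."""
    return [[d if d is not None and d <= max_depth else None for d in row]
            for row in depth_img]

def third_depth_image(first_img, second_img, tol=5):
    """S13: keep only the pixels where the second image (surface + object)
    differs from the first image (background) by more than `tol` mm,
    yielding a depth image of the manipulation object alone."""
    h, w = len(first_img), len(first_img[0])
    return [[second_img[r][c]
             if second_img[r][c] is not None and first_img[r][c] is not None
             and abs(second_img[r][c] - first_img[r][c]) > tol else None
             for c in range(w)] for r in range(h)]
```

Subsequent processing then only visits the non-`None` pixels, which is where the reduction in computation comes from.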
As shown in FIG. 8, obtaining the first position 17 comprises the following steps:
S21: judging, from the second depth image, whether the manipulation object 16 is in contact with the control surface 3, and if so, proceeding to the next step;
S22: obtaining, from the third depth image, the spatial position of the manipulation object 16 in the coordinate system of the display screen 4, and taking the vertex coordinates of the manipulation object 16 as the first position 17.
Obtaining the second position comprises the following steps:
S31: establishing a mapping relationship between the control surface 3 and the display screen 4;
S32: obtaining the second position on the display screen 4 from the mapping relationship and the first position 17.
A linear mapping relationship is established according to the horizontal and vertical dimensions of the control surface 3 and of the display screen 4.
By establishing a mapping relationship between the control surface 3 and the display screen 4, which lie in different planes, the second position on the display screen 4 can be obtained quickly from the first position 17. A linear mapping can be used when the control surface 3 or manipulation region 15 has a fairly regular shape such as a rectangle, allowing the mapping to be established quickly from the horizontal and vertical dimensions.
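For a rectangular control surface (or manipulation region), the linear mapping of S31-S32 reduces to scaling by the ratio of horizontal and vertical dimensions. A minimal sketch, with all dimensions in the same arbitrary units and the optional region offset as my own generalization:

```python
def linear_map(first_pos, surface_wh, screen_wh, region_origin=(0, 0)):
    """Map a first position on the control surface (or on a delimited
    manipulation region offset by region_origin) to a second position on
    the display screen by scaling each axis independently."""
    x = first_pos[0] - region_origin[0]
    y = first_pos[1] - region_origin[1]
    sx = screen_wh[0] / surface_wh[0]   # horizontal scale factor
    sy = screen_wh[1] / surface_wh[1]   # vertical scale factor
    return (x * sx, y * sy)
```

For example, the centre of a 60 x 40 mm region on the back of the phone maps to the centre of a 1080 x 1920 px screen.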
In existing touch technology, positioning and touching are completed simultaneously; this, however, is not feasible on the unseen back of the device. Positioning must therefore be done first, followed by the operation. The manipulation action may be a change in the shape of the finger, such as a change in the angle between the finger and the display screen 4, or a movement of the finger, such as a tap. Recognition of finger shape and motion is prior art and is not described in detail here.
Of course, operations that require no positioning, such as page turning and going back, are not excluded; for such operations it suffices to recognize only the shape or motion of the finger.
In this embodiment, when the finger touches the back surface and then reduces its tilt angle, one tap action is completed. The manipulation instruction corresponding to each finger shape and motion must be preset; when a given finger shape and motion is recognized, the processor converts it into the corresponding manipulation instruction.
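The preset correspondence between finger shape/motion and manipulation instructions can be represented as a simple lookup table; the tap rule described above (finger on the back surface plus a decreasing tilt angle) appears as one entry. The gesture names and the dictionary representation are illustrative assumptions:

```python
# Illustrative preset table: (touch state, recognized motion) -> instruction.
PRESET_ACTIONS = {
    ("touch", "tilt_down"): "tap",           # the tap rule described in the text
    ("touch", "swipe_left"): "page_forward",
    ("touch", "swipe_right"): "page_back",
}

def to_instruction(touch_state, motion):
    """Convert a recognized finger shape/motion into a preset instruction;
    unrecognized combinations produce no instruction."""
    return PRESET_ACTIONS.get((touch_state, motion), "none")
```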
Embodiment 5:
Given the randomness of finger movement, its precision can hardly match that of direct touch. Hence, for one-handed operation of touch-capable devices such as mobile phones, a scheme with a better experience is to keep touch operation in the region reachable with one hand and apply non-contact manipulation in the region one hand cannot reach. In this embodiment the display screen 4 is therefore a touch screen with its own touch function. As shown in FIG. 6, the display screen 4 comprises a near touch region ① that the manipulation object 16 can easily touch and a far touch region ②, outside the near touch region, that it cannot easily touch; the manipulation object 16 performs one-handed touch in the near touch region using the display screen 4's own touch function, and touches in the far touch region at the second position obtained by the locating step.
Because users differ in hand size and in left- or right-handed habits, the division between the touch region ① and the pointing region ② can be recognized and delimited automatically by the system. That is, when a finger on the display-screen side contacts the touch display 4, the processor processes the signal fed back by the touch display 4 and executes the corresponding touch instruction; when a finger on the display-screen side does not contact the touch display 4, the depth camera recognizes the non-contact action and reports it to the processor, which then processes the depth image acquired by the depth camera and performs the touch operation through the second position.
Building on the control method above, a display screen 4 that has its own touch function can run a hybrid mode combining that native touch function with the pointing-type control method: the manipulation object 16 performs one-handed touch in the near touch zone through the display screen 4's own touch function, and performs touch in the far touch zone through the second position obtained by positioning. This hybrid method compensates for the low precision caused by the randomness of the manipulation object 16's movement and gives the user a better experience. The near and far touch zones are likewise delineated automatically from the user's touch habits, adapting their shapes to, and optimizing them for, the way the user actually operates the device.
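The text leaves the delineation criterion open. One plausible sketch, purely an assumption, grows the near touch zone around the user's recorded one-handed touch points:

```python
import numpy as np

def delineate_near_zone(touch_history, margin=0.05):
    """Pad the bounding box of recent touch points (normalized (x, y)
    coordinates in [0, 1]) and call it the near zone; the rest of the
    screen is the far zone. Both the criterion and the margin are
    assumptions, not taken from the patent."""
    pts = np.asarray(touch_history, dtype=float)
    lo = np.clip(pts.min(axis=0) - margin, 0.0, 1.0)
    hi = np.clip(pts.max(axis=0) + margin, 0.0, 1.0)
    return lo, hi  # opposite corners of the near-zone rectangle

def in_near_zone(point, zone):
    """True if `point` falls inside the delineated near-zone rectangle."""
    lo, hi = zone
    p = np.asarray(point, dtype=float)
    return bool(np.all(lo <= p) and np.all(p <= hi))
```

Points inside the rectangle would then be served by the touchscreen itself, and points outside it by the depth-camera pointing path.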
Those skilled in the art will recognize that many variations on the above description are possible, and that the embodiments described are only one or more specific implementations.
Although what are regarded as exemplary embodiments of the invention have been described and illustrated, those skilled in the art will understand that various changes and substitutions may be made without departing from the spirit of the invention. In addition, many modifications may be made to adapt a particular situation to the teachings of the invention without departing from the central inventive concept described herein. Therefore, the invention is not limited to the specific embodiments disclosed herein, and may also encompass all embodiments falling within its scope, together with their equivalents.

Claims (10)

  1. A one-handed control method, wherein the display screen and the control surface used for touch operation lie on different planes, comprising the following steps:
    S1: acquiring a depth image of the control surface and of the manipulation object on the control surface;
    S2: obtaining, from the depth image, a first position of the manipulation object on the control surface;
    S3: locating a second position on the display screen according to the first position of the manipulation object on the control surface;
    S4: recognizing a predetermined manipulation action determined by the shape and motion of the manipulation object, and converting it into a touch command to be executed;
    S5: executing a touch operation at the second position according to the touch command.
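Steps S1–S5 read naturally as one control cycle. The decomposition below into injected stage functions is an assumed sketch, not the patent's own API:

```python
def one_hand_control(get_depth, locate, map_pos, recognize, execute):
    """Run one control cycle over pluggable stage functions (S1-S5).
    All parameter names are illustrative assumptions."""
    depth = get_depth()              # S1: depth image of surface + object
    first = locate(depth)            # S2: first position on the control surface
    if first is None:
        return None                  # object not on the surface this frame
    second = map_pos(first)          # S3: second position on the display
    command = recognize(depth)       # S4: shape/motion -> touch command
    return execute(command, second)  # S5: perform the touch at that point
```

With stub stages, one cycle returns the recognized command and the mapped display position, making the data flow between the five steps explicit.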
  2. The control method according to claim 1, wherein the control surface comprises at least one manipulation region, and at least one said manipulation region is delineated automatically from the acquired depth image and the obtained position of the manipulation object on the control surface.
  3. The control method according to claim 2, wherein the automatically delineated manipulation region is sized so that the manipulation object can easily reach it on the control surface during one-handed operation.
  4. The control method according to claim 1, wherein the display screen comprises a near touch zone that the manipulation object can easily touch and a far touch zone, outside the near touch zone, that is not easily touched; the manipulation object performs one-handed touch in the near touch zone through the display screen's own touch function, and performs touch in the far touch zone through the second position obtained by positioning.
  5. The control method according to claim 4, wherein the near touch zone and the far touch zone are delineated automatically according to the position of the manipulation object relative to the display screen.
  6. The control method according to claim 1, wherein the step of acquiring the depth image comprises:
    S11: acquiring a first depth image that contains the control surface but not the manipulation object;
    S12: acquiring a second depth image that contains both the control surface and the manipulation object;
    S13: obtaining a third depth image, of the manipulation object, from the second depth image and the first depth image.
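Steps S11–S13 amount to background subtraction on depth frames. A minimal numpy sketch, with the noise threshold as an assumed parameter:

```python
import numpy as np

def object_depth_image(first, second, noise=10):
    """S13: isolate the manipulation object by differencing the frame
    containing the object (second) against the empty-surface frame
    (first). Pixels whose depth changed by more than `noise` units are
    attributed to the object; everything else is zeroed out."""
    diff = np.abs(second.astype(np.int32) - first.astype(np.int32))
    return np.where(diff > noise, second, 0).astype(second.dtype)
```

The signed intermediate avoids unsigned-integer wraparound when the object is nearer to the camera than the surface.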
  7. The control method according to claim 6, wherein acquiring the first position comprises the following steps:
    S21: judging, from the second depth image, whether the manipulation object is in contact with the control surface, and if so, proceeding to the next step;
    S22: obtaining, from the third depth image, the spatial position of the manipulation object in the coordinate system of the display screen, and taking the coordinates of the manipulation object's vertex as the first position.
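One way to realize S21–S22, with the contact tolerance and the "topmost pixel as vertex" rule as assumptions not fixed by the claim:

```python
import numpy as np

def first_position(first, second, touch_tol=5):
    """S21: the object occupies pixels nearer to the camera than the bare
    surface; it counts as touching when its nearest-to-surface pixel lies
    within `touch_tol` depth units of the surface. S22: return the
    topmost object pixel (the 'vertex') as the first position, or None
    when the object is absent or merely hovering."""
    delta = first.astype(np.int32) - second.astype(np.int32)
    obj = delta > 0                        # nearer than the empty surface
    if not obj.any() or delta[obj].min() > touch_tol:
        return None                        # nothing in view, or hovering
    rows, cols = np.nonzero(obj)
    tip = rows.argmin()                    # topmost = smallest row index
    return int(rows[tip]), int(cols[tip])
```

A hovering finger yields no position at all, which is exactly the gate S21 places before S22.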
  8. The control method according to claim 1, wherein obtaining the second position comprises the following steps:
    S31: establishing a mapping relationship between the control surface and the display screen;
    S32: obtaining the second position on the display screen from the mapping relationship and the first position.
  9. The control method according to claim 8, wherein a linear mapping relationship is established according to the horizontal and vertical dimensions of the control surface and the display screen.
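The linear mapping of claim 9 is a per-axis scale by the ratio of the display's dimensions to the control surface's. A sketch assuming both are measured in consistent units:

```python
def second_position(first_pos, pad_size, screen_size):
    """Map a point on the control surface to the display screen (claim 9).

    first_pos   -- (x, y) of the first position on the control surface
    pad_size    -- (width, height) of the control surface
    screen_size -- (width, height) of the display screen
    All in the same arbitrary length or pixel units (an assumption)."""
    (x, y), (pw, ph), (sw, sh) = first_pos, pad_size, screen_size
    return x * sw / pw, y * sh / ph
```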
  10. A one-handed control system for performing the control method according to any one of claims 1-9, comprising an image acquisition unit, a processor, a control surface, and a display screen, the control surface and the display screen lying on different planes;
    the image acquisition unit is configured to acquire depth images of the control surface and of the manipulation object, together with depth information of the manipulation object;
    the processor comprises a recognition module, a positioning module, a conversion module, and an execution module; the recognition module is configured to obtain, from the depth image, the first position of the manipulation object on the control surface and to recognize the manipulation object's predetermined manipulation actions; the positioning module is configured to locate, from the first position, the second position on the display screen to be operated; the conversion module converts predefined manipulation actions into corresponding touch commands; and the execution module executes the touch command at the second position to complete the touch operation on the display screen.
PCT/CN2017/089027 2016-10-25 2017-06-19 One-hand operation method and control system WO2018076720A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610942444.6 2016-10-25
CN201610942444.6A CN106569716B (en) 2016-10-25 2016-10-25 Single-hand control method and control system

Publications (1)

Publication Number Publication Date
WO2018076720A1

Family

ID=58536395

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/089027 WO2018076720A1 (en) 2016-10-25 2017-06-19 One-hand operation method and control system

Country Status (2)

Country Link
CN (1) CN106569716B (en)
WO (1) WO2018076720A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112204511A (en) * 2018-08-31 2021-01-08 深圳市柔宇科技股份有限公司 Input control method and electronic device

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106569716B (en) * 2016-10-25 2020-07-24 深圳奥比中光科技有限公司 Single-hand control method and control system
CN107613094A (en) * 2017-08-17 2018-01-19 珠海格力电器股份有限公司 A kind of method and mobile terminal of one-handed performance mobile terminal
WO2023220983A1 (en) * 2022-05-18 2023-11-23 北京小米移动软件有限公司 Control method and apparatus for switching single-hand mode, device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102929388A (en) * 2011-09-30 2013-02-13 微软公司 Full space posture input
CN103440033A (en) * 2013-08-19 2013-12-11 中国科学院深圳先进技术研究院 Method and device for achieving man-machine interaction based on bare hand and monocular camera
CN105824553A (en) * 2015-08-31 2016-08-03 维沃移动通信有限公司 Touch method and mobile terminal
CN106569716A (en) * 2016-10-25 2017-04-19 深圳奥比中光科技有限公司 One-hand operation and control method and control system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10216286B2 (en) * 2012-03-06 2019-02-26 Todd E. Chornenky On-screen diagonal keyboard
CN102789568B (en) * 2012-07-13 2015-03-25 浙江捷尚视觉科技股份有限公司 Gesture identification method based on depth information
CN102937822A (en) * 2012-12-06 2013-02-20 广州视声电子科技有限公司 Reverse side controlling structure and method of mobile equipment
CN103176605A (en) * 2013-03-27 2013-06-26 刘仁俊 Control device of gesture recognition and control method of gesture recognition
CN103777701A (en) * 2014-01-23 2014-05-07 深圳市国华光电研究所 Large-screen touch screen electronic equipment
CN104331182B (en) * 2014-03-06 2017-08-25 广州三星通信技术研究有限公司 Portable terminal with auxiliary touch-screen
CN104750188A (en) * 2015-03-26 2015-07-01 小米科技有限责任公司 Mobile terminal


Also Published As

Publication number Publication date
CN106569716B (en) 2020-07-24
CN106569716A (en) 2017-04-19

Similar Documents

Publication Publication Date Title
US20210096651A1 (en) Vehicle systems and methods for interaction detection
EP2972727B1 (en) Non-occluded display for hover interactions
US8619049B2 (en) Monitoring interactions between two or more objects within an environment
EP2864932B1 (en) Fingertip location for gesture input
US20140300542A1 (en) Portable device and method for providing non-contact interface
WO2018076720A1 (en) One-hand operation method and control system
US9454260B2 (en) System and method for enabling multi-display input
US9207779B2 (en) Method of recognizing contactless user interface motion and system there-of
JP2015516624A (en) Method for emphasizing effective interface elements
CN104081307A (en) Image processing apparatus, image processing method, and program
CN106598422B (en) hybrid control method, control system and electronic equipment
US9400575B1 (en) Finger detection for element selection
US20180196530A1 (en) Method for controlling cursor, device for controlling cursor and display apparatus
US10162501B2 (en) Terminal device, display control method, and non-transitory computer-readable recording medium
US20220019288A1 (en) Information processing apparatus, information processing method, and program
US9041689B1 (en) Estimating fingertip position using image analysis
WO2021004413A1 (en) Handheld input device and blanking control method and apparatus for indication icon of handheld input device
US11294510B2 (en) Method, system and non-transitory computer-readable recording medium for supporting object control by using a 2D camera
JP6555958B2 (en) Information processing apparatus, control method therefor, program, and storage medium
TWI444875B (en) Multi-touch input apparatus and its interface method using data fusion of a single touch sensor pad and imaging sensor
WO2019100547A1 (en) Projection control method, apparatus, projection interaction system, and storage medium
JP3201596U (en) Operation input device
TW201419087A (en) Micro-somatic detection module and micro-somatic detection method
JP2018181169A (en) Information processor, and information processor control method, computer program, and storage medium
US9116573B2 (en) Virtual control device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17866199

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17866199

Country of ref document: EP

Kind code of ref document: A1