Hybrid control method, control system and electronic equipment
Technical Field
The present invention relates to the field of electronic technologies, and in particular to a pointing-based control method, a hybrid control method, a control system, and an electronic device.
Background
With the popularization of the mobile internet and users' ever-growing expectations of smart devices such as mobile phones, functions such as communication, internet access and video playback have become the basic configuration of today's electronic devices. Screen sizes of mobile phones and similar devices have entered the large-screen era, and a single hand can no longer reach the whole screen, yet large-screen devices are still expected to satisfy users' demand for one-handed operation.
Taking a mobile phone as an example, adding a function key on the back of the phone is one solution for one-handed operation, but it inevitably affects the appearance of the back of the phone, so this solution has not been accepted by users. Another scheme adds an additional touch screen on the back of the phone, so that the region of the front screen that cannot be reached by one hand is controlled by a finger on the back. However, this solution is costly and has not become a mainstream one-handed solution.
Meanwhile, in existing schemes that use a depth image for touch operation, the position of the operating object on the display screen is obtained by directly mapping the pixel coordinates of the object in the acquired depth image to the corresponding screen coordinates. Such direct pixel-coordinate mapping cannot produce touch operations in regions that the operating object cannot reach, so only simple functions such as page turning can be realized; it neither solves the problem of one-handed operation of a large screen nor achieves accurate touch of objects on the display screen.
The above background disclosure is provided only to assist understanding of the inventive concept and technical solutions of the present invention. It does not necessarily belong to the prior art of the present application and, in the absence of clear evidence that the above content was disclosed before the filing date of the present application, should not be used to evaluate the novelty and inventive step of the present application.
Disclosure of Invention
The present invention aims to provide a pointing-based control method, a hybrid control method, a control system and an electronic device, so as to solve the technical problem that one-handed control and accurate touch control cannot be realized well in the prior art.
To this end, the invention provides a pointing-based control method, which comprises the following steps. S1: acquiring a depth image of a display screen and a control object; S2: obtaining, from the depth image, the position on the display screen pointed at by the control object, thereby obtaining a pointing point on the display screen; S3: recognizing a predetermined control action in combination with the indication of the pointing point, and generating a touch instruction; S4: executing a touch operation at the position of the pointing point according to the touch instruction.
Preferably, the control method of the present invention may further have the following technical features:
The acquisition of the pointing point in step S2 includes: extracting, from the depth image, feature points that describe the control object, and constructing, from the feature points, a pointing straight line that characterizes the control object, so as to obtain the pointing point on the display screen.
The pointing point is the intersection of the pointing straight line with the display screen, and is obtained by the following steps. S21: acquiring the depth information of the feature points; S22: converting the depth information of the feature points into spatial position information in the coordinate system of the display screen, and obtaining, from this spatial position information, the pointing straight line that characterizes the control object; S23: computing the intersection of the pointing straight line with the display screen to obtain the pointing point, and displaying the pointing point on the display screen.
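As a purely illustrative formulation of steps S21 to S23 (assuming, for this sketch only, that the display screen is taken as the plane z = 0 of its coordinate system and that two feature points P1 and P2, for example a finger joint and the fingertip, have already been converted into that coordinate system):

```latex
% Pointing straight line through the two feature points (sketch notation):
P(t) = P_1 + t\,(P_2 - P_1), \qquad P_i = (x_i, y_i, z_i)
% Intersection with the display plane z = 0 gives the pointing point:
t^* = \frac{-z_1}{z_2 - z_1}, \qquad
(x_p,\; y_p) = \bigl(x_1 + t^*(x_2 - x_1),\; y_1 + t^*(y_2 - y_1)\bigr)
```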
When the control object is in contact with the display screen, the pointing point is a feature point of the control object.
The acquisition of the depth image in step S1 includes the following steps. S11: acquiring a first depth image that contains the display screen but not the control object; S12: acquiring a second depth image that contains both the display screen and the control object; S13: obtaining a third depth image of the control object from the second depth image and the first depth image, and obtaining the pointing point from the third depth image.
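For illustration, step S13 can be expressed as a per-pixel comparison of the two captured images; the notation and the noise threshold τ below are assumptions of this sketch:

```latex
% D_1: first depth image (screen only), D_2: second depth image (screen plus control object)
D_3(u, v) =
\begin{cases}
D_2(u, v), & \bigl|\, D_2(u, v) - D_1(u, v) \,\bigr| > \tau \\
\text{invalid}, & \text{otherwise}
\end{cases}
```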
The invention also provides a hybrid control method, in which the display screen is a touch screen with a touch function. The touch screen comprises a touch area and a pointing area: the control object performs touch control in the touch area through the touch function of the touch screen itself, while in the pointing area touch operations are completed through the pointing point according to the control method described above.
In addition, the invention provides a control system for executing the above control method, which comprises an image acquisition unit, a processor and a display screen;
The image acquisition unit is used for acquiring a depth image of the display screen and the control object, together with the depth information of the control object;
The processor comprises a recognition module, a conversion module and an execution module. The recognition module is used for obtaining, from the depth image, the position on the display screen pointed at by the control object and for recognizing a predetermined control action of the control object; the conversion module is used for converting the predefined control action into a corresponding touch instruction; the execution module is used for executing the touch instruction at the position of the pointing point to complete the touch operation;
The display screen is used for displaying the pointing point and other information.
Preferably, the hybrid control method and the control system of the present invention may further have the following technical features:
The display screen is a touch screen with a touch function; the control object performs touch control in the touch area through the touch function of the touch screen itself, while touch operations in the pointing area are completed through the pointing point.
The image acquisition unit of the control system is a depth camera based on the structured-light or TOF (time-of-flight) principle.
The invention further provides an electronic device comprising the above control system arranged on a bus, with control operations completed through the control system.
Compared with the prior art, the invention has the following advantages. The invention realizes touch operation by means of a depth image and a pointing-based control method: the position on the display screen pointed at by the control object is obtained from the depth image, yielding a pointing point on the display screen, and the user executes a touch operation at the pointing point by combining its indication with a predetermined control action. When the user operates a large-screen electronic device and needs to touch an object that is difficult to reach, it suffices to adjust the position and posture of the finger over the display screen, such as its inclination and its height above the screen, and point the finger at the object; a pointing point is then obtained on the display screen. Guided by the indication of the pointing point, the user knows exactly where the finger is pointing and can accurately perform the corresponding touch operation. Compared with the prior art, the invention can realize simple gesture operations such as page turning and going back on a display screen without a touch function, and can also realize one-handed touch well: objects that cannot be reached by one hand can be operated in a pointing manner, and accurate touch can be achieved.
In a preferred scheme, a third depth image of the control object is obtained from the first depth image and the second depth image, and the pointing point is then obtained from the third depth image. Because the third depth image contains only the control object, the amount of computation needed to obtain the pointing point is reduced, which increases the processing speed and the response speed of the system.
In the hybrid control method provided by the invention, for a display screen with a touch function, the touch function and the pointing-based control method are used together on the basis of the depth image. This hybrid mode mitigates the loss of precision caused by the randomness of the control object's movement and provides the user with a better experience.
Drawings
Fig. 1 is a schematic structural diagram of a control system according to a first embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a processor according to a first embodiment of the present invention.
Fig. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Fig. 4 is a schematic front view of a mobile phone operated with one hand according to the third and fourth embodiments of the present invention.
Fig. 5 is a schematic side view of a mobile phone operated with one hand according to the third and fourth embodiments of the present invention.
Fig. 6 is a schematic diagram of hybrid control according to the second and fifth embodiments of the present invention.
Fig. 7 is a first operation flow chart of the fourth embodiment of the present invention.
Fig. 8 is a second operation flow chart of the fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to specific embodiments and the accompanying drawings. It should be emphasized that the following description is merely exemplary and is not intended to limit the scope of the invention or its application.
Non-limiting and non-exclusive embodiments will be described with reference to the following figures, wherein like reference numerals refer to like parts, unless otherwise specified.
Embodiment one:
This embodiment proposes a control system which, as shown in Fig. 1, includes an image acquisition unit 1, a processor 2 and a display screen 3;
The image acquisition unit 1 is used for acquiring a depth image of the display screen 3 and the control object, together with the depth information of the control object;
As shown in Fig. 2, the processor 2 includes a recognition module 21, a conversion module 22 and an execution module 23. The recognition module 21 is configured to obtain, from the depth image, the position on the display screen 3 pointed at by the control object and to recognize a predetermined control action of the control object; the conversion module 22 is configured to convert the predefined control action into a corresponding touch instruction; the execution module 23 is configured to execute the touch instruction at the position of the pointing point to complete the touch operation;
The display screen 3 is used for displaying the pointing point and other information.
Through the image acquisition unit, the system can acquire a depth image of the display screen as well as of a control object that is not in contact with the display screen. From this depth image it can recognize the position on the display screen pointed at by the control object and the control action, and convert them into the corresponding position and instruction on the display screen, thereby realizing control of the device. Touching the display screen directly can also degrade the display effect; for example, when continuously turning the pages of an electronic book with a finger, part of the screen is inevitably blocked by the hand, which harms the reading experience. Pointing-based control with the control object alleviates this problem and improves the user experience.
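As a non-limiting illustration of how the three modules could cooperate, the following sketch is given; every class name, method name and the placeholder logic are assumptions of this sketch rather than part of the claimed system:

```python
from typing import Optional, Tuple

import numpy as np


class RecognitionModule:
    """Finds the position pointed at on the screen and a predefined control action."""

    def recognize(self, depth_image: np.ndarray) -> Tuple[Tuple[int, int], Optional[str]]:
        # Placeholder logic only: report the pixel nearest to the camera and no gesture.
        # A real module would build the pointing straight line as described in
        # embodiment four and classify the finger's shape and motion.
        v, u = np.unravel_index(np.argmin(depth_image), depth_image.shape)
        return (int(u), int(v)), None


class ConversionModule:
    """Maps a recognized, predefined control action to a touch instruction."""

    GESTURE_TO_COMMAND = {"push_forward": "tap", "swipe_left": "page_forward"}

    def convert(self, gesture: Optional[str]) -> Optional[str]:
        return self.GESTURE_TO_COMMAND.get(gesture) if gesture else None


class ExecutionModule:
    """Executes the touch instruction at the position of the pointing point."""

    def execute(self, command: Optional[str], point: Tuple[int, int]) -> None:
        if command is not None:
            print(f"execute '{command}' at screen position {point}")


def control_cycle(depth_image: np.ndarray) -> None:
    # One pass of the pipeline: depth image -> pointing point + gesture -> command -> action.
    point, gesture = RecognitionModule().recognize(depth_image)
    ExecutionModule().execute(ConversionModule().convert(gesture), point)
```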
The image acquisition unit is a depth camera based on the structured-light or TOF principle, and generally comprises a receiving unit and a transmitting unit.
Embodiment two:
This embodiment provides a hybrid control system. It differs from the first embodiment in that the display screen is a touch screen with a touch function. As shown in Fig. 6, the touch screen includes a touch area I and a pointing area II: the control object 14 performs touch control in the touch area I through the touch function of the touch screen itself, and performs touch operations in the pointing area II through the pointing point 17.
Specifically, the screen is divided into a touch area that can be reached by one hand and a pointing area that cannot. In the touch area, ordinary touch control is retained to ensure better accuracy; in the pointing area, control is performed according to the pointing direction and the action of the finger.
Embodiment three:
This embodiment provides an electronic device, which may be a mobile phone or a tablet. As shown in Fig. 3, it includes a bus 4 for data transmission, divided into a data bus, an address bus and a control bus for transmitting data, data addresses and control signals, respectively. Connected to the bus 4 are a CPU 5, a display 6, an IMU 7 (inertial measurement unit), a memory 8, a camera 10, a sound unit 11, a network interface 12 and a mouse/keyboard 13. For touch-enabled electronic devices, the mouse/keyboard is replaced by the display 6; the IMU is used for positioning, tracking and similar functions. The memory 8 stores the operating system and application programs 9, and may also store temporary data generated during operation.
In the control system arranged on the electronic device, the image acquisition unit corresponds to the camera, the processor corresponds to the CPU (or may be an independent processor), and the display screen corresponds to the display.
As shown in Fig. 4, the camera 10 is generally fixed to the electronic device. In this embodiment, however, its imaging direction differs greatly from that of the cameras on existing devices. Existing cameras are typically front-facing or rear-facing, and such configurations cannot capture an image of the device itself. In this embodiment the camera 10 can be configured in various ways: one option is to rotate the camera by 90 degrees about a rotation shaft so that it can image the display screen; another is an external camera connected as a whole to the device by a suitable fixing means and an interface such as USB. Those skilled in the art may select the mounting of the camera on the electronic device according to the actual situation, without limitation.
Unlike existing cameras, the camera of this embodiment is a depth camera, used for acquiring a depth image of the target area.
The depth camera may be based on the structured-light principle or the TOF principle, and generally comprises a receiving unit and a transmitting unit.
Fig. 4 is a front view of the mobile phone operated with one hand. A camera 10 is arranged at the top of the phone and captures images downward along the phone, so that images of the display screen and the finger can be obtained; Fig. 5 is a schematic side view of the mobile phone operated with one hand.
The display 6 of the electronic device may or may not have a touch function; when it does not, it can be controlled only by the pointing-based control method.
Embodiment four:
This embodiment provides a pointing-based control method which, as shown in Figs. 4 and 5, is applied to the control of a display screen without a touch function. As shown in Fig. 7, it includes the following steps:
S1: acquiring a depth image of the display screen 3 and the control object 14;
S2: obtaining, from the depth image, the position on the display screen 3 pointed at by the control object 14, thereby obtaining a pointing point 17 on the display screen;
S3: recognizing a predetermined control action in combination with the indication of the pointing point 17, and generating a touch instruction;
S4: executing a touch operation at the position of the pointing point 17 according to the touch instruction, such as clicking an icon.
The acquisition of the pointing point 17 in step S2 includes: extracting, from the depth image, feature points 15 that describe the control object, and constructing, from the feature points 15, a pointing straight line 16 that characterizes the control object, so as to obtain the pointing point 17 on the display screen.
Specifically, as shown in Fig. 5, the pointing point 17 is the intersection of the pointing straight line 16 with the display screen 3. As shown in Fig. 8, it is obtained by the following steps:
S21: extracting, from the depth image, feature points 15 that describe the control object 14, and acquiring the depth information of the feature points 15;
S22: converting the depth information of the feature points 15 into spatial position information in the coordinate system of the display screen 3, and obtaining, from this spatial position information, the pointing straight line 16 that characterizes the control object;
S23: computing the intersection of the pointing straight line 16 with the display screen 3 to obtain the pointing point 17, and displaying the pointing point 17 on the display screen 3.
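A minimal numerical sketch of steps S21 to S23 follows. The pinhole back-projection, the camera-to-screen rigid transform (R, t) and the convention that the display surface lies in the plane z = 0 are assumptions of this sketch, not a prescribed implementation:

```python
import numpy as np


def backproject(u: float, v: float, depth: float,
                fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Steps S21/S22: pinhole back-projection of a depth-image pixel into camera coordinates.

    (fx, fy, cx, cy) are the depth camera intrinsics; the pinhole model and the
    parameter values are assumptions of this sketch.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])


def to_screen_frame(p_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Step S22: rigid transform from camera coordinates to the screen coordinate
    system, in which the display surface is taken as the plane z = 0."""
    return R @ p_cam + t


def pointing_point(joint: np.ndarray, tip: np.ndarray) -> np.ndarray:
    """Step S23: intersect the line through two feature points (for example a finger
    joint and the fingertip, both already in screen coordinates) with the plane z = 0."""
    direction = tip - joint
    if abs(direction[2]) < 1e-9:
        raise ValueError("pointing line is parallel to the display plane")
    s = -joint[2] / direction[2]
    return joint + s * direction  # the z component is 0: a point on the screen
```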
In the above steps, when the control object 14 is in contact with the display screen 3, the pointing point 17 coincides with a feature point of the control object 14 and the contact may be determined to be a touch operation. Alternatively, contact with the screen may be regarded as an invalid operation, so that the position of the control object 14 on the display screen 3 can still be calculated accurately.
When obtaining the feature points 15, skeleton modelling or other models may be used to model the control object 14 and complete the construction of the pointing straight line; for a finger, the joint points and the fingertip are generally used as the feature points 15.
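For concreteness only, the following is a very crude stand-in for such modelling. It is not skeleton modelling; it merely picks two pixel-level feature points from the segmented finger region, and its heuristics are assumptions of this sketch:

```python
import numpy as np


def crude_feature_points(finger_mask: np.ndarray) -> tuple:
    """Pick two rough feature points from a boolean mask of the finger region
    (True where the third depth image contains the finger): the region centroid
    as a stand-in for a joint, and the pixel farthest from the centroid as the tip.
    """
    vs, us = np.nonzero(finger_mask)
    if us.size == 0:
        raise ValueError("the mask contains no finger pixels")
    centroid = np.array([us.mean(), vs.mean()])
    pts = np.stack([us, vs], axis=1).astype(float)
    tip = pts[np.argmax(np.linalg.norm(pts - centroid, axis=1))]
    return tuple(centroid), tuple(tip)
```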
In the prior art, positioning and touching are completed at the same time. In the pointing-based, non-contact approach (the finger may be lifted off the screen and simply pointed at the object to be touched), positioning and control can likewise be completed at the same time: as soon as the position pointed at by the finger is determined, a default touch instruction, similar to selecting or clicking, is executed immediately, without any further recognition of the shape or action of the finger. Alternatively, the operation can be performed step by step: the position is calculated first, the finger then performs a corresponding action, such as a change of finger shape or a click motion along the pointing direction, and the operation instruction corresponding to that action is executed once the action is recognized. Recognition of finger shape and motion is known in the art and is not described in detail here.
Operations that do not require positioning, such as page turning and going back, are of course not excluded; for such operations it may be sufficient to recognize only the shape or motion of the finger.
In this embodiment, pushing the finger forward or backward along its pointing direction is taken as completing a click action; other shapes and gestures are also possible. The finger shapes and motions, together with their corresponding touch commands, need to be preset. When a given finger shape and action are recognized, the processor converts them into the corresponding touch instruction.
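One possible way to detect the "push along the pointing direction" click is sketched below; the buffered-position interface and the travel threshold are assumptions of this sketch:

```python
import numpy as np


def detect_push_click(tip_positions, direction, min_travel=0.015):
    """Return True when the fingertip has advanced along the pointing direction
    by more than min_travel metres over the buffered window.

    tip_positions: recent fingertip positions (metres, screen coordinate system).
    direction:     unit vector of the current pointing straight line.
    """
    if len(tip_positions) < 2:
        return False
    displacement = np.asarray(tip_positions[-1]) - np.asarray(tip_positions[0])
    travel = float(np.dot(displacement, np.asarray(direction)))
    return travel > min_travel
```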
The acquired depth image contains the phone's display screen 3 and the finger as well as other, irrelevant content. The measurement range of the depth camera can therefore be limited: a threshold is set and depth information beyond the threshold is discarded, so that the resulting depth image contains only the display screen and the finger, reducing the amount of computation needed for recognition. To further increase the running speed and reduce the computation, an image segmentation method is used to obtain the depth image; one such method comprises the following steps:
S11: acquiring a first depth image comprising the display screen 3 and not comprising the manipulated object 14;
S12: acquiring a second depth image containing the display screen 3 and the control object 14;
s13: a third depth image of the manipulated object 14 is obtained from the second depth image and the first depth image, and a pointing point 17 is obtained from the third depth image.
If the control object 14 is a finger, the depth image of the front part of the finger obtained by the above background-segmentation steps reduces the amount of computation during modelling and increases the calculation speed.
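A minimal sketch of steps S11 to S13 combined with the range clipping described above; the range limit and noise threshold values are assumptions of this sketch:

```python
import numpy as np


def segment_control_object(first_depth, second_depth, max_range=0.6, noise=0.01):
    """Return the third depth image: depths kept only where the control object appears.

    first_depth:  depth image of the screen without the control object (metres).
    second_depth: depth image of the screen with the control object (metres).
    """
    first = np.asarray(first_depth, dtype=float)
    second = np.asarray(second_depth, dtype=float)

    # Limit the measurement range so that distant, irrelevant background is discarded.
    in_range = second < max_range

    # Pixels whose depth changed by more than the noise threshold belong to the object.
    changed = np.abs(second - first) > noise

    # 0.0 marks "no control object at this pixel".
    return np.where(in_range & changed, second, 0.0)
```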
Embodiment five:
Given the randomness of finger movement, pointing precision is unlikely to reach the level of direct touch. Therefore, for one-handed operation of touch-enabled mobile phones, touch operation is still used in the area that one hand can reach, while non-contact operation is used in the area it cannot reach, giving a better user experience. To this end, this embodiment proposes a hybrid control method in which the display screen 3 is a touch screen with a touch function. As shown in Fig. 6, the touch screen includes a touch area I and a pointing area II: the control object 14 performs touch control in the touch area I through the touch function of the touch screen itself, and, on the basis of the fourth embodiment, performs touch operations in the pointing area II through the pointing point 17 according to the pointing-based control method.
Because users differ in hand size, left- or right-handedness and other habits, the boundary between the touch area I and the pointing area II can be identified automatically by the system. That is, when the finger contacts the touch display screen 3, the processor handles the signal fed back by the touch screen and executes the corresponding touch instruction; when the finger does not contact the display screen 3, the depth camera captures the non-contact action and feeds it back to the processor, which processes the depth image, recognizes the position pointed at by the finger and the finger's action, and executes the corresponding operation instruction, thereby realizing non-contact control.
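The distinction can be reduced to a simple dispatch rule, sketched below; the function names and signatures are assumptions of this sketch, not part of the claimed system:

```python
def dispatch_event(touch_event, depth_frame, handle_touch, handle_pointing):
    """If the touch screen reports a contact, use the ordinary touch path; otherwise
    hand the depth frame to the pointing-based pipeline described in embodiment four.
    """
    if touch_event is not None:
        return handle_touch(touch_event)    # contact: native touch handling
    return handle_pointing(depth_frame)     # no contact: pointing-based control
```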
Those skilled in the art will recognize that numerous variations are possible in light of the above description; the examples given are therefore intended only to describe one or more specific embodiments.
While there has been described and illustrated what are considered to be example embodiments of the present invention, it will be understood by those skilled in the art that various changes and substitutions may be made without departing from the spirit of the invention. In addition, many modifications may be made to adapt a particular situation to the teachings of the present invention without departing from the central concept described herein. Therefore, the invention is not limited to the particular embodiments disclosed, but includes all embodiments and equivalents falling within its scope.