WO2022247616A1 - Method and device for separating sensing areas during hover control, and hover-control remote controller - Google Patents

Method and device for separating sensing areas during hover control, and hover-control remote controller

Info

Publication number
WO2022247616A1
Authority
WO
WIPO (PCT)
Prior art keywords
sensing
area
point
value
target
Prior art date
Application number
PCT/CN2022/091500
Other languages
English (en)
French (fr)
Inventor
赵学文
沈健
王小瑞
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP22810343.8A (published as EP4343505A1)
Publication of WO2022247616A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416 Control or interface arrangements specially adapted for digitisers
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on GUIs using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on GUIs using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886 Interaction techniques based on GUIs using a touch-screen or digitiser, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus

Definitions

  • The present application relates to the field of floating touch control, and in particular to a method and device for separating sensing areas during hover control, a remote controller for hover control, and a storage medium.
  • The remote controller needs to detect the number of fingers and report the finger information to the large screen.
  • The remote controller detects finger information from signal changes on the control panel, including the finger index, position, and touch/hover status. The height of a fingertip above the control panel determines the size of the sensing area it produces on the panel.
  • When the fingers are far apart, their sensing areas are clearly separated and easy to detect, track, and distinguish. However, when two fingers are very close together in the horizontal plane, and especially when one finger is farther from the hover control panel than the other in the vertical direction, the closer finger produces a stronger sensing area on the panel and the farther finger a weaker one. The two sensing areas then merge, or the weak sensing area is covered entirely. Identifying the finger index and finger information from the sensing areas is no longer reliable: the farther finger is not recognized, its information is lost, and the user's operations receive no correct feedback, which greatly degrades control accuracy and user experience.
  • The present application provides a method and device for separating sensing areas during hover control, a remote controller for hover control, a computer device, and storage media, which can improve control accuracy and user experience and have strong applicability.
  • In a first aspect, the present application provides a method for separating sensing areas during hover control.
  • In this method, the computer device takes the sensing center points of the two sensing areas generated at the reference moment (that is, the sampling moment closest to the first moment at which the two objects can still be separated) as two reference segmentation center points for the first sensing area generated at the first moment, determines the area dividing line of the first sensing area based on these two points, and then divides the first sensing area along the dividing line to obtain a first sensing manipulation area and a second sensing manipulation area. A single sensing area generated when two objects (such as fingers or styluses) are close together or overlapping can thus be segmented, effectively avoiding the merging of the two objects' sensing areas, or the covering of the weaker area, that occurs when the objects are close together or overlapping. This improves control accuracy and user experience and has strong applicability.
  • In a possible implementation, the computer device determines the point with the largest sensing value in the second sensing area and the point with the largest sensing value in the third sensing area as the first sensing center point and the second sensing center point, respectively.
  • Since a larger sensing value at a point on the hover control panel means that the point is closer to the hovering object, taking the point with the largest sensing value in a sensing area as its sensing center point improves the accuracy of the subsequent segmentation.
  • Because the first sensing center point and the second sensing center point come from the two sensing areas at the sampling moment closest to the first moment at which the two objects were still separable (the reference moment), segmenting the first sensing area with this prior information (that is, the two sensing center points at the reference moment) improves the accuracy of the segmentation.
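The center-point determination above amounts to an argmax over each sensing area. A minimal sketch, assuming the sensing value table is a 2-D array and each sensing area is given as a boolean mask (the names are illustrative, not from the patent):

```python
import numpy as np

def sensing_center(table: np.ndarray, area_mask: np.ndarray) -> tuple:
    """Return the (row, col) coordinates of the point with the largest
    sensing value inside the given sensing area."""
    # Points outside the area are masked out with -inf so they can
    # never win the argmax.
    masked = np.where(area_mask, table, -np.inf)
    return np.unravel_index(np.argmax(masked), table.shape)
```

Applying this once to the second sensing area and once to the third yields the two reference segmentation center points.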
  • In a possible implementation, the computer device divides the target area, according to the direction of the line connecting the two center points and a preset interval value, into a plurality of row areas aligned with the connecting line, and determines the point with the smallest sensing value in each row area as an area division point of the first sensing area.
  • Within the region between the two center points (that is, the first sensing center point and the second sensing center point) on the hover control panel, the point with the smallest sensing value in each row area is the point farthest from both fingertips. Determining that point as the area division point therefore improves the accuracy of the sensing-area segmentation.
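The row-area step can be sketched concretely. A simplifying assumption here: the line p-q is taken as horizontal, so each table row between the two center columns is one "row area" (the general case would walk rows perpendicular to an arbitrary p-q direction); all names are illustrative:

```python
import numpy as np

def division_points(table: np.ndarray, area_mask: np.ndarray,
                    p: tuple, q: tuple) -> list:
    """For each table row covered by the merged sensing area, pick the
    point with the smallest sensing value between the columns of the two
    reference center points p and q; connected together, these division
    points form the area dividing line."""
    lo, hi = sorted((p[1], q[1]))
    points = []
    for r in range(table.shape[0]):
        cols = [c for c in range(lo, hi + 1) if area_mask[r, c]]
        if not cols:
            continue  # this row does not intersect the sensing area
        vals = [table[r, c] for c in cols]
        points.append((r, cols[int(np.argmin(vals))]))
    return points
```

The returned points trace the "valley" of sensing values between the two fingers, which is where the first sensing area is split.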
  • In a possible implementation, the computer device determines a first-area sensing feature value of the first sensing manipulation area and a second-area sensing feature value of the second sensing manipulation area. If the difference between the first-area sensing feature value and the second-area sensing feature value is greater than a preset difference threshold, the computer device determines a first target sensing manipulation area based on a first starting point and the sensing value table of the hover control panel at the first moment, and determines a second target sensing manipulation area based on a second starting point and the same sensing value table, where the first starting point is the point with the largest sensing value in the first sensing manipulation area and the second starting point is the point with the largest sensing value in the second sensing manipulation area.
  • After the first sensing manipulation area and the second sensing manipulation area are obtained, whether the difference between the first-area sensing feature value and the second-area sensing feature value exceeds the preset difference threshold indicates whether information was lost in either area, that is, whether the first sensing area was segmented correctly.
  • If the difference is greater than the preset difference threshold, the first sensing manipulation area and the second sensing manipulation area are supplemented to obtain more complete first and second target sensing manipulation areas, which further improves the accuracy of the sensing-area segmentation, further improves control accuracy and user experience, and has stronger applicability.
  • In a possible implementation, the computer device determines, among the points adjacent to the first starting point, each point whose sensing value is smaller than that of the first starting point as a first target point, and adds the first starting point and the first target points to a first target point set. It then judges whether any point adjacent to a first target point has a sensing value smaller than that of the target point; if so, that point is also determined as a first target point and added to the first target point set. The first target sensing manipulation area is determined from the first target point set.
  • In other words, the computer device performs a second area search over the hover control panel using a gradient-descent area search, supplementing the first sensing manipulation area into a complete first target sensing manipulation area, which further improves the accuracy of the sensing-area segmentation.
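The second area search described above is essentially a downhill flood fill from the area's peak. A minimal sketch, with a noise-floor parameter added as an assumption so the search stops at background points (the source does not specify how background is excluded):

```python
from collections import deque
import numpy as np

def gradient_descent_region(table: np.ndarray, start: tuple,
                            floor: float = 0.0) -> set:
    """Grow a region outward from `start` (the point with the largest
    sensing value): a 4-neighbour joins when its sensing value is smaller
    than the value of the point it was reached from, so every path in the
    region descends from the peak.  Points at or below `floor` are
    treated as background and excluded (an added assumption)."""
    rows, cols = table.shape
    region, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and floor < table[nr, nc] < table[r, c]):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```

Running this from each starting point yields the two supplemented target sensing manipulation areas.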
  • In a possible implementation, the computer device determines the mean of the sensing values of all points in the first sensing manipulation area as the first-area sensing feature value, and the mean of the sensing values of all points in the second sensing manipulation area as the second-area sensing feature value.
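The feature-value check can be sketched by combining these area means with the difference threshold described earlier; the function name and the threshold value are illustrative, not from the source:

```python
import numpy as np

def needs_second_search(table: np.ndarray, mask_a: np.ndarray,
                        mask_b: np.ndarray, diff_threshold: float) -> bool:
    """Each area's sensing feature value is the mean sensing value of its
    points; a gap larger than the threshold suggests the weaker area lost
    information during segmentation and a second area search is needed."""
    feat_a = table[mask_a].mean()
    feat_b = table[mask_b].mean()
    return bool(abs(feat_a - feat_b) > diff_threshold)
```

The threshold is a tuning parameter; the patent leaves its value unspecified.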
  • In a possible implementation, the computer device acquires the initial hover-control-panel sensing data generated by the user's hover operation above the control panel. The initial data include multiple sampling moments within the first time period and the sensing value table of the hover control panel at each sampling moment; the sensing value table includes the sensing value of each point on the panel. Afterwards, the computer device determines each point in the sensing value table whose sensing value is greater than a preset sensing threshold as a target point, and determines the sensing data (that is, the sensing areas) on the hover control panel at each sampling moment based on the target points.
  • In other words, the computer device can first use a connected-region search to determine the initial sensing areas caused by the user's hover operation, and then perform hole completion on the initial sensing area at each sampling moment to obtain relatively complete usable areas, that is, the sensing areas, which improves the success rate of the subsequent sensing-area separation.
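The threshold-and-search step can be sketched as a breadth-first connected-component search over the thresholded table. This is a plain-Python stand-in for the connected-region search, with illustrative names; the hole completion that follows in the full pipeline is omitted here:

```python
from collections import deque
import numpy as np

def sensing_areas(table: np.ndarray, threshold: float) -> list:
    """Threshold the sensing value table into target points, then group
    the target points into 4-connected components; each component is one
    candidate initial sensing area."""
    mask = table > threshold
    seen = np.zeros_like(mask, dtype=bool)
    rows, cols = mask.shape
    areas = []
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                comp, queue = [], deque([(r, c)])
                seen[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                areas.append(comp)
    return areas
```

A sampling moment whose table yields two components is a candidate reference moment; one whose table yields a single merged component corresponds to the first moment.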
  • the sensing value includes a capacitance value.
  • In a second aspect, the present application provides a sensing-area separation device for hover control, the device including:
  • a first acquisition unit, configured to acquire the hover sensing data generated by the user's hover operation above the hover control panel within a first time period, where the hover sensing data include multiple sampling moments within the first time period and the sensing data on the hover control panel at each sampling moment; the multiple sampling moments include a first moment and a reference moment; the sensing data on the hover control panel at the first moment include only a first sensing area, while the sensing data at the reference moment include a second sensing area and a third sensing area; and the reference moment is, among the moments whose sensing data include two sensing areas, the moment closest to the first moment;
  • a first determining unit, configured to determine a first sensing center point of the second sensing area and a second sensing center point of the third sensing area;
  • a second determining unit configured to determine an area dividing line of the first sensing area based on the first sensing center point and the second sensing center point;
  • the segmentation unit is configured to segment the first sensing area based on the area dividing line to obtain a first sensing manipulation area and a second sensing manipulation area.
  • In a possible implementation, the above first determining unit is configured to determine the point with the largest sensing value in the second sensing area and the point with the largest sensing value in the third sensing area as the first sensing center point and the second sensing center point, respectively.
  • In a possible implementation, the above second determining unit is configured to determine, based on the line connecting the first sensing center point and the second sensing center point, a target area in the first sensing area perpendicular to that line, determine area division points of the first sensing area based on the target area, and obtain the area dividing line from the division points.
  • In a possible implementation, the above second determining unit is configured to divide the target area, according to the direction of the connecting line and a preset interval value, into a plurality of row areas aligned with the connecting line, and to determine the point with the smallest sensing value in each row area as an area division point of the first sensing area.
  • the device further includes:
  • a third determining unit configured to determine a first region sensing characteristic value of the first sensing manipulation region and a second region sensing characteristic value of the second sensing manipulation region;
  • a fourth determining unit, configured to: when the difference between the first-area sensing feature value and the second-area sensing feature value is greater than a preset difference threshold, determine a first target sensing manipulation area based on a first starting point and the sensing value table of the hover control panel at the first moment, and determine a second target sensing manipulation area based on a second starting point and the same sensing value table, where the first starting point is the point with the largest sensing value in the first sensing manipulation area and the second starting point is the point with the largest sensing value in the second sensing manipulation area;
  • the sensing value table includes the sensing value of each point on the hover control panel.
  • In a possible implementation, the above fourth determining unit is configured to determine, among the points adjacent to the first starting point, each point whose sensing value is smaller than that of the first starting point as a first target point, and add the first starting point and the first target points to a first target point set; to judge whether any point adjacent to a first target point has a sensing value smaller than that of the target point and, if so, determine that point as a first target point and add it to the first target point set; and to determine the first target sensing manipulation area from the first target point set.
  • In a possible implementation, the third determining unit is configured to determine the mean of the sensing values of all points in the first sensing manipulation area as the first-area sensing feature value, and the mean of the sensing values of all points in the second sensing manipulation area as the second-area sensing feature value.
  • the above device further includes:
  • a second acquisition unit, configured to acquire the initial hover-control-panel sensing data generated by the user hovering above the hover control panel within the first time period, where the initial data include multiple sampling moments within the first time period and the sensing value table of the hover control panel at each sampling moment, the sensing value table including the sensing value of each point on the panel;
  • a fifth determining unit, configured to determine each point in the sensing value table whose sensing value is greater than a preset sensing threshold as a target point, and to determine the sensing data on the hover control panel at each sampling moment based on the target points.
  • the sensing value includes a capacitance value.
  • In a third aspect, the present application provides a remote controller for hover control, which includes the sensing-area separation device of any one of the first through eighth possible implementations of the second aspect above, and a hover control panel.
  • In a fourth aspect, the present application provides a computer device including a processor, a memory, and an input device, which are connected to each other.
  • the memory is used to store computer programs
  • the computer programs include program instructions
  • the processor is configured to invoke the program instructions and the input device to execute the sensing area separation method described in the first aspect above.
  • In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing instructions that can be executed by one or more processors on a processing circuit. When the instructions run on a computer, the computer is caused to execute the sensing-area separation method described in the first aspect above.
  • In a sixth aspect, an embodiment of the present application provides a computer program product containing instructions which, when run on a computer, cause the computer to execute the sensing-area separation method described in the first aspect above.
  • Fig. 1 is a schematic diagram of an application scenario of the sensing-area separation method provided by the present application;
  • Fig. 2 is a schematic flowchart of the sensing-area separation method provided by the present application;
  • Fig. 3 is a schematic workflow diagram of determining the sensing area on the hover control panel at the first moment, provided by the present application;
  • Fig. 4 is a schematic workflow diagram of dividing the first sensing area, provided by the present application;
  • Fig. 5 is a schematic workflow diagram of determining the area dividing line, provided by the present application;
  • Fig. 6 is another schematic flowchart of the sensing-area separation method provided by the present application;
  • Fig. 7 is a schematic diagram of the effects of the first target sensing manipulation area and the second target sensing manipulation area, provided by the present application;
  • Fig. 8 is a schematic structural diagram of the sensing-area separation device for hover control, provided by the present application;
  • Fig. 9 is a schematic structural diagram of the hover-control remote controller provided by the present application;
  • Fig. 10 is a schematic structural diagram of a computer device provided by the present application.
  • The method provided in this application can be applied to the separation of sensing areas in the field of floating touch.
  • The computer device in this application can be a physical terminal with the sensing-area separation function.
  • The physical terminal can be a server or a user terminal.
  • The server can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery network (CDN), big data, and artificial intelligence platforms.
  • User terminals may include, but are not limited to, tablet devices, desktop computers, laptop computers, mobile phones, or any other terminal devices capable of information interaction.
  • During sensing-area separation, the computer device can acquire the hover sensing data generated by the user hovering over the hover control panel within a first time period. The hover sensing data include multiple sampling moments within the first time period and the sensing data on the hover control panel at each sampling moment; the multiple sampling moments include a first moment and a reference moment. The sensing data on the hover control panel at the first moment include only a first sensing area (such as the sensing area produced when two fingers are close together or overlapping), while the sensing data at the reference moment include a second sensing area and a third sensing area; the reference moment is, among the moments whose sensing data include two sensing areas, the moment closest to the first moment.
  • the computer device determines the first sensing center point of the second sensing area and the second sensing center point of the third sensing area, and determines the division line of the first sensing area based on the first sensing center point and the second sensing center point. Further, the computer device may divide the first sensing area based on the area dividing line to obtain the first sensing manipulation area and the second sensing manipulation area.
  • In this way, a sensing area generated when two objects (such as fingers or styluses) are close together or overlapping can be segmented, effectively avoiding the merging of the two objects' sensing areas or the covering of the weaker area, which improves control accuracy and user experience and has strong applicability.
  • The sensing-area separation method provided in this application can be adapted to different sensing-area separation application scenarios. The following takes the scenario shown in Fig. 1 as an example for description; details are not repeated below.
  • FIG. 1 is a schematic diagram of an application scenario of the sensing area separation method provided in the present application.
  • As shown in Fig. 1, the computer device can establish communication with the display screen. When the user draws by hover control, the thumb and forefinger of the user's right hand perform a hover operation (such as a rotation) above the hover control panel area.
  • During the hover operation, the remote controller acquires the initial hover-control-panel sensing data generated by the user's right hand. The initial data include multiple sampling moments within the first time period (that is, the period during which the user's right hand is above the hover control panel area) and the sensing value table of the hover control panel at each sampling moment; the sensing data (that is, the sensing areas) on the hover control panel at each sampling moment are determined from these tables.
  • The multiple sampling moments include a first moment and a reference moment. The sensing data on the hover control panel at the first moment include only a first sensing area, while the sensing data at the reference moment include a second sensing area and a third sensing area; the reference moment is the sampling moment closest to the first moment among the sampling moments whose sensing data include two sensing areas.
  • the computer device determines the first sensing center point of the second sensing area and the second sensing center point of the third sensing area, and determines the area dividing line of the first sensing area based on the first sensing center point and the second sensing center point, Furthermore, based on the area dividing line, the first sensing area is divided to obtain a first sensing manipulation area and a second sensing manipulation area.
  • In this way, the computer device can automatically divide the sensing area generated when two fingers are close together or overlapping, effectively avoiding the difficulty of separating the merged sensing area, and improving control accuracy and user experience with strong applicability.
  • Fig. 2 is a schematic flowchart of the sensing-area separation method provided in the present application. As shown in Fig. 2, the method may include the following steps S101 to S104:
  • Step S101: acquire the hover sensing data generated by the user's hover operation above the hover control panel within a first time period.
  • In a specific implementation, the computer device acquires the initial hover-control-panel sensing data generated by the user's hover operation above the hover control panel within the first time period.
  • The initial hover-control-panel sensing data include multiple sampling moments in the first time period and a sensing value table of the hover control panel at each sampling moment; the sensing value table includes the sensing value and the coordinate value of each point on the panel.
  • The sensing value of each point reflects the distance between the panel and any object, such as the user's finger or a stylus, that changes the sensing values during hover control above the panel; for example, the larger the sensing value of point A on the hover control panel, the closer the finger is to point A.
  • The first time period may be the time period corresponding to the user's hover operation above the hover control panel; for example, the hover operation may be the user's two fingers moving from a separated state to being close together above the panel.
  • The stylus can be any object that changes the sensing value of a point on the hover control panel when it is above the panel.
  • Specifically, the computer device traverses each point in the sensing value table of the hover control panel at each sampling moment, determines the points whose sensing values are greater than a preset sensing threshold as target points, and forms the initial sensing area at each sampling moment from those target points. A hole-completion method (such as a morphological closing operation, that is, dilation followed by erosion) can then be used to fill the holes in the initial sensing area at each sampling moment, yielding the sensing data, that is, the sensing areas, on the hover control panel at each sampling moment.
  • In other words, after obtaining the sensing value table at each sampling moment, the computer device first uses an area search method (such as the connected-region search described above) to determine the initial sensing areas caused by the user's hover operation, and then performs hole completion on the initial sensing area at each sampling moment to obtain relatively complete usable areas, that is, the sensing areas, which improves the success rate of the subsequent sensing-area separation.
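The morphological closing used for hole completion (dilation followed by erosion) can be sketched directly on a boolean area mask. This version assumes a cross-shaped structuring element and treats points outside the panel as foreground during erosion; both are simplifying assumptions, not details from the source:

```python
import numpy as np

def close_holes(mask: np.ndarray) -> np.ndarray:
    """Morphological closing (dilation then erosion) with a cross-shaped
    structuring element: small gaps inside an initial sensing area are
    filled while isolated points and the outline are preserved."""
    def dilate(m):
        out = m.copy()
        out[1:, :] |= m[:-1, :]   # shift down
        out[:-1, :] |= m[1:, :]   # shift up
        out[:, 1:] |= m[:, :-1]   # shift right
        out[:, :-1] |= m[:, 1:]   # shift left
        return out

    def erode(m):
        # Out-of-bounds neighbours impose no constraint, i.e. the area
        # outside the panel is treated as foreground (an assumption).
        out = m.copy()
        out[1:, :] &= m[:-1, :]
        out[:-1, :] &= m[1:, :]
        out[:, 1:] &= m[:, :-1]
        out[:, :-1] &= m[:, 1:]
        return out

    return erode(dilate(mask))
```

A one-pixel hole surrounded by foreground is filled by the closing, while a lone foreground pixel passes through unchanged.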
  • Fig. 3 is a schematic workflow diagram of determining the sensing area on the hover control panel at the first moment, provided by the present application. As shown in Fig. 3, the figure includes the process of determining the sensing area on the hover control panel at the first moment while a first group of two fingers approaches each other, and the same process for a second group of two fingers. The rectangular frames in (a) to (d) of Fig. 3 represent the hover control panel.
  • Part (a) of Fig. 3 shows the two fingers approaching each other; part (b) shows the two fingers close together; part (c) shows the initial sensing area at the first moment (that is, the sampling moment corresponding to part (b)), obtained after an area search (that is, the target-point determination) over the sensing value table of the hover control panel, marking the area where the two fingers may be; and part (d) shows the sensing area on the hover control panel at the first moment, obtained after completing the hollow regions in part (c).
  • the second group in FIG. 3 may refer to the first group, which will not be repeated here.
  • the computer device obtains the hover sensing data generated by the user's hover operation above the hover control panel within the first time period.
  • the hover sensing data includes a plurality of sampling moments in the first time period and the sensing data (i.e. the sensing area) on the hover control panel at each sampling moment; the plurality of sampling moments include the first moment and a reference moment.
  • the sensing data on the hover control panel at the first moment includes only the first sensing area, while the sensing data at the reference moment includes the second sensing area and the third sensing area; the reference moment is, among the at least one sampling moment whose corresponding sensing data includes two sensing areas, the sampling moment closest to the first moment.
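The selection of the reference moment just described (the closest sampling moment at which two separate sensing areas were still detected) reduces to a small search. The `(sampling_time, regions)` list layout below is an assumed representation for illustration:

```python
def pick_reference_moment(samples, first_moment):
    """Among sampling moments whose sensing data contains exactly two
    sensing areas, return the one closest to the first moment.

    `samples` is assumed to be a list of (sampling_time, regions) pairs,
    where `regions` lists the sensing areas detected at that time.
    """
    two_region_times = [t for t, regions in samples if len(regions) == 2]
    if not two_region_times:
        return None  # the two objects were never separable
    return min(two_region_times, key=lambda t: abs(t - first_moment))

# Illustrative samples: the two fingers are separable at t=1 and t=2,
# and by the first moment t=4 only one merged region remains.
samples = [(1, ["A", "B"]), (2, ["A", "B"]), (3, ["AB"]), (4, ["AB"])]
print(pick_reference_moment(samples, first_moment=4))  # -> 2
```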
  • Step S102: determining a first sensing center point of the second sensing area and a second sensing center point of the third sensing area.
  • the computer device may determine the point with the largest sensing value in the second sensing area as the first sensing center point of the second sensing area, and determine the point with the largest sensing value in the third sensing area as the second sensing center point of the third sensing area.
  • optionally, the sensing value may be a capacitance value.
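Step S102, taking the maximum-value point of each region as its sensing center point, might look like the following sketch. Representing a region as a dict mapping `(row, col)` coordinates to sensing values is an assumed layout, and all values are made up:

```python
def sensing_center(region):
    """Return the coordinate of the point with the largest sensing value.

    `region` is assumed to map (row, col) coordinates to sensing values
    (e.g. capacitance readings).
    """
    return max(region, key=region.get)

second_area = {(3, 6): 40, (3, 7): 55, (4, 6): 48}    # illustrative values
third_area = {(3, 10): 52, (4, 10): 61, (4, 11): 50}
p = sensing_center(second_area)  # first sensing center point
q = sensing_center(third_area)   # second sensing center point
print(p, q)  # -> (3, 7) (4, 10)
```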
  • FIG. 4 is a schematic workflow diagram of dividing the first sensing area, provided by the present application.
  • the sensing area at the reference moment includes the second sensing area and the third sensing area.
  • through the center-point determination in step S102, the computer device obtains the first sensing center point p of the second sensing area and the second sensing center point q of the third sensing area.
  • the sensing area on the hover control panel at the first moment includes only the first sensing area; after the computer device obtains the first sensing center point p of the second sensing area and the second sensing center point q of the third sensing area, it uses p and q as the reference segmentation center points of the two objects into which the first sensing area is to be divided.
  • Step S103: determining an area division line of the first sensing area based on the first sensing center point and the second sensing center point.
  • based on the connecting line between the first sensing center point and the second sensing center point, the computer device determines, in the first sensing area, a target area bounded perpendicular to the connecting line. According to the direction of the connecting line and a preset interval value, the target area is divided into multiple row areas parallel to the connecting line; the point with the smallest sensing value in each row area is determined as an area division point of the first sensing area, and the area division line is then obtained from these area division points, i.e. the dotted line mn in (c) in FIG. 4.
  • FIG. 5 is a schematic workflow diagram of determining an area dividing line provided by the present application.
  • (1) in FIG. 5 shows in detail the sensing area on the hover control panel at the first moment corresponding to (c) in FIG. 4, i.e. a rectangular area composed of 8×16 small squares, where each small square represents a point in the area of the hover control panel, and the area formed by all colored small squares represents the first sensing area generated at the first moment.
  • the first sensing center point p is at the intersection of the fourth row and the seventh column in (1) in FIG. 5, and the second sensing center point q is at the intersection of the fourth row and the eleventh column in (1) in FIG. 5; the direction of the connecting line between p and q is consistent with the direction of the rows of the rectangular area.
  • a first perpendicular line through p and a second perpendicular line through q are drawn perpendicular to the connecting line between p and q, and the area enclosed by the first perpendicular line, the second perpendicular line and the first sensing area is determined as the target area, i.e. the area formed by all the colored small squares in (2) in FIG. 5.
  • according to the direction of the connecting line between p and q (i.e. the row direction of the rectangular area) and the preset interval value (i.e. the width of the small square corresponding to each point), the computer device divides the target area into six row areas: the first row area formed by columns 7 to 11 of row 2, the second row area formed by columns 7 to 11 of row 3, ..., and the sixth row area formed by columns 7 to 11 of row 7.
  • the computer device determines the point with the smallest sensing value in the first row area as the first area division point, the point with the smallest sensing value in the second row area as the second area division point, ..., and the point with the smallest sensing value in the sixth row area as the sixth area division point. Connecting these six area division points yields the dotted line mn in (3) in FIG. 5, which is the area division line mn of the first sensing area shown in (4) in FIG. 5.
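The row-by-row minimum search in the worked example above can be sketched as follows. The sensing values are made up (the application gives none), and indices here are 0-based, unlike the 1-based row/column numbering of the figure description:

```python
def area_division_points(table, rows, col_p, col_q):
    """For each row area between the two perpendiculars through p and q
    (columns col_p..col_q inclusive), pick the point with the smallest
    sensing value; connecting these points yields the division line mn.

    `table` is assumed to be a row-major list of lists of sensing values.
    """
    points = []
    for r in rows:
        segment = table[r][col_p:col_q + 1]
        c = col_p + segment.index(min(segment))  # first minimum in the row area
        points.append((r, c))
    return points

# Illustrative 4-row slice of a sensing value table: the "valley" between
# the two fingers sits around column 8, where the values are smallest.
table = [
    [0, 0, 0, 0, 0, 0, 7, 3, 2, 4, 8, 9, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 8, 4, 1, 5, 9, 9, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 8, 5, 2, 4, 9, 8, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 7, 4, 2, 3, 8, 7, 0, 0, 0, 0],
]
print(area_division_points(table, rows=range(4), col_p=6, col_q=10))
# -> [(0, 8), (1, 8), (2, 8), (3, 8)]
```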
  • Step S104: dividing the first sensing area based on the area division line to obtain a first sensing manipulation area and a second sensing manipulation area.
  • the part of the first sensing area on the left of the area division line is the first sensing manipulation area, and the part of the first sensing area on the right of the area division line is the second sensing manipulation area.
  • optionally, when the sensing area on the hover control panel at the first moment is generated by two of the user's fingers touching together, and the second sensing area and the third sensing area are the sensing areas generated by the first finger and the second finger respectively, then after the first sensing area is divided into two areas, the area containing the first sensing center point is determined as the first sensing manipulation area (i.e. the first finger's sensing manipulation area), and the other area is determined as the second sensing manipulation area (i.e. the second finger's sensing manipulation area).
  • in the present application, the computer device can use the sensing center points of the two sensing areas generated at the reference moment (i.e. the sampling moment closest to the first moment at which the two objects are separable) as the two reference segmentation center points of the first sensing area generated at the first moment, determine the area division line of the first sensing area based on the two reference segmentation center points, and then divide the first sensing area based on the area division line to obtain the first sensing manipulation area and the second sensing manipulation area.
  • in this way, a single sensing area generated when two objects (such as fingers or stylus pens) are close together or overlap can be divided, effectively avoiding the situation where the response areas of the two objects merge or the weaker response area is masked, which improves manipulation accuracy and user experience and has strong applicability.
  • FIG. 6 is another schematic flowchart of the sensing area separation method provided in the present application. As shown in Figure 6, the method may include the following steps S201 to S206:
  • Step S201: acquiring the hover sensing data generated by the user's hover operation above the hover control panel within a first time period.
  • Step S202: determining a first sensing center point of the second sensing area and a second sensing center point of the third sensing area.
  • Step S203: determining an area division line of the first sensing area based on the first sensing center point and the second sensing center point.
  • Step S204: dividing the first sensing area based on the area division line to obtain a first sensing manipulation area and a second sensing manipulation area.
  • for the specific implementation of step S201 to step S204, reference may be made to step S101 to step S104 in the embodiment shown in FIG. 2; details are not repeated here.
  • Step S205: determining a first area sensing feature value of the first sensing manipulation area and a second area sensing feature value of the second sensing manipulation area.
  • the computer device determines the mean of the sensing values of all points in the first sensing manipulation area as the first area sensing feature value, and determines the mean of the sensing values of all points in the second sensing manipulation area as the second area sensing feature value.
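The per-area feature value of step S205 is just the mean sensing value over an area's points, and the trigger for resegmentation compares the two means against a threshold. A sketch, where the dict layout and the threshold value are illustrative assumptions:

```python
def area_feature(region):
    """Mean sensing value over all points of a sensing manipulation area.
    `region` is assumed to map (row, col) coordinates to sensing values."""
    return sum(region.values()) / len(region)

def needs_resegmentation(region1, region2, diff_threshold):
    """True when the gap between the two area feature values suggests that
    information was lost by the division-line split."""
    return abs(area_feature(region1) - area_feature(region2)) > diff_threshold

first = {(0, 0): 60, (0, 1): 62, (1, 0): 58}    # strong finger, mean 60
second = {(0, 3): 20, (0, 4): 22, (1, 3): 18}   # weak finger, mean 20
print(needs_resegmentation(first, second, diff_threshold=15))  # -> True
```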
  • Step S206: when the difference between the first area sensing feature value and the second area sensing feature value is greater than a preset difference threshold, determining a first target sensing manipulation area and a second target sensing manipulation area based on a first starting point, a second starting point and the sensing value table of the hover control panel at the first moment.
  • the sensing value table of the hover control panel at the first moment includes the coordinate value and the sensing value of each point in the hover control area.
  • specifically, the computer device calculates the difference between the first area sensing feature value and the second area sensing feature value; when the difference is greater than the preset difference threshold, it determines that information has been lost between the first sensing manipulation area and the second sensing manipulation area obtained in step S204, indicating that the segmentation into the first sensing manipulation area and the second sensing manipulation area is inaccurate.
  • in this case, the computer device determines the first target sensing manipulation area based on the first starting point and the sensing value table of the hover control panel at the first moment, and determines the second target sensing manipulation area based on the second starting point and that sensing value table.
  • the first starting point is the point with the largest sensing value in the first sensing manipulation area, and the second starting point is the point with the largest sensing value in the second sensing manipulation area.
  • taking the first starting point as the center, the computer device searches the surrounding area over the 8-neighborhood for points satisfying a first condition; these points constitute the first target sensing manipulation area. Likewise, taking the second starting point as the center, it searches the surrounding area for points satisfying a second condition; these points constitute the second target sensing manipulation area.
  • the first condition is that the sensing values of points moving outward from the first starting point show a downward trend, and the second condition is that the sensing values of points moving outward from the second starting point show a downward trend.
  • specifically, among the eight points adjacent to the first starting point (i.e. the four points above, below, left and right of it and the four points diagonally adjacent to it), the computer device determines each point whose sensing value is smaller than that of the first starting point as a first target point, and adds the first starting point and the first target points to a first target point set.
  • then, for each first target point, the computer device judges whether, among its adjacent points, there is a point whose sensing value is smaller than that of the first target point; if so, that point is also determined as a first target point and added to the first target point set.
  • the computer device forms the first target sensing manipulation area according to the coordinate values of the points in the first target point set.
  • similarly, among the eight points adjacent to the second starting point (i.e. the four points above, below, left and right of it and the four points diagonally adjacent to it), each point whose sensing value is smaller than that of the second starting point is determined as a second target point, and the second starting point and the second target points are added to a second target point set. Then, for each second target point, the computer device judges whether, among its adjacent points, there is a point whose sensing value is smaller than that of the second target point; if so, that point is also determined as a second target point and added to the second target point set.
  • the computer device forms the second target sensing manipulation area according to the coordinate values of the points in the second target point set.
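The 8-neighborhood, downward-trend growth just described is a breadth-first region growing. A sketch follows; the sensing values are invented, and the "ignore zero-valued points" guard is an added assumption (the application only states the downward-trend condition):

```python
from collections import deque

NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def grow_target_area(table, start):
    """Gradient-descent region growing over the 8-neighbourhood.

    Starting from the maximum-value point, a neighbour joins the target
    point set whenever its sensing value is strictly smaller than the
    value of the point it was reached from ("downward trend" condition).
    """
    h, w = len(table), len(table[0])
    target_set = {start}
    queue = deque([start])
    while queue:
        i, j = queue.popleft()
        for di, dj in NEIGHBOURS:
            ni, nj = i + di, j + dj
            if (0 <= ni < h and 0 <= nj < w
                    and (ni, nj) not in target_set
                    and 0 < table[ni][nj] < table[i][j]):
                target_set.add((ni, nj))
                queue.append((ni, nj))
    return target_set

# Illustrative values: a single peak at (1, 1) falling off outward.
table = [
    [1, 2, 1, 0],
    [2, 9, 3, 0],
    [1, 3, 2, 0],
    [0, 0, 0, 0],
]
area = grow_target_area(table, start=(1, 1))
print(sorted(area))  # the nine points of the 3x3 block around the peak
```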
  • the above process of determining the first target sensing manipulation area and the second target sensing manipulation area amounts to performing a second region search on the hover control panel area using a gradient-descent region search method, so as to supplement the first sensing manipulation area and the second sensing manipulation area and obtain a complete first target sensing manipulation area and second target sensing manipulation area, which can further improve the segmentation accuracy of the sensing area.
  • the method for performing this second region search includes but is not limited to the gradient-descent method; other region search methods are also applicable to this application.
  • in this way, the computer device can replan the first target sensing manipulation area and the second target sensing manipulation area, so that the two finally obtained sensing areas better match the current positions of the two objects (such as fingers).
  • FIG. 7 is a schematic diagram of the effect of the first target sensing manipulation area and the second target sensing manipulation area provided by the present application.
  • (a) in FIG. 7 shows the first sensing manipulation area and the second sensing manipulation area obtained in step S204, together with the first area sensing feature value s of the first sensing manipulation area and the second area sensing feature value r of the second sensing manipulation area obtained in step S205; (b) in FIG. 7 shows the first target sensing manipulation area and the second target sensing manipulation area obtained after the computer device, taking s and r as starting points respectively, searches and replans the surrounding areas again. The intersection of the two areas (i.e. the shaded area) is the overlapping part of the sensing areas generated by the two objects.
  • in the present application, the computer device can use the sensing center points of the two sensing areas generated at the reference moment (i.e. the sampling moment closest to the first moment at which the two objects are separable) as the reference segmentation center points of the first sensing area generated at the first moment, determine the area division line of the first sensing area based on these reference segmentation center points, and then divide the first sensing area based on the area division line to obtain the first sensing manipulation area and the second sensing manipulation area.
  • whether the segmentation is correct can then be determined from the difference between the first area sensing feature value of the first sensing manipulation area and the second area sensing feature value of the second sensing manipulation area. If the segmentation is not correct, the point with the largest sensing value in the first sensing manipulation area and the point with the largest sensing value in the second sensing manipulation area are used as starting points to search and replan the surrounding areas again, obtaining the first target sensing manipulation area and the second target sensing manipulation area. This further improves the segmentation accuracy of the sensing area, effectively avoids the situation where the response areas of two objects merge or the weaker response area is masked when the two objects are close together or overlap, and further improves manipulation accuracy and user experience, with stronger applicability.
  • FIG. 8 is a schematic structural diagram of the sensing area separation device for hover control provided by the present application.
  • the sensing area separation device may be a computer program (including program code) running on the computer device; for example, the sensing area separation device is application software. The sensing area separation device can be used to perform the corresponding steps of the method provided by this application.
  • the sensing area separation device 8 includes:
  • the first acquisition unit 81 is configured to acquire the hover sensing data generated by the user's hover operation above the hover control panel within the first time period, where the hover sensing data includes a plurality of sampling moments in the first time period and the sensing data on the hover control panel at each sampling moment, the plurality of sampling moments include the first moment and a reference moment, the sensing data on the hover control panel at the first moment includes only the first sensing area, the sensing data at the reference moment includes the second sensing area and the third sensing area, and the reference moment is, among the at least one moment whose corresponding sensing data includes two sensing areas, the moment closest to the first moment;
  • the first determining unit 82 is configured to determine a first sensing central point of the second sensing area and a second sensing central point of the third sensing area;
  • the second determining unit 83 is configured to determine an area dividing line of the first sensing area based on the first sensing center point and the second sensing center point;
  • the segmentation unit 84 is configured to segment the first sensing area based on the area dividing line to obtain a first sensing manipulation area and a second sensing manipulation area.
  • optionally, the above first determining unit 82 is configured to determine the point with the largest sensing value in the second sensing area and the point with the largest sensing value in the third sensing area as the first sensing center point and the second sensing center point, respectively.
  • optionally, the second determining unit 83 is configured to determine, based on the connecting line between the first sensing center point and the second sensing center point, a target area in the first sensing area perpendicular to the connecting line, determine the area division points of the first sensing area based on the target area, and obtain the area division line from the area division points.
  • optionally, the above second determining unit 83 is configured to divide the target area, according to the direction of the connecting line and the preset interval value, into a plurality of row areas consistent with the direction of the connecting line, and to determine the point with the smallest sensing value in each row area as an area division point of the first sensing area.
  • the device also includes:
  • the third determining unit 85 is configured to determine a first region sensing characteristic value of the first sensing manipulation region and a second region sensing characteristic value of the second sensing manipulation region;
  • the fourth determining unit 86 is configured to, when the difference between the first area sensing feature value and the second area sensing feature value is greater than a preset difference threshold, determine the first target sensing manipulation area based on the first starting point and the sensing value table of the hover control panel at the first moment, and determine the second target sensing manipulation area based on the second starting point and that sensing value table, where the first starting point is the point with the largest sensing value in the first sensing manipulation area and the second starting point is the point with the largest sensing value in the second sensing manipulation area.
  • the sensing value table includes the sensing value of each point on the hover control panel.
  • optionally, the above fourth determining unit 86 is configured to: determine, among the points adjacent to the first starting point, each point whose sensing value is smaller than that of the first starting point as a first target point, and add the first starting point and the first target points to a first target point set; judge whether, among the points adjacent to each first target point, there is a point whose sensing value is smaller than that of the first target point, and if so, determine that point as a first target point and add it to the first target point set; and determine the first target sensing manipulation area according to the first target point set.
  • optionally, the third determining unit 85 is configured to determine the mean of the sensing values of all points in the first sensing manipulation area as the first area sensing feature value, and determine the mean of the sensing values of all points in the second sensing manipulation area as the second area sensing feature value.
  • the above-mentioned device also includes:
  • the second acquisition unit 87 is configured to acquire the initial hover control panel sensing data generated by the user hovering above the hover control panel within the first time period, where the initial hover control panel sensing data includes a plurality of sampling moments in the first time period and the sensing value table of the hover control panel at each sampling moment, and the sensing value table includes the sensing value of each point on the hover control panel;
  • the fifth determining unit 88 is configured to determine a point in the sensing value table whose sensing value is greater than a preset sensing threshold as a target point, and determine sensing data on the floating control panel at each sampling time based on the target point.
  • the above sensing value includes a capacitance value.
  • in the present application, the sensing center points of the two sensing areas generated at the reference moment can be used as the reference segmentation center points of the first sensing area generated at the first moment; the area division line of the first sensing area is determined based on these reference segmentation center points, and the first sensing area is then divided based on the area division line. A single sensing area generated when two objects (such as fingers or stylus pens) are close together or overlap can thus be divided, effectively avoiding the situation where the response areas of the two objects merge or the weaker response area is masked, improving manipulation accuracy and user experience, with strong applicability.
  • FIG. 9 is a schematic structural diagram of the floating control remote controller provided by the present application.
  • the hover control remote controller 9 includes a hover control panel 91 and a sensing area separation device 92 (corresponding to the sensing area separation device 8 in FIG. 8). The hover control panel 91 is used for the user to perform hover operations; it generates the initial hover control panel sensing data (such as the sensing value table of the hover control panel) according to changes in its hardware signal quantity (such as the capacitance value), and sends the initial hover control panel sensing data at different sampling moments to the sensing area separation device 92 at a preset frequency.
  • the hovering control remote control 9 may also include an interaction module (not shown in the figure) for transmitting data.
  • after the sensing area separation device 92 obtains the sensing area separation result, it encapsulates the separation result in a preset format and sends the encapsulated data packet to a display device through the interaction module using a communication module (such as a Bluetooth module). The display device parses the data packet, so that visual response feedback can be obtained in the application.
  • the present application also provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed, the methods or steps performed by the computer device in the above method embodiments are implemented.
  • the present application also provides a computer program product. When the computer program product is executed by a computer device, the methods or steps performed by the computer device in the above method embodiments are implemented.
  • FIG. 10 is a schematic structural diagram of a computer device provided by the present application.
  • the computer device 10 may include at least one processor 101 , at least one memory 102 and an input device 103 .
  • the processor 101, the memory 102 and the input device 103 may be connected via a communication bus or a communication interface to communicate with one another.
  • the above processor 101, memory 102 and input device 103 can be used to implement the first acquisition unit 81, the first determining unit 82, the second determining unit 83, the segmentation unit 84, the third determining unit 85, the fourth determining unit 86, the second acquisition unit 87 and the fifth determining unit 88 shown in FIG. 8, and can implement the various functions of the computer device.
  • the processor 101 may be a central processing unit (Central Processing Unit, CPU), and the processor may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the input device 103 may include devices such as a hover control panel.
  • the memory 102 may include read-only memory and random-access memory, and provides instructions and data to the processor 101 .
  • the memory 102 stores the following elements, executable modules or data structures, or their subsets, or their extended sets:
  • Operation instructions include various operation instructions for realizing various operations.
  • the memory 102 is used to store the program code for executing the sensing area separation method performed by the computer device in the above embodiments, and the processor 101 is used to execute the program code stored in the memory 102 to implement the various steps of the sensing area separation method performed by the computer device in the above embodiments. For the specific implementation process, reference may be made to the corresponding content described in the foregoing embodiments, and details are not repeated here.
  • the embodiment of the present application also provides a computer program product containing instructions, which, when running on a computer, enables the computer to execute the sensing area separation method or function performed by the computer device in the above embodiments.
  • the embodiment of the present application also provides a computer-readable storage medium storing instructions; when a processor executes the instructions, the processor performs the sensing area separation methods or functions performed by the computer device in the above embodiments.
  • the processor may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits used to control the execution of the programs above.
  • the memory may be a read-only memory (read-only memory, ROM) or another type of static storage device that can store static information and instructions, a random access memory (random access memory, RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), a compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or another magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory may exist independently and be connected to the processor through a bus, or may be integrated with the processor.
  • the computer program product described above comprises one or more computer instructions.
  • the above-mentioned computers may be general-purpose computers, special-purpose computers, computer networks, or other programmable devices.
  • the above computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that includes one or more available media. The above available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a high-density digital video disc (digital video disc, DVD)), or a semiconductor medium (for example, a solid state disk (solid state disk, SSD)), etc.

Abstract

This application relates to the field of hover touch control, and in particular to a sensing area separation method and apparatus for hover control, a hover control remote controller, a computer device and a storage medium. The method includes: a computer device may determine, based on the sensing data on the hover control panel at a reference moment, a first sensing center point of a second sensing area and a second sensing center point of a third sensing area, determine an area division line of a first sensing area based on the first sensing center point and the second sensing center point, and then, based on the area division line, divide the first sensing area generated at a first moment to obtain a first sensing manipulation area and a second sensing manipulation area. In this application, the situation in which the sensing areas generated by two objects merge, or a weak sensing area is masked, when the two objects are close together or overlap can be effectively avoided, thereby improving manipulation accuracy and user experience, with strong applicability.

Description

悬浮操控时的感应区域分离方法、装置、悬浮操控遥控器
本申请要求于2021年05月28日提交中国专利局、申请号为202110593296.2、申请名称为“悬浮操控时的感应区域分离方法、装置、悬浮操控遥控器”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及悬浮触控领域,尤其涉及一种悬浮操控时的感应区域分离方法、装置、悬浮操控遥控器及存储介质。
背景技术
随着悬浮触控技术的发展,通过手持操控板外设进行多指操作控制大屏成为可能,只需要通过触摸或者悬浮操控一块大小与手机相近的操控板,就能实现控制整个大屏操作,悬浮光标和触摸光标的不同,能做到手眼分离,人只需要看大屏就可以做到准确操控;多指操作,使复杂的游戏和轨迹类的应用操作在大屏上可实现,不仅大幅提升了可操控性和视觉体验,且能将手机的操作习惯无缝移植到大屏端。
在悬浮触控操作过程中,遥控器需要检测手指数和手指信息上报到大屏。遥控器从操控板获取信号变化情况来检测手指信息,包括手指数、位置、手指的触摸和悬浮状态等。指尖距离操控板高度不同在操控板上引起的感应区域大小不同,在单手指情况下或者双指在水平面相距较远的情况下,手指感应区域是能明显分开的,容易检测追踪,易区分;但是,当双指在水平面上非常靠近,尤其是在垂直面上一个手指距离悬浮操控板较远,另一个手指距离悬浮操控板较近时,较近的手指在悬浮操控板上产生的感应区域较强,较远的手指产生的感应区域较弱,此时两个手指的感应区域会合并或弱感应区域被掩盖,通过感应区域识别手指数和手指信息结果已不可靠,距离悬浮操控板较远的手指将无法被识别,手指信息丢失,用户操作无法得到正确反馈,从而极大地影响操控准确度和用户体验感。
发明内容
本申请提供一种悬浮操控时的感应区域分离方法、装置、悬浮操控遥控器、计算机设备及存储介质，可提高操控准确度和用户体验感，适用性强。
第一方面,本申请提供了一种悬浮操控时的感应区域分离方法,在该方法中,计算机设备可基于参考时刻(即与第一时刻最靠近的两个物体可分离的采样时刻)产生的两个感应区域的感应中心点作为第一时刻产生的第一感应区域的两个参考分割中心点,并基于两个参考分割中心点确定第一感应区域的区域分割线,进而基于该区域分割线,对第一感应区域进行分割得到第一感应操控区域和第二感应操控区域,可对两个物体(如手指,感应笔等)挨在一起或者重叠时产生的一个感应区域进行分割,有效避免由于两个物体挨在一起或者重叠时,两个物体的感应区域会合并或弱感应区域被掩盖的情况,提高了操控准确度和用户体验感,适用性强。
结合第一方面，在第一种可能的实施方式中，计算机设备将第二感应区域中感应值最大的点和第三感应区域中感应值最大的点分别确定为第一感应中心点和第二感应中心点。由于悬浮操控板上点的感应值越大说明该点距离悬浮操控的物体越近，因此将感应区域中感应值最大的点确定为感应中心点，可提高感应区域的分割正确率。
结合第一方面,在第二种可能的实施方式中,基于第一感应中心点和第二感应中心点的连接线,确定第一感应区域中垂直于连接线的目标区域,并基于目标区域确定第一感应区域的区域分割点,并根据区域分割点得到区域分割线。由于第一感应中心点和第二感应中心点为与第一时刻最靠近的两个物体可分离的采样时刻(即参考时刻)产生的两个感应区域的感应中心点,依据先验信息(即参考时刻下的两个感应中心点)对第一感应区域进行分割,可提高感应区域的分割正确率。
结合第一方面,在第三种可能的实施方式中,计算机设备可按照连接线所在方向以及预设间隔值,将目标区域划分为与连接线所在方向一致的多个行区域,并将各行区域中感应值最小的点确定为第一感应区域的区域分割点。示例性的,双指挨在一起时,双指接触处距离悬浮操控板的高度是双指中心点(即第一感应中心点和第二感应中心点)之间所在区域内距离悬浮操控板的高度中最高的高度,因此,将目标区域内各行区域中感应值最小的点确定为区域分割点,可提高感应区域的分割正确率。
结合第一方面,在第四种可能的实施方式中,计算机设备确定第一感应操控区域的第一区域感应特征值和第二感应操控区域的第二区域感应特征值,并在第一区域感应特征值与第二区域感应特征值之间的差值大于预设差值阈值的情况下,基于第一起始点和在第一时刻所述悬浮操控板的感应数值表,确定第一目标感应操控区域,基于第二起始点和在第一时刻所述悬浮操控板的感应数值表,确定第二目标感应操控区域,其中,第一起始点为第一感应操控区域中感应值最大的点,第二起始点为第二感应操控区域中感应值最大的点。可以理解,在得到第一感应操控区域和第二感应操控区域之后,可通过比较第一区域感应特征值与第二区域感应特征值之间的差值是否大于预设差值阈值,确定得到的第一感应操控区域与第二感应操控区域是否存在信息丢失情况,也即第一感应区域分割是否正确,在该差值大于预设差值阈值时,将第一感应操控区域和第二感应操控区域补充完整,得到更加完整的第一目标感应操控区域和第二目标感应操控区域,可进一步提高感应区域的分割正确率,进一步提高操控准确度和用户体验感,适用性更强。
结合第一方面,在第五种可能的实施方式中,计算机设备将与第一起始点相邻的多个点中,感应值小于第一起始点的感应值的点确定为第一目标点,并将第一起始点和第一目标点添加至第一目标点集合;判断与第一目标点相邻的多个点中是否存在感应值小于第一目标点的感应值的点,若存在,则将感应值小于第一目标点的感应值的点确定为第一目标点,并将第一目标点添加至第一目标点集合,并根据第一目标点集合确定第一目标感应操控区域。可以理解的,计算机设备采用梯度下降的区域搜索方法对悬浮操控板区域进行二次区域搜索,从而对第一感应操控区域进行补充,得到补充完整的第一目标感应操控区域,可进一步提高感应区域的分割正确率。
结合第一方面,在第六种可能的实施方式中,计算机设备将第一感应操控区域中所有点的感应值的均值确定为第一区域感应特征值,将第二感应操控区域中所有点的感应值的均值确定为第二区域感应特征值。
结合第一方面，在第七种可能的实施方式中，在获取第一时间段内用户在悬浮操控板上方悬浮操作所产生的悬浮感应区域数据之前，计算机设备获取第一时间段内用户在悬浮操控板上方悬浮操作所产生的初始悬浮操控板感应数据，该初始悬浮操控板感应数据包括所述第一时间段内的多个采样时刻以及在每个采样时刻所述悬浮操控板的感应数值表，该感应数值表包括所述悬浮操控板上每个点的感应值。之后，计算机设备将感应数值表中感应值大于预设感应阈值的点确定为目标点，基于目标点确定在所述每个采样时刻悬浮操控板上的感应数据（即感应区域）。可以理解的，计算机设备在获取到每个采样时刻的感应数值表后，首先可利用区域连通搜索法确定由于用户悬浮操作产生的初始感应区域，之后对每个采样时刻的初始感应区域进行空洞补全，得到较为完整的可用区域，即感应区域，可提高后续的感应区域分离的成功率。
结合第一方面,在第八种可能的实施方式中,感应值包括电容值。
第二方面,本申请提供了一种悬浮操控时的感应区域分离装置,该感应区域分离装置包括:
第一获取单元,用于获取第一时间段内用户在悬浮操控板上方悬浮操作所产生的悬浮感应区域数据,悬浮感应数据包括第一时间段内的多个采样时刻以及在每个采样时刻悬浮操控板上的感应数据,多个采样时刻包括第一时刻和参考时刻,在第一时刻悬浮操控板上的感应数据只包括第一感应区域,在参考时刻感应数据包括第二感应区域和第三感应区域,并且,参考时刻为感应数据包括两个感应区域所对应的至少一个时刻中最靠近第一时刻的时刻;
第一确定单元,用于确定第二感应区域的第一感应中心点和第三感应区域的第二感应中心点;
第二确定单元,用于基于第一感应中心点和第二感应中心点确定第一感应区域的区域分割线;
分割单元,用于基于区域分割线,对第一感应区域进行分割得到第一感应操控区域和第二感应操控区域。
结合第二方面,在第一种可能的实施方式中,上述第一确定单元用于将第二感应区域中感应值最大的点和第三感应区域中感应值最大的点分别确定为第一感应中心点和第二感应中心点。
结合第二方面,在第二种可能的实施方式中,上述第二确定单元用于基于第一感应中心和第二感应中心的连接线,确定第一感应区域中垂直于连接线的目标区域,并基于目标区域确定第一感应区域的区域分割点,并根据区域分割点得到区域分割线。
结合第二方面,在第三种可能的实施方式中,上述第二确定单元用于按照连接线所在方向以及预设间隔值,将目标区域划分为与连接线所在方向一致的多个行区域,并将各行区域中感应值最小的点确定为第一感应区域的区域分割点。
结合第二方面,在第四种可能的实施方式中,装置还包括:
第三确定单元,用于确定第一感应操控区域的第一区域感应特征值和第二感应操控区域的第二区域感应特征值;
第四确定单元,用于在第一区域感应特征值与第二区域感应特征值之间的差值大于预设差值阈值的情况下,基于第一起始点和在第一时刻悬浮操控板的感应数值表,确定第一目标感应操控区域,基于第二起始点和在第一时刻悬浮操控板的感应数值表,确定第二目标感应操控区域,其中,第一起始点为第一感应操控区域中感应值最大的点,第二起始点为第二感应操控区域中感应值最大的点。
结合第二方面,在第五种可能的实施方式中,感应数值表包括悬浮操控板上每个点的感应值;
上述第四确定单元用于将与第一起始点相邻的多个点中，感应值小于第一起始点的感应值的点确定为第一目标点，并将第一起始点和第一目标点添加至第一目标点集合；判断与第一目标点相邻的多个点中是否存在感应值小于第一目标点的感应值的点，若存在，则将感应值小于第一目标点的感应值的点确定为第一目标点，并将第一目标点添加至第一目标点集合；根据第一目标点集合确定第一目标感应操控区域。
结合第二方面,在第六种可能的实施方式中,第三确定单元用于将第一感应操控区域中所有点的感应值的均值确定为第一区域感应特征值,将第二感应操控区域中所有点的感应值的均值确定为第二区域感应特征值。
结合第二方面,在第七种可能的实施方式中,上述装置还包括:
第二获取单元,用于获取第一时间段内用户在悬浮操控板上方悬浮操作所产生的初始悬浮操控板感应数据,初始悬浮操控板感应数据包括第一时间段内的多个采样时刻以及在每个采样时刻悬浮操控板的感应数值表,感应数值表包括悬浮操控板上每个点的感应值;
第五确定单元,用于将感应数值表中感应值大于预设感应阈值的点确定为目标点,基于目标点确定在每个采样时刻悬浮操控板上的感应数据。
结合第二方面,在第八种可能的实施方式中,上述感应值包括电容值。
第三方面,本申请提供了一种悬浮操控遥控器,该悬浮操控遥控器包括上述第二方面第一种可能实施方式至第八种可能实施方式中任意一种可能的实施方式中所述的感应区域分离装置和悬浮操控板。
第四方面，本申请提供了一种计算机设备，该计算机设备包括处理器、存储器和输入设备。上述处理器、存储器和输入设备相互连接。其中，上述存储器用于存储计算机程序，上述计算机程序包括程序指令，上述处理器被配置用于调用上述程序指令以及输入设备来执行上述第一方面所述的感应区域分离方法。
第五方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,所述指令可以由处理电路上的一个或多个处理器执行。当其在计算机上运行时,使得计算机执行上述第一方面所述的感应区域分离方法。
第六方面,本申请实施例提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述第一方面所述的感应区域分离方法。
应理解的是,本申请上述多个方面的实现和有益效果可互相参考。
附图说明
图1是本申请提供的感应区域分离方法的应用场景示意图;
图2是本申请提供的感应区域分离方法的一流程示意图;
图3是本申请提供的确定第一时刻悬浮操控板上的感应区域的工作流程示意图;
图4是本申请提供的对第一感应区域进行分割的工作流程示意图;
图5是本申请提供的确定区域分割线的工作流程示意图;
图6是本申请提供的感应区域分离方法的另一流程示意图;
图7是本申请提供的第一目标感应操控区域和第二目标感应操控区域的效果示意图;
图8是本申请提供的悬浮操控时的感应区域分离装置的结构示意图;
图9是本申请提供的悬浮操控遥控器的结构示意图;
图10是本申请提供的计算机设备的结构示意图。
具体实施方式
本申请提供的方法可适用于悬浮触控领域中的感应区域分离领域,本申请中的计算机设备可以为具有感应区域分离功能的实体终端,该实体终端可以为服务器,也可以为用户终端,在此不做限定。其中,服务器可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、内容分发网络(content delivery network,CDN)、以及大数据和人工智能平台等基础云计算服务的云服务器。用户终端可以包括但不限于平板设备、台式电脑、笔记本电脑、手机或者其他任何能够完成信息交互的终端设备。
在本申请提供的悬浮操控时的感应区域分离方法中，计算机设备可获取第一时间段内用户在悬浮操控板上方悬浮操作所产生的悬浮感应区域数据，悬浮感应数据包括第一时间段内的多个采样时刻以及在每个采样时刻悬浮操控板上的感应数据，多个采样时刻包括第一时刻和参考时刻，在第一时刻悬浮操控板上的感应数据只包括第一感应区域（如双指挨在一起或者重叠时的感应区域），在参考时刻感应数据包括第二感应区域和第三感应区域，并且，参考时刻为感应数据包括两个感应区域所对应的至少一个时刻中最靠近第一时刻的时刻。之后，计算机设备确定第二感应区域的第一感应中心点和第三感应区域的第二感应中心点，并基于第一感应中心点和第二感应中心点确定第一感应区域的区域分割线。进一步地，计算机设备可基于区域分割线，对第一感应区域进行分割得到第一感应操控区域和第二感应操控区域。在本申请中，可对两个物体（如手指，感应笔等）挨在一起或者重叠时产生的一个感应区域进行分割，有效避免由于两个物体挨在一起或者重叠时，两个物体的响应区域会合并或弱响应区域被掩盖的情况，提高了操控准确度和用户体验感，适用性强。本申请提供的感应区域分离方法可适配于不同的应用场景，例如，在悬浮操控游戏、悬浮操控绘画等诸多涉及悬浮操控的场景中均具有广泛的应用，下面将以悬浮操控绘画应用场景为例进行说明，以下不再赘述。
在悬浮操控绘画应用场景下，本申请中的计算机设备可以为悬浮操控遥控器。请参见图1，图1是本申请提供的感应区域分离方法的应用场景示意图。如图1所示，计算机设备可以与显示屏建立通讯，在用户进行悬浮操控绘画时，用户右手中的拇指和食指在悬浮操控板区域的上方进行悬浮操作（如转动）。此时，悬浮操控遥控器获取用户右手悬浮操作过程中产生的初始悬浮操控板感应数据，该初始悬浮操控板感应数据包括第一时间段（即用户右手在悬浮操控板区域上方时的时间段）内的多个采样时刻以及在每个采样时刻悬浮操控板的感应数值表，并基于第一时间段内每个采样时刻悬浮操控板的感应数值表确定每个采样时刻悬浮操控板上的感应数据（即感应区域）。其中，多个采样时刻包括第一时刻和参考时刻，在第一时刻悬浮操控板上的感应数据只包括第一感应区域，在参考时刻感应数据包括第二感应区域和第三感应区域，并且，参考时刻为感应数据包括两个感应区域所对应的至少一个采样时刻中最靠近第一时刻的采样时刻。之后，计算机设备确定第二感应区域的第一感应中心点和第三感应区域的第二感应中心点，并基于第一感应中心点和第二感应中心点确定第一感应区域的区域分割线，进而，基于区域分割线，对第一感应区域进行分割得到第一感应操控区域和第二感应操控区域。在整个过程中，计算机设备可自动对双指挨在一起或者重叠时产生的感应区域进行分割，有效避免双指靠近时产生的感应区域难以分割的问题，提高了操控准确度和用户体验感，适用性强。
下面将结合图2至图7对本申请提供的悬浮操控时的感应区域分离方法进行示例说明。请参见图2,图2是本申请提供的感应区域分离方法的一流程示意图。如图2所示,该方法可包括以下步骤S101至步骤S104:
步骤S101,获取第一时间段内用户在悬浮操控板上方悬浮操作所产生的悬浮感应区域数据。
在一些可行的实施方式中,计算机设备在执行步骤S101之前,获取第一时间段内用户在悬浮操控板上方悬浮操作所产生的初始悬浮操控板感应数据。其中,初始悬浮操控板感应数据包括第一时间段内的多个采样时刻以及在每个采样时刻悬浮操控板的感应数值表,感应数值表包括悬浮操控板上每个点的感应值以及每个点的坐标值。需要说明的是,每个点的感应值大小可以反映用户在悬浮操控板上方悬浮操控过程中,确定手指、感应笔等可使悬浮操控板上的感应值发生变化的物品距离悬浮操控板的远近,示例性的,悬浮操控板上点A的感应值越大,说明手指距离点A越近。
这里,第一时间段可以为用户在悬浮操控板上方悬浮操作所对应的时间段,用户在悬浮操控板上方的悬浮操作可以为用户双指从分离状态到双指挨在一起再到双指在垂直悬浮操控板方向上重叠对应的连续操作,或者为用户操控两个感应笔从在垂直悬浮操控板方向上重叠到两个感应笔挨在一起再到两个感应笔处于分离状态对应的连续操作。其中,感应笔可以为任何在与悬浮操控板上方时悬浮操控板上的点的感应值发生变化的物体。
之后,计算机设备遍历每个采样时刻悬浮操控板的感应数值表中的每个点,将感应值大于预设感应阈值的点确定为目标点,并基于目标点构成每个采样时刻的初始感应区域,进而可采用空洞补全方法(如形态学闭操作,即先膨胀后腐蚀)对每个采样时刻的初始感应区域存在的空洞进行补全,得到每个采样时刻悬浮操控板上的感应数据,即感应区域。
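示例性的，上述“先按阈值筛选目标点、再对初始感应区域进行空洞补全”的过程可用如下Python代码示意。该代码仅为依据上文描述给出的示意性草图：感应数值表假设为二维列表，空洞补全以“从边界洪泛标记外部连通点”的方式代替形态学闭操作，函数名与数据结构均为示例假设，并非本申请的限定实现：

```python
from collections import deque

def extract_sensing_region(grid, threshold):
    """由感应数值表提取感应区域：先按阈值筛选目标点，再补全区域内部空洞。"""
    rows, cols = len(grid), len(grid[0])
    # 第一步：感应值大于预设感应阈值的点记为目标点(1)，其余记为0
    mask = [[1 if grid[r][c] > threshold else 0 for c in range(cols)]
            for r in range(rows)]
    # 第二步：从边界出发洪泛标记所有与外部连通的0点
    outside = [[False] * cols for _ in range(rows)]
    dq = deque((r, c) for r in range(rows) for c in range(cols)
               if (r in (0, rows - 1) or c in (0, cols - 1)) and mask[r][c] == 0)
    for r, c in dq:
        outside[r][c] = True
    while dq:
        r, c = dq.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and mask[nr][nc] == 0 and not outside[nr][nc]:
                outside[nr][nc] = True
                dq.append((nr, nc))
    # 第三步：未与外部连通的0点即为空洞，补全为感应区域的一部分
    return [[1 if mask[r][c] == 1 or not outside[r][c] else 0
             for c in range(cols)] for r in range(rows)]
```

对于中心存在空洞的环形感应区域，该函数会将环内的空洞点一并纳入感应区域，得到较为完整的可用区域。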
可以理解的，计算机设备在获取到每个采样时刻的感应数值表后，首先利用区域搜索方法（如上述区域连通搜索法）确定由于用户悬浮操作产生的初始感应区域，之后对每个采样时刻的初始感应区域进行空洞补全，得到较为完整的可用区域，即感应区域，可提高后续的感应区域分离的成功率。
示例性的,以用户在悬浮操控板上方的悬浮操作为用户双指从分离状态到挨在一起为例,对得到第一时刻悬浮操控板上的感应区域进行介绍。请参见图3,图3是本申请提供的确定第一时刻悬浮操控板上的感应区域的工作流程示意图。如图3所示,图中包括第一组双指靠近过程中确定第一时刻悬浮操控板上的感应区域的过程,以及第二组双指靠近过程中确定第一时刻悬浮操控板上的感应区域的过程。其中,图3中的(a)至(d)内的矩形框表示悬浮操控板。下面以第一组为例进行说明,图3中的(a)表示双指靠近的过程,图3中的(b)表示双指挨在一起,图3中的(c)表示对第一时刻(即图3中的(b)对应的采样时刻)悬浮操控板的感应数值表进行区域搜索(即确定目标点的过程)后得到的第一时刻的初始感应区域(即双指可能存在的区域图,图3中的(c)),图3中的(d)为对图3中的(c)内的空洞区域进行补全后得到的第一时刻在悬浮操控板上的感应区域。图3中的第二组可以参考第一组,此处不再赘述。
进而,计算机设备得到第一时间段内用户在悬浮操控板上方悬浮操作所产生的悬浮感应区域数据。
其中，悬浮感应数据包括第一时间段内的多个采样时刻以及在每个采样时刻悬浮操控板上的感应数据（即感应区域），多个采样时刻包括第一时刻和参考时刻，在第一时刻悬浮操控板上的感应数据只包括第一感应区域，在参考时刻感应数据包括第二感应区域和第三感应区域，并且，参考时刻为感应数据包括两个感应区域所对应的至少一个采样时刻中最靠近第一时刻的采样时刻。
步骤S102,确定第二感应区域的第一感应中心点和第三感应区域的第二感应中心点。
在一些可行的实施方式中,计算机设备可将第二感应区域中感应值最大的点确定为第二感应区域的第一感应中心点,将第三感应区域中感应值最大的点确定为第三感应区域的第二感应中心点。其中,感应值可以为电容值。
示例性的,为了方便理解,请参见图4,图4是本申请提供的对第一感应区域进行分割的工作流程示意图。如图4所示,图4中的(a)为参考时刻悬浮操控板的感应区域,该感应区域包括第二感应区域和第三感应区域,计算机设备通过步骤S102中确定感应中心点的方式,得到第二感应区域的第一感应中心点p和第三感应区域的第二感应中心点q;图4中的(b)为第一时刻悬浮操控板的感应区域,该感应区域只包括第一感应区域,计算机设备在得到第二感应区域的第一感应中心点p和第三感应区域的第二感应中心点q后,将p和q作为第一感应区域进行分割的两个物体的参考感应中心点。
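示例性的，步骤S102中“将感应区域中感应值最大的点确定为感应中心点”可用如下Python代码示意（示意性草图，grid、region 的数据结构均为示例假设）：

```python
def sensing_center(grid, region):
    """将感应区域中感应值最大的点确定为该区域的感应中心点。
    grid 为感应数值表（二维列表），region 为区域内点坐标 (r, c) 的集合。"""
    return max(region, key=lambda point: grid[point[0]][point[1]])
```

分别对第二感应区域和第三感应区域调用该函数，即可得到第一感应中心点p和第二感应中心点q。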
步骤S103,基于第一感应中心点和第二感应中心点确定第一感应区域的区域分割线。
在一些可行的实施方式中,计算机设备基于第一感应中心点和第二感应中心点的连接线,确定第一感应区域中垂直于连接线的目标区域,并按照连接线所在方向以及预设间隔值,将目标区域划分为与连接线所在方向一致的多个行区域,将各行区域中感应值最小的点确定为第一感应区域的区域分割点,进而根据区域分割点得到区域分割线,即图4中的(c)内的虚线mn。
示例性的，为了方便理解，请参见图5，图5是本申请提供的确定区域分割线的工作流程示意图。如图5所示，图5中的(1)为图4中的(c)对应的第一时刻悬浮操控板上详细的感应区域，即由8*16个小正方形构成的矩形区域，其中，每个小正方形表示悬浮操控板区域中的点，所有带颜色的小正方形组成的区域表示第一时刻产生的第一感应区域。假设第一感应中心点p为图5中的(1)中第4行与第7列的交点，第二感应中心点q为图5中的(1)中第4行第11列的交点，则p与q所在连线方向与矩形区域中的每行所在方向一致，分别从p和q做垂直于p与q所在连线方向的第一垂线和第二垂线，将第一垂线、第二垂线与第一感应区域所围成的区域确定为目标区域，即图5中的(2)内所有带颜色的小正方形组成的区域。之后，计算机设备按照p与q所在连线方向（即矩形区域中的每行所在方向）以及预设间隔值（即每个点对应的小正方形的宽度值），将目标区域划分为6个行区域，即第2行中第7列至第11列构成的第一行区域，第3行中第7列至第11列构成的第二行区域，…，第7行中第7列至第11列构成的第六行区域。进而，计算机设备将第一行区域中感应值最小的点确定为第一区域分割点，将第二行区域中感应值最小的点确定为第二区域分割点，…，将第六行区域中感应值最小的点确定为第六区域分割点，并将上述六个区域分割点进行连接，得到图5中的(3)内的虚线mn，也即图5中的(4)所示第一感应区域的区域分割线mn。
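示例性的，上述“按行区域取感应值最小的点得到区域分割点”的过程可用如下Python代码示意。该代码为示意性草图：假设第一感应中心点p与第二感应中心点q位于同一行方向上（与图5示例一致），预设间隔值为一个点的宽度，数据结构均为示例假设：

```python
def split_points(grid, region, p, q):
    """计算第一感应区域的区域分割点：目标区域为p、q两条垂线之间
    落在region内的部分，每个行区域取感应值最小的点作为分割点。
    grid 为感应数值表（二维列表），region 为区域内点坐标 (r, c) 的集合。"""
    c_lo, c_hi = sorted((p[1], q[1]))          # 两条垂线所夹的列范围
    rows = sorted({r for r, _ in region})       # 第一感应区域覆盖的各行
    points = []
    for r in rows:
        band = [(r, c) for c in range(c_lo, c_hi + 1) if (r, c) in region]
        if band:  # 该行区域落在第一感应区域内时才产生分割点
            points.append(min(band, key=lambda t: grid[t[0]][t[1]]))
    return points
```

将各行的分割点顺次连接，即得到区域分割线。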
步骤S104,基于区域分割线,对第一感应区域进行分割得到第一感应操控区域和第二感应操控区域。
示例性的,请再参见图4,图4中的(d)内区域分割线以左的第一感应区域为第一感应操控区域,区域分割线以右的第一感应区域为第二感应操控区域。
需要说明的是,在第一时刻悬浮操控板上的感应区域是由于用户双指挨在一起产生的感应区域时,假设第二感应区域和第三感应区域分别为第一手指产生的感应区域和第二手指产生的感应区域,则在将第一感应区域分割为两个区域后,将两个区域中包括第一感应中心点的区域确定为第一感应操控区域(即第一手指感应操控区域),进而将两个区域中的另一个区域确定为第二感应操控区域(即第二手指感应操控区域)。
在本申请实施例中,计算机设备可基于参考时刻(即与第一时刻最靠近的两个物体可分离的采样时刻)产生的两个感应区域的感应中心点作为第一时刻产生的第一感应区域的两个参考分割中心点,并基于两个参考分割中心点确定第一感应区域的区域分割线,进而基于该区域分割线,对第一感应区域进行分割得到第一感应操控区域和第二感应操控区域,可对两个物体(如手指,感应笔等)挨在一起或者重叠时产生的一个感应区域进行分割,有效避免由于两个物体挨在一起或者重叠时,两个物体的响应区域会合并或弱响应区域被掩盖的情况,提高了操控准确度和用户体验感,适用性强。
请参见图6,图6是本申请提供的感应区域分离方法的另一流程示意图。如图6所示,该方法可包括以下步骤S201至步骤S206:
步骤S201,获取第一时间段内用户在悬浮操控板上方悬浮操作所产生的悬浮感应区域数据。
步骤S202,确定第二感应区域的第一感应中心点和第三感应区域的第二感应中心点。
步骤S203,基于第一感应中心点和第二感应中心点确定第一感应区域的区域分割线。
步骤S204,基于区域分割线,对第一感应区域进行分割得到第一感应操控区域和第二感应操控区域。
这里,步骤S201-步骤S204具体实现方式的描述请参见图2所示实施例中的步骤S101-步骤S104,此处不再赘述。
步骤S205,确定第一感应操控区域的第一区域感应特征值和第二感应操控区域的第二区域感应特征值。
在一些可行的实施方式中,计算机设备将第一感应操控区域中所有点的感应值的均值确定为第一区域感应特征值,将第二感应操控区域中所有点的感应值的均值确定为第二区域感应特征值。
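示例性的，区域感应特征值（即区域内所有点感应值的均值）的计算可用如下Python代码示意（示意性草图，数据结构为示例假设）：

```python
def region_feature(grid, region):
    """区域感应特征值：区域内所有点感应值的均值。
    grid 为感应数值表（二维列表），region 为区域内点坐标 (r, c) 的集合。"""
    return sum(grid[r][c] for r, c in region) / len(region)
```

分别对第一感应操控区域和第二感应操控区域调用该函数，即可比较二者特征值之差是否大于预设差值阈值。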
步骤S206,在第一区域感应特征值与第二区域感应特征值之间的差值大于预设差值阈值的情况下,基于第一起始点、第二起始点和在第一时刻悬浮操控板的感应数值表,确定第一目标感应操控区域和第二目标感应操控区域。
其中,第一时刻悬浮操控板的感应数值表中包括悬浮操控区域内每个点的坐标值以及感应值。
在一些可行的实施方式中，计算机设备计算第一区域感应特征值与第二区域感应特征值之间的差值，在该差值大于预设差值阈值时，确定步骤S204得到的第一感应操控区域与第二感应操控区域存在信息丢失的情况，说明得到的第一感应操控区域与第二感应操控区域分割不准确。之后，计算机设备基于第一起始点和在第一时刻悬浮操控板的感应数值表确定第一目标感应操控区域，基于第二起始点和在第一时刻悬浮操控板的感应数值表确定第二目标感应操控区域。其中，第一起始点为第一感应操控区域中感应值最大的点，第二起始点为第二感应操控区域中感应值最大的点。
之后，计算机设备以第一起始点为中心，按8邻域向四周搜索满足第一条件的点，这些点构成第一目标感应操控区域；同时，以第二起始点为中心，按8邻域向四周搜索满足第二条件的点，这些点构成第二目标感应操控区域。其中，第一条件为以第一起始点为中心向外的点的感应值呈下降趋势，第二条件为以第二起始点为中心向外的点的感应值呈下降趋势。
具体的,计算机设备将与第一起始点相邻的八个点(即与第一起始点的上下左右相邻的四个点以及与第一起始点对角相邻的四个点)中,感应值小于第一起始点的感应值的点确定为第一目标点,并将第一起始点和第一目标点添加至第一目标点集合。之后,计算机设备判断与第一目标点相邻的多个点中是否存在感应值小于第一目标点的感应值的点,若存在,则将感应值小于第一目标点的感应值的点确定为第一目标点,并将第一目标点添加至第一目标点集合,可以理解,通过上述循环可以得到第一时刻悬浮操控板上所有满足第一条件的点,也即第一目标点集合包括第一起始点和第一时刻悬浮操控板上所有满足第一条件的点。进而,计算机设备根据第一目标点集合中每个点的坐标值组成第一目标感应操控区域。
与此同时,计算机设备将与第二起始点相邻的八个点(即与第二起始点的上下左右相邻的四个点以及与第二起始点对角相邻的四个点)中,感应值小于第二起始点的感应值的点确定为第二目标点,并将第二起始点和第二目标点添加至第二目标点集合。之后,计算机设备判断与第二目标点相邻的多个点中是否存在感应值小于第二目标点的感应值的点,若存在,则将感应值小于第二目标点的感应值的点确定为第二目标点,并将第二目标点添加至第二目标点集合,可以理解,通过上述循环可以得到第一时刻悬浮操控板上所有满足第二条件的点,也即第二目标点集合包括第二起始点和第一时刻悬浮操控板上所有满足第二条件的点。进而,计算机设备根据第二目标点集合中每个点的坐标值组成第二目标感应操控区域。
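示例性的，上述以起始点为中心、按8邻域向四周搜索感应值呈下降趋势的点的二次区域搜索过程，可用如下Python代码示意。该代码为示意性草图：将“呈下降趋势”形式化为“邻点感应值严格小于当前点”，这一判定条件以及数据结构均为示例假设：

```python
from collections import deque

def descend_region(grid, start):
    """以起始点为中心按8邻域向外搜索感应值呈下降趋势的点，
    返回构成目标感应操控区域的点坐标集合（梯度下降式区域搜索）。"""
    rows, cols = len(grid), len(grid[0])
    region = {start}
    dq = deque([start])
    while dq:
        r, c = dq.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    # 仅纳入感应值严格小于当前点的邻点（向外呈下降趋势）
                    if grid[nr][nc] < grid[r][c] and (nr, nc) not in region:
                        region.add((nr, nc))
                        dq.append((nr, nc))
    return region
```

分别以第一起始点和第二起始点调用该函数，即可得到第一目标点集合与第二目标点集合对应的目标感应操控区域。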
可以理解的,上述确定第一目标感应操控区域和第二目标感应操控区域的过程为采用梯度下降的区域搜索方法对悬浮操控板区域进行二次区域搜索,从而对第一感应操控区域和第二感应操控区域进行补充,得到补充完整的第一目标感应操控区域和第二目标感应操控区域,可进一步提高感应区域的分割正确率。需要说明的是,这里进行二次区域搜索的方法包括但不限于梯度下降法,其他的区域搜索法同样适用于本申请。
进一步地,在得到第一目标感应操控区域和第二目标感应操控区域后,计算机设备可对第一目标感应操控区域和第二目标感应操控区域进行重新规划,使最终得到的两个感应区域更加符合两个物体(如手指)的当前位置。
示例性的，为了方便理解，请参见图7，图7是本申请提供的第一目标感应操控区域和第二目标感应操控区域的效果示意图。如图7所示，图7中的(a)为步骤S204得到的第一感应操控区域和第二感应操控区域，以及步骤S205得到的第一感应操控区域的第一区域感应特征值s和第二感应操控区域的第二区域感应特征值r；图7中的(b)为计算机设备分别以第一起始点和第二起始点为起始点分别向四周重新进行区域搜索和规划后得到的第一目标感应操控区域和第二目标感应操控区域，其中，上述两个区域中的相交区域（即阴影区域）为两个物体产生的感应区域中的重叠部分。
在本申请实施例中，计算机设备可基于参考时刻（即与第一时刻最靠近的两个物体可分离的采样时刻）产生的两个感应区域的感应中心点作为第一时刻产生的第一感应区域的参考分割中心点，并基于该参考分割中心点确定第一感应区域的区域分割线，进而基于该区域分割线，对第一感应区域进行分割得到第一感应操控区域和第二感应操控区域。之后，可通过第一感应操控区域的第一区域感应特征值与第二感应操控区域的第二区域感应特征值之间的差值确定分割是否正确，并在分割不正确的情况下，分别以第一感应操控区域中感应值最大的点和第二感应操控区域中感应值最大的点为起始点分别向四周重新进行区域搜索和规划，得到第一目标感应操控区域和第二目标感应操控区域，可进一步提高感应区域的分割正确率，有效避免由于两个物体挨在一起或者重叠时，两个物体的响应区域会合并或弱响应区域被掩盖的情况，进一步提高了操控准确度和用户体验感，适用性更强。
请参见图8,图8是本申请提供的悬浮操控时的感应区域分离装置的结构示意图。该感应区域分离装置可以是运行于计算机设备中的一个计算机程序(包括程序代码),例如,该感应区域分离装置为一个应用软件;该感应区域分离装置可以用于执行本申请提供的方法中的相应步骤。如图8所示,该感应区域分离装置8包括:
第一获取单元81,用于获取第一时间段内用户在悬浮操控板上方悬浮操作所产生的悬浮感应区域数据,悬浮感应数据包括第一时间段内的多个采样时刻以及在每个采样时刻悬浮操控板上的感应数据,多个采样时刻包括第一时刻和参考时刻,在第一时刻悬浮操控板上的感应数据只包括第一感应区域,在参考时刻感应数据包括第二感应区域和第三感应区域,并且,参考时刻为感应数据包括两个感应区域所对应的至少一个时刻中最靠近第一时刻的时刻;
第一确定单元82,用于确定第二感应区域的第一感应中心点和第三感应区域的第二感应中心点;
第二确定单元83,用于基于第一感应中心点和第二感应中心点确定第一感应区域的区域分割线;
分割单元84,用于基于区域分割线,对第一感应区域进行分割得到第一感应操控区域和第二感应操控区域。
在一些可能的实施方式中,上述第一确定单元82用于将第二感应区域中感应值最大的点和第三感应区域中感应值最大的点分别确定为第一感应中心点和第二感应中心点。
在一些可能的实施方式中,上述第二确定单元83用于基于第一感应中心和第二感应中心的连接线,确定第一感应区域中垂直于连接线的目标区域,并基于目标区域确定第一感应区域的区域分割点,并根据区域分割点得到区域分割线。
在一些可能的实施方式中,上述第二确定单元83用于按照连接线所在方向以及预设间隔值,将目标区域划分为与连接线所在方向一致的多个行区域,并将各行区域中感应值最小的点确定为第一感应区域的区域分割点。
在一些可能的实施方式中,装置还包括:
第三确定单元85,用于确定第一感应操控区域的第一区域感应特征值和第二感应操控区域的第二区域感应特征值;
第四确定单元86,用于在第一区域感应特征值与第二区域感应特征值之间的差值大于预设差值阈值的情况下,基于第一起始点和在第一时刻悬浮操控板的感应数值表,确定第一目标感应操控区域,基于第二起始点和在第一时刻悬浮操控板的感应数值表,确定第二目标感应操控区域,其中,第一起始点为第一感应操控区域中感应值最大的点,第二起始点为第二感应操控区域中感应值最大的点。
在一些可能的实施方式中,感应数值表包括悬浮操控板上每个点的感应值;
上述第四确定单元86用于将与第一起始点相邻的多个点中，感应值小于第一起始点的感应值的点确定为第一目标点，并将第一起始点和第一目标点添加至第一目标点集合；判断与第一目标点相邻的多个点中是否存在感应值小于第一目标点的感应值的点，若存在，则将感应值小于第一目标点的感应值的点确定为第一目标点，并将第一目标点添加至第一目标点集合；根据第一目标点集合确定第一目标感应操控区域。
在一些可能的实施方式中,第三确定单元85用于将第一感应操控区域中所有点的感应值的均值确定为第一区域感应特征值,将第二感应操控区域中所有点的感应值的均值确定为第二区域感应特征值。
在一些可能的实施方式中,上述装置还包括:
第二获取单元87,用于获取第一时间段内用户在悬浮操控板上方悬浮操作所产生的初始悬浮操控板感应数据,初始悬浮操控板感应数据包括第一时间段内的多个采样时刻以及在每个采样时刻悬浮操控板的感应数值表,感应数值表包括悬浮操控板上每个点的感应值;
第五确定单元88,用于将感应数值表中感应值大于预设感应阈值的点确定为目标点,基于目标点确定在每个采样时刻悬浮操控板上的感应数据。
在一些可能的实施方式中,上述感应值包括电容值。
具体实现中,上述第一获取单元81、第一确定单元82、第二确定单元83、分割单元84、第三确定单元85、第四确定单元86、第二获取单元87以及第五确定单元88实现上述各种可能的实现方式中的步骤的过程具体可参见上述实施例一中的计算机设备所执行的相应的过程,此处便不再赘述。
在本申请中,可基于参考时刻(即与第一时刻最靠近的两个物体可分离的采样时刻)产生的两个感应区域的感应中心点作为第一时刻产生的第一感应区域的参考分割中心点,并基于该参考分割中心点确定第一感应区域的区域分割线,进而基于该区域分割线,对第一感应区域进行分割,可对两个物体(如手指,感应笔等)挨在一起或者重叠时产生的一个感应区域进行分割,有效避免由于两个物体挨在一起或者重叠时,两个物体的响应区域会合并或弱响应区域被掩盖的情况,提高了操控准确度和用户体验感,适用性强。
请参见图9,图9是本申请提供的悬浮操控遥控器的结构示意图。如图9所示,该悬浮操控遥控器9包括悬浮操控板91和感应区域分离装置92(对应图8中的感应区域分离装置8),其中,悬浮操控板91用于在用户在其上方进行悬浮操作时,根据其硬件信号量(如电容值)产生的变化生成初始悬浮操控板感应数据(如悬浮操控板的感应数值表),并按照预设频率将不同采样时刻下的初始悬浮操控板感应数据发送至感应区域分离装置92。这里,感应区域分离装置92执行的步骤请参见图8中感应区域分离装置8的描述,此处不再赘述。可选的,悬浮操控遥控器9还可以包括交互模块(图未示),用于传输数据。具体的,感应区域分离装置92在得到感应区域分离结果后,对该分离结果按照预设格式进行封装,并将封装得到的数据包通过该交互模块利用通讯模块(如蓝牙模块等)发送至显示设备,显示设备在接收到上述数据包后,对该数据包进行解析,在应用中可得到视觉响应反馈。
本申请还提供了一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被计算机执行时实现上述方法实施例中计算机设备执行的方法或者步骤。
本申请还提供了一种计算机程序产品,该计算机程序产品被计算机设备执行时实现上述方法实施例中计算机设备执行的方法或者步骤。
请参见图10，图10是本申请提供的计算机设备的结构示意图。如图10所示，该计算机设备10可包括至少一个处理器101、至少一个存储器102以及输入设备103。上述处理器101、上述存储器102以及上述输入设备103可通过通信总线或者通信接口连接并完成相互间的通信。这里，上述处理器101、存储器102以及输入设备103可用于实现上述图8中所示的第一获取单元81、第一确定单元82、第二确定单元83、分割单元84、第三确定单元85、第四确定单元86、第二获取单元87以及第五确定单元88所能实现的计算机设备的各种功能。
应当理解,所称处理器101可以是中央处理单元(Central Processing Unit,CPU),该处理器还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
输入设备103可以包括悬浮操控板等设备。
存储器102可以包括只读存储器和随机存取存储器,并向处理器101提供指令和数据。存储器102存储了如下的元素,可执行模块或者数据结构,或者它们的子集,或者它们的扩展集:
操作指令:包括各种操作指令,用于实现各种操作。
具体的,所述存储器102用于存储执行上述实施例计算机设备所实现的感应区域分离方法的程序代码,所述处理器101用于执行所述存储器102中存储的程序代码以实现上述实施例中计算机设备所执行的感应区域分离方法的各个步骤。具体实现过程可参见前文实施例中所描述的相应内容,此处便不再赘述。
本申请实施例还提供了一种包含指令的计算机程序产品,其在计算机上运行时,使得计算机执行上述实施例中计算机设备所执行感应区域分离方法或功能。
本申请实施例还提供了一种计算机可读存储介质,该可读存储介质存储指令,当处理器运行所述指令时,使得所述处理器执行上述实施例中计算机设备所执行感应区域分离方法或功能。
在本申请实施例中,处理器可以是通用中央处理器(CPU),微处理器,特定应用集成电路(application-specific integrated circuit,ASIC),或一个或多个用于控制以上方案程序执行的集成电路。
存储器可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其他类型的动态存储设备,也可以是电可擦可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。存储器可以是独立存在,通过总线与处理器相连接。存储器也可以和处理器集成在一起。
在上述方法实施例中，可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时，可以全部或部分地以计算机程序产品的形式实现。上述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行上述计算机指令时，全部或部分地产生按照本申请实施例上述的流程或功能。上述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。上述计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一个计算机可读存储介质传输，例如，上述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线（例如同轴电缆、光纤、数字用户线(digital subscriber Line,DSL)）或无线（例如红外、无线、微波等）方式向另一个网站站点、计算机、服务器或数据中心进行传输。上述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。上述可用介质可以是磁性介质（例如，软盘、硬盘、磁带）、光介质（例如，高密度数字视频光盘(digital video disc,DVD)）、或者半导体介质（例如，固态硬盘(solid state disk,SSD)）等。
应理解,本实施例中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
总之，以上所述仅为本申请技术方案的较佳实施例而已，并非用于限定本申请的保护范围。凡在本申请的精神和原则之内，所作的任何修改、等同替换、改进等，均应包含在本申请的保护范围之内。

Claims (22)

  1. 一种悬浮操控时的感应区域分离方法,其特征在于,所述方法包括:
    获取第一时间段内用户在悬浮操控板上方悬浮操作所产生的悬浮感应区域数据,所述悬浮感应数据包括所述第一时间段内的多个采样时刻以及在每个采样时刻所述悬浮操控板上的感应数据,所述多个采样时刻包括第一时刻和参考时刻,在所述第一时刻所述悬浮操控板上的感应数据只包括第一感应区域,在所述参考时刻所述感应数据包括第二感应区域和第三感应区域,并且,所述参考时刻为所述感应数据包括两个感应区域所对应的至少一个采样时刻中最靠近所述第一时刻的采样时刻;
    确定所述第二感应区域的第一感应中心点和所述第三感应区域的第二感应中心点;
    基于所述第一感应中心点和所述第二感应中心点确定所述第一感应区域的区域分割线;
    基于所述区域分割线,对所述第一感应区域进行分割得到第一感应操控区域和第二感应操控区域。
  2. 根据权利要求1所述的方法,其特征在于,所述确定所述第二感应区域的第一感应中心点和所述第三感应区域的第二感应中心点,包括:
    将所述第二感应区域中感应值最大的点和所述第三感应区域中感应值最大的点分别确定为所述第一感应中心点和所述第二感应中心点。
  3. 根据权利要求1所述的方法,其特征在于,所述基于所述第一感应中心点和所述第二感应中心点确定所述第一感应区域的区域分割线,包括:
    基于所述第一感应中心点和所述第二感应中心点的连接线,确定所述第一感应区域中垂直于所述连接线的目标区域;
    基于所述目标区域确定所述第一感应区域的区域分割点,并根据所述区域分割点得到所述区域分割线。
  4. 根据权利要求3所述的方法,其特征在于,所述基于所述目标区域确定所述第一感应区域的区域分割点,包括:
    按照所述连接线所在方向以及预设间隔值,将所述目标区域划分为与所述连接线所在方向一致的多个行区域;
    将各行区域中感应值最小的点确定为所述第一感应区域的区域分割点。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述方法还包括:
    确定所述第一感应操控区域的第一区域感应特征值和所述第二感应操控区域的第二区域感应特征值;
    在所述第一区域感应特征值与所述第二区域感应特征值之间的差值大于预设差值阈值的情况下,基于第一起始点和在所述第一时刻所述悬浮操控板的感应数值表,确定第一目标感应操控区域,基于第二起始点和在所述第一时刻所述悬浮操控板的感应数值表,确定第二目标感应操控区域,其中,所述第一起始点为所述第一感应操控区域中感应值最大的点,所述第二起始点为所述第二感应操控区域中感应值最大的点。
  6. 根据权利要求5所述的方法,其特征在于,所述感应数值表包括所述悬浮操控板上每个点的感应值;
    所述基于第一起始点和在所述第一时刻所述悬浮操控板的感应数值表,确定第一目标感应操控区域,包括:
    将与所述第一起始点相邻的多个点中,感应值小于所述第一起始点的感应值的点确定为第一目标点,并将所述第一起始点和所述第一目标点添加至第一目标点集合;
    判断与所述第一目标点相邻的多个点中是否存在感应值小于所述第一目标点的感应值的点,若存在,则将感应值小于所述第一目标点的感应值的点确定为所述第一目标点,并将所述第一目标点添加至所述第一目标点集合;
    根据所述第一目标点集合确定所述第一目标感应操控区域。
  7. 根据权利要求5或6所述的方法,其特征在于,所述确定所述第一感应操控区域的第一区域感应特征值和所述第二感应操控区域的第二区域感应特征值,包括:
    将所述第一感应操控区域中所有点的感应值的均值确定为所述第一区域感应特征值,将所述第二感应操控区域中所有点的感应值的均值确定为所述第二区域感应特征值。
  8. 根据权利要求1所述的方法,其特征在于,所述获取第一时间段内用户在悬浮操控板上方悬浮操作所产生的悬浮感应区域数据之前,包括:
    获取第一时间段内用户在悬浮操控板上方悬浮操作所产生的初始悬浮操控板感应数据,所述初始悬浮操控板感应数据包括所述第一时间段内的多个采样时刻以及在每个采样时刻所述悬浮操控板的感应数值表,所述感应数值表包括所述悬浮操控板上每个点的感应值;
    将所述感应数值表中感应值大于预设感应阈值的点确定为目标点,基于所述目标点确定在所述每个采样时刻所述悬浮操控板上的感应数据。
  9. 根据权利要求1-8任一项所述的方法,其特征在于,所述感应值包括电容值。
  10. 一种悬浮操控时的感应区域分离装置,其特征在于,所述感应区域分离装置包括:
    第一获取单元,用于获取第一时间段内用户在悬浮操控板上方悬浮操作所产生的悬浮感应区域数据,所述悬浮感应数据包括所述第一时间段内的多个采样时刻以及在每个采样时刻所述悬浮操控板上的感应数据,所述多个采样时刻包括第一时刻和参考时刻,在所述第一时刻所述悬浮操控板上的感应数据只包括第一感应区域,在所述参考时刻所述感应数据包括第二感应区域和第三感应区域,并且,所述参考时刻为所述感应数据包括两个感应区域所对应的至少一个时刻中最靠近所述第一时刻的时刻;
    第一确定单元,用于确定所述第二感应区域的第一感应中心点和所述第三感应区域的第二感应中心点;
    第二确定单元,用于基于所述第一感应中心点和所述第二感应中心点确定所述第一感应区域的区域分割线;
    分割单元,用于基于所述区域分割线,对所述第一感应区域进行分割得到第一感应 操控区域和第二感应操控区域。
  11. 根据权利要求10所述的装置,其特征在于,所述第一确定单元用于:
    将所述第二感应区域中感应值最大的点和所述第三感应区域中感应值最大的点分别确定为所述第一感应中心点和所述第二感应中心点。
  12. 根据权利要求10所述的装置,其特征在于,所述第二确定单元用于:
    基于所述第一感应中心和所述第二感应中心的连接线,确定所述第一感应区域中垂直于所述连接线的目标区域;
    基于所述目标区域确定所述第一感应区域的区域分割点,并根据所述区域分割点得到所述区域分割线。
  13. 根据权利要求12所述的装置,其特征在于,所述第二确定单元用于:
    按照所述连接线所在方向以及预设间隔值,将所述目标区域划分为与所述连接线所在方向一致的多个行区域;
    将各行区域中感应值最小的点确定为所述第一感应区域的区域分割点。
  14. 根据权利要求10-13任一项所述的装置,其特征在于,所述装置还包括:
    第三确定单元,用于确定所述第一感应操控区域的第一区域感应特征值和所述第二感应操控区域的第二区域感应特征值;
    第四确定单元,用于在所述第一区域感应特征值与所述第二区域感应特征值之间的差值大于预设差值阈值的情况下,基于第一起始点和在所述第一时刻所述悬浮操控板的感应数值表,确定第一目标感应操控区域,基于第二起始点和在所述第一时刻所述悬浮操控板的感应数值表,确定第二目标感应操控区域,其中,所述第一起始点为所述第一感应操控区域中感应值最大的点,所述第二起始点为所述第二感应操控区域中感应值最大的点。
  15. 根据权利要求14所述的装置,其特征在于,所述感应数值表包括所述悬浮操控板上每个点的感应值;
    所述第四确定单元用于:
    将与所述第一起始点相邻的多个点中,感应值小于所述第一起始点的感应值的点确定为第一目标点,并将所述第一起始点和所述第一目标点添加至第一目标点集合;
    判断与所述第一目标点相邻的多个点中是否存在感应值小于所述第一目标点的感应值的点,若存在,则将感应值小于所述第一目标点的感应值的点确定为所述第一目标点,并将所述第一目标点添加至所述第一目标点集合;
    根据所述第一目标点集合确定所述第一目标感应操控区域。
  16. 根据权利要求14或15所述的装置,其特征在于,所述第三确定单元用于:
    将所述第一感应操控区域中所有点的感应值的均值确定为所述第一区域感应特征值,将所述第二感应操控区域中所有点的感应值的均值确定为所述第二区域感应特征值。
  17. 根据权利要求10所述的装置,其特征在于,所述装置还包括:
    第二获取单元,用于获取第一时间段内用户在悬浮操控板上方悬浮操作所产生的初始悬浮操控板感应数据,所述初始悬浮操控板感应数据包括所述第一时间段内的多个采样时刻以及在每个采样时刻所述悬浮操控板的感应数值表,所述感应数值表包括所述悬浮操控板上每个点的感应值;
    第五确定单元,用于将所述感应数值表中感应值大于预设感应阈值的点确定为目标点,基于所述目标点确定在所述每个采样时刻所述悬浮操控板上的感应数据。
  18. 根据权利要求10-17任一项所述的装置,其特征在于,所述感应值包括电容值。
  19. 一种悬浮操控遥控器,其特征在于,所述悬浮操控遥控器包括如权利要求10-18中任一项所述的感应区域分离装置和悬浮操控板。
  20. 一种计算机设备,其特征在于,所述计算机设备包括:处理器、存储器和输入设备;
    所述存储器,用于存储计算机程序;
    所述处理器,用于调用所述存储器中存储的计算机程序和所述输入设备,以使得所述计算机设备执行如权利要求1-9中任一项所述的感应区域分离方法。
  21. 一种计算机可读存储介质,用于存储指令,当所述指令被执行时,使如权利要求1-9中任一项所述的感应区域分离方法被实现。
  22. 一种包含程序指令的计算机程序产品，当所述程序指令在计算机设备上运行时，使得所述计算机设备执行如权利要求1-9中任一项所述的感应区域分离方法。
PCT/CN2022/091500 2021-05-28 2022-05-07 悬浮操控时的感应区域分离方法、装置、悬浮操控遥控器 WO2022247616A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22810343.8A EP4343505A1 (en) 2021-05-28 2022-05-07 Method and apparatus for sensing area separation during floating control, and floating control remote controller

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110593296.2A CN115480662A (zh) 2021-05-28 2021-05-28 悬浮操控时的感应区域分离方法、装置、悬浮操控遥控器
CN202110593296.2 2021-05-28

Publications (1)

Publication Number Publication Date
WO2022247616A1 true WO2022247616A1 (zh) 2022-12-01

Family

ID=84228414

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/091500 WO2022247616A1 (zh) 2021-05-28 2022-05-07 悬浮操控时的感应区域分离方法、装置、悬浮操控遥控器

Country Status (3)

Country Link
EP (1) EP4343505A1 (zh)
CN (1) CN115480662A (zh)
WO (1) WO2022247616A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104049737A (zh) * 2013-03-14 2014-09-17 三星电子株式会社 用户设备的对象控制方法和装置
CN104375639A (zh) * 2014-07-17 2015-02-25 深圳市钛客科技有限公司 一种空中感应设备
US20200081491A1 (en) * 2018-09-11 2020-03-12 Sharp Kabushiki Kaisha Display device
CN110941339A (zh) * 2019-11-27 2020-03-31 上海创功通讯技术有限公司 一种手势感应方法及电子设备、存储介质
CN111414118A (zh) * 2019-01-08 2020-07-14 敦泰电子有限公司 触控中心计算方法、触控系统及触控装置
US20210019010A1 (en) * 2019-07-19 2021-01-21 Samsung Electronics Co., Ltd. Dynamically adaptive sensing for remote hover touch
CN112639689A (zh) * 2020-04-30 2021-04-09 华为技术有限公司 基于隔空手势的控制方法、装置及系统

Also Published As

Publication number Publication date
CN115480662A (zh) 2022-12-16
EP4343505A1 (en) 2024-03-27

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22810343

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022810343

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2022810343

Country of ref document: EP

Effective date: 20231218