CN116382453A - Display device, touch point positioning method and device - Google Patents

Display device, touch point positioning method and device

Info

Publication number
CN116382453A
Authority
CN
China
Prior art keywords
polygon
touch
vertexes
touch point
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111592881.7A
Other languages
Chinese (zh)
Inventor
王武军
田媛
刘杰
张晓娜
李新
张连峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Hisense Commercial Display Co Ltd
Original Assignee
Qingdao Hisense Commercial Display Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Commercial Display Co Ltd filed Critical Qingdao Hisense Commercial Display Co Ltd
Priority to CN202111592881.7A priority Critical patent/CN116382453A/en
Publication of CN116382453A publication Critical patent/CN116382453A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/0412 Digitisers structurally integrated in a display

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

The embodiment of the application provides a display device, a touch point positioning method and a touch point positioning device. The display device includes a display and a processor, where the display is provided with a touch detection device. The touch detection device is used for detecting N touch areas of a target touch point in N scanning directions respectively, where N is an integer greater than 1. The processor is configured to: intersect the ith touch area with a first polygon to obtain a second polygon, where the first polygon is determined based on the (i-1)th touch area, and i is an integer greater than 0 and less than or equal to N; convert the second polygon into a third polygon according to the vertices of the second polygon, where the vertices of the third polygon are a subset of the vertices of the second polygon; and determine the coordinates of the target touch point according to the third polygon. The method reduces the operation complexity and increases the positioning speed of the touch point, thereby improving the response speed of touch operations and the touch operation experience.

Description

Display device, touch point positioning method and device
Technical Field
The embodiment of the application relates to the technical field of man-machine interaction, and in particular relates to a display device, a touch point positioning method and a touch point positioning device.
Background
At present, an intelligent terminal can realize man-machine interaction through touch operations on an interactive interface presented by a display. The display needs to locate the position of the touch point while responding to the touch operation.
In the prior art, the display can detect a plurality of touch areas through multiple scans and intersect these touch areas to accurately locate the touch point. However, during fast touch operations, the positioning speed of the touch point limits the terminal's response speed to the touch operation, which can cause dropped points or broken lines and degrades the touch operation experience.
Disclosure of Invention
The embodiment of the application provides a display device, a touch point positioning method and a touch point positioning device, so as to reduce operation complexity, improve the positioning speed of a touch point and further improve the response speed of a terminal to touch operation.
In a first aspect, there is provided a display device comprising: a display and a processor; the display is provided with a touch detection device; the touch detection device is used for respectively detecting N touch areas of a target touch point in N scanning directions, wherein N is an integer greater than 1; the processor is configured to: intersect the ith touch area with a first polygon to obtain a second polygon, wherein the first polygon is determined based on the (i-1)th touch area, and i is an integer greater than 0 and less than or equal to N; convert the second polygon into a third polygon according to the vertices of the second polygon, wherein the vertices of the third polygon are a subset of the vertices of the second polygon; and determine the coordinates of the target touch point according to the third polygon.
In a second aspect, a touch point positioning method is provided, including: detecting N touch areas of a target touch point in N scanning directions respectively, wherein N is an integer greater than 1; intersecting the ith touch area with a first polygon to obtain a second polygon, wherein the first polygon is determined based on the (i-1)th touch area, and i is an integer greater than 0 and less than or equal to N; converting the second polygon into a third polygon according to the vertices of the second polygon, wherein the vertices of the third polygon are a subset of the vertices of the second polygon; and determining the coordinates of the target touch point according to the third polygon.
In a third aspect, there is provided a touch point positioning apparatus comprising: a detection unit, used for respectively detecting N touch areas of a target touch point in N scanning directions, wherein N is an integer greater than 1; and a processing unit, used for intersecting the ith touch area with a first polygon to obtain a second polygon, wherein the first polygon is determined based on the (i-1)th touch area, and i is an integer greater than 0 and less than or equal to N; the processing unit is further configured to convert the second polygon into a third polygon according to the vertices of the second polygon, where the vertices of the third polygon are a subset of the vertices of the second polygon; and the processing unit is further configured to determine the coordinates of the target touch point according to the third polygon.
In a fourth aspect, there is provided a computer-readable storage medium storing a computer program that causes a computer to perform the method as in the second aspect.
In a fifth aspect, there is provided a computer program product comprising computer program instructions for causing a computer to perform the method as in the second aspect.
According to the embodiment of the application, the processor converts the second polygon obtained by intersecting the ith touch area and the first polygon into the third polygon with fewer vertexes, so that simplification of the second polygon is realized, and then the coordinates of the target touch point are determined based on the third polygon, so that the operation complexity is reduced, the positioning speed of the touch point is improved, the response speed of touch operation is further improved, and the touch operation experience is improved.
Drawings
A schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment is exemplarily shown in fig. 1;
a hardware configuration block diagram of the display device 200 in accordance with an exemplary embodiment is illustrated in fig. 2;
fig. 3 is a schematic structural diagram of a touch detection device according to an embodiment of the present application;
fig. 4 is a flowchart of a touch point positioning method 300 according to an embodiment of the present disclosure;
Fig. 5a to 5d are schematic diagrams of scanning directions according to embodiments of the present application;
FIG. 6 is a schematic diagram of a touch area according to an embodiment of the present disclosure;
fig. 7a to 7c are schematic views illustrating intersection of touch areas according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a scan reversal provided herein;
fig. 9a and fig. 9b are schematic diagrams of a vertex screening method for intersecting touch areas according to an embodiment of the present application;
fig. 10 is a flowchart of a touch point positioning method 400 according to an embodiment of the present disclosure;
fig. 11 is a schematic block diagram of a touch point positioning device according to an embodiment of the present application.
Detailed Description
To make the purposes, embodiments, and advantages of the present application clearer, the exemplary embodiments of the present application are described below clearly and completely with reference to the accompanying drawings; it is apparent that the described exemplary embodiments are only some, rather than all, of the embodiments of the present application.
Based on the exemplary embodiments described herein, all other embodiments that may be obtained by one of ordinary skill in the art without making any inventive effort are within the scope of the claims appended hereto. Furthermore, while the disclosure is presented in the context of an exemplary embodiment or embodiments, it should be appreciated that the various aspects of the disclosure may each, on their own, constitute a complete embodiment.
It should be noted that the brief description of the terms in the present application is only for convenience in understanding the embodiments described below, and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms "first," "second," "third," and the like in the description, in the claims, and in the above drawings are used to distinguish between similar objects or entities and are not necessarily intended to describe a particular order or sequence, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments of the application are, for example, capable of operation in sequences other than those illustrated or otherwise described herein.
Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to those elements expressly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The term "module" as used in this application refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the function associated with that element.
A schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment is exemplarily shown in fig. 1. As shown in fig. 1, a user may operate the display apparatus 200 through the mobile terminal 300 and the control device 100.
In some embodiments, the control apparatus 100 may be a remote controller, and the communication between the remote controller and the display device includes infrared protocol communication, Bluetooth protocol communication, other short-range communication modes, and the like, so that the display device 200 is controlled wirelessly or by other wired means. The user may control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, etc. For example, the user can input corresponding control instructions through the volume up/down keys, channel control keys, up/down/left/right movement keys, voice input keys, menu keys, and power key on the remote controller to control the functions of the display device 200.
In some embodiments, mobile terminals, tablet computers, notebook computers, and other smart devices may also be used to control the display device 200. For example, the display device 200 is controlled using an application running on a smart device. The application program, by configuration, can provide various controls to the user in an intuitive User Interface (UI) on a screen associated with the smart device.
In some embodiments, the mobile terminal 300 and the display device 200 may each install a software application, so that connection and communication are implemented through a network communication protocol and one-to-one control operation and data communication are achieved. For example, a control command protocol may be established between the mobile terminal 300 and the display device 200, a remote-control keyboard may be synchronized to the mobile terminal 300, and the function of controlling the display device 200 can be realized by controlling a user interface on the mobile terminal 300. The audio and video content displayed on the mobile terminal 300 can also be transmitted to the display device 200 to realize a synchronous display function.
As also shown in fig. 1, the display device 200 is also in data communication with the server 400 via a variety of communication means. The display device 200 may be permitted to make communication connections via a Local Area Network (LAN), a Wireless Local Area Network (WLAN), and other networks. The server 400 may provide various contents and interactions to the display device 200. By way of example, display device 200 receives software program updates, or accesses a remotely stored digital media library by sending and receiving information, as well as Electronic Program Guide (EPG) interactions. The server 400 may be a cluster, or may be multiple clusters, and may include one or more types of servers. Other web service content such as video on demand and advertising services are provided through the server 400.
The display device 200 may be a liquid crystal display, an OLED display, a projection display device. The particular display device type, size, resolution, etc. are not limited, and those skilled in the art will appreciate that the display device 200 may be modified in performance and configuration as desired.
The display apparatus 200 may additionally provide a smart network television function of a computer support function, including, but not limited to, a network television, a smart television, an Internet Protocol Television (IPTV), etc., in addition to the broadcast receiving television function.
A hardware configuration block diagram of the display device 200 according to an exemplary embodiment is illustrated in fig. 2.
In some embodiments, at least one of the controller 250, the modem 210, the communicator 220, the detector 230, the input/output interface 255, the display 275, the audio output interface 285, the memory 260, the power supply 290, the user interface 265, and the external device interface 240 is included in the display apparatus 200.
In some embodiments, the display 275 is configured to receive image signals from the first processor output, and to display video content and images and components of the menu manipulation interface.
In some embodiments, display 275 includes a display assembly for presenting pictures, and a drive assembly to drive the display of images.
In some embodiments, the displayed video content may come from broadcast television content or from various broadcast signals received via wired or wireless communication protocols. Alternatively, various image contents sent by a network server via a network communication protocol may be displayed.
In some embodiments, the display 275 is used to present a user-manipulated UI interface generated in the display device 200 and used to control the display device 200.
In some embodiments, depending on the type of display 275, a drive assembly for driving the display is also included.
In some embodiments, display 275 is a projection display and may further include a projection device and a projection screen.
In some embodiments, communicator 220 is a component for communicating with external devices or external servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi chip, a bluetooth communication protocol chip, a wired ethernet communication protocol chip, or other network communication protocol chip or a near field communication protocol chip, and an infrared receiver.
In some embodiments, the display device 200 may establish control signal and data signal transmission and reception between the communicator 220 and the external control device 100 or the content providing device.
In some embodiments, the user interface 265 may be used to receive infrared control signals from the control device 100 (e.g., an infrared remote control, etc.).
In some embodiments, the detector 230 is a component used by the display device 200 to collect signals from the external environment or to interact with the external environment.
In some embodiments, the detector 230 includes a light receiver, i.e., a sensor for capturing the intensity of ambient light, so that display parameters can be adaptively changed according to the ambient light, and so on.
In some embodiments, the detector 230 may further include an image collector, such as a camera, a video camera, etc., which may be used to collect external environmental scenes, collect attributes of a user or interact with a user, adaptively change display parameters, and recognize a user gesture to realize an interaction function with the user.
In some embodiments, the detector 230 may also include a temperature sensor or the like, such as by sensing ambient temperature.
In some embodiments, the display device 200 may adaptively adjust the display color temperature of the image. For example, the display device 200 may be adjusted to display the image with a colder color temperature when the ambient temperature is relatively high, or with a warmer color temperature when the ambient temperature is relatively low.
In some embodiments, the detector 230 may also include a sound collector, such as a microphone, that may be used to receive the user's voice. For example, it may receive a voice signal containing a control instruction with which the user controls the display apparatus 200, or collect environmental sounds to recognize the type of environmental scene, so that the display apparatus 200 can adapt to the environmental noise.
In some embodiments, as shown in fig. 2, the input/output interface 255 is configured to enable data transfer between the controller 250 and external other devices or other controllers 250. Such as receiving video signal data and audio signal data of an external device, command instruction data, or the like.
In some embodiments, external device interface 240 may include, but is not limited to, the following: any one or more interfaces of a high definition multimedia interface HDMI interface, an analog or data high definition component input interface, a composite video input interface, a USB input interface, an RGB port, and the like can be used. The plurality of interfaces may form a composite input/output interface.
In some embodiments, as shown in fig. 2, the modem 210 is configured to receive the broadcast television signal by a wired or wireless receiving manner, and may perform modulation and demodulation processes such as amplification, mixing, and resonance, and demodulate the audio/video signal from a plurality of wireless or wired broadcast television signals, where the audio/video signal may include a television audio/video signal carried in a television channel frequency selected by a user, and an EPG data signal.
In some embodiments, the frequency point demodulated by the modem 210 is controlled by the controller 250, and the controller 250 may send a control signal according to the user selection, so that the modem responds to the television signal frequency selected by the user and modulates and demodulates the television signal carried by the frequency.
In some embodiments, the broadcast television signal may be classified into a terrestrial broadcast signal, a cable broadcast signal, a satellite broadcast signal, an internet broadcast signal, or the like according to a broadcasting system of the television signal. Or may be differentiated into digital modulation signals, analog modulation signals, etc., depending on the type of modulation. Or it may be classified into digital signals, analog signals, etc. according to the kind of signals.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like. In this way, the set-top box outputs the television audio and video signals modulated and demodulated by the received broadcast television signals to the main body equipment, and the main body equipment receives the audio and video signals through the first input/output interface.
In some embodiments, the controller 250 controls the operation of the display device and responds to user operations through various software control programs stored on the memory. The controller 250 may control the overall operation of the display apparatus 200. For example: in response to receiving a user command to select to display a UI object on the display 275, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the object may be any one of selectable objects, such as a hyperlink or an icon. Operations related to the selected object, such as: displaying an operation of connecting to a hyperlink page, a document, an image, or the like, or executing an operation of a program corresponding to the icon. The user command for selecting the UI object may be an input command through various input means (e.g., mouse, keyboard, touch pad, etc.) connected to the display device 200 or a voice command corresponding to a voice uttered by the user.
As shown in fig. 2, the controller 250 includes at least one of a random access memory 251 (RAM), a read-only memory 252 (ROM), a video processor 270, an audio processor 280, other processors 253 (e.g., a graphics processing unit (GPU)), a central processing unit 254 (CPU), a communication interface, and a communication bus 256 that connects the respective components.
In some embodiments, the RAM 251 is used to store temporary data for the operating system or other running programs.
In some embodiments, ROM 252 is used to store instructions for various system boots.
In some embodiments, the ROM 252 is used to store a basic input/output system (BIOS), which comprises driver programs and a boot operating system; the driver programs are used to complete the power-on self-test of the system, the initialization of each functional module in the system, and the basic input/output of the system.
In some embodiments, upon receipt of the power-on signal, the display device 200 starts up, and the CPU runs the system boot instructions in the ROM 252 and copies the temporary data of the operating system stored in memory into the RAM 251 in order to start or run the operating system. After the operating system is started, the CPU copies the temporary data of various applications in memory into the RAM 251 so that the applications can be started or run.
In some embodiments, CPU processor 254 is used to execute operating system and application program instructions stored in memory. And executing various application programs, data and contents according to various interactive instructions received from the outside, so as to finally display and play various audio and video contents.
In some exemplary embodiments, the CPU processor 254 may comprise a plurality of processors. The plurality of processors may include one main processor and one or more sub-processors. The main processor is used to perform some operations of the display apparatus 200 in the pre-power-up mode and/or to display pictures in the normal mode. The one or more sub-processors are used for operations in a standby mode or the like.
In some embodiments, the graphics processor 253 is configured to generate various graphical objects, such as icons, operation menus, and graphics displayed in response to user input instructions. It includes an arithmetic unit, which performs operations on the various interaction instructions input by the user and displays various objects according to display attributes, and a renderer, which renders the objects produced by the arithmetic unit so that they can be displayed on the display.
In some embodiments, the video processor 270 is configured to receive an external video signal and perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image composition according to the standard codec protocol of the input signal, so as to obtain a signal that can be displayed or played directly on the display device 200.
In some embodiments, video processor 270 includes a demultiplexing module, a video decoding module, an image compositing module, a frame rate conversion module, a display formatting module, and the like.
The demultiplexing module is used for demultiplexing the input audio/video data stream, such as the input MPEG-2, and demultiplexes the input audio/video data stream into video signals, audio signals and the like.
And the video decoding module is used for processing the demultiplexed video signals, including decoding, scaling and the like.
The image synthesis module, such as an image synthesizer, is used to superimpose and mix the GUI signal input by the user or generated by the graphics generator with the scaled video image, so as to generate an image signal for display.
The frame rate conversion module is configured to convert the frame rate of the input video, for example converting a 60Hz frame rate into a 120Hz or 240Hz frame rate, usually by means of frame interpolation.
The display formatting module is used to convert the frame-rate-converted video signal into a video output signal that conforms to the display format, for example an RGB data signal.
In some embodiments, the graphics processor 253 may be integrated with the video processor, or may be configured separately. When integrated, they can process graphics signals output to the display together; when configured separately, they perform different functions, for example in a GPU + FRC (Frame Rate Conversion) architecture.
In some embodiments, the audio processor 280 is configured to receive an external audio signal, decompress and decode the audio signal according to a standard codec protocol of an input signal, and perform noise reduction, digital-to-analog conversion, and amplification processing, so as to obtain a sound signal that can be played in a speaker.
In some embodiments, video processor 270 may include one or more chips. The audio processor may also comprise one or more chips.
In some embodiments, video processor 270 and audio processor 280 may be separate chips or may be integrated together with the controller in one or more chips.
In some embodiments, the audio output, under the control of the controller 250, receives the sound signals output by the audio processor 280. It includes, for example, the speaker 286, as well as an external sound output terminal that can output to the sound-producing device of an external device, other than the speaker carried by the display device 200 itself, such as an external sound interface or an earphone interface. The communication interface may also include a short-range communication module, for example a Bluetooth module used to output sound to a Bluetooth speaker.
The power supply 290 supplies power input from an external power source to the display device 200 under the control of the controller 250. The power supply 290 may include a built-in power circuit installed inside the display device 200, or may be an external power source installed in the display device 200, and a power interface for providing an external power source in the display device 200.
The user interface 265 is used to receive an input signal from a user and then transmit the received user input signal to the controller 250. The user input signal may be a remote control signal received through an infrared receiver, and various user control signals may be received through a network communication module.
In some embodiments, a user inputs a user command through the control apparatus 100 or the mobile terminal 300; the user input interface passes the user input to the controller 250, and the display device 200 then responds to the user input.
In some embodiments, a user may input a user command through a Graphical User Interface (GUI) displayed on the display 275, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
In some embodiments, a "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user that enables conversion between an internal form of information and a form acceptable to the user. A commonly used presentation form of the user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, window, control, etc. displayed in the display of the electronic device, where the control may include a visual interface element such as an icon, button, menu, tab, text box, dialog box, status bar, navigation bar, widget, etc.
The memory 260 includes memory storing various software modules for driving the display device 200. Such as: various software modules stored in the first memory, including: at least one of a base module, a detection module, a communication module, a display control module, a browser module, various service modules, and the like.
The base module is a bottom software module for signal communication between the various hardware in the display device 200 and for sending processing and control signals to the upper modules. The detection module is used for collecting various information from various sensors or user input interfaces and carrying out digital-to-analog conversion and analysis management.
For example, the voice recognition module includes a voice analysis module and a voice instruction database module. The display control module is used for controlling the display to display the image content, and can be used for playing the multimedia image content, the UI interface and other information. And the communication module is used for carrying out control and data communication with external equipment. And the browser module is used for executing data communication between the browsing servers. And the service module is used for providing various services and various application programs. Meanwhile, the memory 260 also stores received external data and user data, images of various items in various user interfaces, visual effect maps of focus objects, and the like.
The execution subject of the embodiment of the present application may be the display device 200 in fig. 1 described above.
At present, an intelligent terminal can realize man-machine interaction through touch operations on an interactive interface presented by a display, and the display needs to locate the position of the touch point while responding to the touch operation. In the prior art, the display can detect a plurality of touch areas through multiple scans and intersect these touch areas to accurately locate the touch point. However, during fast touch operations, the positioning speed of the touch point limits the terminal's response speed to the touch operation, which can cause dropped points or broken lines and degrades the touch experience.
To address this technical problem, the embodiment of the application provides a touch point positioning scheme that simplifies the polygon after each intersection with a touch area before intersecting it with the next detected touch area, thereby reducing the operation complexity, improving the positioning speed of the touch point, further improving the response speed of the terminal to touch operations, and improving the touch operation experience.
It should be noted that, the intelligent terminal and the terminal in the application can be an intelligent television, a computer, a tablet, an intelligent wearable device, a mobile phone and the like. Of course, the intelligent terminal and the terminal may also be the aforementioned display device, such as the display device 200 in fig. 1 and 2.
Methods provided in the embodiments of the present application will be described below with reference to the accompanying drawings.
As mentioned above, a display device (such as display device 200 in fig. 1 or 2) includes a display and a processor. In the embodiment of the application, the display is provided with a touch detection device, and the touch detection device is used for detecting a touch area of a touch point.
The touch detection device comprises a light emitting component and a light receiving component, wherein the light emitting component and the light receiving component are respectively arranged on opposite sides of the display. For example, as shown in fig. 3, a first light emitting component is disposed on the upper side of the display, a first light receiving component is disposed on the lower side of the display, a second light emitting component is disposed on the left side of the display, and a second light receiving component is disposed on the right side of the display. Alternatively, the first light emitting component and the second light emitting component in fig. 3 may be part of a light emitting component, or may be independent light emitting components, and the first light receiving component and the second light receiving component in fig. 3 may be part of a light receiving component, or may be independent light receiving components, which is not limited in this application.
Alternatively, the light emitting assembly may be composed of a plurality of light emitting lamps, which may be, for example, infrared light emitting lamps.
Alternatively, the light receiving assembly may be composed of a plurality of receivers, which may be infrared light receivers, for example.
Fig. 4 is a flowchart of a touch point positioning method 300 according to an embodiment of the present application. As shown in fig. 4, the method specifically includes some or all of the following steps:
S310, detecting N touch areas of a target touch point in N scanning directions respectively, wherein N is an integer greater than 1;
S320, intersecting the ith touch area with a first polygon to obtain a second polygon, wherein the first polygon is determined based on the (i-1)th touch area, and i is an integer greater than 0 and less than or equal to N;
S330, converting the second polygon into a third polygon according to the vertices of the second polygon, wherein the vertices of the third polygon are a subset of the vertices of the second polygon;
S340, determining the coordinates of the target touch point according to the third polygon.
In S310, the N scanning directions may be, for example, light emitting directions of the light emitting components of the touch detection device, where the light emitting direction differs from one scanning direction to another; in other words, the parallel light beams emitted by a light emitting component at one angle form one scanning direction.
As shown in fig. 5a to 5d, the first light emitting component emits light perpendicular to the first receiving component (see fig. 5a) and at an oblique angle to the first receiving component (see fig. 5b), and the second light emitting component emits light perpendicular to the second receiving component (see fig. 5c) and at an oblique angle to the second receiving component (see fig. 5d). It will be appreciated that light emitted by a light emitting component toward the receiving component at one angle forms one scanning direction; that is, in fig. 5a to 5d the first light emitting component forms two scanning directions by emitting light at two angles, the second light emitting component forms two scanning directions by emitting light at two angles, and the N scanning directions include the 4 scanning directions of fig. 5a to 5d. Of course, these 4 scanning directions are only an example; in a practical application scenario there may be more or fewer scanning directions, for example, the first light emitting component may form 4 scanning directions and the second light emitting component may form 1 scanning direction, and so on.
The target touch point may be a location where a user touches in a human-machine interaction interface of the display. The number of the target touch points is not limited, for example, one target touch point is formed in the human-computer interaction interface when a user performs single-finger operation, and a plurality of target touch points are formed in the human-computer interaction interface simultaneously when the user performs multi-finger operation.
In S310 described above, the touch detection device may detect one touch area in each scanning direction. For example, in the scanning direction shown in fig. 6, the target touch point blocks several continuous light rays in the parallel light path, and the continuous blocked light rays form a touch area in the scanning direction, for example, light rays on two sides of the blocked light rays serve as boundaries of the touch area.
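Purely as an illustration of how a detected touch area could be represented for the later intersection step, the sketch below (in Python) turns a run of blocked beams in one scanning direction into the pair of half-planes bounded by the two neighbouring unblocked beams. The beam pitch, the angle convention and the half-plane representation are assumptions of this sketch; the patent only states that the rays on the two sides of the blocked rays serve as the boundaries of the touch area.

```python
import math

def touch_area_halfplanes(blocked, angle_rad, beam_pitch=1.0):
    """Represent one touch area as two half-planes of the form a*x + b*y <= c.

    `blocked` holds the indices of the consecutive beams interrupted by the
    target touch point; the unblocked beams on either side bound the area.
    Each beam is modeled as a line whose normal is (cos(angle), sin(angle)),
    offset from the origin by index * beam_pitch along that normal.
    """
    nx, ny = math.cos(angle_rad), math.sin(angle_rad)
    lo = (min(blocked) - 1) * beam_pitch   # boundary beam on one side
    hi = (max(blocked) + 1) * beam_pitch   # boundary beam on the other side
    # Points inside the touch area satisfy lo <= nx*x + ny*y <= hi.
    return ((-nx, -ny, -lo), (nx, ny, hi))
```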
In S320 described above, the ith touch area is one of the N touch areas. For the ith touch area, the processor may intersect it with the first polygon to obtain a second polygon, where the first polygon may be determined based on the (i-1)th touch area; for example, the first polygon may be the intersection of the (i-1)th touch area with the first polygon determined based on the (i-2)th touch area. As shown in fig. 7a, assuming that the 1st touch area and the 2nd touch area intersect to obtain a polygon A, then, referring to fig. 7b, the processor may intersect the 3rd touch area with the first polygon A to obtain a polygon B, and further, in fig. 7c, the processor may intersect the 4th touch area with the polygon B to obtain a polygon C.
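One way to carry out this intersection step is sketched below: the current polygon is clipped against the two half-planes that bound a touch area (the strip representation from the previous sketch), in the style of Sutherland-Hodgman clipping. The patent does not prescribe a particular clipping algorithm, so this is only an illustrative implementation.

```python
def clip_halfplane(poly, a, b, c):
    """Clip a polygon (list of (x, y) vertices) against the half-plane a*x + b*y <= c."""
    out = []
    n = len(poly)
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        p_in = a * p[0] + b * p[1] <= c
        q_in = a * q[0] + b * q[1] <= c
        if p_in:
            out.append(p)
        if p_in != q_in:  # the edge crosses the boundary: keep the crossing point
            t = (c - a * p[0] - b * p[1]) / (a * (q[0] - p[0]) + b * (q[1] - p[1]))
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def intersect_with_touch_area(poly, area):
    """Intersect the current polygon with one touch area given as a pair of half-planes."""
    for a, b, c in area:
        poly = clip_halfplane(poly, a, b, c)
        if not poly:          # the polygon and the touch area do not overlap
            break
    return poly
```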
Referring to fig. 8, assuming that the first light emitting component can form 60 scanning directions by emitting light rays in different directions, and the second light emitting component can likewise form 60 scanning directions, the polygon obtained by intersecting all of the touch areas can have as many as 240 edges, and the amount of computation required for the intersections is huge.
In order to reduce the operation complexity and improve the positioning speed of the touch point, in S330 the embodiment of the present application converts the second polygon into the third polygon, reducing the number of vertices and thereby reducing the operation complexity when the resulting polygon is intersected with the (i+1)th touch area detected in the (i+1)th scanning direction.
For example, the processor may take the second polygon as the third polygon when the number of vertices of the second polygon is smaller than the preset value, that is, the vertices of the second polygon are not reduced. Correspondingly, the processor may screen the vertices of the second polygon to obtain the third polygon when the number of vertices of the second polygon is greater than or equal to a preset value.
For example, assuming that the preset value is 7: the number of vertices of polygon A in fig. 7a is 4, so the vertices do not need to be screened before polygon A intersects the 3rd touch area; the number of vertices of polygon B in fig. 7b is 6, so its vertices likewise do not need to be screened before it intersects the 4th touch area; the number of vertices of polygon C in fig. 7c, however, is 7, so its vertices need to be screened before it intersects the 5th touch area.
Generally, the preset value is an integer greater than or equal to 3.
In some embodiments, the processor may screen the vertices of the second polygon at a preset interval to obtain the vertices of the third polygon. For example, the processor may delete one vertex out of every two, or one vertex out of every three, and so on.
In other embodiments, the processor determines the degree of position change between adjacent vertices, and when the degree of position change between adjacent vertices is less than a threshold, takes only one of the adjacent vertices as a vertex of the third polygon. For example, in fig. 9a, the degree of position change between adjacent vertices a and b is less than the threshold, so one of them (e.g., vertex b) is deleted, resulting in the polygon in fig. 9b (i.e., the third polygon).
Optionally, after the vertices of the second polygon have been screened in either of the two ways above, if the number of vertices is still greater than or equal to the preset value, the processor may screen the vertices again based on the same or a different screening mode, until the number of vertices of the third polygon is less than the preset value.
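The two screening strategies and the repeat-until rule above can be sketched as follows; the preset value, the interval and the change threshold used here are illustrative parameters, not values taken from the patent.

```python
import math

def screen_by_interval(vertices, keep_every=2):
    """Screen vertices at a preset interval, e.g. keep every second vertex."""
    return vertices[::keep_every]

def screen_by_position_change(vertices, change_threshold):
    """Keep only one of two adjacent vertices whose position change is below the threshold."""
    kept = [vertices[0]]
    for v in vertices[1:]:
        if math.hypot(v[0] - kept[-1][0], v[1] - kept[-1][1]) >= change_threshold:
            kept.append(v)
    return kept

def simplify(polygon, preset_value=7, change_threshold=1.0):
    """Convert the second polygon into the third polygon.

    Vertices are screened only when their number reaches the preset value,
    and screening is repeated until the count falls below it.
    """
    while len(polygon) >= preset_value:
        if len(polygon) <= 3:               # never reduce below a triangle
            break
        screened = screen_by_position_change(polygon, change_threshold)
        if len(screened) == len(polygon):   # no vertex was close enough: fall back
            screened = screen_by_interval(polygon)
        polygon = screened
    return polygon
```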
In this embodiment of the present application, if i is equal to N, the coordinates of the target touch point may be determined directly from the third polygon; if i is not equal to N, the third polygon continues to be intersected with the (i+1)th touch area, and so on until the Nth touch area has been intersected, after which the coordinates of the target touch point are obtained.
For example, the processor may take the third polygon as the first polygon and perform the processes of S320 to S340 described above again. That is, the processor intersects the (i+1)th touch area with this first polygon to obtain the second polygon corresponding to the (i+1)th touch area, converts that second polygon into the third polygon corresponding to the (i+1)th touch area according to its vertices, takes that third polygon as the first polygon corresponding to the (i+2)th touch area, and so on, until the Nth touch area is intersected with the first polygon corresponding to the Nth touch area to obtain the coordinates of the target touch point.
It will be appreciated that the processor completes one scan cycle after intersecting the N touch areas. For example, the processor may generate one coordinate of the target touch point for each scan period to obtain a trajectory of the user touch operation.
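Putting the pieces together, one scan period could look roughly like the sketch below, reusing intersect_with_touch_area and simplify from the earlier sketches. Starting from the whole screen as the initial polygon and reporting the vertex average of the final polygon as the touch coordinate are assumptions made for this sketch; the patent only states that the coordinates of the target touch point are determined from the third polygon.

```python
def locate_touch_point(touch_areas, screen_w, screen_h,
                       preset_value=7, change_threshold=1.0):
    """One scan period: intersect the N touch areas in turn, simplifying in between."""
    # Start from the whole screen; after the first intersection the result plays
    # the role of the first polygon for the next touch area.
    polygon = [(0.0, 0.0), (screen_w, 0.0), (screen_w, screen_h), (0.0, screen_h)]
    for area in touch_areas:                                   # area = pair of half-planes
        polygon = intersect_with_touch_area(polygon, area)     # second polygon
        if not polygon:
            return None                                        # areas do not overlap
        polygon = simplify(polygon, preset_value, change_threshold)  # third polygon
    # Report one coordinate per scan period, here the average of the remaining vertices.
    cx = sum(x for x, _ in polygon) / len(polygon)
    cy = sum(y for _, y in polygon) / len(polygon)
    return (cx, cy)
```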
In the embodiment of the application, the processor converts the second polygon obtained by intersecting the ith touch area and the first polygon into the third polygon with fewer vertices, so that simplification of the second polygon is realized, and then the coordinates of the target touch point are determined based on the third polygon, so that the operation complexity is reduced, the positioning speed of the touch point is improved, the response speed of touch operation is further improved, and the experience of touch operation is improved.
On the basis of any of the above embodiments, as shown in fig. 10, this embodiment may further include some or all of the following steps:
S410, initializing at least one of the number of scanning directions, the scanning angle and a preset value;
S420, the touch detection device performs detection for one scan period;
S430, the processor extracts the N touch areas in the N scanning directions;
S440, intersecting the ith touch area with the first polygon to obtain a second polygon;
S450, determining a touch control mode according to the rate of change of the position of the target touch point relative to a first touch point, wherein the position of the first touch point was detected in the previous scan period;
when the touch control mode is a fast-moving touch, the second polygon is converted into a third polygon according to the vertices of the second polygon; illustratively, S460 and S470 are performed when the touch control mode is a fast-moving touch, and S480 is performed when the touch control mode is not a fast-moving touch;
S460, determining whether the number of vertices of the second polygon is greater than or equal to the preset value;
if the number of vertices of the second polygon is greater than or equal to the preset value, S470 is executed; if it is less than the preset value, S480 is executed;
S470, screening the vertices of the second polygon to obtain the third polygon;
S480, taking the second polygon as the third polygon, that is, the vertices of the second polygon are not screened; in other words, the second polygon and the third polygon have the same vertices;
S490, determining whether all of the N touch areas have been intersected;
if all of the N touch areas have been intersected, S500 is executed; otherwise S440 is executed;
S500, performing at least one of ghost-point removal, trajectory tracking and smoothing on the target touch point based on the obtained trajectory of the touch operation;
S510, generating the coordinates of the target touch point and outputting them;
when the touch operation has ended, the process returns to S420 described above and continues; otherwise, the process ends.
Here, S420 to S440 and S460 to S480 are already described in the foregoing embodiments, and are not described here again.
It should be noted that when the user writes slowly, the requirement on the touch response speed of the display device is not high while the requirement on the recognition accuracy of the touch position is high, so the processor can obtain an accurate position of the target touch point by intersecting the N touch areas in sequence. In contrast, during fast writing or fast scribing, that is, a fast-moving touch, the requirement on the touch response speed is higher and the requirement on the recognition accuracy of the touch position is lower; in this case, after each touch area has been intersected with the previous polygon, the processor simplifies the resulting polygon so as to reduce the operation complexity of the intersection with the next touch area and improve the touch response speed.
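As a rough illustration of the mode decision in S450, the sketch below flags a fast-moving touch when the point detected in the current scan period has moved, relative to the point from the previous scan period, faster than a threshold. The speed metric and the threshold value are assumptions; the patent only refers to the rate of change of the position of the target touch point.

```python
import math

def is_fast_moving(current_point, previous_point, scan_period_s, speed_threshold=500.0):
    """Return True when polygon simplification should be applied (fast-moving touch)."""
    if previous_point is None:               # first scan period: no reference point yet
        return False
    dx = current_point[0] - previous_point[0]
    dy = current_point[1] - previous_point[1]
    speed = math.hypot(dx, dy) / scan_period_s
    return speed >= speed_threshold
```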
Fig. 11 is a schematic block diagram of a touch point positioning device according to an embodiment of the present application. As shown in fig. 11, the apparatus 600 includes:
a detection unit 610, configured to detect N touch areas of the target touch point in N scan directions, where N is an integer greater than 1;
a processing unit 620, configured to intersect the ith touch area with a first polygon to obtain a second polygon, where the first polygon is determined based on the ith-1 th touch area, and i is an integer less than or equal to N and greater than 0;
the processing unit 620 is further configured to convert the second polygon into a third polygon according to the vertices of the second polygon, wherein the vertices of the third polygon are a subset of the vertices of the second polygon;
the processing unit 620 is further configured to determine coordinates of the target touch point according to the third polygon.
In some embodiments, the processing unit 620 is specifically configured to: and when the number of the vertexes of the second polygon is larger than or equal to a preset value, screening the vertexes of the second polygon to obtain the third polygon.
In some embodiments, the processing unit 620 is specifically configured to: screening from the vertexes of the second polygon according to a preset interval to obtain the vertexes of the third polygon; or determining the intensity of the position change between the adjacent vertexes, and taking one of the adjacent vertexes as the vertex of the third polygon when the intensity of the position change between the adjacent vertexes is smaller than the intensity threshold value.
In some embodiments, the processing unit 620 is specifically configured to: and taking the third polygon as the first polygon, intersecting the first polygon with the (i+1) th touch area to obtain a second polygon, and circulating until the (N) th touch area is intersected to obtain the coordinate of the target touch point.
In some embodiments, the processing unit 620 is specifically configured to: and when the vertex number of the second polygon is smaller than the preset value, taking the second polygon as the third polygon.
In some embodiments, the processing unit 620 is further configured to: determining a touch control mode according to the position change rate of the target touch point relative to a first touch point, wherein the position of the first touch point is detected in the last scanning period; and when the touch control mode is a fast moving touch, converting the second polygon into the third polygon according to the vertex of the second polygon.
The touch point positioning device provided in the above embodiment may implement the technical solution of the above method embodiment, and its implementation principle and technical effects are similar, and are not repeated here.
The present application also provides a computer-readable storage medium, which may include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like, in which program code may be stored; in particular, the computer-readable storage medium stores program instructions for the methods in the above embodiments.
The present application also provides a program product comprising execution instructions stored in a readable storage medium. The at least one processor of the electronic device may read the execution instructions from the readable storage medium, the execution instructions being executed by the at least one processor to cause the electronic device to implement the methods provided by the various embodiments described above.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them; although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents, and such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A display device, characterized by comprising: a display and a processor; the display is provided with a touch detection device;
the touch detection device is used for respectively detecting N touch areas of a target touch point in N scanning directions, wherein N is an integer greater than 1;
the processor is configured to:
intersecting the ith touch area with a first polygon to obtain a second polygon, wherein the first polygon is determined based on the (i-1)th touch area, and i is an integer smaller than or equal to N and larger than 0;
converting the second polygon into a third polygon according to the vertexes of the second polygon, wherein the vertexes of the third polygon are subsets of the vertexes of the second polygon;
and determining the coordinates of the target touch point according to the third polygon.
2. The device of claim 1, wherein the processor is configured to:
and screening the vertexes of the second polygon to obtain the third polygon when the vertex number of the second polygon is larger than or equal to a preset value.
3. The device of claim 1 or 2, wherein the processor is configured to:
screening from the vertexes of the second polygon according to a preset interval to obtain the vertexes of the third polygon; or
determining the intensity of the position change between adjacent vertexes, and taking one of the adjacent vertexes as a vertex of the third polygon when the intensity of the position change between the adjacent vertexes is smaller than an intensity threshold value.
4. The device of claim 1 or 2, wherein the processor is configured to:
and taking the third polygon as the first polygon, intersecting the first polygon with the (i+1) th touch area to obtain a second polygon, and circulating until the (N) th touch area is intersected to obtain the coordinates of the target touch point.
5. The device of claim 1, wherein the processor is configured to:
and when the vertex number of the second polygon is smaller than the preset value, taking the second polygon as the third polygon.
6. The apparatus of claim 1 or 2, wherein the touch detection device comprises a light emitting assembly and a light receiving assembly, the light emitting assembly and the light receiving assembly being disposed on opposite sides of the display, respectively;
the light receiving assembly receives light rays emitted by the light emitting assembly in the N scanning directions so as to detect a touch area of a target touch point on the display.
7. The device of claim 1 or 2, wherein the processor is configured to:
determining a touch control mode according to the position change rate of the target touch point relative to a first touch point, wherein the position of the first touch point is detected in the last scanning period;
and when the touch control mode is a fast moving touch, converting the second polygon into the third polygon according to the vertex of the second polygon.
8. A touch point positioning method, comprising:
n touch areas of the target touch point are detected in N scanning directions respectively, wherein N is an integer greater than 1;
intersecting the ith touch area with a first polygon to obtain a second polygon, wherein the first polygon is determined based on the (i-1)th touch area, and i is an integer smaller than or equal to N and larger than 0;
converting the second polygon into a third polygon according to the vertexes of the second polygon, wherein the vertexes of the third polygon are subsets of the vertexes of the second polygon;
and determining the coordinates of the target touch point according to the third polygon.
9. The method of claim 8, wherein the converting the second polygon into a third polygon based on vertices of the second polygon comprises:
and screening the vertexes of the second polygon to obtain the third polygon when the vertex number of the second polygon is larger than or equal to a preset value.
10. A touch point positioning apparatus, comprising:
the detection unit is used for respectively detecting N touch areas of the target touch point in N scanning directions, wherein N is an integer larger than 1;
the processing unit is used for intersecting the ith touch area with a first polygon to obtain a second polygon, wherein the first polygon is determined based on the (i-1)th touch area, and i is an integer smaller than or equal to N and larger than 0;
the processing unit is further configured to convert the second polygon into a third polygon according to vertices of the second polygon, where vertices of the third polygon are a subset of vertices of the second polygon;
the processing unit is further configured to determine coordinates of the target touch point according to the third polygon.
CN202111592881.7A 2021-12-23 2021-12-23 Display device, touch point positioning method and device Pending CN116382453A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111592881.7A CN116382453A (en) 2021-12-23 2021-12-23 Display device, touch point positioning method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111592881.7A CN116382453A (en) 2021-12-23 2021-12-23 Display device, touch point positioning method and device

Publications (1)

Publication Number Publication Date
CN116382453A true CN116382453A (en) 2023-07-04

Family

ID=86979207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111592881.7A Pending CN116382453A (en) 2021-12-23 2021-12-23 Display device, touch point positioning method and device

Country Status (1)

Country Link
CN (1) CN116382453A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination