CN116501189A - Electronic device - Google Patents


Info

Publication number
CN116501189A
CN116501189A
Authority
CN
China
Prior art keywords
layer
mode
sensor
electronic device
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310060479.7A
Other languages
Chinese (zh)
Inventor
李淳奎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Display Co Ltd
Original Assignee
Samsung Display Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020220048045A external-priority patent/KR20230115191A/en
Application filed by Samsung Display Co Ltd filed Critical Samsung Display Co Ltd
Publication of CN116501189A publication Critical patent/CN116501189A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0412Digitisers structurally integrated in a display
    • G06F3/0416Control or interface arrangements specially adapted for digitisers
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0421Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means by interrupting or reflecting a light beam, e.g. optical touch-screen

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Investigating Or Analyzing Materials By The Use Of Electric Means (AREA)
  • Measurement Of Length, Angles, Or The Like Using Electric Or Magnetic Means (AREA)
  • Photometry And Measurement Of Optical Pulse Characteristics (AREA)

Abstract

An electronic device includes: a sensor layer including a plurality of first electrodes and a plurality of second electrodes; a sensor driving circuit driving the sensor layer and operating in the first mode or the second mode; and a main driving circuit controlling an operation of the sensor driving circuit. In the first mode, the sensor driving circuit outputs a plurality of first transmission signals to the plurality of first electrodes, respectively, receives a plurality of first sensing signals from the plurality of second electrodes, respectively, and outputs the plurality of first sensing signals to the main driving circuit. In the second mode, the sensor driving circuit outputs a plurality of second transmission signals to the plurality of first electrodes, respectively, receives a plurality of second sensing signals from the plurality of second electrodes, respectively, and provides coordinates based on the plurality of second sensing signals to the main driving circuit.

Description

Electronic device
Cross Reference to Related Applications
The present patent application claims priority from Korean Patent Application No. 10-2022-0011028 filed on January 25, 2022 and Korean Patent Application No. 10-2022-0048045 filed on April 19, 2022, the disclosures of which are incorporated herein by reference in their entireties.
Technical Field
Embodiments of the present disclosure described herein relate to an electronic device having a proximity sensing function.
Background
Multimedia electronic devices such as televisions, mobile phones, tablet computers, navigation systems, and game consoles can display images and support touch-based input methods that allow users to intuitively, conveniently, and easily input information or commands. For example, touch-based input methods enable users to provide input with fingers, a stylus, or an electronic pen in addition to conventional input devices such as buttons, a keyboard, and a mouse.
Disclosure of Invention
Embodiments of the present disclosure provide an electronic device including a sensor layer having a proximity sensing function.
According to an embodiment, an electronic device includes: a display layer configured to display an image; a display driving circuit configured to drive the display layer; a sensor layer disposed on the display layer and including a plurality of first electrodes and a plurality of second electrodes; a sensor drive circuit configured to drive the sensor layer and selectively operate in a first mode or a second mode different from the first mode; and a main driving circuit configured to control an operation of the display driving circuit and an operation of the sensor driving circuit. In the first mode, the sensor driving circuit outputs a plurality of first transmission signals to the plurality of first electrodes, respectively, receives a plurality of first sensing signals from the plurality of second electrodes, respectively, and outputs the plurality of first sensing signals to the main driving circuit. In the second mode, the sensor driving circuit outputs a plurality of second transmission signals to the plurality of first electrodes, respectively, receives a plurality of second sensing signals from the plurality of second electrodes, respectively, and supplies coordinates obtained based on the plurality of second sensing signals to the main driving circuit. The plurality of first transmission signals may be simultaneously output to the plurality of first electrodes.
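As a rough illustration of this mode split, the following Python sketch forwards raw sensing signals in the first mode and reduces them to input coordinates in the second mode. The function names and the simple peak-picking coordinate reduction are hypothetical assumptions; the disclosure does not specify how coordinates are computed.

```python
def first_mode_scan(sense_signals):
    # First mode: forward the sensing signals to the main driving circuit
    # as raw data; processing happens downstream.
    return list(sense_signals)

def second_mode_scan(sense_matrix):
    # Second mode: reduce the sensing signals to input coordinates locally
    # and report only the coordinates. Here we simply pick the electrode
    # crossing with the strongest response (an illustrative heuristic).
    best = (0, 0)
    best_val = float("-inf")
    for rx, row in enumerate(sense_matrix):
        for tx, val in enumerate(row):
            if val > best_val:
                best, best_val = (tx, rx), val
    return best
```

In this sketch the first mode keeps the sensor driving circuit "dumb" (raw data out), while the second mode is self-contained, mirroring the division of work between the two circuits described above.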
The plurality of first transmission signals may be in phase with each other.
The driving voltages of the plurality of first transmission signals may be equal to the driving voltages of the plurality of second transmission signals.
The first phase of one of the plurality of second transmission signals may be different from the second phases of the remaining second transmission signals of the plurality of second transmission signals.
The first mode may include a first sub-mode and a second sub-mode. In the first sub-mode, the sensor driving circuit may output the plurality of first sensing signals to the main driving circuit. In the second sub-mode, the sensor driving circuit may output a plurality of third transmission signals to the plurality of first electrodes, respectively, may receive a plurality of third sensing signals from the plurality of second electrodes, respectively, and may provide the main driving circuit with proximity coordinates obtained based on the plurality of third sensing signals.
The length of the operation period in the first sub-mode may be longer than the length of the operation period in the second sub-mode.
Each of the plurality of third transmission signals may have a frequency higher than that of each of the plurality of first transmission signals.
The sensor drive circuit may operate in the first sub-mode and then may continue to operate in the second sub-mode, or may operate in the second sub-mode and then may continue to operate in the first sub-mode.
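The sub-mode relations stated above (a longer operation period in the first sub-mode, higher-frequency transmission signals in the second sub-mode, and alternation between the two) can be captured in a small sketch. The numeric values below are purely illustrative assumptions; the disclosure states only the relations, not any figures.

```python
# Illustrative timing parameters; only the inequalities reflect the text
# (first sub-mode period longer, third transmission signals higher frequency).
FIRST_SUB_MODE = {"name": "proximity_sensing", "period_ms": 100, "tx_freq_khz": 100}
SECOND_SUB_MODE = {"name": "proximity_coordinate", "period_ms": 20, "tx_freq_khz": 300}

def next_sub_mode(current):
    # The sensor driving circuit alternates: it may run the first sub-mode
    # then continue in the second, or vice versa.
    return SECOND_SUB_MODE if current is FIRST_SUB_MODE else FIRST_SUB_MODE
```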
The main driving circuit may include: a noise model trained to predict noise included in the plurality of first sensing signals; and a decision model configured to determine whether an object is in proximity based on the noise predicted by the noise model and the plurality of first sensing signals.
The noise model may include: a plurality of noise prediction models configured to output a plurality of noise prediction values, respectively; and a selector configured to select one of the plurality of noise prediction values.
Each of the plurality of noise prediction models may include: a moving window configured to receive the plurality of first sensing signals for each of a plurality of frames; a moving average unit configured to calculate a moving average of the plurality of first sensing signals of each of the plurality of frames and output an intermediate signal; and a noise predictor configured to output a noise prediction value by using the intermediate signal and a trained algorithm.
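The pipeline described for each noise prediction model (a moving window receiving per-frame sensing signals, a moving average producing an intermediate signal, then a predictor) might be sketched as follows. The trained algorithm is replaced here with a placeholder that returns the moving average itself; the window size and all names are assumptions, not details from the disclosure.

```python
from collections import deque

class NoisePredictionModel:
    def __init__(self, window_size=4):
        # Moving window: holds the most recent frames of sensing data.
        self.window = deque(maxlen=window_size)

    def push_frame(self, frame_value):
        # Receive one frame's worth of first sensing signals (summarized
        # here as a single value per frame for simplicity).
        self.window.append(frame_value)

    def intermediate_signal(self):
        # Moving average unit: average over the windowed frames.
        return sum(self.window) / len(self.window)

    def predict_noise(self):
        # Placeholder for the trained algorithm: here the moving average
        # itself serves as the noise prediction value.
        return self.intermediate_signal()
```

A bank of such models with different window sizes, plus a selector choosing among their outputs, would correspond to the noise model structure described above.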
The display layer may include a base layer, a circuit layer disposed on the base layer, a light emitting device layer disposed on the circuit layer, and a packaging layer disposed on the light emitting device layer, and the sensor layer may be directly disposed on the display layer.
According to an embodiment, an electronic device includes: a sensor layer including a plurality of first electrodes and a plurality of second electrodes; a sensor driving circuit configured to drive the sensor layer and selectively operate in a proximity sensing mode or a touch sensing mode; and a main driving circuit configured to control an operation of the sensor driving circuit. In the proximity sensing mode, the sensor driving circuit outputs all of the plurality of first sensing signals received from the plurality of second electrodes, respectively, to the main driving circuit. In the touch sensing mode, the sensor driving circuit calculates input coordinates based on a plurality of second sensing signals received from the plurality of second electrodes, respectively, and outputs a coordinate signal including information about the input coordinates to the main driving circuit.
The main driving circuit includes: a noise model trained to predict noise included in the plurality of first sensing signals; and a decision model configured to determine whether an object is in proximity based on the noise predicted by the noise model and the plurality of first sensing signals.
In the proximity sensing mode, the sensor driving circuit may simultaneously output a plurality of first transmission signals to the plurality of first electrodes, respectively, and may receive the plurality of first sensing signals from the plurality of second electrodes, respectively, and the plurality of first transmission signals may be in phase.
In the touch sensing mode, the sensor driving circuit may simultaneously output a plurality of second transmission signals to the plurality of first electrodes, respectively, and may receive the plurality of second sensing signals from the plurality of second electrodes, respectively, and a first phase of one of the plurality of second transmission signals may be different from a second phase of the remaining second transmission signals.
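The phase relations in the two modes (all first transmission signals in phase for proximity sensing; one second transmission signal out of phase with the rest for touch sensing) can be written out as a short sketch. The phase values (0 and 180 degrees) are illustrative choices, not values from the disclosure.

```python
def proximity_mode_phases(n_electrodes):
    # Proximity sensing: all transmission signals are output simultaneously
    # and share the same phase.
    return [0] * n_electrodes

def touch_mode_phases(n_electrodes, selected):
    # Touch sensing: the selected first electrode is driven with a phase
    # different from the remaining electrodes (180 degrees here, an
    # assumed value).
    return [180 if i == selected else 0 for i in range(n_electrodes)]
```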
The driving voltages of the plurality of first transmission signals may be equal to the driving voltages of the plurality of second transmission signals.
The sensor driving circuit may selectively operate in one of the proximity sensing mode, the touch sensing mode, and the proximity coordinate sensing mode, and the sensor driving circuit may operate in the proximity sensing mode and then may continue to operate in the proximity coordinate sensing mode, or may operate in the proximity coordinate sensing mode and then may continue to operate in the proximity sensing mode.
In the proximity coordinate sensing mode, the sensor driving circuit may output a plurality of third transmission signals to the plurality of first electrodes, respectively, may receive a plurality of third sensing signals from the plurality of second electrodes, respectively, and may provide the proximity coordinate signals obtained based on the plurality of third sensing signals to the main driving circuit.
The length of the operation period in the proximity sensing mode may be longer than the length of the operation period in the proximity coordinate sensing mode, and the frequency of each of the plurality of third transmission signals may be higher than the frequency of each of the plurality of first transmission signals.
Drawings
The above and other objects and features of the present disclosure will become apparent by describing in detail embodiments thereof with reference to the accompanying drawings.
Fig. 1 is a perspective view illustrating an electronic device according to an embodiment of the present disclosure.
Fig. 2 is a diagram for describing an operation of an electronic device according to an embodiment of the present disclosure.
Fig. 3 is a schematic cross-sectional view of an electronic device according to an embodiment of the disclosure.
Fig. 4 is a cross-sectional view of an electronic device according to an embodiment of the present disclosure.
Fig. 5 is a block diagram illustrating a display layer and a display driving unit according to an embodiment of the present disclosure.
Fig. 6 is a block diagram illustrating a sensor layer and a sensor driving unit according to an embodiment of the present disclosure.
Fig. 7A is a diagram illustrating the operation of a sensor layer according to an embodiment of the present disclosure.
Fig. 7B is a diagram illustrating a first transmission signal according to an embodiment of the present disclosure.
Fig. 8 is a block diagram of a master drive unit according to an embodiment of the present disclosure.
Fig. 9 is a block diagram illustrating a noise prediction model according to an embodiment of the present disclosure.
Fig. 10A shows waveforms of event signals provided as raw data.
Fig. 10B shows waveforms of intermediate signals whose noise is removed by the noise prediction model.
Fig. 10C shows waveforms of the decision signals decided by the decision model.
Fig. 11A is a diagram illustrating an operation of a sensor layer according to an embodiment of the present disclosure.
Fig. 11B is a diagram illustrating a second transmission signal according to an embodiment of the present disclosure.
Fig. 12 is a block diagram illustrating a sensor layer and a sensor driving unit according to an embodiment of the present disclosure.
Fig. 13A is a diagram illustrating sub-modes included in a first mode according to an embodiment of the present disclosure.
Fig. 13B is a diagram illustrating sub-modes included in the first mode according to an embodiment of the present disclosure.
Fig. 14A is a diagram illustrating the operation of a sensor layer according to an embodiment of the present disclosure.
Fig. 14B illustrates a third transmission signal according to an embodiment of the present disclosure.
Detailed Description
In this specification, the expression that a first component (or region, layer, section, etc.) is "on," "connected to," or "coupled to" a second component means that the first component may be directly on, connected to, or coupled to the second component, or that a third component may be disposed between them.
Like reference numerals designate like components. In the drawings, the relative thicknesses, proportions, angles, and dimensions of components are drawn to scale for at least one embodiment of the present disclosure; however, these features may be varied within the scope of the present disclosure, and the inventive concept is not necessarily limited to the features shown. The expression "and/or" includes any and all combinations of one or more of the associated components.
Although the terms "first," "second," etc. may be used to describe various components, the components should not be construed as limited by these terms. The terms are used merely to distinguish one component from another. For example, a "first component" may be termed a "second component" and, similarly, a "second component" may be termed a "first component" without departing from the scope and spirit of the present invention. The singular is intended to include the plural unless the context clearly indicates otherwise.
Furthermore, terms such as "below," "beneath," "on," and "above" are used to describe the relationships of the components shown in the figures. These terms are relative concepts and are described based on the directions shown in the drawings.
It will be further understood that the terms "comprises," "comprising," "includes," "including," and the like, specify the presence of stated features, integers, steps, operations, elements, components, or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or groups thereof.
The terms "component" and "unit" mean a software component or a hardware component that performs a specified function. A hardware component may include, for example, a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). A software component may refer to executable code and/or data used by executable code in an addressable storage medium. Thus, software components may be object-oriented software components, class components, or task components, and may include processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, or variables.
Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
Fig. 1 is a perspective view illustrating an electronic device 1000 according to an embodiment of the present disclosure.
Referring to fig. 1, an electronic device 1000 may be a device that is activated according to an electrical signal. For example, the electronic device 1000 may be a mobile phone, a foldable mobile phone, a notebook computer, a television, a tablet computer, a car navigation system, a game console, or a wearable device, but the disclosure is not limited thereto. An example in which the electronic device 1000 is a smart phone is shown in fig. 1.
An active region 1000A and a peripheral region (or non-active region) 1000NA may be defined in the electronic device 1000. The electronic device 1000 may display an image through the active region 1000A. The active region 1000A may include a surface defined by a first direction DR1 and a second direction DR2. The peripheral region 1000NA may surround the active region 1000A.
The thickness direction of the electronic device 1000 may be parallel to a third direction DR3 intersecting the first direction DR1 and the second direction DR2. Accordingly, the front surfaces (or top surfaces) and rear surfaces (or bottom surfaces) of the members constituting the electronic device 1000 may be defined with respect to the third direction DR3.
Fig. 2 is a diagram for describing an operation of the electronic device 1000 according to an embodiment of the present disclosure.
Referring to fig. 2, the electronic device 1000 may include a display layer 100, a sensor layer 200, a display driving unit 100C (e.g., a driving circuit), a sensor driving unit 200C (e.g., a driving circuit), and a main driving unit 1000C (e.g., a driving circuit).
The display layer 100 may be a component that substantially generates an image. The display layer 100 may be a light emitting display layer. For example, the display layer 100 may be an organic light emitting display layer, an inorganic light emitting display layer, an organic-inorganic display layer, a quantum dot display layer, a micro-Light Emitting Diode (LED) display layer, or a nano-LED display layer.
The sensor layer 200 may be disposed on the display layer 100. The sensor layer 200 may sense an external input (e.g., external input 2000 or 3000) applied from the outside. The external input 2000 or 3000 may include any input means capable of providing a change in capacitance. For example, in addition to a passive-type input means such as a user's body, the sensor layer 200 may sense an input from an active-type input device that provides a driving signal.
The main driving unit 1000C may control the overall operation of the electronic device 1000. For example, the main driving unit 1000C may control the operations of the display driving unit 100C and the sensor driving unit 200C. The main driving unit 1000C may include at least one microprocessor. In addition, the main driving unit 1000C may further include a graphic processor. The main driving unit 1000C may be referred to as an "application processor", "central processing unit", or "main processor".
The display driving unit 100C may drive the display layer 100. The display driving unit 100C may receive the image data RGB and the control signals D-CS from the main driving unit 1000C. The control signal D-CS may include various signals. For example, the control signal D-CS may include an input vertical synchronization signal, an input horizontal synchronization signal, a master clock signal, a data enable signal, and the like. The display driving unit 100C may generate a vertical synchronization signal and a horizontal synchronization signal for controlling the timing of providing signals to the display layer 100 based on the control signal D-CS. For example, the provided signal may be based on the image data RGB.
The sensor driving unit 200C may drive the sensor layer 200. The sensor drive unit 200C may receive the control signal I-CS from the main drive unit 1000C. The control signal I-CS may include a mode decision signal determining a driving mode of the sensor driving unit 200C. The control signals I-CS may also include clock signals.
The sensor driving unit 200C may calculate input coordinate information based on a signal received from the sensor layer 200, and may provide a coordinate signal I-SS including the coordinate information to the main driving unit 1000C. The coordinate information may indicate the position on the display layer 100 touched by the user. The main driving unit 1000C performs an operation corresponding to the user input based on the coordinate signal I-SS. For example, the main driving unit 1000C may drive the display driving unit 100C so that a new application image is displayed on the display layer 100.
The sensor driving unit 200C may provide an event signal I-NS, generated by an object 3000 spaced apart from the surface 1000SF of the electronic device 1000, to the main driving unit 1000C based on the signals received from the sensor layer 200. The spaced-apart object 3000 may be referred to as a "hovering object". A user's ear approaching the electronic device 1000 is shown as an example of the spaced-apart object 3000, but the disclosure is not limited thereto.
The main driving unit 1000C may receive and process the event signal I-NS to calculate a processing result, and may determine that a proximity touch has occurred based on the processing result. For example, the main driving unit 1000C may predict noise of the event signal I-NS by using an artificial intelligence algorithm, and may then determine whether a proximity touch has occurred. That is, the event signal I-NS may be raw data. According to an embodiment of the present disclosure, the data processing for the event signal I-NS may be performed not by the sensor driving unit 200C but by the main driving unit 1000C after the event signal I-NS is provided to it. Accordingly, when the event signal I-NS is provided, the amount of data provided to the main driving unit 1000C may be larger than when the coordinate signal I-SS including only the coordinate information is provided.
The main driving unit 1000C may process the event signals I-NS by using an artificial intelligence algorithm, and may then determine whether the object 3000 is sensed. Then, the main driving unit 1000C may control the display driving unit 100C based on the determination result such that the brightness of the image to be displayed in the display layer 100 is reduced or such that the image is not displayed in the display layer 100. That is, the main driving unit 1000C may turn off the display layer 100.
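The decision flow described above (the main driving unit subtracting predicted noise from the raw event signals and deciding whether a proximity touch occurred) may be approximated as below. The subtract-and-threshold decision and the threshold value are simplifying assumptions standing in for the artificial intelligence algorithm.

```python
def decide_proximity(raw_signals, predicted_noise, threshold=10.0):
    # Subtract the predicted noise from each raw event-signal sample and
    # flag a proximity touch when any residual exceeds the threshold.
    # The threshold value is illustrative.
    residual = [s - n for s, n in zip(raw_signals, predicted_noise)]
    return max(residual) > threshold
```

On a positive decision, the main driving unit would then dim or turn off the display layer as described above.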
Further, in an embodiment, when it is determined that the object 3000 is sensed, the main driving unit 1000C may enter a sleep mode. The sensor layer 200 and the sensor driving unit 200C can maintain their operations even if the main driving unit 1000C enters the sleep mode. Accordingly, in the case where the object 3000 is separated from the surface 1000SF of the electronic device 1000, the sensor driving unit 200C may determine that an event has occurred, and the sensor driving unit 200C may provide a signal to the main driving unit 1000C to release the sleep mode of the main driving unit 1000C.
Fig. 3 is a schematic cross-sectional view of an electronic device 1000 according to an embodiment of the disclosure.
Referring to fig. 3, the electronic device 1000 may include a display layer 100 and a sensor layer 200 disposed on the display layer 100. The display layer 100 may be referred to as a "display panel". The sensor layer 200 may be referred to as a "sensor" or an "input sensing layer".
The display layer 100 may include a base layer 110, a circuit layer 120, a light emitting device layer 130, and an encapsulation layer 140.
The base layer 110 may be a member that provides a base surface on which the circuit layer 120 is disposed. The base layer 110 may be a glass substrate, a metal substrate, a polymer substrate, or the like. However, embodiments of the present disclosure are not limited thereto, and the base layer 110 may be an inorganic layer, an organic layer, or a composite material layer.
The base layer 110 may have a multi-layered structure. For example, the base layer 110 may include a first synthetic resin layer, a silicon oxide (SiOx) layer disposed on the first synthetic resin layer, an amorphous silicon (a-Si) layer disposed on the silicon oxide layer, and a second synthetic resin layer disposed on the amorphous silicon layer. The silicon oxide layer and the amorphous silicon layer may be collectively referred to as a "base barrier layer".
Each of the first synthetic resin layer and the second synthetic resin layer may include a polyimide-based resin. Further, each of the first synthetic resin layer and the second synthetic resin layer may include at least one of an acrylate-based resin, a methacrylate-based resin, a polyisoprene-based resin, a vinyl-based resin, an epoxy-based resin, a polyurethane-based resin, a cellulose-based resin, a siloxane-based resin, a polyamide-based resin, and a perylene-based resin. In this specification, the expression "X-based resin" means a resin that includes a functional group of "X".
The circuit layer 120 may be disposed on the base layer 110. The circuit layer 120 may include an insulating layer, a semiconductor pattern, a conductive pattern, a signal line, and the like. The insulating layer, the semiconductor layer, and the conductive layer may be formed on the base layer 110 through a coating or deposition process, and then the insulating layer, the semiconductor layer, and the conductive layer may be selectively patterned through a plurality of photolithography processes. Then, an insulating layer, a semiconductor pattern, a conductive pattern, and a signal line included in the circuit layer 120 may be formed.
The light emitting device layer 130 may be disposed on the circuit layer 120. The light emitting device layer 130 may include a light emitting device. For example, the light emitting device layer 130 may include an organic light emitting material, an inorganic light emitting material, an organic-inorganic light emitting material, quantum dots, quantum rods, micro-LEDs, or nano-LEDs.
The encapsulation layer 140 may be disposed on the light emitting device layer 130. The encapsulation layer 140 may protect the light emitting device layer 130 from foreign substances such as moisture, oxygen, and dust particles.
The sensor layer 200 may be disposed on the display layer 100. The sensor layer 200 may sense an external input applied from the outside. The external input may be a user input. The user input may include various types of external inputs such as a portion of the user's body, light, heat, pen, or pressure.
The sensor layer 200 may be formed on the display layer 100 through a continuous process. In this case, the sensor layer 200 may be said to be "directly disposed" on the display layer 100. Here, the expression "directly disposed" means that no third component is interposed between the sensor layer 200 and the display layer 100. That is, a separate adhesive member is not provided between the sensor layer 200 and the display layer 100.
Noise from the display layer 100 may be included in the signal provided from the sensor layer 200. For example, the change in noise included in the signal provided from the sensor layer 200 may be larger when the image displayed in the display layer 100 is changing than when the image is stationary. According to an embodiment of the present disclosure, the main driving unit 1000C (refer to fig. 2) predicts the noise of the signal provided from the sensor layer 200 by using an artificial intelligence algorithm and determines whether a proximity touch has occurred. Therefore, the accuracy of the proximity decision can be improved.
The electronic device 1000 may also include an anti-reflection layer and an optical layer (not shown) on the sensor layer 200. The anti-reflection layer may reduce the reflectivity of external light incident from the outside of the electronic device 1000. The optical layer may increase the front luminance of the electronic device 1000 by controlling the direction of light incident from the display layer 100.
Fig. 4 is a cross-sectional view of an electronic device 1000 according to an embodiment of the disclosure.
Referring to fig. 4, at least one inorganic layer is formed on the upper surface of the base layer 110. The inorganic layer may include at least one of aluminum oxide, titanium oxide, silicon nitride, silicon oxynitride, zirconium oxide, and hafnium oxide. The inorganic layer may be formed of a plurality of layers. The plurality of inorganic layers may constitute a barrier layer and/or a buffer layer. In this embodiment, the display layer 100 is shown to include a buffer layer BFL.
The buffer layer BFL may increase the coupling force between the base layer 110 and the semiconductor pattern. The buffer layer BFL may include at least one of silicon oxide, silicon nitride, and silicon oxynitride. For example, the buffer layer BFL may include a structure in which silicon oxide layers and silicon nitride layers are alternately stacked.
The semiconductor pattern may be disposed on the buffer layer BFL. The semiconductor pattern may include polysilicon. However, the present disclosure is not limited thereto, and the semiconductor pattern may include amorphous silicon, low temperature polysilicon, or an oxide semiconductor.
Fig. 4 shows only a part of the semiconductor pattern, and the semiconductor pattern may also be disposed in other regions. The semiconductor patterns may be arranged throughout the pixels according to a specific rule. The electrical properties of the semiconductor pattern may vary depending on whether it is doped. The semiconductor pattern may include a first region having a higher conductivity and a second region having a lower conductivity. The first region may be doped with an N-type dopant or a P-type dopant. A P-type transistor may include a doped region doped with a P-type dopant, and an N-type transistor may include a doped region doped with an N-type dopant. The second region may be an undoped region, or may be a region doped at a lower concentration than the first region.
The first region may have a conductivity greater than that of the second region, and the first region may substantially serve as an electrode or a signal line. The second region may substantially correspond to an active region (or channel) of the transistor. In other words, a portion of the semiconductor pattern may be an active region of the transistor, another portion thereof may be a source region or a drain region of the transistor, and another portion thereof may be a connection electrode or a connection signal line.
Each of the plurality of pixels may be represented by an equivalent circuit including seven transistors, one capacitor, and a light emitting device, and the equivalent circuit of the pixel may be modified in various forms. As an example, one transistor 100PC and one light emitting device 100PE included in one pixel are shown in fig. 4.
The source region SC, the active region AL, and the drain region DR of the transistor 100PC may be formed of a semiconductor pattern. In the cross-sectional view, the source region SC and the drain region DR may extend from the active region AL in opposite directions to each other. A portion of the connection signal line SCL formed of a semiconductor pattern is shown in fig. 4. Although not shown separately, the connection signal line SCL may also be connected to the drain region DR of the transistor 100PC in a plan view.
The first insulating layer 10 may be disposed on the buffer layer BFL. The first insulating layer 10 may commonly overlap a plurality of pixels, and may cover the semiconductor pattern. The first insulating layer 10 may be an inorganic layer and/or an organic layer, and may have a single-layer or multi-layer structure. The first insulating layer 10 may include at least one of aluminum oxide, titanium oxide, silicon nitride, silicon oxynitride, zirconium oxide, and hafnium oxide. In this embodiment, the first insulating layer 10 may be a single silicon oxide layer. In addition to the first insulating layer 10, the insulating layers of the circuit layer 120, which will be described later, may each be an inorganic layer and/or an organic layer, and may have a single-layer or multi-layer structure. The inorganic layer may include at least one of the above materials, but is not limited thereto.
The gate GT of the transistor 100PC is disposed on the first insulating layer 10. The gate GT may be a part of the metal pattern. The gate GT overlaps the active area AL. The gate electrode GT may be used as a mask in a process of doping a semiconductor pattern.
The second insulating layer 20 may be disposed on the first insulating layer 10 and may cover the gate electrode GT. The second insulating layer 20 may commonly overlap a plurality of pixels. The second insulating layer 20 may be an inorganic layer and/or an organic layer, and may have a single-layer or multi-layer structure. The second insulating layer 20 may include at least one of silicon oxide, silicon nitride, and silicon oxynitride. In this embodiment, the second insulating layer 20 may have a multilayer structure including a silicon oxide layer and a silicon nitride layer.
The third insulating layer 30 may be disposed on the second insulating layer 20. The third insulating layer 30 may have a single-layer or multi-layer structure. In this embodiment, the third insulating layer 30 may have a multilayer structure including a silicon oxide layer and a silicon nitride layer.
The first connection electrode CNE1 may be disposed on the third insulating layer 30. The first connection electrode CNE1 may be connected to the connection signal line SCL through a contact hole CNT-1 penetrating the first, second, and third insulating layers 10, 20, and 30.
The fourth insulating layer 40 may be disposed on the third insulating layer 30. The fourth insulating layer 40 may be a single silicon oxide layer. The fifth insulating layer 50 may be disposed on the fourth insulating layer 40. The fifth insulating layer 50 may be an organic layer.
The second connection electrode CNE2 may be disposed on the fifth insulating layer 50. The second connection electrode CNE2 may be connected to the first connection electrode CNE1 through a contact hole CNT-2 penetrating the fourth and fifth insulating layers 40 and 50.
The sixth insulating layer 60 may be disposed on the fifth insulating layer 50, and may cover the second connection electrode CNE2. The sixth insulating layer 60 may be an organic layer.
The light emitting device layer 130 may be disposed on the circuit layer 120. The light emitting device layer 130 may include the light emitting device 100PE. For example, the light emitting device layer 130 may include an organic light emitting material, an inorganic light emitting material, an organic-inorganic light emitting material, quantum dots, quantum rods, micro-LEDs, or nano-LEDs. Next, an example in which the light emitting device 100PE is an organic light emitting device will be described, but the light emitting device 100PE is not particularly limited thereto.
The light emitting device 100PE may include a first electrode (or anode electrode) AE, an emission layer EL, and a second electrode (or cathode electrode) CE.
The first electrode AE may be disposed on the sixth insulating layer 60. The first electrode AE may be connected to the second connection electrode CNE2 through a contact hole CNT-3 penetrating the sixth insulating layer 60.
The pixel defining layer 70 may be disposed on the sixth insulating layer 60, and may cover a portion of the first electrode AE. An opening 70-OP is defined in the pixel defining layer 70. The opening 70-OP of the pixel defining layer 70 exposes at least a portion of the first electrode AE.
The active region 1000A (refer to fig. 1) may include a light emitting region PXA and a non-light emitting region NPXA adjacent to the light emitting region PXA. The non-light emitting region NPXA may surround the light emitting region PXA. In the present embodiment, the light emitting region PXA is defined to correspond to a partial region of the first electrode AE exposed by the opening 70-OP.
The emission layer EL may be disposed on the first electrode AE. The emissive layer EL may be disposed in the region defined by the openings 70-OP. That is, the emission layer EL may be independently provided for each pixel. In the case where the emission layers EL are independently provided for the respective pixels, each of the plurality of emission layers EL may emit light of at least one color of blue, red, and green. However, the present disclosure is not limited thereto. For example, the emission layer EL may be commonly provided at a plurality of pixels. In this case, the emission layer EL may emit blue light or white light.
The second electrode CE may be disposed on the emission layer EL. The second electrode CE may be commonly and integrally disposed in a plurality of pixels.
The hole control layer may be interposed between the first electrode AE and the emission layer EL. The hole control layer may be commonly disposed in the light emitting region PXA and the non-light emitting region NPXA. The hole control layer may include a hole transport layer, and may further include a hole injection layer. The electron control layer may be interposed between the emission layer EL and the second electrode CE. The electron control layer may include an electron transport layer, and may further include an electron injection layer. The hole control layer and the electron control layer may be commonly formed at a plurality of pixels by using an open mask.
The encapsulation layer 140 may be disposed on the light emitting device layer 130. The encapsulation layer 140 may include an inorganic layer, an organic layer, and an inorganic layer sequentially stacked, but the layers constituting the encapsulation layer 140 are not limited thereto.
The inorganic layer may protect the light emitting device layer 130 from moisture and oxygen, and the organic layer may protect the light emitting device layer 130 from foreign substances such as dust particles. The inorganic layer may include a silicon nitride layer, a silicon oxynitride layer, a silicon oxide layer, a titanium oxide layer, or an aluminum oxide layer. The organic layer may include an acrylic organic layer, but is not limited thereto.
The sensor layer 200 may include a sensor base layer 201, a first conductive layer 202, a sensing insulating layer 203, a second conductive layer 204, and a cover insulating layer 205.
The sensor base layer 201 may be an inorganic layer including at least one of silicon nitride, silicon oxynitride, and silicon oxide. Alternatively, the sensor base layer 201 may be an organic layer including an epoxy-based resin, an acrylate-based resin, or an imide-based resin. The sensor base layer 201 may have a single-layer structure, or may have a multi-layer structure in which a plurality of layers are stacked in the third direction DR3.
Each of the first conductive layer 202 and the second conductive layer 204 may have a single-layer structure, or may have a multi-layer structure in which a plurality of layers are stacked in the third direction DR3.
Each of the first conductive layer 202 and the second conductive layer 204 having a single-layer structure may include a metal layer or a transparent conductive layer. The metal layer may include molybdenum, silver, titanium, copper, aluminum, or an alloy thereof. The transparent conductive layer may include a transparent conductive oxide such as indium tin oxide (ITO), indium zinc oxide (IZO), zinc oxide (ZnO), or indium zinc tin oxide (IZTO). In addition, the transparent conductive layer may include a conductive polymer such as poly(3,4-ethylenedioxythiophene) (PEDOT), a metal nanowire, or graphene.
Each of the first conductive layer 202 and the second conductive layer 204 having a multilayer structure may include a metal layer. The metal layer may have a three-layer structure of, for example, titanium/aluminum/titanium. Each of the first conductive layer 202 and the second conductive layer 204 having a multi-layered structure may include at least one metal layer and at least one transparent conductive layer.
At least one of the sensing insulation layer 203 and the cover insulation layer 205 may include an inorganic layer. The inorganic layer may include at least one of aluminum oxide, titanium oxide, silicon nitride, silicon oxynitride, zirconium oxide, and hafnium oxide.
At least one of the sensing insulation layer 203 and the cover insulation layer 205 may include an organic layer. The organic layer may include at least one of an acrylic resin, a methacrylic resin, a polyisoprene, a vinyl-based resin, an epoxy-based resin, a polyurethane-based resin, a cellulose-based resin, a siloxane-based resin, a polyimide-based resin, a polyamide-based resin, and a perylene-based resin.
Fig. 5 is a block diagram illustrating the display layer 100 and the display driving unit 100C according to an embodiment of the present disclosure.
Referring to fig. 5, the display layer 100 may include a plurality of scan lines SL1 to SLn (i.e., scan lines SL1, SL2, ..., SLn-1, and SLn, n being an integer of 2 or more), a plurality of data lines DL1 to DLm (i.e., data lines DL1, DL2, ..., and DLm, m being an integer of 2 or more), and a plurality of pixels PX. Each of the plurality of pixels PX is connected to a corresponding one of the plurality of data lines DL1 to DLm, and may be connected to a corresponding one of the plurality of scan lines SL1 to SLn. In an embodiment of the present disclosure, the display layer 100 may further include a light emission control line, and the display driving unit 100C may further include a light emission driving circuit that supplies a control signal to the light emission control line. However, the configuration of the display layer 100 is not particularly limited to the above-described configuration.
Each of the plurality of scan lines SL1 to SLn may extend in the first direction DR1, and the plurality of scan lines SL1 to SLn may be arranged to be spaced apart from each other in the second direction DR2. Each of the plurality of data lines DL1 to DLm may extend in the second direction DR2, and the plurality of data lines DL1 to DLm may be arranged to be spaced apart from each other in the first direction DR1.
The display driving unit 100C may include a signal control circuit 100C1, a scan driving circuit 100C2, and a data driving circuit 100C3.
The signal control circuit 100C1 may receive the image data RGB and the control signals D-CS from the main driving unit 1000C (refer to fig. 2). The control signal D-CS may include various signals. For example, the control signal D-CS may include an input vertical synchronization signal, an input horizontal synchronization signal, a master clock signal, a data enable signal, and the like.
The signal control circuit 100C1 may generate the first control signal CONT1 and the vertical synchronization signal Vsync based on the control signal D-CS, and may output the first control signal CONT1 and the vertical synchronization signal Vsync to the scan driving circuit 100C2. The vertical synchronization signal Vsync may be included in the first control signal CONT1.
The signal control circuit 100C1 may generate the second control signal CONT2 and the horizontal synchronization signal Hsync based on the control signal D-CS, and may output the second control signal CONT2 and the horizontal synchronization signal Hsync to the data driving circuit 100C3. The horizontal synchronization signal Hsync may be included in the second control signal CONT2.
Further, the signal control circuit 100C1 may supply the data driving circuit 100C3 with a driving signal DS obtained by processing the image data RGB to be suitable for the operating condition of the display layer 100. The first control signal CONT1 and the second control signal CONT2, which are signals for the operation of the scan driving circuit 100C2 and the data driving circuit 100C3, are not particularly limited.
The scan driving circuit 100C2 drives the plurality of scan lines SL1 to SLn in response to the first control signal CONT1 and the vertical synchronization signal Vsync. In the embodiment of the present disclosure, the scan driving circuit 100C2 may be formed using the same process as the circuit layer 120 (refer to fig. 4) in the display layer 100, but the present disclosure is not limited thereto. For example, the scan driving circuit 100C2 may be implemented as an Integrated Circuit (IC) so as to be electrically connected with the display layer 100, and the integrated circuit may be directly mounted in a given region of the display layer 100, or may be mounted on a separate printed circuit board in a chip-on-film (COF) manner.
The data driving circuit 100C3 may output gray scale voltages to the plurality of data lines DL1 to DLm in response to the second control signal CONT2, the horizontal synchronization signal Hsync, and the driving signal DS from the signal control circuit 100C1. The data driving circuit 100C3 may be implemented as an integrated circuit so as to be electrically connected with the display layer 100, and the integrated circuit may be directly mounted in a given region of the display layer 100 or may be mounted on a separate printed circuit board in a chip-on-film manner, but the present disclosure is not limited thereto. For example, the data driving circuit 100C3 may be formed using the same process as the circuit layer 120 (refer to fig. 4) in the display layer 100.
Fig. 6 is a block diagram illustrating a sensor layer 200 and a sensor driving unit 200C according to an embodiment of the present disclosure.
Referring to fig. 6, the sensor layer 200 may include a plurality of first electrodes 210 and a plurality of second electrodes 220. Each of the plurality of second electrodes 220 may intersect the plurality of first electrodes 210. The sensor layer 200 may further include a plurality of signal lines connected to the plurality of first electrodes 210 and the plurality of second electrodes 220.
Each of the plurality of first electrodes 210 may extend in the second direction DR2, and the plurality of first electrodes 210 may be arranged to be spaced apart from each other in the first direction DR1. Each of the plurality of second electrodes 220 may extend in the first direction DR1, and the plurality of second electrodes 220 may be arranged to be spaced apart from each other in the second direction DR2.
Each of the plurality of first electrodes 210 may include a sensing pattern 211 and a bridge pattern 212. The two sensing patterns 211 adjacent to each other may be electrically connected to each other through the two bridge patterns 212, but the present disclosure is not particularly limited thereto. The sensing pattern 211 may be included in the second conductive layer 204 (refer to fig. 4), and the bridge pattern 212 may be included in the first conductive layer 202 (refer to fig. 4).
Each of the plurality of second electrodes 220 may include a first portion 221 and a second portion 222. The first portion 221 and the second portion 222 may have an integrated shape and may be disposed in the same layer. For example, a single monolithic layer may include first portion 221 and second portion 222. For example, the first portion 221 and the second portion 222 may be included in the second conductive layer 204 (refer to fig. 4). The two bridge patterns 212 may be insulated from the second portion 222 and may intersect the second portion 222.
The sensor driving unit 200C may selectively operate in a first mode (or referred to as a "proximity sensing mode") or a second mode (or referred to as a "touch sensing mode") different from the first mode. The sensor driving unit 200C may receive a control signal I-CS from the main driving unit 1000C (refer to fig. 2). In the first mode, the sensor driving unit 200C may provide an event signal I-NS generated due to the spaced object 3000 (refer to fig. 2) to the main driving unit 1000C (refer to fig. 2). In the second mode, the sensor driving unit 200C may provide a coordinate signal I-SS to the main driving unit 1000C (refer to fig. 2). In an embodiment, the sensor driving unit 200C provides information (e.g., the event signal I-NS) about an object brought near the sensor layer 200 in the proximity sensing mode, and provides information (e.g., the coordinate signal I-SS) based on a touch of a finger, pen, stylus, or the like on the sensor layer 200 in the touch sensing mode.
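The two operating modes and their distinct output signals can be sketched as a simple routing function. This is a minimal Python illustration only; the enum and dictionary keys are assumptions for the sketch, not part of the embodiment.

```python
from enum import Enum

class SensorMode(Enum):
    """First mode (proximity sensing) and second mode (touch sensing)."""
    PROXIMITY = 1
    TOUCH = 2

def route_sensor_output(mode, payload):
    # In the proximity sensing mode the sensor driving unit reports the
    # event signal I-NS; in the touch sensing mode it reports the
    # coordinate signal I-SS. The dict keys are illustrative assumptions.
    if mode is SensorMode.PROXIMITY:
        return {"I-NS": payload}
    return {"I-SS": payload}

event = route_sensor_output(SensorMode.PROXIMITY, [0.2, 0.7])
coords = route_sensor_output(SensorMode.TOUCH, (120, 384))
```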
The sensor driving unit 200C may be implemented as an Integrated Circuit (IC) so as to be electrically connected with the sensor layer 200. The integrated circuit may be directly mounted in a given area of the sensor layer 200 or may be mounted on a separate printed circuit board in a chip-on-film (COF) manner.
The sensor driving unit 200C may include a sensor control circuit 200C1, a signal generation circuit 200C2, and an input detection circuit 200C3. The sensor control circuit 200C1 may control the operations of the signal generation circuit 200C2 and the input detection circuit 200C3 based on the control signal I-CS.
The signal generating circuit 200C2 may output the transmission signal TX to the first electrode 210 of the sensor layer 200. The input detection circuit 200C3 may receive the sensing signal RX from the sensor layer 200. For example, the input detection circuit 200C3 may receive the sensing signal RX from the second electrode 220. For example, the input detection circuit 200C3 may receive the sensing signal RX in response to the output of the transmission signal TX.
The input detection circuit 200C3 may convert an analog signal into a digital signal. For example, the input detection circuit 200C3 amplifies a received analog signal and then filters the amplified signal. The input detection circuit 200C3 may then convert the filtered signal into a digital signal.
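The amplify, filter, and digitize sequence of the input detection circuit can be sketched as follows. This is a minimal Python illustration in which the gain, the filter coefficient, and the quantization step are illustrative assumptions, not values from the embodiment.

```python
def amplify(samples, gain=4.0):
    """Scale the raw analog samples by a fixed gain (assumed value)."""
    return [s * gain for s in samples]

def low_pass(samples, alpha=0.5):
    """Simple exponential low-pass filter applied to the amplified signal."""
    out, prev = [], 0.0
    for s in samples:
        prev = alpha * s + (1.0 - alpha) * prev
        out.append(prev)
    return out

def quantize(samples, step=0.25):
    """Convert the filtered signal to digital codes with a uniform step."""
    return [round(s / step) for s in samples]

raw_rx = [0.1, 0.4, 0.3, 0.2]          # hypothetical sensing-signal samples
digital = quantize(low_pass(amplify(raw_rx)))
```

A real input detection circuit would implement these stages in mixed-signal hardware; the sketch only mirrors the order of operations described above.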
Fig. 7A is a diagram illustrating the operation of the sensor layer 200 according to an embodiment of the present disclosure. Fig. 7B is a diagram illustrating first transmission signals TXS1 and TXS2 to TXSx (x is an integer of 3 or more) according to an embodiment of the present disclosure. Fig. 7A illustrates an operation of the sensor layer 200 when the sensor driving unit 200C (refer to fig. 6) operates in the first mode MD1. The first mode MD1 may be referred to as a "proximity sensing mode MD1".
Referring to fig. 6, 7A and 7B, in the first mode MD1, the sensor driving unit 200C may output the plurality of first transmission signals TXS1 and TXS2 to TXSx to the plurality of first electrodes 210, respectively, and may receive the first sensing signals RXS1 and RXS2 to RXSy (y is an integer of 3 or more) from the plurality of second electrodes 220, respectively. The sensor driving unit 200C may output the first sensing signals RXS1 and RXS2 to RXSy to the main driving unit 1000C (refer to fig. 2) without correction. That is, the event signal I-NS may include the first sensing signals RXS1 and RXS2 to RXSy.
The plurality of first transmission signals TXS1 and TXS2 to TXSx may be simultaneously output to the plurality of first electrodes 210. In an embodiment, the plurality of first transmission signals TXS1 and TXS2 to TXSx are in phase with each other and have the same waveform.
According to an embodiment of the present disclosure, as the intensity of the signal for detecting the object near the electronic apparatus 1000 (refer to fig. 1) increases, the signal-to-noise ratio of the first sensing signals RXS1 and RXS2 to RXSy may increase. Thus, the proximity sensing recognition distance (or object recognizable height) may be increased. For example, the object identifiable height when the plurality of first transmission signals TXS1 and TXS2 to TXSx are used for proximity sensing is measured to be up to about 5mm higher than the object identifiable height when the plurality of second transmission signals TXF1 and TXF2 to TXFx are used for proximity sensing as shown in fig. 11B.
In the case of sensing a hovering object, the plurality of in-phase first transmission signals TXS1 and TXS2 to TXSx may be provided to all of the first electrodes 210, but the present disclosure is not particularly limited thereto. For example, the sensor layer 200 may be divided into a plurality of regions according to the shape of the touch sensor or the shape of the electronic device 1000. The in-phase first transmission signals may be supplied to the electrodes disposed in one of the plurality of regions. When the in-phase first transmission signals are supplied to only a partial region, the reporting rate can be increased.
Fig. 8 is a block diagram of a main driving unit 1000C according to an embodiment of the present disclosure. Fig. 9 is a block diagram illustrating a noise prediction model 1120 according to an embodiment of the present disclosure. Fig. 10A shows waveforms of the event signal I-NS provided as raw data. Fig. 10B shows waveforms of the intermediate signal I-MS whose noise is removed by the noise model. Fig. 10C shows waveforms of the decision signal DCS determined by the decision model.
Referring to fig. 7A and 8, an operation for predicting and removing noise included in the first sensing signals RXS1 and RXS2 to RXSy received in the proximity sensing mode is performed by the main driving unit 1000C. In the embodiment, the operation for predicting and removing noise is not performed by the sensor driving unit 200C (refer to fig. 6). For example, the main drive unit 1000C may include an artificial intelligence algorithm or may access an artificial intelligence algorithm. The main driving unit 1000C may predict and remove noise included in the first sensing signals RXS1 and RXS2 to RXSy by using an artificial intelligence algorithm, and thus may improve accuracy of the proximity decision.
For example, the main driving unit 1000C may include a noise model 1100 and a decision model 1200. The noise model 1100 may be trained to predict noise included in the plurality of first sensing signals RXS1 and RXS2 to RXSy. The decision model 1200 may determine whether an object is close based on the selective noise prediction value SNDC and the plurality of first sensing signals RXS1 and RXS2 to RXSy output from the noise model 1100, and may output a decision signal DCS. The selective noise prediction value SNDC may be noise predicted by the noise model 1100. For example, the decision model 1200 may determine whether an object is near the electronic device 1000 (refer to fig. 2) or whether an object is within a particular distance from the electronic device 1000. For example, the selective noise prediction value SNDC may be a value representing a noise level.
Noise model 1100 may include a noise experience indicator 1110, a plurality of noise prediction models 1120, and a selector 1130. Decision model 1200 may include QoS controller 1210, differentiator 1220, absolute strength indicator 1230, relative strength indicator 1240, and resulting decision model 1250.
The noise experience indicator 1110 (e.g., logic circuitry) may receive the event signal I-NS and may provide meta information MTI about the event signal I-NS to the decision model 1200. For example, the noise experience indicator 1110 may provide the QoS controller 1210 with meta-information MTI including varying levels of data of the event signals I-NS. The QoS controller 1210 (e.g., control circuit) may determine a noise level based on the meta information MTI; based on the determination result, the QoS controller 1210 may adjust a threshold of the result decision model 1250 or may provide a signal for changing logic of the result decision model 1250 to the result decision model 1250. Depending on the noise level, the logic of the result decision model 1250 may be changed. For example, the QoS controller 1210 may provide a signal for changing logic to the result decision model 1250, and the result decision model 1250 may receive the signal and may change the logic.
The event signals I-NS (i.e., the first sensing signals RXS1 and RXS2 to RXSy) may be provided to the plurality of noise prediction models 1120, respectively. In an embodiment, each of the plurality of noise prediction models 1120 includes an artificial neural network. The plurality of noise prediction models 1120 may output noise prediction values NDC spatially separated from each other based on the first sensing signals RXS1 and RXS2 to RXSy, respectively. For example, when the number of noise prediction models 1120 is "4", the first sensing signals RXS1 and RXS2 to RXSy may be sequentially divided into four groups, and the noise prediction values NDC corresponding thereto may be output.
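The sequential division of the first sensing signals into one group per noise prediction model can be sketched as follows. The grouping rule (equal-sized contiguous groups) is an assumption consistent with the four-model example above; the signal labels are placeholders.

```python
def split_into_groups(sensing_signals, num_models=4):
    """Sequentially divide the first sensing signals into one group per
    noise prediction model, as in the four-model example above."""
    groups = [[] for _ in range(num_models)]
    size = -(-len(sensing_signals) // num_models)  # ceiling division
    for i, signal in enumerate(sensing_signals):
        groups[i // size].append(signal)
    return groups

# Eight hypothetical sensing signals RXS1..RXS8 split across four models:
rxs = [f"RXS{i}" for i in range(1, 9)]
groups = split_into_groups(rxs)
```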
Referring to fig. 9, each of the plurality of noise prediction models 1120 may include a moving window 1121, a moving average unit 1122, and a noise predictor 1123.
A signal set corresponding to a plurality of frames may be input to the moving window 1121. For example, the first sensing signals RXS1 and RXS2 to RXSy (refer to fig. 7A) corresponding to the first to K-th frames (i.e., the first, second, ..., (K-1)-th, and K-th frames), respectively, may be input to the moving window 1121. That is, the event signals I-NS corresponding to the first to K-th frames, respectively, may be input to the moving window 1121.
The moving average unit 1122 (e.g., a logic circuit) may calculate a moving average of the event signals I-NS input in a time-series manner to generate a first correction value. For example, the intermediate signal I-MS output from the moving average unit 1122 may be a signal with reduced noise, and may correspond to the intermediate signal I-MS shown in fig. 10B. The intermediate signal I-MS may be a signal obtained by removing data outliers from the event signal I-NS. For example, the intermediate signal I-MS may be generated from the moving average.
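The moving average over the K most recent frames can be sketched as follows. The window length and the sample values are illustrative assumptions; the sketch only shows how an outlier frame is suppressed in the averaged output.

```python
from collections import deque

class MovingAverageUnit:
    """Sketch of the moving average over the K most recent frames;
    the window length K is an illustrative assumption."""

    def __init__(self, window_k=4):
        self.window = deque(maxlen=window_k)

    def update(self, frame_value):
        """Push the newest event-signal value and return the moving
        average, which suppresses data outliers in the raw signal."""
        self.window.append(frame_value)
        return sum(self.window) / len(self.window)

unit = MovingAverageUnit(window_k=4)
# A spike of 50.0 among steady 10.0 frames is smoothed in the output:
smoothed = [unit.update(v) for v in [10.0, 10.0, 50.0, 10.0]]
```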
The intermediate signal I-MS output from the moving average unit 1122 may be input to the noise predictor 1123 to which the artificial intelligence algorithm is applied, and the noise prediction value NDC may be output based on the intermediate signal I-MS. In an embodiment, noise predictor 1123 includes an artificial neural network for executing an artificial intelligence algorithm.
The noise predictor 1123 may predict noise of the sensor output by learning the noise of each user environment and display screen using an artificial intelligence algorithm. Deep learning based on an artificial neural network may be used as the artificial intelligence algorithm. For example, the neural network may include a convolutional neural network. Alternatively, a machine-learned regression algorithm may be used. An environment in which the temperature changes, an environment in which the humidity changes, an environment at a specific temperature, or an environment at a specific humidity may be regarded as a user environment. A display screen including a specific color, a display screen including a specific luminance, or a display screen including various colors may be used as the display screen.
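As a stand-in for the machine-learned regression alternative mentioned above, a one-variable least-squares fit can illustrate how a noise predictor might map a screen feature to an expected noise level. The feature choice (a screen-luminance value) and the training pairs are hypothetical; a real deployment could instead use a trained neural network.

```python
def fit_linear_noise_model(features, noise):
    """Least-squares fit of noise ≈ a * feature + b, standing in for the
    machine-learned regression alternative named above."""
    n = len(features)
    mean_x = sum(features) / n
    mean_y = sum(noise) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(features, noise))
    var = sum((x - mean_x) ** 2 for x in features)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical training pairs: screen-luminance feature -> observed noise.
a, b = fit_linear_noise_model([0.0, 1.0, 2.0], [0.5, 1.5, 2.5])
predicted_noise = a * 3.0 + b  # predicted noise at an unseen luminance
```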
The noise predictor 1123 may be trained using various methods. For example, a method in which the noise predictor 1123 is trained in advance and weights corresponding to training results are stored in the noise predictor 1123 or a method in which the noise predictor 1123 is trained in real time based on pieces of data in the moving window 1121 may be used.
Returning to fig. 8, the selector 1130 may receive noise prediction values NDC output from the plurality of noise prediction models 1120, respectively. That is, a plurality of noise prediction values NDC may be provided to the selector 1130. The selector 1130 may select one of the plurality of noise prediction values NDC as the selective noise prediction value SNDC, and may output the selected selective noise prediction value SNDC to the decision model 1200. For example, the selector 1130 may select a maximum value or a minimum value of the plurality of noise prediction values NDC or an intermediate value of the remaining noise prediction values except the maximum value and the minimum value as the selective noise prediction value SNDC. For example, the selector 1130 may be implemented using logic circuits and/or comparators. The value selected as the selective noise prediction value SNDC may be variously changed or modified, and is not limited to the above example.
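The selector's choice among the per-model noise prediction values can be sketched as follows. The mode names are assumptions; the default branch implements the "intermediate value of the remaining values except the maximum and minimum" example above.

```python
def select_noise_prediction(predictions, mode="median"):
    """Choose one representative selective noise prediction value SNDC
    from the per-model noise prediction values NDC."""
    ordered = sorted(predictions)
    if mode == "max":
        return ordered[-1]
    if mode == "min":
        return ordered[0]
    # Drop the maximum and minimum when possible, then take a middle value.
    trimmed = ordered[1:-1] or ordered
    return trimmed[len(trimmed) // 2]

ndc = [0.8, 0.2, 0.5, 0.4]  # hypothetical outputs of four models
sndc = select_noise_prediction(ndc)
```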
The differentiator 1220 may remove noise from the event signal I-NS by subtracting the selective noise prediction value SNDC from the event signal I-NS. The differentiator 1220 may provide a signal obtained by subtracting the selective noise prediction value SNDC from the event signal I-NS to the relative intensity indicator 1240 (e.g., a logic circuit).
The relative intensity indicator 1240 may determine whether proximity sensing has occurred based on a pure signal that is a result of subtracting the selective noise prediction value SNDC from the event signal I-NS, and may output a second signal F2 corresponding to the determination result to the result decision model 1250.
The absolute intensity indicator 1230 (e.g., logic circuitry) may receive the event signals I-NS. The absolute intensity indicator 1230 may process the event signal I-NS (i.e., unmodified raw data). The absolute intensity indicator 1230 may determine whether proximity sensing has occurred based on the event signals I-NS, and may output a first signal F1 corresponding to the determination result to the result decision model 1250.
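A sketch of the two indicator paths, assuming simple scalar thresholding (the threshold values are illustrative, not from the disclosure):

```python
def absolute_indicator(event_signal, threshold=5.0):
    # First signal F1: judge the raw (unmodified) event signal directly
    return event_signal > threshold

def relative_indicator(event_signal, sndc, threshold=5.0):
    # Second signal F2: subtract the selective noise prediction value
    # (the differentiator step), then judge the remaining pure signal
    pure = event_signal - sndc
    return pure > threshold
```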
The result decision model 1250 may finally determine whether the object is close based on the first signal F1 and the second signal F2, and may output the decision signal DCS. For example, if the result decision model 1250 determines that the object has been brought within a certain distance of the electronic device 1000 (refer to fig. 2) based on the first signal F1 and the second signal F2, the decision signal DCS is set to the first value. For example, if the result decision model 1250 determines that the object has not been brought within a certain distance, the decision signal DCS is set to a second value different from the first value. Referring to fig. 10C, the decision signal DCS may have a square wave shape.
The operation of the result decision model 1250 may be controlled by the QoS controller 1210. The result decision model 1250 may determine whether an object is close based on the first signal F1 and the second signal F2, depending on a threshold adjusted according to the noise level or on logic determined according to the noise level, and may output the decision signal DCS. For example, the result decision model 1250 may compare the first signal F1 and the second signal F2 with the adjusted threshold to determine whether the object is approaching. Alternatively, the result decision model 1250 may determine whether the object is close by combining the first signal F1 and the second signal F2 according to the determined logic.
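For illustration, one possible noise-dependent combining rule might look as follows; the specific rule and the noise limit are assumptions, not the disclosed logic:

```python
def decide_proximity(f1, f2, noise_level, noise_limit=0.5):
    # Sketch of the result decision: the noise level reported to the QoS
    # controller picks the combining logic (rule and limit are assumed)
    if noise_level > noise_limit:
        close = f2              # high noise: trust only the compensated F2
    else:
        close = f1 or f2        # low noise: either indicator suffices
    return 1 if close else 0    # decision signal DCS: first / second value
```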
An artificial intelligence algorithm may be applied to the result decision model 1250. For example, in order to finally determine whether an object is close, a decision tree or a Support Vector Machine (SVM) as a classification algorithm may be applied to the result decision model 1250. Performance may be improved when determining whether an object is close by using artificial intelligence algorithms, as compared to heuristic models that require a developer to preset parameters or thresholds.
Fig. 11A is a diagram illustrating the operation of the sensor layer 200 according to an embodiment of the present disclosure. Fig. 11B is a diagram illustrating second transmission signals TXF1 and TXF2 to TXFx according to an embodiment of the present disclosure. Fig. 11A illustrates an operation of the sensor layer 200 when the sensor driving unit 200C (refer to fig. 6) operates in the second mode MD2. The second mode MD2 may be referred to as a "touch sensing mode MD2".
Referring to fig. 6, 11A and 11B, in the second mode MD2, the sensor driving unit 200C may output the plurality of second transmission signals TXF1 and TXF2 to TXFx to the plurality of first electrodes 210, respectively, and may receive the second sensing signals RXF1 and RXF2 to RXFy from the plurality of second electrodes 220, respectively.
The sensor driving unit 200C may provide the coordinate signal I-SS obtained based on the plurality of second sensing signals RXF1 and RXF2 to RXFy to the main driving unit 1000C. The amount of data of the coordinate signal I-SS may be smaller than the amount of data of the event signal I-NS. In an embodiment, the size of the data within the coordinate signal I-SS is smaller than the size of the data within the event signal I-NS.
The second transmission signals TXF1 and TXF2 to TXFx provided for each of three frames FR1, FR2 and FRz (z is an integer of 3 or more) are shown in fig. 11B. An example is shown in which the third to z-1 th frames between the second frame FR2 and the z-th frame FRz are omitted.
In the first frame FR1, a first phase of one of the plurality of second transmission signals TXF1 and TXF2 to TXFx may be different from a second phase of the remaining second transmission signals TXF2 to TXFx. In the second frame FR2, the first phase of one of the plurality of second transmission signals TXF1 and TXF2 to TXFx may be different from the second phases of the remaining second transmission signals. In the z-th frame FRz, a first phase of one of the plurality of second transmission signals TXF1 and TXF2 to TXFx may be different from a second phase of the remaining second transmission signals. For example, the difference between the first phase and the second phase may be 180 degrees.
Even though the plurality of second transmission signals TXF1 and TXF2 to TXFx are simultaneously output to the plurality of first electrodes 210 in the second mode MD2, the phase of one of the plurality of second transmission signals TXF1 and TXF2 to TXFx may be different from the phases of the remaining second transmission signals in each frame. Accordingly, when decoding the plurality of second sensing signals RXF1 and RXF2 to RXFy, since a capacitance change value of each node (e.g., point) located between the first electrode 210 and the second electrode 220 can be detected, a two-dimensional (2D) coordinate value can be obtained. For example, the 2D coordinate values may include an X coordinate value and a Y coordinate value.
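The decoding idea can be sketched as a linear system: with one transmission signal phase-inverted per frame, the frame-by-frame samples of one receive electrode determine the capacitance at every node it crosses (a noiseless, illustrative model, not the disclosed decoder):

```python
import numpy as np

def encoding_matrix(x):
    # Signs of the x transmission signals across x frames: in frame i,
    # signal i is 180 degrees out of phase (-1) and the rest are +1
    return np.ones((x, x)) - 2.0 * np.eye(x)

def decode_capacitances(rx_per_frame):
    # Recover the capacitance change at each node seen by one receive
    # electrode from its per-frame samples by solving the linear system
    A = encoding_matrix(len(rx_per_frame))
    return np.linalg.solve(A, np.asarray(rx_per_frame, dtype=float))
```

Repeating the decode for every receive electrode yields the full grid of node values, i.e., the 2D coordinate map.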
Referring to fig. 7B and 11B, the driving voltages of the plurality of first transmission signals TXS1 and TXS2 to TXSx may be the same or substantially the same as the driving voltages of the plurality of second transmission signals TXF1 and TXF2 to TXFx. For example, the high level voltage VMH of the plurality of first transmission signals TXS1 and TXS2 to TXSx may be equal to the high level voltage VMH of the plurality of second transmission signals TXF1 and TXF2 to TXFx. In addition, the low level voltage VML of the plurality of first transmission signals TXS1 and TXS2 to TXSx may be equal to the low level voltage VML of the plurality of second transmission signals TXF1 and TXF2 to TXFx.
In accordance with embodiments of the present disclosure, in order to improve the sensitivity of proximity sensing, the plurality of first transmission signals TXS1 and TXS2 to TXSx may be provided by using the voltages used in touch sensing, without using separate higher voltages. Instead, when the in-phase first transmission signals TXS1 and TXS2 to TXSx are simultaneously supplied to the first electrodes 210, the intensity of a signal for detecting an object near the electronic device 1000 (refer to fig. 1) may be increased, and thus the proximity sensing sensitivity may be further improved.
The period WL1 of each of the plurality of first transmission signals TXS1 and TXS2 to TXSx may be longer than the period WL2 of each of the plurality of second transmission signals TXF1 and TXF2 to TXFx. That is, the frequency of each of the plurality of first transmission signals TXS1 and TXS2 to TXSx may be lower than the frequency of each of the plurality of second transmission signals TXF1 and TXF2 to TXFx. Because the frequencies of the plurality of first transmission signals TXS1 and TXS2 to TXSx in the proximity sensing mode are relatively low, the absolute values of the digital signals converted from the first sensing signals RXS1 and RXS2 to RXSy sensed by the sensor layer 200 may become large. Accordingly, proximity sensing sensitivity in the proximity sensing mode can be improved.
That is, according to embodiments of the present disclosure, waveforms of the plurality of first transmission signals TXS1 and TXS2 to TXSx may be identical in magnitude to waveforms of the plurality of second transmission signals TXF1 and TXF2 to TXFx, and the plurality of first transmission signals TXS1 and TXS2 to TXSx may be different in frequency and period from the plurality of second transmission signals TXF1 and TXF2 to TXFx.
Fig. 12 is a block diagram illustrating a sensor layer 200 and a sensor driving unit 200C according to an embodiment of the present disclosure. In the description of fig. 12, differences from fig. 6 will be described, and the same components are denoted by the same reference numerals, and thus, additional description will be omitted to avoid redundancy.
Referring to fig. 12, the sensor driving unit 200C may selectively operate in a first sub-mode (also referred to as a "proximity sensing mode"), a second sub-mode (also referred to as a "proximity coordinate sensing mode"), or a second mode (also referred to as a "touch sensing mode").
The sensor driving unit 200C may receive a control signal I-CS from the main driving unit 1000C (refer to fig. 2). In the first sub-mode, the sensor driving unit 200C may provide the event signal I-NS generated by the spaced-apart object 3000 (refer to fig. 2) to the main driving unit 1000C (refer to fig. 2). In the second sub-mode, the sensor driving unit 200C may provide the proximity coordinate signal I-PSS generated by the spaced-apart object 3000 to the main driving unit 1000C (refer to fig. 2). In the second mode, the sensor driving unit 200C may provide the coordinate signal I-SS to the main driving unit 1000C (refer to fig. 2).
Fig. 13A is a diagram illustrating sub-modes SMD1 and SMD2 included in a first mode MD1a according to an embodiment of the present disclosure.
Referring to fig. 12 and 13A, the first mode MD1a may include a first sub-mode SMD1 and a second sub-mode SMD2. In the first sub-mode SMD1, the sensor driving unit 200C may output the event signal I-NS to the main driving unit 1000C (refer to fig. 2). In the second sub-mode SMD2, the sensor driving unit 200C may output the proximity coordinate signal I-PSS to the main driving unit 1000C (refer to fig. 2).
The first sub-mode SMD1 may be the same or substantially the same as the first mode MD1 described with reference to figs. 7A and 7B. For proximity sensing, it may be sufficient to determine only whether a large-area conductor is close, so the first sub-mode SMD1 alone (i.e., the first mode MD1 described with reference to figs. 7A and 7B) may suffice. However, when the first mode MD1a further includes the second sub-mode SMD2 for sensing coordinate information during proximity sensing, various functions may additionally be implemented by using the second sub-mode SMD2, and thus various demands of users may be satisfied.
In the first mode MD1a, the sensor layer 200 and the sensor driving unit 200C may operate in the second sub-mode SMD2, and then may continue to operate in the first sub-mode SMD1. In the case of the first mode MD1a, the length of the operation period in the first sub-mode SMD1 may be longer than the length of the operation period in the second sub-mode SMD2. For example, the length of the operation period of the first sub-mode SMD1 may be about four times as long as that of the second sub-mode SMD2, but the present disclosure is not particularly limited thereto. For example, assuming that the frame rate in the first mode MD1a is 60 Hz (hertz), the first sub-mode SMD1 may be allocated about 12 ms (milliseconds) of the 16.7 ms corresponding to one period, and the second sub-mode SMD2 may be allocated about 4 ms of the 16.7 ms corresponding to one period.
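The time budget can be computed directly; a 3:1 split is assumed here to match the ~12 ms / ~4 ms example given above:

```python
def sub_mode_budgets(frame_rate_hz=60.0, ratio=3.0):
    # Split one sensing period between SMD1 and SMD2.  At 60 Hz the period
    # is ~16.7 ms; the 3:1 ratio is an assumption drawn from the example
    period_ms = 1000.0 / frame_rate_hz
    smd2_ms = period_ms / (ratio + 1.0)
    return period_ms - smd2_ms, smd2_ms   # (SMD1 ms, SMD2 ms)
```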
Fig. 13B is a diagram illustrating sub-modes SMD1 and SMD2 included in the first mode MD1B according to an embodiment of the present disclosure.
Referring to fig. 13B, in the first mode MD1b, the sensor layer 200 and the sensor driving unit 200C may operate in the first sub-mode SMD1, and then may continue to operate in the second sub-mode SMD2. In the case of the first mode MD1b, the length of the operation period in the first sub-mode SMD1 may be longer than the length of the operation period in the second sub-mode SMD2.
Fig. 14A is a diagram illustrating the operation of the sensor layer 200 according to an embodiment of the present disclosure. Fig. 14B is a diagram illustrating third transmission signals TXP1 and TXP2 to TXPx according to an embodiment of the present disclosure. Fig. 14A illustrates an operation of the sensor layer 200 when the sensor driving unit 200C (refer to fig. 12) operates in the second sub-mode SMD2. The second sub-mode SMD2 may be referred to as the "proximity coordinate sensing mode SMD2".
Referring to fig. 12, 14A and 14B, in the second sub-mode SMD2, the sensor driving unit 200C may output the plurality of third transmission signals TXP1 and TXP2 to TXPx to the plurality of first electrodes 210, respectively, and may receive the plurality of third sensing signals RXP1 and RXP2 to RXPy from the plurality of second electrodes 220, respectively. The sensor driving unit 200C may provide the proximity coordinate signal I-PSS obtained based on the plurality of third sensing signals RXP1 and RXP2 to RXPy to the main driving unit 1000C (refer to fig. 2).
A plurality of third transmission signals TXP1 and TXP2 to TXPx provided for each of the three frames PFR1, PFR2 and PFRz are shown in fig. 14B. An example is shown in which the third to z-1 th frames between the second frame PFR2 and the z-th frame PFRz are omitted.
In the first frame PFR1, a first phase of one of the plurality of third transmission signals TXP1 and TXP2 to TXPx may be different from a second phase of the remaining third transmission signals TXP2 to TXPx. In the second frame PFR2, a first phase of one of the plurality of third transmission signals TXP1 and TXP2 to TXPx may be different from a second phase of the remaining third transmission signals. In the z-th frame PFRz, a first phase of one of the plurality of third transmission signals TXP1 and TXP2 to TXPx may be different from a second phase of the remaining third transmission signals. For example, the difference between the first phase and the second phase may be 180 degrees.
Even though the plurality of third transmission signals TXP1 and TXP2 to TXPx are simultaneously output to the plurality of first electrodes 210 in the second sub-mode SMD2, the phase of one of the plurality of third transmission signals TXP1 and TXP2 to TXPx may be different from the phase of the remaining third transmission signals in each frame. Accordingly, when decoding the plurality of third sensing signals RXP1 and RXP2 to RXPy, since a capacitance variation value of each node formed between the first electrode 210 and the second electrode 220 can be detected, a two-dimensional (2D) coordinate value can be obtained. For example, the 2D coordinate values may include x coordinate values and y coordinate values.
Referring to fig. 7B and 14B, the driving voltages of the plurality of first transmission signals TXS1 and TXS2 to TXSx may be the same or substantially the same as the driving voltages of the plurality of third transmission signals TXP1 and TXP2 to TXPx.
To improve accuracy of determining whether proximity sensing has occurred, frequencies of the plurality of first transmission signals TXS1 and TXS2 to TXSx in the first sub-mode SMD1 (refer to fig. 13A) may be lower than frequencies of the plurality of third transmission signals TXP1 and TXP2 to TXPx in the second sub-mode SMD 2. The period WL1 of each of the plurality of first transmission signals TXS1 and TXS2 to TXSx may be longer than the period WL3 of each of the plurality of third transmission signals TXP1 and TXP2 to TXPx. The period WL1 may be about four times the period WL3. However, it may be sufficient that the period WL1 is longer than the period WL3, and the present disclosure is not particularly limited thereto.
That is, according to embodiments of the present disclosure, waveforms of the plurality of first transmission signals TXS1 and TXS2 to TXSx may be identical in magnitude to waveforms of the plurality of third transmission signals TXP1 and TXP2 to TXPx, and the plurality of first transmission signals TXS1 and TXS2 to TXSx may be different in frequency and period from the plurality of third transmission signals TXP1 and TXP2 to TXPx.
Referring to fig. 11B and 14B, the period WL2 of each of the plurality of second transmission signals TXF1 and TXF2 to TXFx may be the same or substantially the same as the period WL3 of each of the plurality of third transmission signals TXP1 and TXP2 to TXPx. In embodiments of the present disclosure, the waveforms of the plurality of second transmission signals TXF1 and TXF2 to TXFx may be the same or substantially the same as the waveforms of the plurality of third transmission signals TXP1 and TXP2 to TXPx. In an embodiment, the frame rate when operating in the second mode MD2 (refer to fig. 11A) is different from the frame rate when operating in the second sub-mode SMD2.
According to the above description, in the proximity sensing mode, the sensor driving unit may simultaneously supply the in-phase first transmission signals to the plurality of first electrodes of the sensor layer, respectively. In this case, as the strength of the proximity signal increases, the signal-to-noise ratio may increase. Thus, the proximity sensing recognition distance (or the height at which an object is recognizable) may be increased.
Further, because noise learning for each user environment and display screen is performed by using artificial intelligence technology, it may be possible to predict and remove the noise of a sensing signal. Furthermore, artificial intelligence techniques may also be used when determining whether an object is close based on the sensed signal and the predicted noise. Therefore, the performance (accuracy) of the proximity decision of the electronic device can be improved.
Although the present disclosure has been described with reference to the embodiments thereof, it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the disclosure as set forth in the following claims.

Claims (12)

1. An electronic device, wherein the electronic device comprises:
a display layer configured to display an image;
A display driving circuit configured to drive the display layer;
a sensor layer disposed on the display layer and including a plurality of first electrodes and a plurality of second electrodes;
a sensor drive circuit configured to drive the sensor layer and selectively operate in a first mode or a second mode different from the first mode; and
a main driving circuit configured to control an operation of the display driving circuit and an operation of the sensor driving circuit,
wherein in the first mode, the sensor driving circuit outputs a plurality of first transmission signals to the plurality of first electrodes, respectively, receives a plurality of first sensing signals from the plurality of second electrodes, respectively, and outputs the plurality of first sensing signals to the main driving circuit,
wherein in the second mode, the sensor driving circuit outputs a plurality of second transmission signals to the plurality of first electrodes, respectively, receives a plurality of second sensing signals from the plurality of second electrodes, respectively, and supplies the main driving circuit with coordinates obtained based on the plurality of second sensing signals, and
wherein the plurality of first transmission signals are simultaneously output to the plurality of first electrodes.
2. The electronic device of claim 1, wherein the plurality of first transmission signals are in phase with each other.
3. The electronic device of claim 1, wherein the driving voltages of the plurality of first transmission signals are equal to the driving voltages of the plurality of second transmission signals.
4. The electronic device of claim 1, wherein a first phase of one of the plurality of second transmission signals is different from a second phase of a remaining of the plurality of second transmission signals.
5. The electronic device of claim 1, wherein the first mode comprises a first sub-mode and a second sub-mode,
wherein in the first sub-mode, the sensor driving circuit outputs the plurality of first sensing signals to the main driving circuit, and
wherein in the second sub-mode, the sensor driving circuit outputs a plurality of third transmission signals to the plurality of first electrodes, respectively, receives a plurality of third sensing signals from the plurality of second electrodes, respectively, and supplies the proximity coordinates obtained based on the plurality of third sensing signals to the main driving circuit.
6. The electronic device of claim 5, wherein a length of the operational time period in the first sub-mode is longer than a length of the operational time period in the second sub-mode.
7. The electronic device of claim 5, wherein each third transmission signal of the plurality of third transmission signals has a higher frequency than each first transmission signal of the plurality of first transmission signals.
8. The electronic device of claim 5, wherein the sensor drive circuit operates in the first sub-mode and then continues to operate in the second sub-mode, or operates in the second sub-mode and then continues to operate in the first sub-mode.
9. The electronic device of claim 1, wherein the main drive circuit comprises:
a noise model trained to predict noise included in the plurality of first sensing signals; and
a decision model configured to determine whether an object is in proximity based on the noise predicted by the noise model and the plurality of first sensing signals.
10. The electronic device of claim 9, wherein the noise model comprises:
A plurality of noise prediction models configured to output a plurality of noise prediction values, respectively; and
and a selector configured to select one of the plurality of noise prediction values.
11. The electronic device of claim 10, wherein each of the plurality of noise prediction models comprises:
a moving window configured to receive the plurality of first sensing signals for each of a plurality of frames;
a moving average unit configured to calculate a moving average of the plurality of first sensing signals of each of the plurality of frames and output an intermediate signal; and
a noise predictor configured to output a noise prediction value by using the intermediate signal and a trained algorithm.
12. The electronic device of claim 1, wherein the display layer comprises a base layer, a circuit layer disposed on the base layer, a light emitting device layer disposed on the circuit layer, and an encapsulation layer disposed on the light emitting device layer, and
wherein the sensor layer is disposed directly on the display layer.
CN202310060479.7A — Electronic device — priority date 2022-01-25, filing date 2023-01-19, status Pending, publication CN116501189A

Applications Claiming Priority (3)

- KR10-2022-0011028 — priority date 2022-01-25
- KR1020220048045A (published as KR20230115191A) — priority date 2022-01-25, filing date 2022-04-19 — Electronic device
- KR10-2022-0048045 — filing date 2022-04-19

Publications (1)

- CN116501189A — published 2023-07-28

Family ID: 87329167

Family Applications (1)

- CN202310060479.7A — Electronic device — Pending

Country Status (1)

- CN: CN116501189A


Legal Events

- PB01: Publication