WO2021227980A1 - Sound-emitting device and driving method thereof, display panel and display device - Google Patents
Sound-emitting device and driving method thereof, display panel and display device
- Publication number
- WO2021227980A1 (application PCT/CN2021/092332)
- Authority
- WO
- WIPO (PCT)
Classifications
- G06F1/1605 — Multimedia displays, e.g. with integrated or attached speakers, cameras, microphones
- H04R3/12 — Circuits for distributing signals to two or more loudspeakers
- H04R1/323 — Desired directional characteristic only, for loudspeakers
- G06F1/3231 — Monitoring the presence, absence or movement of users
- G06F1/3278 — Power saving in modem or I/O interface
- G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/0414 — Digitisers using force sensing means to determine a position
- G06F3/162 — Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
- H04R1/028 — Casings/mountings associated with devices performing functions other than acoustics
- H04R1/403 — Desired directional characteristic by combining a number of identical transducers (loudspeakers)
- H04R1/406 — Desired directional characteristic by combining a number of identical transducers (microphones)
- H04R17/00 — Piezoelectric transducers; electrostrictive transducers
- H04R29/002 — Monitoring/testing arrangements for loudspeaker arrays
- H04R2201/003 — MEMS transducers or their use
- H04R2201/028 — Structural combinations of loudspeakers with built-in power amplifiers
- H04R2201/401 — 2D or 3D arrays of transducers
- H04R2203/12 — Beamforming aspects for stereophonic sound reproduction with loudspeaker arrays
- H04R2217/03 — Parametric transducers (acoustic demodulation of amplitude-modulated ultrasonic waves)
- H04R2430/20 — Processing of array output signals for a desired directivity characteristic
- H04R2499/15 — Transducers incorporated in visual displaying devices
- H04S7/303 — Tracking of listener position or orientation
Definitions
- the present disclosure belongs to the field of sound generation technology, and specifically relates to a sound generation device and a driving method thereof, a display panel and a display device.
- Sound-emitting devices are used in many fields; for example, smart display devices are commonly equipped with one.
- Smart display devices can realize human-computer interaction based on pressure, texture, touch, and so on, but existing sound-emitting devices have only a single sounding function, with a fixed sound direction and mode, so a person receiving sound from the device cannot get a good listening experience.
- the present disclosure provides a sound emitting device, including: a recognition unit, a directional sound emitting unit, and a control unit; wherein,
- the recognition unit is connected to the control unit, and is configured to obtain relevant information of a person within a preset range, and send the obtained relevant information of the person to the control unit;
- the control unit is connected to the directional sounding unit, and is configured to obtain a corresponding audio signal according to the acquired related information of the person, and control the directional sounding unit to emit sound waves according to the audio signal.
- the identification unit includes:
- the number recognition module is configured to obtain information about the number of people within the preset range
- the position recognition module is configured to obtain position information of each person relative to the sound emitting device.
- the directional sound emission unit includes a sound emission sensor array and an audio processing module, and the audio processing module is configured to convert the audio signal into a driving signal to drive the sound emission sensor array to emit sound.
- the acoustic sensor array includes a piezoelectric transducer array.
- the piezoelectric transducer array includes a plurality of piezoelectric sensors
- the piezoelectric transducer array includes a first substrate, an elastic film layer located on one side of the first substrate, a first electrode located on a side of the elastic film layer away from the first substrate, a piezoelectric film located on a side of the first electrode away from the first substrate, and a second electrode located on a side of the piezoelectric film away from the first substrate.
- the first electrode includes a plurality of sub-electrodes, the sub-electrodes are distributed in an array on the side of the elastic film layer away from the first substrate, and each of the sub-electrodes corresponds to a piezoelectric sensor;
- the first substrate has a plurality of openings, the openings correspond to the sub-electrodes one-to-one, and the orthographic projection of each sub-electrode on the first substrate falls within the orthographic projection of the corresponding opening on the first substrate.
- the elastic film layer includes a polyimide film.
- the sound sensor array includes a plurality of sound sensors, and the plurality of sound sensors are divided into a plurality of sensor groups, and each of the sensor groups correspondingly receives a drive signal.
- the multiple sounding sensors are arranged in an array, and the sounding sensors located in the same column or the same row are connected in series to form one sensor group;
- the multiple sounding sensors are divided into a plurality of sub-arrays, and the sounding sensors in each sub-array are connected in series to form a sensor group.
- the directional sound emitting unit further includes:
- a power amplifier connected to the audio processing module and configured to amplify the driving signal
- An impedance matching module is connected between the power amplifier and the sound sensor array, and is configured to match the impedance of the two to optimize the driving signal.
- control unit includes:
- a data recording module connected to the identification unit, and configured to record the related information transmitted by the identification unit;
- the audio signal calculation module is connected between the data recording module and the directional sound emitting unit, and is configured to calculate, from the person's related information, the corresponding audio signal.
- the identification unit includes any one of a piezoelectric transducer sensor, a light pulse sensor, a structured light sensor, and a camera.
- the present disclosure also provides a driving method of a sound emitting device, including:
- the recognition unit obtains relevant information of a person within a preset range, and sends the relevant information of the person to the control unit;
- the control unit obtains the corresponding audio signal according to the relevant information of the person, and controls the directional sounding unit to emit sound waves according to the audio signal.
- the present disclosure also provides a display panel including the above-mentioned sound generating device.
- the sound emission device includes a directional sound emission unit, the directional sound emission unit includes a sound emission sensor array, and the sound emission sensor array includes a first substrate and a plurality of sound emission sensors arranged on one side of the first substrate;
- the display panel includes a second substrate and a plurality of pixel units arranged on one side of the second substrate; wherein,
- the sound emission sensor array and the display panel share a substrate, and the plurality of pixel units are arranged on a side of the plurality of sound emission sensors away from the common substrate.
- the display panel further includes an adhesive layer
- the sound emitting device is attached to the display panel through the adhesive layer.
- the display panel is one of an organic electroluminescent (OLED) display panel or a mini light-emitting diode display panel.
- the present disclosure also provides a display device including the above-mentioned display panel.
- FIG. 1 is a structural diagram of a sound emitting device provided by an embodiment of the disclosure
- FIG. 2 is a schematic diagram of a structure of a sounding sensor array in a sounding device provided by an embodiment of the disclosure
- FIG. 3 shows one grouping method for the sounding sensors in the sound emitting device provided by embodiments of the disclosure.
- FIG. 4 shows another grouping method for the sounding sensors in the sound emitting device provided by embodiments of the disclosure.
- FIG. 5 is a flowchart of a driving method of a sound emitting device provided by an embodiment of the disclosure
- FIG. 6 is a working schematic diagram of the acoustic parametric array method in the driving method provided by an embodiment of the present disclosure
- FIG. 7 is a schematic diagram of the acoustic wave coverage area of the acoustic parametric array method in the driving method provided by an embodiment of the disclosure.
- FIG. 8 is a schematic diagram of the principle of focusing of acoustic waves in the acoustic phased array method in the driving method provided by the embodiments of the disclosure.
- FIG. 9 is a schematic diagram of the principle of sound wave deflection in the acoustic phased array method in the driving method provided by the embodiments of the disclosure.
- FIG. 10 is a schematic diagram of the principle of combining the acoustic phased array method and the acoustic parametric array method in the driving method provided by the embodiments of the disclosure;
- FIG. 11 is a schematic structural diagram of a display panel provided by an embodiment of the disclosure.
- FIG. 12 is a schematic diagram of a structure of a display panel provided by an embodiment of the disclosure (the display panel and the sound emitting device are integrated as a whole).
- an embodiment of the present disclosure provides a sound emitting device, which includes: an identification unit 1, a directional sound emitting unit 2 and a control unit 3.
- the identification unit 1 is connected to the control unit 3, and the directional sounding unit 2 is connected to the control unit 3.
- the recognition unit 1 is configured to obtain relevant information of a person within a preset range and send the relevant information to the control unit 3.
- the preset range may be set according to needs and the recognition range of the recognition unit 1.
- the preset range may be, for example, the recognition range within 2 meters of the recognition unit 1. The recognition unit 1 detects the related information of every person inside the preset range; if multiple people are present, it recognizes each person's information separately and sends each person's information to the control unit 3.
- After the control unit 3 receives the person-related information, it obtains the corresponding audio signal and controls the directional sounding unit 2 to emit sound waves according to that signal, the sound waves corresponding to each person's information. By recognizing people through the recognition unit 1, obtaining each person's audio signal through the control unit 3, and then driving the directional sounding unit 2 with the acquired signal, the sound waves can be adapted to the listeners, making the sound emission of the device intelligent.
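The recognize-then-control flow described above can be sketched as follows. This is an illustrative sketch only; the class and function names, the 2-meter range, and the coordinate convention are our assumptions, not taken from the patent.

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class Person:
    # Position relative to the sound-emitting device, in metres;
    # y points along the device normal (illustrative convention).
    x: float
    y: float

def recognize(preset_range_m: float, detected: List[Person]) -> List[Person]:
    """Recognition unit: keep only the people inside the preset range."""
    return [p for p in detected if math.hypot(p.x, p.y) <= preset_range_m]

def control(people: List[Person]) -> List[dict]:
    """Control unit: derive one audio-signal descriptor per person."""
    return [{"angle_deg": math.degrees(math.atan2(p.x, p.y))} for p in people]

# Driving flow: recognize people within 2 m, then compute per-person signals
# that the directional sounding unit would emit.
people = recognize(2.0, [Person(0.5, 1.0), Person(3.0, 0.0)])
signals = control(people)
print(len(signals))  # only the person within 2 m is kept
```

The person 3 m away is filtered out by the recognition step, so only one audio-signal descriptor is produced.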
- the person-related information recognized by the recognition unit 1 may include multiple types of information.
- for example, the recognition unit 1 may recognize the number of people included in the preset range, as well as each person's position relative to the sound-emitting device.
- the recognition unit 1 may include a number recognition module 11 and a position recognition module 12.
- the number recognition module 11 is configured to obtain information on the number of people in a preset range
- the position recognition module 12 is configured to obtain the relative position of each person in the preset range to the sound emitting device.
- that is, the person-related information sent by the recognition unit 1 to the control unit 3 includes the number-of-people information and the position information of each person.
- the control unit 3 can calculate the angle of the sound wave sent to each person from that person's position relative to the sound-emitting device, generate a corresponding audio signal, and control the directional sounding unit 2 to emit sound, so that each person can receive the sound well. The control unit 3 can also calculate the coverage area of the sound waves from the number of people and each person's position, so that the sound emitted by the directional sounding unit 2 covers everyone within the preset range, further enhancing the listening experience.
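One way such per-person angles and the required coverage span could be computed is sketched below. The geometry (positions as (x, y) offsets with y along the array normal) and the function names are illustrative assumptions, not the patent's own formulation.

```python
import math

def steering_angles(positions):
    """Angle (degrees, measured from the array normal) toward each listener.
    positions: (x, y) offsets from the device; y is the normal direction."""
    return [math.degrees(math.atan2(x, y)) for x, y in positions]

def coverage_span(positions):
    """Angular span the emitted sound must cover to reach every listener."""
    angles = steering_angles(positions)
    return min(angles), max(angles)

# Two listeners symmetric about the normal sit at +/-45 degrees.
angles = steering_angles([(1.0, 1.0), (-1.0, 1.0)])
span = coverage_span([(1.0, 1.0), (-1.0, 1.0), (0.0, 2.0)])
print(angles, span)
```

The control unit would then generate one audio signal per angle, or widen the beam to the full span when all listeners must be covered at once.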
- the recognition unit 1 may include multiple types of recognition devices, for example, a somatosensory recognition device or an image recognition device may be used.
- the identification unit 1 may include any one of a piezoelectric transducer sensor, a light pulse sensor, a structured light sensor, and a camera.
- the identification unit 1 is a piezoelectric transducer sensor
- the piezoelectric transducer sensor can emit ultrasonic waves, and the ultrasonic waves will be reflected when they encounter a person.
- the piezoelectric transducer sensor detects the reflected ultrasonic waves, that is, the echo signal, and from it identifies the number of people in the preset range and the position information of each person.
- the recognition unit 1 can use a light pulse sensor to recognize through Time Of Flight (TOF) technology.
- the light pulse sensor can emit light pulses within the preset range; a person in the range reflects the pulses, and by measuring the round-trip (flight) time of each pulse the position information and number-of-people information are obtained.
- the structured light sensor may include a camera and a projector.
- the projector projects active structured-light patterns onto the person, such as laser stripes, Gray codes, or sinusoidal fringes, which are then photographed by one or more cameras.
- a three-dimensional image of the person can be obtained based on the principle of triangulation, and the position information and the number of people information of the person can be identified.
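The triangulation principle mentioned above reduces, for a camera-projector (or stereo camera) pair with a known baseline, to the classic disparity-to-depth relation. The parameter values below are illustrative assumptions:

```python
def triangulate_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from triangulation: z = f * b / d, where f is the focal
    length in pixels, b the camera-projector baseline in metres, and
    d the observed disparity of a pattern feature in pixels."""
    return focal_px * baseline_m / disparity_px

# A feature shifted by 40 px, seen with an 800 px focal length and
# a 10 cm baseline, lies 2 m away.
z = triangulate_depth(focal_px=800.0, baseline_m=0.1, disparity_px=40.0)
print(z)
```

Applying this to every matched pattern feature produces the three-dimensional image from which person count and positions are extracted.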
- the recognition unit 1 may also use a camera for recognition.
- a dual-camera using binocular recognition technology can recognize the number of people in a preset range and the position information of each person through the images collected by the camera.
- the recognition unit 1 can also use other methods for recognition, and the specific design can be based on needs, which is not limited here.
- the directional sound emission unit 2 may include a sound emission sensor array 44 and an audio processing module 41.
- the audio processing module 41 is configured to convert the audio signal transmitted by the control unit 3 into a driving signal and send the driving signal to the sounding sensor array 44, driving the sounding sensor array 44 to emit sound.
- the driving signal may include the angle of sound wave propagation.
- the driving signal may include the time sequence of each sounding sensor in the sounding sensor array 44 emitting sound waves.
- the sounding direction of the sounding sensor array 44 can be adjusted by the phase delay of the sound waves emitted by each sensor.
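For a uniform linear array, the per-sensor firing delays that produce such a steered wavefront can be sketched as below. The element pitch, speed of sound, and function name are illustrative assumptions, not values from the patent:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def element_delays(n_elements: int, pitch_m: float, steer_deg: float):
    """Per-element trigger delays (seconds) that tilt a linear array's
    wavefront by steer_deg: delay_i = i * pitch * sin(theta) / c,
    shifted so the earliest-firing element has zero delay."""
    theta = math.radians(steer_deg)
    raw = [i * pitch_m * math.sin(theta) / SPEED_OF_SOUND
           for i in range(n_elements)]
    offset = min(raw)
    return [t - offset for t in raw]

# Steer a 4-element, 5 mm pitch array 30 degrees off its normal.
delays = element_delays(n_elements=4, pitch_m=0.005, steer_deg=30.0)
print(delays)
```

The audio processing module would encode such a delay profile into the time sequence of the driving signal; making the delays converge on a point instead of a plane gives the focusing case of Fig. 8.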
- the sounding sensor array 44 may include multiple types of sensors.
- the sounding sensor array 44 is a piezoelectric transducer array, that is, the sounding sensor array 44 includes a plurality of piezoelectric transducers.
- the sounding sensor array 44 can also be other types of sensor arrays, which are specifically set according to requirements, which is not limited here.
- the piezoelectric transducer array includes a plurality of piezoelectric sensors.
- the piezoelectric transducer array includes a first substrate 11, an elastic film layer 12 on one side of the first substrate 11, a first electrode 13 on the side of the elastic film layer 12 away from the first substrate 11, a piezoelectric film 14 on the side of the first electrode 13 away from the first substrate 11, and a second electrode 15 on the side of the piezoelectric film 14 away from the first substrate 11.
- the elastic film layer 12 serves as an elastic auxiliary film of the sound sensor array 44 (piezoelectric transducer array), and is configured to increase the vibration amplitude of the piezoelectric film 14.
- the second electrode 15 may be a sheet-shaped electrode, which covers the entire area of the first substrate 11.
- the first electrode 13 includes a plurality of sub-electrodes 331, and the plurality of sub-electrodes 331 are arranged in an array on the side of the elastic film layer 12 away from the first substrate 11. Each sub-electrode 331 corresponds to a piezoelectric sensor 001.
- each sub-electrode 331, together with the region of the piezoelectric film 14 and elastic film layer 12 facing it on the side away from the first substrate 11, forms one piezoelectric sensor 001. The sound sensor array 44 thus has a speaker function: the sub-electrodes 331, the piezoelectric film 14 and the elastic film layer 12 together constitute the diaphragm of the sound sensor array 44 (speaker) for emitting sound waves.
- the first substrate 11 has a plurality of openings 111.
- the openings 111 serve as chambers for the sound sensor array 44 (speakers).
- the openings 111 correspond to the sub-electrodes 331 one-to-one.
- the orthographic projection of each sub-electrode 331 on the first substrate 11 falls within the orthographic projection of the corresponding opening 111 on the first substrate 11, so that sound waves can propagate through the opening 111, which serves as a cavity.
- the opening 111 is made on the first substrate 11, so that the first substrate 11, the elastic film layer 12 and the piezoelectric film 14 can form a suspended film structure.
- the openings 111 can be formed in the first substrate 11 by laser drilling or hydrofluoric-acid etching.
- the first substrate 11 may be one of multiple types of substrates; for example, the first substrate 11 may be a glass substrate.
- the elastic film layer 12 can be one of various types of elastic film; for example, it can be a polyimide (PI) film. Of course, the elastic film layer 12 can also be made of other materials, which is not limited here.
- the piezoelectric transducer array may be a piezoelectric transducer array of a Micro-Electro-Mechanical System (MEMS).
- the sound sensor array 44 includes a plurality of sound sensors.
- taking the piezoelectric sensor 001 as an example of the sounding sensor, the plurality of piezoelectric sensors 001 are arranged in an array on the first substrate 11. For ease of description, the specific structure of the piezoelectric sensor 001 is omitted in FIGS. 3 and 4, where each piezoelectric sensor 001 is represented by a circle.
- the multiple piezoelectric sensors 001 are divided equally into multiple sensor groups, and each sensor group receives one driving signal sent by the audio processing module 41. That is, a group-driving method is used to control the sounding sensors in the sound sensor array 44, which reduces the number of drive lines routed to the array and simplifies its structure. An entire sensor group can be used to propagate sound waves in one direction, which yields a larger sound-wave coverage area than using only one sensor for that direction.
- the sound waves can also be conveniently adjusted according to the distance between the person and the sound-emitting device. For example, if the person is close to the sensors on the center line of the sound sensor array 44, the sensor group on the center line can be used to produce sound and increase the covered area.
- multiple methods can be used to group multiple sounding sensors.
- the following takes Mode 1 and Mode 2 as examples for description.
- Mode 1 (Fig. 3): taking the piezoelectric sensor 001 as the sounding sensor, multiple piezoelectric sensors 001 are arranged in an array, and the piezoelectric sensors 001 in the same column (as in Fig. 3) or in the same row form one sensor group A. The piezoelectric sensors 001 within each sensor group A are connected in series with each other, and each sensor group A corresponds to one drive signal (drive signals P1...Pn in Fig. 3).
- Mode 2 (FIG. 4): taking the piezoelectric sensor 001 as the sound emitting sensor as an example, multiple piezoelectric sensors 001 are distributed in an array and divided into multiple sub-arrays. The piezoelectric sensors 001 in each sub-array form a sensor group A, the piezoelectric sensors 001 in each sensor group A are connected in series with each other, and each sensor group A corresponds to one drive signal (drive signals P1...P4... in FIG. 4).
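The two grouping modes above can be sketched in code. This is a minimal Python illustration; the 8×8 array size and 4×4 sub-array size are assumptions for demonstration, not dimensions stated in the disclosure:

```python
# Illustrative sketch of the two grouping modes: Mode 1 wires each column in
# series as one group; Mode 2 tiles the array into rectangular sub-arrays.
# Array dimensions are assumed for the example.

def group_by_column(rows, cols):
    """Mode 1: sensors in the same column form one series-connected group."""
    return [[(r, c) for r in range(rows)] for c in range(cols)]

def group_by_subarray(rows, cols, gr, gc):
    """Mode 2: the array is tiled into gr x gc sub-arrays, one group each."""
    groups = []
    for r0 in range(0, rows, gr):
        for c0 in range(0, cols, gc):
            groups.append([(r, c)
                           for r in range(r0, min(r0 + gr, rows))
                           for c in range(c0, min(c0 + gc, cols))])
    return groups

rows, cols = 8, 8
mode1 = group_by_column(rows, cols)
mode2 = group_by_subarray(rows, cols, 4, 4)
# Each group needs only one drive line, which is how grouping reduces wiring:
print(len(mode1))  # 8 drive signals instead of 64
print(len(mode2))  # 4 drive signals instead of 64
```

Either way, every sensor belongs to exactly one group, so the full array is still driven while the number of drive lines drops from one per sensor to one per group.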
- the directional sound emitting unit 2 further includes a power amplifier 42 and an impedance matching module 43.
- the power amplifier 42 is connected to the audio processing module 41, and the power amplifier 42 is configured to amplify the driving signal transmitted by the audio processing module 41.
- the impedance matching module 43 is connected between the power amplifier 42 and the sound sensor array 44.
- the impedance matching module 43 is configured to match the impedance of the power amplifier 42 to that of the sound sensor array 44 by adjusting the impedance of the power amplifier 42 and/or the sound sensor array 44, so as to achieve maximum amplification of the drive signal and thereby optimize it.
- the power amplifier 42 cooperates with the impedance matching module 43 to optimize the drive signal and deliver the maximum drive signal to the sound sensor array 44.
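Why matching the two impedances maximizes the signal delivered to the array follows from the maximum power transfer theorem. A minimal sketch with assumed, purely resistive impedances (the disclosure does not give values):

```python
# Maximum power transfer: for a source with internal resistance Rs, the power
# delivered to a load Rl is P = V^2 * Rl / (Rs + Rl)^2, which peaks at Rl == Rs.
# The 50-ohm source impedance and load values are illustrative assumptions.

def delivered_power(v_src, r_src, r_load):
    i = v_src / (r_src + r_load)   # loop current
    return i * i * r_load          # power dissipated in the load

r_src = 50.0
loads = [10.0, 25.0, 50.0, 100.0, 200.0]
powers = {rl: delivered_power(10.0, r_src, rl) for rl in loads}
best = max(powers, key=powers.get)
print(best)  # 50.0 — the matched load receives the most power
```

With complex impedances the same idea requires the load to equal the conjugate of the source impedance, which is what the matching network arranges.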
- the control unit 3 includes a data recording module 31 and an audio signal calculation module 32.
- the data recording module 31 is connected to the recognition unit 1, and the data recording module 31 is configured to record relevant information of the person transmitted by the recognition unit 1 and transmit the recorded relevant information to the audio signal calculation module 32.
- the audio signal calculation module 32 is connected between the data recording module 31 and the directional sound generating unit 2.
- the audio signal calculation module 32 is configured to calculate the audio signal corresponding to the relevant information of the person according to the obtained relevant information of the person.
- the sound emitting sensor array 44 of the sound emitting device provided in this embodiment can be driven to emit sound in various ways.
- the audio signal calculation module 32 calculates the audio signal according to a preset algorithm, so as to adjust the sound wave angle according to the position information of the person, and calculates the required sound wave coverage area based on the number of persons and the position information of each person.
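A minimal sketch of how a coverage requirement could be derived from the recognized positions. The angle values and the margin are illustrative assumptions; the disclosure does not specify this particular algorithm:

```python
# Given each recognized person's angle from the device's center line, the beam
# must span from the leftmost to the rightmost person, plus a safety margin.
# All numeric values here are assumed for illustration.
person_angles_deg = [-15.0, 5.0, 20.0]   # angles from the recognition unit (assumed)
margin_deg = 5.0                          # extra margin on each side (assumed)

beam_center = (min(person_angles_deg) + max(person_angles_deg)) / 2
beam_width = (max(person_angles_deg) - min(person_angles_deg)) + 2 * margin_deg
print(beam_center)  # 2.5  — steer the array toward this angle
print(beam_width)   # 45.0 — required angular coverage
```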
- the sound emitting device provided in this embodiment further includes a storage unit 4 and a setting unit 5.
- the storage unit 4 is connected to the control unit 3, and the setting unit 5 is connected to the storage unit 4.
- the sound emitting device provided by the present disclosure may include multiple sound emitting modes, such as a single-person mode and a multi-person mode.
- the sounding mode can be set by the setting unit 5 and the setting can be stored in the storage unit 4.
- the setting unit 5 can also initialize the sounding device and store the initialized settings in the storage unit 4, and the control unit 3 can read the setting information from the storage unit 4 and perform corresponding settings for the sounding device.
- an embodiment of the present disclosure also provides a driving method of a sound emitting device, which includes the following steps S1 to S3.
- the recognition unit 1 obtains relevant information of a person within a preset range, and sends the obtained relevant information of the person to the control unit 3.
- the relevant information of the persons recognized (or acquired) by the recognition unit 1 includes the number of persons within the preset range (that is, people-number information) and the position information of each person relative to the sound emitting device (for example, the angle relative to the center line of the sound emitting device); the recognized relevant information is transmitted to the control unit 3.
- the control unit 3 obtains the corresponding audio signal according to the obtained relevant information of the person.
- control unit 3 includes a data recording module 31 and an audio signal calculation module 32.
- the audio signal calculation module 32 calculates the audio signal corresponding to the acquired relevant information of the person according to a preset algorithm, and the preset algorithm is set based on the sound emitting mode of the sound emitting device.
- the following description takes as an example the sound emitting device using the acoustic phased array method and the acoustic parametric array method to control each sound emitting sensor in the sound emitting sensor array 44.
- the sound emitting device can use the acoustic parametric array method to drive each sound emitting sensor in the sound emitting sensor array 44 to increase the directivity of the sound waves.
- the multiple sound emitting sensors in the sound emitting sensor array 44 can be divided into a sensor array group C1 and a sensor array group C2, one group on each side of the center line, and the control unit 3 controls the audio processing module 41 to generate two drive signals.
- the first drive signal is optimized by the power amplifier 42 and the impedance matching module 43 and then transmitted to the sensor array group C1, driving it to emit ultrasonic waves at frequency f1; the second drive signal passes through the power amplifier 42 and the impedance matching module 43 to the sensor array group C2, driving it to emit ultrasonic waves at frequency f2, where f1 and f2 differ. After the two ultrasonic waves of different frequencies interact nonlinearly in the air, they are demodulated into sound waves audible to the human ear (referred to as "audible sound"). Since ultrasonic beams are sharper than audible-sound beams, ultrasonic waves have stronger directivity.
- Using the acoustic parametric array method to drive the sound sensor array 44 to produce sound can enhance the directivity of audible sound.
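The difference-frequency demodulation described above can be verified numerically. In this sketch, a quadratic term models the air's nonlinearity; the carrier frequencies (40 kHz and 41 kHz) and the 0.1 nonlinearity coefficient are illustrative assumptions, not values from the disclosure:

```python
# Two ultrasonic carriers f1 and f2 carry no audible energy themselves, but a
# quadratic nonlinearity demodulates a component at f2 - f1 (here 1 kHz),
# which is audible. All numeric values are assumed for illustration.
import math

f1, f2 = 40_000.0, 41_000.0   # ultrasonic carrier frequencies (Hz)
fs = 1_000_000.0              # sample rate of the sketch (Hz)
n = 10_000                    # 10 ms of signal: integer cycles of every tone

def amplitude_at(samples, f):
    """Amplitude of the frequency-f component via single-bin correlation."""
    re = sum(s * math.cos(2 * math.pi * f * i / fs) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * f * i / fs) for i, s in enumerate(samples))
    return 2 * math.hypot(re, im) / len(samples)

carriers = [math.sin(2 * math.pi * f1 * i / fs) + math.sin(2 * math.pi * f2 * i / fs)
            for i in range(n)]
squared = [0.1 * s * s for s in carriers]  # quadratic term of the air's response

print(round(amplitude_at(carriers, f2 - f1), 3))  # 0.0 — no audible tone yet
print(round(amplitude_at(squared, f2 - f1), 3))   # 0.1 — demodulated 1 kHz tone
```

The product term sin(ω1·t)·sin(ω2·t) in the squared signal contains cos((ω2−ω1)·t), which is exactly the audible difference tone.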
- the control unit 3 can determine the directivity of the sound waves emitted by the sound sensor array 44 in the acoustic parametric array mode according to the arrangement of the sound sensor array 44, using the array directivity function D(θ, φ) of the array:
- the sound emitting sensor array 44 includes N columns and M rows; the row spacing of the sound emitting sensors is d 2 , the column spacing is d 1 , and θ and φ are the angles giving the sound wave direction in spherical coordinates.
- the directivity diagram of the sound waves emitted by the sound emitting sensor array 44 can then be obtained, as shown in FIG. 7: the directivity angle θ is 10°, and the directivity coverage area is the region that takes the forward direction of the sound wave as its center line and deflects 10° to each side. At a distance L 1 from the sound source (the center of the sound emitting sensor array 44), the maximum distance that the sound wave spreads on either side of the center line is d 1 ; at a distance L 2 = 2×L 1 , the maximum spread is d 2 = 2×d 1 .
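The disclosure's exact expression for D(θ, φ) is not reproduced in this text. The sketch below therefore uses the standard uniform planar-array factor (product of the array factors along the two axes) with the stated N, M, d 1 , d 2 definitions; the formula choice and all numeric values are assumptions:

```python
# Standard uniform rectangular-array factor as a stand-in for D(theta, phi).
# The patent's exact formula is not given in the text; element counts,
# spacings, frequency, and sound speed below are illustrative assumptions.
import math

def array_factor(n, psi):
    """|sin(n*psi/2) / (n*sin(psi/2))|, with the psi -> 0 limit handled."""
    if abs(math.sin(psi / 2)) < 1e-12:
        return 1.0
    return abs(math.sin(n * psi / 2) / (n * math.sin(psi / 2)))

def directivity(theta, phi, n_cols=16, m_rows=16,
                d1=0.005, d2=0.005, f=40_000.0, c=343.0):
    lam = c / f                 # wavelength
    k = 2 * math.pi / lam       # wavenumber
    psi_x = k * d1 * math.sin(theta) * math.cos(phi)  # phase step across columns
    psi_y = k * d2 * math.sin(theta) * math.sin(phi)  # phase step across rows
    return array_factor(n_cols, psi_x) * array_factor(m_rows, psi_y)

# The on-axis (theta = 0) response is maximal; off-axis it falls off sharply,
# which is what produces the narrow directivity angle of FIG. 7.
print(directivity(0.0, 0.0))                     # 1.0
print(directivity(math.radians(20), 0.0) < 0.2)  # True: strongly attenuated
```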
- the audio signal calculation module 32 in the control unit 3 can obtain an audio signal with a delay time sequence by calculating the excitation delay time of each of the plurality of piezoelectric sensors 001, and then transmit the audio signal to the audio processing module 41. Based on the audio signal,
- the audio processing module 41 drives each piezoelectric sensor (or sensor group) with an excitation pulse sequence at the corresponding time in the delay time sequence, adjusting the focus position (FIG. 8) and the deflection direction (FIG. 9) of the sound waves, so that the sound waves can be adjusted according to the person and the sound emission of the device is made intelligent.
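A minimal sketch of computing such a delay time sequence for plane-wave steering of a linear array; the element pitch, steering angle, and speed of sound are assumed values, not parameters from the disclosure:

```python
# Per-element delays that tilt the emitted wavefront by a chosen angle:
# element i fires i * d * sin(theta) / c later than element 0 (shifted so
# all delays are non-negative). Pitch and angle are illustrative assumptions.
import math

C = 343.0  # speed of sound in air, m/s

def steering_delays(num_elems, pitch, steer_rad):
    """Per-element firing delays (seconds) for a plane wave tilted by steer_rad."""
    raw = [i * pitch * math.sin(steer_rad) / C for i in range(num_elems)]
    t0 = min(raw)
    return [t - t0 for t in raw]  # earliest element fires at t = 0

delays = steering_delays(8, 0.001, math.radians(20))
# Adjacent-element delay step: d * sin(theta) / c = 1 mm * sin 20 / 343 m/s
step_us = (delays[1] - delays[0]) * 1e6
print(round(step_us, 2))  # ≈ 1.0 microsecond per element
```

Focusing (FIG. 8) works the same way, except the delays compensate the exact path length from each element to the focal point rather than a plane-wave tilt.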
- the control unit 3 calculates the audio signal that makes the propagation direction of the sound wave correspond to the position of the person, and transmits it to the audio processing module 41,
- the audio processing module 41 converts the audio signal into a corresponding driving signal.
- the drive signal is optimized by the power amplifier 42 and the impedance matching module 43 and then transmitted to the sound emitting sensor array 44, driving it to emit sound waves whose propagation direction corresponds to the position of the person.
- the acoustic parametric array method and the acoustic phased array method can be combined: the parametric array method enhances the directivity of the sound waves, and the phased array method then adjusts the sound emission delay of each sound emitting sensor, thereby increasing the coverage area of the sound waves.
- the number of sound emitting sensors is 101 (i.e., S1...S50...S101 in FIG. 10), and the sound emitting sensor located at the center is the fiftieth sound emitting sensor.
- the distance between any two adjacent sound emitting sensors is 1 mm. Because of the persistence effect of human hearing, the human ear cannot distinguish two sounds whose time difference is less than 0.1 s and perceives them as a single sound. Therefore, the control unit 3 can obtain a large listening area by means of phase-controlled focus scanning.
- the control unit 3 controls the sounding sensor array 44 to propagate sound waves along the center line of the sounding sensor array 44 (that is, the direction directly opposite to S50) using a sound parameter array method, and the directivity angle ⁇ is 10°.
- a pulse sequence is used to control the multiple sound emitting sensors: starting from the first sound emitting sensor S1, each sound emitting sensor (S1 to S101) emits sound waves in turn with a fixed delay time (for example, 1 μs) between adjacent sensors.
- the frequency of the sound waves is the same as the frequency of the audible sound generated by the acoustic parametric array method.
- the sound waves emitted by the sound emitting sensors have a fixed phase relationship, so they interfere with each other and the coverage of the sound waves is increased. In this embodiment, the sound waves finally obtain a coverage range of 20 mm to 2.8 m.
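The numbers in this example can be checked with a short calculation: under a plane-wave steering assumption (an assumption of this sketch, not a statement of the disclosure), a fixed 1 μs delay across a 1 mm pitch corresponds to a beam deflection of about 20°:

```python
# Deflection angle produced by a fixed inter-sensor delay on a linear array:
# the wavefront tilts by theta = asin(c * dt / d). The 1 mm pitch and 1 us
# delay mirror the example above; the plane-wave model is an assumption.
import math

c = 343.0   # speed of sound, m/s
d = 1e-3    # sensor pitch: 1 mm
dt = 1e-6   # fixed inter-sensor delay: 1 us

theta = math.degrees(math.asin(c * dt / d))
print(round(theta, 1))  # ≈ 20.1 degrees of beam deflection
```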
- the sound emitting sensor array 44 is driven by the acoustic parametric array method to increase the directivity of the sound waves, and the propagation angle and coverage area of the sound waves are then adjusted by the acoustic phased array method, realizing intelligent sound emission.
- control unit 3 calculates the corresponding audio signal according to the relevant information of the person, and then transmits the corresponding audio signal to the directional sounding unit 2.
- the audio signal includes the sound emission frequency or delay sequence of each sound emitting sensor in the directional sound emitting unit 2; that is, the propagation direction and coverage area of the sound waves can be adjusted according to the person.
- an embodiment of the present disclosure also provides a display panel including the above-mentioned sound emitting device.
- the sound generating device can be integrated with the display panel, or can be attached to the display panel and placed outside the panel.
- the sound emitting device includes a directional sound emitting unit 2
- the directional sound emitting unit 2 includes a sound emitting sensor array 44.
- the sound emitting sensor array 44 includes a first substrate 11 and a plurality of sound-emitting sensors (for example, piezoelectric sensors 001) arranged on one side of the first substrate 11.
- the display panel includes a second substrate 01 and a plurality of pixel units 6 disposed on one side of the second substrate 01.
- Each pixel unit 6 includes a pixel electrode 61 and a pixel 62, and the pixel 62 includes a plurality of sub-pixels, such as a red sub-pixel 623, a green sub-pixel 622, and a blue sub-pixel 621.
- the sounding sensor array 44 and the display panel can share a substrate, and the sounding device and the display panel are integrated to form a display panel with sounding function.
- using one side of the shared substrate (11(01) in FIG. 12) enables the display panel to have a sound emitting function while reducing the thickness of the display panel.
- the sound emitting device can also be externally mounted on the display panel. Specifically, the sound emitting device is attached to the light emitting side of the display panel through an adhesive layer. In this case the sound emitting device is a transparent sound emitting device, so it does not affect the light output rate of the display panel.
- the sound emitting device can be used in various types of display panels.
- the display panel can be an organic light-emitting diode (OLED) display panel or a mini light-emitting diode (mini LED) display panel, which is not limited here.
- control unit 3 in the sound emitting device can be shared with the control chip (CPU) on the back panel of the display panel.
- the sound sensor array 44 in the directional sound emitting unit 2 is arranged on the light emitting side of the display panel.
- the audio processing module 41, the power amplifier 42, and the impedance matching module 43 may be arranged in the peripheral area of the display panel, for example, the area where the pixel unit driving circuit is located.
- the recognition unit 1 may be arranged on one side of the display panel, for example, on the side of the display panel where the camera is arranged. If the recognition unit 1 is a camera, the camera of the recognition unit 1 can be shared with the camera in the display panel.
- an embodiment of the present disclosure also provides a display device including the above-mentioned display panel.
- the display device may be any product or component that has a display function, such as a mobile phone, a tablet computer, a television, a monitor, a notebook computer, a digital photo frame, a navigator, and the like.
- the display device also has other indispensable components, which will not be repeated here, and should not be used as a limitation to the present disclosure.
Claims (17)
- 1. A sound emitting device, comprising: a recognition unit, a directional sound emitting unit, and a control unit; wherein the recognition unit is connected to the control unit and is configured to acquire relevant information of persons within a preset range and send the acquired relevant information of the persons to the control unit; and the control unit is connected to the directional sound emitting unit and is configured to acquire a corresponding audio signal according to the acquired relevant information of the persons and control the directional sound emitting unit to emit sound waves according to the audio signal.
- 2. The sound emitting device according to claim 1, wherein the recognition unit comprises: a people-number recognition module configured to acquire information on the number of persons within the preset range; and a position recognition module configured to acquire position information of each person relative to the sound emitting device.
- 3. The sound emitting device according to claim 1, wherein the directional sound emitting unit comprises a sound emitting sensor array and an audio processing module, the audio processing module being configured to convert the acquired audio signal into a drive signal to drive the sound emitting sensor array to emit sound.
- 4. The sound emitting device according to claim 3, wherein the sound emitting sensor array comprises a piezoelectric transducer array.
- 5. The sound emitting device according to claim 4, wherein the piezoelectric transducer array comprises a plurality of piezoelectric sensors; the piezoelectric transducer array comprises a first substrate, an elastic film layer on one side of the first substrate, a first electrode on a side of the elastic film layer facing away from the first substrate, a piezoelectric film on a side of the first electrode facing away from the first substrate, and a second electrode on a side of the piezoelectric film facing away from the first substrate; wherein the first electrode comprises a plurality of sub-electrodes distributed in an array on the side of the elastic film layer facing away from the first substrate, each sub-electrode corresponding to one piezoelectric sensor; and the first substrate has a plurality of openings in one-to-one correspondence with the sub-electrodes, an orthographic projection of each sub-electrode on the first substrate being located within an orthographic projection of the corresponding opening on the first substrate.
- 6. The sound emitting device according to claim 5, wherein the elastic film layer comprises a polyimide film.
- 7. The sound emitting device according to claim 3, wherein the sound emitting sensor array comprises a plurality of sound emitting sensors, the plurality of sound emitting sensors are equally divided into a plurality of sensor groups, and each sensor group correspondingly receives one drive signal.
- 8. The sound emitting device according to claim 7, wherein the plurality of sound emitting sensors are distributed in an array, and the sound emitting sensors in a same column or a same row are connected in series to form one sensor group; or the plurality of sound emitting sensors are divided into a plurality of sub-arrays, and the sound emitting sensors in each sub-array are connected in series with each other to form one sensor group.
- 9. The sound emitting device according to claim 3, wherein the directional sound emitting unit further comprises: a power amplifier connected to the audio processing module and configured to amplify the drive signal; and an impedance matching module connected between the power amplifier and the sound emitting sensor array and configured to match the impedances of the two so as to optimize the drive signal.
- 10. The sound emitting device according to claim 1, wherein the control unit comprises: a data recording module connected to the recognition unit and configured to record the relevant information of the persons transmitted by the recognition unit; and an audio signal calculation module connected between the data recording module and the directional sound emitting unit and configured to calculate, according to the acquired relevant information of the persons, the audio signal corresponding to the relevant information of the persons.
- 11. The sound emitting device according to claim 1, wherein the recognition unit comprises any one of a piezoelectric transducer sensor, an optical pulse sensor, a structured light sensor, and a camera.
- 12. A driving method of a sound emitting device, comprising: acquiring, by a recognition unit, relevant information of persons within a preset range, and sending the relevant information of the persons to a control unit; and acquiring, by the control unit, a corresponding audio signal according to the relevant information of the persons, and controlling a directional sound emitting unit to emit sound waves according to the audio signal.
- 13. A display panel comprising the sound emitting device according to any one of claims 1 to 11.
- 14. The display panel according to claim 13, wherein the sound emitting device comprises a directional sound emitting unit, the directional sound emitting unit comprises a sound emitting sensor array, and the sound emitting sensor array comprises a first substrate and a plurality of sound emitting sensors arranged on one side of the first substrate; the display panel comprises a second substrate and a plurality of pixel units arranged on one side of the second substrate; and the sound emitting sensor array and the display panel share a substrate, the plurality of pixel units being arranged on a side of the plurality of sound emitting sensors facing away from the shared substrate.
- 15. The display panel according to claim 13, further comprising an adhesive layer, wherein the sound emitting device is attached to the display panel through the adhesive layer.
- 16. The display panel according to claim 13, wherein the display panel is one of an organic light-emitting diode display panel and a mini light-emitting diode display panel.
- 17. A display device comprising the display panel according to any one of claims 13 to 16.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/765,225 US11902761B2 (en) | 2020-05-14 | 2021-05-08 | Sound producing device and method for driving the same, display panel and display apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010409194.6 | 2020-05-14 | ||
CN202010409194.6A CN111615033B (zh) | 2020-05-14 | 2020-05-14 | 发声装置及其驱动方法、显示面板及显示装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021227980A1 true WO2021227980A1 (zh) | 2021-11-18 |
Family
ID=72203373
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/092332 WO2021227980A1 (zh) | 2020-05-14 | 2021-05-08 | 发声装置及其驱动方法、显示面板及显示装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11902761B2 (zh) |
CN (1) | CN111615033B (zh) |
WO (1) | WO2021227980A1 (zh) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111615033B (zh) | 2020-05-14 | 2024-02-20 | 京东方科技集团股份有限公司 | 发声装置及其驱动方法、显示面板及显示装置 |
CN112153538B (zh) * | 2020-09-24 | 2022-02-22 | 京东方科技集团股份有限公司 | 显示装置及其全景声实现方法、非易失性存储介质 |
CN114173262B (zh) * | 2021-11-18 | 2024-02-27 | 苏州清听声学科技有限公司 | 一种超声波发声器、显示器及电子设备 |
CN114173261B (zh) * | 2021-11-18 | 2023-08-25 | 苏州清听声学科技有限公司 | 一种超声波发声器、显示器及电子设备 |
CN114724459B (zh) * | 2022-03-10 | 2024-04-19 | 武汉华星光电技术有限公司 | 显示基板和显示面板 |
CN116614745B (zh) * | 2023-06-19 | 2023-12-22 | 金声源(嘉兴)科技有限公司 | 一种应用于高速公路的定向声波发生器 |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1774871A (zh) * | 2003-04-15 | 2006-05-17 | 专利创投公司 | 定向扬声器 |
CN103165125A (zh) * | 2013-02-19 | 2013-06-19 | 深圳创维-Rgb电子有限公司 | 音频定向处理方法和装置 |
CN104937660A (zh) * | 2012-11-18 | 2015-09-23 | 诺威托系统有限公司 | 用于生成声场的方法和系统 |
CN108966086A (zh) * | 2018-08-01 | 2018-12-07 | 苏州清听声学科技有限公司 | 基于目标位置变化的自适应定向音频系统及其控制方法 |
CN109068245A (zh) * | 2018-08-01 | 2018-12-21 | 京东方科技集团股份有限公司 | 屏幕发声装置、发声显示屏及其制造方法和屏幕发声系统 |
US20190124446A1 (en) * | 2016-03-31 | 2019-04-25 | The Trustees Of The University Of Pennsylvania | Methods, systems, and computer readable media for a phase array directed speaker |
CN110099343A (zh) * | 2019-05-28 | 2019-08-06 | 安徽奥飞声学科技有限公司 | 一种具有mems扬声器阵列的听筒及通信装置 |
CN111615033A (zh) * | 2020-05-14 | 2020-09-01 | 京东方科技集团股份有限公司 | 发声装置及其驱动方法、显示面板及显示装置 |
CN112216266A (zh) * | 2020-10-20 | 2021-01-12 | 傅建玲 | 一种相控多声道声波定向发射方法及系统 |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DK1142446T3 (da) * | 1999-01-06 | 2003-11-17 | Iroquois Holding Co Inc | Højttalersystem |
JP5254951B2 (ja) * | 2006-03-31 | 2013-08-07 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | データ処理装置及び方法 |
CN103002376B (zh) * | 2011-09-09 | 2015-11-25 | 联想(北京)有限公司 | 声音定向发送的方法和电子设备 |
JP5163796B1 (ja) * | 2011-09-22 | 2013-03-13 | パナソニック株式会社 | 音響再生装置 |
US8879766B1 (en) * | 2011-10-03 | 2014-11-04 | Wei Zhang | Flat panel displaying and sounding system integrating flat panel display with flat panel sounding unit array |
KR20180097786A (ko) * | 2013-03-05 | 2018-08-31 | 애플 인크. | 하나 이상의 청취자들의 위치에 기초한 스피커 어레이의 빔 패턴의 조정 |
JP6233581B2 (ja) * | 2013-12-26 | 2017-11-22 | セイコーエプソン株式会社 | 超音波センサー及びその製造方法 |
US20150382129A1 (en) * | 2014-06-30 | 2015-12-31 | Microsoft Corporation | Driving parametric speakers as a function of tracked user location |
US11310617B2 (en) * | 2016-07-05 | 2022-04-19 | Sony Corporation | Sound field forming apparatus and method |
CN107776483A (zh) * | 2017-09-26 | 2018-03-09 | 广州小鹏汽车科技有限公司 | 一种定向发声系统及实现方法 |
CN109032411B (zh) * | 2018-07-26 | 2021-04-23 | 京东方科技集团股份有限公司 | 一种显示面板、显示装置及其控制方法 |
CN109803199A (zh) * | 2019-01-28 | 2019-05-24 | 合肥京东方光电科技有限公司 | 发声装置、显示系统以及发声装置的发声方法 |
CN110112284B (zh) * | 2019-05-27 | 2021-09-17 | 京东方科技集团股份有限公司 | 柔性声电基板及其制备方法、柔性声电装置 |
CN110225439B (zh) * | 2019-06-06 | 2020-08-14 | 京东方科技集团股份有限公司 | 一种阵列基板及发声装置 |
CN110636420B (zh) * | 2019-09-25 | 2021-02-09 | 京东方科技集团股份有限公司 | 一种薄膜扬声器、薄膜扬声器的制备方法以及电子设备 |
- 2020-05-14 CN CN202010409194.6A patent/CN111615033B/zh active Active
- 2021-05-08 US US17/765,225 patent/US11902761B2/en active Active
- 2021-05-08 WO PCT/CN2021/092332 patent/WO2021227980A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
US11902761B2 (en) | 2024-02-13 |
CN111615033B (zh) | 2024-02-20 |
US20220353613A1 (en) | 2022-11-03 |
CN111615033A (zh) | 2020-09-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021227980A1 (zh) | 发声装置及其驱动方法、显示面板及显示装置 | |
CN107107114B (zh) | 三端口压电超声换能器 | |
JP6316433B2 (ja) | マイクロメカニカル超音波トランスデューサおよびディスプレイ | |
US7760891B2 (en) | Focused hypersonic communication | |
US20150023138A1 (en) | Ultrasonic Positioning System and Method Using the Same | |
WO2020155671A1 (zh) | 指纹识别结构以及显示装置 | |
US20080093952A1 (en) | Hypersonic transducer | |
WO2020259384A1 (zh) | 指纹识别器件及其驱动方法、显示装置 | |
JP5423370B2 (ja) | 音源探査装置 | |
WO2016061410A1 (en) | Three-port piezoelectric ultrasonic transducer | |
US11403868B2 (en) | Display substrate having texture information identification function, method for driving the same and display device | |
TW202026941A (zh) | 超音波指紋偵測和相關設備及方法 | |
US11474607B2 (en) | Virtual, augmented, or mixed reality device | |
Dokmanić et al. | Hardware and algorithms for ultrasonic depth imaging | |
CN111782090B (zh) | 显示模组、超声波触控检测方法、超声波指纹识别方法 | |
CN111586553B (zh) | 显示装置及其工作方法 | |
JP2014032600A (ja) | 表示入力装置 | |
WO2020062108A1 (zh) | 设备 | |
CN111510819A (zh) | 一种超声波扬声器系统及其工作方法 | |
CN112329672A (zh) | 指纹识别模组及其制备方法、显示面板、显示装置 | |
JP2012134591A (ja) | 発振装置および電子機器 | |
Sarkar | Audio recovery and acoustic source localization through laser distance sensors | |
TWM642391U (zh) | 超聲波指紋識別模組和裝置 | |
CN116458171A (zh) | 发声装置、屏幕发声显示装置及其制备方法 | |
JP2014188148A (ja) | 超音波測定装置、超音波トランスデューサーデバイス及び超音波画像装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21805208 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21805208 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29.06.2023) |