WO2021227980A1 - Sound producing device and driving method thereof, display panel and display apparatus - Google Patents

Sound producing device and driving method thereof, display panel and display apparatus

Info

Publication number
WO2021227980A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
sound emitting
substrate
sensor
unit
Prior art date
Application number
PCT/CN2021/092332
Other languages
English (en)
French (fr)
Inventor
韩艳玲
刘英明
郭玉珍
张晨阳
李佩笑
李秀锋
姬雅倩
勾越
孙伟
韩文超
张良浩
Original Assignee
京东方科技集团股份有限公司
Priority date
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority to US17/765,225 (US11902761B2)
Publication of WO2021227980A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1601Constructional details related to the housing of computer displays, e.g. of CRT monitors, of flat displays
    • G06F1/1605Multimedia displays, e.g. with integrated or attached speakers, cameras, microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/12Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/323Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only for loudspeakers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3206Monitoring of events, devices or parameters that trigger a change in power modality
    • G06F1/3231Monitoring the presence, absence or movement of users
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3234Power saving characterised by the action undertaken
    • G06F1/325Power saving in peripheral device
    • G06F1/3278Power saving in modem or I/O interface
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0414Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using force sensing means to determine a position
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/162Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/02Casings; Cabinets ; Supports therefor; Mountings therein
    • H04R1/028Casings; Cabinets ; Supports therefor; Mountings therein associated with devices performing functions other than acoustics, e.g. electric candles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R1/403Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers loud-speakers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R1/406Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R17/00Piezoelectric transducers; Electrostrictive transducers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R29/00Monitoring arrangements; Testing arrangements
    • H04R29/001Monitoring arrangements; Testing arrangements for loudspeakers
    • H04R29/002Loudspeaker arrays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/003Mems transducers or their use
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/02Details casings, cabinets or mounting therein for transducers covered by H04R1/02 but not provided for in any of its subgroups
    • H04R2201/028Structural combinations of loudspeakers with built-in power amplifiers, e.g. in the same acoustic enclosure
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/40Details of arrangements for obtaining desired directional characteristic by combining a number of identical transducers covered by H04R1/40 but not provided for in any of its subgroups
    • H04R2201/4012D or 3D arrays of transducers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2203/00Details of circuits for transducers, loudspeakers or microphones covered by H04R3/00 but not provided for in any of its subgroups
    • H04R2203/12Beamforming aspects for stereophonic sound reproduction with loudspeaker arrays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2217/00Details of magnetostrictive, piezoelectric, or electrostrictive transducers covered by H04R15/00 or H04R17/00 but not provided for in any of their subgroups
    • H04R2217/03Parametric transducers where sound is generated or captured by the acoustic demodulation of amplitude modulated ultrasonic waves
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00Signal processing covered by H04R, not provided for in its groups
    • H04R2430/20Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10General applications
    • H04R2499/15Transducers incorporated in visual displaying devices, e.g. televisions, computer displays, laptops
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30Control circuits for electronic adaptation of the sound field
    • H04S7/302Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S7/303Tracking of listener position or orientation

Definitions

  • the present disclosure belongs to the field of sound production technology, and specifically relates to a sound producing device and a driving method thereof, a display panel and a display apparatus.
  • sound emitting devices are used in many fields; for example, a smart display device is typically provided with a sound emitting device.
  • Smart display devices can realize human-computer interaction based on pressure, texture, touch and the like, but existing sound producing devices only have a single sound production function, with a single sound direction and mode, so that a person receiving the sound from the device cannot obtain a good listening experience.
  • the present disclosure provides a sound emitting device, including: a recognition unit, a directional sound emitting unit, and a control unit; wherein,
  • the recognition unit is connected to the control unit, and is configured to obtain relevant information of a person within a preset range, and send the obtained relevant information of the person to the control unit;
  • the control unit is connected to the directional sounding unit, and is configured to obtain a corresponding audio signal according to the acquired related information of the person, and control the directional sounding unit to emit sound waves according to the audio signal.
  • the identification unit includes:
  • the number recognition module is configured to obtain information about the number of people within the preset range
  • the position recognition module is configured to obtain position information of each person relative to the sound emitting device.
  • the directional sound emission unit includes a sound emission sensor array and an audio processing module, and the audio processing module is configured to convert the audio signal into a driving signal to drive the sound emission sensor array to emit sound.
  • the acoustic sensor array includes a piezoelectric transducer array.
  • the piezoelectric transducer array includes a plurality of piezoelectric sensors;
  • the piezoelectric transducer array includes a first substrate, an elastic film layer on one side of the first substrate, a first electrode on the side of the elastic film layer away from the first substrate, a piezoelectric film on the side of the first electrode away from the first substrate, and a second electrode on the side of the piezoelectric film away from the first substrate;
  • the first electrode includes a plurality of sub-electrodes, the sub-electrodes are distributed in an array on the side of the elastic film layer away from the first substrate, and each of the sub-electrodes corresponds to one piezoelectric sensor;
  • the first substrate has a plurality of openings, the openings correspond to the sub-electrodes one to one, and the orthographic projection of each sub-electrode on the first substrate lies within the orthographic projection of the corresponding opening on the first substrate.
  • the elastic film layer includes a polyimide film.
  • the sound sensor array includes a plurality of sound sensors, and the plurality of sound sensors are divided into a plurality of sensor groups, and each of the sensor groups correspondingly receives a drive signal.
  • the multiple sounding sensors are arranged in an array, and the sounding sensors located in the same column or the same row are connected in series to form one sensor group;
  • the multiple sounding sensors are divided into a plurality of sub-arrays, and the sounding sensors in each sub-array are connected in series to form a sensor group.
  • the directional sound emitting unit further includes:
  • a power amplifier connected to the audio processing module and configured to amplify the driving signal
  • An impedance matching module is connected between the power amplifier and the sound sensor array, and is configured to match the impedance of the two to optimize the driving signal.
  • the control unit includes:
  • a data recording module connected to the identification unit, and configured to record the related information transmitted by the identification unit;
  • the audio signal calculation module is connected between the data recording module and the directional sound emitting unit, and is configured to calculate, according to the person-related information, the audio signal corresponding to that information.
  • the identification unit includes any one of a piezoelectric transducer sensor, a light pulse sensor, a structured light sensor, and a camera.
  • the present disclosure also provides a driving method of a sound emitting device, including:
  • the recognition unit obtains relevant information of a person within a preset range, and sends the relevant information of the person to the control unit;
  • the control unit obtains the corresponding audio signal according to the relevant information of the person, and controls the directional sounding unit to emit sound waves according to the audio signal.
  • the present disclosure also provides a display panel including the above-mentioned sound generating device.
  • the sound emission device includes a directional sound emission unit, the directional sound emission unit includes a sound emission sensor array, and the sound emission sensor array includes a first substrate and a plurality of sound emission sensors arranged on one side of the first substrate;
  • the display panel includes a second substrate and a plurality of pixel units arranged on one side of the second substrate; wherein,
  • the sound emission sensor array and the display panel share a substrate, and the plurality of pixel units are arranged on a side of the plurality of sound emission sensors away from the common substrate.
  • the display panel further includes an adhesive layer
  • the sound emitting device is attached to the display panel through the adhesive layer.
  • the display panel is one of an organic light-emitting diode (OLED) display panel and a mini light-emitting diode display panel.
  • the present disclosure also provides a display device including the above-mentioned display panel.
  • FIG. 1 is a structural diagram of a sound emitting device provided by an embodiment of the disclosure
  • FIG. 2 is a schematic diagram of a structure of a sounding sensor array in a sounding device provided by an embodiment of the disclosure
  • FIG. 3 shows one way of grouping the sound emitting sensors in the sound emitting device provided by embodiments of the disclosure.
  • FIG. 4 shows another way of grouping the sound emitting sensors in the sound emitting device provided by embodiments of the disclosure.
  • FIG. 5 is a flowchart of a driving method of a sound emitting device provided by an embodiment of the disclosure.
  • FIG. 6 is a working schematic diagram of the acoustic parametric array method in the driving method provided by an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram of the sound wave coverage area of the acoustic parametric array method in the driving method provided by an embodiment of the disclosure.
  • FIG. 8 is a schematic diagram of the principle of sound wave focusing in the acoustic phased array method in the driving method provided by embodiments of the disclosure.
  • FIG. 9 is a schematic diagram of the principle of sound wave deflection in the acoustic phased array method in the driving method provided by embodiments of the disclosure.
  • FIG. 10 is a schematic diagram of the principle of combining the acoustic phased array method with the acoustic parametric array method in the driving method provided by embodiments of the disclosure;
  • FIG. 11 is a schematic structural diagram of a display panel provided by an embodiment of the disclosure.
  • FIG. 12 is a schematic diagram of a structure of a display panel provided by an embodiment of the disclosure (the display panel and the sound emitting device are integrated as a whole).
  • an embodiment of the present disclosure provides a sound emitting device, which includes: an identification unit 1, a directional sound emitting unit 2 and a control unit 3.
  • the identification unit 1 is connected to the control unit 3, and the directional sounding unit 2 is connected to the control unit 3.
  • the recognition unit 1 is configured to obtain relevant information of a person within a preset range and send the relevant information to the control unit 3.
  • the preset range may be set according to needs and the recognition range of the recognition unit 1.
  • the preset range may be a recognition range within 2 meters of the recognition unit 1; that is, the recognition unit 1 detects the related information of all persons included in the preset range, and if the preset range includes multiple persons, the recognition unit 1 recognizes the related information of each person separately and sends it to the control unit 3.
  • after the control unit 3 receives the person-related information, it obtains the corresponding audio signal according to that information and controls the directional sound emitting unit 2 to emit sound waves according to the obtained audio signal, the sound waves corresponding to the related information of each person. By recognizing the person-related information through the recognition unit 1, obtaining the corresponding audio signal through the control unit 3, and then controlling the directional sound emitting unit 2 to produce sound according to the acquired audio signal, the sound waves can be adjusted according to the person, making the sound emission of the device intelligent.
  • the person-related information recognized by the recognition unit 1 may include multiple types of information;
  • for example, the recognition unit 1 may recognize the number of people included in a preset range, and may also recognize the position of each person relative to the sound emitting device.
  • the recognition unit 1 may include a number recognition module 11 and a position recognition module 12.
  • the number recognition module 11 is configured to obtain information on the number of people in a preset range
  • the position recognition module 12 is configured to obtain the position of each person in the preset range relative to the sound emitting device;
  • that is, the person-related information sent by the recognition unit 1 to the control unit 3 includes the number-of-people information and the position information of each person.
  • the control unit 3 can calculate, according to each person's position relative to the sound emitting device, the angle of the sound wave sent to that person, generate a corresponding audio signal, and control the directional sound emitting unit 2 to produce sound so that each person can better receive it; the control unit 3 can also calculate the coverage area of the sound wave according to the number of people and the position of each person, so that the sound waves emitted by the directional sound emitting unit 2 cover all the persons within the preset range, further improving the listening experience.
  • the recognition unit 1 may include multiple types of recognition devices, for example, a somatosensory recognition device or an image recognition device may be used.
  • the identification unit 1 may include any one of a piezoelectric transducer sensor, a light pulse sensor, a structured light sensor, and a camera.
  • if the identification unit 1 is a piezoelectric transducer sensor, the sensor can emit ultrasonic waves, and the ultrasonic waves are reflected when they encounter a person;
  • by detecting the reflected ultrasonic waves, that is, the echo signal, the piezoelectric transducer sensor can identify the number of people in the preset range and the position information of each person.
  • the recognition unit 1 can use a light pulse sensor to perform recognition through Time Of Flight (TOF) technology;
  • the light pulse sensor can emit light pulses within the preset range; if a person is present in the preset range, the person reflects the light pulses, and the position information and the number-of-people information are obtained by detecting the round-trip flight time of the light pulses.
  • the structured light sensor may include a camera and a projector.
  • the projector projects active structured information onto the person, such as laser stripes, Gray codes or sinusoidal fringes, which is then photographed by one or more cameras to acquire structured light images of the measured surface.
  • a three-dimensional image of the person can be obtained based on the principle of triangulation, and the position information and the number of people information of the person can be identified.
  • the recognition unit 1 may also use a camera for recognition.
  • a dual-camera using binocular recognition technology can recognize the number of people in a preset range and the position information of each person through the images collected by the camera.
  • the recognition unit 1 can also use other methods for recognition, and the specific design can be based on needs, which is not limited here.
  • the directional sound emitting unit 2 may include a sound emitting sensor array 44 and an audio processing module 41;
  • the audio processing module 41 is configured to convert the audio signal transmitted by the control unit 3 into a driving signal, and to send the driving signal to the sound emitting sensor array 44 so as to drive the array 44 to produce sound.
  • the driving signal may include the angle of sound wave propagation.
  • the driving signal may include the time sequence of each sounding sensor in the sounding sensor array 44 emitting sound waves.
  • the sounding direction of the sounding sensor array 44 can be adjusted by the phase delay of the sound waves emitted by each sensor.
  • the sounding sensor array 44 may include multiple types of sensors.
  • the sounding sensor array 44 is a piezoelectric transducer array, that is, the sounding sensor array 44 includes a plurality of piezoelectric transducers.
  • the sounding sensor array 44 can also be other types of sensor arrays, which are specifically set according to requirements, which is not limited here.
  • the piezoelectric transducer array includes a plurality of piezoelectric sensors.
  • the piezoelectric transducer array includes a first substrate 11, an elastic film layer 12 on one side of the first substrate 11, a first electrode 13 on the side of the elastic film layer 12 away from the first substrate 11, a piezoelectric film 14 on the side of the first electrode 13 away from the first substrate 11, and a second electrode 15 on the side of the piezoelectric film 14 away from the first substrate 11.
  • the elastic film layer 12 serves as an elastic auxiliary film of the sound sensor array 44 (piezoelectric transducer array), and is configured to increase the vibration amplitude of the piezoelectric film 14.
  • the second electrode 15 may be a sheet-shaped electrode, which covers the entire area of the first substrate 11.
  • the first electrode 13 includes a plurality of sub-electrodes 331, and the plurality of sub-electrodes 331 are arranged in an array on the side of the elastic film layer 12 away from the first substrate 11. Each sub-electrode 331 corresponds to a piezoelectric sensor 001.
  • that is, a sub-electrode 331, together with the regions of the film layers on its side away from the first substrate 11 that correspond to it, forms one piezoelectric sensor 001; the sound emitting sensor array 44 has a speaker function, and the sub-electrodes 331, the piezoelectric film 14 and the elastic film layer 12 together constitute the diaphragm of the array 44 (speaker) for emitting sound waves.
  • the first substrate 11 has a plurality of openings 111.
  • the openings 111 serve as chambers for the sound sensor array 44 (speakers).
  • the openings 111 correspond to the sub-electrodes 331 one-to-one.
  • the orthographic projection of each sub-electrode 331 on the first substrate 11 lies within the orthographic projection of the corresponding opening 111 on the first substrate 11, so that sound waves can propagate out through the opening 111 serving as a cavity.
  • the opening 111 is made on the first substrate 11, so that the first substrate 11, the elastic film layer 12 and the piezoelectric film 14 can form a suspended film structure.
  • the openings 111 can be formed in the first substrate 11 by laser drilling, hydrofluoric acid etching, or other methods.
  • the first substrate 11 may be one of various types of substrates; for example, the first substrate 11 may be a glass substrate.
  • the elastic film layer 12 can be one of various types of elastic film layers; for example, it can be a polyimide (PI) film. Of course, the elastic film layer 12 can also be made of other materials, which is not limited here.
  • the piezoelectric transducer array may be a piezoelectric transducer array of a Micro-Electro-Mechanical System (MEMS).
  • the sound sensor array 44 includes a plurality of sound sensors.
  • in the figures, the sound emitting sensor is a piezoelectric sensor 001 by way of example;
  • the plurality of piezoelectric sensors 001 are arranged in an array on the first substrate 11; for ease of description, the specific structure of the piezoelectric sensor 001 is omitted in FIGS. 3 and 4, and each piezoelectric sensor 001 is represented by a circle.
  • the multiple piezoelectric sensors 001 are evenly divided into multiple sensor groups, and each sensor group correspondingly receives one driving signal sent by the audio processing module 41; that is, group driving is used to control the sound emitting sensors in the array 44, which reduces the number of drive lines routed to the sensor groups and simplifies the structure of the array 44, and an entire sensor group can be used to propagate sound waves in one direction, which yields a larger sound wave coverage area than using only one sensor.
  • the sound waves can also be conveniently adjusted according to the distance between the person and the sound emitting device; for example, if the person is close to the sensors on the center line of the array 44, the sensor group on the center line can be used to produce sound, so as to increase the sound wave coverage area.
  • multiple methods can be used to group multiple sounding sensors.
  • the following takes Mode 1 and Mode 2 as examples for description.
  • in FIG. 3, taking the piezoelectric sensor 001 as the sound emitting sensor, multiple piezoelectric sensors 001 are arranged in an array, and the piezoelectric sensors 001 in the same column (FIG. 3) or in the same row form one sensor group A; the piezoelectric sensors 001 in each sensor group A are connected in series with one another, and one sensor group A corresponds to one driving signal (driving signals P1...Pn in FIG. 3).
  • in FIG. 4, taking the piezoelectric sensor 001 as the sound emitting sensor, multiple piezoelectric sensors 001 are distributed in an array and divided into multiple sub-arrays; the piezoelectric sensors 001 in each sub-array form one sensor group A, the piezoelectric sensors 001 in each sensor group A are connected in series with one another, and one sensor group A corresponds to one driving signal (driving signals P1...P4... in FIG. 4).
  • the directional sound emitting unit 2 further includes a power amplifier 42 and an impedance matching module 43.
  • the power amplifier 42 is connected to the audio processing module 41, and the power amplifier 42 is configured to amplify the driving signal transmitted by the audio processing module 41.
  • the impedance matching module 43 is connected between the power amplifier 42 and the sound sensor array 44.
  • the impedance matching module 43 is configured to match the impedances of the power amplifier 42 and the sound emitting sensor array 44 by adjusting the impedance of one or both of them, so that the two impedances match and the maximum driving-signal amplification is achieved, thereby optimizing the driving signal.
  • the power amplifier 42 cooperates with the impedance matching module 43 to optimize the driving signal and transmit the maximized driving signal to the sound emitting sensor array 44.
  • the control unit 3 includes a data recording module 31 and an audio signal calculation module 32.
  • the data recording module 31 is connected to the recognition unit 1, and the data recording module 31 is configured to record relevant information of the person transmitted by the recognition unit 1 and transmit the recorded relevant information to the audio signal calculation module 32.
  • the audio signal calculation module 32 is connected between the data recording module 31 and the directional sound generating unit 2.
  • the audio signal calculation module 32 is configured to calculate the audio signal corresponding to the relevant information of the person according to the obtained relevant information of the person.
  • the sound production sensor array 44 of the sound production device provided in this embodiment can drive the sound production sensor to sound in various ways.
  • the audio signal calculation module 32 calculates the audio signal according to a preset algorithm, so as to adjust the sound wave to the angle of the person according to the person's position information, and to calculate the required sound wave coverage area according to the number-of-people information and the position of each person.
  • the sound emitting device provided in this embodiment further includes a storage unit 4 and a setting unit 5.
  • the storage unit 4 is connected to the control unit 3, and the setting unit 5 is connected to the storage unit 4.
  • the sound emitting device provided by the present disclosure may include multiple sounding modes, such as a single character mode and a multi-character mode.
  • the sounding mode can be set by the setting unit 5 and the setting can be stored in the storage unit 4.
  • the setting unit 5 can also initialize the sounding device and store the initialized settings in the storage unit 4, and the control unit 3 can read the setting information from the storage unit 4 and perform corresponding settings for the sounding device.
  • an embodiment of the present disclosure also provides a driving method of a sound emitting device, which includes the following steps S1 to S3.
  • the recognition unit 1 obtains relevant information of a person within a preset range, and sends the obtained relevant information of the person to the control unit 3.
  • the person-related information recognized (or acquired) by the recognition unit 1 includes the number of persons included in the preset range (that is, the number-of-people information) and the position information of each person in the preset range relative to the sound emitting device (for example, the angle from the center line of the device), and the recognized person-related information is transmitted to the control unit 3.
  • the control unit 3 obtains the corresponding audio signal according to the obtained relevant information of the person.
  • control unit 3 includes a data recording module 31 and an audio signal calculation module 32.
  • the audio signal calculation module 32 calculates, according to a preset algorithm, the audio signal corresponding to the acquired person-related information; the preset algorithm is set according to the sound emission mode adopted by the sound emitting device.
  • the following description takes as an example a sound emitting device that uses the acoustic phased array method and the acoustic parametric array method to control each sound emitting sensor in the sound emitting sensor array 44.
  • the sound emitting device can also use the acoustic parametric array method to drive each sound emitting sensor in the array 44, so as to increase the directivity of the sound waves.
  • the multiple sound emitting sensors in the array 44 can be divided into a sensor array group C1 and a sensor array group C2, one group on each side of the center line, and the control unit 3 controls the audio processing module 41 to generate two driving signals.
  • the first driving signal is optimized by the power amplifier 42 and the impedance matching module 43 and then transmitted to the sensor array group C1, driving the group C1 to emit ultrasonic waves at frequency f1; the second driving signal passes through the power amplifier 42 and the impedance matching module 43 to the sensor array group C2, driving the group C2 to emit ultrasonic waves at frequency f2, where f2 differs from f1.
  • after the two ultrasonic waves of different frequencies interact nonlinearly in the air, they can be modulated into sound waves audible to the human ear ("audible sound"); since ultrasonic beams are sharper than audible-sound beams, the ultrasonic waves have stronger directivity.
  • Using the acoustic parametric array method to drive the sound sensor array 44 to produce sound can enhance the directivity of audible sound.
  • the control unit 3 can determine the directivity of the sound waves emitted by the sound emitting sensor array 44 in the acoustic parametric array mode, according to the arrangement of the array 44, using an array directivity function D(α, θ);
  • the sound emitting sensor array 44 includes N columns and M rows, the row spacing of the sound emitting sensors is d2, the column spacing is d1, and α and θ are the angles of the sound wave direction in spherical coordinates;
  • the directivity pattern of the emitted sound waves can then be obtained, as shown in FIG. 7: the directivity angle η is about 10°, and the directivity coverage is the region deflected 10° to each side of the forward direction of the sound wave; at a distance L1 from the sound source (the center of the array 44), the maximum spread of the sound on either side of the center line is d1, and at a distance L2 = 2×L1 the maximum spread is d2 = 2×d1.
  • the audio signal calculation module 32 in the control unit 3 can obtain an audio signal with a delay time sequence by calculating the excitation delay time of each of the piezoelectric sensors 001, and then transmit the audio signal to the audio processing module 41;
  • based on the delay time sequence in the audio signal, the audio processing module 41 drives each piezoelectric sensor (or sensor group) with an excitation pulse sequence of the corresponding timing, so that the emitted waves carry phase differences whose interference achieves focusing (FIG. 8) and deflection (FIG. 9) of the sound waves; the sound waves can thus be adjusted according to the person, making the sound emission of the device intelligent.
  • the control unit 3 calculates the audio signal that makes the propagation direction of the sound wave correspond to the position of the person, and transmits it to the audio processing module 41,
  • the audio processing module 41 converts the audio signal into a corresponding driving signal.
  • the driving signal is optimized by the power amplifier 42 and the impedance matching module 43 and then transmitted to the sound emitting sensor array 44, and the array 44 is driven to emit sound waves whose propagation direction corresponds to the position of the person.
  • the acoustic parametric array method and the acoustic phased array method can be combined: the parametric array method enhances the directivity of the sound waves, and the phased array method then adjusts the emission delay of each sound emitting sensor, thereby increasing the coverage area of the sound waves.
  • the number of sound emitting sensors is 101 (S1...S50...S101 in FIG. 10), the sensor located at the center is the fiftieth sensor S50, and the spacing between any two adjacent sensors is 1 mm.
  • because of the persistence of human hearing, the human ear cannot distinguish two sounds with a time difference of less than 0.1 s and perceives them as one sound; therefore, the control unit 3 can obtain a large listening area by means of phase-controlled focus scanning.
  • the control unit 3 controls the sound emitting sensor array 44 to propagate sound waves along the center line of the array 44 (that is, the direction directly facing S50) using the acoustic parametric array method, with a directivity angle η of 10°.
  • taking a person at position O2 as an example, that is, a person at an angle β of 60° from the center line and 1 m from the sound emitting sensor array 44 who is to hear the sound, a pulse sequence is used to control the multiple sound emitting sensors: starting from the first sensor S1, each sensor (S1 to S101) emits sound waves in turn with a fixed delay time (for example, 1 μs) between neighbors;
  • the frequency of these sound waves is the same as that of the audible sound generated by the acoustic parametric array method;
  • the waves emitted by the individual sensors then have fixed phase differences, and their mutual interference increases the coverage of the sound waves; in this embodiment, the sound waves finally achieve a coverage range of 20 mm to 2.8 m.
  • the sound emitting sensor array 44 is driven by the acoustic parametric array method to increase the directivity of the sound waves, and the propagation angle and coverage area of the sound waves are then adjusted by the acoustic phased array method, so that the sound waves are adapted to the person and the sound emission of the device is intelligent.
  • the control unit 3 calculates the corresponding audio signal according to the person-related information, and then transmits the corresponding audio signal to the directional sound emitting unit 2.
  • the audio signal includes the emission frequency or delay sequence of each sound emitting sensor in the directional sound emitting unit 2, so that the propagation direction and coverage area of the sound waves can be adjusted according to the person.
  • an embodiment of the present disclosure also provides a display panel including the above-mentioned sound emitting device.
  • the sound generating device can be integrated with the display panel, or can be attached to the display panel and placed outside the panel.
  • the sound emitting device includes a directional sound emitting unit 2
  • the directional sound emitting unit 2 includes a sound emitting sensor array 44.
  • the sound emitting sensor array 44 includes a first substrate 11 and a plurality of sound-emitting sensors (for example, piezoelectric sensors 001) arranged on one side of the first substrate 11.
  • the display panel includes a second substrate 01 and a plurality of pixel units 6 disposed on one side of the second substrate 01.
  • Each pixel unit 6 includes a pixel electrode 61 and a pixel 62, and the pixel 62 includes a plurality of sub-pixels, such as a red sub-pixel 623, a green sub-pixel 622, and a blue sub-pixel 621.
  • the sound emitting sensor array 44 and the display panel can share a substrate, and the sound emitting device and the display panel are integrated to form a display panel with a sound production function;
  • the plurality of pixel units 6 are arranged on the side of the sound emitting sensors away from the shared substrate (11(01) in FIG. 12), which gives the display panel a sound production function and reduces the thickness of the display panel.
  • the sound emitting device can also be mounted externally on the display panel; specifically, on the light emitting side of the display panel, the sound emitting device is attached to the panel through an adhesive layer, and the device is transparent so that it does not affect the light output of the panel.
  • the sound emitting device can be used in various types of display panels.
  • the display panel can be an organic light-emitting diode (OLED) display panel or a mini light-emitting diode (mini LED) display panel, which is not limited here.
  • the control unit 3 in the sound emitting device can be shared with the control chip (CPU) on the back plate of the display panel.
  • the sound sensor array 44 in the directional sound emitting unit 2 is arranged on the light emitting side of the display panel.
  • the audio processing module 41, the power amplifier 42, and the impedance matching module 43 may be arranged in the peripheral area of the display panel, for example, the area where the pixel unit driving circuit is located.
  • the recognition unit 1 may be arranged on one side of the display panel, for example, on the side of the display panel where the camera is arranged. If the recognition unit 1 is a camera, the camera of the recognition unit 1 can be shared with the camera in the display panel.
  • an embodiment of the present disclosure also provides a display device including the above-mentioned display panel.
  • the display device may be any product or component that has a display function, such as a mobile phone, a tablet computer, a television, a monitor, a notebook computer, a digital photo frame, a navigator, and the like.
  • the display device also has other indispensable components, which will not be repeated here, and should not be used as a limitation to the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The present disclosure provides a sound producing device and a driving method thereof, a display panel, and a display apparatus. The sound producing device includes a recognition unit, a directional sound emitting unit and a control unit. The recognition unit is connected to the control unit and is configured to acquire information about persons within a preset range and send the acquired person-related information to the control unit; the control unit is connected to the directional sound emitting unit and is configured to obtain a corresponding audio signal according to the acquired person-related information and control the directional sound emitting unit to emit sound waves according to the obtained audio signal.

Description

Sound producing device and driving method thereof, display panel and display apparatus
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 202010409194.6, filed on May 14, 2020, the content of which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure belongs to the field of sound production technology, and specifically relates to a sound producing device and a driving method thereof, a display panel, and a display apparatus.
Background
Sound producing devices are used in many fields; for example, a smart display device is provided with a sound producing device. Smart display devices can realize human-computer interaction based on pressure, texture, touch and the like, but existing sound producing devices only have a single sound production function, with a single sound direction and mode, so that a person receiving the sound from the device cannot obtain a good listening experience.
Summary
The present disclosure provides a sound producing device, including: a recognition unit, a directional sound emitting unit and a control unit; wherein,
the recognition unit is connected to the control unit and is configured to acquire information about persons within a preset range and send the acquired person-related information to the control unit;
the control unit is connected to the directional sound emitting unit and is configured to obtain a corresponding audio signal according to the acquired person-related information and control the directional sound emitting unit to emit sound waves according to the audio signal.
In some embodiments, the recognition unit includes:
a people-counting module configured to acquire the number-of-people information within the preset range; and
a position recognition module configured to acquire position information of each person relative to the sound producing device.
In some embodiments, the directional sound emitting unit includes a sound emitting sensor array and an audio processing module, the audio processing module being configured to convert the audio signal into a driving signal to drive the sound emitting sensor array to produce sound.
In some embodiments, the sound emitting sensor array includes a piezoelectric transducer array.
In some embodiments, the piezoelectric transducer array includes a plurality of piezoelectric sensors;
the piezoelectric transducer array includes a first substrate, an elastic film layer on one side of the first substrate, a first electrode on the side of the elastic film layer away from the first substrate, a piezoelectric film on the side of the first electrode away from the first substrate, and a second electrode on the side of the piezoelectric film away from the first substrate; wherein,
the first electrode includes a plurality of sub-electrodes distributed in an array on the side of the elastic film layer away from the first substrate, each sub-electrode corresponding to one piezoelectric sensor;
the first substrate has a plurality of openings in one-to-one correspondence with the sub-electrodes, and the orthographic projection of each sub-electrode on the first substrate lies within the orthographic projection of the corresponding opening on the first substrate.
In some embodiments, the elastic film layer includes a polyimide film.
In some embodiments, the sound emitting sensor array includes a plurality of sound emitting sensors, the plurality of sound emitting sensors are evenly divided into a plurality of sensor groups, and each sensor group correspondingly receives one driving signal.
In some embodiments, the plurality of sound emitting sensors are distributed in an array, and the sound emitting sensors in a same column or a same row are connected in series to form one sensor group;
or,
the plurality of sound emitting sensors are divided into a plurality of sub-arrays, and the sound emitting sensors in each sub-array are connected in series with one another to form one sensor group.
In some embodiments, the directional sound emitting unit further includes:
a power amplifier connected to the audio processing module and configured to amplify the driving signal; and
an impedance matching module connected between the power amplifier and the sound emitting sensor array and configured to match the impedances of the two so as to optimize the driving signal.
In some embodiments, the control unit includes:
a data recording module connected to the recognition unit and configured to record the related information transmitted by the recognition unit; and
an audio signal calculation module connected between the data recording module and the directional sound emitting unit and configured to calculate, according to the person-related information, the audio signal corresponding to the person-related information.
In some embodiments, the recognition unit includes any one of a piezoelectric transducer sensor, a light pulse sensor, a structured light sensor, and a camera.
Accordingly, the present disclosure further provides a driving method of a sound producing device, including:
acquiring, by a recognition unit, information about persons within a preset range, and sending the person-related information to a control unit; and
obtaining, by the control unit, a corresponding audio signal according to the person-related information, and controlling a directional sound emitting unit to emit sound waves according to the audio signal.
Accordingly, the present disclosure further provides a display panel including the above sound producing device.
In some embodiments, the sound producing device includes a directional sound emitting unit, the directional sound emitting unit includes a sound emitting sensor array, and the sound emitting sensor array includes a first substrate and a plurality of sound emitting sensors arranged on one side of the first substrate;
the display panel includes a second substrate and a plurality of pixel units arranged on one side of the second substrate; wherein,
the sound emitting sensor array and the display panel share a substrate, and the plurality of pixel units are arranged on the side of the plurality of sound emitting sensors away from the shared substrate.
In some embodiments, the display panel further includes an adhesive layer;
the sound producing device is attached to the display panel through the adhesive layer.
In some embodiments, the display panel is one of an organic light-emitting diode display panel and a mini light-emitting diode display panel.
Accordingly, the present disclosure further provides a display apparatus including the above display panel.
Brief Description of the Drawings
FIG. 1 is a structural diagram of a sound producing device provided by an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of a sound emitting sensor array in the sound producing device provided by an embodiment of the present disclosure;
FIG. 3 shows one way of grouping the sound emitting sensors in the sound producing device provided by embodiments of the present disclosure;
FIG. 4 shows another way of grouping the sound emitting sensors in the sound producing device provided by embodiments of the present disclosure;
FIG. 5 is a flowchart of a driving method of a sound producing device provided by an embodiment of the present disclosure;
FIG. 6 is a working schematic diagram of the acoustic parametric array method in the driving method provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of the sound wave coverage area of the acoustic parametric array method in the driving method provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of the principle of sound wave focusing in the acoustic phased array method in the driving method provided by an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of the principle of sound wave deflection in the acoustic phased array method in the driving method provided by an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of the principle of combining the acoustic phased array method with the acoustic parametric array method in the driving method provided by an embodiment of the present disclosure;
FIG. 11 is a schematic structural diagram of a display panel provided by an embodiment of the present disclosure; and
FIG. 12 is a schematic structural diagram of a display panel provided by an embodiment of the present disclosure (with the display panel and the sound producing device integrated).
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
The shapes and sizes of the components in the drawings do not reflect true scale and are intended only to facilitate understanding of the contents of the embodiments of the present disclosure.
Unless otherwise defined, technical or scientific terms used in the present disclosure shall have the ordinary meaning understood by a person of ordinary skill in the art to which the present disclosure belongs. "First", "second" and similar words used in the present disclosure do not denote any order, quantity or importance, but are merely used to distinguish different components. Likewise, words such as "a", "an" or "the" do not denote a limitation of quantity, but rather the presence of at least one. Words such as "comprise" or "include" mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Words such as "connect" or "connected" are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Positional terms such as "on" are used only to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationship may change accordingly.
As shown in FIG. 1, an embodiment of the present disclosure provides a sound producing device, including: a recognition unit 1, a directional sound emitting unit 2 and a control unit 3.
Specifically, the recognition unit 1 is connected to the control unit 3, and the directional sound emitting unit 2 is connected to the control unit 3. The recognition unit 1 is configured to acquire information about persons within a preset range and send that information to the control unit 3. The preset range may be set as needed and according to the recognition range of the recognition unit 1; for example, it may be the recognition range within 2 meters of the recognition unit 1. That is, the recognition unit 1 detects the related information of all persons within the preset range; if the preset range contains multiple persons, the recognition unit 1 recognizes the related information of each person separately and sends it to the control unit 3. After receiving the person-related information, the control unit 3 obtains the corresponding audio signal according to that information and controls the directional sound emitting unit 2 to emit sound waves according to the obtained audio signal, the sound waves corresponding to the related information of each person. By recognizing person-related information with the recognition unit 1, obtaining the corresponding audio signal with the control unit 3, and then controlling the directional sound emitting unit 2 to produce sound according to the acquired audio signal, the sound waves can be adjusted according to the person, making the sound emission of the device intelligent.
In some embodiments, as shown in FIG. 1, the person-related information recognized by the recognition unit 1 may include multiple types of information; for example, the recognition unit 1 may recognize the number of people within the preset range and the position of each person relative to the sound producing device. Accordingly, the recognition unit 1 may include a people-counting module 11 and a position recognition module 12: the people-counting module 11 is configured to acquire the number-of-people information within the preset range, and the position recognition module 12 is configured to acquire the position information of each person in the preset range relative to the sound producing device. That is, the person-related information sent by the recognition unit 1 to the control unit 3 includes the number-of-people information and the position information of each person. The control unit 3 can thus calculate, from each person's position relative to the device, the angle of the sound wave to be sent to that person, generate a corresponding audio signal, and control the directional sound emitting unit 2 to produce sound so that each person can better receive it; the control unit 3 can also calculate the sound wave coverage area from the number of people and their positions, so that the sound waves emitted by the directional sound emitting unit 2 cover all persons within the preset range, further improving the listening experience.
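To make the geometry concrete, the following minimal sketch computes each person's angle from the array center line and the angular span the emitted beam would have to cover. The coordinate convention and all position values are illustrative assumptions, not taken from the patent:

```python
import math

# Hypothetical person positions in meters, relative to the array center:
# x = lateral offset from the center line, z = distance in front of the panel.
positions = [(0.3, 1.5), (-0.8, 2.0), (0.0, 1.0)]

# Angle of each person from the center line (positive = to the right).
angles_deg = [math.degrees(math.atan2(x, z)) for x, z in positions]

# Angular span the emitted sound must cover to reach everyone.
span_deg = max(angles_deg) - min(angles_deg)

print("per-person angles:", [f"{a:.1f} deg" for a in angles_deg])
print("required coverage span:", f"{span_deg:.1f} deg")
```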
In some embodiments, the recognition unit 1 may include various types of recognition devices; for example, a somatosensory recognition device or an image recognition device may be used. For example, the recognition unit 1 may include any one of a piezoelectric transducer sensor, a light pulse sensor, a structured light sensor, and a camera.
Specifically, if the recognition unit 1 is a piezoelectric transducer sensor, it can emit ultrasonic waves, which are reflected when they encounter a person; by detecting the reflected ultrasonic waves, that is, the echo signal, it can identify the number of people within the preset range and the position of each person. The recognition unit 1 may also use a light pulse sensor and perform recognition through Time Of Flight (TOF) technology: the light pulse sensor emits light pulses into the preset range, a person present in the range reflects the pulses, and the position information and number-of-people information are obtained by detecting the round-trip flight time of the pulses. If the recognition unit 1 uses a structured light sensor, the sensor may include a camera and a projector: the projector projects active structured information, such as laser stripes, Gray codes or sinusoidal fringes, onto the person; one or more cameras photograph the measured surface to acquire structured light images; a three-dimensional image of the person can then be obtained based on the principle of triangulation, from which the position information and number-of-people information are identified. The recognition unit 1 may also use cameras for recognition; for example, a dual camera using binocular recognition technology can identify the number of people within the preset range and the position of each person from the images collected by the cameras. Of course, the recognition unit 1 may also use other recognition methods, designed as needed, which is not limited here.
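For the light pulse (TOF) case described above, the ranging arithmetic reduces to distance = c × round-trip time / 2. A minimal sketch with invented echo timings, offered only as an illustration of the principle:

```python
# Time-of-flight ranging: distance = c * round_trip_time / 2.
C = 299_792_458.0  # speed of light in m/s

# Invented example data: round-trip times (seconds) of light pulses
# reflected by two different people in front of the sensor.
round_trip_times = [8.7e-9, 12.2e-9]

for i, t in enumerate(round_trip_times, start=1):
    distance_m = C * t / 2.0
    print(f"echo {i}: round trip {t * 1e9:.1f} ns -> distance {distance_m:.2f} m")
```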
In some embodiments, as shown in FIG. 1, the directional sound emitting unit 2 may include a sound emitting sensor array 44 and an audio processing module 41. The audio processing module 41 is configured to convert the audio signal transmitted by the control unit 3 into a driving signal and send the driving signal to the sound emitting sensor array 44, so as to drive the array 44 to produce sound. The driving signal may encode the propagation angle of the sound waves; specifically, for example, the driving signal may include the timing with which each sound emitting sensor in the array 44 emits its sound waves, and the emission direction of the array 44 can be adjusted through the phase delays of the waves emitted by the individual sensors.
In some embodiments, the sound emitting sensor array 44 may include various types of sensors; for example, the array 44 is a piezoelectric transducer array, that is, the array 44 includes a plurality of piezoelectric transducers. Of course, the array 44 may also be another type of sensor array, set as needed, which is not limited here.
Further, as shown in FIG. 2, if the sound emitting sensor array 44 is a piezoelectric transducer array, the piezoelectric transducer array includes a plurality of piezoelectric sensors. Specifically, the piezoelectric transducer array includes a first substrate 11, an elastic film layer 12 on one side of the first substrate 11, a first electrode 13 on the side of the elastic film layer 12 away from the first substrate 11, a piezoelectric film 14 on the side of the first electrode 13 away from the first substrate 11, and a second electrode 15 on the side of the piezoelectric film 14 away from the first substrate 11.
The elastic film layer 12 serves as the elastic auxiliary membrane of the sound emitting sensor array 44 (piezoelectric transducer array) and is configured to increase the vibration amplitude of the piezoelectric film 14. The second electrode 15 may be a sheet electrode covering the entire area of the first substrate 11. The first electrode 13 includes a plurality of sub-electrodes 331 distributed in an array on the side of the elastic film layer 12 away from the first substrate 11, each sub-electrode 331 corresponding to one piezoelectric sensor 001; that is, a sub-electrode 331, together with the regions of the film layers on its side away from the first substrate 11 that correspond to it, forms one piezoelectric sensor 001. The sound emitting sensor array 44 has a speaker function, and the sub-electrodes 331, the piezoelectric film 14 and the elastic film layer 12 together constitute the diaphragm of the array 44 (speaker) for emitting sound waves. The first substrate 11 has a plurality of openings 111 serving as the chambers of the array 44 (speaker); the openings 111 correspond to the sub-electrodes 331 one to one, and the orthographic projection of each sub-electrode 331 on the first substrate 11 lies within the orthographic projection of the corresponding opening 111 on the first substrate 11, so that sound waves can propagate out through the opening 111 serving as a chamber. Forming the openings 111 in the first substrate 11 allows the first substrate 11, the elastic film layer 12 and the piezoelectric film 14 to form a suspended membrane structure.
In some embodiments, the openings 111 can be formed in the first substrate 11 in various ways, such as laser drilling or hydrofluoric acid etching.
In some embodiments, the first substrate 11 may be one of various types of substrates; for example, it may be a glass substrate. The elastic film layer 12 may be one of various types of elastic films; for example, it may be a polyimide (PI) film. Of course, the elastic film layer 12 may also be made of other materials, which is not limited here.
In some embodiments, the piezoelectric transducer array may be a Micro-Electro-Mechanical System (MEMS) piezoelectric transducer array.
In some embodiments, as shown in FIGS. 3 and 4, the sound emitting sensor array 44 includes a plurality of sound emitting sensors; the figures take piezoelectric sensors 001 as the sound emitting sensors by way of example, with the piezoelectric sensors 001 arranged in an array on the first substrate 11. For ease of description, the specific structure of the piezoelectric sensor 001 is omitted in FIGS. 3 and 4 and each sensor is represented by a circle. The piezoelectric sensors 001 are evenly divided into multiple sensor groups, and each group correspondingly receives one driving signal sent by the audio processing module 41; that is, the sound emitting sensors in the array 44 are controlled by group driving, which reduces the number of drive lines routed to the sensor groups and simplifies the structure of the array 44. Moreover, an entire sensor group can be used to propagate sound waves in one direction, giving a larger sound wave coverage area than a single sensor would. The sound waves can also be adjusted conveniently according to the person's distance from the device; for example, if the person is close to the sensors on the center line of the array 44, the sensor group on the center line can be used to produce sound and thereby enlarge the coverage area.
Specifically, the sound emitting sensors can be grouped in several ways; Mode 1 and Mode 2 below are taken as examples.
Mode 1
Referring to FIG. 3, taking piezoelectric sensors 001 as the sound emitting sensors, the piezoelectric sensors 001 are distributed in an array, and the piezoelectric sensors 001 in the same column (FIG. 3) or in the same row form one sensor group A; the piezoelectric sensors 001 in each sensor group A are connected in series with one another, and one sensor group A corresponds to one driving signal (driving signals P1...Pn in FIG. 3).
Mode 2
Referring to FIG. 4, taking piezoelectric sensors 001 as the sound emitting sensors, the piezoelectric sensors 001 are distributed in an array and divided into multiple sub-arrays; the piezoelectric sensors 001 in each sub-array form one sensor group A, the piezoelectric sensors 001 in each sensor group A are connected in series with one another, and one sensor group A corresponds to one driving signal (driving signals P1...P4... in FIG. 4).
Of course, the sound emitting sensors in the array 44 can also be grouped in other ways; the above modes are only examples and are not limiting, and a small indexing sketch of the two modes follows.
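A toy illustration of the two grouping modes under assumed dimensions (a 6×6 grid and 2×3 sub-array tiles, chosen arbitrarily; the patent does not fix these numbers). It shows how group driving cuts the number of drive lines, since each series-connected group shares one drive signal:

```python
# Two hypothetical ways of grouping a 6x6 grid of sensor indices,
# mirroring Mode 1 (whole columns) and Mode 2 (2x3 sub-array tiles).
ROWS, COLS = 6, 6

# Mode 1: every column becomes one series-connected group.
mode1 = [[(r, c) for r in range(ROWS)] for c in range(COLS)]

# Mode 2: tile the grid into 2x3 sub-arrays, one group per tile.
TILE_R, TILE_C = 2, 3
mode2 = [
    [(r, c)
     for r in range(tr, tr + TILE_R)
     for c in range(tc, tc + TILE_C)]
    for tr in range(0, ROWS, TILE_R)
    for tc in range(0, COLS, TILE_C)
]

print("sensors:", ROWS * COLS)
print("Mode 1:", len(mode1), "groups, one drive line each")
print("Mode 2:", len(mode2), "groups, one drive line each")
```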
In some embodiments, as shown in FIG. 1, the directional sound emitting unit 2 further includes a power amplifier 42 and an impedance matching module 43. The power amplifier 42 is connected to the audio processing module 41 and is configured to amplify the driving signal passed in by the audio processing module 41. The impedance matching module 43 is connected between the power amplifier 42 and the sound emitting sensor array 44 and is configured to match the impedances of the two: by adjusting the impedance of the power amplifier 42 and/or of the sound emitting sensor array 44 so that the two impedances match, the maximum driving-signal amplification is achieved and the driving signal is optimized. The power amplifier 42 and the impedance matching module 43 cooperate to optimize the driving signal, so that the maximized driving signal is transmitted to the sound emitting sensor array 44.
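The benefit of matching can be sketched with the maximum power transfer theorem: for a source of internal resistance R_s driving a load R_l, the delivered power peaks when R_l equals R_s. The purely resistive model and all values below are simplifying assumptions (a real piezoelectric array presents a reactive load), meant only to illustrate why matching maximizes the drive delivered to the array:

```python
# Delivered power P = V^2 * R_l / (R_s + R_l)^2 peaks at R_l == R_s.
V = 10.0    # amplifier open-circuit voltage amplitude, volts (assumed)
R_S = 8.0   # amplifier source resistance, ohms (assumed)

def delivered_power(r_load: float) -> float:
    return V**2 * r_load / (R_S + r_load) ** 2

for r_load in [2.0, 4.0, 8.0, 16.0, 32.0]:
    tag = "  <-- matched" if r_load == R_S else ""
    print(f"R_load = {r_load:5.1f} ohm -> P = {delivered_power(r_load):.3f} W{tag}")
```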
In some embodiments, as shown in FIG. 1, the control unit 3 includes a data recording module 31 and an audio signal calculation module 32. The data recording module 31 is connected to the recognition unit 1 and is configured to record the person-related information transmitted by the recognition unit 1 and pass the recorded information to the audio signal calculation module 32. The audio signal calculation module 32 is connected between the data recording module 31 and the directional sound emitting unit 2 and is configured to calculate, from the acquired person-related information, the corresponding audio signal. The sound emitting sensor array 44 of the sound producing device provided in this embodiment can drive the sensors to produce sound in various ways; the audio signal calculation module 32 calculates the audio signal according to a preset algorithm, so as to adjust the sound wave toward the person's angle according to the person's position information and to calculate the required sound wave coverage area from the number-of-people information and each person's position.
In some embodiments, the sound producing device provided in this embodiment further includes a storage unit 4 and a setting unit 5. The storage unit 4 is connected to the control unit 3, and the setting unit 5 is connected to the storage unit 4. The sound producing device provided by the present disclosure may include multiple sound emission modes, such as a single-person mode and a multi-person mode; the mode can be set through the setting unit 5 and the setting stored in the storage unit 4. The setting unit 5 can also initialize the device and store the initialization settings in the storage unit 4, and the control unit 3 can read the setting information from the storage unit 4 and configure the device accordingly.
Accordingly, as shown in FIG. 5, an embodiment of the present disclosure further provides a driving method of a sound producing device, including the following steps S1 to S3.
S1: The recognition unit 1 acquires information about persons within a preset range and sends the acquired person-related information to the control unit 3.
Specifically, the person-related information recognized (or acquired) by the recognition unit 1 includes the number of persons within the preset range (that is, the number-of-people information) and the position information of each person in the preset range relative to the sound producing device (for example, the angle from the center line of the device); the recognized person-related information is transmitted to the control unit 3.
S2: The control unit 3 obtains the corresponding audio signal according to the acquired person-related information.
Specifically, the control unit 3 includes a data recording module 31 and an audio signal calculation module 32; the audio signal calculation module 32 calculates, according to a preset algorithm, the audio signal corresponding to the acquired person-related information, the preset algorithm being set according to the sound emission mode adopted by the device. The following description takes as an example a sound producing device that uses the acoustic phased array method and the acoustic parametric array method to control the individual sound emitting sensors in the sound emitting sensor array 44.
Specifically, referring to FIGS. 6 and 7, since the propagation of audible sound has low directivity, the device can use the acoustic parametric array method to drive the individual sound emitting sensors in the array 44 and thereby increase the directivity of the sound waves. Referring to FIG. 6, the sound emitting sensors in the array 44 can be divided into a sensor array group C1 and a sensor array group C2, one group on each side of the center line, and the control unit 3 controls the audio processing module 41 to generate two driving signals. The first driving signal is optimized by the power amplifier 42 and the impedance matching module 43 and then transmitted to the sensor array group C1, driving it to emit ultrasonic waves at frequency f1; the second driving signal passes through the power amplifier 42 and the impedance matching module 43 to the sensor array group C2, driving it to emit ultrasonic waves at frequency f2, with f1 and f2 different. After the two ultrasonic waves of different frequencies interact nonlinearly in the air, they can be modulated into sound waves audible to the human ear ("audible sound"). Because ultrasonic beams are sharper than audible-sound beams, the ultrasound has stronger directivity, and driving the array 44 with the acoustic parametric array method therefore enhances the directivity of the audible sound.
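The parametric (difference-frequency) effect can be checked numerically: summing two ultrasonic tones and passing them through a quadratic nonlinearity, a crude stand-in for the nonlinear response of air, yields a spectral line at f2 − f1 inside the audible band. The 40 kHz and 41 kHz carriers below are assumed values; the patent does not specify f1 and f2:

```python
import numpy as np

FS = 200_000             # sample rate, Hz
F1, F2 = 40_000, 41_000  # assumed ultrasonic carrier frequencies, Hz
t = np.arange(0, 0.05, 1 / FS)

# Two ultrasonic tones, as emitted by sensor groups C1 and C2.
p = np.sin(2 * np.pi * F1 * t) + np.sin(2 * np.pi * F2 * t)

# Quadratic term as a crude model of the air's nonlinearity: squaring
# produces components at F2-F1 (audible), F1+F2, 2*F1 and 2*F2.
demodulated = p ** 2

spectrum = np.abs(np.fft.rfft(demodulated))
freqs = np.fft.rfftfreq(len(demodulated), 1 / FS)

audible = freqs < 20_000
peak = freqs[audible][np.argmax(spectrum[audible][1:]) + 1]  # skip DC bin
print(f"strongest audible component: {peak:.0f} Hz (expected {F2 - F1} Hz)")
```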
Further, the control unit 3 can determine the directivity of the sound waves emitted by the array 44 in the acoustic parametric array mode, according to the arrangement of the array 44, using the array directivity function D(α, θ). For an N-column by M-row uniform rectangular array, the standard form of such a directivity function is
$$D(\alpha,\theta)=\left|\frac{\sin\!\big(\tfrac{N k d_1}{2}\sin\theta\cos\alpha\big)}{N\sin\!\big(\tfrac{k d_1}{2}\sin\theta\cos\alpha\big)}\right|\cdot\left|\frac{\sin\!\big(\tfrac{M k d_2}{2}\sin\theta\sin\alpha\big)}{M\sin\!\big(\tfrac{k d_2}{2}\sin\theta\sin\alpha\big)}\right|,$$
where k = 2π/λ, the sound emitting sensor array 44 includes N columns and M rows, the row spacing of the sound emitting sensors is d2, the column spacing is d1, and α and θ are the angles, in spherical coordinates, of the direction in which the sound wave points.
Take the sound emitting sensor array 44 as a 101×101 array as an example, where 101 is the number of sound emitting sensors in one row or one column, that is, M = N = 101; the row spacing equals the column spacing, d1 = d2 = 2.8 mm; the sensor radius is r = 0.9 mm; and the gap between two adjacent sensors is 1 mm. Substituting into the above formula gives the directivity pattern of the sound waves emitted by the array 44, shown in FIG. 7: the directivity angle η is 10°, and the directivity coverage is the region deflected 10° to each side of the forward direction of the sound wave. At a distance L1 from the sound source (the center of the array 44), the maximum distance the sound spreads on either side of the center line is d1; at a distance L2 it is d2; in FIG. 7, L2 = 2×L1 and, correspondingly, d2 = 2×d1. Taking L1 = 1 m gives d1 = 0.176 m, so in this embodiment the maximum spread of the sound wave at 1 m, across the two sides of the center line, is 2×d1 = 0.352 m; a person at 1 m from the source and within 0.176 m of the center line can hear the sound emitted by the array 44.
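As a numerical companion, the sketch below evaluates a uniform-rectangular-array factor of the form given above for the 101×101, 2.8 mm-pitch geometry, and checks the geometric coverage relation tan(10°) ≈ 0.176 at 1 m. The 40 kHz operating frequency is an assumption (the patent does not state the ultrasonic frequency), so the computed beamwidth is illustrative only:

```python
import numpy as np

C_AIR = 343.0      # speed of sound in air, m/s
F = 40_000.0       # assumed ultrasonic carrier frequency, Hz (not in the patent)
K = 2 * np.pi * F / C_AIR   # wavenumber k = 2*pi/lambda

N = M = 101        # columns x rows
D1 = D2 = 2.8e-3   # column and row pitch, m

def line_factor(n: int, d: float, u: np.ndarray) -> np.ndarray:
    """|sin(n*x) / (n*sin(x))| with x = k*d*u/2, handling the x -> 0 limit."""
    x = K * d * u / 2.0
    num, den = np.sin(n * x), n * np.sin(x)
    return np.abs(np.divide(num, den, out=np.ones_like(x),
                            where=np.abs(den) > 1e-12))

def directivity(theta: np.ndarray, alpha: float = 0.0) -> np.ndarray:
    u1 = np.sin(theta) * np.cos(alpha)   # column axis
    u2 = np.sin(theta) * np.sin(alpha)   # row axis
    return line_factor(N, D1, u1) * line_factor(M, D2, u2)

theta = np.radians(np.linspace(0.01, 15.0, 1500))
d_half = np.degrees(theta[directivity(theta) < 0.5][0])
print(f"half-power half-angle at {F / 1e3:.0f} kHz: {d_half:.2f} deg")

# Geometric check of the coverage numbers quoted above: a +/-10 deg cone
# gives a half-width of tan(10 deg) ~ 0.176 m at 1 m from the source.
print(f"half-width at 1 m: {np.tan(np.radians(10.0)):.3f} m")
```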
Further, as shown in FIGS. 8 and 9, if the acoustic phased array method is used to control the individual sound emitting sensors in the array 44 (taking piezoelectric sensors 001 as the sound emitting sensors), the audio signal calculation module 32 in the control unit 3 can compute the excitation delay time of each piezoelectric sensor 001 and obtain an audio signal carrying a delay time sequence, and then transmit that audio signal to the audio processing module 41. Based on the delay time sequence in the audio signal, the audio processing module 41 drives each piezoelectric sensor (or sensor group) with an excitation pulse sequence of the corresponding timing; the sound waves emitted by the individual piezoelectric sensors 001 then carry phase differences that shape the interference result, achieving focusing (FIG. 8) and deflection (FIG. 9) of the sound waves, so that the waves can be adjusted according to the person and the device's sound emission is intelligent. After the recognition unit 1 has recognized the person-related information within the preset range, the control unit 3 calculates, from the person's position information, an audio signal that makes the propagation direction of the sound waves correspond to the person's position and transmits it to the audio processing module 41; the audio processing module 41 converts the audio signal into the corresponding driving signal, which is optimized by the power amplifier 42 and the impedance matching module 43 and then transmitted to the sound emitting sensor array 44, driving the array 44 to emit sound waves whose propagation direction corresponds to the person's position.
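A sketch of how such a delay sequence could be computed for a one-row array, using the standard far-field deflection rule and geometric focusing (equalizing travel times to the focal point); the pitch, steering angle and focal point are example values, not taken from the patent:

```python
import numpy as np

C = 343.0        # speed of sound, m/s
N = 101          # elements in a line array
PITCH = 1e-3     # element spacing, m (example value)
x = (np.arange(N) - (N - 1) / 2) * PITCH   # element positions, centered

def deflection_delays(beta_deg: float) -> np.ndarray:
    """Far-field deflection (as in FIG. 9): linear delay ramp across the array."""
    tau = x * np.sin(np.radians(beta_deg)) / C
    return tau - tau.min()   # earliest element fires at t = 0

def focusing_delays(fx: float, fz: float) -> np.ndarray:
    """Focusing (as in FIG. 8): elements nearer the focus fire later, so
    all wavefronts reach the focal point (fx, fz) at the same time."""
    dist = np.hypot(fx - x, fz)
    return (dist.max() - dist) / C

tau = deflection_delays(20.0)
print(f"deflect 20 deg: neighbor delay {(tau[1] - tau[0]) * 1e6:.2f} us")

tau_f = focusing_delays(0.1, 1.0)
print(f"focus at (0.1 m, 1.0 m): delay span {tau_f.max() * 1e6:.2f} us")
```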
Further, based on the above, when the acoustic parametric array method is used to drive the array 44, the directivity of the sound waves is strengthened and their coverage area is therefore small; if a person is not facing the center line of the array 44 and is relatively far from it, the person may not hear the sound. The acoustic parametric array method and the acoustic phased array method can therefore be combined: the parametric method strengthens the directivity of the waves, and the phased method then adjusts the emission delay of each sound emitting sensor, thereby enlarging the coverage area of the sound waves.
Specifically, referring to FIG. 10, take an array 44 comprising one row of sound emitting sensors as an example: the number of sensors is 101 (S1...S50...S101 in FIG. 10), the sensor at the center is the fiftieth sensor S50, and the spacing between any two adjacent sensors is 1 mm. Because of the persistence of human hearing, the ear cannot distinguish two sounds whose time difference is less than 0.1 s and perceives them as a single sound; the control unit 3 can therefore obtain a large listening area by phase-controlled focus scanning. The control unit 3 controls the array 44 to propagate sound waves along its center line (the direction directly facing S50) using the acoustic parametric array method, with a directivity angle η of 10°. Taking a person at position O2 as an example, that is, a person at an angle β of 60° from the center line and 1 m from the array 44 who is to hear the sound, a pulse sequence controls the sensors: starting from the first sensor S1, each sensor (S1 to S101) emits sound waves in turn, each delayed by a fixed interval (for example, 1 μs) relative to its neighbor. These waves have the same frequency as the audible sound produced by the parametric method, and the waves emitted by the individual sensors have fixed phase differences whose mutual interference enlarges the coverage of the sound. In this embodiment, the sound waves finally achieve a coverage range of 20 mm to 2.8 m. In summary, driving the array 44 with the acoustic parametric array method increases the directivity of the sound waves, and the acoustic phased array method then adjusts their propagation angle and coverage area, so that the sound waves are adapted to the person and the device's sound emission is intelligent.
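Applying the same far-field rule τ = d·sin β / c to this example's numbers (1 mm pitch, speed of sound 343 m/s, both standard assumptions; the 1 μs figure above is quoted in the patent only as an example value) relates the quoted delays and angles:

```python
import math

C = 343.0    # speed of sound, m/s
D = 1e-3     # 1 mm spacing between adjacent sensors (from the example)

# Neighbor-to-neighbor delay that would steer a linear array toward a
# person 60 degrees off the center line (position O2 in the example):
tau_60 = D * math.sin(math.radians(60.0)) / C
print(f"delay for beta = 60 deg: {tau_60 * 1e6:.2f} us per element")

# Conversely, the deflection produced by a fixed 1 us neighbor delay:
beta = math.degrees(math.asin(C * 1e-6 / D))
print(f"a fixed 1 us delay corresponds to beta = {beta:.1f} deg")
```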
S3: The directional sound emitting unit 2 is controlled to emit sound waves according to the obtained (or calculated) audio signal.
Specifically, the control unit 3 calculates the corresponding audio signal according to the person-related information and transmits it to the directional sound emitting unit 2; the audio signal includes the emission frequency or the delay sequence of each sound emitting sensor in the directional sound emitting unit 2, so that the propagation direction and the coverage area of the sound waves can be adjusted according to the person.
Accordingly, an embodiment of the present disclosure further provides a display panel including the above sound producing device.
The sound producing device can be integrated with the display panel, or attached to the display panel and placed outside it.
Specifically, referring to FIGS. 2, 11 and 12, if the sound producing device is integrated with the display panel, the device includes a directional sound emitting unit 2, which includes a sound emitting sensor array 44; as shown in FIG. 2, the array 44 includes a first substrate 11 and a plurality of sound emitting sensors (for example, piezoelectric sensors 001) arranged on one side of the first substrate 11. As shown in FIG. 11, the display panel includes a second substrate 01 and a plurality of pixel units 6 arranged on one side of the second substrate 01; each pixel unit 6 includes a pixel electrode 61 and a pixel 62, and the pixel 62 includes multiple sub-pixels, such as a red sub-pixel 623, a green sub-pixel 622 and a blue sub-pixel 621. Referring to FIG. 12, the array 44 and the display panel can share a substrate, integrating the sound producing device and the display panel into a display panel with a sound production function; the pixel units 6 are arranged on the side of the sound emitting sensors (piezoelectric sensors 001) away from the shared substrate (11(01) in FIG. 12), which gives the display panel a sound production function and reduces its thickness.
In some embodiments, the sound producing device can also be mounted externally on the display panel; specifically, on the light emitting side of the display panel, the device is attached to the panel through an adhesive layer and mounted externally on the light emitting side, and the device is transparent so that it does not affect the light output of the panel.
In some embodiments, the sound producing device can be used in various types of display panels; for example, the display panel can be an organic light-emitting diode (OLED) display panel or a mini light-emitting diode (mini LED) display panel, which is not limited here.
Further, the control unit 3 of the sound producing device can be shared with the control chip (CPU) on the back plate of the display panel; the sound emitting sensor array 44 of the directional sound emitting unit 2 is arranged on the light emitting side of the panel, and the audio processing module 41, the power amplifier 42 and the impedance matching module 43 of the directional sound emitting unit 2 can be arranged in the peripheral region of the panel, for example the region where the pixel unit driving circuits are located. The recognition unit 1 can be arranged on one side of the display panel, for example the side where the camera is located; if the recognition unit 1 is a camera, it can be shared with the camera of the display panel.
Correspondingly, an embodiment of the present disclosure further provides a display apparatus including the display panel described above.
It should be noted that the display apparatus provided in this embodiment may be any product or component with a display function, such as a mobile phone, a tablet computer, a television, a monitor, a notebook computer, a digital photo frame, or a navigator. Those of ordinary skill in the art will understand that the display apparatus also has other indispensable components, which are not described here and should not be construed as limiting the present disclosure.
It will be appreciated that the above embodiments are merely exemplary embodiments adopted to illustrate the principles of the present disclosure, and the present disclosure is not limited thereto. Those of ordinary skill in the art can make various modifications and improvements without departing from the spirit and essence of the present disclosure, and such modifications and improvements are also regarded as falling within the protection scope of the present disclosure.

Claims (17)

  1. A sound producing device, comprising: a recognition unit, a directional sound emitting unit and a control unit; wherein,
    the recognition unit is connected to the control unit and configured to acquire information about persons within a preset range and send the acquired information about the persons to the control unit; and
    the control unit is connected to the directional sound emitting unit and configured to acquire a corresponding audio signal according to the acquired information about the persons, and to control the directional sound emitting unit to emit sound waves according to the audio signal.
  2. The sound producing device according to claim 1, wherein the recognition unit comprises:
    a headcount recognition module configured to acquire information on the number of persons within the preset range; and
    a position recognition module configured to acquire position information of each person relative to the sound producing device.
  3. The sound producing device according to claim 1, wherein the directional sound emitting unit comprises a sound emitting sensor array and an audio processing module, and the audio processing module is configured to convert the acquired audio signal into a drive signal to drive the sound emitting sensor array to emit sound.
  4. The sound producing device according to claim 3, wherein the sound emitting sensor array comprises a piezoelectric transducer array.
  5. The sound producing device according to claim 4, wherein the piezoelectric transducer array comprises a plurality of piezoelectric sensors;
    the piezoelectric transducer array comprises a first substrate, an elastic film layer on one side of the first substrate, a first electrode on a side of the elastic film layer facing away from the first substrate, a piezoelectric film on a side of the first electrode facing away from the first substrate, and a second electrode on a side of the piezoelectric film facing away from the first substrate; wherein,
    the first electrode comprises a plurality of sub-electrodes distributed in an array on the side of the elastic film layer facing away from the first substrate, each sub-electrode corresponding to one piezoelectric sensor; and
    the first substrate is provided with a plurality of openings in one-to-one correspondence with the sub-electrodes, and an orthographic projection of each sub-electrode on the first substrate lies within an orthographic projection of the corresponding opening on the first substrate.
  6. The sound producing device according to claim 5, wherein the elastic film layer comprises a polyimide film.
  7. The sound producing device according to claim 3, wherein the sound emitting sensor array comprises a plurality of sound emitting sensors evenly divided into a plurality of sensor groups, each sensor group receiving one corresponding drive signal.
  8. The sound producing device according to claim 7, wherein the plurality of sound emitting sensors are distributed in an array, and the sound emitting sensors in a same column or a same row are connected in series to form one sensor group;
    or,
    the plurality of sound emitting sensors are divided into a plurality of sub-arrays, and the sound emitting sensors in each sub-array are connected in series to form one sensor group.
  9. The sound producing device according to claim 3, wherein the directional sound emitting unit further comprises:
    a power amplifier connected to the audio processing module and configured to amplify the drive signal; and
    an impedance matching module connected between the power amplifier and the sound emitting sensor array and configured to match the impedance of the two so as to optimize the drive signal.
  10. The sound producing device according to claim 1, wherein the control unit comprises:
    a data recording module connected to the recognition unit and configured to record the information about the persons transmitted by the recognition unit; and
    an audio signal calculation module connected between the data recording module and the directional sound emitting unit and configured to calculate, according to the acquired information about the persons, the audio signal corresponding to the information about the persons.
  11. The sound producing device according to claim 1, wherein the recognition unit comprises any one of a piezoelectric transducer sensor, a light pulse sensor, a structured light sensor and a camera.
  12. A method for driving a sound producing device, comprising:
    acquiring, by a recognition unit, information about persons within a preset range, and sending the information about the persons to a control unit; and
    acquiring, by the control unit, a corresponding audio signal according to the information about the persons, and controlling a directional sound emitting unit to emit sound waves according to the audio signal.
  13. A display panel, comprising the sound producing device according to any one of claims 1 to 11.
  14. The display panel according to claim 13, wherein the sound producing device comprises a directional sound emitting unit, the directional sound emitting unit comprises a sound emitting sensor array, and the sound emitting sensor array comprises a first substrate and a plurality of sound emitting sensors arranged on one side of the first substrate;
    the display panel comprises a second substrate and a plurality of pixel units arranged on one side of the second substrate; wherein,
    the sound emitting sensor array and the display panel share a substrate, and the plurality of pixel units are arranged on a side of the plurality of sound emitting sensors facing away from the shared substrate.
  15. The display panel according to claim 13, further comprising an adhesive layer;
    the sound producing device is attached to the display panel by the adhesive layer.
  16. The display panel according to claim 13, wherein the display panel is one of an organic light-emitting diode display panel or a mini light-emitting diode display panel.
  17. A display apparatus, comprising the display panel according to any one of claims 13 to 16.
PCT/CN2021/092332 2020-05-14 2021-05-08 Sound producing device and method for driving the same, display panel and display apparatus WO2021227980A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/765,225 US11902761B2 (en) 2020-05-14 2021-05-08 Sound producing device and method for driving the same, display panel and display apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010409194.6 2020-05-14
CN202010409194.6A CN111615033B (zh) 2020-05-14 Sound producing device and method for driving the same, display panel and display apparatus

Publications (1)

Publication Number Publication Date
WO2021227980A1 true WO2021227980A1 (zh) 2021-11-18

Family

ID=72203373

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/092332 2020-05-14 2021-05-08 Sound producing device and method for driving the same, display panel and display apparatus WO2021227980A1 (zh)

Country Status (3)

Country Link
US (1) US11902761B2 (zh)
CN (1) CN111615033B (zh)
WO (1) WO2021227980A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111615033B (zh) 2020-05-14 2024-02-20 京东方科技集团股份有限公司 Sound producing device and method for driving the same, display panel and display apparatus
CN112153538B (zh) * 2020-09-24 2022-02-22 京东方科技集团股份有限公司 Display apparatus, panoramic sound implementation method therefor, and non-volatile storage medium
CN114173262B (zh) * 2021-11-18 2024-02-27 苏州清听声学科技有限公司 Ultrasonic sound generator, display, and electronic device
CN114173261B (zh) * 2021-11-18 2023-08-25 苏州清听声学科技有限公司 Ultrasonic sound generator, display, and electronic device
CN114724459B (zh) * 2022-03-10 2024-04-19 武汉华星光电技术有限公司 Display substrate and display panel
CN116614745B (zh) * 2023-06-19 2023-12-22 金声源(嘉兴)科技有限公司 Directional sound wave generator applied to expressways

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1774871A * 2003-04-15 2006-05-17 专利创投公司 Directional loudspeaker
CN103165125A * 2013-02-19 2013-06-19 深圳创维-Rgb电子有限公司 Audio directional processing method and apparatus
CN104937660A * 2012-11-18 2015-09-23 诺威托系统有限公司 Method and system for generating a sound field
CN108966086A * 2018-08-01 2018-12-07 苏州清听声学科技有限公司 Adaptive directional audio system based on target position changes and control method therefor
CN109068245A * 2018-08-01 2018-12-21 京东方科技集团股份有限公司 Screen sound producing device, sound producing display screen, manufacturing method therefor, and screen sound producing system
US20190124446A1 (en) * 2016-03-31 2019-04-25 The Trustees Of The University Of Pennsylvania Methods, systems, and computer readable media for a phase array directed speaker
CN110099343A * 2019-05-28 2019-08-06 安徽奥飞声学科技有限公司 Earpiece with a MEMS speaker array and communication device
CN111615033A * 2020-05-14 2020-09-01 京东方科技集团股份有限公司 Sound producing device and method for driving the same, display panel and display apparatus
CN112216266A * 2020-10-20 2021-01-12 傅建玲 Phased multi-channel sound wave directional emission method and system

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DK1142446T3 (da) * 1999-01-06 2003-11-17 Iroquois Holding Co Inc Loudspeaker system
JP5254951B2 (ja) * 2006-03-31 2013-08-07 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Data processing apparatus and method
CN103002376B (zh) * 2011-09-09 2015-11-25 联想(北京)有限公司 Method for directional sound transmission and electronic device
JP5163796B1 (ja) * 2011-09-22 2013-03-13 パナソニック株式会社 Sound reproduction apparatus
US8879766B1 (en) * 2011-10-03 2014-11-04 Wei Zhang Flat panel displaying and sounding system integrating flat panel display with flat panel sounding unit array
KR20180097786A (ko) * 2013-03-05 2018-08-31 애플 인크. Adjusting the beam pattern of a speaker array based on the location of one or more listeners
JP6233581B2 (ja) * 2013-12-26 2017-11-22 セイコーエプソン株式会社 Ultrasonic sensor and method of manufacturing the same
US20150382129A1 (en) * 2014-06-30 2015-12-31 Microsoft Corporation Driving parametric speakers as a function of tracked user location
US11310617B2 (en) * 2016-07-05 2022-04-19 Sony Corporation Sound field forming apparatus and method
CN107776483A (zh) * 2017-09-26 2018-03-09 广州小鹏汽车科技有限公司 Directional sound emitting system and implementation method
CN109032411B (zh) * 2018-07-26 2021-04-23 京东方科技集团股份有限公司 Display panel, display apparatus and control method therefor
CN109803199A (zh) * 2019-01-28 2019-05-24 合肥京东方光电科技有限公司 Sound producing device, display system, and sound producing method of the sound producing device
CN110112284B (zh) * 2019-05-27 2021-09-17 京东方科技集团股份有限公司 Flexible acousto-electric substrate, preparation method therefor, and flexible acousto-electric device
CN110225439B (zh) * 2019-06-06 2020-08-14 京东方科技集团股份有限公司 Array substrate and sound producing device
CN110636420B (zh) * 2019-09-25 2021-02-09 京东方科技集团股份有限公司 Thin-film speaker, method for preparing a thin-film speaker, and electronic device

Also Published As

Publication number Publication date
US11902761B2 (en) 2024-02-13
CN111615033B (zh) 2024-02-20
US20220353613A1 (en) 2022-11-03
CN111615033A (zh) 2020-09-01

Similar Documents

Publication Publication Date Title
WO2021227980A1 (zh) Sound producing device and method for driving the same, display panel and display apparatus
CN107107114B (zh) Three-port piezoelectric ultrasonic transducer
JP6316433B2 (ja) Micromechanical ultrasonic transducer and display
US7760891B2 (en) Focused hypersonic communication
US20150023138A1 (en) Ultrasonic Positioning System and Method Using the Same
WO2020155671A1 (zh) Fingerprint recognition structure and display apparatus
US20080093952A1 (en) Hypersonic transducer
WO2020259384A1 (zh) Fingerprint recognition device, driving method therefor, and display apparatus
JP5423370B2 (ja) Sound source localization apparatus
WO2016061410A1 (en) Three-port piezoelectric ultrasonic transducer
US11403868B2 (en) Display substrate having texture information identification function, method for driving the same and display device
TW202026941A (zh) Ultrasonic fingerprint detection and related apparatus and methods
US11474607B2 (en) Virtual, augmented, or mixed reality device
Dokmanić et al. Hardware and algorithms for ultrasonic depth imaging
CN111782090B (zh) Display module, ultrasonic touch detection method, and ultrasonic fingerprint recognition method
CN111586553B (zh) Display apparatus and operating method thereof
JP2014032600A (ja) Display input device
WO2020062108A1 (zh) Device
CN111510819A (zh) Ultrasonic speaker system and operating method thereof
CN112329672A (zh) Fingerprint recognition module, preparation method therefor, display panel and display apparatus
JP2012134591A (ja) Oscillation device and electronic apparatus
Sarkar Audio recovery and acoustic source localization through laser distance sensors
TWM642391U (zh) Ultrasonic fingerprint recognition module and device
CN116458171A (zh) Sound producing device, screen sound producing display apparatus and preparation method therefor
JP2014188148A (ja) Ultrasonic measurement apparatus, ultrasonic transducer device, and ultrasonic image apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21805208

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21805208

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29.06.2023)