WO2022218195A1 - Display device and audio output method thereof - Google Patents
Display device and audio output method thereof
- Publication number
- WO2022218195A1 (PCT/CN2022/085410)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- speaker
- display device
- sound
- distance
- moment
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/64—Constructional details of receivers, e.g. cabinets or dust covers
- H04N5/642—Disposition of sound reproducers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/44—Receiver circuitry for the reception of television signals according to analogue transmission standards
- H04N5/60—Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals
- H04N5/607—Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals for more than one sound signal, e.g. stereo, multilanguages
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R5/00—Stereophonic arrangements
- H04R5/02—Spatial or constructional arrangements of loudspeakers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R5/00—Stereophonic arrangements
- H04R5/04—Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/301—Automatic calibration of stereophonic sound system, e.g. with test microphone
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/20—Arrangements for obtaining desired frequency or directional characteristics
- H04R1/32—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
- H04R1/323—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only for loudspeakers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2201/00—Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
- H04R2201/02—Details casings, cabinets or mounting therein for transducers covered by H04R1/02 but not provided for in any of its subgroups
- H04R2201/025—Transducer mountings or cabinet supports enabling variable orientation of transducer of cabinet
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2420/00—Details of connection covered by H04R, not provided for in its groups
- H04R2420/01—Input selection or mixing for amplifiers or loudspeakers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2499/00—Aspects covered by H04R or H04S not otherwise provided for in their subgroups
- H04R2499/10—General applications
- H04R2499/15—Transducers incorporated in visual displaying devices, e.g. televisions, computer displays, laptops
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S1/00—Two-channel systems
- H04S1/007—Two-channel systems in which the audio signals are in digital form
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/11—Positioning of individual sound objects, e.g. moving airplane, within a sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/05—Application of the precedence or Haas effect, i.e. the effect of first wavefront, in order to improve sound-source localisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S5/00—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
Definitions
- the present application relates to the field of television equipment, and in particular, to a display device and an audio output method thereof.
- Existing video playback devices, such as flat-screen TVs, have at least two speakers to produce a stereo effect, but the stereo effect of such devices is often poor.
- embodiments of the present application provide a display device, aiming to obtain a display device with a good stereo effect.
- embodiments of the present application also provide an audio output method of a display device, which is used to improve the stereo effect of the display device.
- in a first aspect, a display device includes a display screen, a first speaker and a second speaker.
- the first speaker is arranged on the rear side of the display screen, and the sounding direction of the first speaker is toward the rear and upper part of the display device; the sounding direction of the second speaker is toward the front of the display device or toward the bottom of the display device; the first speaker and the second speaker do not sound synchronously.
- the sounding direction of the first speaker is the initial propagation direction of the sound emitted by the first speaker.
- the rear and upper part of the display device is the range of directions between the rear of the display device and the top of the display device.
- by making the first speaker and the second speaker sound asynchronously, the user can receive the sound from the first speaker and the second speaker at the same time, the sound image position formed by their combined sound is more accurate, and no deviation arises between the picture and the perceived sound position; the sound image and the picture are thus accurately synchronized, the stereo effect is good, and the user experience is improved.
- the sound emitted by the first speaker is reflected by a first obstacle located behind the display device to a second obstacle located above the display device, and reflected to the user viewing area in front of the display device through the second obstacle.
- the first obstacle may be a wall
- the second obstacle may be a ceiling.
- by directing the sounding direction of the first speaker toward the upper rear of the display device, that is, orienting the axial direction of the first speaker toward the upper rear, most of the sound of the first speaker is directed toward the first obstacle behind the display device, reflected by the first obstacle to the second obstacle above the display device, and then reflected by the second obstacle to the user viewing area. A small part of the sound deviates greatly from the axial direction and travels toward the front of the display device, reaching the user viewing area directly.
- because this part of the sound has a large off-axis angle from the main axis of the first speaker, its intensity is weak, so the direct sound in the user's viewing area has only a weak masking effect on the sound that arrives after being reflected twice by the wall and the ceiling.
- the sound emitted by the first speaker is reflected by the first obstacle to the second obstacle, and then reflected from the ceiling and projected to the user's viewing area.
- the sound image of the sound transmitted to the user's viewing area is therefore located above, near the ceiling, so that the range of the sound field in the height direction formed by the first speaker is not limited to the size of the display screen; the sound field in the height direction can cover the entire spatial height of the application environment and achieve sky sound image localization. For example, the sound image of an aircraft engine can be positioned above the display screen and played through the first speaker, so that the picture played on the display screen is consistent with the sound localization.
- the sound output direction of the first speaker is 10 to 80 degrees (inclusive) from the horizontal direction, ensuring that the sound emitted by the first speaker is reflected by the wall and the ceiling in turn and finally reaches the user viewing area.
- the horizontal direction is the direction perpendicular to the display surface of the display screen.
- further, the sound output direction of the first speaker may be 35 to 45 degrees (inclusive) from the horizontal direction. When the sound output direction of the first speaker is 35 to 45 degrees from the horizontal direction, the display device can adapt to application environments with a variety of spatial parameters. The spatial parameter is a collection of several different parameters, such as the distance from the display device to the wall, the distance from the display device to the ceiling, and the distance from the display device to the user. That is, the display device can be applied in environments where the distances to the wall, the ceiling, and the user each vary within a certain range, while still ensuring the user's audio-visual experience.
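The wall-then-ceiling reflection described above can be sketched with the image-source method. This is an illustrative sketch, not taken from the patent: the function name, the listener-height parameter, and the speed of sound (343 m/s, roughly room temperature) are all assumptions. Reflecting the listener first across the ceiling and then across the rear wall turns the two-bounce path into a straight line, which yields both the required launch angle and the reflected path's extra length.

```python
import math

C_SOUND = 343.0  # assumed speed of sound in air, m/s


def launch_angle_and_delay(d_wall, h_ceiling, d_user, h_listener):
    """Image-source sketch of the wall + ceiling reflection path.

    d_wall:     distance from the first speaker to the wall behind (m)
    h_ceiling:  height of the ceiling above the first speaker (m)
    d_user:     horizontal distance from the display to the listener (m)
    h_listener: how far the listener's ears sit below the speaker (m)

    Returns (launch angle above horizontal, degrees;
             excess travel time of the reflected path vs. the
             direct front path, seconds).
    """
    # Reflect the listener across the ceiling, then across the wall:
    # a straight line from the speaker to this double image traces the
    # wall -> ceiling reflection path.
    dx = 2 * d_wall + d_user        # horizontal reach of the unfolded path
    dy = 2 * h_ceiling + h_listener  # vertical reach of the unfolded path
    angle = math.degrees(math.atan2(dy, dx))
    reflected_len = math.hypot(dx, dy)
    direct_len = math.hypot(d_user, h_listener)  # front speaker's path
    delay = (reflected_len - direct_len) / C_SOUND
    return angle, delay
```

For a typical setup (0.1 m to the wall, ceiling 1.5 m above the speaker, listener 3 m away), this gives a launch angle of roughly 46 degrees, comfortably inside the patent's 10-80 degree window, and an excess travel time of a few milliseconds.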
- the first speaker emits the first sound at the first moment.
- the second speaker emits the second sound corresponding to the first sound at the second moment.
- the first sound and the second sound are mixed in the viewing area of the user.
- there is a time difference between the first moment and the second moment.
- the user can simultaneously receive the sound from the first speaker and the second speaker at the third moment, so the localization of the sound image position is more accurate.
- no deviation arises between the picture and the perceived sound position, so the sound and the picture are accurately synchronized and the user experience is improved.
- the time difference between the first moment and the second moment can change, ensuring that the sounds emitted by the first speaker and the second speaker reach the viewing area of the user at the same time, so that sound image localization is more accurate.
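A minimal sketch of how such a time difference might be applied in a digital playback chain: the front (direct-path) channel is delayed by the reflected path's excess travel time, so both wavefronts reach the viewing area together. The function name, the plain-list signal representation, and the 48 kHz default sample rate are illustrative assumptions, not the patent's implementation.

```python
def align_channels(rear, front, extra_delay_s, fs=48000):
    """Delay the front (direct-path) channel by the reflected path's
    excess travel time so both wavefronts arrive at the listener
    together. `rear` and `front` are lists of samples; `fs` is the
    assumed sample rate in Hz."""
    lag = int(round(extra_delay_s * fs))  # excess time -> whole samples
    # Pad the rear channel at the end and the front channel at the
    # start so both returned lists keep equal length.
    return rear + [0.0] * lag, [0.0] * lag + front
```

With a 2-sample excess delay, an impulse on the front channel is pushed back by exactly two samples while the rear channel plays immediately, emulating the earlier emission of the first (reflected-path) speaker.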
- when the volume ratio of the first sound to the second sound changes, the position of the sound image formed by the first sound and the second sound changes.
- by adjusting this volume ratio, the position of the sound image formed by the first sound and the second sound is adjusted in the height direction, so that the sound image position is synchronized with the picture.
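One common way to realize such a height adjustment is constant-power amplitude panning between the bottom (front-facing) speaker and the height (upward-firing) speaker. This is a generic audio technique sketched here for illustration, not the method specified by the patent; the function name and the [0, 1] elevation scale are assumptions.

```python
import math


def height_pan_gains(elevation):
    """Constant-power pan between the bottom and height channels.

    elevation in [0, 1]: 0 places the sound image at screen level
    (all energy to the second/bottom speaker), 1 places it fully
    overhead (all energy to the first/height speaker).
    Returns (gain_height, gain_bottom).
    """
    theta = elevation * math.pi / 2
    # sin/cos pair keeps gain_height**2 + gain_bottom**2 == 1,
    # so perceived loudness stays constant as the image moves.
    return math.sin(theta), math.cos(theta)
```

Sweeping `elevation` from 0 to 1 over time would, under this sketch, move the aircraft-engine image of the earlier example smoothly from the screen up toward the ceiling while total acoustic power stays constant.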
- the sound emission direction of the first speaker is variable, and the sound emission direction of the first speaker can be adjusted as required, so that the sound emitted by the first speaker can be accurately transmitted to the viewing area of the user.
- the sounding direction of the first speaker can be changed.
- the sounding direction of the first speaker is adjusted through spatial parameters, so as to realize automatic adjustment of the sounding direction of the first speaker and improve user experience.
- the spatial parameter includes at least one of a first distance, a second distance, and a third distance, where the first distance is the distance between the display device and the first obstacle, the second distance is the distance between the display device and the second obstacle, and the third distance is the distance between the display device and the user.
- the display device can adjust the position of the viewing area of the user according to the position of the user, so that the user can have a good audio-visual experience no matter where he moves.
- the display device includes a top and a bottom, and the second speaker is arranged near the bottom relative to the first speaker, so that the user can receive a combined sound formed by the sounds emitted from the speakers at different positions, thereby improving the stereoscopic effect of the sound.
- the display device further includes a processor coupled to both the first speaker and the second speaker; the processor controls the ratio of the volume of the first sound emitted by the first speaker to the volume of the second sound emitted by the second speaker, so as to adjust the position of the sound image formed by the first sound and the second sound, realizing adjustment of the sound image position in the height direction so that the sound image position is synchronized with the picture.
- the processor is further configured to control the first speaker to emit the first sound at the first moment and the second speaker to emit the second sound at the second moment, so that the user receives the first sound and the second sound simultaneously or almost simultaneously at the third moment; there is a time difference between the first moment and the second moment.
- the display device further includes a drive assembly coupled to the processor; the processor drives the drive assembly to adjust the sound emission direction of the first speaker as required, so that the sound emitted by the first speaker can be accurately transmitted to the user's viewing area.
- the display device further includes a distance detector, which detects the application environment of the display device and obtains spatial parameters of the application environment; in response to changes in the spatial parameters, the sounding direction of the first speaker changes, realizing automatic adjustment of the sounding direction of the first speaker and improving the user experience.
- the distance detector is coupled to the processor; the distance detector sends an instruction to the processor, and in response to the instruction, the time difference between the first moment and the second moment changes, ensuring that the sounds from the first speaker and the second speaker reach the user's viewing area at the same time, so that sound image localization is more accurate.
- the distance detector includes a radar. Radar is more accurate than other detectors.
- the display device further includes a sensing sensor, that is, a sensor that senses that the display device is moved or that the position of the display device changes.
- the sensing sensor may be a gyroscope, an accelerometer, or the like.
- when the sensing sensor detects that the display device has moved, the distance detector is triggered to detect the spatial parameters of the application environment, and the first speaker adjusts its sounding direction according to the spatial parameters; thus, whenever the position of the display device changes, the distance detector acquires fresh spatial parameters and the first speaker re-adjusts its sounding direction, ensuring the user's audio-visual experience at all times.
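The trigger chain described above (motion sensor, then distance detector, then re-aiming) can be sketched as plain glue logic. Everything here is hypothetical: `measure_distances` stands in for the radar query and `re_aim` for the drive-assembly command; neither name comes from the patent.

```python
def on_motion_event(measure_distances, re_aim, last_params):
    """Called whenever the gyroscope/accelerometer reports movement.

    measure_distances: callable returning the spatial parameters,
        e.g. a (d_wall, d_ceiling, d_user) tuple from the radar.
    re_aim: callable that re-orients the first speaker for the
        given parameters (drives the adjustment assembly).
    last_params: parameters from the previous measurement, or None.
    """
    params = measure_distances()
    if params != last_params:  # re-aim only when the room actually changed
        re_aim(params)
    return params
```

A small shake that leaves the device in place would then re-measure but skip the re-aim, while a genuine relocation triggers both.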
- the first speaker is located between the top and the midpoint between the top and the bottom. That is, the first speaker sits below the top, so sound emitted by the first speaker toward the front of the display device is blocked by the casing of the display device, effectively reducing the sound transmitted directly from the first speaker to the viewing area of the user and making sky sound image localization more accurate.
- the second speaker is arranged at the bottom and/or at the side of the display device.
- an audio output method is provided for a display device that includes a display screen, a first speaker and a second speaker, where the first speaker emits sound toward the rear and upper part of the display device and the sounding direction of the second speaker is toward the front of the display device or toward the bottom of the display device; the audio output method includes:
- the first speaker receives the first audio information
- the second speaker receives the second audio information corresponding to the first audio information
- the playing times of the first audio information and the second audio information are not synchronized.
- the audio output method of the present application controls the first speaker to sound at the first moment and the second speaker to sound at the second moment, so that the user can simultaneously receive the sound from the first speaker and the second speaker at the third moment; the stereo effect is good and the user experience is improved.
- the sound emitted by the first speaker is reflected by a first obstacle located behind the display device to a second obstacle located above the display device, and reflected to the user viewing area in front of the display device through the second obstacle.
- by orienting the sounding direction of the first speaker toward the rear and upper part of the display device, the sound that reaches the user's viewing area via the ceiling reflection produces a good sky sound image localization effect, improving the user's audio-visual experience.
- the audio output method includes: the first speaker emits a first sound at a first moment after receiving the first audio information; the second speaker emits a second sound corresponding to the first sound at a second moment after receiving the second audio information; the first sound and the second sound are mixed in the user's viewing area; and there is a time difference between the first moment and the second moment.
- the first speaker is controlled to emit sound at the first moment and the second speaker at the second moment, so that the user can simultaneously receive the sound from the first speaker and the second speaker at the third moment; the localization of the sound image position is more accurate, no deviation arises between the picture and the perceived sound position, the sound and the picture are accurately synchronized, and the user experience is improved.
- when the volume ratio of the first audio information to the second audio information changes, the position of the sound image formed by the first sound and the second sound changes. That is, by adjusting the volume ratio of the first audio information and the second audio information so that the sound image position is synchronized with the picture, a true three-dimensional sound field effect can be realized.
- the audio output method further includes: detecting an application environment of the display device, and acquiring spatial parameters of the application environment; and changing the sounding direction of the first speaker in response to changes in the spatial parameters.
- the audio output method in this embodiment can adjust the sounding direction of the first speaker according to the application environment of the display device.
- the display device detects the spatial parameters of the application environment through the distance detector and then adjusts the sounding direction of the first speaker according to those spatial parameters, so that in different application environments the sound emitted by the first speaker is reflected to the correct user viewing area, improving the user's audio-visual experience.
- the audio output method further includes: detecting the application environment of the display device and obtaining spatial parameters of the application environment; in response to changes in the spatial parameters, the time difference between the first moment and the second moment changes, ensuring that the sound from the first speaker and the second speaker reaches the viewing area of the user at the same time, so that sound image localization is more accurate.
- the spatial parameter includes at least one of a first distance, a second distance, and a third distance, where the first distance is the distance between the display device and the first obstacle, and the second distance is the distance between the display device and the first obstacle. The distance between the second obstacles, and the third distance is the distance between the display device and the user.
- the audio output method can also sense the movement state of the display device at all times, where the movement state includes being moved and changing position. When movement is sensed, the distance detector is triggered to detect the spatial parameters of the application environment, and the sounding direction of the first speaker is adjusted according to the spatial parameters; thus, whenever the position of the display device changes, the distance detector acquires fresh spatial parameters and the first speaker re-adjusts its sounding direction, ensuring the user's audio-visual experience at all times.
- in a third aspect, a display device includes a display screen, a first speaker and a second speaker; the first speaker is arranged on the rear side of the display screen, the sounding direction of the first speaker is toward the rear and upper part of the display device, and the sounding direction of the second speaker is different from the sounding direction of the first speaker.
- the sounding direction of the second speaker of the present application is different from the sounding direction of the first speaker, so that the user can receive the sound from different directions and improve the stereoscopic effect of the sound.
- the sound emitted by the first speaker is reflected by a first obstacle located behind the display device to a second obstacle located above the display device, and reflected to the user viewing area in front of the display device through the second obstacle.
- the first obstacle may be a wall
- the second obstacle may be a ceiling.
- by directing the sounding direction of the first speaker toward the upper rear of the display device, that is, orienting the axial direction of the first speaker toward the upper rear, most of the sound of the first speaker is directed toward the first obstacle behind the display device, reflected by the first obstacle to the second obstacle above the display device, and then reflected by the second obstacle to the user viewing area. A small part of the sound deviates greatly from the axial direction and travels toward the front of the display device, reaching the user viewing area directly.
- because this part of the sound has a large off-axis angle from the main axis of the first speaker, its intensity is weak, so the direct sound in the user's viewing area has only a weak masking effect on the sound that arrives after being reflected twice by the wall and the ceiling.
- the sound emitted by the first speaker is reflected by the first obstacle to the second obstacle, and then reflected from the ceiling and projected to the user's viewing area.
- the sound image of the sound transmitted to the user's viewing area is therefore located above, near the ceiling, so that the range of the sound field in the height direction formed by the first speaker is not limited to the size of the display screen; the sound field in the height direction can cover the entire spatial height of the application environment and achieve sky sound image localization. For example, the sound image of an aircraft engine can be positioned above the display screen and played through the first speaker, so that the picture played on the display screen is consistent with the sound localization.
- the sound output direction of the first speaker is 10 to 80 degrees (inclusive) from the horizontal direction, ensuring that the sound emitted by the first speaker is reflected by the wall and the ceiling in turn and finally reaches the user viewing area.
- the horizontal direction is the direction perpendicular to the display surface of the display screen.
- the sounding direction of the first speaker is variable, so that the first speaker adjusts the sounding direction as required, so that the sound emitted by the first speaker can be accurately transmitted to the viewing area of the user.
- the sounding direction of the first speaker can be changed.
- the sounding direction of the first speaker is adjusted through spatial parameters, so as to realize automatic adjustment of the sounding direction of the first speaker and improve user experience.
- in a fourth aspect, a display device includes a display screen, a first speaker and a second speaker; the first speaker is arranged on the rear side of the display screen, the sounding direction of the first speaker is toward the rear and upper part of the display device, and the sounding direction of the first speaker is variable.
- the sounding direction of the first speaker can be adjusted as required, so that the sound emitted by the first speaker can be accurately transmitted to the viewing area of the user.
- the sound emitted by the first speaker is reflected by a first obstacle located behind the display device to a second obstacle located above the display device, and reflected to the user viewing area in front of the display device through the second obstacle.
- the first obstacle may be a wall
- the second obstacle may be a ceiling.
- by directing the sounding direction of the first speaker toward the upper rear of the display device, that is, orienting the axial direction of the first speaker toward the upper rear, most of the sound of the first speaker is directed toward the first obstacle behind the display device, reflected by the first obstacle to the second obstacle above the display device, and then reflected by the second obstacle to the user viewing area. A small part of the sound deviates greatly from the axial direction and travels toward the front of the display device, reaching the user viewing area directly.
- because this part of the sound has a large off-axis angle from the main axis of the first speaker, its intensity is weak, so the sound that directly reaches the user's viewing area has only a weak masking effect on the sound that reaches the user's viewing area after being reflected twice by the wall and the ceiling.
- the sound emitted by the first speaker is reflected by the first obstacle to the second obstacle, and then reflected from the ceiling and projected down to the user's viewing area, so the sound image of the sound transmitted to the user's viewing area is located above the ceiling.
- as a result, the range of the sound field formed by the first speaker in the height direction is not limited by the size of the display screen: the sound field in the height direction can cover the entire spatial height of the application environment and achieve the effect of sky sound image localization. For example, the sound image of an aircraft engine can be positioned above the display screen and played through the first speaker, so that the picture played on the display screen is consistent with the sound localization.
- the sounding direction of the first speaker can be changed.
- the sounding direction of the first speaker is adjusted through spatial parameters, so as to realize automatic adjustment of the sounding direction of the first speaker and improve user experience.
- the spatial parameter includes at least one of a first distance, a second distance, and a third distance, where the first distance is the distance between the display device and the first obstacle, the second distance is the distance between the display device and the second obstacle, and the third distance is the distance between the display device and the user.
- the position of the sound image formed by the first sound and the second sound changes.
- the position of the sound image formed by the first sound and the second sound is adjusted, so as to realize the adjustment of the sound image position in the height direction, so that the sound image position is synchronized with the picture.
- the first speaker emits the first sound at the first moment
- the second speaker emits the second sound corresponding to the first sound at the second moment
- the first sound and the second sound reach the viewing area of the user at a third moment.
- the user can simultaneously receive the sound from the first speaker and the second speaker at the third moment, and the positioning of the sound image position is more accurate.
- this avoids deviation between the position of the picture and the sound image position, so that the sound image position can be accurately synchronized with the picture and the user experience can be improved.
- the time difference between the first moment and the second moment changes, so as to ensure that the sounds emitted by the first speaker and the second speaker reach the viewing area of the user at the same time, making the sound image localization more accurate.
- when the first speaker emits sound in one direction, the time difference between the first moment and the second moment is a first time difference; when the first speaker emits sound in another direction, the time difference between the first moment and the second moment is a second time difference, wherein the first time difference and the second time difference are different.
- the time difference between the first moment and the second moment can be obtained from data such as the sounding direction of the first speaker.
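As an illustration of the relationship between the first and second moments, the sketch below computes the required emission offset from the two acoustic path lengths. It is a hypothetical helper, not the patent's implementation; the speed of sound (343 m/s) and the path lengths are assumed inputs.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def emission_delay(reflected_path_m: float, direct_path_m: float) -> float:
    """Seconds the second (direct-path) speaker should emit after the
    first speaker, so that the first speaker's wall-and-ceiling reflected
    sound and the second speaker's direct sound arrive at the user
    viewing area at the same third moment."""
    return (reflected_path_m - direct_path_m) / SPEED_OF_SOUND
```

For example, with a 6.86 m reflected path and a 3.43 m direct path, the second speaker would emit about 10 ms after the first; when the sounding direction of the first speaker changes, the reflected path length changes, and so does this time difference.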
- a display device in a fifth aspect includes a display screen and a first speaker; the first speaker is arranged on the rear side of the display screen, and the sounding direction of the first speaker faces the rear and upper part of the display device. The sound emitted by the first speaker is reflected by a first obstacle located behind the display device to a second obstacle located above the display device, and is reflected by the second obstacle to a user viewing area in front of the display device.
- the first obstacle may be a wall
- the second obstacle may be a ceiling.
- in the present application, the sounding direction of the first speaker is directed toward the upper rear of the display device, that is, the axial direction of the first speaker is toward the upper rear of the display device. Since the axial direction of the first speaker is toward the rear and upper part of the display device, most of the sound of the first speaker is directed toward the first obstacle located behind the display device; after being reflected by the first obstacle, it is reflected by the second obstacle located above the display device and reaches the user viewing area. A small part of the sound of the first speaker deviates greatly from the axial direction and can travel toward the front of the display device to reach the viewing area of the user directly.
- because this part of the sound has a large off-axis angle from the main axis of the first speaker, its intensity is weak, so the sound that directly reaches the user's viewing area has only a weak masking effect on the sound that reaches the user's viewing area after being reflected twice by the wall and the ceiling.
- the sound emitted by the first speaker is reflected by the first obstacle to the second obstacle, and then reflected from the ceiling and projected down to the user's viewing area, so the sound image of the sound transmitted to the user's viewing area is located above the ceiling.
- as a result, the range of the sound field formed by the first speaker in the height direction is not limited by the size of the display screen: the sound field in the height direction can cover the entire spatial height of the application environment and achieve the effect of sky sound image localization. For example, the sound image of an aircraft engine can be positioned above the display screen and played through the first speaker, so that the picture played on the display screen is consistent with the sound localization.
- the sound output direction of the first speaker is 10 degrees to 80 degrees (inclusive) from the horizontal direction, so as to ensure that the sound emitted by the first speaker is reflected by the wall and the ceiling in turn and finally reaches the user viewing area.
- the horizontal direction is the direction perpendicular to the display surface of the display screen.
- the display device further includes a second speaker, and the sounding direction of the second speaker is toward the front of the display device or the bottom of the display device.
- the position of the sound image formed by the first sound and the second sound changes.
- the position of the sound image formed by the first sound and the second sound is adjusted, so as to realize the adjustment of the sound image position in the height direction, so that the sound image position is synchronized with the picture.
- the first speaker emits the first sound at the first moment
- the second speaker emits the second sound corresponding to the first sound at the second moment
- the first sound and the second sound reach the viewing area of the user at a third moment.
- the user can simultaneously receive the sound from the first speaker and the second speaker at the third moment, and the positioning of the sound image position is more accurate.
- this avoids deviation between the position of the picture and the sound image position, so that the sound image position can be accurately synchronized with the picture and the user experience can be improved.
- the time difference between the first moment and the second moment changes, so as to ensure that the sounds emitted by the first speaker and the second speaker reach the viewing area of the user at the same time, making the sound image localization more accurate.
- the sound emission direction of the first speaker is variable, and the sound emission direction of the first speaker can be adjusted as required, so that the sound emitted by the first speaker can be accurately transmitted to the viewing area of the user.
- the sounding direction of the first speaker can be changed.
- the sounding direction of the first speaker is adjusted through spatial parameters, so as to realize automatic adjustment of the sounding direction of the first speaker and improve user experience.
- the spatial parameter includes at least one of a first distance, a second distance, and a third distance, where the first distance is the distance between the display device and the first obstacle, the second distance is the distance between the display device and the second obstacle, and the third distance is the distance between the display device and the user.
- the display device can adjust the position of the viewing area according to the position of the user, so that the user can have a good audio-visual experience no matter where the user moves.
- FIG. 1 is a schematic structural diagram of a display device provided by an embodiment of the present application.
- FIG. 2 is a schematic diagram of an exploded structure of the display device shown in FIG. 1 at another angle;
- FIG. 3 is a schematic structural diagram of the display device shown in FIG. 1 located in an application environment;
- FIG. 4 is a schematic structural diagram of a related art display device located in an application environment
- FIG. 5 is a schematic structural diagram of another embodiment of the display device shown in FIG. 3;
- FIG. 6 is a schematic structural diagram of a processor, a first speaker and a second speaker of the display device shown in FIG. 3;
- FIG. 7 is a schematic diagram of the audio output processing process of the structure shown in FIG. 6;
- FIG. 8 is a specific schematic diagram of the audio output processing process shown in FIG. 7;
- FIG. 9 is a schematic diagram of another audio output processing process of the structure shown in FIG. 6;
- FIG. 10 is a schematic diagram of controlling the speaker sounding time in the audio output processing process shown in FIG. 9;
- FIG. 11 is a specific schematic diagram of the audio output processing process shown in FIG. 10;
- FIG. 12 is a schematic structural diagram of another embodiment of the structure shown in FIG. 6;
- Figure 13 is a schematic diagram of the audio output processing process of the structure shown in Figure 12;
- FIG. 14 is a schematic structural diagram of another embodiment of the display device shown in FIG. 1;
- FIG. 15 is a schematic flowchart of an audio output method of a display device provided in this embodiment.
- a connection may be a detachable connection or a non-detachable connection; it may be a direct connection or an indirect connection through an intermediate medium.
- An embodiment of the present application provides a display device, and the display device includes but is not limited to a display device with a speaker, such as a flat-panel TV, a computer display screen, a conference display screen, or a vehicle display screen.
- the following takes a flat-panel TV as an example of the display device for specific description.
- FIG. 1 is a schematic structural diagram of a display device provided by an embodiment of the present application.
- FIG. 2 is a schematic diagram of an exploded structure of the display device shown in FIG. 1 from another angle.
- the display device 100 includes a casing 10 , a display screen 20 , a speaker 30 , a main board 40 , a processor 50 and a memory 60 .
- the display screen 20 is used to display images, videos, and the like.
- the display screen 20 may also integrate touch functionality.
- the display screen 20 is mounted on the casing 10 .
- the housing 10 may include a frame 11 and a rear case 12 .
- the display screen 20 and the rear case 12 are respectively installed on opposite sides of the frame 11 , wherein the display screen 20 is located on the side facing the user, and the rear case 12 is located on the side facing away from the user.
- the display screen 20 includes a display panel.
- the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
- the space facing the display screen 20 is defined as the front of the display device 100
- the space facing the rear case 12 is the rear of the display device 100
- the display device 100 includes a top portion 101, a bottom portion 102, and two side portions 103 connected between the top portion 101 and the bottom portion 102 and disposed opposite to each other.
- the direction in which the top 101 of the display device 100 faces is defined as above the display device 100
- the direction in which the bottom 102 of the display device 100 faces is below the display device 100 .
- the mainboard 40 is located inside the casing 10 , and the mainboard 40 integrates the processor 50 , the memory 60 and other various circuit devices.
- the display screen 20 is coupled to the processor 50 to receive display signals sent by the processor 50 .
- the processor 50 may include one or more processing units, for example, the processor 50 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural-network processing unit (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
- the processor can generate the operation control signal according to the instruction operation code and the timing signal, and complete the control of extracting the instruction and executing the instruction.
- An internal memory may also be provided in the processor 50 for storing instructions and data.
- the memory in processor 50 may be a cache memory.
- the memory may store instructions or data that the processor 50 has just used or uses frequently. If the processor 50 needs to use the instructions or data again, it can call them directly from this memory, avoiding repeated accesses, reducing the waiting time of the processor 50, and thereby improving the efficiency of the system.
- processor 50 may include one or more interfaces.
- the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
- the processor 50 may be connected to modules such as a touch sensor, an audio module, a wireless communication module, a display, a camera, and the like through at least one of the above interfaces.
- Memory 60 may be used to store computer-executable program code, which includes instructions.
- the memory 60 may include a program storage area and a data storage area.
- the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
- the storage data area may store data (such as audio data, phone book, etc.) created during the use of the display device 100 and the like.
- the memory 60 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
- the processor 50 executes various functional methods or data processing of the display device 100 by executing the instructions stored in the memory 60 and/or the instructions stored in the memory provided in the processor, for example, causing the display screen 20 to display a target image.
- the display device 100 may implement audio functions, such as music playback and sound playback, through an audio module, a speaker, and a processor.
- the audio module is used to convert digital audio information into analog audio signal output, and also used to convert analog audio input to digital audio signal.
- the audio module can also be used to encode and decode audio signals.
- the audio module may be provided in the processor 50 , or some functional modules of the audio module may be provided in the processor 50 , or some or all functional modules of the audio module may be provided outside the processor 50 .
- a speaker (e.g., speaker 30), also called a "horn", is used to convert an audio electrical signal into a sound signal.
- the display device 100 can play sounds such as music through the speaker.
- the speaker 30 is located inside the casing 10 and is integrated on the side of the main board 40 facing away from the display screen 20 , that is, the speaker 30 is arranged on the rear side of the display screen 20 .
- the rear side of the display screen 20 is the side facing away from the display surface of the display screen 20 .
- the rear case 12 is provided with a sound hole 121 , and the sound emitted by the speaker 30 is transmitted to the outside of the case 10 through the sound hole 121 .
- Speaker 30 is coupled to processor 50 for executing instructions stored in memory 60, and/or instructions stored in memory provided in the processor, to cause speaker 30 to produce sound.
- the speaker 30 includes a first speaker 31 and a second speaker 32 , and the first speaker 31 and the second speaker 32 are both fixed to the main board 40 . Both the first speaker 31 and the second speaker 32 are coupled to the processor 50 .
- the sound emission direction of the first speaker 31 is toward the rear and upper part of the display device 100
- the sound emission direction of the second speaker 32 is toward the front of the display device 100. It can be understood that the sounding direction of the first speaker 31 is the initial propagation direction of the sound emitted by the first speaker 31, and the rear upper part of the display device 100 refers to the directions between the rear of the display device 100 and the top of the display device 100.
- the first speaker and the second speaker may also be fixed at other positions in the housing; the first speaker and the second speaker are generally located at different positions, emit sound in different directions, and their sounds travel paths of different lengths to reach the user.
- the speaker may further include other speakers other than the first speaker and the second speaker.
- the first speaker may be located at the top of the display device 100, and the sound emitted by the first speaker may be directed upward, toward the upper front, and so on.
- the first speaker and/or the second speaker may also be located on the side of the display device 100 .
- Each of the first loudspeaker and/or the second loudspeaker may comprise a plurality of arrayed loudspeakers.
- FIG. 3 is a schematic structural diagram of the display device 100 shown in FIG. 1 in an application environment.
- the display device 100 is disposed close to the wall 201 , the display screen 20 of the display device 100 faces away from the wall 201 , and the top 101 of the display device 100 faces the ceiling 202 .
- the sound emitted by the first speaker 31 is reflected by the first obstacle behind the display device 100 to the second obstacle above the display device 100 , and reflected to the user viewing area in front of the display device 100 through the second obstacle. That is, the first obstacle behind the display device 100 is the wall 201 , the second obstacle above the display device 100 is the ceiling 202 , and the front of the display device 100 is the user viewing area 203 .
- the application environment of the display device 100 may be different, and the first obstacle may also be other structures other than the wall 201, such as a screen, a reflector, and the like.
- the second obstacle may also be a blocking structure such as a reflective plate.
- the sound emitted by the first speaker 31 is reflected by the wall 201 located behind the display device 100, forming a mirror sound source A of the first speaker 31 with the wall 201 as the reflecting mirror; the sound reflected by the wall 201 continues to the ceiling 202 above the display device 100 and is reflected by the ceiling 202, forming a mirror sound source B of the first speaker 31 with the ceiling 202 as the reflecting mirror. The sound is then projected from the ceiling 202, as if from mirror sound source B, down to the user viewing area 203 in front of the display device 100.
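The two mirror sound sources follow the standard image-source model. The sketch below is an illustrative assumption (a 2-D side view with an assumed 2.8 m ceiling height), not part of the patent: source A is the speaker mirrored across the wall plane, source B is A mirrored across the ceiling, and the wall-then-ceiling path length equals the straight-line distance from B to the listener.

```python
import math

CEILING_HEIGHT_M = 2.8  # assumed room height

def mirror_sources(speaker, wall_x=0.0, ceiling_z=CEILING_HEIGHT_M):
    """speaker: (x, z) position in a side view, x measured forward from
    the wall plane, z measured up from the floor. Returns the first-order
    mirror source A (speaker mirrored across the wall) and the
    second-order mirror source B (A mirrored across the ceiling)."""
    x, z = speaker
    a = (2 * wall_x - x, z)           # mirrored behind the wall
    b = (a[0], 2 * ceiling_z - a[1])  # then mirrored above the ceiling
    return a, b

def reflected_path_length(speaker, listener, wall_x=0.0,
                          ceiling_z=CEILING_HEIGHT_M):
    """The wall-then-ceiling reflection path has the same length as the
    straight line from the second-order mirror source B to the listener."""
    _, b = mirror_sources(speaker, wall_x, ceiling_z)
    return math.dist(b, listener)
```

Because source B lies above the ceiling, the perceived sound image of the twice-reflected sound is elevated, which is consistent with the sky sound image effect described above.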
- the position of the sounding object perceived by a person through the sound he hears is called the sound image.
- the sounding direction of the speaker 2 is toward the front and top of the display device 1 , so that the sound emitted by the speaker 2 can be transmitted to the ceiling 3 and reflected by the ceiling 3 to reach the viewing area 4 of the user. Since the sound wave radiated by the speaker 2 has directivity, the intensity of the sound wave propagating along the axial direction of the speaker 2 is the strongest, and as the off-axis angle increases, the intensity of the sound wave gradually weakens. Part of the sound emitted by the speaker 2 will be reflected by the ceiling 3 and then reach the user's viewing area 4.
- this part of the sound is called the reflected sound S1, and the other part is transmitted directly to the user viewing area 4 in front of the display device 1; this part of the sound is called the direct sound S2. Since the axial direction of the speaker 2 is toward the front and top of the display device 1, the direct sound S2 has a small off-axis angle from the axial direction of the speaker 2, so the intensity of the direct sound S2 is strong, and the direct sound S2 arrives at the user viewing area 4 before the reflected sound S1. Due to the precedence characteristic of the Haas effect, human hearing cannot distinguish the later-arriving reflected sound S1, so the direct sound S2 weakens the sky sound image localization effect produced by the reflected sound S1, reducing the user experience.
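The Haas-effect argument can be made concrete with a rough check. The function below is a simplified illustration with assumed thresholds (a 1-30 ms precedence window and a 10 dB level margin), not a measurement from the patent: the earlier direct sound captures localization when it leads the reflection within the window and is not drastically quieter than it.

```python
SPEED_OF_SOUND = 343.0  # m/s (assumed)

def direct_sound_dominates(direct_path_m, reflected_path_m,
                           direct_level_db, reflected_level_db,
                           haas_window_s=0.030):
    """Rough precedence-effect check with assumed thresholds: the
    earlier direct sound captures localization when it leads the
    reflection by roughly 1-30 ms and is no more than ~10 dB quieter."""
    lead_s = (reflected_path_m - direct_path_m) / SPEED_OF_SOUND
    within_window = 0.001 < lead_s < haas_window_s
    loud_enough = direct_level_db > reflected_level_db - 10.0
    return within_window and loud_enough
```

Under this sketch, the forward-facing related-art speaker (strong direct sound) defeats the ceiling reflection, while the rear-facing first speaker 31 (direct sound attenuated far off-axis) does not, which is the effect the patent relies on.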
- in the present application, the sounding direction of the first speaker 31 is directed toward the upper rear of the display device 100, that is, the axial direction of the first speaker 31 is toward the upper rear of the display device 100. Since the axial direction of the first speaker 31 is toward the rear and upper part of the display device 100, most of the sound of the first speaker 31 is directed toward the wall 201. A small part of the sound of the first speaker 31 deviates greatly from the axial direction and can travel toward the front of the display device 100 to reach the user viewing area 203 directly, but this part of the sound has a large off-axis angle from the main axis of the first speaker 31 and its intensity is weak.
- therefore, the sound that directly reaches the user viewing area 203 has only a weak masking effect on the sound that reaches the user viewing area 203 after being reflected twice by the wall 201 and the ceiling 202, the sky sound image localization effect of the sound reflected by the ceiling 202 to the user viewing area 203 is good, and the user's audio-visual experience is improved.
- the sound emitted by the first speaker 31 is reflected by the wall 201 to the ceiling 202, and then reflected by the ceiling 202 and projected down to the user viewing area 203. The sound image of the sound transmitted to the user viewing area 203 is located above the ceiling 202, so the range of the sound field formed by the first speaker 31 in the height direction is not limited by the size of the display screen 20: the sound field in the height direction can cover the entire spatial height of the application environment and achieve the effect of sky sound image localization. For example, the sound image of an aircraft engine can be positioned above the display screen 20 and played through the first speaker 31, so that the picture played on the display screen 20 is consistent with the sound localization.
- the first speaker 31 is disposed near the top 101 of the display device 100, and the sound through hole 121 of the rear case is disposed corresponding to the first speaker 31. That is to say, the first speaker 31 is located below the top 101, so the sound emitted by the first speaker 31 toward the front of the display device 100 is blocked by the housing 10 of the display device 100, effectively reducing the sound transmitted directly from the first speaker 31 to the user viewing area 203 and making the sky sound image localization more accurate.
- the number of the first speakers 31 is two: one first speaker 31 is disposed close to one side portion 103 of the display device 100, and the other first speaker 31 is disposed close to the other side portion 103, to play the audio information of the left channel and the right channel respectively.
- the number of the first speakers 31 may also be one or more, and the present application does not limit the number of the first speakers 31 .
- the first speaker 31 may also be located anywhere between the top 101 and the midpoint between the top 101 and the bottom 102 of the display device 100, that is, the first speaker 31 may be located at any position in the upper half of the display device 100, as shown in FIG. 5. In another implementation scenario of other embodiments, the first speaker 31 may also be located on the top 101 of the display device 100. Specifically, the position of the first speaker 31 is also related to the distances from the display device 100 to the wall 201 and the ceiling 202, and to the angle between the sound output direction of the first speaker 31 and the horizontal direction.
- the sound output direction of the first speaker 31 is 35° ⁇ 45° (including 35° and 45°) with respect to the horizontal direction, and the horizontal direction is the direction perpendicular to the display surface of the display screen 20 .
- the display device 100 can be applied to a variety of application environments with different spatial parameters.
- the sound emitted by the first speaker 31 can be reflected by the wall 201 and the ceiling 202 in turn in different application environments, and finally reaches the viewing area 203 of the user.
- the spatial parameter is a set of different parameters, such as the distance from the display device 100 to the wall 201 , the distance from the display device 100 to the ceiling 202 , and the distance from the display device 100 to the user. That is to say, the display device 100 can be applied to a variety of application environments with different spatial parameters within a certain distance from the wall 201 , within a certain range from the ceiling 202 , and within a certain range from the user to ensure the user's audio-visual experience.
- the sound output direction of the first speaker 31 may also form an angle of 10 degrees to 80 degrees (inclusive) with the horizontal direction, or an angle outside this range, as long as it can be guaranteed that the sound emitted by the first speaker is reflected by the wall and the ceiling in turn and finally reaches the viewing area of the user.
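The constraint that the axial ray must bounce off the wall and then the ceiling before reaching the viewer can be checked with simple ray geometry. The sketch below is a hypothetical 2-D side-view calculation, not from the patent; the distances and the ear-height parameter are assumed inputs.

```python
import math

def landing_distance(theta_deg, d_wall, d_ceiling, ear_below_ceiling):
    """Trace the first speaker's axial ray: rearward-up at theta_deg
    above horizontal, one wall bounce, one ceiling bounce, then down to
    ear height. Returns the horizontal distance in front of the wall
    where the ray reaches ear height, or None if the ray reaches the
    ceiling before the wall (no wall bounce, so the assumed path fails).

    d_wall: speaker-to-wall distance; d_ceiling: speaker-to-ceiling
    distance; ear_below_ceiling: listener ear height measured down from
    the ceiling. Distances in metres, angle in degrees."""
    t = math.tan(math.radians(theta_deg))
    rise_at_wall = d_wall * t
    if rise_at_wall >= d_ceiling:
        return None                                 # hits ceiling first
    run_to_ceiling = (d_ceiling - rise_at_wall) / t  # forward run after wall bounce
    run_to_ear = ear_below_ceiling / t               # forward run after ceiling bounce
    return run_to_ceiling + run_to_ear
```

For a display 0.2 m from the wall, 1.0 m below the ceiling, and a 45-degree tilt, the ray reaches ear height (1.6 m below the ceiling) about 2.4 m in front of the wall, i.e. a typical viewing distance; a steep tilt combined with a large wall distance sends the ray to the ceiling before the wall, which is why the angle and placement must be matched to the spatial parameters.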
- the second speaker 32 is disposed closer to the bottom 102 than the first speaker 31 .
- the first sound from the first speaker 31 and the second sound from the second speaker 32 are mixed in the viewing area of the user, so that the user can receive the combined sound formed by the sounds from the speakers at different positions and improve the stereoscopic effect of the sound.
- the second speakers 32 are located at the bottom 102 of the display device 100; the number of the second speakers 32 is two, one second speaker 32 is disposed close to one side portion 103 of the display device 100, and the other second speaker 32 is disposed close to the other side portion 103 of the display device 100, to play the audio information of the left and right channels respectively.
- the number of the second speakers 32 may also be one or more, and the present application does not limit the number of the second speakers 32 .
- the second speaker may also be provided on the side of the display device. In another implementation scenario in other embodiments, the second speaker may be partially provided at the bottom of the display device and partially provided at the side of the display device. In yet another implementation scenario of other embodiments, the second speaker may also be located in the middle of the display device; alternatively, some of the second speakers are located at the top of the display device, some at the bottom, and some in the middle. In yet another implementation scenario of other embodiments, the second speaker may emit sound by vibrating the display screen, that is, a part of the display screen forms the second speaker through vibration; this achieves the stereo effect without occupying the internal space of the display device, and is also conducive to improving the screen-to-body ratio of the display device.
- the sounding direction of the second speaker 32 faces the front of the display device 100 . That is to say, the sounding direction of the second speaker is toward the viewing area 203 of the user, and the second speaker 32 can be used to play sounds such as footsteps.
- the sound emission direction of the second speaker 32 faces the user viewing area 203, which may mean that the sound opening of the second speaker 32 directly faces the user viewing area 203, or that the sound opening of the second speaker 32 does not face the user viewing area 203 but the sound is redirected toward the user viewing area 203 by a sound-guiding structure.
- the sounding direction of the second speaker may also be directed downward of the display device.
- FIG. 6 is a schematic structural diagram of the processor 50 , the first speaker 31 and the second speaker 32 of the display device 100 shown in FIG. 3 .
- the processor 50 includes an audio module, where the audio module may include functional modules such as an acquisition module, a rendering module, and a power amplifier module.
- the rendering module is coupled to the acquisition module and the power amplifier module, respectively.
- the power amplifier module includes a first power amplifier module and a second power amplifier module. The first power amplifier module is coupled to the first speaker 31 , and the second power amplifier module is coupled to the second speaker 32 .
- the processor 50 can adjust the position of the sound image formed by the first sound emitted by the first speaker 31 and the second sound corresponding to the first sound emitted by the second speaker 32 .
- specifically, the processor 50 adjusts the position of the sound image formed by the first sound and the second sound by the following method:
- FIG. 7 is a schematic diagram of the audio output processing process of the structure shown in FIG. 6 .
- the processor 50 adjusts the position of the sound image formed by the first sound emitted by the first speaker 31 and the second sound emitted by the second speaker 32 by controlling the ratio of the volume of the first sound to the volume of the second sound. That is, when the volume ratio of the first sound to the second sound changes, the position of the sound image formed by the first sound and the second sound changes.
- the acquiring module acquires picture information and audio information of the video content.
- the video content may be film and television content, a game, real-time video, and the like.
- the real-time video may be, for example, a video call, a live video broadcast, or a video conference.
- the first audio information and the second audio information are extracted from the audio information, wherein the first audio information and the second audio information correspond to the first speaker 31 and the second speaker 32 respectively, and the first audio information and the second audio information may correspond to the same sound content; for example, the sound content of the first audio information and the second audio information may both correspond to "hello" said by the same person.
- the rendering module performs gain adjustment on the volumes of the first audio information and the second audio information. Specifically, the rendering module determines the sound image position of the audio information according to the picture information, and adjusts the volume ratio of the first audio information and the second audio information according to that sound image position.
- the first audio information is sent to the first power amplifier module of the power amplifier module and, after power amplification by the first power amplifier module, transmitted to the first speaker 31; the second audio information is sent to the second power amplifier module of the power amplifier module and, after power amplification by the second power amplifier module, transmitted to the second speaker 32.
- for example, when the sound image position is to be located at the lower part of the display screen, the volume of the first audio information can be lower than the volume of the second audio information. The first audio signal and the second audio signal are power-amplified by the first power amplifier module and the second power amplifier module respectively and sent to the first speaker and the second speaker respectively, so that the volume of "hello" emitted by the first speaker is lower than the volume of "hello" emitted by the second speaker.
- conversely, when the sound image position is to be located at the upper part of the display screen, the volume of the first audio information can be greater than the volume of the second audio information. After the first audio signal and the second audio signal are power-amplified by the first power amplifier module and the second power amplifier module respectively, they are sent to the first speaker and the second speaker respectively, so that the volume of "hello" emitted by the first speaker is greater than the volume of "hello" emitted by the second speaker.
- the height direction is the direction from the top 101 to the bottom 102 of the display device 100 .
- the sound image position is the position of the sound image formed by the first sound and the second sound.
- the position of the mirror sound source B of the first speaker 31 is the first position
- the position of the second speaker 32 is the second position.
- the processor 50 adjusts the ratio of the volume of the sound emitted by the first speaker 31 to the volume of the sound emitted by the second speaker 32, that is, the ratio of the volume of the first sound to that of the second sound, so that the sound image position of the first sound and the second sound is adjustable between the first position and the second position. For example, when the first speaker 31 does not emit sound and the second speaker 32 emits sound, the sound image position is at the second position; when the volumes of the first speaker 31 and the second speaker 32 are the same, the sound image position is near the middle between the first position and the second position.
- for example, when the picture shows a bird flying from the bottom of the display screen to the top, the sound image position of the flight audio corresponding to the bird also moves from the bottom to the top. At first, the first speaker does not emit sound and the second speaker emits sound; then the first sound emitted by the first speaker gradually increases while the second sound emitted by the second speaker gradually decreases, so that the sound image position formed by the first sound and the second sound is consistent with the flight trajectory of the bird.
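The volume-ratio adjustment described above can be sketched as simple amplitude panning between the second speaker's position and the mirror source of the first speaker. The linear panning law and the function name `pan_gains` are illustrative assumptions, not an implementation specified by the patent.

```python
def pan_gains(position):
    """Map a target sound image position to speaker volumes.

    position: 0.0 places the image at the second position (the second
    speaker), 1.0 at the first position (the mirror sound source of the
    first speaker). A linear panning law is assumed for illustration.
    """
    position = max(0.0, min(1.0, position))
    first_volume = position         # louder first speaker moves the image up
    second_volume = 1.0 - position  # louder second speaker moves it down
    return first_volume, second_volume

# Bird flying from the bottom of the screen to the top: the first sound
# gradually increases while the second sound gradually decreases.
trajectory = [pan_gains(step / 4) for step in range(5)]
```

With `position` 0.5 both speakers play at equal volume, matching the description that equal volumes place the sound image near the middle between the two positions.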
- the acquisition module acquires picture information and audio information of the video content.
- the first audio information and the second audio information are extracted from the audio information.
- the audio information includes first information and second information, the first information is left channel audio information, and the second information is right channel audio information.
- the first sub-information and the second sub-information are extracted from the first information, and the third sub-information and the fourth sub-information are extracted from the second information, wherein the first sub-information and the third sub-information form the first audio information, and the second sub-information and the fourth sub-information form the second audio information.
- the rendering module performs gain adjustment on the volumes of the first sub-information, the second sub-information, the third sub-information and the fourth sub-information. Specifically, the rendering module determines the sound image position of the audio information according to the picture information, and adjusts the volume ratio of the first sub-information, the second sub-information, the third sub-information and the fourth sub-information according to that sound image position.
- the first power amplifier module includes a first power amplifier and a second power amplifier
- the second power amplifier module includes a third power amplifier and a fourth power amplifier.
- the two first speakers 31 are the first speaker L (the first speaker on the left) and the first speaker R (the first speaker on the right), and the two second speakers 32 are the second speaker L (the second speaker on the left) and the second speaker R (the second speaker on the right).
- the first sub-information rendered by the rendering module is sent to the first power amplifier, and then transmitted to the first speaker L after being amplified by the first power amplifier.
- the third sub-information rendered by the rendering module is sent to the second power amplifier, and then transmitted to the first speaker R after being amplified by the second power amplifier.
- the second sub-information rendered by the rendering module is sent to the third power amplifier, and then transmitted to the second speaker L after being amplified by the third power amplifier.
- the fourth sub-information rendered by the rendering module is sent to the fourth power amplifier, and then transmitted to the second speaker R after being amplified by the fourth power amplifier.
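The routing just described can be sketched as deriving four speaker feeds from a stereo input. Modeling each sub-information stream as a gain-scaled copy of its channel is an illustrative assumption; the real extraction may be more elaborate.

```python
def split_channels(left, right, first_gain, second_gain):
    """Derive the four per-speaker feeds from stereo samples.

    The first and third sub-information (first speaker L/R) and the
    second and fourth sub-information (second speaker L/R) are modeled
    as gain-scaled copies of the corresponding channel.
    """
    return {
        "first_speaker_L": [s * first_gain for s in left],     # first sub-information
        "first_speaker_R": [s * first_gain for s in right],    # third sub-information
        "second_speaker_L": [s * second_gain for s in left],   # second sub-information
        "second_speaker_R": [s * second_gain for s in right],  # fourth sub-information
    }
```

Raising `first_gain` relative to `second_gain` lifts the sound image, as in the height-direction adjustment described above.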
- FIG. 9 is a schematic diagram of another audio output processing process of the structure shown in FIG. 6;
- the processor is further configured to control the sounding moments of the first speaker and the second speaker; that is to say, the first speaker and the second speaker do not emit sound synchronously.
- the acquiring module acquires picture information and audio information of the video content.
- the video content may be film and television content, a game, real-time video, and the like.
- the real-time video may be, for example, a video call, a live video broadcast, or a video conference.
- the first audio information and the second audio information are extracted from the audio information, wherein the first audio information and the second audio information correspond to the first speaker and the second speaker respectively, and the first audio information and the second audio information may correspond to the same sound content; for example, the sound content of the first audio information and the second audio information may both correspond to "hello" said by the same person.
- the rendering module performs gain adjustment on the volumes of the first audio information and the second audio information. Specifically, the rendering module determines the sound image position of the audio information according to the picture information, and adjusts the volume ratio of the first audio information and the second audio information according to that sound image position. At the same time, the rendering module also controls the sending delay for sending the first audio information and the second audio information to the next module. Then, the first audio information is sent to the first power amplifier module of the power amplifier module and, after power amplification by the first power amplifier module, transmitted to the first speaker, and the first speaker emits a first sound at the first moment T1. The second audio information is sent to the second power amplifier module of the power amplifier module and, after power amplification by the second power amplifier module, transmitted to the second speaker, and the second speaker emits a second sound at the second moment T2.
- 10A represents the waveform of the sound emitted by the first speaker after receiving the first audio information at the first moment T1, and 10B represents the waveform of the sound emitted by the second speaker after receiving the second audio information at the second moment T2.
- the waveform relationship of the first audio information and the second audio information is similar to that of 10A and 10B, for example, the waveforms of the first audio information and the second audio information may also have a time difference.
- for example, the volume of the first audio information can be lower than the volume of the second audio information. The first audio signal is sent to the first power amplifier module at the first moment, amplified by the first power amplifier module and sent to the first speaker, so that the first speaker emits "hello" at the first moment (the waveform 10A can represent the "hello" emitted by the first speaker), and the second audio signal is sent to the second power amplifier module at the second moment, amplified by the second power amplifier module and sent to the second speaker, so that the second speaker emits "hello" at the second moment.
- in other embodiments, the volume of the first audio information can be greater than the volume of the second audio information. The first audio signal is sent to the first power amplifier module at the first moment, amplified by the first power amplifier module and then sent to the first speaker, so that the first speaker emits "hello" at the first moment; the second audio signal is sent to the second power amplifier module at the second moment, amplified by the second power amplifier module and then sent to the second speaker, so that the second speaker emits "hello" at the second moment.
- the volume of "hello" emitted by the first speaker is greater than the volume of "hello" emitted by the second speaker, and the "hello" emitted by the first speaker and the "hello" emitted by the second speaker reach the user at the same time or almost at the same time. When the user hears the two sound components, the "hello" from the first speaker and the "hello" from the second speaker, the user will feel that the sound image position of "hello" comes from the upper part of the display screen. In this way, the adjustment of the sound image position in the height direction is realized, so that the sound image position is synchronized with the picture.
- the first speaker emits sound at the first moment T1 and the second speaker emits sound at the second moment T2, so that the user receives the first sound and the second sound at the same time or almost at the same time at the third moment, and the sound image position can be positioned more accurately. The third moment may refer to a specific moment or to a small range of moments. That is to say, the user may receive the first sound and the second sound at exactly the same time at the third moment, or may receive the second sound a certain time interval after receiving the first sound; as long as the user cannot perceive this time gap, it will not cause a deviation in the user's localization of the sound image position.
- the time difference between the first moment and the second moment may be determined as ΔT = (L1 - L2)/V, where L1 is the length of the transmission path of the sound from the first speaker to the user viewing area, L2 is the length of the transmission path of the sound from the second speaker to the user viewing area, and V is the speed of sound in air (about 340 m/s).
- the value of the time difference ΔT may be 1 ms to 50 ms, such as 2 ms, 5 ms, or 10 ms, which allows the stereo image to be adjusted accurately.
- the time difference ΔT may also change to enhance the position information of the sound image. For example, when the sound image position of a person's voice "hello" moves from the bottom of the display screen to the top of the display screen, the time difference is ΔT1 while the sound image position is at the bottom of the display screen and ΔT2 while the sound image position is at the top of the display screen, where ΔT2 is smaller than ΔT1. That is to say, as the sound image position moves from the bottom of the display screen to the top, the "hello" emitted by the first speaker reaches the user earlier and earlier relative to the "hello" emitted by the second speaker, so that the movement of the sound image position can be clearly felt.
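The relationship above between path lengths, the speed of sound, and the emission time difference can be sketched as follows; the function name is an illustrative assumption.

```python
SPEED_OF_SOUND = 340.0  # m/s, the speed of sound in air used in the description

def emission_time_difference(first_path_m, second_path_m):
    """Time difference between the first moment T1 and the second moment T2
    so that both sounds reach the user viewing area together: the sound
    with the longer path must be emitted earlier by the difference in
    propagation time.
    """
    return (first_path_m - second_path_m) / SPEED_OF_SOUND
```

For example, a 5.1 m reflected path for the first speaker and a 3.4 m direct path for the second speaker give ΔT = 5 ms, inside the 1 ms to 50 ms range given above.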
- the acquisition module acquires picture information and audio information of the video content.
- the first audio information and the second audio information are extracted from the audio information.
- the audio information includes first information and second information, the first information is left channel audio information, and the second information is right channel audio information.
- the first sub-information and the second sub-information are extracted from the first information, and the third sub-information and the fourth sub-information are extracted from the second information, wherein the first sub-information and the third sub-information form the first audio information, and the second sub-information and the fourth sub-information form the second audio information.
- the rendering module performs gain adjustment on the volumes of the first sub-information, the second sub-information, the third sub-information and the fourth sub-information. Specifically, the rendering module determines the sound image position of the audio information according to the picture information, and adjusts the volume ratio of the first sub-information, the second sub-information, the third sub-information and the fourth sub-information according to that sound image position. At the same time, the rendering module also controls the sending delays of the first sub-information, the second sub-information, the third sub-information and the fourth sub-information to the next module.
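The two controls the rendering module applies to each stream, a volume gain and a sending delay, can be sketched as below; modeling the delay as leading silent samples is an illustrative simplification.

```python
def render_stream(samples, gain, delay_samples):
    """Apply a volume gain and a sending delay to one sub-information
    stream; the delay is modeled by prepending silent samples.
    """
    return [0.0] * delay_samples + [s * gain for s in samples]
```

Sending the first sub-information with no delay and the second sub-information with a small delay reproduces the offset between the first moment T1 and the second moment T2.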
- the first power amplifier module includes a first power amplifier and a second power amplifier
- the second power amplifier module includes a third power amplifier and a fourth power amplifier.
- the two first speakers are the first speaker L (the first speaker on the left) and the first speaker R (the first speaker on the right), and the two second speakers are the second speaker L (the second speaker on the left) and the second speaker R (the second speaker on the right).
- the first sub-information rendered by the rendering module is sent to the first power amplifier and, after being amplified by the first power amplifier, transmitted to the first speaker L; the third sub-information rendered by the rendering module is sent to the second power amplifier and, after being amplified by the second power amplifier, transmitted to the first speaker R. The first speaker L and the first speaker R emit sound at the first moment T1.
- the second sub-information rendered by the rendering module is sent to the third power amplifier and, after being amplified by the third power amplifier, transmitted to the second speaker L; the fourth sub-information rendered by the rendering module is sent to the fourth power amplifier and, after being amplified by the fourth power amplifier, transmitted to the second speaker R. The second speaker L and the second speaker R emit sound at the second moment T2. There is a time difference between the first moment and the second moment.
- this embodiment adjusts the sound image position by adjusting the volume ratio of the sounds emitted from the first speaker L, the first speaker R, the second speaker L and the second speaker R, so that the sound image position is synchronized with the picture.
- the first speaker L and the first speaker R emit sound at the first moment T1, and the second speaker L and the second speaker R emit sound at the second moment T2, so that at the third moment the user receives the sounds of the first speakers and the second speakers at the same time or almost at the same time. The sound image position is thus positioned more accurately and there is no deviation between the picture and the sound image position, so that the sound image position and the picture can be accurately synchronized; the stereo effect is good and the user experience is improved.
- the processor may also control only the sounding moments of the first speaker and the second speaker; that is, the first speaker and the second speaker do not emit sound synchronously, so that the first sound emitted by the first speaker and the second sound emitted by the second speaker reach the user viewing area at the same time and are received by the user at the same time. The stereo effect is good and the user experience is improved.
- FIG. 12 is a schematic structural diagram of another embodiment of the structure shown in FIG. 6 .
- FIG. 13 is a schematic diagram of the audio output processing process of the structure shown in FIG. 12 .
- the processor 50 includes an audio module, where the audio module may include functional modules such as an acquisition module, a rendering module, a sound mixing module, and a power amplifier module.
- the acquisition module, the rendering module, the sound mixing module and the power amplifier module are coupled in sequence, and the power amplifier module is coupled to both the first speaker 31 and the second speaker 32 .
- the audio information in the video content is processed by the processor 50 using an upmixing algorithm.
- the acquiring module acquires picture information and audio information of the video content.
- the video content may be film and television content, a game, real-time video, and the like.
- the real-time video may be, for example, a video call, a live video broadcast, or a video conference.
- the first audio information and the second audio information are extracted from the audio information.
- the audio information includes first information and second information, the first information is left channel audio information, and the second information is right channel audio information.
- the first information and the second information are processed by an upmixing algorithm with height-content signal extraction: the first information generates a left height channel signal and a left main channel signal, and the second information generates a right height channel signal and a right main channel signal. The left height channel signal is then divided into a first signal and a second signal, the right height channel signal into a third signal and a fourth signal, the left main channel signal into a fifth signal and a sixth signal, and the right main channel signal into a seventh signal and an eighth signal.
- the first signal, the third signal, the fifth signal and the seventh signal form the first audio information, and the second signal, the fourth signal, the sixth signal and the eighth signal form the second audio information.
- the rendering module performs gain adjustment on the volumes of the first signal, the second signal, the third signal, the fourth signal, the fifth signal, the sixth signal, the seventh signal and the eighth signal. Specifically, the rendering module determines the sound image position of the audio information according to the picture information, and adjusts the volume ratios of the eight signals according to that sound image position. At the same time, the rendering module also controls the sending delays of the first signal, the second signal, the third signal, the fourth signal, the fifth signal, the sixth signal, the seventh signal and the eighth signal to the next module.
- the sound mixing module includes a first module, a second module, a third module and a fourth module.
- the first module mixes the first signal and the fifth signal rendered by the rendering module to obtain a first mix; the second module mixes the third signal and the seventh signal rendered by the rendering module to obtain a second mix; the third module mixes the second signal and the sixth signal rendered by the rendering module to obtain a third mix; and the fourth module mixes the fourth signal and the eighth signal rendered by the rendering module to obtain a fourth mix.
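The upmix and the four mixing modules can be sketched end to end. Real height-content extraction is signal-dependent; splitting each channel into height and main parts by a fixed ratio is an illustrative assumption.

```python
def upmix_and_mix(left, right, height_ratio=0.5):
    """Sketch of the height-extraction upmix and the four mixing modules."""
    # Upmix: each channel yields a height signal and a main signal.
    left_height = [x * height_ratio for x in left]
    left_main = [x * (1.0 - height_ratio) for x in left]
    right_height = [x * height_ratio for x in right]
    right_main = [x * (1.0 - height_ratio) for x in right]

    def halve(signal):
        # Each channel signal is divided into two equal signals.
        return [x / 2 for x in signal], [x / 2 for x in signal]

    s1, s2 = halve(left_height)   # first and second signals
    s3, s4 = halve(right_height)  # third and fourth signals
    s5, s6 = halve(left_main)     # fifth and sixth signals
    s7, s8 = halve(right_main)    # seventh and eighth signals

    def mix(a, b):
        return [x + y for x, y in zip(a, b)]

    return {
        "first_mix": mix(s1, s5),   # -> first speaker L
        "second_mix": mix(s3, s7),  # -> first speaker R
        "third_mix": mix(s2, s6),   # -> second speaker L
        "fourth_mix": mix(s4, s8),  # -> second speaker R
    }
```

Raising `height_ratio` shifts more energy into the height channels and hence toward the first (up-firing) speakers, lifting the sound image.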
- the power amplifier module includes a first power amplifier module and a second power amplifier module, the first power amplifier module includes a first power amplifier and a second power amplifier, and the second power amplifier module includes a third power amplifier and a fourth power amplifier.
- the two first speakers 31 are the first speaker L (the first speaker on the left) and the first speaker R (the first speaker on the right), and the two second speakers 32 are the second speaker L (the second speaker on the left) and the second speaker R (the second speaker on the right).
- the first mix is sent to the first power amplifier and, after being amplified by the first power amplifier, transmitted to the first speaker L; the second mix is sent to the second power amplifier and, after being amplified by the second power amplifier, transmitted to the first speaker R. The first speaker L and the first speaker R emit sound at the first moment.
- the third mix is sent to the third power amplifier and, after being amplified by the third power amplifier, transmitted to the second speaker L; the fourth mix is sent to the fourth power amplifier and, after being amplified by the fourth power amplifier, transmitted to the second speaker R. The second speaker L and the second speaker R emit sound at the second moment. There is a time difference between the first moment and the second moment.
- in this way, the content of the left height channel signal, the right height channel signal, the left main channel signal and the right main channel signal can effectively realize sound image localization at specific positions in the height direction, so that the localization of various sounds in the height direction can be adjusted as needed and fused with the picture. For example, the sound image of an aircraft engine can be positioned at the upper part of the display screen, the sound image of a character's dialogue in the middle of the display screen, and the sound image of footsteps at the bottom of the screen.
- FIG. 14 is a schematic structural diagram of another embodiment of the display device 100 shown in FIG. 1 .
- the display device 100 in this embodiment further includes a distance detector 70 .
- the distance detector 70 is arranged inside the casing 10 .
- the distance detector 70 can also be provided outside the housing 10 .
- the distance detector 70 is used to detect the spatial parameters of the application environment where the display device 100 is located; the spatial parameters include various parameters, such as a first distance between the display device 100 and a wall, a second distance between the display device 100 and the ceiling, and a third distance between the display device 100 and the user.
- the sound emission direction of the first speaker 31 can be adjusted. Specifically, the sound emission direction of the first speaker 31 can be adjusted according to a spatial parameter.
- the display device 100 may include a drive assembly, the first speaker 31 is disposed on or cooperates with the drive assembly, the drive assembly is coupled with the processor, and the processor is used to drive the drive assembly to adjust the sounding direction of the first speaker 31 according to the spatial parameters obtained by the distance detector 70. That is to say, when the spatial parameters of the application environment in which the display device is located change, the sounding direction of the first speaker 31 can be changed, so that the display device 100 can adapt to different application environments and the sounding direction of the first speaker can be adjusted according to the application environment.
- the display device 100 can also adjust the position of the viewing area of the user according to the position of the user, so that the user can have a good audio-visual experience no matter where the user moves.
- the space parameter also includes the distance between the display device 100 and other obstacles.
- the spatial parameter includes at least one parameter among the first distance, the second distance, and the third distance.
- the sounding direction of the first speaker 31 may also be adjusted manually.
- the display device 100 in this embodiment can adjust the sounding direction of the first speaker 31 according to the application environment of the display device 100. When the display device 100 is used for the first time, or the display device 100 is moved to a new application environment, the display device 100 detects the spatial parameters of the application environment through the distance detector 70. The display device 100 then adjusts the sounding direction of the first speaker 31 according to the spatial parameters, so that the display device 100 can ensure that the first sound emitted by the first speaker 31 is accurately transmitted to the user viewing area after being reflected in different application environments, thereby improving the user's audio-visual experience.
- the display device 100 may further include a perception sensor, that is, a sensor that senses that the display device 100 is moved or that the position of the display device 100 changes.
- the perception sensor may be a gyroscope, an accelerometer, or the like.
- when the perception sensor senses that the position of the display device 100 changes, the distance detector 70 can be triggered to detect the spatial parameters of the application environment, so that the first speaker 31 adjusts its sounding direction according to the spatial parameters. In this way, whenever the position of the display device 100 changes, the distance detector 70 acquires the spatial parameters of the application environment and the first speaker 31 adjusts its sounding direction accordingly, ensuring the user's audio-visual experience at all times.
- the perception sensor can also record that the display device 100 was moved while the display device 100 was powered off. When the display device 100 is powered on again, the perception sensor triggers the distance detector 70 to detect the spatial parameters of the application environment so as to adjust the sounding direction of the first speaker 31 according to the spatial parameters. Even if the display device 100 is moved after the power is turned off, the sounding direction of the first speaker 31 will still be adjusted according to the application environment when the display device 100 is activated again, ensuring the user's audio-visual experience.
- the distance detector 70 includes a radar, and the radar can transmit and receive ultrasonic waves, and the spatial parameters of the application environment measured by ultrasonic waves are relatively more accurate than data obtained in other ways.
- the distance detector 70 may also include a microphone; that is, the display device 100 emits a sound, the sound is reflected back to the display device 100 by an obstacle and received by the microphone, and the distance between the display device 100 and the obstacle is obtained by calculating the time difference between emitting the sound and receiving it.
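The microphone-based ranging just described reduces to a simple round-trip calculation: the sound travels to the obstacle and back, so the one-way distance is half the round trip. The function name is an illustrative assumption.

```python
SPEED_OF_SOUND = 340.0  # m/s, speed of sound in air

def echo_distance(round_trip_seconds):
    """Distance to an obstacle from the time between emitting a sound
    and receiving its reflection.
    """
    return SPEED_OF_SOUND * round_trip_seconds / 2.0
```

For example, an echo received 20 ms after emission places the obstacle 3.4 m away.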
- the distance detector 70 includes a camera, and the distance between the display device 100 and the obstacle is recognized by analyzing images captured by the camera.
- the obstacles may be walls, ceilings, users, and the like.
- the distance detector 70 may further include at least two of radar, microphone and camera, and use different ranging methods for different obstacles to obtain more accurate spatial parameters.
- the distance detector 70 is coupled to the processor 50, and the distance detector 70 sends an instruction to the processor 50.
- the instruction may be a pulse signal or an analog signal including spatial parameter information.
- specifically, the processor 50 adjusts the time difference between the first moment and the second moment according to the information carried in the instruction, such as the first distance, the second distance and the third distance. That is to say, when the spatial parameters of the application environment in which the display device 100 is located change, the time difference between the first moment and the second moment varies accordingly, ensuring that the sounds emitted by the first speaker 31 and the second speaker 32 reach the user viewing area at the same time, so that the sound image localization is more accurate.
- the distance detector 70 can also detect the path of the sound emitted by the first speaker 31 to the user area and the path of the sound emitted by the second speaker 32 to the user area, and determine the time difference between the first moment and the second moment according to the difference between the two paths.
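One way to obtain the two path lengths is a mirror (image) source construction for the ceiling bounce: reflecting the first speaker across the ceiling turns the bounced path into a straight line. The geometry below assumes the user sits at speaker height and that the second speaker's path is direct; both are simplifying assumptions, not the patent's stated model.

```python
import math

SPEED_OF_SOUND = 340.0  # m/s

def reflected_path_length(horizontal_m, ceiling_height_m):
    """Length of the first speaker's ceiling-bounced path to the user:
    the mirror source sits 2 * ceiling_height_m above the listening
    line, so the bounced path is the hypotenuse.
    """
    return math.hypot(horizontal_m, 2.0 * ceiling_height_m)

def moment_difference(horizontal_m, ceiling_height_m):
    """Time difference between the first and second sounding moments so
    that both sounds arrive at the user viewing area together.
    """
    direct = horizontal_m  # second speaker's path, assumed direct
    return (reflected_path_length(horizontal_m, ceiling_height_m) - direct) / SPEED_OF_SOUND
```

With the user 3 m away and the ceiling 2 m above the speaker, the bounced path is 5 m, so the first speaker must sound about 5.9 ms earlier than the second.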
- the display device may further include a user input entry, and the user input entry may be an application software in a mobile phone that interacts with the display device, or a setting window of the display device.
- the user fills in spatial parameters such as the first distance, the second distance, or the third distance through the user input entry, and the first speaker adjusts the sounding direction of the first speaker according to the data filled in by the user. This method is less expensive than obtaining spatial parameters through distance detectors.
- FIG. 15 is a schematic flowchart of an audio output method of the display device 100 provided in this embodiment.
- the audio output method is applied to the display device 100 shown in FIG. 1 .
- the audio output method includes the following steps S110-S130.
- the picture information and audio information of the video content are obtained through the obtaining module.
- the video content may be film and television content, a game, real-time video, and the like.
- the real-time video may be, for example, a video call, a live video broadcast, or a video conference.
- the audio information includes first information and second information, the first information is left channel audio information, and the second information is right channel audio information.
- S120: Extract the first audio information and the second audio information from the audio information.
- The acquisition module extracts the first sub-information and the second sub-information from the first information, and extracts the third sub-information and the fourth sub-information from the second information. The first sub-information and the third sub-information form the first audio information, and the second sub-information and the fourth sub-information form the second audio information. The first audio information and the second audio information correspond to the first speaker and the second speaker, respectively, and may correspond to the same sound content; for example, the sound content of both may correspond to "hello" said by the same person.
- The first audio information and the second audio information are first processed to adjust the position of the sound image formed by the first sound emitted by the first speaker 31 and the second sound emitted by the second speaker 32.
- The processing of the first audio information and the second audio information includes adjusting the volume ratio of the first audio information and the second audio information.
- The rendering module performs gain adjustment on the volumes of the first sub-information, the second sub-information, the third sub-information and the fourth sub-information. Specifically, the rendering module determines the sound image position of the audio information according to the picture information, and adjusts the volume ratio of the first sub-information, the second sub-information, the third sub-information and the fourth sub-information according to that sound image position.
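As a concrete illustration of volume-ratio adjustment, constant-power panning is one standard way to map a target sound image position to a pair of speaker gains. This sketch is illustrative only and is not the patent's rendering module; the function name and the 0-to-1 position convention are assumptions:

```python
import math

def pan_gains(position: float) -> tuple:
    """Constant-power pan: position 0.0 is fully on one speaker,
    1.0 fully on the other; the gains satisfy g1**2 + g2**2 == 1."""
    angle = position * math.pi / 2
    return (math.cos(angle), math.sin(angle))

# A centred sound image gets equal gains on both speakers; the same
# idea applies vertically between the first (height) speakers and the
# second (main) speakers to move the image up or down on the screen.
g_one, g_other = pan_gains(0.5)
```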
- The first sub-information rendered by the rendering module is sent to the first power amplifier, and then transmitted to the first speaker L (the first speaker on the left side) after being amplified by the first power amplifier.
- the third sub-information rendered by the rendering module is sent to the second power amplifier, and then transmitted to the first speaker R (the first speaker on the right side) after being amplified by the second power amplifier.
- the second sub-information rendered by the rendering module is sent to the third power amplifier, and then transmitted to the second speaker L (the second speaker on the left) after being amplified by the third power amplifier.
- the fourth sub-information rendered by the rendering module is sent to the fourth power amplifier, and then transmitted to the second speaker R (the second speaker on the right side) after being amplified by the fourth power amplifier.
- This audio output method adjusts the ratio of the sounds emitted from the first speaker L, the first speaker R, the second speaker L, and the second speaker R, thereby adjusting the sound image position in three-dimensional space so that it is synchronized with the picture, and realizing a true three-dimensional sound field effect.
- In the present application, the sounding direction of the first speaker faces the upper rear of the display device, so that the sound reflected from the ceiling to the user's viewing area has a good sky sound image localization effect and a good stereo effect, which improves the user's audio-visual experience.
- Processing the first audio information and the second audio information may further include controlling the first speaker 31 to emit sound at a first moment and controlling the second speaker 32 to emit sound at a second moment, so that the user simultaneously receives the sound of the first speaker 31 and the sound of the second speaker 32 at a third moment. There is a time difference between the first moment and the second moment, and the first moment is earlier than the second moment; that is, the first speaker receives the first audio information, the second speaker receives the second audio information corresponding to the first audio information, and the playing times of the first audio information and the second audio information are not synchronized. Specifically, as shown in FIG., while the rendering module adjusts the volume ratio of the first sub-information, the second sub-information, the third sub-information and the fourth sub-information, it also controls the sending delay of the first sub-information, the second sub-information, the third sub-information and the fourth sub-information to the next module.
- The first sub-information rendered by the rendering module is sent to the first power amplifier, and after being amplified by the first power amplifier, is transmitted to the first speaker L; the third sub-information rendered by the rendering module is sent to the second power amplifier, and after being amplified by the second power amplifier, is transmitted to the first speaker R. The first speaker L and the first speaker R emit sound at the first moment. The second sub-information rendered by the rendering module is sent to the third power amplifier, and after being amplified by the third power amplifier, is transmitted to the second speaker L; the fourth sub-information rendered by the rendering module is sent to the fourth power amplifier, and after being amplified by the fourth power amplifier, is transmitted to the second speaker R. The second speaker L and the second speaker R emit sound at the second moment. There is a time difference between the first moment and the second moment.
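The per-speaker sending delay described above can be sketched as prepending silence to one speaker's feed; this is an illustrative stand-in, not the patent's rendering module, and the function name, list-based signal representation and 48 kHz sample rate are assumptions:

```python
def delay_feed(signal: list, delay_s: float, sample_rate: int = 48000) -> list:
    """Delay one speaker feed by prepending silence (whole samples only),
    so that the speaker with the shorter sound path starts later."""
    n = int(round(delay_s * sample_rate))
    return [0.0] * n + list(signal)

# The second speakers fire later than the first speakers: delay their
# feed by the required time difference (here 3 samples at 48 kHz).
second_feed = delay_feed([1.0, 1.0, 1.0], 3 / 48000)
```

A fractional-sample delay (interpolation) would be needed for finer control; integer samples keep the sketch minimal.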
- The sound image position can be adjusted by adjusting the ratio of the sounds emitted from the first speaker L, the first speaker R, the second speaker L, and the second speaker R, so that the sound image position is synchronized with the picture.
- The first speaker L and the first speaker R emit sound at the first moment, and the second speaker L and the second speaker R emit sound at the second moment; that is, after receiving the first audio information, the first speaker emits the first sound at the first moment, and after receiving the second audio information, the second speaker emits the second sound corresponding to the first sound at the second moment, so that the user simultaneously receives the sounds of the first speaker L, the first speaker R, the second speaker L and the second speaker R at the third moment. In this way, the sound image position is located more accurately and does not deviate from the picture, the sound image position and the picture are accurately synchronized, the stereo effect is good, and the user experience is improved.
- The audio output method may control only the first speaker to emit sound at the first moment and the second speaker to emit sound at the second moment, so that the user simultaneously receives the sounds emitted by the first speaker and the second speaker at the third moment; the stereo effect is good, and the user experience is improved.
- The audio output method may further apply an upmixing algorithm for height content signal extraction to the first information and the second information, generating a left height channel signal, a right height channel signal, a left main channel signal and a right main channel signal. The left height channel signal is then processed with a specific gain, time delay, mixing and power amplification, and transmitted to the first speaker L and the second speaker L respectively, so that the content of the left height channel signal achieves sound image localization at a specific position in the height direction.
- the right height channel signal, the left main channel signal and the right main channel signal can achieve sound image localization at a specific position in the height direction in the same way, and will not be repeated here.
- In this way, the contents of the left height channel signal, the right height channel signal, the left main channel signal and the right main channel signal can effectively achieve sound image localization at specific positions in the height direction, so that the sound image localization of various sounds in the height direction can be adjusted as needed and integrated with the picture.
- For example, the sound image of an aircraft engine can be positioned at the top of the screen, the sound image of a character's dialogue at the middle of the screen, and the sound image of footsteps at the bottom of the screen.
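A real height-content upmixer is considerably more sophisticated, but the routing idea can be illustrated with a toy mid/side split: correlated (mid) content feeds the main channels and decorrelated (side) content feeds the height channels. This is a stand-in sketch under that assumption, not the algorithm in the patent:

```python
def naive_height_upmix(left: list, right: list):
    """Toy upmix: mid (correlated) content goes to the left/right main
    channels, side (decorrelated) content to the left/right height
    channels. Summing main + height per side reconstructs the input."""
    mid = [0.5 * (l + r) for l, r in zip(left, right)]
    side = [0.5 * (l - r) for l, r in zip(left, right)]
    left_height = side
    right_height = [-s for s in side]
    left_main = mid
    right_main = list(mid)
    return left_height, right_height, left_main, right_main
```

Each of the four resulting signals would then receive its own gain and delay before power amplification, as the surrounding text describes.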
- The audio output method may also acquire only the first audio information from the audio information, and output the first audio information through the first speaker.
- the audio output method may also be applied to the display device 100 shown in FIG. 14 .
- the audio output method further includes detecting the application environment of the display device, acquiring spatial parameters of the application environment, and in response to changes in the spatial parameters, the sounding direction of the first speaker 31 changes.
- the distance detector 70 detects the spatial parameters of the application environment where the display device 100 is located.
- The spatial parameters include various parameters, such as the first distance between the display device 100 and the wall, the second distance between the display device 100 and the ceiling, and the third distance between the display device 100 and the user.
- the processor adjusts the sounding direction of the first speaker 31 in response to two or more of the spatial parameters.
- Specifically, the processor can adjust the sounding direction of the first speaker 31 by controlling the driving component, so that the display device 100 can be suitable for various application environments.
- the audio output method in this embodiment may adjust the sounding direction of the first speaker 31 according to the application environment of the display device 100 .
- When the display device 100 is used for the first time, or is moved to a new application environment, the display device 100 detects the spatial parameters of the application environment through the distance detector 70, and then adjusts the sounding direction of the first speaker 31 according to the spatial parameters. In this way, the display device 100 can ensure in different application environments that the first sound emitted by the first speaker 31 is reflected to the correct user viewing area, thereby improving the user's audio-visual experience.
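As a sketch of how the spatial parameters could drive the sounding direction, the image-source method gives the tilt for a wall-then-ceiling two-bounce path. This is a simplified 2-D illustration under assumed geometry (speaker and listener treated as points at the same height); the function name and coordinate conventions are not from the patent:

```python
import math

def speaker_tilt_deg(d_wall: float, d_ceiling: float, d_user: float) -> float:
    """Aim the upward-rear speaker so its sound, after reflecting off the
    wall behind and then the ceiling, lands in the viewing area.
    Image-source method: mirror the listener across the ceiling, then
    across the rear wall, and point the speaker at that virtual image.
    Returns the elevation angle above the horizontal, in degrees."""
    dx = 2 * d_wall + d_user   # horizontal distance to the virtual image
    dy = 2 * d_ceiling         # vertical distance to the virtual image
    return math.degrees(math.atan2(dy, dx))

# 0.1 m to the wall, 1.0 m to the ceiling, 3.0 m to the viewer:
angle = speaker_tilt_deg(0.1, 1.0, 3.0)  # about 32 degrees
```

Under these sample distances the result falls within the 10 to 80 degree range the claims describe for the sound output direction.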
- the spatial parameters of the application environment of the electronic device may also be manually input by the user.
- a user may enter through a user input portal of the display device.
- the user input entry may be application software in the mobile phone that interacts with the display device, or may be a setting window of the display device.
- The user fills in spatial parameters such as the first distance, the second distance or the third distance through the user input entry, and in response to the spatial parameters input by the user, the processor adjusts the sounding direction of the first speaker according to the data filled in. This method is less costly than obtaining the spatial parameters through a distance detector.
- The audio output method can also sense the movement state of the display device 100 at all times. The movement state includes being moved so that its position changes. When the position of the display device 100 changes, the distance detector 70 is triggered to detect the spatial parameters of the application environment, and the sounding direction of the first speaker 31 is adjusted according to the spatial parameters. In this way, whenever the position of the display device 100 changes, the distance detector 70 acquires the spatial parameters of the application environment, so that the first speaker 31 can adjust its sounding direction accordingly and the user's audio-visual experience is ensured at all times.
- the audio output method further comprises changing the time difference between the first time instant and the second time instant in response to the change in the spatial parameter.
- The processor may adjust the time difference between the first moment and the second moment according to spatial parameters such as the first distance, the second distance and the third distance, to ensure that the sounds emitted by the first speaker and the second speaker reach the user viewing area at the same time, making the sound image localization more accurate.
- From spatial parameters such as the first distance, the second distance and the third distance, the difference between the path along which the first sound emitted by the first speaker reaches the user area and the path along which the second sound emitted by the second speaker reaches the user area can also be obtained, and the time difference can be determined from this path difference.
- the components for executing each step of the audio output method are not limited to the components described above, and may be any components that can perform the above method.
Abstract
Description
Claims (26)
- A display device, characterized in that the display device comprises a display screen, a first speaker and a second speaker; the first speaker is arranged at the rear side of the display screen, and the sounding direction of the first speaker faces the upper rear of the display device; the sounding direction of the second speaker faces the front of the display device or faces below the display device; and the first speaker and the second speaker do not emit sound synchronously.
- The display device according to claim 1, characterized in that the sound output direction of the first speaker forms an angle of 10 to 80 degrees with the horizontal direction.
- The display device according to claim 1 or 2, characterized in that the sound emitted by the first speaker is reflected by a first obstacle located behind the display device to a second obstacle located above the display device, and is reflected by the second obstacle to a user viewing area in front of the display device.
- The display device according to any one of claims 1 to 3, characterized in that the first speaker emits a first sound at a first moment, the second speaker emits a second sound corresponding to the first sound at a second moment, and the first sound and the second sound are mixed in the user viewing area; wherein there is a time difference between the first moment and the second moment.
- The display device according to claim 4, characterized in that when the spatial parameters of the application environment in which the display device is located change, the time difference between the first moment and the second moment changes.
- The display device according to claim 4 or 5, characterized in that when the volume ratio of the first sound and the second sound changes, the position of the sound image formed by the first sound and the second sound changes.
- The display device according to any one of claims 1 to 6, characterized in that the sounding direction of the first speaker is variable.
- The display device according to claim 7, characterized in that when the spatial parameters of the application environment in which the display device is located change, the sounding direction of the first speaker is variable.
- The display device according to claim 5 or 8, characterized in that the spatial parameters comprise at least one of a first distance, a second distance and a third distance, the first distance being the distance between the display device and the first obstacle, the second distance being the distance between the display device and the second obstacle, and the third distance being the distance between the display device and the user.
- The display device according to claim 9, characterized in that the display device further comprises a processor coupled to both the first speaker and the second speaker, the processor being configured to control the first speaker to emit the first sound at the first moment and to control the second speaker to emit the second sound at the second moment.
- The display device according to claim 10, characterized in that the processor is further configured to control the ratio of the volume of the first sound emitted by the first speaker to the volume of the second sound emitted by the second speaker.
- The display device according to claim 11, characterized in that the display device further comprises a distance detector configured to detect the application environment of the display device and acquire the spatial parameters of the application environment; in response to a change in the spatial parameters, the sounding direction of the first speaker changes.
- The display device according to claim 12, characterized in that the distance detector is coupled to the processor, the distance detector sends an instruction to the processor, and in response to the instruction, the time difference between the first moment and the second moment changes.
- The display device according to any one of claims 1 to 13, characterized in that the display device comprises a top and a bottom, and the second speaker is arranged closer to the bottom than the first speaker.
- An audio output method for a display device, characterized in that the display device comprises a display screen, a first speaker and a second speaker, the first speaker emits sound toward the upper rear of the display device, and the sounding direction of the second speaker faces the front of the display device or faces below the display device; the audio output method comprises: the first speaker receiving first audio information, and the second speaker receiving second audio information corresponding to the first audio information, wherein the playing times of the first audio information and the second audio information are not synchronized.
- The audio output method according to claim 15, characterized in that the sound emitted by the first speaker is reflected by a first obstacle located behind the display device to a second obstacle located above the display device, and is reflected by the second obstacle to a user viewing area in front of the display device.
- The audio output method according to claim 15 or 16, characterized in that the audio output method comprises: after receiving the first audio information, the first speaker emitting a first sound at a first moment; and after receiving the second audio information, the second speaker emitting a second sound corresponding to the first sound at a second moment; the first sound and the second sound being mixed in the user viewing area; wherein there is a time difference between the first moment and the second moment.
- The audio output method according to any one of claims 15 to 17, characterized in that when the volume ratio of the first sound and the second sound changes, the position of the sound image formed by the first sound and the second sound changes.
- The audio output method according to any one of claims 15 to 18, characterized in that the audio output method further comprises: detecting the application environment of the display device and acquiring spatial parameters of the application environment; and in response to a change in the spatial parameters, changing the sounding direction of the first speaker.
- The audio output method according to claim 17, characterized in that the audio output method further comprises: detecting the application environment of the display device and acquiring spatial parameters of the application environment; and in response to a change in the spatial parameters, changing the time difference between the first moment and the second moment.
- The audio output method according to claim 19 or 20, characterized in that the spatial parameters comprise at least one of a first distance, a second distance and a third distance, the first distance being the distance between the display device and the first obstacle, the second distance being the distance between the display device and the second obstacle, and the third distance being the distance between the display device and the user.
- A display device, characterized in that the display device comprises a display screen, a first speaker and a second speaker; the first speaker is arranged at the rear side of the display screen, and the sounding direction of the first speaker faces the upper rear of the display device; the sounding direction of the second speaker is different from the sounding direction of the first speaker.
- The display device according to claim 22, characterized in that the sound output direction of the first speaker forms an angle of 10 to 80 degrees with the horizontal direction.
- The display device according to claim 23, characterized in that the sound emitted by the first speaker is reflected by a first obstacle located behind the display device to a second obstacle located above the display device, and is reflected by the second obstacle to a user viewing area in front of the display device.
- The display device according to any one of claims 22 to 24, characterized in that the sounding direction of the first speaker is variable.
- The display device according to claim 25, characterized in that when the spatial parameters of the application environment in which the display device is located change, the sounding direction of the first speaker is variable.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22787422.9A EP4304164A1 (en) | 2021-04-13 | 2022-04-06 | Display device and audio output method therefor |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110396109.1A CN115209077A (zh) | 2021-04-13 | 2021-04-13 | Display device and audio output method therefor |
CN202110396109.1 | 2021-04-13 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022218195A1 true WO2022218195A1 (zh) | 2022-10-20 |
Family
ID=83571294
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/085410 WO2022218195A1 (zh) | Display device and audio output method therefor | 2021-04-13 | 2022-04-06 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4304164A1 (zh) |
CN (1) | CN115209077A (zh) |
WO (1) | WO2022218195A1 (zh) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101895801A (zh) * | 2009-05-22 | 2010-11-24 | 三星电子株式会社 | 用于声音聚焦的设备和方法 |
CN106030505A (zh) * | 2014-02-11 | 2016-10-12 | Lg电子株式会社 | 显示装置及其控制方法 |
CN111757171A (zh) * | 2020-07-03 | 2020-10-09 | 海信视像科技股份有限公司 | 一种显示设备及音频播放方法 |
CN112153538A (zh) * | 2020-09-24 | 2020-12-29 | 京东方科技集团股份有限公司 | 显示装置及其全景声实现方法、非易失性存储介质 |
CN112218153A (zh) * | 2019-07-12 | 2021-01-12 | 海信视像科技股份有限公司 | 显示装置、音响设备、以及多数据通道iis声道接口电路 |
- 2021-04-13 CN CN202110396109.1A patent/CN115209077A/zh active Pending
- 2022-04-06 WO PCT/CN2022/085410 patent/WO2022218195A1/zh active Application Filing
- 2022-04-06 EP EP22787422.9A patent/EP4304164A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN115209077A (zh) | 2022-10-18 |
EP4304164A1 (en) | 2024-01-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5675729B2 (ja) | Audio-enhanced apparatus | |
US9693169B1 (en) | Ultrasonic speaker assembly with ultrasonic room mapping | |
CN106303836B (zh) | Method and device for adjusting stereo playback | |
KR101880844B1 (ko) | Ultrasonic speaker assembly for audio spatial effect | |
US10587979B2 (en) | Localization of sound in a speaker system | |
US20190394567A1 (en) | Dynamically Adapting Sound Based on Background Sound | |
US20190391783A1 (en) | Sound Adaptation Based on Content and Context | |
US20120128184A1 (en) | Display apparatus and sound control method of the display apparatus | |
US20190394602A1 (en) | Active Room Shaping and Noise Control | |
US20230021918A1 (en) | Systems, devices, and methods of manipulating audio data based on microphone orientation | |
US20190394603A1 (en) | Dynamic Cross-Talk Cancellation | |
US20200084537A1 (en) | Automatically movable speaker to track listener or optimize sound performance | |
US11641547B2 (en) | Sound box assembly, display apparatus, and audio output method | |
WO2022218195A1 (zh) | Display device and audio output method therefor | |
US20190394601A1 (en) | Automatic Room Filling | |
US11620976B2 (en) | Systems, devices, and methods of acoustic echo cancellation based on display orientation | |
KR102284914B1 (ko) | Sound tracking system implementing preset images | |
US20210382672A1 (en) | Systems, devices, and methods of manipulating audio data based on display orientation | |
CN111586553B (zh) | Display device and operating method thereof | |
US10484809B1 (en) | Closed-loop adaptation of 3D sound | |
US20220217469A1 (en) | Display Device, Control Method, And Program | |
CN220210601U (zh) | A sound system | |
TW202324372A (zh) | Audio system capable of dynamically adjusting the target listening point and eliminating interference from environmental objects | |
KR20150047411A (ko) | Method and apparatus for outputting sound through a gap speaker | |
CN117579797A (zh) | Projection device and system, and projection method |
Legal Events
Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22787422; Country of ref document: EP; Kind code of ref document: A1
WWE | Wipo information: entry into national phase | Ref document number: 2022787422; Country of ref document: EP
ENP | Entry into the national phase | Ref document number: 2022787422; Country of ref document: EP; Effective date: 20231002
WWE | Wipo information: entry into national phase | Ref document number: 18286658; Country of ref document: US
NENP | Non-entry into the national phase | Ref country code: DE