WO2022218195A1 - Display device and audio output method thereof - Google Patents

Display device and audio output method thereof

Info

Publication number
WO2022218195A1
Authority
WO
WIPO (PCT)
Prior art keywords
speaker
display device
sound
distance
moment
Prior art date
Application number
PCT/CN2022/085410
Other languages
English (en)
French (fr)
Inventor
霍鹏
王安
张强
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to EP22787422.9A (published as EP4304164A1)
Publication of WO2022218195A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/64 Constructional details of receivers, e.g. cabinets or dust covers
    • H04N5/642 Disposition of sound reproducers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/44 Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/60 Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals
    • H04N5/607 Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals for more than one sound signal, e.g. stereo, multilanguages
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/02 Spatial or constructional arrangements of loudspeakers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/04 Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/301 Automatic calibration of stereophonic sound system, e.g. with test microphone
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439 Processing of audio elementary streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/20 Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/323 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only for loudspeakers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00 Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/02 Details casings, cabinets or mounting therein for transducers covered by H04R1/02 but not provided for in any of its subgroups
    • H04R2201/025 Transducer mountings or cabinet supports enabling variable orientation of transducer of cabinet
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00 Details of connection covered by H04R, not provided for in its groups
    • H04R2420/01 Input selection or mixing for amplifiers or loudspeakers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00 Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10 General applications
    • H04R2499/15 Transducers incorporated in visual displaying devices, e.g. televisions, computer displays, laptops
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S1/00 Two-channel systems
    • H04S1/007 Two-channel systems in which the audio signals are in digital form
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/11 Positioning of individual sound objects, e.g. moving airplane, within a sound field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/05 Application of the precedence or Haas effect, i.e. the effect of first wavefront, in order to improve sound-source localisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S5/00 Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/302 Electronic adaptation of stereophonic sound system to listener position or orientation

Definitions

  • the present application relates to the field of television equipment, and in particular, to a display device and an audio output method thereof.
  • Existing video playback devices, such as flat-screen TVs, have at least two speakers to produce a stereo effect, but the stereo effect of these existing devices is often poor.
  • the embodiment of the present application provides a display device, aiming to obtain a display device with good stereo effect.
  • the embodiment of the present application also provides an audio output method of a display device, which is used to improve the stereo effect of the display device.
  • In a first aspect, a display device includes a display screen, a first speaker and a second speaker.
  • the first speaker is arranged on the rear side of the display screen, and the sounding direction of the first speaker is toward the rear and upper part of the display device; the sounding direction of the second speaker is toward the front of the display device or toward the area below the display device; and the first speaker and the second speaker emit sound asynchronously.
  • the sounding direction of the first speaker is the initial propagation direction of the sound emitted by the first speaker
  • the rear and upper part of the display device refers to the range of directions between directly behind the display device and directly above the display device.
  • by making the first speaker and the second speaker sound asynchronously, the user can receive the sound from the first speaker and the second speaker at the same time, the sound image position formed by the two sounds is located more accurately, and there is no mismatch between the picture and the perceived sound position, so the sound image stays accurately synchronized with the picture, the stereo effect is good, and the user experience is improved.
  • the sound emitted by the first speaker is reflected by a first obstacle located behind the display device to a second obstacle located above the display device, and reflected to the user viewing area in front of the display device through the second obstacle.
  • the first obstacle may be a wall
  • the second obstacle may be a ceiling.
  • the sounding direction of the first speaker is directed toward the upper rear of the display device, that is, the axial direction of the first speaker points toward the upper rear of the display device. Because of this, most of the sound of the first speaker is directed toward the first obstacle located behind the display device, is reflected by the first obstacle to the second obstacle located above the display device, and is then reflected by the second obstacle into the user viewing area. A small part of the sound deviates greatly from the axial direction and can travel directly toward the front of the display device into the user viewing area.
  • because this part of the sound has a large off-axis angle from the main axis of the first speaker, its intensity is weak when it reaches the user's viewing area directly.
  • as a result, the sound that reaches the user's viewing area directly has only a weak masking effect on the sound that reaches the user's viewing area after being reflected twice by the wall and the ceiling.
  • the sound emitted by the first speaker is reflected by the first obstacle to the second obstacle, and then reflected from the ceiling and projected to the user's viewing area.
  • the sound image of the sound transmitted to the user's viewing area is located near or above the ceiling, so that
  • the range of the sound field in the height direction formed by the first speaker is not limited by the size of the display screen; the sound field in the height direction can cover the entire spatial height of the application environment and achieve a sky sound-image localization effect. For example, the sound image of an aircraft engine can be positioned above the display screen and played through the first speaker, so that the picture played on the display screen is consistent with the perceived sound position.
  • the sound output direction of the first speaker is at 10 degrees to 80 degrees (including 10 degrees and 80 degrees) to the horizontal direction, so as to ensure that the sound emitted by the first speaker is reflected by the wall and the ceiling in turn and finally reaches the user viewing area.
  • the horizontal direction is the direction perpendicular to the display surface of the display screen.
  • in some embodiments, the sound output direction of the first speaker is at 35 degrees to 45 degrees (including 35 degrees and 45 degrees) to the horizontal direction; when the sound output direction of the first speaker is within this range, the display device can be used in a variety of application environments with different spatial parameters.
  • the spatial parameters are a collection of several different parameters, such as the distance from the display device to the wall, the distance from the display device to the ceiling, and the distance from the display device to the user. That is to say, the display device can be applied to application environments whose distances to the wall, to the ceiling, and to the user each fall within a certain range, while still ensuring the user's audio-visual experience.
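  • The reflection geometry above can be made concrete with a short sketch. This is an illustration only, not code from the patent: the distances, the ear-height offset and the helper names are assumptions introduced here, and unfolding the two reflections (wall, then ceiling) into mirror images reduces the angle choice to a single arctangent.

```python
import math

def emission_angle_deg(d_wall, h_ceiling, d_viewer, h_ear_offset=0.0):
    """Angle above the horizontal (aimed toward the wall) at which the first
    speaker should fire so that sound reflected off the wall and then the
    ceiling lands at the viewer.

    d_wall       -- horizontal distance from the speaker to the wall behind it (m)
    h_ceiling    -- height of the ceiling above the speaker (m)
    d_viewer     -- horizontal distance from the speaker to the viewer (m)
    h_ear_offset -- viewer ear height minus speaker height (m), often <= 0
    """
    # Unfolded across both reflections, the ray climbs (2*h_ceiling - h_ear_offset)
    # while covering a horizontal distance of (d_viewer + 2*d_wall).
    return math.degrees(math.atan2(2 * h_ceiling - h_ear_offset,
                                   d_viewer + 2 * d_wall))

def reflected_path_length(d_wall, h_ceiling, d_viewer, h_ear_offset=0.0):
    """Length (m) of the wall-then-ceiling reflected path from speaker to viewer."""
    return math.hypot(d_viewer + 2 * d_wall, 2 * h_ceiling - h_ear_offset)

# Example with assumed distances: TV 0.1 m from the wall, ceiling 1.6 m above
# the speaker, viewer 3 m away with ears 0.8 m below the speaker.
print(round(emission_angle_deg(0.1, 1.6, 3.0, -0.8), 1))  # ~51.3 degrees, inside 10-80
```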
  • the first speaker emits the first sound at the first moment
  • the second speaker emits the second sound corresponding to the first sound at the second moment
  • the first sound and the second sound are mixed in the viewing area of the user.
  • there is a time difference between the first moment and the second moment.
  • in this way, the user can simultaneously receive the sound from the first speaker and the second speaker at a third moment, and the sound image position is located more accurately.
  • there is no deviation between the picture and the perceived sound position, so the sound image stays accurately synchronized with the picture and the user experience is improved.
  • the time difference between the first moment and the second moment can change, so as to ensure that the sounds emitted by the first speaker and the second speaker reach the viewing area of the user at the same time, making the sound image localization more accurate.
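  • One rough way to choose such a time difference, sketched under assumed path lengths and a nominal speed of sound (this is not the patent's implementation), is to delay the speaker whose sound travels the shorter path by the travel-time difference so that both sounds arrive at the same third moment.

```python
SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature (assumed)

def start_time_offsets(reflected_path_m, direct_path_m):
    """Return (first_speaker_delay_s, second_speaker_delay_s).

    reflected_path_m -- wall-plus-ceiling path from the first speaker to the viewer
    direct_path_m    -- straight path from the second speaker to the viewer
    The speaker on the longer path starts immediately (delay 0); the other
    start is postponed by the travel-time difference so both arrive together.
    """
    dt = (reflected_path_m - direct_path_m) / SPEED_OF_SOUND
    if dt >= 0:
        return 0.0, dt   # first speaker fires at the first moment, second dt later
    return -dt, 0.0      # (unlikely here) the reflected path is the shorter one

# Example with assumed paths of 5.12 m (reflected) and 3.1 m (direct):
print(start_time_offsets(5.12, 3.1))  # second speaker delayed by about 5.9 ms
```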
  • by adjusting the ratio of the volume of the first sound to the volume of the second sound, the position of the sound image formed by the first sound and the second sound changes.
  • in this way, the position of the sound image can be adjusted in the height direction, so that the sound image position stays synchronized with the picture.
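  • A minimal way to picture this volume-ratio adjustment is constant-power panning between the height-reflecting first speaker and the lower second speaker. The pan law and the normalized height parameter below are assumptions chosen for illustration, not taken from the patent.

```python
import math

def height_gains(height):
    """Map a normalized sound-image height (0 = at the second speaker,
    1 = fully 'in the sky' via the first speaker) to a pair of gains
    using a constant-power pan law (an assumed choice)."""
    height = min(max(height, 0.0), 1.0)
    angle = height * math.pi / 2
    return math.sin(angle), math.cos(angle)  # (first_speaker_gain, second_speaker_gain)

# Raising `height` shifts energy toward the first speaker, pulling the perceived
# sound image upward, e.g. to follow an aircraft climbing up the screen.
for h in (0.0, 0.5, 1.0):
    g1, g2 = height_gains(h)
    print(h, round(g1, 3), round(g2, 3))
```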
  • the sound emission direction of the first speaker is variable, and the sound emission direction of the first speaker can be adjusted as required, so that the sound emitted by the first speaker can be accurately transmitted to the viewing area of the user.
  • in response to changes in the spatial parameters of the application environment, the sounding direction of the first speaker can be changed.
  • that is, the sounding direction of the first speaker is adjusted according to the spatial parameters, so that the sounding direction is adjusted automatically and the user experience is improved.
  • the spatial parameters include at least one of a first distance, a second distance, and a third distance, where the first distance is the distance between the display device and the first obstacle, the second distance is the distance between the display device and the second obstacle, and the third distance is the distance between the display device and the user.
  • the display device can adjust the position of the viewing area of the user according to the position of the user, so that the user can have a good audio-visual experience no matter where he moves.
  • the display device includes a top and a bottom, and the second speaker is arranged near the bottom relative to the first speaker, so that the user can receive a combined sound formed by the sounds emitted from the speakers at different positions, thereby improving the stereoscopic effect of the sound.
  • the display device further includes a processor coupled to both the first speaker and the second speaker; the processor controls the ratio of the volume of the first sound emitted by the first speaker to the volume of the second sound emitted by the second speaker, so as to adjust the position of the sound image formed by the first sound and the second sound, thereby adjusting the sound image position in the height direction and keeping it synchronized with the picture.
  • the processor is further configured to control the first speaker to emit the first sound at the first moment and the second speaker to emit the second sound at the second moment, so that the user receives the first sound and the second sound at the third moment, simultaneously or almost simultaneously;
  • wherein there is a time difference between the first moment and the second moment.
  • the display device further includes a drive assembly coupled to the processor; the processor drives the drive assembly to adjust the sound emission direction of the first speaker, so that the sound emission direction can be adjusted as required and the sound emitted by the first speaker is accurately transmitted to the user's viewing area.
  • the display device further includes a distance detector, which detects the application environment of the display device and obtains the spatial parameters of the application environment; in response to changes in the spatial parameters, the sounding direction of the first speaker changes, so that the sounding direction of the first speaker is adjusted automatically and the user experience is improved.
  • the distance detector is coupled to the processor and sends an instruction to the processor; in response to the instruction, the time difference between the first moment and the second moment changes, ensuring that the sound from the first speaker and the sound from the second speaker reach the user's viewing area at the same time and that the sound image localization is more accurate.
  • the distance detector includes a radar. Radar is more accurate than other detectors.
  • the display device further includes a sensing sensor, that is, a sensor that senses that the display device is moved or that the position of the display device changes.
  • the sensory sensor may be a gyroscope, an accelerometer, or the like.
  • when the sensing sensor senses such movement, the distance detector is triggered to detect the spatial parameters of the application environment, and the first speaker adjusts its sounding direction according to the spatial parameters; in this way, whenever the position of the display device changes, the distance detector re-acquires the spatial parameters of the application environment and the first speaker re-adjusts its sounding direction, ensuring the user's audio-visual experience at all times.
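  • The behaviour just described amounts to a simple event-driven loop: a motion sensor gates a distance measurement, which in turn re-aims the speaker. The sketch below illustrates that control flow only; the sensor and actuator interfaces, the threshold, and the polling rate are all hypothetical.

```python
import math
import time

MOTION_THRESHOLD = 0.5  # assumed accelerometer change threshold, m/s^2

def recalibration_loop(motion_sensor, distance_detector, speaker_drive):
    """Re-measure spatial parameters and re-aim the first speaker whenever the
    display device is moved. The three arguments are hypothetical interfaces:
    motion_sensor.read() -> float, distance_detector.measure() ->
    (d_wall, h_ceiling, d_viewer) in metres, speaker_drive.set_angle(degrees)."""
    while True:
        if abs(motion_sensor.read()) > MOTION_THRESHOLD:
            d_wall, h_ceiling, d_viewer = distance_detector.measure()
            # Same unfolded two-reflection geometry as the earlier sketch
            # (viewer ear-height offset omitted for brevity).
            angle = math.degrees(math.atan2(2 * h_ceiling, d_viewer + 2 * d_wall))
            speaker_drive.set_angle(angle)
        time.sleep(0.2)  # poll a few times per second (arbitrary rate)
```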
  • the first speaker is located between the top and the midpoint between the top and the bottom. That is to say, the first speaker is located below the top, so that sound emitted by the first speaker toward the front of the display device is blocked by the casing of the display device, effectively reducing the sound transmitted directly from the first speaker to the viewing area of the user and making the sky sound image localization more accurate.
  • the second speaker is arranged at the bottom and/or at the side of the display device.
  • In a second aspect, an audio output method of a display device is provided. The display device includes a display screen, a first speaker and a second speaker; the first speaker emits sound toward the rear and upper part of the display device, and the sounding direction of the second speaker is toward the front of the display device or toward the bottom of the display device. The audio output method includes:
  • the first speaker receives the first audio information
  • the second speaker receives the second audio information corresponding to the first audio information
  • the playing times of the first audio information and the second audio information are not synchronized.
  • the audio output method of the present application controls the first speaker to sound at the first moment and the second speaker to sound at the second moment, so that the user can simultaneously receive the sound from the first speaker and the second speaker at the third moment; the stereo effect is good and the user experience is improved.
  • the sound emitted by the first speaker is reflected by a first obstacle located behind the display device to a second obstacle located above the display device, and reflected to the user viewing area in front of the display device through the second obstacle.
  • by orienting the sounding direction of the first speaker toward the rear and upper part of the display device, the sound that reaches the user's viewing area via the ceiling reflection has a good sky sound-image localization effect, which improves the user's audio-visual experience.
  • the audio output method includes: the first speaker emits a first sound at a first moment after receiving the first audio information, and the second speaker emits a second sound corresponding to the first sound at a second moment after receiving the second audio information; the first sound and the second sound are mixed in the user's viewing area; and there is a time difference between the first moment and the second moment.
  • the first speaker is controlled to emit sound at the first moment and the second speaker is controlled to emit sound at the second moment, so that the user can simultaneously receive the sound from the first speaker and the second speaker at the third moment; the sound image position is located more accurately, there is no deviation between the picture and the perceived sound position, the sound image stays accurately synchronized with the picture, and the user experience is improved.
  • by adjusting the volume ratio of the first audio information to the second audio information, the position of the sound image formed by the first sound and the second sound changes; keeping the sound image position synchronized with the picture in this way produces a true three-dimensional sound field effect.
  • the audio output method further includes: detecting an application environment of the display device, and acquiring spatial parameters of the application environment; and changing the sounding direction of the first speaker in response to changes in the spatial parameters.
  • the audio output method in this embodiment can adjust the sounding direction of the first speaker according to the application environment of the display device.
  • the display device detects the spatial parameters of the application environment through the distance detector and then adjusts the sounding direction of the first speaker according to those parameters, so that in different application environments the sound emitted by the first speaker is still reflected into the user's viewing area, improving the user's audio-visual experience.
  • the audio output method further includes: detecting the application environment of the display device and obtaining the spatial parameters of the application environment; and, in response to changes in the spatial parameters, changing the time difference between the first moment and the second moment, ensuring that the sound from the first speaker and the sound from the second speaker reach the viewing area of the user at the same time and that the sound image localization is more accurate.
  • the spatial parameters include at least one of a first distance, a second distance, and a third distance, where the first distance is the distance between the display device and the first obstacle, the second distance is the distance between the display device and the second obstacle, and the third distance is the distance between the display device and the user.
  • the audio output method can also sense the movement state of the display device at all times; the movement state includes the display device being moved or its position changing.
  • when such movement is sensed, the distance detector is triggered to detect the spatial parameters of the application environment, and the sounding direction of the first speaker is adjusted according to those parameters; in this way, whenever the position of the display device changes, the distance detector re-acquires the spatial parameters and the first speaker re-adjusts its sounding direction, ensuring the user's audio-visual experience at all times.
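  • Putting the steps of the audio output method together, one possible flow, built on the same illustrative modelling choices as the earlier sketches (the spatial parameters are assumed to come from the distance detector, and the direct path from the second speaker is approximated by the viewer distance), is: aim the first speaker, derive the arrival-equalizing delays, and derive the height gains.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s (assumed)

def configure_audio_output(d_wall, h_ceiling, d_viewer, target_height=1.0):
    """Given measured spatial parameters, return (aim_angle_deg, first_delay_s,
    second_delay_s, first_gain, second_gain). All modelling choices here are
    illustrative assumptions, not the patent's implementation."""
    # Aim the first speaker using the unfolded two-reflection geometry.
    aim_angle = math.degrees(math.atan2(2 * h_ceiling, d_viewer + 2 * d_wall))
    # Delay the second speaker by the travel-time difference between the
    # wall-plus-ceiling path and the (approximate) direct path.
    reflected = math.hypot(d_viewer + 2 * d_wall, 2 * h_ceiling)
    delay = max(reflected - d_viewer, 0.0) / SPEED_OF_SOUND
    # Constant-power split between the height speaker and the lower speaker.
    pan = min(max(target_height, 0.0), 1.0) * math.pi / 2
    return aim_angle, 0.0, delay, math.sin(pan), math.cos(pan)

# Example with assumed measurements: 0.1 m to the wall, ceiling 1.6 m above
# the speaker, viewer 3 m away.
print(configure_audio_output(0.1, 1.6, 3.0))
```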
  • In a third aspect, a display device includes a display screen, a first speaker and a second speaker; the first speaker is arranged on the rear side of the display screen, the sounding direction of the first speaker is toward the rear and upper part of the display device, and the sounding direction of the second speaker is different from the sounding direction of the first speaker.
  • the sounding direction of the second speaker of the present application is different from the sounding direction of the first speaker, so that the user can receive the sound from different directions and improve the stereoscopic effect of the sound.
  • the sound emitted by the first speaker is reflected by a first obstacle located behind the display device to a second obstacle located above the display device, and reflected to the user viewing area in front of the display device through the second obstacle.
  • the first obstacle may be a wall
  • the second obstacle may be a ceiling.
  • the sounding direction of the first speaker is directed toward the upper rear of the display device, that is, the axial direction of the first speaker points toward the upper rear of the display device. Because of this, most of the sound of the first speaker is directed toward the first obstacle located behind the display device, is reflected by the first obstacle to the second obstacle located above the display device, and is then reflected by the second obstacle into the user viewing area. A small part of the sound deviates greatly from the axial direction and can travel directly toward the front of the display device into the user viewing area.
  • because this part of the sound has a large off-axis angle from the main axis of the first speaker, its intensity is weak when it reaches the user's viewing area directly.
  • as a result, the sound that reaches the user's viewing area directly has only a weak masking effect on the sound that reaches the user's viewing area after being reflected twice by the wall and the ceiling.
  • the sound emitted by the first speaker is reflected by the first obstacle to the second obstacle, and then reflected from the ceiling and projected to the user's viewing area.
  • the sound image of the sound transmitted to the user's viewing area is located near or above the ceiling, so that
  • the range of the sound field in the height direction formed by the first speaker is not limited by the size of the display screen; the sound field in the height direction can cover the entire spatial height of the application environment and achieve a sky sound-image localization effect. For example, the sound image of an aircraft engine can be positioned above the display screen and played through the first speaker, so that the picture played on the display screen is consistent with the perceived sound position.
  • the sound output direction of the first speaker is at 10 degrees to 80 degrees (including 10 degrees and 80 degrees) to the horizontal direction, so as to ensure that the sound emitted by the first speaker is reflected by the wall and the ceiling in turn and finally reaches the user viewing area.
  • the horizontal direction is the direction perpendicular to the display surface of the display screen.
  • the sounding direction of the first speaker is variable, so that the first speaker adjusts the sounding direction as required, so that the sound emitted by the first speaker can be accurately transmitted to the viewing area of the user.
  • in response to changes in the spatial parameters of the application environment, the sounding direction of the first speaker can be changed.
  • that is, the sounding direction of the first speaker is adjusted according to the spatial parameters, so that the sounding direction is adjusted automatically and the user experience is improved.
  • In a fourth aspect, a display device includes a display screen, a first speaker and a second speaker; the first speaker is arranged on the rear side of the display screen, the sounding direction of the first speaker is toward the rear and upper part of the display device, and the sounding direction of the first speaker is variable.
  • the sounding direction of the first speaker can be adjusted as required, so that the sound emitted by the first speaker can be accurately transmitted to the viewing area of the user.
  • the sound emitted by the first speaker is reflected by a first obstacle located behind the display device to a second obstacle located above the display device, and reflected to the user viewing area in front of the display device through the second obstacle.
  • the first obstacle may be a wall
  • the second obstacle may be a ceiling.
  • the sounding direction of the first speaker is directed toward the upper rear of the display device, that is, the axial direction of the first speaker points toward the upper rear of the display device. Because of this, most of the sound of the first speaker is directed toward the first obstacle located behind the display device, is reflected by the first obstacle to the second obstacle located above the display device, and is then reflected by the second obstacle into the user viewing area. A small part of the sound deviates greatly from the axial direction and can travel directly toward the front of the display device into the user viewing area.
  • because this part of the sound has a large off-axis angle from the main axis of the first speaker, its intensity is weak when it reaches the user's viewing area directly.
  • as a result, the sound that reaches the user's viewing area directly has only a weak masking effect on the sound that reaches the user's viewing area after being reflected twice by the wall and the ceiling.
  • the sound emitted by the first speaker is reflected by the first obstacle to the second obstacle, and then reflected from the ceiling and projected to the user's viewing area.
  • the sound image of the sound transmitted to the user's viewing area is located near or above the ceiling, so that
  • the range of the sound field in the height direction formed by the first speaker is not limited by the size of the display screen; the sound field in the height direction can cover the entire spatial height of the application environment and achieve a sky sound-image localization effect. For example, the sound image of an aircraft engine can be positioned above the display screen and played through the first speaker, so that the picture played on the display screen is consistent with the perceived sound position.
  • in response to changes in the spatial parameters of the application environment, the sounding direction of the first speaker can be changed.
  • that is, the sounding direction of the first speaker is adjusted according to the spatial parameters, so that the sounding direction is adjusted automatically and the user experience is improved.
  • the spatial parameters include at least one of a first distance, a second distance, and a third distance, where the first distance is the distance between the display device and the first obstacle, the second distance is the distance between the display device and the second obstacle, and the third distance is the distance between the display device and the user.
  • by adjusting the ratio of the volume of the first sound to the volume of the second sound, the position of the sound image formed by the first sound and the second sound changes.
  • in this way, the position of the sound image can be adjusted in the height direction, so that the sound image position stays synchronized with the picture.
  • the first speaker emits the first sound at the first moment
  • the second speaker emits the second sound corresponding to the first sound at the second moment
  • the first sound and the second sound are mixed in the viewing area of the user.
  • there is a time difference between the first moment and the second moment.
  • in this way, the user can simultaneously receive the sound from the first speaker and the second speaker at a third moment, and the sound image position is located more accurately.
  • there is no deviation between the picture and the perceived sound position, so the sound image stays accurately synchronized with the picture and the user experience is improved.
  • the time difference between the first moment and the second moment can change, so as to ensure that the sounds emitted by the first speaker and the second speaker reach the viewing area of the user at the same time, making the sound image localization more accurate.
  • in one case, the time difference between the first moment and the second moment is a first time difference;
  • in another case, the time difference between the first moment and the second moment is a second time difference, wherein the first time difference and the second time difference are different.
  • the time difference between the first moment and the second moment can be obtained from data such as the sounding direction of the first speaker.
  • In a fifth aspect, a display device includes a display screen and a first speaker; the first speaker is arranged on the rear side of the display screen, and the sounding direction of the first speaker faces the rear and upper part of the display device; the sound emitted by the first speaker is reflected by a first obstacle located behind the display device to a second obstacle located above the display device, and is reflected by the second obstacle to the user viewing area in front of the display device.
  • the first obstacle may be a wall
  • the second obstacle may be a ceiling.
  • the sounding direction of the first speaker of the present application is directed toward the upper rear of the display device, that is, the axial direction of the first speaker points toward the upper rear of the display device. Because of this, most of the sound of the first speaker is directed toward the first obstacle located behind the display device, is reflected by the first obstacle to the second obstacle located above the display device, and is then reflected by the second obstacle into the user viewing area. A small part of the sound deviates greatly from the axial direction and can travel directly toward the front of the display device into the user viewing area.
  • because this part of the sound has a large off-axis angle from the main axis of the first speaker, its intensity is weak when it reaches the user's viewing area directly.
  • as a result, the sound that reaches the user's viewing area directly has only a weak masking effect on the sound that reaches the user's viewing area after being reflected twice by the wall and the ceiling.
  • the sound emitted by the first speaker is reflected by the first obstacle to the second obstacle, and then reflected from the ceiling and projected to the user's viewing area.
  • the sound image of the sound transmitted to the user's viewing area is located near or above the ceiling, so that
  • the range of the sound field in the height direction formed by the first speaker is not limited by the size of the display screen; the sound field in the height direction can cover the entire spatial height of the application environment and achieve a sky sound-image localization effect. For example, the sound image of an aircraft engine can be positioned above the display screen and played through the first speaker, so that the picture played on the display screen is consistent with the perceived sound position.
  • the sound output direction of the first speaker is at 10 degrees to 80 degrees (including 10 degrees and 80 degrees) to the horizontal direction, so as to ensure that the sound emitted by the first speaker is reflected by the wall and the ceiling in turn and finally reaches the user viewing area.
  • the horizontal direction is the direction perpendicular to the display surface of the display screen.
  • the display device further includes a second speaker, and the sounding direction of the second speaker is toward the front of the display device or the bottom of the display device.
  • by adjusting the ratio of the volume of the first sound to the volume of the second sound, the position of the sound image formed by the first sound and the second sound changes.
  • in this way, the position of the sound image can be adjusted in the height direction, so that the sound image position stays synchronized with the picture.
  • the first speaker emits the first sound at the first moment
  • the second speaker emits the second sound corresponding to the first sound at the second moment
  • the first sound and the second sound are mixed in the viewing area of the user.
  • there is a time difference between the first moment and the second moment.
  • in this way, the user can simultaneously receive the sound from the first speaker and the second speaker at a third moment, and the sound image position is located more accurately.
  • there is no deviation between the picture and the perceived sound position, so the sound image stays accurately synchronized with the picture and the user experience is improved.
  • the time difference between the first moment and the second moment can change, so as to ensure that the sounds emitted by the first speaker and the second speaker reach the viewing area of the user at the same time, making the sound image localization more accurate.
  • the sound emission direction of the first speaker is variable, and the sound emission direction of the first speaker can be adjusted as required, so that the sound emitted by the first speaker can be accurately transmitted to the viewing area of the user.
  • in response to changes in the spatial parameters of the application environment, the sounding direction of the first speaker can be changed.
  • that is, the sounding direction of the first speaker is adjusted according to the spatial parameters, so that the sounding direction is adjusted automatically and the user experience is improved.
  • the spatial parameters include at least one of a first distance, a second distance, and a third distance, where the first distance is the distance between the display device and the first obstacle, the second distance is the distance between the display device and the second obstacle, and the third distance is the distance between the display device and the user.
  • the display device can adjust the position of the viewing area of the user according to the position of the user, so that the user can have a good audio-visual experience no matter where he moves.
  • FIG. 1 is a schematic structural diagram of a display device provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an exploded structure of the display device shown in FIG. 1 at another angle;
  • FIG. 3 is a schematic structural diagram of the display device shown in FIG. 1 located in an application environment;
  • FIG. 4 is a schematic structural diagram of a related art display device located in an application environment
  • FIG. 5 is a schematic structural diagram of another embodiment of the display device shown in FIG. 3;
  • FIG. 6 is a schematic structural diagram of a processor, a first speaker and a second speaker of the display device shown in FIG. 3;
  • FIG. 7 is a schematic diagram of an audio output processing procedure of the structure shown in FIG. 6;
  • FIG. 8 is a detailed schematic diagram of the audio output processing procedure shown in FIG. 7;
  • FIG. 9 is a schematic diagram of another audio output processing procedure of the structure shown in FIG. 6;
  • FIG. 10 is a schematic diagram of the control of the speaker sounding times in the audio output processing procedure shown in FIG. 9;
  • FIG. 11 is a detailed schematic diagram of the audio output processing procedure shown in FIG. 10;
  • FIG. 12 is a schematic structural diagram of another embodiment of the structure shown in FIG. 6;
  • FIG. 13 is a schematic diagram of the audio output processing procedure of the structure shown in FIG. 12;
  • FIG. 14 is a schematic structural diagram of another embodiment of the display device shown in FIG. 1;
  • FIG. 15 is a schematic flowchart of an audio output method of a display device provided in this embodiment.
  • a connection may be a detachable connection or a non-detachable connection; it may be a direct connection or an indirect connection through an intermediate medium.
  • An embodiment of the present application provides a display device, and the display device includes but is not limited to a display device with a speaker, such as a flat-panel TV, a computer display screen, a conference display screen, or a vehicle display screen.
  • in the following description, a flat-panel TV is taken as an example of the display device.
  • FIG. 1 is a schematic structural diagram of a display device provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an exploded structure of the display device shown in FIG. 1 from another angle.
  • the display device 100 includes a casing 10 , a display screen 20 , a speaker 30 , a main board 40 , a processor 50 and a memory 60 .
  • the display screen 20 is used to display images, videos, and the like.
  • the display screen 20 may also integrate touch functionality.
  • the display screen 20 is mounted on the casing 10 .
  • the housing 10 may include a frame 11 and a rear case 12 .
  • the display screen 20 and the rear case 12 are respectively installed on opposite sides of the frame 11 , wherein the display screen 20 is located on the side facing the user, and the rear case 12 is located on the side facing away from the user.
  • the display screen 20 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the space facing the display screen 20 is defined as the front of the display device 100
  • the space facing the rear case 12 is the rear of the display device 100
  • the display device 100 includes a top portion 101, a bottom portion 102, and two side portions 103 connected between the top portion 101 and the bottom portion 102 and disposed opposite to each other.
  • the direction in which the top 101 of the display device 100 faces is defined as above the display device 100
  • the direction in which the bottom 102 of the display device 100 faces is below the display device 100 .
  • the mainboard 40 is located inside the casing 10 , and the mainboard 40 integrates the processor 50 , the memory 60 and other various circuit devices.
  • the display screen 20 is coupled to the processor 50 to receive display signals sent by the processor 50 .
  • the processor 50 may include one or more processing units. For example, the processor 50 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the processor can generate the operation control signal according to the instruction operation code and the timing signal, and complete the control of extracting the instruction and executing the instruction.
  • An internal memory may also be provided in the processor 50 for storing instructions and data.
  • the memory in processor 50 may be a cache memory.
  • the memory may store instructions or data that are used by the processor 50 or are frequently used. If the processor 50 needs to use the instructions or data, it can be called directly from this memory. Repeated accesses are avoided and the latency of the processor 50 is reduced, thereby increasing the efficiency of the system.
  • processor 50 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the processor 50 may be connected to modules such as a touch sensor, an audio module, a wireless communication module, a display, a camera, and the like through at least one of the above interfaces.
  • Memory 60 may be used to store computer-executable program code, which includes instructions.
  • the memory 60 may include a program storage area and a data storage area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the display device 100 and the like.
  • the memory 60 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the processor 50 executes various functional methods or data processing of the display device 100 by executing the instructions stored in the memory 60 and/or the instructions stored in the memory provided in the processor, for example, causing the display screen 20 to display a target image.
  • the display device 100 may implement audio functions, such as music playback and other sound playback, through an audio module, a speaker, and the processor.
  • the audio module is used to convert digital audio information into analog audio signal output, and also used to convert analog audio input to digital audio signal.
  • the audio module can also be used to encode and decode audio signals.
  • the audio module may be provided in the processor 50 , or some functional modules of the audio module may be provided in the processor 50 , or some or all functional modules of the audio module may be provided outside the processor 50 .
  • a speaker (e.g., the speaker 30), also called a loudspeaker or "horn", is used to convert an audio electrical signal into a sound signal.
  • the display device 100 can play sounds such as music through the speaker.
  • the speaker 30 is located inside the casing 10 and is integrated on the side of the main board 40 facing away from the display screen 20 , that is, the speaker 30 is arranged on the rear side of the display screen 20 .
  • the rear side of the display screen 20 is the side facing away from the display surface of the display screen 20 .
  • the rear case 12 is provided with a sound hole 121 , and the sound emitted by the speaker 30 is transmitted to the outside of the case 10 through the sound hole 121 .
  • Speaker 30 is coupled to processor 50 for executing instructions stored in memory 60, and/or instructions stored in memory provided in the processor, to cause speaker 30 to produce sound.
  • the speaker 30 includes a first speaker 31 and a second speaker 32 , and the first speaker 31 and the second speaker 32 are both fixed to the main board 40 . Both the first speaker 31 and the second speaker 32 are coupled to the processor 50 .
  • the sound emission direction of the first speaker 31 is toward the rear and upper part of the display device 100
  • the sound emission direction of the second speaker 32 is toward the front of the display device 100 . It can be understood that the sounding direction of the first speaker 31 is the initial propagation direction of the sound emitted by the first speaker 31
  • the rear and upper part of the display device 100 refers to the range of directions between directly behind the display device 100 and directly above the display device 100.
  • in other embodiments, the first speaker and the second speaker may be fixed at other positions in the housing; the first speaker and the second speaker are generally located at different positions, emit sound in different directions, and their sounds travel paths of different lengths to reach the user.
  • the speaker may further include other speakers other than the first speaker and the second speaker.
  • the first speaker may also be located at the top of the display device 100, with the sound emitted by the first speaker directed upward, toward the upper front, or the like.
  • the first speaker and/or the second speaker may also be located on the side of the display device 100 .
  • Each of the first loudspeaker and/or the second loudspeaker may comprise a plurality of arrayed loudspeakers.
  • FIG. 3 is a schematic structural diagram of the display device 100 shown in FIG. 1 in an application environment.
  • the display device 100 is disposed close to the wall 201 , the display screen 20 of the display device 100 faces away from the wall 201 , and the top 101 of the display device 100 faces the ceiling 202 .
  • the sound emitted by the first speaker 31 is reflected by the first obstacle behind the display device 100 to the second obstacle above the display device 100 , and reflected to the user viewing area in front of the display device 100 through the second obstacle. That is, the first obstacle behind the display device 100 is the wall 201 , the second obstacle above the display device 100 is the ceiling 202 , and the front of the display device 100 is the user viewing area 203 .
  • the application environment of the display device 100 may be different, and the first obstacle may also be other structures other than the wall 201, such as a screen, a reflector, and the like.
  • the second obstacle may also be a blocking structure such as a reflective plate.
  • the sound emitted by the first speaker 31 is reflected by the wall 201 located behind the display device 100, forming a mirror-image sound source A of the first speaker 31 with the wall 201 as the reflecting mirror.
  • the sound reflected by the wall 201 continues on to the ceiling 202 above the display device 100 and is reflected by the ceiling 202, forming a mirror-image sound source B of the first speaker 31 with the ceiling 202 as the reflecting mirror.
  • the sound is then projected, with the mirror-image sound source B as its apparent sound image, from the ceiling 202 down to the user viewing area 203 in front of the display device 100.
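  • The mirror-image construction can be written down directly: reflecting the first speaker's position across the wall gives source A, reflecting A across the ceiling gives source B, and the straight-line distance from B to the listener equals the length of the twice-reflected path. The coordinates below are assumptions chosen only to illustrate the construction.

```python
import math

def mirror_sources(speaker, wall_x, ceiling_y):
    """speaker = (x, y), with x measured toward the viewer and y upward.
    wall_x is the x-coordinate of the wall 201 behind the speaker,
    ceiling_y the y-coordinate of the ceiling 202."""
    sx, sy = speaker
    source_a = (2 * wall_x - sx, sy)              # reflect across the wall -> A
    source_b = (source_a[0], 2 * ceiling_y - sy)  # then across the ceiling -> B
    return source_a, source_b

def reflected_path_via_mirror(speaker, wall_x, ceiling_y, listener):
    """Length of the wall-then-ceiling path; equals |source B -> listener|."""
    _, (bx, by) = mirror_sources(speaker, wall_x, ceiling_y)
    lx, ly = listener
    return math.hypot(lx - bx, ly - by)

# Example (assumed, in metres): speaker 0.1 m in front of the wall and 1.4 m
# above the floor, ceiling at 3.0 m, listener 3 m away with ears at 1.0 m.
print(round(reflected_path_via_mirror((0.0, 1.4), -0.1, 3.0, (3.0, 1.0)), 2))  # ~4.82
```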
  • the position of the sounding object perceived by a person through the sound he hears is called the sound image.
  • the sounding direction of the speaker 2 is toward the front and top of the display device 1 , so that the sound emitted by the speaker 2 can be transmitted to the ceiling 3 and reflected by the ceiling 3 to reach the viewing area 4 of the user. Since the sound wave radiated by the speaker 2 has directivity, the intensity of the sound wave propagating along the axial direction of the speaker 2 is the strongest, and as the off-axis angle increases, the intensity of the sound wave gradually weakens. Part of the sound emitted by the speaker 2 will be reflected by the ceiling 3 and then reach the user's viewing area 4.
  • this part of the sound is called the reflected sound S1, and the other part is transmitted directly to the user's viewing area 4 in front of the display device 1; this part is called the direct sound S2.
  • since the axial direction of the speaker 2 is toward the front and top of the display device 1, the direct sound S2 has a smaller off-axis angle from the axial direction of the speaker 2, so the direct sound S2 is relatively strong and arrives at the user viewing area 4 before the reflected sound S1. Because of the precedence (Haas) effect, human hearing localizes on the first-arriving wavefront and cannot separately localize the later-arriving reflected sound S1, so the direct sound S2 weakens the sky sound-image localization produced by the reflected sound S1 and reduces the user experience.
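  • To see why the direct sound dominates in this related-art arrangement, it helps to compare arrival times. The path lengths below are illustrative assumptions: with them, the direct sound leads the ceiling reflection by several milliseconds, and under the precedence effect the earlier, stronger wavefront anchors the perceived direction.

```python
SPEED_OF_SOUND = 343.0   # m/s (assumed)

direct_path = 3.0        # assumed: speaker 2 straight to the viewing area 4 (m)
reflected_path = 5.0     # assumed: speaker 2 -> ceiling 3 -> viewing area 4 (m)

lead_ms = (reflected_path - direct_path) / SPEED_OF_SOUND * 1000
print(round(lead_ms, 1))  # ~5.8 ms head start for the direct sound S2, which
                          # therefore anchors localization and masks the height cue of S1
```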
  • in contrast, the sounding direction of the first speaker 31 of the present application is directed toward the upper rear of the display device 100, that is, the axial direction of the first speaker 31 points toward the upper rear of the display device 100. Since the axial direction of the first speaker 31 is toward the rear and upper part of the display device 100, most of the sound of the first speaker 31 is directed toward the wall 201. A small part of the sound of the first speaker 31 deviates greatly from the axial direction and can travel directly toward the front of the display device 100 into the user viewing area 203, but this part of the sound has a large off-axis angle from the main axis of the first speaker 31 and its intensity is weak.
  • therefore, the sound that reaches the user viewing area 203 directly has only a weak masking effect on the sound that reaches the user viewing area 203 after being reflected twice by the wall 201 and the ceiling 202; the sky sound-image localization of the sound reflected by the ceiling 202 into the user viewing area 203 is good, and the user's audio-visual experience is improved.
  • the sound emitted by the first speaker 31 is reflected by the wall 201 to the ceiling 202 , and then reflected by the ceiling 202 and then projected to the user viewing area 203 .
  • the sound image of the sound transmitted to the user viewing area 203 is located near or above the ceiling 202, so that the range of the sound field in the height direction formed by the first speaker 31 is not limited by the size of the display screen 20; the sound field in the height direction can cover the entire spatial height of the application environment and achieve a sky sound-image localization effect.
  • for example, the sound image of an aircraft engine can be positioned above the display screen 20 and played through the first speaker 31, so that the picture played on the display screen 20 is consistent with the perceived sound position.
  • the first speaker 31 is disposed near the top 101 of the display device 100, and the sound hole 121 of the rear case is disposed corresponding to the first speaker 31. That is to say, the first speaker 31 is located below the top 101, so that sound emitted by the first speaker 31 toward the front of the display device 100 is blocked by the housing 10 of the display device 100, which effectively reduces the sound transmitted directly from the first speaker 31 to the user viewing area 203 and makes the sky sound-image localization more accurate.
  • The number of the first speakers 31 is two: one first speaker 31 is disposed close to one side 103 of the display device 100, and the other first speaker 31 is disposed close to the other side 103, to play the audio information of the left channel and the right channel respectively.
  • the number of the first speakers 31 may also be one or more, and the present application does not limit the number of the first speakers 31 .
  • The first speaker 31 may also be located between the top 101 and the midpoint between the top 101 and the bottom 102 of the display device 100, that is, the first speaker 31 may be located at any position in the upper half of the display device 100, as shown in FIG. 5. In another implementation scenario of other embodiments, the first speaker 31 may also be located at the top 101 of the display device 100. Specifically, the position of the first speaker 31 is also related to the distances from the display device 100 to the wall 201 and to the ceiling 202, and to the angle between the sound output direction of the first speaker 31 and the horizontal direction.
  • The sound output direction of the first speaker 31 forms an angle of 35° to 45° (including 35° and 45°) with the horizontal direction, where the horizontal direction is the direction perpendicular to the display surface of the display screen 20.
  • the display device 100 can be applied to a variety of application environments with different spatial parameters.
  • In these different application environments, the sound emitted by the first speaker 31 can be reflected in turn by the wall 201 and the ceiling 202 and finally reach the viewing area 203 of the user.
  • the spatial parameter is a set of different parameters, such as the distance from the display device 100 to the wall 201 , the distance from the display device 100 to the ceiling 202 , and the distance from the display device 100 to the user. That is to say, the display device 100 can be applied to a variety of application environments with different spatial parameters within a certain distance from the wall 201 , within a certain range from the ceiling 202 , and within a certain range from the user to ensure the user's audio-visual experience.
  • In other embodiments, the sound output direction of the first speaker 31 may also form an angle of 10 degrees to 80 degrees (including 10 degrees and 80 degrees) with the horizontal direction, or an angle outside the range of 10 degrees to 80 degrees, as long as it can be guaranteed that the sound emitted by the first speaker is reflected in turn by the wall and the ceiling and finally reaches the viewing area of the user.
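To make the reflection geometry concrete, the following sketch traces the wall-and-ceiling path of the first speaker's sound in a vertical side view. It is a minimal illustration assuming ideal specular reflections; the function name and all of the distances, heights and the 40 degree launch angle in the example are hypothetical values, not figures taken from this application.

```python
import math

def trace_reflection_path(wall_dist, ceiling_h, speaker_h, ear_h, angle_deg):
    """Trace the first speaker's sound ray in a vertical (side-view) plane.

    The speaker fires toward the upper rear at angle_deg above the horizontal,
    reflects once off the wall behind the display and once off the ceiling,
    then travels down toward the viewer. Returns the horizontal distance in
    front of the display at which the ray reaches ear height, and the total
    path length. Ideal specular reflections are assumed.
    """
    theta = math.radians(angle_deg)
    # Segment 1: speaker -> wall (travelling backward and upward).
    rise_to_wall = wall_dist * math.tan(theta)
    wall_hit_h = speaker_h + rise_to_wall
    if wall_hit_h >= ceiling_h:
        raise ValueError("ray reaches the ceiling before the wall; adjust the angle")
    seg1 = wall_dist / math.cos(theta)
    # Segment 2: wall -> ceiling (travelling forward and upward after reflection).
    rise_to_ceiling = ceiling_h - wall_hit_h
    seg2 = rise_to_ceiling / math.sin(theta)
    forward_to_ceiling = rise_to_ceiling / math.tan(theta)
    # Segment 3: ceiling -> listener ear height (travelling forward and downward).
    drop = ceiling_h - ear_h
    seg3 = drop / math.sin(theta)
    forward_after_ceiling = drop / math.tan(theta)
    landing_x = -wall_dist + forward_to_ceiling + forward_after_ceiling
    return landing_x, seg1 + seg2 + seg3

# Hypothetical room: display 0.3 m from the wall, 2.8 m ceiling,
# speaker 1.0 m above the floor, listener ears at 1.1 m, 40 degree launch angle.
landing, path = trace_reflection_path(0.3, 2.8, 1.0, 1.1, 40.0)
print(f"reflected sound lands about {landing:.2f} m in front of the display, "
      f"path length {path:.2f} m")
```

With these example values the reflected sound lands roughly 3.6 m in front of the display, which is why a band of launch angles (rather than a single angle) can serve rooms with different wall and ceiling distances.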
  • the second speaker 32 is disposed closer to the bottom 102 than the first speaker 31 .
  • the first sound from the first speaker 31 and the second sound from the second speaker 32 are mixed in the viewing area of the user, so that the user can receive the combined sound formed by the sounds from the speakers at different positions and improve the stereoscopic effect of the sound.
  • The second speakers 32 are located at the bottom 102 of the display device 100. The number of the second speakers 32 is two: one second speaker 32 is disposed close to one side portion 103 of the display device 100, and the other second speaker 32 is disposed close to the other side 103 of the display device 100, to play the audio information of the left channel and the right channel respectively.
  • the number of the second speakers 32 may also be one or more, and the present application does not limit the number of the second speakers 32 .
  • In other embodiments, the second speaker may also be provided on the side of the display device. In another implementation scenario in other embodiments, the second speaker may be partially provided at the bottom of the display device and partially provided at the side of the display device. In yet another implementation scenario of other embodiments, the second speaker may also be located in the middle of the display device; alternatively, some of the second speakers are located at the top of the display device, some at the bottom, and some in the middle. In yet another implementation scenario of other embodiments, the second speaker may emit sound by vibrating the display screen, that is, a part of the display screen forms the second speaker through vibration; this achieves the stereo effect without occupying the internal space of the display device and is also conducive to increasing the screen-to-body ratio of the display device.
  • the sounding direction of the second speaker 32 faces the front of the display device 100 . That is to say, the sounding direction of the second speaker is toward the viewing area 203 of the user, and the second speaker 32 can be used to play sounds such as footsteps.
  • The sound emission direction of the second speaker 32 faces the user viewing area 203, which may mean that the sound opening of the second speaker 32 directly faces the user viewing area 203, or that the sound opening of the second speaker 32 does not face the user viewing area 203 but the sound is redirected toward the user viewing area 203 by a sound-guiding device.
  • the sounding direction of the second speaker may also be directed downward of the display device.
  • FIG. 6 is a schematic structural diagram of the processor 50 , the first speaker 31 and the second speaker 32 of the display device 100 shown in FIG. 3 .
  • the processor 50 includes an audio module, where the audio module may include functional modules such as an acquisition module, a rendering module, and a power amplifier module.
  • the rendering module is coupled to the acquisition module and the power amplifier module, respectively.
  • the power amplifier module includes a first power amplifier module and a second power amplifier module. The first power amplifier module is coupled to the first speaker 31 , and the second power amplifier module is coupled to the second speaker 32 .
  • the processor 50 can adjust the position of the sound image formed by the first sound emitted by the first speaker 31 and the second sound corresponding to the first sound emitted by the second speaker 32 .
  • Specifically, the processor 50 adjusts the position of the sound image formed by the first sound and the second sound in the following manner:
  • FIG. 7 is a schematic diagram of the audio output processing process of the structure shown in FIG. 6 .
  • The processor 50 adjusts the position of the sound image formed by the first sound emitted by the first speaker 31 and the second sound emitted by the second speaker 32 by controlling the ratio of the volume of the first sound emitted by the first speaker 31 to the volume of the second sound emitted by the second speaker 32. That is, when the volume ratio of the first sound to the second sound changes, the position of the sound image formed by the first sound and the second sound changes.
  • the acquiring module acquires picture information and audio information of the video content.
  • The video content may be, for example, film or television content, a game, real-time video, and the like.
  • the real-time video may be, for example, a video call, a live video broadcast, or a video conference.
  • The first audio information and the second audio information are extracted from the audio information, where the first audio information and the second audio information correspond to the first speaker 31 and the second speaker 32 respectively, and the first audio information and the second audio information may correspond to the same sound content; for example, the sound content of the first audio information and the second audio information may both correspond to "hello" said by the same person.
  • The rendering module performs gain adjustment on the volumes of the first audio information and the second audio information. Specifically, the rendering module determines the sound image position of the audio information according to the picture information, and adjusts the volume ratio of the first audio information and the second audio information according to the sound image position.
  • The first audio information is sent to the first power amplifier module of the power amplifier module and, after power amplification by the first power amplifier module, is transmitted to the first speaker 31; the second audio information is sent to the second power amplifier module of the power amplifier module and, after power amplification by the second power amplifier module, is transmitted to the second speaker 32.
  • For example, the volume of the first audio information may be lower than the volume of the second audio information; the first audio signal and the second audio signal are power-amplified by the first power amplifier module and the second power amplifier module respectively and sent to the first speaker and the second speaker respectively, so that the volume of the "hello" emitted by the first speaker is lower than the volume of the "hello" emitted by the second speaker. Alternatively, the volume of the first audio information may be greater than the volume of the second audio information; the first audio signal and the second audio signal are power-amplified by the first power amplifier module and the second power amplifier module respectively and sent to the first speaker and the second speaker respectively, so that the volume of the "hello" emitted by the first speaker is greater than the volume of the "hello" emitted by the second speaker.
  • the height direction is the direction from the top 101 to the bottom 102 of the display device 100 .
  • the sound image position is the position of the sound image formed by the first sound and the second sound.
  • The position of the mirror sound source B of the first speaker 31 is the first position, and the position of the second speaker 32 is the second position. By adjusting the ratio of the volume of the sound emitted by the first speaker 31 to the volume of the sound emitted by the second speaker 32, that is, by adjusting the volume ratio of the first sound and the second sound, the sound image position of the first sound and the second sound can be adjusted between the first position and the second position. For example, when the first speaker 31 does not emit sound and the second speaker 32 emits sound, the sound image position is at the second position; when the sound levels of the first speaker 31 and the second speaker 32 are the same, the sound image position is near the middle between the first position and the second position.
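As an illustration of how the volume ratio moves the sound image between the first position (the mirror sound source B of the first speaker 31) and the second position (the second speaker 32), the sketch below applies a constant-power pan law and a gain-weighted position estimate. Both the pan law and the position model are assumptions made for illustration; the application does not specify a particular law, and the speaker heights used are hypothetical.

```python
import math

def pan_gains(p):
    """Constant-power pan law (an assumption; the application does not fix a law).

    p = 0.0 -> only the second speaker plays (image at the second position);
    p = 1.0 -> only the first speaker plays (image at the mirror-source position).
    Returns (gain_first_speaker, gain_second_speaker).
    """
    return math.sin(p * math.pi / 2), math.cos(p * math.pi / 2)

def image_height(p, mirror_source_h, second_speaker_h):
    """Gain-weighted estimate of the perceived image height between the two positions."""
    g1, g2 = pan_gains(p)
    return (g1 * mirror_source_h + g2 * second_speaker_h) / (g1 + g2)

# Hypothetical positions: mirror source of the first speaker at 2.8 m (near the
# ceiling), second speaker at 0.2 m (near the bottom of the display).
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    g1, g2 = pan_gains(p)
    print(f"p={p:.2f}  first/second gains = {g1:.2f}/{g2:.2f}  "
          f"image height ~ {image_height(p, 2.8, 0.2):.2f} m")
```

With equal gains (p = 0.5) the estimated image sits midway between the two positions, which matches the behaviour described above for equal sound levels.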
  • For example, when the picture shows a bird flying from the bottom of the display screen toward the top, the sound image position of the flight audio corresponding to the bird also moves from the bottom to the top. At first, the first speaker does not emit sound and only the second speaker emits sound; then the first sound emitted by the first speaker is gradually increased while the second sound emitted by the second speaker is gradually reduced, so that the sound image position formed by the first sound and the second sound is consistent with the flight trajectory of the bird.
  • the acquisition module acquires picture information and audio information of the video content.
  • the first audio information and the second audio information are extracted from the audio information.
  • the audio information includes first information and second information, the first information is left channel audio information, and the second information is right channel audio information.
  • The first sub-information and the second sub-information are extracted from the first information, and the third sub-information and the fourth sub-information are extracted from the second information, where the first sub-information and the third sub-information form the first audio information, and the second sub-information and the fourth sub-information form the second audio information.
  • The rendering module performs gain adjustment on the volumes of the first sub-information, the second sub-information, the third sub-information and the fourth sub-information. Specifically, the rendering module determines the sound image position of the audio information according to the picture information, and adjusts the volume ratio of the first sub-information, the second sub-information, the third sub-information and the fourth sub-information according to the sound image position.
  • The first power amplifier module includes a first power amplifier and a second power amplifier, and the second power amplifier module includes a third power amplifier and a fourth power amplifier.
  • The two first speakers 31 are the first speaker L (the first speaker on the left) and the first speaker R (the first speaker on the right), and the two second speakers 32 are the second speaker L (the second speaker on the left) and the second speaker R (the second speaker on the right).
  • the first sub-information rendered by the rendering module is sent to the first power amplifier, and then transmitted to the first speaker L after being amplified by the first power amplifier.
  • the third sub-information rendered by the rendering module is sent to the second power amplifier, and then transmitted to the first speaker R after being amplified by the second power amplifier.
  • the second sub-information rendered by the rendering module is sent to the third power amplifier, and then transmitted to the second speaker L after being amplified by the third power amplifier.
  • the fourth sub-information rendered by the rendering module is sent to the fourth power amplifier, and then transmitted to the second speaker R after being amplified by the fourth power amplifier.
  • FIG. 9 is a schematic diagram of another audio output processing process of the structure shown in FIG. 6;
  • the processor is further configured to control the timing of sound from the first speaker and the second speaker. That is to say, the first speaker and the second speaker sound out of sync.
  • the acquiring module acquires picture information and audio information of the video content.
  • The video content may be, for example, film or television content, a game, real-time video, and the like.
  • the real-time video may be, for example, a video call, a live video broadcast, or a video conference.
  • The first audio information and the second audio information are extracted from the audio information, where the first audio information and the second audio information correspond to the first speaker and the second speaker respectively, and the first audio information and the second audio information may correspond to the same sound content; for example, the sound content of the first audio information and the second audio information may both correspond to "hello" said by the same person.
  • The rendering module performs gain adjustment on the volumes of the first audio information and the second audio information. Specifically, the rendering module determines the sound image position of the audio information according to the picture information, and adjusts the volume ratio of the first audio information and the second audio information according to the sound image position. At the same time, the rendering module also controls the sending delays with which the first audio information and the second audio information are sent to the next module. The first audio information is then sent to the first power amplifier module of the power amplifier module and, after power amplification by the first power amplifier module, is transmitted to the first speaker, which emits the first sound at the first time T1. The second audio information is sent to the second power amplifier module of the power amplifier module and, after power amplification by the second power amplifier module, is transmitted to the second speaker, which emits the second sound at the second time T2.
  • 10A represents the waveform of the sound emitted by the first speaker after receiving the first audio information at the first time T1, and 10B represents the waveform of the sound emitted by the second speaker after receiving the second audio information at the second time T2.
  • the waveform relationship of the first audio information and the second audio information is similar to that of 10A and 10B, for example, the waveforms of the first audio information and the second audio information may also have a time difference.
  • For example, the volume of the first audio information may be lower than the volume of the second audio information. The first audio signal is sent to the first power amplifier module at the first moment, power-amplified by the first power amplifier module and sent to the first speaker, so that the first speaker emits "hello" at the first moment (the waveform 10A can represent the "hello" emitted by the first speaker), and the second audio signal is sent to the second power amplifier module at the second moment, power-amplified by the second power amplifier module and sent to the second speaker, so that the second speaker emits "hello" at the second moment.
  • Alternatively, the volume of the first audio information may be greater than the volume of the second audio information. The first audio signal is sent to the first power amplifier module at the first moment, power-amplified by the first power amplifier module and sent to the first speaker, so that the first speaker emits "hello" at the first moment; the second audio signal is sent to the second power amplifier module at the second moment, power-amplified by the second power amplifier module and sent to the second speaker, so that the second speaker emits "hello" at the second moment. In this case, the volume of the "hello" emitted by the first speaker is greater than the volume of the "hello" emitted by the second speaker, and the "hello" emitted by the first speaker and the "hello" emitted by the second speaker reach the user at the same time or almost at the same time. When the user hears the two sound components, the "hello" from the first speaker and the "hello" from the second speaker, the user perceives the sound image position of "hello" as coming from the upper part of the display screen. In this way, the adjustment of the sound image position in the height direction is realized, so that the sound image position is synchronized with the picture.
  • By controlling the first speaker to emit sound at the first time T1 and the second speaker to emit sound at the second time T2, the user can receive the first sound and the second sound at the same time or almost at the same time at the third moment, and the sound image position can be located more accurately. The third moment may refer to a specific moment, or to a short range of moments. That is to say, the user may receive the first sound and the second sound at exactly the same time at the third moment, or may receive the second sound a short interval after receiving the first sound; the user cannot perceive this time gap, so it does not cause any deviation in the user's localization of the sound image position.
  • The time difference ΔT between the first moment and the second moment may be determined as ΔT = (P1 - P2) / V, where P1 is the transmission path of the sound from the first speaker to the viewing area of the user, P2 is the transmission path of the sound from the second speaker to the viewing area of the user, and V is 340 m/s (the speed of sound in air). The value of the time difference ΔT may be 1 ms to 50 ms, for example 2 ms, 5 ms, or 10 ms, so that the stereo image can be adjusted accurately.
  • the time difference ⁇ T may also change to enhance the position information of the sound image. For example, when the sound image position of a person's voice "Hello" moves from the bottom of the display screen to the top of the display screen, when the sound image position is below the display screen, the time difference is ⁇ T1, and when the sound image position is above the display, the time difference is ⁇ T2, the time difference ⁇ T2 is smaller than the time difference ⁇ T1. That is to say, in the process of moving the audio-visual position from the bottom of the display screen to the top of the display screen, the "Hello” issued by the first speaker will reach the user before the "Hello” issued by the second speaker and be received by the user. The movement of the sound image position can be clearly felt.
  • the acquisition module acquires picture information and audio information of the video content.
  • the first audio information and the second audio information are extracted from the audio information.
  • the audio information includes first information and second information, the first information is left channel audio information, and the second information is right channel audio information.
  • The first sub-information and the second sub-information are extracted from the first information, and the third sub-information and the fourth sub-information are extracted from the second information, where the first sub-information and the third sub-information form the first audio information, and the second sub-information and the fourth sub-information form the second audio information.
  • The rendering module performs gain adjustment on the volumes of the first sub-information, the second sub-information, the third sub-information and the fourth sub-information. Specifically, the rendering module determines the sound image position of the audio information according to the picture information, and adjusts the volume ratio of the first sub-information, the second sub-information, the third sub-information and the fourth sub-information according to the sound image position. At the same time, the rendering module also controls the sending delays with which the first sub-information, the second sub-information, the third sub-information and the fourth sub-information are sent to the next module.
  • The first power amplifier module includes a first power amplifier and a second power amplifier, and the second power amplifier module includes a third power amplifier and a fourth power amplifier.
  • The two first speakers are the first speaker L (the left first speaker) and the first speaker R (the right first speaker), and the two second speakers are the second speaker L (the left second speaker) and the second speaker R (the right second speaker).
  • The first sub-information rendered by the rendering module is sent to the first power amplifier and, after power amplification by the first power amplifier, is transmitted to the first speaker L; the third sub-information rendered by the rendering module is sent to the second power amplifier and, after power amplification by the second power amplifier, is transmitted to the first speaker R. The first speaker L and the first speaker R emit sound at the first time T1. The second sub-information rendered by the rendering module is sent to the third power amplifier and, after power amplification by the third power amplifier, is transmitted to the second speaker L; the fourth sub-information rendered by the rendering module is sent to the fourth power amplifier and, after power amplification by the fourth power amplifier, is transmitted to the second speaker R. The second speaker L and the second speaker R emit sound at the second time T2. There is a time difference between the first moment and the second moment.
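The per-channel flow described above, gain adjustment in the rendering module, a later emission time for the second-speaker pair, and one power amplifier per speaker, can be sketched as follows. The sample rate, gain values, delay and the simple scale-and-pad operations are placeholders standing in for the rendering and power-amplification stages, which the text does not specify in detail.

```python
import numpy as np

SAMPLE_RATE = 48_000  # Hz, assumed

def render_and_route(first_sub, second_sub, third_sub, fourth_sub,
                     gain_top, gain_bottom, dt_seconds):
    """Apply gains, delay the second-speaker pair by dt, and route to four outputs.

    first/third sub-information -> first speaker L / first speaker R (emit at T1)
    second/fourth sub-information -> second speaker L / second speaker R (emit at T2 = T1 + dt)
    Returns a dict mapping speaker name -> sample buffer.
    """
    delay = int(round(dt_seconds * SAMPLE_RATE))
    pad = np.zeros(delay)

    def delayed(x, g):       # second-speaker pair: gain, then delay by dt
        return np.concatenate([pad, g * x])

    def undelayed(x, g):     # first-speaker pair: gain only, padded at the end to match length
        return np.concatenate([g * x, pad])

    return {
        "first_speaker_L": undelayed(first_sub, gain_top),
        "first_speaker_R": undelayed(third_sub, gain_top),
        "second_speaker_L": delayed(second_sub, gain_bottom),
        "second_speaker_R": delayed(fourth_sub, gain_bottom),
    }

# Hypothetical 10 ms test tone routed with a 5 ms offset between the two pairs.
t = np.arange(int(0.010 * SAMPLE_RATE)) / SAMPLE_RATE
tone = np.sin(2 * np.pi * 440 * t)
buffers = render_and_route(tone, tone, tone, tone,
                           gain_top=0.7, gain_bottom=0.7, dt_seconds=0.005)
print({name: buf.shape for name, buf in buffers.items()})
```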
  • This embodiment adjusts the sound image position by adjusting the volume ratio of the sounds emitted by the first speaker L, the first speaker R, the second speaker L and the second speaker R, so that the sound image position is synchronized with the picture.
  • By controlling the first speaker L and the first speaker R to emit sound at the first time T1 and the second speaker L and the second speaker R to emit sound at the second time T2, the user can simultaneously receive the sounds of the first speaker L, the first speaker R, the second speaker L and the second speaker R at the third moment. The sound image position is therefore located more accurately, there is no deviation between the picture and the sound image position, the sound image position and the picture are accurately synchronized, the stereo effect is good, and the user experience is improved.
  • In other embodiments, the processor may also control only the sounding moments of the first speaker and the second speaker, that is, the first speaker and the second speaker do not emit sound synchronously, so that the first sound emitted by the first speaker and the second sound emitted by the second speaker reach the viewing area of the user at the same time and are received by the user at the same time; the stereo effect is good, and the user experience is improved.
  • FIG. 12 is a schematic structural diagram of another embodiment of the structure shown in FIG. 6 .
  • FIG. 13 is a schematic diagram of the audio output processing process of the structure shown in FIG. 12 .
  • the processor 50 includes an audio module, where the audio module may include functional modules such as an acquisition module, a rendering module, a sound mixing module, and a power amplifier module.
  • the acquisition module, the rendering module, the sound mixing module and the power amplifier module are coupled in sequence, and the power amplifier module is coupled to both the first speaker 31 and the second speaker 32 .
  • the audio information in the video content is processed by the processor 50 using an upmixing algorithm.
  • the acquiring module acquires picture information and audio information of the video content.
  • The video content may be, for example, film or television content, a game, real-time video, and the like.
  • the real-time video may be, for example, a video call, a live video broadcast, or a video conference.
  • the first audio information and the second audio information are extracted from the audio information.
  • the audio information includes first information and second information, the first information is left channel audio information, and the second information is right channel audio information.
  • The first information and the second information are processed by an upmixing algorithm with height content signal extraction: the first information generates a left height channel signal and a left main channel signal, and the second information generates a right height channel signal and a right main channel signal. The left height channel signal is then divided into a first signal and a second signal, the right height channel signal is divided into a third signal and a fourth signal, the left main channel signal is divided into a fifth signal and a sixth signal, and the right main channel signal is divided into a seventh signal and an eighth signal. The first signal, the third signal, the fifth signal and the seventh signal form the first audio information, and the second signal, the fourth signal, the sixth signal and the eighth signal form the second audio information.
  • The rendering module performs gain adjustment on the volumes of the first signal, the second signal, the third signal, the fourth signal, the fifth signal, the sixth signal, the seventh signal and the eighth signal. Specifically, the rendering module determines the sound image position of the audio information according to the picture information, and adjusts the volume ratio of the first signal, the second signal, the third signal, the fourth signal, the fifth signal, the sixth signal, the seventh signal and the eighth signal according to the sound image position. At the same time, the rendering module also controls the sending delays with which the first signal, the second signal, the third signal, the fourth signal, the fifth signal, the sixth signal, the seventh signal and the eighth signal are sent to the next module.
  • the sound mixing module includes a first module, a second module, a third module and a fourth module.
  • The first module mixes the first signal and the fifth signal rendered by the rendering module to obtain a first mix; the second module mixes the third signal and the seventh signal rendered by the rendering module to obtain a second mix; the third module mixes the second signal and the sixth signal rendered by the rendering module to obtain a third mix; and the fourth module mixes the fourth signal and the eighth signal rendered by the rendering module to obtain a fourth mix.
  • the power amplifier module includes a first power amplifier module and a second power amplifier module, the first power amplifier module includes a first power amplifier and a second power amplifier, and the second power amplifier module includes a third power amplifier and a fourth power amplifier.
  • The two first speakers 31 are the first speaker L (the first speaker on the left) and the first speaker R (the first speaker on the right), and the two second speakers 32 are the second speaker L (the second speaker on the left) and the second speaker R (the second speaker on the right).
  • The first mix is sent to the first power amplifier and, after power amplification by the first power amplifier, is transmitted to the first speaker L; the second mix is sent to the second power amplifier and, after power amplification by the second power amplifier, is transmitted to the first speaker R. The first speaker L and the first speaker R emit sound at the first moment. The third mix is sent to the third power amplifier and, after power amplification by the third power amplifier, is transmitted to the second speaker L; the fourth mix is sent to the fourth power amplifier and, after power amplification by the fourth power amplifier, is transmitted to the second speaker R. The second speaker L and the second speaker R emit sound at the second moment. There is a time difference between the first moment and the second moment.
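The signal topology of this embodiment, height/main extraction, splitting each extracted channel in two, and four mixers feeding four power amplifiers, can be sketched as below. The extraction step is a deliberately crude placeholder (a one-pole low-pass split), because the text does not disclose the actual height-content extraction algorithm; the equal 0.5/0.5 split of each channel is likewise an assumption, only the routing follows the description, and the gain/delay rendering stage is omitted for brevity.

```python
import numpy as np

def extract_height_and_main(x, alpha=0.2):
    """Placeholder height-content extraction: a one-pole low-pass keeps the "main"
    part and the residual is treated as the "height" part. The real extraction
    algorithm is not disclosed; this only illustrates the topology."""
    main = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = alpha * s + (1.0 - alpha) * acc
        main[i] = acc
    return x - main, main   # (height, main); the two parts sum back to x

def upmix_and_mix(left, right):
    """Follow the described flow: extract, split each channel in two, then mix."""
    l_height, l_main = extract_height_and_main(left)
    r_height, r_main = extract_height_and_main(right)

    # Each extracted channel is divided into two halves (signals 1..8).
    sig1, sig2 = 0.5 * l_height, 0.5 * l_height
    sig3, sig4 = 0.5 * r_height, 0.5 * r_height
    sig5, sig6 = 0.5 * l_main, 0.5 * l_main
    sig7, sig8 = 0.5 * r_main, 0.5 * r_main

    return {
        "first_speaker_L": sig1 + sig5,   # first mix  -> first power amplifier
        "first_speaker_R": sig3 + sig7,   # second mix -> second power amplifier
        "second_speaker_L": sig2 + sig6,  # third mix  -> third power amplifier
        "second_speaker_R": sig4 + sig8,  # fourth mix -> fourth power amplifier
    }

# Tiny demo with random stereo input.
rng = np.random.default_rng(0)
mixes = upmix_and_mix(rng.standard_normal(480), rng.standard_normal(480))
print({name: buf.shape for name, buf in mixes.items()})
```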
  • In this way, sound image localization at a specific position in the height direction can be effectively realized for the content of the left height channel signal, the right height channel signal, the left main channel signal and the right main channel signal, so that the sound image localization of various sounds in the height direction can be adjusted as needed and integrated with the positions of the corresponding pictures. For example, the sound image of an aircraft engine can be positioned at the upper part of the display screen, the sound image of a character's dialogue can be positioned in the middle of the display screen, and the sound image of footsteps can be positioned at the lower part of the screen.
  • FIG. 14 is a schematic structural diagram of another embodiment of the display device 100 shown in FIG. 1 .
  • the display device 100 in this embodiment further includes a distance detector 70 .
  • the distance detector 70 is arranged inside the casing 10 .
  • the distance detector 70 can also be provided outside the housing 10 .
  • The distance detector 70 is used to detect the spatial parameters of the application environment where the display device 100 is located. The spatial parameters include various parameters, such as the first distance between the display device 100 and the wall, the second distance between the display device 100 and the ceiling, and the third distance between the display device 100 and the user.
  • the sound emission direction of the first speaker 31 can be adjusted. Specifically, the sound emission direction of the first speaker 31 can be adjusted according to a spatial parameter.
  • The display device 100 may include a drive assembly, the first speaker 31 is disposed on or cooperates with the drive assembly, the drive assembly is coupled with the processor, and the processor is used to drive the drive assembly to adjust the sounding direction of the first speaker 31 according to the spatial parameters obtained by the distance detector 70. That is to say, when the spatial parameters of the application environment in which the display device is located change, the sound emission direction of the first speaker 31 can be changed, so that the display device 100 can be adapted to different application environments and the sound emission direction of the first speaker can be adjusted according to the application environment.
  • the display device 100 can also adjust the position of the viewing area of the user according to the position of the user, so that the user can have a good audio-visual experience no matter where the user moves.
  • the space parameter also includes the distance between the display device 100 and other obstacles.
  • the spatial parameter includes at least one parameter among the first distance, the second distance, and the third distance.
  • In other embodiments, the sounding direction of the first speaker 31 may also be adjusted manually.
  • It can be understood that the display device 100 in this embodiment can adjust the sounding direction of the first speaker 31 according to the application environment of the display device 100. When the display device 100 is used for the first time, or when the display device 100 is moved to a new application environment, the display device 100 detects the spatial parameters of the application environment through the distance detector 70. The display device 100 then adjusts the sounding direction of the first speaker 31 according to the spatial parameters, so that in different application environments the display device 100 can ensure that the first sound emitted by the first speaker 31 is accurately transmitted to the user's viewing area after being reflected, thereby improving the user's audio-visual experience.
  • the display device 100 may further include a sensing sensor, that is, a sensor that senses that the display device 100 is moved or that the position of the display device 100 changes.
  • The sensing sensor may be a gyroscope, an accelerometer, or the like.
  • When the sensing sensor detects that the position of the display device 100 has changed, the distance detector 70 can be triggered to detect the spatial parameters of the application environment, so that the first speaker 31 adjusts its sounding direction according to the spatial parameters. In this way, whenever the position of the display device 100 changes, the distance detector 70 acquires the spatial parameters of the application environment and the first speaker 31 adjusts its sounding direction accordingly, ensuring the user's audio-visual experience at all times.
  • The sensing sensor can also record that the display device 100 has been moved while the display device 100 is powered off. When the display device 100 is started again, the sensing sensor triggers the distance detector 70 to detect the spatial parameters of the application environment, and the sounding direction of the first speaker 31 is adjusted according to the spatial parameters. Thus, even if the display device 100 is moved after being powered off, the sounding direction of the first speaker 31 is still adjusted according to the application environment when the display device 100 is started again, ensuring the user's audio-visual experience.
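The recalibration behaviour described here, where a motion event or the next power-on triggers a new measurement and a re-aiming of the first speaker, can be sketched as a small control flow. All class, method and parameter names below are illustrative assumptions rather than any API of the display device, and the angle-selection rule is only a placeholder that stays inside the 35 to 45 degree band mentioned earlier.

```python
class SpeakerCalibrator:
    """Illustrative control flow only: event -> measure -> re-aim the first speaker."""

    def __init__(self, distance_detector, drive_assembly):
        self.detector = distance_detector   # e.g. radar / microphone / camera ranging
        self.drive = drive_assembly         # tilts the first speaker
        self.moved_while_off = False

    def on_motion_detected(self, powered_on):
        # The gyroscope/accelerometer reports that the display device was moved.
        if powered_on:
            self._recalibrate()
        else:
            self.moved_while_off = True     # remember the move for the next start-up

    def on_power_on(self):
        if self.moved_while_off:
            self._recalibrate()
            self.moved_while_off = False

    def _recalibrate(self):
        params = self.detector.measure()    # wall, ceiling and user distances
        self.drive.set_angle(self._choose_angle(params))

    @staticmethod
    def _choose_angle(params):
        # Placeholder policy: aim higher when the wall is close, clamped to 35-45 degrees.
        wall = params["wall_distance_m"]
        return max(35.0, min(45.0, 45.0 - 10.0 * (wall - 0.2)))


# Minimal stubs so the sketch runs end to end.
class _StubDetector:
    def measure(self):
        return {"wall_distance_m": 0.4, "ceiling_height_m": 2.8, "user_distance_m": 3.0}

class _StubDrive:
    def set_angle(self, angle):
        print(f"first speaker tilted to {angle:.1f} degrees")

cal = SpeakerCalibrator(_StubDetector(), _StubDrive())
cal.on_motion_detected(powered_on=True)   # prints the newly chosen angle
```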
  • the distance detector 70 includes a radar, and the radar can transmit and receive ultrasonic waves, and the spatial parameters of the application environment measured by ultrasonic waves are relatively more accurate than data obtained in other ways.
  • In other embodiments, the distance detector 70 may also include a microphone: the display device 100 emits a sound, the sound is reflected back to the display device 100 by an obstacle and received by the microphone, and the distance between the display device 100 and the obstacle is obtained by calculating the time difference between when the sound is emitted and when it is received.
  • The distance detector 70 may alternatively include a camera, and the distance between the display device 100 and the obstacle is determined from images captured by the camera.
  • the obstacles may be walls, ceilings, users, and the like.
  • the distance detector 70 may further include at least two of radar, microphone and camera, and use different ranging methods for different obstacles to obtain more accurate spatial parameters.
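For the microphone-based option, the distance follows from the round-trip time of flight of the emitted sound; a minimal sketch, in which the echo delay used in the example is purely hypothetical:

```python
SPEED_OF_SOUND = 340.0  # m/s, speed of sound in air

def distance_from_echo(round_trip_seconds):
    """Distance to an obstacle from the emit-to-receive time difference.

    The sound travels to the obstacle and back, so the one-way distance is
    half of (speed of sound * round-trip time).
    """
    return SPEED_OF_SOUND * round_trip_seconds / 2.0

# Hypothetical: the echo from the wall behind the display returns after 2.4 ms.
print(f"wall is about {distance_from_echo(0.0024):.2f} m away")
```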
  • the distance detector 70 is coupled to the processor 50, and the distance detector 70 sends an instruction to the processor 50.
  • the instruction may be a pulse signal or an analog signal including spatial parameter information.
  • In response to the instruction, the time difference between the first moment and the second moment changes. Specifically, the processor 50 adjusts the time difference between the first moment and the second moment according to the information carried in the instruction, such as the first distance, the second distance and the third distance. That is to say, when the spatial parameters of the application environment in which the display device 100 is located change, the time difference between the first moment and the second moment changes accordingly, to ensure that the sounds emitted by the first speaker 31 and the second speaker 32 reach the user viewing area at the same time, so that the sound image localization is more accurate.
  • In other embodiments, the distance detector 70 can also detect the path along which the sound emitted by the first speaker 31 reaches the user viewing area and the path along which the sound emitted by the second speaker 32 reaches the user viewing area, and the time difference between the first moment and the second moment is determined according to the difference between the two paths.
  • the display device may further include a user input entry, and the user input entry may be an application software in a mobile phone that interacts with the display device, or a setting window of the display device.
  • The user fills in spatial parameters such as the first distance, the second distance or the third distance through the user input entry, and the sounding direction of the first speaker is adjusted according to the data filled in by the user. This approach is less costly than obtaining the spatial parameters through a distance detector.
  • FIG. 15 is a schematic flowchart of an audio output method of the display device 100 provided in this embodiment.
  • the audio output method is applied to the display device 100 shown in FIG. 1 .
  • the audio output method includes the following steps S110-S130.
  • the picture information and audio information of the video content are obtained through the obtaining module.
  • The video content may be, for example, film or television content, a game, real-time video, and the like.
  • the real-time video may be, for example, a video call, a live video broadcast, or a video conference.
  • the audio information includes first information and second information, the first information is left channel audio information, and the second information is right channel audio information.
  • S120: Extract the first audio information and the second audio information from the audio information.
  • Specifically, the acquisition module extracts the first sub-information and the second sub-information from the first information, and extracts the third sub-information and the fourth sub-information from the second information, where the first sub-information and the third sub-information form the first audio information and the second sub-information and the fourth sub-information form the second audio information. The first audio information and the second audio information correspond to the first speaker and the second speaker respectively, and may correspond to the same sound content; for example, the sound content of the first audio information and the second audio information may both correspond to "hello" said by the same person.
  • the first audio information and the second audio information are first processed to adjust the position of the sound image formed by the first sound emitted by the first speaker 31 and the second sound emitted by the second speaker 32 .
  • the processing of the first audio information and the processing of the second audio information includes adjusting the volume ratio of the first audio information and the second audio information.
  • The rendering module performs gain adjustment on the volumes of the first sub-information, the second sub-information, the third sub-information and the fourth sub-information. Specifically, the rendering module determines the sound image position of the audio information according to the picture information, and adjusts the volume ratio of the first sub-information, the second sub-information, the third sub-information and the fourth sub-information according to the sound image position.
  • the first sub-information rendered by the rendering module is sent to the first power amplifier, and then transmitted to the first speaker L (left first speaker) after being amplified by the first power amplifier.
  • the third sub-information rendered by the rendering module is sent to the second power amplifier, and then transmitted to the first speaker R (the first speaker on the right side) after being amplified by the second power amplifier.
  • the second sub-information rendered by the rendering module is sent to the third power amplifier, and then transmitted to the second speaker L (the second speaker on the left) after being amplified by the third power amplifier.
  • the fourth sub-information rendered by the rendering module is sent to the fourth power amplifier, and then transmitted to the second speaker R (the second speaker on the right side) after being amplified by the fourth power amplifier.
  • This audio output method adjusts the volume ratio of the sounds emitted by the first speaker L, the first speaker R, the second speaker L and the second speaker R, so as to adjust the sound image position in three-dimensional space; the sound image position is thus synchronized with the picture, and a real three-dimensional sound field effect is realized. The first speaker of the present application has its sounding direction toward the rear and upper part of the display device, so that the sound reflected by the ceiling to the user's viewing area has a good sky sound image localization effect and a good stereo effect, improving the user's audio-visual experience.
  • In other embodiments, processing the first audio information and the second audio information may further include controlling the first speaker 31 to emit sound at the first moment and controlling the second speaker 32 to emit sound at the second moment, so that the user simultaneously receives the sound of the first speaker 31 and the sound of the second speaker 32 at the third moment, where there is a time difference between the first moment and the second moment and the first moment is earlier than the second moment. That is, the first speaker receives the first audio information, the second speaker receives the second audio information corresponding to the first audio information, and the playing times of the first audio information and the second audio information are not synchronized. Specifically, as shown in the figure, while the rendering module adjusts the volume ratio of the first sub-information, the second sub-information, the third sub-information and the fourth sub-information, the rendering module also controls the sending delays with which the first sub-information, the second sub-information, the third sub-information and the fourth sub-information are sent to the next module.
  • The first sub-information rendered by the rendering module is sent to the first power amplifier and, after power amplification by the first power amplifier, is transmitted to the first speaker L; the third sub-information rendered by the rendering module is sent to the second power amplifier and, after power amplification by the second power amplifier, is transmitted to the first speaker R. The first speaker L and the first speaker R emit sound at the first moment. The second sub-information rendered by the rendering module is sent to the third power amplifier and, after power amplification by the third power amplifier, is transmitted to the second speaker L; the fourth sub-information rendered by the rendering module is sent to the fourth power amplifier and, after power amplification by the fourth power amplifier, is transmitted to the second speaker R. The second speaker L and the second speaker R emit sound at the second moment. There is a time difference between the first moment and the second moment.
  • The sound image position can be adjusted by adjusting the volume ratio of the sounds emitted by the first speaker L, the first speaker R, the second speaker L and the second speaker R, so that the sound image position is synchronized with the picture. The first speaker L and the first speaker R emit sound at the first moment and the second speaker L and the second speaker R emit sound at the second moment; that is, the first speaker emits the first sound after receiving the first audio information at the first moment, and the second speaker emits the second sound corresponding to the first sound after receiving the second audio information at the second moment, so that the user can simultaneously receive the sounds of the first speaker L, the first speaker R, the second speaker L and the second speaker R at the third moment. The sound image position is therefore located more accurately, there is no deviation between the picture and the sound image position, the sound image position and the picture are accurately synchronized, the stereo effect is good, and the user experience is improved.
  • the audio output method may only control the first speaker to emit sound at the first moment, and the second speaker to emit sound at the second moment, so that the user can simultaneously receive the first speaker and the second speaker at the third moment The sound emitted, the stereo effect is good, and the user experience is improved.
  • In other embodiments, the audio output method may further perform an upmixing algorithm with height content signal extraction on the first information and the second information to generate a left height channel signal, a right height channel signal, a left main channel signal and a right main channel signal. The left height channel signal is then processed with specific gain, time delay, sound mixing and power amplification and transmitted to the first speaker L and the second speaker L respectively, so that the content of the left height channel signal achieves sound image localization at a specific position in the height direction.
  • the right height channel signal, the left main channel signal and the right main channel signal can achieve sound image localization at a specific position in the height direction in the same way, and will not be repeated here.
  • In this way, sound image localization at a specific position in the height direction can be effectively realized for the content of the left height channel signal, the right height channel signal, the left main channel signal and the right main channel signal, so that the sound image localization of various sounds in the height direction can be adjusted as needed and integrated with the positions of the corresponding pictures. For example, the sound image of an aircraft engine can be positioned at the top of the screen, the sound image of a character's dialogue can be positioned at the middle of the screen, and the sound image of footsteps can be positioned at the bottom of the screen.
  • the audio output method may also acquire the first audio information only from the audio information, and output the first audio information through the first speaker.
  • the audio output method may also be applied to the display device 100 shown in FIG. 14 .
  • the audio output method further includes detecting the application environment of the display device, acquiring spatial parameters of the application environment, and in response to changes in the spatial parameters, the sounding direction of the first speaker 31 changes.
  • the distance detector 70 detects the spatial parameters of the application environment where the display device 100 is located.
  • The spatial parameters include various parameters, such as the first distance between the display device 100 and the wall, the second distance between the display device 100 and the ceiling, and the third distance between the display device 100 and the user.
  • the processor adjusts the sounding direction of the first speaker 31 in response to two or more of the spatial parameters.
  • Specifically, the processor can adjust the sounding direction of the first speaker 31 by controlling the driving component, so that the display device 100 can be adapted to various application environments.
  • It can be understood that the audio output method in this embodiment may adjust the sounding direction of the first speaker 31 according to the application environment of the display device 100. When the display device 100 is used for the first time, or when the display device 100 is moved to a new application environment, the display device 100 detects the spatial parameters of the application environment through the distance detector 70. The sounding direction of the first speaker 31 is then adjusted according to the spatial parameters, so that in different application environments the display device 100 can ensure that the first sound emitted by the first speaker 31 is accurately transmitted to the user's viewing area after being reflected, thereby improving the user's audio-visual experience.
  • the spatial parameters of the application environment of the electronic device may also be manually input by the user.
  • a user may enter through a user input portal of the display device.
  • the user input entry may be application software in the mobile phone that interacts with the display device, or may be a setting window of the display device.
  • the user fills in spatial parameters such as the first distance, the second distance or the third distance through the user input entry, and the processor adjusts the sounding direction of the first speaker according to the data filled in by the user in response to the spatial parameters input by the user. This method is less expensive than obtaining spatial parameters through distance detectors.
  • In other embodiments, the audio output method can also sense the movement state of the display device 100 at all times. The movement state includes being moved and a change in position. When the position of the display device 100 changes, the distance detector 70 is triggered to detect the spatial parameters of the application environment, and the sounding direction of the first speaker 31 is adjusted according to the spatial parameters. Thus, as long as the position of the display device 100 changes, the distance detector 70 acquires the spatial parameters of the application environment, so that the first speaker 31 adjusts its sounding direction according to the spatial parameters, ensuring the user's audio-visual experience at all times.
  • the audio output method further comprises changing the time difference between the first time instant and the second time instant in response to the change in the spatial parameter.
  • the processor may adjust the time difference between the first moment and the second moment according to spatial parameters, such as the first distance, the second distance and the third distance, to ensure that the sounds emitted by the first speaker and the second speaker reach the user at the same time Viewing area for more accurate panning.
  • Through spatial parameters such as the first distance, the second distance and the third distance, the difference between the path along which the first sound emitted by the first speaker reaches the user viewing area and the path along which the second sound emitted by the second speaker reaches the user viewing area can also be obtained, and the time difference between the first moment and the second moment can be determined from this difference.
  • the components for executing each step of the audio output method are not limited to the components described above, and may be any components that can perform the above method.

Abstract

The present application provides a display device and an audio output method thereof. The display device includes a display screen, a first speaker and a second speaker. The first speaker is disposed on the rear side of the display screen, and the sounding direction of the first speaker is toward the upper rear of the display device; the sounding direction of the second speaker is toward the front of the display device or toward the bottom of the display device, and the first speaker and the second speaker do not emit sound synchronously. The display device of the present application has a good stereo effect.

Description

显示设备及其音频输出方法
本申请要求于2021年04月13日提交中国专利局、申请号为202110396109.1、申请名称为“显示设备及其音频输出方法”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及电视设备领域,特别涉及一种显示设备及其音频输出方法。
背景技术
现有的视频播放设备,如平板电视,具有至少两个扬声器,用于实现立体声效果。但是现有的视频播放设备的立体声效果不好。
发明内容
本申请实施例提供一种显示设备,旨在获得一种立体声效果好的显示设备。本申请实施例还提供一种显示设备的音频输出方法,用于提高显示设备的立体声效果。
第一方面,提供了一种显示设备。显示设备包括显示屏、第一扬声器和第二扬声器,第一扬声器设于显示屏的后侧,第一扬声器的发声方向朝向显示设备的后上方;第二扬声器的发声方向朝向显示设备前方或朝向显示设备下方,第一扬声器和第二扬声器不同步发声。
可以理解的是,第一扬声器的发声方向即为第一扬声器发出的声音的初始传播方向,显示设备的后上方为显示设备后方和显示设备的上方之间的方向。
本申请通过限制第一扬声器和第二扬声器不同步发声,从而用户能够同时接收第一扬声器和第二扬声器发出的声音,第一扬声器和第二扬声器发出的声音形成的声像位置更准确,不会出现画面与声像位置偏位的情况,以使声像位置与画面实现精准的同步,立体声效果好,提高用户体验。
一种可能的实现方式中,第一扬声器发出的声音经过位于显示设备后方的第一障碍物反射至位于显示设备上方的第二障碍物,经第二障碍物反射至显示设备前方的用户观看区域。其中,第一障碍物可以是墙体,第二障碍物可以是天花板。
第一扬声器通过将发声方向朝向显示设备的后上方,也就是说,第一扬声器的轴向方向朝向显示设备的后上方。由于第一扬声器的轴向方向朝向显示设备的后上方,第一扬声器的大部分声音朝向位于显示设备后方的第一障碍物,经第一障碍物反射后,经位于显示设备上方的第二障碍物反射到达用户观看区域。第一扬声器的小部分声音偏离轴向方向大,能够朝向显示设备正前方直接到达用户观看区域,但是该部分声音离第一扬声器的主轴方向偏轴角度大,声音的强度较弱,因此直接到达用户观看区域的声音对经过墙体和天花板两次反射后到达用户观看区域的声音的掩蔽效应弱,经天花板反射到达用户观看区域的声音的天空声像定位效果好,提高了用户的视听体验。
同时,第一扬声器发出的声音经过第一障碍物反射至第二障碍物,继而经过天花板反射后投射至用户观看区域,传递至用户观看区域的声音的声像为位于天花板上方的声像,这样由第一扬声器形成的高度方向的声场范围不会局限于显示屏尺寸,使得高度方向的声场能够覆盖应用环境的整个空间高度,起到天空声像定位的效果,例如,飞机引擎的声像能定位在显示屏的上方的位置,通过第一扬声器播放,实现显示屏播放的画面与声音定位一致。
一种可能的实现方式中,第一扬声器的出声方向与水平方向呈10度~80度(包括10度和80度),以保证第一扬声器发出的声音依次经过墙体和天花板反射,最终达到用户观看区 域。其中,水平方向为垂直于显示屏的显示面的方向。
一种可能的实现方式中,第一扬声器的出声方向与水平方向呈35度~45度(包括35度和45度),当第一扬声器的出声方向与水平方向呈35度~45度时,显示设备能够适用于多种不同空间参数的应用环境,应用于多种不同的空间参数的应用环境时,第一扬声器发出的声音均能依次经过不同应用环境的墙体和天花板反射,最终达到用户观看区域。空间参数为多个不同参数的集合,例如显示设备至墙体的距离、显示设备至天花板的距离及显示设备至用户的距离等参数。也就是说,显示设备能适用于距离墙体一定距离范围内、距离天花板一定范围内及距离用户一定范围内的多种不同空间参数的应用环境,保证用户的视听体验。
一种可能的实现方式中,第一扬声器在第一时刻发出第一声音,第二扬声器在第二时刻发出与第一声音相对应的第二声音,第一声音和第二声音在用户观看区域混合;其中,第一时刻和第二时刻存在时间差。通过控制第一扬声器在第一时刻发声,第二扬声器在第二时刻发声,从而用户能在第三时刻同时接收到第一扬声器和第二扬声器发出的声音,声像位置定位更准确,不会出现画面与声像位置偏位的情况,以使声像位置与画面实现精准的同步,提高用户体验。
一种可能的实现方式中,显示设备处于的应用环境的空间参数变化时,第一时刻和第二时刻之间的时间差变化,以保证第一扬声器和第二扬声器发出的声音同时到达用户观看区域,以使声像定位更准确。
一种可能的实现方式中,第一声音和第二声音的音量比例变化时,第一声音和第二声音形成的声像的位置发生变化。通过调整第一声音和第二声音的音量比例,调整第一声音和第二声音形成的声像的位置,从而实现声像位置在高度方向的调整,以使声像位置与画面同步。
一种可能的实现方式中,第一扬声器的发声方向可变,第一扬声器可以根据需要调整发声方向,使得第一扬声器发出的声音能准确的传至用户观看区域。
一种可能的实现方式中,显示设备处于的应用环境的空间参数变化时,第一扬声器的发声方向可变。通过空间参数调整第一扬声器的发声方向,以实现第一扬声器的发声方向的自动调节,提高用户体验。
一种可能的实现方式中,空间参数包括第一距离、第二距离和第三距离中的至少一个,第一距离为显示设备与第一障碍物之间的距离,第二距离为显示设备与第二障碍物之间的距离,第三距离为显示设备与用户之间的距离。显示设备可以根据用户所处的位置,调整用户观看区域的位置,以使用户无论移动到哪个位置都能有很好的视听体验。
一种可能的实现方式中,显示设备包括顶部和底部,第二扬声器相对第一扬声器靠近底部设置,以使用户接收到从不同位置的扬声器发出的声音形成的组合声,提高声音的立体感。
一种可能的实现方式中,显示设备还包括处理器,处理器与第一扬声器和第二扬声器均耦合,处理器通过控制经第一扬声器发出的第一声音的音量和经第二扬声器发出的第二声音的音量的比例,以调整第一声音和第二声音形成的声像的位置,从而实现声像位置在高度方向的调整,以使声像位置与画面同步。
一种可能的实现方式中,处理器还用于控制第一扬声器在第一时刻发出第一声音,控制第二扬声器在第二时刻发出第二声音,用户在第三时刻同时或几乎同时接收第一声音和第二声音,其中,第一时刻和第二时刻存在时间差。通过控制第一扬声器在第一时刻发声,第二扬声器在第二时刻发声,从而用户能在第三时刻同时接收到第一扬声器和第二扬声器发出的声音,声像位置定位更准确,不会出现画面与声像位置偏位的情况,以使声像位置与画面实现精准的同步,提高用户体验。
一种可能的实现方式中,显示设备还包括驱动组件,驱动组件与处理器耦合,处理器用于驱动驱动组件调整第一扬声器的发声方向,第一扬声器可以根据需要调整发声方向,使得第一扬声器发出的声音能准确的传至用户观看区域。
一种可能的实现方式中,显示设备还包括距离探测器,距离探测器用于探测显示设备的应用环境,获取应用环境的空间参数,响应于空间参数的变化,第一扬声器的发声方向变化,以实现第一扬声器的发声方向的自动调节,提高用户体验。
一种可能的实现方式中,距离探测器与处理器耦合,距离探测器向处理器发送指令,响应于指令,第一时刻和第二时刻之间的时间差变化,以保证第一扬声器和第二扬声器发出的声音同时到达用户观看区域,以使声像定位更准确。
一种可能的实现方式中,距离探测器包括雷达。雷达相比于其他探测器得到的数据更加准确。
一种可能的实现方式中,显示设备还包括感知传感器,即感知显示设备被搬动,或者感知显示设备的位置发生变化的传感器。例如感知传感器可以是陀螺仪、加速度计等。当感知传感器检测到显示设备的位置发生变化后,可以触发距离探测器探测应用环境的空间参数,以使第一扬声器根据空间参数调整发声方向,从而只要显示设备的位置发生变化,距离探测器就会获取应用环境的空间参数,以使第一扬声器根据空间参数调整发声方向,时刻保证用户的视听体验。
一种可能的实现方式中,第一扬声器位于顶部和底部之间的中点与顶部之间。也就是说,第一扬声器位于顶部下方的位置,从而第一扬声器发出的朝向显示设备正前方的声音会被显示设备的壳体阻挡,有效减少从第一扬声器直接传递至用户观看区域的声音,天空声像定位更准确。
一种可能的实现方式中,第二扬声器设于底部和/或设于显示设备的侧部。
第二方面,提供了一种显示设备的音频输出方法。显示设备包括显示屏、第一扬声器和第二扬声器,第一扬声器向显示设备的后上方发声,第二扬声器的发声方向朝向所述显示设备前方或朝向显示设备下方,音频输出方法包括:
第一扬声器接收第一音频信息,第二扬声器接收与第一音频信息相对应的第二音频信息,第一音频信息和第二音频信息的播放时间不同步。
本申请的音频输出方法通过控制第一扬声器在第一时刻发声,第二扬声器在第二时刻发声,从而用户能在第三时刻同时接收到第一扬声器和第二扬声器发出的声音,立体声效果好,提高用户体验。
一种可能的实现方式中,第一扬声器发出的声音经过位于显示设备后方的第一障碍物反射至位于显示设备上方的第二障碍物,经第二障碍物反射至显示设备前方的用户观看区域。第一扬声器通过将发声方向朝向显示设备的后上方,经天花板反射到达用户观看区域的声音的天空声像定位效果好,提高了用户的视听体验。
一种可能的实现方式中,音频输出方法包括第一扬声器接收第一音频信息之后在第一时刻发出第一声音,第二扬声器接收第二音频信息之后在第二时刻发出与第一声音相对应的第二声音,第一声音和第二声音在用户观看区域混合;其中,第一时刻和第二时刻存在时间差。
本实现方式通过控制第一扬声器在第一时刻发声,第二扬声器在第二时刻发声,从而用户能在第三时刻同时接收到第一扬声器和第二扬声器发出的声音,声像位置定位更准确,不会出现画面与声像位置偏位的情况,以使声像位置与画面实现精准的同步,提高用户体验。
一种可能的实现方式中,第一声音和第二声音的音量比例变化时,第一声音和第二声音 形成的声像的位置发生变化。也就是说,可以通过调节第一音频信息和第二音频信息的音量比例,以使声像位置与画面同步,实现了真实的三维空间声场音效。
一种可能的实现方式中,音频输出方法还包括:检测显示设备的应用环境,获取应用环境的空间参数;响应于空间参数的变化,第一扬声器的发声方向变化。
可以理解的是,本实施例中音频输出方法可以根据显示设备的应用环境对第一扬声器的发声方向进行调整。在首次使用显示设备,或者将显示设备移动到一个新的应用环境中时,显示设备会通过距离探测器探测应用环境的空间参数。然后通过空间参数调整第一扬声器的发声方向,以使显示设备在不同的应用环境均能保证第一扬声器发出的声音经过反射后准确用户观看区域,提高用户视听体验。
一种可能的实现方式中,音频输出方法还包括:检测显示设备的应用环境,获取应用环境的空间参数;响应于空间参数的变化,第一时刻和第二时刻之间的时间差变化,以保证第一扬声器和第二扬声器发出的声音同时到达用户观看区域,以使声像定位更准确。
一种可能的实现方式中,空间参数包括第一距离、第二距离和第三距离中的至少一个,第一距离为显示设备与第一障碍物之间的距离,第二距离为显示设备与第二障碍物之间的距离,第三距离为显示设备与用户之间的距离。
一种可能的实现方式中,音频输出方法还能时刻感知显示设备的移动状态,移动状态包括被搬动及位置发生变化,当显示设备的位置发生变化后,触发距离探测器探测应用环境的空间参数,根据空间参数调整第一扬声器的发声方向,从而只要显示设备的位置发生变化,距离探测器就会获取应用环境的空间参数,以使第一扬声器根据空间参数调整发声方向,时刻保证用户的视听体验。
第三方面,提供了一种显示设备。显示设备包括显示屏、第一扬声器和第二扬声器,第一扬声器设于显示屏的后侧,第一扬声器的发声方向朝向显示设备的后上方;第二扬声器的发声方向不同于第一扬声器的发声方向。
本申请的第二扬声器的发声方向不同于第一扬声器的发声方向,从而用户能够接收到来自不同方向传过来的声音,提高声音的立体感。
一种可能的实现方式中,第一扬声器发出的声音经过位于显示设备后方的第一障碍物反射至位于显示设备上方的第二障碍物,经第二障碍物反射至显示设备前方的用户观看区域。其中,第一障碍物可以是墙体,第二障碍物可以是天花板。
第一扬声器通过将发声方向朝向显示设备的后上方,也就是说,第一扬声器的轴向方向朝向显示设备的后上方。由于第一扬声器的轴向方向朝向显示设备的后上方,第一扬声器的大部分声音朝向位于显示设备后方的第一障碍物,经第一障碍物反射后,经位于显示设备上方的第二障碍物反射到达用户观看区域。第一扬声器的小部分声音偏离轴向方向大,能够朝向显示设备正前方直接到达用户观看区域,但是该部分声音离第一扬声器的主轴方向偏轴角度大,声音的强度较弱,因此直接到达用户观看区域的声音对经过墙体和天花板两次反射后到达用户观看区域的声音的掩蔽效应弱,经天花板反射到达用户观看区域的声音的天空声像定位效果好,提高了用户的视听体验。
同时,第一扬声器发出的声音经过第一障碍物反射至第二障碍物,继而经过天花板反射后投射至用户观看区域,传递至用户观看区域的声音的声像为位于天花板上方的声像,这样由第一扬声器形成的高度方向的声场范围不会局限于显示屏尺寸,使得高度方向的声场能够覆盖应用环境的整个空间高度,起到天空声像定位的效果,例如,飞机引擎的声像能定位在显示屏的上方的位置,通过第一扬声器播放,实现显示屏播放的画面与声音定位一致。
一种可能的实现方式中，第一扬声器的出声方向与水平方向呈10度~80度（包括10度和80度），以保证第一扬声器发出的声音依次经过墙体和天花板反射，最终到达用户观看区域。其中，水平方向为垂直于显示屏的显示面的方向。
一种可能的实现方式中,第一扬声器的发声方向可变,以使第一扬声器根据需要调整发声方向,使得第一扬声器发出的声音能准确的传至用户观看区域。
一种可能的实现方式中,显示设备处于的应用环境的空间参数变化时,第一扬声器的发声方向可变。通过空间参数调整第一扬声器的发声方向,以实现第一扬声器的发声方向的自动调节,提高用户体验。
第四方面,提供了一种显示设备。显示设备包括显示屏、第一扬声器和第二扬声器,第一扬声器设于显示屏的后侧,第一扬声器的发声方向朝向显示设备的后上方;第一扬声器的发声方向可变。第一扬声器可以根据需要调整发声方向,使得第一扬声器发出的声音能准确的传至用户观看区域。
一种可能的实现方式中,第一扬声器发出的声音经过位于显示设备后方的第一障碍物反射至位于显示设备上方的第二障碍物,经第二障碍物反射至显示设备前方的用户观看区域。其中,第一障碍物可以是墙体,第二障碍物可以是天花板。
第一扬声器通过将发声方向朝向显示设备的后上方,也就是说,第一扬声器的轴向方向朝向显示设备的后上方。由于第一扬声器的轴向方向朝向显示设备的后上方,第一扬声器的大部分声音朝向位于显示设备后方的第一障碍物,经第一障碍物反射后,经位于显示设备上方的第二障碍物反射到达用户观看区域。第一扬声器的小部分声音偏离轴向方向大,能够朝向显示设备正前方直接到达用户观看区域,但是该部分声音离第一扬声器的主轴方向偏轴角度大,声音的强度较弱,因此直接到达用户观看区域的声音对经过墙体和天花板两次反射后到达用户观看区域的声音的掩蔽效应弱,经天花板反射到达用户观看区域的声音的天空声像定位效果好,提高了用户的视听体验。
同时,第一扬声器发出的声音经过第一障碍物反射至第二障碍物,继而经过天花板反射后投射至用户观看区域,传递至用户观看区域的声音的声像为位于天花板上方的声像,这样由第一扬声器形成的高度方向的声场范围不会局限于显示屏尺寸,使得高度方向的声场能够覆盖应用环境的整个空间高度,起到天空声像定位的效果,例如,飞机引擎的声像能定位在显示屏的上方的位置,通过第一扬声器播放,实现显示屏播放的画面与声音定位一致。
一种可能的实现方式中,显示设备处于的应用环境的空间参数变化时,第一扬声器的发声方向可变。通过空间参数调整第一扬声器的发声方向,以实现第一扬声器的发声方向的自动调节,提高用户体验。
一种可能的实现方式中,空间参数包括第一距离、第二距离和第三距离中的至少一个,第一距离为显示设备与第一障碍物之间的距离,第二距离为显示设备与第二障碍物之间的距离,第三距离为显示设备与用户之间的距离。
一种可能的实现方式中,第一声音和第二声音的音量比例变化时,第一声音和第二声音形成的声像的位置发生变化。通过调整第一声音和第二声音的音量比例,调整第一声音和第二声音形成的声像的位置,从而实现声像位置在高度方向的调整,以使声像位置与画面同步。
一种可能的实现方式中，第一扬声器在第一时刻发出第一声音，第二扬声器在第二时刻发出与第一声音相对应的第二声音，第一声音和第二声音在用户观看区域混合；其中，第一时刻和第二时刻存在时间差。通过控制第一扬声器在第一时刻发声，第二扬声器在第二时刻发声，从而用户能在第三时刻同时接收到第一扬声器和第二扬声器发出的声音，声像位置定位更准确，不会出现画面与声像位置偏位的情况，以使声像位置与画面实现精准的同步，提高用户体验。
一种可能的实现方式中,显示设备处于的应用环境的空间参数变化时,第一时刻和第二时刻之间的时间差变化,以保证第一扬声器和第二扬声器发出的声音同时到达用户观看区域,以使声像定位更准确。
一种可能的实现方式中,当第一扬声器位于第一发声方向时,第一时刻和第二时刻之间的时间差为第一时间差,当第一扬声器位于第二发声方向时,第一时刻和第二时刻之间的时间差为第二时间差,其中,第一时间差和第二时间差不同。第一时刻和第二时刻之间的时间差可以通过第一扬声器的发声方向等数据得到。
第五方面,提供了一种显示设备。显示设备包括显示屏和第一扬声器,第一扬声器设于显示屏的后侧,第一扬声器的发声方向朝向显示设备的后上方;第一扬声器发出的声音经过位于显示设备后方的第一障碍物反射至位于显示设备上方的第二障碍物,经第二障碍物反射至显示设备前方的用户观看区域。其中,第一障碍物可以是墙体,第二障碍物可以是天花板。
本申请的第一扬声器通过将发声方向朝向显示设备的后上方,也就是说,第一扬声器的轴向方向朝向显示设备的后上方。由于第一扬声器的轴向方向朝向显示设备的后上方,第一扬声器的大部分声音朝向位于显示设备后方的第一障碍物,经第一障碍物反射后,经位于显示设备上方的第二障碍物反射到达用户观看区域。第一扬声器的小部分声音偏离轴向方向大,能够朝向显示设备正前方直接到达用户观看区域,但是该部分声音离第一扬声器的主轴方向偏轴角度大,声音的强度较弱,因此直接到达用户观看区域的声音对经过墙体和天花板两次反射后到达用户观看区域的声音的掩蔽效应弱,经天花板反射到达用户观看区域的声音的天空声像定位效果好,提高了用户的视听体验。
同时,第一扬声器发出的声音经过第一障碍物反射至第二障碍物,继而经过天花板反射后投射至用户观看区域,传递至用户观看区域的声音的声像为位于天花板上方的声像,这样由第一扬声器形成的高度方向的声场范围不会局限于显示屏尺寸,使得高度方向的声场能够覆盖应用环境的整个空间高度,起到天空声像定位的效果,例如,飞机引擎的声像能定位在显示屏的上方的位置,通过第一扬声器播放,实现显示屏播放的画面与声音定位一致。
一种可能的实现方式中，第一扬声器的出声方向与水平方向呈10度~80度（包括10度和80度），以保证第一扬声器发出的声音依次经过墙体和天花板反射，最终到达用户观看区域。其中，水平方向为垂直于显示屏的显示面的方向。
一种可能的实现方式中,显示设备还包括第二扬声器,第二扬声器的发声方向朝向显示设备前方或显示设备下方。
一种可能的实现方式中,第一扬声器发出的第一声音和第二扬声器发出的第二声音的音量比例变化时,第一声音和第二声音形成的声像的位置发生变化。通过调整第一声音和第二声音的音量比例,调整第一声音和第二声音形成的声像的位置,从而实现声像位置在高度方向的调整,以使声像位置与画面同步。
一种可能的实现方式中,第一扬声器在第一时刻发出第一声音,第二扬声器在第二时刻发出与第一声音相对应的第二声音,第一声音和第二声音在用户观看区域混合;其中,第一时刻和第二时刻存在时间差。通过控制第一扬声器在第一时刻发声,第二扬声器在第二时刻发声,从而用户能在第三时刻同时接收到第一扬声器和第二扬声器发出的声音,声像位置定位更准确,不会出现画面与声像位置偏位的情况,以使声像位置与画面实现精准的同步,提高用户体验。
一种可能的实现方式中,显示设备处于的应用环境的空间参数变化时,第一时刻和第二时刻之间的时间差变化,以保证第一扬声器和第二扬声器发出的声音同时到达用户观看区域,以使声像定位更准确。
一种可能的实现方式中,第一扬声器的发声方向可变,第一扬声器可以根据需要调整发声方向,使得第一扬声器发出的声音能准确的传至用户观看区域。
一种可能的实现方式中,显示设备处于的应用环境的空间参数变化时,第一扬声器的发声方向可变。通过空间参数调整第一扬声器的发声方向,以实现第一扬声器的发声方向的自动调节,提高用户体验。
一种可能的实现方式中,空间参数包括第一距离、第二距离和第三距离中的至少一个,第一距离为显示设备与第一障碍物之间的距离,第二距离为显示设备与第二障碍物之间的距离,第三距离为显示设备与用户之间的距离。显示设备可以根据用户所处的位置,调整用户观看区域的位置,以使用户无论移动到哪个位置都能有很好的视听体验。
附图说明
为了更清楚地说明本申请实施例或背景技术中的技术方案,下面将对本申请实施例或背景技术中所需要使用的附图进行说明。
图1是本申请实施例提供的一种显示设备的结构示意图;
图2是图1所示显示设备在另一角度的分解结构示意图;
图3是图1所示显示设备位于应用环境的结构示意图;
图4是相关技术的显示设备位于应用环境的结构示意图;
图5是图3所示显示设备的另一实施例的结构示意图;
图6是图3所示的显示设备的处理器、第一扬声器和第二扬声器的结构示意图;
图7是图6所示结构的音频输出处理过程示意图;
图8是图7所示音频输出处理过程的具体示意图;
图9是图6所示结构的另一种音频输出处理过程示意图;
图10是图9所示音频输出处理过程的扬声器发声时间的控制示意图;
图11是图10所示音频输出处理过程的具体示意图;
图12是图6所示结构的另一种实施例的结构示意图;
图13是图12所示结构的音频输出处理过程示意图;
图14是图1所示的显示设备的另一种实施例的结构示意图;
图15是本实施例提供的一种显示设备的音频输出方法的流程示意图。
具体实施方式
下面结合本申请实施例中的附图对本申请实施例进行描述。
在本申请实施例的描述中,需要说明的是,除非另有明确的规定和限定,术语“安装”、“连接”应做广义理解,例如,“连接”可以是可拆卸地连接,也可以是不可拆卸地连接;可以是直接连接,也可以通过中间媒介间接连接。本申请实施例中所提到的方位用语,例如,“上”、“下”、“左”、“右”、“内”、“外”、“前”、“后”等,仅是参考附图的方向,因此,使用的方位用语是为了更好、更清楚地说明及理解本申请实施例,而不是指示或暗指所指的装置或元件必须具有特定的方位、以特定的方位构造和操作,因此不能理解为对本申请实施例的限制。“多个”是指至少两个。
可以理解的是，此处所描述的具体实施例用于解释相关方案，而非对该方案的限定。另外还需要说明的是，为了便于描述，附图中仅示出了与方案相关的部分。
下面将参考附图并结合实施例来详细说明本申请。
本申请实施例提供一种显示设备,显示设备包括且不限于平板电视、电脑显示屏、会议显示屏或车用显示屏等具有扬声器的显示设备。本申请以显示设备是平板电视为例进行具体说明。
请参阅图1和图2,图1是本申请实施例提供的一种显示设备的结构示意图。图2是图1所示显示设备在另一角度的分解结构示意图。
显示设备100包括壳体10、显示屏20、扬声器30、主板40、处理器50以及存储器60。
显示屏20用于显示图像，视频等。显示屏20还可以集成触摸功能。显示屏20安装于壳体10。壳体10可以包括边框11和后壳12。显示屏20和后壳12分别安装于边框11的相背两侧，其中，显示屏20位于面向用户的一侧，后壳12位于背对用户的一侧。显示屏20包括显示面板。显示面板可以采用液晶显示屏（liquid crystal display，LCD），有机发光二极管（organic light-emitting diode，OLED），有源矩阵有机发光二极体或主动矩阵有机发光二极体（active-matrix organic light emitting diode，AMOLED），柔性发光二极管（flex light-emitting diode，FLED），Mini-LED，Micro-LED，Micro-OLED，量子点发光二极管（quantum dot light emitting diodes，QLED）等。
在本实施例中,在显示设备100的外部空间中,定义显示屏20朝向的空间为显示设备100的前方,后壳12朝向的空间为显示设备100的后方。显示设备100包括顶部101、底部102和连接在顶部101和底部102之间且相对设置的两个侧部103。定义显示设备100的顶部101朝向的方向为显示设备100的上方,显示设备100的底部102朝向的方向为显示设备100的下方。
主板40位于壳体10内侧,主板40上集成了处理器50、存储器60以及其他各类电路器件。显示屏20耦合处理器50,以接收处理器50发送的显示信号。处理器50可以包括一个或多个处理单元,例如:处理器50可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
处理器可以根据指令操作码和时序信号,产生操作控制信号,完成提取指令和执行指令的控制。
处理器50中还可以设置内部存储器,用于存储指令和数据。在一些实施例中,处理器50中的存储器可以为高速缓冲存储器。该存储器可以保存处理器50用过或使用频率较高的指令或数据。如果处理器50需要使用该指令或数据,可从该存储器中直接调用。避免了重复存取,减少了处理器50的等待时间,因而提高了系统的效率。
在一些实施例中，处理器50可以包括一个或多个接口。接口可以包括集成电路（inter-integrated circuit，I2C）接口，集成电路内置音频（inter-integrated circuit sound，I2S）接口，脉冲编码调制（pulse code modulation，PCM）接口，通用异步收发传输器（universal asynchronous receiver/transmitter，UART）接口，移动产业处理器接口（mobile industry processor interface，MIPI），通用输入输出（general-purpose input/output，GPIO）接口，用户标识模块（subscriber identity module，SIM）接口，和/或通用串行总线（universal serial bus，USB）接口等。处理器50可以通过以上至少一种接口连接触摸传感器、音频模块、无线通信模块、显示器、摄像头等模块。
存储器60可以用于存储计算机可执行程序代码,该可执行程序代码包括指令。存储器60可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储显示设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,存储器60可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器50通过运行存储在存储器60的指令,和/或存储在设置于处理器中的存储器的指令,执行显示设备100的各种功能方法或数据处理,例如,使显示屏20显示目标图像。
显示设备100可以通过音频模块,扬声器,以及处理器等实现音频功能。例如音乐播放,声音播放等。音频模块用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块还可以用于对音频信号编码和解码。在一些实施例中,音频模块可以设置于处理器50中,或将音频模块的部分功能模块设置于处理器50中,或音频模块的部分功能模块或全部功能模块设置于处理器50外。
扬声器(例如扬声器30),也称“喇叭”,用于将音频电信号转换为声音信号。显示设备100可以通过扬声器播放音乐等声音。
扬声器30位于壳体10内侧,并集成于主板40背向显示屏20的一侧,也就是说,扬声器30设于显示屏20的后侧。显示屏20的后侧即为显示屏20的显示面背向的一侧。后壳12设有通音孔121,扬声器30发出的声音通过通音孔121传出壳体10外部。扬声器30耦合处理器50,处理器50用于运行存储在存储器60的指令,和/或存储在设置于处理器中的存储器的指令,以使扬声器30发出声音。
本实施例中,扬声器30包括第一扬声器31和第二扬声器32,第一扬声器31和第二扬声器32均固定于主板40。第一扬声器31和第二扬声器32均耦合处理器50。第一扬声器31的发声方向朝向显示设备100的后上方,第二扬声器32的发声方向朝向显示设备100的前方。可以理解的是,第一扬声器31的发声方向即为第一扬声器31发出的声音的初始传播方向,显示设备100的后上方为显示设备100后方和显示设备的上方之间的方向。
当然,在其他实施例的一种场景中,第一扬声器和第二扬声器还可以固定于壳体内的其他位置,第一扬声器和第二扬声器一般位于不同的位置,第一扬声器和第二扬声器发出声音的方向不同,声音到达用户的路径长短不同。在其他实施例的另一种场景中,扬声器还可以包括除第一扬声器和第二扬声器以外的其他扬声器。
例如,在一些实施例中,第一扬声器可以位于显示设备100的顶部,第一扬声器发出声音的方向向上,具体的,第一扬声器发出声音可以朝向显示设备100的正上方,或第一扬声器发出声音可以朝向前上方等等。此外,第一扬声器和/或第二扬声器还可以位于显示设备100的侧面。第一扬声器和/或第二扬声器均可以包括多个阵列的扬声器。
请参阅图3,图3是图1所示显示设备100位于应用环境的结构示意图。
本实施例中,显示设备100靠近墙体201设置,显示设备100的显示屏20背向墙体201,显示设备100的顶部101朝向天花板202。第一扬声器31发出的声音经过位于显示设备100后方的第一障碍物反射至位于显示设备100上方的第二障碍物,经第二障碍物反射至显示设备100前方的用户观看区域。也就是说,显示设备100后方的第一障碍物为墙体201,显示设备100上方的第二障碍物为天花板202,显示设备100的前方为用户观看区域203。
当然，在其他实施例中，显示设备100的应用环境可以不同，第一障碍物还可以是除墙体201以外的其他结构，例如屏风、反射板等。第二障碍物还可以是反射板等阻挡结构。
如图3,具体的,第一扬声器31发出的声音经过位于显示设备100后方的墙体201反射,形成以墙体201为反射镜面的第一扬声器31的镜像声源A,经过墙体201反射的声音继续向显示设备100上方的天花板202方向传输,经过天花板202反射,形成以天花板202为反射镜面的第一扬声器31的镜像声源B,声音在经过天花板202再次反射后,声音以镜像声源B为声像从天花板202向下投射到显示设备100前方的用户观看区域203。其中,人通过听到的声音所感知到的发声物体的位置点称为声像。
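为便于理解上述镜像声源的几何关系，下面给出一个示意性的Python片段。该片段只是按镜面反射原理做的二维几何估算草图，坐标系约定、函数名和各距离取值均为本文为说明而做的假设，并非本申请方案的一部分：

```python
# 坐标约定(假设): x 轴指向显示设备正前方(朝向用户观看区域), y 轴竖直向上, 单位为米。
# 墙体视为 x = -d_wall 的竖直平面, 天花板视为 y = h_ceiling 的水平平面。

def mirror_sources(speaker_xy, d_wall, h_ceiling):
    """按镜面反射原理计算镜像声源A(对墙体)和镜像声源B(先墙体再天花板)的位置。"""
    x, y = speaker_xy
    # 以墙体(x = -d_wall)为镜面: x 坐标关于 -d_wall 对称, y 不变, 得到镜像声源A
    ax, ay = 2 * (-d_wall) - x, y
    # 再以天花板(y = h_ceiling)为镜面: y 坐标关于 h_ceiling 对称, 得到镜像声源B
    bx, by = ax, 2 * h_ceiling - ay
    return (ax, ay), (bx, by)

if __name__ == "__main__":
    # 假设第一扬声器位于显示屏后侧、离地 1.0 米处, 距墙体 0.2 米, 天花板高 2.8 米
    src_a, src_b = mirror_sources(speaker_xy=(-0.1, 1.0), d_wall=0.2, h_ceiling=2.8)
    print("镜像声源A:", src_a)   # 位于墙体后方
    print("镜像声源B:", src_b)   # 位于天花板上方
```

按该几何关系，镜像声源B位于天花板上方，与上文所述声像位于天花板上方的描述一致。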
可以理解的是，如图4，在相关技术中，扬声器2的发声方向朝向显示设备1的前上方，从而扬声器2发出的声音能够传递至天花板3，并经过天花板3反射到达用户观看区域4。由于扬声器2辐射的声波具有指向性，沿扬声器2的轴向方向传播的声波的强度最强，随着偏轴角度的增大，声波的强度逐渐减弱。扬声器2发出的声音一部分会经天花板3反射后到达用户观看区域4，该部分声音称为反射音S1，另一部分会直接传到显示设备1前方的用户观看区域4，该部分声音称为直达音S2。由于扬声器2的轴向方向朝向显示设备1的前上方，直达音S2离扬声器2的轴向方向偏轴角度较小，从而直达音S2的强度较强，直达音S2会先于反射音S1到达用户观看区域4，由于哈斯效应，人的听觉具有先入为主的特性，难以分辨延迟到达的反射音S1，因此直达音S2会减弱反射音S1形成的天空声像的定位效果，降低用户体验。
本申请的第一扬声器31通过将发声方向朝向显示设备100的后上方,也就是说,第一扬声器31的轴向方向朝向显示设备100的后上方。由于第一扬声器31的轴向方向朝向显示设备100的后上方,第一扬声器31的大部分声音朝向墙体201,经墙体201反射后,经天花板202反射到达用户观看区域203。第一扬声器31的小部分声音偏离轴向方向大,能够朝向显示设备100正前方直接到达用户观看区域203,但是该部分声音离第一扬声器31的主轴方向偏轴角度大,声音的强度较弱。因此,直接到达用户观看区域203的声音对经过墙体201和天花板202两次反射后到达用户观看区域203的声音的掩蔽效应弱,经天花板202反射到达用户观看区域203的声音的天空声像定位效果好,提高了用户的视听体验。
同时,第一扬声器31发出的声音经过墙体201反射至天花板202,继而经过天花板202反射后投射至用户观看区域203,传递至用户观看区域203的声音的声像为位于天花板202上方的声像,这样由第一扬声器31形成的高度方向的声场范围不会局限于显示屏20尺寸,使得高度方向的声场能够覆盖应用环境的整个空间高度,起到天空声像定位的效果,例如,飞机引擎的声像能定位在显示屏20的上方的位置,通过第一扬声器31播放,实现显示屏20播放的画面与声音定位一致。
请参阅图2和图3,本实施例中,第一扬声器31靠近显示设备100的顶部101设置,后壳的通音孔121与第一扬声器31对应设置。也就是说,第一扬声器31位于顶部101下方的位置,从而第一扬声器31发出的朝向显示设备100正前方的声音会被显示设备100的壳体10阻挡,有效减少从第一扬声器31直接传递至用户观看区域203的声音,使得天空声像定位更准确。
具体的,第一扬声器31的数量为两个,一个第一扬声器31靠近显示设备100的一个侧部103设置,另一个第一扬声器31靠近另一个侧部103设置,以分别播放左声道和右声道的音频信息。当然,在其他实施例中,第一扬声器31的数量还可以是一个或多个,本申请对第一扬声器31的数量不做限制。
在其他实施例的一种实施场景中，第一扬声器31还可以位于显示设备100的顶部101和底部102之间的中点与顶部101之间，也就是说，第一扬声器31还可以位于显示设备100的上半部分的任意位置，如图5所示。在其他实施例的另一种实施场景中，第一扬声器31还可以位于显示设备100的顶部101。具体的，第一扬声器31设置的位置还与显示设备100至墙体201和天花板202的距离，及第一扬声器31的出声方向与水平方向的角度有关。
如图3所示，第一扬声器31的出声方向与水平方向呈35度~45度（包括35度和45度），水平方向为垂直于显示屏20的显示面的方向。当第一扬声器31的出声方向与水平方向呈35度~45度时，显示设备100能够适用于多种不同空间参数的应用环境，应用于多种不同的空间参数的应用环境时，第一扬声器31发出的声音均能依次经过不同应用环境的墙体201和天花板202反射，最终到达用户观看区域203。空间参数为多个不同参数的集合，例如显示设备100至墙体201的距离、显示设备100至天花板202的距离及显示设备100至用户的距离等参数。也就是说，显示设备100能适用于距离墙体201一定距离范围内、距离天花板202一定范围内及距离用户一定范围内的多种不同空间参数的应用环境，保证用户的视听体验。
当然，在其他实施例中，第一扬声器31的出声方向还可以与水平方向呈10度~80度（包括10度和80度）或者除10度~80度以外的其他角度，只要能保证第一扬声器发出的声音依次经过墙体和天花板反射，最终到达用户观看区域即可。
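下面的Python片段用二维几何近似演示：在给定出声角度与房间尺寸（均为假设的示例数值）时，声线依次经墙体、天花板反射后大致落在显示屏前方多远处，可用来粗略检验某一出声角度是否能覆盖到用户观看区域。该片段忽略扬声器指向性与声束宽度，函数名与取值均为说明性假设：

```python
import math

def landing_x(theta_deg, speaker_xy, d_wall, h_ceiling, y_ear):
    """二维几何近似: 计算出声方向与水平方向夹角为 theta_deg 的声线,
    依次经墙体(x = -d_wall)和天花板(y = h_ceiling)反射后,
    下落到人耳高度 y_ear 时的水平位置(以显示屏所在平面为 x = 0)。"""
    x, y = speaker_xy
    dx = -math.cos(math.radians(theta_deg))   # 初始方向: 朝后
    dy = math.sin(math.radians(theta_deg))    # 初始方向: 朝上
    # 1) 传播到墙体并反射(水平分量反向)
    t = (-d_wall - x) / dx
    x, y = -d_wall, y + t * dy
    dx = -dx
    # 2) 传播到天花板并反射(竖直分量反向)
    t = (h_ceiling - y) / dy
    x, y = x + t * dx, h_ceiling
    dy = -dy
    # 3) 继续传播, 下落到人耳高度
    t = (y_ear - y) / dy
    return x + t * dx

if __name__ == "__main__":
    # 假设: 扬声器距墙 0.3 米、离地 1.0 米, 天花板高 2.8 米, 人耳高度 1.2 米
    for theta in (35, 40, 45):
        x_land = landing_x(theta, speaker_xy=(-0.2, 1.0),
                           d_wall=0.3, h_ceiling=2.8, y_ear=1.2)
        print(f"出声角 {theta} 度: 反射声约落在显示屏前方 {x_land:.1f} 米处")
```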
如图1和图3,第二扬声器32相对第一扬声器31靠近底部102设置。第一扬声器31发出的第一声音和第二扬声器32发出的第二声音在用户观看区域混合,以使用户接收到从不同位置的扬声器发出的声音形成的组合声,提高声音的立体感。
具体的,本实施例中,第二扬声器32位于显示设备100的底部102,第二扬声器32的数量为两个,一个第二扬声器32靠近显示设备100的一个侧部103设置,另一个第二扬声器32靠近显示设备100的另一个侧部103设置,以分别播放左声道和右声道的音频信息。当然,在其他实施例中,第二扬声器32的数量还可以是一个或多个,本申请对第二扬声器32的数量不做限制。
在其他实施例的一种实施场景中,第二扬声器还可以设于显示设备的侧部。在其他实施例中的另一种实施场景中,第二扬声器还可以部分设于显示设备的底部,部分设于显示设备的侧部。在其他实施例的又一种实施场景中,第二扬声器还可以位于显示设备的中部。或者,部分第二扬声器位于显示设备的顶部,部分第二扬声器位于显示设备的底部,部分第二扬声器位于显示设备的中部。在其他实施例的再一种实施场景中,第二扬声器可以通过显示屏震动发声,也就是说,显示屏的部分通过震动形成第二扬声器,在实现立体声的效果的同时不会占用显示设备的内部空间,还有利于提高显示设备的屏占比。
本实施例中,第二扬声器32的发声方向朝向显示设备100的前方。也就是说,第二扬声器的发声方向朝向用户观看区域203,第二扬声器32可以用于播放脚步声等声音。第二扬声器32的发声方向朝向用户观看区域203,可以是第二扬声器32的声音开口方向直接朝向用户观看区域203,或者,第二扬声器32的声音开口方向不朝向用户观看区域203,而是经声音转向装置后,将发声方向转向用户观看区域203。当然,在其他实施例中,第二扬声器的发声方向还可以朝向显示设备的下方。
请参阅图6,图6是图3所示的显示设备100的处理器50、第一扬声器31和第二扬声器32的结构示意图。处理器50包括音频模块,其中,音频模块可以包括获取模块、渲染模块及功放模块等功能模块。渲染模块与获取模块和功放模块分别耦合,功放模块包括第一功放模块和第二功放模块,第一功放模块与第一扬声器31耦合,第二功放模块与第二扬声器32耦合。
本实施例中,处理器50可以调整第一扬声器31发出的第一声音和第二扬声器32发出的与第一声音相对应的第二声音形成的声像的位置。具体处理器50通过下述方法调整第一声音和第二声音形成的声像的位置:
请一并参阅图3、图6和图7,图7是图6所示结构的音频输出处理过程示意图。
处理器50通过控制经第一扬声器31发出的第一声音的音量和经第二扬声器32发出的第二声音的音量的比例,调整第一扬声器31发出的第一声音和第二扬声器32发出的第二声音形成的声像的位置。也就是说,第一声音和第二声音的音量比例变化时,第一声音和第二声音形成的声像的位置发生变化。
具体的,获取模块获取影像内容的画面信息和音频信息。影像内容可以是视频内容、游戏、实时视频等。实时视频例如可以是视频通话、视频直播或者视频会议等。从音频信息中提取第一音频信息和第二音频信息,其中,第一音频信息和第二音频信息分别对应第一扬声器31和第二扬声器32,第一音频信息和第二音频信息可以对应于同一个声音内容,例如,第一音频信息和第二音频信息的声音内容可以对应于同一个人说的“你好”。
渲染模块对第一音频信息和第二音频信息的音量大小进行增益调节,具体的,渲染模块根据画面信息确定音频信息的声像位置,根据声像位置对第一音频信息和第二音频信息的音量比例进行调节。
然后将第一音频信息发送给功放模块的第一功放模块,经第一功放模块功率放大后传递给第一扬声器31,将第二音频信息发送给功放模块的第二功放模块,经第二功放模块功率放大后传递给第二扬声器32。
例如,当一个人的声音“你好”的声像位置来自于显示屏下方,第一音频信息的音量可以小于第二音频信息的音量,第一音频信号和第二音频信号分别通过第一功放模块和第二功放模块进行功率放大之后分别发送给第一扬声器和第二扬声器,从而使得第一扬声器发出的“你好”的音量小于第二扬声器发出的“你好”的音量,当用户在听到第一扬声器发出的“你好”和第二扬声器发出的“你好”这两个声音分量时,会感觉到“你好”的声像位置来自于显示屏下方。当一个人的声音“你好”的声像位置来自于显示屏上方,第一音频信息的音量可以大于第二音频信息的音量,第一音频信号和第二音频信号分别通过第一功放模块和第二功放模块进行功率放大之后分别发送给第一扬声器和第二扬声器,从而使得第一扬声器发出的“你好”的音量大于第二扬声器发出的“你好”的音量,当用户在听到第一扬声器发出的“你好”和第二扬声器发出的“你好”这两个声音分量时,会感觉到“你好”的声像位置来自于显示屏上方。
通过调节从第一扬声器31发出的第一声音大小和从第二扬声器32发出的第二声音大小的比例，从而实现声像位置在高度方向的调整，以使声像位置与画面同步，立体声效果好。其中，高度方向为显示设备100顶部101至底部102的方向。声像位置为第一声音和第二声音形成的声像的位置。
可以理解的是,如图3所示,第一扬声器31的镜像声源B所在的位置为第一位置,第二扬声器32所在的位置为第二位置,处理器50可以通过调节经第一扬声器31和第二扬声器32发出的声音的音量的比例,也就是调节第一声音和第二声音的音量的比例,以使第一声音和第二声音的声像位置在第一位置和第二位置之间可调整。举例来说,当第一扬声器31不发声,第二扬声器32发声,则声像位置位于第二位置,当第一扬声器31和第二扬声器32的声音大小相同,则声像位置在第一位置和第二位置之间的中部附近。
例如，显示屏20显示一只鸟儿从显示设备的底部朝向顶部飞行，则与小鸟飞行对应的音频的声像位置也是从底部朝向顶部移动。对应的，当音频的声像位置在底部时，第一扬声器不发声，第二扬声器发声，随着音频的声像位置朝向顶部移动，第一扬声器发出的第一声音逐渐增大，第二扬声器发出的第二声音逐渐减小，以使第一声音和第二声音形成的声像位置与小鸟的飞行轨迹一致。
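下面用一个简化的Python示例演示按目标声像高度调节第一声音与第二声音音量比例的思路。其中的线性增益插值方式、测试信号与函数名均为本文的说明性假设，实际产品中的增益曲线可能不同：

```python
import numpy as np

def height_pan(signal, target, pos_low=0.0, pos_high=1.0):
    """按目标高度 target(0 表示第二位置/底部, 1 表示第一位置/镜像声源B方向)
    把同一声音内容分配给第一扬声器和第二扬声器, 返回两路增益后的信号。
    这里采用简单的线性幅度插值, 仅作原理示意。"""
    a = (target - pos_low) / (pos_high - pos_low)   # 归一化到 0..1
    gain_first, gain_second = a, 1.0 - a            # 第一扬声器 / 第二扬声器增益
    return gain_first * signal, gain_second * signal

if __name__ == "__main__":
    fs = 48000
    t = np.arange(fs) / fs
    chirp = np.sin(2 * np.pi * 800 * t)             # 代表"小鸟"的测试音
    # 声像从底部(0.0)逐渐移到顶部(1.0): 第一声音逐渐增大, 第二声音逐渐减小
    for h in (0.0, 0.25, 0.5, 0.75, 1.0):
        s1, s2 = height_pan(chirp, target=h)
        print(f"目标高度 {h:.2f}: 第一扬声器峰值 {s1.max():.2f}, 第二扬声器峰值 {s2.max():.2f}")
```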
具体的,如图8,获取模块获取影像内容的画面信息和音频信息。从音频信息中提取第一音频信息和第二音频信息。音频信息包括第一信息和第二信息,第一信息为左声道音频信息,第二信息为右声道音频信息。从第一信息中提取第一子信息和第二子信息,从第二信息中提取第三子信息和第四子信息,其中,第一子信息和第三子信息形成第一音频信息,第二子信息和第四子信息形成第二音频信息。
渲染模块对第一子信息、第二子信息、第三子信息和第四子信息的音量大小进行增益调节,具体的,渲染模块根据画面信息确定音频信息的声像位置,根据声像位置对第一子信息、第二子信息、第三子信息和第四子信息的音量比例进行调节。
第一功放模块包括第一功放和第二功放,第二功放模块包括第三功放和第四功放。两个第一扬声器31分别为第一扬声器L(左侧第一扬声器)和第一扬声器R(右侧第一扬声器),两个第二扬声器32分别为第二扬声器L(左侧第二扬声器)和第二扬声器R(右侧第二扬声器)。经过渲染模块渲染后的第一子信息发送给第一功放,经第一功放功率放大后传递给第一扬声器L。经过渲染模块渲染后的第三子信息发送给第二功放,经第二功放功率放大后传递给第一扬声器R。经过渲染模块渲染后的第二子信息发送给第三功放,经第三功放功率放大后传递给第二扬声器L。经过渲染模块渲染后的第四子信息发送给第四功放,经第四功放功率放大后传递给第二扬声器R。
本申请通过调节从第一扬声器L、第一扬声器R、第二扬声器L和第二扬声器R发出的声音大小比例,从而实现声像位置在三维空间上的调整,以使声像位置与画面同步,实现了真实的三维空间声场音效,立体声效果好。请参阅图9和图10,图9是图6所示结构的另一种音频输出处理过程示意图;图10是图9所示音频输出处理的扬声器发声时间的控制示意图。
在一些实施例中,处理器还用于控制第一扬声器和第二扬声器发声的时间。也就是说,第一扬声器和第二扬声器不同步发声。具体的,获取模块获取影像内容的画面信息和音频信息。影像内容可以是视频内容、游戏、实时视频等。实时视频例如可以是视频通话、视频直播或者视频会议等。从音频信息中提取第一音频信息和第二音频信息,其中,第一音频信息和第二音频信息分别对应第一扬声器和第二扬声器,第一音频信息和第二音频信息可以对应于同一个声音内容,例如,第一音频信息和第二音频信息的声音内容可以对应于同一个人说的“你好”。
渲染模块对第一音频信息和第二音频信息的音量大小进行增益调节。具体的，渲染模块根据画面信息确定音频信息的声像位置，根据声像位置对第一音频信息和第二音频信息的音量比例进行调节。同时渲染模块还控制第一音频信息和第二音频信息发送到下个模块的发送时延。然后将第一音频信息发送给功放模块的第一功放模块，经第一功放模块功率放大后传递给第一扬声器，第一扬声器在第一时刻T1（如图10）发出第一声音，将第二音频信息发送给功放模块的第二功放模块，经第二功放模块功率放大后传递给第二扬声器，第二扬声器在第二时刻T2发出第二声音。其中，第一时刻和第二时刻存在时间差△T，第一时刻早于第二时刻。其中，在图10中，10A表示第一扬声器在第一时刻T1接收到第一音频信息后，发出声音的波形，10B表示第二扬声器在第二时刻T2接收到第二音频信息后，发出声音的波形。第一音频信息和第二音频信息的波形关系类似于10A和10B的关系，例如，第一音频信息和第二音频信息的波形也可以具有时间差。
例如,当一个人的声音“你好”的声像位置来自于显示屏下方,第一音频信息的音量可以小于第二音频信息的音量,将第一音频信号在第一时间发送给第一功放模块,通过第一功率模块放大后发给第一扬声器,以使第一扬声器在第一时刻发出“你好”(10A的波形可以表示第一扬声器发出“你好”),将第二音频信号在第二时间发送给第二功放模块,通过第二功率模块放大后发给第二扬声器,以使第二扬声器在第二时刻发出“你好”(10B的波形可以表示第二扬声器发出“你好”)。从而使得第一扬声器发出的“你好”的音量小于第二扬声器发出的“你好”的音量,且第一扬声器发出的“你好”和第二扬声器发出的“你好”同时到达用户。当用户在听到第一扬声器发出的“你好”和第二扬声器发出的“你好”这两个声音分量时,会感觉到“你好”的声像位置来自于显示屏下方。
当一个人的声音“你好”的声像位置来自于显示屏上方,第一音频信息的音量可以大于第二音频信息的音量,将第一音频信号在第一时间发送给第一功放模块,通过第一功率模块放大后发给第一扬声器,以使第一扬声器在第一时刻发出“你好”,将第二音频信号在第二时间发送给第二功放模块,通过第二功率模块放大后发给第二扬声器,以使第二扬声器在第二时刻发出“你好”。从而使得第一扬声器发出的“你好”的音量大于第二扬声器发出的“你好”的音量,且第一扬声器发出的“你好”和第二扬声器发出的“你好”同时或几乎同时到达用户。当用户在听到第一扬声器发出的“你好”和第二扬声器发出的“你好”这两个声音分量时,会感觉到“你好”的声像位置来自于显示屏上方。
本实施例通过调节从第一扬声器发出的第一声音大小和从第二扬声器的发出的第二声音的大小的比例,从而实现声像位置在高度方向的调整,以使声像位置与画面同步。同时第一扬声器在第一时刻T1发声,第二扬声器在第二时刻T2发声,从而用户能在第三时刻同时或几乎同时接收到第一声音和第二声音,声像位置定位得更准确,不会出现画面与声像位置偏位的情况,以使声像位置与画面实现精准的同步,立体声效果好,提高用户体验。
可以理解的是,第三时刻可以指某个特定的时刻,也可以指某个很小的时刻范围。也就是说,用户可以正好在第三时刻同时接收第一声音和第二声音,用户也可以接收到第一声音后间隔一定时间间隙再接收到第二声音,但是用户无法感知这个时间间隙,也就是说,这个时间间隙不会对用户对声像位置定位产生偏差。
可以理解的是，时间差△T=(D1-D2)/V，其中，D1为第一扬声器发出的第一声音从第一扬声器至用户观看区域的传输路径长度，D2为第二扬声器发出的第二声音从第二扬声器至用户观看区域的传输路径长度，V=340米/秒（空气中声速）。具体的，时间差△T的取值可以为1ms至50ms，例如2ms、5ms或10ms等，可以准确地进行立体声的调节。
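按上式，下面的Python片段演示如何计算时间差△T，并将其换算为渲染模块对第二路音频施加的延迟样本数。其中的路径长度与采样率均为假设的示例数值：

```python
SPEED_OF_SOUND = 340.0  # 米/秒, 与正文取值一致

def emit_time_offset(d1_m, d2_m, fs=48000):
    """按 △T = (D1 - D2) / V 计算时间差及对应的延迟样本数。
    d1_m: 第一声音经墙体、天花板两次反射到达用户观看区域的路径长度(米);
    d2_m: 第二声音直接到达用户观看区域的路径长度(米)。"""
    delta_t = (d1_m - d2_m) / SPEED_OF_SOUND   # 秒
    delay_samples = round(delta_t * fs)        # 施加在第二路音频上的延迟样本数
    return delta_t, delay_samples

if __name__ == "__main__":
    # 假设反射路径约 6.5 米, 直达路径约 3.0 米
    dt, n = emit_time_offset(6.5, 3.0)
    print(f"时间差约 {dt * 1000:.1f} ms (正文建议范围 1~50 ms), 第二路音频需延迟 {n} 个样本")
```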
在一些实施例中,当声像位置变化时,时间差△T也可以发生变化以强化声像的位置信息。例如,当一个人的声音“你好”的声像位置从显示屏下方移到显示屏上方,声像位置位于显示屏下方时,时间差为△T1,声像位置为显示上方时,时间差为△T2,时间差△T2小于时间差△T1。也就是说,声像位置在从显示屏下方移到显示屏上方的过程中,第一扬声器发出的“你好”会比第二扬声器发出的“你好”先到达用户被用户接收,让用户能够明显感受到声像位置的移动。
具体的，如图11，获取模块获取影像内容的画面信息和音频信息。从音频信息中提取第一音频信息和第二音频信息。音频信息包括第一信息和第二信息，第一信息为左声道音频信息，第二信息为右声道音频信息。从第一信息中提取第一子信息和第二子信息，从第二信息中提取第三子信息和第四子信息，其中，第一子信息和第三子信息形成第一音频信息，第二子信息和第四子信息形成第二音频信息。
渲染模块对第一子信息、第二子信息、第三子信息和第四子信息的音量大小进行增益调节,具体的,渲染模块根据画面信息确定音频信息的声像位置,根据声像位置对第一子信息、第二子信息、第三子信息和第四子信息的音量比例进行调节。同时渲染模块还控制第一子信息、第二子信息、第三子信息和第四子信息发送到下个模块的发送时延。
第一功放模块包括第一功放和第二功放,第二功放模块包括第三功放和第四功放。两个第一扬声器分别为第一扬声器L(左侧第一扬声器)和第一扬声器R(右侧第一扬声器),两个第二扬声器分别为第二扬声器L(左侧第二扬声器)和第二扬声器R(右侧第二扬声器)。经过渲染模块渲染后的第一子信息发送给第一功放,经第一功放功率放大后传递给第一扬声器L;经过渲染模块渲染后的第三子信息发送给第二功放,经第二功放功率放大后传递给第一扬声器R,第一扬声器L和第一扬声器R在第一时刻T1发出声音。经过渲染模块渲染后的第二子信息发送给第三功放,经第三功放功率放大后传递给第二扬声器L;经过渲染模块渲染后的第四子信息发送给第四功放,经第四功放功率放大后传递给第二扬声器R,第二扬声器L和第二扬声器R在第二时刻T2发出声音。其中,第一时刻和第二时刻存在时间差。
本实施例通过调节从第一扬声器L、第一扬声器R、第二扬声器L和第二扬声器R发出的声音的大小比例,从而实现声像位置的调整,以使声像位置与画面同步。同时第一扬声器L和第一扬声器R在第一时刻T1发声,第二扬声器L和第二扬声器R在第二时刻T2发声,从而用户能在第三时刻同时接收到第一扬声器L、第一扬声器R、第二扬声器L和第二扬声器R发出的声音,声像位置定位得更准确,不会出现画面与声像位置偏位的情况,以使声像位置与画面实现精准的同步,立体声效果好,提高用户体验。
当然,在一些实施例中,处理器还可以仅通过控制第一扬声器和第二扬声器发声的时间,也就是说,第一扬声器和第二扬声器不同步发声,使得第一扬声器发出的第一声音和第二扬声器发出的第二声音同时到达用户观看区域,同时被用户接收,立体声效果好,提高用户体验。
一些实施例中,请参阅图12和图13,图12是图6所示结构的另一种实施例的结构示意图。图13是图12所示结构的音频输出处理过程示意图。
处理器50包括音频模块,其中,音频模块可以包括获取模块、渲染模块、混音模块和功放模块等功能模块。获取模块、渲染模块、混音模块和功放模块依次耦合,功放模块与第一扬声器31和第二扬声器32均耦合。影像内容中的音频信息通过处理器50采用上混算法处理。
具体的,获取模块获取影像内容的画面信息和音频信息。影像内容可以是视频内容、游戏、实时视频等。实时视频例如可以是视频通话、视频直播或者视频会议等。从音频信息中提取第一音频信息和第二音频信息。音频信息包括第一信息和第二信息,第一信息为左声道音频信息,第二信息为右声道音频信息。
将第一信息和第二信息经过高度内容信号提取的上混算法处理后，第一信息分成左高度声道信号和左主声道信号，第二信息分成右高度声道信号和右主声道信号，继而将左高度声道信号分为第一信号和第二信号，右高度声道信号分为第三信号和第四信号，左主声道信号分为第五信号和第六信号，右主声道信号分为第七信号和第八信号。其中，第一信号、第三信号、第五信号和第七信号形成第一音频信息，第二信号、第四信号、第六信号和第八信号形成第二音频信息。
渲染模块对第一信号、第二信号、第三信号、第四信号、第五信号、第六信号、第七信号和第八信号的音量大小进行增益调节。具体的，渲染模块根据画面信息确定音频信息的声像位置，根据声像位置对第一信号、第二信号、第三信号、第四信号、第五信号、第六信号、第七信号和第八信号的音量比例进行调节。同时渲染模块还控制第一信号、第二信号、第三信号、第四信号、第五信号、第六信号、第七信号和第八信号发送到下一个模块的发送时延。
混音模块包括第一模块、第二模块、第三模块和第四模块,第一模块对经过渲染模块渲染后的第一信号和第五信号进行混音得到第一混音,第二模块对经过渲染模块渲染后的第三信号和第七信号进行混音得到第二混音,第三模块对经过渲染模块渲染后的第二信号和第六信号进行混音得到第三混音,第四模块对经过渲染模块渲染后的第四信号和第八信号进行混音得到第四混音。
功放模块包括第一功放模块和第二功放模块,第一功放模块包括第一功放和第二功放,第二功放模块包括第三功放和第四功放。两个第一扬声器31分别为第一扬声器L(左侧第一扬声器)和第一扬声器R(右侧第一扬声器),两个第二扬声器32分别为第二扬声器L(左侧第二扬声器)和第二扬声器R(右侧第二扬声器)。第一混音发送给第一功放,经第一功放功率放大后传递给第一扬声器L;第二混音发送给第二功放,经第二功放功率放大后传递给第一扬声器R,第一扬声器L和第一扬声器R在第一时刻发出声音。第三混音发送给第三功放,经第三功放功率放大后传递给第二扬声器L;第四混音发送给第四功放,经第四功放功率放大后传递给第二扬声器R,第二扬声器L和第二扬声器R在第二时刻发出声音。其中,第一时刻和第二时刻存在时间差。
本申请将影像内容中的音频信息经过高度内容信号提取的上混算法处理后,有效实现左高度声道信号、右高度声道信号、左主声道信号和右主声道信号的内容在高度方向特定位置上的声像定位,以使各种声音的声像在高度方向的定位可以按需要进行调整,实现各种声音与画面的定位合一,如飞机引擎的声像能定位在显示屏上方的位置,将人物对白的声像定位在显示屏中部的位置,将脚步声的声像定位在显示屏底部的位置等。
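本申请未限定具体的上混算法。下面给出一个高度简化的Python草图，仅用左右声道的差分成分近似"高度内容"，演示由2声道拆分出左/右高度声道与左/右主声道的思路；实际的高度内容信号提取算法通常更复杂，此处的函数名与系数均为本文假设：

```python
import numpy as np

def simple_height_upmix(left, right, height_gain=0.7):
    """极简的 2 声道 -> 左/右高度声道 + 左/右主声道 上混示意:
    用左右声道的差分(非相关/环境)成分近似高度内容, 其余作为主声道内容。
    满足 主声道 + 高度声道 = 原始声道, 仅为原理性草图。"""
    ambience = 0.5 * (left - right)            # 非相关成分, 作为高度内容的近似
    height_l = height_gain * ambience
    height_r = -height_gain * ambience
    main_l = left - height_l                   # 剩余部分作为左主声道
    main_r = right - height_r                  # 剩余部分作为右主声道
    return height_l, height_r, main_l, main_r

if __name__ == "__main__":
    fs = 48000
    t = np.arange(fs) / fs
    left = np.sin(2 * np.pi * 440 * t)
    right = np.sin(2 * np.pi * 440 * t + 0.3)  # 与左声道略有相位差的右声道
    hl, hr, ml, mr = simple_height_upmix(left, right)
    ratio = float(np.sum(hl ** 2 + hr ** 2) / np.sum(left ** 2 + right ** 2))
    print(f"高度声道能量占比约 {ratio:.3f}")
```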
请参阅图14,图14是图1所示的显示设备100的另一种实施例的结构示意图。
本实施例与图1所示的实施例大致相同,不同的是,本实施例中的显示设备100还包括距离探测器70,距离探测器70设于壳体10内部,距离探测器70与处理器50耦合,当然,距离探测器70也可以设于壳体10外部。距离探测器70用于探测显示设备100所处的应用环境的空间参数,空间参数包括多种参数,例如显示设备100与墙体之间的第一距离、显示设备100与天花板之间的第二距离及显示设备100与用户之间的第三距离。
第一扬声器31的发声方向能够调整,具体的,第一扬声器31可以根据空间参数来调整发声方向。示例的,显示设备100可以包括驱动组件,第一扬声器31设于驱动组件上或与驱动组件配合,驱动组件与处理器耦合,处理器用于驱动驱动组件根据距离探测器70获得的空间参数来调整第一扬声器31的发声方向。也就是说,显示设备处于的应用环境的空间参数变化时,第一扬声器31的发声方向可变,以使显示设备100能够适用于不同的应用环境,且根据应用环境调整第一扬声器的发声方向,实现第一扬声器的发声方向的自动调节,以使第一扬声器发出的第一声音能准确的传至用户观看区域,提高用户体验。显示设备100还可以根据用户所处的位置,调整用户观看区域的位置,以使用户无论移动到哪个位置都能有很好的视听体验。
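下面的Python片段演示一种可能的角度估算方式：利用"入射角等于反射角"把两次反射的折线路径展开成直线，由第一距离、第二距离和第三距离反推第一扬声器的出声仰角。其中扬声器高度、人耳高度等数值均为说明用的假设，并非本申请限定的计算方法：

```python
import math

def required_angle_deg(d_wall, d_ceiling, d_user,
                       speaker_height=1.0, ear_height=1.2):
    """根据距离探测器获得的空间参数(示意)反推出声仰角:
    d_wall    -- 第一距离: 第一扬声器所在平面到墙体的水平距离
    d_ceiling -- 第二距离: 第一扬声器到天花板的竖直距离
    d_user    -- 第三距离: 显示设备到用户的水平距离
    speaker_height / ear_height 为说明用的假设值。"""
    # 展开后的总竖直行程: 升到天花板的一段 + 从天花板降到人耳高度的一段
    vertical = d_ceiling + (speaker_height + d_ceiling - ear_height)
    # 展开后的总水平行程: 向后到墙体的一段 + 向前到用户的一段
    horizontal = d_wall + (d_wall + d_user)
    return math.degrees(math.atan2(vertical, horizontal))

if __name__ == "__main__":
    # 假设: 距墙 0.3 米, 扬声器到天花板 1.8 米, 用户距离 3 米
    print(f"建议出声仰角约 {required_angle_deg(0.3, 1.8, 3.0):.1f} 度")
```

在上述假设取值下得到的仰角落在正文提到的35度~45度附近，可作为驱动组件调整发声方向的一种参考计算。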
当然,在其他实施例的一种实施场景中,空间参数还包括显示设备100与其他障碍物之间的距离。或者,空间参数包括第一距离、第二距离和第三距离等距离中至少一个参数。在其他实施例的又一种实施场景中,第一扬声器31的发声方向还可以通过人工手动调节。
可以理解的是，本实施例中的显示设备100可以根据显示设备100的应用环境对第一扬声器31的发声方向进行调整。在首次使用显示设备100，或者将显示设备100移动到一个新的应用环境中时，显示设备100会通过距离探测器70探测应用环境的空间参数。显示设备100通过空间参数调整第一扬声器31的发声方向，以使显示设备100在不同的应用环境均能保证第一扬声器31发出的第一声音经过反射后准确传输至用户观看区域，提高用户视听体验。
一些实施例中,显示设备100还可以包括感知传感器,即感知显示设备100被搬动,或者感知显示设备100的位置发生变化的传感器。例如感知传感器可以是陀螺仪、加速度计等。当感知传感器检测到显示设备100的位置发生变化后,可以触发距离探测器70探测应用环境的空间参数,以使第一扬声器31根据空间参数调整发声方向,从而只要显示设备100的位置发生变化,距离探测器70就会获取应用环境的空间参数,以使第一扬声器31根据空间参数调整发声方向,时刻保证用户的视听体验。
可以理解的是,感知传感器在显示设备100处于断电的情况下也可以记录显示设备100被移动的信息,当显示设备100通电后,感知传感器触发距离探测器70探测应用环境的空间参数,以根据空间参数调整第一扬声器31的发声方向,就算断电后搬移显示设备100,依然会在显示设备100再次启用时根据应用环境调整第一扬声器31的发声方向,保证用户的视听体验。
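下面用一个示意性的Python控制流程片段串起"感知传感器检测到搬动、触发距离探测器测距、按新空间参数调整第一扬声器发声方向"的过程。其中的传感器、探测器与驱动组件接口均为假设的封装，仅用于说明控制逻辑：

```python
import math

def angle_from_distances(d_wall, d_ceiling, d_user,
                         speaker_height=1.0, ear_height=1.2):
    """与前文相同思路的出声仰角估算(各高度为假设值), 内联一份以保持示例自包含。"""
    vertical = d_ceiling + (speaker_height + d_ceiling - ear_height)
    horizontal = d_wall + (d_wall + d_user)
    return math.degrees(math.atan2(vertical, horizontal))

class SpeakerOrientationController:
    """示意性控制流程: 感知到显示设备被搬动 -> 触发距离探测器 -> 重新调整发声方向。
    motion_sensor / range_finder / actuator 为假设的硬件封装接口。"""

    def __init__(self, motion_sensor, range_finder, actuator):
        self.motion_sensor = motion_sensor
        self.range_finder = range_finder
        self.actuator = actuator

    def poll_once(self):
        if self.motion_sensor.moved_since_last_check():
            d_wall, d_ceiling, d_user = self.range_finder.measure()
            self.actuator.rotate_first_speaker_to(
                angle_from_distances(d_wall, d_ceiling, d_user))

if __name__ == "__main__":
    # 用桩对象模拟一次"被搬动后重新测距并调整角度"的流程
    class FakeMotion:
        def moved_since_last_check(self): return True
    class FakeRadar:
        def measure(self): return 0.3, 1.8, 3.0
    class FakeActuator:
        def rotate_first_speaker_to(self, deg): print(f"把第一扬声器转到约 {deg:.1f} 度")

    SpeakerOrientationController(FakeMotion(), FakeRadar(), FakeActuator()).poll_once()
```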
本实施例中，距离探测器70包括雷达，雷达能够发射及接收超声波，通过超声波测得的应用环境的空间参数相比其他方式得到的数据更加准确。当然，在其他实施例的一种实施场景中，距离探测器70还可以包括麦克风，即显示设备100发出声音，声音经过障碍物反射回到显示设备100后被麦克风接收，通过计算声音发出和接收的时间差得到显示设备100与障碍物之间的距离。
在其他实施例的另一种实施场景中,距离探测器70包括摄像头,通过摄像头拍照识别显示设备100与障碍物之间的距离。其中,障碍物可以是墙体、天花板及用户等。在其他实施例的又一种实施场景中,距离探测器70还可以包括雷达、麦克风和摄像头中的至少两种,对于不同的障碍物采用不同的测距方式,以获得更加准确的空间参数。
本实施例中，距离探测器70与处理器50耦合，距离探测器70向处理器50发送指令，指令可以是包括空间参数信息的脉冲信号或模拟信号，响应于该指令，第一时刻和第二时刻之间的时间差变化。具体的，处理器50根据指令中携带的信息，例如第一距离、第二距离及第三距离调整第一时刻和第二时刻之间的时间差，也就是说，显示设备100处于的应用环境的空间参数变化时，第一时刻和第二时刻之间的时间差变化，以保证第一扬声器31和第二扬声器32发出的声音同时到达用户观看区域，以使声像定位更准确。当然，在其他实施例中，距离探测器70还可以通过探测第一扬声器31发出的声音到达用户区域的路径，及探测第二扬声器32发出的声音到达用户区域的路径，根据两个路径的路径差确定第一时刻和第二时刻的时间差。
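结合上文的空间参数，下面的Python片段演示一种由第一距离、第二距离和第三距离估算两条传播路径长度，进而得到第一时刻与第二时刻之间时间差的方法。各高度取值与近似方式均为本文假设：

```python
import math

SPEED_OF_SOUND = 340.0  # 米/秒

def time_offset_from_room(d_wall, d_ceiling, d_user,
                          first_speaker_height=1.0, second_speaker_height=0.4,
                          ear_height=1.2):
    """根据空间参数估算两条传播路径并得到时间差(示意, 各高度为假设值):
    D1: 第一声音经墙体、天花板两次反射到达用户的路径, 用镜像展开后的直线长度近似;
    D2: 第二声音从第二扬声器直接到达用户的路径。"""
    vertical = d_ceiling + (first_speaker_height + d_ceiling - ear_height)
    horizontal = d_wall + (d_wall + d_user)
    d1 = math.hypot(horizontal, vertical)                      # 反射路径长度
    d2 = math.hypot(d_user, ear_height - second_speaker_height)  # 直达路径长度
    return (d1 - d2) / SPEED_OF_SOUND

if __name__ == "__main__":
    dt = time_offset_from_room(d_wall=0.3, d_ceiling=1.8, d_user=3.0)
    print(f"第一扬声器需比第二扬声器提前约 {dt * 1000:.1f} ms 发声")
```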
在其他实施例中，显示设备还可以包括用户输入入口，用户输入入口可以是手机里与显示设备交互的应用软件，也可以是显示设备的设置窗口。用户通过用户输入入口填入第一距离、第二距离或第三距离等空间参数，第一扬声器根据用户填入的数据调整发声方向。该方法相比通过距离探测器获取空间参数的方式成本更低。
请参阅图15,图15是本实施例提供的一种显示设备100的音频输出方法的流程示意图。该音频输出方法应用于如图1所示的显示设备100。该音频输出方法包括如下步骤S110~S130。
S110:获取音频信息。
具体的,如图8,通过获取模块获取影像内容的画面信息和音频信息。影像内容可以是视频内容、游戏、实时视频等。实时视频例如可以是视频通话、视频直播或者视频会议等。音频信息包括第一信息和第二信息,第一信息为左声道音频信息,第二信息为右声道音频信息。
S120:从音频信息中提取第一音频信息和第二音频信息。
具体的,如图8,获取模块从第一信息中提取第一子信息和第二子信息,从第二信息提取第三子信息和第四子信息,其中,第一子信息和第三子信息形成第一音频信息,第二子信息和第四子信息形成第二音频信息,第一音频信息和第二音频信息分别对应第一扬声器和第二扬声器,第一音频信息和第二音频信息可以对应于同一个声音内容,例如,第一音频信息和第二音频信息的声音内容可以对应于同一个人说的“你好”。
S130:将第一音频信息通过第一扬声器31输出,将第二音频信息通过第二扬声器32输出。
具体的,如图3和图8,首先处理第一音频信息和第二音频信息,以调整第一扬声器31发出的第一声音和第二扬声器32发出的第二声音形成的声像的位置。处理第一音频信息和处理第二音频信息包括调节第一音频信息和第二音频信息的音量比例。
示例的,渲染模块对第一子信息、第二子信息、第三子信息和第四子信息的音量大小进行增益调节,具体的,渲染模块根据画面信息确定音频信息的声像位置,根据声像位置对第一子信息、第二子信息、第三子信息和第四子信息的音量比例进行调节。
然后,将经过渲染模块渲染后的第一子信息发送给第一功放,经第一功放功率放大后传递给第一扬声器L(左侧第一扬声器)。将经过渲染模块渲染后的第三子信息发送给第二功放,经第二功放功率放大后传递给第一扬声器R(右侧第一扬声器)。将经过渲染模块渲染后的第二子信息发送给第三功放,经第三功放功率放大后传递给第二扬声器L(左侧第二扬声器)。将经过渲染模块渲染后的第四子信息发送给第四功放,经第四功放功率放大后传递给第二扬声器R(右侧第二扬声器)。
本音频输出方法通过调节从第一扬声器L、第一扬声器R、第二扬声器L和第二扬声器R发出的声音大小比例,从而实现声像位置在三维空间上的调整,以使声像位置与画面同步,实现了真实的三维空间声场音效。同时,本申请的第一扬声器通过将发声方向朝向显示设备的后上方,使得经天花板反射到达用户观看区域的声音的天空声像定位效果好,立体声效果好,提高了用户的视听体验。
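下面的Python片段把上述"左/右声道各拆成上、下两份并分配给四个扬声器"的路由过程写成一个简化示例，增益分配方式、测试信号与扬声器命名均为说明性假设：

```python
import numpy as np

def route_to_four_speakers(left, right, height_ratio):
    """把左/右声道各拆成"上/下"两份(对应第一至第四子信息的示意),
    按目标声像高度 height_ratio(0 表示底部, 1 表示顶部)分配增益后送往四个扬声器。"""
    g_top, g_bottom = height_ratio, 1.0 - height_ratio
    return {
        "第一扬声器L": g_top * left,       # 第一子信息
        "第一扬声器R": g_top * right,      # 第三子信息
        "第二扬声器L": g_bottom * left,    # 第二子信息
        "第二扬声器R": g_bottom * right,   # 第四子信息
    }

if __name__ == "__main__":
    fs = 48000
    t = np.arange(fs // 10) / fs
    left = np.sin(2 * np.pi * 500 * t)
    right = 0.8 * np.sin(2 * np.pi * 500 * t)
    outs = route_to_four_speakers(left, right, height_ratio=0.7)
    for name, sig in outs.items():
        print(name, f"峰值 {np.max(np.abs(sig)):.2f}")
```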
当然,在其他实施例的一种实施场景中,处理第一音频信息和第二音频信息还可以包括控制第一扬声器31在第一时刻发出声音,控制第二扬声器32在第二时刻发出声音,用户在第三时刻同时接收第一扬声器31的声音和第二扬声器32的声音,其中,第一时刻和第二时刻存在时间差,第一时刻早于第二时刻,以使第一扬声器接收第一音频信息,第二扬声器接收与第一音频信息相对应的第二音频信息,第一音频信息和第二音频信息的播放时间不同步。具体的,如图11,渲染模块对第一子信息、第二子信息、第三子信息和第四子信息的音量比例进行调节的同时,渲染模块还控制第一子信息、第二子信息、第三子信息和第四子信息发送到下个模块的发送时延。
将经过渲染模块渲染后的第一子信息发送给第一功放，经第一功放功率放大后传递给第一扬声器L；将经过渲染模块渲染后的第三子信息发送给第二功放，经第二功放功率放大后传递给第一扬声器R，第一扬声器L和第一扬声器R在第一时刻发出声音。将经过渲染模块渲染后的第二子信息发送给第三功放，经第三功放功率放大后传递给第二扬声器L；将经过渲染模块渲染后的第四子信息发送给第四功放，经第四功放功率放大后传递给第二扬声器R，第二扬声器L和第二扬声器R在第二时刻发出声音。其中，第一时刻和第二时刻存在时间差。
本实施场景通过调节从第一扬声器L、第一扬声器R、第二扬声器L和第二扬声器R发出的声音的大小比例,从而实现声像位置的调整,以使声像位置与画面同步。同时通过控制第一扬声器L和第一扬声器R在第一时刻发声,第二扬声器L和第二扬声器R在第二时刻发声,也就是说,第一扬声器接收第一音频信息之后在第一时刻发出第一声音,第二扬声器接收第二音频信息之后在第二时刻发出与第一声音相对应的第二声音,从而用户能在第三时刻同时接收到第一扬声器L、第一扬声器R、第二扬声器L和第二扬声器R发出的声音,声像位置定位得更准确,不会出现画面与声像位置偏位的情况,以使声像位置与画面实现精准的同步,立体声效果好,提高用户体验。
当然,在其他实施例中,音频输出方法还可以仅控制第一扬声器在第一时刻发声,第二扬声器在第二时刻发声,从而用户能在第三时刻同时接收到第一扬声器和第二扬声器发出的声音,立体声效果好,提高用户体验。
在其他实施例的另一种实施场景中,如图12,音频输出方法还可以对第一信息和第二信息进行高度内容信号提取的上混算法处理后生成左高度声道信号、右高度声道信号、左主声道信号和右主声道信号。然后对左高度声道信号分别做特定的增益、时延、混音及功率放大处理后,分别传输给第一扬声器L及第二扬声器L,实现左高度声道信号内容在高度方向特定位置上的声像定位。同理,右高度声道信号、左主声道信号及右主声道信号,通过同样的方式实现高度方向特定位置的声像定位,不再赘述。
本实施场景通过将影像内容中的音频信息经过高度内容信号提取的上混算法处理后,有效实现左高度声道信号、右高度声道信号、左主声道信号和右主声道信号的内容在高度方向特定位置上的声像定位,以使各种声音的声像在高度方向的定位可以按需要进行调整,实现各种声音与画面的定位合一,如飞机引擎的声像能定位在显示屏上方的位置,将人物对白的声像定位在显示屏中部的位置,将脚步声的声像定位在显示屏底部的位置等。
在其他实施例中,音频输出方法还可以仅从音频信息中获取第一音频信息,并将第一音频信息通过第一扬声器输出。
在其他实施例中,音频输出方法还可以应用于图14所示的显示设备100。音频输出方法还包括检测显示设备的应用环境,获取应用环境的空间参数,响应于空间参数的变化,第一扬声器31的发声方向变化。具体的,通过距离探测器70探测显示设备100所处的应用环境的空间参数,空间参数包括多种参数,例如显示设备100与墙体之间的第一距离、显示设备100与天花板之间的第二距离及显示设备与用户之间的第三距离等参数。处理器响应于空间参数中的两种或以上参数调整第一扬声器31的发声方向。具体处理器可以通过控制驱动组件来调整第一扬声器31的发声方向,以使显示设备100能够适用于多种不同的应用环境。
可以理解的是，本实施例中音频输出方法可以根据显示设备100的应用环境对第一扬声器31的发声方向进行调整。在首次使用显示设备100，或者将显示设备100移动到一个新的应用环境中时，显示设备100会通过距离探测器70探测应用环境的空间参数。然后通过空间参数调整第一扬声器31的发声方向，以使显示设备100在不同的应用环境均能保证第一扬声器31发出的第一声音经过反射后准确传输至用户观看区域，提高用户视听体验。
当然，在一些实施例中，显示设备的应用环境的空间参数还可以通过用户手动输入。例如用户可以通过显示设备的用户输入入口输入。用户输入入口可以是手机里与显示设备交互的应用软件，也可以是显示设备的设置窗口。用户通过用户输入入口填入第一距离、第二距离或第三距离等空间参数，处理器响应于用户输入的空间参数，根据用户填入的数据调整第一扬声器的发声方向。该方法相比通过距离探测器获取空间参数的方式成本更低。
在一些实施例中,音频输出方法还能时刻感知显示设备100的移动状态,移动状态包括被搬动及位置发生变化,当显示设备100的位置发生变化后,触发距离探测器70探测应用环境的空间参数,根据空间参数调整第一扬声器31的发声方向,从而只要显示设备100的位置发生变化,距离探测器70就会获取应用环境的空间参数,以使第一扬声器31根据空间参数调整发声方向,时刻保证用户的视听体验。
在一些实施例中,音频输出方法还包括响应于空间参数的变化,第一时刻和第二时刻之间的时间差变化。具体的,处理器可以根据空间参数,例如第一距离、第二距离及第三距离调整第一时刻和第二时刻之间的时间差,以保证第一扬声器和第二扬声器发出的声音同时到达用户观看区域,以使声像定位更准确。当然,在其他实施例中,还可以根据第一扬声器发出的第一声音到达用户区域的路径与第二扬声器发出的第二声音到达用户区域的路径的路径差获得第一时刻和第二时刻的时间差。
可以理解是,执行音频输出方法各个步骤的部件不限于上述描述的部件,可以是任何能执行上述方法的部件均可。
需要说明的是,在不冲突的情况下,本申请中的实施例及实施例中的特征可以相互组合。
以上,仅为本申请的部分实施例和实施方式,本申请的保护范围不局限于此,任何熟知本领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。

Claims (26)

  1. 一种显示设备,其特征在于,所述显示设备包括显示屏、第一扬声器和第二扬声器,所述第一扬声器设于所述显示屏的后侧,所述第一扬声器的发声方向朝向所述显示设备的后上方;
    所述第二扬声器的发声方向朝向所述显示设备前方或朝向所述显示设备下方;
    所述第一扬声器和所述第二扬声器不同步发声。
  2. 根据权利要求1所述的显示设备,其特征在于,所述第一扬声器的出声方向与水平方向呈10度~80度。
  3. 根据权利要求1或2所述的显示设备,其特征在于,所述第一扬声器发出的声音经过位于所述显示设备后方的第一障碍物反射至位于所述显示设备上方的第二障碍物,经所述第二障碍物反射至所述显示设备前方的用户观看区域。
  4. 根据权利要求1至3任一项所述的显示设备,其特征在于,所述第一扬声器在第一时刻发出第一声音,所述第二扬声器在第二时刻发出与所述第一声音相对应的第二声音,所述第一声音和所述第二声音在所述用户观看区域混合;其中,所述第一时刻和所述第二时刻存在时间差。
  5. 根据权利要求4所述的显示设备,其特征在于,所述显示设备处于的应用环境的空间参数变化时,所述第一时刻和所述第二时刻之间的时间差变化。
  6. 根据权利要求4或5所述的显示设备,其特征在于,所述第一声音和所述第二声音的音量比例变化时,所述第一声音和所述第二声音形成的声像的位置发生变化。
  7. 根据权利要求1至6任一项所述的显示设备,其特征在于,所述第一扬声器的发声方向可变。
  8. 根据权利要求7所述的显示设备,其特征在于,所述显示设备处于的应用环境的空间参数变化时,所述第一扬声器的发声方向可变。
  9. 根据权利要求5或8所述的显示设备,其特征在于,所述空间参数包括第一距离、第二距离和第三距离中的至少一个,所述第一距离为所述显示设备与所述第一障碍物之间的距离,所述第二距离为所述显示设备与所述第二障碍物之间的距离,所述第三距离为所述显示设备与所述用户之间的距离。
  10. 根据权利要求9所述的显示设备,其特征在于,所述显示设备还包括处理器,所述处理器与所述第一扬声器和所述第二扬声器均耦合,所述处理器用于控制所述第一扬声器在所述第一时刻发出所述第一声音,控制所述第二扬声器在所述第二时刻发出所述第二声音。
  11. 根据权利要求10所述的显示设备,其特征在于,所述处理器还用于控制经所述第一扬声器发出的所述第一声音的音量和经所述第二扬声器发出的所述第二声音的音量的比例。
  12. 根据权利要求11所述的显示设备,其特征在于,所述显示设备还包括距离探测器,所述距离探测器用于探测所述显示设备的应用环境,获取所述应用环境的空间参数,响应于所述空间参数的变化,所述第一扬声器的发声方向变化。
  13. 根据权利要求12所述的显示设备,其特征在于,所述距离探测器与所述处理器耦合,所述距离探测器向所述处理器发送指令,响应于所述指令,所述第一时刻和所述第二时刻之间的时间差变化。
  14. 根据权利要求1至13中任一项所述的显示设备,其特征在于,所述显示设备包括顶部和底部,所述第二扬声器相对所述第一扬声器靠近所述底部设置。
  15. 一种显示设备的音频输出方法,其特征在于,所述显示设备包括显示屏、第一扬声器和第二扬声器,所述第一扬声器向所述显示设备的后上方发声,所述第二扬声器的发声方向朝向所述显示设备前方或朝向所述显示设备下方,所述音频输出方法包括:
    所述第一扬声器接收第一音频信息,所述第二扬声器接收与所述第一音频信息相对应的第二音频信息,所述第一音频信息和所述第二音频信息的播放时间不同步。
  16. 根据权利要求15所述的音频输出方法,其特征在于,所述第一扬声器发出的声音经过位于所述显示设备后方的第一障碍物反射至位于所述显示设备上方的第二障碍物,经所述第二障碍物反射至所述显示设备前方的用户观看区域。
  17. 根据权利要求15或16所述的音频输出方法,其特征在于,所述音频输出方法包括所述第一扬声器接收所述第一音频信息之后在第一时刻发出第一声音,所述第二扬声器接收所述第二音频信息之后在第二时刻发出与所述第一声音相对应的第二声音,所述第一声音和所述第二声音在所述用户观看区域混合;其中,所述第一时刻和所述第二时刻存在时间差。
  18. 根据权利要求15至17任一项所述的音频输出方法,其特征在于,所述第一声音和所述第二声音的音量比例变化时,所述第一声音和所述第二声音形成的声像的位置发生变化。
  19. 根据权利要求15至18中任一项所述的音频输出方法,其特征在于,所述音频输出方法还包括:
    检测所述显示设备的应用环境,获取所述应用环境的空间参数;
    响应于所述空间参数的变化,所述第一扬声器的发声方向变化。
  20. 根据权利要求17所述的音频输出方法,其特征在于,所述音频输出方法还包括:
    检测所述显示设备的应用环境,获取所述应用环境的空间参数;
    响应于所述空间参数的变化,所述第一时刻和所述第二时刻之间的时间差变化。
  21. 根据权利要求19或20所述的音频输出方法,其特征在于,所述空间参数包括第一距离、第二距离和第三距离中的至少一个,所述第一距离为所述显示设备与所述第一障碍物之间的距离,所述第二距离为所述显示设备与所述第二障碍物之间的距离,所述第三距离为所述显示设备与所述用户之间的距离。
  22. 一种显示设备,其特征在于,所述显示设备包括显示屏、第一扬声器和第二扬声器,所述第一扬声器设于所述显示屏的后侧,所述第一扬声器的发声方向朝向所述显示设备的后上方;
    所述第二扬声器的发声方向不同于所述第一扬声器的发声方向。
  23. 根据权利要求22所述的显示设备,其特征在于,所述第一扬声器的出声方向与水平方向呈10度~80度。
  24. 根据权利要求23所述的显示设备,其特征在于,所述第一扬声器发出的声音经过位于所述显示设备后方的第一障碍物反射至位于所述显示设备上方的第二障碍物,经所述第二障碍物反射至所述显示设备前方的用户观看区域。
  25. 根据权利要求22至24任一项所述的显示设备,其特征在于,所述第一扬声器的发声方向可变。
  26. 根据权利要求25所述的显示设备,其特征在于,所述显示设备处于的应用环境的空间参数变化时,所述第一扬声器的发声方向可变。
PCT/CN2022/085410 2021-04-13 2022-04-06 显示设备及其音频输出方法 WO2022218195A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22787422.9A EP4304164A1 (en) 2021-04-13 2022-04-06 Display device and audio output method therefor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110396109.1A CN115209077A (zh) 2021-04-13 2021-04-13 显示设备及其音频输出方法
CN202110396109.1 2021-04-13

Publications (1)

Publication Number Publication Date
WO2022218195A1 true WO2022218195A1 (zh) 2022-10-20

Family

ID=83571294

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/085410 WO2022218195A1 (zh) 2021-04-13 2022-04-06 显示设备及其音频输出方法

Country Status (3)

Country Link
EP (1) EP4304164A1 (zh)
CN (1) CN115209077A (zh)
WO (1) WO2022218195A1 (zh)

Also Published As

Publication number Publication date
CN115209077A (zh) 2022-10-18
EP4304164A1 (en) 2024-01-10

