WO2022001451A1 - Display device, sound production control method and sound production control device - Google Patents
- Publication number: WO2022001451A1 (PCT/CN2021/094627)
- Authority: WIPO (PCT)
Classifications
- H04R5/02 — Stereophonic arrangements; spatial or constructional arrangements of loudspeakers
- H04R7/045 — Plane diaphragms using the distributed mode principle, i.e. whereby the acoustic radiation is emanated from uniformly distributed free bending wave vibration induced in a stiff panel and not from pistonic motion
- G09F9/00 — Indicating arrangements for variable information in which the information is built up on a support by selection or combination of individual elements
- H04R1/26 — Spatial arrangements of separate transducers responsive to two or more frequency ranges
- H04R3/12 — Circuits for distributing signals to two or more loudspeakers
- H04R2440/01 — Acoustic transducers using travelling bending waves to generate or detect sound
- H04R2440/05 — Aspects relating to the positioning and way or means of mounting of exciters to resonant bending wave panels
Definitions
- the present disclosure relates to the field of display technology, and in particular, to a display device, a sound production control method, and a sound production control device.
- the purpose of the technical solutions of the present disclosure is to provide a display device, a sound production control method and a sound production control device, which are used to realize the integrated playback of sound and picture of the display device.
- Embodiments of the present disclosure provide a display device, including:
- a display screen including a first display area, a middle display area and a second display area arranged in sequence along the first direction;
- the plurality of sound-emitting units include: a plurality of first sound-emitting units whose orthographic projections on the plane where the display screen is located fall within the first display area, a plurality of second sound-emitting units whose orthographic projections on that plane fall within the second display area, and a plurality of third sound-emitting units whose orthographic projections on that plane fall within the middle display area;
- the plurality of first sound-emitting units and the plurality of second sound-emitting units each include at least a sound-emitting unit that emits sound in a first frequency band, a sound-emitting unit that emits sound in a second frequency band, and a sound-emitting unit that emits sound in a third frequency band;
- the first frequency band, the second frequency band and the third frequency band increase in sequence; all of the plurality of third sound-emitting units emit sound in the second frequency band.
- the orthographic projections of a plurality of the third sound-emitting units on the plane where the display screen is located are evenly distributed in the middle display area, and each of the third sound-emitting units corresponds to one of the sub-areas of the middle display area.
- each of the sound-emitting units includes an exciter and a vibration panel, wherein the exciter is mounted on the vibration panel, and the sound-emitting unit drives the vibration panel through the exciter so that the vibration panel vibrates and emits sound.
- the display device, wherein the display device includes a display panel, one surface of which is the display screen, wherein the display panel includes a plurality of sub-panels and one sub-panel of the plurality of sub-panels is reused as the vibration panel.
- a plurality of the sub-panels are assembled to form the display panel.
- the areas of the first display area and the second display area are equal, and the area of the intermediate display area is at least twice the area of the first display area.
- An embodiment of the present disclosure further provides a sound emission control method, wherein, applied to the display device according to any one of the above, the method includes:
- when a target image frame of the video data is displayed on the display screen, the sounding position of the sounding object in the target image frame is detected, and the third sound-emitting unit corresponding to the sounding position is determined;
- the sound production control method, wherein detecting, when the target image frame of the video data is displayed on the display screen, the sounding position of the sounding object in the target image frame includes:
- performing sound image localization calculation on the left channel data and the right channel data to determine sound image localization information;
- determining the sounding position of the sounding object in the target image frame according to the sound image localization information.
- the sound production control method, wherein, in the step of outputting the middle channel signal to at least one of the third sound-emitting units:
- the orthographic projection, on the plane where the display screen is located, of the third sound-emitting unit to which the middle channel signal is output is located in the middle of the middle display area.
- the sound production control method wherein outputting the middle channel signal to at least one of the third sound production units includes:
- Each sub-channel signal is sent to the corresponding third sound generating unit respectively.
- when the third sound-emitting unit to which the middle channel signal is output and the third sound-emitting unit to which the target sounding signal is output are the same unit, the middle channel signal and the target sounding signal are combined and then output to that third sound-emitting unit.
- An embodiment of the present disclosure further provides a sound emission control device, wherein, applied to the display device according to any one of the above, the device includes:
- a data acquisition module for acquiring video data and audio data of the audio and video data to be output
- a detection module for detecting, when the target image frame of the video data is displayed on the display screen, the sounding position of the sounding object in the target image frame, and determining the third sound-emitting unit corresponding to the sounding position;
- a conversion module for extracting, from the audio signal of the audio data corresponding to the target image frame, the target sounding signal corresponding to the sounding object, and converting the audio signal into a left channel signal, a middle channel signal and a right channel signal;
- an output module for outputting the left channel signal to a plurality of the first sound generating units, outputting the right channel signal to a plurality of the second sound generating units, and outputting the middle channel signal to at least one of the A third sounding unit, and outputting the target sounding signal to the third sounding unit corresponding to the display position.
- FIG. 1 is a schematic structural diagram of a display device according to an embodiment of the disclosure;
- FIG. 2 is a schematic flowchart of an implementation of the sound production control method according to an embodiment of the present disclosure;
- FIG. 3 is a schematic flowchart of another implementation of the sound production control method according to an embodiment of the present disclosure;
- FIG. 4 is a schematic flowchart of determining the sounding position of a sounding object in one implementation of the sound production control method according to an embodiment of the present disclosure;
- FIG. 5 is a schematic diagram explaining the meaning of the sound image;
- FIG. 6 is a schematic flowchart of part of the sound image localization calculation in FIG. 4;
- FIG. 7 is a schematic flowchart of another part of the sound image localization calculation in FIG. 4;
- FIG. 8 is a schematic diagram illustrating the relationship between the audio time difference and the sound image position;
- FIG. 9 is a schematic diagram illustrating the relationship between the left-right channel intensity level difference and the sound image position;
- FIG. 10 is a schematic structural diagram of a sound production control device according to an embodiment of the disclosure.
- the display device includes:
- the display screen 100 includes a first display area 110, a middle display area 130 and a second display area 120 arranged in sequence along the first direction a;
- the plurality of sound-emitting units include: a plurality of first sound-emitting units 210 whose orthographic projections on the plane where the display screen 100 is located fall within the first display area 110, a plurality of second sound-emitting units 220 whose orthographic projections on that plane fall within the second display area 120, and a plurality of third sound-emitting units 230 whose orthographic projections on that plane fall within the middle display area 130;
- the plurality of first sound-emitting units 210 and the plurality of second sound-emitting units 220 each include at least a sound-emitting unit that emits sound in the first frequency band, a sound-emitting unit that emits sound in the second frequency band, and a sound-emitting unit that emits sound in the third frequency band; the first frequency band, the second frequency band and the third frequency band increase in sequence; and all of the third sound-emitting units 230 emit sound in the second frequency band.
- the first direction a is a horizontal direction, and may be a horizontal direction to the right.
- the first sound-emitting units 210, the second sound-emitting units 220 and the third sound-emitting units 230 are arranged to correspond respectively to the first display area 110, the second display area 120 and the middle display area 130 arranged in sequence along the first direction a, so that the first display area 110, the middle display area 130 and the second display area 120 are formed respectively as the left channel playback area, the middle channel playback area and the right channel playback area of the display device. The entire display screen thereby becomes a sound-emitting screen; that is, the sound-emitting units are distributed over the entire display screen so that the entire screen can produce sound.
- the plurality of first sound-emitting units 210 correspond to the first display area 110, the plurality of second sound-emitting units 220 correspond to the second display area 120, and the plurality of third sound-emitting units 230 corresponding to the middle display area 130 are evenly distributed.
- the plurality of first sound-emitting units 210 and the plurality of second sound-emitting units 220 each include a sound-emitting unit that emits sound in the first frequency band, a sound-emitting unit that emits sound in the second frequency band, and a sound-emitting unit that emits sound in the third frequency band, wherein the first frequency band, the second frequency band and the third frequency band increase in sequence; optionally, the first frequency band, the second frequency band and the third frequency band correspond respectively to the sounds of the high, medium and low frequency bands.
- sounding units of high, middle and low frequency bands are respectively arranged in the left and right channel playing areas of the corresponding display screen, which can meet the playing requirements of each frequency band and meet the user's demand for sound of each frequency band.
- the plurality of third sound-emitting units 230 are all sound-emitting units that emit sound in the second frequency band, and the second frequency band is an intermediate frequency band, which can meet the playback requirements of the middle channel.
- a plurality of first sound-emitting units 210 corresponding to the first display area, a plurality of second sound-emitting units 220 corresponding to the second display area, and a plurality of third sound-emitting units 230 corresponding to the middle display area are provided, and the sound-emitting units corresponding to different display areas meet different frequency band requirements, so that the entire display screen is formed as a sound-emitting screen having a left channel playback area, a middle channel playback area and a right channel playback area.
- the left channel playback area, the middle channel playback area and the right channel playback area of the entire display screen are used for sound playback, which can meet the playback requirements of sound and picture integration.
- the orthographic projections of the plurality of third sound-emitting units 230 on the plane where the display screen 100 is located are evenly distributed in the middle display area 130, and each third sound-emitting unit 230 corresponds to one sub-area of the middle display area.
- the middle display area 130 is divided into a plurality of sub-areas, and a third sound-emitting unit 230 is arranged in each sub-area, so that the display screen can, according to the position of the face image displayed in the middle display area 130, control the corresponding third sound-emitting unit 230 to produce sound, thereby realizing a playback effect that combines sound and picture.
- the middle display area 130 includes M × N sub-areas, where M and N are positive integers; for example, when M and N are both 3, the middle display area 130 includes 3 × 3 sub-areas, each sub-area corresponds to at least one third sound-emitting unit 230, and the third sound-emitting unit 230 of each sub-area can produce sound independently to meet the playback requirements of the middle channel.
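As a minimal sketch of this M × N sub-area correspondence, a normalized sounding position inside the middle display area can be mapped to the index of the sub-area (and hence of its third sound-emitting unit). The function name, the normalized-coordinate convention and the row-major indexing are illustrative assumptions, not taken from the document:

```python
def subarea_index(x, y, m=3, n=3):
    """Map a normalized sounding position (x, y) inside the middle
    display area (0.0..1.0 on each axis, origin at the top-left) to the
    row-major index of the sub-area, and hence of the third
    sound-emitting unit, in an m x n grid.

    Hypothetical helper: the document only states that each third
    sound-emitting unit corresponds to one sub-area of the grid.
    """
    # Clamp so positions on the far edge fall in the last row/column.
    col = min(int(x * n), n - 1)
    row = min(int(y * m), m - 1)
    return row * n + col

# A sounding object at the center of a 3 x 3 grid maps to the central
# sub-area (index 4), so the central third unit would be driven.
print(subarea_index(0.5, 0.5))
```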
- the first frequency band, the second frequency band and the third frequency band correspond respectively to the three frequency bands of high, middle and low sound; the specific frequency ranges of the high, middle and low bands can be determined according to the relevant provisions of industry documents and are not limited here.
- each sound-emitting unit includes an exciter and a vibration panel, wherein the exciter is mounted on the vibration panel, and the sound-emitting unit drives the vibration panel through the exciter so that it vibrates to produce sound.
- the vibrating panel is used as a vibrating body, and the vibrating sound waves are transmitted to the human ear. That is, when the sound generating unit emits sound, it can output sound by using the vibrating panel as the vibrating body without the need for a speaker and an earpiece.
- the display device includes a display panel 300, one surface of which is the display screen, wherein the display panel 300 includes a plurality of sub-panels, and one of the sub-panels is reused as the vibration panel.
- each sub-panel acts as a vibrating body, and is driven by an exciter to generate sound waves to realize sound output.
- a plurality of sub-panels are arranged in a one-to-one correspondence with a plurality of exciters.
- the display device does not need to install a speaker and an earpiece, and the vibration of the sub-panel can transmit the sound, which constitutes a screen sound technology.
- the display device using this implementation structure can further increase the screen ratio and ensure a true full-screen effect.
- the display device that uses the screen to produce sound can judge, according to the displayed image, the sounding position of the sounding object on the image, and control the sub-panel at the corresponding position to vibrate and produce sound according to the determined sounding position, thereby realizing the playback effect of sound and picture integration while the image is displayed.
- a plurality of sub-panels are assembled to form a display panel.
- each sub-panel is combined with the set exciter to form a sound-emitting unit, and a plurality of sub-panels are spliced together to form a display panel including a large-area display screen. While displaying images, the display panel can realize the screen sound effect.
- the areas of the first display area 110 and the second display area 120 are equal, and the area of the middle display area 130 is at least twice the area of the first display area 110.
- the area of the middle display area 130 is much larger than the areas of the first display area 110 and the second display area 120, so that the first display area 110 and the second display area 120 are set at the left and right edges of the display area respectively and formed into the corresponding left channel playback area and right channel playback area; the middle display area 130 is used to display the main part of the output image, and according to the position of the sounding object on the output image in the middle display area 130, the sub-area at the corresponding position can be controlled to emit sound waves, achieving the playback effect of sound and picture integration.
- the display device described in the above embodiments of the present disclosure uses a sound-emitting screen formed by combining excitation sources with a plurality of sub-panels. Among the plurality of sound-emitting units composed of excitation sources and sub-panels, those corresponding to the first display area and the second display area of the display screen each include at least sound-emitting units that emit high-frequency, intermediate-frequency and low-frequency sound, while those corresponding to the middle display area of the display screen are all sound-emitting units that emit intermediate-frequency sound. While satisfying the sound playback of the left and right channels, this also guarantees the playback of intermediate-frequency sound, meeting users' needs for sound in each frequency band.
- the left channel playback area, the middle channel playback area and the right channel playback area of the entire display screen are used for sound playback, which can meet the playback requirement of combining sound and picture.
- An embodiment of the present disclosure further provides a sound emission control method, which is applied to the display device described in any one of the above. As shown in FIG. 2 , and in conjunction with FIG. 1 , the method includes:
- S210: Acquire the video data and the audio data of the audio and video data to be output;
- S220: When a target image frame of the video data is displayed on the display screen, detect the sounding position of the sounding object in the target image frame, and determine the third sound-emitting unit corresponding to the sounding position;
- S230: Extract the target sounding signal corresponding to the sounding object from the audio signal of the audio data corresponding to the target image frame, and convert the audio signal into a left channel signal, a middle channel signal and a right channel signal;
- S240: Output the left channel signal to the plurality of first sound-emitting units, output the right channel signal to the plurality of second sound-emitting units, output the middle channel signal to at least one third sound-emitting unit, and output the target sounding signal to the third sound-emitting unit corresponding to the sounding position.
- the sound production control method described in the embodiment of the present disclosure uses a display screen divided into a left channel playback area, a right channel playback area and a middle channel playback area. When outputting audio and video data, the video data and audio data are separated; the sounding object on the target image frame is detected and located, and the sounding signal of the object is detected and separated from the audio signal corresponding to the target image frame. The left channel signal is output to the plurality of first sound-emitting units corresponding to the first display area, the right channel signal is output to the plurality of second sound-emitting units corresponding to the second display area, and the middle channel signal is output to the plurality of third sound-emitting units corresponding to the middle display area; according to the located position of the sounding object image, the corresponding third sound-emitting unit is controlled to output the target sounding signal corresponding to the sounding object image, so as to meet the playback requirement of combining sound and picture.
- a display screen capable of full-screen sound production can thus be realized: the plurality of first sound-emitting units and the plurality of second sound-emitting units corresponding to the left and right channel playback areas play the left and right channel signals of the audio data respectively, and the plurality of third sound-emitting units corresponding to the middle channel playback area play the middle channel signal of the audio data and the corresponding target sounding signal located according to the sounding object image.
- the middle channel is often used to play the main sound signals such as character dialogue; that is, most of the character voice information in the audio data is an intermediate frequency signal, so the third sound-emitting units that emit intermediate frequency sound are used to play the middle channel signal.
- the left and right channels are generally used to play audio signals such as ambience and sound-effect enhancement to reinforce the sound signal played by the middle channel; since these signals span all frequency bands, the plurality of first sound-emitting units and the plurality of second sound-emitting units that emit high, medium and low frequency sound are used to play the left and right channel signals.
- the sounding objects in the video data include, but are not limited to, human face images, animal head images, sounding machines, and the like.
- in step S220, when the target image frame of the video data is displayed on the display screen, the sounding position of the sounding object of the target image frame in the target image frame is detected. As shown in FIG. 3, specifically:
- according to the video data and the audio data extracted in step S210, sounding object detection and sounding object localization are performed on the images in the video data;
- image recognition can determine the sounding position in the target image frame;
- in the process of sounding object detection and localization, the sounding position of the sounding object in the target image frame can also be detected by performing channel separation on the audio data and performing sound signal detection on each of the separated sub-channels.
- the sound production control method described in the embodiment of the present disclosure includes: separating audio and video; detecting and locating sounding objects using the separated video data; performing channel regeneration according to the separated audio data to obtain the left channel signal, the middle channel signal and the right channel signal; and playing the left channel signal in the left channel area, the right channel signal in the right channel area, and the middle channel signal in the middle channel area.
- when performing channel separation and object sound detection, the separated audio signals usually include the 2.0, 2.1 and 5.1 channel formats, among which 2.0 channels are the most common.
- the audio signal of the above-mentioned initial channel is separated into each sub-channel, and when the object sound detection is performed, it is detected whether there is an object sound signal in each sub-channel.
- the method for detecting the object sound signal can use a model trained with TensorFlow; for example, when performing human voice detection, the spleeter library (used together with ffmpeg) can serve as the human voice detection model.
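Absent such a trained model, the per-sub-channel "is there an object sound here?" check can be illustrated with a crude energy gate. The function name and threshold are assumptions for illustration only; a real pipeline would use a trained voice detector as described above:

```python
import math

def has_object_sound(frame, threshold_db=-40.0):
    """Report whether a mono sub-channel frame (float samples in
    -1.0..1.0) carries signal energy above a silence threshold.
    This is only an energy gate standing in for a trained detector.
    """
    if not frame:
        return False
    # Root-mean-square level of the frame, converted to decibels.
    rms = math.sqrt(sum(s * s for s in frame) / len(frame))
    if rms == 0.0:
        return False
    return 20.0 * math.log10(rms) > threshold_db

# A silent frame is rejected; a 200 Hz tone at 48 kHz is accepted.
silence = [0.0] * 480
tone = [0.3 * math.sin(2 * math.pi * 200 * t / 48000) for t in range(480)]
print(has_object_sound(silence), has_object_sound(tone))
```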
- in step S220, detecting, when the target image frame of the video data is displayed on the display screen, the sounding position of the sounding object of the target image frame in the target image frame includes:
- performing sound image localization calculation on the left channel data and the right channel data to determine sound image localization information;
- determining the sounding position of the sounding object in the target image frame according to the sound image localization information.
- the meaning of the sound image is: when two speakers are used for stereo playback, the listener does not perceive the two sound sources separately, but instead feels as if the sound is emitted from a spatial point between the two speakers; that sounding point is the sound image.
- for example, when the left and right channels play identical signals, the sound image is at the middle position between the left and right channels.
- the binaural sound image localization is achieved by the time difference and/or intensity difference between the left and right channel signals.
- the sound image localization information includes the audio time difference and the intensity level difference between the left channel signal and the right channel signal.
- performing the sound image localization calculation to determine the sound image localization information includes:
- performing cross-correlation calculation on the left channel signal and the right channel signal to determine the audio time difference, and performing average value calculation of the frame signal and intensity calculation of the left and right channels respectively to determine the intensity level difference between the left channel signal and the right channel signal;
- determining the sound image localization information according to the audio time difference and the intensity level difference.
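A sketch of this calculation under simple assumptions (float PCM frames, a bounded lag search; the function name, the lag window and the sample rate default are illustrative, not specified by the document):

```python
import math

def sound_image_localization(left, right, sample_rate=48000, max_lag=48):
    """Estimate the audio time difference via cross-correlation over a
    small lag window, and the intensity level difference from per-frame
    RMS, for one pair of left/right channel frames (float samples).
    """
    n = min(len(left), len(right))
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        # Correlate left[i] with right[i + lag] over the valid overlap;
        # the lag with the highest correlation is the channel delay.
        c = sum(left[i] * right[i + lag]
                for i in range(max(0, -lag), min(n, n - lag)))
        if c > best_corr:
            best_corr, best_lag = c, lag
    itd = best_lag / sample_rate  # audio time difference, in seconds

    def rms(sig):
        return math.sqrt(sum(s * s for s in sig) / len(sig)) or 1e-12

    # Intensity level difference between left and right, in decibels.
    ild_db = 20.0 * math.log10(rms(left) / rms(right))
    return itd, ild_db
```

With the right channel delayed by k samples relative to the left, the cross-correlation peak lands at lag k, so `itd` comes out as k / sample_rate; a louder left channel yields a positive `ild_db`.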
- the left channel signal and the right channel signal separated from the audio data are Pulse Code Modulation (Pulse Code Modulation, PCM) signals.
- performing the cross-correlation calculation includes: performing ITD (interaural time difference) calculation on the left channel signal and the right channel signal and carrying out cross-correlation analysis using a cross-correlation function, so as to determine the audio time difference of the target image frame.
- the position of the sound image in the lateral direction of the display screen can be determined according to the determined audio time difference.
- the audio time difference between the left channel signal and the right channel signal and the intensity level difference calculated in the above manner are used to determine the sound image localization information of the sounding object.
- the motion state of the current video frame is calculated by the difference frame method, and the audio time difference and intensity level difference obtained by the above calculation process can determine the sounding position of the sounding object in the target image frame.
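The difference frame method mentioned here can be sketched as a per-pixel threshold on consecutive grayscale frames. The threshold value and the frame layout (lists of rows of integer pixel values) are assumptions for illustration:

```python
def frame_difference(prev_frame, cur_frame, threshold=16):
    """Mark pixels whose grayscale value changes by more than `threshold`
    between two consecutive frames, yielding a crude motion map plus the
    count of moving pixels. Regions that move while sound is present are
    candidate sounding positions.
    """
    motion = [[abs(c - p) > threshold for p, c in zip(prow, crow)]
              for prow, crow in zip(prev_frame, cur_frame)]
    moved = sum(cell for row in motion for cell in row)
    return motion, moved

prev = [[10, 10, 10], [10, 10, 10]]
cur = [[10, 200, 10], [10, 10, 10]]
motion, moved = frame_difference(prev, cur)
print(moved)  # only the one changed pixel is flagged
```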
- the audio signal is separated into left and right channel signals and format conversion is performed to obtain the time stamp and signal data of each audio frame; sound image localization is then calculated from these results to obtain the horizontal position of the sound image on the display screen. The vertical position of the sound image on the screen can also be obtained, and then, using frame difference processing, the sounding object and the sounding position of the sounding object in the object sounding signal can be determined.
- in step S230, when extracting the target sounding signal from the audio signal of the audio data corresponding to the target image frame, the target sounding signal may be determined according to the above-mentioned sound image localization information.
- the human voice model can also be used to detect each sub-channel signal to determine the target sounding signal.
- the playback area of the display screen can be determined according to the sounding position; according to the correspondence between the plurality of third sounding units and the display area, the third sounding unit corresponding to the sounding position can be determined, that is, the target sounding channel can be determined.
- the audio data is separated, by channel regeneration, into the left channel, the right channel, the middle channel, and the target sound channel.
- the left and right channels are played in the left and right channel playback area
- the middle channel is played in the middle channel playback area
- the target sound channel is used to play the target sounding signal of the sounding object.
- the channels are combined before playing.
- the usual inter-channel conversion methods include converting 2 channels to 3 channels, as well as converting 2 channels to multi-channel.
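One simple 2-to-3 conversion of the kind mentioned above derives the middle channel from the component common to both input channels. This is a sketch under assumed gains; practical upmixers work per frequency band with correlation analysis.

```python
import numpy as np

def upmix_2_to_3(left, right, center_gain=0.5):
    """Naive 2-to-3 upmix: the correlated (sum) component feeds the
    middle channel and is partly subtracted from the sides so the
    total energy stays roughly constant. Gains are illustrative."""
    center = center_gain * (left + right)
    new_left = left - 0.5 * center
    new_right = right - 0.5 * center
    return new_left, center, new_right
```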
- the audio data is separated into a left channel, a right channel, and a target sound channel; further, each sub-region of the middle channel playback area is provided with at least one third sound-emitting unit, and the middle channel signal of the audio data is divided into a plurality of sub-channel signals, each sub-channel signal corresponding to one sub-region of the middle channel playback area.
- for example, the middle channel signal of the audio data is divided into nine sub-channel signals, each corresponding to one sub-region.
- the third sound generating unit located in a sub-region is used to play the corresponding sub-channel signal.
- the channels are combined before playing.
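The mapping from a position on the screen to one of the nine sub-regions (and hence to its third sound-emitting unit) can be sketched as below. The 3 x 3 grid, pixel coordinates, and function name are illustrative assumptions, not specified by the patent.

```python
def subregion_index(x, y, width, height, rows=3, cols=3):
    """Map a sounding position (x, y), in pixels from the top-left of
    the middle display area, to the index of the sub-region in a
    rows x cols grid (row-major order)."""
    col = min(int(x * cols / width), cols - 1)
    row = min(int(y * rows / height), rows - 1)
    return row * cols + col
```

A sub-channel signal would then be routed to the unit whose index this returns.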
- in step S240, outputting the middle channel signal to at least one of the third sound generating units includes:
- Each sub-channel signal is sent to the corresponding third sound generating unit respectively.
- in step S240, in the step of outputting the middle channel signal to at least one third sounding unit:
- the orthographic projection, on the plane of the display screen, of the third sound generating unit to which the middle channel signal is output is located in the middle of the middle display area.
- when the third sounding unit to which the middle channel signal is output and the third sounding unit to which the target sounding signal is output are the same sounding unit, the middle channel signal is combined with the target sounding signal and then output to the corresponding third sounding unit.
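Combining the middle channel signal with the target sounding signal before output, as described above, can be sketched as a simple sum with clipping to the valid sample range. The summing-and-clipping strategy and the function name are assumptions for illustration.

```python
import numpy as np

def mix_to_unit(middle, target, full_scale=1.0):
    """Sum the middle channel signal and the target sounding signal
    destined for the same third sound-emitting unit, clipping the
    result to [-full_scale, full_scale] to avoid overflow."""
    mixed = middle + target
    return np.clip(mixed, -full_scale, full_scale)
```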
- the target sounding channel used for playing the target sounding signal may include one third sounding unit, determined according to the sounding position of the above-mentioned sounding object.
- the target sounding channel used for playing the target sounding signal may alternatively include at least two third sound-emitting units located in part of the corresponding middle display area, among them the third sound-emitting unit determined by the sounding position of the sounding object; it may also include all of the third sound-emitting units.
- the sound played by the third sound-emitting unit determined by the sounding position of the sounding object can be louder than that of the other third sound-emitting units, so as to meet the playback requirement of matching sound with picture.
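The loudness rule above, where the unit at the sounding position plays louder than the rest, can be sketched as a per-unit gain table. The specific gain values and function name are illustrative assumptions.

```python
def subchannel_gains(target_index, n_units=9, target_gain=1.0, others_gain=0.3):
    """Return a gain per third sound-emitting unit: full gain for the
    unit at the sounding position, a reduced gain elsewhere, so the
    sound image follows the picture."""
    return [target_gain if i == target_index else others_gain
            for i in range(n_units)]
```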
- Another aspect of the embodiments of the present disclosure further provides a sound emission control device, applied to the display device according to any one of the above. As shown in FIG. 10, the device includes:
- a data acquisition module 1010, configured to acquire the video data and audio data of the audio-video data to be output;
- a detection module 1020, configured to detect, when the target image frame of the video data is displayed on the display screen, the sounding position of the sounding object in the target image frame, and to determine the third sounding unit corresponding to the sounding position;
- a conversion module 1030, configured to extract, from the audio signal of the audio data corresponding to the target image frame, the target sounding signal corresponding to the sounding object, and to convert the audio signal into a left channel signal, a middle channel signal, and a right channel signal;
- an output module 1040, configured to output the left channel signal to the plurality of first sound generating units, the right channel signal to the plurality of second sound generating units, and the middle channel signal to at least one third sounding unit, and to output the target sounding signal to the third sounding unit corresponding to the display position.
- detecting the sounding position of the sounding object in the target image frame includes:
- performing sound image localization calculation on the left channel data and the right channel data to determine sound image localization information;
- determining, from the sound image localization information and the frame-difference processing result, the sounding position of the sounding object in the target image frame.
- in the step in which the output module 440 outputs the middle channel signal to at least one of the third sound production units:
- the orthographic projection of the third sound generating unit output by the middle channel signal on the plane where the display screen is located is located in the middle of the middle display area.
- in the described sound production control device, the output module 440 outputting the middle channel signal to at least one of the third sound production units includes:
- Each sub-channel signal is sent to the corresponding third sound generating unit respectively.
- the output module 440 combines the middle channel signal with the target sounding signal and outputs the result to the corresponding third sounding unit.
Abstract
Description
Claims (12)
- A display device, comprising: a display screen including a first display area, a middle display area, and a second display area arranged in sequence along a first direction; and a plurality of sound-emitting units disposed on a side facing away from the display screen; wherein the plurality of sound-emitting units include: a plurality of first sound-emitting units whose orthographic projections on the plane of the display screen are located in the first display area, a plurality of second sound-emitting units whose orthographic projections on the plane of the display screen are located in the second display area, and a plurality of third sound-emitting units whose orthographic projections on the plane of the display screen are located in the middle display area; wherein the plurality of first sound-emitting units and the plurality of second sound-emitting units each include at least a sound-emitting unit emitting sound in a first frequency band, a sound-emitting unit emitting sound in a second frequency band, and a sound-emitting unit emitting sound in a third frequency band, the first frequency band, the second frequency band, and the third frequency band increasing in sequence; and wherein the plurality of third sound-emitting units are all sound-emitting units emitting sound in the second frequency band.
- The display device according to claim 1, wherein the orthographic projections of the plurality of third sound-emitting units on the plane of the display screen are evenly distributed within the middle display area, and each third sound-emitting unit corresponds to one sub-region of the middle display area.
- The display device according to claim 1 or 2, wherein each sound-emitting unit includes an exciter and a vibration panel, the exciter being mounted on the vibration panel, and the sound-emitting unit emits sound by the exciter driving the vibration panel to vibrate.
- The display device according to claim 3, wherein the display device includes a display panel, one surface of which is the display screen, the display panel including a plurality of sub-panels, one of which is reused as the vibration panel.
- The display device according to claim 4, wherein the plurality of sub-panels are spliced together to form the display panel.
- The display device according to claim 1, wherein the first display area and the second display area are equal in area, and the area of the middle display area is at least twice the area of the first display area.
- A sound production control method, applied to the display device according to any one of claims 1 to 6, the method comprising: acquiring video data and audio data of audio-video data to be output; when a target image frame of the video data is displayed on the display screen, detecting the sounding position of a sounding object in the target image frame, and determining the third sound-emitting unit corresponding to the sounding position; extracting, from the audio signal of the audio data corresponding to the target image frame, the target sounding signal corresponding to the sounding object, and converting the audio signal into a left channel signal, a middle channel signal, and a right channel signal; and outputting the left channel signal to the plurality of first sound-emitting units, the right channel signal to the plurality of second sound-emitting units, the middle channel signal to at least one third sound-emitting unit, and the target sounding signal to the third sound-emitting unit corresponding to the display position.
- The sound production control method according to claim 7, wherein detecting the sounding position of the sounding object in the target image frame when the target image frame of the video data is displayed on the display screen includes: separating the audio data into left channel data and right channel data; performing sound image localization calculation based on the left channel data and the right channel data to determine sound image localization information; and determining the sounding position of the sounding object in the target image frame based on the sound image localization information and the result of frame-difference processing on the target image frame of the video data.
- The sound production control method according to claim 7, wherein, in the step of outputting the middle channel signal to at least one third sound-emitting unit, the orthographic projection, on the plane of the display screen, of the third sound-emitting unit to which the middle channel signal is output is located in the middle of the middle display area.
- The sound production control method according to claim 7, wherein outputting the middle channel signal to at least one third sound-emitting unit includes: converting the middle channel signal into a plurality of sub-channel signals, each sub-channel signal corresponding to one third sound-emitting unit; and sending each sub-channel signal to the corresponding third sound-emitting unit.
- The sound production control method according to claim 7, wherein, when the third sound-emitting unit to which the middle channel signal is output and the third sound-emitting unit to which the target sounding signal is output are the same sound-emitting unit, the middle channel signal is combined with the target sounding signal and then output to the corresponding third sound-emitting unit.
- A sound production control device, applied to the display device according to any one of claims 1 to 6, the device comprising: a data acquisition module configured to acquire video data and audio data of audio-video data to be output; a detection module configured to detect, when a target image frame of the video data is displayed on the display screen, the sounding position of the sounding object in the target image frame, and to determine the third sound-emitting unit corresponding to the sounding position; a conversion module configured to extract, from the audio signal of the audio data corresponding to the target image frame, the target sounding signal corresponding to the sounding object, and to convert the audio signal into a left channel signal, a middle channel signal, and a right channel signal; and an output module configured to output the left channel signal to the plurality of first sound-emitting units, the right channel signal to the plurality of second sound-emitting units, the middle channel signal to at least one third sound-emitting unit, and the target sounding signal to the third sound-emitting unit corresponding to the display position.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/790,365 US20230045236A1 (en) | 2020-06-29 | 2021-05-19 | Display device, sound-emitting controlling method, and sound-emitting controlling device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010609539.2 | 2020-06-29 | ||
CN202010609539.2A CN111741412B (zh) | 2020-06-29 | 2020-06-29 | 显示装置、发声控制方法及发声控制装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022001451A1 true WO2022001451A1 (zh) | 2022-01-06 |
Family
ID=72653507
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/094627 WO2022001451A1 (zh) | 2020-06-29 | 2021-05-19 | 显示装置、发声控制方法及发声控制装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20230045236A1 (zh) |
CN (1) | CN111741412B (zh) |
WO (1) | WO2022001451A1 (zh) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7326824B2 (ja) | 2019-04-05 | 2023-08-16 | ヤマハ株式会社 | 信号処理装置、及び信号処理方法 |
CN111741412B (zh) * | 2020-06-29 | 2022-07-26 | 京东方科技集团股份有限公司 | 显示装置、发声控制方法及发声控制装置 |
WO2022134169A1 (zh) * | 2020-12-21 | 2022-06-30 | 安徽鸿程光电有限公司 | 一种多显示屏系统及其音频控制方法 |
CN114915812B (zh) * | 2021-02-08 | 2023-08-22 | 华为技术有限公司 | 一种拼接屏音频的分配方法及其相关设备 |
CN113329197A (zh) * | 2021-05-13 | 2021-08-31 | 纳路易爱姆斯株式会社 | 一种固态沉浸式oled音频系统 |
CN117501714A (zh) * | 2022-05-31 | 2024-02-02 | 京东方科技集团股份有限公司 | 显示面板、显示装置 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150003648A1 (en) * | 2013-06-27 | 2015-01-01 | Samsung Electronics Co., Ltd. | Display apparatus and method for providing stereophonic sound service |
CN108124224A (zh) * | 2016-11-30 | 2018-06-05 | 乐金显示有限公司 | 面板振动型发声显示装置 |
CN109194796A (zh) * | 2018-07-09 | 2019-01-11 | Oppo广东移动通信有限公司 | 屏幕发声方法、装置、电子装置及存储介质 |
CN110312032A (zh) * | 2019-06-17 | 2019-10-08 | Oppo广东移动通信有限公司 | 音频播放方法及相关产品 |
CN111741412A (zh) * | 2020-06-29 | 2020-10-02 | 京东方科技集团股份有限公司 | 显示装置、发声控制方法及发声控制装置 |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104270552A (zh) * | 2014-08-29 | 2015-01-07 | 华为技术有限公司 | 一种声像播放方法及装置 |
CN105491474A (zh) * | 2014-10-07 | 2016-04-13 | 鸿富锦精密工业(深圳)有限公司 | 显示装置及具有该显示装置的电子装置 |
KR101817103B1 (ko) * | 2016-06-30 | 2018-01-10 | 엘지디스플레이 주식회사 | 패널 진동형 음향 발생 표시 장치 |
CN109314832B (zh) * | 2016-05-31 | 2021-01-29 | 高迪奥实验室公司 | 音频信号处理方法和设备 |
CN107786696A (zh) * | 2017-12-13 | 2018-03-09 | 京东方科技集团股份有限公司 | 显示屏幕及移动终端 |
CN108462917B (zh) * | 2018-03-30 | 2020-03-17 | 四川长虹电器股份有限公司 | 电磁激励能量转换器和激光投影光学音响屏幕及其同步显示方法 |
CN108833638B (zh) * | 2018-05-17 | 2021-08-17 | Oppo广东移动通信有限公司 | 发声方法、装置、电子装置及存储介质 |
CN108806560A (zh) * | 2018-06-27 | 2018-11-13 | 四川长虹电器股份有限公司 | 屏幕发声显示屏及声场画面同步定位方法 |
TWI687915B (zh) * | 2018-07-06 | 2020-03-11 | 友達光電股份有限公司 | 動態電視牆及其影音播放方法 |
CN109144249B (zh) * | 2018-07-23 | 2021-09-14 | Oppo广东移动通信有限公司 | 屏幕发声方法、装置、电子装置及存储介质 |
CN110874203A (zh) * | 2018-09-04 | 2020-03-10 | 中兴通讯股份有限公司 | 屏幕发声控制器、方法、装置、终端及存储介质 |
KR102628489B1 (ko) * | 2018-11-15 | 2024-01-22 | 엘지디스플레이 주식회사 | 디스플레이 장치 |
KR102663292B1 (ko) * | 2018-11-23 | 2024-05-02 | 엘지디스플레이 주식회사 | 표시장치 및 차량용 장치 |
US11249716B2 (en) * | 2018-12-11 | 2022-02-15 | Samsung Display Co., Ltd. | Display device and method for driving the same |
CN110018809A (zh) * | 2019-03-28 | 2019-07-16 | 联想(北京)有限公司 | 一种电子设备和控制方法 |
CN210090908U (zh) * | 2019-06-17 | 2020-02-18 | 青岛海信电器股份有限公司 | 显示装置 |
CN110572760B (zh) * | 2019-09-05 | 2021-04-02 | Oppo广东移动通信有限公司 | 电子设备及其控制方法 |
CN111641898B (zh) * | 2020-06-08 | 2021-12-03 | 京东方科技集团股份有限公司 | 发声装置、显示装置、发声控制方法及装置 |
CN112135227B (zh) * | 2020-09-30 | 2022-04-05 | 京东方科技集团股份有限公司 | 显示装置、发声控制方法及发声控制装置 |
CN116235509A (zh) * | 2020-10-06 | 2023-06-06 | 索尼集团公司 | 声音再现装置和方法 |
2020
- 2020-06-29 CN CN202010609539.2A patent/CN111741412B/zh active Active
2021
- 2021-05-19 US US17/790,365 patent/US20230045236A1/en active Pending
- 2021-05-19 WO PCT/CN2021/094627 patent/WO2022001451A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN111741412B (zh) | 2022-07-26 |
US20230045236A1 (en) | 2023-02-09 |
CN111741412A (zh) | 2020-10-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022001451A1 (zh) | 显示装置、发声控制方法及发声控制装置 | |
CN104869335B (zh) | 用于局域化感知音频的技术 | |
JP5174527B2 (ja) | 音像定位音響メタ情報を付加した音響信号多重伝送システム、制作装置及び再生装置 | |
US11832071B2 (en) | Hybrid speaker and converter | |
KR20120064104A (ko) | 오디오 신호의 공간 추출 시스템 | |
US11399249B2 (en) | Reproduction system and reproduction method | |
KR20130014187A (ko) | 오디오 신호 처리 방법 및 그에 따른 오디오 신호 처리 장치 | |
US20170251324A1 (en) | Reproducing audio signals in a motor vehicle | |
JP2645731B2 (ja) | 音像定位再生方式 | |
US20220386062A1 (en) | Stereophonic audio rearrangement based on decomposed tracks | |
KR102348658B1 (ko) | 표시장치 및 그 구동 방법 | |
CN1672463A (zh) | 音频处理系统 | |
KR101516644B1 (ko) | 가상스피커 적용을 위한 혼합음원 객체 분리 및 음원 위치 파악 방법 | |
CN113810837A (zh) | 一种显示装置的同步发声控制方法及相关设备 | |
JP2003518891A (ja) | 音声信号処理装置 | |
US20230269552A1 (en) | Electronic device, system, method and computer program | |
US20230007434A1 (en) | Control apparatus, signal processing method, and speaker apparatus | |
JP3104348B2 (ja) | 収録装置、再生装置、収録方法および再生方法、および、信号処理装置 | |
JP3104349B2 (ja) | 収録装置、再生装置、収録方法および再生方法、および、信号処理装置 | |
JPH07236194A (ja) | 映像音響再生装置 | |
JP2007221599A (ja) | スピーカ装置、駆動装置、および、スピーカ装置の駆動方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21832230 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21832230 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05.09.2023) |
|