CN109274998B - Dynamic television wall and video and audio playing method thereof

Info

Publication number: CN109274998B
Application number: CN201811124645.0A
Authority: CN (China)
Prior art keywords: sound, panels, information, dynamic, panel
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN109274998A
Inventors: 徐理智, 薛芷苓
Current/Original Assignee: AU Optronics Corp
Application filed by AU Optronics Corp
Publication of application: CN109274998A
Publication of grant: CN109274998B


Classifications

    • H04N21/4122 — Selective content distribution: peripherals receiving signals from specially adapted client devices; additional display device, e.g. video projector
    • H04N21/41415 — Specialised client platforms involving a public display, viewable by several users in a public space outside their home, e.g. movie theatre, information kiosk
    • H04N21/4307 — Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N5/2624 — Studio circuits for special effects: obtaining an image which is composed of whole input images, e.g. splitscreen
    • H04N5/607 — Receiver circuitry for the sound signals: for more than one sound signal, e.g. stereo, multilanguages

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A video and audio playing method for a dynamic television wall includes the following steps: receiving a multimedia file signal; receiving an environment detection signal; generating at least one piece of position information according to the environment detection signal; dividing video picture information of the multimedia file signal into a plurality of image data corresponding to the arrangement positions of a plurality of panels tiled together in the dynamic television wall; displaying the image data on the respective panels; and driving a sound production unit of at least one of the panels to play sound file information according to the at least one piece of position information. The sound file information is output within the display area of the at least one panel.

Description

Dynamic television wall and video and audio playing method thereof
Technical Field
The present invention relates to video and audio playing technologies, and in particular, to a dynamic video wall and a video and audio playing method thereof.
Background
As technology matures, information can be broadcast in a variety of ways. A television wall is typically large, suited to viewing by many people at once, and effective for entertainment and advertising, so television walls are widely used outdoors, in large exhibition spaces, and in similar venues.
However, the sound generating units (such as speakers) of a conventional television wall are generally disposed on its two sides. When a conventional television wall plays video and audio, the sound of the sound file is therefore emitted from the two sides of the wall, where it is easily affected by the surrounding environment and does not carry effectively to where the audience stands, degrading the viewing quality and the audio-visual effect.
Disclosure of Invention
An embodiment of the present invention provides a video and audio playing method for a dynamic television wall, including: receiving a multimedia file signal; receiving an environment detection signal; generating at least one piece of position information according to the environment detection signal; dividing video picture information of the multimedia file signal into a plurality of image data corresponding to the arrangement positions of a plurality of panels tiled together in the dynamic television wall; displaying the image data on the respective panels; and driving a sound production unit of at least one of the panels to play sound file information according to the at least one piece of position information. The sound file information is output within the display area of the at least one panel.
An embodiment of the present invention provides a video and audio playing method for a dynamic television wall, including: receiving a multimedia file signal; dividing video picture information of the multimedia file signal into a plurality of image data corresponding to the arrangement positions of a plurality of panels in the dynamic television wall; detecting, among the image data, at least one first image data containing at least one sound production image and at least one second image data not containing the sound production image; displaying the image data on the respective panels; and driving the sound production units of the panels displaying the first image data to play sound file information at a first volume. The video picture information of the multimedia file signal has at least one sound production image, the sound file information is associated with the at least one sound production image, and the sound file information is output within the display areas of the panels.
An embodiment of the invention provides a dynamic television wall, which includes a plurality of panels, an environment sensor, an image analysis module, and a control module. The panels are arranged in a tiled configuration, and each panel includes a sound production unit located within the display area of that panel. The environment sensor receives an environment detection signal and generates at least one piece of position information according to the environment detection signal. The image analysis module receives a multimedia file signal that includes video picture information and sound file information, and divides the video picture information into a plurality of image data corresponding to the arrangement positions of the panels tiled together in the dynamic television wall. The control module controls the panels to display the image data and, according to the at least one piece of position information, drives the sound production unit in the display area of at least one of the panels to play the sound file information, which is output within that display area.
In summary, in the dynamic television wall and its video and audio playing method according to an embodiment of the invention, an environment detection signal is received, position information is generated from it, and the sound production unit in the display area of at least one of the panels is driven to play the sound file information according to that position information while the other panels stay silent. The sound file information is thus emitted within the display area of the panel nearest the target object and reaches the audience with a greater sense of presence. In another embodiment, the dynamic television wall and its video and audio playing method detect, among all the image data, the first image data containing the sound production image and the second image data without it, and then drive the sound production units in the display areas of the panels displaying the first image data to play the sound file information at a first volume. In this way, the sound file information is played by the sound production units of the very panels on which the sound production image appears.
The invention is described in detail below with reference to the drawings and specific examples, but the invention is not limited thereto.
Drawings
FIG. 1 is a functional block diagram of a dynamic video wall according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a panel according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of interaction between a dynamic video wall and a target object according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a panel according to another embodiment of the present invention;
FIG. 5 is a schematic diagram of interaction between a dynamic video wall and a plurality of target objects according to another embodiment of the present invention;
FIG. 6 is a schematic diagram of interaction between a dynamic video wall and a plurality of target objects according to another embodiment of the present invention;
FIG. 7 is a flowchart of a video and audio playing method of a dynamic video wall according to an embodiment of the present invention;
FIG. 8 is a flowchart of an embodiment of step S111;
FIG. 9 is a flowchart of another embodiment of step S111;
FIG. 10 is a flowchart of a video and audio playing method of a dynamic video wall according to another embodiment of the present invention;
FIG. 11 is a schematic diagram of interaction between a dynamic video wall and a target object according to another embodiment of the present invention;
FIG. 12 is a flowchart of another embodiment of step S111;
FIG. 13 is a schematic diagram of interaction between a dynamic video wall and a plurality of target objects according to another embodiment of the present invention;
FIG. 14 is a flowchart of another embodiment of step S111;
FIG. 15 is a schematic diagram of interaction between a dynamic video wall and a plurality of target objects according to another embodiment of the present invention;
FIG. 16 is a flowchart of another embodiment of step S111;
FIG. 17 is a schematic diagram of interaction between a dynamic video wall and a target object at a first time point according to an embodiment of the invention;
FIG. 18 is a schematic diagram of interaction between a dynamic video wall and a target object at a second time point according to an embodiment of the invention;
FIG. 19 is a flowchart of a video and audio playing method of a dynamic video wall according to another embodiment of the present invention;
FIG. 20 is a schematic diagram of a dynamic video wall according to an embodiment of the present invention;
FIG. 21 is a flowchart of a video and audio playing method of a dynamic video wall according to another embodiment of the present invention;
FIG. 22 is a flowchart of an embodiment of step S209;
FIG. 23 is a flowchart of another embodiment of step S209.
Wherein the reference numerals are as follows:
100: dynamic television wall
110, 1101~1108: panel
110a: panel assembly
110W: tiled display
112: sound generating unit
1121: electric induction element
1122, 1125: vibrating diaphragm
1123: upper metal layer
1124: lower metal layer
1123a, 1124a: hollowed-out region
120: environment sensor
130: image analysis module
132: first image data
134: second image data
140: control module
A: display area
B: non-display area
D1: first dimension
D2: second dimension
F10, F11, F20, F21: flow
H, H1, H2: relative distance value
Ha: preset depth value
M, M', M'', M1~M5: position
P: sound production image
S, S1~S5: target object
S101~S111, S1111a, S1111b, S1112, S1113a, S1113b, S1113c, S1114a, S1114b, S1114c, S1115a, S1115b, S201~S211, S2091a, S2091b, S2092: steps
X, X1, X2: horizontal coordinate value
Y: vertical coordinate value
Detailed Description
The invention will now be described in detail with reference to the following drawings, which are provided for purposes of illustration:
referring to fig. 1 to 3, the dynamic video wall 100 includes a tiled display 110W, an environment sensor 120, an image analysis module 130, and a control module 140. The control module 140 may be in signal connection with the tiled display 110W, the environmental sensor 120 and the image analysis module 130 in a wired or wireless manner. The tiled display 110W includes a plurality of panels 110 in a tiled configuration. That is, the panels 110 are adjacent to each other such that the display screens thereof are connected to each other and constitute a single screen. In one embodiment, the display areas a of the panels 110 may be substantially coplanar.
In one embodiment, as shown in FIG. 1, the panels 110 are arranged in a two-dimensional array to form a tiled display 110W. In some embodiments, the two-dimensional array has a first dimension D1 and a second dimension D2 that are substantially perpendicular to each other. In some embodiments, the first dimension D1 is parallel to the row direction (column direction) and the second dimension D2 is parallel to the column direction (row direction).
Each panel 110 includes a panel assembly 110a (shown in fig. 2 and 4) and a sound emitting unit 112. The sound emitting unit 112 is stacked on the display area a of the panel assembly 110a of each panel 110. However, in other embodiments, the joints between adjacent panels 110, for some or all of the panels 110, may be connected by hinges capable of bending at a specific angle, in which case the display areas a of the panel assemblies 110a of the panels 110 need not be coplanar. It should be noted that the number of panels 110 shown in fig. 1 and fig. 3 is only an example and does not limit the embodiments of the present invention.
The environment sensor 120 receives at least one environment detection signal and generates at least one piece of position information according to the environment detection signal. Referring to fig. 1 to 3, in some embodiments, the environment sensor 120 may be an infrared emitter, an infrared CMOS camera, an infrared distance sensor, an ultrasonic sensor, an image sensor, or a combination thereof with a microprocessor. In some embodiments, the environment detection signal may be a distance signal, an angle signal, a temperature signal, and/or an image signal. In some embodiments, the position information may represent a position M in the first dimension D1 (the horizontal direction) of the dynamic tv wall 100, represented by a horizontal coordinate value; a position M in the second dimension D2 (the vertical direction), represented by a vertical coordinate value; or a position M in both the first dimension D1 and the second dimension D2, represented by a horizontal coordinate value and a vertical coordinate value.
For example, suppose the environment sensor 120 includes at least two infrared distance sensors. When a target object S is located near the panels 110 of the dynamic tv wall 100, the infrared distance sensors of the environment sensor 120 receive distance signals (environment detection signals) of the target object S relative to the dynamic tv wall 100. After computation, the horizontal distance value of the target object S relative to the dynamic tv wall 100 in the first dimension D1 is X, and the relative distance value between the target object S and the dynamic tv wall 100 is H. The environment sensor 120 then obtains a position M' (represented by the horizontal coordinate value X) of the target object S relative to the dynamic tv wall 100 in the first dimension D1 according to the horizontal distance value (X) and/or the relative distance value (H). As another example, the distance signals (environment detection signals) received by the infrared distance sensors may further include the vertical distance Y of the target object S relative to the dynamic tv wall 100 in the second dimension D2 (e.g., the height of the target object S); the environment sensor 120 then obtains the position M'' (represented by the horizontal coordinate value X and the vertical coordinate value Y) of the target object S in the first dimension D1 and the second dimension D2 according to the horizontal distance value (X), the relative distance value (H), and the vertical distance value (Y). In other words, depending on the type of the environment sensor 120 in different embodiments, the position information of the target object S may be a position M' (indicated by a horizontal coordinate value X) in the first dimension D1, or a position M'' (indicated by a horizontal coordinate value X and a vertical coordinate value Y) in the first dimension D1 and the second dimension D2.
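The patent does not specify how the sensor readings are combined. As a minimal sketch, assuming one infrared distance sensor mounted at each end of the wall's bottom edge (a hypothetical layout; `locate_target` and its parameters are not from the patent), the horizontal coordinate X and the relative distance H can be recovered by intersecting the two range circles:

```python
import math

def locate_target(d_left: float, d_right: float, wall_width: float):
    """Estimate the target's horizontal coordinate X and depth H from two
    distance readings, assuming one sensor at each end of the wall.

    Solves the two-circle intersection in the horizontal plane:
        d_left^2  = X^2 + H^2
        d_right^2 = (wall_width - X)^2 + H^2
    """
    x = (d_left**2 - d_right**2 + wall_width**2) / (2.0 * wall_width)
    h_sq = d_left**2 - x**2
    if h_sq < 0:
        raise ValueError("inconsistent distance readings")
    return x, math.sqrt(h_sq)  # horizontal coordinate X, relative distance H

# Example: a 4 m wide wall with readings of 2.5 m and 3.2 m
x, h = locate_target(2.5, 3.2, 4.0)  # x ~ 1.50 m, h ~ 2.00 m
```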
The image analysis module 130 receives a multimedia file signal, wherein the multimedia file signal includes video frame information and audio file information. Referring to fig. 1 to 3, the image analysis module 130 divides the video frame information into a plurality of image data corresponding to the arrangement positions of the plurality of panels arranged in a splicing manner in the dynamic tv wall.
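A minimal sketch of the division performed by the image analysis module 130, assuming a grid of equally sized panels and a frame resolution that divides evenly; the mapping of tiles to physical panels is a hypothetical convention:

```python
import numpy as np

def split_frame(frame: np.ndarray, rows: int, cols: int) -> dict:
    """Divide one frame of video picture information (H x W x 3) into
    per-panel image data; tile (r, c) is shown on the panel at grid
    position (r, c) of the tiled display."""
    th, tw = frame.shape[0] // rows, frame.shape[1] // cols
    return {(r, c): frame[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            for r in range(rows) for c in range(cols)}

# Example: a 1080x1920 frame split across eight panels in a 2 x 4 grid
tiles = split_frame(np.zeros((1080, 1920, 3), dtype=np.uint8), rows=2, cols=4)
```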
The control module 140 receives the image data from the image analysis module 130 and controls the display areas a of the panel assemblies 110a of the panels 110 to display the image data. Referring to fig. 1 to fig. 3, the image data are displayed synchronously by the tiled panels 110 so as to compose a complete picture. Moreover, the control module 140 receives the position information from the environment sensor 120 and, according to the position information, drives the sound generating unit 112 in the display area a of at least one of the panels 110 to play the sound file information. In other words, the sound of the sound file information played by each sound generating unit 112 is emitted from the display area a of the panel 110 where that sound generating unit 112 is located. For example, referring to fig. 3, the control module 140 receives from the environment sensor 120 the position information representing the position M' in the first dimension D1 of the dynamic tv wall 100, and drives the sound generating unit 112 in the display area a of the panel 1104 located at the position M' to play the sound file information.
Here, the panel 110 at the position M of the dynamic tv wall 100 corresponding to the target object S in the first dimension D1 and/or the second dimension D2 plays the sound file information, and the other panels 110 do not. The sound file information is thus emitted within the range of the display area a of the panel 110 corresponding to the target object S, that is, on the panel 110 nearest the target object S, and reaches the audience in a more realistic manner.
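How the control module maps a position M to a panel is not spelled out in the patent; one simple reading, assuming uniform panel sizes and wall-relative coordinates (all names hypothetical), is a grid lookup:

```python
def panel_at(x: float, y: float, panel_w: float, panel_h: float,
             cols: int, rows: int) -> tuple:
    """Return the (row, col) of the panel whose display area contains
    position M, given by horizontal coordinate X and vertical coordinate Y;
    coordinates outside the wall clamp to the nearest edge panel."""
    col = min(max(int(x // panel_w), 0), cols - 1)
    row = min(max(int(y // panel_h), 0), rows - 1)
    return row, col

# A target at (2.3 m, 1.1 m) on a wall of 1.0 m x 1.5 m panels
print(panel_at(2.3, 1.1, panel_w=1.0, panel_h=1.5, cols=4, rows=2))  # (0, 2)
```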
In one embodiment, referring to fig. 2 again, in an implementation aspect, the sound generating unit 112 of each panel 110 includes an electric sensing element 1121 and a vibrating membrane 1122. The electric sensing element 1121 may be disposed on one of the light-permeable substrates of the panel 110 and located in the non-display region B surrounding the display region a. The vibrating membrane 1122 is spaced apart from the electric sensing element 1121 and is located within the display region a of the panel 110.
After the control module 140 receives the position information from the environment sensor 120 and drives the sound generating unit 112 in the display area a of at least one of the panels 110 to play the sound file information according to the position information, the control module 140 transmits a reference current to the vibrating membrane 1122 of the sound generating unit 112 to control the vibrating membrane 1122 to generate a first magnetic field, and transmits a sound source current generated according to the sound file information to the electric sensing element 1121 to control the electric sensing element 1121 to generate a second magnetic field. The first magnetic field and the second magnetic field repel each other, so the vibrating membrane 1122 vibrates, and the sound file information is emitted within the range of the display area a of the panel 110 corresponding to the target object S.
Referring to fig. 4, in another embodiment, the sound generating unit 112 of each panel 110 includes an upper metal layer 1123, a lower metal layer 1124 and a diaphragm 1125. The diaphragm 1125 is located between the upper metal layer 1123 and the lower metal layer 1124 and within the display area a of the panel 110. The upper metal layer 1123 and the lower metal layer 1124 may have hollowed-out regions 1123a and 1124a, respectively, both of which correspond to the display area a and expose the diaphragm 1125.
When the control module 140 receives the position information from the environment sensor 120 and drives the sound generating unit 112 in the display area a of at least one of the panels 110 to play the sound file information according to the position information, the control module 140 transmits the sound source current generated according to the sound file information to the upper metal layer 1123 and the lower metal layer 1124, so that ions are distributed in the upper metal layer 1123 and the lower metal layer 1124 in correspondence with the sound source current, the electrical polarity of the ions in the upper metal layer 1123 being opposite to that of the ions in the lower metal layer 1124. The diaphragm 1125 is thereby made to vibrate, so that the sound file information is emitted within the range of the display area a of the panel 110 corresponding to the target object S.
Here, the diaphragm in the panel 110 (for example, the vibrating membrane 1122 in fig. 2 or the diaphragm 1125 in fig. 4) generates sound by vibration, so that the panel 110 itself becomes a speaker and the sound file information matches the video picture information more vividly. In some embodiments, the panel 110 may be a liquid crystal display, a micro light emitting diode display (Micro LED), an organic electroluminescent display (OLED), or another flexible display.
In one embodiment, as shown in FIG. 3, the video picture information of the multimedia file signal has at least one sounding image P, and the sound file information is associated with the at least one sounding image P. The image analysis module 130 detects, among the image data, at least one first image data 132 containing the sounding image P and at least one second image data 134 not containing the sounding image P. In one embodiment, the sounding image P corresponds to the sound file information and may be a human image, an animal image, or an object image (e.g., a car, a musical instrument, a speaker, etc.). The control module 140 drives the sound generating units 112 in the display areas a of the panels 110 displaying the first image data 132 to play the sound file information. Here, the sound file information is played not only by the panel 110 at the position M in the first dimension D1 and/or the second dimension D2 of the dynamic tv wall 100 corresponding to the target object S, but also by the sound generating units 112 of the panels 110 displaying the sounding image P, while the remaining panels 110 do not play the sound file information. Thus, the sound file information not only sounds on the panel 110 near the target object S but also follows the position of the sounding image P within the video picture information, and is transmitted synchronously in a more realistic manner.
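The detection of the sounding image P itself is left to the image analysis module; a sketch of the subsequent classification, assuming P is reported by an upstream detector as a bounding box in frame pixels (a hypothetical interface), marks every tile overlapping the box as first image data:

```python
def classify_tiles(box, tile_w, tile_h, rows, cols):
    """Split the tile grid into first image data (tiles overlapping the
    sounding image's bounding box) and second image data (the rest).
    `box` = (x0, y0, x1, y1) in frame pixels."""
    x0, y0, x1, y1 = box
    first, second = [], []
    for r in range(rows):
        for c in range(cols):
            tx0, ty0 = c * tile_w, r * tile_h
            hit = tx0 < x1 and tx0 + tile_w > x0 and \
                  ty0 < y1 and ty0 + tile_h > y0
            (first if hit else second).append((r, c))
    return first, second

# A sounding image in the lower-right of a 1080x1920 frame on a 2 x 4 wall
first, second = classify_tiles((1500, 600, 1900, 1000), 480, 540, rows=2, cols=4)
```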
In some embodiments, each panel 110 has two sound generating units 112 located at its left and right sides, and the sound file information can be played by the two sound generating units 112 of the panel 110 at the position M of the dynamic tv wall 100 corresponding to the target object S to form left and right channels. The sound file information can likewise be played by the two sound generating units 112 of the panel 110 displaying the sounding image P to form left and right channels. The sound file information is thereby transmitted in a more stereoscopic manner.
In some embodiments, the sound file information may be played by at least two panels 110 at the position M of the dynamic tv wall 100 corresponding to the target object S to form left and right channels. The sound file information may also be played by the panel 110 displaying the sounding image P together with an adjacent panel 110 to form left and right channels. The sound file information is thereby transmitted in a more stereoscopic manner.
In another embodiment, when the environment sensor 120 receives a plurality of environment detection signals and generates a plurality of pieces of position information from them, the position information represents a plurality of positions M on the dynamic tv wall 100, and the control module 140 drives the one or more panels 110 located at those positions M. For example, referring to fig. 5, when the environment sensor 120 receives the environment detection signals of the target objects S1, S2, and S3 and generates a plurality of pieces of position information accordingly, the position information represents a plurality of positions M (e.g., position M1, position M2, and position M3) on the first dimension D1 of the dynamic video wall 100, with different panels 110 located at position M1, position M2, and position M3 respectively. The control module 140 controls all the panels 110 to display the image data and, according to the position information, drives the sound generating units 112 in the display areas a of the panels 110 at positions M1, M2, and M3 (e.g., the panel 1104, the panel 1106, and the panel 1108) to play the sound file information. For another example, referring to fig. 6, if one panel 110 (e.g., the panel 1106) is located at both positions M1 and M2 and another panel 110 (e.g., the panel 1108) is located at position M3, the control module 140, according to the position information, drives the sound generating units 112 in the display areas a of the panel 1106 (covering positions M1 and M2) and the panel 1108 (at position M3) to play the sound file information.
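A sketch of this many-targets case, assuming horizontal-only positions and the same hypothetical grid mapping as above: distinct panel columns are collected into a set, so two targets in front of the same panel (as in fig. 6) drive that panel's sound generating unit only once.

```python
def columns_to_drive(positions_x, panel_w: float, cols: int) -> set:
    """Map every reported horizontal position to a panel column and
    deduplicate, so a shared panel is driven once."""
    return {min(max(int(x // panel_w), 0), cols - 1) for x in positions_x}

# Targets S1 and S2 in front of the same panel, S3 in front of another
print(columns_to_drive([1.2, 1.4, 3.7], panel_w=1.0, cols=4))  # {1, 3}
```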
Fig. 7 is a flowchart of a video and audio playing method of a dynamic video wall (process F10) according to an embodiment of the present invention. Please refer to fig. 1 to 3 and fig. 7. The video and audio playing method of the dynamic video wall (process F10) includes receiving a multimedia file signal (step S101); receiving an environment detection signal (step S103); generating at least one piece of position information according to the environment detection signal (step S105); dividing the video picture information into a plurality of image data corresponding to the arrangement positions of the panels tiled in the dynamic video wall (step S107); displaying the image data on the panels 110 (step S109); and driving the sound emitting unit 112 of at least one of the panels 110 to play the sound file information according to the at least one piece of position information, wherein the sound file information is output within the display area a of the at least one panel 110 (step S111).
In step S101, the image analysis module 130 receives a multimedia file signal, wherein the multimedia file signal includes video information and audio information.
In step S103, the environment sensor 120 receives at least one environment detection signal. In some embodiments, the environmental sensor 120 may be an infrared emitter, an infrared CMOS camera, an infrared distance sensor, an ultrasonic sensor, an image sensor, or a combination thereof with a microprocessor. In some embodiments, the environment detection signal may be a distance signal, an angle signal, a temperature signal, and/or an image signal.
In step S105, the environment sensor 120 generates at least one piece of position information according to the environment detection signal. In some embodiments, the position information may represent a position M in the first dimension D1 of the dynamic tv wall 100 (represented by a horizontal coordinate value X), a position M in the second dimension D2 (represented by a vertical coordinate value Y), or a position M in both the first dimension D1 and the second dimension D2 (represented by a horizontal coordinate value X and a vertical coordinate value Y). In other words, referring to fig. 3, for the same target object S and depending on the type of the environment sensor 120 in different embodiments, the position information of the target object S may be a position M' (expressed by a horizontal coordinate value X) in the first dimension D1, or a position M'' (expressed by a horizontal coordinate value X and a vertical coordinate value Y) in the first dimension D1 and the second dimension D2.
In step S107, the image analysis module 130 divides the video picture information into a plurality of image data corresponding to the arrangement positions of the panels tiled in the dynamic tv wall 100.
In step S109, the control module 140 receives the image data from the image analysis module 130 and controls the panels 110 to display the image data respectively. The image data are displayed synchronously by the tiled panels 110 so as to compose a complete picture.
In step S111, the control module 140 receives the position information from the environment sensor 120, and drives the sound-emitting unit 112 of at least one of the panels 110 to play the sound-file information according to (or in response to) the position information. In other words, the sound of the sound file information played by each sound generating unit 112 is transmitted from the display area a of the panel 110 where each sound generating unit 112 is located.
In some embodiments, each panel 110 has two sound generating units 112 located at its left and right sides, and the control module 140 drives the two sound generating units 112 of the panel 110 according to (or in response to) the position information to form left and right channels, so that the sound file information is transmitted in a more stereoscopic manner.
In some embodiments, the control module 140 drives the panel 110 and an adjacent panel 110 according to (or in response to) the position information to form left and right channels, so that the sound file information is transmitted in a more stereoscopic manner.
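One way to realise the adjacent-panel stereo embodiment, assuming a single horizontal row of panels and a hypothetical channel assignment (the patent does not fix which neighbour is chosen): the target panel and its nearest neighbour carry the two channels.

```python
def stereo_columns(col: int, cols: int) -> tuple:
    """Pick (left, right) panel columns for the two channels: the panel
    at the target position plus its nearest horizontal neighbour."""
    neighbour = col + 1 if col + 1 < cols else col - 1
    return (min(col, neighbour), max(col, neighbour))

print(stereo_columns(3, cols=4))  # (2, 3): the rightmost panel pairs leftward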
Fig. 8 is a flowchart of an embodiment of step S111. Please refer to fig. 1 to 3, fig. 7 and fig. 8. In this embodiment, the sound generating unit 112 of each panel 110 includes an electric sensing element 1121 and a diaphragm 1122. The vibrating membrane 1122 is spaced apart from the electric sensing element 1121 and is located within the display region a of the panel 110.
First, steps S101 to S109 are substantially the same as those described above, and therefore, the description thereof is omitted.
In the step of driving the sound generating units 112 of at least one of the panels to play the sound file information according to at least one piece of position information (step S111), driving each sound generating unit 112 may further include transmitting a reference current to the diaphragm 1122 (as shown in fig. 2) so that the diaphragm 1122 generates a first magnetic field (step S1111a), and transmitting a sound source current generated according to the sound file information to the electric sensing element 1121 so that the electric sensing element 1121 generates a second magnetic field (step S1111b).
After the control module 140 receives the position information from the environment sensor 120 (step S103), the control module 140 transmits a reference current to the diaphragm 1122 of the sound generating unit 112 to control the diaphragm 1122 to generate the first magnetic field (step S1111a). The control module 140 then transmits the sound source current generated according to the sound file information to the electric sensing element 1121 to control it to generate the second magnetic field. The first magnetic field and the second magnetic field repel each other, so the diaphragm 1122 vibrates and the sound file information is emitted within the display area a of the panel 110 corresponding to the target object S (step S1111b).
Fig. 9 is a flowchart of another embodiment of step S111. Please refer to fig. 1, 4, 7 and 9. In this embodiment, the sound generating unit 112 of each panel 110 includes an upper metal layer 1123, a lower metal layer 1124, and a diaphragm 1125 located between the upper metal layer 1123 and the lower metal layer 1124 and in the display area a of the panel 110.
First, steps S101 to S109 are substantially the same as those described above, and therefore, the description thereof is omitted.
In the step of driving the sound emitting units 112 of at least one of the panels 110 to play the sound file information according to at least one piece of position information (step S111), driving each sound emitting unit 112 may further include transmitting a sound source current generated according to the sound file information to the upper metal layer 1123 and the lower metal layer 1124 (step S1112).
In step S1112, after the control module 140 receives the position information from the environment sensor 120 (step S103), the control module 140 transmits the sound source current generated according to the sound file information to the upper metal layer 1123 and the lower metal layer 1124, so that ions are distributed in the two layers in correspondence with the sound source current, the polarity of the ions in the upper metal layer 1123 being opposite to that in the lower metal layer 1124. The diaphragm 1125 thereby vibrates, and the sound file information is emitted within the display area a of the panel 110 corresponding to the target object S.
Fig. 10 is a flowchart of a video and audio playing method (process F11) for a dynamic video wall according to another embodiment of the present invention. Please refer to fig. 10 and 11. In some embodiments, the image analysis module 130 may further detect, among all the image data, the first image data 132 containing the sounding image P and the second image data 134 not containing it (step S108). After step S108, the control module 140 controls the panels 110 to display the image data (step S109): it controls some of the panels 110 (e.g., the panels 1107 and 1108 of fig. 11) to display the at least one first image data 132 and the other panels 110 (e.g., the panels 1101 to 1106 of fig. 11) to display the at least one second image data 134. Moreover, the control module 140 may further drive the sound generating units 112 in the display areas a of the panels 110 displaying the first image data 132 (e.g., the panels 1107 and 1108 in fig. 11) to play the sound file information (step S110). Here, the sound file information may be played not only by the panel 110 at the position M of the dynamic tv wall 100 in the first dimension D1 and/or the second dimension D2 corresponding to the target object S (e.g., the panel 1104 in fig. 11), but also by the sound generating units 112 of the panels 110 displaying the sounding image P (e.g., the panels 1107 and 1108 in fig. 11), while the other panels 110 (e.g., the panels 1101 to 1103, 1105 and 1106 in fig. 11) do not play the sound file information.
Fig. 12 is a flowchart of another embodiment of step S111. In some embodiments, step S111 may include comparing the number of pieces of position information with a preset number value (step S1113a); driving the sound generating units of all the panels to play the sound file information when the number of pieces of position information is greater than the preset number value (step S1113b); and driving the sound generating units of at least one of the panels to play the sound file information in response to the at least one piece of position information when the number is less than the preset number value (step S1113c).
The position information represents a position M of a horizontal distance in the first dimension D1 of the dynamic tv wall 100 (herein, the position M is represented by a horizontal coordinate value), or the position information may represent a position M of a vertical distance in the second dimension D2 of the dynamic tv wall 100 (herein, the position M is represented by a vertical coordinate value), or the position information may also represent a position M of a horizontal distance in the first dimension D1 and a vertical distance in the second dimension D2 of the dynamic tv wall 100 (herein, the position M is represented by a horizontal coordinate value and a vertical coordinate value). The preset number refers to a number threshold preset for the position information. In one embodiment, the quantity threshold may include one or more thresholds, such as: a single threshold, a combination of upper and lower thresholds, or a combination of three or more thresholds, etc.
After the control module 140 compares the number of pieces of position information with the preset number value (step S1113a), if the number is greater than the preset number value, the control module 140 drives the sound generating units 112 of all the panels 110 to play the sound file information (step S1113b). For example, referring to fig. 12 and 13, suppose the preset number value is 4 (a single threshold). After the environment sensor 120 receives the environment detection signals from the target objects S1 to S5 (step S103) and generates five pieces of position information (representing positions M1 to M5 respectively) according to those signals (step S105), the control module 140 compares the number of pieces of position information with the preset number value (step S1113a); since the number is greater than the preset number value, the control module 140 drives the sound generating units 112 of all the panels 110 (e.g., the panels 1101 to 1112 of fig. 13) to play the sound file information (step S1113b). For another example, referring to fig. 5 and 12, after the environment sensor 120 receives the environment detection signals from the target objects S1 to S3 (step S103) and generates three pieces of position information (representing positions M1 to M3 respectively) according to those signals (step S105), the control module 140 compares the number with the preset number value (step S1113a); since the number is less than the preset number value, the control module 140, in response to the position information, plays the sound file information on the sound generating units 112 of the panels 110 at positions M1, M2, and M3 (e.g., the panel 1104, the panel 1106, and the panel 1108 of fig. 5) (step S1113c). The control module 140 can thus adjust the number of sound generating units 112 playing the sound file information according to the number of pieces of position information.
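A sketch of this audience-count rule, with the preset number fixed at the single threshold of 4 used in the fig. 13 example; the mapping from positions to panel columns is the same hypothetical grid lookup as above:

```python
PRESET_COUNT = 4  # hypothetical threshold, matching the FIG. 13 example

def panels_for_positions(positions_x, all_cols, panel_w: float) -> set:
    """Steps S1113a-S1113c: drive every panel when more positions than
    the preset number are reported, otherwise only the panels located
    at the reported positions (horizontal-only)."""
    if len(positions_x) > PRESET_COUNT:                        # S1113a -> S1113b
        return set(all_cols)
    return {min(max(int(x // panel_w), 0), len(all_cols) - 1)  # S1113c
            for x in positions_x}

# Five targets exceed the threshold, so every column plays
print(panels_for_positions([0.2, 0.9, 1.7, 2.4, 3.6], range(4), 1.0))  # {0, 1, 2, 3}
```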
Fig. 14 is a flowchart of another embodiment of step S111. In some embodiments, step S111 may include comparing the relative distance value H with a preset depth value Ha (step S1114a); driving the sound generating units 112 of the panels 110 located at the at least one position M to play the sound file information at a first volume when the relative distance value H is smaller than the preset depth value Ha (step S1114b); and driving the sound generating units 112 of the panels 110 located at the at least one position M to play the sound file information at a second volume when the relative distance value H is greater than the preset depth value Ha (step S1114c). The relative distance value H is associated with the position information and represents the distance between the target object S, at the position M corresponding to the dynamic tv wall 100, and the dynamic tv wall 100 itself.
For example, referring to fig. 14 and 15, take the target object S1: when the environment sensor 120 receives the environment detection signal of the target object S1 relative to the dynamic tv wall 100 and computes the horizontal distance X1 of the target object S1 in the first dimension D1 and the relative distance value H1 between the target object S1 and the dynamic tv wall 100 in the third dimension D3 (here, the position M1 is represented by the horizontal coordinate value X1) (steps S103 and S105), the control module 140 compares the relative distance value with the preset depth value (step S1114a). Since the relative distance value H1 is smaller than the preset depth value Ha, the control module 140 drives the sound generating unit 112 of the panel 1104 located at the position M1 to play the sound file information at the first volume (step S1114b).
Take the target object S2: when the environment sensor 120 receives the environment detection signal of the target object S2 relative to the dynamic tv wall 100 and computes the horizontal distance X2 in the first dimension D1 and the relative distance value H2 in the third dimension D3 (here, the position M2 is represented by the horizontal coordinate value X2) (steps S103 and S105), the control module 140 compares the relative distance value H2 with the preset depth value Ha (step S1114a). Since the relative distance value H2 is greater than the preset depth value Ha, the control module 140 drives the sound generating unit 112 of the panel 1108 located at the position M2 to play the sound file information at the second volume (step S1114c). The second volume is different from the first volume.
In one embodiment, the second volume is greater than the first volume, so that the volume of the audio file information can be increased as the relative distance value H between the target object S and the dynamic video wall 100 increases.
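A sketch of steps S1114a-S1114c under this embodiment (second volume greater than first), with a hypothetical preset depth value Ha of 3 metres and illustrative volume levels:

```python
PRESET_DEPTH_HA = 3.0  # metres; hypothetical value for Ha

def playback_volume(relative_h: float, first_volume: int = 60,
                    second_volume: int = 80) -> int:
    """Return the volume for the panel at the target's position: the
    first volume when the target is nearer than Ha, the louder second
    volume when it is farther, so volume grows with distance."""
    return first_volume if relative_h < PRESET_DEPTH_HA else second_volume

print(playback_volume(1.8), playback_volume(4.5))  # 60 80
```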
Fig. 16 is a flowchart of another embodiment of step S111. In some embodiments, the at least one piece of position information at a first time point represents a first position Ma in the first dimension D1 of the dynamic tv wall 100, and at a second time point it represents a second position Mb. Step S111 may include driving the sound emitting unit 112 of the panel 110 located at the first position Ma to play the sound file information at the first time point in response to the position information corresponding to the first time point (step S1115a), and driving the sound emitting unit of the panel located at the second position Mb to play the sound file information at the second time point in response to the position information corresponding to the second time point (step S1115b). For example, referring to fig. 16 and 17, take the target object S1: at the first time point, the position information represents the first position Ma in the first dimension D1, and the control module 140 drives the sound emitting unit 112 of the panel 1104 at the first position Ma to play the sound file information in response to the position information corresponding to the first time point (step S1115a). Next, referring to fig. 16 and 18, at the second time point the position information represents the second position Mb, and the control module 140 drives the sound emitting unit 112 of the panel 1108 at the second position Mb to play the sound file information in response to the position information corresponding to the second time point (step S1115b). The control module 140 can thus, according to position information received at different time points, hand playback of the sound file information to the panel 110 located at the corresponding position M at each time point.
In other embodiments, step S1115a may also drive, at the first time point and in response to the position information corresponding to that time point, the sound emitting units 112 of the panels 110 located at the first position Ma along the second dimension D2 (e.g., panels 1103 and 1104 in fig. 17) to play the sound file information; and step S1115b may, at the second time point and in response to the corresponding position information, drive the sound emitting units 112 of the panels 110 located at the second position Mb along the second dimension D2 (e.g., panels 1107 and 1108 in fig. 18) to play the sound file information. For example, the sounding image P may be a dynamic image, such as lightning falling from the sky. At the first time point, the control module 140, in response to the position information corresponding to the first time point, drives the panels displaying the dynamic sounding image P at the first position Ma to play the sound file information sequentially along the second dimension D2 from the panel 1103 to the panel 1104. At the second time point, the control module 140, in response to the position information corresponding to the second time point, drives the panels displaying the dynamic sounding image P at the second position Mb to play the sound file information sequentially along the second dimension D2 from the panel 1107 to the panel 1108.
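The two time points of fig. 17 and fig. 18 generalise naturally to a polling loop; a sketch assuming hypothetical `sensor.read_x()` and `wall.play_on(col)` interfaces, neither of which is named in the patent:

```python
import time

def follow_target(sensor, wall, panel_w: float, cols: int,
                  period_s: float = 0.1) -> None:
    """Re-sample the target position each period and move sound file
    playback to the panel column now in front of it (steps S1115a and
    S1115b repeated over time)."""
    while True:
        x = sensor.read_x()  # position information at this time point
        col = min(max(int(x // panel_w), 0), cols - 1)
        wall.play_on(col)    # only this column's sound unit plays
        time.sleep(period_s)
```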
Fig. 19 is a flowchart of a video and audio playing method (process F20) for a dynamic video wall according to another embodiment of the present invention. Please refer to fig. 19 and 20. The video and audio playing method of the dynamic video wall (process F20) includes receiving a multimedia file signal (step S201); dividing the video picture information into a plurality of image data corresponding to the positions of the panels 110 tiled in the dynamic video wall 100 (step S203); detecting, among all the image data, at least one first image data containing the sounding image P (step S205); displaying the image data on the panels (step S207); and driving the sound generating units 112 of the panels 110 displaying each first image data 132 to play the sound file information at a first volume (step S209).
In step S201, the image analysis module 130 receives a multimedia file signal, wherein the multimedia file signal includes video information and audio information. The video frame information has at least one sounding image P, and the sound file information is associated with the at least one sounding image P.
In step S203, the image analysis module 130 divides the video frame information into a plurality of image data corresponding to the arrangement positions of the panels 110 spliced and arranged in the dynamic tv wall 100. Here, the image data are respectively displayed on the panels 110, and the image data are synchronously displayed by the panels 110 arranged in a splicing manner to be spliced into a complete picture.
In step S205, the image analysis module 130 detects the first image data 132 with the audio image P and the second image data 134 without the audio image P in all the image data.
In step S207, the control module 140 receives the image data from the image analysis module 130 and controls the panels 110 to display the image data respectively. The image data are displayed synchronously by the tiled panels 110 so as to compose a complete picture. For example, as shown in fig. 20, the first image data 132 containing the sounding image P are displayed on the panels 1107 and 1108, and the second image data 134 without the sounding image P are displayed on the panels 1101 to 1106.
In step S209, the control module 140 drives the sound generating units 112 in the display areas a of the panels 110 displaying each first image data 132 (for example, the panels 1107 and 1108 in fig. 20) to play the sound file information at the first volume. Here, the sound file information is played by the sound generating units 112 of the panels 110 displaying the sounding image P, while the remaining panels 110 (for example, the panels 1101 to 1106 in fig. 20) do not play the sound file information.
Fig. 21 is a flowchart of a video and audio playing method for a dynamic video wall (process F21) according to another embodiment of the present invention. Steps S201 to S207 are substantially the same as described above and are therefore not repeated. In some embodiments, the control module 140 may further drive the sound generating units 112 in the display areas a of the panels 110 displaying each second image data 134 (for example, the panels 1101 to 1106 of fig. 20) to play the sound file information at a second volume (step S211). Here, the sound file information is played not only by the sound generating units 112 of the panels 110 displaying the sounding image P but also by the remaining panels 110.
In one embodiment, the control module 140 executes steps S209 and S211 at the same time. In another embodiment, the second volume differs from the first volume: the first volume corresponds to the sound file information of the first image data 132 containing the sounding image P, and the second volume corresponds to the sound file information of the second image data 134 without the sounding image P (for example, background sound). In yet another embodiment, the second volume is smaller than the first volume, so that the sound file information corresponding to the first image data 132 containing the sounding image P is louder than that corresponding to the second image data 134 without it. Here, the sound file information may be played at the louder first volume by the sound generating units 112 of the panels 110 displaying the sounding image P (for example, the panels 1107 and 1108 in fig. 20) and at the quieter second volume by the remaining panels 110 (for example, the panels 1101 to 1106 in fig. 20).
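Running steps S209 and S211 together amounts to a two-level mix; a sketch with a hypothetical `wall.play(panel, volume)` interface and illustrative volume values:

```python
def mix_wall(first_panels, second_panels, wall,
             first_volume: int = 80, second_volume: int = 40) -> None:
    """Panels showing the sounding image play the sound file at the
    louder first volume (S209); the remaining panels play it, e.g. as
    background sound, at the quieter second volume (S211)."""
    for p in first_panels:
        wall.play(p, first_volume)
    for p in second_panels:
        wall.play(p, second_volume)
```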
FIG. 22 is a flowchart of an embodiment of step S209. Please refer to fig. 2, fig. 19, fig. 21, and fig. 22. In this embodiment, the sound generating unit 112 of each panel 110 includes an electric sensing element 1121 and a diaphragm 1122. The diaphragm 1122 is spaced apart from the electric sensing element 1121 and is located within the display area a of the panel 110. Steps S201 to S207 are substantially the same as described above and are therefore not repeated. In some embodiments, in the step of driving the sound generating units 112 of the panels displaying the first image data to play the sound file information at the first volume (step S209), driving each sound generating unit 112 may further include transmitting a reference current to the diaphragm 1122 so that it generates a first magnetic field (step S2091a), and transmitting a sound source current generated according to the sound file information to the electric sensing element 1121 so that it generates a second magnetic field (step S2091b).
After the control module 140 receives the image data from the image analysis module 130 and controls some of the panels 110 to display the first image data 132 and others to display the second image data 134 (step S207), the control module 140 transmits a reference current to the diaphragm 1122 of the sound generating unit 112 to control it to generate the first magnetic field (step S2091a), and transmits the sound source current generated according to the sound file information to the electric sensing element 1121 to control it to generate the second magnetic field. The two magnetic fields repel each other, so the diaphragm 1122 vibrates and the sound file information is emitted at the first volume within the range of the display area a of the panel 110 corresponding to the target object S (step S2091b).
Fig. 23 is a flowchart of another embodiment of step S209. Please refer to Fig. 4, Fig. 19, Fig. 21, and Fig. 23. In this embodiment, the sound generating unit 112 of each panel 110 includes an upper metal layer 1123, a lower metal layer 1124, and a vibrating membrane 1125 located between the upper metal layer 1123 and the lower metal layer 1124 and within the display area A of the panel 110. First, steps S201 to S207 are substantially the same as those described above, and their description is therefore omitted. In some embodiments, in the step of driving the sound generating units 112 of the panels displaying the first image data to play the sound file information at the first volume (step S209), the step of driving each sound generating unit 112 may further include transmitting a sound source current generated according to the sound file information to the upper metal layer 1123 and the lower metal layer 1124 (step S2092).
After the control module 140 receives the image data from the image analysis module 130 and controls some of the panels 110 to display the first image data 132 and the others to display the second image data 134 (step S207), the control module 140 transmits the sound source current generated according to the sound file information to the upper metal layer 1123 and the lower metal layer 1124, so that the two layers distribute electric charges corresponding to the sound source current, the charges on the upper metal layer 1123 and those on the lower metal layer 1124 being of opposite polarity. As a result, the vibrating membrane 1125 vibrates, and the sound file information is produced at the first volume within the display area A of the panel 110 corresponding to the object S (step S2092).
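For comparison, here is a sketch of the metal-layer drive of step S2092, assuming the two layers are charged with opposite polarity in proportion to the sound source current so that the vibrating membrane between them is deflected; the function name, gain, and sample values are assumptions for illustration.

```python
# Hypothetical model of step S2092: opposite-polarity charge on the upper
# metal layer 1123 and lower metal layer 1124, proportional to the audio.

def layer_charges(audio_sample, volume, gain=1.0):
    """Return (upper, lower) charge drive for one sample; always opposite signs."""
    source_current = volume * audio_sample
    upper = +gain * source_current   # charge on the upper metal layer
    lower = -gain * source_current   # opposite-polarity charge on the lower layer
    return upper, lower

for sample in (-1.0, 0.0, 0.5, 1.0):
    print(sample, layer_charges(sample, volume=0.9))
```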
In summary, in the dynamic television wall and its video and audio playing method according to the embodiments of the invention, an environment detection signal is received, position information is generated according to the environment detection signal, and the sound generating unit in the display area of at least one of the panels is then driven according to the position information to play the sound file information. The sound file information is thus produced within the display area of the panel corresponding to the target object, and is delivered with a stronger sense of presence. In another embodiment of the invention, the dynamic television wall and its video and audio playing method detect, among all the image data, the first image data containing the sounding image and the second image data without it, and then drive the sound generating units in the display areas of the panels displaying the first image data to play the sound file information at the first volume. In this way, the sound file information is played by the sound generating units of the panels on which the sounding image is displayed.
The present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof, and it should be understood that various changes and modifications can be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (14)

1. A video and audio playing method of a dynamic television wall, the dynamic television wall comprising a plurality of panels arranged in a tiled configuration, each panel having a respective sound generating unit, the method characterized by comprising the following steps:
receiving a multimedia file signal, wherein the multimedia file signal comprises video picture information and sound file information;
receiving an environment detection signal, wherein the environment detection signal is a distance signal, an angle signal, a temperature signal and/or an image signal;
generating at least one piece of position information according to the environment detection signal, wherein the position information represents a position of a horizontal distance of the dynamic television wall corresponding to a target object in a first dimension, a position of a vertical distance of the dynamic television wall corresponding to the target object in a second dimension, or a position of both the horizontal distance in the first dimension and the vertical distance in the second dimension;
dividing the video picture information into a plurality of image data corresponding to the arrangement positions of the panels;
displaying the image data on the panels, respectively; and
driving the sound generating unit of at least one of the panels to play the sound file information according to the at least one piece of position information, wherein the sound file information is output in a display area of the at least one panel;
wherein the step of driving the sound generating unit of at least one of the panels to play the sound file information according to the at least one piece of position information comprises:
driving, in response to the at least one piece of position information, the sound generating unit of the at least one panel located at the at least one position among the panels to play the sound file information.
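As an informal illustration only, not part of the claims, the overall flow of claim 1 can be sketched as follows, with a stub Panel class and a toy position-extraction step standing in for the environment sensor processing; every name and data format here is a hypothetical.

```python
# Hypothetical end-to-end sketch of claim 1: split the video picture
# information across tiled panels, then drive only the sound generating
# units of the panels at the detected position(s).
from dataclasses import dataclass

@dataclass
class Panel:
    column: int

    def display(self, tile):
        print(f"panel {self.column}: show {tile}")

    def play(self, sound_file):
        print(f"panel {self.column}: play sound ({len(sound_file)} bytes)")

def positions_from_environment(detection_signal):
    """Map raw sensor readings to panel columns (assumed preprocessing)."""
    return {reading["panel_column"] for reading in detection_signal}

def play_wall(video_tiles, sound_file, detection_signal, panels):
    positions = positions_from_environment(detection_signal)
    for panel, tile in zip(panels, video_tiles):
        panel.display(tile)                # every panel shows its image data
        if panel.column in positions:      # only panels at a detected position
            panel.play(sound_file)         # output the sound file information

panels = [Panel(c) for c in range(4)]
play_wall(["tile0", "tile1", "tile2", "tile3"], b"pcm", [{"panel_column": 2}], panels)
```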
2. The video and audio playing method of a dynamic television wall according to claim 1, wherein each piece of position information represents a position in a first dimension of the dynamic television wall, and the step of driving the sound generating unit of at least one of the panels to play the sound file information according to the at least one piece of position information comprises:
comparing the quantity of the at least one piece of position information with a preset quantity value;
when there are a plurality of pieces of position information and their quantity is greater than the preset quantity value, driving the sound generating units of all the panels to play the sound file information, wherein the sound file information is output in all the display areas of all the panels; and
when the quantity of the at least one piece of position information is smaller than the preset quantity value, driving, in response to the at least one piece of position information, the sound generating unit of each panel located at the at least one position among the panels to play the sound file information.
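An informal sketch of the count comparison of claim 2 follows, not part of the claims; the preset quantity value is an assumed threshold, and the claim leaves the equality case unspecified.

```python
# Hypothetical viewer-count routing: many detected positions drive every
# panel; few detected positions drive only the panels at those positions.

PRESET_QUANTITY = 3  # assumed preset quantity value

def panels_to_drive(detected_positions, all_panels):
    if len(detected_positions) > PRESET_QUANTITY:
        return set(all_panels)         # drive the sound units of all panels
    return set(detected_positions)     # drive only panels at detected positions

print(panels_to_drive({1, 4}, range(8)))          # few viewers -> localized sound
print(panels_to_drive({0, 2, 4, 6}, range(8)))    # many viewers -> all panels
```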
3. The video and audio playing method of a dynamic television wall according to claim 1, wherein each piece of position information includes a position in a first dimension of the dynamic television wall, and the step of driving the sound generating unit of at least one of the panels to play the sound file information according to the at least one piece of position information comprises:
comparing a relative distance value with a preset depth value, wherein the relative distance value is associated with the position information and is the relative distance between the target object and the dynamic television wall;
when the relative distance value is smaller than the preset depth value, driving the sound generating unit of each panel located at the at least one position among the panels to play the sound file information at a first volume; and
when the relative distance value is larger than the preset depth value, driving the sound generating unit of each panel located at the at least one position among the panels to play the sound file information at a second volume, the second volume being larger than the first volume.
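An informal sketch of the depth comparison of claim 3, not part of the claims; note that the claim makes the second volume, used for a farther target, the larger one, plausibly so that the sound still reaches a distant viewer. The threshold and volume values are assumptions.

```python
# Hypothetical distance-to-volume mapping per claim 3.

PRESET_DEPTH_M = 2.0                     # assumed preset depth value, in meters
FIRST_VOLUME, SECOND_VOLUME = 0.4, 0.8   # second volume larger, per claim 3

def volume_for_distance(relative_distance_m):
    if relative_distance_m < PRESET_DEPTH_M:
        return FIRST_VOLUME              # target close to the wall
    return SECOND_VOLUME                 # target far from the wall

for d in (0.5, 1.9, 2.5, 6.0):
    print(f"{d} m -> volume {volume_for_distance(d)}")
```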
4. The video and audio playing method of a dynamic television wall according to claim 1, wherein the at least one piece of position information represents a first position in a first dimension of the dynamic television wall at a first time point and a second position in the first dimension at a second time point, and the step of driving the sound generating unit of at least one of the panels to play the sound file information according to the at least one piece of position information comprises:
at the first time point, driving, in response to the at least one piece of position information, the sound generating unit of the panel at the first position to play the sound file information; and
at the second time point, driving, in response to the at least one piece of position information, the sound generating unit of the panel at the second position to play the sound file information.
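An informal sketch of claim 4's time-varying behavior, not part of the claims: as the detected position moves between time points, the playing sound generating unit follows it. The stream format and state handling are illustrative assumptions.

```python
# Hypothetical tracking loop: the sound hops from panel to panel as the
# target's detected column changes over time.

def follow_target(position_stream, panels):
    """position_stream yields (time, panel_column) pairs."""
    active = None
    for t, column in position_stream:
        if column != active:
            if active is not None:
                panels[active] = "silent"
            panels[column] = "playing"   # drive the panel at the new position
            active = column
        print(t, panels)

panels = ["silent"] * 4
follow_target([(0.0, 1), (1.0, 1), (2.0, 3)], panels)
```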
5. The video and audio playing method of a dynamic television wall according to claim 1, wherein each sound generating unit includes an electric sensing element and a vibrating membrane spaced apart from the electric sensing element and located in the display area of the panel, and in the step of driving the sound generating unit of at least one of the panels to play the sound file information according to the at least one piece of position information, the step of driving each sound generating unit includes:
transmitting a reference current to the vibrating membrane so that the vibrating membrane generates a first magnetic field; and
transmitting a sound source current generated according to the sound file information to the electric sensing element so that the electric sensing element generates a second magnetic field, wherein the first magnetic field and the second magnetic field repel each other, so that the vibrating membrane vibrates.
6. The video and audio playing method of a dynamic television wall according to claim 1, wherein each sound generating unit includes an upper metal layer, a lower metal layer, and a vibrating membrane located between the upper metal layer and the lower metal layer and in the display area of the panel, and in the step of driving the sound generating unit of at least one of the panels to play the sound file information according to the at least one piece of position information, the step of driving each sound generating unit includes:
transmitting a sound source current generated according to the sound file information to the upper metal layer and the lower metal layer, so that the upper metal layer and the lower metal layer distribute electric charges corresponding to the sound source current, thereby causing the vibrating membrane to vibrate.
7. The video and audio playing method of a dynamic television wall according to claim 1, wherein the video picture information has at least one sounding image, the sound file information is associated with the at least one sounding image, and the method further comprises:
detecting, among the image data, at least one first image data having the at least one sounding image and at least one second image data not having the at least one sounding image; and
driving the sound generating units of the panels displaying the first image data to play the sound file information.
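An informal sketch of claim 7's routing, not part of the claims: only panels whose image tile is detected to contain a sounding image play the sound file information. The detector here is a stub; the patent does not specify a detection algorithm.

```python
# Hypothetical sounding-image routing: detect which tiles contain a sounding
# image and drive only those panels' sound generating units.

def tiles_with_sounding_image(tiles, detector):
    return [i for i, tile in enumerate(tiles) if detector(tile)]

def route_sound(tiles, sound_file, detector):
    for i in tiles_with_sounding_image(tiles, detector):
        print(f"panel {i}: play sound file ({len(sound_file)} bytes)")

# Toy detector: a tile labelled 'speaker' counts as containing a sounding image.
route_sound(["sky", "speaker", "sea", "speaker"], b"pcm-data",
            lambda tile: tile == "speaker")
```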
8. A dynamic television wall, comprising:
a plurality of panels arranged in a tiled configuration, each panel comprising a sound generating unit located in a display area of the panel;
an environment sensor, receiving an environment detection signal and generating at least one piece of position information according to the environment detection signal, wherein the environment detection signal is a distance signal, an angle signal, a temperature signal and/or an image signal, and the position information represents a position of a horizontal distance of the dynamic television wall in a first dimension, a position of a vertical distance of the dynamic television wall in a second dimension, or a position of both the horizontal distance in the first dimension and the vertical distance in the second dimension;
an image analysis module, receiving a multimedia file signal, wherein the multimedia file signal comprises video picture information and sound file information, and the image analysis module divides the video picture information into a plurality of image data corresponding to the arrangement positions of the tiled panels of the dynamic television wall; and
a control module, controlling the panels to display the image data and driving, according to the at least one piece of position information, the sound generating unit in the display area of at least one of the panels to play the sound file information, wherein the sound file information is output in the display area.
9. The dynamic television wall of claim 8, wherein the at least one piece of position information represents a position of a horizontal distance in a first dimension of the dynamic television wall, and the at least one of the panels is at least one panel located at the position of the horizontal distance in the first dimension.
10. The dynamic television wall of claim 8, wherein the at least one piece of position information represents at least one position in a first dimension of the dynamic television wall, and the control module compares the quantity of the at least one piece of position information with a preset quantity value, wherein when there are a plurality of pieces of position information and their quantity is greater than the preset quantity value, the control module drives the sound generating units in the display areas of all the panels to play the sound file information, and when the quantity of the at least one piece of position information is smaller than the preset quantity value, the control module drives, in response to the at least one piece of position information, the sound generating unit in the display area of at least one panel located at the at least one position among the panels to play the sound file information.
11. The dynamic television wall of claim 8, wherein the at least one piece of position information represents a first position in a first dimension of the dynamic television wall at a first time point and a second position in the first dimension at a second time point, and the at least one of the panels is each panel located at the first position along a second dimension and each panel located at the second position along the second dimension.
12. The dynamic television wall of claim 8, wherein each sound generating unit includes an electric sensing element and a vibrating membrane spaced apart from the electric sensing element and located in the display area of the panel, the control module controls the vibrating membrane to generate a first magnetic field, and the control module controls the electric sensing element to generate a second magnetic field according to the sound file information, wherein the first magnetic field and the second magnetic field repel each other, so that the vibrating membrane vibrates.
13. The dynamic television wall of claim 8, wherein each sound generating unit includes an upper metal layer, a lower metal layer, and a vibrating membrane located between the upper metal layer and the lower metal layer and in the display area of the panel, and the control module controls the upper metal layer and the lower metal layer to distribute electric charges so that the vibrating membrane vibrates.
14. The dynamic television wall of claim 8, wherein the video picture information has at least one sounding image, the sound file information is associated with the at least one sounding image, the image analysis module detects, among the image data, at least one first image data having the at least one sounding image and at least one second image data not having the at least one sounding image, and the control module drives the sound generating units of the panels displaying the first image data to play the sound file information.
CN201811124645.0A 2018-07-06 2018-09-26 Dynamic television wall and video and audio playing method thereof Active CN109274998B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW107123609A TWI687915B (en) 2018-07-06 2018-07-06 Dynamic video wall and playing method thereof
TW107123609 2018-07-06

Publications (2)

Publication Number Publication Date
CN109274998A CN109274998A (en) 2019-01-25
CN109274998B true CN109274998B (en) 2021-01-15

Family

ID=65198202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811124645.0A Active CN109274998B (en) 2018-07-06 2018-09-26 Dynamic television wall and video and audio playing method thereof

Country Status (2)

Country Link
CN (1) CN109274998B (en)
TW (1) TWI687915B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200310736A1 (en) * 2019-03-29 2020-10-01 Christie Digital Systems Usa, Inc. Systems and methods in tiled display imaging systems
JP7443870B2 (en) * 2020-03-24 2024-03-06 ヤマハ株式会社 Sound signal output method and sound signal output device
CN113450664A (en) * 2020-03-26 2021-09-28 深圳蓝普科技有限公司 Display screen system and signal transmission method thereof
TWI742689B (en) 2020-05-22 2021-10-11 宏正自動科技股份有限公司 Media processing device, media broadcasting system, and media processing method
CN113724628A (en) * 2020-05-25 2021-11-30 苏州佳世达电通有限公司 Audio-visual system
CN111741412B (en) * 2020-06-29 2022-07-26 京东方科技集团股份有限公司 Display device, sound emission control method, and sound emission control device
CN112153538B (en) * 2020-09-24 2022-02-22 京东方科技集团股份有限公司 Display device, panoramic sound implementation method thereof and nonvolatile storage medium
TWI787799B (en) * 2021-04-28 2022-12-21 宏正自動科技股份有限公司 Method and device for video and audio processing

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130065022A (en) * 2011-12-09 2013-06-19 현대자동차주식회사 Sound field displaying method using image correction of image distortion
CN102724604B (en) * 2012-06-06 2014-11-26 北京中自投资管理有限公司 Sound processing method for video meeting
CN103152528A (en) * 2013-03-26 2013-06-12 冠捷显示科技(厦门)有限公司 Method for assembling television wall by self splicing of televisions
WO2015114387A1 (en) * 2014-02-03 2015-08-06 Tv One Limited Systems and methods for configuring a video wall
WO2016018844A1 (en) * 2014-07-28 2016-02-04 Suzo-Happ Group Interactive display device with audio output
CN106331530B (en) * 2015-06-19 2019-11-26 杭州海康威视数字技术股份有限公司 A kind of simultaneously and rapidly switching display methods of video wall, decoding device
CN105208358B (en) * 2015-11-04 2018-11-20 武汉微创光电股份有限公司 A kind of video monitoring system for video wall configuration
CN106227492B (en) * 2016-08-03 2019-07-26 广东威创视讯科技股份有限公司 Combination and mobile intelligent terminal interconnected method and device
US10110831B2 (en) * 2016-09-27 2018-10-23 Panasonic Intellectual Property Management Co., Ltd. Videoconference device
CN106851133A (en) * 2017-02-08 2017-06-13 北京小米移动软件有限公司 Control method for playing back and device

Also Published As

Publication number Publication date
CN109274998A (en) 2019-01-25
TWI687915B (en) 2020-03-11
TW202006702A (en) 2020-02-01

Similar Documents

Publication Publication Date Title
CN109274998B (en) Dynamic television wall and video and audio playing method thereof
EP2550809B1 (en) Techniques for localized perceptual audio
CN1747533A (en) Audiovisual system and tuning methods thereof
JP2006067295A (en) Method and device for generating sound, and method and device for reproducing sound
US8311400B2 (en) Content reproduction apparatus and content reproduction method
KR102348658B1 (en) Display device and driving method thereof
CN113810837B (en) Synchronous sounding control method of display device and related equipment
US20220272472A1 (en) Methods, apparatus and systems for audio reproduction
EP3731537A1 (en) Systems and methods in tiled display imaging systems
CN104202592B (en) Large-scale orthogonal full-length extraordinary movie audio playing device and method
CN110475189B (en) Sound production control method and electronic equipment
JP2007101804A (en) Image display device and program
JP2009017438A (en) Information transmission apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant