CN109062536B - Screen sounding method and device, electronic device and storage medium - Google Patents

Screen sounding method and device, electronic device and storage medium

Info

Publication number
CN109062536B
CN109062536B (application CN201810814037.6A; application publication CN109062536A)
Authority
CN
China
Prior art keywords
sound
screen
area
small window
sound production
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810814037.6A
Other languages
Chinese (zh)
Other versions
CN109062536A (en)
Inventor
Zhang Haiping (张海平)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810814037.6A
Publication of CN109062536A
Application granted
Publication of CN109062536B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Abstract

The embodiments of the present application disclose a screen sounding method and apparatus, an electronic device, and a storage medium, and relate to the technical field of electronic devices. The electronic device comprises a screen capable of vibrating to produce sound and exciters for driving the screen to produce sound; the screen comprises a plurality of sound production areas, and different sound production areas are driven to produce sound by different exciters. The method comprises the following steps: monitoring, during video playback, whether the video is being played in a small window; if so, selecting a target sound production area from the sound production areas other than the position of the small window; and driving, by an exciter, the target sound production area to produce sound through vibration. With this method, sound can be produced through the screen, and the electronic device does not depend on a sound production device such as a loudspeaker that requires a sound outlet hole, so the electronic device conforms to the trend toward thinner designs.

Description

Screen sounding method and device, electronic device and storage medium
Technical Field
The present disclosure relates to the field of electronic devices, and more particularly, to a method and an apparatus for generating a screen sound, an electronic device and a storage medium.
Background
Currently, electronic devices such as mobile phones and tablet computers generate sound through a speaker to output a sound signal. However, the speaker arrangement occupies considerable design space, which prevents the electronic device from following the trend toward thinner designs.
Disclosure of Invention
In view of the above problems, the present application provides a screen sounding method and apparatus, an electronic device, and a storage medium to address these problems.
In a first aspect, an embodiment of the present application provides a screen sound emission method, which is applied to an electronic device, where the electronic device includes a screen capable of vibrating to emit sound and an exciter for driving the screen to emit sound, the screen includes a plurality of sound emission areas, and different sound emission areas are driven to emit sound by different exciters, and the method includes: monitoring whether the video is played in a small window or not when the video is played; if so, selecting a target sounding area from the sounding areas except the position of the small window of the video; and driving the target sound production area to produce sound through vibration by an exciter.
In a second aspect, an embodiment of the present application provides a screen sound production apparatus applied to an electronic device, where the electronic device includes a screen capable of vibrating to produce sound and exciters for driving the screen to produce sound, the screen includes a plurality of sound production areas, and different sound production areas are driven to produce sound by different exciters. The screen sound production apparatus includes: a monitoring module, configured to monitor, during video playback, whether the video is being played in a small window; an area selection module, configured to select a target sound production area from the sound production areas other than the position of the small window if the video is played in the small window; and a sound production module, configured to drive, through the exciter, the target sound production area to produce sound by vibration.
In a third aspect, an embodiment of the present application provides an electronic device, including a screen, an exciter for driving the screen to produce sound, a memory, and a processor, where the screen, the exciter, and the memory are coupled to the processor, the memory stores instructions, and the processor performs the above method when the instructions are executed.
In a fourth aspect, the present application provides a computer-readable storage medium having program code executable by a processor, the program code causing the processor to perform the above-mentioned method.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: a screen including a plurality of sound production areas; exciters connected to the sound production areas of the screen and used to drive the screen to produce sound; and a circuit connected to the exciters, the circuit including a detection circuit and a driving circuit, where the detection circuit is configured to monitor, during video playback, whether the video is being played in a small window, and if so, to select a target sound production area from the sound production areas other than the position of the small window, and the driving circuit is configured to drive, through the exciter, the target sound production area to produce sound by vibration.
The screen sounding method and apparatus, the electronic device, and the storage medium provided by the present application are applied to an electronic device whose screen includes a plurality of sound production areas, with different sound production areas driven to produce sound by different exciters. When a video is played in a small window, the target sound production area is selected from the sound production areas other than the position of the small window. The electronic device can thus produce sound through the screen without relying on a sound production device such as a loudspeaker that requires a sound outlet hole, which allows the device to follow the trend toward thinner designs; moreover, the sound production position differs from the display position of the video, which reduces the influence of screen vibration on the video display.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; those skilled in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic structural diagram illustrating a viewing angle of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram illustrating another view of an electronic device provided in an embodiment of the present application;
fig. 3 is a schematic diagram illustrating a division of a sound emitting area of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic diagram illustrating another division of a sound emitting area of an electronic device according to an embodiment of the present application;
FIG. 5 is a flow chart of a screen sound generation method proposed by an embodiment of the present application;
FIG. 6 is a schematic diagram of a display of an electronic device according to an embodiment of the present application;
FIG. 7 is a flow chart illustrating a screen sound generation method proposed by an embodiment of the present application;
FIG. 8 is a flow chart showing some of the steps of a screen sound generation method proposed by an embodiment of the present application;
FIG. 9 is a schematic diagram of another display of an electronic device provided by an embodiment of the present application;
fig. 10 is a functional block diagram of a screen sound emission device according to an embodiment of the present application;
fig. 11 is a block diagram of an electronic device according to an embodiment of the present application;
fig. 12 is a block diagram showing another structure of an electronic apparatus according to an embodiment of the present application;
fig. 13 is a block diagram of an electronic device according to an embodiment of the present application for executing a screen sound emission method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Electronic devices often produce sound through speakers, earphones, and the like, which usually require openings in the device. On the one hand, the openings degrade product performance; to preserve the appearance, manufacturers try to reduce the size of the openings, but the smaller the opening, the more the performance is affected. On the other hand, a higher screen-to-body ratio gives a better full-screen user experience, but because a certain panel region must be reserved for the receiver to realize the sound production function, and an opening must be made in that panel region, a true full screen cannot be realized, which degrades the user experience.
In addition, the display screen in an electronic device such as a mobile phone or tablet computer generally serves to display content such as text, pictures, icons, or video. With the development of touch technologies, more and more display screens in electronic devices are touch display screens; when a touch display screen is provided and a user is detected performing touch operations such as dragging, clicking, double-clicking, or sliding on it, the device can respond to those touch operations.
With increasing user demands for the definition and fineness of displayed content, more electronic devices adopt larger touch display screens to achieve a full-screen display effect. However, when a large touch display screen is provided, functional devices such as the front camera, the proximity sensor, and the receiver disposed at the front of the electronic device limit the area to which the touch display screen can extend.
Generally, an electronic device includes a front panel, a rear cover, and a bezel. The front panel includes a forehead area, a middle screen area and a lower key area. Generally, the forehead area is provided with a sound outlet of a receiver and functional devices such as a front camera, the middle screen area is provided with a touch display screen, and the lower key area is provided with one to three physical keys. With the development of the technology, the lower key area is gradually cancelled, and the physical keys originally arranged in the lower key area are replaced by the virtual keys in the touch display screen.
The receiver sound outlet hole arranged in the forehead area is important for supporting the functions of the mobile phone and is not easily eliminated, so extending the displayable area of the touch display screen to cover the forehead area is difficult. After a series of studies, the inventor found that sound can be emitted by controlling the screen, the frame, or the rear cover of the mobile phone to vibrate, so that sound production devices such as the receiver sound outlet hole and the loudspeaker can be omitted.
Referring to fig. 1, an electronic device 100 according to an embodiment of the present disclosure is shown. The electronic device 100 comprises an electronic body 10, and the electronic body 10 comprises a housing 12 and a screen 120 arranged on the housing 12. The housing 12 may be made of metal, such as steel or aluminum alloy. As shown in fig. 2, the housing 12 may include a front panel 132, a rear cover 133, and a bezel 134; the bezel 134 connects the front panel 132 and the rear cover 133, and the screen 120 is disposed on the front panel. In this embodiment, the screen 120 may include a display screen, which generally includes the display panel 111, and may also include circuitry for responding to touch operations performed on the display panel 111. The display panel 111 may be a liquid crystal display (LCD) panel, and in some embodiments, the display panel 111 is a touch screen 109.
The electronic device further comprises an exciter 131 configured to drive a vibration component of the electronic device to vibrate and emit sound. Specifically, the vibration component is at least one of the screen 120 or the housing 12; that is, the vibration component may be part or all of the screen 120, part or all of the housing 12, or a combination of the screen 120 and the housing 12. As one embodiment, when the vibration component is the housing 12, it may be the rear cover of the housing 12. The embodiments of the present application mainly take the screen as the vibration component to explain how screen sound production is controlled.
In the embodiment of the present application, if the vibration component is the screen 120, the exciter 131 is connected to the screen 120 to drive the screen 120 to vibrate. In particular, the exciter 131 is attached below the screen 120, and the exciter 131 may be a piezoelectric driver or a motor. In one embodiment, the exciter 131 is a piezoelectric driver. The piezoelectric driver transmits its own deformation to the screen 120 through a moment action, so that the screen 120 vibrates to produce sound. The screen 120 includes a touch screen and a display screen; the display screen is located below the touch screen, and the piezoelectric driver is attached below the display screen, that is, to the surface of the display screen facing away from the touch screen. The piezoelectric driver includes multiple layers of piezoelectric ceramic sheets. When the multilayer piezoelectric ceramic sheets expand and contract, they drive the screen to bend and deform; the whole screen thus bends and vibrates repeatedly, pushing the air and producing sound.
As an embodiment, the electronic device 100 includes a detection circuit and a driving circuit. The detection circuit is configured to monitor, during video playback, whether the video is being played in a small window, and if so, to select a target sound production area from the sound production areas other than the position of the small window; the driving circuit is configured to drive, through the exciter, the target sound production area to produce sound by vibration. The exciter 131 is connected to the driving circuit 135 of the electronic device, and the driving circuit 135 is configured to input a control signal to the exciter 131 according to vibration parameters, so as to drive the exciter 131 to vibrate and thereby drive the vibration component to vibrate. The vibration parameters may be determined from the received sound signal, specifically from the vibration frequency and vibration amplitude of the sound signal to be produced.
In particular, the driving circuit may be the processor of the electronic device, or may be an integrated circuit within the electronic device capable of generating a driving voltage or current. The driving circuit outputs a high-low level driving signal to the exciter 131, and the exciter 131 vibrates according to the driving signal; different electrical parameters of the driving signal produce different vibration parameters of the exciter 131. For example, the duty ratio of the driving signal corresponds to the vibration frequency of the exciter 131, and the amplitude of the driving signal corresponds to the vibration amplitude of the exciter 131.
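For illustration only, the mapping just described, in which the duty ratio of the drive signal tracks the desired vibration frequency and the drive amplitude tracks the desired vibration amplitude, could be sketched as follows. The class names, the frequency limits, and the linear mapping are assumptions made for this example and are not taken from the patent.

```java
// Hypothetical sketch: derive exciter drive parameters from a sound signal frame.
// The mapping constants and class names are illustrative, not taken from the patent.
public class DriveSignalMapper {

    public static final class DriveParams {
        public final double dutyCycle;  // fraction of the PWM period the signal is high
        public final double amplitude;  // drive voltage amplitude in volts

        DriveParams(double dutyCycle, double amplitude) {
            this.dutyCycle = dutyCycle;
            this.amplitude = amplitude;
        }
    }

    /**
     * Maps the dominant frequency (Hz) and normalized loudness (0..1) of the
     * sound signal to a duty cycle and a drive amplitude for the exciter.
     */
    public static DriveParams map(double frequencyHz, double loudness, double maxVoltage) {
        // Assume the exciter responds between 100 Hz and 10 kHz (illustrative limits).
        double clampedFreq = Math.max(100.0, Math.min(10_000.0, frequencyHz));
        double dutyCycle = clampedFreq / 10_000.0;   // higher frequency, larger duty cycle
        double amplitude = Math.max(0.0, Math.min(1.0, loudness)) * maxVoltage;
        return new DriveParams(dutyCycle, amplitude);
    }

    public static void main(String[] args) {
        DriveParams p = map(440.0, 0.6, 12.0);
        System.out.printf("duty=%.3f amplitude=%.1fV%n", p.dutyCycle, p.amplitude);
    }
}
```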
In the present embodiment, there may be two or more exciters 131. As shown in fig. 1, the plurality of exciters 131 may be uniformly distributed over the screen 120, so that the screen 120 can be divided into different sound production areas according to the arrangement of the exciters. Each sound production area may be driven to produce sound by one or more exciters 131, and different sound production areas are driven by different exciters. When the same sound production area is driven by a plurality of exciters, the exciters can be controlled uniformly so that the area vibrates consistently under their combined driving and produces a consistent sound.
For example, the exciters and the sound production areas may be distributed as shown in fig. 3. If there are four exciters, the display screen may include four rectangular areas a, b, c, and d, divided equally by the vertical and horizontal center lines as indicated by the dashed lines in fig. 3. The four exciters are disposed below the four rectangular areas in one-to-one correspondence, and each rectangular area may serve as one sound production area.
As another example, the exciters and the sound production areas are distributed as shown in fig. 4: the display screen includes seven rectangular areas a1 to a7, as indicated by the dashed partitions in fig. 4, and each area serves as a sound production area driven by one or more exciters. Of course, the embodiments of the present application do not limit the number of exciters, the specific distribution of the exciters, or the specific division of the sound production areas.
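Purely as a sketch of how such a division might be represented in software, the following hypothetical data model records each sound production area's bounding box and the exciters that drive it; the quadrant coordinates mirror the four-area layout of fig. 3 but are otherwise invented.

```java
import java.util.List;

// Hypothetical data model for the screen's sound production areas and their exciters.
// Coordinates are screen pixels; the quadrant layout below mirrors fig. 3 but is illustrative.
public class SoundRegionLayout {

    public static final class Region {
        final String id;
        final int left, top, right, bottom;   // bounding box in screen coordinates
        final List<Integer> exciterIds;       // one or more exciters drive this region

        Region(String id, int left, int top, int right, int bottom, List<Integer> exciterIds) {
            this.id = id;
            this.left = left; this.top = top; this.right = right; this.bottom = bottom;
            this.exciterIds = exciterIds;
        }

        int centerX() { return (left + right) / 2; }
        int centerY() { return (top + bottom) / 2; }
    }

    // Example: a 1080 x 2160 screen split into four equal quadrants a..d.
    public static List<Region> quadrantLayout() {
        return List.of(
                new Region("a", 0,   0,    540,  1080, List.of(0)),
                new Region("b", 540, 0,    1080, 1080, List.of(1)),
                new Region("c", 0,   1080, 540,  2160, List.of(2)),
                new Region("d", 540, 1080, 1080, 2160, List.of(3)));
    }

    public static void main(String[] args) {
        for (Region r : quadrantLayout()) {
            System.out.printf("%s center=(%d,%d) exciters=%s%n",
                    r.id, r.centerX(), r.centerY(), r.exciterIds);
        }
    }
}
```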
When sound is produced through the screen, it may be produced by one sound production area, by several sound production areas, or by all of them. The inventor found that, in situations such as video playback, if a video is played in a small window and the screen at the position of the small window vibrates to produce sound, the vibration of the screen has a certain influence on the video displayed in the small window and degrades the user's viewing experience.
In addition, the inventor found through research that the small window may be located at various positions on the screen and occupies less than the entire area of the screen. If a sound production area outside the position of the small window is selected for sound production, the influence of the screen's vibration on the video displayed in the small window can be reduced.
Therefore, the embodiments of the present application provide a screen sound production method and apparatus, an electronic device, and a storage medium, in which the screen is divided into a plurality of sound production areas, and the sound production areas other than the position of the video small window are driven by their corresponding exciters to produce sound by vibration, thereby reducing the influence of the screen's vibration on the video displayed in the small window and improving the user experience. The screen sounding method and apparatus, electronic device, and storage medium provided by the embodiments of the present application are described below with reference to the accompanying drawings and specific embodiments.
Referring to fig. 5, an embodiment of the present application provides a screen sounding method applied to an electronic device. As mentioned above, the electronic device comprises a screen capable of producing sound by vibration and exciters for driving the screen to produce sound, where the screen comprises a plurality of sound production areas and different sound production areas are driven to produce sound by different exciters. As shown in fig. 5, the screen sound production method may include the following steps:
step S110: and monitoring whether the video is played in a small window or not during video playing.
In this embodiment of the present application, the played video may be a voice call video, a video played by a video player, a video stored or cached in an electronic device, and the like, and is not limited in this embodiment of the present application.
During the video playing process, the electronic device may monitor the display status of the video, where the display status may be a full-screen display or a small-window display. The display area displayed by the small window is smaller than that displayed by the full screen, and the display area is smaller than the size of the displayable area in the screen of the electronic device. The specific display implementation manner of the widget is not limited in the embodiment of the present application, for example, two players are created, or two TextureView containers are created, and a smaller player container is created in Activity when switching from a full screen to the widget; for another example, a ViewGroup for placing the player with a height and width match _ parent is reserved in the Activity, and the switching of the size window is to switch back and forth between adding the player to the original small container and adding the player to the ViewGroup of the full screen.
And monitoring whether the played video is played in a small window or not. For example, if the window parameter corresponding to the electronic device is the first value when the video is played in the full screen mode, and the window parameter corresponding to the electronic device is the second value when the video is played in the small window mode, it is possible to determine whether the video is played in the small window mode by monitoring whether the window parameter is the first value or the second value.
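A minimal sketch of this monitoring step is given below, assuming a hypothetical window-mode parameter of the kind described above; the two constant values and the accessor interface are illustrative, not defined by the patent.

```java
// Hypothetical sketch: poll a window-mode parameter to detect small-window playback.
public class WindowModeMonitor {

    public static final int MODE_FULL_SCREEN = 1;   // the "first value" in the description
    public static final int MODE_SMALL_WINDOW = 2;  // the "second value" in the description

    /** Assumed accessor exposed by the video player; not an API from the patent. */
    public interface PlayerState {
        int windowParameter();
    }

    /** Returns true if the video is currently being played in a small window. */
    public static boolean isPlayingInSmallWindow(PlayerState player) {
        return player.windowParameter() == MODE_SMALL_WINDOW;
    }

    public static void main(String[] args) {
        PlayerState demo = () -> MODE_SMALL_WINDOW;          // stand-in for a real player
        System.out.println(isPlayingInSmallWindow(demo));    // prints: true
    }
}
```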
Step S120: if so, select a target sound production area from the sound production areas other than the position of the small window.
If the video is played in a small window, part of the screen is not being used for video playback. To reduce the influence of the vibration generated when the screen produces sound on the video playback, a sound production area is selected from the sound production areas other than the position of the small window.
The sound production areas outside the position of the small window are those that do not correspond to the display position of the small window on the screen. For example, as shown in fig. 6, the display position of the video's small window K in the screen display area corresponds to the sound production areas a3 and a6, so the target sound production area is determined from the sound production areas a1, a2, a4, a5, and a7 other than a3 and a6.
Step S130: drive, through the exciter, the target sound production area to produce sound by vibration.
When the video is played, the received sound signal is determined, and when the sound signal is to be played, the exciter drives the target sound production area to vibrate and produce sound. Thus, when the video is displayed in a small window, the position where the sound is produced differs from the position where the video is played.
The received sound signal may be a sound signal of the video, or may include a sound signal of other sounds.
In the embodiments of the present application, when the video is played in a small window, a sound production area outside the small window is selected to vibrate and produce sound according to the position of the small window. On the basis of realizing screen sound production, the influence of screen vibration on video playback is reduced and the user experience is improved. In addition, the electronic device does not need a sound production device such as a loudspeaker that requires a sound outlet hole, so no hole needs to be opened in the device for sound production, which conforms to the trend toward thinner designs.
In the method provided by the embodiments of the present application, the sound production area corresponding to the small window can be determined according to the position of the video's small window, and the target sound production area is then selected from the sound production areas other than that area. Specifically, referring to fig. 7, the method includes:
step S210: and monitoring whether the video is played in a small window or not during video playing.
And monitoring whether the video is played in the small window, and if the video is monitored in the small window, selecting a target sounding area from the sounding areas except the position of the small window of the video.
The target sound production area is selected from the sound production areas except the position of the small window, the sound production area corresponding to the small window can be determined, and then the target sound production area is selected from the sound production areas except the sound production area corresponding to the small window.
Specifically, the method may include: step S220: and if the video is played in the small window, determining a sound production area corresponding to the position of the small window as a window sound production area.
In the embodiment of the application, the window sounding area is determined, the position of the small window can be determined first, and then the corresponding sounding area is determined as the window sounding area according to the position of the small window. That is, as shown in fig. 8, step S220 may include:
step S221: and determining the coordinate position of the small window in the screen.
Specifically, if the video is being played in a small window, the position of the small window in the display area of the screen can be determined. The small window is a video playing window, the position of the small window is a position interval, and the small window can be represented by coordinates in a screen coordinate system. Of course, the representation mode of the position of the small window is not limited as long as the position of the small window in the screen can be accurately positioned.
Wherein the widget may be a fixed position, i.e. an immovable position; a movable position is also possible.
If the position of the small window is a fixed position, for example, for some players, when the small window plays a video, the display position of the small window in the screen is preset, and the display position is not movable and is a fixed position. The position information of the preset fixed position can be obtained, and the position of the small window is obtained.
If the position of the widget is a movable position, for example, for some players, when the widget is displayed on a video, a user may drag the widget to place the widget at a different position in the display area of the screen. The position of the widget can be monitored in real time so that the latest position of the widget in the screen can be obtained when the position of the widget changes. For example, the position of the small window is represented by coordinates, and when the small window is at one position, a position interval represented by the coordinates is obtained; when the small window moves, new coordinates representing the position interval are obtained.
Optionally, in this embodiment of the present application, it may be monitored whether the position of the widget moves. For example, when the position of the small window moves, a corresponding parameter value can be obtained, and when the parameter value is obtained, the position of the small window can be determined to move. And if the position of the small window is monitored to move, re-determining the position of the small window of the video, such as acquiring the coordinate parameter of the small window. And if the position of the small window is not moved, taking the position of the small window obtained at the previous time as the current position of the small window.
Step S222: acquire the coordinate position of each sound production area on the screen.
The electronic device may store the position range of each sound production area, such as its coordinate range on the screen. The coordinate position of each sound production area on the screen can therefore be obtained, and it may be a coordinate interval.
After the position of the small window for video playback is obtained, the sound production area corresponding to that position can be determined from the coordinate position of the small window on the screen and the positions of the sound production areas, and this area is taken as the window sound production area.
In one embodiment, the sound production area corresponding to the position of the small window may be a sound production area within the position range of the small window; the determined window sound production area is then a sound production area lying completely within the position range of the small window. That is, step S220 may further include: Step S223: determine, from the coordinate position of the small window and the coordinate positions of the sound production areas, the sound production area within the coordinate range of the small window as the window sound production area. As shown in fig. 9, the dashed lines indicate the division of the sound production areas; the sound production area a7 lies completely within the position range of the small window K, so a7 is determined to be the window sound production area.
In one embodiment, if part of a sound production area overlaps the small window, the display of the small window is also affected to a certain extent. Therefore, any sound production area overlapping the position of the video's small window may be taken as a window sound production area. As shown in fig. 9, the sound production areas a2, a7, and a5 all overlap the small window K, and any of a2, a7, and a5 can be used as a window sound production area. For example, the sound production areas and the small window may be represented by coordinates, and any sound production area whose coordinate interval overlaps that of the small window is taken as a window sound production area.
In one embodiment, if the sound production area used for producing sound overlaps only a small portion of the small window, the display of the small window is less affected. Therefore, the window sound production area may be a sound production area whose portion overlapping the small window is larger than a preset ratio of the area's own size, for example one half or three quarters. As shown in fig. 9, taking one half as the preset ratio, among the sound production areas a2, a7, and a5 that overlap the small window K, the portions of a2 and a5 overlapping the window are each less than half of their own area, while the portion of a7 overlapping the window is greater than half of the area of a7; a7 is therefore taken as the window sound production area, and the other areas are not.
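As an illustration of the overlap tests described above, the following sketch computes the portion of a sound production area covered by the small window and treats the area as a window sound production area when that portion exceeds a preset ratio of the area's own size, as in the fig. 9 example; the rectangle representation, the coordinates, and the one-half threshold are assumptions.

```java
// Hypothetical sketch: classify a sound production area by its overlap with the small window.
public class WindowOverlapClassifier {

    public static final class Rect {
        final int left, top, right, bottom;
        Rect(int left, int top, int right, int bottom) {
            this.left = left; this.top = top; this.right = right; this.bottom = bottom;
        }
        long area() { return (long) (right - left) * (bottom - top); }
    }

    /** Area of the intersection of two rectangles, 0 if they do not overlap. */
    static long overlapArea(Rect a, Rect b) {
        int w = Math.min(a.right, b.right) - Math.max(a.left, b.left);
        int h = Math.min(a.bottom, b.bottom) - Math.max(a.top, b.top);
        return (w <= 0 || h <= 0) ? 0 : (long) w * h;
    }

    /**
     * A sound production area counts as a window sound production area when the part of it
     * covered by the small window exceeds the preset ratio of the area's own size.
     */
    static boolean isWindowSoundArea(Rect soundArea, Rect smallWindow, double presetRatio) {
        return overlapArea(soundArea, smallWindow) > presetRatio * soundArea.area();
    }

    public static void main(String[] args) {
        Rect a7 = new Rect(0, 1600, 540, 2160);          // illustrative area coordinates
        Rect windowK = new Rect(100, 1700, 700, 2100);   // illustrative small-window coordinates
        System.out.println(isWindowSoundArea(a7, windowK, 0.5));   // true: over half of a7 is covered
    }
}
```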
In the embodiment of the present application, since the screen display area of the electronic device is limited, the possible display positions of the small window on the screen are also limited. An area correspondence table can therefore be established and stored in advance, containing the correspondence between small-window positions on the screen and sound production areas. When determining the window sound production area, the sound production area corresponding to the position of the small window can be looked up in the area correspondence table, and the found area is taken as the window sound production area.
In this embodiment, the window position in the area correspondence table may be a position interval matching the actual size of the small window, or a position interval larger than the actual size of the small window. In the latter case, when the actual position of the small window is obtained, a position interval enclosing that actual position is looked up in the area correspondence table, and the sound production area corresponding to the found interval is taken as the window sound production area.
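A minimal sketch of such a pre-stored correspondence table is shown below, keyed here by a coarse window-position label; the labels and the table contents are invented for illustration and would in practice come from the factory tests described later.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: look up sound production areas from a pre-stored correspondence table.
// The position labels and table contents are invented for illustration.
public class AreaCorrespondenceTable {

    // Maps a coarse small-window position to the sound production areas to use.
    private static final Map<String, List<String>> TABLE = Map.of(
            "bottom-left",  List.of("a1", "a2"),
            "bottom-right", List.of("a1", "a4"),
            "top-left",     List.of("a6", "a7"),
            "top-right",    List.of("a5", "a7"));

    /** Returns the areas stored for the given window position, or an empty list if unknown. */
    public static List<String> lookup(String windowPosition) {
        return TABLE.getOrDefault(windowPosition, List.of());
    }

    public static void main(String[] args) {
        System.out.println(lookup("bottom-left"));   // prints: [a1, a2]
    }
}
```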
Optionally, in this embodiment of the present application, the position of the small window may also be determined once every preset period, and the window sound production area determined according to it. If the position of the small window obtained at a given time is the same as that obtained the previous time, the window has not moved and the window sound production area is unchanged, so the window sound production area determined the previous time can be used directly as the one determined this time.
In the embodiment of the present application, the position of the small window determined this time being the same as that determined last time may mean that the positions are exactly the same, that is, the coordinate intervals on the screen coincide; or it may mean that the difference is within a preset range, for example, the difference in abscissa between this determination and the previous one is within a preset range for the abscissa, and the difference in ordinate is within a preset range for the ordinate.
In the embodiments of the present application, the various implementations of determining the window sound production area may be used alone or, where logically consistent, in combination; the embodiments of the present application are not limited in this respect.
Step S230: select, from the plurality of sound production areas of the screen, a sound production area different from the window sound production area as the target sound production area.
The sound production areas different from the window sound production area are determined from all the sound production areas of the screen, and the target sound production area is selected from them; one or more sound production areas may be selected as target sound production areas. For example, as shown in fig. 9, when a7 is the window sound production area, the target sound production area can be selected from a1 to a6.
As an embodiment, all the sound emission areas different from the window sound emission area may be selected as the target sound emission area.
As an embodiment, the sound emission area may be randomly selected as the target sound emission area from sound emission areas different from the window sound emission area.
As an embodiment, the sound production area farthest from the window sound production area may be selected from the plurality of sound production areas of the screen as the target sound production area. Specifically, the distance from each sound production area to the window sound production area may be calculated from the coordinate position of the window sound production area on the screen and the coordinate positions of the other sound production areas, and the one or more areas with the greatest distance are selected as target sound production areas. When calculating these distances, the distance between the centers of the sound production areas (for example, the intersection of the diagonals of a rectangular area) can be used.
In the present embodiment, as one implementation, when the target sound production area is selected from the sound production areas other than the position of the small window, the sound production area farthest from the position of the small window may be selected as the target sound production area. Specifically, the distance from each sound production area to the small window may be calculated from the coordinate position of the small window on the screen and the coordinate positions of the sound production areas, and the area farthest from the small window is selected as the target sound production area. When calculating the distance between a sound production area and the small window, the distance between the center of the area and the center of the window can be used.
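The farthest-area rule could be realized with a simple center-to-center distance comparison, as in the sketch below; the rectangle representation, coordinates, and names are assumptions made for the example.

```java
import java.util.List;

// Hypothetical sketch: choose the sound production area whose center is farthest
// from the center of the small window. Coordinates and names are illustrative.
public class FarthestAreaSelector {

    record Rect(String id, int left, int top, int right, int bottom) {
        double centerX() { return (left + right) / 2.0; }
        double centerY() { return (top + bottom) / 2.0; }
    }

    static double centerDistance(Rect a, Rect b) {
        return Math.hypot(a.centerX() - b.centerX(), a.centerY() - b.centerY());
    }

    /** Picks, among the candidates, the area whose center is farthest from the small window. */
    static Rect farthestFrom(Rect smallWindow, List<Rect> candidates) {
        Rect best = null;
        double bestDistance = -1.0;
        for (Rect candidate : candidates) {
            double d = centerDistance(candidate, smallWindow);
            if (d > bestDistance) {
                bestDistance = d;
                best = candidate;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Rect windowK = new Rect("K", 0, 1600, 540, 2160);
        List<Rect> areas = List.of(
                new Rect("a1", 0,   0,   540,  720),
                new Rect("a2", 540, 0,   1080, 720),
                new Rect("a4", 0,   720, 1080, 1440));
        System.out.println(farthestFrom(windowK, areas).id());   // the candidate farthest from K
    }
}
```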
In the embodiment of the present application, as another implementation, the target sound production area may be selected from the sound production areas other than the position of the small window according to a preset correspondence table. In this embodiment, the preset correspondence table may contain the correspondence between each window sound production area and one or more other sound production areas; that is, in the table, each window sound production area corresponds to one or more other sound production areas. After the window sound production area is determined, the sound production area corresponding to it can be looked up in the preset correspondence table and taken as the target sound production area.
In this embodiment of the present application, the pre-stored correspondence table may be written into the electronic device before it leaves the factory. For each electronic device, the correspondences between sound production areas in the stored table may be obtained through prior testing. That is, the small window can be displayed normally in the display area of the screen while a sound production test is performed on each of the other sound production areas; the influence on the window sound production area when each area vibrates is measured, and the area with the least influence is determined as the sound production area corresponding to the current window sound production area. By displaying the small window at different positions, the corresponding target sound production area can be determined for each sound production area acting as the window sound production area.
Optionally, in this embodiment, the preset correspondence table may instead contain the correspondence between small-window positions and sound production areas. That is, while the small window is displayed in the screen display area, a sound production test can be performed in advance on each sound production area; the influence on the video displayed in the small window at the current position when each area vibrates is measured, and the area with the least influence is taken as the sound production area corresponding to the current window position. This testing may be performed during factory testing of the electronic device.
During use of the electronic device, after the display position of the small window in the screen display area is determined, the sound production area corresponding to the current window position can be looked up in the preset correspondence table, and the found area is taken as the target sound production area.
In the embodiment of the present application, as one implementation, the target sound production area may be re-determined each time the video's small window moves.
Alternatively, as an embodiment, the target sound production area may be determined once every preset period. When determining the target sound production area in this embodiment, whether the position of the small window has moved may be monitored; if it has moved, the target sound production area is selected from the sound production areas other than the position of the small window, and if it has not moved, the target sound production area determined the previous time is used as the one determined this time.
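A sketch of that periodic check is shown below, assuming hypothetical interfaces for reading the window position and selecting an area; the exact-match movement test and the once-per-period call pattern are assumptions.

```java
import java.util.Arrays;

// Hypothetical sketch: once per preset period, re-select the target sound production
// area only when the small window has actually moved. Interfaces are illustrative.
public class TargetAreaUpdater {

    /** Assumed source of the small window's current bounds: {left, top, right, bottom}. */
    public interface WindowPositionProvider { int[] currentBounds(); }

    /** Assumed selection strategy, e.g. farthest area or correspondence-table lookup. */
    public interface AreaSelector { String selectTargetArea(int[] windowBounds); }

    private int[] lastBounds;
    private String lastTargetArea;

    /** Called once per preset period; reuses the previous result if the window has not moved. */
    public String update(WindowPositionProvider provider, AreaSelector selector) {
        int[] bounds = provider.currentBounds();
        if (lastBounds != null && Arrays.equals(bounds, lastBounds)) {
            return lastTargetArea;                  // window did not move: keep the previous area
        }
        lastBounds = bounds;
        lastTargetArea = selector.selectTargetArea(bounds);
        return lastTargetArea;
    }

    public static void main(String[] args) {
        TargetAreaUpdater updater = new TargetAreaUpdater();
        WindowPositionProvider provider = () -> new int[] {0, 1600, 540, 2160};
        AreaSelector selector = bounds -> "a2";     // stand-in for a real selection strategy
        System.out.println(updater.update(provider, selector));   // selects once: a2
        System.out.println(updater.update(provider, selector));   // unchanged position: reuses a2
    }
}
```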
Step S240: drive, through the exciter, the target sound production area to produce sound by vibration.
The target sound production area is used as a sound production area for producing sound signals, so that a good sound production effect is obtained.
Optionally, in this embodiment of the present application, the target sound production area is determined once every preset period. If the target sound production area determined at a given time is the same as the one determined the previous time, the area of the screen that needs to produce sound is unchanged, so in this step the previously determined target sound production area continues to produce sound by vibration without changing its corresponding parameters.
If the target sound production area determined at a given time differs from the one determined the previous time, sound production is driven according to the newly determined target sound production area; in this step, the parameters specifying which sound production area is to be driven are changed to those of the target sound production area determined this time.
In the embodiments of the present application, identical steps may refer to one another, and details already described are not repeated.
In addition, optionally, if the video is not played in a small window, a sound production area adjacent to an edge of the screen may be selected from the sound production areas to produce sound. The video content displayed in the edge area is often a black border or unimportant content, so the influence of screen vibration on the video display can still be reduced.
Optionally, among the sound production areas adjacent to the edges of the screen, the area adjacent to the top edge or the area adjacent to the bottom edge may be selected as the target sound production area. The top and bottom edges of the screen are the two edges farthest from the center of the screen, for example D1 and D2 shown in fig. 6.
According to the screen sound production method provided by the embodiments of the present application, when a video is played in a small window, a sound production area outside the small window is determined as the target sound production area according to the position of the small window. When the video is played in a small window and a sound production area needs to vibrate to produce sound, this target area is used, which reduces the influence of sound production on video playback and improves the user experience.
An embodiment of the present application further provides a screen sound production apparatus 300 applied to an electronic device, where the electronic device includes a screen capable of vibrating to produce sound and exciters for driving the screen to produce sound, the screen includes a plurality of sound production areas, and different sound production areas are driven to produce sound by different exciters. Referring to fig. 10, the screen sound production apparatus 300 includes:
a monitoring module 310, configured to monitor, during video playback, whether the video is being played in a small window; an area selection module 320, configured to select a target sound production area from the sound production areas other than the position of the video's small window if the video is played in the small window; and a sound production module 330, configured to drive, through the exciter, the target sound production area to produce sound by vibration.
Optionally, the area selection module 320 may include a first determining unit, configured to determine the sound production area corresponding to the position of the small window as the window sound production area, and a second determining unit, configured to select, from the plurality of sound production areas of the screen, a sound production area different from the window sound production area as the target sound production area.
Optionally, the second determining unit may be configured to select, as the target sound emission area, a sound emission area farthest from the window sound emission area from among the plurality of sound emission areas on the screen.
Optionally, the first determining unit may include: a window position determining subunit, configured to determine the coordinate position of the small window on the screen; an area position determining subunit, configured to acquire the coordinate position of each sound production area on the screen; and an area determining subunit, configured to determine, from the coordinate position of the small window and the coordinate positions of the sound production areas, the sound production area within the coordinate range of the small window as the window sound production area.
Optionally, the first determining unit may be configured to use a sound emitting area overlapping with a position where the small window of the video is located as the window sound emitting area.
Optionally, the apparatus provided by the embodiment of the present application may further include a window position monitoring module, configured to monitor whether the position of the small window has moved; if it has moved, the area selection module selects the target sound production area from the sound production areas other than the position of the video's small window.
Based on the screen sounding method and device, the embodiment of the application provides an electronic device 100 capable of executing the screen sounding method.
As an embodiment, as shown in fig. 11, the electronic device 100 may include a screen 120, an exciter 131 for driving the screen to produce sound, a memory 104, and a processor 102, where the screen 120, the exciter 131, and the memory 104 are coupled to the processor 102. The memory 104 stores instructions, and when the instructions are executed by the processor 102, the processor 102 performs the method described above.
As an embodiment, as shown in fig. 12, the electronic device 100 includes a screen 120 and exciters 131. The screen includes a plurality of sound production areas, and there may be multiple exciters (only one is shown in the figure). The exciters 131 are connected to the sound production areas of the screen 120 and drive the screen 120 to produce sound.
Specifically, the screen comprises a plurality of sounding areas, the exciters are connected with the sounding areas of the screen, and different sounding areas are driven by different exciters to sound.
A circuit 142 is connected to the exciter 131; the circuit 142 includes a detection circuit 143 and a driving circuit 135. The detection circuit 143 is configured to monitor, during video playback, whether the video is being played in a small window, and if so, to select a target sound production area from the sound production areas other than the position of the small window; the driving circuit is configured to drive, through the exciter, the target sound production area to produce sound by vibration.
By way of example, the electronic device 100 may be any of various types of mobile or portable computer system equipment that performs wireless communications (only one form is shown in fig. 1 by way of example). Specifically, the electronic device 100 may be a mobile phone or smartphone (e.g., an iPhone (TM) based phone), a portable game device (e.g., Nintendo DS (TM), PlayStation Portable (TM), Game Boy Advance (TM)), a laptop computer, a PDA, a portable internet device, a music player, a data storage device, or another handheld device, as well as a wearable device such as a watch, in-ear headphones, a pendant, or a head-mounted device (HMD); the electronic device 100 may also be another wearable device (e.g., electronic glasses, electronic clothing, an electronic bracelet, an electronic necklace, an electronic tattoo, or a smart watch).
The electronic device 100 may also be any of a number of electronic devices, including but not limited to cellular phones, smartphones, other wireless communication devices, personal digital assistants, audio players, other media players, music recorders, video recorders, cameras, other media recorders, radios, medical devices, vehicle transport equipment, calculators, programmable remote controls, pagers, laptop computers, desktop computers, printers, netbook computers, personal digital assistants (PDAs), portable multimedia players (PMPs), Moving Picture Experts Group (MPEG-1 or MPEG-2) Audio Layer 3 (MP3) players, portable medical devices, digital cameras, and combinations thereof.
In some cases, electronic device 100 may perform multiple functions (e.g., playing music, displaying videos, storing pictures, and receiving and sending telephone calls). If desired, the electronic apparatus 100 may be a portable device such as a cellular telephone, media player, other handheld device, wrist watch device, pendant device, earpiece device, or other compact portable device.
The electronic device 100 shown in fig. 1 includes an electronic body 10, and the electronic body 10 includes a housing 12 and a screen 120 disposed on the housing 12. The housing 12 may be made of metal, such as steel or aluminum alloy. In this embodiment, the screen 120 generally includes a display panel 111, and may also include a circuit and the like for responding to a touch operation performed on the display panel 111. The Display panel 111 may be a Liquid Crystal Display (LCD) panel, and in some embodiments, the Display panel 111 is a touch screen 109.
Referring to fig. 13, in a practical application scenario, the electronic device 100 may be used as a smartphone terminal, in which case the electronic body 10 generally further includes one or more processors 102 (only one is shown in the figure), a memory 104, an RF (Radio Frequency) module 106, an audio circuit 110, a sensor 114, an input module 118, and a power module 122. Those skilled in the art will understand that the structure shown in fig. 13 is merely illustrative and does not limit the structure of the electronic body 10; for example, the electronic body 10 may include more or fewer components than shown in fig. 13, or have a different configuration from that shown in fig. 13.
Those skilled in the art will appreciate that all other components are peripheral devices with respect to the processor 102, and the processor 102 is coupled to the peripheral devices through a plurality of peripheral interfaces 124. The peripheral interface 124 may be implemented based on the following criteria: universal Asynchronous Receiver/Transmitter (UART), General Purpose Input/Output (GPIO), Serial Peripheral Interface (SPI), and Inter-Integrated Circuit (I2C), but the present invention is not limited to these standards. In some examples, the peripheral interface 124 may comprise only a bus; in other examples, the peripheral interface 124 may also include other elements, such as one or more controllers, for example, a display controller for interfacing with the display panel 111 or a memory controller for interfacing with a memory. These controllers may also be separate from the peripheral interface 124 and integrated within the processor 102 or a corresponding peripheral.
The memory 104 may be used to store software programs and modules, and the processor 102 executes various functional applications and data processing by executing the software programs and modules stored in the memory 104. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the electronics body portion 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The RF module 106 is configured to receive and transmit electromagnetic waves and to convert between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The RF module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio-frequency transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card, and memory. The RF module 106 may communicate with various networks such as the internet, an intranet, or a wireless network, or communicate with other devices via a wireless network. The wireless network may be a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), other protocols for instant messaging, and any other suitable communication protocol, even including protocols that have not yet been developed.
The audio circuit 110, the microphone 103, and the microphone 105 together provide an audio interface between a user and the electronic body portion 10.
A sensor 114 is disposed within the electronic body portion 10. Examples of the sensor 114 include, but are not limited to: an acceleration sensor 114F, a gyroscope 114G, a magnetometer 114H, and other sensors.
In this embodiment, the input module 118 may include the touch screen 109 disposed on the display screen. The touch screen 109 may collect touch operations performed by the user on or near it (for example, operations performed by the user on or near the touch screen 109 using a finger, a stylus, or any other suitable object or accessory), so that the user's touch gesture can be obtained and the corresponding connected device can be driven according to a preset program; in this way, the user may select the target area through a touch operation on the display screen. Optionally, the touch screen 109 may include a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 102, and can receive and execute commands sent by the processor 102. In addition, the touch detection function of the touch screen 109 may be implemented using various types of technology, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch screen 109, in other variations the input module 118 may include other input devices, such as keys 107. The keys 107 may include, for example, character keys for inputting characters, and control keys for triggering control functions. Examples of such control keys include a "return to home screen" key, a power on/off key, and the like.
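As a purely illustrative aid, the following minimal Kotlin sketch shows how touch point coordinates reported to the processor 102 could be mapped to the sound production area that contains them, so that a touch could select the target area. The Rect and SoundArea types and the four-band screen layout are assumptions for the example, not definitions taken from the patent.

```kotlin
// Minimal sketch, not part of the patent: Rect and SoundArea are hypothetical
// stand-ins for the screen's sound production areas.
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int) {
    // Checks whether a touch point falls inside this rectangle.
    fun contains(x: Int, y: Int): Boolean = x in left until right && y in top until bottom
}

data class SoundArea(val id: Int, val bounds: Rect)

// Maps touch point coordinates (as delivered to the processor) to the
// sound production area that contains them, or null if none does.
fun areaAtTouchPoint(areas: List<SoundArea>, x: Int, y: Int): SoundArea? =
    areas.firstOrNull { it.bounds.contains(x, y) }

fun main() {
    // Assumed layout: a 1080 x 2340 screen split into four horizontal bands.
    val areas = (0 until 4).map { i -> SoundArea(i, Rect(0, i * 585, 1080, (i + 1) * 585)) }
    println(areaAtTouchPoint(areas, 500, 1200)?.id)  // prints 2
}
```

In such a sketch, the area containing the touch point could then be treated as the user-selected target area described above.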
The screen 120 is used to display information input by the user, information provided to the user, and various graphical user interfaces of the electronic body portion 10, which may be composed of graphics, text, icons, numbers, video, and any combination thereof. In one example, the touch screen 109 may be disposed on the display panel 111 so as to be integral with the display panel 111.
The power module 122 is used to supply power to the processor 102 and the other components. Specifically, the power module 122 may include a power management system, one or more power sources (e.g., batteries or AC power), a charging circuit, a power failure detection circuit, an inverter, a power status indicator light, and any other components related to the generation, management, and distribution of power within the electronic body portion 10 or the screen 120.
The electronic device 100 further comprises a locator 119 configured to determine the actual location of the electronic device 100. In this embodiment, the locator 119 locates the electronic device 100 using a positioning service, which should be understood as a technology or service that obtains the position information (e.g., longitude and latitude coordinates) of the electronic device 100 by means of a specific positioning technology and marks the position of the located object on an electronic map.
It should be understood that the electronic device 100 described above is not limited to a smartphone terminal, but refers to a computer device that can be used while moving. Specifically, the electronic device 100 refers to a mobile computer device equipped with an intelligent operating system, and includes, but is not limited to, a smartphone, a smart watch, a tablet computer, and the like.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to one another. Since the apparatus embodiments are basically similar to the method embodiments, they are described briefly, and for the relevant points reference may be made to the corresponding description of the method embodiments. Any processing manner described in the method embodiments may be implemented by a corresponding processing module in the apparatus embodiments, and is not described again in the apparatus embodiments.
In the description herein, reference to the description of the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, the schematic representations of these terms are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples, and the features of the different embodiments or examples, described in this specification, provided that they do not contradict each other.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method description in the flowcharts or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those skilled in the art to which the embodiments of the present application pertain.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example, an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be implemented by a program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof. In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications and replacements do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (8)

1. A screen sounding method, applied to an electronic device, wherein the electronic device comprises a screen capable of vibrating to produce sound and an exciter for driving the screen to produce sound, the screen comprises a plurality of sound production areas, and different sound production areas are driven by different exciters to produce sound, the method comprising the following steps:
monitoring, when a video is played, whether the video is played in a small window;
if so, determining the sound production area corresponding to the position of the small window of the video, and taking, as a window sound production area, a sound production area whose overlapping portion with the small window of the video is larger than a preset proportion of the total area of the small window;
selecting, from the plurality of sound production areas of the screen, a sound production area different from the window sound production area as a target sound production area; and
driving, by an exciter, the target sound production area to produce sound through vibration.
2. The method according to claim 1, wherein the selecting, from the plurality of sound production areas of the screen, a sound production area different from the window sound production area as a target sound production area comprises:
selecting, from the plurality of sound production areas of the screen, the sound production area farthest from the window sound production area as the target sound production area.
3. The method according to claim 1 or 2, wherein the determining the sound production area corresponding to the position of the small window as the window sound production area comprises:
determining the coordinate position of the small window in the screen;
acquiring the coordinate position of each sound production area in the screen; and
determining, according to the coordinate position of the small window and the coordinate positions of the sound production areas, the sound production area within the coordinate position range of the small window as the window sound production area.
4. The method of claim 1, further comprising:
monitoring whether the position of the small window moves; and
if so, re-determining the sound production area corresponding to the position of the small window of the video, taking, as the window sound production area, a sound production area whose overlapping portion with the small window of the video is larger than the preset proportion of the total area of the small window, and selecting, from the plurality of sound production areas of the screen, a sound production area different from the window sound production area as the target sound production area.
5. A screen sound production apparatus, applied to an electronic device, wherein the electronic device comprises a screen capable of vibrating to produce sound and an exciter for driving the screen to produce sound, the screen comprises a plurality of sound production areas, and different sound production areas are driven by different exciters to produce sound, the screen sound production apparatus comprising:
a monitoring module, configured to monitor, when a video is played, whether the video is played in a small window;
an area selection module, configured to, if the video is played in a small window, determine the sound production area corresponding to the position of the small window of the video, take, as a window sound production area, a sound production area whose overlapping portion with the small window of the video is larger than a preset proportion of the total area of the small window, and select, from the plurality of sound production areas of the screen, a sound production area different from the window sound production area as a target sound production area; and
a sound production module, configured to drive, through the exciter, the target sound production area to produce sound by vibration.
6. An electronic device, comprising a screen, an exciter for driving the screen to produce sound, a memory, and a processor, wherein the screen, the exciter, and the memory are coupled to the processor, and the memory stores instructions that, when executed by the processor, cause the processor to perform the method of any one of claims 1 to 4.
7. A computer-readable storage medium storing program code executable by a processor, wherein the program code causes the processor to perform the method of any one of claims 1 to 4.
8. An electronic device, comprising:
a screen comprising a plurality of sound production areas;
an exciter connected with the sound production areas of the screen and configured to drive the screen to produce sound; and
a circuit connected with the exciter, the circuit comprising a detection circuit and a driving circuit, wherein the detection circuit is configured to monitor, when a video is played, whether the video is played in a small window, and if so, determine the sound production area corresponding to the position of the small window of the video, take, as a window sound production area, a sound production area whose overlapping portion with the small window of the video is larger than a preset proportion of the total area of the small window, and select, from the plurality of sound production areas of the screen, a sound production area different from the window sound production area as a target sound production area; and the driving circuit is configured to drive, through the exciter, the target sound production area to produce sound by vibration.
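As a purely illustrative aid (not part of the claims), the selection logic recited in claims 1 to 4 can be sketched in Kotlin as follows: mark as window sound production areas those areas whose overlap with the small window exceeds a preset proportion of the window's area, then pick the remaining area farthest from the window as the target, per claim 2. The Rect and SoundArea types, the 0.25 proportion, and the four-area screen layout are assumptions for the example, not values taken from the patent.

```kotlin
// Minimal sketch, not part of the patent: all types, the preset proportion,
// and the screen layout below are assumed for illustration only.
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int) {
    val area: Long get() = (right - left).toLong() * (bottom - top)

    // Returns the overlapping rectangle with another rectangle, or null if they do not overlap.
    fun intersect(other: Rect): Rect? {
        val l = maxOf(left, other.left)
        val t = maxOf(top, other.top)
        val r = minOf(right, other.right)
        val b = minOf(bottom, other.bottom)
        return if (r > l && b > t) Rect(l, t, r, b) else null
    }

    fun center(): Pair<Double, Double> = Pair((left + right) / 2.0, (top + bottom) / 2.0)
}

data class SoundArea(val id: Int, val bounds: Rect)

// Claims 1 and 3: an area counts as a window sound production area when its
// overlap with the small window exceeds a preset proportion of the window's area.
fun windowSoundAreas(areas: List<SoundArea>, window: Rect, ratio: Double = 0.25): Set<Int> =
    areas.filter { a ->
        val overlap = a.bounds.intersect(window)?.area ?: 0L
        overlap.toDouble() / window.area > ratio
    }.map { it.id }.toSet()

// Claim 2: among the non-window areas, pick the one farthest from the window's center.
fun pickTargetArea(areas: List<SoundArea>, window: Rect, ratio: Double = 0.25): SoundArea? {
    val excluded = windowSoundAreas(areas, window, ratio)
    val (wx, wy) = window.center()
    return areas
        .filter { it.id !in excluded }
        .maxByOrNull { a ->
            val (ax, ay) = a.bounds.center()
            (ax - wx) * (ax - wx) + (ay - wy) * (ay - wy)
        }
}

fun main() {
    // Assumed layout: a 1080 x 2340 screen split into four horizontal bands,
    // with the small playback window in the upper-left corner.
    val areas = (0 until 4).map { i -> SoundArea(i, Rect(0, i * 585, 1080, (i + 1) * 585)) }
    val window = Rect(0, 0, 540, 585)
    println("target sound production area: ${pickTargetArea(areas, window)?.id}")  // prints 3
}
```

When the small window moves (claim 4), the same selection would simply be re-run with the window's new coordinates, and the selected target area would then be driven by its exciter.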
CN201810814037.6A 2018-07-23 2018-07-23 Screen sounding method and device, electronic device and storage medium Active CN109062536B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810814037.6A CN109062536B (en) 2018-07-23 2018-07-23 Screen sounding method and device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810814037.6A CN109062536B (en) 2018-07-23 2018-07-23 Screen sounding method and device, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN109062536A CN109062536A (en) 2018-12-21
CN109062536B true CN109062536B (en) 2021-08-17

Family

ID=64836158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810814037.6A Active CN109062536B (en) 2018-07-23 2018-07-23 Screen sounding method and device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN109062536B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110570752B (en) * 2019-09-11 2021-11-02 Oppo广东移动通信有限公司 Display screen, electronic equipment and control method thereof
CN111417064B (en) * 2019-12-04 2021-08-10 南京智芯胜电子科技有限公司 Audio-visual accompanying control method based on AI identification

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013073906A1 (en) * 2011-11-16 2013-05-23 Samsung Electronics Co., Ltd. Mobile communication terminal for displaying event-handling view on split screen and method for controlling the same
CN104952370A (en) * 2014-03-31 2015-09-30 联想(北京)有限公司 Electronic screen and application method thereof
CN106250034A (en) * 2016-07-18 2016-12-21 深圳市金立通信设备有限公司 A kind of method of windows exchange and terminal
CN107484005A (en) * 2017-08-08 2017-12-15 深圳创维数字技术有限公司 Monitoring method, set top box, monitoring system and storage medium
CN107493497A (en) * 2017-07-27 2017-12-19 努比亚技术有限公司 A kind of video broadcasting method, terminal and computer-readable recording medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2430833A4 (en) * 2009-05-13 2014-01-22 Coincident Tv Inc Playing and editing linked and annotated audiovisual works

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013073906A1 (en) * 2011-11-16 2013-05-23 Samsung Electronics Co., Ltd. Mobile communication terminal for displaying event-handling view on split screen and method for controlling the same
CN104952370A (en) * 2014-03-31 2015-09-30 联想(北京)有限公司 Electronic screen and application method thereof
CN106250034A (en) * 2016-07-18 2016-12-21 深圳市金立通信设备有限公司 A kind of method of windows exchange and terminal
CN107493497A (en) * 2017-07-27 2017-12-19 努比亚技术有限公司 A kind of video broadcasting method, terminal and computer-readable recording medium
CN107484005A (en) * 2017-08-08 2017-12-15 深圳创维数字技术有限公司 Monitoring method, set top box, monitoring system and storage medium

Also Published As

Publication number Publication date
CN109062536A (en) 2018-12-21

Similar Documents

Publication Publication Date Title
CN109194796B (en) Screen sounding method and device, electronic device and storage medium
CN108833638B (en) Sound production method, sound production device, electronic device and storage medium
CN108646971B (en) Screen sounding control method and device and electronic device
CN109032558B (en) Sound production control method and device, electronic device and computer readable medium
CN108881568B (en) Method and device for sounding display screen, electronic device and storage medium
CN109032556B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN108683761B (en) Sound production control method and device, electronic device and computer readable medium
CN109189362B (en) Sound production control method and device, electronic equipment and storage medium
CN108958632B (en) Sound production control method and device, electronic equipment and storage medium
CN109040919B (en) Sound production method, sound production device, electronic device and computer readable medium
CN109086024B (en) Screen sounding method and device, electronic device and storage medium
CN109086023B (en) Sound production control method and device, electronic equipment and storage medium
CN109144249B (en) Screen sounding method and device, electronic device and storage medium
CN109144460B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN108900728B (en) Reminding method, reminding device, electronic device and computer readable medium
CN109240413B (en) Screen sounding method and device, electronic device and storage medium
CN108958697B (en) Screen sounding control method and device and electronic device
CN108810198B (en) Sound production control method and device, electronic device and computer readable medium
CN109085985B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN109062535B (en) Sound production control method and device, electronic device and computer readable medium
CN108712706B (en) Sound production method, sound production device, electronic device and storage medium
CN108810764B (en) Sound production control method and device and electronic device
CN109189360B (en) Screen sounding control method and device and electronic device
CN108712571A (en) Method, apparatus, electronic device and the storage medium of display screen sounding
CN109062533B (en) Sound production control method, sound production control device, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant