CN109062534B - Sound production control method, sound production control device, electronic device, and storage medium - Google Patents


Info

Publication number
CN109062534B
CN109062534B · Application CN201810776921.5A
Authority
CN
China
Prior art keywords
sound
screen
electronic device
areas
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810776921.5A
Other languages
Chinese (zh)
Other versions
CN109062534A (en)
Inventor
张海平
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810776921.5A priority Critical patent/CN109062534B/en
Publication of CN109062534A publication Critical patent/CN109062534A/en
Application granted granted Critical
Publication of CN109062534B publication Critical patent/CN109062534B/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 — Sound input; sound output
    • G06F3/167 — Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06F3/165 — Management of the audio stream, e.g. setting of volume, audio stream path

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Telephone Function (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of this application disclose a sound production control method and device, an electronic device, and a storage medium, relating to the technical field of electronic devices. The method is applied to an electronic device that includes a screen capable of producing sound through vibration, an exciter for driving the screen to produce sound, and a camera; the screen includes a plurality of areas, and the camera is disposed in a target area among them. The method comprises the following steps: when the electronic device is in a screen sound production mode, in which the exciter drives the screen to vibrate and produce sound, receiving instruction information instructing the electronic device to start the camera; in response to the instruction information, determining areas other than at least the target area from among the plurality of areas as sound production areas; and starting the camera and controlling the exciter to drive the sound production areas to vibrate and produce sound. By restricting vibration, when the camera starts, to areas other than at least the area where the camera is located, this application weakens the influence of screen vibration on camera shooting and improves the shooting effect.

Description

Sound production control method, sound production control device, electronic device, and storage medium
Technical Field
The present disclosure relates to the field of electronic devices, and more particularly, to a method and an apparatus for controlling sound emission, an electronic device, and a storage medium.
Background
Currently, electronic devices such as mobile phones and tablet computers output sound signals through an earpiece (receiver). However, the earpiece occupies considerable design space, which runs counter to the trend toward slimmer devices.
Disclosure of Invention
In view of the above problems, the present application provides a sound emission control method, a sound emission control device, an electronic device, and a storage medium to remedy the above drawbacks.
In a first aspect, an embodiment of the present application provides a sound production control method, which is applied to an electronic device, where the electronic device includes a screen capable of vibrating to produce sound, an exciter for driving the screen to produce sound, and a camera, where the screen includes a plurality of regions, and the camera is disposed in a target region among the plurality of regions, and the method includes: when the electronic device is in a screen sounding mode, receiving instruction information for instructing the electronic device to start the camera, wherein the exciter drives the screen to vibrate and sound in the screen sounding mode; determining, in response to the instruction information, other areas than at least the target area from among the plurality of areas as sound emission areas; and starting the camera and controlling the exciter to drive the sound production area to produce sound in a vibration mode.
In a second aspect, an embodiment of the present application provides a sound production control device, which is applied to an electronic device, the electronic device includes a screen capable of vibrating to produce sound, an exciter for driving the screen to produce sound, and a camera, the screen includes a plurality of regions, the camera is disposed in a target region in the plurality of regions, the device includes: the receiving module is used for receiving instruction information for instructing the electronic device to start the camera when the electronic device is in a screen sounding mode, wherein the exciter drives the screen to vibrate and sound in the screen sounding mode; a determining module, configured to determine, in response to the instruction information, other areas at least except the target area from the plurality of areas as sound emission areas; and the control module is used for starting the camera and controlling the exciter to drive the sound production area to produce sound in a vibration mode.
In a third aspect, embodiments of the present application provide an electronic device, including a screen capable of producing sound by vibration, an exciter for driving the screen to produce the sound, a memory, and a processor, the screen, the exciter, and the memory being coupled to the processor, the memory storing instructions that, when executed by the processor, cause the processor to perform the above method.
In a fourth aspect, the present application provides a computer readable storage medium having program code executable by a processor, the program code causing the processor to execute the above method.
In a fifth aspect, an embodiment of the present application provides an electronic device, including a screen capable of generating sound by vibration and a camera, where the screen includes a plurality of areas and the camera is disposed in a target area among the plurality of areas; an exciter connected with the screen and used for driving the screen to vibrate and produce sound; and a circuit connected with the exciter, the circuit comprising a detection circuit and a driving circuit. The detection circuit is used for receiving, when the electronic device is in a screen sounding mode in which the exciter drives the screen to vibrate and produce sound, instruction information instructing the electronic device to start the camera, and for determining, in response to the instruction information, areas other than at least the target area from among the plurality of areas as sound emission areas. The driving circuit is used for starting the camera and controlling the exciter to drive the sounding areas to vibrate and produce sound.
The embodiments of the application provide a sound production control method, a sound production control device, an electronic device, and a storage medium. When the electronic device is in a screen sound production mode, in which the screen is driven by an exciter to vibrate and produce sound, the electronic device receives instruction information instructing it to start a camera. In response to the instruction information, areas other than at least a target area are determined from the plurality of areas as sound production areas, the camera is started, and the exciter is controlled to drive the sound production areas to vibrate and produce sound. Thus, when the camera is started, only areas other than at least the area where the camera is located vibrate to produce sound, which weakens the influence of screen vibration on camera shooting and improves user experience.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic structural diagram illustrating a first viewing angle of an electronic device according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram illustrating a second viewing angle of an electronic device according to an embodiment of the present application;
Fig. 3 is a block diagram of an electronic device according to an embodiment of the present application;
Fig. 4 is a first flowchart illustrating a sound emission control method according to an embodiment of the present application;
Fig. 5 is a second flowchart of a sound emission control method according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram illustrating a first viewing angle of an electronic device according to an embodiment of the present application;
Fig. 7 is a block diagram of a sound emission control device according to an embodiment of the present application;
Fig. 8 is a block diagram of an electronic device for executing a sound emission control method according to an embodiment of the present application;
Fig. 9 is a block diagram of another electronic device for executing the sound emission control method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The display screen of an electronic device such as a mobile phone or tablet computer generally serves to display text, pictures, icons, or video. With the development of touch technologies, more and more display screens in electronic devices are touch display screens; with a touch display screen arranged, when a user is detected performing touch operations such as dragging, clicking, double-clicking, or sliding on it, the electronic device can respond to those operations.
As users demand higher definition and finer detail in displayed content, more electronic devices employ touch display screens of larger sizes. However, in arranging a large touch display screen, it was found that functional devices disposed at the front of the electronic device, such as the front camera, the proximity light sensor, and the receiver, limit the area to which the touch display screen can extend.
Generally, an electronic device includes a front panel, a rear cover, and a bezel. The front panel includes a forehead area, a middle screen area and a lower key area. Generally, the forehead area is provided with a sound outlet of a receiver and functional devices such as a front camera, the middle screen area is provided with a touch display screen, and the lower key area is provided with one to three physical keys. With the development of the technology, the lower key area is gradually cancelled, and the physical keys originally arranged in the lower key area are replaced by the virtual keys in the touch display screen.
The receiver's sound outlet in the forehead area is important to the phone's functionality and is not easily eliminated, so extending the displayable area of the touch display screen to cover the forehead area is difficult. After a series of studies, the inventors found that sound can be emitted by controlling the screen, frame, or rear cover of the mobile phone to vibrate, so the receiver's sound outlet can be eliminated.
Referring to fig. 1 and fig. 2, an electronic device 100 according to an embodiment of the present disclosure is shown. Fig. 1 is a front view of the electronic device, and fig. 2 is a side view of the electronic device.
The electronic device 100 comprises an electronic body 10, wherein the electronic body 10 comprises a housing 12 and a screen 120 disposed on the housing 12, the housing 12 comprises a front panel 125, a rear cover 127 and a bezel 126, the bezel 126 is used for connecting the front panel 125 and the rear cover 127, and the screen 120 is disposed on the front panel 125.
The electronic device 100 further includes a camera 140, the camera 140 is disposed on the front panel 125, the camera 140 is located at one end of the front panel 125 close to the top of the electronic device 100, and the camera 140 is used for collecting images to shoot, unlock, scan codes and the like.
The electronic device 100 further includes an exciter 131, and the exciter 131 is configured to drive a vibration component of the electronic device 100 to generate sound by vibration, specifically, the vibration component is at least one of the screen 120 or the housing 12 of the electronic device, that is, the vibration component may be the screen 120, or may be a part or all of the housing 12, or may be a combination of the screen 120 and the housing 12. As an embodiment, when the vibration member is the housing 12, the vibration member may be a rear cover of the housing 12.
In the embodiment of the present application, if the vibration component is the screen 120, the exciter 131 is connected to the screen 120 to drive the screen 120 to vibrate. Specifically, the exciter 131 is attached below the screen 120 and may be a piezoelectric driver or a motor. In one embodiment, the exciter 131 is a piezoelectric driver, which transmits its own deformation to the screen 120 through a moment action so that the screen 120 vibrates to generate sound. The screen 120 includes a touch screen and a display screen; the display screen is located below the touch screen, and the piezoelectric driver is attached below the display screen, that is, to the surface of the display screen facing away from the touch screen. The piezoelectric driver includes multiple layers of piezoelectric ceramic sheets. When the multilayer piezoelectric ceramic sheets expand and contract, they drive the screen to bend and deform, and the whole screen repeatedly undergoes bending vibration, so the screen pushes the air and produces sound.
As an embodiment, as shown in Fig. 3, the electronic device 100 further includes a circuit 200 connected to the exciter 131. The circuit 200 comprises a detection circuit 210 and a driving circuit 211. The detection circuit 210 is configured to receive, when the electronic device is in a screen sound emission mode in which the exciter drives the screen to vibrate and emit sound, instruction information instructing the electronic device to start the camera, and, in response to the instruction information, to determine areas other than at least the target area from among the plurality of areas as sound emitting areas. The driving circuit 211 is configured to start the camera and control the exciter to drive the sound emitting areas to vibrate and emit sound. The exciter 131 is connected to the driving circuit 211, which inputs a control signal to the exciter 131 according to the vibration parameters so as to drive the exciter 131 to vibrate and thereby drive the screen 120 to vibrate. Specifically, the driving circuit 211 may be a processor of the electronic device 100, or an integrated circuit in the electronic device 100 capable of generating a driving voltage or current. The driving circuit outputs a high-low level driving signal to the exciter 131, and the exciter 131 vibrates according to the driving signal; different electrical parameters of the driving signal yield different vibration parameters of the exciter 131. For example, the duty ratio of the driving signal corresponds to the vibration frequency of the exciter 131, and the amplitude of the driving signal corresponds to the vibration amplitude of the exciter 131.
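The drive-signal-to-vibration mapping described above can be sketched as follows. This is an illustrative model only: the patent gives no concrete scaling, so the names, the `max_freq_hz` ceiling, and the `um_per_volt` factor are assumptions invented for the example.

```python
from dataclasses import dataclass

@dataclass
class DriveSignal:
    duty_cycle: float   # 0.0 - 1.0, PWM duty ratio of the drive signal
    amplitude: float    # drive-signal amplitude, in volts

@dataclass
class VibrationParams:
    frequency_hz: float
    displacement_um: float

def drive_to_vibration(signal: DriveSignal,
                       max_freq_hz: float = 20000.0,
                       um_per_volt: float = 1.5) -> VibrationParams:
    """Map a drive signal to exciter vibration parameters: duty cycle
    sets the vibration frequency, signal amplitude sets the vibration
    amplitude (both linear here, purely for illustration)."""
    return VibrationParams(
        frequency_hz=signal.duty_cycle * max_freq_hz,
        displacement_um=signal.amplitude * um_per_volt,
    )
```

A real driving circuit would of course shape the waveform rather than apply a linear map, but the sketch captures the correspondence the paragraph describes.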
In the present embodiment, multiple exciters 131 may be evenly distributed on the screen 120 so that the screen 120 is divided into a plurality of areas that emit sound individually. For example, if there are 4 exciters 131, the screen 120 may be divided into 4 square areas along its vertical and horizontal center lines, with the 4 exciters 131 disposed below the 4 square areas in one-to-one correspondence. Of course, the number of exciters 131 is not limited in the embodiment of the present application.
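The four-exciter layout in the example above can be sketched as a partition of the screen along its center lines, with one exciter per region. The region names, rectangle representation, and pixel dimensions are assumptions for illustration.

```python
def split_into_quadrants(width: int, height: int) -> dict:
    """Divide a screen into 4 areas along its vertical and horizontal
    center lines, returning region name -> (left, top, right, bottom)."""
    mx, my = width // 2, height // 2
    return {
        "top_left":     (0,  0,  mx, my),
        "top_right":    (mx, 0,  width, my),
        "bottom_left":  (0,  my, mx, height),
        "bottom_right": (mx, my, width, height),
    }

# One exciter per region, in one-to-one correspondence.
exciters = {region: f"exciter_{i}"
            for i, region in enumerate(split_into_quadrants(1080, 2160))}
```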
Thus, addressing the problem that components such as a receiver arranged on the display-screen side greatly constrain the display effect, the inventors, through long-term research, arrived at the sound production control method, sound production control device, electronic device, and storage medium provided herein, which, when the camera is started, control areas other than at least the area where the camera is located to vibrate and produce sound, weakening the influence of screen vibration on camera shooting and improving the shooting effect.
Examples
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating a first sound emission control method according to an embodiment of the present application. The sound production control method is used for controlling other areas except the area where the camera is located to produce sound in a vibration mode when the camera is started, so that the influence of screen vibration sound production on camera shooting is weakened, and the shooting effect is improved. In a specific embodiment, the sound emission control method is applied to the sound emission control device 300 shown in fig. 7 and the electronic device 100 (fig. 1 and 2) equipped with the sound emission control device 300. The specific process of the present embodiment will be described below by taking an electronic device as an example, and it is understood that the electronic device applied in the present embodiment may be a smart phone, a tablet computer, a wearable electronic device, and the like, which is not limited herein. As will be described in detail with respect to the flow shown in fig. 4, the sound production control method may specifically include the following steps:
step S110: when the electronic device is in a screen sound production mode, receiving instruction information indicating that the electronic device starts the camera, wherein in the screen sound production mode, the exciter drives the screen to vibrate and produce sound.
In this embodiment, the screen of the electronic device may be used for vibration sound generation, and is suitable for a non-headset call mode of the electronic device, where the non-headset call mode includes a hands-free mode and an earpiece mode, and is used for playing a voice signal sent by the electronic device during a call, playing a video, and the like. In this embodiment, when the electronic device is in the non-earphone mode, the electronic device may be set to the screen sound emission mode by default, or the user may select whether to set the screen sound emission mode by himself, which is not limited herein. It is understood that the electronic device may include a speaker mode, etc. in addition to the screen sound emission mode, in which sound is emitted by the vibration of the speaker.
As one manner, whether the electronic device is in the screen sound production mode during a call or while playing a video may be determined by first checking whether an earphone is connected. This can be judged from the state of the electronic device's earphone connection hole: for example, a first state value is returned when an earphone is inserted into the connection hole, and a second state value is returned when the earphone is pulled out, so detecting these state values reveals whether an earphone is currently connected. Alternatively, since the Android system sends a broadcast when an earphone is plugged into or unplugged from the connection hole, the electronic device can determine whether an earphone is currently connected by monitoring that broadcast, and can thereby determine whether it is in an earphone call mode. Further, when the electronic device is determined to be in a non-headset call mode, it is in the earpiece mode or the hands-free mode, and may therefore be considered to be in the screen sound emission mode.
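The headset check described above reduces to the following decision; the state values and mode names are hypothetical placeholders for the platform's actual jack-state query or Android headset-plug broadcast.

```python
# Hypothetical state values returned by the earphone connection hole;
# the patent names "first" and "second" state values without fixing them.
HEADSET_PLUGGED = 1    # first state value: earphone in the connection hole
HEADSET_UNPLUGGED = 2  # second state value: earphone pulled out

def current_sound_mode(headset_state: int) -> str:
    """With no headset connected, the device is in a non-headset call
    mode (earpiece or hands-free), which here implies the screen sound
    production mode."""
    if headset_state == HEADSET_PLUGGED:
        return "headset_mode"
    return "screen_sound_mode"
```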
Alternatively, since the screen is driven to emit sound by the actuator in the screen-emission mode, it is possible to determine whether the electronic apparatus is in the screen-emission mode by detecting the state of the actuator. Specifically, the state of the exciter is detected, wherein when the exciter is detected to be in the working state and the screen is driven to vibrate, the electronic device can be considered to be in the screen sounding mode.
Further, when the electronic device is in the screen sound production mode, it can detect whether it receives instruction information indicating that the camera should be started. For example, when a touch operation on a camera-type icon is detected, a touch operation on a control instructing the electronic device to capture images is detected, or voice input instructing the electronic device to start the camera is detected, this indicates that the camera of the electronic device is to be started. Optionally, in this embodiment, the camera is a front camera disposed in a target area among the plurality of areas of the screen; it can be understood that the target area may comprise one, two, three, or more of the plurality of areas, which is not limited here.
Step S120: and determining other areas except at least the target area from the plurality of areas as sounding areas in response to the instruction information.
In the present embodiment, the mobile terminal, in response to the instruction information, determines areas other than at least the target area from among the plurality of areas as the sound emission area. Specifically, suppose the screen includes four areas, namely area A, area B, area C, and area D, and the camera is disposed in area A. At this time, at least the areas other than area A may serve as the sound emission area: area B alone, area C alone, area D alone, areas B and C together, areas B and D together, areas C and D together, or areas B, C, and D together. This avoids the problem that the camera cannot focus, or focuses only with difficulty, when the area where it is located vibrates.
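Step S120 amounts to set subtraction: any non-empty subset of the non-target areas is a valid sounding area. A minimal sketch, using the A/B/C/D naming from the example above:

```python
from itertools import chain, combinations

def candidate_sounding_areas(regions: list, target: str) -> list:
    """All valid sounding-area choices: every non-empty subset of the
    regions excluding the target (camera) region."""
    others = [r for r in regions if r != target]
    subsets = chain.from_iterable(
        combinations(others, n) for n in range(1, len(others) + 1))
    return [set(s) for s in subsets]
```

With four regions and the camera in area A, this yields the seven combinations enumerated in the paragraph above.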
Step S130: and starting the camera and controlling the exciter to drive the sound production area to produce sound in a vibration mode.
Further, the electronic device starts the camera in response to the instruction information and controls the exciter to drive the determined sound production area to vibrate and produce sound. It can be understood that, if all areas of the electronic device were vibrating to produce sound before the instruction information was received, then after it is received only the target area where the camera is located (or a partial region of the screen that includes the target area) needs to be kept from vibrating, so as to weaken the influence of screen sounding on the camera's image acquisition.
As one way, in this embodiment the exciter has a plurality of vibration units disposed in correspondence with the plurality of areas, that is, one vibration unit for each area, each vibration unit driving its corresponding area to vibrate and produce sound. Specifically, after the sound production area is determined from the plurality of areas, the camera is started and the vibration unit corresponding to each area in the sound production area is controlled to drive that area to vibrate and produce sound. As another mode, each vibration unit may have its own vibration parameters; in that case, after the sound emission areas are determined, the camera is started and the vibration unit corresponding to each sound emission area is controlled to drive it to vibrate according to the corresponding parameters. As one implementation, after the sound emission area is determined, in order to keep the volume of the electronic device unchanged from before the camera was started, the vibration parameters of the other vibration units may be correspondingly increased to compensate for the volume lost when the target area does not vibrate.
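The volume-compensation idea at the end of this paragraph can be sketched as redistributing the muted target region's share of the output evenly across the remaining vibration units so the total stays constant. The linear additive amplitude model is an assumption made for the sketch; real loudness compensation would be perceptual rather than linear.

```python
def compensate_amplitudes(amplitudes: dict, muted: str) -> dict:
    """Zero the muted (camera) region's vibration amplitude and spread
    its former share evenly across the remaining vibration units so the
    summed output is unchanged."""
    share = amplitudes[muted] / (len(amplitudes) - 1)
    return {region: 0.0 if region == muted else amp + share
            for region, amp in amplitudes.items()}
```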
In the first sound production control method provided by this embodiment, when the electronic device is in the screen sound production mode, in which the screen is driven by the exciter to vibrate and produce sound, it receives instruction information instructing it to start the camera. In response, areas other than at least the target area are determined from the plurality of areas as sound production areas, the camera is started, and the exciter is controlled to drive the sound production areas to vibrate and produce sound. Thus, when the camera is started, only areas other than at least the area where the camera is located vibrate to produce sound, weakening the influence of screen vibration on camera shooting and improving user experience.
Referring to fig. 5, fig. 5 is a schematic flowchart illustrating a second sound emission control method according to an embodiment of the present application. As will be explained in detail with respect to the flow shown in fig. 5, the method may specifically include the following steps:
step S210: when the electronic device is in a screen sound production mode, receiving instruction information indicating that the electronic device starts the camera, wherein in the screen sound production mode, the exciter drives the screen to vibrate and produce sound.
Step S220: and responding to the instruction information, displaying a selection control on the screen, and detecting a selection operation acted on the selection control.
As one mode, when receiving instruction information instructing the electronic device to start the camera, the electronic device responds by displaying a selection control on the screen. The selection control may be displayed floating above the currently displayed page or at the same level as that page, which is not limited here. Further, the selection control may include at least the two options "yes" and "no", and the selection operation acting on the control, that is, on "yes" or "no", is detected. It can be understood that a touch operation on the option "yes" indicates that the user wants to turn off the screen sound emission mode, while a touch operation on the option "no" indicates that the user does not, that is, the screen sound emission mode is kept.
Step S230: when the selection operation indicates that the electronic device should close the screen sound production mode, controlling the electronic device to switch from the screen sound production mode to a loudspeaker sound production mode, wherein, in the loudspeaker sound production mode, sound is produced by vibration of the loudspeaker.
In this embodiment, the electronic device further includes a speaker for generating sound by vibration, wherein the speaker may be disposed inside the electronic body and generates sound through a sound output hole disposed at the bottom of the electronic device, and specifically, the speaker generates mechanical vibration to push the surrounding air, so that the air medium fluctuates to realize the conversion of "electricity-force-sound". Further, the electronic device may include a speaker sound emission mode in which sound is emitted by the speaker vibration in addition to the screen sound emission mode. Therefore, as a mode, when the selection operation is detected to indicate that the electronic device turns off the screen sounding mode, the electronic device can be controlled to be switched from the screen sounding mode to the loudspeaker sounding mode, so that the influence of sounding of the electronic device on focusing when the camera collects images is weakened to the maximum extent.
Step S240: when the selection operation indicates that the electronic device maintains the screen sound emission mode, areas other than at least the target area are determined from the plurality of areas as the sound emission areas.
When the selection operation instructs the electronic device to maintain the screen sound emission mode, the user is indicating that the screen sound emission mode should continue to be used, and areas other than at least the target area are determined from the plurality of areas as the sound emission areas.
Step S250: the camera is started and the exciter is controlled to drive the second area to vibrate and emit sound, or the camera is started and the exciter is controlled to drive the first area and the second area to vibrate and emit sound together.
In this embodiment, referring to fig. 6, as one mode, the plurality of areas further includes a first area and a second area; that is, the plurality of areas includes the first area, the second area and the target area, denoted S1, S2 and S3 respectively, where the first area S1 is located between the second area S2 and the target area S3. It can be understood that the relative size relationship among the first area S1, the second area S2 and the target area S3 is not limited herein. Further, in the present embodiment, the electronic device starts the camera in response to the instruction information and controls the exciter to drive the second area S2 to vibrate and emit sound, or controls the exciter to drive the first area S1 and the second area S2 to vibrate and emit sound together. Specifically, the vibration unit corresponding to the second area S2 may be controlled to drive the second area S2 to vibrate and emit sound, or, in addition, the vibration unit corresponding to the first area S1 may also be controlled to drive the first area S1 to vibrate and emit sound. That is, since the second area S2 is farther from the target area S3 than the first area S1 is, i.e., farther from the camera, its vibration sound has a smaller influence on the camera than that of the first area S1. Therefore, the second area S2 may be preferentially controlled to vibrate and emit sound, or the first area S1 and the second area S2 may be controlled to vibrate and emit sound together according to the sound emission requirement.
For example, a current volume value of the electronic device is obtained, where the current volume value may be the volume value at the time the electronic device receives the instruction information instructing it to start the camera, or the volume value at the time the electronic device determines the sound emission areas from the plurality of areas, which is not limited herein. As one mode, the electronic device presets and stores a preset volume value that serves as the basis for judging the current volume value; that is, after the current volume value is obtained, it is compared with the preset volume value to judge whether it is greater than the preset volume value. When the current volume value is not greater than the preset volume value, the camera may be started and the exciter controlled to drive the second area S2 to vibrate and emit sound, obtaining a relatively smaller volume; when the current volume value is greater than the preset volume value, the camera may be started and the exciter controlled to drive the first area S1 and the second area S2 to vibrate and emit sound together, obtaining a relatively larger volume.
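The volume comparison described above amounts to a simple threshold rule. A minimal sketch, assuming integer volume values; the function name `choose_sounding_regions` and the region labels are taken from the embodiment's S1/S2 notation but are otherwise hypothetical:

```python
def choose_sounding_regions(current_volume: int, preset_volume: int) -> list:
    """Pick which screen areas vibrate: the second area S2, farthest from
    the camera, sounds alone at lower volumes; the first area S1 joins it
    only when the current volume exceeds the preset threshold."""
    if current_volume > preset_volume:
        return ["S1", "S2"]   # both areas vibrate for a relatively larger volume
    return ["S2"]             # only the far area, minimizing influence on the camera
```

The asymmetry is deliberate: the quieter configuration is the default, and the area nearer the camera is recruited only when the louder output is actually required.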
In the second sound emission control method provided in an embodiment of the present application, when the electronic device is in the screen sound emission mode, instruction information instructing the electronic device to start the camera is received, where in the screen sound emission mode the exciter drives the screen to vibrate and emit sound. A selection control is displayed on the screen in response to the instruction information, and a selection operation acting on the selection control is detected. When the selection operation instructs the electronic device to turn off the screen sound emission mode, the electronic device is controlled to switch from the screen sound emission mode to the speaker sound emission mode, in which the speaker vibrates to emit sound. When the selection operation instructs the electronic device to maintain the screen sound emission mode, areas other than at least the target area are determined from the plurality of areas as sound emission areas, and the camera is started and the exciter is controlled to drive the second area to vibrate and emit sound, or to drive the first area and the second area to vibrate and emit sound together. Compared with the first sound emission control method, this method can also control the electronic device to adopt different sound emission modes according to the user's selection and automatically control different areas to emit sound, enhancing the applicability of the electronic device.
Referring to fig. 7, fig. 7 is a block diagram illustrating a sound emission control device 300 according to an embodiment of the present application. The sound emission control device 300 is applied to an electronic device that includes a screen capable of vibrating to emit sound, an exciter for driving the screen to emit sound, and a camera, where the screen includes a plurality of areas and the camera is disposed in a target area of the plurality of areas. The block diagram is explained below. The sound emission control device 300 includes a receiving module 310, a determining module 320 and a control module 330, wherein:
a receiving module 310, configured to receive instruction information instructing the electronic device to start the camera when the electronic device is in a screen sound emission mode, where the screen is driven by the exciter to vibrate and emit sound in the screen sound emission mode.
A determining module 320, configured to determine, in response to the instruction information, areas other than at least the target area from the plurality of areas as sound emission areas. Further, the determining module 320 includes a selection operation detection sub-module, a sound emission area determination sub-module and a mode switching sub-module, wherein:
A selection operation detection sub-module, configured to display a selection control on the screen in response to the instruction information and to detect a selection operation acting on the selection control.
A sound emission area determination sub-module configured to determine, as the sound emission area, an area other than at least the target area from the plurality of areas when the selection operation indicates that the electronic apparatus maintains the screen sound emission mode.
A mode switching sub-module, configured to control the electronic device to switch from the screen sound emission mode to the speaker sound emission mode when the selection operation indicates that the electronic device closes the screen sound emission mode, where the speaker vibrates to emit sound in the speaker sound emission mode.
A control module 330, configured to start the camera and control the exciter to drive the sound emission areas to vibrate and emit sound. Further, the plurality of areas further includes a first area and a second area, the first area being located between the target area and the second area, and the control module 330 includes a control sub-module, wherein:
The control sub-module is configured to start the camera and control the exciter to drive the second area to vibrate and emit sound, or to start the camera and control the exciter to drive the first area and the second area to vibrate and emit sound together.
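The division of labor among the receiving, determining and control modules of device 300 can be illustrated with a short sketch. The `Exciter` API and class names below are hypothetical stand-ins; the embodiment does not prescribe a software interface:

```python
class Exciter:
    """Hypothetical exciter API: one vibration unit per screen area."""
    def __init__(self):
        self.driven = []            # records which areas have been driven
    def vibrate(self, region: str):
        self.driven.append(region)  # drive the vibration unit for this area

class SoundEmissionController:
    """Mirrors device 300: the determining module picks every area except
    the camera's target area; the control module drives those areas."""
    def __init__(self, regions, target_region, exciter):
        self.regions = regions      # e.g. ["S1", "S2", "S3"]
        self.target = target_region # area containing the camera
        self.exciter = exciter

    def on_camera_start(self):
        # determining module: exclude at least the camera's target area
        sounding = [r for r in self.regions if r != self.target]
        # control module: drive each sound emission area's vibration unit
        for r in sounding:
            self.exciter.vibrate(r)
        return sounding
```

For example, with areas S1-S3 and the camera in S3, starting the camera drives only S1 and S2, which is exactly the behavior the determining and control modules are described as producing.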
To sum up, with the sound emission control method, sound emission control device, electronic device and storage medium provided by the embodiments of the present application, when the electronic device is in the screen sound emission mode, instruction information instructing the electronic device to start the camera is received, where in the screen sound emission mode the exciter drives the screen to vibrate and emit sound. In response to the instruction information, areas other than at least the target area are determined from the plurality of areas as sound emission areas, and the camera is started and the exciter is controlled to drive the sound emission areas to vibrate and emit sound. Thereby, when the camera is started, areas other than at least the area where the camera is located are controlled to vibrate and emit sound, weakening the influence of the screen's vibration sound on camera shooting and improving user experience.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment. For any processing manner described in the method embodiment, all the processing manners may be implemented by corresponding processing modules in the apparatus embodiment, and details in the apparatus embodiment are not described again.
An electronic device provided by the present application will be described with reference to fig. 8.
Referring to fig. 1, fig. 2 and fig. 8, based on the above-mentioned sound emission control method and device, an embodiment of the present application provides an electronic device 100 capable of executing the above-mentioned sound emission control method. The electronic device 100 includes a screen 120 capable of vibrating to emit sound, an exciter 131 for driving the screen to emit sound, a camera 140, a memory 104 and a processor 102, with the screen 120, the camera 140, the exciter 131 and the memory 104 coupled to the processor 102. The memory 104 stores a program that can execute the content of the foregoing embodiments, and the processor 102 can execute the program stored in the memory 104.
Another electronic device provided by the present application will be described with reference to fig. 9.
Referring to fig. 1, fig. 2 and fig. 9, an embodiment of the present application further provides an electronic device 100 based on the foregoing sound emission control method and device.
By way of example, the electronic device 100 may be any of various types of computer system equipment (only one modality shown in FIG. 1 by way of example) that is mobile or portable and that performs wireless communications. Specifically, the electronic apparatus 100 may be a mobile phone or a smart phone (e.g., an iPhone (TM) based phone), a Portable game device (e.g., Nintendo DS (TM), PlayStation Portable (TM), game Advance (TM), iPhone (TM)), a laptop computer, a PDA, a Portable internet device, a music player, and a data storage device, other handheld devices, and a head-mounted device (HMD) such as a watch, a headset, a pendant, a headset, and the like, and the electronic apparatus 100 may also be other wearable devices (e.g., a head-mounted device (HMD) such as electronic glasses, electronic clothes, an electronic bracelet, an electronic necklace, an electronic tattoo, an electronic device, or a smart watch).
The electronic device 100 may also be any of a number of electronic devices including, but not limited to, cellular phones, smart phones, other wireless communication devices, personal digital assistants, audio players, other media players, music recorders, video recorders, cameras, other media recorders, radios, medical devices, vehicle transportation equipment, calculators, programmable remote controllers, pagers, laptop computers, desktop computers, printers, netbook computers, Personal Digital Assistants (PDAs), Portable Multimedia Players (PMPs), Moving Picture Experts Group (MPEG-1 or MPEG-2) Audio Layer 3 (MP3) players, portable medical devices, and digital cameras, and combinations thereof.
In some cases, electronic device 100 may perform multiple functions (e.g., playing music, displaying videos, storing pictures, and receiving and sending telephone calls). If desired, the electronic apparatus 100 may be a portable device such as a cellular telephone, media player, other handheld device, wrist watch device, pendant device, earpiece device, or other compact portable device.
Referring to fig. 1, the electronic device 100 includes an electronic main body 10, and the electronic main body 10 includes a housing 12 and a screen 120 disposed on the housing 12. The housing 12 may be made of metal, such as steel or aluminum alloy. In this embodiment, the screen 120 generally includes a display panel 111, and may also include circuitry for responding to touch operations performed on the display panel 111. The display panel 111 may be a Liquid Crystal Display (LCD) panel, and in some embodiments, the display panel 111 is integrated with the screen 109.
Referring to fig. 9, in an actual application scenario, the electronic device 100 may be used as a smartphone terminal, in which case the electronic body 10 generally further includes one or more processors 102 (only one is shown in the figure), a memory 104, an RF (Radio Frequency) module 106, an audio circuit 110, a sensor 114, an input module 118, and a power module 122. It will be understood by those skilled in the art that the structure shown in fig. 9 is merely illustrative and is not intended to limit the structure of the electronic body 10. For example, the electronic body 10 may also include more or fewer components than shown in fig. 9, or have a different configuration than shown in fig. 9.
Those skilled in the art will appreciate that all other components are peripheral devices with respect to the processor 102, and the processor 102 is coupled to the peripheral devices through a plurality of peripheral interfaces 124. The peripheral interface 124 may be implemented based on the following standards: Universal Asynchronous Receiver/Transmitter (UART), General Purpose Input/Output (GPIO), Serial Peripheral Interface (SPI), and Inter-Integrated Circuit (I2C), but the present application is not limited to these standards. In some examples, the peripheral interface 124 may comprise only a bus; in other examples, the peripheral interface 124 may also include other elements, such as one or more controllers, for example, a display controller for interfacing with the display panel 111 or a memory controller for interfacing with a memory. These controllers may also be separate from the peripheral interface 124 and integrated within the processor 102 or a corresponding peripheral.
The memory 104 may be used to store software programs and modules, and the processor 102 executes various functional applications and data processing by executing the software programs and modules stored in the memory 104. For example, the memory 104 stores software programs and modules corresponding to the sound emission control method provided in the above embodiment, and the processor 102 executes the sound emission control method provided in the above embodiment when running the software programs and modules corresponding to the sound emission control method provided in the above embodiment. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located from the processor 102, which may be connected to the electronic body portion 10 or the screen 120 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The RF module 106 is used to receive and transmit electromagnetic waves and to perform interconversion between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The RF module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF module 106 may communicate with various networks such as the internet, an intranet or a wireless network, or may communicate with other devices via a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols, and technologies, including, but not limited to, Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (WiFi) (e.g., Institute of Electrical and Electronics Engineers (IEEE) standards IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), any other suitable protocol for instant messaging, and may even include protocols that have not yet been developed.
The audio circuitry 110, earpiece 101, sound jack 103, microphone 105 collectively provide an audio interface between a user and the electronics body portion 10 or the screen 120. Specifically, the audio circuit 110 receives sound data from the processor 102, converts the sound data into an electrical signal, and transmits the electrical signal to the earpiece 101. The earpiece 101 converts the electrical signal into sound waves that can be heard by the human ear. The audio circuitry 110 also receives electrical signals from the microphone 105, converts the electrical signals to sound data, and transmits the sound data to the processor 102 for further processing. Audio data may be retrieved from the memory 104 or through the RF module 106. In addition, audio data may also be stored in the memory 104 or transmitted through the RF module 106.
The sensor 114 is disposed in the electronic body portion 10 or in the screen 120. Examples of the sensor 114 include, but are not limited to: light sensors, motion sensors, pressure sensors, gravity acceleration sensors, and other sensors.
Specifically, the sensors 114 may include a light sensor 114F and a pressure sensor 114G. Among them, the pressure sensor 114G may be a sensor that detects pressure generated by pressing on the electronic device 100. That is, the pressure sensor 114G detects pressure resulting from contact or pressing between the user and the electronic device, for example, contact or pressing between the user's ear and the electronic device. Thus, the pressure sensor 114G may be used to determine whether contact or pressure has occurred between the user and the electronic device 100, as well as the magnitude of the pressure.
Referring to fig. 9 again, in the embodiment shown in fig. 9, the light sensor 114F and the pressure sensor 114G are disposed adjacent to the display panel 111. The light sensor 114F may turn off the display output by the processor 102 when an object is near the screen 120, for example, when the electronic body portion 10 moves to the ear.
As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in various directions (generally along three axes) and detect the magnitude and direction of gravity when the electronic device is stationary. It can be used for applications that recognize the attitude of the electronic device 100 (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration), vibration recognition related functions (such as a pedometer or tapping), and the like. In addition, the electronic body 10 may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer and a thermometer, which are not described herein.
In this embodiment, the input module 118 may include the screen 109 disposed on the screen 120. The screen 109 may collect touch operations of the user (such as operations performed by the user on or near the screen 109 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the screen 109 may include a touch detection device and a touch controller. The touch detection device detects the direction of the user's touch, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 102, and can receive and execute commands sent by the processor 102. The touch detection function of the screen 109 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the screen 109, in other variations, the input module 118 may include other input devices, such as keys 107. The keys 107 may include, for example, character keys for inputting characters and control keys for activating control functions. Examples of such control keys include a "return to home" key, a power on/off key, and the like.
The screen 120 is used to display information input by the user, information provided to the user, and various graphical user interfaces of the electronic body 10, which may be composed of graphics, text, icons, numbers, video, and any combination thereof. In one example, the screen 109 may be provided on the display panel 111 so as to be integrated with the display panel 111.
The power module 122 is used to provide power supply to the processor 102 and other components. Specifically, the power module 122 may include a power management system, one or more power sources (e.g., batteries or ac power), a charging circuit, a power failure detection circuit, an inverter, a power status indicator light, and any other components related to the generation, management, and distribution of power within the electronic body portion 10 or the screen 120.
The electronic device 100 further comprises a locator 119, the locator 119 being configured to determine an actual location of the electronic device 100. In this embodiment, the locator 119 uses a positioning service to locate the electronic device 100, and the positioning service is understood to be a technology or a service for obtaining the position information (e.g. longitude and latitude coordinates) of the electronic device 100 by a specific positioning technology and marking the position of the located object on the electronic map.
It should be understood that the electronic device 100 described above is not limited to the smartphone terminal; it refers to a computer device that can be used while mobile. Specifically, the electronic device 100 refers to a mobile computer device equipped with an intelligent operating system, including, but not limited to, a smart phone, a smart watch, a tablet computer, and the like.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments. In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A sound emission control method, applied to an electronic device, the electronic device comprising a screen capable of vibrating to emit sound, an exciter for driving the screen to emit sound, and a camera, wherein the screen comprises a plurality of areas, the camera is disposed in a target area of the plurality of areas, and the exciter is provided with a plurality of vibration units arranged corresponding to the plurality of areas, the method comprising:
when the electronic device is in a screen sound emission mode, receiving instruction information instructing the electronic device to start the camera, wherein the exciter drives the screen to vibrate and emit sound in the screen sound emission mode;
determining, in response to the instruction information, areas other than at least the target area from the plurality of areas as sound emission areas;
and starting the camera and respectively controlling the vibration unit corresponding to each of the sound emission areas to drive that area to vibrate and emit sound.
2. The method of claim 1, wherein the plurality of areas further comprises a first area and a second area, the first area being located between the target area and the second area, and wherein the starting the camera and controlling the exciter to drive the sound emission areas to vibrate and emit sound comprises:
starting the camera and controlling the exciter to drive the second area to vibrate and emit sound; or
starting the camera and controlling the exciter to drive the first area and the second area to vibrate and emit sound together.
3. The method of claim 2, further comprising:
acquiring a current volume value of the electronic device; and
determining whether the current volume value is greater than a preset volume value;
wherein when the current volume value is not greater than the preset volume value, the camera is started and the exciter is controlled to drive the second area to vibrate and produce sound; or
when the current volume value is greater than the preset volume value, the camera is started and the exciter is controlled to drive the first area and the second area to vibrate and produce sound together.
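The volume-dependent branch of claim 3 reduces to a simple threshold test. A minimal sketch, with hypothetical function and area names, might look like:

```python
def select_emission_areas(current_volume, preset_volume,
                          first_area="first", second_area="second"):
    """Claim 3's branch: at or below the preset volume, only the second
    area (the one farther from the camera) produces sound; above it,
    the first and second areas produce sound together."""
    if current_volume <= preset_volume:
        return [second_area]
    return [first_area, second_area]
```

Note that the claim's "not larger than" wording makes the equal-volume case fall into the second-area-only branch.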
4. The method of claim 1, wherein the determining, in response to the instruction information, areas of the plurality of areas other than at least the target area as sound emission areas comprises:
displaying, in response to the instruction information, a selection control on the screen and detecting a selection operation acting on the selection control; and
when the selection operation indicates that the electronic device is to remain in the screen sound production mode, determining areas of the plurality of areas other than at least the target area as the sound emission areas.
5. The method of claim 4, wherein the electronic device further comprises a speaker, and the method further comprises:
when the selection operation indicates that the electronic device is to exit the screen sound production mode, controlling the electronic device to switch from the screen sound production mode to a speaker sound production mode, wherein in the speaker sound production mode the speaker vibrates to produce sound.
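Claims 4 and 5 together describe a user-confirmed mode decision. As an illustrative sketch (the mode names and return layout are assumptions, not part of the claims):

```python
def apply_selection(selection_keeps_screen_mode, areas, target_area):
    """Claims 4-5 combined: if the user's selection keeps screen sound
    production mode, pick emission areas excluding the target area;
    otherwise switch to speaker sound production mode, in which the
    screen emits no sound."""
    if selection_keeps_screen_mode:
        mode = "screen"
        emission_areas = [a for a in areas if a != target_area]
    else:
        mode = "speaker"
        emission_areas = []
    return mode, emission_areas
```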
6. The method of claim 1, wherein each of the vibration units corresponds to a different vibration parameter, and the respectively controlling the vibration unit corresponding to each of the sound emission areas to drive that area to vibrate and produce sound comprises:
respectively controlling the vibration unit corresponding to each of the sound emission areas to drive that area to vibrate and produce sound according to the corresponding vibration parameter.
7. A sound production control apparatus, applied to an electronic device, wherein the electronic device comprises a screen capable of producing sound through vibration, an exciter for driving the screen to produce sound, and a camera; the screen comprises a plurality of areas, the camera is disposed in a target area among the plurality of areas, and the exciter has a plurality of vibration units arranged in correspondence with the plurality of areas; the apparatus comprises:
a receiving module, configured to receive, when the electronic device is in a screen sound production mode, instruction information instructing the electronic device to start the camera, wherein in the screen sound production mode the exciter drives the screen to vibrate and produce sound;
a determining module, configured to determine, in response to the instruction information, areas of the plurality of areas other than at least the target area as sound emission areas; and
a control module, configured to start the camera and respectively control the vibration unit corresponding to each of the sound emission areas to drive that area to vibrate and produce sound.
8. An electronic device, comprising a processor, a memory coupled to the processor, a screen capable of producing sound through vibration, an exciter for driving the screen to produce sound, and a camera, wherein the screen comprises a plurality of areas, the camera is disposed in a target area among the plurality of areas, and the exciter has a plurality of vibration units arranged in correspondence with the plurality of areas; the memory stores instructions that, when executed by the processor, cause the processor to perform the method of any one of claims 1-6.
9. A computer-readable storage medium having stored thereon program code executable by a processor, the program code causing the processor to perform the method of any one of claims 1-6.
10. An electronic device, comprising:
a screen and a camera, wherein the screen is capable of producing sound through vibration, the screen comprises a plurality of areas, and the camera is disposed in a target area among the plurality of areas;
an exciter having a plurality of vibration units arranged in correspondence with the plurality of areas and connected to the screen, the exciter being configured to drive the screen to vibrate and produce sound; and
a circuit connected to the exciter, the circuit comprising a detection circuit and a driving circuit, wherein the detection circuit is configured to receive, when the electronic device is in a screen sound production mode, instruction information instructing the electronic device to start the camera, the exciter driving the screen to vibrate and produce sound in the screen sound production mode, and to determine, in response to the instruction information, areas of the plurality of areas other than at least the target area as sound emission areas; and the driving circuit is configured to start the camera and respectively control the vibration unit corresponding to each of the sound emission areas to drive that area to vibrate and produce sound.
CN201810776921.5A 2018-07-13 2018-07-13 Sound production control method, sound production control device, electronic device, and storage medium Active CN109062534B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810776921.5A CN109062534B (en) 2018-07-13 2018-07-13 Sound production control method, sound production control device, electronic device, and storage medium


Publications (2)

Publication Number Publication Date
CN109062534A (en) 2018-12-21
CN109062534B (en) 2021-07-13

Family

ID=64816653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810776921.5A Active CN109062534B (en) 2018-07-13 2018-07-13 Sound production control method, sound production control device, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN109062534B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112423205A (en) * 2019-08-22 2021-02-26 Oppo广东移动通信有限公司 Electronic device and control method thereof
CN114125089A (en) * 2020-08-31 2022-03-01 北京小米移动软件有限公司 Terminal device, signal processing method and device, and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN103096220A (en) * 2013-01-04 2013-05-08 瑞声科技(南京)有限公司 Flat plate sound production device
CN103778909A (en) * 2014-01-10 2014-05-07 瑞声科技(南京)有限公司 Screen sounding system and control method thereof
CN105164714A (en) * 2013-04-26 2015-12-16 三星电子株式会社 User terminal device and controlling method thereof
CN106856582A (en) * 2017-01-23 2017-06-16 瑞声科技(南京)有限公司 The method and system of adjust automatically tonequality
CN106954143A (en) * 2017-03-02 2017-07-14 瑞声科技(南京)有限公司 Manually adjust the method and electronic equipment of tonequality
CN107561753A (en) * 2016-06-30 2018-01-09 乐金显示有限公司 Panel vibration type sounding display device

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP2006174004A (en) * 2004-12-15 2006-06-29 Citizen Electronics Co Ltd Flat surface speaker
JP6351964B2 (en) * 2013-12-11 2018-07-04 株式会社東海理化電機製作所 Input device


Also Published As

Publication number Publication date
CN109062534A (en) 2018-12-21

Similar Documents

Publication Publication Date Title
CN109194796B (en) Screen sounding method and device, electronic device and storage medium
CN108683761B (en) Sound production control method and device, electronic device and computer readable medium
CN108833638B (en) Sound production method, sound production device, electronic device and storage medium
CN108646971B (en) Screen sounding control method and device and electronic device
CN108881568B (en) Method and device for sounding display screen, electronic device and storage medium
CN109032556B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN109032558B (en) Sound production control method and device, electronic device and computer readable medium
CN109189362B (en) Sound production control method and device, electronic equipment and storage medium
CN108958632B (en) Sound production control method and device, electronic equipment and storage medium
CN109086023B (en) Sound production control method and device, electronic equipment and storage medium
CN109085985B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN109144460B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN108958697B (en) Screen sounding control method and device and electronic device
CN109062535B (en) Sound production control method and device, electronic device and computer readable medium
CN109086024B (en) Screen sounding method and device, electronic device and storage medium
CN108810198B (en) Sound production control method and device, electronic device and computer readable medium
CN109040919B (en) Sound production method, sound production device, electronic device and computer readable medium
CN108900728B (en) Reminding method, reminding device, electronic device and computer readable medium
CN109189360B (en) Screen sounding control method and device and electronic device
CN109144249B (en) Screen sounding method and device, electronic device and storage medium
CN108810764B (en) Sound production control method and device and electronic device
CN109240413B (en) Screen sounding method and device, electronic device and storage medium
CN108712706B (en) Sound production method, sound production device, electronic device and storage medium
CN109062533B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN110505335B (en) Sound production control method and device, electronic device and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant