CN109144461B - Sound production control method and device, electronic device and computer readable medium - Google Patents

Sound production control method and device, electronic device and computer readable medium

Info

Publication number
CN109144461B
CN109144461B (application CN201810747518.XA / CN201810747518A)
Authority
CN
China
Prior art keywords
screen
electronic device
sound
vibration
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810747518.XA
Other languages
Chinese (zh)
Other versions
CN109144461A (en)
Inventor
张海平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810747518.XA priority Critical patent/CN109144461B/en
Publication of CN109144461A publication Critical patent/CN109144461A/en
Application granted granted Critical
Publication of CN109144461B publication Critical patent/CN109144461B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Telephone Function (AREA)

Abstract

The embodiment of the application provides a sound production control method and device, an electronic device and a computer readable medium, and relates to the technical field of electronic devices. The method comprises the following steps: when the screen is detected to be in a vibration sound production mode, detecting the change in the distance between the human ear and the screen; if the distance decreases, increasing the vibration intensity of the screen; if the distance increases, reducing the vibration intensity of the screen; and controlling the exciter according to the adjusted vibration intensity so as to drive the screen to vibrate and produce sound at a volume corresponding to the vibration intensity. In this way, sound can be produced by vibrating the screen or the rear cover, so that no sound outlet hole needs to be provided on the electronic device. Moreover, the vibration intensity changes with, and is negatively correlated with, the distance, so that the volume of the electronic device is turned up when the ear approaches the screen.

Description

Sound production control method and device, electronic device and computer readable medium
Technical Field
The present disclosure relates to the field of electronic devices, and more particularly, to a method and an apparatus for controlling sound generation, an electronic device, and a computer readable medium.
Background
Currently, electronic devices such as mobile phones and tablet computers produce sound through a speaker to output sound signals. However, the speaker occupies considerable design space, which conflicts with the trend toward slimmer devices.
Disclosure of Invention
The application provides a sound production control method, a sound production control device, an electronic device and a computer readable medium to overcome the above drawback.
In a first aspect, an embodiment of the present application provides a sound production control method, which is applied to an electronic device, where the electronic device includes a screen and an exciter, and the exciter is used to drive the screen to produce sound. The method comprises the following steps: when the screen is detected to be in a vibration sound production mode, detecting the change in the distance between the human ear and the screen; if the distance decreases, increasing the vibration intensity of the screen; if the distance increases, reducing the vibration intensity of the screen; and controlling the exciter according to the adjusted vibration intensity so as to drive the screen to vibrate and produce sound at a volume corresponding to the vibration intensity.
In a second aspect, an embodiment of the present application further provides a sound production control device, which is applied to an electronic device, where the electronic device includes a screen and an exciter, and the exciter is used to drive the screen to produce sound. The sound production control device includes a detection unit, an increasing unit, a reducing unit and a driving unit. The detection unit is used for detecting the change in the distance between the human ear and the screen when the screen is detected to be in the vibration sound production mode. The increasing unit is used for increasing the vibration intensity of the screen if the distance decreases. The reducing unit is used for reducing the vibration intensity of the screen if the distance increases. The driving unit is used for controlling the exciter according to the adjusted vibration intensity so as to drive the screen to vibrate and produce sound at a volume corresponding to the vibration intensity.
In a third aspect, an embodiment of the present application further provides an electronic device, which includes a screen and an exciter, where the exciter is configured to drive the screen to produce sound. The electronic device further includes a distance detection circuit, a processor and a driving circuit. The distance detection circuit is used for acquiring the distance between the human ear and the screen when the screen is detected to be in a vibration sound production mode. The processor is used for detecting the change in the distance between the human ear and the screen, increasing the vibration intensity of the screen if the distance decreases, and reducing the vibration intensity of the screen if the distance increases. The driving circuit is connected with the processor and the exciter and is used for controlling the exciter according to the adjusted vibration intensity so as to drive the screen to vibrate and produce sound at a volume corresponding to the vibration intensity.
In a fourth aspect, embodiments of the present application further provide an electronic device, including a screen and an exciter, where the exciter is configured to drive the screen to sound; further comprising a memory and a processor, the memory coupled with the processor; the memory stores instructions that, when executed by the processor, cause the processor to perform the above-described method.
In a fifth aspect, the present application also provides a computer-readable medium having program code executable by a processor, where the program code causes the processor to execute the above method.
According to the sound production control method, the sound production control device, the electronic device and the computer readable medium, when the screen produces sound through vibration, the change in the distance between the user's ear and the screen is detected: if the distance increases, the vibration intensity of the screen is reduced, and if the distance decreases, the vibration intensity of the screen is increased. The screen is then driven to vibrate at the adjusted vibration intensity, so that it produces sound at the corresponding volume. In this way, sound can be produced by vibrating the screen or the rear cover, so that no sound outlet hole needs to be formed in the electronic device. The vibration intensity changes with, and is negatively correlated with, the distance: when the ear approaches the screen, which indicates that the user cannot hear clearly at the current volume, the volume of the electronic device is turned up.
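As a rough illustration (not part of the disclosure; all names, the intensity scale and the step size are invented for the sketch), the claimed adjustment can be written as a small function that raises or lowers the vibration intensity according to the sign of the distance change:

```python
# Sketch of the claimed control flow: vibration intensity is adjusted
# inversely with the ear-to-screen distance. The 0..10 intensity scale
# and the step size of 1 are illustrative assumptions.

def adjust_vibration_intensity(level, prev_dist, curr_dist, step=1, lo=0, hi=10):
    """Return the new vibration intensity level after one distance sample."""
    if curr_dist < prev_dist:          # ear moving closer -> turn volume up
        level = min(hi, level + step)
    elif curr_dist > prev_dist:        # ear moving away -> turn volume down
        level = max(lo, level - step)
    return level                       # unchanged distance leaves level alone

assert adjust_vibration_intensity(5, 10.0, 5.0) == 6   # closer: louder
assert adjust_vibration_intensity(5, 5.0, 10.0) == 4   # farther: quieter
assert adjust_vibration_intensity(10, 10.0, 5.0) == 10  # clamped at maximum
```

The driving circuit would then translate the resulting level into a control signal for the exciter, as described in the embodiments below.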
Additional features and advantages of embodiments of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of embodiments of the present application. The objectives and other advantages of the embodiments of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 illustrates a schematic structural diagram of an electronic device provided in an embodiment of the present application from a first viewing angle;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application from a second viewing angle;
FIG. 3 is a flow chart of a method of controlling sound production according to an embodiment of the present application;
FIG. 4 is a flow chart of a sound production control method provided by another embodiment of the present application;
fig. 5 shows a block diagram of a sound emission control device provided in an embodiment of the present application;
FIG. 6 illustrates a block diagram of an electronic device provided by an embodiment of the present application;
FIG. 7 illustrates a block diagram of an electronic device provided by another embodiment of the present application;
fig. 8 shows a block diagram of an electronic device for performing the method provided by the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
In an electronic device such as a mobile phone or tablet computer, the display screen generally displays text, pictures, icons or video. With the development of touch technologies, more and more display screens in electronic devices are touch display screens; when a touch operation such as dragging, clicking, double-clicking or sliding is detected on the touch display screen, the electronic device can respond to the user's touch operation.
As users demand higher definition and finer detail in displayed content, more electronic devices adopt touch display screens of larger size. However, when fitting a large touch display screen, it is found that functional devices disposed at the front of the electronic device, such as a front camera, a proximity light sensor and a receiver, limit the area to which the touch display screen can extend.
Generally, an electronic device includes a front panel, a rear cover and a bezel. The front panel includes a forehead area, a middle screen area and a lower key area. Typically, the forehead area houses the sound outlet of a receiver and functional devices such as a front camera, the middle screen area carries the touch display screen, and the lower key area carries one to three physical keys. As technology develops, the lower key area is gradually being eliminated, with its physical keys replaced by virtual keys in the touch display screen.
The receiver sound outlet in the forehead area is important to the phone's functionality and is not easily eliminated, so extending the displayable area of the touch display screen to cover the forehead area is difficult. After a series of studies, the inventor found that sound can be emitted by controlling the screen, the bezel or the rear cover of the mobile phone to vibrate, so that the receiver sound outlet can be dispensed with.
Referring to fig. 1 and 2, an electronic device 100 according to an embodiment of the present application is shown. Fig. 1 is a front view of the electronic device, and fig. 2 is a side view of the electronic device.
The electronic device 100 comprises an electronic body 10, wherein the electronic body 10 comprises a housing 12 and a screen 120 disposed on the housing 12, the housing 12 comprises a front panel 125, a rear cover 127 and a bezel 126, the bezel 126 is used for connecting the front panel 125 and the rear cover 127, and the screen 120 is disposed on the front panel 125.
The electronic device further comprises an exciter 131, wherein the exciter 131 is used for driving a vibration component of the electronic device to vibrate, specifically, the vibration component is at least one of the screen 120 or the housing 12 of the electronic device, that is, the vibration component can be the screen 120, the housing 12, or a combination of the screen 120 and the housing 12. As an embodiment, when the vibration member is the housing 12, the vibration member may be a rear cover of the housing 12.
In the embodiment of the present application, the vibration component is the screen 120, and the exciter 131 is connected to the screen 120 to drive it to vibrate. In particular, the actuator 131 is attached below the screen 120, and the actuator 131 may be a piezoelectric driver or a motor. In one embodiment, the actuator 131 is a piezoelectric driver. The piezoelectric driver transmits its own deformation to the screen 120 through a moment action, so that the screen 120 vibrates and produces sound. The screen 120 includes a touch screen 109 and a display panel 111; the display panel 111 is located below the touch screen 109, and the piezoelectric driver is attached below the display panel 111, that is, on the side of the display panel 111 away from the touch screen 109. The piezoelectric driver includes multiple stacked piezoelectric ceramic sheets. When the stacked ceramic sheets expand and contract during sound production, they drive the screen to bend, and the repeated bending vibration of the whole screen pushes the air and produces sound.
As an embodiment, the electronic device 100 includes a detection circuit and a driving circuit. The detection circuit detects whether the screen sound production area is in an abnormal state and adjusts the vibration parameters of the area when it is. The exciter 131 is connected to the driving circuit, which inputs a control signal to the exciter 131 according to the vibration parameters, driving the exciter 131 to vibrate and thereby driving the vibration component. In particular, the driving circuit may be a processor of the electronic device, or an integrated circuit within the electronic device capable of generating a driving voltage or current. The driving circuit outputs a high-low level driving signal to the exciter 131, and the exciter 131 vibrates according to that signal. Different electrical parameters of the driving signal produce different vibration parameters of the exciter 131: for example, the duty ratio of the driving signal corresponds to the vibration frequency of the exciter 131, and the amplitude of the driving signal corresponds to its vibration amplitude.
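The correspondence just described (duty ratio to vibration frequency, signal amplitude to vibration amplitude) can be sketched as a simple mapping. This is purely illustrative; the scale factors, the maximum frequency and the supply voltage are invented for the example and are not taken from the disclosure:

```python
# Illustrative mapping from desired vibration parameters to drive-signal
# electrical parameters. Linear scaling and the limits are assumptions.

def drive_signal_for(vib_freq_hz, vib_amplitude, max_freq_hz=1000.0,
                     max_voltage=5.0):
    """Return (duty_ratio, voltage) for the requested vibration."""
    duty_ratio = min(1.0, vib_freq_hz / max_freq_hz)        # frequency -> duty
    voltage = min(max_voltage, vib_amplitude * max_voltage)  # amplitude -> volts
    return duty_ratio, voltage

assert drive_signal_for(500.0, 0.5) == (0.5, 2.5)
assert drive_signal_for(2000.0, 2.0) == (1.0, 5.0)   # both values clamped
```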
In the embodiment of the present application, a plurality of actuators 131 may be evenly distributed over the screen 120, dividing the screen 120 into a plurality of independently sounding areas. For example, with 4 actuators, the screen may be divided into 4 areas along its vertical and horizontal center lines, with the 4 actuators disposed below the 4 areas in one-to-one correspondence. Of course, the number of actuators is not limited in the embodiments of the present application.
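The four-region example above amounts to a quadrant lookup. The following sketch (coordinate convention and actuator indices are illustrative, not from the disclosure) maps a screen position to the actuator under it:

```python
# Sketch of the four-region layout: the screen is split along its vertical
# and horizontal center lines, and each quadrant maps to one actuator.

def actuator_for_point(x, y, width, height):
    """Return the actuator index (0..3) for screen coordinates (x, y)."""
    col = 0 if x < width / 2 else 1    # left or right half
    row = 0 if y < height / 2 else 1   # top or bottom half
    return row * 2 + col  # 0: top-left, 1: top-right, 2: bottom-left, 3: bottom-right

assert actuator_for_point(10, 10, 100, 200) == 0    # top-left quadrant
assert actuator_for_point(90, 190, 100, 200) == 3   # bottom-right quadrant
```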
Through screen vibration, the electronic device can play voice, for example during voice chat or a call. However, the inventor found in research that when a user talks on an electronic device, in some scenarios the user holds the ear close to the screen to listen to the sound generated by screen vibration. If the volume setting is unreasonable, the user can adjust the volume, but doing so requires manually pressing a volume key or other button, which adds operations for the user and degrades the user experience.
Referring to fig. 3, an embodiment of the present application provides a sound emission control method for automatically adjusting a sound emission volume of a screen according to a distance between an ear of a user and the screen when an electronic device emits a sound by vibration, and specifically, the method includes: s301 to S304.
S301: and when the screen is detected to be in a vibration sound production mode, detecting the change condition of the distance between the human ear and the screen.
Specifically, whether the electronic device is currently in the vibration sound production mode may be determined by detecting the user's operation of an application capable of playing voice, such as music playing software or a call application. When in the vibration sound production mode, the electronic device may receive a sound production request. The sound production request is information indicating to the mobile terminal that the screen needs to be controlled to vibrate and produce sound. In one embodiment, the sound production request may be reminder information or a voice playing request.
The reminder information includes information reminding the user that some event has been triggered, such as call reminder information, short message reminder information or alarm reminder information. For example, call reminder information reminds the user of a current incoming call; the electronic device may enter the vibration sound production mode after the reminder information is acquired and before sound is produced, that is, the electronic device is then in a state of waiting to produce sound. Then, after the vibration parameters are acquired, at least one of the screen or the rear cover is controlled to vibrate and produce sound, thereby emitting a reminder sound such as a ring tone.
As another embodiment, the sound production request may be a request to play voice at any time while the mobile terminal is producing sound. In that case, the method provided by the embodiment of the application adjusts the vibration of the vibration component during the sound production of the mobile terminal, thereby adjusting the emitted sound.
For example, when a user clicks the play button of a video APP and the electronic device is not currently in a mute state, detection of the triggered play button causes entry into a vibration sound production mode using at least one of the screen or the rear cover, and the video's audio is played through screen vibration.
When the electronic device receives an incoming call, that is, when the telephone rings or the vibration reminder triggers, the electronic device can display a call interface on the screen. The user clicks the answer key in the incoming call interface to establish a call connection between the electronic device's current SIM card number and the calling number. Specifically, the phone state of the electronic device may be monitored through a phone manager within the system, thereby monitoring whether the electronic device is in a talk mode. The phone manager is an application module in the system of the electronic device through which the call status is obtained; for example, when the system is the Android system, the phone manager is TelephonyManager.
Therefore, when the screen is determined to be in the vibration sound production mode, the change in the distance between the human ear and the screen is detected. Specifically, this may use a distance sensor disposed on the electronic device, for example a proximity light sensor, with the distance sensor and the screen both on the use side of the electronic device. Alternatively, the electronic device may be provided with a camera on the use side, which can be a front camera: the ear images collected by the camera are obtained, and the change in distance is determined from the change in those images. For example, the ear images of the current frame and the previous frame are compared, and the change in the area occupied by the ear region, or in the ear's contour, is judged. If the area or contour becomes larger, the distance between the ear and the screen is judged to have decreased; if it becomes smaller, the distance is judged to have increased.
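The frame-to-frame comparison just described can be sketched as follows. This is a simplified illustration, not the disclosed implementation: the areas are assumed to come from some upstream ear-detection step, and only the sign of the change is used:

```python
# Illustrative camera-based check: compare the area the ear occupies in
# consecutive frames; a growing area means the ear is approaching.

def distance_trend(prev_ear_area, curr_ear_area):
    """Return 'closer', 'farther' or 'unchanged' from two ear-region areas."""
    if curr_ear_area > prev_ear_area:
        return "closer"    # ear looks bigger -> distance decreased
    if curr_ear_area < prev_ear_area:
        return "farther"   # ear looks smaller -> distance increased
    return "unchanged"

assert distance_trend(1200, 1500) == "closer"
assert distance_trend(1500, 1200) == "farther"
```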
In addition, considering changes in the user's position, the ear may initially not be in contact with the screen and then come into contact as it approaches. When the ear is in contact with the screen, the change in distance is not easily detected by the distance sensor or camera, so the change in the ear-screen distance can be detected in different ways depending on whether the ear is in contact with the screen.
Specifically, whether the human ear is in contact with the screen is detected.
Specifically, since the screen includes a touch screen, it can detect the pressed region when the ear is pressed against it, and thus whether the ear is in contact with the screen can be detected.
All pressed touch points on the screen are detected and combined into a touch area. The touch area is compared against a preset criterion, which may be a pre-collected distribution rule of touch points when an ear contacts the screen, or a preset touch region matched to the area an ear covers when touching the screen.
Specifically, determining whether the touch area meets the preset criterion may involve acquiring the contour line of the touch area: after all pressed touch points are acquired, they are fitted to a continuous curve to obtain the contour line. Whether this contour line matches a preset ear contour line is then judged. The preset ear contour line may be derived from big data covering most human ears, or collected in advance from the user's own ear pressed against the screen. If the contour line of the touch area matches the preset ear contour line, an ear is in contact with the screen, that is, the touch area meets the preset criterion; if not, no ear is in contact with the screen, that is, the criterion is not met.
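The matching step can be sketched as below. The disclosure does not specify the matching criterion, so this sketch approximates "matching" with a mean nearest-point distance under a threshold; both the metric and the threshold are illustrative assumptions:

```python
# Minimal sketch of the ear-contact check: fit the pressed touch points
# to a contour and compare it with a preset ear contour. "Matching" is
# approximated here by a mean nearest-point distance under a threshold.
import math

def mean_nearest_distance(contour_a, contour_b):
    """Average distance from each point of contour_a to its nearest point in contour_b."""
    total = 0.0
    for (ax, ay) in contour_a:
        total += min(math.hypot(ax - bx, ay - by) for (bx, by) in contour_b)
    return total / len(contour_a)

def ear_touches_screen(touch_contour, preset_ear_contour, threshold=5.0):
    """True if the touch contour is close enough to the preset ear contour."""
    return mean_nearest_distance(touch_contour, preset_ear_contour) <= threshold

preset = [(0, 0), (10, 0), (10, 20), (0, 20)]
assert ear_touches_screen([(1, 1), (9, 1), (9, 19), (1, 19)], preset)
assert not ear_touches_screen([(50, 50), (60, 50)], preset)
```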
If the ear is judged to be in contact with the screen, the touch parameters of the ear acting on the screen are acquired. A touch parameter is contact-related data detected by the screen when the user's ear touches it, for example at least one of a touch area, a touch pressure value or a touch duration. After the touch parameters are acquired, the change in the ear-screen distance is detected from them; specifically, the change in a touch parameter reflects the change in that distance.
As an embodiment, if the touch parameter is a touch area, an embodiment of detecting a change of a distance between an ear and the screen according to the touch parameter may be:
and detecting the change condition of the touch area, if the touch area is increased, determining that the distance is reduced, and if the touch area is reduced, determining that the distance is increased.
Specifically, the electronic device records the touch area each time ear-screen contact is collected, together with the corresponding collection time point, as shown in Table 1 below:
TABLE 1

Touch area    Acquisition time point
S1            T1
S2            T2
S3            T3
The touch area may be obtained as in the ear-contact detection above: all touch points are acquired and fitted into one region, the touch region, and the area of that region is then calculated.
Then the difference between the currently acquired touch area and the previously acquired one is evaluated. If the difference is positive, the current touch area is larger than the previous one and the distance has decreased; if the difference is negative, the current touch area is smaller than the previous one, the touch area has shrunk, and the distance has increased.
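This sign test can be written generically; the same logic applies whether the monitored touch parameter is the touch area (this embodiment) or the touch pressure value (the following embodiment), since both grow as the ear presses closer. The function name and return labels are illustrative:

```python
# Generic sign test on successive samples of a touch parameter (area or
# pressure): a growing value means the ear-screen distance is shrinking.

def distance_change_from_samples(previous_value, current_value):
    """Return 'smaller', 'larger' or 'unchanged' for the ear-screen distance."""
    diff = current_value - previous_value
    if diff > 0:       # parameter grew -> ear pressed closer
        return "smaller"
    if diff < 0:       # parameter shrank -> ear moved away
        return "larger"
    return "unchanged"

# E.g. touch area S2 at time T2 exceeds S1 at T1: distance decreased.
assert distance_change_from_samples(300, 450) == "smaller"
assert distance_change_from_samples(450, 300) == "larger"
```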
In another implementation, the touch parameter is a touch pressure value: a plurality of pressure sensors are arranged under the screen, and when the user's ear contacts the screen, the pressure the user exerts on the screen, that is, the touch pressure value, can be detected. As with the touch area, the electronic device may record each collected pressure value with its collection time point, similarly to Table 1 above.
Then the difference between the currently acquired touch pressure value and the previously acquired one is evaluated. If the difference is positive, the current pressure value is larger than the previous one and the distance has decreased; if the difference is negative, the current pressure value is smaller and the distance has increased. In other words, the closer the user's ear is to the screen, the harder it presses against it.
If the ear is not in contact with the screen, the change in the ear-screen distance can be determined via the distance sensor or the camera. As one embodiment, if the ear is not in contact with the screen, the ear images collected by the camera are acquired, and the change in distance is detected from the change in those images.
Specifically, an image collected by the camera is acquired, and contour lines are extracted from it. Whether the contours include an ear contour is determined; if an ear contour can be detected, the ear image within the image is identified and the size of its area determined. By monitoring the area of the ear image, it can be determined whether the ear image is growing or shrinking: a growing ear image indicates a decreasing distance, and a shrinking one an increasing distance.
In addition, before acquiring the ear image collected by the camera, whether a user is facing the screen may first be determined. Specifically, an image collected by the camera is obtained; if it is a two-dimensional image, whether a face has been captured can be determined by searching the image for facial feature points, and if so, a person is present in the camera's field of view. In another embodiment, the camera includes structured light, and whether three-dimensional face information exists is determined from the three-dimensional information collected by the structured light; if such information exists, a person is present in the camera's field of view.
Further, it is considered that although a person exists in the visual field of the camera, if the person is far away, the person is likely to be a passer-by rather than a user of the electronic device. Therefore, in one embodiment, whether a user exists in the visual field of the camera is determined as follows:
An image collected by the camera is acquired, a head contour is extracted from the collected image, and the contour area of the head contour is determined. If the area is larger than a preset value, the user corresponding to the contour is close to the screen; this user is then taken as the target user, the change of the distance between the target user's ear and the screen is detected, and the vibration of the screen is adjusted according to the change.
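A minimal sketch of the target-user selection by head-contour area; the threshold value and all names are assumed for illustration, since the patent only states that the area is compared against a preset value:

```python
HEAD_AREA_THRESHOLD = 5000  # pixels; an assumed example preset value

def select_target_users(head_contour_areas):
    """Return the indices of detected heads whose contour area is large
    enough to treat the person as the device's user rather than a
    distant passer-by."""
    return [i for i, area in enumerate(head_contour_areas)
            if area > HEAD_AREA_THRESHOLD]
```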
In addition, before the change of the distance between the human ear and the screen is detected, whether the electronic device is connected with an earphone can be determined. Specifically, the state of the earphone jack of the electronic device can be checked: when an earphone is plugged into the jack, a first state value is returned, and when the earphone is pulled out, a second state value is returned, so that whether an earphone is connected can be determined by detecting these state values. Specifically, the Android system sends a broadcast when a headset is plugged in or unplugged, so the electronic device can determine whether a headset is currently connected by monitoring the broadcast, and thus whether the electronic device is in a headset call mode. If the electronic device is not in the headset call mode and the screen is in the vibration sound production mode, the change of the distance between the human ear and the screen is detected.
S302: and if the distance is reduced, improving the vibration intensity of the screen.
S303: and if the distance is increased, reducing the vibration intensity of the screen.
As an embodiment, a correspondence relationship between distance and vibration intensity is preset in the electronic device. The correspondence relationship includes a plurality of distances and a plurality of vibration intensities, each distance corresponding to one vibration intensity, and the vibration intensity is inversely related to the distance: the smaller the distance, the greater the vibration intensity, and the greater the distance, the smaller the vibration intensity. After the distance between the human ear and the screen is obtained, the vibration intensity corresponding to the distance is looked up according to the correspondence relationship and used as the vibration intensity of the screen.
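The lookup described above might be sketched as follows. The concrete distance bounds and intensity values are invented example numbers, since the patent only specifies the inverse relation:

```python
# Assumed example table: each (upper distance bound in mm, intensity) pair;
# intensity is inversely related to distance, per the text.
DISTANCE_TO_INTENSITY = [
    (10, 100),
    (20, 80),
    (40, 60),
    (80, 40),
]

def vibration_intensity(distance_mm):
    """Look up the vibration intensity for a given ear-to-screen distance."""
    for bound, intensity in DISTANCE_TO_INTENSITY:
        if distance_mm <= bound:
            return intensity
    return 20  # minimum intensity beyond the last bound
```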
As another embodiment, the vibration intensity of the screen is gradually increased when the distance is gradually decreased, and the vibration intensity of the screen is gradually decreased when the distance is gradually increased.
Specifically, for example, when a user uses the electronic device to answer a call, the initial volume is a preset volume. As the user gradually brings the electronic device close to the ear, the collected distance between the electronic device and the ear continuously decreases, and the vibration intensity of the screen is increased; that is, on the basis of the preset volume, the call volume of the electronic device gradually increases as the distance gradually decreases. Specifically, the vibration intensity may be increased by a preset value every time the distance decreases by a preset length, for example, increased by 10 for every 5 mm decrease. Alternatively, the vibration intensity may be increased by one step for each reduction of the distance by the preset length, with each step spanning a fixed number of intensity units, and vice versa when the distance increases.
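The stepwise adjustment can be sketched as below, using the 5 mm / 10-unit example values from the text; the function name and the sign convention (negative change = moving closer) are assumptions:

```python
STEP_MM = 5          # distance change that triggers one adjustment (example)
STEP_INTENSITY = 10  # intensity units added or removed per step (example)

def adjust_intensity(intensity, distance_change_mm):
    """Raise intensity when the ear moves closer (negative change),
    lower it when the ear moves away (positive change)."""
    steps = int(abs(distance_change_mm) // STEP_MM)
    if distance_change_mm < 0:
        return intensity + steps * STEP_INTENSITY
    return intensity - steps * STEP_INTENSITY
```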
S304: and controlling the exciter according to the adjusted vibration intensity so as to drive the screen to vibrate and sound at a volume corresponding to the vibration intensity.
The driving circuit of the electronic device adjusts the driving signal according to the set vibration parameter, for example, if the set vibration parameter is a reduced vibration amplitude, the driving circuit reduces the level value of the high level of the driving signal, and if the set vibration parameter is a reduced vibration frequency, the driving circuit reduces the frequency of the driving signal.
The driving circuit sends the driving signal to the exciter, and the exciter controls the vibration component to vibrate according to the adjusted driving signal, so that the vibration frequency, amplitude and other parameters of the vibration component can be adjusted, and the sound characteristic information such as the strength, frequency and the like of the emitted sound can be changed.
Specifically, different vibration intensities correspond to different sound production volumes of the electronic device: the larger the vibration intensity, the larger the volume, and the smaller the vibration intensity, the smaller the volume. For example, when the vibration intensity is increased by 10, the volume is increased by 5 dB.
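The proportional intensity-to-volume mapping from this example (+10 intensity corresponds to +5 dB) can be written out as a one-liner; treating the relation as linear for all values is an assumption:

```python
def volume_change_db(intensity_change):
    """Map a vibration-intensity change to a volume change, using the
    example ratio from the text: every 10 intensity units <-> 5 dB."""
    return intensity_change * 5 / 10
```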
After the exciter acquires the vibration intensity, the area of the screen corresponding to the exciter is driven to vibrate according to the vibration intensity, and the volume of the vibration sound corresponds to the vibration intensity.
For example, when the electronic device answers a call, the volume when entering the vibration sound production mode is 100 dB, i.e., the preset volume is 100 dB. Then, as the distance between the screen of the electronic device and the user's ear gradually decreases, each detected decrease of 5 mm increases the volume by 5 dB: the volume first becomes 105 dB, then 110 dB after another 5 mm decrease, and so on, increasing as an arithmetic progression with a common difference of 5 dB. Of course, the volume may instead be increased according to a geometric progression or another mathematical model.
For another example, when the electronic device answers a call, the volume when entering the vibration sound production mode is at gear 8. When a decrease of 5 mm is detected, the volume is increased by one gear to gear 9, and after a further 5 mm decrease it becomes gear 10. Each gear corresponds to one volume, and the higher the gear, the larger the volume.
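Both numeric examples — the arithmetic dB progression and the gear-based variant — can be sketched together. The gear cap is an assumption; the patent does not state a maximum gear:

```python
def volume_after_approach(start_db, total_decrease_mm, step_mm=5, step_db=5):
    """Arithmetic-progression example: starting at start_db, each step_mm
    of approach adds step_db decibels."""
    return start_db + (total_decrease_mm // step_mm) * step_db

def gear_after_approach(start_gear, total_decrease_mm, step_mm=5, max_gear=15):
    """Gear-based variant: each step_mm of approach raises the volume by
    one gear, capped at an assumed maximum gear."""
    return min(max_gear, start_gear + total_decrease_mm // step_mm)
```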
Therefore, the distance between the ear and the screen can adjust the sound volume of the electronic device: the smaller the distance, the larger the volume, and the larger the distance, the smaller the volume. On the one hand, the user moving the ear closer to the screen can indicate that the current sound is hard to hear, so the volume of the electronic device is automatically increased. On the other hand, when the user's ear moves away from the screen, the current sound may be too loud; the user's instinctive reaction is to keep the ear away from the sound source, namely the screen, and decreasing the volume then exactly meets the user's need. In addition, the ear moving away from the screen may also indicate that the user no longer wants to hear the sound emitted by the electronic device, in which case decreasing the vibration intensity of the screen also effectively saves resources.
In addition, if the distance from the user's ear to the screen is too large, the volume may not need to be adjusted at all: the large distance may be a temporary state before the user moves closer to the screen, or the user may be about to stop listening to the sound produced by the screen vibration. In view of this, an embodiment of the present application provides a sound production control method, as shown in fig. 4, for automatically adjusting the volume of the screen vibration according to the distance between the human ear and the screen when the electronic device vibrates to produce sound. Specifically, the method includes: S401 to S405.
S401: and when the screen is detected to be in a vibration sound production mode, acquiring the distance between the human ear and the screen.
Specifically, the distance between the human ear and the screen can be detected by the distance sensor. For example, the distance sensor is a proximity light sensor, generally disposed on the same surface as the screen of the electronic device, for example in the top area of the screen. When the user answers a call, light emitted by the light emitting unit of the proximity light sensor is reflected by the user's body (for example, the ear) and then enters the light receiving unit, and the distance between the screen and the human ear is determined from the time between emitting the light and receiving the reflected light.
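A time-of-flight computation of this kind might look as follows. This is a sketch; halving the round-trip path is implied by reflection but not spelled out in the text, and the function name is assumed:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def proximity_distance_m(round_trip_time_s):
    """Distance to the reflecting object (e.g. the ear), from the time
    between emitting the light and receiving its reflection. The light
    travels to the object and back, so the one-way distance is half
    the round-trip path."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2
```

Real proximity light sensors typically report only a near/far state or a coarse distance; the computation above shows the underlying principle.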
In addition, considering that a user may be talking to other users using the electronic device, it is possible to receive a sound input by the user in addition to a sound emitted by the electronic device, for example, the electronic device is provided with a microphone through which a distance between the screen and the human ear is determined.
Specifically, whether the electronic device is in a call mode is determined, wherein when the electronic device is in the call mode, a microphone of the electronic device is in an open state, and an audio signal input by a user can be collected.
The microphone collects the voice input by the user; the time difference between two collections of the voice is obtained, the distance between the electronic device and the user is determined according to the time difference, and this distance is used as the distance between the screen and the user's ear. Specifically, the time difference is multiplied by the speed of sound to obtain the distance. In addition, the microphone and the screen of the electronic device can together form a Doppler-like sensor, in that the screen vibrates to emit sound and the microphone collects the returned sound. Specifically, a return volume value of a preset sound signal reflected by the user and collected by the microphone can be obtained, and the distance between the electronic device and the user determined according to the return volume value, where the preset sound signal is a sound signal emitted by the screen vibration. Alternatively, a first time point at which the screen vibration emits the sound signal is recorded, together with a second time point at which the microphone collects that signal after it is reflected by the user; a sound time difference is determined from the two time points, so that the distance between the electronic device and the user, namely the distance between the human ear and the screen, can be determined from the sound time difference and the propagation speed of sound.
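The acoustic variant can be sketched the same way, using the first and second time points described above. Halving for the round trip is an assumption here (the text only says the distance follows from the time difference and the propagation speed), and ~343 m/s is the usual speed of sound at room temperature:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def acoustic_distance_m(emit_time_s, receive_time_s):
    """One-way distance to the user: the screen emits a sound at
    emit_time_s, the microphone picks up the echo reflected by the
    user at receive_time_s, and the sound covers the path twice."""
    return SPEED_OF_SOUND * (receive_time_s - emit_time_s) / 2
```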
S402: and if the distance value is smaller than a preset value, detecting the change of the distance.
The preset value needs to be set according to the call mode. The preset value is used to determine whether the electronic device is close enough to the user, because if the distance between them is too large, there is little need to adjust the call volume. The call mode, which includes a play-out mode and a private answering mode, is used for playing voice signals sent by the electronic device during a call, video playback, and the like.
Wherein the maximum output volume of the play-out mode is greater than the maximum output volume of the private listening mode.
In the private answering mode, the user can clearly hear the sound emitted by the screen only by pressing the ear close to a certain area of the screen. In the play-out mode, the emitted sound is louder, the user does not need to press the ear against the screen to hear it, and the maximum output volume is larger than in the private answering mode; the vibration area of the screen in the private answering mode is also smaller than that in the play-out mode.
Taking the private answering mode as an example, the corresponding screen vibration sounding area is the answering vibration area. When the distance between the screen and the user's ear is greater than or equal to the preset value, the ear is far away from the answering vibration area, and the call volume does not need to be adjusted, because the user may not yet have moved the electronic device to a suitable answering position. The preset value can be set by the user according to actual needs and is not limited herein. In other call modes, the preset value does not need to be set; that is, the step of obtaining the distance between the human ear and the screen and the subsequent steps are not executed.
That is, the call mode of the electronic device is detected. If the call mode is the private answering mode, the distance between the ear and the screen is acquired and the subsequent operations are performed; if the call mode is not the private answering mode, the method ends.
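The gating on call mode and preset distance described in S402 and the surrounding paragraphs might be combined as follows; the mode labels and the 50 mm preset are assumed example values:

```python
def should_track_distance(call_mode, distance_mm, preset_mm=50):
    """Only the private answering mode gates on the preset distance;
    in other call modes distance tracking is skipped entirely."""
    if call_mode != "private":
        return False
    return distance_mm < preset_mm
```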
S403: and if the distance is reduced, improving the vibration intensity of the screen.
S404: and if the distance is increased, reducing the vibration intensity of the screen.
S405: and controlling the exciter according to the adjusted vibration intensity so as to drive the screen to vibrate and sound at a volume corresponding to the vibration intensity.
It should be noted that, the above-mentioned step of acquiring the distance between the ear and the screen and determining whether the distance is smaller than the preset value when detecting that the screen is in the vibration sound production mode may be combined with the embodiment corresponding to fig. 3, specifically, the step may be to acquire the image acquired by the camera when detecting that the screen is in the vibration sound production mode, determine whether a user faces the screen according to the acquired image, and if a user faces the screen, acquire the distance between the ear and the screen, and detect a change in the distance.
The method may further include detecting whether a human ear is in contact with the screen when it is detected that the screen is in the vibration sound production mode, if so, detecting a change condition of a distance between the human ear and the screen, if no human ear is in contact with the screen, acquiring the distance between the human ear and the screen, and if the distance value is smaller than a preset value, detecting a change of the distance.
Moreover, when the screen is detected to be in a vibration sound production mode, whether a human ear is in contact with the screen or not is detected, if so, the change situation of the distance between the human ear and the screen is detected, if no human ear is in contact with the screen, an image collected by the camera is obtained, whether a user faces the screen or not is determined according to the obtained image, if the user faces the screen, the distance between the human ear of the user and the screen is obtained, and if the distance value is smaller than a preset value, the change of the distance is detected.
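The combined detection flow of the last three paragraphs — pressure tracking on contact first, then the camera check, then the distance threshold — can be summarized in one sketch; all names and the preset value are illustrative:

```python
def detection_strategy(ear_touching, user_facing_screen, distance_mm,
                       preset_mm=50):
    """Choose how to track the ear-to-screen distance change:
    prefer the pressure path when the ear touches the screen,
    otherwise fall back to the camera, and only track distance
    changes when a facing user is within the preset distance."""
    if ear_touching:
        return "track-by-pressure"
    if user_facing_screen and distance_mm < preset_mm:
        return "track-by-distance"
    return "no-tracking"
```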
It should be noted that, for the parts not described in detail in the above steps, reference may be made to the foregoing embodiments, and details are not described herein again.
Referring to fig. 5, an embodiment of the present application provides a sound control apparatus 500, specifically, the apparatus includes: a detection unit 501, an increasing unit 502, a decreasing unit 503, and a driving unit 504.
The detecting unit 501 is configured to detect a change in a distance between an ear of a person and the screen when the screen is detected to be in a vibration sound production mode.
An increasing unit 502, configured to increase the vibration intensity of the screen if the distance becomes smaller.
A reducing unit 503, configured to reduce the vibration intensity of the screen if the distance is larger.
And the driving unit 504 is configured to control the exciter according to the adjusted vibration intensity, so as to drive the screen to vibrate and generate sound at a volume corresponding to the vibration intensity.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Referring to fig. 6, an electronic device provided by an embodiment of the present application is shown, which includes a screen 120 and an actuator 131, where the actuator 131 is used to drive the screen 120 to emit sound. The electronic device further includes: a distance detection circuit 601, a processor 102 and a drive circuit 602.
A distance detecting unit 601, configured to acquire a distance between an ear of a person and the screen 120 when the screen 120 is detected to be in a vibration sound emission mode.
And a processor 102, configured to detect a change in a distance between an ear of a person and the screen 120, increase the vibration intensity of the screen 120 if the distance is smaller, and decrease the vibration intensity of the screen 120 if the distance is larger.
And the driving circuit 602 is connected to the processor 102 and the exciter 131, and is configured to control the exciter 131 according to the adjusted vibration intensity, so as to drive the screen 120 to vibrate and generate sound at a volume corresponding to the vibration intensity.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Referring to fig. 7, an electronic device 100 provided in the embodiment of the present application is shown, including: a memory 104 and a processor 102, the memory 104 coupled with the processor 102; the memory 104 stores instructions that, when executed by the processor 102, cause the processor 102 to perform the above-described method.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Referring to fig. 1 and 2, based on the above method and apparatus, the embodiment of the present application further provides an electronic apparatus 100, and the electronic apparatus 100 may be any of various types of computer system devices (only one form is exemplarily shown in fig. 1 and 2) that is mobile or portable and performs wireless communication. Specifically, the electronic apparatus 100 may be a mobile phone or smart phone (e.g., an iPhone (TM)-based phone), a portable game device (e.g., Nintendo DS (TM), PlayStation Portable (TM), Game Boy Advance (TM)), a laptop computer, a PDA, a portable internet device, a music player, a data storage device, or another handheld device, and the electronic apparatus 100 may also be a wearable device, for example a head-mounted device (HMD) such as electronic glasses, or electronic clothes, an electronic bracelet, an electronic necklace, an electronic tattoo, a watch, a headset, a pendant, or a smart watch.
The electronic apparatus 100 may also be any of a number of electronic devices including, but not limited to, cellular phones, smart phones, other wireless communication devices, personal digital assistants, audio players, other media players, music recorders, video recorders, cameras, other media recorders, radios, medical devices, vehicle transportation equipment, calculators, programmable remote controllers, pagers, laptop computers, desktop computers, printers, netbook computers, Personal Digital Assistants (PDAs), Portable Multimedia Players (PMPs), Moving Picture Experts Group (MPEG-1 or MPEG-2) Audio Layer 3 (MP3) players, portable medical devices, and digital cameras, and combinations thereof.
In some cases, electronic device 100 may perform multiple functions (e.g., playing music, displaying videos, storing pictures, and receiving and sending telephone calls). If desired, the electronic apparatus 100 may be a portable device such as a cellular telephone, media player, other handheld device, wrist watch device, pendant device, earpiece device, or other compact portable device.
The electronic device 100 includes an electronic main body 10, and the electronic main body 10 includes a housing 12 and a main display 120 disposed on the housing 12. The housing 12 may be made of metal, such as steel or aluminum alloy. In this embodiment, the main display 120 generally includes a display panel 111, and may also include a circuit or the like for responding to a touch operation performed on the display panel 111. The Display panel 111 may be a Liquid Crystal Display (LCD) panel, and in some embodiments, the Display panel 111 is a touch screen 109.
Referring to fig. 8, in an actual application scenario, the electronic device 100 may be used as a smartphone terminal, in which case the electronic body 10 generally further includes one or more processors 102 (only one is shown in the figure), a memory 104, an RF (Radio Frequency) module 106, an audio circuit 110, a sensor 114, an input module 118, and a power module 122. It will be understood by those skilled in the art that the structure shown in fig. 8 is merely illustrative and is not intended to limit the structure of the electronic body 10. For example, the electronics body section 10 may also include more or fewer components than shown in fig. 8, or have a different configuration than shown in fig. 1 and 2.
Those skilled in the art will appreciate that all other components are peripheral devices with respect to the processor 102, and the processor 102 is coupled to the peripheral devices through a plurality of peripheral interfaces 124. The peripheral interface 124 may be implemented based on the following criteria: universal Asynchronous Receiver/Transmitter (UART), General Purpose Input/Output (GPIO), Serial Peripheral Interface (SPI), and Inter-Integrated Circuit (I2C), but the present invention is not limited to these standards. In some examples, the peripheral interface 124 may comprise only a bus; in other examples, the peripheral interface 124 may also include other elements, such as one or more controllers, for example, a display controller for interfacing with the display panel 111 or a memory controller for interfacing with a memory. These controllers may also be separate from the peripheral interface 124 and integrated within the processor 102 or a corresponding peripheral.
The memory 104 may be used to store software programs and modules, and the processor 102 executes various functional applications and data processing by executing the software programs and modules stored in the memory 104. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the electronic body portion 10 or the primary display 120 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The RF module 106 is configured to receive and transmit electromagnetic waves and to convert between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The RF module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF module 106 may communicate with various networks such as the internet, an intranet or a wireless network, or with other devices via a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols, and technologies, including, but not limited to, Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (WiFi) (e.g., Institute of Electrical and Electronics Engineers (IEEE) standards IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (Wi-MAX), any other suitable protocol for instant messaging, and may even include protocols that have not yet been developed.
The audio circuitry 110, sound jack 103, microphone 105 collectively provide an audio interface between a user and the electronic body portion 10 or the main display 120. Specifically, the audio circuit 110 may be used as the driving circuit described above if the audio circuit 110 receives sound data from the processor 102, converts the sound data into an electrical signal, and transmits the electrical signal to the exciter 131. The electric signal is used as a driving signal of the exciter 131, and the exciter 131 controls the vibration of the vibration part according to the electric signal, thereby converting the sound data into sound waves audible to human ears. The audio circuitry 110 also receives electrical signals from the microphone 105, converts the electrical signals to sound data, and transmits the sound data to the processor 102 for further processing. Audio data may be retrieved from the memory 104 or through the RF module 106. In addition, audio data may also be stored in the memory 104 or transmitted through the RF module 106.
The sensor 114 is disposed in the electronic body portion 10 or the main display 120, examples of the sensor 114 include, but are not limited to: light sensors, pressure sensors, acceleration sensors 114F, proximity sensors 114J, and other sensors.
In particular, the light sensor may comprise an ambient light sensor. The ambient light sensor can adjust the brightness of the screen according to the light of the environment where the electronic device is located. For example, in a well-lit environment the screen may be brightened, whereas in a dark environment the screen may be dimmed (depending on the brightness setting of the screen), which both protects the eyes and saves power.
The pressure sensor may detect pressure generated by pressing on the electronic device 100. That is, the pressure sensor detects pressure generated by contact or pressing between the user and the electronic device, for example, contact or pressing between the user's ear and the device. Thus, the pressure sensor may be used to determine whether contact or pressing has occurred between the user and the electronic device 100, as well as the magnitude of the pressure.
Referring to fig. 1 and 2 again, in particular, in the embodiment shown in fig. 1 and 2, the light sensor and the pressure sensor are disposed adjacent to the display panel 111. The light sensor may turn off the display output by the processor 102 when an object is near the main display 120, for example, when the electronic body portion 10 is moved to the ear.
As one of the motion sensors, the acceleration sensor 114F can detect the magnitude of acceleration in various directions (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the attitude of the electronic device 100 (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and for vibration recognition related functions (such as pedometer and tapping). In addition, the electronic body 10 may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, and a thermometer, which are not described herein.
In this embodiment, the input module 118 may include the touch screen 109 disposed on the main display 120, and the touch screen 109 may collect touch operations of the user (for example, operations of the user on or near the touch screen 109 using any suitable object or accessory such as a finger or a stylus) and drive the corresponding connection device according to a preset program. Optionally, the touch screen 109 may include a touch detection device and a touch controller. The touch detection device detects the touch direction of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 102, and can receive and execute commands sent by the processor 102. In addition, the touch detection function of the touch screen 109 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave.
The main display 120 is used to display information input by a user, information provided to the user, and various graphic user interfaces of the electronic body section 10, which may be composed of graphics, text, icons, numbers, video, and any combination thereof, and in one example, the touch screen 109 may be provided on the display panel 111 so as to be integrated with the display panel 111.
The power module 122 is used to provide power supply to the processor 102 and other components. Specifically, the power module 122 may include a power management system, one or more power sources (e.g., batteries or ac power), a charging circuit, a power failure detection circuit, an inverter, a power status indicator light, and any other components associated with the generation, management, and distribution of power within the electronic body portion 10 or the primary display 120.
The electronic device 100 further comprises a locator 119, the locator 119 being configured to determine an actual location of the electronic device 100. In this embodiment, the locator 119 uses a positioning service to locate the electronic device 100, and the positioning service is understood to be a technology or a service for obtaining the position information (e.g. longitude and latitude coordinates) of the electronic device 100 by a specific positioning technology and marking the position of the located object on the electronic map.
The electronic device 100 also includes a camera 132, which may be any device capable of capturing an image of an object within its field of view. The camera 132 may include an image sensor, which may be a CMOS (Complementary Metal Oxide Semiconductor) sensor, a CCD (Charge-Coupled Device) sensor, or the like. The camera 132 may communicate with the processor 102 and send image data to it, and may also receive command signals from the processor 102 that set the parameters for capturing images. Exemplary capture parameters include exposure time, aperture, image resolution/size, field of view (e.g., zooming in and out), and/or color space of the image (e.g., color or black and white), as well as parameters for performing other known camera functions. The processor 102 may acquire the image captured by the camera 132 and process it, for example to extract features from the image or to eliminate the effect of speckle-like patterns formed by other objects. The camera 132 and the processor 102 may be connected via a network connection, a bus, or another type of data link (e.g., a hard-wired or wireless connection such as Bluetooth(TM), or any other connection known in the art).
In summary, the sound production control method and device, electronic device, and computer-readable medium provided in the embodiments of the present application detect the change in the distance between the human ear and the screen while the screen vibrates to produce sound: if the distance increases, the vibration intensity of the screen is reduced, and if the distance decreases, the vibration intensity is increased. The screen is then driven to vibrate at the adjusted vibration intensity, so that it produces sound at the volume corresponding to that intensity. Sound can thus be produced by vibrating the screen or the rear cover, so that no sound outlet hole needs to be formed in the electronic device. Moreover, because the vibration intensity varies inversely with the distance, a user moving an ear closer to the screen, which indicates that the current volume cannot be heard clearly, causes the volume of the electronic device to be increased.
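The control loop summarized above can be sketched as follows. This is a minimal illustration only; the function name, step size, and intensity bounds are assumptions for the sketch, not the patent's implementation:

```python
# Hedged sketch of the summarized control loop: vibration intensity is
# adjusted inversely to the change in ear-to-screen distance. The ear
# moving closer is treated as a sign the user cannot hear clearly, so
# intensity (and hence volume) is raised; moving away lowers it.

def adjust_intensity(current: float, prev_distance: float,
                     new_distance: float,
                     step: float = 0.1,
                     lo: float = 0.0, hi: float = 1.0) -> float:
    """Return the new vibration intensity, clamped to [lo, hi]."""
    if new_distance < prev_distance:
        # Ear moved closer: raise intensity so the screen sounds louder.
        current = min(hi, current + step)
    elif new_distance > prev_distance:
        # Ear moved away: lower intensity.
        current = max(lo, current - step)
    return current
```

The clamping keeps the exciter drive within its usable range while preserving the inverse distance-to-intensity relationship described above.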
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be implemented by program instructions directing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of or a combination of the steps of the method embodiments. In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If implemented as a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application.

Claims (10)

1. A sound production control method, applied to an electronic device, wherein the electronic device comprises a microphone, a screen, and an exciter, the exciter being used for driving the screen to produce sound, and the method comprises the following steps:
when it is detected that the screen is in a vibration sound production mode, detecting whether a human ear is in contact with the screen;
if the human ear is not in contact with the screen, acquiring a returned volume value of a preset sound signal collected by the microphone after being reflected by the user, wherein the preset sound signal is a sound signal emitted by the screen vibration of the electronic device;
determining the change in the target distance between the user's ear and the screen according to the returned volume value;
if the target distance decreases, increasing the vibration intensity of the screen;
if the target distance increases, reducing the vibration intensity of the screen;
and controlling the exciter according to the adjusted vibration intensity so as to drive the screen to vibrate and produce sound at a volume corresponding to the vibration intensity.
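The step of inferring the distance change from the returned volume can be sketched as below. The simple level comparison and all names are illustrative assumptions; the claim does not prescribe a particular mapping:

```python
# Hedged sketch of the claim-1 step that infers the distance change from
# the returned volume of the preset sound signal picked up by the
# microphone: a stronger reflection suggests the ear has moved closer.

def distance_change_from_volume(prev_volume_db: float,
                                new_volume_db: float) -> str:
    """Map the change in returned volume to a distance trend."""
    if new_volume_db > prev_volume_db:
        return "decreased"   # stronger reflection -> ear closer
    if new_volume_db < prev_volume_db:
        return "increased"   # weaker reflection -> ear farther
    return "unchanged"
```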
2. The method of claim 1, further comprising:
if the human ear touches the screen, acquiring touch parameters of the human ear acting on the screen;
and detecting the change in the target distance between the human ear and the screen according to the touch parameters.
3. The method of claim 2, wherein the touch parameter is a touch area, and detecting the change in the target distance between the human ear and the screen according to the touch parameter comprises:
detecting the change in the touch area;
if the touch area increases, determining that the target distance has decreased;
and if the touch area decreases, determining that the target distance has increased.
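The touch-area heuristic of claim 3 can be sketched as follows; the function name and return values are assumptions for illustration, not part of the claim:

```python
# Illustrative sketch of claim 3: a growing contact area implies the ear
# is being pressed closer to the screen, a shrinking area implies it is
# pulling away.

def distance_trend(prev_area: float, new_area: float) -> str:
    """Infer the ear-to-screen distance trend from the touch area."""
    if new_area > prev_area:
        return "decreased"   # larger contact patch -> ear closer
    if new_area < prev_area:
        return "increased"   # smaller contact patch -> ear farther
    return "unchanged"
```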
4. The method of claim 2, wherein the detecting whether the human ear is in contact with the screen comprises:
acquiring a touch area detected by the screen;
obtaining a contour line of the touch area;
determining whether the contour line matches a preset human-ear contour line;
if so, determining that the human ear is in contact with the screen;
and if not, determining that the human ear is not in contact with the screen.
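The contact test of claim 4 can be sketched as below. The similarity measure (mean point-to-point distance over equally sampled contours) is an assumption chosen for simplicity; a real implementation might instead use shape descriptors such as Hu moments:

```python
# Hedged sketch of claim 4: compare the contour of the touch region with
# a stored ear-contour template. Both contours are assumed to be lists of
# (x, y) points sampled at the same number of positions.
import math

def contours_match(contour, template, tol=5.0):
    """True if the mean point-to-point distance is within `tol` pixels."""
    if len(contour) != len(template):
        return False
    dists = [math.dist(p, q) for p, q in zip(contour, template)]
    return sum(dists) / len(dists) <= tol

def ear_in_contact(touch_contour, ear_template):
    # Contact is declared only when the touch contour matches the template.
    return contours_match(touch_contour, ear_template)
```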
5. The method of claim 2, wherein the electronic device further comprises a camera disposed on a same side of the electronic device as the screen; the method further comprises the following steps:
if the human ear is not in contact with the screen, acquiring an image of the human ear captured by the camera;
and detecting the change in the target distance according to the change in the image of the human ear.
6. The method of claim 1, further comprising:
when it is detected that the screen is in the vibration sound production mode, acquiring the target distance between the human ear and the screen;
and if the target distance is smaller than a preset value, detecting the change in the target distance.
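The gating step of claim 6 can be sketched as follows; the threshold value and names are illustrative assumptions:

```python
# Hedged sketch of claim 6: distance-change monitoring only runs when the
# ear is already near the screen, avoiding needless tracking otherwise.

def should_monitor(target_distance_cm: float,
                   threshold_cm: float = 10.0) -> bool:
    """Track distance changes only when the ear is within the threshold."""
    return target_distance_cm < threshold_cm
```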
7. The sound production control device is applied to an electronic device, wherein the electronic device comprises a microphone, a screen and an exciter, and the exciter is used for driving the screen to produce sound; the sound production control device includes:
the detection unit is used for detecting whether a human ear is in contact with the screen when the screen is detected to be in a vibration sound production mode; if the human ear is not in contact with the screen, acquiring a returned volume value of a preset sound signal collected by the microphone after being reflected by the user, wherein the preset sound signal is a sound signal emitted by the screen vibration of the electronic device; and determining the change in the target distance between the user's ear and the screen according to the returned volume value;
an increasing unit configured to increase the vibration intensity of the screen if the target distance decreases;
a reducing unit configured to reduce the vibration intensity of the screen if the target distance increases;
and a driving unit used for controlling the exciter according to the adjusted vibration intensity so as to drive the screen to vibrate and produce sound at a volume corresponding to the vibration intensity.
8. An electronic device, comprising a microphone, a screen, and an exciter for driving the screen to produce sound; further comprising:
a distance detection unit for detecting whether a human ear is in contact with the screen; if the human ear is not in contact with the screen, acquiring a returned volume value of a preset sound signal collected by the microphone after being reflected by the user, wherein the preset sound signal is a sound signal emitted by the screen vibration of the electronic device; and acquiring a target distance between the user's ear and the screen according to the returned volume value;
a processor used for detecting the change in the target distance between the human ear and the screen: if the target distance decreases, the vibration intensity of the screen is increased, and if the target distance increases, the vibration intensity of the screen is reduced;
and a driving circuit connected to the processor and the exciter, used for controlling the exciter according to the adjusted vibration intensity so as to drive the screen to vibrate and produce sound at a volume corresponding to the vibration intensity.
9. An electronic device, comprising a microphone, a screen, and an exciter for driving the screen to produce sound; further comprising a memory and a processor, the memory being coupled with the processor; the memory stores instructions that, when executed by the processor, cause the processor to perform the method of any of claims 1-6.
10. A computer-readable medium having program code executable by a processor, wherein the program code causes the processor to perform the method of any of claims 1-6.
CN201810747518.XA 2018-07-09 2018-07-09 Sound production control method and device, electronic device and computer readable medium Active CN109144461B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810747518.XA CN109144461B (en) 2018-07-09 2018-07-09 Sound production control method and device, electronic device and computer readable medium

Publications (2)

Publication Number Publication Date
CN109144461A CN109144461A (en) 2019-01-04
CN109144461B true CN109144461B (en) 2021-07-13

Family

ID=64800110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810747518.XA Active CN109144461B (en) 2018-07-09 2018-07-09 Sound production control method and device, electronic device and computer readable medium

Country Status (1)

Country Link
CN (1) CN109144461B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110830624A (en) * 2019-10-31 2020-02-21 RealMe重庆移动通信有限公司 Mobile terminal
CN113419220B (en) * 2021-06-17 2022-04-01 上海航天电子通讯设备研究所 High-voltage type phased array radar exciter with multiple protection functions

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010232960A (en) * 2009-03-27 2010-10-14 Casio Computer Co Ltd Sound insulator
CN101937682A (en) * 2010-09-16 2011-01-05 华为终端有限公司 Method and device for handling receiving voice
CN105653231A (en) * 2015-12-30 2016-06-08 浙江德景电子科技有限公司 Method and device for automatically regulating earphone acoustic effect
CN108174011A (en) * 2017-12-28 2018-06-15 上海传英信息技术有限公司 A kind of sound conduction system for mobile terminal and the mobile terminal with the system
CN108196815A (en) * 2017-12-28 2018-06-22 维沃移动通信有限公司 A kind of adjusting method and mobile terminal of sound of conversing

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9706329B2 (en) * 2015-01-08 2017-07-11 Raytheon Bbn Technologies Corp. Multiuser, geofixed acoustic simulations


Similar Documents

Publication Publication Date Title
CN109194796B (en) Screen sounding method and device, electronic device and storage medium
CN109062535B (en) Sound production control method and device, electronic device and computer readable medium
CN108683761B (en) Sound production control method and device, electronic device and computer readable medium
CN109032558B (en) Sound production control method and device, electronic device and computer readable medium
CN108881568B (en) Method and device for sounding display screen, electronic device and storage medium
CN108646971B (en) Screen sounding control method and device and electronic device
CN108810198B (en) Sound production control method and device, electronic device and computer readable medium
CN109189362B (en) Sound production control method and device, electronic equipment and storage medium
CN109032556B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN110764730A (en) Method and device for playing audio data
CN109144460B (en) Sound production control method, sound production control device, electronic device, and storage medium
WO2019206077A1 (en) Video call processing method and mobile terminal
CN108196815B (en) Method for adjusting call sound and mobile terminal
CN109086023B (en) Sound production control method and device, electronic equipment and storage medium
CN108958697B (en) Screen sounding control method and device and electronic device
CN108958632B (en) Sound production control method and device, electronic equipment and storage medium
CN109085985B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN108810764B (en) Sound production control method and device and electronic device
CN109189360B (en) Screen sounding control method and device and electronic device
CN108900688B (en) Sound production control method and device, electronic device and computer readable medium
KR20180055243A (en) Mobile terminal and method for controlling the same
CN108712706B (en) Sound production method, sound production device, electronic device and storage medium
CN111613213B (en) Audio classification method, device, equipment and storage medium
CN110798327B (en) Message processing method, device and storage medium
CN109062533B (en) Sound production control method, sound production control device, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant