CN109032008B - Sound production control method and device and electronic device - Google Patents

Sound production control method and device and electronic device

Info

Publication number
CN109032008B
CN109032008B (application CN201810746931.4A)
Authority
CN
China
Prior art keywords
sound
sound production
electronic device
faces
volume
Prior art date
Legal status
Active
Application number
CN201810746931.4A
Other languages
Chinese (zh)
Other versions
CN109032008A (en)
Inventor
张海平
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810746931.4A
Publication of CN109032008A
Application granted
Publication of CN109032008B
Status: Active

Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05B — CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00 — Programme-control systems
    • G05B 19/02 — Programme-control systems electric
    • G05B 19/04 — Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B 19/042 — Programme control other than numerical control, using digital processors
    • G05B 19/0423 — Input/output
    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05B — CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 2219/00 — Program-control systems
    • G05B 2219/20 — Pc systems
    • G05B 2219/25 — Pc structure of the system
    • G05B 2219/25257 — Microcontroller

Abstract

The embodiment of the application discloses a sound production control method and device and an electronic device. The electronic device comprises a sound production device, an exciter for driving the sound production device to vibrate and produce sound, and an image acquisition device. The method comprises the following steps: when the electronic device is in a preset sound production mode, obtaining an image collected by the image acquisition device; identifying the number of human faces in the image; and controlling the sound production volume of the sound production device, driven by the exciter, based on the number of faces. Because the number of faces in the acquired image characterizes the number of people around the electronic device, the vibration amplitude or frequency of the exciter can be adjusted according to that number to adjust the sound production state of the sound production device, making volume adjustment more intelligent and improving the user experience.

Description

Sound production control method and device and electronic device
Technical Field
The present disclosure relates to the field of electronic devices, and more particularly, to a method and an apparatus for controlling sound emission, and an electronic device.
Background
Currently, electronic devices such as mobile phones and tablet computers produce sound through a speaker to output a sound signal. However, the speaker occupies considerable design space, which conflicts with the trend toward thinner and lighter devices.
Disclosure of Invention
In view of the above problems, the present application provides a sound production control method, a sound production control device, and an electronic device to address the above drawbacks.
In a first aspect, the present application provides a sound production control method applied to an electronic device, where the electronic device includes a sound production device, an exciter for driving the sound production device to vibrate and produce sound, and an image acquisition device. The method includes: when the electronic device is in a preset sound production mode, obtaining an image collected by the image acquisition device; identifying the number of human faces in the image; and controlling the sound production volume of the sound production device, driven by the exciter, based on the number of faces.
In a second aspect, the present application provides a sound production control apparatus running in an electronic device, where the electronic device includes a sound production device, an exciter for driving the sound production device to vibrate and produce sound, and an image acquisition device. The sound production control apparatus includes: an image acquisition unit, configured to acquire the image collected by the image acquisition device when the electronic device is in a preset sound production mode; a face recognition unit, configured to recognize the number of faces in the image; and a sound production control unit, configured to control the sound production volume of the sound production device, driven by the exciter, based on the number of faces.
In a third aspect, the present application provides an electronic device comprising one or more processors and a memory; one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the methods described above.
In a fourth aspect, the present application provides a computer-readable storage medium comprising a stored program, wherein the method described above is performed when the program is executed.
In a fifth aspect, the present application provides an electronic device comprising a sound production device; an exciter for driving the sound production device to vibrate and produce sound; and a circuit connected with the exciter, the circuit comprising a processing circuit and a driving circuit. The processing circuit is configured to acquire the image collected by the image acquisition device when the electronic device is in a preset sound production mode and to recognize the number of faces in the image; the driving circuit is configured to control the sound production volume of the sound production device, driven by the exciter, based on the number of faces.
In the sound production control method, apparatus, and electronic device provided by the present application, where the electronic device includes a sound production device, an exciter for driving the sound production device to vibrate and produce sound, and an image acquisition device, the image collected by the image acquisition device is acquired when the electronic device is in a preset sound production mode; the number of faces in the image is then recognized; and the sound production state of the sound production device, driven by the exciter, is controlled based on the number of faces. Because the number of faces in the collected image characterizes the number of people around the electronic device, the vibration amplitude or frequency of the exciter can be adjusted according to that number to adjust the sound production state of the sound production device, making volume adjustment more intelligent and improving the user experience.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present application, and other drawings can be derived from them by those skilled in the art without creative effort.
Fig. 1 is a schematic structural diagram of an electronic device to which a sound emission control method proposed in the present application is applied;
fig. 2 is a circuit block diagram of an electronic device to which a sound emission control method proposed in the present application is applied;
fig. 3 is a schematic diagram illustrating a region division in a sound emission control method proposed in the present application;
fig. 4 is a schematic diagram illustrating another area division in a sound emission control method proposed in the present application;
fig. 5 shows a flow chart of a sound emission control method proposed by the present application;
fig. 6 is a flow chart illustrating another sound production control method proposed by the present application;
fig. 7 is a schematic diagram illustrating division of a set region in an image captured by an image capturing device in another sound emission control method proposed in the present application;
fig. 8 is a schematic diagram illustrating the faces recognized in each set area in another sound production control method proposed in the present application;
fig. 9 is a schematic diagram showing a sound emission area that corresponds to a set area in another sound emission control method proposed by the present application;
fig. 10 is a schematic view showing a common sound emission area in another sound emission control method proposed by the present application;
fig. 11 shows a flow chart of yet another sound emission control method proposed by the present application;
fig. 12 is a block diagram illustrating the structure of a sound production control apparatus according to the present application;
fig. 13 is a block diagram showing the structure of another sound production control apparatus proposed in the present application;
fig. 14 is a block diagram showing the structure of yet another sound production control apparatus proposed in the present application;
fig. 15 is a block diagram of an electronic device according to the present application;
fig. 16 is a block diagram of another electronic device according to the present application;
fig. 17 is a block diagram of yet another electronic device according to the present application;
fig. 18 is a block diagram showing a configuration of an electronic apparatus of the present application for executing a sound emission control method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The sound generating device of an electronic device generally includes a speaker, a receiver, and the like. The receiver is typically a moving-coil receiver whose working principle is similar to that of a traditional moving-coil speaker: driven by the changing electromagnetic force generated by a changing current, the voice coil vibrates up and down and drives the diaphragm, which pushes the air in front of and behind it to generate sound waves. To allow the sound waves generated by the receiver to propagate outward, an opening is usually formed in the housing of the electronic device.
As the screen-to-body ratio of electronic devices grows ever higher, the housing opening becomes a major drawback, because it limits how far the display screen can be extended. After a series of studies, the inventor found that sound can be generated by making the screen, the frame, or the rear cover of the mobile phone vibrate, which eliminates the need for a receiver sound outlet; with the advent of screen sound production technology, the above problem is therefore effectively solved. However, the inventor also found that during sound production the user must repeatedly and manually adjust the volume according to the number of current listeners: the volume is raised when many people are listening so that everyone can hear clearly, and lowered when few are listening to save power. This is especially prominent in the hands-free call mode or the loudspeaker mode. The present application therefore proposes a sound production control method, a sound production control apparatus, and an electronic device that can flexibly adjust the sound production volume.
The application environment to which the present application relates will be described first.
Referring to fig. 1, an electronic device 100 according to an embodiment of the present disclosure is shown. The electronic device 100 includes an electronic body 10, where the electronic body 10 includes a housing 12 and a display screen 120 disposed on the housing 12, the housing 12 includes a front panel, a rear cover, and a bezel, the bezel is used to connect the front panel and the rear cover, and the display screen 120 is disposed on the front panel.
The electronic device further comprises an exciter 101, wherein the exciter 101 is used for driving a vibration component of the electronic device to vibrate, specifically, the vibration component is at least one of the display screen 120 or the housing 12 of the electronic device, that is, the vibration component can be the display screen 120, the housing 12, or a combination of the display screen 120 and the housing 12. As an embodiment, when the vibration member is the housing 12, the vibration member may be a rear cover of the housing 12.
In the embodiment of the present application, the vibration component is the display screen 120, and the exciter 101 is connected to the display screen 120 to drive it to vibrate. Specifically, the exciter 101 is attached below the display screen 120 and may be a piezoelectric driver or a motor. In one embodiment, the exciter 101 is a piezoelectric driver. The piezoelectric driver transmits its own deformation to the display screen 120 through a moment action, so that the display screen 120 vibrates and produces sound. The display screen 120 includes a touch panel and a display panel located below the touch panel, and the piezoelectric driver is attached below the display panel, that is, to the surface of the display panel facing away from the touch panel. The piezoelectric driver includes multiple stacked piezoelectric ceramic sheets; when these sheets expand and contract, they drive the screen to bend and deform, and the repeated bending vibration of the whole screen pushes the air to produce sound.
As an embodiment, as shown in fig. 2, the electronic device 100 includes a processing circuit 210 and a driving circuit 211. The processing circuit 210 is configured to detect a touch operation applied to the display screen 120, take the region where the touch operation is detected as a target region, and configure a vibration parameter corresponding to the target region. The actuators 101 (only 2 actuators are shown in fig. 2) are connected to the driving circuit 211, and the driving circuit 211 is configured to input a control signal to the actuators 101 according to the vibration parameter, so as to drive the actuators 101 to vibrate and thereby drive the vibration component to vibrate.
Specifically, the driving circuit may be implemented by a processor of the electronic device, an audio circuit, or the like. The driving circuit outputs a high-low level driving signal to the actuator 101, and the actuator 101 vibrates according to the driving signal. Different electrical parameters of the driving signal yield different vibration parameters of the actuator 101: for example, the duty ratio of the driving signal corresponds to the vibration frequency of the actuator 101, and the amplitude of the driving signal corresponds to the vibration amplitude of the actuator 101.
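The correspondence between the drive signal's electrical parameters and the actuator's vibration parameters can be sketched as follows; the linear mappings, ranges, and names are illustrative assumptions, not taken from the patent:

```python
def drive_signal_for(vibration_freq_hz, vibration_amplitude,
                     max_freq_hz=20000.0, max_amplitude=1.0):
    """Map desired actuator vibration parameters to drive-signal
    electrical parameters: the duty ratio tracks the vibration
    frequency and the signal amplitude tracks the vibration
    amplitude (both mappings assumed linear and clamped to [0, 1])."""
    duty_ratio = min(vibration_freq_hz / max_freq_hz, 1.0)
    signal_amplitude = min(vibration_amplitude / max_amplitude, 1.0)
    return {"duty_ratio": duty_ratio, "amplitude": signal_amplitude}
```

For example, a target of 10 kHz at half amplitude would (under these assumed ranges) yield a 50% duty ratio and half-scale signal amplitude.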
In the embodiment of the present application, the plurality of actuators 101 may be distributed evenly over a plurality of areas of the display screen 120, so that the display screen 120 is divided into a plurality of areas that emit sound individually. For example, if there are 4 actuators, the display screen may be divided equally into 4 square areas along its vertical and horizontal center lines, with the 4 actuators disposed below the 4 square areas in one-to-one correspondence. There are also other ways of dividing the areas: as shown in fig. 3, the display screen of the electronic device may be divided sequentially into areas A, B, and C from one end to the other. As shown in fig. 4, area A may be divided into a1 and a2, area B into b1 and b2, and area C into c1 and c2. The electronic device can distinguish the different areas by coordinates. Of course, the embodiments of the present application do not limit the number of actuators or the number of areas into which the display screen 120 is divided.
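The coordinate-based lookup for the four-square-area example can be sketched as below; the function name and the row-major index order are assumptions:

```python
def sound_region_for_point(x, y, screen_width, screen_height):
    """Return the index (0-3) of the square sound-emission region
    containing point (x, y) when the screen is split along its
    vertical and horizontal center lines, as in the 4-actuator
    example: 0 = top-left, 1 = top-right, 2 = bottom-left,
    3 = bottom-right."""
    col = 0 if x < screen_width / 2 else 1
    row = 0 if y < screen_height / 2 else 1
    return row * 2 + col
```

Each region index would then identify the actuator mounted below that quadrant.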
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 5, a sound generation control method provided by the present application is applied to an electronic device, where the electronic device includes a sound generating device and an actuator for driving the sound generating device to vibrate and generate sound, the electronic device further includes an image capturing device, and the method includes:
step S110: and when the electronic device is in a preset sound production mode, acquiring the image acquired by the image acquisition device.
The preset sound production mode includes a hands-free call mode and a loudspeaker mode. The hands-free call mode means that when the electronic device is in a call, the volume it emits lets the user hear the call content clearly without holding the device against the ear. The loudspeaker mode means that when the electronic device runs an audio or video application, it produces sound directly through the screen without an earphone or other external equipment.
Step S120: and identifying the number of human faces in the image.
It should be noted that the electronic device may recognize a face image from the acquired image based on a face model learned in advance.
As one consideration, if the electronic device counted the face of a person who merely passes by, the volume could be adjusted erroneously. For example, only one user may actually be using the device while many people move through the acquisition range of the image acquisition device; if the device counted those moving faces as well, the volume would end up larger than actually required. Therefore, as one mode, the electronic device only counts static face images in the image, where a static face image is a face image whose displacement distance between frames is smaller than a preset value.
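A static-face filter of the kind described here might look like the sketch below; representing faces by their center coordinates and matching each current face to its nearest previous detection are assumptions, not the patent's stated method:

```python
import math

def static_faces(prev_centers, curr_centers, max_displacement):
    """Keep only the current faces whose displacement since the
    previous frame is below max_displacement; faces with no nearby
    previous detection are treated as moving and discarded."""
    kept = []
    for cx, cy in curr_centers:
        if not prev_centers:
            continue  # no history: cannot confirm the face is static
        # nearest previous detection is assumed to be the same person
        d = min(math.hypot(cx - px, cy - py) for px, py in prev_centers)
        if d < max_displacement:
            kept.append((cx, cy))
    return kept
```

The length of the returned list would then serve as the face count used for volume control.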
Step S130: and controlling the sound production volume of the sound production device under the driving of an exciter based on the number of the human faces.
As one mode, the electronic device may obtain the number of faces recognized in the image collected by the image acquisition device; if the number of faces is not smaller than a preset value, control the sound production device to produce sound at a first volume; and if the number of faces is smaller than the preset value, control the sound production device to produce sound at a second volume, where the second volume is smaller than the first volume. In this way, the electronic device determines the loudness of the sound emitted by the sound production device according to the number of detected face images.
It should be noted that the electronic device may include only one sound production device or several. When there are several sound production devices and the number of detected faces is not smaller than the preset value, the vibration amplitudes of the several devices are controlled simultaneously so that the overall output volume of the electronic device is the first volume. If there is only one sound production device, the vibration amplitude of that single device is controlled so that the overall output volume is the first volume.
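The two-level volume rule above, applied uniformly across however many actuators are present, can be sketched as follows; the threshold and the concrete volume values are illustrative assumptions:

```python
def target_volume(face_count, threshold=2,
                  first_volume=0.8, second_volume=0.4):
    """First (louder) volume when at least `threshold` faces are
    detected, otherwise the quieter second volume."""
    return first_volume if face_count >= threshold else second_volume

def actuator_amplitudes(face_count, actuator_count, **kw):
    """Drive every actuator at the same amplitude so the overall
    output of the device reaches the target volume, whether there
    is one sound production device or several."""
    v = target_volume(face_count, **kw)
    return [v] * actuator_count
```

A single-speaker device is just the `actuator_count=1` case of the same rule.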
In one embodiment, the sound generating device includes a display screen, and the image capturing device includes a front camera facing the same direction as the display screen, and in another embodiment, the sound generating device includes a rear cover, and the image capturing device includes a rear camera facing the same direction as the rear cover. Furthermore, the sound production device may include both the display screen and the rear cover, and the image capture device may include both the front camera and the rear camera. In order to facilitate flexible control, different exciters are arranged in the electronic device to respectively control the display screen and the rear cover to vibrate and sound.
It should be noted that the electronic device controls the sound volume of the display screen according to the face image in the image collected by the front camera, and controls the sound volume of the rear cover according to the face image in the image collected by the rear camera.
In the sound production control method provided by the present application, where the electronic device includes a sound production device, an exciter for driving the sound production device to vibrate and produce sound, and an image acquisition device, the image collected by the image acquisition device is acquired when the electronic device is in a preset sound production mode; the number of faces in the image is then recognized; and the sound production state of the sound production device, driven by the exciter, is controlled based on the number of faces. Because the number of faces in the collected image characterizes the number of people around the electronic device, the vibration amplitude or frequency of the exciter can be adjusted according to that number to adjust the sound production state of the sound production device, making volume adjustment more intelligent and improving the user experience.
Referring to fig. 6, a sound generation control method provided by the present application is applied to an electronic device, where the electronic device includes a sound generation device and an actuator for driving the sound generation device to generate sound by vibration, the sound generation device includes a plurality of sound generation areas, the electronic device further includes an image capture device, and the method includes:
step S210: and when the electronic device is in a preset sound production mode, acquiring the image acquired by the image acquisition device.
Step S220: and identifying the number of human faces in the image.
Step S230: and acquiring the number of the faces respectively recognized in a plurality of set areas in the image.
The plurality of set regions in the image can be divided in various ways. As one mode, the set regions divided in the image correspond one-to-one with the sound emission areas. For example, in fig. 7, the left-hand image shows sound emission areas A, B, and C divided on the display screen of the electronic device, while the image 99 on the right is the image captured by the image capture device, which the electronic device divides into set regions a1, b1, and c1. Here, set region a1 corresponds to sound emission area A, set region b1 to sound emission area B, and set region c1 to sound emission area C; region a1 falls entirely within area A, region b1 entirely within area B, and region c1 entirely within area C. In this case, the sound volume of the sound emission area corresponding to each of the set regions a1, b1, and c1 can be determined from the number of faces recognized in that region.
Step S240: and controlling the sound volume of the sound production area of the sound production device corresponding to the set area based on the number of the recognized faces in the set area.
As shown in fig. 8, suppose 3 face images are recognized in set region a1, 1 face image in set region b1, and none in set region c1. As one mode, the sound volume may be determined directly from the number of faces in each region; in this case, the volume of sound emission area A (corresponding to a1) is greater than that of area B (corresponding to b1), which in turn is greater than that of area C (corresponding to c1). Alternatively, intervals of face counts may be mapped to volume values: for example, 0 faces correspond to volume C, 1 to 3 faces to volume A, and 4 to 6 faces to volume B. In that case, the volume of sound emission area A corresponding to a1 is set to A, the volume of area B corresponding to b1 is set to A, and the volume of area C corresponding to c1 is set to C. As one mode, the volume of a sound emission area corresponding to a region in which no face is recognized may be configured directly to 0.
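The interval mapping in this example can be sketched as follows; the concrete volume levels, and the assumption that more than 6 faces also map to volume B, are placeholders rather than values from the patent:

```python
# Placeholder volume levels: C is silent, A moderate, B loud.
VOLUME_A, VOLUME_B, VOLUME_C = 0.6, 0.9, 0.0

def region_volume(face_count):
    """Map a region's face count to a volume per the interval
    example: 0 faces -> volume C, 1-3 faces -> volume A,
    4 or more faces -> volume B (assumed for counts above 6)."""
    if face_count == 0:
        return VOLUME_C
    if face_count <= 3:
        return VOLUME_A
    return VOLUME_B
```

Applied to fig. 8, regions a1 (3 faces) and b1 (1 face) both get volume A, and c1 (0 faces) gets volume C.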
In addition to the above-described one-to-one correspondence between the setting regions and the sound emission regions, a plurality of setting regions may be divided to correspond to one sound emission region in order to improve the flexibility of dividing the setting regions in the image.
In this case, the electronic device may obtain a common sound emission area, that is, a sound emission area corresponding to at least two set regions, and control its sound volume according to the larger of the face counts recognized in those set regions. As one mode, the sound emission area into which more than a preset proportion of a set region falls is taken as the sound emission area corresponding to that set region.
For example, as shown in fig. 9, the image captured by the image capture device may be scaled or shifted relative to the display screen. Taking the rule that a set region corresponds to the sound emission area into which more than fifty percent of it falls, the image 99 is divided into set regions a1, a2, b1, b2, c1, and c2. Since more than 50 percent of set regions a1 and a2 falls into sound emission area B (the dotted lines are the center lines of a1 and a2), the sound emission area corresponding to both a1 and a2 is area B.
As shown in fig. 10, when 2 face images are recognized in set region a1 and 1 face image is recognized in set region a2, the electronic device controls sound emission area B to emit sound at the volume corresponding to 2 faces, the larger of the two counts.
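The common-area rule above (take the largest face count among the set regions mapped to one sound emission area) can be sketched as below; the dictionary-based representation and the names are assumptions:

```python
def common_area_counts(counts_by_region, sound_area_of):
    """For each sound emission area, keep the largest face count
    among the set regions mapped to it, per the common-area rule.

    counts_by_region: {set_region: face_count}
    sound_area_of:    {set_region: sound_emission_area}
    """
    result = {}
    for region, count in counts_by_region.items():
        area = sound_area_of[region]
        result[area] = max(result.get(area, 0), count)
    return result
```

For the fig. 10 example, regions a1 (2 faces) and a2 (1 face) both mapping to area B would give area B an effective count of 2.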
In the sound production control method provided by the present application, where the electronic device includes a sound production device, an exciter for driving the sound production device to vibrate and produce sound, and an image acquisition device, the image collected by the image acquisition device is acquired when the electronic device is in a preset sound production mode; the number of faces in the image is recognized; the number of faces recognized in each of a plurality of set regions of the image is obtained; and the sound volume of the sound emission area corresponding to each set region is then controlled based on the number of faces recognized in that region. Because the number of faces in the collected image characterizes the number of people around the electronic device, the vibration amplitude or frequency of the exciter can be adjusted according to that number to adjust the sound production state of the sound production device, making volume adjustment more intelligent and improving the user experience.
Referring to fig. 11, a sound generation control method provided by the present application is applied to an electronic device, where the electronic device includes a sound generation device and an actuator for driving the sound generation device to generate sound by vibration, the sound generation device includes a plurality of sound generation areas, the electronic device further includes an image capture device, and the method includes:
step S310: and when the electronic device is in a preset sound production mode, acquiring the image acquired by the image acquisition device.
Step S320: and identifying the number of human faces in the image.
Step S330: if a face image is recognized in the image, acquiring data collected by the distance sensor and judging whether an object exists within a set distance.
Step S340: and if an object is detected within the set distance, controlling the sound production volume of the sound production device under the driving of the exciter based on the number of the human faces.
Step S350: and if no object is detected within the set distance, not controlling the exciter to vibrate.
It can be understood that the sound production device, such as the display screen or the rear cover, vibrates only when driven by the exciter; if the exciter does not vibrate, the sound production device does not vibrate and no sound is produced.
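Steps S330 to S350 can be sketched as a single gating function; the names, units, and the convention that the sensor reports `None` when no object is detected are assumptions:

```python
def should_drive_exciter(face_count, object_distance, set_distance):
    """Drive the exciter only when at least one face was recognized
    (step S330) and the distance sensor reports an object within the
    set distance (step S340); otherwise keep it still (step S350)."""
    if face_count < 1:
        return False
    return object_distance is not None and object_distance <= set_distance
```

When this gate returns True, the volume is then set from the face count as in the earlier steps; when it returns False, the exciter is simply not driven and no sound is produced.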
In the sound production control method provided by the present application, where the electronic device includes an image acquisition device in addition to the sound production device and the exciter driving it, an image collected by the image acquisition device is acquired when the electronic device is in a preset sound production mode, and the number of human faces in the image is identified. When an object is detected around the electronic device, the sound production state of the sound production device driven by the exciter is controlled based on the number of faces. The number of faces in the collected image thus characterizes the number of people in the environment around the electronic device, so the vibration amplitude or frequency of the exciter can be adjusted according to that number to adjust the sound production state of the sound production device, making volume adjustment more intelligent and improving the user experience.
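As a non-authoritative sketch, the flow of steps S310 to S350 can be expressed in Python. The helper names (`capture_image`, `count_faces`, `object_within`, `drive_exciter`) and the linear face-count-to-volume mapping are assumptions made for illustration; they are not part of the patent.

```python
def control_sound(capture_image, count_faces, object_within, drive_exciter,
                  set_distance_m=0.5):
    """Return the volume applied to the exciter, or None when it is left idle."""
    image = capture_image()                # S310: image from the capture device
    n_faces = count_faces(image)           # S320: recognize the face count
    if n_faces == 0:
        return None                        # no face recognized: nothing to do
    if not object_within(set_distance_m):  # S330: distance-sensor check
        return None                        # S350: no object in range, no vibration
    volume = min(100, 40 + 10 * n_faces)   # S340: more faces, louder (example mapping)
    drive_exciter(volume)
    return volume
```

With three faces recognized and an object in range, this sketch drives the exciter at volume 70; with no object within the set distance it returns `None` and leaves the exciter idle.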
Referring to fig. 12, the sound production control apparatus 400 provided by the present application operates in an electronic device, where the electronic device includes a sound production device and an exciter for driving the sound production device to vibrate and produce sound, the electronic device further includes an image acquisition device, and the sound production control apparatus 400 includes:
the image acquisition unit 410 is used for acquiring an image acquired by the image acquisition device when the electronic device is in a preset sound production mode;
a face recognition unit 420 for recognizing the number of faces in the image;
and the sound production control unit 430 is used for controlling the sound production volume of the sound production device under the driving of the exciter based on the number of the human faces.
As one mode, the sound production control unit 430 is specifically configured to acquire the number of faces recognized from the image collected by the image acquisition device; if the number of faces is not smaller than a preset value, control the sound production device to produce sound at a first volume; and if the number of faces is smaller than the preset value, control the sound production device to produce sound at a second volume, wherein the second volume is smaller than the first volume.
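The two-level rule applied by the control unit can be sketched as follows. The concrete threshold and volume values are illustrative assumptions; the patent fixes only the ordering that the second volume is smaller than the first.

```python
# Illustrative constants; only SECOND_VOLUME < FIRST_VOLUME is required.
PRESET_COUNT = 3
FIRST_VOLUME = 80    # used when the face count is not smaller than the preset value
SECOND_VOLUME = 40   # used otherwise

def select_volume(n_faces: int) -> int:
    """Pick the sound production volume from the recognized face count."""
    return FIRST_VOLUME if n_faces >= PRESET_COUNT else SECOND_VOLUME
```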
Referring to fig. 13, the sound production control apparatus 500 provided by the present application is applied to an electronic device, where the electronic device includes a sound production device and an exciter for driving the sound production device to vibrate and produce sound, the electronic device further includes an image acquisition device, and the sound production control apparatus 500 includes:
an image obtaining unit 510, configured to obtain, when the electronic apparatus is in a preset sound generating mode, an image collected by the image collecting device;
a face recognition unit 520 configured to recognize the number of faces in the image;
a face number acquiring unit 530 configured to acquire the number of faces recognized in each of a plurality of setting regions in the image;
and the sound production control unit 540 is configured to control the sound production volume of the sound production area of the sound production device corresponding to the setting area based on the number of the faces recognized in the setting area.
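A minimal sketch of the per-region rule, assuming a simple linear face-count-to-volume mapping. When several set areas map to one shared sounding area, the larger face count wins, matching the rule claim 4 states for a common sound production area. All names and constants here are illustrative assumptions.

```python
def region_volumes(faces_per_area, area_to_sounding, base=30, step=15, cap=100):
    """Map each sounding area to a volume driven by the face counts of the
    set areas corresponding to it; a shared sounding area follows the set
    area with the larger face count."""
    counts = {}
    for area, n_faces in faces_per_area.items():
        sounding = area_to_sounding[area]
        counts[sounding] = max(counts.get(sounding, 0), n_faces)
    return {s: min(cap, base + step * n) for s, n in counts.items()}
```

For example, with two faces in the left set area and none in the right, the sounding area paired with the left region is driven louder than the one paired with the right.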
Referring to fig. 14, the sound production control apparatus 600 provided by the present application operates in an electronic device, where the electronic device includes a sound production device, an exciter for driving the sound production device to vibrate and produce sound, an image acquisition device, and a distance sensor, and the sound production control apparatus 600 includes:
the image acquisition unit 610 is used for acquiring an image acquired by the image acquisition device when the electronic device is in a preset sound production mode;
a face recognition unit 620, configured to recognize the number of faces in the image;
an environment detection unit 630, configured to acquire data acquired by the distance sensor, and determine whether an object is located within a set distance;
and the sound production control unit 640 is configured to control the sound production volume of the sound production device driven by the exciter based on the number of the human faces if the environment detection unit 630 detects that an object is located within a set distance.
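The gating performed by the environment detection unit 630 and the sound production control unit 640 can be sketched as below. The 0.5 m set distance and the volume mapping are assumed values chosen for illustration, not figures from the patent.

```python
def gated_volume(distance_m: float, n_faces: int, set_distance_m: float = 0.5):
    """Return a volume only when the distance sensor reports an object within
    the set distance (unit 630); otherwise leave the exciter idle (None)."""
    if distance_m > set_distance_m:
        return None                       # no object in range: do not vibrate
    return min(100, 40 + 10 * n_faces)    # unit 640: more faces, louder sound
```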
It should be noted that the foregoing apparatus embodiment corresponds to the foregoing method embodiment, and specific contents in the apparatus embodiment may refer to contents in the foregoing method embodiment.
To sum up, in the sound production control method, device, and electronic device provided by the present application, the electronic device includes a sound production device, an exciter for driving the sound production device to vibrate and produce sound, and an image acquisition device. When the electronic device is in a preset sound production mode, an image collected by the image acquisition device is acquired, the number of human faces in the image is identified, and the sound production volume of the sound production device driven by the exciter is controlled based on that number, so that the volume follows the number of people around the device, volume adjustment becomes more intelligent, and the user experience is improved.
An electronic device provided by the present application will be described with reference to fig. 15.
Referring to fig. 15, based on the foregoing sound production control method and apparatus, an embodiment of the present application further provides an electronic device 100 capable of executing the sound production control method. The electronic device 100 includes one or more (only one is shown) processors 102, a memory 104, a sound production device 98, and an exciter 101 coupled to one another. The memory 104 stores a program that can execute the content of the foregoing method embodiments, and the processor 102 can execute the program stored in the memory 104.
An electronic device provided by the present application will be described with reference to fig. 16, 17, and 18.
Referring to fig. 16, based on the above-mentioned sound control method and apparatus, an embodiment of the present application further provides an electronic apparatus 100 capable of executing the above-mentioned sound control method.
By way of example, the electronic device 100 may be any of various types of mobile or portable computer system equipment that performs wireless communications (only one form is shown in fig. 16). Specifically, the electronic device 100 may be a mobile phone or smartphone (e.g., an iPhone (TM)-based phone), a portable game device (e.g., Nintendo DS (TM), PlayStation Portable (TM), Game Boy Advance (TM), iPhone (TM)), a laptop computer, a PDA, a portable internet device, a music player, a data storage device, or another handheld device, and the electronic device 100 may also be a wearable device such as a head-mounted device (HMD), e.g., electronic glasses, electronic clothes, an electronic bracelet, an electronic necklace, an electronic tattoo, a watch, a headset, a pendant, or a smartwatch.
The electronic device 100 may also be any of a number of electronic devices including, but not limited to, cellular phones, smartphones, other wireless communication devices, personal digital assistants (PDAs), audio players and other media players, music recorders, video recorders, cameras and other media recorders, radios, medical devices, vehicle transportation equipment, calculators, programmable remote controllers, pagers, laptop computers, desktop computers, printers, netbook computers, portable multimedia players (PMPs), Moving Picture Experts Group Audio Layer 3 (MP3) players, portable medical devices, digital cameras, and combinations thereof.
In some cases, electronic device 100 may perform multiple functions (e.g., playing music, displaying videos, storing pictures, and receiving and sending telephone calls). If desired, the electronic apparatus 100 may be a portable device such as a cellular telephone, media player, other handheld device, wrist watch device, pendant device, earpiece device, or other compact portable device.
The electronic device 100 shown in fig. 16 includes an electronic body 10, where the electronic body 10 includes a housing 12 and a display 120 disposed on the housing 12, and it is understood that the display 120 is a screen referred to in this application. The housing 12 may be made of metal, such as steel or aluminum alloy. As shown in fig. 17, the electronic body 10 further includes a rear cover 121. In this embodiment, the display screen 120 generally includes a display panel 111, and may also include a circuit and the like for responding to a touch operation performed on the display panel 111. The Display panel 111 may be a Liquid Crystal Display (LCD) panel, and in some embodiments, the Display panel 111 is a touch screen 109.
Both the display screen in fig. 16 and the rear cover in fig. 17 can be used as sound generating devices, and the sound generating devices vibrate to generate sound under the driving of the exciter 101.
As shown in fig. 18, in an actual application scenario, the electronic device 100 may be used as a smartphone terminal, in which case the electronic body 10 generally further includes one or more processors 102 (only one is shown in the figure), a memory 104, an RF (Radio Frequency) module 106, an audio circuit 110, a sensor 114, an input module 118, and a power module 122. It will be understood by those skilled in the art that the present application is not intended to be limited to the configuration of the electronics body portion 10. For example, the electronics body section 10 may include more or fewer components than shown, or have a different configuration than shown.
Those skilled in the art will appreciate that all other components are peripheral devices with respect to the processor 102, and the processor 102 is coupled to the peripheral devices through a plurality of peripheral interfaces 124. The peripheral interface 124 may be implemented based on the following criteria: universal Asynchronous Receiver/Transmitter (UART), General Purpose Input/Output (GPIO), Serial Peripheral Interface (SPI), and Inter-Integrated Circuit (I2C), but the present invention is not limited to these standards. In some examples, the peripheral interface 124 may comprise only a bus; in other examples, the peripheral interface 124 may also include other elements, such as one or more controllers, for example, a display controller for interfacing with the display panel 111 or a memory controller for interfacing with a memory. These controllers may also be separate from the peripheral interface 124 and integrated within the processor 102 or a corresponding peripheral.
The memory 104 may be used to store software programs and modules, and the processor 102 executes various functional applications and data processing by executing the software programs and modules stored in the memory 104. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory 104 may further include memory remotely located from the processor 102, which may be connected to the electronics body portion 10 or the display screen 120 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The RF module 106 is configured to receive and transmit electromagnetic waves and to convert between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The RF module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so on. The RF module 106 may communicate with various networks such as the internet, an intranet, or a wireless network, or with other devices via a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols, and technologies, including, but not limited to, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), any other suitable protocol for instant messaging, and even protocols that have not yet been developed.
The audio circuitry 110, the actuator 101, the sound jack 103, and the microphone 105 collectively provide an audio interface between a user and the electronics body portion 10 or the display screen 120. Specifically, the audio circuit 110 receives sound data from the processor 102, converts the sound data into an electrical signal, and transmits the electrical signal to the exciter 101. The exciter 101 converts the electrical signal into sound waves that can be heard by the human ear by vibrating the display screen 120. The audio circuitry 110 also receives electrical signals from the microphone 105, converts the electrical signals to sound data, and transmits the sound data to the processor 102 for further processing. Audio data may be retrieved from the memory 104 or through the RF module 106. In addition, audio data may also be stored in the memory 104 or transmitted through the RF module 106.
The sensor 114 is disposed in the electronics body portion 10 or in the display screen 120, examples of the sensor 114 include, but are not limited to: light sensor 114F, operational sensors, pressure sensor 114G, infrared heat sensors, distance sensors, gravitational acceleration sensors, and other sensors.
Among them, the pressure sensor 114G may be a sensor that detects pressure generated by pressing on the electronic device 100. That is, the pressure sensor 114G detects pressure resulting from contact or pressing between the user and the electronic device, for example, contact or pressing between the user's ear and the electronic device. Thus, the pressure sensor 114G may be used to determine whether contact or pressure has occurred between the user and the electronic device 100, as well as the magnitude of the pressure.
Referring to fig. 18 again, in the embodiment shown in fig. 18, the light sensor 114F and the pressure sensor 114G are disposed adjacent to the display panel 111. The light sensor 114F may turn off the display output when an object is near the display screen 120, for example, when the electronic body portion 10 moves to the ear.
As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in various directions (generally three axes), detect the magnitude and direction of gravity when the electronic device is stationary, and can be used in applications that recognize the attitude of the electronic device 100 (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and in vibration-recognition-related functions (such as a pedometer or tapping). In addition, the electronic body 10 may also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, and a thermometer, which are not described herein.
in this embodiment, the input module 118 may include the touch screen 109 disposed on the display screen 120, and the touch screen 109 may collect a touch operation of a user (for example, an operation of the user on or near the touch screen 109 using any suitable object or accessory such as a finger, a stylus, etc.) and drive a corresponding connection device according to a preset program. Optionally, the touch screen 109 may include a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 102, and can receive and execute commands sent by the processor 102. In addition, the touch detection function of the touch screen 109 may be implemented by using resistive, capacitive, infrared, and surface acoustic wave types. In addition to the touch screen 109, in other variations, the input module 118 may include other input devices, such as keys. The keys may include, for example, character keys for inputting characters, and control keys for triggering control functions. Examples of such control keys include a "back to home" key, a power on/off key, and the like.
The display screen 120 is used to display information input by a user, information provided to the user, and various graphic user interfaces of the electronic main body part 10, which may be composed of graphics, text, icons, numbers, videos, and any combination thereof, and in one example, the touch screen 109 may be provided on the display panel 111 so as to be integrated with the display panel 111.
The power module 122 is used to provide power supply to the processor 102 and other components. Specifically, the power module 122 may include a power management system, one or more power sources (e.g., batteries or ac power), a charging circuit, a power failure handling circuit, an inverter, a power status indicator light, and any other components associated with the generation, management, and distribution of power within the electronics body portion 10 or the display screen 120.
The electronic device 100 further comprises a locator 119, the locator 119 being configured to determine an actual location of the electronic device 100. In this embodiment, the locator 119 uses a positioning service to locate the electronic device 100, and the positioning service is understood to be a technology or a service for obtaining the position information (e.g. longitude and latitude coordinates) of the electronic device 100 by a specific positioning technology and marking the position of the located object on the electronic map.
It should be understood that the electronic device 100 described above is not limited to a smartphone terminal; it refers to a computer device that can be used while mobile. Specifically, the electronic device 100 refers to a mobile computer device equipped with an intelligent operating system, and includes, but is not limited to, a smartphone, a smart watch, a tablet computer, and the like.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments. In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (11)

1. A sound production control method, applied to an electronic device, the electronic device comprising a sound production device and an exciter for driving the sound production device to vibrate and produce sound, the electronic device further comprising an image acquisition device, the method comprising the following steps:
when the electronic device is in a preset sound production mode, obtaining an image collected by the image collecting device;
identifying the number of human faces in the image, wherein the number of the human faces is the number of human faces with displacement distances smaller than a preset value;
controlling the sound production volume of the sound production device under the driving of an exciter based on the number of the human faces, wherein the sound production volume is larger when the number of the human faces is larger.
2. The sound production control method according to claim 1, wherein the step of controlling the sound production volume of the sound production device under the driving of an exciter based on the number of human faces comprises:
acquiring the number of faces recognized from the image acquired by the image acquisition device;
if the number of the human faces is not smaller than a preset value, controlling the sounding device to sound at a first volume;
if the number of the human faces is smaller than the preset value, the sound production device is controlled to produce sound at a second volume, wherein the second volume is smaller than the first volume.
3. The sound production control method according to claim 1, wherein the sound production device comprises a plurality of sound production areas, and the step of controlling the sound production volume of the sound production device under the driving of an exciter based on the number of human faces comprises:
acquiring the number of faces recognized in a plurality of set areas in the image;
and controlling the sound volume of the sound production area of the sound production device corresponding to the set area based on the number of the recognized faces in the set area.
4. The sound production control method according to claim 3, wherein the step of controlling, based on the number of faces recognized in the set area, the sound production volume of the sound production area of the sound production device corresponding to the set area comprises:
acquiring a common sound production area, wherein the common sound production area is a sound production area corresponding to at least two set areas;
and controlling the sound production volume of the common sound production area according to the number of faces recognized in the one of the at least two set areas with the larger number of faces.
5. The sound production control method according to claim 4, wherein a sound production area into which a portion of the set area exceeding a preset ratio falls is taken as the sound production area corresponding to the set area.
6. The sound production control method according to claim 1, wherein the electronic device further comprises a distance sensor, and the step of controlling the sound production state of the sound production device under the driving of an exciter based on the number of human faces further comprises:
acquiring data acquired by the distance sensor, and judging whether an object exists in a set distance;
and if an object is detected within the set distance, controlling the sound production volume of the sound production device under the driving of an exciter based on the number of the human faces.
7. The sound production control method according to any one of claims 1 to 6, wherein the sound production device comprises a display screen, and the image acquisition device comprises a front camera; or the sound production device comprises a rear cover, and the image acquisition device comprises a rear camera.
8. A sound production control apparatus, operating in an electronic device, the electronic device comprising a sound production device and an exciter for driving the sound production device to vibrate and produce sound, the electronic device further comprising an image acquisition device, the sound production control apparatus comprising:
the image acquisition unit is used for acquiring the image acquired by the image acquisition device when the electronic device is in a preset sound production mode;
the face recognition unit is used for recognizing the number of faces in the image, wherein the number of faces is the number of faces with displacement distances smaller than a preset value;
and the sounding control unit is used for controlling the sounding volume of the sounding device under the driving of the exciter based on the number of the human faces, wherein the sounding volume is larger when the number of the human faces is larger.
9. An electronic device comprising one or more processors and memory;
one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-7.
10. A computer-readable storage medium having program code executable by a processor, the computer-readable storage medium comprising a stored program, wherein the method of any of claims 1-7 is performed when the program is run.
11. An electronic device, comprising: a sound production device; an exciter for driving the sound production device to vibrate and produce sound; an image acquisition device; and a circuit connected with the exciter, the circuit comprising a processing circuit and a driving circuit, wherein the processing circuit is configured to acquire an image collected by the image acquisition device when the electronic device is in a preset sound production mode and to identify the number of human faces in the image; and the driving circuit is configured to control, based on the number of human faces, the sound production volume of the sound production device under the driving of the exciter, wherein the sound production volume is larger when the number of human faces is larger.
CN201810746931.4A 2018-07-09 2018-07-09 Sound production control method and device and electronic device Active CN109032008B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810746931.4A CN109032008B (en) 2018-07-09 2018-07-09 Sound production control method and device and electronic device

Publications (2)

Publication Number Publication Date
CN109032008A CN109032008A (en) 2018-12-18
CN109032008B CN109032008B (en) 2022-01-07

Family

ID=64640720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810746931.4A Active CN109032008B (en) 2018-07-09 2018-07-09 Sound production control method and device and electronic device

Country Status (1)

Country Link
CN (1) CN109032008B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110191303B (en) * 2019-06-21 2021-04-13 Oppo广东移动通信有限公司 Video call method, device and apparatus based on screen sound production and computer readable storage medium
CN113301329B (en) * 2021-05-21 2022-08-05 康佳集团股份有限公司 Television sound field correction method and device based on image recognition and display equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07332731A (en) * 1994-06-06 1995-12-22 Matsushita Seiko Co Ltd Air conditioning controller
CN1822710A (en) * 2004-12-30 2006-08-23 蒙多系统公司 Integrated audio video signal processing system using centralized processing of signals
CN202976756U (en) * 2012-07-02 2013-06-05 中国联合网络通信集团有限公司 Multimedia equipment capable of automatically adjusting volume and multimedia playing system
CN103929522A (en) * 2013-01-10 2014-07-16 联想(北京)有限公司 Control method and control apparatus of electronic device, and electronic device
CN107592592A (en) * 2017-07-28 2018-01-16 捷开通讯(深圳)有限公司 Display panel, mobile terminal and screen sounding control method

Also Published As

Publication number Publication date
CN109032008A (en) 2018-12-18

Similar Documents

Publication Publication Date Title
CN109194796B (en) Screen sounding method and device, electronic device and storage medium
CN108833638B (en) Sound production method, sound production device, electronic device and storage medium
CN108683761B (en) Sound production control method and device, electronic device and computer readable medium
CN108646971B (en) Screen sounding control method and device and electronic device
CN108881568B (en) Method and device for sounding display screen, electronic device and storage medium
CN109032556B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN109032558B (en) Sound production control method and device, electronic device and computer readable medium
CN109189362B (en) Sound production control method and device, electronic equipment and storage medium
CN109144460B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN109086023B (en) Sound production control method and device, electronic equipment and storage medium
CN109062535B (en) Sound production control method and device, electronic device and computer readable medium
CN108958697B (en) Screen sounding control method and device and electronic device
CN108958632B (en) Sound production control method and device, electronic equipment and storage medium
CN108810198B (en) Sound production control method and device, electronic device and computer readable medium
CN109086024B (en) Screen sounding method and device, electronic device and storage medium
CN109040919B (en) Sound production method, sound production device, electronic device and computer readable medium
CN109189360B (en) Screen sounding control method and device and electronic device
CN109085985B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN108810764B (en) Sound production control method and device and electronic device
CN108712706B (en) Sound production method, sound production device, electronic device and storage medium
CN108900728B (en) Reminding method, reminding device, electronic device and computer readable medium
CN109240413B (en) Screen sounding method and device, electronic device and storage medium
CN109144249B (en) Screen sounding method and device, electronic device and storage medium
CN109062533B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN110505335B (en) Sound production control method and device, electronic device and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant