CN113467747B - Volume adjusting method, electronic device and storage medium - Google Patents


Info

Publication number
CN113467747B
CN113467747B (application number CN202110610053.5A)
Authority
CN
China
Prior art keywords
volume
scene
electronic device
electronic equipment
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110610053.5A
Other languages
Chinese (zh)
Other versions
CN113467747A (en)
Inventor
孙运平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202110610053.5A
Publication of CN113467747A
Application granted
Publication of CN113467747B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output
    • G06F3/165: Management of the audio stream, e.g. setting of volume, audio stream path
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01D: MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00: Measuring or testing not otherwise provided for
    • G01D21/02: Measuring two or more variables by means not covered by a single other subclass
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Telephone Function (AREA)

Abstract

The application relates to the field of intelligent devices, and provides a volume adjusting method, an electronic device, and a storage medium. The volume adjusting method includes: when the electronic device is determined to be in a first scene, obtaining scene identification parameters of the electronic device in the first scene; and when the scene identification parameters are determined to meet a preset condition corresponding to the first scene, adjusting the volume of the electronic device from a first volume to a second volume, wherein the second volume is smaller than the first volume. This reduces the probability that the electronic device suddenly makes a loud sound under the preset condition corresponding to the first scene, and improves the user experience.

Description

Volume adjusting method, electronic device and storage medium
Technical Field
The present application relates to the field of intelligent devices, and in particular, to a volume adjustment method, an electronic device, and a storage medium.
Background
With the popularization of intelligent devices such as mobile phones and tablet computers, people use these devices in more and more scenes. In some shared office scenes or quiet scenes, a user may forget to wear earphones or to turn down the speaker volume, so that the smart device suddenly emits a loud sound. Regardless of what content the device is playing, this may disturb the people around it and embarrass the user.
Disclosure of Invention
The application provides a volume adjusting method, an electronic device and a storage medium, which can adjust the volume of the electronic device according to the scene where the electronic device is located, and reduce the probability that the electronic device suddenly makes a loud sound in a quiet scene.
To achieve the above purpose, the following technical solutions are adopted:
in a first aspect, a volume adjustment method is provided, and is applied to an electronic device, and the method includes:
determining that the electronic equipment is in a first scene, and acquiring scene identification parameters of the electronic equipment in the first scene, wherein the scene identification parameters comprise any one or more of sound information of the environment where the electronic equipment is located, illuminance of the environment where the electronic equipment is located, and current time; and determining that the scene identification parameter meets a preset condition corresponding to the first scene, and adjusting the volume of the electronic equipment from a first volume to a second volume, wherein the second volume is smaller than the first volume.
In the above embodiment, when it is determined that the electronic device is in the first scene, the scene identification parameter of the electronic device in the first scene is obtained, and when it is determined that the scene identification parameter meets the preset condition corresponding to the first scene, the volume of the electronic device is reduced, so that the probability that the electronic device suddenly makes a loud sound under the preset condition corresponding to the first scene can be reduced, and the user experience is improved.
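The first-aspect flow above can be sketched in a few lines. This is an illustrative sketch only: the function and parameter names (`adapt_volume`, `condition_met`) are assumptions, not identifiers from the patent.

```python
def adapt_volume(first_volume, second_volume, scene, params, condition_met):
    """Return the volume to apply for one adaptation pass.

    `condition_met(scene, params)` stands in for the per-scene preset
    condition; the claim requires the second volume to be smaller than
    the first.
    """
    if second_volume >= first_volume:
        raise ValueError("the second volume must be smaller than the first")
    if condition_met(scene, params):
        return second_volume   # quiet scene confirmed: lower the volume
    return first_volume        # condition not met: keep the original volume
```

For example, `adapt_volume(80, 30, "home", {}, lambda s, p: True)` returns 30, while the same call with a condition that returns `False` leaves the volume at 80.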
In one possible implementation manner, before the obtaining of the scene identification parameter of the electronic device in the first scene, the method further includes: acquiring the position information of the electronic device; and if the position information indicates that the position of the electronic device is within the coverage range of a preset home scene, determining that the first scene is the home scene. By determining the first scene and setting different preset conditions for different scenes, the volume adjusting method becomes more intelligent, which further improves the user experience.
In a possible implementation manner, the determining that the scene identification parameter satisfies a preset condition corresponding to the first scene includes: when the current time is within a preset first time period, if it is detected that the illuminance of the environment where the electronic device is located is smaller than a first preset illuminance and the sound pressure level of the environment where the electronic device is located is smaller than a first preset value, it is determined that the scene identification parameter meets a preset condition corresponding to the first scene.
In a possible implementation manner, before the obtaining of the scene identification parameter of the electronic device in the first scene, the method further includes: acquiring the position information of the electronic equipment; and if the position information indicates that the position of the electronic equipment is within the coverage range of a preset working scene, determining that the first scene is the working scene.
In a possible implementation manner, the determining that the scene identification parameter satisfies a preset condition corresponding to the first scene includes: when the current time is within a preset second time period, if it is detected that the illuminance of the environment where the electronic device is located is less than a second preset illuminance, or the sound pressure level of the environment where the electronic device is located is less than a second preset value, it is determined that the scene identification parameter meets a preset condition corresponding to the first scene.
In a possible implementation manner, before the obtaining of the scene identification parameter of the electronic device in the first scene, the method further includes: acquiring the position information of the electronic equipment; and if the position information indicates that the position of the electronic equipment is out of the coverage range of a preset working scene and the coverage range of a preset home scene, determining that the first scene is an outgoing scene.
In a possible implementation manner, the determining that the scene identification parameter satisfies a preset condition corresponding to the first scene includes: determining the acoustic features of the first scene according to the sound information of the environment where the electronic device is located; and determining a preset condition corresponding to the first scene according to the acoustic features. Based on the acoustic features, the environment where the electronic device is currently located, such as a station or a movie theater, can be determined, and the preset condition corresponding to the first scene is then determined according to that environment, which improves the accuracy of the determination.
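One way to read this implementation is as a two-step lookup: classify the environment from coarse acoustic features, then select that environment's preset condition. The sketch below is a hypothetical illustration; the feature thresholds, environment labels, and condition values are invented for the example, and a real device would more likely use a trained audio classifier.

```python
def classify_environment(spl_db, dominant_band_hz):
    """Map coarse acoustic features to an environment label (invented rules)."""
    if spl_db > 75 and dominant_band_hz < 500:
        return "station"   # loud, low-frequency crowd and engine noise
    if spl_db < 45:
        return "cinema"    # quiet hall
    return "street"

# environment -> (max illuminance in lux, max sound pressure level in dB)
# before the volume is lowered; None means no lowering in that environment
PRESET_CONDITIONS = {
    "cinema": (10, 50),
    "station": (None, None),
    "street": (5, 40),
}
```

A call such as `classify_environment(80, 200)` yields `"station"`, whose entry in `PRESET_CONDITIONS` then says no automatic lowering is needed there.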
In a possible implementation manner, the determining that the scene identification parameter satisfies a preset condition corresponding to the first scene includes: and if the detected illuminance of the environment where the electronic device is located is smaller than a third preset illuminance and/or the detected sound pressure level of the environment where the electronic device is located is smaller than a third preset value, determining that the scene identification parameter meets a preset condition corresponding to the first scene.
In one possible implementation manner, after the adjusting the volume of the electronic device from the first volume to the second volume, the volume adjusting method further includes: if an operation indicating to increase the volume is detected, which indicates that the reduced volume cannot meet the user's requirement, adjusting the volume of the electronic device from the second volume back to the first volume, thereby simplifying the user's operation.
In one possible implementation manner, after the adjusting the volume of the electronic device from the first volume to the second volume, the volume adjusting method further includes: if an operation indicating to reduce the volume is detected and the second volume is not 0, adjusting the volume of the electronic device to 0. An operation indicating to reduce the volume suggests that the reduced volume is still too loud for the user, so the volume of the electronic device is adjusted directly to 0, which improves the user experience.
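The two override behaviors above (volume-up restores the original volume, volume-down goes straight to mute) can be captured in one handler. A minimal sketch, assuming the automatic reduction is considered active exactly when the current volume equals the second volume:

```python
def handle_volume_key(current, first_volume, second_volume, key):
    """Apply the user override after an automatic reduction (hypothetical).

    Volume-up means the lowered volume is too quiet: restore the original.
    Volume-down means it is still too loud: mute directly instead of
    stepping down one level at a time.
    """
    if current != second_volume:
        return current          # no automatic reduction is active
    if key == "up":
        return first_volume     # restore the pre-adjustment volume
    if key == "down" and second_volume != 0:
        return 0                # go straight to mute
    return current
```

So with a first volume of 80 and a second volume of 30, a volume-up press returns 80 and a volume-down press returns 0.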
In one possible implementation manner, after the adjusting the volume of the electronic device from the first volume to the second volume, the volume adjusting method further includes: and determining that the scene identification parameters do not meet the preset conditions corresponding to the first scene, and restoring the volume of the electronic equipment to the first volume, so that the volume can be adjusted in real time according to the scene where the electronic equipment is located, and the intelligent degree of the electronic equipment is improved.
In a possible implementation manner, the volume adjusting method further includes: if at least two operations of increasing the volume or at least two operations of decreasing the volume are detected within a preset time interval, this indicates that the user is repeatedly operating the volume keys and may urgently need the volume to be reduced; in this case, the volume of the electronic device is adjusted to 0, so that the sound of the electronic device can be stopped in time.
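Detecting the "at least two operations within a preset time interval" trigger only needs the timestamps of recent volume-key presses. A sketch, with the interval length chosen arbitrarily since the patent does not specify a value:

```python
def detect_urgent_mute(press_times, window_s=1.0):
    """Return True when two or more volume-key presses fall within
    `window_s` seconds of each other. The 1.0 s default is an
    illustrative choice, not a value from the patent."""
    times = sorted(press_times)
    return any(b - a <= window_s for a, b in zip(times, times[1:]))
```

When this returns `True`, the device would set its volume to 0 immediately.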
In one possible implementation, after the adjusting the volume of the electronic device from the first volume to the second volume, the method further includes: and if the operation of indicating to quit the volume self-adaptive adjusting function is detected, the volume of the electronic equipment is restored to the first volume.
In a possible implementation manner, the operation of instructing to exit the volume adaptive adjustment function is an operation of simultaneously pressing a first volume key and a second volume key of the electronic device, or an operation of touching a preset control.
In a second aspect, a volume adjusting device is provided, which includes a communication module and a processing module;
the communication module is used for determining that the electronic equipment is in a first scene, and acquiring scene identification parameters of the electronic equipment in the first scene, wherein the scene identification parameters comprise any one or more of sound information of an environment where the electronic equipment is located, illuminance of the environment where the electronic equipment is located, and current time;
the processing module is used for determining that the scene identification parameter meets a preset condition corresponding to the first scene, and adjusting the volume of the electronic equipment from a first volume to a second volume, wherein the second volume is smaller than the first volume.
In one possible implementation, the communication module is further configured to:
acquiring the position information of the electronic equipment;
and if the position information indicates that the position of the electronic equipment is within the coverage range of a preset home scene, determining that the first scene is the home scene.
In a possible implementation manner, the processing module is specifically configured to:
when the current time is within a preset first time period, if it is detected that the illuminance of the environment where the electronic device is located is smaller than a first preset illuminance and the sound pressure level of the environment where the electronic device is located is smaller than a first preset value, it is determined that the scene identification parameter meets a preset condition corresponding to the first scene.
In one possible implementation, the communication module is further configured to:
acquiring the position information of the electronic equipment;
and if the position information indicates that the position of the electronic equipment is within the coverage range of a preset working scene, determining that the first scene is the working scene.
In a possible implementation manner, the processing module is specifically configured to:
when the current time is within a preset second time period, if it is detected that the illuminance of the environment where the electronic device is located is smaller than a second preset illuminance, or the sound pressure level of the environment where the electronic device is located is smaller than a second preset value, it is determined that the scene identification parameter meets a preset condition corresponding to the first scene.
In one possible implementation, the communication module is further configured to:
acquiring the position information of the electronic equipment;
and if the position information indicates that the position of the electronic equipment is out of the coverage range of a preset working scene and the coverage range of a preset home scene, determining that the first scene is an outgoing scene.
In a possible implementation manner, the processing module is specifically configured to:
determining the acoustic features of the first scene according to the sound information of the environment where the electronic equipment is located;
and determining a preset condition corresponding to the first scene according to the acoustic features.
In a possible implementation manner, the processing module is specifically configured to:
and if the detected illuminance of the environment where the electronic device is located is smaller than a third preset illuminance and/or the detected sound pressure level of the environment where the electronic device is located is smaller than a third preset value, determining that the scene identification parameter meets a preset condition corresponding to the first scene.
In one possible implementation, the processing module is further configured to:
and if the operation of indicating volume increase is detected, adjusting the volume of the electronic equipment from the second volume to the first volume.
In one possible implementation, the processing module is further configured to:
and if the operation of indicating volume reduction is detected and the second volume is not 0, adjusting the volume of the electronic equipment to 0.
In one possible implementation, the processing module is further configured to:
and determining that the scene identification parameters do not meet the preset conditions corresponding to the first scene, and restoring the volume of the electronic equipment to the first volume.
In one possible implementation, the processing module is further configured to:
and if at least two operations of increasing the volume or at least two operations of reducing the volume are detected within a preset time interval, adjusting the volume of the electronic equipment to 0.
In one possible implementation, the processing module is further configured to:
and if the operation of indicating to quit the volume adaptive adjustment function is detected, restoring the volume of the electronic equipment to the first volume.
In a possible implementation manner, the operation of instructing to exit the volume adaptive adjustment function is an operation of simultaneously pressing a first volume key and a second volume key of the electronic device, or an operation of touching a preset control.
In a third aspect, an electronic device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements, when executing the computer program:
determining that the electronic equipment is in a first scene, and acquiring scene identification parameters of the electronic equipment in the first scene, wherein the scene identification parameters comprise any one or more of sound information of the environment where the electronic equipment is located, illuminance of the environment where the electronic equipment is located, and current time; and determining that the scene identification parameter meets a preset condition corresponding to the first scene, and adjusting the volume of the electronic equipment from a first volume to a second volume, wherein the second volume is smaller than the first volume.
In one possible implementation, the processor, when executing the computer program, further implements: before the scene identification parameters of the electronic equipment in a first scene are obtained, obtaining the position information of the electronic equipment; and if the position information indicates that the position of the electronic equipment is within the coverage range of a preset home scene, determining that the first scene is the home scene.
In one possible implementation, the processor, when executing the computer program, further implements:
when the current time is within a preset first time period, if it is detected that the illuminance of the environment where the electronic device is located is smaller than a first preset illuminance and the sound pressure level of the environment where the electronic device is located is smaller than a first preset value, it is determined that the scene identification parameter meets a preset condition corresponding to the first scene.
In one possible implementation, the processor, when executing the computer program, further implements: before the scene identification parameters of the electronic equipment in a first scene are obtained, obtaining the position information of the electronic equipment; and if the position information indicates that the position of the electronic equipment is within the coverage range of a preset working scene, determining that the first scene is the working scene.
In one possible implementation, the processor, when executing the computer program, further implements:
when the current time is within a preset second time period, if it is detected that the illuminance of the environment where the electronic device is located is smaller than a second preset illuminance, or the sound pressure level of the environment where the electronic device is located is smaller than a second preset value, it is determined that the scene identification parameter meets a preset condition corresponding to the first scene.
In one possible implementation, the processor, when executing the computer program, further implements: before the scene identification parameters of the electronic equipment in a first scene are obtained, obtaining the position information of the electronic equipment; and if the position information indicates that the position of the electronic equipment is located outside the coverage range of a preset working scene and the coverage range of a preset home scene, determining that the first scene is an outgoing scene.
In one possible implementation, the processor, when executing the computer program, further implements:
determining the acoustic characteristics of the first scene according to the sound information of the environment where the electronic equipment is located;
and determining a preset condition corresponding to the first scene according to the acoustic features.
In one possible implementation, the processor, when executing the computer program, further implements:
and if the detected illuminance of the environment where the electronic device is located is smaller than a third preset illuminance and/or the detected sound pressure level of the environment where the electronic device is located is smaller than a third preset value, determining that the scene identification parameter meets a preset condition corresponding to the first scene.
In one possible implementation, the processor, when executing the computer program, further implements: after the volume of the electronic equipment is adjusted from the first volume to the second volume, if the operation of indicating volume increase is detected, the volume of the electronic equipment is adjusted from the second volume to the first volume.
In one possible implementation, the processor, when executing the computer program, further implements: after the volume of the electronic device is adjusted from the first volume to the second volume, if the operation indicating that the volume is reduced is detected and the second volume is not 0, the volume of the electronic device is adjusted to 0.
In one possible implementation, the processor, when executing the computer program, further implements: after the volume of the electronic equipment is adjusted from a first volume to a second volume, the scene identification parameters are determined not to meet the preset conditions corresponding to the first scene, and the volume of the electronic equipment is restored to the first volume.
In one possible implementation, the processor, when executing the computer program, further implements:
and if at least two operations of increasing the volume or at least two operations of reducing the volume are detected within a preset time interval, adjusting the volume of the electronic equipment to 0.
In one possible implementation, the processor, when executing the computer program, further implements: after the volume of the electronic equipment is adjusted from the first volume to the second volume, if the operation of indicating to quit the volume adaptive adjustment function is detected, the volume of the electronic equipment is restored to the first volume.
In a possible implementation manner, the operation of instructing to exit the volume adaptive adjustment function is an operation of simultaneously pressing a first volume key and a second volume key of the electronic device, or an operation of touching a preset control.
In a fourth aspect, an electronic device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the volume adjustment method according to the first aspect when executing the computer program.
In a fifth aspect, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the volume adjustment method according to the first aspect.
In a sixth aspect, a computer program product is provided, the computer program product comprising computer instructions for instructing a computer to execute the volume adjustment method of the first aspect.
Drawings
Fig. 1 is a schematic flow chart of a volume adjustment method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a volume adaptive adjustment interface provided in an embodiment of the present application;
fig. 3 is a schematic view of a home scene interface provided in the embodiment of the present application;
FIG. 4 is a schematic diagram of a work scenario interface provided by an embodiment of the present application;
fig. 5 is a schematic diagram of a volume adjustment method according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a volume adjustment method according to another embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
In addition, in the description of the present application, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
In some shared office scenes or quiet scenes, a user may forget to wear earphones or to turn down the playback volume, so that the electronic device suddenly emits a loud sound, which troubles the user and the people nearby. To this end, the present application provides a volume adjustment method, including: when the electronic device is in a first scene, obtaining scene identification parameters of the electronic device in the first scene, where the scene identification parameters include any one or more of sound information of the environment where the electronic device is located, illuminance of that environment, and the current time. The sound information may be a sound pressure level or an acoustic feature of the sound signal. The illuminance may be collected by an ambient light sensor on the electronic device and indicates how bright or dark the device's environment is. When the scene identification parameters meet the preset condition corresponding to the first scene, the volume of the electronic device is adjusted from a first volume to a second volume, where the second volume is smaller than the first volume. Because the volume is reduced as soon as the preset condition corresponding to the first scene is met, the volume can be adjusted in advance, which avoids the electronic device emitting a loud sound under that condition and improves the user experience.
The following provides an exemplary explanation of the volume adjustment method provided in the present application.
The volume adjusting method is applied to electronic equipment, the electronic equipment has a volume self-adaptive adjusting function, and the volume self-adaptive adjusting function is in an on state. The volume adaptive adjustment function refers to a function of adjusting the volume of the electronic device according to the scene where the electronic device is located and the scene identification parameter.
Referring to fig. 1, a volume adjusting method according to an embodiment of the present application includes:
s101: determining that the electronic equipment is in a first scene, and acquiring scene identification parameters of the electronic equipment in the first scene, wherein the scene identification parameters comprise any one or more of sound information of the environment where the electronic equipment is located, illuminance of the environment where the electronic equipment is located, and current time.
As an example, the electronic device may be a mobile phone, a palmtop computer, a notebook computer, a desktop computer, a smart wearable device, or the like.
The first scene may be any one of a work scene, a home scene, or an outgoing scene. The electronic device may determine the scene it is in according to its position information or the current time. For example, taking the first scene as a home scene, if the position information of the electronic device indicates that its position is within the coverage range of the preset home scene, the electronic device determines that it is in the home scene, that is, in the first scene. For another example, taking the first scene as a work scene, if the current time is within the working hours set by the user, the electronic device determines that it is in the work scene, that is, in the first scene. The sound information may be collected by a microphone on the electronic device and may be a sound pressure level or an acoustic feature of the sound signal. The illuminance may be collected by an ambient light sensor on the electronic device. Of course, the sound information and the illuminance may also be obtained by the electronic device from other electronic devices, which is not limited in this embodiment of the application.
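The position-based determination can be sketched as a simple geofence test. The scene centers, the 200 m radius, and the distance approximation below are illustrative assumptions; the patent only requires checking whether the position falls within preset coverage ranges.

```python
import math

def determine_scene(lat, lon, home_center, work_center, radius_m=200.0):
    """Classify the device's scene from its position (illustrative sketch).

    Returns 'home', 'work', or 'out'. Centers are (lat, lon) tuples in
    degrees; the coverage radius is an assumed value.
    """
    def dist_m(a, b):
        # equirectangular approximation, adequate for small radii
        dlat = math.radians(a[0] - b[0])
        dlon = math.radians(a[1] - b[1]) * math.cos(math.radians(a[0]))
        return 6371000.0 * math.hypot(dlat, dlon)

    if dist_m((lat, lon), home_center) <= radius_m:
        return "home"
    if dist_m((lat, lon), work_center) <= radius_m:
        return "work"
    return "out"
```

A position inside neither coverage range falls through to the outgoing scene, matching the "outside both ranges" rule in the claims.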
In one embodiment of the present application, the electronic device determining that the electronic device is in the first scenario may be implemented by:
1. Set by the user. For example, if the first scene is a home scene, when the user is at home, the user sets the scene where the electronic device is located to the home scene, and the electronic device is then in the first scene. For another example, if the first scene is a work scene, when the user is at the workplace, the user sets the scene where the electronic device is located to the work scene, and the electronic device is then in the first scene.
2. Determined automatically. For example, when the electronic device detects that its position information is within a preset range, it determines that it is in the first scene; or, when the electronic device detects that the current time is within a preset time period, it determines that it is in the first scene.
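The two automatic determination paths above (a geofence on the position information, or a match against a time window) can be sketched as follows. This is an illustrative sketch, not the patented implementation; the coordinates, geofence radius, and time windows are hypothetical values standing in for the user's configuration.

```python
import math
from dataclasses import dataclass
from datetime import time

@dataclass
class SceneConfig:
    name: str
    center: tuple    # (latitude, longitude) of the preset location
    radius_m: float  # coverage radius of the preset range
    start: time      # start of the preset time period
    end: time        # end of the preset time period

def within_radius(pos, center, radius_m):
    # Crude equirectangular distance; adequate for a ~100 m geofence sketch.
    lat1, lon1 = map(math.radians, pos)
    lat2, lon2 = map(math.radians, center)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371000 * math.hypot(x, y) <= radius_m

def in_time_period(now, start, end):
    # Handles periods that wrap past midnight, e.g. 23:00-5:00.
    if start <= end:
        return start <= now <= end
    return now >= start or now <= end

def determine_scene(pos, now, configs):
    # A scene matches if either the position or the time criterion holds.
    for cfg in configs:
        if within_radius(pos, cfg.center, cfg.radius_m) or in_time_period(now, cfg.start, cfg.end):
            return cfg.name
    return "out"
```

Note that a period such as 23:00-5:00 wraps past midnight, which is why `in_time_period` cannot simply test `start <= now <= end` in all cases.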
In one embodiment of the present application, the electronic device continuously collects the sound information and the illuminance of the environment where it is located while in a standby state. In another embodiment, the electronic device collects the sound information and the illuminance of the environment where it is located only when it is in the first scene, which saves power compared with collecting the information continuously.
S102: determining that the scene identification parameter meets a preset condition corresponding to the first scene, and adjusting the volume of the electronic equipment from a first volume to a second volume, wherein the second volume is smaller than the first volume.
If the first scene is a home scene, the preset condition includes any one or more of that the current time is within a first preset time period, the illuminance of the environment where the electronic device is located is less than the first preset illuminance, and the sound pressure level of the environment where the electronic device is located is less than the first preset value. If the first scene is a working scene, the preset condition includes any one or more of that the current time is within a second preset time period, the illuminance of the environment where the electronic device is located is less than a second preset illuminance, and the sound pressure level of the environment where the electronic device is located is less than a second preset value. If the first scene is an outgoing scene, the preset condition includes that the illuminance of the environment where the electronic device is located is less than a first preset illuminance, and/or the sound pressure level of the environment where the electronic device is located is less than a first preset value.
The volume of the electronic device refers to the media volume of the electronic device, that is, the volume of each application software (e.g., music software, video software, chat software, game software, etc.). The first volume is the current volume of the electronic device. The first volume may be the volume last set by the user, or may be set by the electronic device according to the first scene; for example, when it is detected that the electronic device is in a working scene, if the current volume is greater than the first volume corresponding to the working scene, the volume of the electronic device is adjusted to that first volume, and if the current volume is less than or equal to it, the volume is not adjusted and the current volume is used as the first volume. The second volume may be 0 or any volume value smaller than the first volume. The second volume is related to the preset conditions of the first scenes, and each preset condition of a first scene corresponds to one second volume. It should be noted that the second volumes corresponding to different first scenes may be the same or different. For example, in the case that the scene identification parameter satisfies the preset condition corresponding to the working scene, the second volume is 10 dB; in the case that the scene identification parameter satisfies the preset condition corresponding to the home scene, the second volume is 20 dB, where "dB" (decibel) is the unit of volume.
In a possible implementation manner, under the condition that the scene identification parameter meets a preset condition corresponding to the first scene, the electronic device first determines a magnitude relationship between the first volume and the second volume. If the first volume is larger than the second volume, adjusting the volume of the electronic equipment from the first volume to the second volume; if the first volume is smaller than the second volume, the volume of the electronic equipment is not adjusted, so that the electronic equipment can be prevented from suddenly making a loud sound in a quiet scene.
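The magnitude comparison described above — lower the volume only when the first volume exceeds the scene's second volume, so the device never suddenly becomes loud in a quiet scene — can be sketched as follows (an illustrative sketch, not the patented implementation):

```python
def adjust_volume(first_volume, second_volume, condition_met):
    """Return the media volume after one adaptive adjustment step.

    When the scene identification parameters meet the preset condition,
    the volume drops to the scene's second volume only if the current
    (first) volume is higher; the volume is never raised.
    """
    if condition_met and first_volume > second_volume:
        return second_volume
    return first_volume
```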
In the above embodiment, the scene identification parameters of the electronic device are acquired when the electronic device is in the first scene, and when the scene identification parameters meet the preset condition corresponding to the first scene, the volume of the electronic device is reduced. The volume of the electronic device can thus be adjusted in advance, preventing the electronic device from emitting a loud sound under the preset condition corresponding to the first scene and improving the user experience.
The following describes a volume adjustment method provided in the embodiment of the present application with reference to a specific scenario.
As shown in fig. 2, the electronic device opens a setting interface of the volume adaptive adjustment function in response to the user clicking "volume adaptation". In the setting interface, the user can turn the volume adaptive adjustment function on or off by touching the volume adaptive adjustment control 21. When the volume adaptive adjustment function is in the on state, the user can authorize position information by touching the control 22 of the position information; when it is detected that the control 22 of the position information is turned on, the electronic device determines that the user authorizes the position information and obtains the position information of the electronic device. The user can authorize sound information by touching the control 23 of the microphone; when it is detected that the control 23 of the microphone is turned on, the electronic device determines that the user authorizes the sound information and obtains the sound information of the environment where the electronic device is located, where the sound information includes a sound pressure level and acoustic characteristics. The user can authorize illuminance by touching the control 24 of the illuminance; when it is detected that the control 24 of the illuminance is turned on, the electronic device determines that the user authorizes the illuminance and obtains the illuminance of the environment where the electronic device is located. In the setting interface, the user can open the added home scene and the added work scene, enter the setting interfaces of the home scene and the work scene respectively, and set them. The user can also add and set scenes by touching the add control 25; for example, a "meeting scene", a "learning scene", and the like may be added. When detecting that the user clicks the first scene, the electronic device enters the setting interface of the corresponding scene.
As shown in fig. 3, the electronic device opens a setting interface of the home scene in response to the user clicking "home scene". In the setting interface of the home scene, the user can open or close the home scene by touching the control 31 of the home scene, where opening the home scene means enabling the recognition function of the home scene and the volume adaptive adjustment function in the home scene, and closing the home scene means not executing them. When the home scene is in the open state, the user can set the position, the time period, and the media volume corresponding to the home scene: the position corresponding to the home scene is the home position, the time period corresponding to the home scene is the home time period, and the media volume corresponding to the home scene is the home volume. The user may set the current location as the home position, or may set the home position as "xx province xx city xx district xx cell"; if the user does not set the home position, the electronic device may determine it according to historical position data of the user, for example, a location where the user is frequently located between 23:00 and 5:00 each day. The home time period is a time period in which the volume adaptive adjustment function needs to be executed, specifically a sleeping time period of the user, for example, a period starting at 22:00. The home volume is the volume used when the preset condition corresponding to the home scene is met; the user can select the default volume, or set the home volume by sliding the scroll bar for volume adjustment.
After the user sets the home position, the home time period, and the home volume, if it is detected that the home scene and the volume adaptive adjustment function are in the on state and the user has authorized the position information, the sound information, and the illuminance, the electronic device acquires its position information; if the position information indicates that the electronic device is within the coverage range of the preset home scene, it is determined that the scene where the electronic device is located is the home scene. The coverage range of the preset home scene is a preset range around the home position set by the user, for example, a range of 100 meters around the home position. If the scene where the electronic device is located is determined to be the home scene, then when the current time is within the preset first time period, if it is detected that the illuminance of the environment where the electronic device is located is smaller than the first preset illuminance and the sound pressure level of the environment where the electronic device is located is smaller than the first preset value, it is determined that the scene identification parameters meet the preset condition corresponding to the home scene, and the volume of the electronic device is adjusted from the first volume to the second volume. The preset first time period refers to the home time period set by the user. The first preset illuminance may be 5 lux, where "lux" is the unit of illuminance. The sound pressure level of the environment where the electronic device is located may be the total sound pressure level in the 20 Hz-20 kHz frequency band of that environment, and the first preset value may be 40 dB. The first volume is the current volume of the electronic device, and the second volume is the media volume corresponding to the home scene set by the user.
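A minimal sketch of the home-scene check described above, using the example thresholds of 5 lux and 40 dB. The 23:00-5:00 default window for the preset first time period is an assumed value for illustration; in the description it comes from the user's settings.

```python
from datetime import time

FIRST_PRESET_ILLUMINANCE_LUX = 5  # example value from the description
FIRST_PRESET_SPL_DB = 40          # total SPL over the 20 Hz-20 kHz band

def home_condition_met(now, illuminance_lux, spl_db,
                       start=time(23, 0), end=time(5, 0)):
    # All three checks must hold in the home scene: current time inside
    # the preset first time period, illuminance below the first preset
    # illuminance, and sound pressure level below the first preset value.
    if start <= end:
        in_period = start <= now <= end
    else:  # period wraps past midnight
        in_period = now >= start or now <= end
    return (in_period
            and illuminance_lux < FIRST_PRESET_ILLUMINANCE_LUX
            and spl_db < FIRST_PRESET_SPL_DB)
```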
In other scenarios, when the control 22 of the position information is closed, the electronic device may also determine whether the scene where it is located is a home scene according to the current time; for example, if the current time is 23:00, the scene where the electronic device is located may be determined to be a home scene. When the control 23 of the microphone is closed, the electronic device may determine whether the preset condition corresponding to the home scene is satisfied according to the current time and the illuminance of the environment where the electronic device is located; for example, when the current time is within the preset first time period, if it is detected that the illuminance of the environment where the electronic device is located is smaller than the first preset illuminance, it is determined that the preset condition corresponding to the home scene is met. When the control 24 of the illuminance is turned off, the electronic device may determine whether the preset condition corresponding to the home scene is satisfied according to the current time and the sound pressure level of the environment where the electronic device is located; for example, when the current time is within the preset first time period, if it is detected that the sound pressure level of the environment where the electronic device is located is smaller than the first preset value, it is determined that the preset condition corresponding to the home scene is met.
In other embodiments, the user may also set the first preset illuminance and the first preset value on the setting interface of the home scene. The home time period set by the user may also not be the preset first time period itself but the actual home time period of the user, and the electronic device may determine the preset first time period from the home time period set by the user in order to judge whether the current time is within the preset first time period. For example, the preset first time period may be the middle 8 hours of the home time period set by the user: if the home time period set by the user is from 20:00 to 8:00 of the following day, the preset first time period is from 22:00 to 6:00 of the following day. The preset first time period may also be set after the electronic device analyzes big data; for example, the electronic device analyzes big data on the user's sleep time, and if the sleep time is from 23:00 to 7:00 each day, the preset first time period is set accordingly.
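Deriving the preset first time period as the middle hours of the user-entered home time period can be sketched as follows. The 8-hour width is the example from the text; overnight periods are handled by rolling the end time into the next day.

```python
from datetime import datetime, timedelta

def middle_hours(start_hm, end_hm, width_hours=8):
    """Return the middle `width_hours` of a (possibly overnight) period.

    Times are "HH:MM" strings; the result is a (start, end) pair of
    "HH:MM" strings.
    """
    fmt = "%H:%M"
    start = datetime.strptime(start_hm, fmt)
    end = datetime.strptime(end_hm, fmt)
    if end <= start:                 # period wraps past midnight
        end += timedelta(days=1)
    # Trim an equal margin off both ends to keep the middle of the span.
    margin = (end - start - timedelta(hours=width_hours)) / 2
    return ((start + margin).strftime(fmt), (end - margin).strftime(fmt))
```

With the example from the text, the middle 8 hours of a 20:00-8:00 home time period come out as 22:00-6:00.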
As shown in fig. 4, the electronic device opens a setting interface of the working scene in response to the user clicking "work scene". In the setting interface of the working scene, the user can touch the control 41 of the working scene to turn the working scene on or off; turning on the working scene means enabling the recognition function of the working scene and the volume adaptive adjustment function in the working scene, and turning off the working scene means not executing them. When the working scene is in the on state, the user can set the position, the time period, and the media volume corresponding to the working scene: the position corresponding to the working scene is the working position, the time period corresponding to the working scene is the working period, and the media volume corresponding to the working scene is the working volume. The user may set the current position as the working position, or may set the working position as "x province x city x district x building". If the user does not set the working position, the electronic device may determine it according to historical position data of the user, for example, a position where the user is frequently located at around 10:00 on workdays. The working period refers to a time period in which the volume adaptive adjustment function needs to be executed, specifically the noon break period of the user, for example, from 12:00 to 14:00. The working volume refers to the volume used when the preset condition corresponding to the working scene is met; the user can select the default volume, or set the working volume by sliding the scroll bar for volume adjustment.
After the user sets the working position, the working period, and the working volume, if it is detected that the working scene and the volume adaptive adjustment function are in the on state, the electronic device acquires its position information; if the position information indicates that the electronic device is within the coverage range of the preset working scene, it is determined that the scene where the electronic device is located is the working scene. The coverage range of the preset working scene refers to a preset range around the working position set by the user, for example, a range of 100 meters around the working position. If the scene where the electronic device is located is determined to be the working scene, then when the current time is within the preset second time period, if the detected illuminance of the environment where the electronic device is located is smaller than the second preset illuminance or the detected sound pressure level of the environment where the electronic device is located is smaller than the second preset value, it is determined that the scene identification parameters meet the preset condition corresponding to the working scene, and the volume of the electronic device is adjusted from the first volume to the second volume. The preset second time period refers to the working period set by the user; the second preset illuminance may be 50 lux, and the second preset value may be 50 dB. The first volume is the current volume of the electronic device, and the second volume is the media volume corresponding to the working scene set by the user.
In other scenarios, in the case that the control 22 of the position information is closed, the electronic device may also determine whether the scene where it is located is a working scene according to the current time; for example, if the current time is 11:00, the scene where the electronic device is located may be determined to be a working scene.
In other embodiments, the user may also set the second preset illuminance and the second preset value in the setting interface of the working scene. The working period set by the user may also not be the preset second time period itself but the actual working period of the user, and the electronic device may determine the preset second time period from the working period set by the user in order to judge whether the current time is within the preset second time period. For example, the preset second time period may be the two hours around 12 o'clock on each day of the working period set by the user: if the working period set by the user is Monday through Friday of each week, the preset second time period is from 11:00 to 13:00 on those days. The preset second time period may also be set after the electronic device analyzes big data; for example, the electronic device analyzes big data on the user's noon break time, and if the noon break time is from 11:00 to 14:00, the preset second time period is set accordingly.
When the volume adaptive adjustment function is in the on state, the electronic device acquires its position information, and if the position information indicates that the position of the electronic device is outside both the coverage range of the preset working scene and the coverage range of the preset home scene, it is determined that the scene where the electronic device is located is an outgoing scene. If the current scene is an outgoing scene, the electronic device acquires sound information of the environment where it is located and determines acoustic features of that environment according to the sound information, where the acoustic features are audio features obtained by feature extraction and enhancement of the audio signal, such as spectral features, time-frequency features, loudness features, or energy features of the audio signal. The acoustic features correspond to the scene in which the electronic device is located, and the scene can be determined from them; for example, the scene where the electronic device is located may be an airport, a subway, a movie theater, a shopping mall, and the like.
In a possible implementation manner, sound information of each scene is collected in advance, acoustic features are extracted from the sound information, the acoustic features and the corresponding scenes are used as training samples, and a machine learning algorithm is adopted to train the classification model to obtain a scene recognition model. In the using process of the electronic equipment, sound information of a scene where the electronic equipment is located is obtained, acoustic features of the sound information are extracted, and the acoustic features are input into a scene recognition model to obtain the scene where the electronic equipment is located. In another possible implementation manner, the scene where the electronic device is located may also be determined according to the acoustic features and the illuminance of the scene where the electronic device is located, for example, sound information and illuminance of each scene are collected in advance, the acoustic features are extracted from the sound information, the acoustic features, the illuminance and the corresponding scene are used as training samples, and the classification model is trained by using a machine learning algorithm to obtain the scene recognition model. In the using process of the electronic equipment, sound information and illuminance of a scene where the electronic equipment is located are obtained, acoustic features of the sound information are extracted, the acoustic features and the illuminance are input into a scene recognition model, and the scene where the electronic equipment is located is obtained, so that the accuracy of scene recognition can be further improved.
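As a stand-in for the scene recognition model described above, the sketch below trains a nearest-centroid classifier over (acoustic-feature, illuminance) vectors. The description does not fix a particular machine learning algorithm, so any supervised classifier could be substituted; the feature values in the example are invented for illustration.

```python
import math

def train(samples):
    """samples: list of (feature_vector, scene_label) pairs.
    Returns one centroid vector per scene label."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, vec):
    # Classify by the nearest centroid in Euclidean distance.
    return min(centroids, key=lambda label: math.dist(vec, centroids[label]))
```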
After the scene where the electronic device is located is determined, the preset condition corresponding to the scene is further determined, and whether the preset condition is met is judged according to the illuminance and the sound pressure level of the environment where the electronic device is located. Specifically, when it is determined that the scene where the electronic device is located is a movie theater, a library, a church, or the like, if it is detected that the illuminance of the environment where the electronic device is located is less than the third preset illuminance, or the sound pressure level of the environment where the electronic device is located is less than the third preset value, it is determined that the scene identification parameters meet the preset condition corresponding to the outgoing scene, and the volume of the electronic device is adjusted from the first volume to the second volume. When the scene where the electronic device is located is determined to be an airport, a subway, a train, an automobile, a shopping mall, or the like, if it is detected that the illuminance of the environment where the electronic device is located is less than the third preset illuminance and the sound pressure level of the environment where the electronic device is located is less than the third preset value, it is determined that the scene identification parameters meet the preset condition corresponding to the outgoing scene, and the volume of the electronic device is adjusted from the first volume to the second volume. The third preset illuminance may be 50 lux, and the third preset value may be 40 dB.
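The asymmetry described above — either cue suffices in inherently quiet venues, while both cues are required in noisy ones — can be sketched as follows, with the scene lists and thresholds taken from the examples in the text:

```python
THIRD_PRESET_ILLUMINANCE_LUX = 50
THIRD_PRESET_SPL_DB = 40

QUIET_SCENES = {"cinema", "library", "church", "museum"}      # OR condition
NOISY_SCENES = {"airport", "subway", "train", "car", "mall"}  # AND condition

def outgoing_condition_met(scene, illuminance_lux, spl_db):
    dark = illuminance_lux < THIRD_PRESET_ILLUMINANCE_LUX
    quiet = spl_db < THIRD_PRESET_SPL_DB
    if scene in QUIET_SCENES:
        # In inherently quiet venues either cue alone is enough.
        return dark or quiet
    if scene in NOISY_SCENES:
        # In noisy venues both cues must hold before lowering the volume.
        return dark and quiet
    return False
```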
In an embodiment, as shown in fig. 5, the user first performs information entry, that is, sets the home scene and the working scene; for example, the preset first time period corresponding to the home scene is from 23:00 to 5:00 of the next day, and the preset second time period corresponding to the working scene is from 11:00 to 14:00. After the user starts the volume adjustment function, the electronic device first acquires its position information, and if the position information indicates that the position of the electronic device is within the coverage range of the preset home scene, it determines that the scene where the electronic device is located is the home scene. If the current time is within the range of 5:00 to 23:00, the current first volume is kept unchanged. If the current time is within the range of 23:00 to 5:00 of the next day, the illuminance and the sound pressure level of the environment where the electronic device is located are further detected. If the illuminance of the environment where the electronic device is located is smaller than the first preset illuminance and the sound pressure level of the environment where the electronic device is located is smaller than the first preset value, the volume of the electronic device is adjusted from the first volume to the second volume. If the illuminance of the environment where the electronic device is located is greater than the first preset illuminance or the sound pressure level of the environment where the electronic device is located is greater than the first preset value, the current first volume is kept unchanged.
If the position information indicates that the position of the electronic device is within the coverage range of the preset working scene, it is determined that the scene where the electronic device is located is the working scene. If the current time is not within the range of 11:00 to 14:00, the current first volume is kept unchanged; if the current time is within that range, the illuminance and the sound pressure level of the environment where the electronic device is located are further detected. If the illuminance of the environment where the electronic device is located is less than the second preset illuminance or the sound pressure level of the environment where the electronic device is located is less than the second preset value, the volume of the electronic device is adjusted from the first volume to the second volume. If the illuminance of the environment where the electronic device is located is greater than the second preset illuminance and the sound pressure level of the environment where the electronic device is located is greater than the second preset value, the current first volume is kept unchanged.
If the position information indicates that the position of the electronic equipment is located outside the coverage range of the preset working scene and the coverage range of the preset home scene, determining that the scene where the electronic equipment is located is an outgoing scene, acquiring sound information of the environment where the electronic equipment is located, and determining the acoustic characteristics according to the sound information. If the acoustic characteristics are consistent with the preset first characteristics, determining that the scene where the electronic equipment is located is a quiet scene similar to a home scene, such as a movie theater, a museum, a church and the like, and then acquiring the illuminance of the environment where the electronic equipment is located. If the detected illuminance of the environment where the electronic equipment is located is smaller than the third preset illuminance, or the sound pressure level of the environment where the electronic equipment is located is smaller than the third preset value, the volume of the electronic equipment is adjusted from the first volume to the second volume, otherwise, the current first volume is kept unchanged. If the acoustic characteristic is consistent with the preset second characteristic, determining that the scene where the electronic equipment is located is a public transportation scene or a market scene, and then obtaining the illuminance of the environment where the electronic equipment is located. If the illuminance of the environment where the electronic equipment is located is detected to be smaller than the third preset illuminance, and the sound pressure level of the environment where the electronic equipment is located is detected to be smaller than the third preset value, the volume of the electronic equipment is adjusted from the first volume to the second volume, otherwise, the current first volume is kept unchanged.
In a possible implementation manner, after the volume of the electronic device is adjusted from the first volume to the second volume, if it is detected that the scene identification parameter does not satisfy the preset condition corresponding to the first scene, the volume of the electronic device is restored to the first volume. For example, in a working scene, after the volume of the electronic device is adjusted from the first volume to the second volume, if the current time is not within the preset second time period or the user leaves the coverage range of the preset working scene, it is determined that the scene identification parameter does not meet the preset condition corresponding to the first scene, and the volume of the electronic device is restored to the first volume before adjustment, so that the volume can be flexibly set according to the scene where the user is located.
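The save-and-restore behavior just described can be sketched with a small controller that remembers the first volume while the preset condition holds (an illustrative sketch, not the patented implementation):

```python
class VolumeController:
    """Tracks the pre-adjustment (first) volume so it can be restored
    once the scene's preset condition no longer holds."""

    def __init__(self, volume):
        self.volume = volume
        self._saved = None  # first volume saved before an adjustment

    def update(self, condition_met, scene_volume):
        if condition_met and self._saved is None and self.volume > scene_volume:
            self._saved = self.volume   # remember the first volume
            self.volume = scene_volume  # drop to the second volume
        elif not condition_met and self._saved is not None:
            self.volume = self._saved   # restore the first volume
            self._saved = None
        return self.volume
```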
As shown in fig. 6, the flow of a volume adjustment method according to another embodiment of the present application is as follows. After the volume adaptive adjustment function is started, the electronic device acquires its position information and scene identification parameters, where the scene identification parameters include sound information of the environment where the electronic device is located, illuminance of the environment where the electronic device is located, and the current time. The electronic device first determines the scene where it is located according to the position information. After the scene is determined, a first probability that the volume needs to be controlled is determined according to the scene where the electronic device is located and the illuminance of the environment where the electronic device is located, a second probability that the volume needs to be controlled is determined according to the current time, and a third probability that the volume needs to be controlled is determined according to the sound information of the environment where the electronic device is located. The first probability, the second probability, and the third probability are summed to obtain a comprehensive probability, which is the probability that the volume needs to be controlled obtained by combining the scene identification parameters. If the comprehensive probability is greater than a preset value, the scene identification parameters meet the preset condition corresponding to the first scene, and the volume of the electronic device is adjusted from the first volume to the second volume.
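The probability-summation decision can be sketched as follows, with probabilities expressed in percent and the 80% threshold taken from the worked examples in the text (note the flow description says "greater than a preset value" while the worked examples trigger at "greater than or equal to 80%"):

```python
def combined_probability(first_p, second_p, third_p, threshold=80):
    """Sum the three per-parameter probabilities (in percent) and decide
    whether the volume should be lowered."""
    total = first_p + second_p + third_p
    return total, total >= threshold
```

With the working-scene example values, `combined_probability(50, 20, 20)` yields a comprehensive probability of 90%, which triggers the adjustment.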
For example, the probability that the volume needs to be controlled in the working scene is set to 30%, the probability that the volume needs to be controlled in the home scene is set to 20%, and the probability that the volume needs to be controlled in scenes other than the working scene and the home scene is set to 0. If the illuminance is less than 5 lux, the probability that the volume needs to be controlled is 30%; if the illuminance is between 5 lux and 20 lux, the probability is 20%; and if the illuminance is greater than 20 lux, the probability is 0. If the current time is within the preset time period corresponding to the scene, the probability that the volume needs to be controlled is, depending on the specific time, 30% or 20%; otherwise, the probability is 0. If the sound pressure level is less than 30 dB, the probability that the volume needs to be controlled is 30%; if the sound pressure level is 30 dB-50 dB, the probability is 20%; and if the sound pressure level is greater than 50 dB, the probability is 0. When the comprehensive probability is greater than or equal to 80%, it is determined that the scene identification parameters meet the preset condition corresponding to the scene, and the volume of the electronic device is adjusted from the first volume to the second volume.
In an application scenario, the scene where the electronic device is located is the working scene, for which the probability that the volume needs to be controlled is 30%; the illuminance of the environment where the electronic device is located is 15 lux, for which the probability that the volume needs to be controlled is 20%; these two probabilities are summed to obtain a first probability of 50%. The current time is 13:00, for which the probability that the volume needs to be controlled is 20%, so the second probability is 20%. The sound pressure level of the environment where the electronic device is located is 40 dB, and the corresponding probability that the volume needs to be controlled is 20%, so the third probability is 20%. The first probability, the second probability, and the third probability are summed to obtain a comprehensive probability of 90%, it is determined that the scene identification parameters meet the preset condition corresponding to the working scene, and the volume of the electronic device is adjusted from the first volume to the second volume.
In another application scenario, the scene where the electronic device is located is a home scene, for which the probability that the volume needs to be controlled is 20%; the illuminance of the environment is 3 lux, for which the probability is 30%; these two probabilities are summed to obtain a first probability of 50%. The current time is 6:00, for which the corresponding probability that the volume needs to be controlled is 0, giving a second probability of 0. The sound pressure level of the environment is 20 dB, for which the probability is 30%, giving a third probability of 30%. The first, second, and third probabilities are summed to obtain a comprehensive probability of 80%, so the scene identification parameters are determined to meet the preset condition corresponding to the home scene, and the volume of the electronic device is adjusted from the first volume to the second volume.
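The summation scheme in the examples above can be sketched as follows. The table values and the 80% threshold come from the examples; the time-of-day probability is passed in directly, since the time condition is only partially specified in the text, and all function and variable names are illustrative:

```python
# Illustrative sketch of the probability-summation scheme described above.
# Table values and the 80% threshold follow the example; names are hypothetical.

SCENE_PROB = {"working": 0.30, "home": 0.20}  # any other scene -> 0


def illuminance_prob(lux):
    # < 5 lux -> 30%, 5-20 lux -> 20%, > 20 lux -> 0
    if lux < 5:
        return 0.30
    return 0.20 if lux <= 20 else 0.0


def sound_pressure_prob(db):
    # < 30 dB -> 30%, 30-50 dB -> 20%, > 50 dB -> 0
    if db < 30:
        return 0.30
    return 0.20 if db <= 50 else 0.0


def composite_probability(scene, lux, time_prob, db):
    # First probability: scene + illuminance; second: time of day
    # (passed in directly); third: sound pressure level.
    first = SCENE_PROB.get(scene, 0.0) + illuminance_prob(lux)
    return first + time_prob + sound_pressure_prob(db)


def should_adjust(scene, lux, time_prob, db, threshold=0.80):
    # Adjust from the first volume to the second volume when the
    # comprehensive probability reaches the threshold.
    return composite_probability(scene, lux, time_prob, db) >= threshold
```

With the working-scene example (15 lux, 40 dB, time probability 20%), the comprehensive probability is 30% + 20% + 20% + 20% = 90%, which meets the 80% threshold.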
In other possible implementations, after the scene where the electronic device is located is determined, the maximum of the probabilities that the volume needs to be controlled corresponding to the individual scene identification parameters is taken as the probability corresponding to the scene identification parameters as a whole. This probability is summed with the probability that the volume needs to be controlled corresponding to the scene itself to obtain a comprehensive probability, and when the comprehensive probability is greater than a preset value, the volume of the electronic device is adjusted from the first volume to the second volume.
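A minimal sketch of this max-based variant (the preset value and all names are assumptions; the text only says "a preset value"):

```python
# Variant: take the maximum per-parameter probability instead of summing them,
# then add the scene probability. Names and the preset value are illustrative.

def composite_probability_max(scene_prob, parameter_probs):
    # parameter_probs: probabilities for illuminance, time, sound pressure, ...
    return scene_prob + max(parameter_probs)


def should_adjust_max(scene_prob, parameter_probs, preset_value=0.50):
    return composite_probability_max(scene_prob, parameter_probs) > preset_value
```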
In other possible implementations, the probability that the volume needs to be controlled corresponding to each scene identification parameter may instead be determined according to the scene where the electronic device is located, and the probabilities corresponding to the individual scene identification parameters are then summed to obtain the comprehensive probability. For example, in a preset working scene: if the illuminance is less than 10 lux, the probability that the volume needs to be controlled is 30%; if the illuminance is 10 lux to 30 lux, the probability is 20%; and if the illuminance is greater than 30 lux, the probability is 0. If the current time is within a preset time period (the example period begins at 12:00), the probability takes a correspondingly preset value. If the sound pressure level is less than 30 dB, the probability is 30%; if the sound pressure level is 30 dB to 50 dB, the probability is 20%; and if the sound pressure level is greater than 50 dB, the probability is 0. In a preset home scene: if the illuminance is less than 5 lux, the probability is 30%; if the illuminance is 5 lux to 20 lux, the probability is 20%; and if the illuminance is greater than 20 lux, the probability is 0. If the current time is within a preset time period (the example period begins at 23:00), the probability takes a correspondingly preset value. If the sound pressure level is less than 30 dB, the probability is 30%; if the sound pressure level is 30 dB to 50 dB, the probability is 20%; and if the sound pressure level is greater than 50 dB, the probability is 0.
With the volume adaptive adjustment function in the on state, the scene where the electronic device is located is determined first. Then, for each scene identification parameter, the probability that the volume needs to be controlled in that scene is determined, and the probabilities corresponding to the individual scene identification parameters in that scene are summed to obtain the comprehensive probability. When the comprehensive probability is greater than the preset value, the volume of the electronic device is adjusted from the first volume to the second volume.
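The scene-specific-table variant above might look like the following. The band boundaries follow the working/home examples; the time-of-day bands are omitted because the text truncates them, and all names are illustrative:

```python
# Scene-specific probability tables, keyed by the scene determined first.
# Each parameter follows the 30%/20%/0 band pattern from the examples above.

def band_prob(value, low_cut, high_cut):
    # below low_cut -> 30%, low_cut..high_cut -> 20%, above high_cut -> 0
    if value < low_cut:
        return 0.30
    return 0.20 if value <= high_cut else 0.0


SCENE_BANDS = {
    "working": {"illuminance": (10, 30), "sound_pressure": (30, 50)},
    "home": {"illuminance": (5, 20), "sound_pressure": (30, 50)},
}


def composite_probability(scene, illuminance, sound_pressure):
    bands = SCENE_BANDS[scene]
    return (band_prob(illuminance, *bands["illuminance"])
            + band_prob(sound_pressure, *bands["sound_pressure"]))
```

The same measurement can therefore yield a different probability depending on the scene: 7 lux counts as 30% in the working scene but only 20% in the home scene.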
With continued reference to fig. 6, in a possible implementation, after the volume of the electronic device is adjusted from the first volume to the second volume by the volume adaptive adjustment function, the adaptive adjustment function is exited if an operation of user intervention is detected. The operation of user intervention may be any one of an operation of increasing the volume, an operation of decreasing the volume, and an operation of exiting the adaptive adjustment function.
In an embodiment, after the volume of the electronic device is adjusted from the first volume to the second volume by the volume adaptive adjustment function, if an operation indicating exit from the volume adaptive adjustment function is detected, the volume of the electronic device is restored to the first volume. The operation indicating exit from the volume adaptive adjustment function may be simultaneously pressing a first volume key and a second volume key of the electronic device, where the first volume key and the second volume key are the volume-up key and the volume-down key, respectively. The operation may also be touching a preset control, for example, touching the volume adaptive adjustment function control shown in fig. 2 to exit the volume adaptive adjustment function.
In an embodiment, after the volume of the electronic device is adjusted from the first volume to the second volume by the volume adaptive adjustment function, if an operation indicating to increase the volume is detected, the adjusted second volume is too small for the user, so the volume of the electronic device is adjusted from the second volume back to the first volume, that is, restored to the volume before the adjustment. Optionally, after restoring the volume before the adjustment, the electronic device exits the volume adaptive adjustment function. The operation indicating to increase the volume may be pressing a "volume up" button or sliding a "volume up" scroll bar.
In an embodiment, after the volume of the electronic device is adjusted from the first volume to the second volume by the volume adaptive adjustment function, if an operation indicating to decrease the volume is detected, the user needs an even smaller volume; if the adjusted second volume is not 0, the volume of the electronic device is adjusted to 0, that is, the electronic device is set to a mute state. Optionally, after the volume is adjusted to 0, the electronic device exits the volume adaptive adjustment function. The operation indicating to decrease the volume may be pressing a "volume down" button or sliding a "volume down" scroll bar.
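The intervention behaviour in the preceding paragraphs can be sketched as a small state holder. This is a hypothetical illustration, not the patent's implementation; class and method names are assumptions:

```python
# Sketch of user-intervention handling after an adaptive volume adjustment.

class VolumeAdapter:
    def __init__(self, volume):
        self.enabled = True       # volume adaptive adjustment function is on
        self.volume = volume      # current volume
        self.previous = None      # first volume, saved before adjustment

    def adapt(self, second_volume):
        # Adaptive adjustment: first volume -> second volume.
        if self.enabled:
            self.previous = self.volume
            self.volume = second_volume

    def on_volume_up(self):
        # The second volume was too small for the user: restore and exit.
        if self.previous is not None:
            self.volume = self.previous
        self.enabled = False

    def on_volume_down(self):
        # The user wants even less sound: mute (volume 0) and exit.
        self.volume = 0
        self.enabled = False

    def on_exit_keys(self):
        # Both volume keys pressed together: restore the first volume, exit.
        if self.previous is not None:
            self.volume = self.previous
        self.enabled = False
```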
In a possible implementation, after an operation of increasing or decreasing the volume is detected and the volume adaptive adjustment function is exited, the function is turned on again once a preset period (for example, 1 hour) has elapsed. Alternatively, after the preset period, the user is reminded whether to restart the volume adaptive adjustment function: if the user agrees, the function is turned on; if the user does not agree, it remains off. This avoids the volume being too loud or too quiet because the user forgot to turn the volume adaptive adjustment function back on.
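The re-enable timer can be reduced to a single check. The 1-hour period follows the example above; the function name is illustrative:

```python
# After exiting on user intervention, re-enable the function (or prompt
# the user) once a preset period has elapsed.

PRESET_PERIOD_S = 3600  # 1 hour, as in the example above


def should_re_enable(exit_time, now, preset_period=PRESET_PERIOD_S):
    # exit_time / now are timestamps in seconds (e.g. from time.time()).
    return now - exit_time >= preset_period
```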
In another possible implementation, after an operation of increasing or decreasing the volume is detected and the volume adaptive adjustment function is exited, the electronic device performs no volume adjustment even if the preset condition corresponding to the first scene is met. Alternatively, after exiting the volume adaptive adjustment function, the electronic device neither detects the first scene nor collects scene identification parameters, which saves battery power of the electronic device.
In a possible implementation manner, if at least two operations of reducing the volume are detected within a preset time interval, which indicates that the user may urgently need to reduce the volume, the electronic device adjusts the volume of the electronic device to 0.
If the operation of increasing the volume is detected at least twice within the preset time interval, the user may in fact urgently need to decrease the volume but have pressed the wrong key. In this case the volume of the electronic device is still adjusted to 0, preventing the electronic device from emitting a loud sound in a quiet scene.
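One way to detect the repeated-press case above (the interval value is an assumption; the text only says "a preset time interval"):

```python
# Two volume-key presses within a preset interval force the volume to 0,
# for both repeated volume-down (urgent) and repeated volume-up presses
# (a likely wrong key in a quiet scene).

PRESET_INTERVAL_S = 1.0  # illustrative value


class RepeatPressDetector:
    def __init__(self, interval=PRESET_INTERVAL_S):
        self.interval = interval
        self.last_press = None

    def on_press(self, timestamp):
        # Returns True (mute the device) when this press follows another
        # press within the preset interval.
        repeated = (self.last_press is not None
                    and timestamp - self.last_press <= self.interval)
        self.last_press = timestamp
        return repeated
```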
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 7 shows a schematic diagram of an electronic device provided in an embodiment of the present application.
As shown in fig. 7, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement a touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus, enabling communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through the I2S interface, so as to implement a function of receiving a call through a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to implement the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. It may also be used to connect an earphone and play audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In other embodiments, the power management module 141 may be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 for radiation.
In some embodiments, antenna 1 of electronic device 100 is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or the satellite based augmentation systems (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, connected to the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV and other formats. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals, and can process digital image signals as well as other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency point energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in the external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a Universal Flash Storage (UFS), and the like. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into a sound signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can input a voice signal to the microphone 170C by speaking near the microphone 170C through the mouth. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile electronic device platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
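The pressure-threshold dispatch described for the short-message icon can be illustrated as follows (the threshold value, units, and return values are hypothetical):

```python
# Same touch position, different touch operation intensity -> different
# operation instruction, as described for the pressure sensor 180A.

FIRST_PRESSURE_THRESHOLD = 0.5  # normalized intensity; illustrative value


def message_icon_instruction(intensity):
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_message"   # light press: view the short message
    return "new_message"        # firm press: create a new short message
```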
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for image stabilization during photographing. Illustratively, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the shake angle, and lets the lens counteract the shake of the electronic device 100 through a reverse movement, thereby achieving image stabilization. The gyro sensor 180B may also be used in navigation and somatosensory gaming scenarios.
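The compensation step above can be sketched with a simple geometric model. The patent does not specify the formula; the small-angle model below (image displacement ≈ focal length × tan of the shake angle, cancelled by an equal lens shift in the opposite direction) and the parameter names are illustrative assumptions.

```python
import math

def lens_compensation(shake_angle_rad: float, focal_length_mm: float) -> float:
    """Return the lens shift (mm) that counteracts the detected shake.

    Assumed model: the image displaces by roughly f * tan(theta), so the
    lens is driven the same distance in the reverse direction.
    """
    displacement = focal_length_mm * math.tan(shake_angle_rad)
    return -displacement  # reverse movement cancels the shake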
The air pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 100 calculates the altitude from the barometric pressure value measured by the air pressure sensor 180C, to assist in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon opening the flip cover can then be configured according to the detected open or closed state of the holster or the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The sensor can also be used to recognize the attitude of the electronic device, and is applied in landscape/portrait switching, pedometers, and other applications.
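The landscape/portrait decision mentioned above can be sketched from the gravity components alone. The axis convention (x along the short edge, y along the long edge) is an illustrative assumption; real devices also apply hysteresis to avoid flickering near 45 degrees.

```python
def screen_orientation(ax: float, ay: float) -> str:
    """Infer portrait vs. landscape from gravity components (m/s^2)
    measured along the device's x (short edge) and y (long edge) axes.
    Axis convention is assumed, not taken from the patent."""
    return "portrait" if abs(ay) >= abs(ax) else "landscape"
```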
The distance sensor 180F is used to measure distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, in a shooting scenario, the electronic device 100 may use the distance sensor 180F to measure distance to achieve fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared LED. The electronic device 100 emits infrared light outward through the LED and uses the photodiode to detect infrared light reflected from a nearby object. When sufficient reflected light is detected, the electronic device 100 can determine that there is an object nearby; when insufficient reflected light is detected, it can determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
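The "sufficient reflected light" decision above is a simple threshold test. A minimal sketch, assuming a hypothetical sensor reading in ADC counts and an arbitrary threshold (neither appears in the patent):

```python
REFLECTION_THRESHOLD = 100  # ADC counts; illustrative assumption

def object_nearby(reflected_light: int) -> bool:
    """True when enough reflected infrared light is detected."""
    return reflected_light >= REFLECTION_THRESHOLD

def screen_should_stay_on(reflected_light: int) -> bool:
    # During a call, the screen is turned off while the phone is at the ear.
    return not object_nearby(reflected_light)
```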
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 may use the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint-based photographing, fingerprint-based call answering, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
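The three-threshold strategy above can be sketched as a small decision function. All threshold values and action names below are illustrative assumptions; the patent only says a threshold, "another threshold," and "a further threshold" exist.

```python
def thermal_action(temp_c: float,
                   throttle_above: float = 45.0,   # assumed threshold
                   heat_below: float = 0.0,        # assumed "another threshold"
                   boost_below: float = -10.0) -> str:  # assumed "further threshold"
    """Pick a thermal-protection action from the detected temperature."""
    if temp_c > throttle_above:
        return "throttle_cpu"            # reduce nearby processor performance
    if temp_c < boost_below:
        return "boost_battery_voltage"   # boost battery output voltage
    if temp_c < heat_below:
        return "heat_battery"            # warm the battery 142
    return "normal"
```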
The touch sensor 180K is also called a "touch device." The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a "touchscreen." The touch sensor 180K is used to detect a touch operation applied on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of the touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100 at a position different from that of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of a vibrating bone of the human vocal part. The bone conduction sensor 180M may also contact the pulse of the human body to receive a blood-pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be provided in an earphone, forming a bone conduction earphone. The audio module 170 may parse out a voice signal based on the vibration signal of the vocal-part bone acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor may parse out heart-rate information based on the blood-pressure pulsation signal acquired by the bone conduction sensor 180M, so as to implement a heart-rate detection function.
The keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, which may be used to indicate the charging status and battery-level changes, as well as messages, missed calls, notifications, and the like.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be attached to or detached from the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano-SIM card, a Micro-SIM card, a SIM card, and so on. Multiple cards can be inserted into the same SIM card interface 195 at the same time; the types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, as well as with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from it.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods in the above embodiments can be implemented by a computer program; the computer program can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on.
The present application also provides a computer program product comprising computer instructions stored in a computer-readable storage medium. The processor 110 of the electronic device 100 may read the computer instructions from the computer-readable storage medium, and the processor 110 executes the computer instructions, so that the electronic device 100 performs the volume adjustment method described above.
Finally, it should be noted that: the above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A volume adjusting method is applied to an electronic device, and comprises the following steps:
determining that the electronic equipment is in a first scene, and acquiring scene identification parameters of the electronic equipment in the first scene, wherein the scene identification parameters comprise sound information of the environment where the electronic equipment is located, illuminance of the environment where the electronic equipment is located and current time;
determining a first probability of volume control according to a scene where the electronic device is located and the illuminance of the environment where the electronic device is located, determining a second probability of volume control according to the current time, determining a third probability of volume control according to the sound information of the environment where the electronic device is located, and determining a comprehensive probability according to the first probability, the second probability and the third probability; if the comprehensive probability is larger than a preset value, determining that the scene identification parameter meets a preset condition corresponding to the first scene, and adjusting the volume of the electronic equipment from a first volume to a second volume, wherein the second volume is smaller than the first volume; the first volume is the volume of the electronic device before the electronic device is determined to be in the first scene;
after the volume of the electronic equipment is adjusted from the first volume to the second volume,
if at least two operations of reducing the volume are detected within a preset time interval, or at least two operations of increasing the volume are detected within the preset time interval, adjusting the volume of the electronic equipment to 0; or,
after the volume of the electronic equipment is adjusted from the first volume to the second volume,
and if the operation of indicating volume increase is detected, adjusting the volume of the electronic equipment from the second volume to the first volume.
2. The volume adjustment method according to claim 1, wherein before the obtaining of the scene identification parameter of the electronic device in the first scene, the method further comprises:
acquiring the position information of the electronic equipment;
and if the position information indicates that the position of the electronic equipment is within the coverage range of a preset home scene, determining that the first scene is the home scene.
3. The volume adjustment method according to claim 1, wherein before the obtaining of the scene identification parameter of the electronic device in the first scene, the method further comprises:
acquiring the position information of the electronic equipment;
and if the position information indicates that the position of the electronic equipment is within the coverage range of a preset working scene, determining that the first scene is the working scene.
4. The volume adjustment method according to claim 1, wherein before the obtaining of the scene identification parameter of the electronic device in the first scene, the method further comprises:
acquiring the position information of the electronic equipment;
and if the position information indicates that the position of the electronic equipment is out of the coverage range of a preset working scene and the coverage range of a preset home scene, determining that the first scene is an outgoing scene.
5. The volume adjustment method according to claim 4, wherein the determining that the scene identification parameter satisfies the preset condition corresponding to the first scene includes:
determining the acoustic features of the first scene according to the sound information of the environment where the electronic equipment is located;
and determining a preset condition corresponding to the first scene according to the acoustic features.
6. The volume adjustment method according to claim 1, wherein after the adjusting the volume of the electronic device from a first volume to a second volume, the volume adjustment method further comprises:
and if the operation of indicating volume reduction is detected and the second volume is not 0, adjusting the volume of the electronic equipment to 0.
7. The volume adjustment method according to any one of claims 1 to 6, wherein after the volume of the electronic device is adjusted from a first volume to a second volume, the volume adjustment method further comprises:
and determining that the scene identification parameters do not meet the preset conditions corresponding to the first scene, and restoring the volume of the electronic equipment to the first volume.
8. The volume adjusting method according to any one of claims 1 to 6, wherein after the volume of the electronic device is adjusted from a first volume to a second volume, the method further comprises:
and if the operation of indicating to quit the volume adaptive adjustment function is detected, restoring the volume of the electronic equipment to the first volume.
9. The volume adjusting method according to claim 8, wherein the operation of indicating the exit of the volume adaptive adjustment function is an operation of simultaneously pressing a first volume key and a second volume key of the electronic device or an operation of touching a preset control.
10. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the volume adjustment method according to any one of claims 1 to 9 when executing the computer program.
11. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the volume adjustment method according to any one of claims 1 to 9.
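As an illustrative, non-authoritative sketch of the method of claim 1: the claim requires three per-factor probabilities (scene plus illuminance, current time, and ambient sound) to be fused into a comprehensive probability and compared with a preset value, but does not fix the fusion rule. The equal-weight average and the 0.5 preset value below are assumptions for illustration only.

```python
def should_lower_volume(p_scene_lux: float, p_time: float, p_sound: float,
                        preset: float = 0.5) -> bool:
    """Fuse the first, second, and third probabilities of claim 1 and
    compare the comprehensive probability with a preset value.
    The averaging rule and the 0.5 preset are illustrative assumptions."""
    comprehensive = (p_scene_lux + p_time + p_sound) / 3.0
    return comprehensive > preset

def adjust_volume(first_volume: int, second_volume: int,
                  p1: float, p2: float, p3: float) -> int:
    # Lower from the first volume to the (smaller) second volume only
    # when the preset condition of the first scene is met.
    return second_volume if should_lower_volume(p1, p2, p3) else first_volume
```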
CN202110610053.5A 2021-06-01 2021-06-01 Volume adjusting method, electronic device and storage medium Active CN113467747B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110610053.5A CN113467747B (en) 2021-06-01 2021-06-01 Volume adjusting method, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN113467747A CN113467747A (en) 2021-10-01
CN113467747B true CN113467747B (en) 2023-03-31

Family

ID=77872096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110610053.5A Active CN113467747B (en) 2021-06-01 2021-06-01 Volume adjusting method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113467747B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114302298B (en) * 2021-12-31 2023-07-21 联想(北京)有限公司 Volume adjustment method and device, electronic equipment and storage medium

Citations (1)

Publication number Priority date Publication date Assignee Title
CN105827797A (en) * 2015-07-29 2016-08-03 维沃移动通信有限公司 Method for adjusting volume of electronic device and electronic device

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
US7278101B1 (en) * 1999-09-30 2007-10-02 Intel Corporation Controlling audio volume in processor-based systems
US9431983B2 (en) * 2013-06-17 2016-08-30 Tencent Technology (Shenzhen) Company Limited Volume adjusting method, volume adjusting apparatus and electronic device using the same
CN105975241A (en) * 2016-04-22 2016-09-28 北京小米移动软件有限公司 Volume regulation method and device
CN107277216A (en) * 2017-05-16 2017-10-20 努比亚技术有限公司 A kind of volume adjusting method, terminal and computer-readable recording medium
CN108521521B (en) * 2018-04-19 2021-04-02 Oppo广东移动通信有限公司 Volume adjusting method, mobile terminal and computer readable storage medium
CN108810739A (en) * 2018-05-22 2018-11-13 出门问问信息科技有限公司 A kind of speech playing method and device, storage medium, electronic equipment
CN108733342B (en) * 2018-05-22 2021-03-26 Oppo(重庆)智能科技有限公司 Volume adjusting method, mobile terminal and computer readable storage medium
US11531516B2 (en) * 2019-01-18 2022-12-20 Samsung Electronics Co., Ltd. Intelligent volume control
CN110995933A (en) * 2019-12-12 2020-04-10 Oppo广东移动通信有限公司 Volume adjusting method and device of mobile terminal, mobile terminal and storage medium
CN111405114A (en) * 2020-03-18 2020-07-10 捷开通讯(深圳)有限公司 Method and device for automatically adjusting volume, storage medium and terminal


Non-Patent Citations (1)

Title
Overview of patent technology for volume adjustment of mobile terminals; Zuo Saizhe; China New Telecommunications; 2015-05-05 (No. 09); full text *


Similar Documents

Publication Publication Date Title
CN110347269B (en) Empty mouse mode realization method and related equipment
CN112289313A (en) Voice control method, electronic equipment and system
CN113395388B (en) Screen brightness adjusting method and electronic equipment
CN110730114B (en) Method and equipment for configuring network configuration information
CN112312366B (en) Method, electronic equipment and system for realizing functions through NFC (near field communication) tag
CN111182140B (en) Motor control method and device, computer readable medium and terminal equipment
CN111835907A (en) Method, equipment and system for switching service across electronic equipment
CN111930335A (en) Sound adjusting method and device, computer readable medium and terminal equipment
CN113438364B (en) Vibration adjustment method, electronic device, and storage medium
CN113467735A (en) Image adjusting method, electronic device and storage medium
CN114095602B (en) Index display method, electronic device and computer readable storage medium
CN115514844A (en) Volume adjusting method, electronic equipment and system
CN113467747B (en) Volume adjusting method, electronic device and storage medium
CN109285563B (en) Voice data processing method and device in online translation process
CN113129916A (en) Audio acquisition method, system and related device
WO2022206825A1 (en) Method and system for adjusting volume, and electronic device
CN115665632A (en) Audio circuit, related device and control method
CN114221402A (en) Charging method and device of terminal equipment and terminal equipment
CN115714890A (en) Power supply circuit and electronic device
CN114661258A (en) Adaptive display method, electronic device, and storage medium
CN114116610A (en) Method, device, electronic equipment and medium for acquiring storage information
CN113867520A (en) Device control method, electronic device, and computer-readable storage medium
CN114554012A (en) Incoming call answering method, electronic equipment and storage medium
CN114120987A (en) Voice awakening method, electronic equipment and chip system
CN113391735A (en) Display form adjusting method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant