CN112866480B - Information processing method, information processing device, electronic equipment and storage medium - Google Patents

Information processing method, information processing device, electronic equipment and storage medium

Info

Publication number
CN112866480B
Authority
CN
China
Prior art keywords
application scene
electronic equipment
scene
type
determining
Prior art date
Legal status
Active
Application number
CN202110006165.XA
Other languages
Chinese (zh)
Other versions
CN112866480A (en)
Inventor
张奎
刘爱根
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202110006165.XA
Publication of CN112866480A
Application granted
Publication of CN112866480B
Status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output
    • G06F3/165: Management of the audio stream, e.g. setting of volume, audio stream path
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Telephone Function (AREA)

Abstract

An embodiment of the present disclosure discloses an information processing method and device, an electronic device, and a storage medium. The information processing method includes: acquiring sound parameters of a collected voice signal; determining, according to the sound parameters, the type of application scene in which the electronic device is located; and controlling the electronic device to output first information based on the type of application scene in which the electronic device is located. With the information processing method of the disclosed embodiment, the application scene of the electronic device can be determined from the sound parameters, and content matched to each application scene can be output for the different application scenes, improving the intelligence of the electronic device and the accuracy of the output content, and thereby improving the user experience.

Description

Information processing method, information processing device, electronic equipment and storage medium
Technical Field
The present disclosure relates to, but is not limited to, the field of communications technologies, and in particular to an information processing method and apparatus, an electronic device, and a storage medium.
Background
At present, more and more electronic devices, such as televisions, mobile phones, and smart speakers, offer far-field voice functions, and as far-field voice interaction becomes more widely used, it brings greater convenience to people's life and work. In far-field voice interaction, however, the related voice response is typically generated only by analyzing the content spoken by a person; as a result, the content of the voice interaction may be inaccurate.
Disclosure of Invention
The disclosure provides an information processing method, an information processing device, electronic equipment and a storage medium.
According to a first aspect of the present disclosure, there is provided an information processing method applied to an electronic device, the method including:
acquiring sound parameters of a collected voice signal;
determining, according to the sound parameters, the type of application scene in which the electronic device is located; and
controlling the electronic device to output first information based on the type of application scene in which the electronic device is located.
In the above solution, the determining, according to the sound parameters, the type of application scene in which the electronic device is located includes:
determining, according to at least one of the sound parameters, that the application scene in which the electronic device is located is a first-type application scene or a second-type application scene;
wherein the sound parameters include at least one of: a loudness parameter, an audio frequency parameter, and an ultrasonic parameter.
In the above solution, the determining, according to at least one of the sound parameters, that the application scene in which the electronic device is located is a first-type application scene or a second-type application scene includes one of the following:
determining that the electronic device is in a first-type application scene in response to at least one of the sound parameters being within a predetermined threshold range; and
determining that the electronic device is in a second-type application scene in response to at least one of the sound parameters being outside the predetermined threshold range.
In the above solution, the determining that the electronic device is in a first-type application scene in response to at least one of the sound parameters being within a predetermined threshold range includes at least one of the following:
determining that the application scene in which the electronic device is located is the first-type application scene in response to the loudness fluctuation range in the sound parameters being smaller than a predetermined loudness fluctuation range within a predetermined time range;
determining that the application scene is the first-type application scene in response to the fluctuation range of the audio frequency in the sound parameters being smaller than a predetermined frequency fluctuation range within the predetermined time range;
determining that the application scene is the first-type application scene in response to the audio frequency in the sound parameters being continuously present within the predetermined time range; and
determining that the application scene is the first-type application scene in response to the echo intensity of the ultrasonic wave returned in the sound parameters being smaller than a predetermined signal intensity and/or the signal-to-noise ratio of the echo being smaller than a predetermined signal-to-noise ratio.
In the above solution, the determining that the electronic device is in a second-type application scene in response to at least one of the sound parameters being outside the predetermined threshold range includes at least one of the following:
determining that the application scene in which the electronic device is located is the second-type application scene in response to the loudness fluctuation range in the sound parameters being greater than or equal to the predetermined loudness fluctuation range within the predetermined time range;
determining that the application scene is the second-type application scene in response to the fluctuation range of the audio frequency in the sound parameters being greater than or equal to the predetermined frequency fluctuation range within the predetermined time range;
determining that the application scene is the second-type application scene in response to the audio frequency being absent during at least part of the predetermined time range; and
determining that the application scene is the second-type application scene in response to the echo intensity of the ultrasonic wave returned in the sound parameters being greater than the predetermined signal intensity and/or the signal-to-noise ratio of the echo being smaller than the predetermined signal-to-noise ratio.
In the above solution, the first-type application scene includes a noisy scene or a quiet scene;
the determining, according to at least one of the sound parameters, that the application scene in which the electronic device is located is a first-type application scene includes at least one of the following:
determining that the application scene is the noisy scene in response to the loudness in the sound parameters being greater than a first predetermined loudness threshold;
determining that the application scene is the quiet scene in response to the loudness being less than or equal to the first predetermined loudness threshold;
determining that the application scene is the noisy scene in response to the peak frequency of the audio spectrum in the sound parameters being greater than a first predetermined frequency; and
determining that the application scene is the quiet scene in response to the peak frequency of the audio spectrum being less than or equal to the first predetermined frequency.
In the above solution, the second-type application scene includes a first-age-stage scene or a second-age-stage scene, wherein the age associated with the first-age-stage scene is lower than that associated with the second-age-stage scene;
the determining, according to at least one of the sound parameters, that the application scene in which the electronic device is located is a second-type application scene includes at least one of the following:
determining that the application scene is the first-age-stage scene in response to the audio frequency in the sound parameters being within a first predetermined frequency threshold interval; and
determining that the application scene is the second-age-stage scene in response to the audio frequency being within a second predetermined frequency threshold interval, wherein the minimum value of the first predetermined frequency threshold interval is greater than the maximum value of the second predetermined frequency threshold interval.
In the above solution, the controlling the electronic device to output the first information based on the type of application scene in which the electronic device is located includes at least one of the following:
adjusting, based on the type of application scene, the volume at which the electronic device outputs the first information;
outputting the first information matched to the type of application scene; and
controlling, based on the type of application scene, the duration for which the first information is output.
In the above solution, the adjusting, based on the type of application scene in which the electronic device is located, the volume at which the electronic device outputs the first information includes one of the following:
adjusting the volume at which the electronic device outputs the first information to be greater than or equal to a predetermined volume threshold in response to the application scene being a noisy scene or a second-age-stage scene; and
adjusting the volume at which the electronic device outputs the first information to be smaller than the predetermined volume threshold in response to the application scene being a quiet scene or a first-age-stage scene.
In the above solution, the controlling, based on the type of application scene, the duration for which the first information is output includes one of the following:
controlling the duration for which the first information is output to be a first duration in response to the application scene being a first-age-stage scene; and
controlling the duration for which the first information is output to be a second duration in response to the application scene being a second-age-stage scene, wherein the second duration is longer than the first duration.
According to a second aspect of the present disclosure, there is provided an information processing apparatus applied to an electronic device, the apparatus including:
an acquisition module configured to acquire sound parameters of a collected voice signal;
a determining module configured to determine, according to the sound parameters, the type of application scene in which the electronic device is located; and
a processing module configured to control the electronic device to output first information based on the type of application scene in which the electronic device is located.
In the above solution, the determining module is configured to determine, according to at least one of the sound parameters, that the application scene in which the electronic device is located is a first-type application scene or a second-type application scene;
wherein the sound parameters include at least one of: a loudness parameter, an audio frequency parameter, and an ultrasonic parameter.
In the above solution, the determining module is configured to determine that the electronic device is in a first-type application scene in response to at least one of the sound parameters being within a predetermined threshold range;
or,
the determining module is configured to determine that the electronic device is in a second-type application scene in response to at least one of the sound parameters being outside the predetermined threshold range.
In the above solution, the determining module is configured to perform at least one of the following:
determining that the application scene is the first-type application scene in response to the loudness fluctuation range in the sound parameters being smaller than a predetermined loudness fluctuation range within a predetermined time range;
determining that the application scene is the first-type application scene in response to the fluctuation range of the audio frequency being smaller than a predetermined frequency fluctuation range within the predetermined time range;
determining that the application scene is the first-type application scene in response to the audio frequency being continuously present within the predetermined time range; and
determining that the application scene is the first-type application scene in response to the echo intensity of the returned ultrasonic wave being smaller than a predetermined signal intensity and/or the signal-to-noise ratio of the echo being smaller than a predetermined signal-to-noise ratio.
In the above solution, the determining module is configured to perform at least one of the following:
determining that the application scene is the second-type application scene in response to the loudness fluctuation range being greater than or equal to the predetermined loudness fluctuation range within the predetermined time range;
determining that the application scene is the second-type application scene in response to the fluctuation range of the audio frequency being greater than or equal to the predetermined frequency fluctuation range within the predetermined time range;
determining that the application scene is the second-type application scene in response to the audio frequency being absent during at least part of the predetermined time range; and
determining that the application scene is the second-type application scene in response to the echo intensity of the returned ultrasonic wave being greater than the predetermined signal intensity and/or the signal-to-noise ratio of the echo being smaller than the predetermined signal-to-noise ratio.
In the above solution, the first-type application scene includes a noisy scene or a quiet scene;
the determining module is configured to perform at least one of the following:
determining that the application scene is the noisy scene in response to the loudness in the sound parameters being greater than a first predetermined loudness threshold;
determining that the application scene is the quiet scene in response to the loudness being less than or equal to the first predetermined loudness threshold;
determining that the application scene is the noisy scene in response to the peak frequency of the audio spectrum being greater than a first predetermined frequency; and
determining that the application scene is the quiet scene in response to the peak frequency of the audio spectrum being less than or equal to the first predetermined frequency.
In the above solution, the second-type application scene includes a first-age-stage scene or a second-age-stage scene, wherein the age associated with the first-age-stage scene is lower than that associated with the second-age-stage scene;
the determining module is configured to determine that the application scene is the first-age-stage scene in response to the audio frequency in the sound parameters being within a first predetermined frequency threshold interval;
or,
the determining module is configured to determine that the application scene is the second-age-stage scene in response to the audio frequency being within a second predetermined frequency threshold interval, wherein the minimum value of the first predetermined frequency threshold interval is greater than the maximum value of the second predetermined frequency threshold interval.
In the above solution, the processing module is configured to perform at least one of the following:
adjusting, based on the type of application scene in which the electronic device is located, the volume at which the electronic device outputs the first information;
outputting the first information matched to the type of application scene; and
controlling, based on the type of application scene, the duration for which the first information is output.
In the above solution, the processing module is configured to adjust the volume at which the electronic device outputs the first information to be greater than or equal to a predetermined volume threshold in response to the application scene being a noisy scene or a second-age-stage scene;
or,
the processing module is configured to adjust the volume at which the electronic device outputs the first information to be smaller than the predetermined volume threshold in response to the application scene being a quiet scene or a first-age-stage scene.
In the above solution, the processing module is configured to control the duration for which the first information is output to be a first duration in response to the application scene being a first-age-stage scene;
or,
the processing module is configured to control the duration for which the first information is output to be a second duration in response to the application scene being a second-age-stage scene, wherein the second duration is longer than the first duration.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to implement, when executing the executable instructions, the information processing method of any embodiment of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing an executable program, wherein the executable program, when executed by a processor, implements the information processing method of any embodiment of the present disclosure.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
An embodiment of the present disclosure acquires sound parameters of a collected voice signal, determines, according to the sound parameters, the type of application scene in which the electronic device is located, and controls the electronic device to output first information based on that type. The embodiment can therefore determine the type of application scene from the sound parameters of the signals collected by the electronic device and output content matched to each application scene, improving the intelligence of the electronic device and the accuracy of the output content, and thereby improving the user experience.
In addition, when the method of the embodiment is applied to a far-field voice interaction scene, it can analyze the type of application scene in which the electronic device is located and control the output content accordingly, rather than outputting content from the voice signal alone as in the prior art; this reduces the inaccuracy of far-field voice interaction caused by interference such as background sound, and thus improves its accuracy. Moreover, because far-field voice interaction can be conducted directly on the basis of the application scene of the electronic device, the convenience of operation, and accordingly the user experience, can be further improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic diagram illustrating an information processing method according to an exemplary embodiment.
Fig. 2 is a schematic diagram illustrating far-field voice interaction according to an exemplary embodiment.
Fig. 3 is a schematic diagram illustrating an information processing method according to an exemplary embodiment.
Fig. 4 is a schematic diagram illustrating an information processing method according to an exemplary embodiment.
Fig. 5 is a schematic diagram illustrating an information processing method according to an exemplary embodiment.
Fig. 6 is a schematic diagram illustrating an information processing method according to an exemplary embodiment.
Fig. 7 is a block diagram of an information processing apparatus according to an exemplary embodiment.
Fig. 8 is a block diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention; rather, they are merely examples of apparatuses and methods consistent with aspects of the invention as detailed in the appended claims.
As shown in fig. 1, an embodiment of the present disclosure provides an information processing method, which includes the following steps:
step S11: acquiring sound parameters of a collected voice signal;
step S12: determining, according to the sound parameters, the type of application scene in which the electronic device is located;
step S13: controlling the electronic device to output first information based on the type of application scene in which the electronic device is located.
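As a concrete illustration of this flow, the following is a minimal Python sketch of steps S11 to S13. It is a sketch under stated assumptions rather than the patented implementation: the patent does not fix the parameter definitions, so loudness is approximated here as an RMS level in dB, the audio frequency as an FFT peak, and the threshold value is a placeholder.

```python
import numpy as np

def get_sound_parameters(signal, sample_rate=16000):
    """Step S11: derive simple sound parameters from a collected signal."""
    rms = np.sqrt(np.mean(signal ** 2)) + 1e-12          # avoid log(0)
    loudness_db = 20 * np.log10(rms)                     # rough loudness proxy
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)
    return {"loudness_db": loudness_db, "peak_freq_hz": freqs[np.argmax(spectrum)]}

def determine_scene_type(params, loudness_threshold_db=-20.0):
    """Step S12: map sound parameters to a scene type (placeholder threshold)."""
    return "noisy" if params["loudness_db"] > loudness_threshold_db else "quiet"

def output_first_information(scene_type):
    """Step S13: control the output according to the detected scene type."""
    if scene_type == "noisy":
        print("Raising playback volume and selecting lively content")
    else:
        print("Lowering playback volume and selecting calm content")

# Example: one second of synthetic microphone input at 16 kHz.
signal = 0.05 * np.random.randn(16000)
output_first_information(determine_scene_type(get_sound_parameters(signal)))
```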
The electronic device of the embodiments of the present disclosure may be any of various mobile or fixed devices. For example, it may be a mobile phone, a computer, a server, or a tablet computer; it may be a television or a smart speaker; or it may be a wearable device such as a bracelet or a watch.
In some embodiments, the electronic device may be used for interaction in a far-field voice scene. For example, as shown in fig. 2, the electronic device is a smart television integrating a far-field voice component, so that voice interaction in the far-field voice scene of the smart television can be realized.
In other embodiments, the electronic device may be used for interaction in a near-field voice scene.
Far-field and near-field voice scenes are defined relative to each other; for example, a far-field voice scene is one in which the speaker is at or beyond a predetermined distance, and a near-field voice scene is one within that distance. A far-field voice scene may, for example, be one in which the speaker is more than 2 meters away, such as a conference room, an in-vehicle scene, or a smart-home scene.
The voice signal here includes any sound in the external environment; for example, it may include human voice, sound output by electronic devices, noise, and/or multipath reverberation. The noise may be air-conditioner noise, refrigerator noise, and/or human voices other than the voice interacting with the electronic device. In some application scenes, the voice signal may also be understood as background sound.
In some embodiments, step S11 includes: collecting the voice signal; and acquiring the sound parameters from the collected voice signal.
The electronic device here may include a sound collection module, and collects the voice signal through it.
The sound collection module may be one or more microphones.
For example, multiple microphones may form a linear array or a circular array; in this way, as much background sound as possible can be captured.
The sound parameters here include, but are not limited to, at least one of: a loudness parameter, an audio frequency parameter, and an ultrasonic parameter. Of course, the sound parameters may also be other parameters characterizing any feature of the voice signal, such as sound intensity parameters and/or an audio spectrum.
The application scene here includes a first-type application scene and/or a second-type application scene.
The first-type application scene indicates the surrounding environment; for example, it may be a quiet scene or a noisy scene.
The second-type application scene indicates different age stages; for example, it may be a first-age-stage scene or a second-age-stage scene, where the age associated with the first-age-stage scene is lower than that associated with the second-age-stage scene. For example, the first-age-stage scene involves children, and the second-age-stage scene involves middle-aged or elderly people.
Of course, application scenes may be classified by age in other ways. For example, they may be divided into a first-age-stage scene, a second-age-stage scene, and a third-age-stage scene, where the first age stage is younger than the second and the second is younger than the third: the users in the first-age-stage scene are children, those in the second are middle-aged people, and those in the third are elderly people. Scenes may also be divided by specific ages; for example, the persons in the first-age-stage scene are 0 to 20 years old, those in the second are 21 to 50 years old, and those in the third are over 50.
Of course, application scenes may also be classified in other ways; for example, by the speaker's gender, into a female scene or a male scene, or by sound-source attribute, into a human scene or an electronic-device scene.
The first information here includes, but is not limited to, at least one of audio data and video data. In step S13, controlling the electronic device to output the first information may mean controlling the content of the first information output by the electronic device, or the duration for which it is output, and so on.
The first information may also be any information carrying an audio signal; for example, broadcast news, novels, or music, or television programs such as sports programs, children's programs, movies, and variety shows.
In other embodiments, step S13 may instead be: prompting the electronic device to output the first information based on the type of application scene in which it is located; or determining, based on the type of application scene, prompt information for the electronic device to output the first information.
In the embodiment of the present disclosure, the type of application scene corresponding to the sound parameters may be determined from the acquired sound parameters of the collected voice signal, and the first information output by the electronic device may be controlled based on that type. The embodiment can thus determine the type of application scene from the sound parameters of the signals collected by the electronic device and output content matched to each application scene, improving the intelligence of the electronic device and the accuracy of the output content, and thereby improving the user experience.
In addition, when the method of the embodiment is applied to a far-field voice interaction scene, it can analyze the type of application scene in which the electronic device is located and control the output content accordingly, rather than outputting content from the voice signal alone as in the prior art; this reduces the inaccuracy of far-field voice interaction caused by interference such as background sound and improves its accuracy. Moreover, because far-field voice interaction can be conducted directly on the basis of the application scene of the electronic device, the convenience of operation, and accordingly the user experience, can be further improved.
In some embodiments, step S12 includes:
in response to a candidate application scene matching the sound parameters being found in an acoustic model, determining the candidate application scene as the type of application scene in which the electronic device is located; wherein the acoustic model includes at least the matching relation between each candidate application scene and the sound parameters.
In some embodiments, the acoustic model includes at least the matching relation between each candidate application scene and at least one of the loudness parameter, the audio frequency parameter, and the ultrasonic parameter.
For example, the acoustic model may include the matching relation between each candidate application scene and the loudness parameter; or between each candidate application scene and the loudness and ultrasonic parameters; or between each candidate application scene and the loudness, audio frequency, and ultrasonic parameters.
The acoustic model may further include the matching relation between each candidate application scene and an audio spectrogram of the audio signal.
The audio spectrogram here indicates the audio frequencies over a predetermined period of time.
It can be understood that the loudness parameter, audio frequency parameter, ultrasonic parameter, audio spectrogram, and so on differ from one candidate application scene to another. For example, a child's audio frequency differs from a middle-aged person's, and the audio spectra of different children differ; likewise, the sound level in decibels of a quiet scene differs from that of a noisy scene, and the audio spectra of people of different ages differ. Thus different loudness parameters, audio frequency parameters, ultrasonic parameters, and/or audio spectrograms can correspond to different candidate application scenes.
In some embodiments, before step S12, the method further includes:
acquiring sound parameters of sound signals in each candidate application scene; and
establishing the acoustic model based on each candidate application scene and its matching sound parameters.
Thus, in the embodiment of the present disclosure, sound signals in various application scenes may be collected in advance, their sound parameters obtained, and an acoustic model relating the candidate application scenes to these sample sound parameters established. Based on the sound parameters of the currently collected sound signal, the candidate application scene corresponding to those parameters is then looked up among the samples; that candidate scene is the application scene in which the electronic device currently is. In this way, the electronic device can obtain its application scene quickly and accurately.
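For illustration, the acoustic model can be sketched as a pre-built lookup table queried by nearest-neighbour matching, one simple realization of the "matching relation" described above. The scene names, parameter values, and the distance metric are all assumptions made for the sketch; the patent does not prescribe them.

```python
# Hypothetical reference table built in advance from sound signals collected
# in each candidate application scene (the values are illustrative only).
ACOUSTIC_MODEL = {
    "quiet":   {"loudness_db": -40.0, "peak_freq_hz": 150.0},
    "noisy":   {"loudness_db": -10.0, "peak_freq_hz": 900.0},
    "child":   {"loudness_db": -25.0, "peak_freq_hz": 350.0},
    "elderly": {"loudness_db": -25.0, "peak_freq_hz": 180.0},
}

def query_acoustic_model(params):
    """Return the candidate scene whose stored parameters are closest to the
    measured ones.  A real system would normalize the features first, since
    decibels and hertz are not directly comparable."""
    def distance(ref):
        return sum((params[k] - ref[k]) ** 2 for k in ref)
    return min(ACOUSTIC_MODEL, key=lambda scene: distance(ACOUSTIC_MODEL[scene]))

print(query_acoustic_model({"loudness_db": -39.0, "peak_freq_hz": 160.0}))  # -> quiet
```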
As shown in fig. 3, in some embodiments, step S12 includes:
step S121: determining, according to at least one of the sound parameters, that the application scene in which the electronic device is located is a first-type application scene or a second-type application scene;
wherein the sound parameters include at least one of: a loudness parameter, an audio frequency parameter, and an ultrasonic parameter.
Of course, in other embodiments the sound parameters may also be other parameters characterizing any feature of the voice signal, such as sound intensity parameters and/or an audio spectrum.
Thus, in the embodiment of the present disclosure, the application scene in which the electronic device is located can be determined accurately from at least one of the sound parameters.
In some embodiments, step S121 includes:
in response to the application scene in which the electronic device is located not being determinable as a first-type or second-type application scene from i of the sound parameters, determining whether it is a first-type or second-type application scene from i+1 of the sound parameters, where i is a positive integer greater than or equal to 1.
For example, if the electronic device cannot decide from the loudness parameter alone whether it is in a first-type or second-type application scene, it may make the determination using the audio frequency parameter as well.
In the embodiment of the present disclosure, the type of application scene may first be determined from i of the sound parameters, and one or more further parameters are added only when those i parameters are insufficient. On the one hand, this saves the resources needed for the determination as much as possible; on the other hand, the type of application scene can be judged along multiple dimensions, improving the accuracy of the determination.
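A sketch of this escalation strategy follows. Only the pattern, deciding with i parameters and consulting one more when the result is ambiguous, comes from the text above; the individual check functions and their numeric thresholds are invented for illustration.

```python
def classify_with_escalation(params, checks):
    """Try each parameter check in turn; an ambiguous result (None) means
    the first i parameters were insufficient, so the (i+1)-th is consulted."""
    for check in checks:
        result = check(params)
        if result is not None:      # unambiguous -> stop early, saving work
            return result
    return "undetermined"

def check_loudness(p):
    # Clear-cut loudness decides immediately; the middle band is ambiguous.
    if p["loudness_db"] > -10.0:
        return "second_type"
    if p["loudness_db"] < -45.0:
        return "first_type"
    return None

def check_frequency(p):
    return "first_type" if p["peak_freq_hz"] < 300.0 else "second_type"

# Loudness alone is inconclusive here, so the frequency check decides.
print(classify_with_escalation({"loudness_db": -30.0, "peak_freq_hz": 250.0},
                               [check_loudness, check_frequency]))  # first_type
```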
In some embodiments, step S121 includes one of the following:
determining that the electronic device is in a first-type application scene in response to at least one of the sound parameters being within a predetermined threshold range; and
determining that the electronic device is in a second-type application scene in response to at least one of the sound parameters being outside the predetermined threshold range.
In some embodiments, the determining that the electronic device is in a first-type application scene in response to at least one of the sound parameters being within a predetermined threshold range includes at least one of the following:
determining that the application scene is the first-type application scene in response to the loudness fluctuation range in the sound parameters being smaller than a predetermined loudness fluctuation range within a predetermined time range;
determining that the application scene is the first-type application scene in response to the fluctuation range of the audio frequency being smaller than a predetermined frequency fluctuation range within the predetermined time range;
determining that the application scene is the first-type application scene in response to the audio frequency being continuously present within the predetermined time range; and
determining that the application scene is the first-type application scene in response to the echo intensity of the returned ultrasonic wave being smaller than a predetermined signal intensity and/or the signal-to-noise ratio of the echo being smaller than a predetermined signal-to-noise ratio.
In other embodiments, the determining that the electronic device is in a second-type application scene in response to at least one of the sound parameters being outside the predetermined threshold range includes at least one of the following:
determining that the application scene is the second-type application scene in response to the loudness fluctuation range being greater than or equal to the predetermined loudness fluctuation range within the predetermined time range;
determining that the application scene is the second-type application scene in response to the fluctuation range of the audio frequency being greater than or equal to the predetermined frequency fluctuation range within the predetermined time range;
determining that the application scene is the second-type application scene in response to the audio frequency being absent during at least part of the predetermined time range; and
determining that the application scene is the second-type application scene in response to the echo intensity of the returned ultrasonic wave being greater than the predetermined signal intensity and/or the signal-to-noise ratio of the echo being smaller than the predetermined signal-to-noise ratio.
For example, in some embodiments, the application scene in which the electronic device is located contains a noise-emitting range hood or robot vacuum. In such a scene, the loudness of the voice signal collected by the electronic device fluctuates little over the predetermined time range, e.g. less than the predetermined loudness fluctuation range, and/or the audio frequency of the collected voice signal fluctuates little, e.g. less than the predetermined frequency fluctuation range; the application scene can then be determined to be a first-type application scene.
For another example, in some embodiments, there is a person speaking in the application scene, and the person speaks progressively louder. The loudness of the collected voice signal then fluctuates widely over the predetermined time range, e.g. more than the predetermined loudness fluctuation range, and/or the audio frequency also fluctuates widely, e.g. more than the predetermined frequency fluctuation range; the application scene can then be determined to be a second-type application scene.
As another example, in some embodiments, a noise-emitting robot vacuum in the application scene operates continuously over the predetermined time range. The audio frequency in the voice signal collected by the electronic device is then always present, forming a continuous audio spectrum, and the application scene can be determined to be a first-type application scene.
Here, the audio frequency being continuously present in the sound parameters within the predetermined time range may be regarded as a continuous audio spectrum existing over that range.
As another example, in some embodiments, a person in the application scene speaks, pauses for a while, and then speaks again. The audio frequency in the collected voice signal is then not continuously present, and the resulting audio spectrum is intermittent; for example, it may exceed 120 Hz during seconds 1 to 2 and drop to a few Hz or nearly 0 during seconds 2 to 3. The application scene in which the electronic device is located is then determined to be a second-type application scene.
The audio frequency being absent during at least part of the predetermined time range may be regarded as an intermittent audio spectrum over that range.
As another example, in some embodiments, a noise-emitting robot vacuum is present in the application scene; if the electronic device emits ultrasonic waves, the signal-to-noise ratio of the ultrasonic echo it receives is relatively large, e.g. greater than the predetermined signal-to-noise ratio, and the application scene can be determined to be a first-type application scene.
For another example, in some embodiments, if a person is speaking in the application scene and the signal intensity of the received ultrasonic echo is greater than the predetermined signal intensity, the application scene is determined to be a second-type application scene.
In some embodiments, an ultrasonic device may be installed in the electronic apparatus to transmit ultrasonic waves and receive their echoes, and application scenes can be distinguished on the basis of these echoes. For example, whether a person is present in the application scene can be determined from whether an echo is received; for another example, whether an object is an electronic device, a child, or an adult can be determined from the object shape characterized by the echo.
In the embodiment of the present disclosure, whether the application scene is a first-type or a second-type application scene may be determined from the predetermined threshold range to which at least one of the sound parameters belongs, for example by judging the loudness over a predetermined time range; the type of application scene in which the electronic device is located can thus be determined accurately.
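The examples above can be condensed into a sketch like the following, which classifies a window of measurements as first-type or second-type. Treating the "fluctuation range" as the peak-to-peak spread, and the particular threshold values, are assumptions made for illustration; the patent gives no numbers.

```python
import numpy as np

def classify_first_or_second(loudness_db, freq_hz,
                             max_loudness_swing_db=6.0, max_freq_swing_hz=100.0):
    """Small, steady swings with continuously present audio suggest a
    first-type scene (e.g. a steadily humming appliance); large swings or
    silent gaps suggest a second-type scene (e.g. human speech)."""
    loudness_swing = np.ptp(loudness_db)            # peak-to-peak fluctuation
    voiced = freq_hz[freq_hz > 0]
    audio_always_present = voiced.size == freq_hz.size
    freq_swing = np.ptp(voiced) if voiced.size else np.inf
    if (loudness_swing < max_loudness_swing_db
            and freq_swing < max_freq_swing_hz
            and audio_always_present):
        return "first_type"
    return "second_type"

# A steady hum: near-constant loudness and frequency over ten frames.
hum_loudness = np.full(10, -30.0)
hum_freq = np.full(10, 120.0)
print(classify_first_or_second(hum_loudness, hum_freq))  # first_type
```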
In some embodiments, the first-type application scene includes a noisy scene or a quiet scene;
the determining, according to at least one of the sound parameters, that the application scene in which the electronic device is located is a first-type application scene includes:
determining, according to at least one of the sound parameters, whether the first-type application scene in which the electronic device is located is a noisy scene or a quiet scene.
In some embodiments, the first-type application scene includes a noisy scene or a quiet scene;
the determining, according to at least one of the sound parameters, that the application scene in which the electronic device is located is a first-type application scene includes at least one of the following:
determining that the application scene is the noisy scene in response to the loudness in the sound parameters being greater than a first predetermined loudness threshold;
determining that the application scene is the quiet scene in response to the loudness being less than or equal to the first predetermined loudness threshold;
determining that the application scene is the noisy scene in response to the peak frequency of the audio spectrum in the sound parameters being greater than a first predetermined frequency; and
determining that the application scene is the quiet scene in response to the peak frequency of the audio spectrum being less than or equal to the first predetermined frequency.
The loudness here may be expressed in decibels.
Of course, in other embodiments the electronic device may also determine whether the first-type application scene is quiet or noisy from the average audio frequency over a predetermined time range. For example, in some embodiments, the determining that the application scene is a first-type application scene includes one of the following: determining that the application scene is noisy in response to the average audio frequency over the predetermined time range being greater than a second predetermined frequency; and determining that the application scene is quiet in response to the average audio frequency being smaller than the second predetermined frequency.
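A minimal sketch of the noisy/quiet decision just described; the two threshold values stand in for the first predetermined loudness threshold and the first predetermined frequency, which the patent does not quantify.

```python
def classify_ambient(loudness_db, spectrum_peak_hz,
                     loudness_threshold_db=-20.0, freq_threshold_hz=500.0):
    """Declare the scene noisy if either the loudness or the peak frequency
    of the audio spectrum exceeds its (illustrative) threshold."""
    if loudness_db > loudness_threshold_db or spectrum_peak_hz > freq_threshold_hz:
        return "noisy"
    return "quiet"

print(classify_ambient(-35.0, 200.0))  # quiet
print(classify_ambient(-5.0, 200.0))   # noisy
```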
In some embodiments, the second-type application scene includes a first-age-stage scene or a second-age-stage scene, wherein the age associated with the first-age-stage scene is lower than that associated with the second-age-stage scene;
the determining, according to at least one of the sound parameters, that the application scene in which the electronic device is located is a second-type application scene includes at least one of the following:
determining that the application scene is the first-age-stage scene in response to the audio frequency in the sound parameters being within a first predetermined frequency threshold interval; and
determining that the application scene is the second-age-stage scene in response to the audio frequency being within a second predetermined frequency threshold interval, wherein the minimum value of the first predetermined frequency threshold interval is greater than the maximum value of the second predetermined frequency threshold interval.
In some embodiments, the persons in the first-age-stage scene are children and the persons in the second-age-stage scene are elderly people. In other embodiments, the persons in the first-age-stage scene are older than a predetermined age and the persons in the second-age-stage scene are at or below that age.
Of course, in other embodiments, the age stages indicated by the second-type application scene may also be a first-age-stage scene, a second-age-stage scene, ..., or an Nth-age-stage scene, where N is an integer greater than or equal to 3.
Of course, in other embodiments, the second-type application scene may also indicate scenes of people of different genders; for example, a female scene or a male scene.
It will be appreciated that the frequency ranges of the voices of people of different genders or different age stages differ; thus, scenes of people of different genders or age stages can be distinguished by different predetermined frequency threshold intervals. Of course, in other embodiments, the frequency threshold ranges of the quiet and noisy scenes in the first-type application scene also differ, so a noisy scene and a quiet scene can likewise be distinguished by different predetermined frequency threshold intervals.
In the embodiment of the present disclosure, scenes of people at different age stages can be determined from the predetermined frequency threshold interval in which the audio frequency of the sound parameters falls; for example, when the audio frequency is within the first predetermined frequency threshold interval, a first-age-stage scene is determined, and when it is within the second predetermined frequency threshold interval, a second-age-stage scene is determined. In this way, the type of second-type application scene can be determined accurately from the predetermined frequency threshold interval in which the audio frequency falls.
In other embodiments, the age stage of the scene may also be determined from the loudness parameter, the peak frequency of the audio spectrum, and/or the average audio frequency over a predetermined time range. For example, the application scene is determined to be a first-age-stage scene in response to the loudness in the sound parameters being less than or equal to a second predetermined loudness threshold, or a second-age-stage scene in response to the loudness being greater than that threshold.
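A sketch of the age-stage decision by frequency interval follows. The interval bounds are illustrative assumptions, chosen only so that, as the text above requires, the minimum of the first interval exceeds the maximum of the second (children's fundamental frequencies are typically higher than adults').

```python
def classify_age_stage(audio_freq_hz,
                       first_interval=(250.0, 400.0),    # assumed: younger voices
                       second_interval=(85.0, 180.0)):   # assumed: older voices
    """Assign an age-stage scene from the interval the audio frequency falls in."""
    (lo1, hi1), (lo2, hi2) = first_interval, second_interval
    assert lo1 > hi2, "first interval must lie entirely above the second"
    if lo1 <= audio_freq_hz <= hi1:
        return "first_age_stage"    # younger speaker
    if lo2 <= audio_freq_hz <= hi2:
        return "second_age_stage"   # older speaker
    return "undetermined"

print(classify_age_stage(300.0))  # first_age_stage
print(classify_age_stage(120.0))  # second_age_stage
```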
As shown in fig. 4, in some embodiments, step S13 includes at least one of the following:
step S131: adjusting, based on the type of application scene in which the electronic device is located, the volume at which the electronic device outputs the first information;
step S132: outputting the first information matched to the type of application scene;
step S133: controlling, based on the type of application scene, the duration for which the first information is output.
In some embodiments, the step 131 includes one of:
Responding to the situation that the application scene where the electronic equipment is located is a noisy scene or a scene of a second age stage, and adjusting the volume of the first information output by the electronic equipment to be larger than or equal to a preset volume threshold value;
and responding to the application scene of the electronic equipment as a quiet scene or a scene of a first age stage, and adjusting the volume of the electronic equipment for outputting the first information to be smaller than the preset volume threshold value.
It can be appreciated that in a noisy scene, the loudness of the collected speech signal is relatively large; in this case, the volume at which the electronic device outputs the first information may be turned up to be greater than or equal to the predetermined volume threshold. Likewise, in a second age stage scene, the user may be an elderly person whose hearing is weaker, so the volume of the first information output by the electronic device may also be increased to be greater than or equal to the predetermined volume threshold. Conversely, in a quiet scene, the loudness of the collected speech signal is relatively small; the volume at which the electronic device outputs the first information may therefore be turned down to less than the predetermined volume threshold. Similarly, in a first age stage scene, the user may be a child with good hearing, so the volume of the first information may be reduced to less than the predetermined volume threshold.
In this way, in the embodiment of the present disclosure, the electronic device may adjust the volume of the first information it outputs based on the type of the application scene, so that the user can listen at a more comfortable volume, improving the user experience.
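A minimal sketch of this volume rule, assuming a 0-100 volume scale and a hypothetical set_volume() callback standing in for the device's actual audio API:

```python
PREDETERMINED_VOLUME_THRESHOLD = 60  # hypothetical value on an assumed 0-100 scale

def adjust_output_volume(scene: str, set_volume) -> None:
    """Raise or lower playback volume around the predetermined threshold."""
    if scene in ("noisy", "second_age_stage"):
        set_volume(max(PREDETERMINED_VOLUME_THRESHOLD, 75))       # >= threshold
    elif scene in ("quiet", "first_age_stage"):
        set_volume(min(PREDETERMINED_VOLUME_THRESHOLD - 15, 45))  # < threshold
```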
In some embodiments, the step S132 includes one of the following:
responding to the situation that the application scene where the electronic equipment is located is a noisy scene, and outputting the first information of at least one of sports programs, action movies, war movies and concerts;
responding to the application scene of the electronic equipment as a quiet scene, and outputting the first information of at least one of a warm film, a reading meeting and an emotion program;
responding to the application scene of the electronic equipment as a scene of a second age stage, and outputting the first information of at least one of a popular film, a story, emotion programs and news programs;
and responding to the application scene of the electronic equipment as a first age stage scene, and outputting the first information of at least one of the juvenile programs and the science and education programs.
The first information that the electronic device is controlled or prompted to output for different application scenes is not limited to the examples above. For example, warm content may be output for a quiet scene, and war or sports content for a noisy scene; for another example, animation or lesson content may be output for a first age stage scene, and news or entertainment content for a second age stage scene.
In the embodiment of the disclosure, the electronic device may be controlled to play the first information matched with the type of the application scene; in this way, different content is output for different application scenes, improving the user experience.
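A hedged sketch of this scene-to-content matching; the table merely mirrors the examples above, and in practice the categories would feed a recommendation service rather than being hard-coded:

```python
SCENE_TO_CONTENT = {
    "noisy": ("sports program", "action movie", "war movie", "concert"),
    "quiet": ("warm film", "reading meeting", "emotion program"),
    "second_age_stage": ("popular film", "story", "emotion program", "news program"),
    "first_age_stage": ("juvenile program", "science and education program"),
}

def matched_first_information(scene: str) -> tuple:
    """Return content categories matched to the detected application scene."""
    return SCENE_TO_CONTENT.get(scene, ())
```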
In some embodiments, the step S133 includes one of:
responding to the situation that the application scene of the electronic equipment is a first age stage scene, and controlling the duration of outputting the first information to be a first duration;
responding to the situation that the application scene of the electronic equipment is a scene of a second age stage, and controlling the duration of outputting the first information to be a second duration; wherein the second time period is longer than the first time period.
For example, if the person in the first age stage scene is a child, the duration for which the electronic device outputs a juvenile program may be controlled to be relatively short, for example the first duration. In this way, the child watches television programs for only a limited time, so that the child's study, eyesight and the like are affected as little as possible.
For another example, if the person in the second age stage scene is an elderly person who has more free time, the duration for which the electronic device outputs video programs may be controlled to be longer, for example the second duration. In this way, the elderly person can spend more time watching video programs, which helps pass the time.
Thus, in the embodiment of the disclosure, the electronic device can determine the duration of outputting the first information according to the type of the application scene where it is located, further meeting users' requirements and improving user satisfaction.
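A minimal sketch of this duration rule; the concrete durations are assumptions, since the text only requires the second duration to be longer than the first:

```python
from typing import Optional

FIRST_DURATION_S = 45 * 60        # hypothetical shorter window for the first age stage
SECOND_DURATION_S = 3 * 60 * 60   # hypothetical longer window for the second age stage

def output_duration_seconds(scene: str) -> Optional[int]:
    """Return the playback duration cap for age-stage scenes, if any."""
    if scene == "first_age_stage":
        return FIRST_DURATION_S
    if scene == "second_age_stage":
        return SECOND_DURATION_S
    return None  # no age-based limit for other scene types
```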
As shown in fig. 5, in some embodiments, the method further comprises:
step S10: performing voice recognition on the voice signal;
the step S12 includes:
step S120: determining the type of an application scene where the electronic equipment is located based on the sound parameter in response to the recognition that the wake-up word exists in the voice signal;
the step S13 includes:
step S130: and controlling the electronic equipment to output the first information responding to the wake-up word based on the type of the application scene where the electronic equipment is located.
In step S10, performing voice recognition on the voice signal may include: collecting the voice signal, and performing speech recognition on the collected voice signal using a speech recognition device.
Here, the wake-up words include a first type wake-up word and a second type wake-up word. The first type wake-up word is used to wake the electronic device to perform the operation of determining, based on the sound parameters, the type of the application scene where it is located; for example, the first type wake-up word may be the voice assistant's name, such as "Xiao Ai Tongxue". The second type wake-up word can likewise wake the electronic device to perform that determination operation, and can additionally indicate the content the electronic device needs to output; for example, the second type wake-up word may be "Xiao Ai Tongxue, please play Romance of the Three Kingdoms".
In the embodiment of the present disclosure, if the wake-up word is a second type wake-up word, the electronic device outputs the first information in response to the wake-up word based on the type of the application scene where it is located. For example, if the application scene is a noisy scene and the second type wake-up word is "play Romance of the Three Kingdoms", the electronic device is controlled to play Romance of the Three Kingdoms at a relatively large volume. For another example, if the application scene is a quiet scene and the second type wake-up word is "play a romance film", the electronic device is controlled to play a romance film at a relatively small volume.
In the embodiment of the disclosure, the electronic device determines the type of the application scene based on the sound parameters only when a wake-up word is recognized in the voice signal; this prevents the electronic device from constantly performing the scene-determination operation and thus saves resources. Moreover, when the wake-up word contains an instruction indicating content to output, the electronic device can be controlled to output that content in a manner adapted to the application scene, further improving the intelligence of the electronic device and meeting users' needs.
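A sketch of this wake-word gate, where recognize_wake_word(), classify_scene() and play() are assumed stand-ins for the device's actual speech stack:

```python
def on_speech(signal, recognize_wake_word, classify_scene, play):
    """Run scene classification only after a wake-up word is recognized."""
    wake = recognize_wake_word(signal)  # e.g. None, or {"request": "..."} / {"request": None}
    if wake is None:
        return  # no wake-up word: skip scene detection entirely, saving resources
    scene = classify_scene(signal)      # determine the application scene type now
    if wake.get("request"):
        # Second-type wake-up word: it also names the content to output.
        play(content=wake["request"], scene=scene)
    # A first-type wake-up word only wakes the device; the scene is kept for later requests.
```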
In other embodiments, the step S12 includes:
and responding to the detection of the triggering operation acting on the electronic equipment, and determining the type of the application scene where the electronic equipment is located based on the sound parameters.
Therefore, in the embodiment of the disclosure, the type of the application scene can be determined based on the sound parameters only upon a user's triggering operation; this likewise prevents the electronic device from constantly performing the scene-determination operation and saves resources.
A specific example is provided below in connection with any of the embodiments described above:
as shown in fig. 6, an embodiment of the present disclosure provides an information processing method, including the steps of:
step S21: collecting voice signals;
in an alternative embodiment, the electronic device collects the speech signal based on a sound collection module.
Step S22: acquiring sound parameters of the voice signals;
in an alternative embodiment, the electronic device extracts the sound parameters from the collected voice signal; wherein the sound parameters include at least one of: a loudness parameter, an audio parameter, and an ultrasonic parameter.
In another alternative embodiment, the sound parameters further include: an audio spectrum.
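As a rough Python sketch of this parameter extraction, assuming a mono PCM buffer; using the RMS level as loudness and a single FFT peak as the dominant frequency are simplifications, and the ultrasonic echo path is omitted:

```python
import numpy as np

def sound_parameters(samples: np.ndarray, sample_rate: int) -> dict:
    """Extract simple proxies for the sound parameters named above."""
    x = samples.astype(np.float64)
    loudness = float(np.sqrt(np.mean(x ** 2)))                # RMS level as a loudness proxy
    spectrum = np.abs(np.fft.rfft(x))                         # audio frequency spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate)
    peak_frequency = float(freqs[int(np.argmax(spectrum))])   # dominant frequency, Hz
    return {"loudness": loudness, "peak_frequency": peak_frequency, "spectrum": spectrum}
```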
Step S23: based on the sound parameters, determining whether the application scene of the electronic equipment is a first type application scene or a second type application scene;
in an optional embodiment, the electronic device determines, based on at least one parameter of the sound parameters, that an application scenario in which the electronic device is located is a first type application scenario or a second type application scenario.
In another optional embodiment, the electronic device determines, based on at least one parameter of the sound parameters, that the application scene where the electronic device is located is a first type application scene or a second type application scene, including at least one of the following (a combined sketch is given after this list):
determining the application scene where the electronic equipment is located as the first type application scene in response to the loudness floating range in the sound parameter being smaller than the preset loudness floating range in the preset time range;
determining the application scene where the electronic equipment is located as the first type of application scene in response to the fact that the floating range of the audio frequency in the sound parameter is smaller than the floating range of the preset frequency in the preset time range;
responding to the continuous existence of the audio frequency in the sound parameter within a preset time range, and determining the application scene where the electronic equipment is located as the first type of application scene;
Determining an application scene where the electronic equipment is located as the first type of application scene based on the fact that the echo intensity returned by the ultrasonic wave in the sound parameter is smaller than a preset signal intensity and/or the signal to noise ratio in the echo intensity is smaller than a preset signal to noise ratio;
determining that the application scene where the electronic equipment is located is the second type of application scene in response to the loudness floating range in the sound parameter being greater than or equal to a preset loudness floating range in a preset time range;
determining an application scene where the electronic equipment is located as the second type of application scene in response to the fact that the floating range of the audio frequency in the sound parameter is larger than or equal to the floating range of the preset frequency in the preset time range;
in response to the absence of audio frequency in at least part of the time period in the preset time range, determining the application scene where the electronic equipment is located as the second type of application scene;
and determining the application scene where the electronic equipment is located as the second type of application scene based on the fact that the echo intensity returned by the ultrasonic wave in the sound parameter is larger than the preset signal intensity and/or the signal to noise ratio in the echo intensity is smaller than the preset signal to noise ratio.
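A combined sketch of the conditions above; for brevity it tests the float-range and audio-continuity conditions conjunctively for the first type, which is one reasonable reading of the "at least one of" lists, and the preset float ranges are assumed values:

```python
import numpy as np

PRESET_LOUDNESS_FLOAT = 6.0     # hypothetical allowed loudness swing
PRESET_FREQUENCY_FLOAT = 50.0   # hypothetical allowed frequency swing, Hz

def coarse_scene_type(loudness_series: np.ndarray, freq_series: np.ndarray) -> str:
    """Steady, continuous audio -> first type; fluctuating or interrupted -> second type."""
    loud_float = float(loudness_series.max() - loudness_series.min())
    freq_float = float(freq_series.max() - freq_series.min())
    audio_continuous = bool(np.all(freq_series > 0.0))  # 0 marks frames with no audio
    if (loud_float < PRESET_LOUDNESS_FLOAT
            and freq_float < PRESET_FREQUENCY_FLOAT
            and audio_continuous):
        return "first_type"
    return "second_type"
```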
Step S241: based on the sound parameters, determining that the first type of application scene in which the electronic equipment is located is a quiet scene or a noisy scene;
In an optional embodiment, the electronic device determines, based on at least one parameter of the sound parameters, that the first type of application scene in which the electronic device is located is a quiet scene or a noisy scene.
In another optional embodiment, the electronic device determines, based on at least one parameter of the sound parameters, that the first type application scene where the electronic device is located is a quiet scene or a noisy scene, including at least one of the following (a sketch is given after this list):
determining that an application scene where the electronic equipment is located is the noisy scene in response to the loudness in the sound parameters being greater than a first predetermined loudness threshold;
determining that the application scene where the electronic equipment is located is the quiet scene in response to the loudness in the sound parameter being less than or equal to the first predetermined loudness threshold;
determining that an application scene where the electronic equipment is located is the noisy scene in response to the peak frequency of the audio frequency spectrum in the sound parameter being greater than a first preset frequency;
and determining that the application scene where the electronic equipment is located is a quiet scene in response to the peak frequency of the audio frequency spectrum in the sound parameter being smaller than or equal to the first preset frequency.
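A sketch of this quiet/noisy split; the threshold values are assumptions standing in for the first predetermined loudness threshold and the first preset frequency, and the two conditions are combined disjunctively as one reading of the list above:

```python
FIRST_LOUDNESS_THRESHOLD = 0.1    # hypothetical RMS threshold
FIRST_PRESET_FREQUENCY = 1000.0   # hypothetical spectral-peak split, Hz

def first_type_subscene(loudness: float, spectrum_peak_hz: float) -> str:
    """Classify a first type application scene as noisy or quiet."""
    if loudness > FIRST_LOUDNESS_THRESHOLD or spectrum_peak_hz > FIRST_PRESET_FREQUENCY:
        return "noisy"
    return "quiet"
```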
Step S242: and determining that the second type of application scene where the electronic equipment is located is a first age stage scene or a second age stage scene based on the sound parameters.
In another embodiment, the determining, based on the sound parameter, that the second type of application scenario in which the electronic device is located is a first age group scenario or a second age group scenario includes at least one of:
responding to the audio frequency in the sound parameter in a first preset frequency threshold interval, and determining an application scene where the electronic equipment is located as a scene of a first age stage;
responding to the audio frequency in the sound parameter in a second preset frequency threshold interval, and determining an application scene where the electronic equipment is located as a scene of a second age stage; wherein the minimum value of the first predetermined frequency threshold interval is greater than the maximum value of the second predetermined frequency threshold interval.
Step S251: in response to the first type application scene where the electronic device is located being a quiet scene, adjusting the volume of the first information output by the electronic device to be less than the predetermined volume threshold, and/or outputting the first information of at least one of a warm film, a reading meeting and an emotion program;
Step S252: in response to the application scene where the electronic device is located being a noisy scene, outputting the first information of at least one of a sports program, an action movie, a war movie and a concert, and/or adjusting the volume of the first information output by the electronic device to be greater than or equal to the predetermined volume threshold;
Step S253: responding to the application scene of the electronic equipment as a first age stage scene, outputting the first information of at least one of the juvenile programs and the science and education programs, and/or controlling the duration of outputting the first information as a first duration;
step S254: responding to the application scene of the electronic equipment as a second age stage scene, outputting the first information of at least one of a popular film, a story, an emotion program and a news program, and/or controlling the duration of outputting the first information as a second duration; wherein the second time period is longer than the first time period.
In the embodiment of the disclosure, the electronic device first determines, from at least one of the sound parameters of the collected voice signal, whether the application scene it is in is a first type or a second type application scene; on that basis, it then determines whether a first type application scene is a quiet scene or a noisy scene, or whether a second type application scene is a first age stage scene or a second age stage scene, again based on at least one of the sound parameters. In this way, the application scene where the electronic device is located can be accurately determined from the sound parameters of the voice signal, and the electronic device can be controlled to output first information matched with that scene; the intelligence of the electronic device and user satisfaction are thereby improved.
Fig. 7 shows an information processing apparatus according to an exemplary embodiment, applied to an electronic device; the apparatus includes:
an acquisition module 41, configured to acquire sound parameters of the acquired voice signal;
a determining module 42, configured to determine, according to the sound parameter, a type of an application scenario in which the electronic device is located;
the processing module 43 is configured to control the electronic device to output the first information based on a type of an application scenario where the electronic device is located.
In some embodiments, the determining module 42 is configured to determine, according to at least one parameter of the sound parameters, whether the application scene in which the electronic device is located is a first type application scene or a second type application scene;
wherein the sound parameters include at least one of: loudness parameters, audio parameters, and ultrasound parameters.
In some embodiments, the determining module 42 is configured to determine, in response to at least one parameter of the sound parameters being within a predetermined threshold range, a first type of application scenario in which the electronic device is located;
or,
the determining module 42 is configured to determine, in response to at least one parameter of the sound parameters being outside the predetermined threshold range, a second type of application scenario in which the electronic device is located.
In some embodiments, the determining module 42 is configured to perform at least one of the following:
determining the application scene where the electronic equipment is located as the first type application scene in response to the loudness floating range in the sound parameter being smaller than the preset loudness floating range in the preset time range;
determining the application scene where the electronic equipment is located as the first type of application scene in response to the fact that the floating range of the audio frequency in the sound parameter is smaller than the floating range of the preset frequency in the preset time range;
responding to the continuous existence of the audio frequency in the sound parameter within a preset time range, and determining the application scene where the electronic equipment is located as the first type of application scene;
and determining the application scene where the electronic equipment is located as the first type of application scene according to the response that the echo intensity returned by the ultrasonic wave in the sound parameter is smaller than the preset signal intensity and/or the signal to noise ratio in the echo intensity is smaller than the preset signal to noise ratio.
In some embodiments, the determining module 42 is configured to perform at least one of the following:
determining that the application scene where the electronic equipment is located is the second type of application scene in response to the loudness floating range in the sound parameter being greater than or equal to a preset loudness floating range in a preset time range;
Determining an application scene where the electronic equipment is located as the second type of application scene in response to the fact that the floating range of the audio frequency in the sound parameter is larger than or equal to the floating range of the preset frequency in the preset time range;
in response to the absence of audio frequency in at least part of the time period in the preset time range, determining the application scene where the electronic equipment is located as the second type of application scene;
and determining the application scene where the electronic equipment is located as the second type of application scene based on the fact that the echo intensity returned by the ultrasonic wave in the sound parameter is larger than the preset signal intensity and/or the signal to noise ratio in the echo intensity is smaller than the preset signal to noise ratio.
In some embodiments, the first type of application scenario includes: a noisy scene or a quiet scene;
the determining module 42 is configured to perform at least one of the following:
determining that an application scene where the electronic equipment is located is the noisy scene in response to the loudness in the sound parameters being greater than a first predetermined loudness threshold;
determining that the application scene where the electronic equipment is located is the quiet scene in response to the loudness in the sound parameter being less than or equal to the first predetermined loudness threshold;
Determining that an application scene where the electronic equipment is located is the noisy scene in response to the peak frequency of the audio frequency spectrum in the sound parameter being greater than a first preset frequency;
and determining that the application scene where the electronic equipment is located is a quiet scene in response to the peak frequency of the audio frequency spectrum in the sound parameter being smaller than or equal to the first preset frequency.
In some embodiments, the second class of application scenarios includes: a first age stage scene or a second age stage scene; wherein the age of the first age stage scene is less than the age of the second age stage scene;
the determining module 42 is configured to determine, in response to the audio frequency in the sound parameter being within a first predetermined frequency threshold interval, that an application scenario in which the electronic device is located is a first age group scenario;
or,
the determining module 42 is configured to determine, in response to the audio frequency in the sound parameter being within a second predetermined frequency threshold interval, that an application scenario in which the electronic device is located is a second age group scenario; wherein the minimum value of the first predetermined frequency threshold interval is greater than the maximum value of the second predetermined frequency threshold interval.
In some embodiments, the processing module 43 is configured to perform at least one of the following:
based on the type of the application scene where the electronic equipment is located, adjusting the volume of the first information output by the electronic equipment;
outputting the first information matched with the type of the application scene where the electronic equipment is located based on the type of the application scene where the electronic equipment is located;
and controlling the duration of outputting the first information based on the type of the application scene where the electronic equipment is located.
In some embodiments, the processing module 43 is configured to adjust a volume of the electronic device outputting the first information to be greater than or equal to a predetermined volume threshold in response to the application scene in which the electronic device is located being a noisy scene or a second age group scene;
or,
the processing module 43 is configured to adjust a volume of the electronic device outputting the first information to be less than the predetermined volume threshold in response to the application scene of the electronic device being a quiet scene or a scene of a first age group.
In some embodiments, the processing module 43 is configured to control, in response to the application scenario in which the electronic device is located being a first age group scenario, a duration of outputting the first information to be a first duration;
Or,
the processing module 43 is configured to control, in response to the application scenario where the electronic device is located being a second age stage scenario, a duration of outputting the first information to be a second duration; wherein the second time period is longer than the first time period.
The specific manner in which the respective modules perform operations in the apparatus of the above embodiments has been described in detail in the embodiments related to the method, and will not be described in detail here.
An embodiment of the present disclosure further provides a server, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: when executing the executable instructions, implement the information processing method according to any embodiment of the present disclosure.
The memory may include various types of storage media, which are non-transitory computer storage media capable of retaining the information stored thereon after the device is powered down.
The processor may be coupled to the memory via a bus or the like, for reading an executable program stored in the memory, for example to implement at least one of the methods shown in Figs. 1 to 6.
Embodiments of the present disclosure also provide a computer-readable storage medium storing an executable program, wherein the executable program, when executed by a processor, implements the information processing method according to any embodiment of the present disclosure, for example at least one of the methods shown in Figs. 1 to 6.
Fig. 8 is a block diagram illustrating an electronic device 800, according to an example embodiment. For example, electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 8, an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 806 provides power to the various components of the electronic device 800. The power component 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect the on/off state of the device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; it may also detect a change in position of the electronic device 800 or of a component thereof, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in its temperature. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including instructions executable by processor 820 of electronic device 800 to perform the above-described method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (22)

1. An information processing method, characterized by being applied to an electronic device, comprising:
acquiring sound parameters of the acquired voice signals; wherein the sound parameters at least comprise: ultrasonic parameters;
determining the type of an application scene where the electronic equipment is located according to the sound parameters; the determining, according to the sound parameter, the type of the application scene where the electronic device is located includes: determining the type of an application scene where the electronic equipment is located based on the sound parameter in response to the recognition that the wake-up word exists in the voice signal; the wake-up word is at least used for waking up the electronic equipment to execute the operation of determining the type of the application scene where the electronic equipment is located based on the sound parameters; the determining, based on the sound parameter, the type of the application scene where the electronic device is located includes: if the application scene of the electronic equipment cannot be determined based on i parameters in the sound parameters, determining the application scene of the electronic equipment according to i+1 parameters in the sound parameters; wherein i is a positive integer greater than or equal to 1;
And controlling the electronic equipment to output first information based on the type of the application scene where the electronic equipment is located.
2. The method of claim 1, wherein determining the type of the application scenario in which the electronic device is located according to the sound parameter comprises:
and determining that the application scene where the electronic equipment is located is a first type application scene or a second type application scene according to at least one parameter in the sound parameters.
3. The method according to claim 2, wherein the determining, according to at least one parameter of the sound parameters, that the application scene in which the electronic device is located is a first type application scene or a second type application scene includes one of:
determining a first type of application scene where the electronic equipment is located in response to at least one parameter in the sound parameters being in a preset threshold range;
and determining a second type of application scene where the electronic equipment is located in response to at least one parameter of the sound parameters being out of the preset threshold range.
4. The method of claim 3, wherein the determining, in response to at least one of the sound parameters being within a predetermined threshold range, a first type of application scenario in which the electronic device is located comprises at least one of:
Determining the application scene where the electronic equipment is located as the first type application scene in response to the loudness floating range in the sound parameter being smaller than the preset loudness floating range in the preset time range;
determining the application scene where the electronic equipment is located as the first type of application scene in response to the fact that the floating range of the audio frequency in the sound parameter is smaller than the floating range of the preset frequency in the preset time range;
responding to the continuous existence of the audio frequency in the sound parameter within a preset time range, and determining the application scene where the electronic equipment is located as the first type of application scene;
and determining the application scene where the electronic equipment is located as the first type of application scene according to the response that the echo intensity returned by the ultrasonic wave in the sound parameter is smaller than the preset signal intensity and/or the signal to noise ratio in the echo intensity is smaller than the preset signal to noise ratio.
5. The method according to claim 3 or 4, wherein the determining, in response to at least one of the sound parameters being outside a predetermined threshold range, a second type of application scenario in which the electronic device is located includes at least one of:
determining that the application scene where the electronic equipment is located is the second type of application scene in response to the loudness floating range in the sound parameter being greater than or equal to a preset loudness floating range in a preset time range;
Determining an application scene where the electronic equipment is located as the second type of application scene in response to the fact that the floating range of the audio frequency in the sound parameter is larger than or equal to the floating range of the preset frequency in the preset time range;
in response to the absence of audio frequency in at least part of the time period in the preset time range, determining the application scene where the electronic equipment is located as the second type of application scene;
and determining the application scene where the electronic equipment is located as the second type of application scene based on the fact that the echo intensity returned by the ultrasonic wave in the sound parameter is larger than the preset signal intensity and/or the signal to noise ratio in the echo intensity is smaller than the preset signal to noise ratio.
6. The method of claim 2, wherein the first type of application scenario comprises: a noisy scene or a quiet scene;
the determining, according to at least one parameter of the sound parameters, that the application scene where the electronic device is located is a first type of application scene includes at least one of the following:
determining that an application scene where the electronic equipment is located is the noisy scene in response to the loudness in the sound parameters being greater than a first predetermined loudness threshold;
determining that the application scene where the electronic equipment is located is the quiet scene in response to the loudness in the sound parameter being less than or equal to the first predetermined loudness threshold;
Determining that an application scene where the electronic equipment is located is the noisy scene in response to the peak frequency of the audio frequency spectrum in the sound parameter being greater than a first preset frequency;
and determining that the application scene where the electronic equipment is located is a quiet scene in response to the peak frequency of the audio frequency spectrum in the sound parameter being smaller than or equal to the first preset frequency.
7. The method according to claim 2, wherein the second type of application scenario comprises: a first age stage scene or a second age stage scene; wherein the age of the first age stage scene is less than the age of the second age stage scene;
the determining, according to at least one parameter of the sound parameters, that the application scene where the electronic equipment is located is a second type application scene includes at least one of the following:
responding to the audio frequency in the sound parameter in a first preset frequency threshold interval, and determining an application scene where the electronic equipment is located as a scene of a first age stage;
responding to the audio frequency in the sound parameter in a second preset frequency threshold interval, and determining an application scene where the electronic equipment is located as a scene of a second age stage; wherein the minimum value of the first predetermined frequency threshold interval is greater than the maximum value of the second predetermined frequency threshold interval.
8. The method according to any one of claims 1 to 4 and 6 to 7, wherein the controlling the electronic device to output the first information based on the type of the application scenario in which the electronic device is located includes at least one of:
based on the type of the application scene where the electronic equipment is located, adjusting the volume of the first information output by the electronic equipment;
outputting the first information matched with the type of the application scene where the electronic equipment is located based on the type of the application scene where the electronic equipment is located;
and controlling the duration of outputting the first information based on the type of the application scene where the electronic equipment is located.
9. The method of claim 8, wherein the adjusting the volume of the first information output by the electronic device based on the type of the application scene in which the electronic device is located comprises one of:
responding to the situation that the application scene where the electronic equipment is located is a noisy scene or a scene of a second age stage, and adjusting the volume of the first information output by the electronic equipment to be larger than or equal to a preset volume threshold value;
and responding to the application scene of the electronic equipment as a quiet scene or a scene of a first age stage, and adjusting the volume of the electronic equipment for outputting the first information to be smaller than the preset volume threshold value.
10. The method of claim 8, wherein the controlling the duration of outputting the first information based on the type of the application scenario in which the electronic device is located includes one of:
responding to the situation that the application scene of the electronic equipment is a first age stage scene, and controlling the duration of outputting the first information to be a first duration;
responding to the situation that the application scene of the electronic equipment is a scene of a second age stage, and controlling the duration of outputting the first information to be a second duration; wherein the second time period is longer than the first time period.
11. An information processing apparatus, characterized by being applied to an electronic device, comprising:
the acquisition module is used for acquiring sound parameters of the acquired voice signals; wherein the sound parameters at least comprise: ultrasonic parameters;
the determining module is used for determining the type of the application scene where the electronic equipment is located according to the sound parameters;
the determining module is specifically configured to determine, based on the sound parameter, a type of an application scenario where the electronic device is located in response to identifying that a wake-up word exists in the voice signal; the wake-up word is at least used for waking up the electronic equipment to execute the operation of determining the type of the application scene where the electronic equipment is located based on the sound parameters; the determining, based on the sound parameter, the type of the application scene where the electronic device is located includes: if the application scene of the electronic equipment cannot be determined based on i parameters in the sound parameters, determining the application scene of the electronic equipment according to i+1 parameters in the sound parameters; wherein i is a positive integer greater than or equal to 1;
And the processing module is used for controlling the electronic equipment to output the first information based on the type of the application scene where the electronic equipment is located.
12. The apparatus of claim 11, wherein
the determining module is configured to determine, according to at least one parameter of the sound parameters, whether an application scene where the electronic device is located is a first type application scene or a second type application scene.
13. The apparatus of claim 12, wherein
the determining module is used for determining a first type application scene where the electronic equipment is located in response to at least one parameter in the sound parameters being in a preset threshold range;
or,
and the determining module is used for determining a second type application scene where the electronic equipment is located in response to at least one parameter of the sound parameters being out of the preset threshold range.
14. The apparatus of claim 13, wherein the determining module is configured to perform at least one of the following:
determining the application scene where the electronic equipment is located as the first type application scene in response to the loudness floating range in the sound parameter being smaller than the preset loudness floating range in the preset time range;
Determining the application scene where the electronic equipment is located as the first type of application scene in response to the fact that the floating range of the audio frequency in the sound parameter is smaller than the floating range of the preset frequency in the preset time range;
responding to the continuous existence of the audio frequency in the sound parameter within a preset time range, and determining the application scene where the electronic equipment is located as the first type of application scene;
and determining the application scene where the electronic equipment is located as the first type of application scene according to the response that the echo intensity returned by the ultrasonic wave in the sound parameter is smaller than the preset signal intensity and/or the signal to noise ratio in the echo intensity is smaller than the preset signal to noise ratio.
15. The apparatus according to claim 13 or 14, wherein the determining module is configured to perform at least one of the following:
determining that the application scene where the electronic equipment is located is the second type of application scene in response to the loudness floating range in the sound parameter being greater than or equal to a preset loudness floating range in a preset time range;
determining an application scene where the electronic equipment is located as the second type of application scene in response to the fact that the floating range of the audio frequency in the sound parameter is larger than or equal to the floating range of the preset frequency in the preset time range;
In response to the absence of audio frequency in at least part of the time period in the preset time range, determining the application scene where the electronic equipment is located as the second type of application scene;
and determining the application scene where the electronic equipment is located as the second type of application scene based on the fact that the echo intensity returned by the ultrasonic wave in the sound parameter is larger than the preset signal intensity and/or the signal to noise ratio in the echo intensity is smaller than the preset signal to noise ratio.
16. The apparatus of claim 12, wherein the first type of application scenario comprises: a noisy scene or a quiet scene;
the determining module is configured to perform at least one of the following:
determining that an application scene where the electronic equipment is located is the noisy scene in response to the loudness in the sound parameters being greater than a first predetermined loudness threshold;
determining that the application scene where the electronic equipment is located is the quiet scene in response to the loudness in the sound parameter being less than or equal to the first predetermined loudness threshold;
determining that an application scene where the electronic equipment is located is the noisy scene in response to the peak frequency of the audio frequency spectrum in the sound parameter being greater than a first preset frequency;
and determining that the application scene where the electronic equipment is located is a quiet scene in response to the peak frequency of the audio frequency spectrum in the sound parameter being smaller than or equal to the first preset frequency.
17. The apparatus of claim 12, wherein the second type of application scenario comprises: a first age stage scene or a second age stage scene; wherein the age of the first age stage scene is less than the age of the second age stage scene;
the determining module is configured to determine, in response to the audio frequency in the sound parameter being within a first predetermined frequency threshold interval, that an application scenario where the electronic device is located is a first age stage scenario;
or,
the determining module is configured to determine, in response to the audio frequency in the sound parameter being within a second predetermined frequency threshold interval, that an application scenario where the electronic device is located is a second age stage scenario; wherein the minimum value of the first predetermined frequency threshold interval is greater than the maximum value of the second predetermined frequency threshold interval.
18. The apparatus of any one of claims 11 to 14 and 16 to 17, wherein the processing module is configured to perform at least one of the following:
based on the type of the application scene where the electronic equipment is located, adjusting the volume of the first information output by the electronic equipment;
outputting the first information matched with the type of the application scene where the electronic equipment is located based on the type of the application scene where the electronic equipment is located;
And controlling the duration of outputting the first information based on the type of the application scene where the electronic equipment is located.
19. The apparatus of claim 18, wherein
the processing module is used for responding to the situation that the application scene where the electronic equipment is located is a noisy scene or a scene of a second age stage, and adjusting the volume of the first information output by the electronic equipment to be larger than or equal to a preset volume threshold value;
or,
and the processing module is used for responding to the situation that the application scene of the electronic equipment is a quiet scene or a scene of a first age stage, and adjusting the volume of the first information output by the electronic equipment to be smaller than the preset volume threshold value.
20. The apparatus of claim 18, wherein
the processing module is used for responding to the situation that the application scene where the electronic equipment is located is a first age stage scene, and controlling the duration of outputting the first information to be a first duration;
or,
the processing module is used for responding to the situation that the application scene where the electronic equipment is located is a second age stage scene, and controlling the duration of outputting the first information to be a second duration; wherein the second time period is longer than the first time period.
21. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: for implementing the information processing method of any one of claims 1-10 when said executable instructions are executed.
22. A computer-readable storage medium, characterized in that the readable storage medium stores an executable program, wherein the executable program, when executed by a processor, implements the information processing method of any one of claims 1 to 10.
CN202110006165.XA 2021-01-05 2021-01-05 Information processing method, information processing device, electronic equipment and storage medium Active CN112866480B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110006165.XA CN112866480B (en) 2021-01-05 2021-01-05 Information processing method, information processing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110006165.XA CN112866480B (en) 2021-01-05 2021-01-05 Information processing method, information processing device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112866480A CN112866480A (en) 2021-05-28
CN112866480B true CN112866480B (en) 2023-07-18

Family

ID=76001655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110006165.XA Active CN112866480B (en) 2021-01-05 2021-01-05 Information processing method, information processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112866480B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113900577B (en) * 2021-11-10 2024-05-07 杭州逗酷软件科技有限公司 Application program control method and device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108920129A (en) * 2018-07-27 2018-11-30 联想(北京)有限公司 Information processing method and information processing system
CN111901055A (en) * 2020-02-14 2020-11-06 中兴通讯股份有限公司 Data transmission method, device, equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120065612A (en) * 2010-12-13 2012-06-21 삼성전자주식회사 Method and apparatus for notifying event of communication terminal in electronic device
CN104618446A (en) * 2014-12-31 2015-05-13 百度在线网络技术(北京)有限公司 Multimedia pushing implementing method and device
CN105959806A (en) * 2016-05-25 2016-09-21 乐视控股(北京)有限公司 Program recommendation method and device
US11232788B2 (en) * 2018-12-10 2022-01-25 Amazon Technologies, Inc. Wakeword detection
CN110995933A (en) * 2019-12-12 2020-04-10 Oppo广东移动通信有限公司 Volume adjusting method and device of mobile terminal, mobile terminal and storage medium
CN111081275B (en) * 2019-12-20 2023-05-26 惠州Tcl移动通信有限公司 Terminal processing method and device based on sound analysis, storage medium and terminal

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108920129A (en) * 2018-07-27 2018-11-30 联想(北京)有限公司 Information processing method and information processing system
CN111901055A (en) * 2020-02-14 2020-11-06 中兴通讯股份有限公司 Data transmission method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN112866480A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
US20170126192A1 (en) Method, device, and computer-readable medium for adjusting volume
WO2021031308A1 (en) Audio processing method and device, and storage medium
US10230891B2 (en) Method, device and medium of photography prompts
US20180054688A1 (en) Personal Audio Lifestyle Analytics and Behavior Modification Feedback
CN104991754A (en) Recording method and apparatus
CN109087650B (en) Voice wake-up method and device
CN111063354B (en) Man-machine interaction method and device
CN107147957B (en) Video broadcasting method and device
CN111696553A (en) Voice processing method and device and readable medium
JP2021509963A (en) Multi-beam selection method and equipment
CN110349578A (en) Equipment wakes up processing method and processing device
CN112185388B (en) Speech recognition method, device, equipment and computer readable storage medium
CN111741394A (en) Data processing method and device and readable medium
US11682412B2 (en) Information processing method, electronic equipment, and storage medium
CN111988704B (en) Sound signal processing method, device and storage medium
CN112866480B (en) Information processing method, information processing device, electronic equipment and storage medium
CN110970015B (en) Voice processing method and device and electronic equipment
CN112509596B (en) Wakeup control method, wakeup control device, storage medium and terminal
CN112489653B (en) Speech recognition method, device and storage medium
CN111108550A (en) Information processing device, information processing terminal, information processing method, and program
CN112185353A (en) Audio signal processing method and device, terminal and storage medium
CN111724783A (en) Awakening method and device of intelligent equipment, intelligent equipment and medium
CN111127846A (en) Door-knocking reminding method, door-knocking reminding device and electronic equipment
CN109788367A (en) A kind of information cuing method, device, electronic equipment and storage medium
CN110868495A (en) Message display method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant