CN109286772B - Sound effect adjusting method and device, electronic equipment and storage medium

Info

Publication number
CN109286772B
CN109286772B (application CN201811026574.0A)
Authority
CN
China
Prior art keywords
audio
electronic equipment
scene
sound effect
electronic device
Prior art date
Legal status
Active
Application number
CN201811026574.0A
Other languages
Chinese (zh)
Other versions
CN109286772A (en)
Inventor
李亚军
冷文华
许钊铵
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811026574.0A
Publication of CN109286772A
Application granted
Publication of CN109286772B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/141 Systems for two-way working between two video terminals, e.g. videophone
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/003 Changing voice quality, e.g. pitch or formants
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439 Processing of audio elementary streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72433 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for voice messaging, e.g. dictaphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a sound effect adjusting method and device, an electronic device and a storage medium, and relates to the technical field of electronic devices. The method is applied to an electronic device and comprises the following steps: acquiring, in real time, a preview image captured by the electronic device while the electronic device is in a video chat mode; identifying the preview image and obtaining the current scene of the electronic device based on the identification result; and adjusting the output sound effect of the electronic device in the video chat mode based on the current scene. Because the output sound effect in the video chat mode is adjusted according to the current scene of the electronic device, the output sound effect is configured automatically and quickly, which improves both the sound effect and the user experience.

Description

Sound effect adjusting method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of electronic devices, and more particularly, to a sound effect adjusting method and apparatus, an electronic device, and a storage medium.
Background
With the development of science and technology, electronic devices have become one of the most common electronic products in people's daily life. Users often listen to music, watch videos and play games on them; however, at present the way an electronic device processes audio data is fixed, so the output sound effect is poor and the user experience suffers.
Disclosure of Invention
In view of the foregoing problems, the present application provides a sound effect adjusting method and apparatus, an electronic device and a storage medium to solve them.
In a first aspect, an embodiment of the present application provides a sound effect adjusting method, which is applied to an electronic device, and the method includes: acquiring a preview image acquired by the electronic equipment in real time in the process that the electronic equipment is in a video chat mode; identifying the preview image, and acquiring the current scene of the electronic equipment based on the identified result; and adjusting the output sound effect of the electronic equipment in the process of being in the video chat mode based on the current scene.
In a second aspect, an embodiment of the present application provides a sound effect adjusting apparatus, which is applied to an electronic device, the apparatus includes: the acquisition module is used for acquiring preview images acquired by the electronic equipment in real time in the process that the electronic equipment is in a video chat mode; the identification module is used for identifying the preview image and acquiring the current scene of the electronic equipment based on the identified result; and the adjusting module is used for adjusting the output sound effect of the electronic equipment in the process of being in the video chat mode based on the current scene.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, the memory being coupled to the processor, the memory storing instructions, and the processor performing the above method when the instructions are executed by the processor.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a program code is stored, and the program code can be called by a processor to execute the above method.
Compared with the prior art, in the scheme provided by the application, while the electronic device is in a video chat mode, a preview image captured by the electronic device is acquired in real time, the preview image is identified, the current scene of the electronic device is obtained based on the identification result, and the output sound effect of the electronic device in the video chat mode is adjusted based on that scene. Adjusting the output sound effect according to the current scene of the electronic device allows the output sound effect to be configured automatically and quickly, which improves both the sound effect and the user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart illustrating a sound effect adjustment method according to an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a step S130 of the sound effect adjusting method according to the embodiment of the present application shown in FIG. 1;
FIG. 3 is a flow chart illustrating a sound effect adjustment method according to another embodiment of the present application;
FIG. 4 is a flowchart illustrating a step S250 of the sound effect adjusting method provided by the embodiment of the application shown in FIG. 3;
FIG. 5 is a flow chart illustrating a sound effect adjustment method according to still another embodiment of the present application;
FIG. 6 is a block diagram illustrating an audio effect adjusting apparatus according to an embodiment of the present disclosure;
FIG. 7 is a block diagram of an electronic device for executing an audio effect adjustment method according to an embodiment of the present application;
fig. 8 illustrates a storage unit for storing or carrying program codes for implementing the sound effect adjustment method according to the embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
At present, users' demands on electronic devices keep increasing, so electronic devices have begun to support audio playback in order to meet those demands and provide convenience. The audio architecture of an electronic device is similar to that of a computer and is handled mainly by a processor and a built-in audio codec (CODEC). Specifically, the processor receives input audio data, converts the audio data into an I2S signal and transmits the I2S signal to the CODEC, and the CODEC converts the I2S signal into an analog signal for playback.
Further, in order to enhance the listening experience, more and more users have expectations on the output sound effect of an electronic device. Here, a sound effect is an effect produced by sound: artificially created or enhanced sound added to a soundtrack to heighten the realism, atmosphere or dramatic impact of a scene, i.e. sound processing that reinforces the artistic or other content of a movie, an electronic game, music or other media. For example, game sound effects include hitting sounds, running sounds, gunshots and the like in a game. With the development of electronic device technology, more and more electronic devices therefore provide sound effect output. At present, however, the sound effect function of an electronic device mainly relies on the user manually selecting and setting a sound effect mode for a certain scene; a conventional, fixed processing mode is used, external factors are not considered, and a fixed sound effect is output regardless of the influence of different scenes.
In view of the above problems, the inventors have found through long-term research the sound effect adjusting method and apparatus, electronic device and storage medium provided in the embodiments of the present application, which adjust the output sound effect of the electronic device in the video chat mode according to the current scene of the electronic device, so as to configure the output sound effect automatically and quickly and to improve the sound effect output and the user experience. The specific sound effect adjusting method is described in detail in the following embodiments.
Examples
Referring to FIG. 1, FIG. 1 is a schematic flow chart of a sound effect adjusting method according to an embodiment of the present application. The method adjusts the output sound effect of the electronic device in the video chat mode according to the current scene of the electronic device, so that the output sound effect is configured automatically and quickly and the sound effect output and the user experience are improved. In a specific embodiment, the sound effect adjusting method is applied to the sound effect adjusting apparatus 200 shown in FIG. 6 and to the electronic device 100 (FIG. 7) equipped with the apparatus 200. The following describes the flow of this embodiment by taking an electronic device as an example; it is understood that the electronic device may be a smart phone, a tablet computer, a wearable electronic device, a vehicle-mounted device, a gateway or the like, which is not specifically limited here. As shown in FIG. 1, the sound effect adjusting method may include the following steps:
step S110: and acquiring a preview image acquired by the electronic equipment in real time in the process that the electronic equipment is in a video chat mode.
In this embodiment, it is first detected whether the electronic device is in a video chat mode. While video chat is running, the application program supporting the video chat may run in the foreground of the electronic device, run in the background, or switch between foreground and background. Specifically, foreground running means the application can generally interact with the user and runs in the foreground of the electronic device, and may be suspended when it is not visible (for example, a game); background running means interaction with the user is very limited and the application stays hidden for most of its lifetime except during configuration (for example, an SMS auto-reply program or an alarm clock program); switching between foreground and background means the application can move between the foreground and the background at will. It can be appreciated that as long as the application running the video chat has not been killed, the electronic device is regarded as being in the video chat mode. In addition, as one way, whether the electronic device is in the video chat mode can be checked through code on the electronic device, which is not described again here.
Further, when the electronic device is determined to be in the video chat mode, the preview image captured by the electronic device is acquired in real time while the electronic device remains in that mode. Specifically, when the electronic device is determined to be in the video chat mode, it is characterized as currently outputting audio or as about to output audio at some later moment; for example, the user of another electronic device that has established a video chat with this electronic device may be inputting voice information to that other device for transmission to this electronic device, to be output as audio now or at some later moment. Therefore, as one way, the preview image captured by the electronic device may be acquired in real time as soon as the electronic device is determined to be in the video chat mode; as another way, whether the electronic device has audio output may be detected, and the preview image may be acquired in real time when audio output is detected, which is not limited here.
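As an illustrative sketch (not part of the original disclosure), the following Python code shows one way the two trigger conditions described above could gate real-time preview capture; all class and method names are hypothetical assumptions.

```python
class PreviewCaptureGate:
    """Hypothetical gate deciding when to capture preview frames in real time."""

    def __init__(self, device):
        self.device = device  # assumed handle to camera, audio and app state

    def should_capture_preview(self, trigger="chat_mode") -> bool:
        if trigger == "chat_mode":
            # Way 1: capture as soon as the video-chat application is alive
            # (foreground, background or switching), i.e. it has not been killed.
            return self.device.is_video_chat_app_alive()
        # Way 2: capture only while the device actually has audio output.
        return (self.device.is_video_chat_app_alive()
                and self.device.is_audio_output_active())

    def frames(self):
        # Yield preview frames in real time for as long as the gate stays open.
        while self.should_capture_preview():
            yield self.device.camera.capture_preview_frame()
```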
In this embodiment, the preview image is acquired in real time in order to avoid inaccuracy, because the captured preview image may change over time and/or with the location of the electronic device while it is in the video chat mode. Further, as one way, the electronic device may include an electronic device body and a camera assembly disposed on the body for capturing the preview image; when the electronic device is in the video chat mode the camera assembly is turned on, so the preview image can be captured in real time through the camera assembly. Optionally, the camera assembly is a front camera or a rear camera, which is not limited here.
Step S120: and identifying the preview image, and acquiring the current scene of the electronic equipment based on the identified result.
In this embodiment, the preview image captured by the electronic device is recognized. As a first way, the preview image may be processed by an image-to-text (OCR) recognition technique to recognize the image information contained in the preview image. The recognition may be performed offline, that is, an image-to-text recognition library is ported to the electronic device and the image-to-text recognition operation is carried out on the picture information according to that library; or the image may be transmitted to a remote image-to-text server for online recognition, in which case the picture information is uploaded to the server, the server performs the recognition according to its internal image-to-text recognition library, and the recognition result is sent back to the electronic device. Further, besides the text information in the image, the returned result may also carry the x coordinate, y coordinate, width, height and the like of each piece of text, which is not described again here.
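A minimal sketch of the offline/online split described above, assuming generic interfaces that are not specified in the patent: recognize text with a local OCR library when one is available, otherwise upload the image to a remote image-to-text server.

```python
def recognize_preview_text(preview_image, local_ocr=None, server=None):
    """Return a list of (text, x, y, width, height) tuples. Interfaces are assumed."""
    if local_ocr is not None:
        # Offline: the image-to-text recognition library has been ported onto the device.
        return local_ocr.recognize(preview_image)
    # Online: send the picture to a remote image-to-text server and parse its reply.
    response = server.post("/ocr", files={"image": preview_image})
    return [(r["text"], r["x"], r["y"], r["w"], r["h"]) for r in response.json()]
```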
As a second way, the electronic device, or a server connected to the electronic device, stores a plurality of images in advance to serve as the recognition basis for preview images captured by the electronic device. It can be understood that, after the preview image is acquired, it may be compared with the plurality of pre-stored images to find an image matching the preview image, so that the preview image can be recognized.
Further, the preview image is recognized to obtain a recognition result, and the current scene of the electronic device is then obtained based on that result. Specifically, when the preview image is recognized through the image-to-text operation, the recognition result is text information such as "supermarket", "subway station" or "home", and the current scene obtained from this result is accordingly "supermarket", "subway station", "home" and so on; when the preview image is recognized through the image comparison operation, the recognition result is image information such as a supermarket image or a subway station image, and the current scene obtained from this result is likewise "supermarket", "subway station", "home" and so on.
As a mode, the current scene at least includes one or a combination of several of a supermarket, a shopping mall, a bookstore, a subway and a street.
As an implementation, the electronic device further includes a sound receiving device disposed on the electronic device body. The sound receiving device receives, in real time, the voice information input by the user while the electronic device is in the video chat mode and transmits the voice information to the processor for analysis, so as to obtain the voice content carried in the voice information. In this embodiment, the electronic device may include at least a voice call microphone, which is mainly used to receive voice information input to the electronic device, for example recording the voice information and converting it into an electrical signal in real time so that the voice information can be analyzed and its voice content obtained.
Further, it is judged whether the voice information includes voice content representing a scene, for example content such as "I am in a supermarket" or "I am in a subway station". It can be understood that, when the voice information includes voice content representing a scene, the current scene of the electronic device may be obtained jointly from that voice content and from the result of the preview image recognition. For example, when the preview image recognition result is a supermarket image and the voice information includes the content "I am in a supermarket", the current scene of the electronic device can be determined to be the supermarket; when the preview image recognition result contains both a supermarket image and a subway station image and the voice information includes the content "I am in a supermarket", the current scene can likewise be confirmed to be the supermarket, thereby improving the accuracy of the obtained current scene.
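The joint decision described above can be sketched as follows; this is an illustrative assumption-laden example (keyword lists and function names are invented), not the patent's implementation: when image recognition yields more than one candidate scene, the voice content breaks the tie.

```python
# Hypothetical keyword lists mapping scenes to speech cues.
SCENE_KEYWORDS = {
    "supermarket": ["supermarket", "checkout"],
    "subway station": ["subway", "metro", "platform"],
    "home": ["home", "living room"],
}

def resolve_scene(image_scene_candidates, voice_content):
    # If the speech mentions one of the candidate scenes, prefer that scene.
    for scene in image_scene_candidates:
        if any(kw in voice_content.lower() for kw in SCENE_KEYWORDS.get(scene, [])):
            return scene
    # Otherwise fall back to the (single or first) image-based candidate.
    return image_scene_candidates[0] if image_scene_candidates else None

# Example: image recognition is ambiguous, speech says "I am in a supermarket".
print(resolve_scene(["subway station", "supermarket"], "I am in a supermarket"))
# -> "supermarket"
```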
Step S130: and adjusting the output sound effect of the electronic equipment in the process of being in the video chat mode based on the current scene.
In this embodiment, after the current scene of the electronic device is determined, the output sound effect of the electronic device in the video chat mode may be adjusted based on that scene. Specifically, the electronic device may raise or lower the sound effect it outputs while in the video chat mode based on the current scene. As one way, when the current scene is an outdoor scene, the location of the electronic device is characterized as noisy; in this case, if the electronic device is in the video chat mode, its output sound effect suffers strong interference from the ambient sound of the current scene, so the output sound effect of the electronic device is raised in order to improve its effect. When the current scene is an indoor scene, the location of the electronic device is characterized as quiet; in this case the output sound effect is only slightly disturbed by the ambient sound, so the output sound effect in the video chat mode can be kept unchanged or slightly lowered, which reduces the power consumption of the electronic device while still ensuring the effect of the output sound effect.
As one way, the electronic device may establish in advance a mapping relationship between scenes and output sound effects, and generate a mapping relationship table stored in the electronic device, as shown in Table 1. The mapping between scenes and output sound effects may be associated manually by the user, automatically by the electronic device, or automatically by a server connected to the electronic device, and so on, which is not limited here; the mapping may have one scene corresponding to one output sound effect, several scenes corresponding to one output sound effect, and so on.
Furthermore, after the current scene is determined, it is compared one by one with the scenes pre-stored in the mapping relationship table to find the matching scene, and the output sound effect corresponding to that scene is then looked up according to the table, so the output sound effect corresponding to the current scene can be obtained. In this way the output sound effect of the electronic device in the video chat mode is adjusted according to the current scene of the electronic device, the output sound effect is configured automatically and quickly, and the sound effect and the user experience are improved.
TABLE 1
Scene | Output sound effect
A1    | B1
A2    | B2
A3    | B3
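As an illustrative counterpart of Table 1 (the concrete values A1/B1 etc. are placeholders in the patent, and the comments below are assumptions), the lookup could be expressed as a simple pre-stored mapping from scene to output sound-effect profile:

```python
SCENE_TO_EFFECT = {
    "A1": "B1",   # e.g. supermarket -> boosted output sound effect (assumed)
    "A2": "B2",   # e.g. subway station -> boosted output sound effect (assumed)
    "A3": "B3",   # e.g. home -> reduced output sound effect (assumed)
}

def output_effect_for(current_scene, default="B_default"):
    # Compare the current scene with the pre-stored scenes and return its effect.
    return SCENE_TO_EFFECT.get(current_scene, default)
```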
Referring to FIG. 2, FIG. 2 is a flowchart of step S130 of the sound effect adjusting method of the embodiment shown in FIG. 1. In this embodiment, the current scene is a first current scene or a second current scene, where the ambient sound level of the first current scene is higher than that of the second current scene. As will be described in detail with respect to the flow shown in FIG. 2, the method may specifically include the following steps:
step S131: and when the current scene is the first current scene, improving the output sound effect of the electronic equipment in the process of being in the video chat mode.
In this embodiment, as a first way, the ambient sound of the first current scene may be obtained through estimation; for example, when the first current scene is a supermarket, the ambient sound of a supermarket may be estimated, where this value can be obtained through statistical analysis of historical data from multiple supermarkets, which is not described again here. As a second way, the sound receiving device of the electronic device may be used to obtain the ambient sound when the current scene of the electronic device is the first current scene; for example, when the first current scene is a supermarket, the sound receiving device may pick up the current ambient sound around the electronic device.
In this embodiment, the first current scene may be an outdoor scene or another noisy scene, that is, the ambient sound of the first current scene is high. Therefore, the output sound effect of the electronic device in the video chat mode may be raised accordingly, so as to weaken the interference of the ambient sound and improve the effect of the output sound effect.
Step S132: and when the current scene is the second current scene, reducing the output sound effect of the electronic equipment in the process of the video chat mode.
Similarly, as a first way, the ambient sound of the second current scene may be obtained through estimation; for example, when the second current scene is the home, the ambient sound of the home may be estimated, where this value can be obtained through statistical analysis of historical data of the home, which is not described again here. As a second way, the sound receiving device of the electronic device may obtain the ambient sound when the current scene of the electronic device is the second current scene; for example, when the second current scene is at home, the sound receiving device may pick up the current ambient sound around the electronic device.
In this embodiment, the ambient sound of the first current scene is higher than that of the second current scene; that is, the second current scene may be an indoor scene or another quiet scene with low ambient sound. Therefore, the output sound effect of the electronic device in the video chat mode is lowered accordingly, so as to reduce the power consumption of the electronic device.
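A minimal sketch of steps S131 and S132, under stated assumptions (the gain values and the set of noisy scenes are invented for illustration and do not come from the patent):

```python
NOISY_SCENES = {"supermarket", "subway station", "street"}  # assumed first current scenes

def adjust_output_effect(current_scene: str, current_gain_db: float) -> float:
    if current_scene in NOISY_SCENES:
        return current_gain_db + 6.0   # step S131: raise output to mask ambient noise
    return current_gain_db - 3.0       # step S132: quiet scene, lower output to save power
```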
In the sound effect adjusting method provided by this embodiment of the application, while the electronic device is in the video chat mode, the preview image captured by the electronic device is acquired in real time, the preview image is recognized, the current scene of the electronic device is obtained based on the recognition result, and the output sound effect of the electronic device in the video chat mode is adjusted based on that scene. Adjusting the output sound effect according to the current scene of the electronic device allows the output sound effect to be configured automatically and quickly, and improves the sound effect output and the user experience.
Referring to fig. 3, fig. 3 is a schematic flow chart illustrating a sound effect adjusting method according to another embodiment of the present application. The sound effect adjusting method is applied to the electronic equipment, wherein in the embodiment, the electronic equipment is provided with a plurality of audio channels, and each audio channel in the plurality of audio channels corresponds to different sound effect processing algorithms. As will be explained in detail with respect to the flow shown in fig. 3, the method may specifically include the following steps:
step S210: and acquiring a preview image acquired by the electronic equipment in real time in the process that the electronic equipment is in a video chat mode.
For detailed description of step S210, please refer to step S110, which is not described herein again.
Step S220: and identifying the preview image and extracting a plurality of sub preview images in the preview image.
It can be understood that the preview image can be regarded as being composed of a plurality of sub preview images, and therefore, in the present embodiment, after the preview image is acquired, the preview image can be identified and the plurality of sub preview images in the preview image can be extracted. For example, when the preview image includes a supermarket, a blue sky, a white cloud, a tree, a pedestrian, a vehicle, and the like, then the supermarket, the blue sky, the white cloud, the tree, the pedestrian, the vehicle, and the like are all sub preview images of the preview image, and therefore, a supermarket sub-image, a blue sky sub-image, a white cloud sub-image, a tree sub-image, a pedestrian sub-image, a vehicle sub-image, and the like can be obtained by extracting the sub preview images of the preview image, which is not limited herein.
Step S230: and judging whether the plurality of sub preview images comprise target sub preview images representing scenes or not.
Further, after the plurality of sub preview images are acquired, they are recognized in order to judge whether they include a target sub preview image representing a scene. Specifically, the electronic device may create and store in advance a correspondence between images and scenes, that is, a certain image corresponds to a certain scene, a certain number of images correspond to a certain scene, some images correspond to no scene, and so on. After the plurality of sub preview images are obtained, they may be compared with the stored scene images to determine whether they include an image corresponding to a scene. It can be understood that, when the plurality of sub preview images include a sub preview image matching an image of a corresponding scene, they include a target sub preview image representing a scene; when they do not include such a sub preview image, they are characterized as not including a target sub preview image representing a scene.
Step S240: and when the plurality of sub preview images comprise target sub preview images representing scenes, acquiring the current scene of the electronic equipment based on the target sub preview images.
In this embodiment, when it is determined that the plurality of sub preview images include a target sub preview image representing a scene, a current scene where the electronic device is located is acquired based on the target sub preview image. For example, when the plurality of target sub-preview images include a sub-preview image representing a supermarket, the current scene where the electronic device is located may be determined as the supermarket; when the plurality of target sub-preview images include a sub-preview image representing a subway station, the current scene where the electronic device is located can be determined as the subway station.
As a mode, when the plurality of sub preview images include at least two sub preview images representing scenes, for example, when the plurality of sub preview images include a sub preview image representing a supermarket and a sub preview image representing a subway station at the same time, a foreground image and a background image in the preview image are acquired, and the foreground image is taken as a target sub preview image. For example, when the sub preview image representing the supermarket in the preview image is a foreground image and the sub preview image representing the subway station is a background image, determining the sub preview image representing the supermarket as a target sub preview image, and obtaining the current scene where the electronic equipment is located based on the target sub preview image as the supermarket; when the sub preview image representing the subway station in the preview image is a foreground image and the sub preview image representing the supermarket is a background image, determining the sub preview image representing the subway station as a target sub preview image, and obtaining the current scene where the electronic equipment is located based on the target sub preview image as the subway station.
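The following is a hedged sketch of steps S220 to S240 with assumed helper functions (segmentation, scene labelling and foreground detection are not specified in the patent): extract sub preview images, keep those that match a scene, and prefer the foreground one when several candidates exist.

```python
def current_scene_from_preview(preview_image, segment, scene_label, is_foreground):
    """All four arguments are assumed callables supplied by the caller."""
    sub_images = segment(preview_image)                      # e.g. supermarket, sky, trees...
    scene_subs = [s for s in sub_images if scene_label(s)]   # target sub preview images
    if not scene_subs:
        return None                                          # no scene recognised
    if len(scene_subs) == 1:
        return scene_label(scene_subs[0])
    # At least two candidate scene sub images: take the foreground one as the target.
    foreground = [s for s in scene_subs if is_foreground(s, preview_image)]
    target = foreground[0] if foreground else scene_subs[0]
    return scene_label(target)
```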
Step S250: selecting at least one audio lane from the plurality of audio lanes as a target audio lane based on the current scene.
In this embodiment, the electronic device is provided with a plurality of audio channels, each corresponding to a different sound effect processing algorithm. Specifically, based on the existing framework of the audio system, the electronic device separately adds different audio channels for different scenes, where each audio channel is a path along which a different sound travels. As one way, after the current scene of the electronic device is obtained, at least one audio channel corresponding to that scene is obtained; it can be understood that the different sound effect processing algorithms corresponding to the audio channels may produce different sound effects when the audio data transmitted through them is output, such as different volumes, different loudness, different sound types, and so on. Therefore, in this embodiment, when the current scene of the electronic device is determined to be a supermarket and a high output volume is required, an audio channel capable of outputting high volume is selected from the plurality of audio channels as the target audio channel; when the current scene is determined to be at home and a low output volume is required, an audio channel capable of outputting low volume is selected as the target audio channel. Of course, the candidate audio channels may include all audio channels, only the audio channels corresponding to the application program supporting video chat, only a single audio channel corresponding to that application program, and so on.
Further, the type of the application program used for the video chat while the electronic device is in the video chat mode, such as WeChat or QQ, can be obtained. After the type of the application program is obtained, at least one audio channel corresponding to that type is obtained. Specifically, the audio data of the application program is analyzed to obtain the sound types in the audio data, and at least one audio channel is then selected from the plurality of audio channels as the target audio channel according to the analysis result; that is, one application program may correspond to multiple audio channels. As one way, when the audio data contains only one kind of sound, only one audio channel may be selected as the target audio channel; when it contains several kinds of sounds, several audio channels may be selected as target audio channels. For example, when the application program includes sound types such as hitting sounds, running sounds and gunshots, audio channels corresponding to the hitting sounds, running sounds and gunshots can be selected from the plurality of audio channels as target audio channels to transmit them respectively, so that different audio data are processed by different sound effect processing algorithms and a better processing effect is obtained.
Of course, in this embodiment, each application program may also have its own dedicated audio channel, that is, one application program corresponds to one audio channel. In that case, after the type of the application program is obtained, the audio channel corresponding to that application program is obtained according to the type and used as the target audio channel; the target audio channel may, of course, still run different sound effect processing algorithms for different scenes to obtain different output sound effects.
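An illustrative sketch of step S250 under assumptions (the channel names and the scene-to-channel choices below are invented, not taken from the patent): pick one or more target audio channels based on the current scene and on the sound types found in the application's audio data.

```python
# Hypothetical channel catalogue: some channels are volume-oriented, some per sound type.
AUDIO_CHANNELS = {
    "high_volume": "channel tuned for loud output (noisy scenes)",
    "low_volume":  "channel tuned for quiet output (quiet scenes)",
    "hit": "per-sound-type channel",
    "footstep": "per-sound-type channel",
}

def select_target_channels(current_scene, sound_types):
    targets = []
    # Scene-driven choice: noisy scene -> high-volume channel, otherwise low-volume.
    targets.append("high_volume" if current_scene in {"supermarket", "subway station"}
                   else "low_volume")
    # Sound-type-driven choice: add one channel per recognised sound type, when available.
    targets.extend(t for t in sound_types if t in AUDIO_CHANNELS)
    return targets
```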
Referring to FIG. 4, FIG. 4 is a flowchart illustrating step S250 of the sound effect adjusting method according to the embodiment shown in FIG. 3 of the present application. As will be explained in detail with respect to the flow shown in FIG. 4, the method may specifically include the following steps:
step S251: and determining a current audio channel corresponding to the current scene according to a preset relation table, wherein the preset relation table comprises the corresponding relation between the scene and the audio channel.
As an embodiment, the electronic device may establish a mapping relationship between the scene and the audio channel in advance, and generate a mapping relationship table to be stored in the electronic device, as shown in table 2. The mapping relationship between the scene and the audio channel may be associated manually by a user, may be associated automatically by an electronic device, or may be associated automatically by a server, and the like, which is not limited herein.
Further, after a current scene is determined, the current scene is compared with a plurality of scenes pre-stored in a mapping relation table one by one to obtain a scene matched with the current scene, and an audio channel corresponding to the scene is searched according to the mapping relation table, so that the audio channel corresponding to the current scene can be obtained.
TABLE 2
Scene | Audio channel
A1    | C1
A2    | C2
A3    | C3
Step S252: determining the current audio path as the target audio path.
Step S260: and transmitting the output sound effect of the electronic equipment in the process of the video chat mode through the target audio channel.
It can be understood that the target audio channel includes at least one audio channel. When there is one audio channel, the original audio data is processed through that channel, and different sound effect processing algorithms are run for different scenes to obtain the corresponding output sound effect. When there are multiple audio channels, each channel corresponds to an independent sound effect processing algorithm, so that when the output sound effect of the electronic device in the video chat mode is transmitted through the target audio channels, the original audio data can be processed differently to obtain output sound effects with different sounds, and a better processing effect is obtained.
In the sound effect adjusting method provided by another embodiment of the application, while the electronic device is in the video chat mode, the preview image captured by the electronic device is acquired in real time, the preview image is recognized and a plurality of sub preview images in the preview image are extracted, and it is judged whether the plurality of sub preview images include a target sub preview image representing a scene. When they do, the current scene of the electronic device is obtained based on the target sub preview image, at least one audio channel is selected from the plurality of audio channels as the target audio channel based on the current scene, and the output sound effect of the electronic device in the video chat mode is transmitted through the target audio channel. Compared with the sound effect adjusting method shown in FIG. 1, this embodiment obtains the current scene of the electronic device by extracting the sub preview images from the preview image, which improves the speed and accuracy with which the current scene is obtained; in addition, this embodiment sets up multiple audio channels with different sound effect processing algorithms, so that the output sound effect can be adjusted conveniently and quickly and its effect improved.
Referring to fig. 5, fig. 5 is a schematic flow chart illustrating a sound effect adjusting method according to still another embodiment of the present application. The sound effect adjusting method is applied to the electronic equipment, wherein in the embodiment, the electronic equipment is provided with a plurality of audio channels, and each audio channel in the plurality of audio channels corresponds to different sound effect processing algorithms. As will be explained in detail with respect to the flow shown in fig. 5, the method may specifically include the following steps:
step S310: and acquiring a preview image acquired by the electronic equipment in real time in the process that the electronic equipment is in a video chat mode.
Step S320: and identifying the preview image and extracting a plurality of sub preview images in the preview image.
Step S330: and judging whether the plurality of sub preview images comprise target sub preview images representing scenes or not.
For the detailed description of steps S310 to S330, refer to steps S210 to S230, which are not described herein again.
Step S340: and when the plurality of sub preview images comprise target sub preview images representing scenes, acquiring the current position information of the electronic equipment.
In this embodiment, the target sub preview image may include several sub preview images each representing a scene; for example, it may include a supermarket image and a subway station image at the same time. It can be understood that the scene of the electronic device cannot be judged from such target sub preview images alone. Therefore, as one way, the current position information of the electronic device is obtained, where the current position information may be obtained through positioning, for example through a Location Based Service (LBS) and/or a Global Positioning System (GPS). It can be understood that, since the position of the electronic device may change at any time, the position information is obtained in real time in this embodiment, so that when the position of the electronic device changes, the LBS and/or the GPS can obtain the current position information in real time according to that change.
Step S350: and determining a scene corresponding to the current position information based on the current position information.
Further, after the current position information is obtained, the scene corresponding to that position information may be determined based on it; specifically, the location indicated by the current position information is obtained, and the scene at that location is obtained.
Step S360: and when the scene corresponding to the current position information is consistent with the scene represented by the target sub-preview image, acquiring the current scene of the electronic equipment based on the target sub-preview image and/or the current position information.
Further, the scene corresponding to the current position is compared with the scene represented by the target sub preview image to determine whether the two are consistent. For example, if the scene corresponding to the current position is "supermarket" and the scene represented by the target sub preview image is "supermarket", the two are consistent; if the scene corresponding to the current position is "supermarket" and the scene represented by the target sub preview image is "subway station", the two are inconsistent.
In this embodiment, when the scene corresponding to the current position is consistent with the scene represented by the target sub preview image, the scene represented by the target sub preview image is the scene in which the electronic device is located, that is, it can be regarded as the scene in which the user is located. Therefore, the current scene of the electronic device can be obtained based on the target sub preview image and/or the current position information, so as to reduce misjudgment.
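A sketch of steps S340 to S360 with assumed positioning helpers (the `locate` and `scene_at` callables stand in for LBS/GPS lookups that the patent does not detail): the location-derived scene is only trusted when it agrees with the scene represented by the target sub preview image.

```python
def confirm_scene(image_scene, locate, scene_at):
    position = locate()                  # step S340: current position, e.g. via LBS and/or GPS
    location_scene = scene_at(position)  # step S350: scene corresponding to that position
    if location_scene == image_scene:
        return image_scene               # step S360: consistent, accept as current scene
    return None                          # inconsistent: scene cannot be confirmed yet
```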
Step S370: selecting at least one audio lane from the plurality of audio lanes as a target audio lane based on the current scene.
Step S380: and transmitting the output sound effect of the electronic equipment in the process of the video chat mode through the target audio channel.
For detailed description of steps S370 to S380, refer to steps S250 to S260, which are not described herein again.
In the sound effect adjusting method provided by yet another embodiment of the application, while the electronic device is in the video chat mode, the preview image captured by the electronic device is acquired in real time, the preview image is recognized and a plurality of sub preview images in it are extracted, and it is judged whether the plurality of sub preview images include a target sub preview image representing a scene. When they do, the current position information of the electronic device is obtained and the scene corresponding to that position information is determined; when the scene corresponding to the current position information is consistent with the scene represented by the target sub preview image, the current scene of the electronic device is obtained based on the target sub preview image and/or the current position information, and at least one audio channel is selected from the plurality of audio channels as the target audio channel based on the current scene. Compared with the sound effect adjusting method shown in FIG. 3, this embodiment obtains the current scene from the sub image and the position information together, so as to improve the accuracy of the current scene.
Referring to fig. 6, fig. 6 is a block diagram illustrating a sound effect adjusting apparatus 200 according to an embodiment of the present disclosure. The sound effect adjusting apparatus 200 is applied to the electronic device. As will be described below with reference to the block diagram shown in fig. 6, the sound effect adjusting apparatus 200 includes: an obtaining module 210, an identifying module 220, and an adjusting module 230, wherein:
the obtaining module 210 is configured to obtain a preview image acquired by the electronic device in real time when the electronic device is in a video chat mode.
The identifying module 220 is configured to identify the preview image, and obtain a current scene where the electronic device is located based on an identified result. Further, the identifying module 220 includes: a voice information acquisition sub-module, a voice information judgment sub-module, a current scene acquisition sub-module, a sub preview image extraction sub-module and a sub preview image judgment sub-module, wherein:
and the voice information acquisition submodule is used for acquiring the voice information input into the electronic equipment in real time in the process that the electronic equipment is in the video chat mode.
And the voice information judgment submodule is used for identifying the voice information and judging whether the voice information comprises voice content representing a scene.
And the current scene obtaining sub-module is used for obtaining the current scene of the electronic equipment based on the voice content and the recognized result when the voice information comprises the voice content representing the scene.
And the sub preview image extraction sub-module is used for identifying the preview image and extracting a plurality of sub preview images in the preview image.
And the sub preview image judgment sub-module is used for judging whether the plurality of sub preview images comprise target sub preview images representing scenes.
The current scene obtaining sub-module is further configured to obtain a current scene where the electronic device is located based on the target sub-preview image when the plurality of sub-preview images include a target sub-preview image representing a scene. Further, the current scene acquisition sub-module includes: a location information acquisition unit, a scene determination unit, and a scene acquisition unit, wherein:
and the position information acquisition unit is used for acquiring the current position information of the electronic equipment.
And the scene determining unit is used for determining a scene corresponding to the current position information based on the current position information.
And the scene acquiring unit is used for acquiring the current scene of the electronic equipment based on the target sub preview image and/or the current position information when the scene corresponding to the current position information is consistent with the scene represented by the target sub preview image.
An adjusting module 230, configured to adjust an output sound effect of the electronic device in the process of being in the video chat mode based on the current scene. Further, the electronic device is provided with a plurality of audio channels, each of the plurality of audio channels corresponds to a different sound effect processing algorithm, and the adjusting module 230 includes: a selection sub-module and a transmission sub-module, wherein:
The selection sub-module is configured to select, based on the current scene, at least one audio channel from the plurality of audio channels as a target audio channel. Further, the selection sub-module includes: a determination unit, wherein:
The determination unit is configured to determine a current audio channel corresponding to the current scene according to a preset relation table, where the preset relation table includes the correspondence between scenes and audio channels.
The determination unit is further configured to determine the current audio channel as the target audio channel.
The transmission sub-module is configured to transmit, through the target audio channel, the output sound effect of the electronic device while it is in the video chat mode.
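The following sketch illustrates this selection step: the current scene is looked up in a preset relation table to find candidate audio channels, and, as the claims further describe, the selection is narrowed to a single channel when the captured audio contains only one kind of sound and widened to several channels when it contains several kinds. The table contents and channel identifiers are assumptions for the example, not values fixed by this disclosure.

```python
# Illustrative sketch of target-audio-channel selection via a preset
# relation table; the table and channel names below are assumed examples.

SCENE_TO_CHANNELS = {
    "subway":        ["noise_suppressed_voice"],
    "street":        ["noise_suppressed_voice", "wind_reduction"],
    "supermarket":   ["voice_enhanced"],
    "shopping mall": ["voice_enhanced", "music_preserving"],
}

def select_target_channels(current_scene: str, sound_kinds: set) -> list:
    # Candidate channels for the scene, from the preset relation table.
    candidates = SCENE_TO_CHANNELS.get(current_scene, ["default"])
    if len(sound_kinds) <= 1:
        # Only one kind of sound in the audio data: one target channel.
        return candidates[:1]
    # Several kinds of sound (e.g. speech plus background music): use
    # several channels, each with its own sound effect processing algorithm.
    return candidates

print(select_target_channels("street", {"speech", "traffic"}))
```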
Further, the current scene is a first current scene or a second current scene, where the ambient sound level of the first current scene is higher than that of the second current scene, and the adjusting module 230 further includes: a raising sub-module and a lowering sub-module, wherein:
The raising sub-module is configured to raise the output sound effect of the electronic device while it is in the video chat mode when the current scene is the first current scene.
The lowering sub-module is configured to lower the output sound effect of the electronic device while it is in the video chat mode when the current scene is the second current scene.
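A compact sketch of this raise/lower behaviour follows. Treating the raise and lower operations as gain offsets, as well as the scene groupings and offset values used here, are assumptions made for the example rather than parameters specified by this disclosure.

```python
# Illustrative sketch: raise the output in noisy ("first") scenes and lower
# it in quiet ("second") scenes. Scene groups and gain offsets are assumed.

NOISY_SCENES = {"subway", "street", "shopping mall"}   # first current scenes
QUIET_SCENES = {"bookstore"}                            # second current scenes

def adjust_output_gain(current_scene: str, base_gain_db: float) -> float:
    if current_scene in NOISY_SCENES:
        return base_gain_db + 6.0   # raise the output sound effect
    if current_scene in QUIET_SCENES:
        return base_gain_db - 6.0   # lower the output sound effect
    return base_gain_db             # unrecognized scene: leave unchanged

print(adjust_output_gain("subway", 0.0))   # -> 6.0
```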
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working processes of the apparatus and modules described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
The sound effect adjusting apparatus includes an obtaining module, an identifying module, and an adjusting module. The obtaining module acquires, in real time, the preview image captured by the electronic device while the electronic device is in the video chat mode; the identifying module recognizes the preview image and obtains, based on the recognition result, the current scene where the electronic device is located; and the adjusting module adjusts, based on the current scene, the output sound effect of the electronic device while it is in the video chat mode. The output sound effect is thus adjusted according to the current scene where the electronic device is located, so that it is configured automatically and quickly, which improves the output sound effect and the user experience.
Referring to fig. 7, a block diagram of an electronic device 100 according to an embodiment of the present disclosure is shown. The electronic device 100 may be a smart phone, a tablet computer, an electronic book reader, or another electronic device capable of running applications. The electronic device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, and one or more applications, where the one or more applications may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more applications being configured to perform the method described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. The processor 110 connects various parts of the electronic device 100 using various interfaces and lines, and performs various functions of the electronic device 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 120 and by invoking the data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the foregoing method embodiments, and the like. The data storage area may store data created by the electronic device 100 during use, such as a phonebook, audio and video data, and chat log data.
Referring to fig. 8, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable medium 300 has stored therein a program code that can be called by a processor to execute the method described in the above-described method embodiments.
The computer-readable storage medium 300 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 300 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 300 has storage space for program code 310 that performs any of the method steps described above. The program code can be read from or written into one or more computer program products. The program code 310 may, for example, be compressed in a suitable form.
To sum up, the sound effect adjusting method, apparatus, electronic device, and storage medium provided by the embodiments of the present application acquire, in real time, the preview image captured by the electronic device while the electronic device is in the video chat mode, recognize the preview image, obtain the current scene where the electronic device is located based on the recognition result, and adjust, based on the current scene, the output sound effect of the electronic device while it is in the video chat mode. The output sound effect is thereby adjusted according to the current scene where the electronic device is located, so that it is configured automatically and quickly, which improves the output sound effect and the user experience.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (9)

1. A sound effect adjusting method, applied to an electronic device, wherein the electronic device is provided with a plurality of audio channels, each of the plurality of audio channels corresponding to a different sound effect processing algorithm, and the method comprises:
acquiring, in real time, a preview image captured by the electronic device while the electronic device is in a video chat mode;
identifying the preview image, and obtaining a current scene where the electronic device is located based on the identification result;
selecting at least one audio channel from the plurality of audio channels based on the current scene;
acquiring audio data of the electronic device in the video chat mode, and obtaining the kinds of sound in the audio data;
when the audio data includes only one kind of sound, selecting only one of the at least one audio channel as a target audio channel, and when the audio data includes a plurality of kinds of sound, selecting a plurality of audio channels from the at least one audio channel as target audio channels; and
transmitting, through the target audio channel, the output sound effect of the electronic device while it is in the video chat mode.
2. The method according to claim 1, wherein selecting only one of the at least one audio channel as a target audio channel when the audio data includes only one kind of sound, and selecting a plurality of audio channels from the at least one audio channel as target audio channels when the audio data includes a plurality of kinds of sound, comprises:
when the audio data includes only one kind of sound, determining, according to a preset relation table, only one audio channel corresponding to the current scene from the at least one audio channel as a current audio channel, and when the audio data includes a plurality of kinds of sound, determining, according to the preset relation table, a plurality of audio channels corresponding to the current scene from the at least one audio channel as current audio channels, wherein the preset relation table includes the correspondence between scenes and audio channels; and
determining the current audio channel as the target audio channel.
3. The method of claim 1, further comprising:
acquiring, in real time, voice information input into the electronic device while the electronic device is in the video chat mode;
recognizing the voice information, and determining whether the voice information includes voice content representing a scene; and
when the voice information includes voice content representing a scene, obtaining the current scene where the electronic device is located based on the voice content and the recognition result.
4. The method of claim 1, wherein identifying the preview image and obtaining the current scene where the electronic device is located based on the identification result comprises:
identifying the preview image, and extracting a plurality of sub preview images from the preview image;
determining whether the plurality of sub preview images include a target sub preview image representing a scene; and
when the plurality of sub preview images include a target sub preview image representing a scene, obtaining the current scene where the electronic device is located based on the target sub preview image.
5. The method of claim 4, further comprising:
acquiring current position information of the electronic equipment;
determining a scene corresponding to the current position information based on the current position information;
and when the scene corresponding to the current position information is consistent with the scene represented by the target sub-preview image, acquiring the current scene of the electronic equipment based on the target sub-preview image and/or the current position information.
6. The method according to any one of claims 1 to 5, wherein the current scene comprises at least one of a supermarket, a shopping mall, a bookstore, a subway, and a street, or a combination thereof.
7. A sound effect adjusting apparatus, applied to an electronic device, wherein the electronic device is provided with a plurality of audio channels, each of the plurality of audio channels corresponding to a different sound effect processing algorithm, and the apparatus comprises:
an obtaining module, configured to acquire, in real time, a preview image captured by the electronic device while the electronic device is in a video chat mode;
an identifying module, configured to identify the preview image and obtain a current scene where the electronic device is located based on the identification result; and
an adjusting module, configured to select at least one audio channel from the plurality of audio channels based on the current scene, acquire audio data of the electronic device in the video chat mode, obtain the kinds of sound in the audio data, select only one of the at least one audio channel as a target audio channel when the audio data includes only one kind of sound, select a plurality of audio channels from the at least one audio channel as target audio channels when the audio data includes a plurality of kinds of sound, and transmit, through the target audio channel, the output sound effect of the electronic device while it is in the video chat mode.
8. An electronic device, comprising a memory and a processor, the memory being coupled to the processor and storing instructions that, when executed by the processor, cause the processor to perform the method of any one of claims 1 to 6.
9. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 6.
CN201811026574.0A 2018-09-04 2018-09-04 Sound effect adjusting method and device, electronic equipment and storage medium Active CN109286772B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811026574.0A CN109286772B (en) 2018-09-04 2018-09-04 Sound effect adjusting method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811026574.0A CN109286772B (en) 2018-09-04 2018-09-04 Sound effect adjusting method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109286772A CN109286772A (en) 2019-01-29
CN109286772B true CN109286772B (en) 2021-03-12

Family

ID=65183952

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811026574.0A Active CN109286772B (en) 2018-09-04 2018-09-04 Sound effect adjusting method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109286772B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110047487B (en) * 2019-06-05 2022-03-18 广州小鹏汽车科技有限公司 Wake-up method and device for vehicle-mounted voice equipment, vehicle and machine-readable medium
CN113129917A (en) * 2020-01-15 2021-07-16 荣耀终端有限公司 Speech processing method based on scene recognition, and apparatus, medium, and system thereof
CN113556604B (en) * 2020-04-24 2023-07-18 深圳市万普拉斯科技有限公司 Sound effect adjusting method, device, computer equipment and storage medium
CN113573143B (en) * 2021-07-21 2023-09-19 维沃移动通信有限公司 Audio playing method and electronic equipment
CN117501363A (en) * 2022-05-30 2024-02-02 北京小米移动软件有限公司 Sound effect control method, device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202889458U (en) * 2012-11-02 2013-04-17 姚西 Automatic call volume regulation mobile phone based on environmental noise
CN105244048A (en) * 2015-09-25 2016-01-13 小米科技有限责任公司 Audio play control method and apparatus
CN106375590A (en) * 2016-09-28 2017-02-01 珠海格力电器股份有限公司 Volume adjustment method and device for smart terminal
CN106464939A (en) * 2016-07-28 2017-02-22 北京小米移动软件有限公司 Method and device for playing sound effect
CN107231471A (en) * 2017-05-15 2017-10-03 努比亚技术有限公司 In Call method of adjustment, mobile terminal and storage medium
CN107395873A (en) * 2017-06-30 2017-11-24 广东欧珀移动通信有限公司 volume adjusting method, device, storage medium and terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008199449A (en) * 2007-02-15 2008-08-28 Funai Electric Co Ltd Television receiver
CN101552010B (en) * 2009-04-30 2011-09-14 华为技术有限公司 Audio treating method and audio treating device


Also Published As

Publication number Publication date
CN109286772A (en) 2019-01-29

Similar Documents

Publication Publication Date Title
CN109286772B (en) Sound effect adjusting method and device, electronic equipment and storage medium
CN109388367B (en) Sound effect adjusting method and device, electronic equipment and storage medium
CN108933915B (en) Video conference device and video conference management method
CN109101216B (en) Sound effect adjusting method and device, electronic equipment and storage medium
CN109284080B (en) Sound effect adjusting method and device, electronic equipment and storage medium
CN109240641B (en) Sound effect adjusting method and device, electronic equipment and storage medium
CN111263234B (en) Video clipping method, related device, equipment and storage medium
CN103207728B (en) The method of augmented reality is provided and the terminal of this method is supported
CN109151194B (en) Data transmission method, device, electronic equipment and storage medium
CN109271129B (en) Sound effect adjusting method and device, electronic equipment and storage medium
CN112053683A (en) Voice instruction processing method, device and control system
CN113542875B (en) Video processing method, device, electronic equipment and storage medium
CN102760077A (en) Method and device for self-adaptive application scene mode on basis of human face recognition
CN110995933A (en) Volume adjusting method and device of mobile terminal, mobile terminal and storage medium
CN110808044B (en) Voice control method and device for intelligent household equipment, electronic equipment and storage medium
CN113676592B (en) Recording method, recording device, electronic equipment and computer readable medium
CN109151789A (en) Interpretation method, device, system and bluetooth headset
US20240169687A1 (en) Model training method, scene recognition method, and related device
CN112750186A (en) Virtual image switching method and device, electronic equipment and storage medium
CN111522524B (en) Presentation control method and device based on conference robot, storage medium and terminal
CN113596240B (en) Recording method, recording device, electronic equipment and computer readable medium
CN113284500B (en) Audio processing method, device, electronic equipment and storage medium
CN114531564A (en) Processing method and electronic equipment
CN113411725B (en) Audio playing method and device, mobile terminal and storage medium
CN113542466A (en) Audio processing method, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant