CN113556604B - Sound effect adjusting method, device, computer equipment and storage medium

Info

Publication number: CN113556604B
Application number: CN202010332278.4A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN113556604A
Prior art keywords: sound effect, content, program, target, original
Inventor: 肖荣权
Current assignee: Oneplus Technology Shenzhen Co Ltd
Original assignee: Oneplus Technology Shenzhen Co Ltd
Application filed by Oneplus Technology Shenzhen Co Ltd; priority to CN202010332278.4A
Published as CN113556604A; application granted and published as CN113556604B
Legal status: Active

Classifications

    • H04N 21/439: Processing of audio elementary streams
    • H04N 21/435: Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N 21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/4852: End-user interface for client configuration for modifying audio parameters, e.g. switching between mono and stereo

Abstract

The application relates to a sound effect adjusting method, a sound effect adjusting apparatus, a computer device and a storage medium. The method includes: collecting screen content to obtain multi-frame screen images, recognizing the content of the screen images, generating a predicted content scene list according to the recognition result, and acquiring system operation information; matching a content scene corresponding to the screen content from the predicted content scene list according to the system operation information; and dynamically adjusting the original sound effect based on the content scene to generate a target sound effect conforming to the content scene. With this method, a more accurate content scene can be determined through comprehensive judgment based on the system operation information, and because the user does not need to manually adjust the sound effect for different programs, the repeated operations caused by poorly chosen manual adjustments are avoided, which further improves the sound effect adjustment efficiency.

Description

Sound effect adjusting method, device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and apparatus for adjusting sound effects, a computer device, and a storage medium.
Background
With the development of computer technology and the wide use of television products in daily life, and because different users have different demands on television products, the number of channels that a television product can receive and play keeps increasing. The types of programs these channels can play also differ, and include film and television programs, news programs, sports programs, music programs and the like.
When a television product is in use, the appropriate sound effect differs according to the type of program the user selects. For example, music programs require more low-frequency sound to enhance the sense of impact, together with a soft and smooth sound quality that is more comfortable to the human ear, while news programs require less bass and less enhancement of the human voice. Likewise, film and television programs require surround sound effects and a stronger sense of realism. In the conventional art, television products provide several predefined sound effect modes for manual selection by the user, such as a standard mode, a movie mode, a music mode and a news mode.
However, current adjustment methods usually require the user to manually select among different modes. If the user forgets to select a sound effect mode, or during use cannot accurately judge the current sound scene and accurately understand the various sound effect modes, a wrong mode may be selected, resulting in a sound effect that does not match the current program type. Because a mismatched sound effect gives a relatively poor sound quality experience, the user has to make the selection again manually; the operation is cumbersome, and the sound effect adjustment efficiency is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a sound effect adjusting method, apparatus, computer device and storage medium capable of improving sound effect adjustment efficiency.
A method of sound effect adjustment, the method comprising:
acquiring screen content to obtain multi-frame screen images;
identifying the content of the screen image, and generating a predicted content scene list according to the identification result;
acquiring system operation information, and matching a content scene corresponding to the screen content from the predicted content scene list according to the system operation information;
and dynamically adjusting the original sound effect based on the content scene, and generating a target sound effect conforming to the content scene.
In one embodiment, the multi-frame screen image is a continuous dynamic image with a preset duration.
In one embodiment, the method further comprises:
when a program switching instruction is detected, responding to the program switching instruction, and switching an original playing program into a target playing program;
acquiring screen content when the target playing program is played, and acquiring the playing time length of the target playing program;
when the playing time length of the target playing program exceeds the preset time domain, if the acquisition time length reaches the preset time length, obtaining a dynamic screen image with the preset time length.
In one embodiment, the dynamically adjusting the original sound effect based on the content scene, generating the target sound effect according with the content scene includes:
extracting an original configuration parameter list corresponding to the original sound effect;
according to the content scene, determining original sound effect parameters to be adjusted and target parameter values in the original configuration parameter list;
adjusting the original sound effect parameters to be adjusted according to the target parameter values to generate adjusted sound effect parameters;
and generating target sound effects conforming to the content scene according to the adjusted sound effect parameters.
In one embodiment, the obtaining the system operation information and matching the corresponding content scene from the predicted content scene list according to the system operation information includes:
acquiring system operation information, wherein the system operation information comprises play state information and interface information;
determining the current playing state of the screen according to the playing state information;
determining the type of the running application program according to the interface information;
and matching corresponding content scenes from the predicted content scene list according to the current playing state and the type of the running application program.
In one embodiment, the determining, according to the content scenario, the original sound effect parameter to be adjusted and the target parameter value in the original configuration parameter list includes:
determining predefined sound effect parameters associated with the content scene from a predefined sound effect parameter list according to the content scene;
comparing the predefined sound effect parameters with all original sound effect parameters of the original configuration parameter list to determine original sound effect parameters to be adjusted;
and extracting a target parameter value corresponding to the predefined sound effect parameter.
A sound effect adjusting apparatus, the apparatus comprising:
the screen image acquisition module is used for acquiring screen contents to obtain multi-frame screen images;
the predicted content scene list generation module is used for identifying the content of the screen image and generating a predicted content scene list according to the identification result;
the content scene matching module is used for acquiring system operation information and matching the content scene corresponding to the screen content from the predicted content scene list according to the system operation information;
and the sound effect adjusting module is used for dynamically adjusting the original sound effect based on the content scene and generating a target sound effect conforming to the content scene.
In one embodiment, the image acquisition module is further configured to:
when a program switching instruction is detected, responding to the program switching instruction, and switching an original playing program into a target playing program;
acquiring screen content when the target playing program is played, and acquiring the playing time length of the target playing program;
when the playing time length of the target playing program exceeds the preset time domain, if the acquisition time length reaches the preset time length, obtaining a dynamic screen image with the preset time length.
A computer device comprising a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring screen content to obtain multi-frame screen images;
identifying the content of the screen image, and generating a predicted content scene list according to the identification result;
acquiring system operation information, and matching a content scene corresponding to the screen content from the predicted content scene list according to the system operation information;
and dynamically adjusting the original sound effect based on the content scene, and generating a target sound effect conforming to the content scene.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring screen content to obtain multi-frame screen images;
identifying the content of the screen image, and generating a predicted content scene list according to the identification result;
acquiring system operation information, and matching a content scene corresponding to the screen content from the predicted content scene list according to the system operation information;
and dynamically adjusting the original sound effect based on the content scene, and generating a target sound effect conforming to the content scene.
According to the sound effect adjusting method and apparatus, the computer device and the storage medium described above, screen content is collected to obtain multi-frame screen images, the content of the screen images is recognized, and a predicted content scene list is generated according to the recognition result. The system operation information is acquired, a comprehensive judgment is made according to the system operation information, a more accurate content scene corresponding to the screen content is matched from the predicted content scene list, and the original sound effect is then dynamically adjusted based on the content scene to generate a target sound effect conforming to the content scene. Because no manual adjustment by the user is needed, the repeated operations caused by poorly chosen manual adjustments are avoided, and the sound effect adjustment efficiency is further improved.
Drawings
FIG. 1 is a diagram of an application environment for a sound effect adjustment method in one embodiment;
FIG. 2 is a flow chart of a method of adjusting sound effects according to an embodiment;
FIG. 3 is a flow diagram of generating target sound effects conforming to a content scene in one embodiment;
FIG. 4 is a flowchart of a method for adjusting sound effects according to another embodiment;
FIG. 5 is a schematic diagram of a program switch interface in one embodiment;
FIG. 6 is a block diagram of a sound effect adjusting apparatus in one embodiment;
fig. 7 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The sound effect adjusting method provided by the application can be applied to the application environment shown in fig. 1, in which the terminal 102 communicates with the server 104 via a network. The terminal 102 autonomously collects screen content to obtain multi-frame screen images; the content played on the screen is sent to the terminal 102 by the server 104 over a data communication connection. The terminal 102 recognizes the content of the screen images to generate a recognition result, and then generates a predicted content scene list according to the recognition result. By acquiring its own system operation information, the terminal 102 matches the content scene corresponding to the screen content from the predicted content scene list according to that information. Further, the terminal 102 dynamically adjusts the original sound effect based on the content scene and generates a target sound effect conforming to the content scene. The terminal 102 may be, but is not limited to, a personal computer, a notebook computer, a smart phone, a tablet computer or a television product, and the server 104 may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
In one embodiment, as shown in fig. 2, an audio effect adjustment method is provided, and the method is applied to the terminal in fig. 1 for illustration, and includes the following steps:
step S202, screen content is collected, and multi-frame screen images are obtained.
Specifically, the terminal collects, in real time, the image content being played on the screen, and obtains a multi-frame screen image composed of that content.
Further, the terminal may be a television product, and when the user uses the television product, the television product may collect screen content corresponding to a program selected by the user in real time, and extract the screen content to obtain a multi-frame screen image. The television product can also collect the image content played by the screen in a preset extraction period to obtain multi-frame screen images.
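As an illustration only, the following Python sketch shows one possible way such periodic collection could be organized; the capture_frame callable, the sampling period and the buffer size are assumptions made for this example and are not specified by the present embodiment.

```python
import time
from collections import deque

def collect_screen_images(capture_frame, period_s=0.5, max_frames=60):
    """Collect screen frames at a preset extraction period into a rolling buffer.

    capture_frame is a hypothetical callable returning the image currently
    shown on screen; period_s and max_frames are illustrative values only.
    """
    frames = deque(maxlen=max_frames)          # rolling multi-frame buffer
    while True:
        frames.append((time.time(), capture_frame()))
        if len(frames) == max_frames:          # a full window is available
            yield list(frames)                 # hand one multi-frame image to recognition
        time.sleep(period_s)
```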
Step S204, identifying the content of the screen image, and generating a predicted content scene list according to the identification result.
Specifically, when the terminal is a television product, it generates a recognition result for the corresponding screen images by recognizing the content of the extracted multi-frame screen images. The content scene expressed by the extracted multi-frame screen images can be dynamically identified using AI (artificial intelligence) technology, and the content scene may include the user playing music, the user watching a film or television program, the user watching a sports program, the user watching a news program, the user playing a game, and the like.
Image content recognition may be performed locally on the collected screen images, or the images may be sent to a cloud server over the network for more accurate content recognition, and the current content scene is determined by applying a weighted big-data matching algorithm and rules to the content of the current image together with the content recognized in the recent period of time. Here, the recently recognized content refers to the recognition results of other images cached before the current frame, that is, the most recently used content is referenced continuously.
Further, the extracted multi-frame screen images form a continuous dynamic image of a preset duration. Compared with traditional static image recognition, which focuses on an instantaneous state, performing content recognition on continuous multi-frame screen images further improves the accuracy of the recognition result. The recognition results are analyzed to obtain a predicted content scene list, that is, the content scenes to which the content currently played on the screen may belong.
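The following sketch illustrates, under assumed data structures, how per-frame recognition labels and recently cached labels could be combined into a ranked predicted content scene list; the scene names, weights and scoring rule are illustrative stand-ins for the weighted matching algorithm mentioned above.

```python
from collections import Counter

SCENES = {"music", "film_tv", "sports", "news", "game"}

def predict_scene_list(frame_labels, recent_labels, recent_weight=0.5, top_k=3):
    """Rank candidate content scenes from current and recently cached labels.

    frame_labels / recent_labels are lists of scene labels produced by the
    recognition step; the weighting is a simplified stand-in for the weighted
    matching algorithm described in the text.
    """
    scores = Counter()
    for label in frame_labels:
        scores[label] += 1.0                   # current multi-frame image
    for label in recent_labels:
        scores[label] += recent_weight         # continuous reference to recent content
    ranked = [scene for scene, _ in scores.most_common() if scene in SCENES]
    return ranked[:top_k]
```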
In one embodiment, take the example of a user playing music and viewing pictures at the same time, that is, the user browses pictures on the television product while music playing software opened in the background plays music, and the pictures being viewed are still photos of scenery or people. When the current screen content is collected and recognized, the recognition result is a static landscape or portrait photo, and the resulting predicted content scene list may include several different content scenes, such as the user watching a film or television program and the user playing music, since a static picture may also serve as the screensaver interface of a music application.
Step S206, acquiring system operation information, and matching the content scene corresponding to the screen content from the predicted content scene list according to the system operation information.
Specifically, the system operation information includes playing state information and interface information. The current playing state of the screen can be determined from the playing state information, where the current playing state includes states such as playing video, playing audio and displaying pictures. The type of application program currently running on the television product can be determined from the interface information, where the application types include video applications, game applications, music applications, shopping applications and the like.
Further, according to whether the current playing state is video playing, audio playing or picture displaying, and, in combination with the currently running application, whether a video application, a game application or a music application is running, the content scene that matches the system operation information, that is, the playing state information and the interface information, is determined from the predicted content scene list.
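A minimal sketch of this matching step is given below; the rule table mapping playing states and application types to scenes is an assumption for illustration and does not reproduce the exact matching rules of this embodiment.

```python
def match_scene(predicted_scenes, play_state, app_type):
    """Select the content scene using playing state and running application type.

    play_state is one of "video", "audio" or "picture"; app_type is the type of
    the currently running application. The preference rules are illustrative.
    """
    preferred = []
    if play_state == "audio" or app_type == "music":
        preferred.append("music")              # e.g. viewing pictures while music plays
    if play_state == "video" and app_type == "video":
        preferred.append("film_tv")            # e.g. a gaming broadcast inside a video app
    if app_type == "game":
        preferred.append("game")
    for scene in preferred:
        if scene in predicted_scenes:
            return scene
    return predicted_scenes[0] if predicted_scenes else "standard"
```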
In one embodiment, again taking the example of the user playing music and viewing pictures at the same time, the system operation information including the playing state information and the interface information is acquired. Since the user is viewing pictures, the current playing state can be determined to be displaying pictures, and when it is detected that the currently running application in the system is a music application, it can be determined that the user is playing music. Because viewing pictures does not require a corresponding sound effect to be set, the content scene that needs sound effect adjustment can be further determined to be the user playing music.
In another embodiment, take the example that the user plays a gaming program, that is, the application the user opens is a video application and the program selected for playback is a gaming program. The screen content is collected to obtain multi-frame screen images related to games, the multi-frame screen images are recognized to obtain a recognition result, and a predicted content scene list is obtained according to the recognition result; the list includes the user watching a film or television program and the user playing a game. By acquiring the system operation information including the playing state information and the interface information, since the user is playing a gaming program and has opened a video application, the current playing state is determined to be playing video and the type of the currently running application is a video application. Combining the recognition result of the image content with the system operation information, the more accurate content scene matched from the predicted content scene list is that the user is watching a film or television program.
Step S208, based on the content scene, dynamically adjusting the original sound effect to generate a target sound effect conforming to the content scene.
Specifically, before the sound effect adjustment operation is performed, the sound effect applied by the television product is a preset original sound effect. The original sound effect is the conventional sound effect of the television product and can be used in different content scenes, but it cannot suit every content scene and cannot reach the sound effect the user would ideally expect in each of them. The original sound effect therefore needs to be dynamically adjusted based on the content scene so as to meet the best desired sound effect in the current content scene.
Further, the original configuration parameter list corresponding to the original sound effect is extracted, the target parameter value of each sound effect parameter is determined according to the determined content scene, and the original sound effect parameters to be adjusted are determined from the original configuration parameter list according to the content scene. The original sound effect parameters to be adjusted are then adjusted according to the target parameter values to generate adjusted sound effect parameters, and a target sound effect conforming to the content scene is generated according to the adjusted sound effect parameters.
In the sound effect adjusting method described above, screen content is collected to obtain multi-frame screen images, the content of the screen images is recognized, and a predicted content scene list is generated according to the recognition result. The system operation information is acquired, a comprehensive judgment is made according to the system operation information, a more accurate content scene corresponding to the screen content is matched from the predicted content scene list, and the original sound effect is then dynamically adjusted based on the content scene to generate a target sound effect conforming to the content scene. Because no manual adjustment by the user is needed, the repeated operations caused by poorly chosen manual adjustments are avoided, and the sound effect adjustment efficiency is further improved.
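For orientation, the overall flow can be pictured as the composition of the four steps above; in the sketch below every step is passed in as a hypothetical callable, so the example only fixes the order of operations and assumes nothing about how each step is implemented.

```python
def adjust_once(collect, recognize, get_system_info, match, adjust):
    """One pass of the method: collect -> recognize -> match -> adjust.

    Every argument is a hypothetical callable standing in for the step of the
    same name; the function only fixes the order in which they are applied.
    """
    frames = collect()                          # multi-frame screen image
    predicted_scenes = recognize(frames)        # predicted content scene list
    play_state, app_type = get_system_info()    # system operation information
    scene = match(predicted_scenes, play_state, app_type)
    return adjust(scene)                        # target sound effect for the scene
```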
In one embodiment, as shown in fig. 3, the step of dynamically adjusting the original sound effect based on the content scene and generating a target sound effect conforming to the content scene specifically includes the following steps S302 to S308:
Step S302, extracting an original configuration parameter list corresponding to the original sound effect.
Specifically, before the sound effect adjustment operation is performed, the sound effect applied by the television product is the preset original sound effect. The original configuration parameters corresponding to the original sound effect and their parameter values are stored in advance in a local database of the television product. When a sound effect adjustment operation based on a content scene is required, the original configuration parameter list corresponding to the original sound effect, which includes the original configuration parameters and the original parameter values of those parameters, can be extracted from the local database.
The components that determine sound quality include pitch, timbre, volume and musical sound. The pitch is determined by the frequency of the sound wave; it represents how high or low the sound is and is related to the number of vibrations of the sound source per second. A low pitch means a low vibration frequency and a deep sound, while a high pitch means a high vibration frequency and a sharp sound. The volume is determined by the amplitude of the sound wave and refers to the intensity, or loudness, of the sound; the intensity of the sound is related to the magnitude of the vibration amplitude of the sound source, so a sound that is too weak cannot be heard, while a sound that is too strong cannot be accepted by the human ear.
The musical sound is determined by the waveform envelope of the sound wave and represents the sounds used in music. The harmonic composition and the waveform envelope of the sound wave, including the starting and ending transients of the musical sound, determine its character, that is, the quality of the musical sound. The growth and decay process of a sound helps determine its quality; at the same time, sounds of different quality have different sound spectra, and the difference appears in the intensity distribution of the spectral lines. The timbre is determined by the frequency spectrum of the sound wave and refers to the color and character of the sound; because it is associated with the spectrum of the sound source's vibration, timbre is not the mark of a single frequency but the expression of a composite tone made up of multiple frequencies. The fundamental frequency component in the sound spectrum forms the fundamental tone, which determines the pitch; the other components in the spectrum are overtones, whose frequencies are multiples of the fundamental, and the timbre is determined by the structure of these overtones.
Further, excitation adjustment, compression/limiting adjustment, noise reduction adjustment, delay adjustment and equalization adjustment may be performed on the configuration parameters. The excitation adjustment is used to generate harmonics that modify the sound, enhance its frequency dynamics, and improve its clarity, brightness, volume, warmth and weight; the compression/limiting adjustment is used to normalize and shape the sound so that it is more powerful and comfortable; the noise reduction adjustment is used to remove noise and hiss from the sound; and the delay adjustment is used to improve the loudness of the direct sound. The equalization parameters include subwoofer, bass, midrange and treble bands, and are used to adjust the electrical signals of the different frequency components respectively, so as to compensate for defects of the loudspeaker and the sound field and to compensate for and modify various sound sources.
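Purely as an illustration of how such an original configuration parameter list might be organized in memory, the sketch below groups the parameters by the processing stages just described; all names and default values are assumptions made for the example.

```python
# Illustrative layout of an original configuration parameter list; every name
# and default value below is an assumption made for this sketch only.
ORIGINAL_SOUND_CONFIG = {
    "excitation": {"harmonic_gain_db": 0.0},      # exciter: harmonics, brightness, warmth
    "limiter": {"threshold_db": -3.0},            # compression/limiting
    "noise_reduction": {"strength": 0.2},
    "delay_ms": 0,                                # delay used to raise direct-sound loudness
    "equalizer_db": {                             # per-band equalizer gains
        "subwoofer": 0.0,
        "bass": 0.0,
        "midrange": 0.0,
        "treble": 0.0,
    },
}
```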
Step S304, according to the content scene, determining the original sound effect parameters to be adjusted and the target parameter values in the original configuration parameter list.
Specifically, according to the content scene, the predefined sound effect parameters associated with the content scene are determined from a predefined sound effect parameter list, and the target parameter values corresponding to the predefined sound effect parameters are extracted. The predefined sound effect parameters are then compared with the original sound effect parameters in the original configuration parameter list to determine the original sound effect parameters to be adjusted.
The predefined sound effect parameters included in the predefined sound effect parameter list include sound frequency, sound spectrum, sound pressure, sound intensity, sound power and the like. Frequency represents the number of periodic changes completed per unit time and describes how frequently a periodic motion occurs; the sound frequency represents the number of vibrations of a sound source per second, and the frequency range of audible sound that the human ear can hear is usually 20 Hz to 20 kHz, which is called the audio frequency range. The spectrum represents the components of a time function as a distribution over frequency, in terms of amplitude or phase; since a sound is composed of many pure tones of different frequencies and intensities, the analysis of the frequency content and intensity of the sound emitted by a sound source is called spectral analysis. The strength of a sound wave can be quantitatively described by the sound pressure and the sound pressure level, so the sound pressure can be used to represent the strength of the sound wave. Sound power refers to the sound energy from a sound source passing vertically through a designated area per unit time, and represents the power radiated over the entire audible frequency range or over some limited frequency range.
And meanwhile, target parameter values corresponding to all the predefined sound effect parameters are also stored in the predefined sound effect parameter list, and the associated predefined sound effect parameters are determined from the predefined sound effect parameter list based on the content scene. For example, based on the determined content scene, the predefined sound effect parameters associated with the determined content scene include sound pressure, sound intensity and sound frequency, and after the associated predefined sound effect parameters are determined, target parameter values corresponding to the predefined sound effect parameters are obtained from the predefined sound effect parameter list.
Further, after the predefined sound effect parameters are determined, they are compared with each original sound effect parameter in the original configuration parameter list, so that the original sound effect parameters inconsistent with the predefined sound effect parameters are identified and the original sound effect parameters to be adjusted are determined. For example, suppose both the original sound effect parameters and the predefined sound effect parameters comprise sound frequency, sound spectrum, sound pressure and sound intensity. The target parameter values of the predefined sound effect parameters are compared with the original parameter values of the corresponding original sound effect parameters, and the original sound effect parameters whose values are inconsistent with the predefined ones are determined to be the original sound effect parameters to be adjusted.
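The comparison step can be pictured as a dictionary diff, as in the following sketch; representing the parameter lists as flat name-to-value mappings is an assumption made only for illustration.

```python
def params_to_adjust(original_params, predefined_params):
    """Return the parameters whose original values differ from the target values.

    Both arguments are flat name -> value dictionaries; this representation is
    an assumption made for the sketch.
    """
    return {
        name: target
        for name, target in predefined_params.items()
        if original_params.get(name) != target     # only inconsistent values need adjusting
    }

# Example: only sound pressure and sound intensity differ, so only they are adjusted.
original = {"sound_frequency": 440, "sound_pressure": 0.6, "sound_intensity": 0.5}
targets = {"sound_frequency": 440, "sound_pressure": 0.8, "sound_intensity": 0.7}
print(params_to_adjust(original, targets))         # {'sound_pressure': 0.8, 'sound_intensity': 0.7}
```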
Step S306, the original sound effect parameters to be adjusted are adjusted according to the target parameter values, and adjusted sound effect parameters are generated.
Specifically, the original parameter value of each original sound effect parameter is obtained and compared with the target parameter value of the corresponding predefined sound effect parameter. After the original sound effect parameters to be adjusted are obtained, their original parameter values are adjusted according to the target parameter values of the predefined sound effect parameters until the parameter values of the original sound effect parameters to be adjusted are consistent with the corresponding target parameter values, and the adjusted sound effect parameters are thus obtained.
Further, by adjusting the original sound effect parameters, including the sound frequency, sound spectrum, sound pressure, sound intensity and sound power, the components that determine the sound quality, including the pitch, timbre, volume and musical sound, can be modified and adjusted so that the sound effect better fits the content scene. For example, according to different content scenes, the original sound effect parameters to be adjusted can be adjusted through equalization: adding bass or mid-bass components makes the sound effect softer and better suited to music programs, while adding mid-treble or treble components makes the sound more intense and better suited to sports programs.
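As an illustration of such scene-dependent equalization, the sketch below applies per-scene band gains to the original equalizer settings; the preset values are invented for the example and are not parameter values taken from this embodiment.

```python
# Per-scene equalization tilts (gains in dB); the numbers are invented for this
# example and only mirror the qualitative description above.
SCENE_EQ_PRESETS = {
    "music": {"bass": 4.0, "mid_bass": 2.0},        # softer sound for music programs
    "sports": {"mid_treble": 2.0, "treble": 3.0},   # more intense sound for sports programs
}

def adjust_for_scene(eq_gains_db, scene, presets=SCENE_EQ_PRESETS):
    """Overwrite the original band gains with the scene's target values."""
    adjusted = dict(eq_gains_db)
    adjusted.update(presets.get(scene, {}))         # only the bands that differ are changed
    return adjusted
```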
Step S308, generating target sound effects which accord with the content scene according to the adjusted sound effect parameters.
Specifically, according to the adjusted sound effect parameters, including the modified parameter values of the sound effect parameters, the adjusted target sound effect conforming to the content scene is generated.
In the above steps, the original configuration parameter list corresponding to the original sound effect is extracted, and the original sound effect parameters to be adjusted and the target parameter values are determined from the original configuration parameter list according to the content scene. The original sound effect parameters to be adjusted are then adjusted according to the target parameter values to generate the adjusted sound effect parameters. According to the adjusted sound effect parameters, a target sound effect conforming to the content scene can be generated automatically without manual adjustment by the user, which avoids the repeated operations needed when a manually adjusted sound effect turns out to be poor and further improves the sound effect adjustment efficiency.
In one embodiment, as shown in fig. 4, there is provided an audio effect adjusting method, which specifically includes the following steps:
step S402, when the program switching instruction is detected, the original playing program is switched to the target playing program in response to the program switching instruction.
Specifically, when it is detected that the user has triggered a program switching instruction on the television product, the instruction is responded to: the target playing program corresponding to the program switching instruction is acquired, and the original playing program that was playing before the instruction was triggered is switched to the target playing program.
Further, fig. 5 provides a schematic view of a program switching interface, referring to fig. 5, the type of a program played on the screen of the current television product 500 is a sports program 502, when a program switching instruction triggered by a user is detected, the obtained target playing program corresponding to the program switching instruction is a movie program 504, and the program switching instruction is responded, and the sports program 502 before the program switching instruction is triggered is switched to the movie program 504.
Step S404, screen content when the target playing program is played is collected, and playing time length of the target playing program is obtained.
Specifically, screen content when the target playing program is played is collected in real time and stored, a multi-frame screen image is obtained, and meanwhile, the playing time of the target playing program is counted.
Further, when the target playing program is a video program, screen content when the video program is played is collected in real time, and meanwhile playing time of the video program is counted.
Step S406, when the playing time of the target playing program exceeds the preset time domain, if the collecting time reaches the preset time, obtaining the dynamic screen image of the preset time.
Specifically, when the target playing program is a video program, the playing duration of the video program is compared with a preset time domain to judge whether the playing duration exceeds the preset time domain. Here, the time domain represents the continuity of the dynamically identified scene in the time dimension. For example, the scene is automatically identified as a football match in a sports program, but the user may operate other television functions in the middle, such as switching to a movie program. When the playing duration of the movie program exceeds the preset time domain, the duration for which screen image content has been collected is compared with the preset duration, and when the collection duration reaches the preset duration, a dynamic screen image of the preset duration is generated from the collected screen image content.
In this embodiment, the preset time domain may be set to 20 s and the preset duration to 30 s. After the playing duration of the movie program exceeds 20 s, the collection duration starts to be counted; when the collection duration reaches 30 s, that is, the preset duration, a dynamic screen image 30 s long is generated from the collected screen image content.
Further, when the duration of watching the video program is short and does not reach the preset time domain, and the user switches back to the sports program to watch the football match, there is no need to collect screen images of the video program, because the time spent after switching to the video program did not reach the preset time domain.
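The following sketch illustrates this debounce behaviour with the example values above (a 20 s time domain and a 30 s capture window); get_play_duration and capture_frame are hypothetical callables standing in for the terminal's playback timer and screen grabber.

```python
import time

PRESET_TIME_DOMAIN_S = 20      # switched program must keep playing this long
PRESET_CAPTURE_S = 30          # length of the dynamic screen image to collect

def capture_after_switch(get_play_duration, capture_frame, period_s=1.0):
    """Debounced capture after a program switch.

    get_play_duration and capture_frame are hypothetical callables; capture
    only starts once the playing duration exceeds the preset time domain, so
    brief visits to another program never trigger a sound effect change.
    """
    while get_play_duration() < PRESET_TIME_DOMAIN_S:
        time.sleep(period_s)                        # not yet past the time domain
    frames, start = [], time.time()
    while time.time() - start < PRESET_CAPTURE_S:   # collect for the preset duration
        frames.append(capture_frame())
        time.sleep(period_s)
    return frames                                   # ~30 s dynamic screen image
```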
In the above sound effect adjusting method, when a program switching instruction is detected, the original playing program is switched to the target playing program in response to the instruction. The screen content when the target playing program is played is collected, the playing duration of the target playing program is obtained, and when the playing duration exceeds the preset time domain, a dynamic screen image of the preset duration is obtained once the collection duration reaches the preset duration. Because the target playing program is judged to need sound effect adjustment only when the switched program has been playing continuously for the preset time domain, frequent sound effect switching and misoperation can be avoided, and the accuracy of sound effect adjustment is further improved.
It should be understood that, although the steps in the flowcharts of FIGS. 2-4 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in FIGS. 2-4 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; these sub-steps or stages are not necessarily performed in sequence, but may be performed in turn or alternately with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided an audio effect adjusting apparatus including: a screen image acquisition module 602, a predicted content scene list generation module 604, a content scene matching module 606, and a sound effect adjustment module 608, wherein:
the screen image acquisition module 602 is configured to acquire screen content, and obtain a multi-frame screen image.
The predicted content scene list generating module 604 is configured to identify the content of the screen image, and generate a predicted content scene list according to the identification result.
The content scene matching module 606 is configured to obtain system operation information, and match a content scene corresponding to the screen content from the predicted content scene list according to the system operation information.
The sound effect adjustment module 608 is configured to dynamically adjust the original sound effect based on the content scene, and generate a target sound effect that accords with the content scene.
According to the sound effect adjusting apparatus described above, screen content is collected to obtain multi-frame screen images, the content of the screen images is recognized, and a predicted content scene list is generated according to the recognition result. The system operation information is acquired, a comprehensive judgment is made according to the system operation information, a more accurate content scene corresponding to the screen content is matched from the predicted content scene list, and the original sound effect is then dynamically adjusted based on the content scene to generate a target sound effect conforming to the content scene. Because no manual adjustment by the user is needed, the repeated operations caused by poorly chosen manual adjustments are avoided, and the sound effect adjustment efficiency is further improved.
In one embodiment, the image acquisition module is further configured to:
when a program switching instruction is detected, responding to the program switching instruction, and switching an original playing program into a target playing program; acquiring screen content when a target playing program is played, and acquiring the playing time length of the target playing program; when the playing time length of the target playing program exceeds the preset time domain, if the acquisition time length reaches the preset time length, obtaining a dynamic screen image with the preset time length.
In the image acquisition module, when a program switching instruction is detected, the original playing program is switched to a target playing program in response to the program switching instruction. Acquiring screen content when a target broadcast program is broadcast, acquiring the broadcast time length of the target broadcast program, and acquiring a dynamic screen image of the preset time length if the acquired time length reaches the preset time length when the broadcast time length of the target broadcast program exceeds the preset time domain. By judging that the target playing program needs to be subjected to sound effect adjustment when the time of continuously playing the switched program reaches a preset time domain, frequent sound effect switching and misoperation can be avoided, and the accuracy of sound effect adjustment is further improved.
In one embodiment, the sound effect adjustment module is further configured to:
extracting an original configuration parameter list corresponding to the original sound effect; according to the content scene, determining original sound effect parameters to be adjusted in an original configuration parameter list and target parameter values; adjusting the original sound effect parameters to be adjusted according to the target parameter values to generate adjusted sound effect parameters; and generating target sound effects which accord with the content scene according to the adjusted sound effect parameters.
In the sound effect adjusting module, the original configuration parameter list corresponding to the original sound effect is extracted, and the original sound effect parameter to be adjusted and the target parameter value in the original configuration parameter list are determined according to the content scene. And then the original sound effect parameters to be adjusted are adjusted according to the target parameter values, and the adjusted sound effect parameters are generated. According to the adjusted sound effect parameters, the target sound effect which accords with the content scene can be automatically generated, manual adjustment of a user is not needed, the problem that repeated operation is needed due to the fact that the user manually adjusts the condition that the sound effect is poor is avoided, and the sound effect adjusting efficiency is further improved.
For specific limitations of the sound effect adjusting device, reference may be made to the above limitations of the sound effect adjusting method, and no further description is given here. The above-described individual modules in the sound effect adjustment apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a method of sound effect adjustment. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 7 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring screen content to obtain multi-frame screen images;
identifying the content of the screen image, and generating a predicted content scene list according to the identification result;
acquiring system operation information, and matching content scenes corresponding to screen content from a predicted content scene list according to the system operation information;
based on the content scene, the original sound effect is dynamically adjusted, and the target sound effect conforming to the content scene is generated.
In one embodiment, the processor when executing the computer program further performs the steps of:
the multi-frame screen image is a continuous dynamic image with preset duration.
In one embodiment, the processor when executing the computer program further performs the steps of:
when a program switching instruction is detected, responding to the program switching instruction, and switching an original playing program into a target playing program;
acquiring screen content when a target playing program is played, and acquiring the playing time length of the target playing program;
when the playing time length of the target playing program exceeds the preset time domain, if the acquisition time length reaches the preset time length, obtaining a dynamic screen image with the preset time length.
In one embodiment, the processor when executing the computer program further performs the steps of:
extracting an original configuration parameter list corresponding to the original sound effect;
according to the content scene, determining original sound effect parameters to be adjusted in an original configuration parameter list and target parameter values;
adjusting the original sound effect parameters to be adjusted according to the target parameter values to generate adjusted sound effect parameters;
and generating target sound effects which accord with the content scene according to the adjusted sound effect parameters.
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring system operation information, wherein the system operation information comprises play state information and interface information;
determining the current playing state of the screen according to the playing state information;
determining the type of the running application program according to the interface information;
and matching corresponding content scenes from the predicted content scene list according to the current playing state and the program type.
In one embodiment, the processor when executing the computer program further performs the steps of:
determining predefined sound effect parameters associated with the content scene from a predefined sound effect parameter list according to the content scene;
comparing the predefined sound effect parameters with all original sound effect parameters of an original configuration parameter list to determine original sound effect parameters to be adjusted;
and extracting a target parameter value corresponding to the predefined sound effect parameter.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring screen content to obtain multi-frame screen images;
identifying the content of the screen image, and generating a predicted content scene list according to the identification result;
acquiring system operation information, and matching content scenes corresponding to screen content from a predicted content scene list according to the system operation information;
based on the content scene, the original sound effect is dynamically adjusted, and the target sound effect conforming to the content scene is generated.
In one embodiment, the computer program when executed by the processor further performs the steps of:
the multi-frame screen image is a continuous dynamic image with preset duration.
In one embodiment, the computer program when executed by the processor further performs the steps of:
when a program switching instruction is detected, responding to the program switching instruction, and switching an original playing program into a target playing program;
acquiring screen content when a target playing program is played, and acquiring the playing time length of the target playing program;
when the playing time length of the target playing program exceeds the preset time domain, if the acquisition time length reaches the preset time length, obtaining a dynamic screen image with the preset time length.
In one embodiment, the computer program when executed by the processor further performs the steps of:
extracting an original configuration parameter list corresponding to the original sound effect;
according to the content scene, determining original sound effect parameters to be adjusted in an original configuration parameter list and target parameter values;
adjusting the original sound effect parameters to be adjusted according to the target parameter values to generate adjusted sound effect parameters;
and generating target sound effects which accord with the content scene according to the adjusted sound effect parameters.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring system operation information, wherein the system operation information comprises play state information and interface information;
determining the current playing state of the screen according to the playing state information;
determining the type of the running application program according to the interface information;
and matching the corresponding content scene from the predicted content scene list according to the current playing state and the application program type.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining predefined sound effect parameters associated with the content scene from a predefined sound effect parameter list according to the content scene;
comparing the predefined sound effect parameters with all original sound effect parameters of an original configuration parameter list to determine original sound effect parameters to be adjusted;
and extracting a target parameter value corresponding to the predefined sound effect parameter.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may perform the steps of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, and the like. Volatile memory may include Random Access Memory (RAM) or an external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not every possible combination of these technical features is described; nevertheless, any combination of these technical features that contains no contradiction should be considered within the scope of this description.
The above examples merely represent a few embodiments of the present application; although they are described in considerable detail, they are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art could make various modifications and improvements without departing from the spirit of the present application, and such modifications and improvements fall within the scope of the present application. Accordingly, the scope of protection of the present application is determined by the appended claims.

Claims (10)

1. A method of sound effect adjustment, the method comprising:
acquiring screen content to obtain multi-frame screen images;
identifying the content of the screen image, and generating a predicted content scene list according to the identification result;
acquiring system operation information, performing comprehensive judgment according to the system operation information, and matching the content scene corresponding to the screen content from the predicted content scene list; the system operation information comprises play state information and interface information, wherein the play state information comprises playing video, playing audio and displaying pictures, the interface information is used for determining the type of the currently running application program, and the application program type comprises a film application program, a game application program, a music application program and a shopping application program;
and dynamically adjusting the original sound effect based on the content scene to generate a target sound effect conforming to the content scene.
2. The method of claim 1, wherein the multi-frame screen image is a continuous dynamic image of a preset duration.
3. The method according to claim 2, wherein the method further comprises:
when a program switching instruction is detected, responding to the program switching instruction, and switching an original playing program to a target playing program;
acquiring screen content while the target playing program is played, and acquiring the playing duration of the target playing program;
and when the playing duration of the target playing program exceeds the preset time domain and the acquisition duration reaches the preset duration, obtaining a dynamic screen image of the preset duration.
4. The method of claim 1, wherein dynamically adjusting the original sound effects based on the content scene to generate target sound effects that conform to the content scene comprises:
extracting an original configuration parameter list corresponding to the original sound effect;
according to the content scene, determining original sound effect parameters to be adjusted and target parameter values in the original configuration parameter list;
adjusting the original sound effect parameters to be adjusted according to the target parameter values to generate adjusted sound effect parameters;
and generating target sound effects conforming to the content scene according to the adjusted sound effect parameters.
5. The method of claim 1, wherein the obtaining system operation information and matching corresponding content scenes from the list of predicted content scenes according to the system operation information comprises:
acquiring system operation information;
determining the current playing state of the screen according to the playing state information;
determining the type of the running application program according to the interface information;
and matching corresponding content scenes from the predicted content scene list according to the current playing state and the type of the running application program.
6. The method of claim 4, wherein determining the original sound effect parameters to be adjusted and the target parameter values in the original configuration parameter list according to the content scene comprises:
determining predefined sound effect parameters associated with the content scene from a predefined sound effect parameter list according to the content scene;
comparing the predefined sound effect parameters with all original sound effect parameters of the original configuration parameter list to determine original sound effect parameters to be adjusted;
and extracting a target parameter value corresponding to the predefined sound effect parameter.
7. An audio conditioning apparatus, the apparatus comprising:
the screen image acquisition module is used for acquiring screen contents to obtain multi-frame screen images;
the predicted content scene list generation module is used for identifying the content of the screen image and generating a predicted content scene list according to the identification result;
the content scene matching module is used for acquiring system operation information, performing comprehensive judgment according to the system operation information, and matching the content scene corresponding to the screen content from the predicted content scene list; the system operation information comprises play state information and interface information, wherein the play state information comprises playing video, playing audio and displaying pictures, the interface information is used for determining the type of the currently running application program, and the application program type comprises a film application program, a game application program, a music application program and a shopping application program;
and the sound effect adjusting module is used for dynamically adjusting the original sound effect based on the content scene and generating a target sound effect conforming to the content scene.
8. The sound effect adjustment device of claim 7, wherein the screen image acquisition module is further configured to:
when a program switching instruction is detected, responding to the program switching instruction, and switching an original playing program to a target playing program;
acquiring screen content while the target playing program is played, and acquiring the playing duration of the target playing program;
and when the playing duration of the target playing program exceeds the preset time domain and the acquisition duration reaches the preset duration, obtaining a dynamic screen image of the preset duration.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN202010332278.4A 2020-04-24 2020-04-24 Sound effect adjusting method, device, computer equipment and storage medium Active CN113556604B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010332278.4A CN113556604B (en) 2020-04-24 2020-04-24 Sound effect adjusting method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010332278.4A CN113556604B (en) 2020-04-24 2020-04-24 Sound effect adjusting method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113556604A CN113556604A (en) 2021-10-26
CN113556604B (en) 2023-07-18

Family

ID=78129619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010332278.4A Active CN113556604B (en) 2020-04-24 2020-04-24 Sound effect adjusting method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113556604B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114035871A (en) * 2021-10-28 2022-02-11 深圳市优聚显示技术有限公司 Display method and system of 3D display screen based on artificial intelligence and computer equipment
CN114025231A (en) * 2021-11-18 2022-02-08 紫光展锐(重庆)科技有限公司 Sound effect adjusting method, sound effect adjusting device, chip and chip module thereof
CN114866791A (en) * 2022-03-31 2022-08-05 北京达佳互联信息技术有限公司 Sound effect switching method and device, electronic equipment and storage medium
CN114443886A (en) * 2022-04-06 2022-05-06 南昌航天广信科技有限责任公司 Sound effect adjusting method and system of broadcast sound box, computer and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105100831A (en) * 2014-04-16 2015-11-25 北京酷云互动科技有限公司 Television set playing mode adjustment method, television playing system and television set
CN106775562A (en) * 2016-12-09 2017-05-31 奇酷互联网络科技(深圳)有限公司 The method and device of audio frequency parameter treatment
WO2017101357A1 (en) * 2015-12-14 2017-06-22 乐视控股(北京)有限公司 Sound effect mode selection method and device
CN108900616A (en) * 2018-06-29 2018-11-27 百度在线网络技术(北京)有限公司 Sound resource listens to method and apparatus
CN109286772A (en) * 2018-09-04 2019-01-29 Oppo广东移动通信有限公司 Audio method of adjustment, device, electronic equipment and storage medium
CN109348040A (en) * 2018-08-09 2019-02-15 北京奇艺世纪科技有限公司 A kind of effect adjusting method, device and terminal device
CN110933490A (en) * 2019-11-20 2020-03-27 深圳创维-Rgb电子有限公司 Automatic adjustment method for picture quality and tone quality, smart television and storage medium

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104090766B (en) * 2014-07-17 2017-08-25 广东欧珀移动通信有限公司 The audio switching method and system of a kind of mobile terminal
CN104506901B (en) * 2014-11-12 2018-06-15 科大讯飞股份有限公司 Voice householder method and system based on tv scene state and voice assistant
US10853412B2 (en) * 2016-06-16 2020-12-01 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Scenario-based sound effect control method and electronic device
CN105959481B (en) * 2016-06-16 2019-04-30 Oppo广东移动通信有限公司 A kind of control method and electronic equipment of scene audio
CN108924361B (en) * 2018-07-10 2021-02-19 南昌黑鲨科技有限公司 Audio playing and acquisition control method, system and computer readable storage medium
CN108966007B (en) * 2018-09-03 2021-08-31 海信视像科技股份有限公司 Method and device for distinguishing video scenes under HDMI
CN109240641B (en) * 2018-09-04 2021-09-14 Oppo广东移动通信有限公司 Sound effect adjusting method and device, electronic equipment and storage medium
CN109272970A (en) * 2018-10-30 2019-01-25 维沃移动通信有限公司 A kind of screen luminance adjustment method and mobile terminal
CN109582463B (en) * 2018-11-30 2021-04-06 Oppo广东移动通信有限公司 Resource allocation method, device, terminal and storage medium
CN110493639A (en) * 2019-10-21 2019-11-22 南京创维信息技术研究院有限公司 A kind of method and system of adjust automatically sound and image model based on scene Recognition
CN110989961A (en) * 2019-10-30 2020-04-10 华为终端有限公司 Sound processing method and device
CN110868628B (en) * 2019-11-29 2021-03-16 深圳创维-Rgb电子有限公司 Intelligent control method for television sound and picture modes, television and storage medium
CN110996153B (en) * 2019-12-06 2021-09-24 深圳创维-Rgb电子有限公司 Scene recognition-based sound and picture quality enhancement method and system and display

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105100831A (en) * 2014-04-16 2015-11-25 北京酷云互动科技有限公司 Television set playing mode adjustment method, television playing system and television set
WO2017101357A1 (en) * 2015-12-14 2017-06-22 乐视控股(北京)有限公司 Sound effect mode selection method and device
CN106775562A (en) * 2016-12-09 2017-05-31 奇酷互联网络科技(深圳)有限公司 The method and device of audio frequency parameter treatment
CN108900616A (en) * 2018-06-29 2018-11-27 百度在线网络技术(北京)有限公司 Sound resource listens to method and apparatus
CN109348040A (en) * 2018-08-09 2019-02-15 北京奇艺世纪科技有限公司 A kind of effect adjusting method, device and terminal device
CN109286772A (en) * 2018-09-04 2019-01-29 Oppo广东移动通信有限公司 Audio method of adjustment, device, electronic equipment and storage medium
CN110933490A (en) * 2019-11-20 2020-03-27 深圳创维-Rgb电子有限公司 Automatic adjustment method for picture quality and tone quality, smart television and storage medium

Also Published As

Publication number Publication date
CN113556604A (en) 2021-10-26

Similar Documents

Publication Publication Date Title
CN113556604B (en) Sound effect adjusting method, device, computer equipment and storage medium
CN108305603B (en) Sound effect processing method and equipment, storage medium, server and sound terminal thereof
CN110827843B (en) Audio processing method and device, storage medium and electronic equipment
KR102148006B1 (en) Method and apparatus for providing special effects to video
CN106488311B (en) Sound effect adjusting method and user terminal
CN105409243A (en) Pre-processing of a channelized music signal
WO2020108045A1 (en) Video playback method and apparatus and multimedia data playback method
CN112637670B (en) Video generation method and device
CN110047497B (en) Background audio signal filtering method and device and storage medium
CN115866487B (en) Sound power amplification method and system based on balanced amplification
US20230290382A1 (en) Method and apparatus for matching music with video, computer device, and storage medium
US9979766B2 (en) System and method for reproducing source information
CN113170260B (en) Audio processing method and device, storage medium and electronic equipment
CN102244750A (en) Video display apparatus having sound level control function and control method thereof
CN110928518B (en) Audio data processing method and device, electronic equipment and storage medium
CN113286161A (en) Live broadcast method, device, equipment and storage medium
CN112291615A (en) Audio output method and audio output device
CN113077771B (en) Asynchronous chorus sound mixing method and device, storage medium and electronic equipment
CN114067827A (en) Audio processing method and device and storage medium
CN113345439A (en) Subtitle generating method, device, electronic equipment and storage medium
JP3888239B2 (en) Digital audio processing method and apparatus, and computer program
CN115119110A (en) Sound effect adjusting method, audio playing device and computer readable storage medium
WO2021008350A1 (en) Audio playback method and apparatus and computer readable storage medium
Hoffmann et al. Smart Virtual Bass Synthesis algorithm based on music genre classification
US20220076687A1 (en) Electronic device, method and computer program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant