CN113556604A - Sound effect adjusting method and device, computer equipment and storage medium - Google Patents

Info

Publication number
CN113556604A
CN113556604A
Authority
CN
China
Prior art keywords
sound effect
content
program
content scene
target
Prior art date
Legal status
Granted
Application number
CN202010332278.4A
Other languages
Chinese (zh)
Other versions
CN113556604B (en)
Inventor
肖荣权
Current Assignee
Oneplus Technology Shenzhen Co Ltd
Original Assignee
Oneplus Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Oneplus Technology Shenzhen Co Ltd
Priority to CN202010332278.4A
Publication of CN113556604A
Application granted
Publication of CN113556604B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/439 Processing of audio elementary streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/485 End-user interface for client configuration
    • H04N 21/4852 End-user interface for client configuration for modifying audio parameters, e.g. switching between mono and stereo

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application relates to a sound effect adjusting method and device, computer equipment and a storage medium. The method comprises the following steps: capturing screen content to obtain multiple frames of screen images, identifying the content of the screen images, generating a predicted content scene list according to the identification result, and acquiring system operation information. A content scene corresponding to the screen content is matched from the predicted content scene list according to the system operation information, and the original sound effect is dynamically adjusted based on the content scene to generate a target sound effect that fits the content scene. With this method, a more accurate content scene can be determined by judging the system operation information together with the recognition result. Because the user no longer has to adjust the sound effect manually for different programs, the poor sound quality and repeated operations caused by incorrect manual adjustment are avoided, and sound effect adjustment efficiency is further improved.

Description

Sound effect adjusting method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a sound effect adjustment method and apparatus, a computer device, and a storage medium.
Background
With the development of computer technology and the widespread use of television products in daily life, the number of channels a television product can receive and play keeps growing to meet the differing requirements of different users. The types of programs broadcast on each channel also vary, covering movies, news, sports, music and other program types.
When a television product is used, the appropriate sound effect depends on the type of program the user selects. For example, music programs benefit from more low-frequency content for a fuller, more impactful sound and from a soft, smooth sound quality that is more comfortable to listen to; news programs call for weaker bass and stronger vocals; and movie and television programs require surround sound and a better sense of presence. Conventionally, a television product therefore provides several predefined sound effect modes for the user to select manually, such as a standard mode, a movie mode, a music mode and a news mode.
However, this adjustment method requires the user to select the mode manually. If the user forgets to select a sound effect mode, cannot accurately judge the current sound scene during use, or does not fully understand the various sound effect modes, an incorrect selection produces a sound effect that does not match the current program type. The mismatched sound effect gives a poor listening experience, forces the user to select again manually, and makes the operation cumbersome, so sound effect adjustment is inefficient.
Disclosure of Invention
In view of the above, it is necessary to provide a sound effect adjusting method, apparatus, computer device and storage medium capable of improving sound effect adjustment efficiency.
A sound effect adjustment method, the method comprising:
acquiring screen content to obtain a plurality of frames of screen images;
identifying the content of the screen image, and generating a predicted content scene list according to an identification result;
acquiring system operation information, and matching a content scene corresponding to the screen content from the predicted content scene list according to the system operation information;
and dynamically adjusting the original sound effect based on the content scene to generate a target sound effect according with the content scene.
In one embodiment, the multi-frame screen image is a continuous dynamic image with a preset time length.
In one embodiment, the method further comprises:
when a program switching instruction is detected, responding to the program switching instruction, and switching an original playing program into a target playing program;
acquiring screen content when the target playing program is played, and acquiring the playing time of the target playing program;
and when the playing time of the target playing program exceeds a preset time domain, if the acquisition time reaches the preset time, obtaining a dynamic screen image with the preset time.
In one embodiment, the dynamically adjusting the original sound effects based on the content scene to generate target sound effects according to the content scene includes:
extracting an original configuration parameter list corresponding to an original sound effect;
according to the content scene, determining original sound effect parameters to be adjusted and target parameter values in the original configuration parameter list;
adjusting the original sound effect parameters to be adjusted according to the target parameter values to generate adjusted sound effect parameters;
and generating a target sound effect according with the content scene according to the adjusted sound effect parameters.
In one embodiment, the obtaining system operation information and matching corresponding content scenes from the predicted content scene list according to the system operation information includes:
acquiring system operation information, wherein the system operation information comprises playing state information and interface information;
determining the current playing state of the screen according to the playing state information;
determining the type of the running application program according to the interface information;
and matching corresponding content scenes from the predicted content scene list according to the current playing state and the type of the running application program.
In one embodiment, the determining, according to the content scene, an original sound effect parameter and a target parameter value to be adjusted in the original configuration parameter list includes:
according to the content scene, determining a predefined sound effect parameter associated with the content scene from a predefined sound effect parameter list;
comparing the predefined sound effect parameters with the original sound effect parameters in the original configuration parameter list to determine original sound effect parameters to be adjusted;
and extracting a target parameter value corresponding to the predefined sound effect parameter.
An audio effect adjustment apparatus, the apparatus comprising:
the screen image acquisition module is used for acquiring screen contents to obtain a plurality of frames of screen images;
the predicted content scene list generating module is used for identifying the content of the screen image and generating a predicted content scene list according to an identification result;
the content scene matching module is used for acquiring system operation information and matching a content scene corresponding to the screen content from the predicted content scene list according to the system operation information;
and the sound effect adjusting module is used for dynamically adjusting the original sound effect based on the content scene to generate a target sound effect according with the content scene.
In one embodiment, the screen image acquisition module is further configured to:
when a program switching instruction is detected, responding to the program switching instruction, and switching an original playing program into a target playing program;
acquiring screen content when the target playing program is played, and acquiring the playing time of the target playing program;
and when the playing time of the target playing program exceeds a preset time domain, if the acquisition time reaches the preset time, obtaining a dynamic screen image with the preset time.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring screen content to obtain a plurality of frames of screen images;
identifying the content of the screen image, and generating a predicted content scene list according to an identification result;
acquiring system operation information, and matching a content scene corresponding to the screen content from the predicted content scene list according to the system operation information;
and dynamically adjusting the original sound effect based on the content scene to generate a target sound effect according with the content scene.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring screen content to obtain a plurality of frames of screen images;
identifying the content of the screen image, and generating a predicted content scene list according to an identification result;
acquiring system operation information, and matching a content scene corresponding to the screen content from the predicted content scene list according to the system operation information;
and dynamically adjusting the original sound effect based on the content scene to generate a target sound effect according with the content scene.
According to the above sound effect adjusting method, apparatus, computer device and storage medium, multiple frames of screen images are obtained by capturing the screen content, the content of the screen images is identified, and a predicted content scene list is generated according to the identification result. By acquiring the system operation information and judging comprehensively according to it, a more accurate content scene corresponding to the screen content is matched from the predicted content scene list, and the original sound effect is then dynamically adjusted based on the content scene to generate a target sound effect that fits the content scene. No manual adjustment by the user is required, which avoids the poor sound quality and repeated operations caused by manual adjustment and further improves sound effect adjustment efficiency.
Drawings
FIG. 1 is a diagram of an application environment of a sound effect adjustment method in an embodiment;
FIG. 2 is a flow chart illustrating a sound effect adjustment method according to an embodiment;
FIG. 3 is a flow diagram illustrating an embodiment of generating target audio effects according to content scenes;
FIG. 4 is a flow chart illustrating a sound effect adjustment method according to another embodiment;
FIG. 5 is a diagram of a program switch interface in one embodiment;
FIG. 6 is a block diagram of an embodiment of an audio effect adjusting apparatus;
FIG. 7 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The sound effect adjusting method provided by the application can be applied to the application environment shown in fig. 1, in which the terminal 102 communicates with the server 104 via a network. The terminal 102 autonomously captures screen content to obtain multiple frames of screen images; the content played on the screen is sent from the server 104 to the terminal 102 over the data communication connection. The terminal 102 identifies the content of the screen images, generates an identification result for them, and then generates a predicted content scene list according to that result. By acquiring its own system operation information, the terminal 102 matches a content scene corresponding to the screen content from the predicted content scene list according to that information, and then dynamically adjusts the original sound effect based on the content scene to generate a target sound effect that fits the content scene. The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smart phone, tablet computer or television product, and the server 104 may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, a sound effect adjusting method is provided, which is described by taking the method as an example applied to the terminal in fig. 1, and includes the following steps:
step S202, screen content is collected, and multiple frames of screen images are obtained.
Specifically, the terminal captures the image content being played on the screen in real time to obtain multiple frames of screen images composed of that content.
Further, the terminal can be a television product. While the user watches it, the television product can capture in real time the screen content of the program the user has selected and extract multiple frames of screen images from it. Alternatively, the television product can sample the image content played on the screen at a preset extraction period to obtain the multiple frames of screen images.
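As a rough illustration of the periodic capture described above, the sketch below pulls frames from a hypothetical grab_screen_frame() callable at a fixed extraction period; the function name, period and frame count are illustrative assumptions rather than values from this disclosure.

```python
import time
from collections import deque

def capture_screen_frames(grab_screen_frame, period_s=0.5, num_frames=60):
    """Capture the screen at a fixed extraction period (illustrative sketch).

    grab_screen_frame is a hypothetical callable that returns the image
    currently shown on the screen; period_s and num_frames are assumed values.
    Returns the multi-frame screen image as a list of frames.
    """
    frames = deque(maxlen=num_frames)
    for _ in range(num_frames):
        frames.append(grab_screen_frame())  # one frame of the playing content
        time.sleep(period_s)                # preset extraction period
    return list(frames)
```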
And step S204, identifying the content of the screen image, and generating a predicted content scene list according to the identification result.
Specifically, the television product acting as the terminal identifies the content of the extracted multiple frames of screen images and generates a corresponding identification result. The content scene expressed by the extracted frames can be identified dynamically using artificial intelligence (AI) techniques; content scenes may include the user playing music, watching movie and television programs, watching sports programs, watching news programs, playing games, and so on.
The identification of the collected frames includes local image content identification, and more accurate identification can be performed by sending the images to a cloud server over the network. The content identified in the recent period is also consulted together with the content of the current image, and the current content scene is determined by applying a weighted matching algorithm and rules over this data. Here the recent period refers to the identification results of other images cached before the current frame, i.e. a reference to the most recently identified continuous content.
Further, the extracted frames form a continuous dynamic image of a preset duration. Compared with traditional identification of a single static image captured at one moment, performing content identification on a continuous sequence of frames further improves the accuracy of the result. The identification results are then analysed to obtain the predicted content scene list, i.e. the content scenes to which the content currently played on the screen may belong.
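A minimal sketch of turning per-frame recognition results into a predicted content scene list is given below; classify_frame stands in for the local or cloud recogniser, recent_results for the cached identifications of earlier frames, and the scene labels and continuity weight are assumptions made for illustration only.

```python
from collections import Counter

def predict_content_scenes(frames, classify_frame, recent_results, top_k=3):
    """Build a predicted content scene list from the captured frames.

    classify_frame is a hypothetical recogniser returning (scene, confidence)
    for a single frame; recent_results holds the scenes identified for
    previously cached frames, so continuity over the recent period is
    weighted in as well.
    """
    scores = Counter()
    for frame in frames:
        scene, confidence = classify_frame(frame)
        scores[scene] += confidence        # weight each frame by its confidence
    for scene in recent_results:
        scores[scene] += 0.5               # illustrative continuity weight
    # most likely scenes first
    return [scene for scene, _ in scores.most_common(top_k)]
```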
In one embodiment, for example, the user plays music and views pictures at the same time: the user browses pictures on the television product while music playing software runs in the background, and the pictures are static landscape or portrait photographs. When the current screen content is captured and identified, the identification result is a static landscape or portrait, so the resulting predicted content scene list may include several content scenes: the user may be watching a movie or television program whose picture happens to be still, the picture may be serving as the screen-saver interface of a music application, or the user may be playing music.
And step S206, acquiring system operation information, and matching a content scene corresponding to the screen content from the predicted content scene list according to the system operation information.
Specifically, the system operation information includes play state information and interface information, and a current play state of the screen can be determined according to the play state information, where the current play state includes different states such as playing video, playing audio, and displaying pictures. The type of the application program currently running in the television product can be determined according to the interface information, wherein the type of the application program comprises a movie application program, a game application program, a music application program, a shopping application program and the like.
Further, according to whether the current play state is video playback, audio playback or picture display, combined with which application program is currently running, for example a movie application, a game application or a music application, the content scene that matches the system operation information, i.e. the play state information and the interface information, is determined from the predicted content scene list.
In one embodiment, for example, when the user plays music and views pictures at the same time, the system operation information including play state information and interface information is acquired. Because the user is viewing pictures, the current play state is determined to be picture display; because the application detected as currently running is a music application, the user is determined to be playing music. Since no particular music sound effect needs to be set for viewing pictures, the content scene requiring sound effect adjustment is further determined to be the user playing music.
In another embodiment, for example, the user plays a game program inside a movie application, that is, the application the user has opened is a movie application and the program selected for playback is a game program. The screen content is captured to obtain multiple frames of game-related screen images, the frames are identified to obtain an identification result, and a predicted content scene list is obtained from that result; the list includes both the user watching a movie or television program and the user playing a game. The system operation information, including the play state information and the interface information, is then acquired: because the user opened a movie application to play the game program, the current play state is determined to be video playback and the currently running application type is determined to be a movie application. Judging the image identification result together with the system operation information, the more accurate content scene matched from the predicted content scene list is that the user is watching a video program.
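The sketch below shows one way the matching in step S206 could combine the predicted content scene list with the play state and running application type; the rule table and labels are illustrative assumptions, not the actual weighting used by this method.

```python
def match_content_scene(predicted_scenes, play_state, app_type):
    """Pick the scene from the predicted list that agrees with the system
    operation information (play state + running application type).

    The mapping below is an invented, illustrative rule set.
    """
    rules = {
        ("playing_audio", "music_app"):   "music",
        ("playing_video", "movie_app"):   "movie",
        ("playing_video", "game_app"):    "game",
        ("showing_picture", "music_app"): "music",  # picture viewing needs no special sound effect
    }
    candidate = rules.get((play_state, app_type))
    if candidate in predicted_scenes:
        return candidate
    # fall back to the most likely predicted scene if the rules give no match
    return predicted_scenes[0] if predicted_scenes else None
```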
Step S208, based on the content scene, dynamically adjusting the original sound effect to generate a target sound effect according with the content scene.
Specifically, before any sound effect adjustment is performed, the sound effect applied by the television product is the preset original sound effect. The original sound effect is the conventional, general-purpose sound effect of the television product: it can be used in different content scenes, but it cannot suit all of them, nor can it achieve the sound effect the user would ideally expect in each scene. The original sound effect therefore needs to be adjusted dynamically based on the content scene to deliver the best expected sound effect for the current content scene.
Further, the original configuration parameter list corresponding to the original sound effect is extracted, the target parameter value of each sound effect parameter is determined according to the matched content scene, and the original sound effect parameters to be adjusted are determined from the original configuration parameter list according to that scene. The original sound effect parameters to be adjusted are then adjusted to the target parameter values to produce the adjusted sound effect parameters, from which the target sound effect fitting the content scene is generated.
In the sound effect adjusting method, multiple frames of screen images are obtained by capturing the screen content, the content of the screen images is identified, and a predicted content scene list is generated according to the identification result. By acquiring the system operation information and judging comprehensively according to it, a more accurate content scene corresponding to the screen content is matched from the predicted content scene list, and the original sound effect is then dynamically adjusted based on the content scene to generate a target sound effect that fits the content scene. No manual adjustment by the user is required, which avoids the poor sound quality and repeated operations caused by manual adjustment and further improves sound effect adjustment efficiency.
In an embodiment, as shown in fig. 3, the step of generating the target sound effect conforming to the content scene, that is, the step of dynamically adjusting the original sound effect based on the content scene to generate the target sound effect conforming to the content scene specifically includes the following steps corresponding to S302 to S308:
step S302, an original configuration parameter list corresponding to the original sound effect is extracted.
Specifically, before sound effect adjustment is performed, the sound effect applied by the television product is the preset original sound effect. The original configuration parameters corresponding to the original sound effect and their parameter values are stored in advance in the local database of the television product. When a sound effect adjustment based on the content scene is required, the original configuration parameter list corresponding to the original sound effect can be extracted from local storage; the list contains the original configuration parameters and the original parameter value of each configuration parameter.
The components that determine sound quality include pitch, tone, volume and timbre. Pitch is determined by the frequency of the sound wave: it describes how high or low the sound is and is related to the number of vibrations of the sound source per second. A low pitch indicates a low vibration frequency and a deep sound; a high pitch indicates a high vibration frequency and a sharp sound. Volume is determined by the amplitude of the sound wave and describes the intensity or loudness of the sound; it is related to the magnitude of the sound source's vibration amplitude. If the amplitude is too weak the sound cannot be heard, and if it is too strong the sound is unbearable to the human ear.
Tone is determined by the waveform envelope of the sound wave: for the tones used in music, their harmonic composition and the waveform envelope, including the transients at the start and end of a tone, determine its character. Sound quality can be characterised by the growth and decay of the sound; different sound qualities correspond to different sound spectra, that is, to different intensity distributions of the spectral lines. Timbre is determined by the frequency spectrum of the sound wave and refers to the colour and character of the sound; it is related to the spectrum of the sound source's vibration. A pure tone can be regarded as a single frequency, while most sounds are composites of several frequencies. The fundamental frequency component of the spectrum forms the fundamental tone of the sound, and the pitch is determined by the height of this fundamental frequency; the other components of the spectrum are overtones, whose frequencies are multiples of the fundamental, and the timbre is determined by the structure of these overtones.
Further, excitation adjustment, compression/limiting adjustment, noise reduction adjustment, delay adjustment and equalization adjustment may be applied to the configuration parameters. Excitation adjustment generates harmonics to shape the sound and enhance its dynamics across frequency, improving clarity, brightness, loudness, warmth and body. Compression/limiting adjustment normalises the sound so that it is more powerful yet comfortable. Noise reduction adjustment removes noise from the sound, and delay adjustment improves the loudness of the direct sound. The equalization parameters cover the sub-bass, bass, mid-low, mid-high, treble and ultra-high bands; they adjust the electrical signal in each frequency band separately so as to compensate for deficiencies of the loudspeaker and the sound field and to correct and shape the various sound sources.
Step S304, according to the content scene, determining the original sound effect parameters to be adjusted and the target parameter values in the original configuration parameter list.
Specifically, according to the content scene, the predefined sound effect parameters associated with the content scene are determined from a predefined sound effect parameter list, and the target parameter values corresponding to those predefined sound effect parameters are extracted. The predefined sound effect parameters are then compared with the original sound effect parameters in the original configuration parameter list to determine the original sound effect parameters to be adjusted.
The predefined sound effect parameters in the predefined sound effect parameter list include sound frequency, sound spectrum, sound pressure and sound power. Sound frequency is the number of periodic variations completed per unit time, i.e. the number of vibrations of the sound source per second; the frequency range audible to the human ear is usually 20 Hz to 20 kHz, and frequencies in this range are called audio frequencies. The sound spectrum describes how the amplitude or phase of the components of a time-domain signal are distributed as a function of frequency; because sound is composed of pure tones of different frequencies and intensities, analysing the frequency content and intensity of the sound emitted by a source is called spectral analysis. The strength of a sound wave can be described quantitatively by the sound pressure and the sound pressure level. Sound power is the acoustic energy from the source passing perpendicularly through a specified area per unit time, and may refer to the power radiated over the whole audible frequency range or over some limited frequency range.
The predefined sound effect parameter list also stores the target parameter value for each predefined sound effect parameter, and the predefined sound effect parameters associated with the content scene are determined from this list. For example, based on the determined content scene, the associated predefined sound effect parameters may include sound pressure, sound intensity and sound frequency; once they are determined, the corresponding target parameter values are read from the predefined sound effect parameter list.
Further, after the predefined sound effect parameters are determined, they are compared with all the original sound effect parameters in the original configuration parameter list, and the original sound effect parameters whose values are inconsistent with the predefined ones are taken as the original sound effect parameters to be adjusted. For example, if the original sound effect parameters include sound frequency, sound spectrum, sound pressure and sound intensity, and the predefined sound effect parameters also include sound frequency, sound spectrum, sound pressure and sound intensity, the target parameter value of each predefined parameter is compared with the original parameter value of the corresponding original parameter, and the original parameters whose values differ are the ones to be adjusted.
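A small sketch of this comparison, assuming both parameter sets are held as name-to-value dictionaries (the key names and value ranges are illustrative assumptions):

```python
def find_parameters_to_adjust(original_params, predefined_params):
    """Return the original sound-effect parameters whose values differ from
    the predefined ones for the scene, together with the target values.

    Both arguments are dicts such as {"sound_pressure": 0.6, ...}.
    """
    to_adjust = {}
    for name, target_value in predefined_params.items():
        original_value = original_params.get(name)
        if original_value != target_value:   # inconsistent with the predefined value
            to_adjust[name] = target_value
    return to_adjust
```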
And S306, adjusting the original sound effect parameter to be adjusted according to the target parameter value to generate the adjusted sound effect parameter.
Specifically, the original parameter value of each original sound effect parameter is obtained and compared with the target parameter value of the corresponding predefined sound effect parameter. The original parameter value of each parameter to be adjusted is then adjusted according to that target value until it matches the target parameter value of the corresponding predefined sound effect parameter, yielding the adjusted sound effect parameters.
Furthermore, adjusting the original sound effect parameters, including the sound frequency, sound spectrum, sound pressure, sound intensity and sound power, modifies the components that determine sound quality, namely pitch, tone, volume and timbre, so as to achieve a sound effect better suited to the content scene. For example, depending on the content scene, the original sound effect parameters to be adjusted can be equalised: boosting the bass or mid-low bands gives a softer sound suited to music programs, while boosting the mid-high or treble bands gives a more energetic sound suited to sports programs.
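The following sketch illustrates such per-scene equalisation; the band names and dB offsets are invented for illustration and are not values taken from this disclosure.

```python
# Illustrative equalisation offsets (in dB) per content scene.
EQ_PRESETS = {
    "music":  {"bass": +4, "mid_low": +2, "mid_high": 0,  "treble": 0},
    "sports": {"bass": 0,  "mid_low": 0,  "mid_high": +3, "treble": +2},
    "news":   {"bass": -2, "mid_low": 0,  "mid_high": +3, "treble": 0},
}

def apply_equalisation(original_eq, scene):
    """Return adjusted equaliser gains for the matched content scene.

    original_eq maps band name to the original gain in dB; bands without a
    preset offset for the scene are left unchanged.
    """
    offsets = EQ_PRESETS.get(scene, {})
    return {band: gain + offsets.get(band, 0) for band, gain in original_eq.items()}
```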
And step S308, generating a target sound effect according with the content scene according to the adjusted sound effect parameters.
Specifically, according to the adjusted sound effect parameters including the modified parameter values of the sound effect parameters, the adjusted target sound effect which conforms to the content scene is generated.
In the above steps, the original configuration parameter list corresponding to the original sound effect is extracted, and the original sound effect parameters to be adjusted and their target parameter values are determined from that list according to the content scene. The parameters to be adjusted are then adjusted to the target values to produce the adjusted sound effect parameters, from which the target sound effect fitting the content scene is generated automatically. No manual adjustment by the user is needed, which avoids the poor sound quality and repeated operation caused by incorrect manual adjustment and further improves sound effect adjustment efficiency.
In one embodiment, as shown in fig. 4, a sound effect adjusting method is provided, which specifically includes the following steps:
step S402, when the program switching instruction is detected, the original playing program is switched to the target playing program in response to the program switching instruction.
Specifically, when the user is detected to have triggered a program switching instruction on the television product, the instruction is responded to: the target playing program corresponding to the instruction is obtained, and the original playing program that was playing before the instruction was triggered is switched to the target playing program.
Further, fig. 5 provides a schematic diagram of a program switching interface. Referring to fig. 5, the program currently playing on the screen of the television product 500 is a sports program 502. When a program switching instruction triggered by the user is detected, the target playing program corresponding to the acquired instruction is a video program 504, and in response to the instruction the sports program 502 that was playing before the instruction was triggered is switched to the video program 504.
Step S404, collecting the screen content when the target playing program is played, and obtaining the playing time length of the target playing program.
Specifically, the screen content of the target playing program is collected in real time and stored to obtain a multi-frame screen image, and meanwhile, the playing time of the target playing program is counted.
Further, when the target playing program is a movie program, the screen content of the movie program is collected in real time, and the playing time of the movie program is counted.
Step S406, when the playing time of the target playing program exceeds the preset time domain, if the collection time reaches the preset time, obtaining a dynamic screen image with the preset time.
Specifically, when the target playing program is a video program, the playing duration of the video program is compared with a preset time domain to determine whether it exceeds that time domain. The time domain represents the continuity of the dynamically identified scene in the time dimension. For example, the automatically identified scene may be a football match in a sports program, but the user may operate other television functions partway through, such as switching to a video program. When the playing duration of the video program exceeds the preset time domain, the duration of capturing the screen image content is compared with the preset duration, and once the capture duration reaches the preset duration, a dynamic screen image of that duration is generated from the captured screen image content.
In this embodiment, the preset time domain may be set to 20 s and the preset duration to 30 s. The capture duration is counted once the playing duration of the video program exceeds 20 s, and when the capture duration reaches the preset 30 s, a dynamic screen image 30 s long is generated from the captured screen image content.
Further, if the user watches the movie program only briefly, so that the time spent on it does not reach the preset time domain, and then switches back to the sports program to continue watching the football match, the screen images of the movie program do not need to be collected.
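A compact sketch of the timing decision in steps S404 to S406, using the 20 s / 30 s example values above; the function and argument names are assumptions for illustration.

```python
def should_capture(play_duration_s, capture_duration_s,
                   time_domain_s=20, preset_duration_s=30):
    """Decide whether a dynamic screen image of the preset duration is ready.

    time_domain_s and preset_duration_s mirror the 20 s / 30 s example in the
    text; returns True only when the switched-to program has played past the
    preset time domain and enough screen content has been captured.
    """
    if play_duration_s <= time_domain_s:
        return False        # switched program not yet played long enough
    return capture_duration_s >= preset_duration_s
```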
In the above sound effect adjusting method, when a program switching instruction is detected, the original playing program is switched to the target playing program in response to the instruction. The screen content is captured while the target playing program is played and its playing time is recorded; when the playing time exceeds the preset time domain and the capture duration reaches the preset duration, a dynamic screen image of the preset duration is obtained. Because the switched-to program is judged to need sound effect adjustment only after it has been played continuously for the preset time domain, frequent sound effect switching and misoperation are avoided, and the accuracy of sound effect adjustment is further improved.
It should be understood that although the steps in the flow charts of figs. 2-4 are shown sequentially in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2-4 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided a sound effect adjusting apparatus including: a screen image acquisition module 602, a predicted content scene list generation module 604, a content scene matching module 606, and a sound effect adjustment module 608, wherein:
the screen image collecting module 602 is configured to collect screen content to obtain a multi-frame screen image.
And a predicted content scene list generating module 604, configured to identify content of the screen image, and generate a predicted content scene list according to the identification result.
And a content scene matching module 606, configured to acquire system operation information, and match a content scene corresponding to the screen content from the predicted content scene list according to the system operation information.
And a sound effect adjusting module 608, configured to dynamically adjust the original sound effect based on the content scene, so as to generate a target sound effect according with the content scene.
The above sound effect adjusting device captures the screen content to obtain multiple frames of screen images, identifies the content of the screen images, and generates a predicted content scene list according to the identification result. By acquiring the system operation information and judging comprehensively according to it, a more accurate content scene corresponding to the screen content is matched from the predicted content scene list, and the original sound effect is then dynamically adjusted based on the content scene to generate a target sound effect that fits the content scene. No manual adjustment by the user is required, which avoids the poor sound quality and repeated operations caused by manual adjustment and further improves sound effect adjustment efficiency.
In one embodiment, the screen image acquisition module is further configured to:
when a program switching instruction is detected, responding to the program switching instruction, and switching an original playing program into a target playing program; acquiring screen content when a target playing program is played, and acquiring the playing time of the target playing program; when the playing time of the target playing program exceeds the preset time domain, if the acquisition time reaches the preset time, a dynamic screen image with the preset time is obtained.
In the screen image acquisition module, when a program switching instruction is detected, the original playing program is switched to the target playing program in response to the instruction. The screen content is captured while the target playing program is played and its playing time is recorded; when the playing time exceeds the preset time domain and the capture duration reaches the preset duration, a dynamic screen image of the preset duration is obtained. Because the switched-to program is judged to need sound effect adjustment only after it has been played continuously for the preset time domain, frequent sound effect switching and misoperation are avoided, and the accuracy of sound effect adjustment is further improved.
In one embodiment, the sound effect adjustment module is further configured to:
extracting an original configuration parameter list corresponding to an original sound effect; according to the content scene, determining an original sound effect parameter to be adjusted and a target parameter value in an original configuration parameter list; adjusting the original sound effect parameters to be adjusted according to the target parameter values to generate adjusted sound effect parameters; and generating a target sound effect according with the content scene according to the adjusted sound effect parameters.
In the sound effect adjusting module, the original configuration parameter list corresponding to the original sound effect is extracted, and the original sound effect parameters to be adjusted and their target parameter values are determined from that list according to the content scene. The parameters to be adjusted are then adjusted to the target values to produce the adjusted sound effect parameters, from which the target sound effect fitting the content scene is generated automatically. No manual adjustment by the user is needed, which avoids the poor sound quality and repeated operation caused by incorrect manual adjustment and further improves sound effect adjustment efficiency.
For the specific definition of the sound effect adjusting device, reference may be made to the above definition of the sound effect adjusting method, which is not described herein again. The modules in the sound effect adjusting device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 7. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a sound effect adjustment method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring screen content to obtain a plurality of frames of screen images;
identifying the content of the screen image, and generating a predicted content scene list according to an identification result;
acquiring system operation information, and matching a content scene corresponding to the screen content from the predicted content scene list according to the system operation information;
and dynamically adjusting the original sound effect based on the content scene to generate a target sound effect according with the content scene.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
the multi-frame screen image is a continuous dynamic image with preset duration.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
when a program switching instruction is detected, responding to the program switching instruction, and switching an original playing program into a target playing program;
acquiring screen content when a target playing program is played, and acquiring the playing time of the target playing program;
when the playing time of the target playing program exceeds the preset time domain, if the acquisition time reaches the preset time, a dynamic screen image with the preset time is obtained.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
extracting an original configuration parameter list corresponding to an original sound effect;
according to the content scene, determining an original sound effect parameter to be adjusted and a target parameter value in an original configuration parameter list;
adjusting the original sound effect parameters to be adjusted according to the target parameter values to generate adjusted sound effect parameters;
and generating a target sound effect according with the content scene according to the adjusted sound effect parameters.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring system operation information, wherein the system operation information comprises playing state information and interface information;
determining the current playing state of the screen according to the playing state information;
determining the type of the running application program according to the interface information;
and matching the corresponding content scene from the predicted content scene list according to the current playing state and the type of the running application program.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
according to the content scene, determining predefined sound effect parameters related to the content scene from a predefined sound effect parameter list;
comparing the predefined sound effect parameters with the original sound effect parameters in the original configuration parameter list to determine the original sound effect parameters to be adjusted;
and extracting a target parameter value corresponding to the predefined sound effect parameter.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring screen content to obtain a plurality of frames of screen images;
identifying the content of the screen image, and generating a predicted content scene list according to an identification result;
acquiring system operation information, and matching a content scene corresponding to the screen content from the predicted content scene list according to the system operation information;
and dynamically adjusting the original sound effect based on the content scene to generate a target sound effect according with the content scene.
In one embodiment, the computer program when executed by the processor further performs the steps of:
the multi-frame screen image is a continuous dynamic image with preset duration.
In one embodiment, the computer program when executed by the processor further performs the steps of:
when a program switching instruction is detected, responding to the program switching instruction, and switching an original playing program into a target playing program;
acquiring screen content when a target playing program is played, and acquiring the playing time of the target playing program;
when the playing time of the target playing program exceeds the preset time domain, if the acquisition time reaches the preset time, a dynamic screen image with the preset time is obtained.
In one embodiment, the computer program when executed by the processor further performs the steps of:
extracting an original configuration parameter list corresponding to an original sound effect;
according to the content scene, determining an original sound effect parameter to be adjusted and a target parameter value in an original configuration parameter list;
adjusting the original sound effect parameters to be adjusted according to the target parameter values to generate adjusted sound effect parameters;
and generating a target sound effect according with the content scene according to the adjusted sound effect parameters.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring system operation information, wherein the system operation information comprises playing state information and interface information;
determining the current playing state of the screen according to the playing state information;
determining the type of the running application program according to the interface information;
and matching the corresponding content scene from the predicted content scene list according to the current playing state and the type of the running application program.
In one embodiment, the computer program when executed by the processor further performs the steps of:
according to the content scene, determining predefined sound effect parameters related to the content scene from a predefined sound effect parameter list;
comparing the predefined sound effect parameters with the original sound effect parameters in the original configuration parameter list to determine the original sound effect parameters to be adjusted;
and extracting a target parameter value corresponding to the predefined sound effect parameter.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or an external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to be within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A sound effect adjustment method, the method comprising:
acquiring screen content to obtain a plurality of frames of screen images;
identifying the content of the screen image, and generating a predicted content scene list according to an identification result;
acquiring system operation information, and matching a content scene corresponding to the screen content from the predicted content scene list according to the system operation information;
and dynamically adjusting the original sound effect based on the content scene to generate a target sound effect conforming to the content scene.
2. The method according to claim 1, wherein the multi-frame screen image is a continuous dynamic image of a preset duration.
3. The method of claim 2, further comprising:
when a program switching instruction is detected, responding to the program switching instruction, and switching an original playing program into a target playing program;
acquiring screen content when the target playing program is played, and acquiring the playing time of the target playing program;
and when the playing time of the target playing program exceeds a preset time domain, if the acquisition time reaches the preset duration, obtaining a dynamic screen image of the preset duration.
4. The method of claim 1, wherein the dynamically adjusting the original sound effect based on the content scene to generate a target sound effect conforming to the content scene comprises:
extracting an original configuration parameter list corresponding to an original sound effect;
according to the content scene, determining original sound effect parameters to be adjusted and target parameter values in the original configuration parameter list;
adjusting the original sound effect parameters to be adjusted according to the target parameter values to generate adjusted sound effect parameters;
and generating a target sound effect conforming to the content scene according to the adjusted sound effect parameters.
5. The method of claim 1, wherein the acquiring system operation information and matching a content scene corresponding to the screen content from the predicted content scene list according to the system operation information comprises:
acquiring system operation information, wherein the system operation information comprises playing state information and interface information;
determining the current playing state of the screen according to the playing state information;
determining the type of the running application program according to the interface information;
and matching the corresponding content scene from the predicted content scene list according to the current playing state and the type of the running application program.
6. The method according to claim 4, wherein the determining, according to the content scene, original sound effect parameters to be adjusted and target parameter values in the original configuration parameter list comprises:
according to the content scene, determining a predefined sound effect parameter associated with the content scene from a predefined sound effect parameter list;
comparing the predefined sound effect parameters with the original sound effect parameters in the original configuration parameter list to determine original sound effect parameters to be adjusted;
and extracting a target parameter value corresponding to the predefined sound effect parameter.
7. A sound effect adjustment apparatus, the apparatus comprising:
the screen image acquisition module is used for acquiring screen contents to obtain a plurality of frames of screen images;
the predicted content scene list generating module is used for identifying the content of the screen image and generating a predicted content scene list according to an identification result;
the content scene matching module is used for acquiring system operation information and matching a content scene corresponding to the screen content from the predicted content scene list according to the system operation information;
and the sound effect adjusting module is used for dynamically adjusting the original sound effect based on the content scene to generate a target sound effect conforming to the content scene.
8. The sound effect adjustment apparatus of claim 7, wherein the screen image acquisition module is further configured to:
when a program switching instruction is detected, responding to the program switching instruction, and switching an original playing program into a target playing program;
acquiring screen content when the target playing program is played, and acquiring the playing time of the target playing program;
and when the playing time of the target playing program exceeds a preset time domain, if the acquisition time reaches the preset time, obtaining a dynamic screen image with the preset time.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN202010332278.4A 2020-04-24 2020-04-24 Sound effect adjusting method, device, computer equipment and storage medium Active CN113556604B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010332278.4A CN113556604B (en) 2020-04-24 2020-04-24 Sound effect adjusting method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010332278.4A CN113556604B (en) 2020-04-24 2020-04-24 Sound effect adjusting method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113556604A 2021-10-26
CN113556604B CN113556604B (en) 2023-07-18

Family

ID=78129619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010332278.4A Active CN113556604B (en) 2020-04-24 2020-04-24 Sound effect adjusting method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113556604B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114025231A (en) * 2021-11-18 2022-02-08 紫光展锐(重庆)科技有限公司 Sound effect adjusting method, sound effect adjusting device, chip and chip module thereof
CN114035871A (en) * 2021-10-28 2022-02-11 深圳市优聚显示技术有限公司 Display method and system of 3D display screen based on artificial intelligence and computer equipment
CN114443886A (en) * 2022-04-06 2022-05-06 南昌航天广信科技有限责任公司 Sound effect adjusting method and system of broadcast sound box, computer and readable storage medium
CN114464210A (en) * 2022-02-15 2022-05-10 游密科技(深圳)有限公司 Sound processing method, sound processing device, computer equipment and storage medium
CN114866791A (en) * 2022-03-31 2022-08-05 北京达佳互联信息技术有限公司 Sound effect switching method and device, electronic equipment and storage medium

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104090766A (en) * 2014-07-17 2014-10-08 广东欧珀移动通信有限公司 Sound effect switching method and system for mobile terminal
CN104506901A (en) * 2014-11-12 2015-04-08 科大讯飞股份有限公司 Voice assisting method and system based on television scene state and voice assistant
CN105100831A (en) * 2014-04-16 2015-11-25 北京酷云互动科技有限公司 Television set playing mode adjustment method, television playing system and television set
CN106775562A (en) * 2016-12-09 2017-05-31 奇酷互联网络科技(深圳)有限公司 The method and device of audio frequency parameter treatment
WO2017101357A1 (en) * 2015-12-14 2017-06-22 乐视控股(北京)有限公司 Sound effect mode selection method and device
CN108900616A (en) * 2018-06-29 2018-11-27 百度在线网络技术(北京)有限公司 Sound resource listens to method and apparatus
CN108924361A (en) * 2018-07-10 2018-11-30 南昌黑鲨科技有限公司 Audio plays and collection control method, system and computer readable storage medium
CN108966007A (en) * 2018-09-03 2018-12-07 青岛海信电器股份有限公司 A kind of method and device for distinguishing video scene at HDMI
CN109240641A (en) * 2018-09-04 2019-01-18 Oppo广东移动通信有限公司 Audio method of adjustment, device, electronic equipment and storage medium
CN109272970A (en) * 2018-10-30 2019-01-25 维沃移动通信有限公司 A kind of screen luminance adjustment method and mobile terminal
CN109286772A (en) * 2018-09-04 2019-01-29 Oppo广东移动通信有限公司 Audio method of adjustment, device, electronic equipment and storage medium
CN109348040A (en) * 2018-08-09 2019-02-15 北京奇艺世纪科技有限公司 A kind of effect adjusting method, device and terminal device
CN109582463A (en) * 2018-11-30 2019-04-05 Oppo广东移动通信有限公司 Resource allocation method, device, terminal and storage medium
US20190143214A1 (en) * 2016-06-16 2019-05-16 Guangdong Oppo Mobile Telcommunications Corp., Ltd. Control method of scene sound effect and related products
CN110493639A (en) * 2019-10-21 2019-11-22 南京创维信息技术研究院有限公司 A kind of method and system of adjust automatically sound and image model based on scene Recognition
US20200026728A1 (en) * 2016-06-16 2020-01-23 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Scenario-based sound effect control method and electronic device
CN110868628A (en) * 2019-11-29 2020-03-06 深圳创维-Rgb电子有限公司 Intelligent control method for television sound and picture modes, television and storage medium
CN110933490A (en) * 2019-11-20 2020-03-27 深圳创维-Rgb电子有限公司 Automatic adjustment method for picture quality and tone quality, smart television and storage medium
CN110989961A (en) * 2019-10-30 2020-04-10 华为终端有限公司 Sound processing method and device
CN110996153A (en) * 2019-12-06 2020-04-10 深圳创维-Rgb电子有限公司 Scene recognition-based sound and picture quality enhancement method and system and display

Also Published As

Publication number Publication date
CN113556604B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
CN113556604B (en) Sound effect adjusting method, device, computer equipment and storage medium
CN107994879B (en) Loudness control method and device
CN104091423B (en) A kind of method for transmitting signals and family's order programme
CN112216294B (en) Audio processing method, device, electronic equipment and storage medium
CN111031386B (en) Video dubbing method and device based on voice synthesis, computer equipment and medium
CN108965981B (en) Video playing method and device, storage medium and electronic equipment
CN108259925A (en) Music gifts processing method, storage medium and terminal in net cast
CN112511750A (en) Video shooting method, device, equipment and medium
CN109089043A (en) Shoot image pre-processing method, device, storage medium and mobile terminal
WO2017185584A1 (en) Method and device for playback optimization
CN110928518B (en) Audio data processing method and device, electronic equipment and storage medium
US9053710B1 (en) Audio content presentation using a presentation profile in a content header
CN111785238A (en) Audio calibration method, device and storage medium
CN109891405A (en) The method, system and medium of the presentation of video content on a user device are modified based on the consumption mode of user apparatus
CN107547732A (en) A kind of media play volume adjusting method, device, terminal and storage medium
CN102244750A (en) Video display apparatus having sound level control function and control method thereof
CN112511779A (en) Video data processing method and device, computer storage medium and electronic equipment
CN114095793A (en) Video playing method and device, computer equipment and storage medium
CN110364188A (en) Audio frequency playing method, device and computer readable storage medium
JP6560503B2 (en) Rise notification system
WO2019114582A1 (en) Video image processing method and computer storage medium and terminal
CN115665504A (en) Event identification method and device, electronic equipment and storage medium
JP7466087B2 (en) Estimation device, estimation method, and estimation system
CN115278352A (en) Video playing method, device, equipment and storage medium
CN112584225A (en) Video recording processing method, video playing control method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant