CN110879850B - Method, device and equipment for acquiring jitter parameters and storage medium - Google Patents
- Publication number
- CN110879850B (granted from application CN201911115640.6A)
- Authority
- CN
- China
- Prior art keywords
- audio data
- amplitude
- filtering
- frequency domain
- jitter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/63—Querying
- G06F16/635—Filtering based on additional data, e.g. user or group profiles
- G06F16/637—Administration of user profiles, e.g. generation, initialization, adaptation or distribution
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/68—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/6063—Methods for processing data by generating or executing the game program for sound processing
Abstract
The application discloses a method, an apparatus, a device, and a storage medium for acquiring jitter parameters, belonging to the field of internet technology. The method comprises: acquiring audio data; generating a frequency domain discrete signal corresponding to the audio data; filtering the frequency domain discrete signal to remove non-key frequencies and obtain a filtered frequency domain discrete signal; generating modified audio data from the filtered frequency domain discrete signal; and extracting amplitude key points from the modified audio data to obtain the jitter parameters. The technical solution provided by the embodiments of the application, on the one hand, solves the problems in the related art that acquiring jitter parameters depends heavily on human workers and is inefficient, improving the efficiency of generating jitter parameters and reducing the reliance on workers; on the other hand, when the game interface shakes along with a sound effect, the shake is smooth, reducing the perceived roughness of the game.
Description
Technical Field
The embodiments of the application relate to the field of computer and internet technologies, and in particular to a method, an apparatus, a device, and a storage medium for acquiring jitter parameters.
Background
With the increasing richness of game types and content, virtual objects in games possess more and more skills.
In the related art, when a virtual object in a game releases a skill, the user interface plays the art animation corresponding to that skill and, at the same time, produces a corresponding lens shake. The lens shake parameters are determined from the art animation of the virtual object: for example, after the art animation for a skill release is finalized, a game artist derives the lens shake parameters from that animation.
However, in the related art described above, because an artist must manually determine the lens shake parameters from the art animation, acquiring the shake parameters depends heavily on human workers and is inefficient.
Disclosure of Invention
The embodiments of the application provide a method, an apparatus, a device, and a storage medium for acquiring jitter parameters, which can solve the problems in the related art that acquiring jitter parameters depends heavily on human workers and is inefficient. The technical solution is as follows:
in one aspect, an embodiment of the present application provides a method for acquiring a jitter parameter, where the method includes:
acquiring audio data;
generating a frequency domain discrete signal corresponding to the audio data;
filtering the frequency domain discrete signal to remove non-key frequency signals, obtaining a filtered frequency domain discrete signal, wherein the non-key frequency signals refer to noise signals in the audio data;
generating modified audio data according to the filtered frequency domain discrete signal;
and extracting amplitude key points from the modified audio data to obtain jitter parameters.
On the other hand, an embodiment of the present application provides a method for acquiring a jitter parameter, where the method includes:
acquiring audio data in a game application program;
generating a frequency domain discrete signal corresponding to the audio data;
filtering the frequency domain discrete signal to remove non-key frequency signals, obtaining a filtered frequency domain discrete signal, wherein the non-key frequency signals refer to noise signals in the audio data;
generating modified audio data according to the filtered frequency domain discrete signal;
and extracting amplitude key points from the modified audio data to obtain jitter parameters, wherein the jitter parameters are used for controlling the shake of a camera in the game application.
In another aspect, an embodiment of the present application provides a method for displaying a game interface, where the method includes:
displaying a game interface;
receiving a release instruction corresponding to the target skill;
controlling the virtual object to release the target skill according to the release instruction;
playing a skill sound effect corresponding to the target skill, and controlling the game interface to shake;
and generating the jitter parameters of the game interface according to the audio data corresponding to the skill sound effect.
In another aspect, an embodiment of the present application provides an apparatus for obtaining jitter parameters, where the apparatus includes:
the audio acquisition module is used for acquiring audio data;
the signal generating module is used for generating a frequency domain discrete signal corresponding to the audio data;
the signal filtering module is used for filtering the frequency domain discrete signal to remove non-key frequency signals and obtain a filtered frequency domain discrete signal, wherein the non-key frequency signals refer to noise signals in the audio data;
the audio modification module is used for generating modified audio data according to the filtered frequency domain discrete signal;
and the parameter acquisition module is used for extracting amplitude key points from the modified audio data to obtain jitter parameters.
In a further aspect, an embodiment of the present application provides an apparatus for obtaining a jitter parameter, where the apparatus includes:
the audio acquisition module is used for acquiring audio data in the game application program;
the signal generating module is used for generating a frequency domain discrete signal corresponding to the audio data;
the signal filtering module is used for filtering the frequency domain discrete signal to remove non-key frequency signals and obtain a filtered frequency domain discrete signal, wherein the non-key frequency signals refer to noise signals in the audio data;
the audio modification module is used for generating modified audio data according to the filtered frequency domain discrete signal;
and the parameter acquisition module is used for extracting amplitude key points from the modified audio data to obtain jitter parameters, wherein the jitter parameters are used for controlling the shake of a camera in the game application.
In another aspect, an embodiment of the present application provides a display device for a game interface, where the device includes:
the interface display module is used for displaying a game interface;
the instruction receiving module is used for receiving a release instruction corresponding to the target skill;
the skill release module is used for controlling the virtual object to release the target skill according to the release instruction;
the interface shaking module is used for playing the skill sound effect corresponding to the target skill and controlling the game interface to shake, wherein the jitter parameters of the game interface are generated according to the audio data corresponding to the skill sound effect.
In yet another aspect, an embodiment of the present application provides a computer device, where the computer device includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or a set of instructions, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the method for acquiring the jitter parameter. Optionally, the computer device may be a server or a terminal.
In a further aspect, an embodiment of the present application provides a terminal, where the terminal includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the display method of the game interface.
In a further aspect, the present application provides a computer-readable storage medium, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the above method for acquiring jitter parameters.
In a further aspect, the present application provides a computer-readable storage medium, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the display method of the game interface.
In a further aspect, an embodiment of the present application provides a computer program product, where the computer program product is used to implement the method for acquiring a jitter parameter described above when being executed by a processor.
In a further aspect, an embodiment of the present application provides a computer program product, where the computer program product is used to implement the display method of the game interface described above when being executed by a processor.
The technical solutions provided by the embodiments of the application can bring the following beneficial effects:
On the one hand, a technical solution for automatically generating jitter parameters corresponding to audio data is provided. Generating jitter parameters in this way solves the problems in the related art that acquiring jitter parameters depends heavily on human workers and is inefficient, improving the generation efficiency of the jitter parameters and reducing the reliance of the generation process on workers. On the other hand, the jitter parameters generated by this solution match the audio data closely: when the game interface shakes along with a sound effect, the shake is smooth, reducing the perceived roughness of the game.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic illustration of an implementation environment provided by one embodiment of the present application;
fig. 2 is a flowchart of a method for obtaining jitter parameters according to an embodiment of the present application;
FIG. 3 shows a schematic of the signal before and after a Fourier transform;
FIG. 4 shows a schematic of the variation of amplitude key points before and after simple clipping filtering;
fig. 5 is a flowchart of a method for obtaining jitter parameters according to another embodiment of the present application;
FIG. 6 is a schematic diagram showing one method of selecting the maximum amplitude;
FIG. 7 is a schematic diagram illustrating segmented frequency domain sample data;
FIG. 8 is a flow chart of a method of displaying a game interface provided by one embodiment of the present application;
FIG. 9 is a schematic diagram showing icons for two different skills in a gaming application;
FIG. 10 is a flow chart illustrating a method of displaying a game interface;
fig. 11 is a flowchart of a method for obtaining jitter parameters according to yet another embodiment of the present application;
fig. 12 is a block diagram of an apparatus for obtaining jitter parameters according to an embodiment of the present application;
fig. 13 is a block diagram of an apparatus for obtaining jitter parameters according to another embodiment of the present application;
FIG. 14 is a block diagram of a display device of a game interface provided in one embodiment of the present application;
fig. 15 is a block diagram of a server according to an embodiment of the present disclosure;
fig. 16 is a block diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a schematic diagram of an implementation environment provided by an embodiment of the present application is shown. The implementation environment may include: a terminal 10 and a server 20.
The terminal 10 may be an electronic device such as a mobile phone, a tablet computer, a game console, an e-book reader, a multimedia playback device, a wearable device, or a PC (Personal Computer).
Optionally, a client of the target application installed in the terminal 10 may implement an interface display function, for example, the target application may be a video application, a social application, an instant messaging application, a game application, an information application, a reading application, a shopping application, a music application, and the like, which is not limited in this embodiment of the present application.
The server 20 is used to provide background services for clients of target applications in the terminal 10. For example, the server 20 may be a backend server of the target application described above. The server 20 may be a server, a server cluster composed of a plurality of servers, or a cloud computing service center.
The terminal 10 and the server 20 can communicate with each other through the network 30. The network 30 may be a wired network or a wireless network.
In a possible application scenario, the target application is a game application. With the technical solution provided by the embodiments of the application, jitter parameters for controlling camera shake in the game application are generated from audio data in the game application; the camera in the game application is then controlled to shake according to the jitter parameters, so that the displayed game interface shakes.
Optionally, the game interface refers to a user interface containing the game's virtual environment. The virtual environment is the scene displayed (or provided) by the client of the game application when it runs on a terminal; it is a scene created for virtual objects to act in (for example, a game match), such as a virtual house, a virtual island, or a virtual map. The virtual environment may be a simulation of the real world, a semi-simulated and semi-fictional environment, or a purely fictional environment, and it may be two-dimensional, 2.5-dimensional, or three-dimensional, which is not limited in the embodiments of the application. Optionally, the game interface may observe the virtual environment from a first-person perspective or from a third-person perspective, which is likewise not limited in the embodiments of the application.
A virtual object refers to a virtual character in the game application. Optionally, a virtual object may be a virtual character controlled by a user account in the game application, or an NPC (Non-Player Character) controlled by the computer device's artificial intelligence. The virtual object may take the form of a person, an animal, a cartoon figure, or other forms, and may be displayed in three-dimensional or two-dimensional form, neither of which is limited in the application. Optionally, when the virtual environment is three-dimensional, the virtual object is a three-dimensional model created based on skeletal animation technology; each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies part of its space.
In the embodiments of the application, the game interface is generated by a camera in the game application. The camera is a virtual camera set in the game application that frames and shoots the virtual environment from the first-person or third-person perspective of the virtual object controlled by the user. Optionally, the terminal captures the virtual environment around the user-controlled virtual object in real time through the virtual camera and displays it in the game interface.

Next, the method for acquiring jitter parameters provided by the application is described through several embodiments. The execution subject of each step of the method may be the server 20 in the implementation environment shown in FIG. 1, or the terminal 10, which is not limited in the embodiments of the application. For ease of description, in the following method embodiments the execution subject of each step is described as a server, but this is not limiting.
Referring to fig. 2, a flowchart of a method for obtaining jitter parameters according to an embodiment of the present application is shown. The method comprises the following steps (201-205):
Step 201: audio data is acquired.
Audio data refers to sound that has been digitally processed and stored in a computer device. Optionally, the sound may be a biological sound, an instrument sound, noise, and the like. In the embodiments of the application, the audio data refers to a sound time domain signal generated while an application installed in the terminal is in use. Optionally, the sound may be call voice between users, or a sound effect produced by a virtual object. A sound effect refers to a sound effect stored in the game application, for example the skill sound effect corresponding to a virtual object controlled by the user, or the sound effect of an NPC controlled by artificial intelligence.
Optionally, in this embodiment of the application, the audio data may be audio data corresponding to a skill sound effect when the virtual object releases the skill in the game application. It should be noted that, in the game application program, different skill sound effects can be generated when the virtual object releases different skills, and the different skill sound effects in the game application program correspond to different audio data.
Step 202: a frequency domain discrete signal corresponding to the audio data is generated.
A frequency domain discrete signal is a signal discretely distributed over frequency in the frequency domain. Optionally, time domain signals and frequency domain signals can be converted into each other: a time domain signal is transformed into the corresponding frequency domain signal by Fourier transform, and a frequency domain signal is transformed back into a time domain signal by inverse Fourier transform.
In the embodiments of the application, after acquiring the audio data, the server may perform a Fourier transform on it to generate the corresponding frequency domain discrete signal. Illustratively, referring to FIG. 3, the audio data 31 acquired by the server is Fourier transformed to generate the frequency domain discrete signal 32.
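The transform in step 202 can be sketched as follows. This is an illustrative sketch only, assuming Python with NumPy (the patent does not name a library), and the 50 Hz test tone is invented for demonstration:

```python
import numpy as np

def to_frequency_domain(samples, sample_rate):
    # Fourier transform the time-domain audio into a frequency-domain
    # discrete signal (step 202).
    spectrum = np.fft.rfft(samples)                        # complex bins
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs, spectrum

# Invented test tone: a 50 Hz sine sampled at 1 kHz for one second.
sr = 1000
t = np.arange(sr) / sr
samples = np.sin(2 * np.pi * 50 * t)
freqs, spectrum = to_frequency_domain(samples, sr)
peak_freq = freqs[np.argmax(np.abs(spectrum))]             # dominant frequency
```

For a pure 50 Hz tone, the dominant bin of the resulting discrete spectrum sits at 50 Hz, which is the kind of structure shown schematically in FIG. 3.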
Step 203: the frequency domain discrete signal is filtered to remove non-key frequency signals, obtaining a filtered frequency domain discrete signal.
A non-key frequency signal refers to a noise signal in the audio data. In the embodiments of the application, the frequency domain discrete signal contains noise signals; before acquiring the jitter parameters, the server needs to filter the frequency domain discrete signal to remove the non-key frequency signals and obtain the filtered frequency domain discrete signal.
It should be noted that noise signals include time domain noise signals and frequency domain noise signals: the noise signal in the audio data is a time domain noise signal, while the noise signal in the frequency domain discrete signal obtained by Fourier transforming the audio data is a frequency domain noise signal.
Step 204: modified audio data is generated according to the filtered frequency domain discrete signal.
The modified audio data is the time domain signal obtained after the noise signal has been filtered out of the audio data. Optionally, after obtaining the filtered frequency domain discrete signal, the server performs an inverse Fourier transform on it to obtain the modified audio data.
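Steps 203 and 204 together can be sketched as zeroing the unwanted frequency bins and inverting the transform. The 20-300 Hz pass band below is an assumed stand-in for the "key" frequencies (the patent does not specify a band), and the 450 Hz component plays the role of the noise signal:

```python
import numpy as np

def filter_and_reconstruct(samples, sample_rate, band=(20.0, 300.0)):
    # Step 203: zero every frequency bin outside the assumed key band.
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    keep = (freqs >= band[0]) & (freqs <= band[1])
    spectrum[~keep] = 0.0
    # Step 204: inverse Fourier transform back to modified audio data.
    return np.fft.irfft(spectrum, n=len(samples))

# Invented signal: a 50 Hz "key" tone plus a 450 Hz "noise" tone.
sr = 1000
t = np.arange(sr) / sr
noisy = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 450 * t)
modified = filter_and_reconstruct(noisy, sr)
```

After the round trip, the 450 Hz component is gone and the modified audio data is essentially the clean 50 Hz tone.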
Step 205: amplitude key points are extracted from the modified audio data to obtain the jitter parameters.
An amplitude key point is a sample taken from the modified audio data. Optionally, after obtaining the modified audio data, the server samples it at a fixed time interval to obtain the amplitude key points. The time interval is set by the server and may be 0.01 s, 0.02 s, 0.03 s, or the like, which is not limited in the embodiments of the application.
Optionally, the step 205 includes the following sub-steps:
1. Amplitude key points are sampled from the modified audio data. As described above, after acquiring the modified audio data, the server samples it at a fixed time interval to obtain the amplitude key points.
It should be noted that the number of amplitude key points may be one or more, and the embodiment of the present application is not limited thereto.
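The fixed-interval sampling of sub-step 1 might look like the following sketch. Taking the peak magnitude inside each window is an assumed picking rule, since the patent only specifies the interval:

```python
def sample_amplitude_keypoints(samples, sample_rate, interval=0.01):
    # Take one amplitude key point per `interval` seconds of modified
    # audio; the peak magnitude of each window is an assumption, not
    # a rule stated in the patent.
    step = max(1, int(interval * sample_rate))
    keypoints = []
    for start in range(0, len(samples), step):
        window = samples[start:start + step]
        keypoints.append(max(abs(v) for v in window))
    return keypoints

# Toy data: four samples at a 100 Hz rate, one key point per sample.
keypoints = sample_amplitude_keypoints([0.0, -0.5, 2.0, 0.1], 100)
```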
2. The amplitude key points are filtered to obtain processed amplitude key points.
Optionally, because the audio data contains background noise, and the filtering step above only removes the noise signal and cannot remove this background noise, background noise still remains in the amplitude key points. The server can therefore filter the amplitude key points after acquiring them, removing the background noise to obtain the processed amplitude key points.
In the embodiments of the application, the background noise present in the amplitude key points acquired by the server is low in amplitude but large in quantity; jitter parameters derived from such key points easily cause prolonged low-amplitude camera shake, blurring the user interface and making the user dizzy. Optionally, the server may filter the background noise out of the amplitude key points with a simple amplitude-limiting (clipping) filter to prevent this.
Optionally, the server sets the amplitude of every amplitude key point whose amplitude is below a preset threshold to zero, obtaining the processed amplitude key points. The preset threshold is set by the server and may be 10, 20, 30, or the like, which is not limited in the embodiments of the application. For example, for N amplitude key points stored in an array freqArray, each amplitude is compared against the minimum amplitude threshold; if an amplitude is smaller than the threshold, the amplitude of the corresponding key point is set to zero. Illustratively, referring to FIG. 4, the curve 42 corresponding to the amplitude key points 41 is processed by the simple clipping filter to obtain the curve 44 corresponding to the processed amplitude key points 43. Illustratively, pseudo code for filtering the amplitude key points is as follows:
where freqArray[x] denotes the amplitude of the amplitude key point with index x.
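A runnable version of this simple amplitude-limiting filter, with an assumed threshold of 20, could be:

```python
def clip_low_amplitudes(freq_array, min_amplitude):
    # Simple amplitude-limiting (clipping) filter: key points below the
    # threshold are zeroed so prolonged low-amplitude background noise
    # cannot drive the camera shake.
    return [a if a >= min_amplitude else 0.0 for a in freq_array]

# Threshold of 20 is one of the example values mentioned above.
processed = clip_low_amplitudes([5.0, 25.0, 12.0, 40.0], min_amplitude=20.0)
```

With this input, the two sub-threshold key points (5 and 12) are zeroed while 25 and 40 pass through unchanged, matching the before/after curves of FIG. 4.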
3. The jitter parameters are obtained from the processed amplitude key points.
In the embodiments of the application, the server obtains the jitter parameters from the processed amplitude key points; the user interface is then made to shake according to these jitter parameters when the application plays the sound effect corresponding to the audio data.
Optionally, after obtaining the processed amplitude key points, the server obtains their associated information, which may include an identifier, a count, an amplitude, and so on; the server then generates a Bezier curve from this information, and the Bezier curve represents the jitter parameters. Of course, the jitter parameters may also be represented by other curves, numbers, or letters, which is not limited in the embodiments of the application.
Taking a Bezier curve as an example, the pseudo code for generating the Bezier curve is as follows:
(1) Define a KeyFrame array whose size equals the number of key points.
KeyFrame[] m_arrayKeyFrame = new KeyFrame[_nKFCount];
where _nKFCount denotes the number of processed amplitude key points.
(2) Add the amplitude key points to the KeyFrame array one by one.
where KFIndex is the index (identifier) of a key point and fKFValue is its amplitude.
(3) Generate the Bezier curve.
if (m_arrayKeyFrame.Length == 0) checks whether the array of processed amplitude key points is empty; if so, no amplitude key point has been added to the array, a Bezier curve cannot be generated, and null is returned. Otherwise an animation curve object is created through the animation-curve API (Application Programming Interface) with m_arrayKeyFrame as the parameter, i.e. a Bezier curve is created from the processed amplitude key points. preWrapMode is the wrap mode before the first frame and postWrapMode the wrap mode after the last frame; with these settings, the shake corresponding to the processed amplitude key points plays once together with the sound effect and then stops automatically, returning to the initial state.
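The pseudocode above targets an engine animation-curve API. The following Python sketch mirrors its logic only: the empty-array check returning null, and one KeyFrame per processed amplitude key point. The linear evaluate() is an illustrative simplification of the engine's Bezier interpolation, and the 0.01 s interval is one of the example intervals mentioned earlier, not a value fixed by the patent:

```python
class KeyFrame:
    def __init__(self, time, value):
        self.time = time    # seconds into the shake
        self.value = value  # shake amplitude at that time

def build_shake_curve(keypoints, interval=0.01):
    # Mirror of the pseudocode: an empty key-point array yields no
    # curve (null); otherwise one KeyFrame per processed key point.
    if len(keypoints) == 0:
        return None
    return [KeyFrame(i * interval, v) for i, v in enumerate(keypoints)]

def evaluate(curve, t):
    # Piecewise-linear stand-in for the engine's Bezier interpolation;
    # values before the first / after the last frame are clamped, which
    # loosely plays the role of the wrap modes.
    if t <= curve[0].time:
        return curve[0].value
    if t >= curve[-1].time:
        return curve[-1].value
    for a, b in zip(curve, curve[1:]):
        if a.time <= t <= b.time:
            w = (t - a.time) / (b.time - a.time)
            return a.value * (1 - w) + b.value * w

curve = build_shake_curve([0.0, 10.0, 4.0])
```

Sampling evaluate(curve, t) each frame would then drive the camera offset, which is the role the animation curve object plays in the engine.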
In summary, the technical solution provided by the embodiments of the application automatically generates jitter parameters corresponding to audio data. On the one hand, generating jitter parameters in this way solves the problems in the related art that acquiring jitter parameters depends heavily on human workers and is inefficient, improving the generation efficiency of the jitter parameters and reducing the reliance of the generation process on workers. On the other hand, the jitter parameters generated by this solution match the audio data closely: when the game interface shakes along with a sound effect, the shake is smooth, reducing the perceived roughness of the game.
In addition, removing background noise from the amplitude key points avoids the picture blur caused by low-amplitude shake of the game interface and improves the visual effect.
In addition, representing the shake parameters with a Bezier curve generated from the processed amplitude key points reduces the jumpiness of the game interface shake, making the shake smooth and less rough.
In addition, audio data is obtained from the skill sound effects of virtual objects to generate the jitter parameters, enhancing the sense of intensity of the gameplay.
Please refer to fig. 5, which shows a flowchart of a method for obtaining jitter parameters according to another embodiment of the present application. The execution subject of the method may be the server 20 of the implementation environment shown in fig. 1. The method can comprise the following steps (501-509):
The key frequency is the frequency corresponding to an effective signal in the audio data; an effective signal is a signal produced by non-noise sound-effect processing in the application. The key frequency is obtained by the server processing the audio data. Optionally, step 503 includes the following steps:
1. The maximum amplitude of each piece of segmented audio data is acquired.
Optionally, after acquiring the audio data, the server segments it into n pieces of segmented audio data, where n is an integer greater than 1. Optionally, the duration of each of the n pieces may be the same or different, for example 0.01 s, 0.017 s, or 0.02 s, which is not limited in the embodiments of the application. The server then acquires the maximum amplitude of each piece, so the number of amplitude maxima equals the number of pieces. Optionally, the server obtains each maximum by traversing the piece. Illustratively, taking any piece of segmented audio data as an example and referring to FIG. 6, the server initializes the maximum of the piece to m; while traversing the piece in time order, it compares the amplitude of each sample with m, and if a sample's amplitude is greater than m, that amplitude is assigned to m before the traversal continues. When the traversal finishes, m is the maximum amplitude of the piece.
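The segment-and-traverse procedure can be sketched as follows. Using fixed-length segments measured in samples (rather than seconds) is an illustrative simplification:

```python
def segment_max_amplitudes(samples, segment_len):
    # One traversal per segment: keep a running maximum m and replace
    # it whenever a larger absolute amplitude is seen, as in FIG. 6.
    maxima = []
    for start in range(0, len(samples), segment_len):
        m = 0.0
        for v in samples[start:start + segment_len]:
            if abs(v) > m:
                m = abs(v)
        maxima.append(m)
    return maxima

# Toy data: six samples split into two segments of three.
maxima = segment_max_amplitudes([1.0, -3.0, 2.0, 0.5, -0.2, 4.0], 3)
```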
2. Target segmented audio data is selected from the n segmented audio data.
Optionally, after obtaining the amplitude maxima, the server selects target segmented audio data from the n pieces of segmented audio data, where the target segmented audio data is the segmented audio data whose amplitude maximum meets a preset condition. The preset condition is that the amplitude maximum is greater than or equal to a preset target, and the preset target may be a ratio determined from the n amplitude maxima. For example, if the preset target is 25%, the server ranks the n amplitude maxima and selects the segmented audio data whose maxima fall in the top 25% as the target segmented audio data.
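The top-ratio selection can be sketched as follows (the helper name and exact ranking procedure are assumptions for illustration):

```python
def select_target_segments(maxima, ratio=0.25):
    """Return the indices of the segments whose amplitude maximum falls
    in the top `ratio` fraction of all maxima (the preset target)."""
    n_select = max(1, int(len(maxima) * ratio))
    # rank segment indices by amplitude maximum, largest first
    ranked = sorted(range(len(maxima)), key=lambda i: maxima[i], reverse=True)
    return sorted(ranked[:n_select])
```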
3. And filtering noise signals in the frequency domain discrete signals corresponding to the target segmented audio data to obtain the key frequency.
The server filters the noise signal out of the frequency domain discrete signal corresponding to the target segmented audio data to obtain a key frequency signal. Optionally, the server may filter the noise signal according to the difference between the frequency of the noise signal and the frequency of the non-noise signal in the frequency domain discrete signal; the frequency corresponding to the resulting key frequency signal is the key frequency.
In another possible embodiment, after obtaining the amplitude maxima, the server samples the audio data at each amplitude maximum to obtain segmented audio sample data. Further, the server performs a Fourier transform on the segmented audio sample data to obtain segmented frequency domain sample data, and selects the frequency domain data whose amplitude meets the preset condition as the target segmented frequency domain data. Finally, the server filters the noise signal out of the target segmented frequency domain data and performs an inverse Fourier transform on the filtered data to obtain the key frequency signal; the frequency corresponding to the key frequency signal is the key frequency.
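The transform–filter–inverse-transform pipeline of this embodiment can be sketched with NumPy as below. The explicit band limits are an assumption for illustration; the text derives the noise/non-noise boundary from the frequency difference rather than from a fixed band:

```python
import numpy as np

def extract_key_frequency_signal(segment, sample_rate, band):
    """Fourier-transform a segment, zero the bins outside `band`
    (lo_hz, hi_hz) as noise, and inverse-transform back to obtain
    the key frequency signal."""
    spectrum = np.fft.rfft(segment)
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / sample_rate)
    lo, hi = band
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0  # filter the noise bins
    return np.fft.irfft(spectrum, n=len(segment))
```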
Illustratively, referring to fig. 7, because a noise signal is present in the audio data, the segmented frequency domain sample data 71 obtained by the server through the Fourier transform is triangular in shape rather than an ideal impulse function; therefore, the server needs to filter the segmented frequency domain sample data 71 to remove the noise signal. Optionally, the server filters the segmented frequency domain sample data by an iterative least squares method, which is briefly described below.
(1) Initializing the parameters.
Initialize the order of A(Z): NA = 2;
initialize the order of B(Z): NB = 2;
initialize the initial value of the adaptive gain matrix P: P0 = 1e-6;
initialize the forgetting factor α: α = 0.995;
initialize the initial value of the parameter vector: θ0 = 0.001.
Wherein A(Z) and B(Z) represent curve parameters; the gain matrix P is used for the subsequent iterative computation, and optionally may be initialized to a small positive value, the gain matrix P being a second-order matrix; the forgetting factor α is used to ensure the accuracy of the iterative computation result.
For i, j ∈ [0, NA+NB):
P[i][j] = 0f;
θ[i] = θ0;
p[i][j] = P0;
where i represents the row index and j the column index of the gain matrix P, P[i][j] represents the value in row i, column j of the gain matrix P, and the suffix f indicates that P[i][j] is a floating-point number; θ[i] represents an element of the parameter vector, and θ0 represents its initial value.
(2) Calculating the estimated prediction error yer.
Optionally, the initial prediction error yer is set to 0; then, for i, j ∈ [0, NA+NB),
(3) Calculating the estimated accumulated error a.
For i, j ∈ [0, NA+NB),
for j ∈ [0, NA+NB),
a = α;
for i, j ∈ [0, NA+NB),
(4) The iterative least squares update.
Iteratively calculate θ(k) = [a(1), a(2), ..., a(NA); b(1), b(2), ..., b(NB)].
For i ∈ [0, NA+NB),
θ[i] += pf[i]*yer/a;
where pf[i] represents the i-th element of the gain vector pf derived from the gain matrix P.
(5) Calculating the gain matrix P(k) at time k.
For i ∈ [0, NA+NB), j ∈ [0, i],
p[i][j] -= pf[i]*pf[j]/a;
p[i][j] /= α;
for i ∈ [0, NA+NB), j ∈ [0, i),
p[j][i] = p[i][j];
where pf[j] represents the j-th element of the gain vector pf.
For i ∈ [0, NA),
for i ∈ [0, NB),
(7) Repeat from step (3) until the iteration is finished. The result θ(k) obtained by the calculation is the parameter estimated by the iterative least squares method.
(8) Obtain the key frequency according to θ(k).
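Because the per-step formulas above are elided in the text, the following is only a sketch of one iteration of a standard recursive least squares estimator with forgetting factor α, reusing the names yer, pf, a, θ and P from the steps above; it is an assumption based on the textbook algorithm, not a reproduction of the patent's exact formulas:

```python
def rls_step(theta, P, phi, y, alpha=0.995):
    """One standard recursive-least-squares update with forgetting
    factor alpha (a sketch; the patent's exact formulas are elided).
    phi is the regressor vector, y the observed sample."""
    n = len(theta)
    # estimated prediction error (yer in step (2))
    yer = y - sum(phi[i] * theta[i] for i in range(n))
    # intermediate gain vector pf = P @ phi
    pf = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    # estimated accumulated error (a in step (3)): a = alpha + phi' P phi
    a = alpha + sum(phi[i] * pf[i] for i in range(n))
    # parameter update, matching "theta[i] += pf[i]*yer/a" in step (4)
    theta = [theta[i] + pf[i] * yer / a for i in range(n)]
    # gain-matrix update of step (5): P = (P - pf pf' / a) / alpha
    P = [[(P[i][j] - pf[i] * pf[j] / a) / alpha for j in range(n)]
         for i in range(n)]
    return theta, P
```

Repeating rls_step over the samples, as in step (7), drives θ(k) toward the parameters that best fit the observed data.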
It should be noted that the key frequency may be a specific frequency value or a frequency range.
Step 503, performing a Fourier transform on any piece of segmented audio data to obtain a frequency domain discrete signal.
Step 504, filtering the frequency domain discrete signal to obtain a filtered frequency domain discrete signal.
The filtered frequency domain discrete signal refers to the non-noise signal in the frequency domain discrete signal. A method for acquiring the filtered frequency domain discrete signal is described next. Optionally, step 504 further includes the following steps:
1. Determining the filtering range of the frequency domain discrete signal according to the key frequency.
The filtering range refers to the frequency range within which signals are retained; signals outside it are treated as non-key-frequency signals. Optionally, the server determines the filtering range according to the maximum deviation amplitude allowed for the frequency error. The maximum deviation amplitude is set by the server and may be 5%, 10%, 12%, and so on, which is not limited in this embodiment of the application. Illustratively, if the maximum deviation amplitude is 10% and the key frequency is the frequency range [80 Hz, 100 Hz], the filtering range is [70 Hz, 110 Hz].
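One reading of the range computation that reproduces the worked example ([80 Hz, 100 Hz] with a 10% maximum deviation giving [70 Hz, 110 Hz]) is to take the margin as the deviation applied to the upper key frequency; this margin rule is inferred from the example only and is an assumption:

```python
def filtering_range(key_band, max_deviation):
    """Expand the key-frequency band (lo_hz, hi_hz) into the filtering
    (retained) range, using a margin of max_deviation * hi.
    Note: this margin rule is inferred from the worked example."""
    lo, hi = key_band
    margin = max_deviation * hi
    return (lo - margin, hi + margin)
```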
2. Filtering out the frequency domain discrete signals outside the filtering range and retaining those within it to obtain the filtered frequency domain discrete signal.
Optionally, after obtaining the filtering range, the server performs filtering processing on the frequency domain discrete signal. The filtering method may be an average amplitude limiting filtering method, an amplitude filtering method, a median filtering method, and the like, which is not limited in this embodiment of the application. For example, using the average amplitude limiting filtering method, the frequency domain discrete signal is judged frequency by frequency: signals outside the filtering range are determined to be invalid and filtered out, while signals within the filtering range are determined to be valid and retained as the filtered frequency domain discrete signal.
Step 505, performing an inverse Fourier transform on the filtered frequency domain discrete signal to generate the modified audio data.
Step 507, filtering the amplitude key points to obtain the processed amplitude key points.
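The filtering of step 507 — described elsewhere in this application as setting amplitude key points below a preset threshold to zero to remove background noise — can be sketched as:

```python
def filter_amplitude_keypoints(keypoints, threshold):
    """Zero out amplitude key points whose magnitude is below the
    preset threshold, removing background noise from the jitter."""
    return [a if abs(a) >= threshold else 0.0 for a in keypoints]
```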
Step 509, if all the segmented audio data have been processed, generating a Bezier curve according to the processed amplitude key points, where the Bezier curve is used to represent the jitter parameters.
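A Bezier curve built from the processed amplitude key points can be evaluated with De Casteljau's algorithm, as sketched below; treating the key-point amplitudes directly as control values is an illustrative assumption:

```python
def bezier_point(control_values, t):
    """Evaluate a Bezier curve whose control values are the processed
    amplitude key points, at parameter t in [0, 1] (De Casteljau:
    repeatedly interpolate adjacent points until one remains)."""
    pts = list(control_values)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]
```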
In summary, in the technical solution provided in the embodiment of the present application, the noise signal in the audio data is filtered out according to the key frequency derived from the audio data, which reduces noise interference when the Bezier curve is generated and improves the accuracy of the jitter parameters.
In addition, the amplitude key points are obtained by segmented sampling and filtering of the audio data, which improves the reliability of the amplitude key points.
Referring to fig. 8, a flowchart of a display method of a game interface according to an embodiment of the present application is shown. The execution subject of the method may be the terminal 10 (hereinafter simply referred to as "client") in the implementation environment shown in fig. 1. The method can comprise the following steps (801-805):
Optionally, in this embodiment of the application, the game interface may further include skill icons, where a skill icon is used by the user to control the virtual object to release the corresponding skill. Illustratively, referring to fig. 9 and taking skill sound effects as an example, in the game application the skill sound effect corresponding to skill 91 is different from the skill sound effect corresponding to skill 92; that is, the jitter parameters of the game interface differ when the user controls the virtual object to release skill 91 and when the user controls it to release skill 92.
At step 802, a release instruction corresponding to a target skill is received.
A skill refers to an ability assigned to a virtual object in the game application that can affect the virtual environment or other virtual objects. A skill release instruction refers to an instruction for controlling the virtual object to release the corresponding skill. Optionally, the skill release instruction is generated by a user trigger. For example, on a terminal equipped with a touch screen, the user controls the virtual object to release a skill by tapping the corresponding skill icon; as another example, on a PC, the user controls the virtual object to release a skill by pressing the corresponding key.
In the embodiments of the present application, the target skill refers to any skill given to the virtual object by the game application. After the user triggers the release instruction corresponding to the target skill, the client receives the corresponding skill release instruction.
And step 803, controlling the virtual object to release the target skill according to the release instruction.
Optionally, after receiving the skill release instruction, the client controls the virtual object to release the target skill, and controls the game interface to play the animation special effect corresponding to the target skill.
And step 804, playing a skill sound effect corresponding to the target skill, and controlling the game interface to shake.
The jitter parameters of the game interface are generated according to the audio data corresponding to the skill sound effect. Optionally, in this embodiment of the application, while controlling the game interface to play the corresponding animation special effect, the client plays the skill sound effect corresponding to the target skill and controls the game interface to shake according to the Bezier curve corresponding to the skill sound effect.
It should be noted that the jitter parameters here are acquired by the same method as described in the embodiments of fig. 2 and fig. 5.
A display method of the game interface will be described with reference to fig. 10 by way of example.
Step 102, loading the camera shake parameters according to the release instruction.
Step 105, controlling the shake of the game interface according to the current position of the camera, the camera viewport, and the shake parameters.
Step 106, judging whether the shake is finished. If the shake is not finished, step 123 is executed again.
Step 107, if the shake is finished, stopping the shake of the game interface.
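The loop in the steps above can be sketched as follows; all names are hypothetical, and a real engine would run this per frame rather than in a blocking loop:

```python
def run_interface_shake(camera, jitter_curve, duration, dt=1.0 / 60.0):
    """Offset the camera from its base position each frame by the
    amplitude sampled from the jitter curve, then restore it."""
    base_y = camera["y"]
    t = 0.0
    while t < duration:                      # is the shake finished?
        camera["y"] = base_y + jitter_curve(t / duration)  # apply offset
        t += dt
    camera["y"] = base_y                     # stop the shake
    return camera
```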
In summary, in the technical solution provided by the embodiment of the present application, the interface is shaken in step with the skill sound effect while the virtual object is controlled to release the skill, which reduces the dependence of game interface display on personnel, is simple and convenient to operate, improves the visual effect of the game, improves product performance, and increases the excitement of the game.
The above is an introduction of an application scenario of the method for acquiring the jitter parameter in the game application program. It should be noted that the above method for acquiring the jitter parameter may also be applied to other applications, such as a reading application, a video application, a social application, and the like. Next, other application scenarios of the method for acquiring jitter parameters will be described.
Please refer to fig. 11, which shows a flowchart of a method for obtaining jitter parameters according to still another embodiment of the present application. The method can comprise the following steps (1101-1105):
Optionally, the server acquires the time domain signal corresponding to a sound effect of the application program as the audio data. In one possible implementation, after the application program acquires the audio data, the server acquires the audio data in real time; the audio data may correspond to one or more sound effects in the application program, which is not limited in this embodiment of the application. Taking a social application program as an example, the server acquires an expression sound effect, that is, the sound effect generated when the user uses the corresponding expression icon; the server then acquires the corresponding audio data, namely the time domain signal corresponding to the expression sound effect. Optionally, after acquiring the audio data, the server processes it to obtain the corresponding jitter parameters, and when the user uses the expression icon corresponding to the expression sound effect, the user interface shakes according to the jitter parameters. It should be noted that the sound effects in the application may be continuously updated; optionally, the server acquires the updated audio data at a certain time interval, which may be 1 s, 10 min, 1 h, one day, one week, and the like, which is not limited in this embodiment of the application.
In another possible embodiment, the server acquires the audio data when the user generates a sound effect by using the application. Taking a reading application program as an example, after the user triggers the text reading function of the reading application program, the server acquires the reading sound effect of the reading application program in real time. The "text reading" function plays the corresponding text aloud for the user in real time, and the reading sound effect is the sound effect generated when the reading application program plays the corresponding text. Further, the server acquires the corresponding audio data, namely the time domain signal corresponding to the reading sound effect. Optionally, after obtaining the audio data, the server processes it to obtain the corresponding jitter parameters, and when the reading application program plays the corresponding text, the user interface shakes according to the jitter parameters. For example, when the reading application plays the word "run", the user interface shakes slightly to mimic the running action.
And 1104, generating modified audio data according to the filtered frequency domain discrete signal.
It should be noted that the method for acquiring the jitter parameters in other application programs is the same as the method for acquiring the jitter parameters in the game application program in the embodiment of fig. 2 and 5, and is not described herein again.
In summary, the technical solution provided in the embodiment of the present application automatically generates the jitter parameters corresponding to the audio data. On one hand, this solves the technical problems in the related art that the acquisition of jitter parameters depends heavily on workers and is inefficient; it improves the generation efficiency of the jitter parameters and reduces the dependence of the jitter parameter generation process on workers. On the other hand, the jitter parameters generated by this technical solution closely match the audio data, so that when the user interface shakes along with the sound effect, the shake is smooth, reducing the roughness of the user interface jitter.
In addition, background noise removal is performed on the amplitude key points, which avoids the image blur caused by low-amplitude jitter of the user interface and improves the visual effect.
In addition, a Bezier curve representing the jitter parameters is generated according to the processed amplitude key points, which reduces the jumpiness of the user interface jitter, makes the jitter smooth, and keeps the roughness low.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 12, a block diagram of an apparatus for obtaining jitter parameters according to an embodiment of the present application is shown. The apparatus has the function of implementing the above method for acquiring jitter parameters. The function may be implemented by hardware, or by hardware executing corresponding software. The apparatus may be a computer device, or may be disposed in a computer device. The apparatus 1200 may include: an audio acquisition module 1210, a signal generation module 1220, a signal filtering module 1230, an audio modification module 1240, and a parameter acquisition module 1250.
An audio obtaining module 1210 is configured to obtain audio data.
The signal generating module 1220 is configured to generate a frequency domain discrete signal corresponding to the audio data.
The signal filtering module 1230 is configured to perform filtering processing on the frequency domain discrete signal, filter a non-critical frequency signal, and obtain a filtered frequency domain discrete signal; wherein the non-critical frequency signal refers to a noise signal in the audio data.
And an audio modification module 1240, configured to generate modified audio data according to the filtered frequency domain discrete signal.
A parameter obtaining module 1250, configured to extract an amplitude key point from the modified audio data to obtain a jitter parameter.
In an exemplary embodiment, as shown in fig. 13, the parameter obtaining module 1250 includes: a key point obtaining unit 1251, a key point filtering unit 1252 and a parameter obtaining unit 1253.
A keypoint acquisition unit 1251, configured to sample and acquire an amplitude keypoint from the modified audio data.
The key point filtering unit 1252 is configured to filter the amplitude key point to obtain a processed amplitude key point.
A parameter obtaining unit 1253, configured to obtain the jitter parameter based on the processed amplitude key point.
In an exemplary embodiment, the keypoint filtering unit 1252 is configured to adjust the amplitude of the amplitude keypoint with the amplitude lower than the preset threshold to be zero, so as to obtain the processed amplitude keypoint.
In an exemplary embodiment, the parameter obtaining unit 1253 is configured to obtain relevant information of the processed amplitude keypoints, where the relevant information includes: identification, quantity, and amplitude; and generating a Bezier curve according to the related information, wherein the Bezier curve is used for representing the jitter parameters.
In an exemplary embodiment, as shown in fig. 13, the signal filtering module 1230 includes: a frequency acquisition unit 1231, a range determination unit 1232, and a signal filtering unit 1233.
A frequency obtaining unit 1231, configured to obtain the key frequency.
A range determining unit 1232, configured to determine a filtering range of the frequency domain discrete signal according to the key frequency.
The signal filtering unit 1233 is configured to filter the frequency domain discrete signal outside the filtering range, and reserve the frequency domain discrete signal within the filtering range to obtain the filtering frequency domain discrete signal.
In an exemplary embodiment, the frequency obtaining unit 1231 is configured to perform segmentation processing on the audio data to obtain n segmented audio data, where n is an integer greater than 1; obtaining the maximum amplitude value of each segmented audio data; selecting target segmented audio data from the n segmented audio data, wherein the target segmented audio data refers to the segmented audio data with the maximum amplitude value meeting a preset condition; and filtering noise signals in the frequency domain discrete signals corresponding to the target segmented audio data to obtain the key frequency.
In an exemplary embodiment, the audio data is audio data in a game application; accordingly, the shake parameters are used to control the shake of a camera in the gaming application. Optionally, the audio data refers to audio data corresponding to a skill sound effect when a virtual object in the game application releases a skill.
In summary, the technical solution provided in the embodiment of the present application automatically generates the jitter parameters corresponding to the audio data. On one hand, this solves the technical problems in the related art that the acquisition of jitter parameters depends heavily on workers and is inefficient; it improves the generation efficiency of the jitter parameters and reduces the dependence of the jitter parameter generation process on workers. On the other hand, the jitter parameters generated by this technical solution closely match the audio data, so that when the game interface shakes along with the sound effect, the shake is smooth, reducing the roughness of the game.
Referring to fig. 14, a block diagram of a display device of a game interface according to an embodiment of the present application is shown. The device has a function of realizing the display method of the game interface at the terminal side. The functions can be realized by hardware, and can also be realized by hardware executing corresponding software. The device may be a terminal or may be provided in a terminal. The apparatus 1400 may include: interface display module 1410, instruction receipt module 1420, skill release module 1430, and interface dithering module 1440.
And the interface display module 1410 is used for displaying a game interface.
The instruction receiving module 1420 is configured to receive a release instruction corresponding to the target skill.
And a skill release module 1430, configured to control the virtual object to release the target skill according to the release instruction.
The interface shaking module 1440 is used for playing skill sound effects corresponding to the target skills and controlling the game interface to shake; and generating the dithering parameters of the game interface according to the audio data corresponding to the skill sound effect.
In summary, in the technical solution provided by the embodiment of the present application, the interface is shaken in step with the skill sound effect while the virtual object is controlled to release the skill, which reduces the dependence of game interface display on personnel, is simple and convenient to operate, improves the visual effect of the game, improves product performance, and increases the excitement of the game.
Referring to fig. 15, a block diagram of a computer device according to an embodiment of the present application is shown. The computer device may be used to implement the method for acquiring jitter parameters provided in the above embodiments. Specifically, the method comprises the following steps:
the computer apparatus 1500 includes a Processing Unit (e.g., a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an FPGA (Field Programmable Gate Array), etc.) 1501, a system Memory 1504 including a RAM (Random Access Memory) 1502 and a ROM (Read Only Memory) 1503, and a system bus 1505 connecting the system Memory 1504 and the Central Processing Unit 1501. The computer device 1500 also includes a basic I/O system (Input/Output) 1506 to facilitate information transfer between various components within the computer device, and a mass storage device 1507 to store an operating system 1513, application programs 1514, and other program modules 1512.
The basic input/output system 1506 includes a display 1508 for displaying information and an input device 1509 such as a mouse, keyboard, etc. for a user to input information. The display 1508 and the input device 1509 are connected to the central processing unit 1501 via an input/output controller 1510 connected to the system bus 1505. The basic input/output system 1506 may also include an input/output controller 1510 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input-output controller 1510 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1507 is connected to the central processing unit 1501 through a mass storage controller (not shown) connected to the system bus 1505. The mass storage device 1507 and its associated computer-readable media provide non-volatile storage for the computer device 1500. That is, the mass storage device 1507 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM (Compact disk Read-Only Memory) drive.
Without loss of generality, the computer readable media may comprise computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer storage media include RAM, ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory or other solid state memory technology, CD-ROM, DVD (Digital Video Disc) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media are not limited to the foregoing. The system memory 1504 and the mass storage device 1507 described above may be collectively referred to as memory.
According to embodiments of the present application, the computer device 1500 may also be connected, through a network such as the Internet, to a remote computer on the network for operation. That is, the computer device 1500 may be connected to the network 1512 through the network interface unit 1511 connected to the system bus 1505, or the network interface unit 1511 may be used to connect to other types of networks or remote computer systems (not shown).
The memory also includes at least one instruction, at least one program, set of codes, or set of instructions stored in the memory and configured to be executed by the one or more processors to implement the method for obtaining jitter parameters described above.
Referring to fig. 16, a block diagram of a terminal 1600 according to an embodiment of the present application is shown. The terminal 1600 may be a mobile phone, a tablet computer, an electronic book reader, a multimedia playing device, a wearable device, a PC, etc. The terminal is used for implementing the display method of the game interface at the terminal side provided in the above embodiment. The terminal may be the terminal 10 in the implementation environment shown in fig. 1. Specifically, the method comprises the following steps:
generally, terminal 1600 includes: a processor 1601, and a memory 1602.
In some embodiments, the terminal 1600 may also optionally include: peripheral interface 1603 and at least one peripheral. Processor 1601, memory 1602 and peripheral interface 1603 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1603 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1604, a touch screen display 1605, a camera 1606, audio circuitry 1607, a positioning component 1608, and a power supply 1609.
Those skilled in the art will appreciate that the configuration shown in fig. 16 is not intended to be limiting of terminal 1600, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
In an embodiment of the present application, a computer-readable storage medium is further provided, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the storage medium, and when executed by a processor, the at least one instruction, the at least one program, the code set, or the set of instructions implements the method for acquiring the jitter parameter.
In an embodiment of the present application, a computer-readable storage medium is further provided, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the storage medium, and when executed by a processor, the at least one instruction, the at least one program, the code set, or the set of instructions implements the display method of the game interface.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM).
In an exemplary embodiment, a computer program product is also provided, which when executed by a processor is configured to implement the above-mentioned jitter parameter obtaining method.
In an exemplary embodiment, a computer program product is also provided, which when executed by a processor is used to implement the above-mentioned display method of the game interface.
It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. In addition, the step numbers described herein only exemplarily show one possible execution sequence among the steps, and in some other embodiments, the steps may also be executed out of the numbering sequence, for example, two steps with different numbers are executed simultaneously, or two steps with different numbers are executed in a reverse order to the order shown in the figure, which is not limited by the embodiment of the present application.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (13)
1. A method for obtaining jitter parameters, the method comprising:
acquiring audio data;
generating a frequency domain discrete signal corresponding to the audio data;
filtering the frequency-domain discrete signal to remove non-critical frequency signals, so as to obtain a filtered frequency-domain discrete signal, wherein the non-critical frequency signals refer to noise signals in the audio data;
generating corrected audio data according to the filtered frequency-domain discrete signal;
sampling amplitude key points from the corrected audio data;
adjusting to zero the amplitude of each amplitude key point whose amplitude is lower than a preset threshold, so as to filter out the background noise in the audio data and obtain processed amplitude key points;
and obtaining a jitter parameter based on the processed amplitude key points, wherein the jitter parameter is used for controlling jitter of the user interface.
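Purely as an illustrative, non-authoritative sketch of claim 1 (the claims specify no implementation language or library), the pipeline could be realized with numpy's FFT routines; the band limits, key-point count, and noise-floor ratio below are assumed values, not taken from the claims:

```python
import numpy as np

def jitter_parameters(samples, rate, band=(80.0, 4000.0),
                      n_keypoints=64, floor=0.05):
    """Sketch of claim 1; band, n_keypoints, and floor are assumed values."""
    # Step 1: frequency-domain discrete signal via a real FFT.
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    # Step 2: filter out non-critical frequency components outside the band.
    spectrum[(freqs < band[0]) | (freqs > band[1])] = 0.0
    # Step 3: corrected audio data via the inverse FFT.
    corrected = np.fft.irfft(spectrum, n=len(samples))
    # Step 4: sample amplitude key points at evenly spaced instants.
    idx = np.linspace(0, len(corrected) - 1, n_keypoints).astype(int)
    keypoints = np.abs(corrected[idx])
    # Step 5: zero key points below the preset threshold (background noise).
    keypoints[keypoints < floor * keypoints.max()] = 0.0
    return keypoints
```

The returned array can then be mapped frame by frame to interface jitter intensity; in practice each of the assumed constants would be tuned to the sound effect.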
2. The method of claim 1, wherein obtaining the jitter parameter based on the processed amplitude key points comprises:
acquiring relevant information of the processed amplitude key points, wherein the relevant information comprises: identification, quantity, and amplitude;
and generating a Bezier curve according to the relevant information, wherein the Bezier curve is used for representing the jitter parameter.
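As a hedged illustration of claim 2 (the claims fix neither the curve order nor the sampling density), the processed amplitude key points could serve directly as control points of a Bezier curve evaluated by de Casteljau's algorithm; the frame count below is an assumption:

```python
import numpy as np

def bezier(control_points, t):
    """Evaluate a Bezier curve at t in [0, 1] via de Casteljau's algorithm."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        # Repeated linear interpolation collapses the control polygon.
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def jitter_curve(keypoints, n_frames=60):
    """Amplitude key points act as control points; one sample per frame."""
    ts = np.linspace(0.0, 1.0, n_frames)
    return np.array([bezier(keypoints, t) for t in ts])
```

Higher key-point counts yield a smoother envelope at the cost of evaluation time; a production implementation might instead use piecewise cubic Beziers.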
3. The method according to claim 1 or 2, wherein filtering the frequency-domain discrete signal to remove the non-critical frequency signals and obtain the filtered frequency-domain discrete signal comprises:
acquiring a key frequency;
determining a filtering range of the frequency domain discrete signal according to the key frequency;
and filtering out the frequency-domain discrete signal outside the filtering range while retaining the frequency-domain discrete signal within the filtering range, so as to obtain the filtered frequency-domain discrete signal.
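A minimal sketch of claim 3's filtering range, assuming a symmetric band of ±200 Hz around the key frequency (the claims leave the range width unspecified):

```python
import numpy as np

def filter_range_mask(freqs, key_freq, half_width=200.0):
    """Boolean mask of FFT bins within +/- half_width Hz of the key frequency.
    half_width is an assumed default, not taken from the claims."""
    return (freqs >= key_freq - half_width) & (freqs <= key_freq + half_width)

# Usage: retain in-range components, filter out the rest.
freqs = np.fft.rfftfreq(4096, d=1.0 / 8000)
mask = filter_range_mask(freqs, key_freq=440.0)
# spectrum[~mask] = 0.0  # yields the filtered frequency-domain discrete signal
```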
4. The method of claim 3, wherein the obtaining the critical frequency comprises:
performing segmentation processing on the audio data to obtain n pieces of segmented audio data, wherein n is an integer greater than 1;
acquiring the maximum amplitude of each piece of segmented audio data;
selecting target segmented audio data from the n pieces of segmented audio data, wherein the target segmented audio data refers to the segmented audio data whose maximum amplitude meets a preset condition;
and filtering noise signals in the frequency-domain discrete signal corresponding to the target segmented audio data to obtain the key frequency.
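Claim 4's key-frequency acquisition could be sketched as follows, assuming the preset condition is simply "largest peak amplitude among the n segments" and treating the DC bin as noise; both assumptions go beyond the claim text:

```python
import numpy as np

def key_frequency(samples, rate, n_segments=8):
    """Sketch of claim 4: loudest segment, then its dominant frequency."""
    segments = np.array_split(samples, n_segments)
    # Target segment: the one whose peak amplitude is largest (assumed to be
    # the "preset condition" on the maximum amplitude value).
    target = max(segments, key=lambda s: np.abs(s).max())
    spectrum = np.abs(np.fft.rfft(target))
    spectrum[0] = 0.0  # drop the DC component, treated here as noise
    freqs = np.fft.rfftfreq(len(target), d=1.0 / rate)
    return freqs[np.argmax(spectrum)]
```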
5. A method for obtaining jitter parameters, the method comprising:
acquiring audio data in a game application program;
generating a frequency domain discrete signal corresponding to the audio data;
filtering the frequency-domain discrete signal to remove non-critical frequency signals, so as to obtain a filtered frequency-domain discrete signal, wherein the non-critical frequency signals refer to noise signals in the audio data;
generating corrected audio data according to the filtered frequency-domain discrete signal;
sampling amplitude key points from the corrected audio data;
adjusting to zero the amplitude of each amplitude key point whose amplitude is lower than a preset threshold, so as to filter out the background noise in the audio data and obtain processed amplitude key points;
and obtaining a jitter parameter based on the processed amplitude key points, wherein the jitter parameter is used for controlling jitter of a camera in the game application program.
6. The method according to claim 5, wherein the audio data is audio data corresponding to a skill sound effect when a virtual object releases a skill in the game application.
7. A method for displaying a game interface, the method comprising:
displaying a game interface;
receiving a release instruction corresponding to the target skill;
controlling the virtual object to release the target skill according to the release instruction;
playing a skill sound effect corresponding to the target skill, and controlling the game interface to shake;
wherein the vibration parameters of the game interface are generated according to corrected audio data corresponding to the skill sound effect; the corrected audio data refer to data obtained after noise signals and background noise in the audio data corresponding to the skill sound effect are filtered out, and the background noise is filtered out by adjusting to zero the amplitude of each amplitude key point whose amplitude is lower than a preset threshold.
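How the vibration parameters drive the interface is not fixed by claim 7; one hedged possibility is to map each per-frame intensity to a random 2-D screen offset (the random-direction model and pixel scale below are assumptions, not part of the claims):

```python
import math
import random

def camera_offsets(intensities, max_pixels=12.0):
    """Map per-frame jitter intensities to random 2-D screen offsets.
    The direction model and max_pixels scale are assumed, not claimed."""
    offsets = []
    for intensity in intensities:
        angle = random.uniform(0.0, 2.0 * math.pi)  # random shake direction
        r = intensity * max_pixels                  # shake radius in pixels
        offsets.append((r * math.cos(angle), r * math.sin(angle)))
    return offsets
```

Each frame, the game camera (or UI root) would be translated by the corresponding offset while the skill sound effect plays.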
8. An apparatus for obtaining jitter parameters, the apparatus comprising:
the audio acquisition module is used for acquiring audio data;
the signal generating module is used for generating a frequency domain discrete signal corresponding to the audio data;
the signal filtering module is used for filtering the frequency-domain discrete signal to remove non-critical frequency signals, so as to obtain a filtered frequency-domain discrete signal, wherein the non-critical frequency signals refer to noise signals in the audio data;
the audio correction module is used for generating corrected audio data according to the filtered frequency-domain discrete signal;
the parameter acquisition module is used for sampling amplitude key points from the corrected audio data; adjusting to zero the amplitude of each amplitude key point whose amplitude is lower than a preset threshold, so as to filter out the background noise in the audio data and obtain processed amplitude key points; and obtaining a jitter parameter based on the processed amplitude key points, wherein the jitter parameter is used for controlling jitter of the user interface.
9. An apparatus for obtaining jitter parameters, the apparatus comprising:
the audio acquisition module is used for acquiring audio data in the game application program;
the signal generating module is used for generating a frequency domain discrete signal corresponding to the audio data;
the signal filtering module is used for filtering the frequency-domain discrete signal to remove non-critical frequency signals, so as to obtain a filtered frequency-domain discrete signal, wherein the non-critical frequency signals refer to noise signals in the audio data;
the audio correction module is used for generating corrected audio data according to the filtered frequency-domain discrete signal;
the parameter acquisition module is used for sampling amplitude key points from the corrected audio data; adjusting to zero the amplitude of each amplitude key point whose amplitude is lower than a preset threshold, so as to filter out the background noise in the audio data and obtain processed amplitude key points; and obtaining a jitter parameter based on the processed amplitude key points, wherein the jitter parameter is used for controlling jitter of a camera in the game application program.
10. A display device for a game interface, the device comprising:
the interface display module is used for displaying a game interface;
the instruction receiving module is used for receiving a release instruction corresponding to the target skill;
the skill release module is used for controlling the virtual object to release the target skill according to the release instruction;
the interface shaking module is used for playing a skill sound effect corresponding to the target skill and controlling the game interface to shake, wherein the vibration parameters of the game interface are generated according to corrected audio data corresponding to the skill sound effect; the corrected audio data refer to data obtained after noise signals and background noise in the audio data corresponding to the skill sound effect are filtered out, and the background noise is filtered out by adjusting to zero the amplitude of each amplitude key point whose amplitude is lower than a preset threshold.
11. A computer device, characterized in that the computer device comprises a processor and a memory, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which is loaded and executed by the processor to implement the method of acquiring jitter parameters according to any one of claims 1 to 4, or to implement the method of acquiring jitter parameters according to claim 5 or 6.
12. A terminal, characterized in that it comprises a processor and a memory, in which at least one instruction, at least one program, a set of codes or a set of instructions is stored, which is loaded and executed by the processor to implement the display method of a game interface according to claim 7.
13. A computer-readable storage medium, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the method for acquiring jitter parameters according to any one of claims 1 to 4, or to implement the method for acquiring jitter parameters according to claim 5 or 6, or to implement the method for displaying a game interface according to claim 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911115640.6A CN110879850B (en) | 2019-11-14 | 2019-11-14 | Method, device and equipment for acquiring jitter parameters and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110879850A CN110879850A (en) | 2020-03-13 |
CN110879850B true CN110879850B (en) | 2021-02-09 |
Family
ID=69729665
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911115640.6A Active CN110879850B (en) | 2019-11-14 | 2019-11-14 | Method, device and equipment for acquiring jitter parameters and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110879850B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113289340A (en) * | 2021-04-28 | 2021-08-24 | 网易(杭州)网络有限公司 | Game skill sound effect processing method and device and electronic device |
CN113419210A (en) * | 2021-06-09 | 2021-09-21 | Oppo广东移动通信有限公司 | Data processing method and device, electronic equipment and storage medium |
CN116129948B (en) * | 2022-12-13 | 2024-08-23 | 网易(杭州)网络有限公司 | Visual display processing method, device, equipment and medium for audio signals |
CN116610282B (en) * | 2023-07-18 | 2023-11-03 | 北京万物镜像数据服务有限公司 | Data processing method and device and electronic equipment |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8740706B2 (en) * | 2011-10-25 | 2014-06-03 | Spielo International Canada Ulc | Gaming console having movable screen |
CN107174824B (en) * | 2017-05-23 | 2021-01-15 | 网易(杭州)网络有限公司 | Special effect information processing method and device, electronic equipment and storage medium |
CN109120983B (en) * | 2018-09-28 | 2021-07-27 | 腾讯音乐娱乐科技(深圳)有限公司 | Audio processing method and device |
CN109947248A (en) * | 2019-03-14 | 2019-06-28 | 努比亚技术有限公司 | Vibration control method, mobile terminal and computer readable storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40022263; Country of ref document: HK |
| GR01 | Patent grant | |