WO2023179524A1 - Game data generation method and device, and interaction method and device

Game data generation method and device, and interaction method and device

Info

Publication number
WO2023179524A1
Authority
WO
WIPO (PCT)
Prior art keywords
game
audio
audio data
display parameters
user
Prior art date
Application number
PCT/CN2023/082439
Other languages
English (en)
French (fr)
Inventor
李超然
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司
Publication of WO2023179524A1

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/54: Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene

Definitions

  • Embodiments of the present disclosure relate to a game data generation method and device, and an interaction method and device.
  • In general, a game needs to be equipped with background music.
  • Players can participate in the game following its background music, which can improve the player's gaming experience.
  • In the related art, the game is developed after its background music has been determined.
  • However, this approach can easily result in game elements that do not match the background music well. For example, when the background music has a cheerful melody, the game elements may not present a correspondingly cheerful appearance. The degree of matching between game elements and background music in the game therefore needs to be improved.
  • Embodiments of the present disclosure provide a game data generation method and device, and an interaction method and device, which are used to improve the degree of matching between game elements and background music in a game.
  • In a first aspect, embodiments of the present disclosure provide a method for generating game data, which includes: obtaining background audio data of a target game, separating the background audio data into at least two sub-audio data according to preset audio attribute dimensions, and extracting at least one audio feature of each sub-audio data according to the audio feature extraction type corresponding to each sub-audio data, to obtain at least two audio features; obtaining a control strategy for controlling display parameters of game elements in the target game based on the audio features, wherein the control strategy includes the audio features and the display parameters of the game elements corresponding to the audio features, and there is a priority order among the audio features in the control strategy; and determining the display parameters of the game elements under each frame of the background audio data according to the control strategy and the audio features, and generating game data of the target game according to the display parameters of the game elements under each frame of the background audio data.
  • In a second aspect, embodiments of the present disclosure provide an interaction method, including: in response to a user's start instruction for a target game, displaying at least two game elements of the target game, wherein the at least two game elements are determined based on at least two audio features included in the background audio data of the target game; and obtaining the game operation instructions sent by the user and running the target game according to the game operation instructions.
  • In another aspect, embodiments of the present disclosure provide a game data generation device, including: a feature extraction unit configured to obtain background audio data of a target game, separate the background audio data into at least two sub-audio data according to preset audio attribute dimensions, and extract at least one audio feature of each sub-audio data according to the audio feature extraction type corresponding to each sub-audio data, to obtain at least two audio features; a strategy acquisition unit configured to obtain a control strategy for controlling display parameters of game elements in the target game based on the audio features, wherein the control strategy includes the audio features and the display parameters of the game elements corresponding to the audio features, and there is a priority order among the audio features in the control strategy; and a data generation unit configured to determine the display parameters of the game elements under each frame of the background audio data according to the control strategy and the audio features, and generate game data of the target game according to the display parameters of the game elements under each frame of the background audio data.
  • In another aspect, embodiments of the present disclosure provide an interaction device, including: an element display unit configured to display at least two game elements of a target game in response to a user's start instruction for the target game, wherein the at least two game elements are determined based on at least two audio features included in the background audio data of the target game; and a game running unit configured to obtain the game operation instructions sent by the user and run the target game according to the game operation instructions.
  • Further, embodiments of the present disclosure provide an electronic device, including: a processor; and a memory configured to store computer-executable instructions which, when executed, cause the processor to implement the method described in the first aspect or the second aspect above.
  • Further, embodiments of the present disclosure provide a computer-readable storage medium for storing computer-executable instructions which, when executed by a processor, implement the method described in the first aspect or the second aspect above.
  • In the embodiments of the present disclosure, at least two audio features are extracted from the background audio data of the target game, and the at least two audio features have different audio attributes.
  • Based on the audio features, the display parameters of the game elements in the target game are determined, and the game data of the target game is generated according to the display parameters of the game elements, so that the game elements match the audio features of the background audio data, improving the degree of matching between the game elements and the background music in the game.
  • Figure 1 is a schematic flowchart of a game data generation method provided by an embodiment of the present disclosure
  • Figure 2 is a schematic flowchart of audio feature extraction provided by an embodiment of the present disclosure
  • Figure 3 is a schematic flowchart of determining display parameters of game elements provided by an embodiment of the present disclosure
  • Figure 4a is a schematic diagram of a target game provided by an embodiment of the present disclosure.
  • Figure 4b is a schematic diagram of a target game provided by another embodiment of the present disclosure.
  • Figure 4c is a schematic diagram of a target game provided by yet another embodiment of the present disclosure.
  • Figure 5 is a schematic flowchart of an interaction method provided by an embodiment of the present disclosure.
  • Figure 6 is a schematic structural diagram of a game data generating device provided by an embodiment of the present disclosure.
  • Figure 7 is a schematic structural diagram of an interactive device provided by an embodiment of the present disclosure.
  • Figure 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • Figure 1 is a schematic flow chart of a game data generation method provided by an embodiment of the present disclosure. As shown in Figure 1, the process includes the following steps:
  • Step S102: Obtain the background audio data of the target game, separate the background audio data into at least two sub-audio data according to preset audio attribute dimensions, and extract at least one audio feature of each sub-audio data according to the audio feature extraction type corresponding to each sub-audio data, to obtain at least two audio features;
  • Step S104: Obtain a control strategy for controlling the display parameters of game elements in the target game based on the audio features; wherein the control strategy includes the audio features and the display parameters of the game elements corresponding to the audio features, and there is a priority order among the various audio features in the control strategy;
  • Step S106: Determine the display parameters of the game elements under each frame of background audio data based on the control strategy and the audio features, and generate game data of the target game based on the display parameters of the game elements under each frame of background audio data.
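  • The three steps above could be wired together roughly as follows. This is a minimal, illustrative sketch only: the per-frame feature sets and the tiny feature-to-display mapping are toy stand-ins for the outputs of steps S102 and S104, and all names are hypothetical rather than part of the disclosure.

      # Illustrative only: per-frame audio features as presence sets (output of step S102).
      FRAME_FEATURES = [{"drum_beat"}, {"vocal_treble"}, {"drum_beat", "vocal_treble"}]

      # Toy stand-in for the step S104 control strategy: feature name -> display parameters.
      FEATURE_TO_DISPLAY = {"drum_beat": {"size": 2.0}, "vocal_treble": {"color": "#ff4040"}}

      def generate_game_data(frame_features, feature_to_display):
          # Step S106: per frame of background audio, collect the display parameters of the
          # game elements implied by the audio features present in that frame.
          game_data = []
          for frame_index, present in enumerate(frame_features):
              display = {}
              for feature in sorted(present):
                  display.update(feature_to_display.get(feature, {}))
              game_data.append({"frame": frame_index, "display": display})
          return game_data

      print(generate_game_data(FRAME_FEATURES, FEATURE_TO_DISPLAY))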
  • In this way, at least two audio features are extracted from the background audio data of the target game, and the at least two audio features have different audio attributes.
  • Based on the audio features, the display parameters of the game elements in the target game are determined, and the game data of the target game is generated according to the display parameters of the game elements, so that the game elements match the audio features of the background audio data, improving the degree of matching between the game elements and the background music in the game.
  • the game data generation method in Figure 1 can be executed by a backend server used to develop the target game, or by a user terminal used by users of the target game.
  • The user terminal can be, for example, a mobile phone, a computer, or a tablet.
  • the background audio data of the target game is obtained.
  • the backend server can obtain the audio data specified by the developer as the background audio data of the target game.
  • the user terminal can obtain the local audio data uploaded by the user as the background audio data of the target game.
  • In other words, the developer can specify the background audio data of the target game through the backend server, and the game user can also upload local audio data from the user terminal to be used as the background audio data of the target game.
  • the background audio data is separated into at least two sub-audio data according to the preset audio attribute dimensions, and at least one audio feature of each sub-audio data is extracted according to the audio feature extraction type corresponding to each sub-audio data, to obtain At least two audio characteristics, wherein the audio attributes of the at least two sub-audio data are different from each other.
  • the background audio data includes at least two sub-audio data with different audio attributes.
  • the audio attributes include timbre.
  • For example, the background audio data includes vocals, piano sounds, bass sounds, and drum sounds, so the background audio data includes four sub-audio data whose timbres are different from each other: the sub-audio data corresponding to the human voice, the sub-audio data corresponding to the piano sound, the sub-audio data corresponding to the bass sound, and the sub-audio data corresponding to the drum sound.
  • At least two audio features are extracted from at least two sub-audio data whose audio attributes are different from each other.
  • one audio feature is extracted from each sub-audio data to obtain at least two audio features.
  • Multiple audio features can also be extracted from each sub-audio data; there is no limitation here. Continuing the above example, the scale height feature can be extracted from the sub-audio data corresponding to the human voice, the beat feature from the sub-audio data corresponding to the piano sound, the beat feature from the sub-audio data corresponding to the bass sound, and the beat feature from the sub-audio data corresponding to the drum sound.
  • Four audio features are thus obtained, namely the scale feature of the human voice, the beat feature of the piano, the beat feature of the bass, and the beat feature of the drums.
  • The audio features extracted in this embodiment include but are not limited to beat features, scale features, volume features, sound intensity features, etc. under various timbres.
  • the background audio data is separated into at least two sub-audio data according to preset audio attribute dimensions.
  • the preset audio attribute dimensions include the timbre dimension.
  • Specifically, the background audio data can be separated into at least two sub-audio data according to the timbre dimension, so that the timbres of the sub-audio data are different from each other.
  • For example, the background audio data contains vocals, piano, bass, and drums, so the background audio data is separated into four sub-audio data, which are respectively the sub-audio data corresponding to the human voice, the sub-audio data corresponding to the piano sound, the sub-audio data corresponding to the bass sound, and the sub-audio data corresponding to the drum sound.
  • Before the audio features are extracted, the method in this embodiment can also perform: obtaining the mapping relationship between each dimension value of the audio attribute dimension and the audio feature extraction type, and determining the audio feature extraction type corresponding to each sub-audio data based on the dimension value to which the sub-audio data belongs and the mapping relationship.
  • the preset audio attribute dimensions include the timbre dimension, then each dimension value of the audio attribute dimension includes various timbres.
  • That is, the preset mapping relationship between the various timbres and the audio feature extraction types is obtained, and the audio feature extraction type corresponding to each sub-audio data is determined based on the timbre to which the sub-audio data belongs and the mapping relationship.
  • the preset audio feature extraction type corresponding to the human voice is the scale feature
  • the preset audio feature extraction type corresponding to the bass is the beat feature
  • the preset audio feature extraction type corresponding to the drum sound is the beat feature
  • the preset audio feature extraction type corresponding to the piano sound is the beat feature.
  • the scale height feature is extracted from the sub-audio data corresponding to the human voice
  • the beat feature is extracted from the sub-audio data corresponding to the piano sound
  • the beat feature is extracted from the sub-audio data corresponding to the bass sound, and the beat feature is extracted from the sub-audio data corresponding to the drum sound.
  • FIG. 2 is a schematic flowchart of audio feature extraction provided by an embodiment of the present disclosure.
  • In Figure 2, the case in which the audio attribute dimension is timbre is taken as an example for explanation.
  • the process includes:
  • Step S202: Obtain the background audio data of the target game;
  • Step S204: Separate the background audio data into at least two sub-audio data according to the timbre dimension; wherein the timbres of the sub-audio data are different from each other;
  • Step S206: Obtain the preset mapping relationships between various timbres and audio feature extraction types, and determine the audio feature extraction type corresponding to each sub-audio data based on the timbre to which it belongs and the mapping relationship;
  • Step S208: For each sub-audio data, extract at least one audio feature of the sub-audio data according to the audio feature extraction type corresponding to the sub-audio data.
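  • As an illustration of steps S202 to S208, the sketch below uses Python with librosa. The disclosure separates the audio by timbre into stems such as vocals, piano, bass, and drums (in practice typically done with a source-separation model); the sketch simplifies this to librosa's harmonic/percussive separation, and the timbre-to-extractor mapping and pitch range are assumptions for illustration only.

      import librosa

      # Mapping from timbre (dimension value) to audio feature extraction type (step S206).
      # In the disclosure, vocals map to scale/pitch features and drums/bass/piano to beat
      # features; only two stems are available here, so the mapping is simplified.
      FEATURE_EXTRACTOR_BY_TIMBRE = {
          "harmonic": "pitch",      # stand-in for the vocal/piano stems
          "percussive": "beat",     # stand-in for the drum/bass stems
      }

      def extract_audio_features(audio_path):
          y, sr = librosa.load(audio_path, sr=None)          # S202: load background audio
          harmonic, percussive = librosa.effects.hpss(y)     # S204: crude timbre separation
          stems = {"harmonic": harmonic, "percussive": percussive}

          features = {}
          for timbre, stem in stems.items():                 # S208: per-stem extraction
              kind = FEATURE_EXTRACTOR_BY_TIMBRE[timbre]
              if kind == "beat":
                  tempo, beat_frames = librosa.beat.beat_track(y=stem, sr=sr)
                  features[timbre] = {"type": "beat",
                                      "times": librosa.frames_to_time(beat_frames, sr=sr)}
              else:  # "pitch"
                  f0, voiced_flag, _ = librosa.pyin(stem, fmin=80, fmax=800, sr=sr)
                  features[timbre] = {"type": "pitch", "f0": f0}
          return features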
  • In step S104, a control strategy for controlling the display parameters of game elements in the target game based on the audio features is obtained.
  • the display parameters include but are not limited to the size, position, display duration, start display time, end display time, color, etc. of the game element.
  • This control strategy can be, for example: when the beat feature of the drums appears in the background audio data, controlling a game element in the target game, such as the lights, to brighten to a certain brightness; or, when the high-pitched feature of the human voice appears in the background audio data, controlling the display parameters of the game element accordingly.
  • game elements are not limited to lighting. Game elements can also include obstacles that the user needs to avoid, various objects in the game screen, etc.
  • The control strategy for controlling the display parameters of game elements in the target game based on audio features is obtained specifically through: (b1) obtaining a pre-stored control strategy for controlling the display parameters of game elements in the target game based on audio features; or (b2) obtaining a user-defined control strategy for controlling the display parameters of game elements in the target game based on audio features.
  • During the process of developing the target game and generating its game data, the backend server can obtain, through method (b1), the pre-stored control strategy for controlling the display parameters of game elements based on audio features.
  • The pre-stored control strategy can be configured into the backend server by the developer in advance, so as to further develop the target game.
  • Alternatively, after the target game is started, the user terminal can obtain, through method (b1), a pre-stored control strategy in the target game that controls the display parameters of game elements in the target game based on audio features.
  • If the game player, that is, the user, wants to make changes to a target game that has already been developed, the player can configure the above control strategy himself, so that after the target game is started the user terminal obtains, through method (b2), the user-defined control strategy that controls the display parameters of game elements in the target game based on audio features.
  • In this way, even for a developed game, an interface can still be opened to the player so that the player can configure the control strategy in the game, thereby realizing the player's customization of the game data and improving the player's gaming experience.
  • When there are multiple control strategies, in one scenario in which the method in this embodiment is executed by the user terminal used by the user of the target game, the user terminal can also obtain some of the control strategies from the pre-stored control strategies and obtain the rest as user-defined control strategies.
  • the game element may be an object corresponding to the user character, or may be an object that does not correspond to the user character but continues to be displayed on the screen as the game background music plays, or, as the game background music plays, An object that appears on the screen intermittently.
  • As mentioned above, the control strategy includes the audio features and the display parameters of the game elements corresponding to the audio features. Obtaining a user-defined control strategy for controlling the display parameters of game elements in the target game based on audio features specifically includes: (b21) obtaining the display parameters of the game elements customized by the user for the audio features, and using them as part of the control strategy for controlling the display parameters of game elements in the target game based on audio features; and/or (b22) providing the user with multiple initial display parameters of the game elements, obtaining the target display parameters of the game elements filtered by the user for the audio features among the multiple initial display parameters, and using the target display parameters filtered by the user for the audio features as part of the control strategy for controlling the display parameters of game elements in the target game based on audio features.
  • the control strategy includes the above audio characteristics and the display parameters of the game elements corresponding to the audio characteristics, so that the display parameters of the game elements are controlled based on the audio characteristics through the control strategy.
  • the game element is an obstacle that the user needs to avoid in the game.
  • the audio feature includes the beat feature of the drum sound.
  • the display parameters of the game element corresponding to this audio feature are used to indicate the size of the obstacle after it becomes larger when the beat of the drum sound in the background audio data arrives.
  • the audio features also include the high-pitched feature of the human voice.
  • the display parameters of the game elements corresponding to this audio feature are used to represent the time it takes for the obstacle to disappear when the human voice in the background audio data is high-pitched.
  • the audio features also include the pitch characteristics of the human voice.
  • the display parameters of the game elements corresponding to the audio characteristics are used to represent the size of obstacles that change as the pitch of the human voice changes.
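  • A sketch of how such a control strategy could be represented as data: each entry pairs an audio feature with the display parameters of the game element it controls, and a separate list records the priority order among features. All names and values below are illustrative only and are not prescribed by the disclosure.

      # Illustrative control strategy: audio feature -> display parameters of a game element.
      CONTROL_STRATEGY = {
          "drum_beat": {                      # when a drum beat arrives ...
              "element": "obstacle",
              "display": {"size": 2.0},       # ... the obstacle grows to this size
          },
          "vocal_treble": {                   # when the vocal hits a high pitch ...
              "element": "obstacle",
              "display": {"disappear_after_s": 0.5},
          },
          "vocal_pitch": {                    # obstacle size follows the vocal pitch
              "element": "obstacle",
              "display": {"size_follows": "pitch"},
          },
      }

      # Priority order among the audio features recorded in the control strategy,
      # highest priority first (used later when display parameters conflict).
      FEATURE_PRIORITY = ["drum_beat", "vocal_treble", "vocal_pitch"]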
  • In method (b21), the user can customize the display parameters of the corresponding game elements for a certain audio feature, thereby obtaining the display parameters of the game elements customized by the user for that type of audio feature, and the user-customized display parameters of the game elements for that type of audio feature are used as part of the control strategy for controlling the game elements in the target game based on audio features.
  • Another part of the control strategy may also include various audio features extracted above.
  • the user can customize the display parameters of the corresponding game elements for some of the audio features, or can customize the display parameters of the corresponding game elements for all of the audio features.
  • In method (b22), the user can filter the target display parameters of the corresponding game element for a certain audio feature among the multiple initial display parameters, and the target display parameters of the game elements selected by the user for that audio feature among the multiple initial display parameters are used as part of the control strategy for controlling the display parameters of the game elements based on audio features.
  • Another part of the control strategy may also include the various audio features extracted above.
  • the user can filter the target display parameters of the corresponding game elements for some of the audio features, or can filter the target display parameters of the corresponding game elements for all of the audio features.
  • the above two methods (b21) and (b22) can also be used in combination.
  • For example, the user customizes the display parameters of the corresponding game elements for some of the audio features, and filters the display parameters of the corresponding game elements for the rest of the audio features.
  • the display parameters of game elements customized by the user for audio characteristics are obtained, specifically:
  • First, the display materials related to the game elements uploaded by the user are obtained; the display materials can be pictures or animations stored locally on the user terminal. Then, multiple display methods are presented to the user, and the display method specified by the user is determined; the display method can be, for example, a gradual display from small to large, a gradual change of transparency from 0 to 100%, etc.
  • The display method is also related to the game elements. For example, the display method includes superimposing the display material and the game element together, or displaying the display material above the game element. In one example, the display method may also include a pattern processing method for the display material, such as cropping or graphic transformation of the display material.
  • Next, according to the display material and the display method, the user-defined display parameters of the game elements are generated. For example, based on the display material and the display method, the display material and the game element are superimposed together, and the transparency of the display material is controlled to change gradually over time from 0 to 100%.
  • Finally, the audio feature associated by the user with the display parameters of the user-defined game elements is determined. For example, the display parameters in the above example are associated with the beat feature of the bass; then, when the beat of the bass arrives, the display material and the game element are controlled to be superimposed together, and the transparency of the display material is controlled to change gradually over time from 0 to 100%.
  • an interface can be provided for the user, so that the user can develop the display parameters of the game elements for the audio characteristics according to his own wishes, thereby achieving the purpose of controlling the game according to the user's wishes.
  • multiple initial display parameters of game elements are provided to the user, and target display parameters of the game elements filtered by the user for audio features among the multiple initial display parameters are obtained, including:
  • (b221) Provide the initial display parameters of the game elements corresponding to the various audio features to the user as a filtering range, obtain the target display parameters of the game elements filtered by the user within the filtering range, and determine the audio features that the user associates with the target display parameters.
  • the initial display parameters of the game elements corresponding to each audio feature are displayed to the user respectively.
  • One audio feature can correspond to one or more initial display parameters.
  • the initial display parameters are parameters pre-developed and stored in the target game.
  • One or more initial display parameters corresponding to each audio feature are used as a filtering range, thereby obtaining multiple filtering ranges.
  • the user can filter the target display parameters of game elements for that audio feature within the corresponding filtering range.
  • the initial display parameters of game elements corresponding to various audio characteristics are displayed to the user as a whole filtering range.
  • Users can filter target display parameters for game elements within this filtering range and determine the audio characteristics associated with the target display parameters. For example, a total of 5 initial display parameters of game elements corresponding to various audio characteristics are provided to the user. These 5 parameters are pre-developed and stored in the target game. The user can select the 5 parameters for the game elements. Filter the target display parameters, and determine the audio features associated with each target display parameter, so that at least one corresponding target display parameter is screened for each audio feature.
  • the user can not only develop the display parameters of the game elements, but also filter the display parameters of the game elements according to the user's wishes, thereby achieving the purpose of controlling the game according to the user's wishes.
  • As mentioned above, the control strategy includes audio features and the display parameters of the game elements in the target game corresponding to the audio features, where the game elements include existing game elements in the target game and/or game elements newly generated based on the audio features.
  • Display parameters include but are not limited to the size, position, display duration, color and other parameters of game elements.
  • the control strategy also includes generation rules for game elements corresponding to the audio features.
  • the specific steps of generating new game elements based on the audio features include: generating new game elements based on the generation rules.
  • the generation rules of game elements corresponding to audio features are used to stipulate what type of game elements are generated when which audio features arrive or disappear, and the specific content of the display parameters of the generated game elements.
  • For example, when a certain audio feature arrives, the game element corresponding to the user character is generated and its size and color are specified; or, when the high pitch of the human voice arrives, the game element corresponding to the user's opponent (such as an obstacle) is generated, and the size and color of the game element corresponding to the game opponent are specified.
  • Game players can customize which types of game elements are generated when which audio features arrive or disappear, and the specific content of the display parameters of the generated game elements, that is, customize the generation rules of the game elements corresponding to the audio features.
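  • A sketch of what such a generation rule could look like as data: it states which audio feature's arrival (or disappearance) spawns which type of game element, together with the display parameters of the newly generated element. All field names are illustrative, and this sketch only handles the "arrives" trigger.

      # Illustrative generation rules: which game elements are created when an audio
      # feature arrives (or disappears), and with which display parameters.
      GENERATION_RULES = [
          {
              "feature": "vocal_treble",
              "trigger": "arrives",            # or "disappears"
              "element_type": "obstacle",      # game element corresponding to the opponent
              "display": {"size": 1.5, "color": "#ff4040"},
          },
      ]

      def apply_generation_rules(frame_features, rules):
          """Return the game elements newly generated for this frame of background audio."""
          new_elements = []
          for rule in rules:
              if rule["trigger"] == "arrives" and rule["feature"] in frame_features:
                  new_elements.append({"type": rule["element_type"], **rule["display"]})
          return new_elements

      print(apply_generation_rules({"vocal_treble", "drum_beat"}, GENERATION_RULES))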
  • the display parameters of the game elements under each frame of background audio data are determined based on the control strategy and the aforementioned extracted audio features.
  • the control strategy includes audio features and display parameters of game elements corresponding to the audio features.
  • Figure 3 is a schematic flowchart of determining the display parameters of game elements provided by an embodiment of the present disclosure. As shown in Figure 3, determining the display parameters of game elements under each frame of background audio data according to the control strategy and the previously extracted audio features specifically includes:
  • Step S302: Determine at least one display parameter of the game element under each frame of background audio data according to the control strategy and the audio features; wherein each display parameter is related to the audio features of each frame of background audio data;
  • Step S304: For any frame of background audio data, determine whether the various display parameters of the game elements under the frame of background audio data satisfy the display parameter constraints in the control strategy;
  • Step S306: If satisfied, retain the various display parameters of the game elements under the background audio data of the frame; if not satisfied, adjust the various display parameters of the game elements under the background audio data of the frame according to the priority order between the various audio features, and retain the adjusted display parameters; wherein the adjusted display parameters satisfy the display parameter constraints;
  • Step S308: Determine the display parameters of the game elements under each frame of background audio data based on the display parameters retained by the game elements under each frame of background audio data.
  • At least one display parameter of the game element under each frame of background audio data is determined based on the control strategy introduced above and various extracted audio features.
  • Each display parameter is related to the audio characteristics of each frame of background audio data.
  • the control strategy includes each audio feature and the display parameters of the game elements corresponding to each audio feature. Therefore, for each frame of background audio data, it can be determined based on the audio features of the background audio data of that frame. Display parameters of game elements under the background audio data of this frame.
  • one audio feature can correspond to one display parameter.
  • Since one frame of background audio data can contain multiple types of audio features, each frame of background audio data corresponds to at least one display parameter of the game element. For example, if a certain frame of background audio data has two audio effects, including a drum beat and a bass accent, then the game element under the background audio data of this frame has two corresponding display parameters.
  • In step S304, for each frame of background audio data, it is determined whether the various display parameters of the game elements under the frame of background audio data satisfy the display parameter constraints in the control strategy.
  • the control strategy includes display parameter constraints.
  • The display parameter constraints are used to ensure that, for one frame of background audio data, the various display parameters of the game elements can be displayed simultaneously and the final displayed effect is visually reasonable and aesthetically pleasing. Therefore, in this step, the display parameter constraints are used to judge and analyze the various display parameters of the game elements under any frame of background audio data.
  • In step S306, for any frame of background audio data, if the various display parameters of the game elements under the frame of background audio data satisfy the above display parameter constraints, the various display parameters of the game elements under the frame of background audio data are retained. Conversely, if they are not satisfied, the various display parameters of the game elements under the background audio data of the frame are adjusted according to the priority order between the various audio features, so that the adjusted display parameters satisfy the display parameter constraints, and the adjusted display parameters are retained.
  • For example, a certain frame of background audio data has two audio effects, including a drum beat and a bass accent, so the game element under the background audio data of this frame has two corresponding display parameters: the size of the obstacle after it becomes larger and the size of the obstacle after it becomes smaller. If these two effects do not meet the preset display parameter constraints, the two display parameters need to be adjusted.
  • As another example, a certain frame of background audio data has two audio effects, including a drum beat and a bass sound, so the game element under the background audio data of this frame has two corresponding display parameters: the size of the obstacle after it becomes larger and the specific parameters of the obstacle's floating display. If these two effects meet the preset display parameter constraints, the two display parameters are retained.
  • the display parameters of the game elements under each frame of background audio data are determined based on the display parameters retained by the game elements under each frame of background audio data.
  • the display parameters retained by the game elements under each frame of background audio data are used as the display parameters of the game elements under each frame of background audio data.
  • Adjusting the various display parameters of the game elements under the background audio data of the frame can specifically be: screening the parameters to be deleted from the various display parameters of the game element under the background audio data of this frame and deleting them.
  • Specifically, (c1) according to the priority order between the various audio features, the priority order between the various display parameters of the game elements under the frame of background audio data is determined; then, (c2) according to the priority order between the various display parameters, the display parameter with the lowest priority is selected from the various display parameters of the game element under the background audio data of the frame as the parameter to be deleted and deleted. After deletion, it is also necessary to determine whether the remaining display parameters of the game elements under the background audio data of the frame satisfy the display parameter constraints in the control strategy. If so, the remaining display parameters are retained. If they are still not satisfied, actions (c1) and (c2) can be repeated; through repeated screening, deletion, and judgment, the process continues until, after the deletion processing, the various display parameters of the game elements under the background audio data of this frame satisfy the display parameter constraints in the control strategy.
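  • A sketch of the per-frame constraint check and the repeated filter-delete-judge loop in actions (c1) and (c2). The parameter structure and the constraint function are illustrative; in the disclosure the constraint is whatever guarantees that the retained parameters can be displayed together with a visually reasonable result.

      def resolve_frame_parameters(frame_params, feature_priority, satisfies_constraints):
          """frame_params: list of (feature_name, display_parameter) pairs for one frame.
          feature_priority: feature names ordered from highest to lowest priority.
          satisfies_constraints: callable implementing the display parameter constraints."""
          params = list(frame_params)
          # Repeat (c1)/(c2): drop the lowest-priority display parameter until the remaining
          # parameters satisfy the display parameter constraints (or none are left).
          while params and not satisfies_constraints([p for _, p in params]):
              # (c1) each display parameter inherits the priority of its audio feature
              params.sort(key=lambda fp: feature_priority.index(fp[0]))
              # (c2) delete the display parameter with the lowest priority
              params.pop()
          return [p for _, p in params]

      # Toy usage: the constraint here is simply "at most two display parameters per frame".
      frame = [("drum_beat", {"size": 2.0}),
               ("bass_accent", {"floating": True}),
               ("vocal_pitch", {"size": 1.2})]
      priority = ["drum_beat", "bass_accent", "vocal_pitch"]
      print(resolve_frame_parameters(frame, priority, lambda ps: len(ps) <= 2))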
  • Specifically, the priority order between the various audio features is determined as a first order; according to the first order, the priority order between the various display parameters of the game elements under the background audio data of the frame is determined.
  • the priority order among various audio features is recorded in the control policy, then the priority order among various audio features is extracted from the control policy as the first order, and based on the first order, determine The priority order among the various display parameters of the game elements under the background audio data of this frame is consistent with the priority order among the corresponding audio features.
  • the priority order among the multiple display parameters is the same.
  • When determining, according to the first order, the priority order among the various display parameters of the game elements under the background audio data of the frame, the priority order among the various audio features set by the user can be obtained, that is, the first order set by the user is obtained.
  • For example, after the user starts the target game in the user terminal, the target game can display the various audio features of the background audio data to the user and prompt the user to set the priority order among the various audio features.
  • the priority order between the various audio features customized by the game developer can also be obtained.
  • the user can also set the priority order between various display parameters of the game elements as the second order.
  • In this case, the priority order among the various display parameters of the game elements under the background audio data of the frame is determined directly based on the second order. It can be understood that the priority order among the various display parameters of the game elements under the background audio data of this frame is consistent with the second order.
  • Then, the display parameter with the lowest priority is selected as the parameter to be deleted and deleted, and it is judged whether, after the deletion processing, the various display parameters of the game elements under the background audio data of this frame meet the display parameter constraints in the control strategy. If they are met, the various display parameters are retained; if they are still not met, the screening, deletion, and judgment can be repeated until, after the deletion processing, the various display parameters of the game elements under the background audio data of this frame satisfy the display parameter constraints in the control strategy.
  • the target game can display various display parameters of the game elements to the user, and prompt the user to set the priority order among the various display parameters.
  • the priority order between various display parameters of game elements set by the user is obtained, specifically:
  • the display parameters of the game elements corresponding to each audio feature are displayed to the user respectively, where one audio feature can correspond to one or more display parameters.
  • One or more display parameters corresponding to each audio characteristic are used as a setting range, thereby obtaining multiple setting ranges.
  • the user can set the priority order among various display parameters of game elements for that audio feature within the corresponding setting range.
  • Alternatively, the display parameters of the game elements corresponding to the various audio features are displayed to the user together as one setting range, and the user can set the priority order among the various display parameters of the game elements within this setting range. For example, a total of 5 display parameters of game elements corresponding to various audio features, pre-developed and stored in the target game, are provided to the user, and the user can set the priority order between these 5 display parameters of the game elements.
  • In this way, the user can also set the priority order between the various display parameters of the game elements according to his own wishes, thereby improving the user's gaming experience.
  • the display parameters of the game elements under each frame of background audio data are determined according to the control strategy and audio characteristics, specifically as follows:
  • First, the priority order among the various audio features is obtained from the control strategy; this order can be set by the user and saved in the control strategy.
  • the audio feature with the highest priority among the audio features contained in the frame of background audio data is determined.
  • Then, the display parameters of the game element under the background audio data of the frame are determined; that is, the display parameters of the game element corresponding to the audio feature with the highest priority, as recorded in the control strategy, are determined as the display parameters of the game element under the background audio data of this frame.
  • For example, the frame of background audio data includes the drum beat feature and the vocal treble feature, and the audio feature with the highest priority among the audio features included in this frame of background audio data is the drum beat feature.
  • the display parameters of the game elements corresponding to the drum beat characteristics recorded in the control strategy are determined as the display parameters of the game elements under the background audio data of the frame. If the display parameter of the game element corresponding to the recorded drum beat characteristics in the control strategy is to generate several obstacles with specified sizes and positions, then when the background audio data of this frame arrives, several obstacles with specified sizes and positions will be generated.
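  • A sketch of this per-frame resolution: only the display parameters of the highest-priority feature present in the frame are used. The data shapes follow the illustrative control-strategy sketch earlier in this document; the inline example data below is likewise a toy assumption.

      # Illustrative data (same shape as in the earlier control-strategy sketch).
      CONTROL_STRATEGY = {
          "drum_beat": {"element": "obstacle", "display": {"size": 2.0}},
          "vocal_treble": {"element": "obstacle", "display": {"disappear_after_s": 0.5}},
      }
      FEATURE_PRIORITY = ["drum_beat", "vocal_treble"]

      def display_params_for_frame(frame_features, control_strategy, feature_priority):
          """frame_features: set of feature names present in this frame of background audio.
          Returns the display parameters taken from the control strategy entry of the
          highest-priority feature present in the frame."""
          for feature in feature_priority:              # highest priority first
              if feature in frame_features:
                  return control_strategy[feature]["display"]
          return {}                                     # no controlled feature in this frame

      # A frame containing both a drum beat and a vocal treble resolves to the drum-beat
      # display parameters, since the drum beat has the higher priority.
      print(display_params_for_frame({"drum_beat", "vocal_treble"},
                                     CONTROL_STRATEGY, FEATURE_PRIORITY))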
  • In one example, the audio features include a musical instrument beat feature and a vocal intensity feature, and the priority of the instrument beat feature is higher than the priority of the vocal intensity feature. In action (f2), determining the display parameters of the game elements under the background audio data of the frame based on the control strategy corresponding to the audio feature with the highest priority is specifically: when the background audio data of the frame does not have the instrument beat feature but has the vocal intensity feature, the audio feature with the highest priority in the background audio data of this frame is determined to be the vocal intensity feature, and the display parameters of the game elements under the background audio data of this frame are determined according to the control strategy corresponding to the vocal intensity feature and the vocal intensity feature in the background audio data of this frame.
  • That is, the audio feature with the highest priority in the background audio data of this frame is determined to be the vocal intensity feature in the background audio data of this frame, and the display parameters of the game elements corresponding to the vocal intensity feature in the control strategy are determined as the display parameters of the game elements under the background audio data of the frame, so that the display parameters of the game elements are set for the background audio data of the frame based on the vocal intensity feature.
  • the control strategy includes audio features and display parameters of game elements corresponding to the audio features.
  • the game elements here include existing game elements in the target game and/or newly generated game elements based on the audio features. Therefore, in the above step S106, the action of generating new game elements according to audio characteristics may also be included. Based on this, based on the control strategy, not only the existing game elements can be controlled, but also the newly generated game elements can be controlled. For example, according to the control strategy, when the beat characteristics of the background audio data arrive, the existing game elements in the target game can be controlled. Some obstacles become larger, and according to the control strategy, when the high pitch of the human voice arrives, multiple obstacles are newly generated, and the display parameters of the newly generated obstacles are set.
  • In step S106, game data of the target game is generated based on the display parameters of the game elements under each frame of background audio data.
  • the display parameters of the game elements under each frame of background audio data are used as part of the game data of the target game.
  • the game data of the target game may also include the aforementioned background audio data.
  • For example, the user terminal can execute the method flow in Figure 1 to extract audio features, and when any audio feature corresponds to multiple display parameters of the game element, the user terminal randomly selects one display parameter for this type of audio feature and generates the game data of the target game based on the selected display parameter.
  • This random selection method enables the target game to be set randomly every time the user starts the target game, thereby improving the richness and diversity of the game and improving the user's gaming experience.
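  • A sketch of this random selection: when one audio feature maps to several candidate display parameters, one is fixed at random each time the game starts, so repeated sessions of the same game look different. The candidate list below is illustrative only.

      import random

      # Illustrative: one audio feature with several candidate display parameters.
      CANDIDATE_PARAMS = {
          "drum_beat": [{"size": 2.0}, {"size": 1.2, "color": "#40a0ff"}, {"shape": "bar"}],
      }

      def pick_session_parameters(candidates):
          """Called once per game start: randomly fix one display parameter per feature."""
          return {feature: random.choice(options) for feature, options in candidates.items()}

      print(pick_session_parameters(CANDIDATE_PARAMS))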
  • the target game is an obstacle avoidance game.
  • the user characters need to move with the playback of background audio data and avoid obstacles.
  • at least two audio features in the background audio data of the target game are extracted, including the beat feature of the drum beat and the intensity feature of the human voice.
  • Figure 4a is a schematic diagram of a target game provided by an embodiment of the present disclosure. As shown in Figure 4a, when a strong drum beat arrives during the playback of the background audio data, the display parameter of the obstacle is controlled, through the method flow in Figure 1, to be a long bar.
  • Figure 4b is a schematic diagram of a target game provided by another embodiment of the present disclosure. As shown in Figure 4b, according to the control strategy, when a weak drum beat arrives, multiple obstacles are generated through the foregoing method flow, and the display parameter of each obstacle is controlled to be a floating small square.
  • Figure 4c is a schematic diagram of a target game provided by yet another embodiment of the present disclosure. As shown in Figure 4c, when an accent of the human voice arrives, the display parameter of the obstacle is controlled, through the method flow in Figure 1, to be columnar.
  • In summary, at least two audio features are extracted from the background audio data of the target game, and the at least two audio features have different audio attributes.
  • Based on the audio features, the display parameters of the game elements in the target game are determined, and the game data of the target game is generated according to the display parameters of the game elements, which can match the game elements with the audio features of the background audio data, improve the degree of matching between the game elements and the background music in the game, and improve the user's gaming experience.
  • Figure 5 is a schematic flowchart of an interaction method provided by an embodiment of the present disclosure. This method can be executed by a user terminal such as a mobile phone, a computer, or a tablet. As shown in Figure 5,
  • the method includes:
  • Step S502: In response to the user's start instruction for the target game, display at least two game elements of the target game; wherein the at least two game elements are determined based on at least two audio features included in the background audio data of the target game;
  • Step S504: Obtain the game operation instructions sent by the user, and run the target game according to the game operation instructions.
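  • A minimal sketch of the interaction flow in steps S502 and S504. The session class, its fields, and the instruction format are illustrative stand-ins, not an actual game framework API.

      class TargetGameSession:
          """Minimal illustrative session object; not an actual game engine API."""
          def __init__(self, elements_from_audio):
              self.elements = elements_from_audio   # derived from the background audio features
              self.running = False

          def start(self):
              # Step S502: on the user's start instruction, display the game elements whose
              # display parameters were determined from the background audio features.
              for element in self.elements:
                  print("display", element)
              self.running = True

          def handle(self, operation_instruction):
              # Step S504: run the target game according to the user's operation instruction,
              # e.g. a virtual object control instruction or a game element control instruction.
              print("apply", operation_instruction)

      session = TargetGameSession([{"type": "obstacle", "size": 2.0},
                                   {"type": "obstacle", "shape": "bar"}])
      session.start()
      session.handle({"kind": "virtual_object_control", "direction": "left"})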
  • In this way, at least two game elements of the target game are displayed, and the at least two game elements are determined through the aforementioned method based on at least two audio features contained in the background audio data of the target game; the target game is then run according to the game operation instructions sent by the user.
  • the matching degree between the game elements in the target game and the background audio data of the target game is high, thereby improving the user's gaming experience.
  • In step S502, the user sends a start instruction for a game session of the target game.
  • the user terminal receives the instruction and displays at least two game elements in the target game on the screen according to the instruction.
  • Game elements can be objects displayed on the game screen, such as objects that the user wants to control or obstacles that the user wants to avoid.
  • The display parameters of the at least two game elements are determined, through the method in Figure 1, based on at least two audio features contained in the background audio data of the target game. Therefore, the degree of matching between the display parameters of the at least two game elements and the background audio data of the target game is high, and the user's gaming experience is improved accordingly.
  • In step S504, the user sends a game operation instruction for the game session of the target game.
  • the user terminal receives the instruction and runs the target game according to the instruction, thereby facilitating the user to play the game.
  • the game operation instruction sent by the user is at least one of the following instructions:
  • A virtual object control instruction that controls the movement of a virtual object; the virtual object can be a virtual object corresponding to the user character in the game, that is, a virtual object controlled by the user in the game;
  • Game element control instructions that control the display parameters of game elements.
  • the method further includes: displaying the virtual object controlled by the user.
  • obtain the game operation instructions sent by the user and run the target game according to the game operation instructions, specifically:
  • the virtual objects are also objects corresponding to the user's character.
  • the user sends a virtual object control instruction for the virtual object to the user terminal.
  • the user terminal receives the virtual object control instruction and controls the movement of the virtual object in the game screen according to the virtual object control instruction.
  • When the positional relationship between the virtual object and the game element is a first preset positional relationship, the score of the target game session is adjusted; for example, when the virtual object does not touch the game element, the user's score is increased.
  • When the positional relationship between the virtual object and the game element is a second preset positional relationship, the target game session is ended; for example, when the virtual object comes into contact with a game element, the target game session ends.
  • game elements can be obstacles that users need to avoid.
  • In this way, in an obstacle-avoidance game, the virtual object corresponding to the user can be controlled to move through the user's game operation instructions, and the game score can be adjusted or the game session ended according to the game progress.
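  • A sketch of the scoring rule just described, assuming the first preset positional relationship is "not touching" and the second is "touching"; the bounding-box collision test is an illustrative placeholder for whatever the game engine provides.

      from dataclasses import dataclass

      @dataclass
      class Box:
          x: float
          y: float
          width: float
          height: float

      def overlaps(a, b):
          # Illustrative axis-aligned bounding-box test; a real game would use its
          # engine's collision detection instead.
          return (abs(a.x - b.x) * 2 < a.width + b.width and
                  abs(a.y - b.y) * 2 < a.height + b.height)

      def update_on_position(virtual_object, game_element, state):
          """Adjust the score or end the session based on the positional relationship
          between the user's virtual object and a game element (e.g. an obstacle)."""
          if overlaps(virtual_object, game_element):   # second preset relationship: contact
              state["ended"] = True                     # contact ends the game session
          else:                                         # first preset relationship: no contact
              state["score"] += 1                       # e.g. reward avoiding the obstacle

      state = {"score": 0, "ended": False}
      update_on_position(Box(0, 0, 1, 1), Box(5, 5, 1, 1), state)
      print(state)   # obstacle avoided: score increases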
  • the game operation instructions sent by the user include game element control instructions that control the display parameters of the game elements
  • the game operation instructions sent by the user are obtained, and the target game is run according to the game operation instructions, specifically as follows:
  • the user sends a game element control instruction
  • the user terminal receives the game element control instruction and changes the display parameters of the game element according to the instruction; specifically, display parameters such as the size and color of the game element can be changed.
  • For example, when the user sends a game element control instruction by short pressing, and the game element control instruction is used to change the color of the game element, if the time when the user sends the game element control instruction is equal to the display time of the game element, that is, the game element is triggered exactly when it is displayed, the user's score is increased.
  • As another example, when the user sends a game element control instruction by long pressing, and the game element control instruction is used to change the size of the game element, if the duration of the user's long press is greater than or equal to the display duration of the game element, the user's score is increased.
  • Otherwise, the target game session is ended. That is, based on the time relationship between the time when the user triggers the game element control instruction and the display time of the game element, the score is adjusted or the target game session is ended.
  • For example, when the user sends a game element control instruction by short pressing and the game element control instruction is used to change the color of the game element, if the time when the user sends the game element control instruction is not equal to the display time of the game element, that is, the game element is not triggered exactly when it is displayed, the game session ends.
  • Similarly, when the user sends a game element control instruction by long pressing and the game element control instruction is used to change the size of the game element, if the duration of the user's long press is less than the display duration of the game element, the game session ends.
  • In this way, in a game in which game elements are triggered, the target game can be played based on the time relationship between the triggering time of the user's game element control instruction and the display time of the game element.
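  • A sketch of the timing rules above, assuming a short press must land on the element's display time (within a tolerance) and a long press must last at least the element's display duration; the tolerance value is an assumption for illustration.

      def judge_trigger(press_kind, press_time, press_duration,
                        element_display_time, element_display_duration,
                        tolerance=0.05):
          """Return 'score' to increase the user's score or 'end' to end the game session,
          based on the relation between the trigger time and the element's display time."""
          if press_kind == "short":        # e.g. instruction that changes the element's color
              on_time = abs(press_time - element_display_time) <= tolerance
              return "score" if on_time else "end"
          if press_kind == "long":         # e.g. instruction that changes the element's size
              long_enough = press_duration >= element_display_duration
              return "score" if long_enough else "end"
          return "end"

      # A short press arriving essentially at the element's display time scores.
      print(judge_trigger("short", press_time=2.00, press_duration=0.0,
                          element_display_time=2.02, element_display_duration=0.0))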
  • Figure 6 is a schematic structural diagram of a game data generation device provided by an embodiment of the present disclosure. As shown in Figure 6, the device includes:
  • the feature extraction unit 61 is used to obtain the background audio data of the target game, separate the background audio data into at least two sub-audio data according to the preset audio attribute dimensions, and extract at least one audio feature of each of the sub-audio data according to the audio feature extraction type corresponding to each of the sub-audio data, to obtain at least two audio features;
  • the strategy acquisition unit 62 is configured to acquire a control strategy for controlling the display parameters of game elements in the target game based on the audio features; wherein the control strategy includes the audio features and the display parameters of the game elements corresponding to the audio features, and there is a priority order among the various audio features in the control strategy;
  • the data generation unit 63 is configured to determine the display parameters of the game elements under the background audio data of each frame according to the control strategy and the audio features, and generate game data of the target game according to the display parameters of the game elements under the background audio data of each frame.
  • the preset audio attribute dimensions include a timbre dimension; the feature extraction unit 61 is specifically configured to: separate the background audio data into at least two sub-audio data according to the timbre dimension; wherein each of the sub-audio data The timbres are different from each other.
  • In one embodiment, the device further includes a feature type determination unit configured to, before at least one audio feature of each sub-audio data is extracted according to the audio feature extraction type corresponding to each sub-audio data, obtain the mapping relationship between each dimension value of the audio attribute dimension and the audio feature extraction type, and determine the audio feature extraction type corresponding to each of the sub-audio data based on the dimension value to which each of the sub-audio data belongs and the mapping relationship.
  • the strategy acquisition unit 62 is specifically configured to: acquire a pre-stored control strategy for controlling the display parameters of game elements in the target game based on the audio characteristics; or, acquire a user-defined control strategy based on the audio characteristics.
  • the strategy acquisition unit 62 is further specifically configured to: obtain the display parameters of the game elements customized by the user for the audio features, and use the display parameters of the game elements customized by the user for the audio features as part of the control strategy for controlling the display parameters of game elements in the target game based on the audio features; and/or, provide the user with multiple initial display parameters of the game elements, obtain the target display parameters of the game elements filtered by the user for the audio features among the multiple initial display parameters, and use the target display parameters filtered by the user for the audio features as part of the control strategy for controlling the display parameters of game elements in the target game based on the audio features.
  • the strategy acquisition unit 62 is further specifically configured to: acquire display material related to the game elements uploaded by the user, and determine the display mode related to the game elements specified by the user; generate user-defined display parameters of the game elements according to the display material and the display mode, and determine the audio feature that the user associates with the generated display parameters.
  • the strategy acquisition unit 62 is further specifically configured to: provide the initial display parameters of the game elements corresponding to each audio feature to the user as separate screening ranges, and acquire the target display parameters of the game elements that the user screens for each audio feature within the corresponding screening range; or, provide the initial display parameters of the game elements corresponding to the various audio features to the user together as one screening range, acquire the target display parameters of the game elements that the user screens within that screening range, and determine the audio features that the user associates with the target display parameters.
  • the data generation unit 63 is specifically configured to: determine, according to the control strategy and the audio features, at least one display parameter that the game elements have under each frame of the background audio data, wherein each display parameter is related to the audio features possessed by that frame of the background audio data; for any frame of the background audio data, judge whether the various display parameters that the game elements have under that frame of the background audio data satisfy the display parameter constraint condition in the control strategy; if satisfied, retain the various display parameters that the game elements have under that frame of the background audio data; if not satisfied, adjust the various display parameters that the game elements have under that frame of the background audio data according to the priority order among the various audio features, and retain the adjusted display parameters, wherein the adjusted display parameters satisfy the display parameter constraint condition; and determine the display parameters of the game elements under each frame of the background audio data according to the display parameters retained for the game elements under each frame of the background audio data.
  • the data generation unit 63 is further specifically configured to: determine the priority order among the various display parameters that the game elements have under that frame of the background audio data according to the priority order among the various audio features; and, according to the priority order among the various display parameters, select the parameters to be deleted from the various display parameters that the game elements have under that frame of the background audio data and delete them.
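A sketch of this per-frame constraint check and priority-based pruning, reusing the assumed ControlStrategy structure above, might read as follows; the constraint test is passed in as a callable because the disclosure does not fix its concrete form.

```python
def resolve_frame_params(frame_features, strategy, satisfies_constraints):
    """Retain display parameters for one frame of background audio.

    frame_features: audio features detected in this frame, e.g. ["drum_beat", "vocal_strength"].
    satisfies_constraints: callable taking a list of DisplayParams and returning True
        if the display parameter constraint condition of the control strategy holds.
    """
    # One display parameter per detected feature, ordered by feature priority (high -> low).
    ordered = [f for f in strategy.priority if f in frame_features]
    params = [strategy.rules[f] for f in ordered if f in strategy.rules]

    # If the constraints are violated, repeatedly delete the lowest-priority
    # parameter until the remaining parameters satisfy them.
    while params and not satisfies_constraints(params):
        params.pop()
    return params
```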
  • the data generation unit 63 is specifically configured to: for any frame of the background audio data, determine the audio feature with the highest priority among the audio features contained in that frame of the background audio data, according to the priority order among the various audio features; and determine the display parameters of the game elements under that frame of the background audio data according to the control strategy corresponding to the audio feature with the highest priority.
  • the audio features include an instrument beat feature and a vocal strength feature, and the priority of the instrument beat feature is higher than the priority of the vocal strength feature; the data generation unit 63 is further specifically configured to: when that frame of the background audio data does not have the instrument beat feature but has the vocal strength feature, determine that the audio feature with the highest priority is the vocal strength feature in that frame of the background audio data; and determine the display parameters of the game elements under that frame of the background audio data according to the control strategy corresponding to the vocal strength feature and the vocal strength feature in that frame of the background audio data.
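When only the single highest-priority feature of a frame drives the game element, the selection (including the fall-back from the instrument beat feature to the vocal strength feature) can be sketched as below, again reusing the assumed ControlStrategy structure.

```python
def params_for_frame(frame_features, strategy):
    """Pick the display parameters driven by the highest-priority feature of this frame.

    If the instrument beat feature is absent but a vocal strength feature is present,
    the vocal strength feature is automatically the highest-priority feature left.
    """
    for feature in strategy.priority:            # priority list ordered high -> low
        if feature in frame_features:
            return strategy.rules.get(feature)
    return None                                   # this frame drives no game element
```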
  • a priority acquisition unit is also included, configured to acquire the priority order among the various audio features set by the user.
  • the game data generating device in this embodiment can implement each process of the aforementioned game data generating method embodiment and achieve the same effects and functions, which will not be repeated here.
  • Figure 7 is a schematic structural diagram of an interactive device provided by an embodiment of the present disclosure. As shown in Figure 7, the device includes:
  • the element display unit 71 is configured to display at least two game elements of the target game in response to the user's start instruction for a match of the target game; wherein the at least two game elements are determined based on at least two audio features included in the background audio data of the target game;
  • the game running unit 72 is used to obtain the game operation instructions sent by the user, and run the target game according to the game operation instructions.
  • the game operation instructions include at least one of the following instructions: virtual object control instructions to control virtual objects; game element control instructions to control display parameters of the game elements.
  • optionally, an object display unit is further included, configured to display a virtual object controlled by the user; the game running unit 72 is specifically configured to: obtain the virtual object control instruction sent by the user, and control the virtual object to move according to the virtual object control instruction; in response to the positional relationship between the virtual object and the game element being a first preset positional relationship, adjust the score of the target game match; and in response to the positional relationship between the virtual object and the game element being a second preset positional relationship, end the target game match.
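In an obstacle-avoidance match, for example, the two preset positional relationships could simply be "not touching an obstacle" and "touching an obstacle"; the bounding-box test and the game object below are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float

def overlaps(a: Box, b: Box) -> bool:
    """Axis-aligned bounding-box intersection test."""
    return a.x < b.x + b.w and b.x < a.x + a.w and a.y < b.y + b.h and b.y < a.y + a.h

def on_virtual_object_moved(virtual_object: Box, game_elements, game) -> None:
    """`game` is a hypothetical object exposing `.score` and `.end_match()`."""
    if any(overlaps(virtual_object, element) for element in game_elements):
        game.end_match()   # second preset positional relationship: collision with an obstacle
    else:
        game.score += 1    # first preset positional relationship: obstacle avoided
```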
  • the game running unit 72 is specifically configured to: obtain the game element control instruction sent by the user, and change the display parameters of the game element according to the game element control instruction; in response to the time relationship between the triggering time of the game element control instruction and the display time of the game element being a first preset time relationship, adjust the score of the target game match; and in response to the time relationship between the triggering time of the game element control instruction and the display time of the game element being a second preset time relationship, end the target game match.
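For a rhythm-style match the two preset time relationships could be "triggered while the element is displayed" versus "triggered too early or too late"; the tolerance below is an assumed value, not one given by the disclosure.

```python
HIT_TOLERANCE_S = 0.15  # assumed timing window

def on_element_triggered(trigger_time_s: float, element_show_time_s: float, game) -> None:
    """`game` is the same hypothetical object as above (`.score`, `.end_match()`)."""
    if abs(trigger_time_s - element_show_time_s) <= HIT_TOLERANCE_S:
        game.score += 1    # first preset time relationship: hit within the display window
    else:
        game.end_match()   # second preset time relationship: triggered too early or too late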
  • the interactive device in this embodiment can implement each process of the aforementioned interactive method embodiment and achieve the same effects and functions, which will not be repeated here.
  • FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the electronic device may vary considerably depending on configuration or performance. It may include one or more processors 801 and a memory 802, and one or more application programs or data may be stored in the memory 802. The memory 802 may be short-term storage or persistent storage. An application program stored in the memory 802 may include one or more modules (not shown), and each module may include a series of computer-executable instructions for the electronic device. Furthermore, the processor 801 may be configured to communicate with the memory 802 and execute, on the electronic device, the series of computer-executable instructions in the memory 802.
  • the electronic device may also include one or more power supplies 803, one or more wired or wireless network interfaces 804, one or more input or output interfaces 805, one or more keyboards 806, etc.
  • the electronic device is a game data generating device, including a processor; and a memory configured to store computer-executable instructions, which, when executed, cause the processor to implement the following process:
  • obtain the background audio data of the target game, separate the background audio data into at least two kinds of sub-audio data according to the preset audio attribute dimensions, and extract at least one audio feature of each kind of sub-audio data according to the audio feature extraction type corresponding to that sub-audio data, so as to obtain at least two audio features;
  • acquire a control strategy for controlling the display parameters of game elements in the target game based on the audio features; wherein the control strategy includes the audio features and the display parameters of the game elements corresponding to the audio features, and the various audio features in the control strategy have a priority order among them;
  • determine the display parameters of the game elements under each frame of the background audio data according to the control strategy and the audio features, and generate the game data of the target game according to the display parameters of the game elements under each frame of the background audio data.
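Tying the earlier sketches together, one assumed end-to-end pass that turns the background audio into per-frame game data could look like the following; the frame rate, the feature lookup helper, and the game-data container are all illustrative assumptions.

```python
FRAME_RATE_HZ = 30  # assumed frame rate of the generated game data

def features_in_frame(frame_index: int, features: dict) -> list:
    """Hypothetical helper: names of the audio features active in this frame,
    e.g. a beat instant or a strong-vocal segment falling inside the frame."""
    raise NotImplementedError

def generate_game_data(audio_path: str, duration_s: float, strategy) -> list:
    features = extract_features(audio_path)        # sketch shown for unit 61 above
    game_data = []
    for i in range(int(duration_s * FRAME_RATE_HZ)):
        active = features_in_frame(i, features)
        retained = resolve_frame_params(            # sketch shown for unit 63 above
            active, strategy, satisfies_constraints=lambda p: True  # trivial stand-in
        )
        game_data.append({"frame": i, "display_params": retained})
    return game_data
```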
  • optionally, the preset audio attribute dimensions include a timbre dimension; separating the background audio data into at least two kinds of sub-audio data according to the preset audio attribute dimensions includes: separating the background audio data into at least two kinds of sub-audio data according to the timbre dimension; wherein the timbres of the sub-audio data are different from each other.
  • optionally, before at least one audio feature of each kind of sub-audio data is extracted according to the audio feature extraction type corresponding to that sub-audio data, the process further includes: obtaining a mapping relationship between each dimension value of the audio attribute dimension and the audio feature extraction types; and determining the audio feature extraction type corresponding to each kind of sub-audio data based on the dimension value to which that sub-audio data belongs and the mapping relationship.
  • optionally, acquiring a control strategy for controlling the display parameters of game elements in the target game based on the audio features includes: acquiring a pre-stored control strategy for controlling the display parameters of game elements in the target game based on the audio features; or, acquiring a user-defined control strategy for controlling the display parameters of game elements in the target game based on the audio features.
  • optionally, determining the display parameters of the game elements under each frame of the background audio data according to the control strategy and the audio features includes: determining, according to the control strategy and the audio features, at least one display parameter that the game elements have under each frame of the background audio data, wherein each display parameter is related to the audio features possessed by that frame of the background audio data; for any frame of the background audio data, judging whether the various display parameters that the game elements have under that frame of the background audio data satisfy the display parameter constraint condition in the control strategy; if satisfied, retaining the various display parameters that the game elements have under that frame of the background audio data; if not satisfied, adjusting the various display parameters that the game elements have under that frame of the background audio data according to the priority order among the various audio features, and retaining the adjusted display parameters, wherein the adjusted display parameters satisfy the display parameter constraint condition; and determining the display parameters of the game elements under each frame of the background audio data according to the display parameters retained for the game elements under each frame of the background audio data.
  • optionally, adjusting the various display parameters that the game elements have under that frame of the background audio data according to the priority order among the various audio features includes: determining the priority order among the various display parameters that the game elements have under that frame of the background audio data according to the priority order among the various audio features; and, according to the priority order among the various display parameters, selecting the parameters to be deleted from the various display parameters that the game elements have under that frame of the background audio data and deleting them.
  • optionally, determining the display parameters of the game elements under each frame of the background audio data according to the control strategy and the audio features includes: for any frame of the background audio data, determining the audio feature with the highest priority among the audio features contained in that frame of the background audio data according to the priority order among the various audio features; and determining the display parameters of the game elements under that frame of the background audio data according to the control strategy corresponding to the audio feature with the highest priority.
  • optionally, the audio features include an instrument beat feature and a vocal strength feature, and the priority of the instrument beat feature is higher than the priority of the vocal strength feature; determining the display parameters of the game elements under that frame of the background audio data according to the control strategy corresponding to the audio feature with the highest priority includes: when that frame of the background audio data does not have the instrument beat feature but has the vocal strength feature, determining that the audio feature with the highest priority is the vocal strength feature in that frame of the background audio data; and determining the display parameters of the game elements under that frame of the background audio data according to the control strategy corresponding to the vocal strength feature and the vocal strength feature in that frame of the background audio data.
  • the game data generation device in this embodiment can implement each process of the aforementioned game data generation method embodiment and achieve the same effects and functions, which will not be repeated here.
  • the electronic device is an interactive device and includes a processor; and, a memory configured to store computer-executable instructions, which when executed, causes the processor to implement the following process :
  • in response to a start instruction of the user for a match of the target game, at least two game elements of the target game are displayed; wherein the at least two game elements are determined based on at least two audio features included in the background audio data of the target game; game operation instructions sent by the user are obtained, and the target game is run according to the game operation instructions.
  • optionally, the game operation instructions include at least one of the following instructions: a virtual object control instruction for controlling a virtual object; and a game element control instruction for controlling the display parameters of the game elements.
  • optionally, the process further includes: displaying a virtual object controlled by the user; and obtaining the game operation instructions sent by the user and running the target game according to the game operation instructions includes: obtaining a virtual object control instruction sent by the user, and controlling the virtual object to move according to the virtual object control instruction; in response to the positional relationship between the virtual object and the game element being a first preset positional relationship, adjusting the score of the target game match; and in response to the positional relationship between the virtual object and the game element being a second preset positional relationship, ending the target game match.
  • optionally, obtaining the game operation instructions sent by the user and running the target game according to the game operation instructions includes: obtaining a game element control instruction sent by the user, and changing the display parameters of the game element according to the game element control instruction; in response to the time relationship between the triggering time of the game element control instruction and the display time of the game element being a first preset time relationship, adjusting the score of the target game match; and in response to the time relationship between the triggering time of the game element control instruction and the display time of the game element being a second preset time relationship, ending the target game match.
  • the interactive device in this embodiment can implement each process of the aforementioned interactive method embodiment and achieve the same effects and functions, which will not be repeated here.
  • An embodiment of the present disclosure also provides a computer-readable storage medium.
  • the computer-readable storage medium is used to store computer-executable instructions. When executed by a processor, the computer-executable instructions implement the following process:
  • obtain the background audio data of the target game, separate the background audio data into at least two kinds of sub-audio data according to the preset audio attribute dimensions, and extract at least one audio feature of each kind of sub-audio data according to the audio feature extraction type corresponding to that sub-audio data, so as to obtain at least two audio features;
  • acquire a control strategy for controlling the display parameters of game elements in the target game based on the audio features; wherein the control strategy includes the audio features and the display parameters of the game elements corresponding to the audio features, and the various audio features in the control strategy have a priority order among them;
  • determine the display parameters of the game elements under each frame of the background audio data according to the control strategy and the audio features, and generate the game data of the target game according to the display parameters of the game elements under each frame of the background audio data.
  • optionally, the preset audio attribute dimensions include a timbre dimension; separating the background audio data into at least two kinds of sub-audio data according to the preset audio attribute dimensions includes: separating the background audio data into at least two kinds of sub-audio data according to the timbre dimension; wherein the timbres of the sub-audio data are different from each other.
  • the The process when the computer-executable instructions are executed by the processor, before extracting at least one audio feature of each of the sub-audio data according to an audio feature extraction type corresponding to each of the sub-audio data, the The process also includes: obtaining a mapping relationship between each dimension value of the audio attribute dimension and the audio feature extraction type; determining each of the sub-audio data based on the dimension value to which each of the sub-audio data belongs and the mapping relationship. Type of audio feature extraction corresponding to audio data.
  • optionally, acquiring a control strategy for controlling the display parameters of game elements in the target game based on the audio features includes: acquiring a pre-stored control strategy for controlling the display parameters of game elements in the target game based on the audio features; or, acquiring a user-defined control strategy for controlling the display parameters of game elements in the target game based on the audio features.
  • optionally, determining the display parameters of the game elements under each frame of the background audio data according to the control strategy and the audio features includes: determining, according to the control strategy and the audio features, at least one display parameter that the game elements have under each frame of the background audio data, wherein each display parameter is related to the audio features possessed by that frame of the background audio data; for any frame of the background audio data, judging whether the various display parameters that the game elements have under that frame of the background audio data satisfy the display parameter constraint condition in the control strategy; if satisfied, retaining the various display parameters that the game elements have under that frame of the background audio data; if not satisfied, adjusting the various display parameters that the game elements have under that frame of the background audio data according to the priority order among the various audio features, and retaining the adjusted display parameters, wherein the adjusted display parameters satisfy the display parameter constraint condition; and determining the display parameters of the game elements under each frame of the background audio data according to the display parameters retained for the game elements under each frame of the background audio data.
  • optionally, adjusting the various display parameters that the game elements have under that frame of the background audio data according to the priority order among the various audio features includes: determining the priority order among the various display parameters that the game elements have under that frame of the background audio data according to the priority order among the various audio features; and, according to the priority order among the various display parameters, selecting the parameters to be deleted from the various display parameters that the game elements have under that frame of the background audio data and deleting them.
  • optionally, determining the display parameters of the game elements under each frame of the background audio data according to the control strategy and the audio features includes: for any frame of the background audio data, determining the audio feature with the highest priority among the audio features contained in that frame of the background audio data according to the priority order among the various audio features; and determining the display parameters of the game elements under that frame of the background audio data according to the control strategy corresponding to the audio feature with the highest priority.
  • optionally, the audio features include an instrument beat feature and a vocal strength feature, and the priority of the instrument beat feature is higher than the priority of the vocal strength feature; determining the display parameters of the game elements under that frame of the background audio data according to the control strategy corresponding to the audio feature with the highest priority includes: when that frame of the background audio data does not have the instrument beat feature but has the vocal strength feature, determining that the audio feature with the highest priority is the vocal strength feature in that frame of the background audio data; and determining the display parameters of the game elements under that frame of the background audio data according to the control strategy corresponding to the vocal strength feature and the vocal strength feature in that frame of the background audio data.
  • the storage medium in this embodiment can implement each process of the aforementioned game data generation method embodiment and achieve the same effects and functions, which will not be repeated here.
  • An embodiment of the present disclosure also provides a computer-readable storage medium.
  • the computer-readable storage medium is used to store computer-executable instructions. When executed by a processor, the computer-executable instructions implement the following process:
  • in response to a start instruction of the user for a match of the target game, at least two game elements of the target game are displayed; wherein the at least two game elements are determined based on at least two audio features included in the background audio data of the target game; game operation instructions sent by the user are obtained, and the target game is run according to the game operation instructions.
  • optionally, the game operation instructions include at least one of the following instructions: a virtual object control instruction for controlling a virtual object; and a game element control instruction for controlling the display parameters of the game elements.
  • optionally, the process further includes: displaying a virtual object controlled by the user; and obtaining the game operation instructions sent by the user and running the target game according to the game operation instructions includes: obtaining a virtual object control instruction sent by the user, and controlling the virtual object to move according to the virtual object control instruction; in response to the positional relationship between the virtual object and the game element being a first preset positional relationship, adjusting the score of the target game match; and in response to the positional relationship between the virtual object and the game element being a second preset positional relationship, ending the target game match.
  • optionally, obtaining the game operation instructions sent by the user and running the target game according to the game operation instructions includes: obtaining a game element control instruction sent by the user, and changing the display parameters of the game element according to the game element control instruction; in response to the time relationship between the triggering time of the game element control instruction and the display time of the game element being a first preset time relationship, adjusting the score of the target game match; and in response to the time relationship between the triggering time of the game element control instruction and the display time of the game element being a second preset time relationship, ending the target game match.
  • the storage medium in this embodiment can implement each process of the aforementioned interactive method embodiment and achieve the same effects and functions, which will not be repeated here.
  • the computer-readable storage medium includes read-only memory (Read-Only Memory, ROM for short), random access memory (Random Access Memory, RAM for short), magnetic disk or optical disk, etc.
  • A programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user programming the device. The hardware description languages (HDLs) used for such programming include, among others, ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, RHDL (Ruby Hardware Description Language), VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog.
  • the controller may be implemented in any suitable manner; for example, the controller may take the form of a microprocessor or processor and a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicone Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory.
  • in addition to implementing the controller in the form of pure computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, such a controller can be regarded as a hardware component, and the devices included therein for implementing various functions can also be regarded as structures within the hardware component. Or even, the devices for implementing various functions can be regarded both as software modules implementing the method and as structures within the hardware component.
  • An example implementation device is a computer.
  • the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or A combination of any of these devices.
  • one or more embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment that combines software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means, which instruction means implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • One or more embodiments of the present disclosure may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer.
  • program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
  • One or more embodiments of the present disclosure may also be practiced in distributed computing environments where tasks are performed by remote processing devices connected through a communications network.
  • program modules may be located in both local and remote computer storage media including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)
  • Pinball Game Machines (AREA)

Abstract

本公开实施例提供了一种游戏数据生成方法及装置、交互方法及装置,该游戏数据生成方法包括:获取目标游戏的背景音频数据,按照预设的音频属性维度,将背景音频数据分离成至少两种子音频数据,根据每种子音频数据对应的音频特征提取种类,提取每种子音频数据的至少一种音频特征,得到至少两种音频特征;获取基于音频特征对目标游戏中的游戏元素的显示参数进行控制的控制策略;根据控制策略和音频特征,确定各帧背景音频数据下游戏元素的显示参数,根据各帧背景音频数据下游戏元素的显示参数,生成目标游戏的游戏数据。通过本实施例,能够提高游戏中的游戏元素与背景音乐的匹配程度。

Description

游戏数据生成方法及装置、交互方法及装置
本申请要求于2022年3月23日递交的中国专利申请第202210289521.8号的优先权,在此全文引用上述中国专利申请公开的内容以作为本申请的一部分。
技术领域
本公开的实施例涉及一种游戏数据生成方法及装置、交互方法及装置。
背景技术
在游戏开发时,需要为游戏配备背景音乐,玩家结合游戏的背景音乐参与到游戏中,能够提高玩家的游戏体验。可以在游戏的背景音乐确定后开发游戏,然而这种方式容易导致游戏中的游戏元素与背景音乐不够贴合,比如,在背景音乐是一段欢快的旋律时,游戏中的游戏元素没有呈现一种欢快效果的形态。可见,游戏中的游戏元素与背景音乐的匹配程度有待提高。
发明内容
本公开实施例提供了一种游戏数据生成方法及装置、交互方法及装置,用于提高游戏中的游戏元素与背景音乐的匹配程度。
第一方面,本公开实施例提供了一种游戏数据生成方法,包括:获取目标游戏的背景音频数据,按照预设的音频属性维度,将所述背景音频数据分离成至少两种子音频数据,根据每种所述子音频数据对应的音频特征提取种类,提取每种所述子音频数据的至少一种音频特征,得到至少两种音频特征;获取基于所述音频特征对所述目标游戏中的游戏元素的显示参数进行控制的控制策略;其中,所述控制策略包括所述音频特征和所述音频特征对应的所述游戏元素的显示参数;所述控制策略中各种所述音频特征之间具有优先级顺序;根据所述控制策略和所述音频特征,确定各帧所述背景音频数据下所述游戏元素的显示参数,根据各帧所述背景音频数据下所述游戏元素的显示参数,生成所述目标游戏的游戏数据。
第二方面,本公开实施例提供了一种交互方法,包括:响应于用户针对目标游戏对局的启动指令,显示所述目标游戏的至少两种游戏元素;其中,所述至少两种游戏元素基于所述目标游戏的背景音频数据中包含的至少两种音频特征确定;获取所述用户发送的游戏操作指令,并根据所述游戏操作指令,运行所述目标游戏。
第三方面,本公开实施例提供了一种游戏数据生成装置,包括:特征提取单元,用于获取目标游戏的背景音频数据,按照预设的音频属性维度,将所述背景音频数据分离成至少两种子音频数据,根据每种所述子音频数据对应的音频特征提取种类,提取每种所述子音频数据的至少一种音频特征,得到至少两种音频特征;策略获取单元,用于获取基于所述音频特征对所述目标游戏中的游戏元素的显示参数进行控制的控制策略;其中,所述控制策略包括所述音频特征和所述音频特征对应的所述游戏元素的显示参数;所述控制策略中各种所述音频特征之间具有优先级顺序;数据生成单元,用于根据所述控制策略和所述音频特征,确定各帧所述背景音频数据下所述游戏元素的显示参数,根据各帧所述背景音频数据下所述游戏元素的显示参数,生成所述目标游戏的游戏数据。
第四方面,本公开实施例提供了一种交互装置,包括:元素显示单元,用于响应于用户针对目标游戏对局的启动指令,显示所述目标游戏的至少两种游戏元素;其中,所述至少两种游戏元素基于所述目标游戏的背景音频数据中包含的至少两种音频特征确定;游戏运行单元,用于获取所述用户发送的游戏操作指令,并根据所述游戏操作指令,运行所述目标游戏。
第五方面,本公开实施例提供了一种电子设备,包括:处理器;以及,被配置为存储计算机可执行指令的存储器,所述计算机可执行指令在被执行时使所述处理器实现如上述第一方面或第二方面所述的方法。
第六方面,本公开实施例提供了一种计算机可读存储介质,所述计算机可读存储介质用于存储计算机可执行指令,所述计算机可执行指令在被处理器执行时实现如上述第一方面或第二方面所述的方法。
通过本公开的一个或多个实施例,从目标游戏的背景音频数据中提取至少两种音频特征,该至少两种音频特征的音频属性不同,基于该至少两种音频特征,确定目标游戏中的游戏元素的显示参数,根据游戏元素的显示参数, 生成目标游戏的游戏数据,能够使得游戏元素与背景音频数据的音频特征相匹配,提高游戏中的游戏元素与背景音乐的匹配程度。
附图说明
为了更清楚地说明本公开的一个或多个实施例,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本公开中记载的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图;
图1为本公开一实施例提供的游戏数据生成方法的流程示意图;
图2为本公开一实施例提供的音频特征提取的流程示意图;
图3为本公开一实施例提供的确定游戏元素的显示参数的流程示意图;
图4a为本公开一实施例提供的目标游戏的示意图;
图4b为本公开另一实施例提供的目标游戏的示意图;
图4c为本公开又一实施例提供的目标游戏的示意图;
图5为本公开一实施例提供的交互方法的流程示意图;
图6为本公开一实施例提供的游戏数据生成装置的结构示意图;
图7为本公开一实施例提供的交互装置的结构示意图;以及
图8为本公开一实施例提供的电子设备的结构示意图。
具体实施方式
为了使本技术领域的人员更好地理解本公开的一个或多个实施例,下面将结合本公开的一个或多个实施例中的附图,对本公开的一个或多个实施例进行清楚、完整地描述,显然,所描述的实施例仅仅是本公开的一部分实施例,而不是全部的实施例。基于本公开的一个或多个实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本公开的保护范围。
参考图1,图1为本公开一实施例提供的游戏数据生成方法的流程示意图,如图1所示,该流程包括以下步骤:
步骤S102,获取目标游戏的背景音频数据,按照预设的音频属性维度,将背景音频数据分离成至少两种子音频数据,根据每种子音频数据对应的音 频特征提取种类,提取每种子音频数据的至少一种音频特征,得到至少两种音频特征;
步骤S104,获取基于音频特征对目标游戏中的游戏元素的显示参数进行控制的控制策略;其中,控制策略包括音频特征和音频特征对应的游戏元素的显示参数;控制策略中各种音频特征之间具有优先级顺序;
步骤S106,根据控制策略和音频特征,确定各帧背景音频数据下游戏元素的显示参数,根据各帧背景音频数据下游戏元素的显示参数,生成目标游戏的游戏数据。
可见,通过本实施例,从目标游戏的背景音频数据中提取至少两种音频特征,该至少两种音频特征的音频属性不同,基于该至少两种音频特征,确定目标游戏中的游戏元素的显示参数,根据游戏元素的显示参数,生成目标游戏的游戏数据,能够使得游戏元素与背景音频数据的音频特征相匹配,提高游戏中的游戏元素与背景音乐的匹配程度。
图1中的游戏数据生成方法,能够由用于开发目标游戏的后台服务器执行,也能够由目标游戏的用户所使用的用户终端执行,该用户终端可以为手机、电脑、平板电脑等用户终端。
上述步骤S102中,获取目标游戏的背景音频数据。当本实施例中的方法流程由用于开发目标游戏的后台服务器执行时,后台服务器可以获取开发人员指定的音频数据作为目标游戏的背景音频数据。当本实施例中的方法流程由目标游戏的用户所使用的用户终端执行时,用户终端可以获取用户上传的本地音频数据,作为目标游戏的背景音频数据。在一个具体的实施方式中,目标游戏在开发过程中,由开发人员通过后台服务器指定目标游戏的背景音频数据,当目标游戏开发完成投放到市场后,游戏用户还可以上传用户终端中的本地音频数据重新作为目标游戏的背景音频数据。
上述步骤S102中,按照预设的音频属性维度,将背景音频数据分离成至少两种子音频数据,根据每种子音频数据对应的音频特征提取种类,提取每种子音频数据的至少一种音频特征,得到至少两种音频特征,其中,该至少两种子音频数据的音频属性彼此不同。本实施例中,背景音频数据包括音频属性彼此不同的至少两种子音频数据,比如,音频属性包括音色,背景音频数据中具有人声、钢琴声、贝斯声、鼓声等,因此背景音频数据包括音色 彼此不同的四种子音频数据,分别为人声对应的子音频数据、钢琴声对应的子音频数据、贝斯声对应的子音频数据和鼓声对应的子音频数据。
本实施例中,从音频属性彼此不同的至少两种子音频数据中,提取至少两种音频特征。比如,从每种子音频数据中,提取一种音频特征,从而得到至少两种音频特征,当然,也可以从每种子音频数据中,提取多种音频特征,这里不做限制。续接上例,可以从人声对应的子音频数据中提取音阶高低特征,从钢琴声对应的子音频数据中提取节拍特征,从贝斯声对应的子音频数据中提取节拍特征,从鼓声对应的子音频数据中提取节拍特征,从而得到四种音频特征,分别为人声的音阶高低特征、钢琴声的节拍特征、贝斯声的节拍特征、鼓声的节拍特征。本实施例中提取的音频特征包括但不限于各种音色下的节拍特征、音阶高低特征、音量大小特征、声音轻重特征等。
步骤S102中,首先,按照预设的音频属性维度,将背景音频数据分离成至少两种子音频数据。在一个实施例中,预设的音频属性维度包括音色维度,按照预设的音频属性维度,将背景音频数据分离成至少两种子音频数据,具体可以为:按照音色维度,将背景音频数据分离成至少两种子音频数据,其中,各子音频数据的音色彼此不同,比如像前面的例子,背景音频数据中具有人声、钢琴声、贝斯声、鼓声,因此将背景音频数据分离成四种子音频数据,分别为人声对应的子音频数据、钢琴声对应的子音频数据、贝斯声对应的子音频数据和鼓声对应的子音频数据。
在一个实施例中,在根据每种子音频数据对应的音频特征提取种类,提取每种子音频数据的至少一种音频特征之前,本实施例中的方法还可以执行:获取音频属性维度的各维度值与音频特征提取种类之间的映射关系,基于子音频数据所属的维度值和该映射关系,确定子音频数据对应的音频特征提取种类。在确定子音频数据对应的音频特征提取种类之后,再针对每种子音频数据,根据该子音频数据对应的音频特征提取种类,提取该子音频数据的至少一种音频特征。
在一个实施例中,预设的音频属性维度包括音色维度,则音频属性维度的各维度值包括各种音色,则本实施例中,获取预设的各种音色与音频特征提取种类之间的映射关系,基于子音频数据所属的音色和该映射关系,确定子音频数据对应的音频特征提取种类。
比如,预设人声对应的音频特征提取种类为音阶高低特征,预设贝斯对应的音频特征提取种类为节拍特征,预设鼓声对应的音频特征提取种类为节拍特征,预设钢琴声对应的音频特征提取种类为节拍特征,则像前面的例子,从人声对应的子音频数据中提取音阶高低特征,从钢琴声对应的子音频数据中提取节拍特征,从贝斯声对应的子音频数据中提取节拍特征,从鼓声对应的子音频数据中提取节拍特征。
图2为本公开一实施例提供的音频特征提取的流程示意图,该流程中,以音频属性维度为音色为例进行说明,如图2所示,该流程包括:
步骤S202,获取目标游戏的背景音频数据;
步骤S204,按照音色维度,将背景音频数据分离成至少两种子音频数据;其中,各子音频数据的音色彼此不同。
步骤S206,获取预设的各种音色与音频特征提取种类之间的映射关系,基于各种子音频数据所属的音色和该映射关系,确定各种子音频数据对应的音频特征提取种类;
步骤S208,针对每种子音频数据,根据该种子音频数据对应的音频特征提取种类,提取该种子音频数据的至少一种音频特征。
图2的流程详见前面的描述,这里不再重复。
在图1的流程中,上述步骤S104中,获取基于上述音频特征对目标游戏中的游戏元素的显示参数进行控制的控制策略。其中,显示参数包括但不限于游戏元素的大小、位置、显示时长、开始显示的时间、结束显示的时间、颜色等。该控制策略可以举例为,当背景音频数据中出现鼓声的节拍特征时,控制目标游戏中的游戏元素——灯光变亮至某一亮度,或者,当背景音频数据中出现人声的高音特征时,控制目标游戏中的游戏元素——灯光颜色变为白色。当然,游戏元素不局限于灯光,游戏元素还可以举例为用户要躲避的障碍物,游戏画面中的各种对象等。
在一个实施例中,获取基于音频特征对目标游戏中的游戏元素的显示参数进行控制的控制策略,具体为:
(b1)获取预先存储的基于音频特征对目标游戏中的游戏元素的显示参数进行控制的控制策略;或者,(b2)获取用户自定义的基于音频特征对目标游戏中的游戏元素的显示参数进行控制的控制策略。
当本实施例中的方法流程由用于开发目标游戏的后台服务器执行时,后台服务器在开发目标游戏并生成目标游戏的游戏数据的过程中,可以通过方式(b1),获取预先存储的基于音频特征对目标游戏中的游戏元素的显示参数进行控制的控制策略,该存储的控制策略可以是开发人员预先配置到后台服务器中的,从而进一步开发目标游戏。
当本实施例中的方法流程由目标游戏的用户所使用的用户终端执行时,一种情况下,用户不想对已经开发完成的目标游戏进行改动,则用户终端在启动目标游戏后,可以通过方式(b1),获取目标游戏中预先存储的基于音频特征对目标游戏中的游戏元素的显示参数进行控制的控制策略。另一种情况下,游戏玩家即用户想对已经开发完成的目标游戏进行改动,则玩家可以自行配置上述的控制策略,从而用户终端在启动目标游戏后,可以通过方式(b2),获取用户自定义的基于音频特征对目标游戏中的游戏元素的显示参数进行控制的控制策略。
可见,本实施例中,当目标游戏已经位于用户终端中之后,仍然可以向玩家开放接口,使得玩家可以配置游戏中的控制策略,从而实现玩家对游戏数据的自定义,提高玩家的游戏体验。
当控制策略有多条时,在一种场景下,当本实施例中的方法流程由目标游戏的用户所使用的用户终端执行时,用户终端还可以获取部分预先存储好的控制策略,获取部分由用户自定义的控制策略。
在一个实施例中,游戏元素可以是用户角色对应的对象,也可以是不与用户角色对应但是随着游戏背景音乐的播放持续显示在屏幕上的对象,或者,随着游戏背景音乐的播放,间歇性显示在屏幕上的对象。控制策略中包括上述音频特征和音频特征对应的游戏元素的显示参数。上述获取用户自定义的基于音频特征对目标游戏中的游戏元素的显示参数进行控制的控制策略,具体为:
(b21)获取用户为音频特征自定义的游戏元素的显示参数;将用户为音频特征自定义的游戏元素的显示参数,作为基于音频特征对目标游戏中的游戏元素的显示参数进行控制的控制策略的一部分;
和/或,
(b22)向用户提供游戏元素的多个初始显示参数,获取用户在多个初始 显示参数中为音频特征筛选的游戏元素的目标显示参数;将用户为音频特征筛选的目标显示参数,作为基于音频特征对目标游戏中的游戏元素的显示参数进行控制的控制策略的一部分。
具体而言,控制策略中包括上述音频特征和音频特征对应的游戏元素的显示参数,从而通过控制策略,基于音频特征对游戏元素的显示参数进行控制。比如,游戏元素为游戏中用户要躲避的障碍物,音频特征包括鼓声的节拍特征,该音频特征对应的游戏元素的显示参数用于表示,当背景音频数据中鼓声的节拍到来时,障碍物变大后的尺寸,音频特征还包括人声的高音特征,该音频特征对应的游戏元素的显示参数用于表示,当背景音频数据中人声为高音时,障碍物消失至不见所用的时间,音频特征还包括人声的音阶高低特征,该音频特征对应的游戏元素的显示参数用于表示,随着人声高低变化而变化的障碍物的尺寸大小。
上述方式(b21)中,用户可以为某种音频特征自定义其对应的游戏元素的显示参数,从而获取用户为该种音频特征自定义的游戏元素的显示参数,将用户为该种音频特征自定义的游戏元素的显示参数,作为基于音频特征对目标游戏中的游戏元素进行控制的控制策略的一部分,控制策略的另一部分还可以包括上述提取出来的各种音频特征。这里,对于提取出来的各种音频特征,用户可以为其中的部分音频特征自定义其对应的游戏元素的显示参数,也可以为其中的全部音频特征自定义其对应的游戏元素的显示参数。
上述方式(b22)中,在向用户显示了游戏元素的多个初始显示参数之后,用户可以在多个初始显示参数中,为某种音频特征筛选其对应的游戏元素的目标显示参数,从而获取用户在多个初始显示参数中为该种音频特征筛选的游戏元素的目标显示参数,将用户为该种音频特征筛选的目标显示参数,作为基于音频特征对游戏元素的显示参数进行控制的控制策略的一部分,控制策略的另一部分还可以包括上述提取出来的各种音频特征。这里,对于提取出来的各种音频特征,用户可以为其中的部分音频特征筛选其对应的游戏元素的目标显示参数,也可以为其中的全部音频特征筛选其对应的游戏元素的目标显示参数。
当然,上述(b21)和(b22)两种方式还可以结合使用,比如,对于提取到的多种音频特征,用户为其中的部分音频特征自定义其对应的游戏元素 的显示参数,用户为其余的音频特征筛选其对应的游戏元素的显示参数。
在一个实施例中,获取用户为音频特征自定义的游戏元素的显示参数,具体为:
(b211)获取用户上传的与游戏元素相关的显示素材,并确定用户指定的与游戏元素相关的显示方式;
(b212)根据显示素材和显示方式,生成用户自定义的游戏元素的显示参数,并确定用户为生成的显示参数关联的音频特征。
首先,获取用户上传的与游戏元素相关的显示素材,显示素材可以为用户终端本地中存储的图片或者动图,接着,向用户展示多种显示方式,并在这多种显示方式中确定用户指定的显示方式,显示方式可以举例为大小由小变大的渐变显示,透明度由0到100%的渐变显示等,显示方式还与游戏元素有关,比如显示方式包括,将显示素材与游戏元素叠加到一起,或者将显示素材显示在游戏元素的上方。在一个例子中,显示方式中还可以包括对显示素材的图案处理方式,比如,对显示素材进行裁剪、图形变换等处理。
然后,根据显示素材和显示方式,生成用户自定义的游戏元素的显示参数,比如,根据显示素材和显示方式,将显示素材与游戏元素叠加到一起,并控制显示素材的透明度随着时间变化由0到100%的渐变显示。最后,确定用户自定义的游戏元素的显示参数所关联的音频特征,比如,将上面例子中的显示参数关联到贝斯的节拍特征,则当贝斯的节拍到来时,控制显示素材与游戏元素叠加到一起,并控制显示素材的透明度随着时间变化由0到100%的渐变显示。
可见,通过本实施例,能够为用户提供接口,以使用户可以根据自己的意愿为音频特征开发游戏元素的显示参数,从而达到根据用户意愿控制游戏的目的。
在一个实施例中,向用户提供游戏元素的多个初始显示参数,获取用户在多个初始显示参数中为音频特征筛选的游戏元素的目标显示参数,包括:
(b221)分别将每种音频特征对应的游戏元素的初始显示参数作为筛选范围提供给用户,获取用户分别在每个筛选范围中为每种音频特征筛选的游戏元素的目标显示参数;
或者,
(b221)将各种音频特征对应的游戏元素的初始显示参数共同作为筛选范围提供给用户,获取用户在筛选范围内筛选的游戏元素的目标显示参数,并确定用户为目标显示参数关联的音频特征。
一种情况下,分别向用户显示每种音频特征对应的游戏元素的初始显示参数,其中,一种音频特征可以对应一种或多种初始显示参数,初始显示参数是预先开发并存储在目标游戏中的参数。将每种音频特征对应的一种或多种初始显示参数,作为一个筛选范围,从而得到多个筛选范围。用户对于每种音频特征,用户可以在对应的筛选范围内,为该种音频特征筛选游戏元素的目标显示参数。
另一种情况下,将各种音频特征对应的游戏元素的初始显示参数,共同作为一个整体的筛选范围显示给用户。用户可以在该筛选范围内,为游戏元素筛选目标显示参数,并确定与该目标显示参数关联的音频特征。比如,将各种音频特征对应的游戏元素的初始显示参数共5种提供给用户,这5种参数是预先开发并存储在目标游戏中的参数,用户可以在该5种参数中,为游戏元素筛选目标显示参数,并确定与每种目标显示参数关联的音频特征,从而为每种音频特征都筛选得到至少一种对应的目标显示参数。
可见,通过本实施例,用户不仅能够开发游戏元素的显示参数,还能够根据意愿筛选游戏元素的显示参数,从而达到根据用户意愿控制游戏的目的。
根据前面描述可知,目标游戏中具有游戏元素,控制策略包括音频特征和音频特征对应的游戏元素的显示参数,这里的游戏元素,包括目标游戏中已有的游戏元素和/或根据音频特征新生成的游戏元素。显示参数包括但不限于游戏元素的大小、位置、显示时长、颜色等参数。控制策略中还包括音频特征对应的游戏元素的生成规则,根据音频特征新生成游戏元素具体为:根据该生成规则,生成新的游戏元素。音频特征对应的游戏元素的生成规则用于规定,在何种音频特征到来或消失时,生成何种类型的游戏元素,且该生成的游戏元素的显示参数的具体内容。比如,在背景音频数据的节拍特征到来时,生成用户角色对应的游戏元素,并规定该用户角色对应的游戏元素的大小和颜色,或者,在人声的高音到来时,生成用户所对应的游戏对手(如障碍物)对应的游戏元素,并规定该游戏对手对应的游戏元素的大小和颜色。游戏玩家可以自定义在何种音频特征到来或消失时,生成何种类型的游戏元 素和该生成的游戏元素的显示参数的具体内容,也即自定义音频特征对应的游戏元素的生成规则。
上述步骤S106中,获取控制策略之后,根据控制策略和前述提取的音频特征,确定各帧背景音频数据下游戏元素的显示参数。如前所述,控制策略中包括音频特征和音频特征对应的游戏元素的显示参数。并且,在控制策略中,各种音频特征之间具有优先级顺序。基于此,图3为本公开一实施例提供的确定游戏元素的显示参数的流程示意图,如图3所示,根据控制策略和前述提取的音频特征,确定各帧背景音频数据下游戏元素的显示参数,具体包括:
步骤S302,根据控制策略和音频特征,确定各帧背景音频数据下游戏元素具有的至少一种显示参数;其中,每种显示参数与各帧背景音频数据所具备的音频特征相关;
步骤S304,针对任一帧背景音频数据,判断该帧背景音频数据下游戏元素具有的各种显示参数,是否满足控制策略中的显示参数约束条件;
步骤S306,若满足,则对该帧背景音频数据下游戏元素具有的各种显示参数进行保留处理,若不满足,则根据各种音频特征之间的优先级顺序,对该帧背景音频数据下游戏元素具有的各种显示参数进行调整,并保留调整后的显示参数;其中,调整后的显示参数满足显示参数约束条件;
步骤S308,根据各帧背景音频数据下游戏元素保留的显示参数,确定各帧背景音频数据下游戏元素的显示参数。
上述步骤S302中,根据以上介绍的控制策略和提取的各种音频特征,确定各帧背景音频数据下游戏元素具有的至少一种显示参数。其中,每种显示参数与各帧背景音频数据所具备的音频特征相关。具体而言,控制策略中包括了每种音频特征以及每种音频特征对应的游戏元素的显示参数,因此对于每帧背景音频数据而言,可以根据该帧背景音频数据所具备的音频特征,确定该帧背景音频数据下游戏元素的显示参数。在控制策略中,一种音频特征可以对应一种显示参数,对于每帧背景音频数据而言,每帧背景音频数据可以具有多种音频数据,因此每帧背景音频数据下游戏元素至少具有一种显示参数。比如,某帧背景音频数据具有两种音频效果,包括鼓声节拍和贝斯声重音,则该帧背景音频数据下游戏元素具有相应的两种显示参数。
上述步骤S304中,对于每帧背景音频数据,判断该帧背景音频数据下游戏元素具有的各种显示参数,是否满足控制策略中的显示参数约束条件。具体而言,控制策略中包括显示参数约束条件,该显示参数约束条件用来限制对于一帧背景音频数据,该一帧数据下,游戏元素具有的各种显示参数能够同时显示且显示出来的最终效果视觉上合理美观。因此,这一步骤中,利用显示参数约束条件,来对任意一帧背景音频数据下游戏元素具有的各种显示参数进行判断分析。
上述步骤S306中,对于任意一帧背景音频数据,若该帧背景音频数据下游戏元素具有的各种显示参数满足上述的显示参数约束条件,则对帧背景音频数据下游戏元素具有的各种显示参数进行保留处理,反之,若不满足,则根据各种音频特征之间的优先级顺序,对该帧背景音频数据下游戏元素具有的各种显示参数进行调整,以使调整后的显示参数满足显示参数约束条件,并保留调整后的显示参数。
举例而言,某帧背景音频数据具有两种音频效果,包括鼓声节拍和贝斯声重音,则该帧背景音频数据下游戏元素具有相应的两种显示参数,分别为障碍物变大后的尺寸和障碍物变小后的尺寸,则这两种效果不满足预设的显示参数约束条件,则需要对这两种显示参数进行调整。又如,某帧背景音频数据具有两种音频效果,包括鼓声节拍和贝斯声轻音,则该帧背景音频数据下游戏元素具有相应的两种显示参数,分别为障碍物变大后的尺寸和障碍物漂浮显示的具体参数,则这两种效果满足预设的显示参数约束条件,则对这两种显示参数进行保留处理。
上述步骤S308中,根据各帧背景音频数据下游戏元素保留的显示参数,确定各帧背景音频数据下游戏元素的显示参数。一种方式中,将各帧背景音频数据下游戏元素保留的显示参数,作为各帧背景音频数据下游戏元素的显示参。
上述流程中,根据各种音频特征之间的优先级顺序,对该帧背景音频数据下游戏元素具有的各种显示参数进行调整,可以为:
(c1)根据各种音频特征之间的优先级顺序,确定该帧背景音频数据下游戏元素具有的各种显示参数之间的优先级顺序;
(c2)根据各种显示参数之间的优先级顺序,在该帧背景音频数据下游 戏元素具有的各种显示参数中筛选待删除参数并删除。
对于任意一帧背景音频数据,根据各种音频特征之间的优先级顺序,确定该帧背景音频数据下游戏元素具有的各种显示参数之间的优先级高低顺序,接着,根据各种显示参数之间的优先级高低顺序,在该帧所述背景音频数据下游戏元素具有的各种显示参数中筛选优先级最低的效果作为待删除效果并删除。在删除之后,还需要判断删除处理之后,该帧背景音频数据下游戏元素具有的各种显示参数,是否满足控制策略中的显示参数约束条件,若满足,则对该各种显示参数进行保留,若还是不满足,则可以重复动作(c1)和(c2),通过反复的筛选、删除和判断,直至删除处理之后,该帧背景音频数据下游戏元素具有的各种显示参数,满足控制策略中的显示参数约束条件。
在一个具体的实施例中,动作(c1)中,确定各种音频特征之间的优先级高低顺序作为第一顺序;根据第一顺序,确定该帧背景音频数据下游戏元素具有的各种显示参数之间的优先级高低顺序。
一种情况下,控制策略中记录有各种音频特征之间的优先级高低顺序,则从控制策略中提取各种音频特征之间的优先级高低顺序作为第一顺序,根据第一顺序,确定该帧背景音频数据下游戏元素具有的各种显示参数之间的优先级高低顺序,其中,各种显示参数之间的优先级高低顺序与其对应的音频特征之间的优先级高低顺序相一致。当一个音频特征对应多个显示参数时,这多个显示参数之间的优先级顺序相同。
本实施例中,在根据第一顺序,确定该帧背景音频数据下游戏元素具有的各种显示参数之间的优先级高低顺序时,可以获取用户设置的各种音频特征之间的优先级高低顺序,也即获取用户设定的第一顺序。在一个具体的实施例中,用户在用户终端中启动目标游戏后,目标游戏可以向用户显示背景音频数据的各种音频特征,并提示用户设置各种音频特征之间的优先级高低顺序。当然,也可以获取游戏开发者自定义的各种所述音频特征之间的优先级顺序。
在一个实施例中,用户还可以设置游戏元素的各种显示参数之间的优先级高低顺序作为第二顺序,则本实施例中,步骤S306中,直接根据第二顺序,确定该帧背景音频数据下游戏元素具有的各种显示参数之间的优先级高低顺序,能够理解,该帧背景音频数据下游戏元素具有的各种显示参数之间 的优先级高低顺序与第二顺序相一致,在该帧背景音频数据下游戏元素具有的各种显示参数中筛选优先级最低的效果作为待删除效果并删除,并判断删除处理之后,该帧背景音频数据下游戏元素具有的各种显示参数,是否满足控制策略中的显示参数约束条件,若满足,则对该各种显示参数进行保留,若还是不满足,则可通过反复的筛选、删除和判断,直至删除处理之后,该帧背景音频数据下游戏元素具有的各种显示参数,满足控制策略中的显示参数约束条件。
在一个具体的实施例中,用户在用户终端中启动目标游戏后,目标游戏可以向用户显示游戏元素的各种显示参数,并提示用户设置各种显示参数之间的优先级高低顺序。在一个具体的实施例中,获取用户设置的游戏元素的各种显示参数之间的优先级高低顺序,具体为:
(e1)分别将每种音频特征对应的游戏元素的显示参数作为设定范围,获取用户分别在每个设定范围内设定的游戏元素的各种显示参数之间的优先级高低顺序;
或者,
(e2)将各种音频特征对应的游戏元素的显示参数共同作为设定范围,获取用户在设定范围内设定的游戏元素的各种显示参数之间的优先级高低顺序。
一种情况下,分别向用户显示每种音频特征对应的游戏元素的显示参数,其中,一种音频特征可以对应一种或多种显示参数。将每种音频特征对应的一种或多种显示参数,作为一个设定范围,从而得到多个设定范围。用户对于每种音频特征,用户可以在对应的设定范围内,为该种音频特征设定游戏元素的各种显示参数之间的优先级高低顺序。
另一种情况下,将各种音频特征对应的游戏元素的显示参数,共同作为一个整体的设定范围显示给用户。用户可以在该设定范围内,设定的游戏元素的各种显示参数之间的优先级高低顺序。比如,将各种音频特征对应的游戏元素的显示参数共5种参数提供给用户,这5种参数是预先开发并存储在目标游戏中的参数,用户可以在该5种参数中,设定的游戏元素的各种显示参数之间的优先级高低顺序。
可见,通过本实施例,用户还能够根据自己的意愿,设定游戏元素的各 种显示参数之间的优先级高低顺序,从而提高用户的游戏体验。
在一个实施例中,上述步骤S106中,根据控制策略和音频特征,确定各帧背景音频数据下游戏元素的显示参数,具体为:
(f1)针对任一帧背景音频数据,根据各种音频特征之间的优先级顺序,在该帧背景音频数据所包含的音频特征中确定优先级最高的音频特征;
(f2)根据优先级最高的音频特征所对应的控制策略,确定该帧背景音频数据下游戏元素的显示参数。
具体而言,首先,从控制策略中获取各种音频特征之间的优先级高低顺序。该顺序可以为用户设置并保存在控制策略中。接着,针对任意一帧背景音频数据,根据各种音频特征之间的优先级顺序,在该帧背景音频数据所包含的音频特征中确定优先级最高的音频特征。然后,根据该优先级最高的音频特征所对应的控制策略,确定该帧背景音频数据下游戏元素的显示参数,也即,将控制策略中记录的该优先级最高的音频特征对应的游戏元素的显示参数,确定为该帧背景音频数据下游戏元素的显示参数。
举例而言,对于某帧背景音频数据,该帧背景音频数据包括鼓点节拍特征和人声高音特征,根据控制策略中记录的各种音频特征之间的优先级高低顺序,在该帧背景音频数据所包含的音频特征中确定优先级最高的音频特征为鼓点节拍特征。接着,将控制策略中记录的鼓点节拍特征对应的游戏元素的显示参数,确定为该帧背景音频数据下游戏元素的显示参数。若控制策略中记录鼓点节拍特征对应的游戏元素的显示参数为生成若干个大小位置指定的障碍物,则在该帧背景音频数据到来时,生成若干个大小位置指定的障碍物。
在一个具体的实施例中,音频特征包括乐器节拍特征和人声强弱特征;乐器节拍特征的优先级高于人声强弱特征的优先级;动作(f2)中,根据优先级最高的音频特征所对应的控制策略,确定该帧背景音频数据下游戏元素的显示参数,具体为:在该帧背景音频数据不具备乐器节拍特征但具备人声强弱特征时,确定优先级最高的音频特征为该帧背景音频数据中的人声强弱特征,根据人声强弱特征所对应的控制策略和该帧背景音频数据中的人声强弱特征,确定该帧背景音频数据下游戏元素的显示参数。
当控制策略中乐器节拍特征的优先级高于人声强弱特征的优先级,且, 该帧背景音频数据不具备乐器节拍特征但具备人声强弱特征时,确定该帧背景音频数据中优先级最高的音频特征为该帧背景音频数据中的人声强弱特征,将控制策略中人声强弱特征对应的游戏元素的显示参数,确定为该帧背景音频数据下游戏元素的显示参数,从而基于人声强弱特征,为该帧背景音频数据设置游戏元素的显示参数。
根据前面描述可知,控制策略包括音频特征和音频特征对应的游戏元素的显示参数,这里的游戏元素,包括目标游戏中已有的游戏元素和/或根据音频特征新生成的游戏元素。因此在上述步骤S106中,还可以包括根据音频特征生成新的游戏元素的动作。基于此,基于控制策略,不仅可以对已有的游戏元素进行控制,还可以对新生成的游戏元素进行控制,比如,根据控制策略,在背景音频数据的节拍特征到来时,控制目标游戏中已有的障碍物变大,以及,根据控制策略,在人声的高音到来时,又新生成多个障碍物,并设置新生成的障碍物的显示参数。
上述步骤S106中,根据各帧背景音频数据下游戏元素的显示参数,生成目标游戏的游戏数据。一个实施例中,将各帧背景音频数据下游戏元素的显示参数,作为目标游戏的游戏数据的一部分,目标游戏的游戏数据还可以包括前述的背景音频数据。
在另外一个实施例中,用户每次启动游戏时,用户终端都可以执行图1中的方法流程,提取音频特征,并且在任意一种音频特征对应游戏元素的多种显示参数时,用户终端随机为该种音频特征选择一种显示参数,基于选择的显示参数,生成目标游戏的游戏数据,这种随机选择的方式,能够使得用户每次启动目标游戏时,目标游戏都是随机设定的,从而提高游戏的丰富和多样性,提高用户的游戏体验。
以上详细介绍了,根据目标游戏的背景音频数据的音频特征,确定目标游戏中的游戏元素的显示参数,并生成目标游戏的游戏数据的过程。下面结合一个具体的例子,详细描述该过程。
该例子中,目标游戏是一款障碍物躲避游戏。目标游戏中具有障碍物和用户角色,用户角色需要随着背景音频数据的播放移动,并躲避障碍物。通过图1中的方法流程,提取目标游戏的背景音频数据中的至少两种音频特征,包括鼓点的节拍特征和人声的强弱的特征。
图4a为本公开一实施例提供的目标游戏的示意图,如图4a所示,当背景音频数据播放过程中,有鼓点强节拍到来时,则通过图1中的方法流程,控制障碍物的显示参数为长条形,图4b为本公开另一实施例提供的目标游戏的示意图,如图4b所示,根据控制策略,有鼓点弱节拍到来时,则通过前文的方法流程,生成多个障碍物,并控制各个障碍物的显示参数为漂浮的小方块,图4c为本公开又一实施例提供的目标游戏的示意图,如图4c所示,有人声的重音到来时,则通过图1中的方法流程,控制障碍物的显示参数为柱状。
综上,通过以上的介绍,通过本实施例,从目标游戏的背景音频数据中提取至少两种音频特征,该至少两种音频特征的音频属性不同,基于该至少两种音频特征,确定目标游戏中的游戏元素的显示参数,根据游戏元素的显示参数,生成目标游戏的游戏数据,能够使得游戏元素与背景音频数据的音频特征相匹配,提高游戏中的游戏元素与背景音乐的匹配程度,提高用户的游戏体验。
对应于前面介绍的游戏数据生成方法,本公开一实施例还提供了一种交互方法,图5为本公开一实施例提供的交互方法的流程示意图,该方法能够由用户终端如手机、电脑、平板电脑等执行,如图5所示,该方法包括:
步骤S502,响应于用户针对目标游戏对局的启动指令,显示目标游戏的至少两种游戏元素;其中,至少两种游戏元素基于目标游戏的背景音频数据中包含的至少两种音频特征确定;
步骤S504,获取用户发送的游戏操作指令,并根据游戏操作指令,运行目标游戏。
本实施例中,首先,根据用户针对目标游戏对局的启动指令,显示目标游戏的至少两种游戏元素,该至少两种游戏元素通过前述的方法,基于目标游戏的背景音频数据中包含的至少两种音频特征确定,接着,根据用户发送的游戏操作指令运行目标游戏。通过本实施例,用户启动目标游戏后,目标游戏中的游戏元素与目标游戏的背景音频数据之间的匹配程度高,从而提高用户的游戏体验。
上述步骤S502中,用户针对目标游戏的游戏对局发送启动指令,用户终端接收到指令,根据该指令,在屏幕上显示目标游戏中的至少两种游戏元 素。游戏元素可以是显示在游戏画面上的对象,比如用户要操控的对象或者用户要躲避的障碍物。本实施例中,这至少两种游戏元素的显示参数通过前面图1中的方法,根据目标游戏的背景音频数据中包含的至少两种音频特征确定,因此这两种游戏元素的显示参数与目标游戏的背景音频数据之间的匹配程度高,用户的游戏体验随之提高。
上述步骤S504中,用户针对目标游戏的游戏对局发送游戏操作指令,用户终端接收到该指令,根据该指令,运行目标游戏,从而便于用户进行游戏。
在一个实施例中,用户发送的游戏操作指令以下指令中的至少一种:
A、对虚拟对象进行控制的虚拟对象控制指令;该虚拟对象可以为游戏中用户角色对应的虚拟对象,也即游戏中由用户控制的虚拟对象;
B、对游戏元素的显示参数进行控制的游戏元素控制指令。
当用户发送的游戏操作指令包括对虚拟对象进行控制的虚拟对象控制指令时,在一个实施例中,该方法还包括:显示由用户控制的虚拟对象。相应地,获取用户发送的游戏操作指令,并根据游戏操作指令,运行目标游戏,具体为:
(g1)获取用户发送的虚拟对象控制指令,并根据虚拟对象控制指令控制虚拟对象进行移动;
(g2)响应于虚拟对象与游戏元素的位置关系为第一预设位置关系时,调整目标游戏对局的分值;
(g3)响应于虚拟对象与游戏元素的位置关系为第二预设位置关系时,结束目标游戏对局。
在某些游戏中,游戏画面中具有由用户控制的虚拟对象,该虚拟对象也是用户角色对应的对象。用户向用户终端发送针对该虚拟对象的虚拟对象控制指令,用户终端接收该虚拟对象控制指令,并根据该虚拟对象控制指令,控制游戏画面中的虚拟对象移动。当虚拟对象与游戏元素的位置关系为第一预设位置关系时,调整目标游戏对局的分值。比如,当虚拟对象没有接触到游戏元素时,增加用户得分。当虚拟对象与游戏元素的位置关系为第二预设位置关系时,结束目标游戏对局。比如,当虚拟对象接触到游戏元素时,结束目标游戏对局。其中,游戏元素可以为用户需要躲避的障碍物。
可见,通过本实施例,能够在躲避障碍物的游戏中,通过用户的游戏操作指令,控制用户对应的虚拟对象进行移动,并根据游戏进度调整游戏得分或者结束游戏。
当用户发送的游戏操作指令包对游戏元素的显示参数进行控制的游戏元素控制指令时,获取用户发送的游戏操作指令,并根据游戏操作指令,运行目标游戏,具体为:
(h1)获取用户发送的游戏元素控制指令,并根据游戏元素控制指令更改游戏元素的显示参数;
(h2)响应于游戏元素控制指令触发时间与游戏元素显示时间的时间关系为第一预设时间关系时,调整目标游戏对局的分值;
(h3)响应于游戏元素控制指令触发时间与游戏元素显示时间的时间关系为第二预设时间关系时,结束目标游戏对局。
在某些游戏中,用户发送游戏元素控制指令,用户终端接收到游戏元素控制指令,根据该指令,更改游戏元素的显示参数,具体可以更改游戏元素的大小、颜色等显示参数。当用户触发游戏元素控制指令的时间即游戏元素控制指令触发时间与游戏元素显示时间的时间关系为第一预设时间关系时,调整目标游戏对局的分值。比如,当用户通过短按的方式发送游戏元素控制指令,且该游戏元素控制指令用于改变游戏元素的颜色时,若用户发送游戏元素控制指令的时间等于游戏元素的显示时间,也即刚好在显示游戏元素时触发到游戏元素,则增加用户得分。又如,当用户通过长按的方式发送游戏元素控制指令,且该游戏元素控制指令用于改变游戏元素的大小时,若用户长按的时长大于等于游戏元素显示时长,则增加用户得分。
相应地,当用户触发游戏元素控制指令的时间即游戏元素控制指令触发时间与游戏元素显示时间的时间关系为第二预设时间关系时,结束目标游戏对局。比如,当用户通过短按的方式发送游戏元素控制指令,且该游戏元素控制指令用于改变游戏元素的颜色时,若用户发送游戏元素控制指令的时间不等于游戏元素的显示时间,也即没有在显示游戏元素时触发到游戏元素,则结束游戏对局。又如,当用户通过长按的方式发送游戏元素控制指令,且该游戏元素控制指令用于改变游戏元素的大小时,若用户长按的时长小于游戏元素显示时长,则结束游戏对局。
可见,通过本实施例,能够在触发游戏元素的游戏中,通过用户的游戏元素控制指令的触发时间与游戏元素的显示时间之间的时间关系,进行目标游戏。
图6为本公开一实施例提供的游戏数据生成装置的结构示意图,如图6所示,该装置包括:
特征提取单元61,用于获取目标游戏的背景音频数据,按照预设的音频属性维度,将所述背景音频数据分离成至少两种子音频数据,根据每种所述子音频数据对应的音频特征提取种类,提取每种所述子音频数据的至少一种音频特征,得到至少两种音频特征;
策略获取单元62,用于获取基于所述音频特征对所述目标游戏中的游戏元素的显示参数进行控制的控制策略;其中,所述控制策略包括所述音频特征和所述音频特征对应的所述游戏元素的显示参数;所述控制策略中各种所述音频特征之间具有优先级顺序;
数据生成单元63,用于根据所述控制策略和所述音频特征,确定各帧所述背景音频数据下所述游戏元素的显示参数,根据各帧所述背景音频数据下所述游戏元素的显示参数,生成所述目标游戏的游戏数据。
可选地,所述预设的音频属性维度包括音色维度;特征提取单元61具体用于:按照音色维度,将所述背景音频数据分离成至少两种子音频数据;其中,各所述子音频数据的音色彼此不同。
可选地,还包括特征种类确定单元,用于:在根据每种所述子音频数据对应的音频特征提取种类,提取每种所述子音频数据的至少一种音频特征之前,获取所述音频属性维度的各维度值与音频特征提取种类之间的映射关系;基于每种所述子音频数据所属的维度值和所述映射关系,确定每种所述子音频数据对应的音频特征提取种类。
可选地,策略获取单元62具体用于:获取预先存储的基于所述音频特征对所述目标游戏中的游戏元素的显示参数进行控制的控制策略;或者,获取用户自定义的基于所述音频特征对所述目标游戏中的游戏元素的显示参数进行控制的控制策略。
可选地,策略获取单元62还具体用于:获取用户为所述音频特征自定义的所述游戏元素的显示参数;将用户为所述音频特征自定义的所述游戏元素 的显示参数,作为基于所述音频特征对所述目标游戏中的游戏元素的显示参数进行控制的控制策略的一部分;和/或,向用户提供所述游戏元素的多个初始显示参数,获取用户在所述多个初始显示参数中为所述音频特征筛选的所述游戏元素的目标显示参数;将用户为所述音频特征筛选的所述目标显示参数,作为基于所述音频特征对所述目标游戏中的游戏元素的显示参数进行控制的控制策略的一部分。
可选地,策略获取单元62还具体用于:获取用户上传的与所述游戏元素相关的显示素材,并确定用户指定的与所述游戏元素相关的显示方式;根据所述显示素材和所述显示方式,生成用户自定义的所述游戏元素的显示参数,并确定用户为生成的所述显示参数关联的所述音频特征。
可选地,策略获取单元62还具体用于:分别将每种所述音频特征对应的所述游戏元素的初始显示参数作为筛选范围提供给用户,获取用户分别在每个筛选范围中为每种所述音频特征筛选的所述游戏元素的目标显示参数;或者,将各种所述音频特征对应的所述游戏元素的初始显示参数共同作为筛选范围提供给用户,获取用户在所述筛选范围内筛选的所述游戏元素的目标显示参数,并确定用户为所述目标显示参数关联的所述音频特征。
可选地,数据生成单元63具体用于:根据所述控制策略和所述音频特征,确定各帧所述背景音频数据下所述游戏元素具有的至少一种显示参数;其中,每种所述显示参数与各帧所述背景音频数据所具备的音频特征相关;针对任一帧所述背景音频数据,判断该帧所述背景音频数据下所述游戏元素具有的各种显示参数,是否满足所述控制策略中的显示参数约束条件;若满足,则对该帧所述背景音频数据下所述游戏元素具有的各种显示参数进行保留处理,若不满足,则根据各种所述音频特征之间的优先级顺序,对该帧所述背景音频数据下所述游戏元素具有的各种显示参数进行调整,并保留调整后的所述显示参数;其中,调整后的所述显示参数满足所述显示参数约束条件;根据各帧所述背景音频数据下所述游戏元素保留的显示参数,确定各帧所述背景音频数据下所述游戏元素的显示参数。
可选地,数据生成单元63还具体用于:根据各种所述音频特征之间的优先级顺序,确定该帧所述背景音频数据下所述游戏元素具有的各种显示参数之间的优先级顺序;根据所述各种显示参数之间的优先级顺序,在该帧所述 背景音频数据下所述游戏元素具有的各种显示参数中筛选待删除参数并删除。
可选地,数据生成单元63具体用于:针对任一帧所述背景音频数据,根据各种所述音频特征之间的优先级顺序,在该帧背景音频数据所包含的音频特征中确定优先级最高的音频特征;根据所述优先级最高的音频特征所对应的控制策略,确定该帧所述背景音频数据下所述游戏元素的显示参数。
可选地,所述音频特征包括乐器节拍特征和人声强弱特征;所述乐器节拍特征的优先级高于所述人声强弱特征的优先级;数据生成单元63还具体用于:在该帧所述背景音频数据不具备所述乐器节拍特征但具备人声强弱特征时,确定所述优先级最高的音频特征为该帧所述背景音频数据中的人声强弱特征;根据所述人声强弱特征所对应的控制策略和该帧所述背景音频数据中的人声强弱特征,确定该帧所述背景音频数据下所述游戏元素的显示参数。
可选地,还包括优先级获取单元,用于获取用户设置的各种所述音频特征之间的优先级顺序。
本实施例中的游戏数据生成装置能够实现前述的游戏数据生成方法实施例的各个过程,并达到相同的效果和功能,这里不再重复。
图7为本公开一实施例提供的交互装置的结构示意图,如图7所示,该装置包括:
元素显示单元71,用于响应于用户针对目标游戏对局的启动指令,显示所述目标游戏的至少两种游戏元素;其中,所述至少两种游戏元素基于所述目标游戏的背景音频数据中包含的至少两种音频特征确定;
游戏运行单元72,用于获取所述用户发送的游戏操作指令,并根据所述游戏操作指令,运行所述目标游戏。
可选地,所述游戏操作指令包括以下指令中的至少一种:对虚拟对象进行控制的虚拟对象控制指令;对所述游戏元素的显示参数进行控制的游戏元素控制指令。
可选地,还包括对象显示单元,用于显示由所述用户控制的虚拟对象;所述游戏运行单元72具体用于:获取所述用户发送的虚拟对象控制指令,并根据所述虚拟对象控制指令控制所述虚拟对象进行移动;响应于所述虚拟对象与所述游戏元素的位置关系为第一预设位置关系时,调整所述目标游戏对 局的分值;响应于所述虚拟对象与所述游戏元素的位置关系为第二预设位置关系时,结束所述目标游戏对局。
可选地,所述游戏运行单元72具体用于:获取所述用户发送的游戏元素控制指令,并根据所述游戏元素控制指令更改所述游戏元素的显示参数;响应于所述游戏元素控制指令触发时间与所述游戏元素显示时间的时间关系为第一预设时间关系时,调整所述目标游戏对局的分值;响应于所述游戏元素控制指令触发时间与所述游戏元素显示时间的时间关系为第二预设时间关系时,结束所述目标游戏对局。
本实施例中的交互装置能够实现前述的交互方法实施例的各个过程,并达到相同的效果和功能,这里不再重复。
本公开一实施例还提供了一种电子设备,图8为本公开一实施例提供的电子设备的结构示意图,如图8所示,电子设备可因配置或性能不同而产生比较大的差异,可以包括一个或一个以上的处理器801和存储器802,存储器802中可以存储有一个或一个以上存储应用程序或数据。其中,存储器802可以是短暂存储或持久存储。存储在存储器802的应用程序可以包括一个或一个以上模块(图示未示出),每个模块可以包括电子设备中的一系列计算机可执行指令。更进一步地,处理器801可以设置为与存储器802通信,在电子设备上执行存储器802中的一系列计算机可执行指令。电子设备还可以包括一个或一个以上电源803,一个或一个以上有线或无线网络接口804,一个或一个以上输入或输出接口805,一个或一个以上键盘806等。
在一个具体的实施例中,电子设备为游戏数据生成设备,包括有处理器;以及,被配置为存储计算机可执行指令的存储器,所述计算机可执行指令在被执行时使所述处理器实现以下流程:
获取目标游戏的背景音频数据,按照预设的音频属性维度,将所述背景音频数据分离成至少两种子音频数据,根据每种所述子音频数据对应的音频特征提取种类,提取每种所述子音频数据的至少一种音频特征,得到至少两种音频特征;
获取基于所述音频特征对所述目标游戏中的游戏元素的显示参数进行控制的控制策略;其中,所述控制策略包括所述音频特征和所述音频特征对应的所述游戏元素的显示参数;所述控制策略中各种所述音频特征之间具有 优先级顺序;
根据所述控制策略和所述音频特征,确定各帧所述背景音频数据下所述游戏元素的显示参数,根据各帧所述背景音频数据下所述游戏元素的显示参数,生成所述目标游戏的游戏数据。
可选地,所述计算机可执行指令在被执行时,所述预设的音频属性维度包括音色维度;按照预设的音频属性维度,将所述背景音频数据分离成至少两种子音频数据,包括:按照音色维度,将所述背景音频数据分离成至少两种子音频数据;其中,各所述子音频数据的音色彼此不同。
可选地,所述计算机可执行指令在被执行时,在根据每种所述子音频数据对应的音频特征提取种类,提取每种所述子音频数据的至少一种音频特征之前,所述流程还包括:获取所述音频属性维度的各维度值与音频特征提取种类之间的映射关系;基于每种所述子音频数据所属的维度值和所述映射关系,确定每种所述子音频数据对应的音频特征提取种类。
可选地,所述计算机可执行指令在被执行时,获取基于所述音频特征对所述目标游戏中的游戏元素的显示参数进行控制的控制策略,包括:获取预先存储的基于所述音频特征对所述目标游戏中的游戏元素的显示参数进行控制的控制策略;或者,获取用户自定义的基于所述音频特征对所述目标游戏中的游戏元素的显示参数进行控制的控制策略。
可选地,所述计算机可执行指令在被执行时,根据所述控制策略和所述音频特征,确定各帧所述背景音频数据下所述游戏元素的显示参数,包括:根据所述控制策略和所述音频特征,确定各帧所述背景音频数据下所述游戏元素具有的至少一种显示参数;其中,每种所述显示参数与各帧所述背景音频数据所具备的音频特征相关;针对任一帧所述背景音频数据,判断该帧所述背景音频数据下所述游戏元素具有的各种显示参数,是否满足所述控制策略中的显示参数约束条件;若满足,则对该帧所述背景音频数据下所述游戏元素具有的各种显示参数进行保留处理,若不满足,则根据各种所述音频特征之间的优先级顺序,对该帧所述背景音频数据下所述游戏元素具有的各种显示参数进行调整,并保留调整后的所述显示参数;其中,调整后的所述显示参数满足所述显示参数约束条件;根据各帧所述背景音频数据下所述游戏元素保留的显示参数,确定各帧所述背景音频数据下所述游戏元素的显示参 数。
可选地,所述计算机可执行指令在被执行时,根据各种所述音频特征之间的优先级顺序,对该帧所述背景音频数据下所述游戏元素具有的各种显示参数进行调整,包括:根据各种所述音频特征之间的优先级顺序,确定该帧所述背景音频数据下所述游戏元素具有的各种显示参数之间的优先级顺序;根据所述各种显示参数之间的优先级顺序,在该帧所述背景音频数据下所述游戏元素具有的各种显示参数中筛选待删除参数并删除。
可选地,所述计算机可执行指令在被执行时,根据所述控制策略和所述音频特征,确定各帧所述背景音频数据下所述游戏元素的显示参数,包括:针对任一帧所述背景音频数据,根据各种所述音频特征之间的优先级顺序,在该帧背景音频数据所包含的音频特征中确定优先级最高的音频特征;根据所述优先级最高的音频特征所对应的控制策略,确定该帧所述背景音频数据下所述游戏元素的显示参数。
可选地,所述计算机可执行指令在被执行时,所述音频特征包括乐器节拍特征和人声强弱特征;所述乐器节拍特征的优先级高于所述人声强弱特征的优先级;根据所述优先级最高的音频特征所对应的控制策略,确定该帧所述背景音频数据下所述游戏元素的显示参数,包括:在该帧所述背景音频数据不具备所述乐器节拍特征但具备人声强弱特征时,确定所述优先级最高的音频特征为该帧所述背景音频数据中的人声强弱特征;根据所述人声强弱特征所对应的控制策略和该帧所述背景音频数据中的人声强弱特征,确定该帧所述背景音频数据下所述游戏元素的显示参数。
本实施例中的游戏数据生成设备能够实现前述的游戏数据生成方法实施例的各个过程,并达到相同的效果和功能,这里不再重复。
在一个具体的实施例中,电子设备为交互设备,包括有处理器;以及,被配置为存储计算机可执行指令的存储器,所述计算机可执行指令在被执行时使所述处理器实现以下流程:
响应于用户针对目标游戏对局的启动指令,显示所述目标游戏的至少两种游戏元素;其中,所述至少两种游戏元素基于所述目标游戏的背景音频数据中包含的至少两种音频特征确定;
获取所述用户发送的游戏操作指令,并根据所述游戏操作指令,运行所 述目标游戏。
可选地,所述计算机可执行指令在被执行时,所述游戏操作指令包括以下指令中的至少一种:对虚拟对象进行控制的虚拟对象控制指令;对所述游戏元素的显示参数进行控制的游戏元素控制指令。
可选地,所述计算机可执行指令在被执行时,所述流程还包括:显示由所述用户控制的虚拟对象;所述获取所述用户发送的游戏操作指令,并根据所述游戏操作指令,运行所述目标游戏,包括:获取所述用户发送的虚拟对象控制指令,并根据所述虚拟对象控制指令控制所述虚拟对象进行移动;响应于所述虚拟对象与所述游戏元素的位置关系为第一预设位置关系时,调整所述目标游戏对局的分值;响应于所述虚拟对象与所述游戏元素的位置关系为第二预设位置关系时,结束所述目标游戏对局。
可选地,所述计算机可执行指令在被执行时,所述获取所述用户发送的游戏操作指令,并根据所述游戏操作指令,运行所述目标游戏,包括:获取所述用户发送的游戏元素控制指令,并根据所述游戏元素控制指令更改所述游戏元素的显示参数;响应于所述游戏元素控制指令触发时间与所述游戏元素显示时间的时间关系为第一预设时间关系时,调整所述目标游戏对局的分值;响应于所述游戏元素控制指令触发时间与所述游戏元素显示时间的时间关系为第二预设时间关系时,结束所述目标游戏对局。
本实施例中的交互设备能够实现前述的交互方法实施例的各个过程,并达到相同的效果和功能,这里不再重复。
本公开一实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质用于存储计算机可执行指令,所述计算机可执行指令在被处理器执行时实现以下流程:
获取目标游戏的背景音频数据,按照预设的音频属性维度,将所述背景音频数据分离成至少两种子音频数据,根据每种所述子音频数据对应的音频特征提取种类,提取每种所述子音频数据的至少一种音频特征,得到至少两种音频特征;
获取基于所述音频特征对所述目标游戏中的游戏元素的显示参数进行控制的控制策略;其中,所述控制策略包括所述音频特征和所述音频特征对应的所述游戏元素的显示参数;所述控制策略中各种所述音频特征之间具有 优先级顺序;
根据所述控制策略和所述音频特征,确定各帧所述背景音频数据下所述游戏元素的显示参数,根据各帧所述背景音频数据下所述游戏元素的显示参数,生成所述目标游戏的游戏数据。
可选地,所述计算机可执行指令在被处理器执行时,所述预设的音频属性维度包括音色维度;按照预设的音频属性维度,将所述背景音频数据分离成至少两种子音频数据,包括:按照音色维度,将所述背景音频数据分离成至少两种子音频数据;其中,各所述子音频数据的音色彼此不同。
可选地,所述计算机可执行指令在被处理器执行时,在根据每种所述子音频数据对应的音频特征提取种类,提取每种所述子音频数据的至少一种音频特征之前,所述流程还包括:获取所述音频属性维度的各维度值与音频特征提取种类之间的映射关系;基于每种所述子音频数据所属的维度值和所述映射关系,确定每种所述子音频数据对应的音频特征提取种类。
可选地,所述计算机可执行指令在被处理器执行时,获取基于所述音频特征对所述目标游戏中的游戏元素的显示参数进行控制的控制策略,包括:获取预先存储的基于所述音频特征对所述目标游戏中的游戏元素的显示参数进行控制的控制策略;或者,获取用户自定义的基于所述音频特征对所述目标游戏中的游戏元素的显示参数进行控制的控制策略。
可选地,所述计算机可执行指令在被处理器执行时,根据所述控制策略和所述音频特征,确定各帧所述背景音频数据下所述游戏元素的显示参数,包括:根据所述控制策略和所述音频特征,确定各帧所述背景音频数据下所述游戏元素具有的至少一种显示参数;其中,每种所述显示参数与各帧所述背景音频数据所具备的音频特征相关;针对任一帧所述背景音频数据,判断该帧所述背景音频数据下所述游戏元素具有的各种显示参数,是否满足所述控制策略中的显示参数约束条件;若满足,则对该帧所述背景音频数据下所述游戏元素具有的各种显示参数进行保留处理,若不满足,则根据各种所述音频特征之间的优先级顺序,对该帧所述背景音频数据下所述游戏元素具有的各种显示参数进行调整,并保留调整后的所述显示参数;其中,调整后的所述显示参数满足所述显示参数约束条件;根据各帧所述背景音频数据下所述游戏元素保留的显示参数,确定各帧所述背景音频数据下所述游戏元素的 显示参数。
可选地,所述计算机可执行指令在被处理器执行时,根据各种所述音频特征之间的优先级顺序,对该帧所述背景音频数据下所述游戏元素具有的各种显示参数进行调整,包括:根据各种所述音频特征之间的优先级顺序,确定该帧所述背景音频数据下所述游戏元素具有的各种显示参数之间的优先级顺序;根据所述各种显示参数之间的优先级顺序,在该帧所述背景音频数据下所述游戏元素具有的各种显示参数中筛选待删除参数并删除。
可选地,所述计算机可执行指令在被处理器执行时,根据所述控制策略和所述音频特征,确定各帧所述背景音频数据下所述游戏元素的显示参数,包括:针对任一帧所述背景音频数据,根据各种所述音频特征之间的优先级顺序,在该帧背景音频数据所包含的音频特征中确定优先级最高的音频特征;根据所述优先级最高的音频特征所对应的控制策略,确定该帧所述背景音频数据下所述游戏元素的显示参数。
可选地,所述计算机可执行指令在被处理器执行时,所述音频特征包括乐器节拍特征和人声强弱特征;所述乐器节拍特征的优先级高于所述人声强弱特征的优先级;根据所述优先级最高的音频特征所对应的控制策略,确定该帧所述背景音频数据下所述游戏元素的显示参数,包括:在该帧所述背景音频数据不具备所述乐器节拍特征但具备人声强弱特征时,确定所述优先级最高的音频特征为该帧所述背景音频数据中的人声强弱特征;根据所述人声强弱特征所对应的控制策略和该帧所述背景音频数据中的人声强弱特征,确定该帧所述背景音频数据下所述游戏元素的显示参数。
本实施例中的存储介质能够实现前述的游戏数据生成方法实施例的各个过程,并达到相同的效果和功能,这里不再重复。
本公开一实施例还提供了一种计算机可读存储介质,所述计算机可读存储介质用于存储计算机可执行指令,所述计算机可执行指令在被处理器执行时实现以下流程:
响应于用户针对目标游戏对局的启动指令,显示所述目标游戏的至少两种游戏元素;其中,所述至少两种游戏元素基于所述目标游戏的背景音频数据中包含的至少两种音频特征确定;
获取所述用户发送的游戏操作指令,并根据所述游戏操作指令,运行所 述目标游戏。
可选地,所述计算机可执行指令在被处理器执行时,所述游戏操作指令包括以下指令中的至少一种:对虚拟对象进行控制的虚拟对象控制指令;对所述游戏元素的显示参数进行控制的游戏元素控制指令。
可选地,所述计算机可执行指令在被处理器执行时,所述流程还包括:显示由所述用户控制的虚拟对象;所述获取所述用户发送的游戏操作指令,并根据所述游戏操作指令,运行所述目标游戏,包括:获取所述用户发送的虚拟对象控制指令,并根据所述虚拟对象控制指令控制所述虚拟对象进行移动;响应于所述虚拟对象与所述游戏元素的位置关系为第一预设位置关系时,调整所述目标游戏对局的分值;响应于所述虚拟对象与所述游戏元素的位置关系为第二预设位置关系时,结束所述目标游戏对局。
可选地,所述计算机可执行指令在被处理器执行时,所述获取所述用户发送的游戏操作指令,并根据所述游戏操作指令,运行所述目标游戏,包括:获取所述用户发送的游戏元素控制指令,并根据所述游戏元素控制指令更改所述游戏元素的显示参数;响应于所述游戏元素控制指令触发时间与所述游戏元素显示时间的时间关系为第一预设时间关系时,调整所述目标游戏对局的分值;响应于所述游戏元素控制指令触发时间与所述游戏元素显示时间的时间关系为第二预设时间关系时,结束所述目标游戏对局。
本实施例中的存储介质能够实现前述的交互方法实施例的各个过程,并达到相同的效果和功能,这里不再重复。
其中,所述的计算机可读存储介质包括只读存储器(Read-Only Memory,简称ROM)、随机存取存储器(Random Access Memory,简称RAM)、磁碟或者光盘等。
需要说明的是,本公开中关于存储介质的实施例与本公开中关于网络访问处理方法的实施例基于同一发明构思,因此该实施例的具体实施可以参见前述对应方法的实施,重复之处不再赘述。
在20世纪90年代,对于一个技术的改进可以很明显地区分是硬件上的改进(例如,对二极管、晶体管、开关等电路结构的改进)还是软件上的改进(对于方法流程的改进)。然而,随着技术的发展,当今的很多方法流程的改进已经可以视为硬件电路结构的直接改进。设计人员几乎都通过将改进的 方法流程编程到硬件电路中来得到相应的硬件电路结构。因此,不能说一个方法流程的改进就不能用硬件实体模块来实现。例如,可编程逻辑器件(Programmable Logic Device,PLD)(例如现场可编程门阵列(Field Programmable Gate Array,FPGA))就是这样一种集成电路,其逻辑功能由用户对器件编程来确定。由设计人员自行编程来把一个数字系统“集成”在一片PLD上,而不需要请芯片制造厂商来设计和制作专用的集成电路芯片。而且,如今,取代手工地制作集成电路芯片,这种编程也多半改用“逻辑编译器(logic compiler)”软件来实现,它与程序开发撰写时所用的软件编译器相类似,而要编译之前的原始代码也得用特定的编程语言来撰写,此称之为硬件描述语言(Hardware Description Language,HDL),而HDL也并非仅有一种,而是有许多种,如ABEL(Advanced Boolean Expression Language)、AHDL(Altera Hardware Description Language)、Confluence、CUPL(Cornell University Programming Language)、HDCal、JHDL(Java Hardware Description Language)、Lava、Lola、MyHDL、PALASM、RHDL(Ruby Hardware Description Language)等,目前最普遍使用的是VHDL(Very-High-Speed Integrated Circuit Hardware Description Language)与Verilog。本领域技术人员也应该清楚,只需要将方法流程用上述几种硬件描述语言稍作逻辑编程并编程到集成电路中,就可以很容易得到实现该逻辑方法流程的硬件电路。
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (for example, software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicone Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing a controller purely in computer-readable program code, it is entirely possible to logically program the method steps so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the devices included in it for implementing various functions may also be regarded as structures within the hardware component. Alternatively, the devices for implementing various functions may even be regarded both as software modules implementing the method and as structures within the hardware component.
The systems, apparatuses, modules, or units set forth in the above embodiments may specifically be implemented by a computer chip or an entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an e-mail device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above apparatuses are described with their functions divided into various units. Of course, when the embodiments of the present disclosure are implemented, the functions of the units may be implemented in one or more pieces of software and/or hardware.
Those skilled in the art will appreciate that one or more embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present disclosure is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present disclosure. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
It should also be noted that the terms "comprise", "include", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
One or more embodiments of the present disclosure may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. One or more embodiments of the present disclosure may also be practiced in distributed computing environments in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in local and remote computer storage media, including storage devices.
The embodiments of the present disclosure are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, they are described relatively simply, and for relevant details reference may be made to the description of the method embodiments.
The above are merely embodiments of the present disclosure and are not intended to limit the present disclosure. Various modifications and variations of the present disclosure will occur to those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present disclosure shall fall within the scope of the claims of the present disclosure.

Claims (20)

  1. A game data generation method, comprising:
    acquiring background audio data of a target game, separating the background audio data into at least two kinds of sub-audio data according to a preset audio attribute dimension, and extracting, according to an audio feature extraction type corresponding to each kind of sub-audio data, at least one audio feature of each kind of sub-audio data to obtain at least two kinds of audio features;
    acquiring a control strategy for controlling display parameters of game elements in the target game based on the audio features, wherein the control strategy includes the audio features and the display parameters of the game elements corresponding to the audio features, and the various audio features in the control strategy have a priority order among them; and
    determining, according to the control strategy and the audio features, the display parameters of the game elements under each frame of the background audio data, and generating game data of the target game according to the display parameters of the game elements under each frame of the background audio data.
  2. The method according to claim 1, wherein the preset audio attribute dimension comprises a timbre dimension, and separating the background audio data into at least two kinds of sub-audio data according to the preset audio attribute dimension comprises:
    separating the background audio data into at least two kinds of sub-audio data according to the timbre dimension, wherein the timbres of the sub-audio data differ from one another.
  3. The method according to claim 1 or 2, wherein, before extracting at least one audio feature of each kind of sub-audio data according to the audio feature extraction type corresponding to each kind of sub-audio data, the method further comprises:
    acquiring a mapping relationship between each dimension value of the audio attribute dimension and an audio feature extraction type; and
    determining, based on the dimension value to which each kind of sub-audio data belongs and the mapping relationship, the audio feature extraction type corresponding to each kind of sub-audio data.
  4. The method according to any one of claims 1-3, wherein acquiring the control strategy for controlling the display parameters of the game elements in the target game based on the audio features comprises:
    acquiring a pre-stored control strategy for controlling the display parameters of the game elements in the target game based on the audio features;
    or,
    acquiring a user-defined control strategy for controlling the display parameters of the game elements in the target game based on the audio features.
  5. The method according to claim 4, wherein acquiring the user-defined control strategy for controlling the display parameters of the game elements in the target game based on the audio features comprises:
    acquiring display parameters of the game elements customized by a user for the audio features, and taking the display parameters of the game elements customized by the user for the audio features as part of the control strategy for controlling the display parameters of the game elements in the target game based on the audio features;
    and/or,
    providing a plurality of initial display parameters of the game elements to the user, acquiring target display parameters of the game elements selected by the user from the plurality of initial display parameters for the audio features, and taking the target display parameters selected by the user for the audio features as part of the control strategy for controlling the display parameters of the game elements in the target game based on the audio features.
  6. The method according to claim 5, wherein acquiring the display parameters of the game elements customized by the user for the audio features comprises:
    acquiring display material related to the game elements uploaded by the user, and determining a display mode related to the game elements specified by the user; and
    generating user-defined display parameters of the game elements according to the display material and the display mode, and determining the audio features associated by the user with the generated display parameters.
  7. The method according to claim 5 or 6, wherein providing the plurality of initial display parameters of the game elements to the user and acquiring the target display parameters of the game elements selected by the user from the plurality of initial display parameters for the audio features comprises:
    providing the initial display parameters of the game elements corresponding to each kind of audio feature to the user as a respective selection range, and acquiring the target display parameters of the game elements selected by the user for each kind of audio feature from each selection range;
    or,
    providing the initial display parameters of the game elements corresponding to the various audio features to the user together as one selection range, acquiring the target display parameters of the game elements selected by the user within the selection range, and determining the audio features associated by the user with the target display parameters.
  8. The method according to any one of claims 1-7, wherein determining, according to the control strategy and the audio features, the display parameters of the game elements under each frame of the background audio data comprises:
    determining, according to the control strategy and the audio features, at least one display parameter that the game elements have under each frame of the background audio data, wherein each display parameter is related to the audio features possessed by each frame of the background audio data;
    for any frame of the background audio data, judging whether the various display parameters that the game elements have under that frame of the background audio data satisfy a display parameter constraint condition in the control strategy;
    if the condition is satisfied, retaining the various display parameters that the game elements have under that frame of the background audio data; if the condition is not satisfied, adjusting the various display parameters that the game elements have under that frame of the background audio data according to the priority order among the various audio features, and retaining the adjusted display parameters, wherein the adjusted display parameters satisfy the display parameter constraint condition; and
    determining the display parameters of the game elements under each frame of the background audio data according to the display parameters retained for the game elements under each frame of the background audio data.
  9. The method according to claim 8, wherein adjusting the various display parameters that the game elements have under that frame of the background audio data according to the priority order among the various audio features comprises:
    determining, according to the priority order among the various audio features, a priority order among the various display parameters that the game elements have under that frame of the background audio data; and
    screening out, according to the priority order among the various display parameters, parameters to be deleted from the various display parameters that the game elements have under that frame of the background audio data, and deleting them.
  10. The method according to any one of claims 1-7, wherein determining, according to the control strategy and the audio features, the display parameters of the game elements under each frame of the background audio data comprises:
    for any frame of the background audio data, determining, according to the priority order among the various audio features, the audio feature with the highest priority among the audio features contained in that frame of the background audio data; and
    determining the display parameters of the game elements under that frame of the background audio data according to the control strategy corresponding to the audio feature with the highest priority.
  11. The method according to claim 10, wherein the audio features comprise an instrument beat feature and a vocal intensity feature, the priority of the instrument beat feature is higher than the priority of the vocal intensity feature, and determining the display parameters of the game elements under that frame of the background audio data according to the control strategy corresponding to the audio feature with the highest priority comprises:
    when that frame of the background audio data does not possess the instrument beat feature but possesses the vocal intensity feature, determining that the audio feature with the highest priority is the vocal intensity feature in that frame of the background audio data; and
    determining the display parameters of the game elements under that frame of the background audio data according to the control strategy corresponding to the vocal intensity feature and the vocal intensity feature in that frame of the background audio data.
  12. The method according to any one of claims 1-11, further comprising:
    acquiring a priority order among the various audio features set by a user.
  13. An interaction method, comprising:
    in response to a start instruction of a user for a target game session, displaying at least two kinds of game elements of the target game, wherein the at least two kinds of game elements are determined based on at least two kinds of audio features contained in background audio data of the target game; and
    acquiring a game operation instruction sent by the user, and running the target game according to the game operation instruction.
  14. The method according to claim 13, wherein the game operation instruction comprises at least one of the following instructions:
    a virtual object control instruction for controlling a virtual object; and
    a game element control instruction for controlling the display parameters of the game elements.
  15. The method according to claim 14, wherein the method further comprises:
    displaying a virtual object controlled by the user;
    wherein acquiring the game operation instruction sent by the user and running the target game according to the game operation instruction comprises:
    acquiring a virtual object control instruction sent by the user, and controlling the virtual object to move according to the virtual object control instruction;
    in response to the positional relationship between the virtual object and the game element being a first preset positional relationship, adjusting the score of the target game session; and
    in response to the positional relationship between the virtual object and the game element being a second preset positional relationship, ending the target game session.
  16. The method according to claim 14, wherein
    acquiring the game operation instruction sent by the user and running the target game according to the game operation instruction comprises:
    acquiring a game element control instruction sent by the user, and changing the display parameters of the game element according to the game element control instruction;
    in response to the temporal relationship between the trigger time of the game element control instruction and the display time of the game element being a first preset temporal relationship, adjusting the score of the target game session; and
    in response to the temporal relationship between the trigger time of the game element control instruction and the display time of the game element being a second preset temporal relationship, ending the target game session.
  17. A game data generation apparatus, comprising:
    a feature extraction unit configured to acquire background audio data of a target game, separate the background audio data into at least two kinds of sub-audio data according to a preset audio attribute dimension, and extract, according to an audio feature extraction type corresponding to each kind of sub-audio data, at least one audio feature of each kind of sub-audio data to obtain at least two kinds of audio features;
    a strategy acquisition unit configured to acquire a control strategy for controlling display parameters of game elements in the target game based on the audio features, wherein the control strategy includes the audio features and the display parameters of the game elements corresponding to the audio features, and the various audio features in the control strategy have a priority order among them; and
    a data generation unit configured to determine, according to the control strategy and the audio features, the display parameters of the game elements under each frame of the background audio data, and generate game data of the target game according to the display parameters of the game elements under each frame of the background audio data.
  18. An interaction apparatus, comprising:
    an element display unit configured to display, in response to a start instruction of a user for a target game session, at least two kinds of game elements of the target game, wherein the at least two kinds of game elements are determined based on at least two kinds of audio features contained in background audio data of the target game; and
    a game running unit configured to acquire a game operation instruction sent by the user, and run the target game according to the game operation instruction.
  19. An electronic device, comprising:
    a processor; and
    a memory configured to store computer-executable instructions which, when executed, cause the processor to implement the method according to any one of claims 1-12 or any one of claims 13-16.
  20. A computer-readable storage medium for storing computer-executable instructions, wherein the computer-executable instructions, when executed by a processor, implement the method according to any one of claims 1-12 or any one of claims 13-16.
PCT/CN2023/082439 2022-03-23 2023-03-20 游戏数据生成方法及装置、交互方法及装置 WO2023179524A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210289521.8 2022-03-23
CN202210289521.8A CN114452647A (zh) 2022-03-23 2022-03-23 游戏数据生成方法及装置、交互方法及装置

Publications (1)

Publication Number Publication Date
WO2023179524A1 true WO2023179524A1 (zh) 2023-09-28

Family

ID=81416702

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/082439 WO2023179524A1 (zh) 2022-03-23 2023-03-20 游戏数据生成方法及装置、交互方法及装置

Country Status (2)

Country Link
CN (1) CN114452647A (zh)
WO (1) WO2023179524A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114452647A (zh) * 2022-03-23 2022-05-10 北京字节跳动网络技术有限公司 游戏数据生成方法及装置、交互方法及装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004057517A (ja) * 2002-07-29 2004-02-26 Open Interface Inc ゲーム制御方法及びゲーム制御装置並びにゲーム制御プログラム
US7105736B2 (en) * 2003-09-09 2006-09-12 Igt Gaming device having a system for dynamically aligning background music with play session events
CN101251875A (zh) * 2008-04-01 2008-08-27 广州成沣信息科技有限公司 一种视音频交融互动游戏方法和系统
CN109754819B (zh) * 2018-12-29 2021-08-10 努比亚技术有限公司 一种数据处理方法、装置及存储介质
CN113856199A (zh) * 2021-09-29 2021-12-31 歌尔股份有限公司 游戏数据处理方法、装置及游戏控制系统
CN114143700B (zh) * 2021-12-01 2023-01-10 腾讯科技(深圳)有限公司 一种音频处理方法、装置、设备、介质及程序产品

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113457135A (zh) * 2021-06-29 2021-10-01 网易(杭州)网络有限公司 游戏中的显示控制方法、装置及电子设备
CN113975800A (zh) * 2021-11-05 2022-01-28 北京字跳网络技术有限公司 一种交互控制方法、装置、计算机设备及存储介质
CN114452647A (zh) * 2022-03-23 2022-05-10 北京字节跳动网络技术有限公司 游戏数据生成方法及装置、交互方法及装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117579979A (zh) * 2024-01-15 2024-02-20 深圳瑞利声学技术股份有限公司 游戏全景声的生成方法、装置、设备及存储介质
CN117579979B (zh) * 2024-01-15 2024-04-19 深圳瑞利声学技术股份有限公司 游戏全景声的生成方法、装置、设备及存储介质

Also Published As

Publication number Publication date
CN114452647A (zh) 2022-05-10

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23773772

Country of ref document: EP

Kind code of ref document: A1