CN112988027B - Object control method and device - Google Patents

Object control method and device

Info

Publication number
CN112988027B
CN112988027B (application CN202110277940.5A)
Authority
CN
China
Prior art keywords
special effect
audio
page
preset
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110277940.5A
Other languages
Chinese (zh)
Other versions
CN112988027A (en)
Inventor
赵双琳 (Zhao Shuanglin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202110277940.5A priority Critical patent/CN112988027B/en
Publication of CN112988027A publication Critical patent/CN112988027A/en
Priority to PCT/CN2022/080685 priority patent/WO2022194097A1/en
Application granted granted Critical
Publication of CN112988027B publication Critical patent/CN112988027B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G06F3/0484 Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485 Scrolling or panning
    • G06F3/0487 Interaction techniques using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements characterised by their sensors, purposes or types
    • A63F13/214 Input arrangements for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F13/2145 Input arrangements where the surface is also a display device, e.g. touch screens
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 Details of the user interface
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The embodiments of the disclosure provide an object control method and device. The method includes: displaying a first page on which a target object image of a user is displayed in real time; dynamically displaying a first special effect on the first page, where the dynamic display attribute of the first special effect is determined according to the rhythm information of the audio played on the first page; controlling a second special effect displayed on the first page to move in real time in response to movement of a first target object in the target object image; and displaying a third special effect according to the relationship between the real-time position of the second special effect and the dynamic position corresponding to the first special effect. The method and device realize contactless user interaction driven by the musical rhythm: the second special effect can be controlled without sliding a finger on the screen, which improves control convenience and makes the interaction more engaging.

Description

Object control method and device
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to an object control method and device.
Background
With the continuous development of terminal technology, movement-controlled special-effect games have developed rapidly to meet users' entertainment needs and exercise their attention.
Currently, when a user plays a movement-controlled special-effect game on a mobile terminal, the game plays corresponding audio and randomly presents a number of first objects (e.g., balls). The user slides a finger on the screen to control a second object (e.g., an animal) to move, i.e., to interact with the game, so that the second object can touch the first objects.
However, because the user must slide a finger on the screen to control the second object, this control mode is inconvenient. In particular, when both of the user's hands are occupied or the screen is too large, the user cannot move the second object with a finger and therefore cannot play the movement-controlled special-effect game. Moreover, because the audio played by the game is merely background music unrelated to the game's other objects, the user cannot interact with the game according to the audio.
Disclosure of Invention
Embodiments of the present disclosure provide an object control method and device to solve the prior-art technical problems that the control mode is inconvenient and that game interaction cannot follow the audio.
In a first aspect, an embodiment of the present disclosure provides an object control method, including:
Displaying a first page, wherein a target object image of a user is displayed on the first page in real time;
dynamically displaying a first special effect on the first page, wherein the dynamic display attribute of the first special effect is determined according to the rhythm information of the audio played by the first page;
responding to the movement of a first target object in the target object image of the first page, and controlling a second special effect displayed on the first page to move in real time;
and displaying a third special effect according to the relation between the real-time position of the second special effect and the dynamic position corresponding to the first special effect.
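The four steps of the first aspect can be sketched as a per-frame update loop. The following Python sketch is illustrative only: the names (`Effect`, `update_frame`), the normalised coordinates, and the simple box-overlap test are assumptions, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class Effect:
    x: float  # horizontal position on the page, normalised to 0.0..1.0
    y: float  # vertical position on the page, normalised to 0.0..1.0

def update_frame(first_effects, second_effect, nose_x, fall_speed, hit_radius):
    """One illustrative frame of the interaction loop.

    first_effects: list of falling Effect objects (spawned from rhythm info)
    second_effect: the user-controlled Effect
    nose_x:        horizontal position of the tracked first target object
    Returns the effects hit this frame (where a third effect would be shown).
    """
    # S202: move each first effect downward per its dynamic display attribute.
    for fx in first_effects:
        fx.y += fall_speed
    # S203: the second effect follows the first target object contactlessly.
    second_effect.x = nose_x
    # S204: compare real-time positions to decide whether to show a third effect.
    return [fx for fx in first_effects
            if abs(fx.x - second_effect.x) < hit_radius
            and abs(fx.y - second_effect.y) < hit_radius]
```

In a real special-effect page the loop would run once per rendered frame, with `nose_x` supplied by a keypoint detector.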
In a second aspect, an embodiment of the present disclosure provides an object control apparatus, including:
the display module is used for displaying a first page, wherein a target object image of a user is displayed on the first page in real time;
the display module is further configured to dynamically display a first special effect on the first page, where a dynamic display attribute of the first special effect is determined according to rhythm information of audio played on the first page;
the processing module is used for responding to the movement of a first target object in the target object image of the first page and controlling a second special effect displayed on the first page to move in real time;
The processing module is further configured to display a third special effect according to a relationship between the real-time position of the second special effect and the dynamic position corresponding to the first special effect.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor and a memory.
The memory stores computer-executable instructions.
The at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the object control method as described above in the first aspect and the various possible designs of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the object control method according to the first aspect and the various possible designs of the first aspect.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program which, when executed by a processor, implements the object control method according to the first aspect and the various possible designs of the first aspect.
Embodiments of the present disclosure provide an object control method and device. In the method, a first special effect is dynamically displayed on a first page using a dynamic display attribute determined from the rhythm information of the audio played on that page; that is, the first special effect is controlled to move according to the dynamic display attribute. When movement of the first target object in the target object image on the first page is detected, it indicates that the user has moved the first target object, i.e., input a control operation, and the second special effect is controlled to move correspondingly, realizing movement control of the second special effect. Because the second special effect can be moved without sliding a finger on the screen, control convenience is improved and the user can successfully play the movement-controlled special-effect game. While the second special effect is being moved, a third special effect is displayed according to the positional relationship between the second special effect and the first special effect, informing the user of the game effect produced by the input control operation. Moreover, because the dynamic display attribute of the first special effect is determined from the real rhythm information of the audio, the audio played on the first page is not merely background music but is associated with the first special effect. The user can input control operations according to the rhythm of the audio so that the second special effect touches the first special effect, realizing contactless interaction driven by the musical rhythm. This makes the interaction more interesting, strengthens the user's sense of rhythm, improves the sense of realism during use, and increases user satisfaction.
Drawings
To illustrate the embodiments of the present disclosure or prior-art solutions more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are only some embodiments of the present disclosure; a person of ordinary skill in the art could obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a finger control effect provided by an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of an object control method according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram I of object control provided by an embodiment of the present disclosure;
FIG. 4 is a second schematic diagram of object control provided by an embodiment of the present disclosure;
fig. 5 is a second flowchart of an object control method according to an embodiment of the disclosure;
FIG. 6 is a schematic diagram of an enlarged target object provided by an embodiment of the present disclosure;
fig. 7 is a block diagram of an object control apparatus provided in an embodiment of the present disclosure;
fig. 8 is a schematic hardware structure of an electronic device according to an embodiment of the disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions are described fully below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure, fall within the scope of this disclosure.
Currently, a special-effect game plays corresponding audio and randomly presents a number of first objects (e.g., the balls shown in fig. 1). The user slides a finger on the screen to control a second object (the cat shown in fig. 1) to move, i.e., to interact with the game, so that the second object can touch a first object; for example, when the finger slides from left to right, the cat also moves from left to right. However, because the user must slide a finger on the screen to control the second object, this control mode is inconvenient. In particular, when it is inconvenient for the user to hold the mobile terminal, or the screen is too large, the user cannot move the second object with a finger and therefore cannot play the movement-controlled special-effect game. Moreover, because the audio played by the game is merely background music unrelated to the game's other objects, the user cannot interact with the game according to the audio.
To solve the above problems, the technical concept of the present disclosure is as follows. When the user plays the game, the electronic device photographs the user to display the user's target object image on the first page. The user can indicate a start operation through actions such as blinking or nodding, i.e., trigger the dynamic display of the first special effect. After the start, the user can move the head left and right; for example, when the nose moves left and right, the second special effect is controlled to move left and right in response, so as to accurately strike the first special effect dynamically displayed on the first page, improving control convenience. Meanwhile, the dynamic display attribute of the first special effect is determined from the real rhythm information of the audio, i.e., the audio played on the first page is associated with the first special effect, and the user can input control operations according to the rhythm of the audio so that the second special effect touches the first special effect. The user thus interacts contactlessly according to the musical rhythm, which strengthens the user's sense of rhythm, improves the sense of realism during use, and increases user satisfaction.
The technical scheme of the present disclosure is described in detail below with specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
Referring to fig. 2, fig. 2 is a flowchart of an object control method according to an embodiment of the disclosure. The method of this embodiment may be applied to an electronic device, for example a mobile terminal such as a smartphone, palmtop, or tablet, or a computer device (e.g., a desktop computer, notebook computer, or all-in-one machine). As shown in fig. 2, the object control method includes:
s201, displaying a first page, wherein a target object image of a user is displayed on the first page in real time.
In the embodiment of the disclosure, when the user wants to play a movement-controlled special-effect game, the user can open a related application on the electronic device. The application displays a first page, which shows in real time the target object image of the user captured by the electronic device; that is, after the first page is displayed, the camera corresponding to the electronic device captures images of the user in real time, and the images are displayed on the first page in real time. The first page may also display the special effects associated with the game (e.g., the first, second, and third special effects).
The target object image may be an image of the user's face, another body part, a pet, or the like, captured by the camera in real time.
The camera corresponding to the electronic device may be a camera integrated in the device (e.g., a front camera) or an external camera connected to the device; the disclosure is not limited in this respect.
S202, dynamically displaying the first special effect on the first page, wherein the dynamic display attribute of the first special effect is determined according to the rhythm information of the audio played by the first page.
In the embodiment of the disclosure, after the game starts, the corresponding audio (e.g., a song) is played on the first page, and the first special effects corresponding to the audio are dynamically displayed on the first page according to the dynamic display attribute determined from the rhythm information of the audio; that is, each first special effect displayed on the first page moves continuously in the display style given by the dynamic display attribute.
Further, optionally, when the first special effect is dynamically displayed, the first special effect may be dynamically displayed in a preset display area on the first page according to a first preset moving direction, where the first preset moving direction includes at least one of: from top to bottom, from bottom to top, from left to right, and from right to left.
For example, when the first preset moving direction is from top to bottom, new first special effects continuously appear in the preset display area of the first page while the audio plays, and the first special effects in the area move from top to bottom. As shown in fig. 3 (a), there is only one first special effect in the preset display area; after a certain period, a new first special effect moves from top to bottom into the area, so that two first special effects exist in the preset display area, as shown in fig. 3 (b).
Optionally, the preset display area may be set according to practical situations, for example, it is a central area of the first page.
Optionally, the dynamic display attribute of the first special effects includes the distance between first special effects and/or the moving speed of the first special effects, each determined according to the rhythm information.
The distance between first special effects is the distance between two first special effects, e.g., the distance between the two first special effects shown in fig. 3 (b).
Further, optionally, the cadence information includes one or more of: rhythm type, beat information and drum spot information.
The rhythm types include a fast rhythm type and a slow rhythm type. A fast rhythm type indicates that the rhythm of the audio is faster and there are more drum points; a slow rhythm type indicates that the rhythm is slower and there are fewer drum points.
The drum point information includes the number of drum points and/or the drum point positions. Specifically, the number of drum points indicates how many drum points the audio contains. A drum point position indicates the time at which a drum point occurs in the audio; for example, if the drum point positions include 3 seconds and 10 seconds, there is one drum point at the 3rd second of the audio and another at the 10th second.
The beat information indicates the beat types included in the audio, for example a four-two (2/4) beat type.
Alternatively, the drum points are in one-to-one correspondence with the first special effects, and the distance between the first special effects is related to the drum point positions, i.e. the distance between two adjacent drum point positions. For example, if the position of the drum point 1 is 3 seconds, the position of the drum point 2 is 5 seconds, the drum point 1 is adjacent to the drum point 2, and the distance between the drum point 1 and the drum point 2 is 2 seconds, the distance corresponding to 2 seconds is found, and the distance between the first special effect corresponding to the drum point 1 and the second special effect corresponding to the drum point 2 is determined as the distance corresponding to 2 seconds.
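The mapping from drum-point intervals to on-screen spacing can be sketched as follows. The constant falling speed and the helper name `effect_spacing` are illustrative assumptions; the patent only requires that spacing be derived from the interval between adjacent drum points.

```python
def effect_spacing(drum_positions_s, pixels_per_second):
    """Map the time gap between adjacent drum points to on-screen spacing.

    drum_positions_s: sorted drum-point times in seconds (e.g. [3.0, 5.0, 5.5])
    Returns the distance (in pixels) between each pair of adjacent first
    effects, assuming they all fall at a constant `pixels_per_second`.
    """
    return [(b - a) * pixels_per_second
            for a, b in zip(drum_positions_s, drum_positions_s[1:])]
```

With a constant falling speed, proportional spacing guarantees that consecutive effects arrive at the designated area exactly one drum interval apart.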
Additionally, a drum point may represent a sound-intensity feature of the audio. A drum point may also be referred to as an accented beat, and it is located at an audio frame with large amplitude; the drum point positions of the audio can therefore be determined from the audio waveform, specifically as follows:
Acquire the waveform corresponding to the audio to be played and extract target waveform segments from it, where each target waveform segment contains a peak of the waveform. Match each target waveform segment against a preset drum-point waveform segment to determine the drum point positions of the audio to be played.
Specifically, target waveform segments containing peaks are extracted from the waveform of the audio to be played and matched against the preset drum-point waveform segment: for each target waveform segment, the similarity between it and the preset drum-point waveform segment is calculated, and when the similarity is greater than a preset similarity, the time corresponding to the peak in that segment is a drum point position.
The preset drum-point waveform segment is a waveform segment of drum-point audio acquired in advance.
In addition, the drum points of the audio to be played may be determined in other ways, for example using the amplitude of the audio waveform; the disclosure is not limited in this respect.
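The peak-extraction-plus-template-matching procedure above can be sketched in plain Python. The crude local-peak rule, the 90% amplitude threshold, and normalised correlation as the similarity measure are all assumptions for illustration; the patent does not fix these details.

```python
import math

def find_drum_points(waveform, sr, template, min_similarity=0.8):
    """Locate drum points by matching peak-centred segments to a template.

    waveform: list of audio samples; sr: sample rate in Hz.
    template: preset drum-point waveform segment (even length assumed).
    Returns drum-point times in seconds whose segment's normalised
    correlation with the template exceeds `min_similarity`.
    """
    half = len(template) // 2
    # Crude peak extraction: local maxima near the global peak stand in
    # for "target waveform segments containing a peak of the waveform".
    threshold = 0.9 * max(abs(s) for s in waveform)
    t_norm = math.sqrt(sum(t * t for t in template))
    drum_times = []
    for i in range(half, len(waveform) - half):
        a = abs(waveform[i])
        if a >= threshold and a >= abs(waveform[i-1]) and a > abs(waveform[i+1]):
            seg = waveform[i-half:i+half]
            s_norm = math.sqrt(sum(s * s for s in seg))
            denom = s_norm * t_norm
            sim = sum(s * t for s, t in zip(seg, template)) / denom if denom else 0.0
            if sim > min_similarity:  # similarity greater than preset similarity
                drum_times.append(i / sr)
    return drum_times
```

A production system would more likely use an onset-detection routine over short-time energy, but the structure — extract peak segments, score similarity against a preset drum segment, keep the matches — follows the description above.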
Optionally, the rhythm type of the audio can be determined from its number of drum points, as follows. Determine the number of drum points of the audio to be played, i.e., obtain the number of drum points included in the audio played on the first page, and determine the rhythm type from that number. When the number of drum points is greater than a preset number value, the audio has many drum points and a fast rhythm, so its rhythm type is determined to be the fast rhythm type. When the number of drum points is less than or equal to the preset number value, the audio has few drum points and a slow rhythm, so its rhythm type is determined to be the slow rhythm type.
Optionally, the moving speed of the first special effects may be determined from the rhythm type of the audio. When the rhythm type is the fast rhythm type, the audio has more drum points and more first special effects need to drop, so the first special effects move at a faster speed, i.e., a first preset moving speed. When the rhythm type is the slow rhythm type, fewer first special effects need to drop, so they move at a slower speed, i.e., a second preset moving speed.
Alternatively, the moving speeds for the fast and slow rhythm types may be the same, i.e., the first special effects of all audio move at one preset moving speed.
It can be understood that, whether or not the moving speeds for different rhythm types are the same, when a drum point of the audio is reached, the first special effect corresponding to that drum point must have moved into the designated area, so that the first special effect moves accurately.
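The tempo classification, speed selection, and the arrival requirement above can be sketched together: each effect is spawned early enough that it reaches the designated area exactly at its drum point. Every numeric value here (the preset number value of 20, the two preset speeds, the travel distance) is an illustrative assumption.

```python
def plan_first_effects(drum_times, fast_threshold=20,
                       fast_speed=300.0, slow_speed=150.0, travel_px=600.0):
    """Classify the tempo from the drum-point count and schedule spawns.

    drum_times: drum-point positions in seconds.
    Each first effect must arrive at the designated area exactly at its
    drum point, so its spawn time is the drum time minus the travel time.
    Returns (rhythm_type, moving_speed, spawn_times).
    """
    # Fast rhythm when the drum-point count exceeds the preset number value.
    tempo = "fast" if len(drum_times) > fast_threshold else "slow"
    speed = fast_speed if tempo == "fast" else slow_speed
    travel_time = travel_px / speed  # seconds from spawn to designated area
    spawn_times = [t - travel_time for t in drum_times]
    return tempo, speed, spawn_times
```

A negative spawn time would simply mean the effect should already be on screen when playback starts (or the first drum points are skipped).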
In addition, optionally, the rhythm type of the audio may also be determined by using the beat information of the audio, and the process is similar to the process of determining the rhythm type of the audio according to the number of drum points, which is not described herein.
S203, responding to the movement of the first target object in the target object image of the first page, and controlling the second special effect displayed on the first page to move in real time.
In the embodiment of the disclosure, when movement of the first target object in the target object image of the first page is detected, it indicates that the user has input a control operation, i.e., the second special effect needs to be moved. The second special effect is then controlled to move in real time, so that its movement is controlled contactlessly in response to the user's input: the user does not need to slide a finger on the screen.
Optionally, when the second special effect is controlled to move, it moves in the moving direction of the first target object; for example, if the first target object moves from left to right, the second special effect moves from left to right.
The first target object may be the user's nose, a hand, or another part such as the eyes or the head; the disclosure is not limited in this respect.
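Mapping the detected first target object onto the second special effect can be sketched as follows. The normalised keypoint input and the exponential-smoothing step are assumptions; the keypoint detector itself (face or nose tracking) is outside this sketch.

```python
def track_second_effect(nose_x_norm, page_width_px, smoothing=0.5, prev_x=None):
    """Map the first target object's position to the second special effect.

    nose_x_norm: horizontal nose position normalised to 0.0..1.0, as a
    keypoint detector would typically report it.
    A simple exponential-smoothing step keeps the effect from jittering
    with frame-to-frame detection noise.
    """
    target = nose_x_norm * page_width_px
    if prev_x is None:
        return target
    return prev_x + smoothing * (target - prev_x)
```

Called once per frame with the previous output as `prev_x`, this makes the second special effect follow the nose from left to right without any touch input.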
S204, displaying the third special effect according to the relation between the real-time position of the second special effect and the dynamic position corresponding to the first special effect.
In the embodiment of the disclosure, while the second special effect moves, its position changes in real time. Whether the second special effect touches the first special effect can be determined from the relationship between the real-time position of the second special effect and the dynamic position corresponding to the first special effect, i.e., whether the second special effect hits the first special effect, and the corresponding prompt special effect, the third special effect, is displayed. This realizes interaction with the user, encourages the user to keep operating, and effectively increases the interest of the game and therefore the user's satisfaction.
Optionally, the first special effect includes a core region (the circular region in the first special effect shown in (a) or (b) of fig. 3) and an edge region (the region outside the circle in the first special effect shown in (a) or (b) of fig. 3). When a drum point of the audio is reached, the first special effect corresponding to that drum point moves into the designated area. The core region corresponds to the exact drum point position: when the user controls the second special effect to hit the core region, the user has stepped on the drum point exactly, that is, has caught the rhythm precisely. The edge region corresponds to positions close to the drum point: when the user controls the second special effect to hit the edge region, the user has not stepped on the drum point exactly, that is, has not caught the rhythm precisely.
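The core/edge distinction above amounts to a distance test between the two effects' positions. A minimal sketch, assuming circular regions and illustrative radii (the disclosure does not specify how the regions are measured):

```python
import math

def classify_hit(effect_x: float, effect_y: float,
                 target_x: float, target_y: float,
                 core_radius: float, edge_radius: float) -> str:
    """Classify where the second special effect lies relative to the first:
    'core' (exact drum point), 'edge' (near the drum point), or 'miss'."""
    distance = math.hypot(effect_x - target_x, effect_y - target_y)
    if distance <= core_radius:
        return "core"
    if distance <= edge_radius:
        return "edge"
    return "miss"
```

The same test, run against the first special effect currently in the designated area, yields the hit result used to pick the third special effect.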
In addition, the first special effect shown in (a) or (b) of fig. 3 is only an example; the first special effect may be another type of effect, such as an arrow or a circle, which the present disclosure does not limit.
Optionally, the second special effect may be a ball with eyes, such as the second special effect shown in fig. 4; of course, it may also be another type of effect, for example an animal such as a dog, and the present disclosure is not limited thereto.
Optionally, the third special effect includes prompt information and/or a prompt animation. The prompt information includes prompt text, prompt pictures, and the like.
Taking a specific application scenario as an example, as shown in fig. 4: after the game starts, the first page plays audio 1 and the first special effect 10 moves from top to bottom. When it is detected that the user moves the head, and thus the nose, from left to right, the second special effect 20 is controlled to move from left to right. When a drum point of audio 1 is reached, the first special effect 10 corresponding to that drum point moves into the designated area; if the second special effect 20 is then in the core region of the first special effect, the second special effect 20 has hit the first special effect exactly, that is, the user has caught the rhythm precisely, and a "perfect" prompt message is output.
In the embodiment of the disclosure, the spacing between first special effects is determined according to the real drum point positions of the audio, that is, the first special effects are arranged according to the real rhythm. When the first page plays the audio, the first special effects arranged according to the real rhythm can thus be displayed dynamically, and the user can hit them exactly on the real rhythm. This strengthens the user's sense of rhythm, enhances the realism of the experience, and brings the user an immersive experience.
In the embodiment of the disclosure, as the user moves the first target object, the first target object in the target object image on the first page also moves in real time. The electronic device tracks the first target object to determine its moving direction and controls the second special effect bound to it accordingly. The user can thus control the movement of the second special effect simply by moving the first target object, without touching the screen of the electronic device, so contactless control of the second special effect is realized and the convenience of control is improved.
As can be seen from the above description, the first special effect is displayed dynamically on the first page with a dynamic display attribute determined from the rhythm information of the audio played on the first page; that is, the first special effect is controlled to move according to the dynamic display attribute. When movement of the first target object in the target object image on the first page is detected, indicating that the user has moved the first target object and thereby input a control operation, the second special effect is controlled to move correspondingly, realizing movement control of the second special effect. Because the second special effect can be moved without sliding a finger on the screen, the convenience of control is improved and users can successfully play movement-controlled special effect games. While the second special effect moves, the third special effect is displayed according to the positional relationship between the second special effect and the first special effect, informing the user of the game effect produced by the input control operation. Moreover, because the dynamic display attribute of the first special effect is determined from the real rhythm information of the audio, the audio played on the first page is not merely background music but is tied to the first special effect. The user can input control operations following the rhythm of the audio so that the second special effect touches the first special effect, realizing contactless interaction driven by the music rhythm. This increases the interest of the interaction, strengthens the user's sense of rhythm, improves the realism of use, and improves user satisfaction.
Referring to fig. 5, fig. 5 is a second schematic flowchart of an object control method according to an embodiment of the disclosure. On the basis of the embodiment of fig. 2, the user may also start the game without touching the screen, realizing a contactless trigger. The process of contactlessly triggering the start of the game is described in detail below with reference to a specific embodiment. As shown in fig. 5, the method includes:
S501, displaying a first page, wherein a target object image of a user is displayed on the first page in real time.
In the embodiment of the disclosure, when the first page displays the target object image acquired by the electronic device, the second target object in the target object image is enlarged, that is, a certain part of the user shown on the first page is enlarged. This lets the user observe the movement of the first target object more intuitively, improves control accuracy, and increases interest.
For example, if the second target object is the user's head, the head is displayed at its normal size when no enlargement is applied (as shown in fig. 6 (a)) and is enlarged when enlargement is applied (as shown in fig. 6 (b)), achieving a big-head special effect.
S502, responding to a trigger action in a target object image of the first page, determining audio to be played, and playing the audio to be played on the first page.
In the embodiment of the disclosure, when it is detected that the user in the target object image on the first page performs a trigger action, the user has input an operation for triggering the start of the game, so the game needs to be started, and the audio to be played on the first page, that is, the audio to be played corresponding to the first page, is determined.
Optionally, the trigger action may include moving the head up and down or left and right, moving a hand left and right or up and down, blinking, moving the nose up and down or left and right, and the like.
It will be appreciated that when the nose moves, it is actually the user moving the head; for example, when the user's head moves left and right, the nose also moves left and right.
In the embodiment of the present disclosure, optionally, the audio to be played corresponding to the first page may be determined in any of the following ways.
One way is to randomly select audio from a preset audio list as the audio to be played; that is, the electronic device randomly selects one audio from the preset audio list, i.e. a preset audio library, and determines it as the audio to be played corresponding to the first page.
In another way, the audio selected by the user from the preset audio list is obtained as the audio to be played. That is, the user may select one audio from the provided preset audio list, and the electronic device determines the audio selected by the user as the audio to be played corresponding to the first page.
Another way is to acquire audio uploaded by the user as the audio to be played. That is, when the audio the user wants to play does not exist in the preset audio list, the user may upload the desired audio, and the electronic device determines the uploaded audio as the audio to be played corresponding to the first page.
In addition, optionally, the rhythm information corresponding to the audio in the preset audio list is predetermined, that is, the dynamic display attribute of the first special effect is predetermined, so that when the game starts, the first special effect can move directly according to that dynamic display attribute. When the audio to be played corresponding to the first page is audio uploaded by the user, the electronic device performs drum point detection on the uploaded audio when acquiring it, determines the corresponding rhythm information, and accordingly determines the dynamic display attribute of the first special effect corresponding to that audio.
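A minimal sketch of how rhythm information could be turned into a dynamic display attribute: given the drum point times and a movement speed, the spacing between consecutive first special effects is the travel distance between their drum points. The speed values and function names are illustrative assumptions, not from the disclosure.

```python
def speed_for_rhythm(rhythm_type: str) -> float:
    """Illustrative mapping from rhythm type to effect speed (px/s)."""
    return {"fast": 300.0, "slow": 150.0}[rhythm_type]

def spacing_between_effects(drum_times, speed_px_per_s):
    """Spacing (px) between consecutive first special effects so that each
    one, moving at a constant speed, reaches the designated area exactly
    at its own drum point."""
    return [(t2 - t1) * speed_px_per_s
            for t1, t2 in zip(drum_times, drum_times[1:])]
```

With these two pieces, a preset audio's drum point list fully determines both attributes mentioned later (spacing and movement speed).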
S503, dynamically displaying the first special effect on the first page, wherein the dynamic display attribute of the first special effect is determined according to the rhythm information of the audio played by the first page.
In the embodiment of the present disclosure, after the game starts, first special effects whose dynamic display attribute matches the rhythm information of the audio to be played corresponding to the first page are displayed continuously on the first page; that is, whenever a drum point of the audio is reached, one first special effect moves into the designated area on the first page.
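The requirement that each first special effect lands in the designated area exactly on its drum point can be sketched as a spawn schedule: each effect must appear one travel time before its drum point. This is an illustrative sketch under assumed units (seconds and pixels), not the disclosure's implementation.

```python
def spawn_schedule(drum_times, travel_px, speed_px_per_s):
    """For each drum point, compute when its first special effect must
    appear at the top of the preset display area so that, moving at a
    constant speed, it reaches the designated area exactly on the drum
    point. A negative spawn time means the effect would already need to
    be mid-screen when playback starts."""
    travel_s = travel_px / speed_px_per_s
    return [t - travel_s for t in drum_times]
```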
S504, responding to the movement of the first target object in the target object image of the first page, and controlling the second special effect displayed on the first page to move in real time.
S505, displaying the third special effect according to the relation between the real-time position of the second special effect and the dynamic position corresponding to the first special effect.
In the embodiment of the present disclosure, according to the relationship between the real-time position of the second special effect and the dynamic position corresponding to the first special effect, it is determined whether the second special effect touches the first special effect, that is, whether the second special effect controlled by a part of the user hits the first special effect, and a corresponding feedback effect, the third special effect, is displayed. The specific process is as follows: while the first special effect moves dynamically in the preset display area of the first page, if the real-time position of the second special effect is in the core region, first prompt information and/or a first prompt animation is displayed; if the real-time position of the second special effect is in the edge region, second prompt information and/or a second prompt animation is displayed; and if the real-time position of the second special effect continuously has no overlap with the dynamic position of the first special effect, third prompt information and/or a third prompt animation is displayed.
In the embodiment of the disclosure, while the first special effect moves dynamically in the preset display area of the first page, the user may move the first target object in a second preset moving direction (for example, from left to right), and the electronic device responds by controlling the second special effect to move in that direction. As the second special effect moves, its real-time position changes continuously. When the real-time position of the second special effect is in the core region of the first special effect, the moving track of the second special effect overlaps the core region, that is, the second special effect has hit the core region of the first special effect, and the first prompt information and/or first prompt animation is output to inform the user that the first special effect was hit exactly, that is, that the user stepped exactly on the drum point of the audio. When the real-time position of the second special effect is in the edge region of the first special effect, the moving track of the second special effect overlaps only the edge region, not the core region, and the second prompt information and/or second prompt animation is output to inform the user that the first special effect was hit but the drum point of the audio was not stepped on exactly.
When the real-time position of the second special effect continuously has no overlap with the dynamic position of the first special effect, the moving track of the second special effect does not overlap the first special effect at all, that is, the second special effect stays outside the first special effect, and the third prompt information and/or third prompt animation is displayed to inform the user that the first special effect was not hit, that is, that the drum point of the audio was missed.
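The three branches above reduce to a simple lookup from hit region to prompt. The prompt strings follow the examples given later in the text ("perfect", "good", "failed"); the function name and mapping are illustrative assumptions.

```python
def third_effect_prompt(hit_region: str) -> str:
    """Map a hit result ('core', 'edge', or 'miss') to the prompt text of
    the corresponding third special effect."""
    prompts = {
        "core": "perfect",  # stepped exactly on the drum point
        "edge": "good",     # hit the effect, but not exactly on the beat
        "miss": "failed",   # no overlap with the first special effect
    }
    return prompts[hit_region]
```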
Further, optionally, the number of times the third prompt information or third prompt animation is displayed may be counted. When this number exceeds a preset threshold, the user has missed the drum points of the audio too many times, that is, has repeatedly failed to catch the rhythm, and end information is displayed to inform the user that the game is over and must be restarted to experience the special effect game again.
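A minimal sketch of this miss-counting rule; the default threshold of 2 follows the example given below (end information shown on the third miss), and the class itself is an illustrative assumption.

```python
class MissTracker:
    """Count how many times the third prompt (a miss) has been shown and
    signal when the end information should be displayed."""

    def __init__(self, threshold: int = 2):
        self.threshold = threshold
        self.miss_count = 0

    def record_miss(self) -> bool:
        """Record one miss; return True once the count exceeds the
        preset threshold, i.e. when the game should end."""
        self.miss_count += 1
        return self.miss_count > self.threshold
```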
The first prompt information, first prompt animation, second prompt information, second prompt animation, third prompt information, and third prompt animation can all be set according to user requirements; for example, the first prompt information is "perfect", the second is "good", and the third is "failed". The preset threshold can likewise be set as required; for example, with a preset threshold of 2, the end information is displayed the third time the user misses a drum point of the audio.
It can be understood that when the first special effect is in the designated area within the preset display area, a drum point of the audio has been reached. At this time, if the real-time position of the second special effect is in the edge region or core region of the first special effect, the second special effect is determined to have hit that region; that is, when the second special effect hits the first special effect in the designated area of the first page, the user has stepped on the drum point of the audio, i.e. has controlled the second special effect to hit the first special effect corresponding to that drum point. For example, when a drum point of the audio played on the first page is reached, a first special effect falls into the designated area; if the position of the second special effect is then in the core region of that first special effect, a "perfect" prompt message is output. Alternatively, the second special effect may be considered to hit the first special effect whenever it hits the first special effect anywhere in the preset display area.
In addition, optionally, if the real-time position of the second special effect is in the core region or edge region, the second special effect has met the first special effect corresponding to the drum point, that is, the user has successfully stepped on the drum point. A first preset audio is then played, or the sound effect of the audio currently playing on the first page is enhanced, that is, the drum point sound of the currently playing audio is made louder, to increase the user's interest and improve satisfaction.
The first preset audio is a preset audio, for example a voice saying "success".
In addition, optionally, the electronic device may further respond to the moving direction of the first target object in the target object image of the first page by controlling a third target object within the second special effect to display correspondingly in different directions; that is, while the first target object moves, the third target object in the second special effect is controlled to move correspondingly according to the moving direction of the first target object.
The moving direction of the third target object may be the same as that of the first target object. For example, if the first target object is the user's nose and the third target object is the eyes of the second special effect, then while the user moves the head from left to right, the nose also moves from left to right, so the eyes in the second special effect are controlled to move from left to right.
On the basis of any of the disclosed embodiments, optionally, the style of the first special effect may also be changed while it is displayed on the first page; that is, the remaining playing duration of the audio on the first page may be obtained, and the first special effect displayed in different preset styles for different remaining playing durations.
Specifically, the remaining playing duration of the audio played on the first page, that is, the time remaining until the audio finishes, is obtained; the preset style corresponding to that remaining duration is looked up; and the first special effect is displayed in that preset style. This prompts the user about the playing progress of the audio, that is, informs the user of the remaining game time, and improves user satisfaction.
The preset styles include appearance attributes of the first special effect such as color and shape. For example, suppose the preset style is the color of the first special effect, the remaining playing duration is 10 seconds, and the preset color corresponding to 10 seconds is purple; the displayed first special effect then changes to purple for the remaining playing duration.
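A minimal sketch of this style lookup. Only the 10 s / purple pair comes from the text; the other thresholds and colors are illustrative assumptions.

```python
def preset_style(remaining_s: float) -> str:
    """Return the preset style (a color here) for the remaining playing
    duration of the first page's audio."""
    if remaining_s <= 10:
        return "purple"   # from the example in the text
    if remaining_s <= 30:
        return "orange"   # assumed intermediate style
    return "white"        # assumed default style
```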
In the embodiment of the disclosure, while the user moves the first target object, a corresponding control operation is input: not only is the second special effect controlled to move according to the moving direction of the first target object, but the third target object within the second special effect is also controlled to move in that direction. This gives a dual response to the user's control operation, lets the user perceive the input operation more clearly, and increases interest.
In the embodiment of the disclosure, as the user moves the first target object, the second special effect moves correspondingly. When the second special effect hits the first special effect in the designated area, corresponding prompt information and/or animation is output according to the region hit, to inform the user whether the drum point of the audio was stepped on exactly, that is, whether the rhythm was caught. When the second special effect does not hit the first special effect in the designated area, that is, does not touch the first special effect corresponding to a drum point, corresponding prompt information and/or animation is output to inform the user that the rhythm was not caught. This realizes interaction with the user, encourages the user to operate, and increases interest.
In the embodiment of the disclosure, after it is detected that the user in the target object image on the first page performs the trigger action, the user has input the operation that triggers the start of the game, that is, wants to start the game. The audio to be played on the first page is then determined, that audio is played on the first page, and the first special effect whose dynamic display attribute is related to the audio is displayed, so that the user can experience the game. The user does not need to slide a finger on the screen to trigger the start of the game, so contactless triggering is realized and the convenience of triggering the game is improved.
Corresponding to the object control method of the above embodiment, fig. 7 is a block diagram of the structure of the object control apparatus provided by the embodiment of the present disclosure. For ease of illustration, only portions relevant to embodiments of the present disclosure are shown. Referring to fig. 7, the apparatus includes: a display module 701 and a processing module 702.
The display module 701 is configured to display a first page, where a target object image of a user is displayed on the first page in real time;
the display module 701 is further configured to dynamically display a first special effect on the first page, where a dynamic display attribute of the first special effect is determined according to rhythm information of audio played on the first page;
a processing module 702, configured to control, in response to movement of a first target object in the target object image of the first page, a second special effect displayed on the first page to move in real time;
the processing module 702 is further configured to instruct the display module 701 to display a third special effect according to a relationship between the real-time position of the second special effect and the dynamic position corresponding to the first special effect.
In one embodiment of the present disclosure, the dynamic display attribute of the first special effects includes a spacing between the first special effects determined according to the rhythm information and/or a movement speed of the first special effects determined according to the rhythm information.
In one embodiment of the present disclosure, the rhythm information includes one or more of the following: rhythm type, beat information, and drum point information.
In one embodiment of the present disclosure, the display module 701 is further configured to:
according to a first preset moving direction, dynamically displaying the first special effect in a preset display area on the first page, wherein the first preset moving direction comprises at least one of the following: from top to bottom, from bottom to top, from left to right, and from right to left.
In one embodiment of the present disclosure, the display module 701 is further configured to:
and responding to the triggering action in the target object image of the first page, determining the audio to be played, and playing the audio to be played on the first page.
In one embodiment of the present disclosure, the display module 701 is further configured to:
randomly selecting audio from a preset audio list as the audio to be played;
or,
acquiring audio selected by a user from the preset audio list as the audio to be played;
or,
and acquiring the audio uploaded by the user as the audio to be played.
In one embodiment of the present disclosure, the rhythm types include a fast rhythm type and a slow rhythm type;
The processing module 702 is further configured to:
determining the number of drum points corresponding to the audio to be played;
and determining the rhythm type according to the number of drum points.
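The disclosure only states that the rhythm type is determined from the number of drum points; one plausible sketch is to classify by drum-point density, with the per-second threshold an illustrative assumption.

```python
def rhythm_type(drum_point_count: int, duration_s: float,
                fast_density: float = 1.5) -> str:
    """Classify audio as fast- or slow-rhythm from its drum-point density
    (drum points per second of audio)."""
    return "fast" if drum_point_count / duration_s >= fast_density else "slow"
```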
In one embodiment of the present disclosure, the drum point information includes a drum point location;
the processing module 702 is further configured to:
acquiring a waveform corresponding to audio to be played, and extracting a target waveform segment from the waveform, wherein the target waveform segment comprises a peak value of the waveform;
and matching the target waveform segment with a preset drum point waveform segment to determine the drum point position of the audio to be played.
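The matching step above can be sketched as sliding a preset drum waveform segment over the audio and keeping positions whose similarity exceeds a threshold. This pure-Python sketch uses a normalized dot-product similarity over every window for clarity; the disclosure's method first extracts only peak-containing segments, and all names and thresholds here are illustrative assumptions.

```python
def find_drum_points(waveform, template, sample_rate, threshold=0.9):
    """Return drum point positions (in seconds) where the waveform locally
    matches the preset drum waveform segment `template`."""
    n = len(template)
    t_norm = sum(x * x for x in template) ** 0.5
    points = []
    for i in range(len(waveform) - n + 1):
        segment = waveform[i:i + n]
        s_norm = sum(x * x for x in segment) ** 0.5
        if s_norm == 0 or t_norm == 0:
            continue  # silent window: cannot match a drum hit
        # Cosine similarity between the window and the drum template.
        sim = sum(a * b for a, b in zip(segment, template)) / (s_norm * t_norm)
        if sim >= threshold:
            points.append(i / sample_rate)
    return points
```

A production implementation would typically use an onset-detection library rather than raw template matching, but the shape of the computation is the same.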
In one embodiment of the present disclosure, the first effect includes a core region and an edge region; the third special effect comprises prompt information and/or prompt animation;
the processing module 702 is further configured to:
in the process that the first special effect dynamically moves in the preset display area of the first page, if the real-time position of the second special effect is in the core area, the display module 701 is instructed to display first prompt information and/or first prompt animation;
if the real-time position of the second special effect is in the edge area, the display module 701 is instructed to display a second prompt message and/or a second prompt animation;
If the real-time position of the second special effect continuously has no overlap with the dynamic position of the first special effect, the display module 701 is instructed to display third prompt information and/or a third prompt animation; and/or the number of times the third prompt information or third prompt animation is displayed is counted, and when that number exceeds a preset threshold, the display module 701 is instructed to display end information.
In one embodiment of the present disclosure, the processing module 702 is further configured to:
and if the real-time position of the second special effect is in the core area or the edge area, playing a first preset audio or enhancing the sound effect of the audio currently played on the first page.
In one embodiment of the present disclosure, the processing module 702 is further configured to:
and amplifying a second target object in the target object image in the first page.
In one embodiment of the present disclosure, the processing module 702 is further configured to:
and responding to the moving direction of a first target object in the target object image of the first page, and controlling a third target object in the second special effect to display in different directions correspondingly.
In one embodiment of the present disclosure, the processing module 702 is further configured to:
acquiring the remaining playing duration of the audio on the first page;
and displaying the first special effect in different preset styles for different remaining playing durations.
The device provided in this embodiment may be used to execute the technical solution of the foregoing method embodiment, and its implementation principle and technical effects are similar, and this embodiment will not be described herein again.
Referring to fig. 8, there is shown a schematic structural diagram of an electronic device 800 suitable for implementing embodiments of the present disclosure; the electronic device 800 may be a terminal device or a server. The terminal device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (Personal Digital Assistant, PDA for short), a tablet (Portable Android Device, PAD for short), a portable multimedia player (Portable Media Player, PMP for short), or an in-vehicle terminal (e.g., an in-vehicle navigation terminal), and a fixed terminal such as a digital TV or a desktop computer. The electronic device shown in fig. 8 is merely an example and should not be construed as limiting the functionality and scope of use of the disclosed embodiments.
As shown in fig. 8, the electronic device 800 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 801 that may perform various appropriate actions and processes according to a program stored in a read-only memory (Read-Only Memory, ROM) 802 or a program loaded from a storage device 808 into a random access memory (Random Access Memory, RAM) 803. The RAM 803 also stores various programs and data required for the operation of the electronic device 800. The processing device 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
In general, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; output devices 807 including, for example, a liquid crystal display (Liquid Crystal Display, LCD for short), a speaker, a vibrator, and the like; storage devices 808 including, for example, magnetic tape, a hard disk, and the like; and communication devices 809. The communication devices 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 8 shows an electronic device 800 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication devices 809, installed from the storage devices 808, or installed from the ROM 802. When the computer program is executed by the processing device 801, the above-described functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above-described embodiments.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, via the Internet using an Internet service provider).
The embodiments of the present disclosure also provide a computer program product comprising a computer program which, when executed, implements the object control method provided by the embodiments of the present disclosure.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not in any way limit the unit itself; for example, the first acquisition unit may also be described as "a unit that acquires at least two Internet Protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, according to one or more embodiments of the present disclosure, there is provided an object control method, including:
displaying a first page, wherein a target object image of a user is displayed on the first page in real time;
dynamically displaying a first special effect on the first page, wherein the dynamic display attribute of the first special effect is determined according to the rhythm information of the audio played on the first page;
responding to the movement of a first target object in the target object image of the first page, and controlling a second special effect displayed on the first page to move in real time;
and displaying a third special effect according to the relation between the real-time position of the second special effect and the dynamic position corresponding to the first special effect.
According to one or more embodiments of the present disclosure, the dynamic display attribute of the first special effect includes a spacing between first special effects determined according to the rhythm information, and/or a moving speed of the first special effect determined according to the rhythm information.
According to one or more embodiments of the present disclosure, the rhythm information includes one or more of: a rhythm type, beat information, and drum point information.
According to one or more embodiments of the present disclosure, the dynamically displaying the first special effect on the first page includes:
According to a first preset moving direction, dynamically displaying the first special effect in a preset display area on the first page, wherein the first preset moving direction comprises at least one of the following: from top to bottom, from bottom to top, from left to right, and from right to left.
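The attribute determination described above can be sketched as follows. This is a minimal illustration only: the function name, the default screen height, the per-beat scaling constant, and the fast-rhythm adjustment are assumptions for the sketch, not the patented implementation.

```python
# Hypothetical sketch: deriving the dynamic display attributes of the
# first special effect (moving speed and inter-effect spacing) from
# rhythm information. Constants are illustrative assumptions.

def display_attributes(bpm: float, fast_rhythm: bool, screen_height: float = 1280.0):
    """Map rhythm information to a moving speed and an inter-effect spacing."""
    beat_interval = 60.0 / bpm                       # seconds per beat
    # Faster music moves the effect faster through the preset display area.
    speed = screen_height / (8.0 * beat_interval)    # pixels per second
    if fast_rhythm:
        speed *= 1.5
    # Space consecutive effects so that one arrives per beat.
    spacing = speed * beat_interval                  # pixels between effects
    return speed, spacing

speed, spacing = display_attributes(bpm=120.0, fast_rhythm=False)
```

With this sketch, a higher tempo yields both a higher speed and, because spacing is tied to the beat interval, a tighter spacing between successive effects.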
According to one or more embodiments of the present disclosure, the method further comprises:
in response to a triggering action in the target object image of the first page, determining audio to be played, and playing the audio to be played on the first page.
According to one or more embodiments of the present disclosure, the determining audio to be played includes:
randomly selecting audio from a preset audio list as the audio to be played;
or,
acquiring audio selected by the user from the preset audio list as the audio to be played;
or,
acquiring audio uploaded by the user as the audio to be played.
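The three alternatives above can be combined in one selection routine. The priority order below (upload, then user selection, then random pick) is an assumption for the sketch; the disclosure lists the alternatives without fixing an order.

```python
# Illustrative sketch of determining the audio to be played. The
# priority order among the three alternatives is an assumption.
import random

def determine_audio_to_play(preset_list, user_choice=None, uploaded=None):
    """Pick the audio to play on the first page:
    user upload > user selection from the preset list > random pick."""
    if uploaded is not None:
        return uploaded
    if user_choice is not None and user_choice in preset_list:
        return user_choice
    return random.choice(preset_list)
```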
According to one or more embodiments of the present disclosure, the rhythm type includes a fast rhythm type and a slow rhythm type;
the method further comprises the steps of:
determining the number of drum points corresponding to the audio to be played;
and determining the rhythm type according to the number of drum points.
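A simple way to realize this classification is to compare the drum-point density against a threshold. The density threshold below is an illustrative assumption; the disclosure only states that the rhythm type is determined from the number of drum points.

```python
# Hypothetical sketch: classifying audio as fast or slow rhythm from
# the number of drum points. The threshold is an illustrative assumption.

def rhythm_type(num_drum_points: int, duration_s: float,
                threshold_per_min: float = 100.0) -> str:
    """Return "fast" or "slow" from the drum-point density (points/minute)."""
    density = num_drum_points / (duration_s / 60.0)
    return "fast" if density > threshold_per_min else "slow"
```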
According to one or more embodiments of the present disclosure, the drum point information includes a drum point location;
the method further comprises the steps of:
acquiring a waveform corresponding to audio to be played, and extracting a target waveform segment from the waveform, wherein the target waveform segment comprises a peak value of the waveform;
and matching the target waveform segment with a preset drum point waveform segment to determine the drum point position of the audio to be played.
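The matching step can be sketched as a similarity comparison between peak-centred segments and a preset drum-point template, as in claim 1 (similarity above a preset value marks the peak time as a drum-point position). The normalized-correlation similarity measure and the local-peak scan below are assumptions for illustration.

```python
# Illustrative sketch of drum-point detection by matching peak-centred
# waveform segments against a preset drum-point waveform segment.
import math

def similarity(a, b):
    """Normalized correlation between two equal-length waveform segments."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def drum_point_positions(waveform, template, sample_rate, threshold=0.8):
    """Slide the preset template over segments centred on local peaks and
    keep the peak times whose similarity exceeds the preset threshold."""
    half = len(template) // 2
    positions = []
    for i in range(half, len(waveform) - half):
        if waveform[i] >= waveform[i - 1] and waveform[i] >= waveform[i + 1]:
            segment = waveform[i - half:i - half + len(template)]
            if similarity(segment, template) > threshold:
                positions.append(i / sample_rate)  # peak time = drum-point position
    return positions
```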
According to one or more embodiments of the present disclosure, the first effect includes a core region and an edge region; the third special effect comprises prompt information and/or prompt animation;
displaying the third special effect according to the relation between the real-time position of the second special effect and the dynamic position corresponding to the first special effect, comprising:
displaying first prompt information and/or first prompt animation if the real-time position of the second special effect is in the core area in the process that the first special effect moves dynamically in a preset display area of the first page;
if the real-time position of the second special effect is in the edge area, displaying second prompt information and/or second prompt animation;
if the real-time position of the second special effect continuously has no overlap with the dynamic position of the first special effect, displaying third prompt information and/or a third prompt animation; and/or counting the number of times the third prompt information or the third prompt animation is displayed, and displaying end information when the number of times is greater than a preset threshold.
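The three branches above can be sketched as a small dispatch function. The prompt names, the choice to reset the miss counter on a core or edge hit, and the threshold value are assumptions for the sketch.

```python
# Illustrative sketch of selecting the third special effect from the
# positional relationship between the second special effect and the
# core/edge area of the first special effect. Names and the reset
# behaviour are illustrative assumptions.

def third_effect(core_hit: bool, edge_hit: bool, miss_count: int,
                 max_misses: int = 3):
    """Return the prompt to display and the updated miss counter."""
    if core_hit:
        return "first_prompt", 0           # e.g. "Perfect!", resets misses
    if edge_hit:
        return "second_prompt", 0          # e.g. "Good!"
    miss_count += 1                        # continuously no overlap
    if miss_count > max_misses:
        return "end_info", miss_count      # display end information
    return "third_prompt", miss_count      # e.g. "Miss"
```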
According to one or more embodiments of the present disclosure, the method further comprises:
and if the real-time position of the second special effect is in the core area or the edge area, playing a first preset audio or enhancing the sound effect of the audio currently played on the first page.
According to one or more embodiments of the present disclosure, the method further comprises:
and magnifying a second target object in the target object image on the first page.
According to one or more embodiments of the present disclosure, the method further comprises:
and responding to the moving direction of a first target object in the target object image of the first page, and controlling a third target object in the second special effect to display in different directions correspondingly.
According to one or more embodiments of the present disclosure, the method further comprises:
acquiring the remaining playing duration of the audio on the first page;
and displaying the first special effect in different preset patterns according to different remaining playing durations.
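This can be sketched as a lookup from remaining duration to a preset pattern. The duration cut-offs and the specific colours/shapes below are illustrative assumptions; the disclosure only states that the pattern (e.g. colour and/or shape) varies with the remaining playing duration.

```python
# Hypothetical sketch: choosing the preset pattern of the first special
# effect from the remaining playing duration. Cut-offs, colours, and
# shapes are illustrative assumptions.

def first_effect_pattern(remaining_s: float) -> dict:
    """Return the preset pattern (colour and/or shape) for the effect."""
    if remaining_s > 30.0:
        return {"color": "blue", "shape": "circle"}
    if remaining_s > 10.0:
        return {"color": "orange", "shape": "circle"}
    return {"color": "red", "shape": "star"}   # urgency cue near the end
```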
In a second aspect, according to one or more embodiments of the present disclosure, there is provided an object control apparatus including:
the display module is used for displaying a first page, wherein a target object image of a user is displayed on the first page in real time;
The display module is further configured to dynamically display a first special effect on the first page, where a dynamic display attribute of the first special effect is determined according to rhythm information of audio played on the first page;
the processing module is used for responding to the movement of a first target object in the target object image of the first page and controlling a second special effect displayed on the first page to move in real time;
the processing module is further configured to instruct the display module to display a third special effect according to a relationship between the real-time position of the second special effect and the dynamic position corresponding to the first special effect.
According to one or more embodiments of the present disclosure, the dynamic display attribute of the first special effect includes a spacing between first special effects determined according to the rhythm information, and/or a moving speed of the first special effect determined according to the rhythm information.
According to one or more embodiments of the present disclosure, the rhythm information includes one or more of: a rhythm type, beat information, and drum point information.
According to one or more embodiments of the present disclosure, the display module is further configured to:
according to a first preset moving direction, dynamically displaying the first special effect in a preset display area on the first page, wherein the first preset moving direction comprises at least one of the following: from top to bottom, from bottom to top, from left to right, and from right to left.
According to one or more embodiments of the present disclosure, the display module is further configured to:
and responding to the triggering action in the target object image of the first page, determining the audio to be played, and playing the audio to be played on the first page.
According to one or more embodiments of the present disclosure, the display module is further configured to:
randomly selecting audio from a preset audio list as the audio to be played;
or,
acquiring audio selected by the user from the preset audio list as the audio to be played;
or,
acquiring audio uploaded by the user as the audio to be played.
According to one or more embodiments of the present disclosure, the rhythm type includes a fast rhythm type and a slow rhythm type;
the processing module is further configured to:
determining the number of drum points corresponding to the audio to be played;
and determining the rhythm type according to the number of drum points.
According to one or more embodiments of the present disclosure, the drum point information further includes a drum point location;
the processing module is further configured to:
acquiring a waveform corresponding to audio to be played, and extracting a target waveform segment from the waveform, wherein the target waveform segment comprises a peak value of the waveform;
And matching the target waveform segment with a preset drum point waveform segment to determine the drum point position of the audio to be played.
According to one or more embodiments of the present disclosure, the first effect includes a core region and an edge region; the third special effect comprises prompt information and/or prompt animation;
the processing module is further configured to:
in the process that the first special effect moves dynamically in a preset display area of the first page, if the real-time position of the second special effect is in the core area, the display module is instructed to display first prompt information and/or first prompt animation;
if the real-time position of the second special effect is in the edge area, indicating the display module to display second prompt information and/or second prompt animation;
if the real-time position of the second special effect continuously has no overlap with the dynamic position of the first special effect, instructing the display module to display third prompt information and/or a third prompt animation; and/or counting the number of times the third prompt information or the third prompt animation is displayed, and instructing the display module to display end information when the number of times is greater than a preset threshold.
According to one or more embodiments of the present disclosure, the processing module is further configured to:
And if the real-time position of the second special effect is in the core area or the edge area, playing a first preset audio or enhancing the sound effect of the audio currently played on the first page.
According to one or more embodiments of the present disclosure, the processing module is further configured to:
and magnifying a second target object in the target object image on the first page.
According to one or more embodiments of the present disclosure, the processing module is further configured to:
and responding to the moving direction of a first target object in the target object image of the first page, and controlling a third target object in the second special effect to display in different directions correspondingly.
According to one or more embodiments of the present disclosure, the processing module is further configured to:
acquiring the remaining playing duration of the audio on the first page;
and displaying the first special effect in different preset patterns according to different remaining playing durations.
In a third aspect, according to one or more embodiments of the present disclosure, there is provided an electronic device comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the object control method as described above in the first aspect and the various possible designs of the first aspect.
In a fourth aspect, according to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored therein computer-executable instructions which, when executed by a processor, implement the object control method as described above in the first aspect and the various possible designs of the first aspect.
In a fifth aspect, according to one or more embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the object control method according to the first aspect and the various possible designs of the first aspect.
The foregoing description is only of preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure involved herein is not limited to technical solutions formed by the specific combinations of the features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by substituting the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (14)

1. An object control method, the method comprising:
displaying a first page, wherein a target object image of a user is displayed on the first page in real time;
Dynamically displaying a first special effect on the first page, wherein the first special effect comprises a core area and an edge area; the dynamic display attribute of the first special effect is determined according to the rhythm information of the audio played by the first page; the rhythm information comprises drum point information; the drum point information comprises drum point positions;
responding to the movement of a first target object in the target object image of the first page, and controlling a second special effect displayed on the first page to move in real time;
displaying a corresponding third special effect according to a relationship between the real-time position of the second special effect and the dynamic position corresponding to the first special effect, wherein the relationship comprises one of the following position relationships: the real-time position of the second special effect is located in the core area; the real-time position of the second special effect is located in the edge area; or the real-time position of the second special effect continuously has no overlap with the dynamic position of the first special effect;
the method further comprises the steps of:
acquiring the residual playing time length of the first page audio;
Displaying the first special effect according to different preset patterns according to different residual playing time lengths; the preset pattern comprises the color and/or the shape of the first special effect;
the method further comprises the steps of:
acquiring a waveform corresponding to audio to be played, and extracting a target waveform segment from the waveform, wherein the target waveform segment comprises a peak value of the waveform;
calculating the similarity between the target waveform segment and a preset drum point waveform segment; the preset drum point waveform segment is a waveform segment of drum point audio acquired in advance;
and if the similarity is greater than the preset similarity, determining the time corresponding to the wave crest in the target waveform section as the drum point position of the audio to be played.
2. The method of claim 1, wherein the dynamic display attribute of the first effects comprises determining a spacing between the first effects from the cadence information and/or determining a speed of movement of the first effects from the cadence information.
3. The method of claim 2, wherein the rhythm information further comprises a rhythm type and/or beat information.
4. The method of claim 1, wherein the dynamically displaying the first effect on the first page comprises:
According to a first preset moving direction, dynamically displaying the first special effect in a preset display area on the first page, wherein the first preset moving direction comprises at least one of the following: from top to bottom, from bottom to top, from left to right, and from right to left.
5. The method of claim 1, wherein the method further comprises:
and responding to the triggering action in the target object image of the first page, determining the audio to be played, and playing the audio to be played on the first page.
6. The method of claim 5, wherein the determining audio to play comprises:
randomly selecting audio from a preset audio list as the audio to be played;
or,
acquiring audio selected by the user from the preset audio list as the audio to be played;
or,
acquiring audio uploaded by the user as the audio to be played.
7. The method of claim 3, wherein the rhythm type includes a fast rhythm type and a slow rhythm type;
the method further comprises the steps of:
determining the number of drum points corresponding to the audio to be played;
and determining the rhythm type according to the number of drum points.
8. The method of any of claims 1 to 7, wherein the third special effect comprises prompt information and/or a prompt animation;
displaying the third special effect according to the relation between the real-time position of the second special effect and the dynamic position corresponding to the first special effect, comprising:
displaying first prompt information and/or first prompt animation if the real-time position of the second special effect is in the core area in the process that the first special effect moves dynamically in a preset display area of the first page;
if the real-time position of the second special effect is in the edge area, displaying second prompt information and/or second prompt animation;
if the real-time position of the second special effect continuously has no overlap with the dynamic position of the first special effect, displaying third prompt information and/or a third prompt animation; and/or counting the number of times the third prompt information or the third prompt animation is displayed, and displaying end information when the number of times is greater than a preset threshold.
9. The method of claim 8, wherein the method further comprises:
and if the real-time position of the second special effect is in the core area or the edge area, playing a first preset audio or enhancing the sound effect of the audio currently played on the first page.
10. The method of claim 1, wherein the method further comprises:
and amplifying a second target object in the target object image in the first page.
11. The method of claim 1, wherein the method further comprises:
and responding to the moving direction of a first target object in the target object image of the first page, and controlling a third target object in the second special effect to display in different directions correspondingly.
12. An object control apparatus, the apparatus comprising:
the display module is used for displaying a first page, wherein a target object image of a user is displayed on the first page in real time;
the display module is further configured to dynamically display a first special effect on the first page, where the first special effect includes a core area and an edge area; the dynamic display attribute of the first special effect is determined according to the rhythm information of the audio played by the first page; the rhythm information comprises drum point information; the drum point information comprises drum point positions;
the processing module is used for responding to the movement of a first target object in the target object image of the first page and controlling a second special effect displayed on the first page to move in real time;
the processing module is further configured to instruct the display module to display a corresponding third special effect according to a relationship between the real-time position of the second special effect and the dynamic position corresponding to the first special effect, wherein the relationship comprises one of the following position relationships: the real-time position of the second special effect is located in the core area; the real-time position of the second special effect is located in the edge area; or the real-time position of the second special effect continuously has no overlap with the dynamic position of the first special effect;
the processing module is further used for obtaining the residual playing duration of the first page audio; displaying the first special effect according to different preset patterns according to different residual playing time lengths; the preset pattern comprises the color and/or the shape of the first special effect;
the processing module is further configured to:
acquiring a waveform corresponding to audio to be played, and extracting a target waveform segment from the waveform, wherein the target waveform segment comprises a peak value of the waveform;
calculating the similarity between the target waveform segment and a preset drum point waveform segment; the preset drum point waveform segment is a waveform segment of drum point audio acquired in advance;
And if the similarity is greater than the preset similarity, determining the time corresponding to the wave crest in the target waveform section as the drum point position of the audio to be played.
13. An electronic device, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executing computer-executable instructions stored in the memory causes the at least one processor to perform the object control method of any one of claims 1 to 11.
14. A computer readable storage medium having stored therein computer executable instructions which, when executed by a processor, implement the object control method according to any one of claims 1 to 11.
CN202110277940.5A 2021-03-15 2021-03-15 Object control method and device Active CN112988027B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110277940.5A CN112988027B (en) 2021-03-15 2021-03-15 Object control method and device
PCT/CN2022/080685 WO2022194097A1 (en) 2021-03-15 2022-03-14 Object control method and device


Publications (2)

Publication Number Publication Date
CN112988027A CN112988027A (en) 2021-06-18
CN112988027B true CN112988027B (en) 2023-06-27

Family

ID=76335561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110277940.5A Active CN112988027B (en) 2021-03-15 2021-03-15 Object control method and device

Country Status (2)

Country Link
CN (1) CN112988027B (en)
WO (1) WO2022194097A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112988027B (en) * 2021-03-15 2023-06-27 北京字跳网络技术有限公司 Object control method and device
CN113744135A (en) * 2021-09-16 2021-12-03 北京字跳网络技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114329001B (en) * 2021-12-23 2023-04-28 游艺星际(北京)科技有限公司 Display method and device of dynamic picture, electronic equipment and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100028858A (en) * 2008-09-05 2010-03-15 엔에이치엔(주) System for providing on-line music game and method thereof
US20190147841A1 (en) * 2017-11-13 2019-05-16 Facebook, Inc. Methods and systems for displaying a karaoke interface
CN108111909A (en) * 2017-12-15 2018-06-01 广州市百果园信息技术有限公司 Method of video image processing and computer storage media, terminal
CN108833818B (en) * 2018-06-28 2021-03-26 腾讯科技(深圳)有限公司 Video recording method, device, terminal and storage medium
CN109045688B (en) * 2018-07-23 2022-04-26 广州方硅信息技术有限公司 Game interaction method and device, electronic equipment and storage medium
CN111857923B (en) * 2020-07-17 2022-10-28 北京字节跳动网络技术有限公司 Special effect display method and device, electronic equipment and computer readable medium
CN111857482B (en) * 2020-07-24 2022-05-17 北京字节跳动网络技术有限公司 Interaction method, device, equipment and readable medium
CN112259062B (en) * 2020-10-20 2022-11-04 北京字节跳动网络技术有限公司 Special effect display method and device, electronic equipment and computer readable medium
CN112988027B (en) * 2021-03-15 2023-06-27 北京字跳网络技术有限公司 Object control method and device

Also Published As

Publication number Publication date
CN112988027A (en) 2021-06-18
WO2022194097A1 (en) 2022-09-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant