CN118034841A - Method, device and program product for displaying audio playing interface


Info

Publication number
CN118034841A
Authority
CN
China
Prior art keywords: target, three-dimensional model, target audio, audio, initial
Legal status: Pending
Application number: CN202410301070.4A
Other languages: Chinese (zh)
Inventors: 王博, 贺继, 孔繁鸣, 贺英杰
Current Assignee: Tencent Music Entertainment Technology Shenzhen Co Ltd
Original Assignee: Tencent Music Entertainment Technology Shenzhen Co Ltd
Application filed by Tencent Music Entertainment Technology Shenzhen Co Ltd
Priority to CN202410301070.4A
Publication of CN118034841A

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides a display method, device and program product of an audio playing interface, and belongs to the technical field of display. The method comprises the following steps: acquiring a song cover picture corresponding to the target audio; rendering the pre-stored initial three-dimensional model based on the song cover picture corresponding to the target audio to obtain a target three-dimensional model corresponding to the target audio, wherein the initial three-dimensional model and the target three-dimensional model comprise a frame, a transparent cover plate and an operation panel, the transparent cover plate and the operation panel are both positioned in the frame and connected with the frame, and a play/pause control, a previous control and a next control are arranged on the operation panel; and displaying a playing interface corresponding to the target audio, wherein a target three-dimensional model is displayed in the playing interface, and song cover pictures corresponding to the target audio and lyric data corresponding to the target audio are displayed on a transparent cover plate of the target three-dimensional model. By adopting the method and the device, the display diversity of the playing interface is improved.

Description

Method, device and program product for displaying audio playing interface
Technical Field
The disclosure relates to the technical field of display, and in particular relates to a display method, device and program product of an audio playing interface.
Background
Music players are a popular type of application through which people can play various kinds of audio, such as songs and other audio programs.
When a user plays a song through a music player, a playing interface is displayed. The playing interface shows a background picture, lyric information, a play/pause control, a song switching control, and so on, and as the song plays, the lyric information in the playing interface changes with the playback progress.
However, such a playing interface is visually monotonous.
Disclosure of Invention
The embodiment of the disclosure provides a display method of an audio playing interface, which can improve the display diversity of the audio playing interface, and the technical scheme is as follows:
in a first aspect, a method for displaying an audio playing interface is provided, where the method includes:
acquiring a song cover picture corresponding to the target audio;
Rendering an initial three-dimensional model stored in advance based on a song cover picture corresponding to the target audio to obtain a target three-dimensional model corresponding to the target audio, wherein the initial three-dimensional model and the target three-dimensional model comprise a frame, a transparent cover plate and an operation panel, the transparent cover plate and the operation panel are both positioned in the frame and connected with the frame, and a play/pause control, a previous control and a next control are arranged on the operation panel;
Displaying a playing interface corresponding to the target audio, wherein the playing interface is provided with the target three-dimensional model, and a transparent cover plate of the target three-dimensional model is provided with a song cover picture corresponding to the target audio and lyric data corresponding to the target audio.
In one possible implementation, the method further includes:
Acquiring the position and the motion trail of a finger contact;
generating angle adjustment information corresponding to the target three-dimensional model based on the finger contact point position and the motion trail;
And generating the angle adjustment instruction based on the angle adjustment information corresponding to the target three-dimensional model, and adjusting the display view angle of the target three-dimensional model based on the angle adjustment instruction.
In one possible implementation, the initial three-dimensional model is provided with a virtual light source, the method further comprising:
Determining reflected light of the frame to the light rays emitted by the virtual light source based on the material of the frame of the target three-dimensional model;
And determining the reflected light of the transparent cover plate to the light rays emitted by the virtual light source based on the material of the transparent cover plate of the target three-dimensional model.
In one possible implementation, the virtual light source includes at least one of direct light and ambient light.
In a possible implementation manner, the rendering processing is performed on the pre-stored initial three-dimensional model based on the song cover picture corresponding to the target audio to obtain a target three-dimensional model corresponding to the target audio, including:
identifying the dominant hue of the song cover picture corresponding to the target audio;
and performing color rendering processing on the frame of the initial three-dimensional model based on the dominant hue to obtain a target three-dimensional model corresponding to the target audio.
In one possible implementation manner, the initial three-dimensional model is a three-dimensional model corresponding to a handheld interactive style player.
In a possible implementation manner, a background image corresponding to the target audio is also displayed in a playing interface corresponding to the target audio;
The method further comprises the steps of:
identifying the dominant hue of the song cover picture corresponding to the target audio;
and performing color rendering processing on the pre-stored initial background image based on the dominant hue to obtain a background image corresponding to the target audio.
In one possible implementation, the initial background image comprises a spectral image;
The color rendering processing is performed on the pre-stored initial background image based on the dominant hue to obtain a background image corresponding to the target audio, and the method comprises the following steps:
And performing color rendering processing on the spectrum image based on the dominant hue to obtain a background image corresponding to the target audio.
In one possible implementation, the spectral image is a dynamic waveform image that varies with the loudness of the target audio.
In one possible implementation, the initial three-dimensional model further includes a record compartment and a vinyl record, the record compartment being located in and connected to the frame, the record compartment having an opening, the transparent cover plate being located within the opening of the record compartment and connected to the record compartment, and the vinyl record being located at least partially in the cavity of the record compartment;
the rendering processing is performed on the pre-stored initial three-dimensional model based on the song cover picture corresponding to the target audio to obtain a target three-dimensional model corresponding to the target audio, and the rendering processing comprises the following steps:
and displaying at least part of the song cover picture at the center of the vinyl record, and displaying the song cover picture on the transparent cover plate.
In one possible implementation manner, a playing time axis corresponding to the target audio is also displayed on the transparent cover plate.
In one possible implementation, the method further includes:
when the target audio is played, the vinyl record is located in the record compartment and is displayed as dynamically rotating;
when a clicking operation on the pause control is detected, the vinyl record stops the dynamic rotation display and moves to a target position, wherein, when the vinyl record is located at the target position, a preset part of the vinyl record is located outside the record compartment;
when a clicking operation on the play control is detected, the vinyl record moves from the target position back into the record compartment, and the vinyl record is displayed as dynamically rotating.
In a possible implementation manner, the playing interface further displays an operation layer, the operation layer is located at the upper layer of the target three-dimensional model, and a plurality of operation controls are arranged on the operation layer;
The method further comprises the steps of:
And executing the instruction corresponding to the operation control when the clicking operation of the operation control is detected.
In one possible implementation, the method further includes:
And closing the operation layer when detecting the clicking operation of the target area in the playing interface.
In one possible implementation, the method further includes:
And closing the operation layer when the duration of the playing interface corresponding to the target audio reaches a preset duration threshold.
In a second aspect, there is provided a display device of an audio playing interface, the device comprising:
The acquisition module is used for acquiring a song cover picture corresponding to the target audio;
The three-dimensional model determining module is used for rendering an initial three-dimensional model stored in advance based on a song cover picture corresponding to the target audio to obtain a target three-dimensional model corresponding to the target audio, wherein the initial three-dimensional model and the target three-dimensional model comprise a frame, a transparent cover plate and an operation panel, the transparent cover plate and the operation panel are both positioned in the frame and connected with the frame, and a play/pause control, a previous control and a next control are arranged on the operation panel;
The display module is used for displaying a playing interface corresponding to the target audio, the target three-dimensional model is displayed in the playing interface, and song cover pictures corresponding to the target audio and lyric data corresponding to the target audio are displayed on a transparent cover plate of the target three-dimensional model.
In one possible implementation, the display module is further configured to:
Acquiring the position and the motion trail of a finger contact;
generating angle adjustment information corresponding to the target three-dimensional model based on the finger contact point position and the motion trail;
And generating the angle adjustment instruction based on the angle adjustment information corresponding to the target three-dimensional model, and adjusting the display view angle of the target three-dimensional model based on the angle adjustment instruction.
In a possible implementation manner, the initial three-dimensional model is provided with a virtual light source, and the display module is further configured to:
Determining reflected light of the frame to the light rays emitted by the virtual light source based on the material of the frame of the target three-dimensional model;
And determining the reflected light of the transparent cover plate to the light rays emitted by the virtual light source based on the material of the transparent cover plate of the target three-dimensional model.
In one possible implementation, the virtual light source includes at least one of direct light and ambient light.
In one possible implementation manner, the three-dimensional model determining module is configured to:
identifying the dominant hue of the song cover picture corresponding to the target audio;
and performing color rendering processing on the frame of the initial three-dimensional model based on the dominant hue to obtain a target three-dimensional model corresponding to the target audio.
In one possible implementation manner, the initial three-dimensional model is a three-dimensional model corresponding to a handheld interactive style player.
In a possible implementation manner, a background image corresponding to the target audio is also displayed in a playing interface corresponding to the target audio;
the display module is further configured to:
identifying the dominant hue of the song cover picture corresponding to the target audio;
and performing color rendering processing on the pre-stored initial background image based on the dominant hue to obtain a background image corresponding to the target audio.
In one possible implementation, the initial background image comprises a spectral image;
the display module is further configured to:
And performing color rendering processing on the spectrum image based on the dominant hue to obtain a background image corresponding to the target audio.
In one possible implementation, the spectral image is a dynamic waveform image that varies with the loudness of the target audio.
In one possible implementation, the initial three-dimensional model further includes a record compartment and a vinyl record, the record compartment being located in and connected to the frame, the record compartment having an opening, the transparent cover plate being located within the opening of the record compartment and connected to the record compartment, and the vinyl record being located at least partially in the cavity of the record compartment;
the display module is further configured to:
and displaying at least part of the song cover picture at the center of the vinyl record, and displaying the song cover picture on the transparent cover plate.
In one possible implementation manner, a playing time axis corresponding to the target audio is also displayed on the transparent cover plate.
In one possible implementation, the display module is further configured to:
when the target audio is played, the vinyl record is located in the record compartment and is displayed as dynamically rotating;
when a clicking operation on the pause control is detected, the vinyl record stops the dynamic rotation display and moves to a target position, wherein, when the vinyl record is located at the target position, a preset part of the vinyl record is located outside the record compartment;
when a clicking operation on the play control is detected, the vinyl record moves from the target position back into the record compartment, and the vinyl record is displayed as dynamically rotating.
In a possible implementation manner, the playing interface further displays an operation layer, the operation layer is located at the upper layer of the target three-dimensional model, and a plurality of operation controls are arranged on the operation layer;
the display module is further configured to:
And executing the instruction corresponding to the operation control when the clicking operation of the operation control is detected.
In one possible implementation, the display module is further configured to:
And closing the operation layer when detecting the clicking operation of the target area in the playing interface.
In one possible implementation, the display module is further configured to:
And closing the operation layer when the duration of the playing interface corresponding to the target audio reaches a preset duration threshold.
In a third aspect, a computer device is provided, the computer device comprising a processor and a memory, the memory storing at least one instruction, the instructions being loaded and executed by the processor to implement operations performed by a display method of an audio playback interface.
In a fourth aspect, a computer-readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to perform operations performed by a display method of an audio playback interface is provided.
In a fifth aspect, a computer program product is provided, the computer program product comprising at least one instruction therein, the at least one instruction being loaded and executed by a processor to implement operations performed by a display method of an audio playback interface.
The technical scheme provided by the embodiment of the disclosure has the beneficial effects that: according to the scheme, on one hand, the picture information is displayed in the playing interface corresponding to the target audio, but the target three-dimensional model is displayed, and different target three-dimensional models are rendered according to different song cover pictures corresponding to the target audio, so that the display diversity of the playing interface is improved by the various target three-dimensional models, and on the other hand, the display diversity of the playing interface is improved by the plurality of controls (the playing/pause control, the previous control and the next control) arranged on the operation panel of the target three-dimensional model and the song cover pictures and the lyric data displayed on the transparent cover plate.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings may be obtained according to these drawings without inventive effort for a person of ordinary skill in the art.
Fig. 1 is a schematic diagram of a playing interface corresponding to a target audio according to an embodiment of the disclosure;
fig. 2 is a schematic diagram of a playing interface corresponding to a target audio according to an embodiment of the disclosure;
fig. 3 is a schematic diagram of a playing interface corresponding to a target audio according to an embodiment of the disclosure;
fig. 4 is a schematic diagram of a playing interface corresponding to a target audio according to an embodiment of the disclosure;
fig. 5 is a schematic diagram of a playing interface corresponding to a target audio according to an embodiment of the disclosure;
fig. 6 is a schematic diagram of a playing interface corresponding to a target audio according to an embodiment of the disclosure;
Fig. 7 is a schematic structural diagram of a display device of a playing interface according to an embodiment of the disclosure;
fig. 8 is a block diagram of a terminal according to an embodiment of the present disclosure;
fig. 9 is a block diagram of a server according to an embodiment of the present disclosure.
Detailed Description
For the purposes of clarity, technical solutions and advantages of the present disclosure, the following further details the embodiments of the present disclosure with reference to the accompanying drawings.
The embodiment of the disclosure provides a display method of a playing interface, which can be implemented by a computer device. The computer device may be a terminal, a server, and the like; the terminal may be a desktop computer, a notebook computer, a tablet computer, a mobile phone, and the like, and the server may be a single server, a server cluster, and the like.
The computer device may include a processor, memory, communication components, and the like.
The processor may be a central processing unit (CPU). The processor may be configured to read instructions and process data, for example, to perform rendering processing on the pre-stored initial three-dimensional model based on the song cover picture corresponding to the target audio to obtain the target three-dimensional model corresponding to the target audio, to generate angle adjustment information corresponding to the target three-dimensional model according to the finger contact point position and the motion track, to adjust the display view angle of the target three-dimensional model based on an angle adjustment instruction, to determine the reflected light of the frame for light emitted by the virtual light source based on the material of the frame of the target three-dimensional model, to determine the reflected light of the transparent cover plate for light emitted by the virtual light source based on the material of the transparent cover plate of the target three-dimensional model, and so on.
The memory may be various kinds of volatile or nonvolatile memory, such as a solid state disk (SSD), dynamic random access memory (DRAM), and the like. The memory may be used for data storage, for example, storage of the obtained song cover picture corresponding to the target audio, storage of the data corresponding to the initial three-dimensional model, storage of the data corresponding to the obtained target three-dimensional model, storage of the data corresponding to the playing interface corresponding to the target audio, and so on.
The communication component may be a wired network connector, a wireless fidelity (WiFi) module, a Bluetooth module, a cellular network communication module, or the like. The communication component may be used for data transmission with other devices, for example, to obtain the song cover picture corresponding to the target audio, and so on.
The computer device may be provided with an application program such as a music player, which may be a stand-alone application program or a plug-in within another application program. The user can play various kinds of audio through the music player, for example, songs, audio dramas, light music, and other audio programs. In the following, the method provided by the embodiment of the present disclosure is described by taking the case of a user using a music player on a terminal as an example; other scenarios are similar and are not described again here.
Fig. 1 is a flowchart of a display method of an audio playing interface according to an embodiment of the present disclosure. Referring to fig. 1, this embodiment includes:
101. Obtain a song cover picture corresponding to the target audio.
In implementation, a user may open the music player on the terminal, select a piece of audio in the music player, and click the play control of the music player, thereby triggering a play instruction for the target audio. When the terminal receives the play instruction for the target audio, it may acquire the target audio and the song cover picture corresponding to the target audio.
In the embodiment of the present disclosure, the target audio and the song cover picture corresponding to the target audio may be obtained, based on the identifier corresponding to the target audio, from a database corresponding to the music player, from the local storage of the computer device, or from wherever the song cover picture is actually stored; this is not particularly limited in the embodiment of the present disclosure.
As for the song cover picture corresponding to the target audio: when the target audio is a song, the song cover picture corresponding to the target audio is the album cover picture of the song; when the target audio is another audio program, the song cover picture corresponding to the target audio may be a promotional picture of that program; when the target audio is an audiobook, the song cover picture corresponding to the target audio may be the cover picture of the audiobook; and so on.
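As a rough illustration of this lookup, the following Kotlin sketch tries several cover sources in order; the CoverRepository and CoverSource names, and the fallback order, are assumptions for illustration and are not part of the disclosed method.

```kotlin
data class Cover(val audioId: String, val imageBytes: ByteArray)

interface CoverSource {
    // Returns null when this source has no entry for the given audio identifier.
    fun findCover(audioId: String): Cover?
}

class CoverRepository(private val sources: List<CoverSource>) {
    // Try each configured location (player database, local storage, ...) in order,
    // mirroring the "obtain it from wherever it is actually stored" wording above.
    fun coverFor(audioId: String): Cover? =
        sources.firstNotNullOfOrNull { it.findCover(audioId) }
}
```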
102. Render the pre-stored initial three-dimensional model based on the song cover picture corresponding to the target audio to obtain a target three-dimensional model corresponding to the target audio.
The database corresponding to the music application program stores the data corresponding to the initial three-dimensional model in advance. The storage file format of the initial three-dimensional model may be any reasonable file format; for example, in the embodiment of the present disclosure, the data corresponding to the initial three-dimensional model may be stored in GLB (glTF Binary, the binary form of the GL Transmission Format), an open standard file format for storing and transmitting 3D models and their related data. Of course, other file formats may also be used, which is not limited by the embodiment of the present disclosure.
In implementation, after the song cover picture corresponding to the target audio is obtained, rendering processing can be performed on the stored initial three-dimensional model based on the song cover picture to obtain the target three-dimensional model corresponding to the target audio. The target three-dimensional model is the initial three-dimensional model with elements rendered from the song cover picture, so different target audio corresponds to different target three-dimensional models.
In one possible implementation, the rendering process may use any reasonable rendering method. For example, an open-source renderer may be used; the initial three-dimensional model may be rendered with the Filament renderer (an open-source, mobile-oriented, physically based (PBR) real-time renderer), and so on, which is not limited by the embodiments of the present disclosure.
In the embodiments of the present disclosure, for a specific structural arrangement of the initial three-dimensional model and the target three-dimensional model, it may be: the initial three-dimensional model and the target three-dimensional model comprise a frame, a transparent cover plate and an operation panel, wherein the transparent cover plate and the operation panel are both positioned in the frame and are connected with the frame, a play/pause control, a previous control and a next control are arranged on the operation panel, and a user can control the play of the audio through the play/pause control, the previous control and the next control.
The frame has an annular structure and may be rectangular, circular, or another shape, which is not limited in the embodiments of the present disclosure.
As for the transparent cover plate, its transparency can be set as required, and its material may be plastic or glass, which is not limited in the embodiments of the present disclosure.
In one possible implementation, the initial three-dimensional model may be a three-dimensional model corresponding to a handheld interactive-style player, that is, the target three-dimensional model is presented in the style of a handheld interactive player; see fig. 1, where the target three-dimensional model shown is such a player.
Of course, in the embodiments of the present disclosure, the initial three-dimensional model and the target three-dimensional model may be other reasonable structural arrangements, which are not limited in the embodiments of the present disclosure.
103. Display a playing interface corresponding to the target audio.
In implementation, after the target three-dimensional model corresponding to the target audio is obtained, a playing interface corresponding to the target audio can be displayed on an application interface of the music player, and the target audio starts to be played, so that a user can listen to the played target audio on the music player and see the playing interface corresponding to the target audio.
The target three-dimensional model is displayed in the playing interface, so that a user can see the three-dimensional structure image of the target three-dimensional model in the playing interface corresponding to the target audio, a new visual effect is brought to the user, and the display diversity of the playing interface is improved.
Moreover, different target audio has different song cover pictures, so different song cover pictures render different target three-dimensional models and the playing interfaces of different audio differ from one another, which further improves the display diversity of the playing interface.
In the embodiment of the disclosure, the transparent cover plate of the target three-dimensional model can also display song cover pictures corresponding to the target audio and lyric data corresponding to the target audio, so that the information of the playing interface is enriched, and the display diversity of the playing interface is improved.
In one possible implementation, the lyric information corresponding to the target audio may include at least one of a name, an author, and lyrics corresponding to the target audio, and of course, the lyric information may include other displayable information about the target audio in addition to the above information, which is not limited by the embodiment of the present disclosure.
In one possible implementation manner, a plurality of controls are further arranged on the operation panel of the target three-dimensional model in the playing interface, and the control arrangement can have the following functions:
When a clicking operation on the pause control is detected, playback of the target audio is paused and the pause control changes into a play control; when a clicking operation on the play control is detected, playback of the target audio continues and the play control changes into a pause control; when a clicking operation on the previous control is detected, playback of the target audio stops and the previous audio of the target audio in the current play list is played; and when a clicking operation on the next control is detected, playback of the target audio stops and the next audio of the target audio in the current play list is played.
In implementation, when the terminal receives a play instruction for the target audio, a playing interface corresponding to the target audio is displayed in the display interface of the music player and playback of the target audio starts automatically. A play/pause control, a previous control, and a next control are displayed on the target three-dimensional model in the playing interface of the target audio; because the target audio is currently playing automatically, the play/pause control is shown as a pause control.
The user can click on the pause control on the target three-dimensional model, so that the terminal detects the click operation of the pause control, thereby controlling the target audio to pause playing, and changing the pause control displayed on the target three-dimensional model into a playing control. Then, when the user wants to continue playing the target audio, the user can click on the playing control on the target three-dimensional model, so that the terminal detects the clicking operation of the playing control, and accordingly the target audio is controlled to continue playing, and the playing control displayed on the target three-dimensional model is changed into a pause control.
The user can click on the previous control or the next control on the target three-dimensional model, so that the currently played audio is switched.
In the embodiment of the present disclosure, the operation panels of the initial three-dimensional model and the target three-dimensional model may be provided with other controls in addition to the three controls described above, which is not limited in the embodiment of the present disclosure.
Therefore, the user can correspondingly control the playing of the target audio through the plurality of controls on the target three-dimensional model in the playing interface, and the display diversity of the playing interface is improved, so that the operation interest of the user on the playing interface is improved.
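A minimal sketch of the control behaviour just described (play/pause toggling plus previous/next switching within the current play list); the class name, the wrap-around playlist behaviour, and the assumption of a non-empty playlist are illustrative and not taken from the disclosure.

```kotlin
class PanelController(private val playlist: List<String>) {   // assumes a non-empty playlist
    var index = 0
        private set
    var isPlaying = true   // playback starts automatically when the playing interface opens
        private set

    // While playing the button is shown as a pause control; while paused, as a play control.
    fun onPlayPauseClicked() {
        isPlaying = !isPlaying
    }

    fun onPreviousClicked() {           // stop the current audio, play the previous one in the playlist
        index = (index - 1 + playlist.size) % playlist.size
        isPlaying = true
    }

    fun onNextClicked() {               // stop the current audio, play the next one in the playlist
        index = (index + 1) % playlist.size
        isPlaying = true
    }

    fun currentAudio(): String = playlist[index]
}
```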
In one possible implementation, the target three-dimensional model displayed in the playing interface also supports the following function: the terminal obtains the finger contact point position and the motion track, generates angle adjustment information corresponding to the target three-dimensional model from the finger contact point position and the motion track, generates an angle adjustment instruction from that angle adjustment information, and adjusts the display view angle of the target three-dimensional model based on the angle adjustment instruction.
In implementation, when it is detected that a user performs a swipe operation in a playing interface, a finger contact point position and a swipe track corresponding to the swipe operation are obtained, where the swipe track corresponding to the swipe operation may include a plurality of position points, and the finger contact point position is a first position point of the detected swipe operation in the playing interface.
Then, angle adjustment information corresponding to the target three-dimensional model is generated according to the finger contact point position and the distance and direction of each position point in the motion track relative to that contact point. The angle adjustment information may include the direction in which the target three-dimensional model needs to rotate and the angle through which it rotates; that is, the direction of rotation can be determined from the direction of each position point in the motion track relative to the finger contact point position, and the rotation angle in that direction can be determined from the distance between each position point and the finger contact point position.
After the angle adjustment information corresponding to the target three-dimensional model is determined, a corresponding angle adjustment instruction is generated, and the terminal adjusts the display view angle of the target three-dimensional model in the playing interface from the current angle to the target angle based on the information of the rotation direction and the rotation angle carried in the angle adjustment instruction, so that the adjustment of the display view angle of the target three-dimensional model in the playing interface is realized.
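The following sketch shows one way the angle adjustment information could be derived from the finger contact point and motion track, under the assumption that horizontal movement maps to yaw and vertical movement to pitch with a fixed degrees-per-pixel sensitivity; the names and constants are illustrative.

```kotlin
data class AngleAdjustment(val yawDegrees: Float, val pitchDegrees: Float)

fun angleAdjustmentFor(
    contactX: Float, contactY: Float,          // first touch point of the swipe
    trackX: List<Float>, trackY: List<Float>,  // subsequent position points of the motion track
    degreesPerPixel: Float = 0.25f             // assumed sensitivity
): AngleAdjustment {
    if (trackX.isEmpty() || trackY.isEmpty()) return AngleAdjustment(0f, 0f)
    // Direction relative to the contact point decides the rotation direction;
    // distance from the contact point decides how far the model rotates.
    val dx = trackX.last() - contactX
    val dy = trackY.last() - contactY
    return AngleAdjustment(
        yawDegrees = dx * degreesPerPixel,    // horizontal movement turns the model around its vertical axis
        pitchDegrees = -dy * degreesPerPixel  // vertical movement tilts the model
    )
}
```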
In a possible implementation manner, the playing interface may further include an angle adjustment area corresponding to the target three-dimensional model, and when it is detected that the user performs a swipe operation in the playing interface and the finger contact point position corresponding to the swipe operation is located in the angle adjustment area corresponding to the target three-dimensional model, the display view angle of the target three-dimensional model is correspondingly adjusted based on the finger contact point position corresponding to the swipe operation and the swipe track.
The angle adjustment area corresponding to the target three-dimensional model can be a fixed area in the playing interface; that is, regardless of whether the display view angle or the occupied area of the target three-dimensional model displayed in the playing interface changes, the position of the angle adjustment area in the playing interface stays unchanged, and the user can perform a swipe operation in this fixed area each time to adjust the display view angle of the target three-dimensional model.
Or the angle adjustment area corresponding to the target three-dimensional model may be changed along with the change of the display view angle of the target three-dimensional model in the playing interface, for example, the angle adjustment area corresponding to the target three-dimensional model is the current display area of the target three-dimensional model in the playing interface, that is, the finger contact point position needs to be located on the target three-dimensional model to realize the adjustment operation of the display view angle of the target three-dimensional model.
Of course, the angle adjustment area may be other arrangements, which are not limited in the embodiments of the present disclosure.
Referring to fig. 1 and 2, the display view angles of the target three-dimensional model in the playing interface shown in fig. 1 and 2 are different, and a user can arbitrarily adjust the display view angles thereof through operations.
In the embodiments of the present disclosure, there may be various methods for adjusting the display view angle of the target three-dimensional model, for example, the display view angle of the target three-dimensional model may also be adjusted based on gestures:
The server of the music player is pre-stored with angle adjustment instructions corresponding to a plurality of gestures.
When the user wants to adjust the display view angle, one target gesture of the plurality of gestures can be made in front of the camera of the terminal, the terminal can recognize which of the plurality of gestures is stored in advance when acquiring the target gesture, then an angle adjustment instruction corresponding to the target gesture is determined, and the display view angle of the target three-dimensional model is adjusted based on the angle adjustment instruction corresponding to the target gesture.
The gestures may be any reasonable settings. For example, the plurality of gestures may be the user's index finger pointing in different directions over 360 degrees around the center point of the target three-dimensional model, and the angle adjustment instruction corresponding to each gesture rotates the target three-dimensional model by a preset angle in that direction. For example, when the terminal detects that the index finger of the user points in a target direction, the terminal controls the target three-dimensional model in the playing interface to rotate by the preset angle toward the target direction.
The preset angle may be any reasonable setting, for example, may be 5 degrees, 10 degrees, or 15 degrees, etc., which is not limited by the embodiments of the present disclosure.
Of course, the gestures and the angle adjustment instructions corresponding to the gestures may also be other reasonable settings, which are not limited in the embodiments of the disclosure.
It can be understood that no matter what display view angle the target three-dimensional model is in, as long as a user can see the control (including the play/pause control, the previous control and the next control) on the target three-dimensional model in the play interface corresponding to the target audio, the clicking operation can be performed on the control.
In one possible implementation, when the display view angle of the target three-dimensional model is adjusted, the position of the center point of the target three-dimensional model in the playing interface remains unchanged. The target three-dimensional model therefore always stays at a fixed position in the playing interface, so its display position does not interfere with the operation or display of other controls. In addition, because the center point stays fixed while the view angle changes, the adjustment is more convenient for the user and the model remains easy to observe during the adjustment.
The center point of the target three-dimensional model may be disposed at any reasonable position in the playing interface, for example, may be disposed at a center position of the entire playing interface, or may also be disposed directly above or directly below the center position of the playing interface, or the like, which is not limited in the embodiments of the present disclosure.
In a possible implementation manner, the initial three-dimensional model may further be provided with a virtual light source, so that the target three-dimensional model displayed in the playing interface shows shading and highlight effects by reflecting the light emitted by the virtual light source, thereby improving the display effect of the target three-dimensional model.
In one possible implementation, the reflected light of the frame to the light emitted by the virtual light source is determined based on the material of the frame of the target three-dimensional model, and the reflected light of the transparent cover to the light emitted by the virtual light source is determined based on the material of the transparent cover of the target three-dimensional model.
For example, when the material of the frame of the target three-dimensional model is metal, the frame reflects the light emitted by the virtual light source, producing a highlight effect on part of its surface; when the material of the transparent cover plate of the target three-dimensional model is glass, the transparent cover plate likewise reflects the light emitted by the virtual light source, producing a highlight effect on part of its surface.
In one possible implementation, the virtual light source may include at least one of direct light and ambient light, although the virtual light source may be other types of light source arrangements, and the embodiments of the present disclosure are not limited in this respect.
Therefore, when the display view angle of the target three-dimensional model is adjusted based on the angle adjustment instruction, the illumination influence of the virtual light source on the target three-dimensional model in the current display view angle can be calculated in real time, so that the target three-dimensional model has different visual manifestations in different display view angles, and the display diversity of a playing interface is improved.
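As an illustration of how the reflected light can depend on the material and be recomputed for the current view angle, here is a simplified Blinn-Phong-style sketch; it is not the shading model of any particular renderer, and the material parameters for the metal frame and the glass cover are assumed values.

```kotlin
import kotlin.math.max
import kotlin.math.pow
import kotlin.math.sqrt

data class Vec3(val x: Float, val y: Float, val z: Float) {
    fun add(o: Vec3) = Vec3(x + o.x, y + o.y, z + o.z)
    fun dot(o: Vec3) = x * o.x + y * o.y + z * o.z
    fun normalized(): Vec3 { val len = sqrt(dot(this)); return Vec3(x / len, y / len, z / len) }
}

// Assumed material parameters: a metal frame reflects strongly, a glass cover gives a tighter highlight.
data class SurfaceMaterial(val specularStrength: Float, val shininess: Float)
val METAL_FRAME = SurfaceMaterial(specularStrength = 0.9f, shininess = 64f)
val GLASS_COVER = SurfaceMaterial(specularStrength = 0.6f, shininess = 128f)

// Specular term recomputed whenever the view direction (display view angle) changes.
fun reflectedIntensity(normal: Vec3, toLight: Vec3, toViewer: Vec3, material: SurfaceMaterial): Float {
    val half = toLight.add(toViewer).normalized()        // half-vector between light and view directions
    val nDotH = max(0f, normal.normalized().dot(half))
    return material.specularStrength * nDotH.pow(material.shininess)
}
```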
In an embodiment of the present disclosure, a method for rendering an initial three-dimensional model based on a song cover picture corresponding to a target audio may be as follows:
Identify the dominant hue of the song cover picture corresponding to the target audio, and perform color rendering processing on the frame of the initial three-dimensional model based on the dominant hue to obtain the target three-dimensional model corresponding to the target audio.
In implementation, after the song cover picture corresponding to the target audio is obtained, the dominant hue of the song cover picture can be identified based on a preset algorithm, and the frame of the initial three-dimensional model is then rendered in the color of the dominant hue, so that the target three-dimensional model corresponding to the target audio is obtained.
Therefore, for different target audio, when the identified dominant hues of the corresponding song cover pictures differ, target three-dimensional models with frames of different colors are displayed in the playing interfaces of those audio, which improves the display diversity of the audio playing interface and the visual effect of the playing interface.
In the embodiment of the disclosure, when the color rendering processing is performed based on the dominant hue of the song cover picture, components of the initial three-dimensional model other than the frame may also be rendered, so that they are rendered in different colors.
In the embodiment of the present disclosure, there may be various preset algorithms for identifying the dominant hue of the song cover picture corresponding to the target audio. One of them is described below:
Determine the hue value of each pixel in the song cover picture, then determine the hue corresponding to each pixel based on a pre-stored correspondence between hue values and hues, count the number of pixels corresponding to each hue, and take the hue with the largest number of pixels as the hue corresponding to the song cover picture.
Each hue also contains a plurality of colors with different saturation and/or different brightness, so the color of the hue corresponding to the song cover picture at a preset saturation and a preset brightness can be taken as the dominant hue corresponding to the song cover picture. This keeps the dominant hue saturated and vivid and improves the visual effect of the target three-dimensional model.
For example, when the hue corresponding to the song cover picture is determined to be red, the color of red at the preset saturation and the preset brightness may be determined as the dominant hue.
The preset saturation and the preset brightness may be set according to the actual situation; for example, both may be set to 0.6 so that the obtained dominant hue is more attractive. Of course, other values may also be used.
On this basis, finer processing can also be performed on the song cover picture. For example, the song cover picture can be enlarged by a factor of 10 × 10 to obtain an enlarged song cover picture, and the hue value of each pixel of the enlarged picture is then determined; the subsequent steps are the same as above and are not repeated here. In this way, a more accurate hue corresponding to the song cover picture can be obtained.
The method of determining the dominant hue of a song cover picture may also be other reasonable methods, and embodiments of the present disclosure are not limited in this regard.
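A sketch of the hue-bucketing approach described above, assuming packed ARGB pixels and a 12-bucket hue histogram; it is one possible reading of the described algorithm, not the patent's exact implementation.

```kotlin
import java.awt.Color

// Bucket every pixel by hue, take the most frequent bucket, then combine that hue with the
// preset saturation and brightness (0.6 in the example above) to keep the result vivid.
fun dominantHue(pixels: IntArray, presetSaturation: Float = 0.6f, presetBrightness: Float = 0.6f): Int {
    val bucketCount = 12
    val counts = IntArray(bucketCount)
    for (argb in pixels) {
        val r = (argb shr 16) and 0xFF
        val g = (argb shr 8) and 0xFF
        val b = argb and 0xFF
        val hue = Color.RGBtoHSB(r, g, b, null)[0]             // hue component in [0, 1)
        counts[((hue * bucketCount).toInt()) % bucketCount]++  // count pixels per hue bucket
    }
    val winner = counts.indices.maxByOrNull { counts[it] } ?: 0
    val hue = (winner + 0.5f) / bucketCount                    // centre of the winning hue bucket
    return Color.HSBtoRGB(hue, presetSaturation, presetBrightness)
}
```

The returned packed color could then be applied to the frame material of the initial three-dimensional model, and analogously to the initial background image.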
In addition to rendering the initial three-dimensional model, rendering may be performed on other display portions in the playback interface, where the corresponding processing may be as follows:
The playing interface corresponding to the target audio also displays a background image corresponding to the target audio, and it can be understood that in the playing interface, the background image corresponding to the target audio is positioned at the lower layer of the target three-dimensional model.
When the background image is displayed in the playing interface, the dominant hue of the song cover picture corresponding to the target audio can be identified, and then, based on the dominant hue, the color rendering processing is performed on the pre-stored initial background image, so as to obtain the background image corresponding to the target audio.
In implementation, after the song cover picture corresponding to the target audio is obtained, the dominant hue of the song cover picture can be identified based on a preset algorithm, and part or all of the pre-stored initial background image is then rendered in the color of the dominant hue, so that the background image corresponding to the target audio is obtained.
Therefore, for different target audio, when the identified dominant hues of the corresponding song cover pictures differ, background images of different colors are displayed in the playing interfaces of those audio, which improves the display diversity of the audio playing interface and the visual effect of the playing interface.
In one possible implementation, the initial background image may include a spectral image, where the spectral image may be in any reasonable form, for example, see the spectral image shown in fig. 1 and 2, which is composed of a plurality of vertical lines with varying lengths, although other forms of presentation are possible, and the embodiments of the present disclosure are not limited thereto.
When the color rendering processing is performed on the initial background image, the color rendering processing can be performed on the spectrum image in the initial background image, so that the background image corresponding to the target audio is obtained.
Therefore, for different target audio, when the identified dominant hues of the corresponding song cover pictures differ, background images containing spectrum images of different colors are displayed in the playing interfaces of those audio, which improves the display diversity of the playing interface.
In one possible implementation, the spectrum image in the background image corresponding to the target audio may be a static image, or may be a dynamic waveform image that changes with the loudness of the target audio. Referring to fig. 2 and 3, the spectrum images in fig. 2 and 3 correspond to different loudness levels during the dynamic change; the lengths of the vertical lines in the two figures differ, which gives the user a richer visual experience.
Of course, the spectrum image in the background image corresponding to the target audio may also be a dynamic waveform image that varies with other attributes of the target audio, which is not limited by the embodiments of the present disclosure.
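A sketch of one way the dynamic waveform could be driven by loudness, assuming PCM samples are available per animation frame and using RMS as the loudness measure; the normalisation factor and the bar model are illustrative assumptions.

```kotlin
import kotlin.math.min
import kotlin.math.sqrt

// Compute the RMS loudness of the latest block of samples and scale each bar's base height by it.
fun barHeights(samples: FloatArray, baseHeights: FloatArray, maxHeightPx: Float): FloatArray {
    val rms = sqrt(samples.fold(0f) { acc, s -> acc + s * s } / samples.size.coerceAtLeast(1))
    val loudness = min(1f, rms * 4f)                          // crude normalisation into [0, 1]
    return FloatArray(baseHeights.size) { i -> baseHeights[i] * loudness * maxHeightPx }
}
```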
In the embodiment of the present disclosure, the shapes of the initial three-dimensional model and the target three-dimensional model may be any reasonable shapes, for example, may be a three-dimensional model corresponding to a player, may be a three-dimensional model corresponding to a microphone, may be a three-dimensional model corresponding to a loudspeaker, or the like, which is not limited by the embodiment of the present disclosure. The following is an example of possible structural arrangements:
In one possible implementation, referring to fig. 1, the initial three-dimensional model may further include a record compartment located in the frame and connected to the frame, the record compartment having an opening, a transparent cover plate located in the opening of the record compartment and connected to the record compartment, and a vinyl record located at least partially in the cavity of the record compartment.
The initial three-dimensional model may include only one transparent cover plate located within the cavity of the record compartment, the shape of the transparent cover plate fitting the shape of the opening of the record compartment and the edge of the transparent cover plate being connected to the inner wall of the record compartment.
Alternatively, the initial three-dimensional model may include two transparent cover plates. The record compartment may have an annular structure whose interior is a cavity; the record compartment has two openings, the two ends of the cavity each communicating with the outside of the record compartment through one opening, and part of the outer wall of the record compartment may fit against the inner wall of the frame.
For the above structural arrangement of the initial three-dimensional model, the following processing may also be performed during rendering: render the initial three-dimensional model based on the song cover picture, display at least part of the song cover picture at the center of the vinyl record, and display the song cover picture and related information on the transparent cover plate.
In implementation, in the playing interface corresponding to the target audio, the vinyl record displayed in the target three-dimensional model is located in the cavity of the record compartment, and the user can see the vinyl record through the transparent cover plate. At least part of the song cover picture corresponding to the target audio is also displayed at the center of the vinyl record; for example, the whole song cover picture may be displayed at the center of the vinyl record, or a partial image of the central region of the song cover picture may be cropped and displayed at the center of the vinyl record.
The song cover picture corresponding to the target audio can also be displayed on the transparent cover plate, so that when the playing interface is displayed, the user sees not only the target three-dimensional model but also the song cover picture and lyric data displayed on its transparent cover plate, with the song cover picture also shown on the vinyl record. This strengthens the association between the playing interface and the target audio and improves the display diversity of the playing interface.
It will be appreciated that the song cover picture and lyric data displayed on the transparent cover plate do not completely obscure the vinyl record.
In one possible implementation manner, a playing time axis corresponding to the target audio is also displayed on the transparent cover plate, and the playing time length of the target audio is displayed on the playing time axis in real time, so that the display diversity of the playing interface is improved.
In one possible implementation, referring to fig. 1 and 4, a first strip-shaped hole is provided between the inner wall and the outer wall of the frame, and a second strip-shaped hole (not labeled in the figure) is provided between the inner wall and the outer wall of the record compartment, the second strip-shaped hole being opposite the first strip-shaped hole.
Based on this structure, the target three-dimensional model may also have the following functions: when the target audio is played, the vinyl record is located in the record compartment and is displayed as dynamically rotating; when a clicking operation on the pause control is detected, the vinyl record stops the dynamic rotation display and moves to a target position through the first strip-shaped hole and the second strip-shaped hole, wherein, when the vinyl record is at the target position, a preset part of the vinyl record is located outside the record compartment; when a clicking operation on the play control is detected, the vinyl record moves from the target position back into the record compartment and is displayed as dynamically rotating.
In the implementation, when the terminal receives a playing instruction of the target audio, a playing interface corresponding to the target audio is displayed in a display interface of the music player, and the target audio is automatically started to be played, and in the playing process of the target audio, the black rubber disc is always located in the film bin and dynamically rotates and displays all the time. If at least part of the picture of the song cover is displayed on the black rubber disc, the picture is also dynamically and automatically displayed along with the black rubber disc.
When a user clicks the pause control, the terminal detects clicking operation of the pause control, the target audio is paused, the pause control is changed into play control, the black disc stops dynamic autorotation display, if at least part of images of song cover images are displayed on the black disc, the black disc can be adjusted to an angle when the images are swung, then, an animation process that the black disc moves from a position in a film bin to a target position is displayed, namely, the black disc moves into a second bar hole and a first bar hole, and moves to a position outside the film bin through the second bar hole and the first bar hole, and at the moment, the animation process vividly shows the meaning of pausing the target audio.
Then, the user can click the playing control, the terminal detects the clicking operation of the playing control, the target audio is controlled to continue playing, the playing control is changed into a pause control, meanwhile, the black disc moves back into the film bin from the target position again, the black disc is automatically and automatically displayed in the film bin, and the meaning that the target audio continues playing is vividly shown.
The preset portion of the black matrix disc may be a half portion or a third portion of the black matrix disc, which is not limited in the embodiments of the present disclosure.
Referring to fig. 1 and 4, fig. 1 is a case where a black disc is located in a film cartridge when a target audio is played, and fig. 4 is a case where a black disc is located in a target position when a target audio is paused.
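The pause/resume animation described above is essentially a small state machine. The following TypeScript sketch shows one possible way to drive it; the DiscModel interface, the eject offset, and the animation durations are assumptions made for illustration and are not the patent's implementation.

```typescript
// Hedged sketch of the vinyl-disc behavior: rotate while playing, slide
// partway out of the record compartment through the strip-shaped holes on
// pause, slide back in on resume. Interface, offsets and durations are assumed.
type DiscState = 'spinning' | 'ejected';

interface DiscModel {
  setSpinning(spinning: boolean): void;               // toggle the rotation animation
  slideTo(offset: number, durationMs: number): void;  // animate along the strip-shaped holes
}

class VinylDiscController {
  private state: DiscState = 'spinning';
  // Offset at which the preset portion of the disc sits outside the
  // record compartment (assumed value, in model units).
  private static readonly EJECT_OFFSET = 0.5;

  constructor(private readonly disc: DiscModel) {}

  onPauseClicked(): void {
    if (this.state !== 'spinning') return;
    this.disc.setSpinning(false);                              // stop the rotation first
    this.disc.slideTo(VinylDiscController.EJECT_OFFSET, 400);  // move to the target position
    this.state = 'ejected';
  }

  onPlayClicked(): void {
    if (this.state !== 'ejected') return;
    this.disc.slideTo(0, 400);     // back into the record compartment
    this.disc.setSpinning(true);   // resume the dynamic rotation
    this.state = 'spinning';
  }
}
```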
In the embodiment of the present disclosure, the display setting of the playing interface may further be as follows:
Referring to fig. 5 and fig. 6, the playing interface further displays an operation layer located on top of the target three-dimensional model. The operation layer is provided with a plurality of operation controls, whose types may be the same as or different from those of the controls on the target three-dimensional model; the embodiments of the disclosure do not limit this.
In implementation, when the terminal receives a playing instruction for the target audio, it displays the playing interface corresponding to the target audio in the display interface of the music player and automatically starts playing the target audio. At this time, the target three-dimensional model is displayed in the display interface with the operation layer shown above it. Because the operation controls are gathered in the operation layer, the user can conveniently control the target audio through them.
In one possible implementation, to make it easier for the user to operate the target three-dimensional model and the controls on it, the operation layer can be closed through a preset operation so that it does not get in the way.
A first way to close the operation layer: the operation layer is closed when a click operation on a target area of the playing interface is detected. The target area may be the area of the display interface other than the plurality of operation controls, which is not particularly limited by the embodiments of the present disclosure.
In implementation, when the terminal receives a playing instruction for the target audio, it displays the playing interface corresponding to the target audio in the display interface of the music player and automatically starts playing the target audio; the display interface shows the target three-dimensional model with the operation layer above it. The user can then click the target area to close the operation layer. The playing interfaces shown in fig. 1, 2, 3 and 4 are the interface after the operation layer has been closed, which makes it more convenient for the user to operate the target three-dimensional model.
A second way to close the operation layer: the operation layer is closed when the duration for which the playing interface corresponding to the target audio has been displayed reaches a preset duration threshold.
In implementation, when the terminal receives a playing instruction for the target audio, it displays the playing interface corresponding to the target audio in the display interface of the music player and automatically starts playing the target audio; the display interface shows the target three-dimensional model with the operation layer above it. When the display duration reaches the preset duration threshold, the operation layer is closed automatically.
The preset duration threshold may be 5 seconds, 10 seconds, 12 seconds, etc., which is not limited by the embodiments of the present disclosure.
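Both ways of dismissing the operation layer can be expressed in a few lines. The sketch below assumes a web-based interface; the selectors, the class name, and the 10-second value are illustrative assumptions only.

```typescript
// Hedged sketch of both dismissal paths for the operation layer.
// The selectors, class name, and 10-second value are illustrative only.
const operationLayer = document.querySelector<HTMLElement>('#operation-layer')!;
const AUTO_CLOSE_MS = 10_000; // e.g. 10 s; the disclosure leaves the threshold open

function closeOperationLayer(): void {
  operationLayer.style.display = 'none';
}

// First way: close when the target area (anything outside the operation
// controls) is clicked.
document.addEventListener('click', (event) => {
  const target = event.target as HTMLElement;
  if (!target.closest('.operation-control')) {
    closeOperationLayer();
  }
});

// Second way: close automatically once the playing interface has been
// displayed for the preset duration.
window.setTimeout(closeOperationLayer, AUTO_CLOSE_MS);
```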
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
According to the above scheme, on one hand, instead of merely displaying picture information in the playing interface corresponding to the target audio, a target three-dimensional model is displayed, and different target three-dimensional models are rendered from the different song cover pictures corresponding to different target audio, so that the variety of target three-dimensional models improves the display diversity of the playing interface; on the other hand, the controls provided on the operation panel of the target three-dimensional model (the play/pause control, the previous control and the next control) and the song cover picture and lyric data displayed on the transparent cover plate also improve the display diversity of the playing interface.
An embodiment of the present disclosure provides a display apparatus for an audio playing interface, where the apparatus may be a computer device in the foregoing embodiment, as shown in fig. 7, and the apparatus includes:
An obtaining module 710, configured to obtain a song cover picture corresponding to the target audio;
The three-dimensional model determining module 720 is configured to perform rendering processing on a pre-stored initial three-dimensional model based on the song cover picture corresponding to the target audio to obtain a target three-dimensional model corresponding to the target audio, where the initial three-dimensional model and the target three-dimensional model each include a frame, a transparent cover plate, and an operation panel, the transparent cover plate and the operation panel are both located in the frame and connected to the frame, and a play/pause control, a previous control, and a next control are provided on the operation panel;
The display module 730 is configured to display a playing interface corresponding to the target audio, wherein the playing interface displays the target three-dimensional model, and a transparent cover plate of the target three-dimensional model displays a song cover picture corresponding to the target audio and lyric data corresponding to the target audio.
In one possible implementation, the display module 730 is further configured to:
Acquiring a finger contact position and a motion trail;
generating angle adjustment information corresponding to the target three-dimensional model based on the finger contact position and the motion trail;
And generating an angle adjustment instruction based on the angle adjustment information corresponding to the target three-dimensional model, and adjusting the display view angle of the target three-dimensional model based on the angle adjustment instruction.
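A minimal sketch of this touch-driven view-angle adjustment follows, assuming a web view with touch events; the RotatableModel interface and the sensitivity constant are assumptions, not part of the disclosure.

```typescript
// Hedged sketch of turning a finger drag into an angle adjustment for the
// target three-dimensional model. Interface and sensitivity are assumed.
interface RotatableModel {
  rotateBy(deltaYawRad: number, deltaPitchRad: number): void;
}

const ROTATION_SENSITIVITY = 0.005; // radians per pixel of finger travel (assumed)

function attachDragRotation(surface: HTMLElement, model: RotatableModel): void {
  let last: { x: number; y: number } | null = null;

  surface.addEventListener('touchstart', (e) => {
    const t = e.touches[0];
    last = { x: t.clientX, y: t.clientY }; // finger contact position
  });

  surface.addEventListener('touchmove', (e) => {
    if (!last) return;
    const t = e.touches[0];
    const dx = t.clientX - last.x; // motion trail since the last sample
    const dy = t.clientY - last.y;
    // Angle adjustment information derived from the contact position and trail.
    model.rotateBy(dx * ROTATION_SENSITIVITY, dy * ROTATION_SENSITIVITY);
    last = { x: t.clientX, y: t.clientY };
  });

  surface.addEventListener('touchend', () => { last = null; });
}
```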
In one possible implementation, the initial three-dimensional model is provided with a virtual light source, and the display module 730 is further configured to:
Determining reflected light of the frame to the light rays emitted by the virtual light source based on the material of the frame of the target three-dimensional model;
And determining the reflected light of the transparent cover plate to the light rays emitted by the virtual light source based on the material of the transparent cover plate of the target three-dimensional model.
In one possible implementation, the virtual light source includes at least one of direct light and ambient light.
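One way to realize material-dependent reflection of a direct light and an ambient light is shown below, using three.js as an example renderer; the patent does not name a rendering engine, and the material parameter values here are assumptions for illustration.

```typescript
import * as THREE from 'three';

// Hedged sketch: per-part materials let the renderer compute different
// reflections for the frame and the transparent cover plate under the same
// virtual light sources. Engine choice and material values are assumptions.
const scene = new THREE.Scene();

// Virtual light sources: a direct light plus an ambient light.
scene.add(new THREE.DirectionalLight(0xffffff, 1.0));
scene.add(new THREE.AmbientLight(0xffffff, 0.4));

// Frame: an opaque, slightly metallic material, so its reflection of the
// direct light differs from that of the cover plate.
const frameMaterial = new THREE.MeshStandardMaterial({
  color: 0x3355aa, // would come from the cover picture's dominant hue
  metalness: 0.6,
  roughness: 0.35,
});

// Transparent cover plate: a physically based material with transmission,
// so light is partly reflected and partly passes through to the disc below.
const coverPlateMaterial = new THREE.MeshPhysicalMaterial({
  transmission: 0.9,
  roughness: 0.05,
  transparent: true,
  opacity: 1.0,
});
```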
In one possible implementation, the three-dimensional model determining module 720 is configured to:
identifying a dominant hue of the song cover picture corresponding to the target audio;
And performing color rendering processing on the frame of the initial three-dimensional model based on the dominant hue to obtain the target three-dimensional model corresponding to the target audio.
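A simple way to obtain a dominant hue from the cover picture is to downsample it and average the pixels, as in the TypeScript sketch below; a production system would more likely use proper color quantization, so treat this as an illustrative approximation.

```typescript
// Hedged sketch: approximate a dominant hue by downsampling the cover picture
// and averaging its pixels with the Canvas API.
async function dominantHue(coverUrl: string): Promise<string> {
  const img = new Image();
  img.crossOrigin = 'anonymous'; // needed so getImageData is allowed (CORS permitting)
  img.src = coverUrl;
  await img.decode(); // wait until the picture has loaded

  const size = 32; // downsample to keep the scan cheap
  const canvas = document.createElement('canvas');
  canvas.width = size;
  canvas.height = size;
  const ctx = canvas.getContext('2d')!;
  ctx.drawImage(img, 0, 0, size, size);

  const { data } = ctx.getImageData(0, 0, size, size);
  let r = 0, g = 0, b = 0;
  const pixels = size * size;
  for (let i = 0; i < data.length; i += 4) {
    r += data[i];
    g += data[i + 1];
    b += data[i + 2];
  }
  return `rgb(${Math.round(r / pixels)}, ${Math.round(g / pixels)}, ${Math.round(b / pixels)})`;
}
```

The returned color could then be applied to the frame material (for instance, the frameMaterial in the lighting sketch above), which corresponds to the color rendering step described here.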
In one possible implementation manner, the initial three-dimensional model is a three-dimensional model corresponding to a handheld interactive style player.
In a possible implementation manner, a background image corresponding to the target audio is also displayed in a playing interface corresponding to the target audio;
the display module 730 is further configured to:
identifying a dominant hue of the song cover picture corresponding to the target audio;
and performing color rendering processing on the pre-stored initial background image based on the dominant hue to obtain a background image corresponding to the target audio.
In one possible implementation, the initial background image comprises a spectral image;
the display module 730 is further configured to:
And performing color rendering processing on the spectral image based on the dominant hue to obtain a background image corresponding to the target audio.
In one possible implementation, the spectral image is a dynamic waveform image that varies with the loudness of the target audio.
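A dynamic waveform that follows the target audio's loudness can be driven from an analyser node, as in the hedged Web Audio sketch below; the element IDs, fftSize, and styling are illustrative assumptions.

```typescript
// Hedged sketch: drive the background waveform from the target audio's
// loudness with the Web Audio API. IDs, fftSize, and styling are assumed.
const audioEl = document.querySelector<HTMLAudioElement>('#target-audio')!;
const waveCanvas = document.querySelector<HTMLCanvasElement>('#background-wave')!;
const waveCtx = waveCanvas.getContext('2d')!;

const audioCtx = new AudioContext();
const sourceNode = audioCtx.createMediaElementSource(audioEl);
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 256;
sourceNode.connect(analyser);
analyser.connect(audioCtx.destination); // keep the audio audible

const samples = new Uint8Array(analyser.fftSize);

function drawWave(hue: string): void {
  // Time-domain samples swing further from the centre value (128) on louder
  // passages, so the drawn waveform visibly follows the audio's loudness.
  analyser.getByteTimeDomainData(samples);
  waveCtx.clearRect(0, 0, waveCanvas.width, waveCanvas.height);
  waveCtx.strokeStyle = hue; // color rendering based on the cover's dominant hue
  waveCtx.beginPath();
  samples.forEach((v, i) => {
    const x = (i / samples.length) * waveCanvas.width;
    const y = (v / 255) * waveCanvas.height;
    if (i === 0) {
      waveCtx.moveTo(x, y);
    } else {
      waveCtx.lineTo(x, y);
    }
  });
  waveCtx.stroke();
  requestAnimationFrame(() => drawWave(hue));
}

// The hue would come from the cover-picture analysis described above.
audioEl.addEventListener('play', () => drawWave('rgb(80, 120, 200)'), { once: true });
```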
In one possible implementation, the initial three-dimensional model further includes a record compartment and a vinyl disc, the record compartment being located in and connected to the frame and having an opening, the transparent cover plate being located within the opening of the record compartment and connected to the record compartment, and the vinyl disc being located at least partially in the cavity of the record compartment;
the display module 730 is further configured to:
displaying at least part of the song cover picture at the center of the vinyl disc, and displaying the song cover picture on the transparent cover plate.
In one possible implementation manner, a playing time axis corresponding to the target audio is also displayed on the transparent cover plate.
In one possible implementation, the display module 730 is further configured to:
while the target audio is playing, displaying the vinyl disc located in the record compartment and rotating dynamically;
when a click operation on the pause control is detected, stopping the dynamic rotation of the vinyl disc and moving it to a target position, a preset portion of the vinyl disc lying outside the record compartment when the disc is at the target position;
when a click operation on the play control is detected, moving the vinyl disc from the target position back into the record compartment and resuming its dynamic rotation.
In a possible implementation manner, the playing interface further displays an operation layer, the operation layer is located at the upper layer of the target three-dimensional model, and a plurality of operation controls are arranged on the operation layer;
the display module 730 is further configured to:
And executing the instruction corresponding to the operation control when the clicking operation of the operation control is detected.
In one possible implementation, the display module 730 is further configured to:
And closing the operation layer when detecting the clicking operation of the target area in the playing interface.
In one possible implementation, the display module 730 is further configured to:
And closing the operation layer when the duration for which the playing interface corresponding to the target audio has been displayed reaches a preset duration threshold.
It should be noted that: in the display device of the audio playing interface provided in the above embodiment, when the playing interface is displayed, only the division of the above functional modules is used for illustration, in practical application, the above functional allocation may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the display device of the audio playing interface provided in the above embodiment and the display method embodiment of the audio playing interface belong to the same concept, and detailed implementation processes of the display device and the display method embodiment of the audio playing interface are detailed in the method embodiment, and are not repeated here.
Fig. 8 shows a block diagram of a terminal 800 provided in an exemplary embodiment of the present disclosure. The terminal may be the computer device in the above-described embodiments. The terminal 800 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 800 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 800 includes: a processor 801 and a memory 802.
The processor 801 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 801 may be implemented in at least one hardware form among a DSP (digital signal processor), an FPGA (field-programmable gate array), and a PLA (programmable logic array). The processor 801 may also include a main processor and a coprocessor: the main processor, also referred to as a CPU (central processing unit), handles data in the awake state, while the coprocessor is a low-power processor that handles data in the standby state. In some embodiments, the processor 801 may integrate a GPU (graphics processing unit) responsible for rendering and drawing the content to be shown on the display screen. In some embodiments, the processor 801 may also include an AI (artificial intelligence) processor for handling computing operations related to machine learning.
Memory 802 may include one or more computer-readable storage media, which may be non-transitory. Memory 802 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 802 is used to store at least one instruction for execution by processor 801 to implement the display method of the audio playback interface provided by the method embodiments in the present disclosure.
In some embodiments, the terminal 800 may further optionally include: a peripheral interface 803, and at least one peripheral. The processor 801, the memory 802, and the peripheral interface 803 may be connected by a bus or signal line. Individual peripheral devices may be connected to the peripheral device interface 803 by buses, signal lines, or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 804, a display 805, a camera 806, audio circuitry 807, a positioning component 808, and a power supply 809.
The peripheral interface 803 may be used to connect at least one input/output (I/O) related peripheral device to the processor 801 and the memory 802. In some embodiments, the processor 801, the memory 802, and the peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral interface 803 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 804 is used to receive and transmit RF (radio frequency) signals, also known as electromagnetic signals. The radio frequency circuit 804 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 804 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 804 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 804 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 804 may also include NFC (Near Field Communication) related circuitry, which is not limited by the present disclosure.
The display screen 805 is used to display a UI (user interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 805 is a touch display, it also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 801 as a control signal for processing. In this case, the display 805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 805, providing the front panel of the terminal 800; in other embodiments, there may be at least two displays 805, disposed on different surfaces of the terminal 800 or in a folded design; in still other embodiments, the display 805 may be a flexible display disposed on a curved or folded surface of the terminal 800. The display 805 may even be arranged in an irregular, non-rectangular pattern, that is, an irregularly shaped screen. The display 805 may be made of materials such as an LCD (liquid crystal display) or an OLED (organic light-emitting diode).
The camera assembly 806 is used to capture images or video. Optionally, the camera assembly 806 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused for a background blurring function, and the main camera and the wide-angle camera can be fused for panoramic shooting, VR (virtual reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 806 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
Audio circuitry 807 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, inputting the electric signals to the processor 801 for processing, or inputting the electric signals to the radio frequency circuit 804 for voice communication. For stereo acquisition or noise reduction purposes, a plurality of microphones may be respectively disposed at different portions of the terminal 800. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuit 807 may also include a headphone jack.
The positioning component 808 is used to locate the current geographic position of the terminal 800 for navigation or LBS (location-based services). The positioning component 808 may be a positioning component based on the GPS (Global Positioning System), the BeiDou system, the GLONASS system, or the Galileo system.
A power supply 809 is used to power the various components in the terminal 800. The power supply 809 may be an alternating current, direct current, disposable battery, or rechargeable battery. When the power supply 809 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 800 also includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: acceleration sensor 811, gyroscope sensor 812, pressure sensor 813, fingerprint sensor 814, optical sensor 815, and proximity sensor 816.
The acceleration sensor 811 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 800. For example, the acceleration sensor 811 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 801 may control the display screen 805 to display a user interface in a landscape view or a portrait view based on the gravitational acceleration signal acquired by the acceleration sensor 811. Acceleration sensor 811 may also be used for the acquisition of motion data of a game or user.
The gyro sensor 812 may detect the body direction and rotation angle of the terminal 800, and may cooperate with the acceleration sensor 811 to collect the user's 3D motion on the terminal 800. Based on the data collected by the gyro sensor 812, the processor 801 may implement the following functions: motion sensing (for example, changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 813 may be disposed at a side frame of the terminal 800 and/or at a lower layer of the display 805. When the pressure sensor 813 is disposed on a side frame of the terminal 800, a grip signal of the terminal 800 by a user may be detected, and the processor 801 performs left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 813. When the pressure sensor 813 is disposed at the lower layer of the display screen 805, the processor 801 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 805. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 814 is used to collect a fingerprint of a user, and the processor 801 identifies the identity of the user based on the fingerprint collected by the fingerprint sensor 814, or the fingerprint sensor 814 identifies the identity of the user based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 801 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 814 may be provided on the front, back, or side of the terminal 800. When a physical key or vendor Logo is provided on the terminal 800, the fingerprint sensor 814 may be integrated with the physical key or vendor Logo.
The optical sensor 815 is used to collect the ambient light intensity. In one embodiment, the processor 801 may control the display brightness of the display screen 805 based on the intensity of ambient light collected by the optical sensor 815. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 805 is turned up; when the ambient light intensity is low, the display brightness of the display screen 805 is turned down. In another embodiment, the processor 801 may also dynamically adjust the shooting parameters of the camera module 806 based on the ambient light intensity collected by the optical sensor 815.
A proximity sensor 816, also referred to as a distance sensor, is typically provided on the front panel of the terminal 800. The proximity sensor 816 is used to collect the distance between the user and the front of the terminal 800. In one embodiment, when the proximity sensor 816 detects that the distance between the user and the front of the terminal 800 gradually decreases, the processor 801 controls the display 805 to switch from the bright screen state to the off screen state; when the proximity sensor 816 detects that the distance between the user and the front surface of the terminal 800 gradually increases, the processor 801 controls the display 805 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 8 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
Fig. 9 is a schematic structural diagram of a server provided in an embodiment of the disclosure. The server 900 may vary considerably in configuration and performance, and may include one or more processors (central processing units, CPU) 901 and one or more memories 902, where the memory 902 stores at least one instruction that is loaded and executed by the processor 901 to implement the methods provided by the foregoing method embodiments. Of course, the server may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing the functions of the device, which are not described in detail here.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as a memory including instructions executable by a processor in the terminal to perform the display method of the audio playing interface in the above embodiments. The computer-readable storage medium may be non-transitory. For example, the computer-readable storage medium may be a ROM (read-only memory), a RAM (random access memory), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
It should be noted that, the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals (including but not limited to signals transmitted between a user terminal and other devices, etc.) related to the present disclosure are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of the related data is required to comply with the relevant laws and regulations and standards of the relevant countries and regions. For example, references in the present disclosure to "song cover picture corresponding to target audio", "lyric data corresponding to target audio", "initial three-dimensional model", and the like are all acquired with sufficient authorization.
The foregoing description of the preferred embodiments of the present disclosure is provided for the purpose of illustration only, and is not intended to limit the disclosure to the particular embodiments disclosed, but on the contrary, the intention is to cover all modifications, equivalents, alternatives, and alternatives falling within the spirit and principles of the disclosure.

Claims (19)

1. A method for displaying an audio playback interface, the method comprising:
acquiring a song cover picture corresponding to the target audio;
Rendering an initial three-dimensional model stored in advance based on a song cover picture corresponding to the target audio to obtain a target three-dimensional model corresponding to the target audio, wherein the initial three-dimensional model and the target three-dimensional model comprise a frame, a transparent cover plate and an operation panel, the transparent cover plate and the operation panel are both positioned in the frame and connected with the frame, and a play/pause control, a previous control and a next control are arranged on the operation panel;
Displaying a playing interface corresponding to the target audio, wherein the playing interface is provided with the target three-dimensional model, and a transparent cover plate of the target three-dimensional model is provided with a song cover picture corresponding to the target audio and lyric data corresponding to the target audio.
2. The method according to claim 1, wherein the method further comprises:
Acquiring the position and the motion trail of a finger contact;
generating angle adjustment information corresponding to the target three-dimensional model based on the finger contact point position and the motion trail;
And generating an angle adjustment instruction based on the angle adjustment information corresponding to the target three-dimensional model, and adjusting the display view angle of the target three-dimensional model based on the angle adjustment instruction.
3. The method of claim 2, wherein the initial three-dimensional model is provided with a virtual light source, the method further comprising:
Determining reflected light of the frame to the light rays emitted by the virtual light source based on the material of the frame of the target three-dimensional model;
And determining the reflected light of the transparent cover plate to the light rays emitted by the virtual light source based on the material of the transparent cover plate of the target three-dimensional model.
4. The method of claim 3, wherein the virtual light source comprises at least one of direct light and ambient light.
5. The method according to claim 1, wherein the rendering the pre-stored initial three-dimensional model based on the song cover picture corresponding to the target audio to obtain the target three-dimensional model corresponding to the target audio includes:
identifying a dominant hue of the song cover picture corresponding to the target audio;
And performing color rendering processing on the frame of the initial three-dimensional model based on the dominant hue to obtain a target three-dimensional model corresponding to the target audio.
6. The method of claim 1, wherein the initial three-dimensional model is a three-dimensional model corresponding to a handheld interactive style player.
7. The method of claim 1, wherein the playback interface corresponding to the target audio further displays a background image corresponding to the target audio;
The method further comprises the steps of:
identifying a dominant hue of the song cover picture corresponding to the target audio;
and performing color rendering processing on the pre-stored initial background image based on the dominant hue to obtain a background image corresponding to the target audio.
8. The method of claim 7, wherein the initial background image comprises a spectral image;
The color rendering processing is performed on the pre-stored initial background image based on the main tone to obtain a background image corresponding to the target audio, and the method comprises the following steps:
And performing color rendering processing on the spectral image based on the dominant hue to obtain a background image corresponding to the target audio.
9. The method of claim 8 wherein the spectral image is a dynamic waveform image that varies with the loudness of the target audio.
10. The method of claim 1, wherein the initial three-dimensional model further comprises a record compartment and a vinyl disc, the record compartment being positioned in and connected to the frame and having an opening, the transparent cover plate being positioned within the opening of the record compartment and connected to the record compartment, and the vinyl disc being positioned at least partially within the cavity of the record compartment;
the rendering processing is performed on the pre-stored initial three-dimensional model based on the song cover picture corresponding to the target audio to obtain a target three-dimensional model corresponding to the target audio, and the rendering processing comprises the following steps:
displaying at least part of the song cover picture at the center of the vinyl disc, and displaying the song cover picture on the transparent cover plate.
11. The method of claim 10, wherein a playback timeline corresponding to the target audio is also displayed on the transparent cover plate.
12. The method according to claim 10, wherein the method further comprises:
when the target audio is played, the vinyl disc is located in the record compartment and is displayed rotating dynamically;
when a click operation on the pause control is detected, the vinyl disc stops its dynamic rotation and moves to a target position, wherein when the vinyl disc is at the target position, a preset portion of the vinyl disc lies outside the record compartment;
When a click operation on the play control is detected, the vinyl disc moves from the target position back into the record compartment and resumes its dynamic rotation.
13. The method of claim 1, wherein the playback interface further displays an operation layer, the operation layer being located at an upper layer of the target three-dimensional model, the operation layer being provided with a plurality of operation controls;
The method further comprises the steps of:
And executing the instruction corresponding to the operation control when the clicking operation of the operation control is detected.
14. The method of claim 13, wherein the method further comprises:
And closing the operation layer when detecting the clicking operation of the target area in the playing interface.
15. The method of claim 13, wherein the method further comprises:
And closing the operation layer when the duration for which the playing interface corresponding to the target audio has been displayed reaches a preset duration threshold.
16. A display device for an audio playback interface, the device comprising:
The acquisition module is used for acquiring a song cover picture corresponding to the target audio;
The three-dimensional model determining module is used for rendering an initial three-dimensional model stored in advance based on a song cover picture corresponding to the target audio to obtain a target three-dimensional model corresponding to the target audio, wherein the initial three-dimensional model and the target three-dimensional model comprise a frame, a transparent cover plate and an operation panel, the transparent cover plate and the operation panel are both positioned in the frame and connected with the frame, and a play/pause control, a previous control and a next control are arranged on the operation panel;
The display module is used for displaying a playing interface corresponding to the target audio, the target three-dimensional model is displayed in the playing interface, and song cover pictures corresponding to the target audio and lyric data corresponding to the target audio are displayed on a transparent cover plate of the target three-dimensional model.
17. A computer device comprising a processor and a memory having stored therein at least one instruction that is loaded and executed by the processor to implement the operations performed by the method of displaying an audio playback interface as claimed in any one of claims 1 to 15.
18. A computer readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement operations performed by the method of displaying an audio playback interface as claimed in any one of claims 1 to 15.
19. A computer program product comprising at least one instruction for loading and executing by a processor to perform the operations performed by the method for displaying an audio playback interface according to any one of claims 1 to 15.