CN115794021A - Audio content display method and electronic equipment

Info

Publication number: CN115794021A
Application number: CN202111062371.9A
Authority: CN (China)
Prior art keywords: application, audio content, audio, electronic device, interface
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 黄亚龙, 魏征, 张仕钊
Original and current assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an audio content display method and an electronic device, relates to the field of electronic technologies, and improves the operation experience of a user using audio applications. The method includes the following steps: receiving a first operation on a first application, the first operation instructing the first application to show a plurality of audio contents of a second application; and, in response to the first operation, displaying the plurality of audio contents of the second application through an interface of the first application. With this audio content display method, when a user would otherwise need to switch among a plurality of audio applications to obtain audio content, the user can obtain the audio content in the other applications (namely, the second application) by operating only the first application, without operating those other applications. This prevents the erroneous operations that arise from operating a plurality of applications and improves the user experience.

Description

Audio content display method and electronic equipment
Technical Field
The present application relates to the field of electronic technologies, and in particular, to an audio content display method and an electronic device.
Background
In everyday life, audio applications are among the applications most commonly used. A user may listen to audio content such as music, radio, audio novels, talk shows, drama, and audio news through such applications. Since a single audio application may not contain all of the audio content a user wants to listen to, the user often needs to use different audio applications for different audio content (for example, listening to music through audio application A and to audio news through audio application B).
However, the operation pages of different audio applications often differ, and a user who operates one audio application with the habits formed on a commonly used one (for example, operating audio application B with the habits formed on audio application A) is prone to erroneous operations, which degrades the user experience.
Disclosure of Invention
The application provides an audio content display method and an electronic device, which can improve the operation experience of a user using audio applications. To achieve this purpose, the following technical solutions are adopted:
In a first aspect, the present application provides an audio content presentation method applied to an electronic device on which a first application and a second application are installed. The method includes: receiving a first operation, the first operation instructing the first application to present a plurality of audio contents of the second application; and, in response to the first operation, presenting the plurality of audio contents of the second application through an interface of the first application.
The audio content described herein refers to audio information or audio works, for example songs, drama, vocal works, audio novels, talk shows, live audio, audio news, or audio readings.
In the prior art, when one audio application does not contain all of the audio content a user wants to play, the user has to operate different audio applications to play it, and because the operating modes of different audio applications often differ, the user is prone to erroneous operations, which degrades the user experience. With the method provided by the application, the user can obtain the audio content in the other applications (the second application) by operating only the first application, without operating those other applications; this prevents erroneous operations caused by operating a plurality of applications and improves the user experience.
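The following Java sketch illustrates the flow described in the first aspect. It is a minimal illustration only, under assumptions made for this example: the AudioSourceClient interface, the AudioAggregatorApp class, and all method names are hypothetical and are not taken from the original disclosure.

```java
// Minimal sketch of the first aspect. The AudioSourceClient bridge and all
// names below are illustrative assumptions, not part of the disclosure.
import java.util.List;

public class AudioAggregatorApp {

    /** Hypothetical bridge to the second application's audio catalogue. */
    public interface AudioSourceClient {
        List<AudioContent> listAudioContents(); // the "plurality of audio contents"
        void play(String audioContentId);       // used later for the first indication
    }

    public static class AudioContent {
        public final String id;
        public final String title;
        public AudioContent(String id, String title) { this.id = id; this.title = title; }
    }

    private AudioSourceClient selectedSource; // the second application

    /** First operation: the user selects the second application as the audio source. */
    public void onFirstOperation(AudioSourceClient secondApplication) {
        this.selectedSource = secondApplication;
        // In response to the first operation, show the second application's
        // audio contents through the first application's own interface.
        List<AudioContent> contents = secondApplication.listAudioContents();
        renderOnOwnInterface(contents);
    }

    private void renderOnOwnInterface(List<AudioContent> contents) {
        contents.forEach(c -> System.out.println("Showing: " + c.title));
    }
}
```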
In one possible implementation, the method further includes: receiving a second operation instructing the electronic device to play target audio content, where the target audio content is any one of the plurality of audio contents.
In one possible implementation, the method further includes: in response to the second operation, sending, by the first application, a first indication to the second application to instruct the second application to play the target audio content, so that the second application plays the target audio content.
With this method, when the user wants to play an audio content displayed by the first application, the user only needs to operate the first application: the first application sends an indication to the second application, and the second application plays the corresponding audio content. The user operates only the first application throughout the whole process and does not need to operate any other application, which further improves the user experience.
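Continuing the sketch above, the following hypothetical method (which would live inside the AudioAggregatorApp class) shows how the second operation can be forwarded to the second application as the first indication; the play() call is an assumed stand-in for whatever inter-application mechanism is actually used.

```java
/** Second operation: the user picks a target audio content in the first application. */
public void onSecondOperation(AudioContent target) {
    if (selectedSource == null) {
        throw new IllegalStateException("No audio source (second application) selected");
    }
    // The user only ever operates the first application; the play request
    // (the "first indication") is forwarded to the second application,
    // which then performs the actual playback.
    selectedSource.play(target.id);
}
```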
In a possible implementation manner, the interface of the first application may include a first page, and the displaying the plurality of audio contents through the interface of the first application may include: the first set of audio content is presented on the first page according to content characteristics of the first set of audio content, the plurality of audio content including the first set of audio content.
Optionally, before presenting the first set of audio content on the first page according to the content characteristics of the first set of audio content, the method may further include: content characteristics of the first set of audio content are obtained.
In another possible implementation manner, the interface of the first application may include a second page, and the displaying of the plurality of audio contents through the interface of the first application may include: presenting a second set of audio content on the second page according to the account characteristics of the second application, the plurality of audio content including the second set of audio content.
Optionally, before presenting the second set of audio content on the second page according to the account feature of the second application, the method may further include: and acquiring account characteristics of the second application.
In another possible implementation manner, the interface of the first application may include a first page and a second page, and the displaying of the plurality of audio contents through the interface of the first application may include: showing a first set of audio content on the first page according to content characteristics of the first set of audio content; and showing a second set of audio content on the second page according to account characteristics of the second application.
In the method provided by the application, the first application can respectively display the audio content through the content characteristics of the audio content and the account characteristics of the application, so that a user can conveniently check or play the audio content through a corresponding page, and the user experience is further improved.
Optionally, the content characteristics include at least one of audio type, album information, song list information, track information, list information, singer information, audio set information, and station information.
Optionally, the account characteristics may include at least one of song list information collected by the account, song list information self-created by the account, audio content information purchased by the account, audio content information recently played by the account, and audio content information liked by the account.
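As a rough illustration of the two kinds of characteristics listed above, the following Java sketch models them as plain data holders. The field names are illustrative and cover only a subset of the options named in the text.

```java
// Illustrative data holders for the characteristics described above.
import java.util.List;

class ContentCharacteristics {
    String audioType;   // e.g. song, radio drama, audio news
    String albumInfo;
    String songListInfo;
    String singerInfo;
    String stationInfo;
}

class AccountCharacteristics {
    List<String> collectedSongLists;   // song lists collected by the account
    List<String> selfCreatedSongLists; // song lists created by the account itself
    List<String> purchasedAudio;       // audio content purchased by the account
    List<String> recentlyPlayed;       // audio content recently played by the account
    List<String> likedAudio;           // audio content liked by the account
}
```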
In a possible implementation manner, the presenting of the first group of audio contents on the first page according to the content characteristics of the first group of audio contents may include: classifying the first set of audio content according to its content characteristics; and presenting the classified first set of audio content on the first page.
In the method provided by the application, the first application can classify the audio contents and display the audio contents in a classified manner through the content characteristics of the audio contents, so that a user can conveniently find interested audio contents, and the user experience is further improved.
In a possible implementation manner, the presenting of a second group of audio content on the second page according to the account feature of the second application may include: classifying the second set of audio content according to the account characteristics of the second application; and presenting the classified second set of audio content on the second page.
In the method provided by the application, the first application can classify the audio content and display the audio content in a classified manner through the account characteristics, so that a user can conveniently find interested audio content, and the user experience is further improved.
In one possible implementation, the method further includes: adjusting the size of the interface of the first application according to a screen size.
In one possible implementation, before adjusting the size of the interface of the first application according to the screen size, the method may further include: acquiring the screen size of the electronic device.
In the method provided by the application, the first application can adjust its interface size according to the screen size of the electronic device, so that the interface size of the first application matches the screen size of the electronic device, which further improves the user experience.
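A minimal sketch of such screen-size adaptation is shown below; the Size type, the breakpoint value, and the scaling policy are assumptions made for this example and are not specified in the disclosure.

```java
// Illustrative screen-size adaptation for the first application's interface.
public class InterfaceSizing {

    public record Size(int widthPx, int heightPx) {}

    /** Derive the first application's window size from the device's screen size. */
    public static Size resizeInterface(Size screenSize) {
        // Example policy: fill small displays, use a fixed-ratio window on wide
        // in-vehicle displays. The 1280 px threshold is an arbitrary assumption.
        if (screenSize.widthPx() <= 1280) {
            return screenSize;
        }
        return new Size((int) (screenSize.widthPx() * 0.8), screenSize.heightPx());
    }
}
```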
In a second aspect, the present application further provides an electronic device comprising a processor and a memory coupled to the processor, the processor configured to: receiving a first operation for instructing the first application to show a plurality of audio contents of the second application; in response to the first operation, the plurality of audio contents are presented through an interface of the first application.
In one possible implementation, the processor is further configured to: and receiving a second operation, wherein the second operation is used for instructing the electronic equipment to play target audio content, and the target audio content is any one of the plurality of audio contents.
In one possible implementation, the processor is further configured to: in response to the second operation, sending, by the first application, a first indication to the second application, the first indication instructing the second application to play the target audio content, so that the second application plays the target audio content.
In a possible implementation manner, the interface of the first application may include a first page, and the processor is specifically configured to: the first set of audio content is presented on the first page according to content characteristics of the first set of audio content, the plurality of audio content including the first set of audio content.
Optionally, the processor is further configured to: content characteristics of the first set of audio content are obtained.
In another possible implementation manner, the interface of the first application may include a second page, and the processor is specifically configured to: presenting a second set of audio content on the second page according to the account characteristics of the second application, the plurality of audio content including the second set of audio content.
Optionally, the processor is further configured to: and acquiring account characteristics of the second application.
In another possible implementation manner, the interface of the first application may include a first page and a second page, and the processor is specifically configured to: showing a first set of audio content on the first page according to content characteristics of the first set of audio content; and displaying a second group of audio contents on the second page according to the account characteristics of the second application.
Optionally, the content characteristics include at least one of audio type, album information, song list information, track information, list information, singer information, audio set information, and station information.
Optionally, the account characteristics include at least one of song list information collected by the account, song list information self-created by the account, audio content information purchased by the account, audio content information recently played by the account, and audio content information liked by the account.
In a possible implementation manner, the processor is specifically configured to: classify the first set of audio content according to content characteristics of the first set of audio content; and present the classified first set of audio content on the first page.
In a possible implementation manner, the processor is specifically configured to: classify the second set of audio content according to account characteristics of the second application; and present the classified second set of audio content on the second page.
In one possible implementation, the processor is further configured to: and adjusting the size of the interface of the first application according to the screen size.
Optionally, the processor is further configured to obtain a screen size of the electronic device.
In a third aspect, the present application further provides an electronic device, including: at least one processor configured to implement the method of the first aspect or any possible implementation thereof when the at least one processor executes program code or instructions.
Optionally, the electronic device may further comprise at least one memory for storing the program code or instructions.
In a fourth aspect, the present application further provides a chip, including an input interface, an output interface, and at least one processor. Optionally, the chip further includes a memory. The at least one processor is configured to execute code in the memory, and when the at least one processor executes the code, the chip implements the method described in the first aspect or any possible implementation manner thereof.
Optionally, the chip may also be an integrated circuit.
In a fifth aspect, the present application further provides a terminal, where the terminal includes the electronic device or the chip.
In a sixth aspect, the present application further provides a computer-readable storage medium for storing a computer program comprising instructions for implementing the method described in the first aspect or any possible implementation manner thereof.
In a seventh aspect, the present application further provides a computer program product containing instructions which, when run on a computer, cause the computer to carry out the method described in the above first aspect or any possible implementation thereof.
In an eighth aspect, the present application further provides a terminal device, where the terminal device includes the electronic device in the second aspect or any possible implementation manner thereof. Optionally, the terminal device is a vehicle or an intelligent robot.
The electronic device, the computer storage medium, the computer program product, and the chip provided in this embodiment are all used for executing the audio content presentation method provided above, and therefore, the beneficial effects achieved by the electronic device, the computer storage medium, the computer program product, and the chip may refer to the beneficial effects in the audio content presentation method provided above, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is an audio content display page provided in an embodiment of the present application;
Fig. 2 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
Fig. 3 is a schematic diagram of a software structure of an electronic device provided in an embodiment of the present application;
Fig. 4 is a schematic user interface diagram of an electronic device provided in an embodiment of the present application;
Fig. 5 is a schematic diagram of an audio data model provided in an embodiment of the present application;
Fig. 6 is a schematic user interface diagram of another electronic device provided in an embodiment of the present application;
Fig. 7 is a schematic diagram of another audio data model provided in an embodiment of the present application;
Fig. 8 is a schematic user interface diagram of another electronic device provided in an embodiment of the present application;
Fig. 9 is a schematic user interface diagram of another electronic device provided in an embodiment of the present application;
Fig. 10 is a schematic flowchart of an audio content display method provided in an embodiment of the present application;
Fig. 11 is a schematic structural diagram of an apparatus provided in an embodiment of the present application;
Fig. 12 is a schematic structural diagram of another apparatus provided in an embodiment of the present application;
Fig. 13 is a schematic structural diagram of a chip provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone.
The terms "first" and "second" and the like in the description and drawings of the present application are used for distinguishing different objects or for distinguishing different processes for the same object, and are not used for describing a specific order of the objects.
Furthermore, the terms "including" and "having," and any variations thereof, as referred to in the description of the present application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
It should be noted that in the description of the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate examples, illustrations or illustrations. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the description of the present application, the meaning of "a plurality" means two or more unless otherwise specified.
In daily life, audio applications are among the applications most commonly used. A user may listen to audio content such as music, radio, audio novels, talk shows, drama, and audio news through such applications. Since a single audio application may not contain all of the audio content a user wants to listen to, the user often needs to use different audio applications for different audio content.
However, the operation pages of different audio applications often differ, and a user who operates one audio application with the habits formed on a commonly used one is prone to erroneous operations, which degrades the user experience.
Illustratively, the interface of audio application A is shown in fig. 1 (a), and the interface of audio application B is shown in fig. 1 (b). It can be seen that the leaderboard content in audio application A is on the left side of the page, while the leaderboard content in audio application B is on the right side. Suppose that after listening to the audio content in the leaderboard using audio application A, the user wants to listen to the audio content in the leaderboard of audio application B. The user exits audio application A, opens audio application B, and then habitually clicks the left side of the page expecting to open the leaderboard content. Because the page layouts of audio application A and audio application B differ, clicking the left side of the page does not open the leaderboard function of audio application B but instead mistakenly opens its album content. It can be seen that, in the prior art, a user who uses a plurality of audio applications is prone to erroneous operations, which degrades the user experience.
Therefore, an embodiment of the present application provides an audio content display method, which can improve the operation experience of a user using audio applications.
The audio content display method provided by the embodiment of the application can be applied to electronic devices with an audio playing function, such as a mobile phone, a tablet computer, a wearable device, a robot, a vehicle-mounted device (e.g., a car machine), an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, a super-mobile personal computer (UMPC), a netbook, and a Personal Digital Assistant (PDA), and the embodiment of the application does not limit the specific type of the electronic devices at all.
Fig. 2 is a schematic structural diagram of an example of the electronic device 100 according to an embodiment of the present disclosure. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus, and the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface, thereby implementing the touch function of the electronic device 100. MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement a photographing function through the ISP, the camera 193, the touch sensor, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is used for processing data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB format, a YUV format, and the like, and it should be understood that, in the description of the embodiment of the present application, an image in an RGB format is taken as an example for description, and the embodiment of the present application does not limit the format of the image. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and may process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The pressure sensor 180A is used for sensing a pressure signal, and can convert the pressure signal into an electrical signal. The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. The air pressure sensor 180C is used to measure air pressure. The magnetic sensor 180D includes a hall sensor. The electronic device 100 may detect the opening and closing of the flip holster using the magnetic sensor 180D. The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, taking a picture of a scene, electronic device 100 may utilize range sensor 180F to range for fast focus. The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on. The temperature sensor 180J is used to detect temperature. The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation acting thereon or nearby. The bone conduction sensor 180M may acquire a vibration signal. The audio module 170 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 180M, so as to implement a voice function.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100. The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc. The SIM card interface 195 is used to connect a SIM card.
The software system of the electronic device 100 may employ a hierarchical architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes an operating system with a layered architecture as an example, and exemplifies a software structure of the electronic device 100.
Fig. 3 is a block diagram of a software structure of the electronic device 100 according to the embodiment of the present application. The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the operating system is divided into four layers, an application layer, an application framework layer, an operating system runtime (runtime) and system library, and a kernel layer, from top to bottom.
The application layer may include a series of application packages. As shown in fig. 3, the application packages may include camera, photo album, music, settings, etc. applications.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions. As shown in FIG. 3, the application framework layers may include a window manager, content provider, view system, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. The notification information can be used to convey notification-type messages and may disappear automatically after a short stay, for example a message alert notifying the user that a download is complete. The notification manager may also present notifications as a chart or scrolling text in the system's top status bar, for example notifications of applications running in the background, or notifications that appear on the screen as a dialog window. For example, text information may be prompted in the status bar, an alert sound may be played, the electronic device may vibrate, or an indicator light may flash.
The runtime comprises a core library and a virtual machine, and is responsible for scheduling and management of the operating system. The core library comprises two parts: one part consists of functions that the Java language needs to call, and the other part is the core library of the operating system.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine performs functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example: a surface manager, media libraries, a three-dimensional graphics processing library (for example, OpenGL for Embedded Systems (OpenGL ES)), a 2D graphics engine (for example, a scene graphics library (SGL)), and the like.
The surface manager is used to manage the display subsystem and provide a fusion of the 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR), JPEG (JPG), and Portable Network Graphics (PNG).
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like. The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer may include a hardware driver module, such as a display driver, a camera driver, a sensor driver, and the like, and the application framework layer may call the hardware driver module of the kernel layer.
For convenience of understanding, the following embodiments of the present application will describe an audio content presentation method provided by the embodiments of the present application by taking an electronic device having a structure shown in fig. 2 and fig. 3 as an example.
The technical solutions in the following embodiments may be implemented in the electronic device 100 having the hardware architecture and the software architecture described above.
In the embodiment of the present application, the electronic device 100 is an in-vehicle device, and technical solutions provided in the embodiment of the present application are described in detail with reference to the drawings.
Fig. 4 is a schematic diagram of a Graphical User Interface (GUI) provided in an embodiment of the present application, where (a) in fig. 4 shows that, in an unlocking mode of an in-vehicle device, a screen display system of the in-vehicle device displays currently output interface content 401, where the interface content 401 is a main interface of the in-vehicle device. The interface contents 401 show a plurality of applications (apps), such as music, address book, telephone, information, clock, and other applications. It should be noted that the interface content 401 may also include other more application programs, which is not limited in this embodiment of the application.
The user may instruct the in-vehicle device to open a music application (i.e., the first application) by touching a specific control on the screen of the in-vehicle device, pressing a specific physical key or key combination, inputting voice, and/or making an air gesture. After receiving the user's instruction to start the music application, the in-vehicle device starts the music application and displays an interface of the music application.
Illustratively, as shown in fig. 4 (a), the user may instruct the in-vehicle device to open a music application by clicking a "music" application icon on the main interface, and the in-vehicle device displays a music application interface as shown in fig. 4 (b).
Further exemplarily, when the in-vehicle device is in the screen lock state, the user may also instruct the in-vehicle device to open the music application by using a specific gesture on the screen of the in-vehicle device, and the in-vehicle device may also display a music application interface as shown in fig. 4 (b). Or, when the vehicle-mounted device is in the screen lock state, the user may instruct the vehicle-mounted device to open the music application by clicking the shortcut icon of the "music" application on the screen lock interface, and the vehicle-mounted device may also display the music application interface as shown in fig. 4 (b). Wherein the above music application (first application) may also be referred to as a CarMedia application, and the interface thereof may be referred to as a CarMedia ui.
As shown in fig. 4 (b), the music application interface includes an audio source selection control 402 and content presentation areas (a recommendation area, a song list area, an album area, a track area, a leaderboard area, a singer area, a vocal set area, and a station area). The user can select the audio source of the music application, i.e., the audio application to be used (the second application), by clicking the audio source selection control 402. It should be noted that the content display area may also include other areas, which is not limited in the embodiment of the present application.
Illustratively, as shown in fig. 4 (b), the user may click on the audio source selection control 402, instructing the in-vehicle device to display a list of audio sources (audio application list) 403 as shown in fig. 4 (c). The user may use audio application B as an audio source for a music application by clicking on it in the audio source list 403. As shown in fig. 4 (c), after the user clicks on audio application B in the audio source list, the music application displays an interface as shown in fig. 4 (d), in which the current audio source displayed by the audio source selection control 402 is switched from unselected to audio application B.
Alternatively, the user may select the audio source of the music application by touching a particular control on the screen of the in-vehicle device, pressing a particular physical key or combination of keys, inputting speech, air gesture, or the like.
It should be noted that the operation by which the user selects the audio source of the audio application may be referred to as the first operation; in other words, an operation of selecting another application within the first application may be referred to as the first operation. After receiving the first operation input by the user on the music application (the first application), the electronic device responds to the first operation by acquiring the audio content (the plurality of audio contents) of the second application (the audio application corresponding to the current audio source) and displaying it through the interface of the music application.
In the prior art, when one audio application does not include all audio contents that a user wants to play, the user needs to operate different audio applications to play all the audio contents that the user wants to play. In the method provided by the embodiment of the application, the user can obtain the audio content in the other applications (the second applications) only by operating the first application, and the user does not need to operate the other applications, so that the user is prevented from operating a plurality of applications to generate wrong operation, and the user experience is improved.
In a possible implementation manner, the interface of the music application (the first application) may include a first page, and the presenting of the plurality of audio contents through the interface of the first application may include: the electronic device presents the first set of audio content on the first page according to content characteristics of the first set of audio content.
Wherein the plurality of audio content comprises a first set of audio content. For example, the first set of audio content may be all of the plurality of audio content. For another example, the first set of audio content is a portion of the plurality of audio content.
Alternatively, the electronic device may first obtain the content characteristics of the first set of audio content, and then present the first set of audio content on the first page according to the content characteristics of the first set of audio content.
Optionally, the content features may include at least one of audio type, album information, song information, track information, song list information, artist information, audio set information, and station information. For example, the first set of audio content may be a plurality of songs, and the plurality of songs may be displayed in categories by content characteristics.
In another possible implementation manner, the interface of the music application (the first application) may include a second page, and the presenting of the plurality of audio contents through the interface of the first application may include: the electronic device presents a second set of audio content on a second page according to the account characteristics of the second application.
Wherein the plurality of audio content comprises a second set of audio content. For example, the second set of audio content may be all of the plurality of audio content. For another example, the second set of audio content is a portion of the plurality of audio content.
Optionally, the electronic device may obtain account characteristics of the second application and then present the second set of audio content on the second page according to the account characteristics of the second application.
Optionally, the account characteristics may include at least one of song list information collected by the account, song list information self-created by the account, audio content information purchased by the account, audio content information recently played by the account, and audio content information liked by the account. For example, the second set of audio content may be a plurality of songs, and the plurality of songs may be displayed sorted by account characteristics.
In yet another possible implementation, the interface of the music application (the first application) may include a first page and a second page. The displaying of the plurality of audio contents through the interface of the first application may include: the electronic equipment displays the first group of audio contents on the first page according to the content characteristics of the first group of audio contents; the electronic device presents a second set of audio content on a second page according to the account characteristics of the second application.
Therefore, in the embodiment of the application, the first application can respectively display the audio content through the content characteristics of the audio content and the account characteristics of the application, so that a user can conveniently view or play the audio content through the corresponding page, and the user experience is further improved.
It can be understood that the first application may further include more pages, which is not limited in this embodiment of the present application.
It should be noted that the first set of audio content and the second set of audio content may be identical. For example, the first set of audio content includes only audio content 1 through audio content 100, and the second set of audio content also includes only audio content 1 through audio content 100.
The first set of audio content and the second set of audio content may also be completely different. For example, the first set of audio content includes only audio content 1 through audio content 100. The second set of audio content includes only audio content 101 through audio content 200.
The first set of audio content and the second set of audio content may also be partially identical. For example, the first and second sets of audio content each include audio content 51 through 100, but the first set of audio content also includes audio content 1 through 50, and the second set of audio content also includes audio content 101 through 150.
For another example, the first and second sets of audio content each include audio content 51 through 100, but the first set of audio content also includes audio content 1 through 50. I.e. the second set of audio content is a subset of the first set of audio content.
For another example, the first and second sets of audio content each include audio content 51 through 100, but the second set of audio content also includes audio content 1 through 50. I.e. the first set of audio content is a subset of the second set of audio content.
In a possible implementation manner, the presenting the first group of audio contents on the first page according to the content characteristics of the first group of audio contents may include: classifying the first set of audio content according to content characteristics of the first set of audio content; a first set of audio content is presented at a first page category.
Illustratively, as shown in fig. 4 (d), the first page may include a recommendation area (also referred to as a banner area) for showing recommended albums, a song list area for showing a collection of song lists, an album area for showing a collection of albums, a track area for showing a collection of tracks, a leaderboard area for showing a collection of lists, a singer area for showing a collection of singers, a vocal set area for showing a collection of vocal sets (i.e., collections of audio works), and a station area for showing a collection of stations. The electronic device may classify the first set of audio content according to its content characteristics and present the classified first set of audio content on the first page through the audio data model shown in fig. 5. For example, the audio content data in the banner page of the second application is presented through the recommendation area of the first page of the first application; the audio content data in the leaderboard page of the second application is presented through the leaderboard area of the first page of the first application; the audio data in the second application is classified according to its album information and presented through the album area of the first page of the first application; and the audio data in the second application is classified according to its singer information and presented through the singer area of the first page of the first application.
Wherein the audio works include, but are not limited to, audio novels, vocals, commentary, audio news, drama, talk shows, audio lessons, and radio dramas.
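A rough Java sketch of the kind of mapping suggested by the audio data model of fig. 5 is given below: audio contents obtained from the second application are grouped by content characteristics and dispatched to areas of the first page. The Audio record, its fields, and the grouping keys are assumptions for illustration only.

```java
// Illustrative grouping of the second application's audio contents into
// first-page areas by content characteristics (cf. fig. 5).
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class FirstPageRenderer {

    record Audio(String title, String album, String singer, boolean inLeaderboard, boolean inBanner) {}

    Map<String, List<Audio>> groupForFirstPage(List<Audio> secondAppAudio) {
        Map<String, List<Audio>> areas = new LinkedHashMap<>();
        // Recommendation (banner) area: contents from the second application's banner page.
        areas.put("recommendation", secondAppAudio.stream().filter(Audio::inBanner).toList());
        // Leaderboard area: contents from the second application's leaderboard page.
        areas.put("leaderboard", secondAppAudio.stream().filter(Audio::inLeaderboard).toList());
        // Album area: classified by album information.
        secondAppAudio.stream()
                .collect(Collectors.groupingBy(Audio::album))
                .forEach((album, list) -> areas.put("album: " + album, list));
        // Singer area: classified by singer information.
        secondAppAudio.stream()
                .collect(Collectors.groupingBy(Audio::singer))
                .forEach((singer, list) -> areas.put("singer: " + singer, list));
        return areas;
    }
}
```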
Therefore, in the embodiment of the application, the first application can classify the audio content according to its content characteristics and display it in a classified manner, so that a user can conveniently find audio content of interest, and the user experience is further improved.
In a possible implementation manner, the presenting the second group of audio content on the second page according to the account feature of the second application may include: classifying the second set of audio content according to account characteristics of the second application; a second set of audio content is presented at a second page category.
Illustratively, as shown in fig. 6 (a), the second page may include a self-created song list area for presenting song lists created by the account user, a favorite song list area for presenting song lists collected by the account user, a purchased audio area for presenting audio already purchased by the account user, a liked audio area for presenting audio liked by the account user, and a recently played audio area for presenting audio recently played by the account user. The electronic device may classify the second set of audio content according to the account characteristics and present the classified second set of audio content on the second page through the audio data model shown in fig. 7. For example, the audio data recently played by the user in the second application is presented through the recently played audio area of the second page of the first application; the audio data purchased in the second application is presented through the purchased audio area of the second page of the first application.
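A companion sketch for the second page (cf. fig. 7) is shown below: the data exposed by the second application's account characteristics is dispatched to the second page's areas. The AccountData record and the area labels are illustrative assumptions.

```java
// Illustrative grouping of the second application's account data into
// second-page areas by account characteristics (cf. fig. 7).
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class SecondPageRenderer {

    record AccountData(List<String> selfCreatedSongLists,
                       List<String> collectedSongLists,
                       List<String> purchasedAudio,
                       List<String> likedAudio,
                       List<String> recentlyPlayed) {}

    Map<String, List<String>> groupForSecondPage(AccountData account) {
        Map<String, List<String>> areas = new LinkedHashMap<>();
        areas.put("self-created song lists", account.selfCreatedSongLists());
        areas.put("favorite song lists", account.collectedSongLists());
        areas.put("purchased audio", account.purchasedAudio());
        areas.put("liked audio", account.likedAudio());
        areas.put("recently played", account.recentlyPlayed());
        return areas;
    }
}
```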
Therefore, in the embodiment of the application, the first application can classify the audio content and display the audio content in a classified manner through the account characteristics, so that a user can conveniently find interested audio content, and the user experience is further improved.
As shown in fig. 6 (a), the interface of the music application (first application) may simultaneously display a first page and a second page. As shown in fig. 6 (b), the interface of the music application may include a page switching control 404, and the user may cause the interface of the music application to display only the first page or the second page by clicking the switching control 404.
Illustratively, as shown in fig. 6 (b), the user may cause the interface of the music application to display the second page shown in fig. 6 (c) by clicking "my" in the switching control 404. As shown in fig. 6 (c), the user may cause the interface of the music application to display the first page shown in fig. 6 (d) by clicking "home page" in the switching control 404.
The user can click an area in the page to cause the music application to show the audio list of that area, and can then click an audio item in the list to cause the electronic device to play that audio. Illustratively, as shown in fig. 8 (a), the user causes the music application to present the list of song lists shown in fig. 8 (b) by clicking the song list area; the user causes the music application to present the list of single songs shown in fig. 8 (c) by clicking song list 2 in the list of song lists; and the user may cause the electronic device to play single song 1 by clicking single song 1 in the list of single songs.
It should be noted that the operation by which the user causes the electronic device to play audio through the first application may be referred to as a second operation. The electronic device may receive the user's second operation on the first application, and then send, through the first application, a first instruction to the second application instructing it to play the target audio content, so that the second application plays the target audio content.
Illustratively, after receiving a second operation of the first application by the user, the electronic device sends a first instruction to the second application through the first application in response to the second operation, and the second application plays the target audio after receiving the first instruction.
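The embodiment does not prescribe a particular delivery mechanism for the first instruction; as a minimal sketch, it can be modeled as a play-control interface that the second application implements and the first application calls. All names below (PlayControl, FirstApplication, SecondApplication) are hypothetical placeholders.

```kotlin
// Hypothetical play-control contract between the first and second application.
interface PlayControl {
    fun play(audioId: String)
}

class FirstApplication(private val secondApp: PlayControl) {
    // Called when the second operation (e.g. tapping "single song 1") is received.
    fun onSecondOperation(targetAudioId: String) {
        // Send the first instruction to the second application.
        secondApp.play(targetAudioId)
    }
}

class SecondApplication : PlayControl {
    override fun play(audioId: String) {
        println("second application starts playing $audioId")
    }
}

fun main() {
    val firstApp = FirstApplication(SecondApplication())
    firstApp.onSecondOperation("single song 1")
}
```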
Therefore, since both the first operation and the second operation are performed on the first application, the audio content in the other applications (namely, the second application) can be obtained through the first application, and the user does not need to operate those other applications, which simplifies user operation and improves the user experience.
It is worth mentioning that the user can also instruct the vehicle-mounted device to play the target audio by operating a knob of the vehicle, touching a specific control on the screen of the vehicle-mounted device, pressing a specific physical key or key combination, inputting a voice command, making an air gesture, and the like.
It is worth mentioning that the electronic device may present the music being played in a variety of ways, for example, through split-screen display, a head-up display (HUD), a desktop card of the electronic device, the center control screen, or the dashboard, which is not limited in the embodiments of the present application.
Optionally, the areas in the first page and the second page may be adjusted according to user requirements (e.g., adjusting the position or size of an area). For example, if the user has no need to listen to voiced works, the user may delete the voiced works area in the first page, as shown in fig. 8 (d). For another example, if the user often uses the ranking list in the first page, the user may enlarge the ranking list area in the first page, as shown in fig. 9 (a).
Optionally, the interface of the first application may further include an operation page. Illustratively, as shown in fig. 9 (b), the user can log in or switch an account through the page, and open or renew the membership rights of the application.
Optionally, the first application may also resize the interface of the first application according to the screen size.
For example, the first application may first obtain a screen size of the electronic device and then resize an interface of the first application according to the screen size.
Therefore, in the embodiment of the application, the interface size of the first application can be adjusted according to the screen size of the electronic device, so that the interface size of the first application is matched with the screen size of the electronic device, and the user experience is further improved.
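As a rough sketch of such an adjustment, assuming the first application can query the screen width of the electronic device, the interface layout could be chosen as follows; the width thresholds and column counts are illustrative assumptions only and are not specified by this embodiment.

```kotlin
// Illustrative sketch: choose an interface layout from the screen width.
// The thresholds (600 dp, 840 dp) and column counts are assumptions.
data class InterfaceLayout(val columns: Int, val showFirstAndSecondPageSideBySide: Boolean)

fun layoutForScreenWidth(widthDp: Int): InterfaceLayout = when {
    widthDp < 600 -> InterfaceLayout(columns = 1, showFirstAndSecondPageSideBySide = false)
    widthDp < 840 -> InterfaceLayout(columns = 2, showFirstAndSecondPageSideBySide = false)
    else -> InterfaceLayout(columns = 3, showFirstAndSecondPageSideBySide = true)
}
```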
It should be noted that, in the embodiment of the present application, there is an interactive interface between the first application and the second application. The interactive interface is provided by a software development kit (CarMediaKit) of the first application (CarMedia); the software development kit is based on the inter-process communication mechanism of the operating system and provides interaction with the second application. The software development kit of the first application defines a set of interfaces for audio content query, play control, active content reporting, and the like. The second application (a third-party audio application) can be invoked or controlled through the first application to provide audio services to the user. The software development kit (CarMediaKit) also defines a set of structured audio data models that describe music-type and voiced-type data.
Therefore, the audio content of other audio applications (second applications) can be displayed through the first application and its development kit provided by the embodiment of the present application, and the other audio applications only need to provide structured data through the interactive interface without needing to be concerned with how their audio content is displayed. In this way, the interface style and interaction mode of audio applications can be unified, which greatly improves the user experience of audio applications. Moreover, the user only needs to operate the first application, rather than the other applications, to conveniently acquire the required audio content, which simplifies user operation and improves the user experience.
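To make the description concrete, the three interface groups mentioned above (audio content query, play control, and active content reporting) might look roughly like the sketch below. This is not the actual CarMediaKit API; every interface, method, and type name here is a hypothetical stand-in for the kind of contract such a development kit could define.

```kotlin
// Hypothetical sketch of the three interface groups: audio content query,
// play control, and active content reporting. Names are illustrative only.
data class AudioContent(val id: String, val title: String, val category: String)

interface AudioContentQuery {
    // The first application queries the second application's content by category.
    fun queryAudioContent(category: String): List<AudioContent>
}

interface PlaybackControl {
    fun play(contentId: String)
    fun pause()
}

interface ContentReporter {
    // The second application actively reports newly available content
    // so that the first application can refresh its interface.
    fun reportContent(contents: List<AudioContent>)
}
```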
An embodiment of the present application provides an audio content display method, as shown in fig. 10, the method includes:
S1001, the electronic device receives a first operation on a first application.
Wherein the first operation is to instruct the first application to present a plurality of audio content of the second application.
Illustratively, as shown in fig. 4 (b) through (d), the user clicks the audio source selection control 402 included in the music application interface and selects the audio application B as the audio source, so as to instruct the electronic device to present the plurality of audio contents of the audio application B through the music application.
S1002, the electronic equipment responds to the first operation, and displays a plurality of audio contents of the second application through the interface of the first application.
Illustratively, as shown in fig. 4 (d), in response to the first operation, the electronic device presents the plurality of audio contents of the audio application B through the interface of the music application.
As shown in fig. 10, the method may further include:
S1003, the electronic device receives a second operation on the first application.
The second operation is used for instructing the electronic equipment to play the target audio content, and the target audio content is any one of the plurality of audio contents.
Illustratively, as shown in fig. 8 (a), the user causes the music application to present the list of song lists shown in fig. 8 (b) by clicking the song list area; the user causes the music application to present the list of single songs shown in fig. 8 (c) by clicking song list 2 in the list of song lists; and the user instructs the electronic device to play single song 1 by clicking single song 1 in the list of single songs.
S1004, the electronic device responds to the second operation, and sends a first instruction to the second application through the first application, wherein the first instruction is used for instructing the second application to play the target audio content, so that the second application plays the target audio content.
Illustratively, in response to the second operation instructing the electronic device to play single song 1, the electronic device sends, through the music application, a first instruction instructing the audio application B to play single song 1 to the audio application B, and the audio application B starts playing single song 1 after receiving the instruction.
It should be noted that, in the audio content presentation method provided above, the first application presents a plurality of audio contents of the second application according to the first operation of the user, where the plurality of audio contents include the target audio content.
Therefore, the user can obtain the audio content in the rest of the applications (namely the second applications) only by operating the first application, and the user does not need to operate the rest of the applications, so that the user operation is simplified, and the user experience is improved.
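Putting steps S1001 to S1004 together, the control flow on the electronic device can be sketched as follows; the class and method names are hypothetical placeholders for whatever the first and second applications actually expose.

```kotlin
// Illustrative end-to-end flow of S1001-S1004; all names are hypothetical.
class AudioContentPresentationFlow(
    private val firstApp: FirstApp,
    private val secondApp: SecondApp
) {
    // S1001/S1002: first operation -> show the second application's audio content.
    fun onFirstOperation() {
        val contents = secondApp.listAudioContents()
        firstApp.showContents(contents)
    }

    // S1003/S1004: second operation -> first instruction -> second application plays.
    fun onSecondOperation(target: String) {
        secondApp.play(target) // the "first instruction"
    }
}

class FirstApp {
    fun showContents(contents: List<String>) = println("showing: $contents")
}

class SecondApp {
    fun listAudioContents() = listOf("single song 1", "single song 2")
    fun play(target: String) = println("playing $target")
}

fun main() {
    val flow = AudioContentPresentationFlow(FirstApp(), SecondApp())
    flow.onFirstOperation()
    flow.onSecondOperation("single song 1")
}
```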
The embodiment of the application also provides another audio content display method, and the audio content display method can directly display the target audio content according to the first operation.
Specifically, the method comprises the following steps: the electronic equipment receives a first operation of a first application, wherein the first operation is used for indicating the first application to show target audio content of a second application. The target audio content is then presented through an interface of the first application in response to the first operation.
Optionally, the method may further include: and receiving a second operation of the first application, wherein the second operation is used for instructing the electronic equipment to play the target audio content.
Optionally, the method may further include: in response to the second operation, sending a first indication to the second application through the first application, wherein the first indication is used for indicating the second application to play the target audio content, so that the second application plays the target audio content.
An electronic device for performing the above-described audio content presentation method will be described with reference to fig. 11 and 12.
It will be appreciated that, in order to implement the above functions, the electronic device includes corresponding hardware and/or software modules for performing the respective functions. The present application can be implemented in hardware, or in a combination of hardware and computer software, in conjunction with the exemplary algorithm steps described in connection with the embodiments disclosed herein. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional modules according to the method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware. It should be noted that the division of the modules in this embodiment is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
In the case of dividing each functional module by corresponding functions, fig. 11 shows a possible composition diagram of the electronic device involved in the above embodiment. As shown in fig. 11, the apparatus 1100 may include a transceiving unit 1101 and a processing unit 1102. The processing unit 1102 may implement the method performed by the electronic device in the method embodiments described above, and/or other processes for the techniques described herein.
It should be noted that all relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
In case an integrated unit is employed, the apparatus 1100 may comprise a processing unit, a storage unit, and a communication unit. The processing unit may be configured to control and manage the operation of the apparatus 1100, for example, to support the apparatus 1100 in executing the steps performed by the above units. The storage unit may be used to store program codes and/or data to support the apparatus 1100. The communication unit may be used to support communication between the apparatus 1100 and other devices.
The processing unit may be a processor or a controller, which may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein. The processor may also be a combination of computing devices, for example, a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor. The storage unit may be a memory. The communication unit may specifically be a radio frequency circuit, a Bluetooth chip, a wireless fidelity (Wi-Fi) chip, or another device that interacts with other electronic devices.
In a possible implementation manner, the electronic device according to the embodiment of the present application may be an apparatus 1200 having a structure shown in fig. 12, where the apparatus 1200 includes a processor 1201 and a transceiver 1202. The relevant functions implemented by the transceiving unit 1101 and the processing unit 1102 in fig. 11 may be implemented by the processor 1201.
Optionally, the apparatus 1200 may further comprise a memory 1203, the processor 1201 and the memory 1203 communicating with each other via an internal connection path. The related functions implemented by the storage unit in fig. 11 can be implemented by the memory 1203.
The embodiment of the present application further provides a computer storage medium, where a computer instruction is stored in the computer storage medium, and when the computer instruction runs on an electronic device, the electronic device is enabled to execute the above related method steps to implement the audio content display method in the above embodiment.
The embodiment of the present application further provides a computer program product, which when running on a computer, causes the computer to execute the above related steps to implement the audio content presentation method in the above embodiment.
The embodiment of the application also provides an electronic device, which may specifically be a chip, an integrated circuit, a component, or a module. In particular, the apparatus may comprise a processor and a coupled memory storing instructions, or the apparatus may comprise at least one processor configured to obtain instructions from an external memory. When the apparatus runs, the processor may execute the instructions to cause the chip to perform the audio content display method in the above method embodiments.
Fig. 13 shows a schematic structure diagram of a chip 1300. Chip 1300 includes one or more processors 1301 and interface circuitry 1302. Optionally, the chip 1300 may further include a bus 1303.
Processor 1301 may be an integrated circuit chip having signal processing capabilities. In the implementation process, the steps of the audio content presentation method may be implemented by an integrated logic circuit of hardware in the processor 1301 or instructions in the form of software.
Alternatively, the processor 1301 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which may implement or perform the methods and steps disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The interface circuit 1302 may be used for sending or receiving data, instructions or information, and the processor 1301 may perform processing by using the data, instructions or other information received by the interface circuit 1302, and may send processing completion information through the interface circuit 1302.
Optionally, the chip further comprises a memory, which may include a read-only memory and a random access memory and provides operating instructions and data to the processor. A portion of the memory may also include a non-volatile random access memory (NVRAM).
Optionally, the memory stores executable software modules or data structures, and the processor may perform corresponding operations by calling the operation instructions stored in the memory (the operation instructions may be stored in an operating system).
Alternatively, the chip may be used in the electronic device or DOP according to the embodiment of the present application. Optionally, the interface circuit 1302 may be configured to output the result of the execution by the processor 1301. For the audio content presentation method provided in one or more embodiments of the present application, reference may be made to the foregoing embodiments, which are not described herein again.
It should be noted that the functions corresponding to the processor 1301 and the interface circuit 1302 may be implemented by hardware design, software design, or a combination of hardware and software, which is not limited herein.
The electronic device, the computer storage medium, the computer program product, or the chip provided in this embodiment are all configured to execute the corresponding method provided above, so that the beneficial effects achieved by the electronic device, the computer storage medium, the computer program product, or the chip may refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
The embodiment of the application further provides a terminal device, and the terminal device comprises the above electronic device. Optionally, the terminal device is a vehicle or an intelligent robot.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The above functions, if implemented in the form of software functional units and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above-described method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The first application in the embodiment of the present application is an application related to audio content, and optionally, the first application may also be expanded to an application related to video content, or an application related to audio and video content.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (25)

1. An audio content presentation method applied to an electronic device, the electronic device being installed with a first application and a second application, the method comprising:
receiving a first operation for instructing the first application to show a plurality of audio contents of the second application;
in response to the first operation, the plurality of audio contents are presented through an interface of the first application.
2. The method of claim 1, further comprising:
and receiving a second operation, wherein the second operation is used for indicating to play target audio content, and the target audio content is any one of the plurality of audio contents.
3. The method of claim 2, further comprising:
in response to the second operation, sending, by the first application, a first indication to the second application, the first indication instructing the second application to play the target audio content, so that the second application plays the target audio content.
4. The method of any of claims 1-3, wherein the interface of the first application comprises a first page, and wherein the presenting the plurality of audio content through the interface of the first application comprises:
presenting a first set of audio content on the first page according to content characteristics of the first set of audio content, the plurality of audio content including the first set of audio content.
5. The method of any of claims 1-4, wherein the interface of the first application comprises a second page, and wherein presenting the plurality of audio content through the interface of the first application comprises:
presenting a second set of audio content on the second page according to the account characteristics of the second application, the plurality of audio content including the second set of audio content.
6. The method of claim 4, wherein the presenting the first set of audio content on the first page according to content characteristics of the first set of audio content comprises:
classifying the first set of audio content according to content characteristics of the first set of audio content;
presenting the first set of audio content on the first page by category.
7. The method of claim 5, wherein the presenting a second set of audio content on the second page according to the account characteristic of the second application comprises:
classifying the second set of audio content according to account characteristics of the second application;
presenting the second set of audio content on the second page by category.
8. The method of claim 4, wherein the content characteristics include at least one of audio type, album information, song list information, track information, list information, singer information, voiced set information, or station information.
9. The method of claim 5, wherein the account characteristics include at least one of song list information collected by the account, song list information created by the account itself, audio content information purchased by the account, audio content information recently played by the account, or audio content information liked by the account.
10. The method according to any one of claims 1 to 9, further comprising:
and adjusting the size of the interface of the first application according to the screen size.
11. An electronic device comprising a processor and a memory coupled to the processor, the processor configured to:
receiving a first operation for instructing the first application to show a plurality of audio contents of the second application;
in response to the first operation, the plurality of audio contents are presented through an interface of the first application.
12. The electronic device of claim 11, wherein the processor is further configured to:
and receiving a second operation, wherein the second operation is used for indicating to play target audio content, and the target audio content is any one of the plurality of audio contents.
13. The electronic device of claim 12, wherein the processor is further configured to:
in response to the second operation, sending, by the first application, a first indication to the second application, the first indication instructing the second application to play the target audio content, so that the second application plays the target audio content.
14. The electronic device of any of claims 11-13, wherein the interface of the first application comprises a first page, and wherein the processor is specifically configured to:
the first set of audio content is presented on the first page according to content characteristics of the first set of audio content, the plurality of audio content including the first set of audio content.
15. The electronic device of any of claims 11-14, wherein the interface of the first application comprises a second page, and wherein the processor is specifically configured to:
presenting a second set of audio content on the second page according to the account characteristics of the second application, the plurality of audio content including the second set of audio content.
16. The electronic device of claim 14, wherein the processor is specifically configured to:
classifying the first set of audio content according to content characteristics of the first set of audio content;
presenting the first set of audio content on the first page by category.
17. The electronic device of claim 15, wherein the processor is specifically configured to:
classifying the second set of audio content according to account characteristics of the second application;
presenting the second set of audio content on the second page by category.
18. The electronic device of claim 14, wherein the content features include at least one of audio type, album information, song list information, track information, list information, singer information, voiced set information, or station information.
19. The electronic device of claim 15, wherein the account characteristics include at least one of song list information collected by the account, song list information created by the account itself, audio content information purchased by the account, audio content information recently played by the account, or audio content information liked by the account.
20. The electronic device of any of claims 11-19, wherein the processor is further configured to:
and adjusting the size of the interface of the first application according to the screen size.
21. A chip device, comprising a processor and a memory coupled to the processor, wherein the memory stores code, and the processor is configured to execute the code and, when the code is executed, to implement the method of any one of claims 1 to 10.
22. A computer-readable storage medium for storing a computer program, characterized in that the computer program comprises instructions for implementing the method of any of the preceding claims 1 to 10.
23. A computer program product comprising instructions which, when run on a computer or processor, cause the computer or processor to carry out the method of any one of claims 1 to 10.
24. A terminal device, characterized in that it comprises an electronic device according to any one of claims 11 to 20.
25. The terminal device of claim 24, wherein the terminal device is a vehicle.
CN202111062371.9A 2021-09-10 2021-09-10 Audio content display method and electronic equipment Pending CN115794021A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111062371.9A CN115794021A (en) 2021-09-10 2021-09-10 Audio content display method and electronic equipment

Publications (1)

Publication Number Publication Date
CN115794021A (en) 2023-03-14

Family

ID=85473350

Country Status (1)

Country Link
CN (1) CN115794021A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination