WO2020063394A1 - Method, apparatus, computer device, and computer-readable storage medium for displaying sound messages in an application program


Info

Publication number: WO2020063394A1
Authority: WIPO (PCT)
Prior art keywords: sound, message, virtual, sound message, virtual character
Application number: PCT/CN2019/106116
Other languages: English (en), French (fr)
Inventors: 戴维, 卢锟, 钟庆华, 吴俊�, 王应韧, 黄蓉
Original Assignee: 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2020063394A1
Priority to US17/096,683 (published as US11895273B2)
Priority to US18/527,237 (published as US20240098182A1)

Classifications

    • H04L51/04 Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L51/10 Messaging characterised by the inclusion of specific contents: multimedia information
    • H04L51/52 Messaging for supporting social networking services
    • H04L67/04 Protocols specially adapted for terminals or networks with limited capabilities, or specially adapted for terminal portability
    • H04L67/125 Protocols for proprietary or special-purpose networking environments, involving control of end-device applications over a network
    • G06T13/205 3D [Three Dimensional] animation driven by audio data
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • H04M3/53333 Voice mail systems: message receiving aspects
    • H04M2201/12 Counting circuits
    • H04M2201/14 Delay circuits; timers
    • H04M2203/252 User interface aspects of the telephonic communication service where a voice mode is enhanced with visual information

Definitions

  • Embodiments of the present application relate to the field of computer programs, and in particular, to a method, an apparatus, a computer device, and a computer-readable storage medium for displaying a sound message in an application program.
  • Social APP: a social application program (APP is short for application).
  • Traditional social APPs mainly use text and pictures as communication media, but voice messages are used as communication media in emerging social APPs.
  • In the related art, a plurality of voice cells arranged in reverse order of upload time are used to display voice messages, and each voice cell corresponds to one voice message.
  • Each sound cell has a corresponding rectangular frame with rounded corners. The user can click on the sound cell to play the sound message.
  • a method, an apparatus, a computer device, and a computer-readable storage medium for displaying a sound message in an application program are provided.
  • a method for displaying a sound message in an application, the method being executed by a terminal and comprising:
  • obtaining n sound messages issued by at least one user account, where n is a positive integer; and
  • displaying a sound message display interface of the application, the sound message display interface displaying the sound messages located in a virtual world, the sound messages being displayed using visual elements in the virtual world as carriers.
  • a sound message display device in an application program includes:
  • a processing module configured to obtain n sound messages issued by at least one user account, where n is a positive integer; and
  • a display module configured to display a sound message display interface of the application, where the sound message display interface displays the sound messages located in a virtual world, and the sound messages are displayed using visual elements in the virtual world as carriers.
  • a computer device includes a memory and a processor; the memory stores at least one program, and the at least one program is loaded and executed by the processor to implement the sound message display method in an application program as described above.
  • a computer-readable storage medium stores at least one program, and the at least one program is loaded and executed by a processor to implement the sound message display method in an application program as described above.
  • FIG. 1 is a schematic interface diagram of a sound message display method provided in the related art.
  • FIG. 2 is a structural block diagram of a computer system provided by an exemplary embodiment of the present application.
  • FIG. 3 is a structural block diagram of a terminal provided by an exemplary embodiment of the present application.
  • FIG. 4 is a structural block diagram of a server provided by an exemplary embodiment of the present application.
  • FIG. 5 is a schematic interface diagram of a sound message display method provided by an exemplary embodiment of the present application.
  • FIG. 6 is a flowchart of a sound message display method provided by an exemplary embodiment of the present application.
  • FIG. 7 is a flowchart of a sound message display method provided by an exemplary embodiment of the present application.
  • FIG. 8 is a schematic interface diagram of a sound message display method provided by an exemplary embodiment of the present application.
  • FIG. 9 is a layered schematic diagram of a sound message display method provided by an exemplary embodiment of the present application.
  • FIG. 10 is a diagram illustrating a correspondence between a character model of a bird and a message duration according to an exemplary embodiment of the present application.
  • FIG. 11 is a correspondence diagram between a character model of a bird and a published duration provided by an exemplary embodiment of the present application.
  • FIG. 12 is a correspondence diagram between a character model of a bird and a published duration provided by another exemplary embodiment of the present application.
  • FIG. 13 is a flowchart of a sound message display method provided by another exemplary embodiment of the present application.
  • FIG. 14 is a schematic interface diagram of a sound message display method provided by another exemplary embodiment of the present application.
  • FIG. 15 is a schematic interface diagram of a sound message display method provided by another exemplary embodiment of the present application.
  • FIG. 16 is a schematic interface diagram of a sound message display method provided by another exemplary embodiment of the present application.
  • FIG. 17 is a schematic interface diagram of a sound message display method provided by another exemplary embodiment of the present application.
  • FIG. 19 is a schematic interface diagram of a sound message display method provided by another exemplary embodiment of the present application.
  • FIG. 20 is a schematic interface diagram of a sound message display method provided by another exemplary embodiment of the present application.
  • FIG. 21 is a flowchart of a sound message display method provided by another exemplary embodiment of the present application.
  • FIG. 22 is a flowchart of a sound message display method provided by another exemplary embodiment of the present application.
  • FIG. 23 is a schematic interface diagram of a sound message display method provided by another exemplary embodiment of the present application.
  • FIG. 24 is a schematic interface diagram of a sound message display method provided by another exemplary embodiment of the present application.
  • FIG. 25 is a flowchart of a sound message display method provided by another exemplary embodiment of the present application.
  • FIG. 26 is a block diagram of a sound message display device provided by another exemplary embodiment of the present application.
  • FIG. 27 is a block diagram of a sound message display device provided by another exemplary embodiment of the present application.
  • A sound social application is an application for socializing based on sound messages, also known as a voice social application.
  • In the free ranking of social applications in an app store known to the inventors, sound social applications account for 7.5% of the social applications listed.
  • Traditional radio applications have also added social features, hoping to occupy a place in the vertical market of sound-based social networking.
  • In the related art, a feed stream is used to display sound messages in a sound social application.
  • A feed stream is a combination of several message sources that a user actively subscribes to, forming a content aggregator that helps the user continuously obtain the latest feed content.
  • Feed streams are usually displayed on the user interface in the form of a Timeline.
  • The feed stream 10 uses a plurality of sound cells 12 displayed in order from earliest to latest according to release time, and the sound cells 12 correspond to the sound messages one by one.
  • Each sound cell 12 is displayed in a rectangular frame, and each rectangular frame is provided with a play button 14 and a sound waveform path 16, the sound waveform path 16 corresponding to the sound message.
  • When the user clicks the play button 14 on a certain sound cell 12, the sound message corresponding to that sound cell 12 is triggered to play.
  • Since the interface display effect of each sound cell 12 is basically the same and only the sound waveform path may differ, the user cannot accurately distinguish which sound messages have been played and which have not.
  • Because only a few sound messages can be displayed in the same user interface, the user needs to drag the feed stream up and down and constantly search among the multiple sound cells 12, spending more time and operation steps filtering different sound messages, which lowers the efficiency of human-computer interaction.
  • the embodiments of the present application provide an improved display solution for sound messages.
  • the virtual characters in the virtual world are used to display sound messages, and each sound message corresponds to a respective virtual character.
  • the virtual world is a virtual forest world, and the virtual character is a bird in the virtual forest world.
  • Each voice message corresponds to a respective bird, and some of the birds are displayed in different ways.
  • the virtual world is a virtual ocean world, and the virtual character is a small fish in the virtual ocean world.
  • Each sound message corresponds to a respective small fish, and some of the small fish are displayed in different ways. Therefore, the user can easily distinguish different voice messages by the different display modes of the virtual characters, and in particular distinguish played sound messages from unplayed sound messages.
  • FIG. 2 shows a structural block diagram of a computer system 200 provided by an exemplary embodiment of the present application.
  • the computer system 200 may be an instant messaging system, a team voice chat system, or other application systems with social attributes, which is not limited in the embodiments of the present application.
  • the computer system 200 includes a first terminal 220, a server cluster 240, and a second terminal 260.
  • the first terminal 220 is connected to the server cluster 240 through a wireless network or a wired network.
  • the first terminal 220 may be at least one of a smart phone, a game console, a desktop computer, a tablet computer, an e-book reader, an MP3 player, an MP4 player, and a laptop portable computer.
  • The first terminal 220 installs and runs an application program that supports voice messages.
  • the application may be any one of a voice social application, an instant messaging application, a team voice application, a social application for crowd aggregation based on topics or channels or circles, and a shopping-based social application.
  • the first terminal 220 is a terminal used by a first user, and a first user account is registered in an application program running in the first terminal 220.
  • The first terminal 220 is connected to the server cluster 240 through a wireless network or a wired network.
  • the server cluster 240 includes at least one of a server, multiple servers, a cloud computing platform, and a virtualization center.
  • the server cluster 240 is used to provide background services for applications supporting voice messages.
  • The server cluster 240 undertakes the main computing work, and the first terminal 220 and the second terminal 260 undertake the secondary computing work; or, the server cluster 240 undertakes the secondary computing work, and the first terminal 220 and the second terminal 260 undertake the main computing work; or, the server cluster 240, the first terminal 220, and the second terminal 260 use a distributed computing architecture for collaborative computing.
  • the server cluster 240 includes: an access server 242 and a message forwarding server 244.
  • The access server 242 is used to provide access services and message transmission and reception services for the first terminal 220 and the second terminal 260, and to forward messages (sound messages, text messages, picture messages, and video messages) between the terminals and the message forwarding server 244.
  • The server 242 is configured to provide background services for the application, such as at least one of a friend-adding service, a text message forwarding service, a voice message forwarding service, and a picture message forwarding service.
  • the message forwarding server 244 may be one or more.
  • the second terminal 260 installs and runs an application program supporting voice messages.
  • the application may be any one of a voice social application, an instant messaging application, a team voice application, a social application for crowd aggregation based on topics or channels or circles, and a shopping-based social application.
  • The second terminal 260 is a terminal used by a second user. A second user account is registered in the application running in the second terminal 260.
  • the first user account and the second user account are in a virtual social network, and the virtual social network provides a transmission path of the voice message between the first user account and the second user account.
  • the virtual social network may be provided by the same social platform, or may be collaboratively provided by multiple social platforms that have an associated relationship (such as an authorized login relationship). The embodiment of this application does not limit the specific form of the virtual social network.
  • the first user account and the second user account may belong to the same team, the same organization, have a friend relationship, or have temporary communication rights.
  • the first user account and the second user account may also be stranger relationships.
  • the virtual social network provides a one-way message transmission path or a two-way message transmission path between the first user account and the second user account, so that voice messages are transmitted between different user accounts.
  • The applications installed on the first terminal 220 and the second terminal 260 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms, or the applications installed on the two terminals are different but support the same sound messages.
  • Different operating systems include: Apple operating system, Android operating system, Linux operating system, Windows operating system and so on.
  • the first terminal 220 may refer to one of a plurality of terminals
  • the second terminal 260 may refer to one of a plurality of terminals. This embodiment only uses the first terminal 220 and the second terminal 260 as examples.
  • the terminal types of the first terminal 220 and the second terminal 260 are the same or different.
  • The terminal types include at least one of a smartphone, a game console, a desktop computer, a tablet computer, an e-book reader, an MP3 player, an MP4 player, and a laptop computer.
  • The following embodiments are described by using an example in which the first terminal 220 and/or the second terminal 260 is a smartphone.
  • the number of the foregoing terminals may be larger or smaller.
  • the foregoing terminal may be only one, or the foregoing terminal may be dozens or hundreds, or a larger number.
  • the foregoing computer system further includes other terminals 280.
  • the embodiment of the present application does not limit the number of terminals and the types of equipment.
  • FIG. 3 shows a structural block diagram of a terminal 300 provided by an exemplary embodiment of the present application.
  • The terminal 300 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer.
  • the terminal 300 may also be called other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
  • the terminal 300 may be a first terminal or a second terminal.
  • the terminal 300 includes a processor 301 and a memory 302.
  • the processor 301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
  • The processor 301 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
  • the processor 301 may also include a main processor and a coprocessor.
  • the main processor is a processor for processing data in the wake state, also called a CPU (Central Processing Unit).
  • The coprocessor is a low-power processor for processing data in the standby state.
  • the processor 301 may be integrated with a GPU (Graphics Processing Unit), and the GPU is responsible for rendering and drawing content required to be displayed on the display screen.
  • the processor 301 may further include an AI (Artificial Intelligence) processor, which is used to process computing operations related to machine learning.
  • the memory 302 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 302 may also include high-speed random access memory, and non-volatile memory, such as one or more disk storage devices, flash storage devices.
  • The non-transitory computer-readable storage medium in the memory 302 is used to store at least one instruction, and the at least one instruction is executed by the processor 301 to implement the sound message display method in an application program provided by the method embodiments of this application.
  • the terminal 300 may optionally include a peripheral device interface 303 and at least one peripheral device.
  • the processor 301, the memory 302, and the peripheral device interface 303 may be connected through a bus or a signal line.
  • Each peripheral device can be connected to the peripheral device interface 303 through a bus, a signal line, or a circuit board.
  • the peripheral device includes at least one of a radio frequency circuit 304, a touch display screen 305, a camera 306, an audio circuit 307, a positioning component 308, and a power source 309.
  • the peripheral device interface 303 may be used to connect at least one peripheral device related to I / O (Input / Output, Input / Output) to the processor 301 and the memory 302.
  • the display screen 305 is used to display a UI (User Interface).
  • the UI can include graphics, text, icons, videos, and any combination thereof.
  • the display screen 305 also has the ability to collect touch signals on or above the surface of the display screen 305.
  • the camera component 306 is used for capturing images or videos.
  • the audio circuit 307 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, and converting the sound waves into electrical signals and inputting them to the processor 301 for processing, or inputting to the radio frequency circuit 304 to implement voice communication.
  • the positioning component 308 is used for positioning the current geographic position of the terminal 300 to implement navigation or LBS (Location Based Service).
  • the power source 309 is used to power various components in the terminal 300.
  • the power source 309 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery.
  • the terminal 300 further includes one or more sensors 310.
  • the one or more sensors 310 include, but are not limited to, an acceleration sensor 311, a gyroscope sensor 312, a pressure sensor 313, a fingerprint sensor 314, an optical sensor 315, and a proximity sensor 316.
  • the acceleration sensor 311 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established by the terminal 300.
  • the gyro sensor 312 can detect the body direction and rotation angle of the terminal 300, and the gyro sensor 312 can cooperate with the acceleration sensor 311 to collect a 3D motion of the user on the terminal 300.
  • the pressure sensor 313 may be disposed on a side frame of the terminal 300 or a lower layer of the touch display screen 305.
  • the processor 301 controls the operability controls on the UI interface according to the user's pressure operation on the touch display screen 305.
  • the fingerprint sensor 314 is used to collect a user's fingerprint, and the processor 301 recognizes the user's identity based on the fingerprint collected by the fingerprint sensor 314, or the fingerprint sensor 314 recognizes the user's identity based on the collected fingerprint.
  • the optical sensor 315 is used for collecting ambient light intensity.
  • the proximity sensor 316 also called a distance sensor, is usually disposed on the front panel of the terminal 300. The proximity sensor 316 is used to collect the distance between the user and the front side of the terminal 300.
  • The memory 302 further includes the following program modules (or instruction sets), or a subset or superset thereof: an operating system 321; a communication module 322; a contact/motion module 323; a graphics module 324; a haptic feedback module 325; a text input module 326; a GPS module 327; a digital assistant client module 328; data, users, and models 329; and applications 330: a contacts module 330-1, a phone module 330-2, a video conference module 330-3, an email module 330-4, an instant messaging module 330-5, a fitness support module 330-6, a camera module 330-7, an image management module 330-8, a multimedia player module 330-9, a notepad module 330-10, a map module 330-11, a browser module 330-12, a calendar module 330-13, a weather module 330-14, a stock market module 330-15, a calculator module 330-16, an alarm module 330-17, a dictionary module 330-18, a search module 330-19, an online video module 330-20, ..., and user-created modules 330-21.
  • the memory 302 further includes an application program 330-22 that supports a voice message.
  • the application program 330-22 may be used to implement a sound message display method of an application program in the method embodiments described below.
  • The structure shown in FIG. 3 does not constitute a limitation on the terminal 300; the terminal may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
  • the present application also provides a computer-readable storage medium.
  • The computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the sound message display method in an application program provided by the embodiments of the present application.
  • FIG. 4 is a schematic structural diagram of a server provided by an exemplary embodiment of the present application.
  • This server can be implemented as any one of the server clusters 240 described above.
  • The server 400 includes a central processing unit (CPU) 401, a system memory 404 including a random access memory (RAM) 402 and a read-only memory (ROM) 403, and a system bus 405 connecting the system memory 404 and the central processing unit 401.
  • The server 400 further includes a basic input/output system (I/O system) 406 that helps transfer information between devices in the computer, and a mass storage device 407 for storing an operating system 413, a client 414, and other program modules 415.
  • the basic input / output system 406 includes a display 408 for displaying information and an input device 409 such as a mouse, a keyboard and the like for a user to input information.
  • the display 408 and the input device 409 are both connected to the central processing unit 401 through an input / output controller 410 connected to the system bus 405.
  • the basic input / output system 406 may further include an input / output controller 410 for receiving and processing input from a plurality of other devices such as a keyboard, a mouse, or an electronic stylus.
  • the input / output controller 410 also provides output to a display screen, printer, or other type of output device.
  • the mass storage device 407 is connected to the central processing unit 401 through a mass storage controller (not shown) connected to the system bus 405.
  • The mass storage device 407 and its associated computer-readable medium provide non-volatile storage for the server 400. That is, the mass storage device 407 may include a computer-readable medium (not shown) such as a hard disk or a Compact Disc Read-Only Memory (CD-ROM) drive.
  • the computer-readable media may include computer storage media and communication media.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data.
  • Computer storage media include RAM, ROM, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid-state storage technologies, CD-ROM, digital versatile disc (DVD) or other optical storage, tape cartridges, magnetic tape, disk storage, or other magnetic storage devices.
  • The server 400 may also run through a remote computer connected to a network such as the Internet. That is, the server 400 may be connected to a network 412 through a network interface unit 411 connected to the system bus 405, or the network interface unit 411 may be used to connect to other types of networks or remote computer systems (not shown).
  • FIG. 5 and FIG. 6 respectively show a schematic interface diagram and a flowchart of a sound message display method in an application program provided by an exemplary embodiment of the present application.
  • the user has an application installed on the terminal that supports the ability to send and / or receive voice messages. After the application is successfully installed, an icon of the application is displayed on the home page of the terminal.
  • a program icon 51 of the application is displayed on the homepage 50 of the terminal.
  • the user applies a startup operation to the program icon 51 of the application, and the startup operation may be a click operation of the program icon 51 of the application.
  • Step 601 Start the application program according to the startup operation.
  • The terminal calls the startup process of the application and brings the application into the foreground running state through the startup process. After the application starts, it logs in to the user account on the server.
  • the user account is used to uniquely identify each user.
  • a first user account is registered in the application, and other user accounts, such as a second user account and a third user account, are registered in applications in other terminals.
  • Each user publishes (or uploads or sends) a generated sound message in their own application.
  • Step 602 Obtain n sound messages issued by at least one user account.
  • the application program obtains n sound messages published by the at least one user account from the server.
  • The at least one user account is usually a user account other than the first user account, but it is not excluded that the n sound messages include sound messages published by the first user account.
  • n is at least one, but in most embodiments, n is a positive integer greater than 1.
  • a voice message is a message that conveys information by a voice signal.
  • In some embodiments, the sound message includes only a voice signal; in other embodiments, the sound message is a message that mainly conveys information through a voice signal. For example, the voice message includes a voice signal and auxiliary explanatory text; or a voice signal and an auxiliary explanatory picture; or a voice signal, auxiliary explanatory text, and an auxiliary explanatory picture.
  • In this embodiment, description is given using an example in which the voice message includes only a voice signal.
  • When an application obtains a sound message, it may first obtain only the message identifier of the sound message and then, when the message needs to be played, obtain the message content of the sound message from the server according to the message identifier.
  • Alternatively, the application can directly obtain the message content of the sound message and cache it locally.
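  • A minimal Kotlin sketch of the two fetching strategies just described (fetch-on-play versus prefetch-and-cache). The MessageStore interface, identifiers, and in-memory cache are illustrative assumptions, not part of the original disclosure.

```kotlin
// Sketch of the two strategies: fetch-on-play vs. prefetch-and-cache.
// MessageStore and the in-memory cache are illustrative assumptions.
interface MessageStore {
    fun listMessageIds(userAccounts: List<String>): List<String>   // message identifiers only
    fun downloadContent(messageId: String): ByteArray              // full audio payload
}

class SoundMessageClient(private val store: MessageStore) {
    private val cache = mutableMapOf<String, ByteArray>()

    // Lazy strategy: keep only identifiers, download the content when playback is requested.
    fun playLazily(messageId: String, play: (ByteArray) -> Unit) {
        val content = cache.getOrPut(messageId) { store.downloadContent(messageId) }
        play(content)
    }

    // Eager strategy: download and cache all message contents up front.
    fun prefetch(userAccounts: List<String>) {
        store.listMessageIds(userAccounts).forEach { id ->
            cache.getOrPut(id) { store.downloadContent(id) }
        }
    }
}

fun main() {
    val fakeStore = object : MessageStore {
        override fun listMessageIds(userAccounts: List<String>) = listOf("m1", "m2")
        override fun downloadContent(messageId: String) = "audio-of-$messageId".toByteArray()
    }
    val client = SoundMessageClient(fakeStore)
    client.prefetch(listOf("user-b"))
    client.playLazily("m1") { bytes -> println("playing ${bytes.size} bytes") }
}
```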
  • Step 603 Display the sound message display interface of the application.
  • the sound message display interface displays a sound message located in the virtual world, and the sound message is displayed using a visual element in the virtual world as a carrier.
  • the voice message display interface is one of the multiple user interfaces.
  • In some embodiments, the sound message display interface is displayed by default. As shown in FIG. 5, after the program icon 51 of the application is clicked, the sound message display interface 54 is displayed by default.
  • In other embodiments, a main page is displayed by default.
  • the main page provides multiple function options. The user needs to select among multiple function options to control the application to display the sound message display interface 54. For example, in the example shown in FIG. 8, after the application is started, the main page is displayed by default.
  • the main page displays an expansion function item 52.
  • Here, "expansion" is short for expanding the friend queue.
  • The expansion function interface includes three tab pages: expansion, expansion group, and Miaoyin Forest.
  • The expansion tab page is displayed by default.
  • The terminal then jumps to display the sound message display interface 54.
  • the display level and jump path of the sound message display interface 54 in the application are not limited.
  • the sound message display interface is a user interface for displaying sound messages posted by at least one user account.
  • In some embodiments, the sound message display interface is a user interface for displaying sound messages posted by unknown user accounts; in other embodiments, it is a user interface for displaying sound messages posted by user accounts in the same topic (or channel or circle); in still other embodiments, it is a user interface for displaying sound messages posted by user accounts belonging to the same region (such as the current city or school).
  • the sound message display interface displays a sound message located in a virtual world, and the sound message is displayed using a visual element in the virtual world as a carrier.
  • the virtual world can be a two-dimensional world, a 2.5-dimensional world, or a three-dimensional world.
  • a visual element is any object or substance that can be observed in the virtual world.
  • Visual elements include, but are not limited to, at least one of clouds, fog, lightning, fluids, static objects, plants, animals, virtual images, and cartoon images.
  • When displaying a sound message with a visual element in the virtual world as a carrier, the application can display the sound message itself as a visual element in the virtual world, or it can associate or mount the sound message to a visual element displayed in the virtual world and display it alongside that element, which is not limited in this embodiment.
  • This step may include the following sub-steps, as shown in FIG. 6B (a minimal code sketch of these sub-steps follows the list):
  • Step 603a Acquire a virtual character corresponding to each of the n sound messages in the virtual world.
  • Step 603b Generate a scene picture of the virtual world, the scene picture displaying the virtual characters and the voice messages corresponding to the virtual characters.
  • Step 603c Display an audio message display interface of the application program according to the scene picture of the virtual world.
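  • A minimal Kotlin sketch of sub-steps 603a to 603c: assign a virtual character to each sound message, build the scene, and render it. All types and the character-selection rule here are illustrative assumptions.

```kotlin
// Sketch of sub-steps 603a–603c: pick a character for each message, build the scene, render it.
data class SoundMessage(val id: String, val durationSeconds: Int)
data class VirtualCharacter(val model: String)
data class SceneItem(val message: SoundMessage, val character: VirtualCharacter)
data class Scene(val items: List<SceneItem>)

// Step 603a: acquire a virtual character for each of the n sound messages (rule is hypothetical).
fun characterFor(message: SoundMessage): VirtualCharacter =
    VirtualCharacter(model = if (message.durationSeconds < 10) "sparrow" else "peacock")

// Step 603b: generate the scene picture of the virtual world.
fun buildScene(messages: List<SoundMessage>): Scene =
    Scene(messages.map { SceneItem(it, characterFor(it)) })

// Step 603c: display the sound message display interface from the scene picture.
fun render(scene: Scene) {
    scene.items.forEach { println("${it.character.model}: message ${it.message.id} (${it.message.durationSeconds}s)") }
}

fun main() {
    render(buildScene(listOf(SoundMessage("m1", 4), SoundMessage("m2", 20))))
}
```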
  • the application needs to obtain a virtual character corresponding to each of the n sound messages, and each sound message corresponds to a virtual character.
  • A virtual character is an individual element that can be observed in the virtual world; that is, an element whose individual instances can be clearly distinguished from one another visually.
  • a virtual character can be a living character in a virtual world.
  • the virtual character is a character displayed in a character model in a cartoon form, a plant form, and / or an animal form.
  • The virtual character is at least one of a flower character, a mammal character, a bird character, a reptile character, an amphibian character, a fish character, a dinosaur character, an anime character, and another fictional character.
  • The virtual characters corresponding to at least two of the voice messages may be the same or different.
  • Virtual characters are characters that belong to the virtual world.
  • the virtual world may be at least one of a two-dimensional virtual world, a 2.5-dimensional virtual world, and a three-dimensional virtual world.
  • the virtual world can be any one of a virtual forest world, a virtual ocean world, a virtual aquarium world, a virtual space world, a virtual animation world, and a virtual fantasy world.
  • The virtual characters can be stored locally in the form of a material library, or can be provided by the server to the application; for example, the server provides the virtual characters corresponding to the n sound messages to the application through a web file.
  • The server can also determine the virtual character corresponding to each sound message and then send the virtual character to the application.
  • In this case, the application simultaneously receives the n sound messages and the virtual character corresponding to each sound message from the server.
  • The voice message display interface 54 displays a scene picture of the virtual world "Mytone Forest", which is a two-dimensional world. The virtual world "Mytone Forest" contains a blue-sky-and-white-cloud background and a tree 55 in front of that background.
  • The four birds have different bird character images, so the user can quickly distinguish the different voice messages by the four different character images.
  • the "different sound message” herein refers to sound messages that do not belong to the same, and can be different sound messages sent by different users, or different sound messages sent by the same user.
  • the tree 55 includes three triangles stacked on top of each other.
  • From top to bottom, the second triangle and the third triangle have the same shape and size.
  • The first triangle at the top is slightly smaller than the other two triangles.
  • the first bird 56 and the corresponding voice message are displayed on the right side of the first triangle, and the voice messages corresponding to the second bird 57 and the third bird 58 are superimposed on the left side of the second triangle.
  • the sound message corresponding to the fourth bird is superimposed on the right side of the third triangle.
  • each voice message is represented by a message box, and the message box may be any box type such as rectangle, rounded rectangle, bubble type, cloud type, etc.
  • the length of the message box is proportional to the message length of the voice message.
  • the message box can also use text to display the message length of the sound message. For example, the number "4" indicates that the message length is 4 seconds, and the number "8" indicates that the message length is 8 seconds.
  • The message box is displayed adjacent to the virtual character: when the virtual character is located on the left side of the triangle, the message box is located on the right side of the virtual character; when the virtual character is located on the right side of the triangle, the message box is located on the left side of the virtual character.
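  • A minimal Kotlin sketch of the message-box layout rule described above: the box width grows with the message duration and the box sits on the opposite side of its virtual character. The scaling factor and width cap are illustrative assumptions.

```kotlin
// Sketch of the layout rule: width proportional to duration, box opposite the character.
enum class Side { LEFT, RIGHT }

data class MessageBox(val widthDp: Int, val side: Side, val label: String)

fun layoutMessageBox(durationSeconds: Int, characterSide: Side): MessageBox {
    val width = (40 + durationSeconds * 8).coerceAtMost(240)            // proportional, capped (assumed values)
    val side = if (characterSide == Side.LEFT) Side.RIGHT else Side.LEFT
    return MessageBox(widthDp = width, side = side, label = "$durationSeconds")  // e.g. "4" means 4 seconds
}

fun main() {
    println(layoutMessageBox(4, Side.LEFT))    // short message, character on the left, box on the right
    println(layoutMessageBox(8, Side.RIGHT))   // longer message, character on the right, box on the left
}
```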
  • When a sound message is triggered, the terminal plays the corresponding message content.
  • the terminal displays the sound message display interface in a multi-layer overlay rendering manner.
  • the terminal renders the ambient atmosphere layer, the background visual layer, and the sound element layer separately, and superimposes the ambient atmosphere layer, the background visual layer, and the sound element layer to display as a sound message display interface.
  • the environment atmosphere layer includes the sky and the ground in the virtual forest world
  • the background visual layer includes the trees in the virtual forest world
  • The sound element layer includes the virtual characters (such as birds) and the sound messages corresponding to the virtual characters.
  • As shown in FIG. 9, the terminal renders the ambient atmosphere layer 181, the background visual layer 182, and the sound element layer 183 separately.
  • the background visual layer 182 is located above the ambient atmosphere layer 181, and the sound element layer 183 is located above the background visual layer 182.
  • the environment atmosphere layer 181 includes the sky in the virtual forest world, and the sky includes the blue sky background plus several white cloud patterns.
  • the environment atmosphere layer 181 also includes the ground in the virtual forest world, and the ground includes grass.
  • The trees in the background visual layer 182 include a plurality of triangles superimposed on top of each other, and each triangle is used to display a virtual character and the voice message corresponding to that virtual character; therefore, the number of triangles is equal to the number of voice messages, or in other words, the number of triangles is equal to the number of birds.
  • the number of virtual characters in the sound element layer 183 is the same as the number of sound messages acquired by the terminal, and the display parameters of the virtual characters are related to the message attributes of the corresponding sound messages.
  • When the number of sound messages changes, the terminal may keep the ambient atmosphere layer unchanged, redraw the background visual layer and the sound element layer according to the changed number of sound messages, and then superimpose the ambient atmosphere layer, the background visual layer, and the sound element layer to generate a new sound message display interface for refreshed display.
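  • A minimal Kotlin sketch of the three-layer compositing scheme: the ambient atmosphere layer is rendered once, while the background visual layer and the sound element layer are redrawn when the number of sound messages changes. Representing each layer as a string is a simplification for illustration only.

```kotlin
// Sketch of layered rendering: ambient layer fixed, the other two layers track the message count.
class LayeredRenderer {
    private val ambientLayer = "sky + ground"                  // drawn once, kept unchanged

    private fun backgroundLayer(messageCount: Int) =
        "tree with $messageCount triangles"                    // one triangle per message

    private fun soundElementLayer(messageCount: Int) =
        "$messageCount birds with message boxes"               // one character per message

    fun compose(messageCount: Int): List<String> =
        listOf(ambientLayer, backgroundLayer(messageCount), soundElementLayer(messageCount))
}

fun main() {
    val renderer = LayeredRenderer()
    println(renderer.compose(4))   // initial interface
    println(renderer.compose(5))   // a new message arrived: only the upper two layers change
}
```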
  • Step 604 Receive a trigger operation of at least one of the virtual character and the sound message in the sound message display interface, where the trigger operation includes at least one of a click operation and a slide operation.
  • the virtual character is also used to play the corresponding message content when triggered.
  • Many virtual characters can be displayed on the same voice message display interface.
  • the target virtual character is one of the multiple virtual characters. The user plays the sound message corresponding to the target virtual character by triggering the target virtual character.
  • the user can click on the virtual character, and when the virtual character is clicked, the sound message corresponding to the virtual character is triggered to play; in other embodiments, the user can slide the virtual character, and when the virtual character When being swiped, a sound message corresponding to the virtual character is triggered to play.
  • Step 605 Play a sound message corresponding to the virtual character according to the trigger signal.
  • the terminal plays a sound message according to the trigger signal.
  • the user can trigger the virtual character again during playback to pause the sound message, or replay the sound message.
  • steps 604 and 605 are optional steps, depending on the operation performed by the user on the voice message display interface.
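  • A minimal Kotlin sketch of the playback behaviour described in steps 604 and 605: a trigger operation on a virtual character starts playback of its sound message, and triggering it again pauses or resumes it. The three-state model and type names are illustrative assumptions.

```kotlin
// Sketch of steps 604–605: a trigger on the virtual character toggles playback of its message.
enum class PlaybackState { IDLE, PLAYING, PAUSED }

class CharacterPlayback {
    var state = PlaybackState.IDLE
        private set

    // Called when the virtual character (or its message box) receives a tap or swipe.
    fun onTrigger() {
        state = when (state) {
            PlaybackState.IDLE, PlaybackState.PAUSED -> PlaybackState.PLAYING
            PlaybackState.PLAYING -> PlaybackState.PAUSED
        }
    }
}

fun main() {
    val playback = CharacterPlayback()
    playback.onTrigger(); println(playback.state)   // PLAYING
    playback.onTrigger(); println(playback.state)   // PAUSED
    playback.onTrigger(); println(playback.state)   // PLAYING again (resume / replay)
}
```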
  • the method provided in this embodiment displays a sound message display interface of an application program.
  • the sound message display interface displays a sound message located in a virtual world, and the sound message uses a visual element in the virtual world as a carrier.
  • Since the visual elements in the virtual world can differ from one another and the types of visual elements can be rich and diverse, this technical solution can solve the problem in the related art that the interface display effect of each sound message is basically the same, which makes it difficult for the user to accurately distinguish between the various sound messages.
  • When the application determines the virtual character corresponding to each of the n sound messages in the virtual world, it may determine the virtual character corresponding to a sound message according to the message attribute of that sound message.
  • the message attributes include the message content length of the sound message.
  • the message content length refers to the length of time required for the message content of a sound message to play normally.
  • the terminal determines the role parameters of the virtual character corresponding to the sound message according to the message content length of the sound message.
  • the character parameters include at least one of a type of a character model, an animation style, an animation frequency, and a character special effect.
  • Types of character models refer to character models with different external forms, such as sparrows and peacocks; animation styles refer to different animations performed by the same character model, such as raising the head, shaking the legs, spreading the wings, and waving the tail; animation frequency refers to how often the character model's animation is played; character special effects refer to different special effects applied to the same character model, such as different feather colors and different appearances.
  • the application program is preset with a plurality of different message content length intervals, and each message content length interval corresponds to a different role model.
  • For example, interval A corresponds to bird A, interval B corresponds to bird B, interval C corresponds to bird C, interval D corresponds to bird D, and interval E corresponds to bird E.
  • When the content length of a sound message is 20 seconds, the terminal determines that the virtual character corresponding to the sound message is bird D; when the content length of a sound message is 35 seconds, the terminal determines that the virtual character corresponding to the sound message is bird E.
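  • A minimal Kotlin sketch of the content-length lookup: each duration interval maps to one character model. The interval boundaries are illustrative assumptions, chosen only so that a 20-second message maps to bird D and a 35-second message maps to bird E, as in the example above.

```kotlin
// Sketch of the interval-to-model lookup for the message content length (boundaries assumed).
fun modelForContentLength(seconds: Int): String = when {
    seconds <= 5  -> "bird A"
    seconds <= 10 -> "bird B"
    seconds <= 15 -> "bird C"
    seconds <= 30 -> "bird D"
    else          -> "bird E"
}

fun main() {
    println(modelForContentLength(20))   // bird D
    println(modelForContentLength(35))   // bird E
}
```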
  • The message attributes may include the published duration of the sound message. The published duration is measured from the time the sound message was uploaded. The terminal determines the character parameters of the virtual character corresponding to the sound message according to the published duration of the sound message.
  • a plurality of different published duration intervals are set in advance, and each published duration interval corresponds to a different role model.
  • For example, interval 1 corresponds to bird 1, interval 2 corresponds to bird 2, interval 3 corresponds to bird 3, interval 4 corresponds to bird 4, interval 5 corresponds to bird 5, and interval 6 corresponds to bird 6.
  • When the published duration of a sound message is 10 minutes, the terminal determines that the virtual character corresponding to the sound message is bird 2; when the published duration of a sound message is 4 hours and 11 minutes, the terminal determines that the virtual character corresponding to the sound message is bird 5.
  • The message attributes may also include the unplayed duration during which the sound message remains in an unplayed state after being displayed.
  • the non-broadcast duration refers to the time from the start of the display of the sound message to the current time, and the start of display is the time at which the sound message starts to be displayed on the sound message display interface.
  • the terminal determines a role parameter of a virtual character corresponding to the sound message according to a non-broadcast duration of the sound message. For example, a plurality of different non-broadcast duration intervals are set in advance, and each non-broadcast duration interval corresponds to a different character model.
  • The foregoing description uses the type of the character model as an example of the character parameter.
  • the animation style and / or role special effect of the virtual character can also be determined according to the message attribute of the voice message.
  • For example, the length of the sound message is used to determine the character characteristics of the virtual character: the longer the sound message, the larger and brighter the halo around the virtual character.
  • The message attributes may also include: a gender attribute of the voice message (such as male or female), an age attribute of the voice message (such as under 18, 18 to 25, 25 to 30, 30 to 35, or 35 and older), and a voice classification of the voice message (such as a doll voice, a loli voice, an uncle voice, or a magnetic male voice).
  • In the embodiments of this application, the attributes included in the message attributes are not limited.
  • The division of the duration intervals may differ in different embodiments; for example, the published duration may be divided into four time intervals: "1 to 5 seconds", "6 to 10 seconds", "10 to 30 seconds", and "30 to 60 seconds". It should also be noted that the same time interval may correspond to two or more virtual characters; for example, each interval corresponds to three different bird character models, as shown in FIG. 12.
  • In this case, for a sound message that falls in a certain time interval, a character model may be randomly selected from the plurality of character models corresponding to that interval as the character model of that sound message.
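  • A minimal Kotlin sketch of random selection when one duration interval corresponds to several character models. The interval boundaries and the candidate model names are illustrative assumptions.

```kotlin
import kotlin.random.Random

// Sketch: each published-duration interval holds several candidate models; one is picked at random.
data class ModelInterval(val maxSeconds: Long, val candidates: List<String>)

val publishedDurationIntervals = listOf(
    ModelInterval(5,  listOf("bird A1", "bird A2", "bird A3")),   // 1–5 seconds
    ModelInterval(10, listOf("bird B1", "bird B2", "bird B3")),   // 6–10 seconds
    ModelInterval(30, listOf("bird C1", "bird C2", "bird C3")),   // 10–30 seconds
    ModelInterval(60, listOf("bird D1", "bird D2", "bird D3"))    // 30–60 seconds
)

fun pickModel(publishedSeconds: Long, rng: Random = Random.Default): String {
    val interval = publishedDurationIntervals.firstOrNull { publishedSeconds <= it.maxSeconds }
        ?: publishedDurationIntervals.last()
    return interval.candidates.random(rng)
}

fun main() {
    println(pickModel(8))    // one of bird B1 / B2 / B3, chosen at random
}
```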
  • The method provided in this embodiment determines the virtual character corresponding to a voice message according to the message attribute of the voice message, so that when a user observes different virtual characters, the user simultaneously learns the message attribute of the voice message corresponding to each virtual character.
  • The message attribute can be any of the content length of the message, its published duration, and its unplayed duration. Therefore, while minimizing the amount of text the user needs to read, graphical information is used to deliver the relevant information of the voice messages to the user.
  • the user may also add a friend relationship with the user who posted the sound message, or bookmark the sound message to the favorite list.
  • the user when the user wants to play the sound message 57 posted by user B in the sound message display interface 54, the user applies a trigger operation to the virtual character or sound message 57, and the trigger operation may be a click operation or a slide operation.
  • the terminal displays a play window 60 of the sound message 57 according to the trigger operation, and the play window 60 displays an add friend control 61 and a favorite control 62 corresponding to the virtual character.
  • the application displays the add friend control corresponding to the virtual character.
  • the user can apply the first operation to the add friend control when he wants to add an interested message publisher.
  • the application receives a first operation for adding a friend control.
  • the adding friend control is represented by an add friend button
  • the first operation may be a click operation for adding a friend button.
  • According to the first operation, the application adds a friend relationship with the user account corresponding to the virtual character.
  • the add friend control 61 may be an “add friend” button 61.
  • the "add friend” button 61 is used to establish a friend relationship between the current user and the user B.
  • the terminal may jump to the add friend page 63, and the user adds a friend on the add friend page 63.
  • a "confirm button 632" and a “cancel button” 631 are displayed on the friend adding page 63.
  • the additional friend control 61 may also adopt other display methods, such as being displayed near the virtual character on the sound message display interface 54 in the form of a miniature pull-down menu, which is not limited in this embodiment.
  • The application interacts with the server according to the first operation and establishes, through the server, a friend relationship with the user account corresponding to the virtual character.
  • In some embodiments, to establish a friend relationship, the user is also required to enter verification information, and the user account corresponding to the virtual character needs to verify that information before the friend relationship between the two is established.
  • the application displays the favorite control corresponding to the virtual character.
  • the user can apply a second operation to the favorite control when they want to collect the sound message of interest, and the application receives the second operation acting on the favorite control.
  • the second operation may be a click operation acting on a favorite control.
  • the application program collects a voice message corresponding to the virtual character according to the second operation.
  • the favorite button 62 is implemented using a heart-shaped button.
  • the favorite button 62 is used to favorite a voice message corresponding to the virtual character.
  • the terminal may jump to the "Favorite List" page 64.
  • the "Favorite List” page 64 displays various sound messages that the user has favorited. The user may Listen to each voice message repeatedly.
  • The terminal adopts a differentiated display mode for the virtual character corresponding to a bookmarked voice message.
  • The differentiated display mode includes at least one of changing a color, adding an accessory, and adding an animation special effect; for example, a hat or a heart-shaped mark is displayed on the bird corresponding to the favorited voice message.
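  • A minimal Kotlin sketch of the differentiated display for favorited messages: the corresponding virtual character gets a changed color, an accessory, or an animation effect. The specific appearance values are illustrative assumptions.

```kotlin
// Sketch of the differentiated display applied to the character of a favorited sound message.
data class CharacterAppearance(
    val color: String = "default",
    val accessory: String? = null,
    val animationEffect: String? = null
)

fun appearanceFor(isFavorited: Boolean): CharacterAppearance =
    if (isFavorited)
        CharacterAppearance(color = "golden", accessory = "heart mark", animationEffect = "sparkle")
    else
        CharacterAppearance()

fun main() {
    println(appearanceFor(true))    // favorited: marked with a heart and highlighted
    println(appearanceFor(false))   // normal display
}
```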
  • the method provided by this embodiment triggers the playing of a sound message corresponding to a virtual character by a trigger signal acting on the virtual character, so that a user can directly use the virtual character as a human-computer interaction role to play a sound message. Improving the sense of substitution for the virtual world from time to time, thereby improving the efficiency of human-computer interaction during human-computer interaction.
  • in the method provided in this embodiment, when a sound message of an unfamiliar user account is displayed on the sound message display interface, an add friend control is displayed while that sound message is played, so that a user who hears a sound message of interest can directly establish a friend relationship with the unfamiliar user account, which increases the friend addition rate between interested users in the social application.
  • a favorite control is also displayed on the sound message display interface, so that when a user hears a sound message of interest, the user can add it to the favorites list and listen to it repeatedly at a later time.
  • the sound message displayed on the sound message display interface 54 may not be clicked and played by the user in time.
  • to help the user distinguish between them, a displayed but unplayed sound message is referred to as an unplayed sound message, and a sound message that has been played is referred to as a played sound message.
  • in some possible embodiments, after step 605, the method further includes the following steps 606 to 609, as shown in FIG. 16:
  • Step 606: When the unplayed duration of a first sound message on the sound message display interface reaches an unplayed threshold, control the first virtual character corresponding to the first sound message to perform a preset reminder action.
  • the preset reminder action includes, but is not limited to, at least one of shaking the entire body, shaking the head, shaking the limbs, chirping, and pecking at its feathers.
  • shaking the entire body refers to using the character model as a center point, shaking the character model left and right, and / or shaking the character model up and down.
  • taking the virtual character being a bird as an example, when the unplayed duration of the sound messages corresponding to bird 57 and bird 58 on the sound message display interface reaches 10 seconds, the terminal controls bird 57 and bird 58 to shake their bodies, indicating to the user that the sound messages corresponding to bird 57 and bird 58 are unplayed sound messages.
  • this mechanism may be performed periodically, for example with an unplayed threshold of 10 seconds: if the sound message corresponding to the first virtual character remains unplayed, the terminal controls the first virtual character corresponding to the first sound message to shake its body every 10 seconds.
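A minimal, hypothetical sketch of this periodic reminder check follows; the class and function names and the tick granularity are illustrative assumptions, and the 10-second value mirrors the example above.

```kotlin
// Hypothetical sketch of the periodic reminder check for unplayed messages; names are illustrative.
data class SoundMessage(val id: String, var played: Boolean, var displayedAtMs: Long)

class ReminderScheduler(private val unplayedThresholdMs: Long = 10_000) {
    // Invoked periodically (e.g., once per second by the UI loop).
    fun tick(nowMs: Long, messages: List<SoundMessage>, shake: (String) -> Unit) {
        for (msg in messages) {
            if (!msg.played && nowMs - msg.displayedAtMs >= unplayedThresholdMs) {
                shake(msg.id)              // trigger the preset reminder action on the bird
                msg.displayedAtMs = nowMs  // restart the interval so the bird shakes every 10 s
            }
        }
    }
}
```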
  • Step 607: When the unplayed duration of a first sound message and a second sound message on the sound message display interface reaches the unplayed threshold, control the first virtual character and the second virtual character to exchange positions with each other, where the first virtual character is the virtual character corresponding to the first sound message and the second virtual character is the virtual character corresponding to the second sound message.
  • the first sound message and the second sound message are any two of the sound messages displayed on the sound message display interface.
  • when there are multiple sound messages whose unplayed duration reaches the unplayed threshold, the first sound message and the second sound message may be the two sound messages closest to each other on the sound message display interface.
  • referring to FIG. 17, three sound messages are displayed on the sound message display interface 40. Assuming that the sound message at the top of the tree is a played sound message and that the first sound message 51 and the second sound message 52 in the middle and lower part of the tree are unplayed sound messages, when the unplayed duration of the unplayed sound messages reaches the unplayed threshold, the bird corresponding to the first sound message 51 and the bird corresponding to the second sound message 52 are controlled to exchange positions with each other.
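The following is a minimal, hypothetical sketch of step 607, swapping the display slots of two overdue unplayed messages; the data structures and the "closest slots" heuristic are illustrative assumptions.

```kotlin
// Hypothetical sketch of step 607: exchange positions of two overdue unplayed birds.
data class PlacedMessage(val id: String, var slotIndex: Int, var played: Boolean, var unplayedMs: Long)

fun swapOverdueUnplayed(messages: List<PlacedMessage>, unplayedThresholdMs: Long) {
    // Pick unplayed messages whose unplayed duration reached the threshold.
    val overdue = messages.filter { !it.played && it.unplayedMs >= unplayedThresholdMs }
    if (overdue.size < 2) return
    // Choose the two whose display slots are closest to each other, then swap their positions.
    val (a, b) = overdue.sortedBy { it.slotIndex }
        .zipWithNext()
        .minByOrNull { (x, y) -> y.slotIndex - x.slotIndex } ?: return
    val tmp = a.slotIndex
    a.slotIndex = b.slotIndex
    b.slotIndex = tmp
}
```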
  • Step 608: When a third sound message on the sound message display interface changes from the unplayed state to the played state, replace the third sound message with a fourth sound message in the unplayed state on the sound message display interface, and replace the third virtual character with a fourth virtual character, where the fourth virtual character is the virtual character corresponding to the fourth sound message and the third virtual character is the virtual character corresponding to the third sound message.
  • the fourth sound message is a newly acquired sound message
  • the fourth virtual character is a virtual character determined according to the fourth sound message.
  • the fourth sound message is a sound message that has not yet been displayed.
  • Step 609: Move the third sound message and the third virtual character out of the sound message display interface, or move them to a designated area in the sound message display interface.
  • in one possible embodiment, when the terminal replaces the third virtual character with the fourth virtual character, the terminal moves the third virtual character out of the sound message display interface. For example, if the third virtual character is a bird, the bird flies out of the sound message display interface.
  • in another possible embodiment, when the terminal replaces the third virtual character with the fourth virtual character, the terminal moves the third virtual character to a designated area in the sound message display interface.
  • for example, if the third virtual character is a bird, the bird flies to the lawn under the trees in the virtual forest world.
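A minimal, hypothetical sketch of steps 608 and 609 follows, swapping in a fresh unplayed message once one has been played and moving the old character off screen or onto the lawn; the types and the pending-queue structure are illustrative assumptions.

```kotlin
// Hypothetical sketch of steps 608-609; names and data structures are illustrative.
sealed class Exit { object OffScreen : Exit(); object Lawn : Exit() }

data class Slot(var messageId: String, var birdId: String)

class SlotManager(private val pending: ArrayDeque<Pair<String, String>>) { // (messageId, birdId) not yet shown
    // Called when the message in `slot` changes from the unplayed state to the played state.
    fun onMessagePlayed(slot: Slot, exit: Exit, moveBird: (String, Exit) -> Unit) {
        moveBird(slot.birdId, exit)                      // fly out of the interface or onto the lawn
        val next = pending.removeFirstOrNull() ?: return // fourth sound message: newly acquired, not yet displayed
        slot.messageId = next.first                      // replace the third sound message
        slot.birdId = next.second                        // replace the third virtual character
    }
}
```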
  • step 606 can be implemented separately as an embodiment
  • step 607 can be implemented separately as an embodiment
  • steps 608 and 609 can be implemented separately as an embodiment.
  • in the method provided in this embodiment, when the third sound message changes from the unplayed state to the played state, the third sound message is replaced with a fourth sound message in the unplayed state, so that the sound message display interface automatically adds the display of sound messages that have not yet been played. This makes it convenient for users to clearly distinguish played sound messages from unplayed ones and to play unplayed sound messages more easily, thereby improving the efficiency of human-computer interaction between the user and the terminal.
  • in the method provided in this embodiment, when the unplayed duration of the first sound message reaches the unplayed threshold, the first virtual character corresponding to the first sound message is controlled to perform a preset reminder action, which enables the user to distinguish played sound messages from unplayed sound messages on the sound message display interface, so that playback of unplayed sound messages is prioritized.
  • in the method provided in this embodiment, when the unplayed duration of the first sound message and the second sound message reaches the unplayed threshold, the first virtual character and the second virtual character are controlled to exchange positions on the sound message display interface, which enables the user to distinguish played sound messages from unplayed sound messages on the sound message display interface, so that playback of unplayed sound messages is prioritized.
  • the number of sound messages displayed on the sound message display interface 54 is limited, for example four, six, or eight. In some possible embodiments, the user needs to refresh the sound message display interface 54. After step 603, the following steps are also included, as shown in FIG. 18:
  • Step 701 Receive a refresh operation on a voice message display interface.
  • the refresh operation includes, but is not limited to, a pull-down operation and / or a pull-up operation.
  • for example, the terminal uses a touch screen, and the user pulls down the sound message display interface on the touch screen, or pulls up the sound message display interface on the touch screen.
  • Step 702 Acquire m other sound messages according to the refresh operation.
  • the terminal obtains m other sound messages from the server according to the refresh operation, where m is a positive integer.
  • the other sound message is a sound message newly released by at least one user account between the time when the sound message was last acquired and the current time.
  • other sound messages are sound messages that are filtered from the sound message library by the server according to the filtering conditions.
  • Step 703: Simultaneously display, on the sound message display interface, the unplayed sound messages among the n sound messages and the virtual characters corresponding to them, together with the m other sound messages and the virtual characters corresponding to them.
  • when the user performs a refresh operation, it indicates that the user wants to view new sound messages.
  • the terminal will simultaneously display unplayed sound messages and m other sound messages among the n sound messages on the sound message display interface. That is, the terminal will preferentially display sound messages that have not yet been played.
  • taking the virtual world being a virtual forest world and the virtual character being a bird on a tree in the virtual forest world as an example, there are at least two different ways to implement step 703:
  • first, the n sound messages are all unplayed sound messages.
  • the sound message display interface displays a scene picture of the virtual forest world.
  • the scene picture includes trees located in the virtual forest world.
  • the upper part of the tree displays m other sound messages, and the lower and middle parts of the tree display n sound messages.
  • the terminal determines the height of the tree according to a first number, which is the number of unplayed sound messages among the n sound messages, and a second number, which is the number of the m other sound messages.
  • the sum of the first number and the second number is positively correlated with the height of the tree.
  • referring to FIG. 19, a tree in the virtual forest world is formed by superimposing a plurality of triangles, where the triangles located in the middle of the tree are the same size, and the triangle located at the top of the tree is 70% of the size of the triangles in the middle of the tree.
  • each triangle is used to display one bird and the sound message corresponding to that bird, and the virtual characters on vertically adjacent triangles are staggered left and right.
  • in other words, when k sound messages need to be displayed, k triangles are needed on the tree. Since m other sound messages are newly acquired in this step, and the n sound messages before the refresh are all unplayed sound messages, m triangles need to be added.
  • when a triangle needs to be added, the terminal makes a copy of the triangle x located at the bottom of the tree, and translates the copied triangle downward by 2/3 of its height.
  • the terminal may display the m other sound messages in the upper part of the tree, and the n sound messages obtained before refreshing are displayed in the middle and lower part of the tree.
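The following is a minimal, hypothetical sketch of sizing the triangle-stacked tree from the message counts: one triangle per displayed message, the top triangle scaled to 70%, and extra triangles produced by copying the bottom one and shifting it down by 2/3 of its height. The names, coordinate convention, and base height are illustrative assumptions.

```kotlin
// Hypothetical sketch of building the triangle-stacked background tree; values are illustrative.
data class Triangle(val topY: Float, val height: Float, val scale: Float = 1f)

fun buildTree(unplayedBeforeRefresh: Int, newlyFetched: Int, baseHeight: Float = 120f): List<Triangle> {
    val k = unplayedBeforeRefresh + newlyFetched           // total messages to display = triangles needed
    if (k == 0) return emptyList()
    val triangles = mutableListOf(Triangle(topY = 0f, height = baseHeight, scale = 0.7f)) // top of the tree
    var bottom = triangles.last()
    repeat(k - 1) {
        // Copy the lowest triangle and translate the copy downward by 2/3 of its height.
        bottom = Triangle(topY = bottom.topY + baseHeight * 2f / 3f, height = baseHeight)
        triangles += bottom
    }
    return triangles                                        // tree height grows with the message count
}
```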
  • second, some of the n sound messages are unplayed sound messages, and the other sound messages are played sound messages.
  • after the user has listened to some of the n sound messages, if the user performs a refresh operation, then when the terminal displays the m other sound messages and the unplayed sound messages among the n sound messages, the terminal may cancel the display of the played sound messages among the n sound messages and of the virtual characters corresponding to those played sound messages.
  • alternatively, since users may want to listen to played sound messages again, the terminal may display, on the lawn under the tree, the virtual characters corresponding to the played sound messages among the n sound messages.
  • referring to FIG. 20, assume that the user has listened to the sound messages corresponding to bird 3 and bird 4. When 2 new birds are obtained by refreshing, the 2 new birds are displayed at the top of the tree, the unplayed bird 1, bird 2, bird 5, bird 6, and bird 7 are moved down and displayed in the middle and lower part of the tree, and the played bird 3 and bird 4 are displayed on the lawn below the tree.
  • in the method provided in this embodiment, when a refresh operation is received, m other sound messages are obtained according to the refresh operation, and the unplayed sound messages among the n sound messages and the m other sound messages are displayed simultaneously on the sound message display interface, so that unplayed sound messages are displayed preferentially during refreshing. This allows the user to clearly distinguish played sound messages from unplayed ones and to intuitively listen to the unplayed sound messages on the tree after the refresh operation, thereby improving the efficiency of human-computer interaction between the user and the terminal.
  • in the above embodiments, the trees in the virtual forest world are described by taking trees formed by superimposing a plurality of triangles as an example.
  • in some embodiments, the trees in the virtual forest world may also be trees with curved edges and multiple branches, as shown in FIG. 21; in other embodiments, as shown in FIG. 22, there may also be multiple trees in the virtual forest world, where each tree represents a different topic (or circle or theme), such as topic A, topic B, topic C, and topic D.
  • the user can also swipe up or down to view other topics, such as topic E and topic F.
  • after the user selects the tree corresponding to "topic E", the user can view the birds and sound messages on that tree; each sound message on the tree belongs to the same topic E.
  • in other embodiments, in addition to the corridor-style tree arrangement shown in FIG. 22, a three-dimensional surround arrangement may be used, as shown in FIG. 23, in which the trees in the virtual forest world are displayed in a ring arrangement and each tree corresponds to a topic; the embodiments of the present application do not limit the arrangement of trees in the virtual forest world.
  • in the above embodiments, the virtual world being a virtual forest world and the virtual character being a virtual bird are taken as an example. The embodiments do not limit the specific form of the virtual world; in other embodiments, the virtual world may also be a virtual ocean world, and the virtual character may be a virtual fish, as shown in FIG. 24.
  • in some possible embodiments, as shown in FIG. 25, taking the application being an instant messaging program, the sound message being a sub-function of the application, the virtual world being a virtual forest world, and the virtual character being a bird as an example, the method includes:
  • Step 801 The terminal displays a voice message display interface.
  • the voice message display interface is an interface provided by the instant messaging program.
  • the user triggers a user operation on the instant messaging program, and the instant messaging program displays a voice message display interface according to the user operation.
  • the instant messaging program provides an "expand" function 52 in the function list, where "expand" refers to expanding the user's friend list.
  • when the user clicks the "expand" function 52, the user enters the expand function page, which includes three tab pages: Expand, Expand Group, and Miaoyin Forest.
  • after the user clicks the "Miaoyin Forest" tab 53, the instant messaging program displays the sound message display interface "Miaoyin Forest". On the "Miaoyin Forest" interface 54, a background of blue sky and white clouds, trees, and four birds on the trees are displayed.
  • optionally, when the instant messaging program needs to display the "Miaoyin Forest" interface 54, the instant messaging program obtains n sound messages from the server.
  • when the instant messaging program locally stores the materials of the virtual characters, the instant messaging program determines the virtual character corresponding to each sound message in the virtual world according to the message attribute of each sound message; when the instant messaging program does not locally store the materials of the virtual characters, the instant messaging program obtains the virtual character corresponding to each sound message in the virtual world from the server.
  • the server when the "Miaoyin Forest" interface 54 is implemented in the form of a web page, the server sends n voice messages, virtual characters corresponding to the n voice messages, and display materials of the virtual world together to the instant messaging program, and the instant messaging program Based on these data, a sound message display interface is displayed.
  • the n sound messages sent by the server to the terminal are filtered according to a preset condition
  • the preset condition includes, but is not limited to, at least one of the following conditions: 1. the sound message was published by a user account that is a stranger to the current user account; 2. the publishing user account has a different gender from the current user account; 3. the publishing user account has the same or a similar user portrait as the current user account; 4. the publishing user account actively adds friends frequently; 5. the publishing user account is frequently added as a friend.
  • when there are two or more preset conditions, each sound message can be scored according to the weight corresponding to each condition, and n sound messages can be selected and pushed to the instant messaging program according to the scores.
  • this embodiment does not limit the logic by which the server provides the n sound messages to the terminal.
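A minimal, hypothetical sketch of this server-side selection follows: each candidate message is scored against the preset conditions with per-condition weights, and the top n are pushed. The condition fields and the weight values are illustrative assumptions; the disclosure does not fix them.

```kotlin
// Hypothetical sketch of weighted scoring and top-n selection; weights are illustrative only.
data class Candidate(
    val messageId: String,
    val isStrangerToViewer: Boolean,
    val differentGender: Boolean,
    val similarProfile: Boolean,
    val friendAddsSent: Int,
    val friendAddsReceived: Int
)

fun selectTopN(candidates: List<Candidate>, n: Int): List<Candidate> {
    fun score(c: Candidate): Double {
        var s = 0.0
        if (c.isStrangerToViewer) s += 1.0      // condition 1: stranger to the current account
        if (c.differentGender) s += 0.8         // condition 2: different gender
        if (c.similarProfile) s += 1.2          // condition 3: same or similar user portrait
        s += 0.1 * c.friendAddsSent             // condition 4: actively adds friends often
        s += 0.1 * c.friendAddsReceived         // condition 5: often added as a friend
        return s
    }
    return candidates.sortedByDescending { score(it) }.take(n)
}
```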
  • Step 802: The user browses on the terminal and clicks a sound of interest.
  • when there are many sound messages and the tree is tall, the top part of the tree is displayed preferentially on the "Miaoyin Forest" interface 73.
  • the user can swipe up to view the middle and lower parts of the tree and the lawn below the tree. When a sound message of interest exists on the "Miaoyin Forest" interface 73, the user clicks the sound message or the virtual character corresponding to it, and the instant messaging program plays the sound message.
  • Step 803 The terminal receives a pull-down refresh operation of the user.
  • the user can also perform a pull-down refresh on the instant messaging program.
  • the pull-down refresh is a refresh operation triggered by a downward swipe signal. This embodiment does not limit the specific form of the refresh operation; pull-down refresh is merely used as an example.
  • the instant messaging program obtains m other sound messages from the server according to the refresh operation.
  • Step 804 The terminal sends an instruction to the server.
  • the instant messaging program sends a refresh instruction to the server.
  • Step 805: The server counts the number of unplayed sound messages after the refresh.
  • the server selects m other sound messages for the terminal, and then determines the number of unplayed sound messages after the refresh according to the sum of the unplayed sound messages among the n sound messages before the refresh and the m other sound messages.
  • Step 806 If the number is greater than the number before refreshing, the server duplicates the triangle in the background tree.
  • optionally, the trees in the virtual forest world are formed by superimposing a plurality of triangles, and each triangle displays one or two sound messages and the virtual characters corresponding to those sound messages. If the number of sound messages after the refresh is greater than the number of messages before the refresh, the server duplicates triangles in the background tree to increase the height of the background tree. For this process, reference may be made to FIG. 19 and FIG. 20 described above.
  • Step 807: If the number is less than the number before the refresh, the server reduces the triangles in the background tree.
  • if the number of sound messages after the refresh is less than the number of messages before the refresh, the server reduces the triangles in the background tree to reduce the height of the background tree.
  • Step 808 The server transmits data to the terminal.
  • optionally, the server sends the m other sound messages to the terminal; or the server sends the m other sound messages and the unplayed sound messages among the n sound messages before the refresh to the terminal; or the server sends the m other sound messages, the unplayed sound messages among the n sound messages before the refresh, and the played sound messages among the n sound messages before the refresh to the terminal.
  • optionally, the server sends the m other sound messages and the virtual characters corresponding to the other sound messages to the terminal; or the server additionally sends the unplayed sound messages among the n sound messages before the refresh and their corresponding virtual characters; or the server further sends the played sound messages among the n sound messages before the refresh and their corresponding virtual characters.
  • optionally, the server sends the sound messages to the terminal in the order of "new sound messages → unplayed sound messages before the refresh → played sound messages before the refresh".
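A minimal, hypothetical sketch of assembling the refreshed list in that order follows; the types and function name are illustrative assumptions.

```kotlin
// Hypothetical sketch of the refresh ordering: new messages, then unplayed, then played ones.
data class Msg(val id: String, val played: Boolean)

fun refreshedOrder(newMessages: List<Msg>, beforeRefresh: List<Msg>): List<Msg> {
    val unplayed = beforeRefresh.filter { !it.played }
    val played = beforeRefresh.filter { it.played }   // shown last, e.g., on the lawn under the tree
    return newMessages + unplayed + played
}
```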
  • the server also sends the increased or decreased trees to the terminal.
  • Step 809 The terminal redraws the background visual rendering layer and adjusts the position of the sound message.
  • the terminal redraws the background visual rendering layer according to the increased or decreased trees, and adjusts the position of each sound message in the sound element layer according to the sound messages sent by the server in sequence.
  • the terminal superimposes the ambient atmosphere layer, the re-rendered background visual rendering layer, and the re-rendered sound element layer to obtain a refreshed sound message display interface.
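The following is a minimal, hypothetical sketch of the three-layer composition described above, in which the ambient atmosphere layer is kept while the background visual layer and the sound element layer are re-rendered after a refresh; the interface and class names are illustrative assumptions.

```kotlin
// Hypothetical sketch of superimposing the three rendering layers; names are illustrative.
interface Layer { fun render(): String }                 // returns a rendered frame (simplified)

class AmbientLayer : Layer { override fun render() = "sky+ground" }
class BackgroundLayer(var treeTriangles: Int) : Layer { override fun render() = "tree($treeTriangles)" }
class SoundElementLayer(var birds: List<String>) : Layer { override fun render() = "birds(${birds.size})" }

// Superimposed bottom to top: ambient atmosphere, background visual, sound element.
fun composeInterface(ambient: Layer, background: Layer, sounds: Layer): List<String> =
    listOf(ambient.render(), background.render(), sounds.render())
```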
  • Step 810: The user browses on the terminal and clicks a sound of interest.
  • on the "Miaoyin Forest" interface 73, the m other sound messages are displayed at the top of the tree, the unplayed sound messages among the n sound messages before the refresh are displayed in the middle and lower parts of the tree, and the played sound messages among the n sound messages before the refresh are displayed on the lawn below the tree. The user can swipe up to view the middle and lower parts of the tree and the lawn below the tree. When there are many sound messages and the tree is tall, the top part of the tree is displayed preferentially on the "Miaoyin Forest" interface 73, and the middle and lower parts of the tree and the lawn below it are displayed when the user swipes up.
  • Step 811: The terminal detects that the sound message display interface has stayed displayed for more than 3 seconds.
  • the instant messaging program times the displayed duration of the sound message display interface. If the displayed duration of the sound message display interface exceeds 3 seconds, the method proceeds to step 812.
  • Step 812 The terminal sends an instruction to the server.
  • the terminal sends a stay trigger instruction to the server, and the stay trigger instruction is used to trigger the server to identify the unplayed sound message on the sound message display interface.
  • Step 813 The server identifies an unplayed sound message in the current interface.
  • the server keeps a playback record of each sound message, and the server identifies the unplayed sound message on the sound message display interface according to the playback record.
  • Step 814: The server generates a shaking effect centered on the bird image.
  • for the bird corresponding to an unplayed sound message, the server generates a shaking instruction and/or shaking animation material centered on the bird image.
  • Step 815 The server transmits data to the terminal.
  • the server sends, to the terminal, the shaking instruction and/or shaking animation material for the unplayed sound message.
  • Step 816 The terminal redraws the animation of the sound element layer.
  • when the terminal receives the shaking instruction for a certain unplayed sound message sent by the server, the terminal redraws the virtual character corresponding to the unplayed sound message in the sound element layer according to the locally stored shaking animation material.
  • when the terminal receives the shaking animation material for a certain unplayed sound message sent by the server, the terminal redraws the virtual character corresponding to the unplayed sound message in the sound element layer according to the received shaking animation material. The display process of shaking the bird's character model may refer to FIG. 16.
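A minimal, hypothetical sketch of step 816 follows, distinguishing the two cases above (a bare shaking instruction using locally stored material versus material delivered by the server); the payload types and redraw callback are illustrative assumptions.

```kotlin
// Hypothetical sketch of step 816: redraw the sound element layer on a shake payload.
sealed class ShakePayload {
    data class Instruction(val messageId: String) : ShakePayload()
    data class Material(val messageId: String, val animationAsset: String) : ShakePayload()
}

class SoundElementRedrawer(private val localShakeAsset: String) {
    fun onShakePayload(payload: ShakePayload, redraw: (messageId: String, asset: String) -> Unit) {
        when (payload) {
            is ShakePayload.Instruction -> redraw(payload.messageId, localShakeAsset)      // locally stored material
            is ShakePayload.Material -> redraw(payload.messageId, payload.animationAsset)  // material sent by server
        }
    }
}
```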
  • in the above embodiment, the server executing part of the calculation logic is used as an example. In different software architectures, the calculation logic executed by the server may also be executed by the terminal, which is not limited in the embodiments of the present application.
  • Figures 6, 7, 15, 18, and 25 are schematic flowcharts of a method for displaying a voice message in an embodiment. It should be understood that although the steps in the flowcharts of FIGS. 6, 7, 15, 18, and 25 are sequentially displayed as indicated by the arrows, these steps are not necessarily performed sequentially in the order indicated by the arrows. Unless explicitly stated in this document, the execution of these steps is not strictly limited, and these steps can be performed in other orders. Moreover, at least some of the steps in Figures 6, 7, 15, 18, and 25 may include multiple sub-steps or multiple stages. These sub-steps or stages are not necessarily performed at the same time, but may be performed at different times. The execution order of these sub-steps or stages is not necessarily performed sequentially, but may be performed in turn or alternately with at least a part of the other steps or sub-steps or stages of other steps.
  • FIG. 26 shows a structural block diagram of a sound message display device according to an exemplary embodiment of the present application.
  • the sound message display device is configured in a terminal installed with an application program.
  • the sound message display device may be implemented by software or hardware.
  • the application program has a capability of receiving sound messages.
  • the device includes a processing module 920 and a display module 940 ;
  • a processing module 920 configured to start the application program according to a startup operation
  • a processing module 920 configured to obtain n sound messages issued by at least one user account, where n is a positive integer
  • a display module 940 is configured to display a sound message display interface of the application, where the sound message display interface displays the sound message located in a virtual world, and the sound message is displayed by using a visual element in the virtual world as a carrier.
  • the visual element includes a virtual character
  • the display module 940 is configured to obtain the virtual character corresponding to each of the n sound messages in the virtual world; generate a scene picture of the virtual world, where the virtual character and the sound message corresponding to the virtual character are displayed in the scene picture; and display the sound message display interface of the application program according to the scene picture of the virtual world;
  • At least one of the virtual character and the sound message is used to play a corresponding message content when a trigger operation is received.
  • the processing module 920 is configured to determine a virtual character corresponding to the voice message according to a message attribute of the voice message.
  • in an optional embodiment, the processing module 920 is configured to determine the role parameters of the virtual character corresponding to the sound message according to the message duration of the sound message; or, the processing module 920 is configured to determine the role parameters of the virtual character corresponding to the sound message according to the published duration of the sound message, where the published duration is the duration measured from the upload time of the sound message; or, the processing module 920 is configured to determine the role parameters of the virtual character corresponding to the sound message according to the unplayed duration during which the sound message remains unplayed after being displayed; where the role parameters include at least one of the type of the character model, the animation style, the animation frequency, and the character special effects.
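A minimal, hypothetical sketch of mapping message attributes to role parameters follows. The interval boundaries echo the duration-interval example given earlier in the description (roughly 1-2 s, 2-5 s, 5-10 s, 10-30 s, over 30 s), and the halo effect echoes the example that a longer published duration yields a larger, brighter halo; the concrete names and formula are illustrative assumptions.

```kotlin
// Hypothetical sketch of deriving role parameters from message attributes; values are illustrative.
data class RoleParams(val modelType: String, val animationStyle: String, val haloScale: Float)

fun roleParamsFor(messageDurationSec: Int, publishedAgeMinutes: Long): RoleParams {
    val model = when {
        messageDurationSec <= 2 -> "birdA"
        messageDurationSec <= 5 -> "birdB"
        messageDurationSec <= 10 -> "birdC"
        messageDurationSec <= 30 -> "birdD"
        else -> "birdE"
    }
    // Example character special effect: the longer a message has been published,
    // the larger and brighter the halo around its virtual character (capped at 24 hours here).
    val halo = 1f + (publishedAgeMinutes.coerceAtMost(24L * 60) / (24f * 60f))
    return RoleParams(modelType = model, animationStyle = "idle", haloScale = halo)
}
```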
  • the above device further includes: a human-computer interaction module 960 and a playback module 980, as shown in FIG. 27;
  • the human-machine interaction module 960 is configured to receive a trigger operation of at least one of a virtual character and a voice message in the sound message display interface, and the trigger operation includes at least one of a click operation and a slide operation;
  • the playback module 980 is configured to play the sound message corresponding to the virtual character according to the trigger operation.
  • the human-machine interaction module 960 is further configured to receive various operations related to a user.
  • the human-computer interaction module is further configured to receive a startup operation, so that the processing module 920 starts an application program according to the startup operation.
  • in an optional embodiment, the display module 940 is further configured to display an add friend control corresponding to the virtual character; the human-computer interaction module 960 is further configured to receive a first operation for triggering the add friend control; and the processing module 920 is further configured to establish a friend relationship with the user account corresponding to the virtual character according to the first operation.
  • in an optional embodiment, the display module 940 is further configured to display a favorite control corresponding to the virtual character; the human-computer interaction module 960 is further configured to receive a second operation for triggering the favorite control; and the processing module 920 is further configured to add the sound message corresponding to the virtual character to favorites according to the second operation.
  • in an optional embodiment, the processing module 920 is further configured to, when the unplayed duration of a first sound message on the sound message display interface reaches the unplayed threshold, control the first virtual character corresponding to the first sound message to perform a preset reminder action.
  • in an optional embodiment, the processing module 920 is further configured to, when the unplayed duration of a first sound message and a second sound message on the sound message display interface reaches the unplayed threshold, control the first virtual character and the second virtual character to exchange positions with each other, where the first virtual character is the virtual character corresponding to the first sound message and the second virtual character is the virtual character corresponding to the second sound message.
  • in an optional embodiment, the processing module 920 is further configured to: when a third sound message on the sound message display interface changes from the unplayed state to the played state, replace the third sound message with a fourth sound message in the unplayed state on the sound message display interface and replace the third virtual character with a fourth virtual character, where the fourth virtual character is the virtual character corresponding to the fourth sound message and the third virtual character is the virtual character corresponding to the third sound message; and move the third sound message and the third virtual character out of the sound message display interface, or move them to a designated area in the sound message display interface.
  • in an optional embodiment, the human-computer interaction module 960 is further configured to receive a refresh operation on the sound message display interface; the processing module 920 is further configured to obtain m other sound messages according to the refresh operation; and the display module 940 is further configured to simultaneously display, on the sound message display interface, the unplayed sound messages among the n sound messages and the virtual characters corresponding to the unplayed sound messages, together with the m other sound messages and the virtual characters corresponding to the other sound messages.
  • the virtual world is a virtual forest world
  • the display module 940 is further configured to display a scene picture of the virtual forest world on the sound message display interface.
  • the scene picture includes trees located in the virtual forest world; the upper part of the trees displays the m other sound messages and the virtual characters corresponding to those other sound messages, and the middle and lower parts of the trees display the unplayed sound messages among the n sound messages and the virtual characters corresponding to those unplayed sound messages.
  • the virtual character is a bird on a tree in the virtual forest world.
  • in an optional embodiment, the processing module 920 is further configured to determine the height of the trees according to a first number of unplayed sound messages among the n sound messages and a second number of the m other sound messages;
  • the sum of the first number and the second number is positively correlated with the height of the trees.
  • in an optional embodiment, at least one of the n sound messages is a played sound message; the display module 940 is further configured to cancel displaying the played sound messages among the n sound messages and the virtual characters corresponding to the played sound messages; or, the display module 940 is further configured to display, on the lawn below the trees, the virtual characters corresponding to the played sound messages among the n sound messages.
  • the virtual world is a virtual forest world
  • the display module 940 is further configured to separately render an environment atmosphere layer, a background visual layer, and a sound element layer;
  • the environment atmosphere layer includes the sky and the ground in the virtual forest world, the background visual layer includes the trees in the virtual forest world, and the sound element layer includes the virtual character and the sound message corresponding to the virtual character; the environment atmosphere layer, the background visual layer, and the sound element layer are superimposed and displayed as the sound message display interface.
  • the virtual character may be various animals existing in the virtual forest.
  • in an optional embodiment, the sound message display interface is a user interface for displaying sound messages published by unfamiliar user accounts; or the sound message display interface is a user interface for displaying sound messages published by user accounts in the same topic; or the sound message display interface is a user interface for displaying sound messages published by user accounts belonging to the same region.
  • the application also provides a computer-readable storage medium, where the storage medium stores at least one instruction, at least one program, code set, or instruction set, the at least one instruction, the at least one program, the code set, or The instruction set is loaded and executed by the processor to implement the sound message display method provided by the foregoing method embodiment.
  • the present application also provides a computer program product containing instructions, which when executed on a computer device, causes the computer device to execute the sound message display method provided by each of the foregoing method embodiments.
  • the program may be stored in a computer-readable storage medium.
  • the storage medium mentioned may be a read-only memory, a magnetic disk or an optical disk.

Abstract

The present application discloses a sound message display method and apparatus in an application, a computer device, and a computer-readable storage medium. The method is performed by a terminal on which an application having the capability of receiving sound messages is installed, and includes: starting the application and obtaining n sound messages published by at least one user account; and displaying a sound message display interface of the application, where the sound message display interface displays the sound messages located in a virtual world, and the sound messages are displayed by using visual elements in the virtual world as carriers.

Description

应用程序中的声音消息显示方法、装置、计算机设备及计算机可读存储介质
本申请要求于2018年09月30日提交中国专利局、申请号为2018111594514、申请名称为“应用程序中的声音消息显示方法、装置、设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及计算机程序领域,特别涉及一种应用程序中的声音消息显示方法、装置、计算机设备及计算机可读存储介质。
背景技术
社交APP(Application,应用程序)是用户在移动终端上最为常用的应用程序。传统的社交APP主要采用文字和图片作为交流媒介,但在新兴的社交APP中采用声音消息作为交流媒介。
在以声音消息作为交流媒介的社交APP中,相关技术中按照上传时间倒序排列的多个声音单元格来显示声音消息,每个声音单元格对应一个声音消息。每个声音单元格具有各自对应的一个圆角矩形框。用户可以点击声音单元格来播放该声音消息。
由于各条声音消息的界面展示效果是基本相同的,因此用户难以准确分辨各条声音消息,需要在筛选不同的声音消息上耗费较多的时间。
发明内容
根据本申请的各种实施例提供了一种应用程序中的声音消息显示方法、装置、计算机设备及计算机可读存储介质。
一种应用程序中的声音消息显示方法,所述方法由终端执行,所述方法包括:
根据操作信号启动应用程序;
获取至少一个用户帐号发布的n条声音消息,n为正整数;
显示所述应用程序的声音消息展示界面,所述声音消息展示界面上显示有位于虚拟世界中的所述声音消息,所述声音消息以所述虚拟世界中的可视元素作为载体进行显示。
一种应用程序中的声音消息显示装置,所述装置包括:
处理模块,用于根据启动操作启动应用程序;
所述处理模块,用于获取至少一个用户帐号发布的n条声音消息,n为正整数;
显示模块,用于显示所述应用程序的声音消息展示界面,所述声音消息展示界面上显示有位于虚拟世界中的所述声音消息,所述声音消息以所述虚拟世界中的可视元素作为载体进行显示。
一种计算机设备,所述计算机设备包括存储器和处理器;所述存储器存储有至少一条程 序,所述至少一条程序由所述处理器加载并执行以实现如上所述的应用程序中的声音消息显示方法。
一种计算机可读存储介质,所述存储介质中存储有至少一条程序,所述至少一条程序由处理器加载并执行以实现如上所述的应用程序中的声音消息显示方法。
本申请的一个或多个实施例的细节在下面的附图和描述中提出。本申请的其它特征、目的和优点将从说明书、附图以及权利要求书变得明显。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是相关技术提供的声音消息显示方法的界面示意图;
图2是本申请一个示例性实施例提供的计算机系统的结构框图;
图3是本申请一个示例性实施例提供的终端的结构框图;
图4是本申请一个示例性实施例提供的服务器的结构框图;
图5是本申请一个示例性实施例提供的声音消息显示方法的界面示意图;
图6是本申请一个示例性实施例提供的声音消息显示方法的流程图;
图7是本申请一个示例性实施例提供的声音消息显示方法的流程图;
图8是本申请一个示例性实施例提供的声音消息显示方法的界面示意图;
图9是本申请一个示例性实施例提供的声音消息显示方法的分层示意图;
图10是本申请一个示例性实施例提供的小鸟的角色模型与消息时长的对应关系图;
图11是本申请一个示例性实施例提供的小鸟的角色模型与已发布时长的对应关系图;
图12是本申请另一个示例性实施例提供的小鸟的角色模型与已发布时长的对应关系图;
图13是本申请另一个示例性实施例提供的声音消息显示方法的流程图;
图14是本申请另一个示例性实施例提供的声音消息显示方法的界面示意图;
图15是本申请另一个示例性实施例提供的声音消息显示方法的界面示意图;
图16是本申请另一个示例性实施例提供的声音消息显示方法的界面示意图;
图17是本申请另一个示例性实施例提供的声音消息显示方法的界面示意图;
图18是本申请另一个示例性实施例提供的声音消息显示方法的流程图;
图19是本申请另一个示例性实施例提供的声音消息显示方法的界面示意图;
图20是本申请另一个示例性实施例提供的声音消息显示方法的界面示意图;
图21是本申请另一个示例性实施例提供的声音消息显示方法的流程图;
图22是本申请另一个示例性实施例提供的声音消息显示方法的流程图;
图23是本申请另一个示例性实施例提供的声音消息显示方法的界面示意图;
图24是本申请另一个示例性实施例提供的声音消息显示方法的界面示意图;
图25是本申请另一个示例性实施例提供的声音消息显示方法的流程图;
图26是本申请另一个示例性实施例提供的声音消息显示装置的框图;
图27是本申请另一个示例性实施例提供的声音消息显示装置的框图。
具体实施方式
为使本申请实施例的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
声音社交应用是一种基于声音消息进行社交的应用程序,又称语音社交应用。在发明人所知的某一应用商店的社交类应用免费排行榜中,声音社交应用占据了多个社交应用中的7.5%。此外传统的电台类应用也纷纷加入社交属性,希望能够在声音社交这一垂直市场占据一席之地。
在发明人所知的技术中,声音社交应用中采用Feed(饲料)流来显示一条条声音消息,Feed流是将用户主动订阅的若干消息源组合在一起形成内容聚合器,帮助用户持续地获取最新的订阅源内容。Feed流在用户界面上通常按照Timeline(时间线)的形式进行显示。如图1所示,该Feed流10中采用按照发布时间由早到晚排序显示的多个声音单元格12,声音单元格12与声音消息一一对应。每个声音单元格12呈矩形框显示,每个矩形框上设置有播放按钮14和声波纹路16,声波纹路16与该声音消息相对应。当用户点击某一个声音单元格12上的播放按钮14时,会触发播放该声音单元格12对应的声音消息。
但是由于每个声音单元格12的界面展示效果基本是相同的,只有声波纹路可能各不相同,因此用户无法准确分辨哪些声音消息是已播声音消息,哪些声音消息是未播声音消息。在同一个用户界面内显示的声音消息较少时,用户需要上下拖动Feed流中的不同声音单元格12,在多个声音单元格12中不断寻找,因此需要在筛选不同的声音消息上耗费较多的时间和操作步骤,人机交互效率较低。
本申请实施例提供了一种针对声音消息的改进型显示方案,通过构建一个虚拟世界,利用该虚拟世界中的虚拟角色来显示声音消息,每个声音消息对应各自的虚拟角色。在一个实施例中,该虚拟世界是一个虚拟森林世界,虚拟角色是该虚拟森林世界中的小鸟,每个声音消息对应各自的小鸟,存在一些小鸟的显示方式是不同的;在另一个实施例中,该虚拟世界是一个虚拟海洋世界,该虚拟角色是虚拟海洋世界中的小鱼,每个声音消息对应各自的小鱼,存在一些小鱼的显示方式是不同的。因此,用户能够较为容易地从不同显示方式的虚拟角色上,区分出不同的声音消息。甚至,从不同的显示方式上区分出已播声音消息和未播声音消息。
图2示出了本申请一个示例性实施例提供的计算机系统200的结构框图。该计算机系统200可以是一个即时通讯系统、团队语音聊天系统或者具有社交属性的其他应用程序系统,本申请实施例对此不加以限定。该计算机系统200包括:第一终端220、服务器集群240和 第二终端260。
第一终端220通过无线网络或有线网络与服务器集群240相连。第一终端220可以是智能手机、游戏主机、台式计算机、平板电脑、电子书阅读器、MP3播放器、MP4播放器和膝上型便携计算机中的至少一种。第一设备220安装和运行有支持声音消息的应用程序。该应用程序可以是声音社交应用程序、即时通讯应用程序、团队语音应用程序、基于话题或频道或圈子进行人群聚合的社交类应用程序、基于购物的社交类应用程序的任意一种。第一终端220是第一用户使用的终端,第一终端220中运行的应用程序内登录有第一用户帐号。
第一终端220通过无线网络或有线网络与服务器240相连。
服务器集群240包括一台服务器、多台服务器、云计算平台和虚拟化中心中的至少一种。服务器集群240用于为支持声音消息的应用程序提供后台服务。可选地,服务器集群240承担主要计算工作,第一终端220和第二终端260承担次要计算工作;或者,服务器集群240承担次要计算工作,第一终端220和第二终端260承担主要计算工作;或者,服务器集群240、第一终端220和第二终端260三者之间采用分布式计算架构进行协同计算。
可选地,服务器集群240包括:接入服务器242和消息转发服务器244。接入服务器242用于提供第一终端220以及第二终端260的接入服务和信息收发服务,并将消息(声音消息、文字消息、图片消息、视频消息)在终端和消息转发服务器244之间转发。服务器242用于向应用程序提供的后台服务,比如:添加好友服务、文字消息转发服务、声音消息转发服务、图片消息转发服务的至少一种。消息转发服务器244可以是一台或多台。当消息转发服务器244是多台时,存在至少两台消息转发服务器244用于提供不同的服务,和/或存在至少两台消息转发服务器244用于提供相同的服务,比如以负载均衡方式提供同一种服务,本申请实施例对此不加以限定。
第二终端260安装和运行有支持声音消息的应用程序。该应用程序可以是声音社交应用程序、即时通讯应用程序、团队语音应用程序、基于话题或频道或圈子进行人群聚合的社交类应用程序、基于购物的社交类应用程序的任意一种。第二终端260是第二用户使用的终端。第二终端220的应用程序内登录有第二用户帐号。
可选地,第一用户帐号和第二用户帐号处于虚拟社交网络中,该虚拟社交网络向第一用户帐号和第二用户帐号之间提供了声音消息的传播途径。该虚拟社交网络可以是同一社交平台提供的,也可以是存在关联关系(比如授权登录关系)的多个社交平台协同提供的,本申请实施例对虚拟社交网络的具体形式不加以限定。可选地,第一用户帐号和第二用户帐号可以属于同一个队伍、同一个组织、具有好友关系或具有临时性的通讯权限。可选地,第一用户帐号和第二用户帐号也可以是陌生人关系。总之,该虚拟社交网络提供了第一用户帐号和第二用户帐号之间的单向消息传播途径或双向消息传播途径,以便声音消息在不同用户帐号之间的传播。
可选地,第一终端220和第二终端260上安装的应用程序是相同的,或两个终端上安装的应用程序是不同操作系统平台的同一类型应用程序,或两个终端上安装的应用程序是不同 的但支持同一种声音消息。不同操作系统包括:苹果操作系统、安卓操作系统、Linux操作系统、Windows操作系统等等。
第一终端220可以泛指多个终端中的一个,第二终端260可以泛指多个终端中的一个,本实施例仅以第一终端220和第二终端260来举例说明。第一终端220和第二终端260的终端类型相同或不同,该终端类型包括:智能手机、游戏主机、台式计算机、平板电脑、电子书阅读器、MP3播放器、MP4播放器和膝上型便携计算机中的至少一种。以下实施例以第一终端220和/或第二终端240是智能手机来举例说明。
本领域技术人员可以知晓,上述终端的数量可以更多或更少。比如上述终端可以仅为一个,或者上述终端为几十个或几百个,或者更多数量,此时上述计算机系统还包括其他终端280。本申请实施例对终端的数量和设备类型不加以限定。
图3示出了本申请一个示例性实施例提供的终端300的结构框图。该终端300可以是:智能手机、平板电脑、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、笔记本电脑或台式电脑。终端300还可能被称为用户设备、便携式终端、膝上型终端、台式终端等其他名称。该终端300可以是第一终端或第二终端。
通常,终端300包括有:处理器301和存储器302。
处理器301可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器301可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)中的至少一种硬件形式来实现。处理器301也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称CPU(Central Processing Unit,中央处理器);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器301可以在集成有GPU(Graphics Processing Unit,图像处理器),GPU用于负责显示屏所需要显示的内容的渲染和绘制。一些实施例中,处理器301还可以包括AI(Artificial Intelligence,人工智能)处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器302可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器302还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器302中的非暂态的计算机可读存储介质用于存储至少一个指令,该至少一个指令用于被处理器301所执行以实现本申请中各个方法实施例提供的应用程序的声音消息显示方法。
在一些实施例中,终端300还可选包括有:外围设备接口303和至少一个外围设备。处理器301、存储器302和外围设备接口303之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口303相连。具体地,外围设备包括:射频电路304、触摸显示屏305、摄像头306、音频电路307、定位组件308和电源309中的至少一种。其中,外围设备接口303可被用于将I/O(Input/Output,输入/输出)相关的至少一个外 围设备连接到处理器301和存储器302。显示屏305用于显示UI(User Interface,用户界面)。该UI可以包括图形、文本、图标、视频及其它们的任意组合。当显示屏305是触摸显示屏时,显示屏305还具有采集在显示屏305的表面或表面上方的触摸信号的能力。摄像头组件306用于采集图像或视频。音频电路307可以包括麦克风和扬声器。麦克风用于采集用户及环境的声波,并将声波转换为电信号输入至处理器301进行处理,或者输入至射频电路304以实现语音通信。定位组件308用于定位终端300的当前地理位置,以实现导航或LBS(Location Based Service,基于位置的服务)。电源309用于为终端300中的各个组件进行供电。电源309可以是交流电、直流电、一次性电池或可充电电池。
在一些实施例中,终端300还包括有一个或多个传感器310。该一个或多个传感器310包括但不限于:加速度传感器311、陀螺仪传感器312、压力传感器313、指纹传感器314、光学传感器315以及接近传感器316。其中,加速度传感器311可以检测以终端300建立的坐标系的三个坐标轴上的加速度大小。陀螺仪传感器312可以检测终端300的机体方向及转动角度,陀螺仪传感器312可以与加速度传感器311协同采集用户对终端300的3D动作。压力传感器313可以设置在终端300的侧边框或触摸显示屏305的下层。当压力传感器313设置在终端300的侧边框时,可以检测用户对终端300的握持信号,由处理器301根据压力传感器313采集的握持信号进行左右手识别或快捷操作。当压力传感器313设置在触摸显示屏305的下层时,由处理器301根据用户对触摸显示屏305的压力操作,实现对UI界面上的可操作性控件进行控制。指纹传感器314用于采集用户的指纹,由处理器301根据指纹传感器314采集到的指纹识别用户的身份,或者,由指纹传感器314根据采集到的指纹识别用户的身份。光学传感器315用于采集环境光强度。接近传感器316,也称距离传感器,通常设置在终端300的前面板。接近传感器316用于采集用户与终端300的正面之间的距离。
可选地,存储器302中还包括如下程序模块(或指令集),或者其子集或超集:操作系统321;通信模块322;接触/运动模块323;图形模块324;触觉反馈模块325;文本输入模块326;GPS模块327;数字助理客户端模块328;数据用户和模型329;应用程序330:联系人模块330-1、电话模块330-2、视频会议模块330-3、电子邮件模块330-4、即时消息模块330-5、健身支持模块330-6、相机模块330-7、图像管理模块330-8、多媒体播放器模块330-9、记事本模块330-10、地图模块330-11、浏览器模块330-12、日历模块330-13、天气模块330-14、股市模块330-15、计算机模块330-16、闹钟模块330-17、词典模块330-18、搜索模块330-19、在线视频模块330-20、…、用户创建的模块330-21。
在本申请实施例中,存储器302中还包括支持声音消息的应用程序330-22。该应用程序330-22可用于实现下述方法实施例中应用程序的声音消息显示方法。
本领域技术人员可以理解,图3中示出的结构并不构成对终端300的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
本申请还提供一种计算机可读存储介质,所述计算机可读存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集 或指令集由所述处理器加载并执行以实现本申请实施例提供的应用程序中的声音消息显示方法。
图4是本申请一个示例性实施例提供的服务器的结构示意图。该服务器可以实现成为上述服务器集群240中的任意一个服务器。示意性的,服务器400包括中央处理单元(Central Processing Unit,简称:CPU)401、包括随机存取存储器(random access memory,简称:RAM)402和只读存储器(read-only memory,简称:ROM)403的系统存储器404,以及连接系统存储器404和中央处理单元401的系统总线405。所述服务器400还包括帮助计算机内的各个器件之间传输信息的基本输入/输出系统(I/O系统)406,和用于存储操作系统413、客户端414和其他程序模块415的大容量存储设备407。
所述基本输入/输出系统406包括有用于显示信息的显示器408和用于用户输入信息的诸如鼠标、键盘之类的输入设备409。其中所述显示器408和输入设备409都通过连接到系统总线405的输入/输出控制器410连接到中央处理单元401。所述基本输入/输出系统406还可以包括输入/输出控制器410以用于接收和处理来自键盘、鼠标、或电子触控笔等多个其他设备的输入。类似地,输入/输出控制器410还提供输出到显示屏、打印机或其他类型的输出设备。
所述大容量存储设备407通过连接到系统总线405的大容量存储控制器(未示出)连接到中央处理单元401。所述大容量存储设备407及其相关联的计算机可读介质为服务器400提供非易失性存储。也就是说,所述大容量存储设备407可以包括诸如硬盘或者只读光盘(Compact Disc Read-Only Memory,简称:CD-ROM)驱动器之类的计算机可读介质(未示出)。
不失一般性,所述计算机可读介质可以包括计算机存储介质和通信介质。计算机存储介质包括以用于存储诸如计算机可读指令、数据结构、程序模块或其他数据等信息的任何方法或技术实现的易失性和非易失性、可移动和不可移动介质。计算机存储介质包括RAM、ROM、可擦除可编程只读存储器(erasable programmable read-only memory,简称:EPROM)、电可擦除可编程只读存储器(electrically erasable programmable read-only memory,简称:EEPROM)、闪存或其他固态存储其技术,CD-ROM、数字通用光盘(Digital Versatile Disc,简称:DVD)或其他光学存储、磁带盒、磁带、磁盘存储或其他磁性存储设备。当然,本领域技术人员可知所述计算机存储介质不局限于上述几种。上述的系统存储器404和大容量存储设备407可以统称为存储器。
根据本申请的各种实施例,所述服务器400还可以通过诸如因特网等网络连接到网络上的远程计算机运行。也即服务器400可以通过连接在所述系统总线405上的网络接口单元411连接到网络412,或者说,也可以使用网络接口单元411来连接到其他类型的网络或远程计算机系统(未示出)。
图5和图6分别示出了本申请一个示例性实施例提供的应用程序中的声音消息显示方法 的界面示意图和流程图。用户在终端上安装有支持声音消息发送和/或接收能力的应用程序。在该应用程序安装成功后,终端的主页(Home page)上会显示有该应用程序的图标。
在终端的待机状态下,终端的主页50上显示有该应用程序的程序图标51。用户希望使用该应用程序时,用户向该应用程序的程序图标51施加启动操作,该启动操作可以是作用于该应用程序的程序图标51的点击操作。
步骤601,根据启动操作启动该应用程序。
终端调用该应用程序的启动进程,通过启动进程将该应用程序启动为前台运行状态。该应用程序在启动后,于服务器进行用户帐号登录。
用户帐号用于唯一标识每个用户,该应用程序中登录有第一用户帐号,其它终端中的应用程序中登录有其它用户帐号,比如第二用户帐号、第三用户帐号。每个用户在各自的应用程序中发布(或上传或发送)生成的声音消息。
步骤602,获取至少一个用户帐号发布的n条声音消息。
应用程序从服务器获取至少一个用户帐号发布的n条声音消息,该至少一个用户帐号通常均为除第一用户帐号之外的其它用户帐号,但也不排除该n条声音消息中包括第一用户帐号自身所上传的声音消息的实现方式。可选地,n为至少一条,但在大部分实施例中,n为大于1的正整数。
声音消息是指以语音信号进行信息传递的消息。在一些实施例中,声音消息只包括语音信号;在另一些实施例中,声音消息是指主要以语音信号进行信息传递的消息,比如声音消息包括语音信号和辅助说明型文字;又比如声音消息包括语音信号和辅助说明型图片;再比如声音消息包括语音信号、辅助说明型文字和辅助说明型图片。在本实施例中,以声音消息只包括语音信号来举例说明。
应用程序在获取声音消息时,可以先获取声音消息的消息标识,并在需要播放时再根据消息标识从服务器获取该声音消息的消息内容。可选地,应用程序也可以直接获取声音消息的消息内容并缓存在本地。
步骤603,显示应用程序的声音消息展示界面,声音消息展示界面上显示有位于虚拟世界中的声音消息,声音消息以虚拟世界中的可视元素作为载体进行显示。
应用程序中提供有多种用户界面,声音消息展示界面是多种用户界面中的一个。在一些可能的实现方式中,应用程序在启动后默认显示声音消息展示界面,如图5所示,在应用程序的程序图标51被点击后,默认开始显示声音消息展示界面54;在另一些可能的实现方式中,应用程序在启动后默认显示主页面,该主页面上提供有多种功能选项,用户需要在多种功能选项中进行选择,从而控制应用程序显示声音消息展示界面54。比如在图8所示意的例子中,应用程序在启动后默认显示主页面,该主页面上显示有扩列功能项52,扩列是扩充好友队列的简称。当用户点击扩列功能项52后,应用程序跳转至扩列功能界面,该扩列功能界面上包括扩列、扩列群、妙音森林的三个标签页,初始状态下默认先显示名称为扩列的标签页。当用户点击名称为妙音森林的标签页后,跳转显示声音消息展示界面54,本实施例对声 音消息展示界面54在应用程序中的显示层级和跳转路径不加以限定。
声音消息展示界面是用于显示至少一个用户帐号所发布的声音消息的用户界面。根据用户帐号的分类不同,在一些实施例中,声音消息展示界面是用于显示陌生用户帐号所发布的声音消息的用户界面;在另一些实施例中,声音消息展示界面是用于显示同一个话题(或频道或圈子或主题)中的各个用户帐号所发布的声音消息的用户界面;在另一些实施例中,声音消息展示界面是用于显示属于同一地区(比如当前所在城市或学校)的各个用户帐号所发布的声音消息的用户界面。
该声音消息展示界面上显示有位于虚拟世界中的声音消息,该声音消息以虚拟世界中的可视元素作为载体进行显示。该虚拟世界可以是二维世界、2.5维度或三维世界。
可视元素是虚拟世界中可被观察到的任意物体或物质,可视元素包括但不限于:云、雾、雷电、流体、静态物体、植物、动物、虚拟形象、卡通形象中的至少一种。
可选地,将声音消息以虚拟世界中的可视元素作为载体进行显示时,应用程序可以将声音消息作为虚拟世界中的可视元素来显示,也可以将声音消息关联或挂载在虚拟世界中的可视元素的周侧进行显示,本实施例对此不加以限定。
在一个可能的实施例中,以可视元素是虚拟世界中的虚拟角色为例,本步骤可以包括如下步骤,如图6B所示:
步骤603a,获取n条声音消息各自在虚拟世界中所对应的虚拟角色。
步骤603b,生成虚拟世界的场景画面,场景画面中显示有虚拟角色和虚拟角色对应的声音消息。
步骤603c,根据虚拟世界的场景画面显示应用程序的声音消息展示界面。
针对步骤603a,应用程序需要获取n条声音消息各自对应的虚拟角色,每条声音消息分别对应一个虚拟角色。虚拟角色是虚拟宇宙中可被观察到的个体元素。也即,虚拟角色是在可视角度能够按照个体进行明确区分的元素。虚拟角色可以是虚拟世界中具有生命力的角色。在一些实施例中,虚拟角色是以卡通形式、植物形式和/或动物形式的角色模型进行显示的角色。示意性的,虚拟角色是各种花卉类角色、哺乳动物角色、鸟类角色、爬行动物角色、两栖动物角色、鱼类角色、恐龙类角色、动漫角色以及其他虚构角色中的至少一种。
可选地,存在至少两条声音消息所对应的虚拟角色是相同或不同的。虚拟角色是属于虚拟世界中的角色。根据世界属性划分,该虚拟世界可以是二维虚拟世界、2.5维虚拟世界、三维虚拟世界中的至少一种。根据世界类型划分,该虚拟世界可以是虚拟森林世界、虚拟海洋世界、虚拟水族馆世界、虚拟太空世界、虚拟动漫世界、虚拟奇幻世界中的任意一种。
该虚拟角色可以以素材库的形式存储在本地中;也可以是服务器向应用程序提供的,比如服务器通过网页文件向应用程序提供n条声音消息的虚拟角色。换句话说,除了应用程序自行通过本地素材库来确定每条声音消息所对应的虚拟角色之外,也可以由服务器确定每条声音消息所对应的虚拟角色之后,将每条声音消息所对应的虚拟角色发送给应用程序。可选地,应用程序同时接收服务器发送的n条声音消息和每条声音消息对应的虚拟角色。
针对步骤603c,示意性的参考图5或图8,该声音消息展示界面54上显示有虚拟世界“妙音森林”的场景画面,该虚拟世界“妙音森林”是二维世界,该虚拟世界“妙音森林”中存在蓝天白云背景和位于蓝天白云背景上的树木55,该树木55上站立着四个小鸟,四个小鸟分别对应:用户A发布的声音消息56、用户B发布的声音消息57、用户C发布的声音消息58和用户D发布的声音消息59。可选地,四个小鸟具有不同的小鸟角色形象,用户能够从具有四个不同角色形象的小鸟身上快速地区分不同声音消息。此处的“不同声音消息”是指不属于同一条的声音消息,可以是不同用户发送的不同条声音消息,也可以是同一用户发送的不同条声音消息。
可选地,上述树木55包括上下叠加的三个三角形,从上往下的第二个三角形和第三个三角形的形状和大小是相同的,位于顶端的第一个三角形略小于其它两个三角形。第一个小鸟56和对应的声音消息显示在第一个三角形的右侧上,第二个小鸟57和第三个小鸟58对应的声音消息叠加显示在第二个三角形的左侧上,第四个小鸟对应的声音消息叠加显示在第三个三角形的右侧上。
可选地,每条声音消息采用一个消息框表示,该消息框可以是矩形、圆角矩形、气泡型、云朵型等任意框型,消息框的长度与声音消息的消息长度呈正比例关系。消息框上还可采用文字来显示该声音消息的消息长度,比如数字“4”代表该消息长度为4秒,数字“8”代表该消息长度为8秒。可选地,该消息框与虚拟角色相邻显示,当虚拟角色位于三角形的左侧时,消息框位于虚拟角色的右侧;当虚拟角色位于三角形的右侧时,消息框位于虚拟角色的左侧。当该声音消息和/或相对应的虚拟角色上接收到触发操作时,终端会播放相对应的消息内容。
在一些实施例中,终端采用多层叠加渲染的方式来显示声音消息展示界面。以虚拟世界是虚拟森林世界为例,终端分别渲染环境氛围层、背景视觉层和声音元素层,将环境氛围层、背景视觉层和声音元素层叠加,作为声音消息展示界面进行显示。环境氛围层包括虚拟森林世界中的天空和地面,背景视觉层包括虚拟森林世界中的树木,声音元素层包括虚拟角色(比如小鸟)和虚拟角色对应的声音消息;在如图9所示的示意性例子中,终端分别渲染环境氛围层181、背景视觉层182和声音元素层183。其中,背景视觉层182位于环境氛围层181的上方,声音元素层183位于背景视觉层182的上方。
可选地,环境氛围层181包括虚拟森林世界中的天空,该天空包括蓝天背景加若干个白云图案;该环境氛围层181还包括虚拟森林世界中的地面,该地面包括草地。
可选地,背景视觉层182中的树木包括多个上下叠加的三角形,每个三角形用于显示一个虚拟角色和与该虚拟角色对应的声音消息,因此三角形的数量与声音消息的数量相等,或者说,三角形的数量与小鸟的数量相等。
可选地,声音元素层183中虚拟角色的数量,与终端获取到的声音消息数量相同,而虚拟角色的显示参数与相对应的声音消息的消息属性相关。
示意性的,当声音消息展示界面中的声音消息的条数发生变化或刷新时,终端可保持环境氛围层不变,而按照变化后的声音消息的条数将背景视觉层和声音元素层进行重新绘制, 然后再将环境氛围层、背景视觉层和声音元素层进行叠加后,生成新的声音消息展示界面进行刷新显示。
步骤604,接收在声音消息展示界面中的虚拟角色和声音消息中至少一个的触发操作,该触发操作包括点击操作和滑动操作中的至少一种。
在声音消息展示界面中,除了声音消息本身可点击播放之外,虚拟角色还用于被触发时播放相对应的消息内容。同一个声音消息展示界面上可显示很多个虚拟角色,目标虚拟角色是多个虚拟角色中的一个,用户通过触发目标虚拟角色的方式,来播放与目标虚拟角色对应的声音消息。
在一些实施例中,用户可以对虚拟角色进行点击,当虚拟角色被点击时,触发播放与该虚拟角色对应的声音消息;在另一些实施例中,用户可以对虚拟角色进行滑动,当虚拟角色被滑动时,触发播放与该虚拟角色对应的声音消息。
步骤605,根据触发信号播放虚拟角色相对应的声音消息。
终端根据触发信号播放声音消息。用户可以在播放过程中,再次触发虚拟角色以暂停播放该声音消息,或者,重新播放该声音消息。
上述步骤604和步骤605为可选步骤,视用户在声音消息展示界面上施加的操作而定。
综上所述,本实施例提供的方法,通过显示应用程序的声音消息展示界面,声音消息展示界面上显示有位于虚拟世界中的声音消息,该声音消息以虚拟世界中的可视元素作为载体进行显示,由于虚拟世界中的可视元素可以是各不相同的,该可视元素的类型也可以是丰富多样的,所以本技术方案能够解决相关技术中各条声音消息的界面展示效果是基本相同的,用户难以准确分辨各条声音消息的问题。
在一些可能的实施例中,针对上述步骤603a,应用程序在确定n条声音消息各自在虚拟世界中所对应的虚拟角色时,可以根据声音消息的消息属性,确定与声音消息对应的虚拟角色。
在一个实施例中,消息属性包括:声音消息的消息内容长度。消息内容长度是指声音消息的消息内容在正常播放时所需要的时长。终端根据声音消息的消息内容长度,确定与该声音消息对应的虚拟角色的角色参数。该角色参数包括:角色模型的种类、动画样式、动画频率、角色特效中的至少一种。角色模型的种类是指不同外在形式的角色模型,比如麻雀、孔雀等;动画样式是指同一角色模型所做出的不同动画,比如抬头、抖腿、展翅、摇尾等;动画频率是指角色特效是指同一角色模型在显示时的不同特效,比如不同的羽毛颜色、不同的外形等。
在如图10示出的示意性例子中,应用程序中预先设置有多个不同的消息内容长度区间,每个消息内容长度区间对应各自不同的角色模型。例如,设置有5个不同的消息内容长度区间:区间A“1-2秒”、区间B“2-5秒”、区间C“5-10秒”、区间D“10-30秒”、区间E“30秒以上”。其中,区间A对应小鸟A、区间B对应小鸟B、区间C对应小鸟C、区间D对应小鸟D、区间E对应小鸟E。当一条声音消息的消息内容长度为20秒时,终端会确定该声音消息 对应的虚拟角色为小鸟D;当一条声音消息的消息内容长度为35秒时,终端会确定该声音消息对应的虚拟角色为小鸟E。
在另一个实施例中,消息属性包括:声音消息的已发布时长。已发布时长是从声音消息的上传时刻开始计时时长。终端根据声音消息的已发布时长,确定与该声音消息对应的虚拟角色的角色参数。
在如图11示出的示意性例子中,预先设置有多个不同的已发布时长区间,每个已发布时长区间对应各自不同的角色模型。例如,设置有6个不同的已发布时长区间:区间1“≤1分钟”、区间2“1分钟-30分钟”、区间3“30分钟-1小时”、区间4“1小时-4小时”、区间5“4小时-24小时”和区间6“大于24小时”,区间1对应小鸟1、区间2对应小鸟2、区间3对应小鸟3、区间4对应小鸟4、区间5对应小鸟5、区间6对应小鸟6。当一条声音消息的已发布时长为10分钟时,终端会确定该声音消息对应的虚拟角色为小鸟2;当一条声音消息的已发布时长为4小时11分时,终端会确定该声音消息对应的虚拟角色为小鸟5。
在另一个实施例中,消息属性包括:声音消息在显示后处于未播放状态的未播时长。未播时长是指声音消息从开始显示时刻距离当前时刻的时长,开始显示时刻是声音消息在声音消息展示界面上开始显示的时刻。终端根据声音消息的未播时长,确定与该声音消息对应的虚拟角色的角色参数。例如,预先设置有多个不同的未播时长区间,每个未播时长区间对应各自不同的角色模型。
可选地,存在至少两个不同的消息属性对应的角色参数是不同的。上述两个例子中均以角色参数是角色模型的类型来举例说明,在不同的实施例中,还可以根据声音消息的消息属性来确定虚拟角色的动画样式和/或角色特效,比如,终端根据声音消息的已发布时长来确定虚拟角色的角色特征,若声音消息的已发布时长越长,则虚拟角色周围的光晕会越大和越亮。
在其它实施例中,消息属性可以包括:声音消息的性别属性(比如男声或女声)、声音消息的年龄属性(比如18岁以下、18岁至25岁、25岁至30岁、30岁至35岁、35岁以上)、声音消息的类别分类(比如娃娃音、萝莉音、大叔音、磁性男声),本实施例对消息属性所包括的属性不加以限定。
需要说明的一点是,不同实施例中对时长区间的划分可以是不同的,比如将已发布时长按照“1~5秒”、“6~10秒”、“10~30秒”、“30-60秒”划分为四个不同的时长区间;还需要说明的另一点是,同一个时长区间所对应的虚拟角色可以是两个或两个以上,比如每个区间对应三种不同的小鸟的角色模型,如图12所示。此时,对属于某一个时长区间对应的声音消息,可以从该时长区间所对应的多种角色模型中随机选择出一个角色模型,作为该声音消息所对应的角色模型。
综上所述,本实施例提供的方法,通过根据声音消息的消息属性确定与声音消息对应的虚拟角色,能够使得用户在观察到不同的虚拟角色时,同时了解到与该虚拟角色对应的声音消息的消息属性,该消息属性可以是消息内容长度、已发布时长、未播时长中的任意一种。从而在尽量减少用户阅读文字信息的前提下,以图形化的元素向用户传递声音消息的相关信 息。
在一些可能的实施例中,用户在声音消息界面54上播放声音消息的消息内容后,还可以与发布声音消息的用户添加好友关系,或者,收藏该声音消息至收藏列表。
结合参考图13或图14,当用户希望播放声音消息展示界面54中的用户B发布的声音消息57时,用户对虚拟角色或声音消息57施加触发操作,该触发操作可以是点击操作或滑动操作。终端根据触发操作显示声音消息57的播放窗口60,该播放窗口60上显示有与虚拟角色对应的添加好友控件61和收藏控件62。
针对添加好友过程:应用程序显示与虚拟角色对应的添加好友控件,用户可以在希望添加感兴趣的消息发布者时对添加好友控件施加第一操作。应用程序接收作用于添加好友控件的第一操作,当添加好友控件采用添加好友按钮来表示时,该第一操作可以是作用于添加好友按钮的点击操作,应用程序根据第一操作与虚拟角色对应的用户帐号建立好友关系。在图13所示的示意性例子中,该添加好友控件61可以是“加好友”按钮61。该“加好友”按钮61用于将当前用户与用户B建立好友关系。当该“加好友”按钮61被点击时,终端可以跳转至添加好友页面63,由用户在添加好友页面63上添加好友。可选地,该添加好友页面63上显示有“确认按钮632”和“取消按钮”631。需要说明的是,该添加好友控件61还可以采用其它显示方式,比如以微型下拉菜单的形式显示在声音消息展示界面54上的虚拟角色附近,本实施例对此不加以限定。
应用程序根据第一操作与服务器进行交互,通过服务器与虚拟角色对应的用户帐号建立好友关系。在一些实施例中,在建立好友关系时还需要用户输入验证信息,需要与虚拟角色对应的用户帐号对该验证信息进行验证通过后,建立两者之间的好友关系。
针对收藏声音消息过程:应用程序显示与虚拟角色对应的收藏控件,用户可以在希望收藏感兴趣的声音消息时对收藏控件施加第二操作,应用程序接收作用于收藏控件的第二操作,该第二操作可以是作用于收藏控件的点击操作。应用程序根据第二操作收藏虚拟角色相对应的声音消息。在如图14所示的示意性例子中,收藏按钮62采用心形按钮来实现。该收藏按钮62用于收藏该虚拟角色对应的声音消息。当该收藏按钮62被点击时,终端可以跳转至“收藏列表”页面64,该“收藏列表”页面64上显示有用户已经收藏的各条声音消息,用户可以在该页面上对已收藏的各条声音消息进行重复收听。
可选地,与未收藏的声音消息相比,终端对已收藏的声音消息所对应的虚拟角色采用区别显示方式,该区别显示方式包括:改变颜色、增加配饰、增加动画特效中的至少一种。比如,对已收藏的声音消息所对应的小鸟上显示帽子或者心型标记。
综上所述,本实施例提供的方法,通过作用在虚拟角色上的触发信号来触发播放与该虚拟角色对应的声音消息,使得用户能够直接以虚拟角色作为人机交互角色,在播放声音消息时提高对虚拟世界的代入感,从而提高人机交互时的人机交互效率。
本实施例提供的方法,还通过在声音消息展示界面上显示有陌生用户帐号的声音消息时,在该声音消息被播放时显示添加好友控件,能够使得用户在收听到感兴趣的声音消息时,直 接与该陌生用户帐号建立好友关系,增加社交应用程序中感兴趣的用户之间的好友添加率。
本实施例提供的方法,还通过在声音消息展示界面上显示有收藏控件,使得用户在收听到感兴趣的声音消息时,对该声音消息添加至收藏列表中,能够在后续时间中对该声音消息进行反复收听。
声音消息展示界面54上显示的声音消息可能未被用户及时点击播放。在一些可能的实施例中,为了利于用户区别出已显示但未播放的声音消息(简称未播声音消息)和已播放声音消息。在上述步骤605之后还包括如下步骤606至609,如图16所示:
步骤606,当声音消息展示界面上存在第一声音消息的未播时长达到未播阈值时,控制第一声音消息所对应的第一虚拟角色执行预设提醒动作。
该预设提醒动作包括但不限于:晃动整个身体、晃动头部、晃动四肢、发出鸣叫、啄羽毛中的至少一种。可选地,晃动整个身体是指以角色模型为中心点,左右晃动角色模型和/或上下晃动角色模型。
以虚拟角色为小鸟为例,如图16所示,当声音消息展示界面上存在小鸟57和小鸟58对应的声音消息的未播时长达到10秒时,终端控制小鸟57和小鸟58会晃动身体,以向用户提示该小鸟57和小鸟58所对应的声音消息是未播声音消息。
可选地,该机制可以周期性执行,比如未播阈值是10秒钟。若该第一虚拟角色对应的声音消息一直处于未被播放状态,则终端会控制第一声音消息所对应的第一虚拟角色每隔10秒晃动一次身体。
步骤607,当声音消息展示界面上存在第一声音消息和第二声音消息的未播时长达到未播阈值时,控制第一虚拟角色和第二虚拟角色互相交换位置,第一虚拟角色是第一声音消息对应的虚拟角色,第二虚拟角色是第二声音消息对应的虚拟角色。
可选地,第一声音消息和第二声音消息是声音消息展示界面上所显示的任意两个虚拟角色。当未播时长达到未播阈值的声音消息是多条时,第一声音消息和第二声音消息可以是声音消息展示界面上位置最接近的两条声音消息。
参考图17,声音消息展示界面40上显示有三条声音消息,假设位于树顶的声音消息是已播声音消息,位于树木的中下部的第一声音消息51和第二声音消息52是未播声音消息,则在未播声音消息的未播时长达到未播阈值时,控制第一声音消息51对应的小鸟和第二声音消息52对应的小鸟互相交换位置。
步骤608,当声音消息展示界面上存在第三声音消息从未播状态变为已播状态时,在声音消息展示界面上采用处于未播状态的第四声音消息替换第三声音消息,采用第四虚拟角色替换第三虚拟角色,第四虚拟角色是第四声音消息对应的虚拟角色,第三虚拟角色是第三声音消息对应的虚拟角色。
可选地,第四声音消息是新获取的声音消息,第四虚拟角色是根据第四声音消息所确定出的虚拟角色。第四声音消息是尚未显示过的声音消息。
步骤609,将第三声音消息和第三虚拟角色移出声音消息展示界面,或者移动至声音消 息展示界面中的指定区域。
在一个可能的实施例中,终端采用第四虚拟角色替换第三虚拟角色时,终端会将第三虚拟角色移出声音消息展示界面,比如,第三虚拟角色是一只小鸟,则该小鸟会飞出该声音消息展示界面。
在另一个可能的实施例中,终端采用第四虚拟角色替换第三虚拟角色时,终端会将第三虚拟角色移动至声音消息展示界面中的指定区域,比如,第三虚拟角色是一只小鸟,则该小鸟会飞到虚拟森林世界的树木下方的草坪上。
需要说明的是,步骤606可以单独实现成为一个实施例,步骤607可以单独实现成为一个实施例,步骤608和步骤609可以单独实现成为一个实施例。
综上所述,本实施例提供的方法,通过在第三声音消息从未播状态变为已播状态时,利用处于未播状态的第四声音消息替换第三声音消息,使得声音消息展示界面上会自动增加显示尚未播放的声音消息,便于用户清楚地分辨已播声音消息和未播声音消息,更便捷地播放未播声音消息,从而提高用户和终端之间的人机交互效率。
综上所述,本实施例提供的方法,通过第一声音消息的未播时长未达到未播阈值时,控制第一声音消息所对应的第一虚拟角色执行预设提醒动作,能够使得用户在声音消息展示界面上区分出已播声音消息和未播声音消息,从而优先控制对未播声音消息进行播放。
综上所述,本实施例提供的方法,通过第一声音消息和第二声音消息的未播时长未达到未播阈值时,控制第一虚拟角色和第二虚拟角色在声音消息展示界面上互相交换位置,能够使得用户在声音消息展示界面上区分出已播声音消息和未播声音消息,从而优先控制对未播声音消息进行播放。
声音消息展示界面54上显示的声音消息有限,比如4条、6条或8条等。在一些可能的实施例中,用户存在对声音消息展示界面54进行刷新显示的需要。在上述步骤603之后还包括如下步骤,如图18所示:
步骤701,接收在声音消息展示界面上的刷新操作。
该刷新操作包括但不限于:下拉操作和/或上拉操作,比如终端采用触摸屏,用户在触摸屏中下拉声音消息展示界面,或者,在触摸屏中上拉声音消息展示界面。
步骤702,根据刷新操作获取m条其它声音消息。
终端根据该刷新操作向服务器获取m条其它声音消息,m为正整数。可选地,其它声音消息是在上次获取声音消息至当前时刻之间,由至少一个用户帐号新发布的声音消息。或者,其它声音消息是由服务器按照筛选条件从声音消息库中筛选出的声音消息。
步骤703,在声音消息展示界面上同时显示n条声音消息中的未播声音消息以及未播声音消息对应的虚拟角色,m条其它声音消息以及其它声音消息对应的虚拟角色。
当用户进行刷新操作时,表示用户希望查看新的声音消息。终端会在声音消息展示界面上同时显示n条声音消息中的未播声音消息和m条其它声音消息。也即,终端会优先显示尚未播放的声音消息。
以虚拟世界是虚拟森林世界,虚拟角色是位于虚拟森林世界中的树木上的小鸟为例,步骤703存在至少两种不同的实现方式:
第一,n条声音消息均为未播声音消息。
在声音消息展示界面上显示虚拟森林世界的场景画面,该场景画面中包括位于虚拟森林世界中的树木,树木的上部显示有m条其它声音消息,树木的中下部显示有n条声音消息中的未播声音消息以及未播声音消息对应的虚拟角色。
可选地,终端根据n条声音消息中的未播声音消息的第一消息数量和m条其它声音消息的第二消息数量确定树木的高度。第一消息数量和第二消息数量两者的和与树木的高度呈正相关关系。
参考图19,虚拟森林世界中的树木是由多个三角形叠加形成的,其中位于树中部的各个三角形的大小是相同的,位于树顶的三角形的大小是相对于树中的三角形的大小的70%。在一个可能的实施例中,每个三角形上用于显示一个小鸟以及该小鸟对应的声音消息,且上下相邻的三角形上的虚拟角色是左右交错的。换句话说,当需要显示k条声音消息时,则树木上需要k个三角形。由于在本步骤中新获取了m个其它声音消息,且刷新前的n条声音消息均为未播声音消息,因此需要增加m个三角形。
可选地,在需要增加一个三角形时,终端以树木中位于最下方的一个三角形x为单位进行复制,并将该复制得到的三角形向下平移2/3高度。
由于m条其它声音消息是新获取到的声音消息,终端可以将m条其它声音消息显示在树木的上部,而刷新前获取到的n条声音消息显示在树木的中下部。
第二,n条声音消息中的一部分声音消息为未播声音消息,另一部分声音消息为已播声音消息。
当用户已经对n条声音消息中的一部分声音消息进行收听后,若用户进行了刷新操作,则终端在显示m条其它声音消息和n条声音消息中的未播声音消息时,终端可以取消显示n条声音消息中的已播声音消息,以及取消显示已播声音消息对应的虚拟角色。
在一些实施例中,由于用户对已播声音消息存在重复收听的需求,因此终端在显示m条其它声音消息和n条声音消息中的未播声音消息时,终端可以在树木下方的草坪上显示n条声音消息中的已播声音消息所对应的虚拟角色。
参考图20,假设用户已经收听过小鸟3和小鸟4所对应的声音消息,在刷新获取到新的2只小鸟时,将新的2只小鸟显示在树木的顶部,将未播的小鸟1、小鸟2、小鸟5、小鸟6和小鸟7下移显示在树木的中下部,并将已播放的小鸟3和小鸟4显示在树木下方的草坪上。
综上所述,本实施例提供的方法,通过在接收到刷新操作时,根据刷新操作获取m条其它声音消息,在声音消息展示界面上同时显示n条声音消息中的未播声音消息和m条其它声音消息,使得声音消息展示界面上在刷新过程中会优先显示尚未播放的声音消息,便于用户清楚地分辨已播声音消息和未播声音消息,用户在刷新操作后会在树木上直观地收听未播声音消息,从而提高用户和终端之间的人机交互效率。
上述各个实施例中,以虚拟森林世界中的树木采用多个三角形叠加的树木来举例说明。在一些实施例中,虚拟森林世界中的树木还可以是曲边多枝杈的树木,如图21所示;在另一些实施例中,如图22所示,虚拟森林世界中的树木还可以是多根,每根树木代表不同的话题(或圈子或主题),比如话题A、话题B、话题C和话题D,用户还可以上滑或者下滑查看其它的话题E和话题F,用户可以选定一根与“话题E”对应的树木后,查看位于该树木上的小鸟以及声音消息,该树木上的各个声音消息属于同一个话题E。在另一些实施例中,除了如图22所示的沿走廊类型的树木排列方式外,还可以采用三维环绕式的树木排列方式,如图23所示,虚拟森林世界中的各个树木按照环形排列显示,每个树木对应一个话题,本申请实施例对虚拟森林世界中的树木排列方式不加以限定。
上述各个实施例中,均以虚拟世界是虚拟森林世界,虚拟角色是虚拟小鸟为例来举例说明。本实施例对虚拟世界的具体形式不加以限定,在其它实施例中,虚拟世界还可以是虚拟海洋世界,虚拟角色可以是虚拟小鱼,如图24所示。
在一些可能的实施例中,见图25所示,以应用程序是即时通讯程序、声音消息是该应用程序中的一个子功能、虚拟世界是虚拟森林世界、虚拟角色是小鸟为例,该方法包括:
步骤801,终端显示声音消息展示界面。
声音消息展示界面是即时通讯程序提供的一个界面。用户在即时通讯程序上触发用户操作,即时通讯程序根据用户操作来显示声音消息展示界面。
参考图8所示,即时通讯程序在功能列表中提供有“扩列”功能52,“扩列”是指扩大自己的好友列表。当用户点击“扩列”功能52后,进入扩列功能页面,该扩列功能页面包括扩列、扩列群和妙音森林三个标签页。当用户点击“妙音森林”标签53后,即时通讯程序显示声音消息展示界面“妙音森林”。在“妙音森林”界面54上显示有蓝天白云背景、树木和位于树木上的四个小鸟。
可选地,当即时通讯程序需要显示“妙音森林”界面54时,即时通讯程序从服务器获取n条声音消息。当即时通讯程序本地存储有各个虚拟角色的素材时,即时通讯程序根据每条声音消息的消息属性,确定每条声音消息在虚拟世界中所对应的虚拟角色;当即时通讯程序本地未存储有各个虚拟角色的素材时,即时通讯程序从服务器获取每条声音消息在虚拟世界中所对应的虚拟角色。示意性的,当“妙音森林”界面54采用网页形式实现时,服务器将n条声音消息、n条声音消息各自对应的虚拟角色以及虚拟世界的显示素材一同发送给即时通讯程序,由即时通讯程序根据这些数据显示声音消息展示界面。
可选地,服务器向终端发送的n条声音消息是按照预设条件进行筛选的,该预设条件包括但不限于如下条件中的至少一种:1、与当前用户帐号属于陌生用户帐号的声音消息;2、与当前用户帐号属于不同的性别;3、与当前用户帐号具有相同或相似的用户画像;4、主动添加好友次数多;5、被加好友次数多。当预设条件存在两条或多条时,可以按照每种条件对应的权值对声音消息进行打分,按照打分高低来挑选出n条声音消息推送给即时通讯程序,本实施例对服务器向终端提供n条声音消息的逻辑不加以限定。
步骤802,终端浏览、点击自己感兴趣的声音。
当声音消息较多且树木较高时,“妙音森林”界面73上优先显示该树木的顶端部分,用户可通过向上滑动的方式来查看该树木的中下部,以及位于树木下方的草坪。
当“妙音森林”界面73上存在用户感兴趣的声音消息时,用户点击该声音消息,或者点击该声音消息对应的虚拟角色,则即时通讯程序播放该声音消息。
步骤803,终端接收用户的下拉刷新操作。
用户还可以在即时通讯程序上进行下拉刷新,下拉刷新是采用向下滑动信号来触发刷新的一种刷新操作。本实施例并不限定刷新操作的具体形式,仅以下拉刷新来举例说明。
即时通讯程序根据刷新操作向服务器获取m条其它声音消息。
步骤804,终端向服务器发送指令。
即时通讯程序向服务器发送刷新指令。
步骤805,服务器计算刷新后的未播声音消息的条数。
服务器为终端再挑选出m条其它声音消息。然后,服务器根据刷新前的n条声音消息中的未播声音消息和m条其它声音消息之和,确定刷新后的未播声音消息的条数。
步骤806,若条数多于刷新前的条数,则服务器对背景树木中的三角形进行复制。
可选地,虚拟森林世界中的树木由多个三角形叠加形成,每个三角形上显示有一条或两条声音消息,以及该声音消息对应的虚拟角色。若刷新后的声音消息条数多于刷新前的消息条数,则服务器对背景树木中的三角形进行复制,以增加该背景树木的高度,该过程可以参考上述图19和图20所示。
步骤807,若条数少于刷新前的条数,则服务器对背景树木中的三角形进行减少。
若刷新后的声音消息条数少于刷新前的消息条数,则服务器对背景树木中的三角形进行减少,以降低该背景树木的高度。
步骤808,服务器向终端传输数据。
可选地,服务器向终端发送m条其它声音消息;服务器向终端发送m条其它声音消息和刷新前的n条声音消息中的未播声音消息;或者,服务器向终端发送m条其它声音消息、刷新前的n条声音消息中的未播声音消息、刷新前的n条声音消息中的已播声音消息。
可选地,服务器向终端发送m条其它声音消息和其它声音消息对应的虚拟角色;或者,服务器向终端发送m条其它声音消息和其它声音消息对应的虚拟角色、刷新前的n条声音消息中的未播声音消息和未播声音消息对应的虚拟角色、刷新前的n条声音消息中的已播声音消息和已播声音消息对应的虚拟角色;或者,服务器向终端发送m条其它声音消息和其它声音消息对应的虚拟角色、刷新前的n条声音消息中的未播声音消息和未播声音消息对应的虚拟角色、刷新前的n条声音消息中的已播声音消息和已播声音消息对应的虚拟角色。可选地,服务器按照“新的声音消息→刷新前的未播声音消息→刷新后的已播声音消息”的顺序发送给终端。
可选地,服务器还将增加或减少后的树木发送给终端。
步骤809,终端重绘背景视觉渲染层,调整声音消息的位置。
终端根据增加或减少后的树木重绘背景视觉渲染层,根据服务器按序发送的声音消息调整声音元素层中各个声音消息的位置。
然后,终端将环境氛围层、重新渲染的背景视觉渲染层、重新渲染的声音元素层进行叠加,得到刷新后的声音消息展示界面。
Step 810: The user browses the refreshed screen on the terminal and taps sounds of interest.
The "妙音森林" screen 73 displays the m other sound messages at the top of the tree, the unplayed sound messages among the n sound messages before the refresh on the middle and lower parts of the tree, and the played sound messages among the n sound messages before the refresh on the lawn below the tree. The user can view the middle and lower parts of the tree, as well as the lawn below the tree, by sliding upward. When there are many sound messages and the tree is tall, the "妙音森林" screen 73 preferentially displays the top part of the tree, and then displays the middle and lower parts of the tree and the lawn below the tree when the user slides upward.
When there is a sound message of interest to the user on the "妙音森林" screen 73, the user taps that sound message, or taps the virtual character corresponding to that sound message, and the instant messaging program plays that sound message.
Step 811: The terminal detects that the sound message display screen has been shown for more than 3 seconds.
The instant messaging program times how long the sound message display screen has been displayed. If the display duration of the sound message display screen exceeds 3 seconds, step 812 is performed.
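As a purely illustrative sketch of the 3-second dwell check (assuming a browser-style timer API; notifyServerOfDwell is a hypothetical callback that sends the stay-trigger instruction of step 812):

```typescript
// Start the dwell timer when the screen becomes visible; cancel it if the user
// leaves the screen before 3 seconds have elapsed.
function watchDwell(notifyServerOfDwell: () => void): () => void {
  const timer = setTimeout(notifyServerOfDwell, 3000); // 3 s dwell threshold
  return () => clearTimeout(timer);
}
```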
Step 812: The terminal sends an instruction to the server.
The terminal sends a stay-trigger instruction to the server, and the stay-trigger instruction is used to trigger the server to identify the unplayed sound messages on the sound message display screen.
Step 813: The server identifies the unplayed sound messages on the current screen.
The server keeps a playback record for each sound message, and identifies the unplayed sound messages on the sound message display screen according to the playback records.
Step 814: The server generates a shake centred on the bird image.
For the birds corresponding to the unplayed sound messages, the server generates a shake instruction and/or shake animation material for shaking centred on the bird image.
Step 815: The server transmits data to the terminal.
The server sends to the terminal the shake instruction and/or the shake animation material for shaking the unplayed sound messages.
Step 816: The terminal redraws the animation of the sound element layer.
When the terminal receives a shake instruction sent by the server for a certain unplayed sound message, the terminal redraws, according to locally stored shake animation material, the virtual character corresponding to that unplayed sound message in the sound element layer.
When the terminal receives shake animation material sent by the server for a certain unplayed sound message, the terminal redraws, according to that shake animation material, the virtual character corresponding to that unplayed sound message in the sound element layer.
For the display process in which the terminal shakes the role model of a bird, reference may be made to FIG. 16.
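An editorial sketch of the client-side redraw in steps 813–816 is given below; the BirdSprite interface, the "shake" animation name, and the message-to-sprite map are illustrative assumptions.

```typescript
// When the server flags messages as unplayed, play a shake animation on the
// corresponding bird sprites in the sound element layer.
interface BirdSprite { x: number; y: number; play(animation: string): void; }

function applyShake(
  unplayedIds: string[],                  // ids identified by the server
  birdsByMessageId: Map<string, BirdSprite>,
): void {
  for (const id of unplayedIds) {
    const bird = birdsByMessageId.get(id);
    // Only locally known birds are redrawn; missing material would be fetched instead.
    bird?.play("shake");
  }
}
```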
In the foregoing embodiment, the server executing part of the computation logic is used as an example for description. In a different software architecture, however, the computation logic executed by the server above may also be executed by the terminal, which is not limited in the embodiments of this application.
FIG. 6, FIG. 7, FIG. 15, FIG. 18, and FIG. 25 are schematic flowcharts of the sound message display method in an embodiment. It should be understood that, although the steps in the flowcharts of FIG. 6, FIG. 7, FIG. 15, FIG. 18, and FIG. 25 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited in order, and the steps may be performed in other orders. Moreover, at least some of the steps in FIG. 6, FIG. 7, FIG. 15, FIG. 18, and FIG. 25 may include multiple sub-steps or multiple stages; these sub-steps or stages are not necessarily completed at the same moment, but may be performed at different moments, and their execution order is not necessarily sequential; instead, they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
The following are apparatus embodiments of this application. For details not described in detail in the apparatus embodiments, reference may be made to the corresponding descriptions in the foregoing method embodiments.
FIG. 26 shows a structural block diagram of a sound message display apparatus according to an exemplary embodiment of this application. The sound message display apparatus is configured in a terminal in which an application is installed, the sound message display apparatus may be implemented by software or hardware, and the application is capable of receiving sound messages. The apparatus includes a processing module 920 and a display module 940.
The processing module 920 is configured to start the application according to a start operation.
The processing module 920 is configured to obtain n sound messages published by at least one user account, n being a positive integer.
The display module 940 is configured to display a sound message display screen of the application, where the sound messages located in a virtual world are displayed on the sound message display screen, and the sound messages are displayed using visual elements in the virtual world as carriers.
In an optional embodiment, the visual elements include virtual characters, and the display module 940 is configured to: obtain the virtual characters respectively corresponding to the n sound messages in the virtual world; generate a scene picture of the virtual world, where the virtual characters and the sound messages corresponding to the virtual characters are displayed in the scene picture; and display the sound message display screen of the application according to the scene picture of the virtual world.
At least one of the virtual character and the sound message is used to play the corresponding message content when a trigger operation is received.
In an optional embodiment, the processing module 920 is configured to determine, according to a message attribute of the sound message, the virtual character corresponding to the sound message.
In an optional embodiment, the processing module 920 is configured to determine, according to a message duration of the sound message, a role parameter of the virtual character corresponding to the sound message; or the processing module 920 is configured to determine, according to a published duration of the sound message, a role parameter of the virtual character corresponding to the sound message, the published duration being a duration timed from the upload moment of the sound message; or the processing module 920 is configured to determine, according to an unplayed duration during which the sound message remains in an unplayed state after being displayed, a role parameter of the virtual character corresponding to the sound message; the role parameter includes at least one of a type of role model, an animation style, an animation frequency, and a role special effect.
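By way of editorial illustration of mapping a message attribute to role parameters (the thresholds, model names, and animation names below are assumptions, not values taken from the embodiments):

```typescript
interface RoleParams { model: string; animation: string; animationHz: number; effect?: string; }

// Example mapping from message duration (in seconds) to role parameters.
function roleParamsFromDuration(messageSeconds: number): RoleParams {
  if (messageSeconds < 10) {
    return { model: "sparrow", animation: "hop", animationHz: 1.0 };
  }
  if (messageSeconds < 30) {
    return { model: "parrot", animation: "flap", animationHz: 0.5 };
  }
  return { model: "owl", animation: "perch", animationHz: 0.2, effect: "glow" };
}
```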
In an optional embodiment, the apparatus further includes a human-computer interaction module 960 and a playback module 980, as shown in FIG. 27.
The human-computer interaction module 960 is configured to receive a trigger operation on at least one of a virtual character and a sound message on the sound message display screen, the trigger operation including at least one of a tap operation and a slide operation.
The playback module 980 is configured to play, according to the trigger operation, the sound message corresponding to the virtual character.
Optionally, the human-computer interaction module 960 is further configured to receive various user-related operations. For example, the human-computer interaction module is further configured to receive a start operation, so that the processing module 920 starts the application according to the start operation.
In an optional embodiment, the display module 940 is further configured to display an add-friend control corresponding to the virtual character; the human-computer interaction module 960 is further configured to receive a first operation for triggering the add-friend control; and the processing module 920 is further configured to establish, according to the first operation, a friend relationship with the user account corresponding to the virtual character.
In an optional embodiment, the display module 940 is further configured to display a favorite control corresponding to the virtual character; the human-computer interaction module 960 is further configured to receive a second operation for triggering the favorite control; and the processing module 920 is further configured to add, according to the second operation, the sound message corresponding to the virtual character to favorites.
In an optional embodiment, the processing module 920 is further configured to, when the unplayed duration of a first sound message on the sound message display screen reaches an unplayed threshold, control a first virtual character corresponding to the first sound message to perform a preset reminder action.
In an optional embodiment, the processing module 920 is further configured to, when the unplayed durations of a first sound message and a second sound message on the sound message display screen reach an unplayed threshold, control a first virtual character and a second virtual character to exchange positions with each other, the first virtual character being the virtual character corresponding to the first sound message, and the second virtual character being the virtual character corresponding to the second sound message.
In an optional embodiment, the processing module 920 is further configured to, when a third sound message on the sound message display screen changes from an unplayed state to a played state, replace the third sound message on the sound message display screen with a fourth sound message in an unplayed state and replace a third virtual character with a fourth virtual character, the fourth virtual character being the virtual character corresponding to the fourth sound message, and the third virtual character being the virtual character corresponding to the third sound message; and move the third sound message and the third virtual character out of the sound message display screen, or move them to a designated area on the sound message display screen.
In an optional embodiment, the human-computer interaction module 960 is further configured to receive a refresh operation on the sound message display screen; the processing module 920 is further configured to obtain m other sound messages according to the refresh operation; and the display module 940 is further configured to simultaneously display, on the sound message display screen, the unplayed sound messages among the n sound messages and the virtual characters corresponding to the unplayed sound messages, as well as the m other sound messages and the virtual characters corresponding to the other sound messages.
In an optional embodiment, the virtual world is a virtual forest world.
The display module 940 is further configured to display a scene picture of the virtual forest world on the sound message display screen, the scene picture including a tree located in the virtual forest world, the upper part of the tree displaying the m other sound messages and the virtual characters corresponding to the other sound messages, and the middle and lower parts of the tree displaying the unplayed sound messages among the n sound messages and the virtual characters corresponding to the unplayed sound messages.
The virtual characters are birds on the tree in the virtual forest world.
In an optional embodiment, the processing module 920 is further configured to determine the height of the tree according to a first message quantity of the unplayed sound messages among the n sound messages and a second message quantity of the m other sound messages;
where the sum of the first message quantity and the second message quantity is positively correlated with the height of the tree.
In an optional embodiment, at least one played sound message exists among the n sound messages.
The display module 940 is further configured to cancel the display of the played sound messages among the n sound messages and of the virtual characters corresponding to the played sound messages; or the display module 940 is further configured to display, on the lawn below the tree, the virtual characters corresponding to the played sound messages among the n sound messages.
In an optional embodiment, the virtual world is a virtual forest world.
The display module 940 is further configured to separately render an ambient atmosphere layer, a background visual layer, and a sound element layer, the ambient atmosphere layer including the sky and the ground in the virtual forest world, the background visual layer including the trees in the virtual forest world, and the sound element layer including the virtual characters and the sound messages corresponding to the virtual characters; and to superimpose the ambient atmosphere layer, the background visual layer, and the sound element layer for display as the sound message display screen. The virtual characters may be various animals living in the virtual forest.
In an optional embodiment, the sound message display screen is a user interface for displaying sound messages published by stranger user accounts; or the sound message display screen is a user interface for displaying sound messages published by user accounts under the same topic; or the sound message display screen is a user interface for displaying sound messages published by user accounts belonging to the same region.
This application further provides a computer-readable storage medium, the storage medium storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the sound message display method provided in the foregoing method embodiments.
Optionally, this application further provides a computer program product including instructions which, when run on a computer device, cause the computer device to perform the sound message display method provided in the foregoing method embodiments.
A person of ordinary skill in the art may understand that all or some of the steps of the foregoing embodiments may be implemented by hardware, or by a program instructing related hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely preferred embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of this application shall fall within the protection scope of this application.

Claims (20)

  1. A sound message display method in an application, wherein the method is performed by a terminal and comprises:
    starting an application according to a start operation;
    obtaining n sound messages published by at least one user account, n being a positive integer; and
    displaying a sound message display screen of the application, wherein the sound messages located in a virtual world are displayed on the sound message display screen, and the sound messages are displayed using visual elements in the virtual world as carriers.
  2. The method according to claim 1, wherein the visual elements comprise virtual characters, and the displaying a sound message display screen of the application comprises:
    obtaining the virtual characters respectively corresponding to the n sound messages in the virtual world;
    generating a scene picture of the virtual world, wherein the virtual characters and the sound messages corresponding to the virtual characters are displayed in the scene picture; and
    displaying the sound message display screen of the application according to the scene picture of the virtual world.
  3. The method according to claim 2, wherein the obtaining the virtual characters respectively corresponding to the sound messages comprises:
    determining, according to a message attribute of the sound message, the virtual character corresponding to the sound message.
  4. The method according to claim 3, wherein the determining, according to a message attribute of the sound message, the virtual character corresponding to the sound message comprises:
    determining, according to a message content length of the sound message, a role parameter of the virtual character corresponding to the sound message; or
    determining, according to a published duration of the sound message, a role parameter of the virtual character corresponding to the sound message, the published duration being a duration timed from an upload moment of the sound message; or
    determining, according to an unplayed duration during which the sound message remains in an unplayed state after being displayed, a role parameter of the virtual character corresponding to the sound message, the role parameter comprising at least one of a type of role model, an animation style, an animation frequency, and a role special effect.
  5. The method according to claim 2, wherein the method further comprises:
    receiving a trigger operation on at least one of the virtual character and the sound message on the sound message display screen, the trigger operation comprising at least one of a tap operation and a slide operation; and
    playing, according to the trigger operation, the sound message corresponding to the virtual character.
  6. The method according to claim 5, wherein after the playing, according to the trigger operation, the sound message corresponding to the virtual character, the method further comprises:
    displaying an add-friend control corresponding to the virtual character;
    receiving a first operation for triggering the add-friend control; and
    establishing, according to the first operation, a friend relationship with a user account corresponding to the virtual character.
  7. The method according to claim 5, wherein after the playing, according to the trigger operation, the sound message corresponding to the virtual character, the method further comprises:
    displaying a favorite control corresponding to the virtual character;
    receiving a second operation for triggering the favorite control; and
    adding, according to the second operation, the sound message corresponding to the virtual character to favorites.
  8. The method according to any one of claims 2 to 7, wherein the method further comprises:
    when an unplayed duration of a first sound message on the sound message display screen reaches an unplayed threshold, controlling a first virtual character corresponding to the first sound message to perform a preset reminder action.
  9. The method according to any one of claims 2 to 7, wherein the method further comprises:
    when unplayed durations of a first sound message and a second sound message on the sound message display screen reach an unplayed threshold, controlling a first virtual character and a second virtual character to exchange positions with each other, the first virtual character being the virtual character corresponding to the first sound message, and the second virtual character being the virtual character corresponding to the second sound message.
  10. The method according to claim 5, wherein the method further comprises:
    when a third sound message on the sound message display screen changes from an unplayed state to a played state, replacing the third sound message on the sound message display screen with a fourth sound message in an unplayed state, and replacing a third virtual character with a fourth virtual character, the fourth virtual character being the virtual character corresponding to the fourth sound message, and the third virtual character being the virtual character corresponding to the third sound message; and
    moving the third sound message and the third virtual character out of the sound message display screen, or moving them to a designated area on the sound message display screen.
  11. The method according to any one of claims 2 to 7, wherein the method further comprises:
    receiving a refresh operation on the sound message display screen;
    obtaining m other sound messages according to the refresh operation; and
    simultaneously displaying, on the sound message display screen, unplayed sound messages among the n sound messages and virtual characters corresponding to the unplayed sound messages, as well as the m other sound messages and virtual characters corresponding to the other sound messages.
  12. The method according to claim 11, wherein the virtual world is a virtual forest world; and
    the simultaneously displaying, on the sound message display screen, unplayed sound messages among the n sound messages and virtual characters corresponding to the unplayed sound messages, as well as the m other sound messages and virtual characters corresponding to the other sound messages comprises:
    displaying, on the sound message display screen, a scene picture of the virtual forest world, the scene picture comprising a tree located in the virtual forest world, an upper part of the tree displaying the m other sound messages and the virtual characters corresponding to the other sound messages, and a middle and lower part of the tree displaying the unplayed sound messages among the n sound messages and the virtual characters corresponding to the unplayed sound messages.
  13. A sound message display apparatus in an application, wherein the apparatus comprises:
    a processing module, configured to start an application according to a start operation;
    the processing module being configured to obtain n sound messages published by at least one user account, n being a positive integer; and
    a display module, configured to display a sound message display screen of the application, wherein the sound messages located in a virtual world are displayed on the sound message display screen, and the sound messages are displayed using visual elements in the virtual world as carriers.
  14. A computer device, wherein the computer device comprises a memory and a processor, the memory storing at least one program, the at least one program being loaded and executed by the processor, so that the processor performs the following steps:
    starting an application according to a start operation;
    obtaining n sound messages published by at least one user account, n being a positive integer; and
    displaying a sound message display screen of the application, wherein the sound messages located in a virtual world are displayed on the sound message display screen, and the sound messages are displayed using visual elements in the virtual world as carriers.
  15. The computer device according to claim 14, wherein the visual elements comprise virtual characters, and when the computer program is executed by the processor to perform the step of displaying the sound message display screen, the processor is caused to perform the following steps:
    obtaining the virtual characters respectively corresponding to the n sound messages in the virtual world;
    generating a scene picture of the virtual world, wherein the virtual characters and the sound messages corresponding to the virtual characters are displayed in the scene picture; and
    displaying the sound message display screen of the application according to the scene picture of the virtual world.
  16. The computer device according to claim 14, wherein when the computer program is executed by the processor to perform the step of obtaining the virtual characters respectively corresponding to the sound messages, the processor is caused to perform the following step:
    determining, according to a message attribute of the sound message, the virtual character corresponding to the sound message.
  17. The computer device according to claim 14, wherein when the computer program is executed by the processor to perform the step of determining, according to a message attribute of the sound message, the virtual character corresponding to the sound message, the processor is caused to perform the following steps:
    determining, according to a message content length of the sound message, a role parameter of the virtual character corresponding to the sound message; or
    determining, according to a published duration of the sound message, a role parameter of the virtual character corresponding to the sound message, the published duration being a duration timed from an upload moment of the sound message; or
    determining, according to an unplayed duration during which the sound message remains in an unplayed state after being displayed, a role parameter of the virtual character corresponding to the sound message, the role parameter comprising at least one of a type of role model, an animation style, an animation frequency, and a role special effect.
  18. A computer-readable storage medium, wherein the computer-readable storage medium stores at least one program, the at least one program being loaded and executed by a processor, so that the processor performs the following steps:
    starting an application according to a start operation;
    obtaining n sound messages published by at least one user account, n being a positive integer; and
    displaying a sound message display screen of the application, wherein the sound messages located in a virtual world are displayed on the sound message display screen, and the sound messages are displayed using visual elements in the virtual world as carriers.
  19. The computer-readable storage medium according to claim 18, wherein the visual elements comprise virtual characters, and when the computer program is executed by the processor to perform the step of displaying the sound message display screen, the processor is caused to perform the following steps:
    obtaining the virtual characters respectively corresponding to the n sound messages in the virtual world;
    generating a scene picture of the virtual world, wherein the virtual characters and the sound messages corresponding to the virtual characters are displayed in the scene picture; and
    displaying the sound message display screen of the application according to the scene picture of the virtual world.
  20. The computer-readable storage medium according to claim 18, wherein when the computer program is executed by the processor to perform the step of obtaining the virtual characters respectively corresponding to the sound messages, the processor is caused to perform the following step:
    determining, according to a message attribute of the sound message, the virtual character corresponding to the sound message.
PCT/CN2019/106116 2018-09-30 2019-09-17 应用程序中的声音消息显示方法、装置、计算机设备及计算机可读存储介质 WO2020063394A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/096,683 US11895273B2 (en) 2018-09-30 2020-11-12 Voice message display method and apparatus in application, computer device, and computer-readable storage medium
US18/527,237 US20240098182A1 (en) 2018-09-30 2023-12-01 Voice message display

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811159451.4A CN110971502B (zh) 2018-09-30 2018-09-30 应用程序中的声音消息显示方法、装置、设备及存储介质
CN201811159451.4 2018-09-30

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/096,683 Continuation US11895273B2 (en) 2018-09-30 2020-11-12 Voice message display method and apparatus in application, computer device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2020063394A1 true WO2020063394A1 (zh) 2020-04-02

Family

ID=69951188

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/106116 WO2020063394A1 (zh) 2018-09-30 2019-09-17 应用程序中的声音消息显示方法、装置、计算机设备及计算机可读存储介质

Country Status (3)

Country Link
US (2) US11895273B2 (zh)
CN (2) CN113965542B (zh)
WO (1) WO2020063394A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112860214A (zh) * 2021-03-10 2021-05-28 北京车和家信息技术有限公司 基于语音会话的动画展示方法、装置、存储介质及设备

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570100B (zh) * 2016-10-31 2019-02-26 腾讯科技(深圳)有限公司 信息搜索方法和装置
CN113489833B (zh) * 2021-06-29 2022-11-04 维沃移动通信有限公司 信息播报方法、装置、设备及存储介质
CN117319340A (zh) * 2022-06-23 2023-12-29 腾讯科技(深圳)有限公司 语音消息的播放方法、装置、终端及存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100070980A1 (en) * 2008-09-16 2010-03-18 Fujitsu Limited Event detection system, event detection method, and program
WO2010075628A1 (en) * 2008-12-29 2010-07-08 Nortel Networks Limited User interface for orienting new users to a three dimensional computer-generated virtual environment
CN106774830A (zh) * 2016-11-16 2017-05-31 网易(杭州)网络有限公司 虚拟现实系统、语音交互方法及装置

Family Cites Families (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080163075A1 (en) * 2004-01-26 2008-07-03 Beck Christopher Clemmett Macl Server-Client Interaction and Information Management System
US7961854B1 (en) * 2005-10-13 2011-06-14 Tp Lab, Inc. System to record and analyze voice message usage information
US7775885B2 (en) * 2005-10-14 2010-08-17 Leviathan Entertainment, Llc Event-driven alteration of avatars
US20070218986A1 (en) * 2005-10-14 2007-09-20 Leviathan Entertainment, Llc Celebrity Voices in a Video Game
US8726195B2 (en) * 2006-09-05 2014-05-13 Aol Inc. Enabling an IM user to navigate a virtual world
US10963648B1 (en) * 2006-11-08 2021-03-30 Verizon Media Inc. Instant messaging application configuration based on virtual world activities
US8601386B2 (en) * 2007-04-20 2013-12-03 Ingenio Llc Methods and systems to facilitate real time communications in virtual reality
EP2174472A2 (fr) * 2007-07-27 2010-04-14 Goojet Procede et dispositif de creation d'applications informatiques
US20100070501A1 (en) * 2008-01-15 2010-03-18 Walsh Paul J Enhancing and storing data for recall and use using user feedback
US20090210483A1 (en) * 2008-02-15 2009-08-20 Sony Ericsson Mobile Communications Ab Systems Methods and Computer Program Products for Remotely Controlling Actions of a Virtual World Identity
US8311188B2 (en) * 2008-04-08 2012-11-13 Cisco Technology, Inc. User interface with voice message summary
CN101697559A (zh) * 2009-10-16 2010-04-21 深圳华为通信技术有限公司 一种消息显示的方法、装置和终端
JP2012129663A (ja) * 2010-12-13 2012-07-05 Ryu Hashimoto 発話指示装置
US8775535B2 (en) * 2011-01-18 2014-07-08 Voxilate, Inc. System and method for the transmission and management of short voice messages
US9155964B2 (en) * 2011-09-14 2015-10-13 Steelseries Aps Apparatus for adapting virtual gaming with real world information
CN103209201A (zh) * 2012-01-16 2013-07-17 上海那里信息科技有限公司 基于社交关系的虚拟化身互动系统和方法
CA2869699A1 (en) * 2012-04-04 2013-10-10 Scribble Technologies Inc. System and method for generating digital content
CN102780646B (zh) * 2012-07-19 2015-12-09 上海量明科技发展有限公司 即时通信中声音图标的实现方法、客户端及系统
CN102821065A (zh) * 2012-08-13 2012-12-12 上海量明科技发展有限公司 即时通信中输出音频消息显示形状的方法、客户端及系统
CN104104703B (zh) * 2013-04-09 2018-02-13 广州华多网络科技有限公司 多人音视频互动方法、客户端、服务器及系统
US9639902B2 (en) * 2013-07-25 2017-05-02 In The Chat Communications Inc. System and method for managing targeted social communications
CN104732975A (zh) * 2013-12-20 2015-06-24 华为技术有限公司 一种语音即时通讯方法及装置
CN106171038B (zh) * 2014-03-12 2019-11-15 腾讯科技(深圳)有限公司 通过蓝牙协议将外围设备连接到用户设备的方法和装置
US20160110922A1 (en) * 2014-10-16 2016-04-21 Tal Michael HARING Method and system for enhancing communication by using augmented reality
US20160142361A1 (en) * 2014-11-15 2016-05-19 Filmstrip, Inc. Image with audio conversation system and method utilizing social media communications
US9955902B2 (en) * 2015-01-29 2018-05-01 Affectomatics Ltd. Notifying a user about a cause of emotional imbalance
CN105049318B (zh) * 2015-05-22 2019-01-08 腾讯科技(深圳)有限公司 消息发送方法和装置、消息处理方法和装置
CN106817349B (zh) * 2015-11-30 2020-04-14 厦门黑镜科技有限公司 一种在通信过程中使通信界面产生动画效果的方法及装置
US10313281B2 (en) * 2016-01-04 2019-06-04 Rockwell Automation Technologies, Inc. Delivery of automated notifications by an industrial asset
US10638256B1 (en) * 2016-06-20 2020-04-28 Pipbin, Inc. System for distribution and display of mobile targeted augmented reality content
CN107623622A (zh) * 2016-07-15 2018-01-23 掌赢信息科技(上海)有限公司 一种发送语音动画的方法及电子设备
US10659398B2 (en) * 2016-10-03 2020-05-19 Nohold, Inc. Interactive virtual conversation interface systems and methods
CN107222398B (zh) * 2017-07-24 2021-01-05 广州腾讯科技有限公司 社交消息控制方法、装置、存储介质和计算机设备
CN107707538B (zh) * 2017-09-27 2020-04-24 Oppo广东移动通信有限公司 数据传输方法、装置、移动终端及计算机可读存储介质
CN108037869B (zh) * 2017-12-07 2022-04-26 北京小米移动软件有限公司 消息显示方法、装置及终端

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100070980A1 (en) * 2008-09-16 2010-03-18 Fujitsu Limited Event detection system, event detection method, and program
WO2010075628A1 (en) * 2008-12-29 2010-07-08 Nortel Networks Limited User interface for orienting new users to a three dimensional computer-generated virtual environment
CN106774830A (zh) * 2016-11-16 2017-05-31 网易(杭州)网络有限公司 虚拟现实系统、语音交互方法及装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112860214A (zh) * 2021-03-10 2021-05-28 北京车和家信息技术有限公司 基于语音会话的动画展示方法、装置、存储介质及设备
CN112860214B (zh) * 2021-03-10 2023-08-01 北京车和家信息技术有限公司 基于语音会话的动画展示方法、装置、存储介质及设备

Also Published As

Publication number Publication date
CN110971502A (zh) 2020-04-07
US20210067632A1 (en) 2021-03-04
CN113965542A (zh) 2022-01-21
US11895273B2 (en) 2024-02-06
US20240098182A1 (en) 2024-03-21
CN113965542B (zh) 2022-10-04
CN110971502B (zh) 2021-09-28

Similar Documents

Publication Publication Date Title
WO2020063394A1 (zh) 应用程序中的声音消息显示方法、装置、计算机设备及计算机可读存储介质
US11784841B2 (en) Presenting participant reactions within a virtual conferencing system
CN107924414B (zh) 促进在计算装置处进行多媒体整合和故事生成的个人辅助
EP3095091B1 (en) Method and apparatus of processing expression information in instant communication
US10127632B1 (en) Display and update of panoramic image montages
US20240080215A1 (en) Presenting overview of participant reactions within a virtual conferencing system
US20160155256A1 (en) Avatar personalization in a virtual environment
KR20230022983A (ko) 외부-리소스 도크 및 서랍을 포함하는 메시징 시스템
US20220206738A1 (en) Selecting an audio track in association with multi-video clip capture
US11888795B2 (en) Chats with micro sound clips
US10261749B1 (en) Audio output for panoramic images
US20240094983A1 (en) Augmenting image content with sound
CN114995704A (zh) 用于三维环境的集成化输入输出
KR20230166957A (ko) 3차원 가상 환경에서 내비게이션 보조를 제공하기 위한 방법 및 시스템
US20230352054A1 (en) Editing video captured by electronic devices
US12001658B2 (en) Content collections linked to a base item
US20240160343A1 (en) Selectively modifying a gui
US20240163489A1 (en) Navigating previously captured images and ar experiences
US20230377281A1 (en) Creating augmented content items
US20240071004A1 (en) Social memory re-experiencing system
US20240069626A1 (en) Timelapse re-experiencing system
US20230343004A1 (en) Augmented reality experiences with dual cameras
US20230351627A1 (en) Automatically cropping of landscape videos
WO2023211616A1 (en) Editing video captured by electronic devices using associated flight path information
WO2024107720A1 (en) Selectively modifying a gui

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19865143

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19865143

Country of ref document: EP

Kind code of ref document: A1