US11895273B2 - Voice message display method and apparatus in application, computer device, and computer-readable storage medium - Google Patents

Voice message display method and apparatus in application, computer device, and computer-readable storage medium Download PDF

Info

Publication number
US11895273B2
US11895273B2 (application US17/096,683; publication US202017096683A)
Authority
US
United States
Prior art keywords
voice message
voice
virtual
message
character
Prior art date
Legal status
Active, expires
Application number
US17/096,683
Other languages
English (en)
Other versions
US20210067632A1 (en)
Inventor
Wei Dai
Kun Lu
Qinghua Zhong
Jun Wu
Yingren WANG
Rong Huang
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Assigned to TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED reassignment TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LU, KUN, DAI, WEI, HUANG, RONG, WANG, Yingren, WU, JUN, ZHONG, QINGHUA
Publication of US20210067632A1
Priority to US18/527,237 (published as US20240098182A1)
Application granted
Publication of US11895273B2
Legal status: Active

Classifications

    • H04L 67/04: Protocols specially adapted for terminals or networks with limited capabilities; specially adapted for terminal portability
    • H04L 51/04: Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04M 3/53333: Voice mail systems; message receiving aspects
    • G06T 13/205: 3D [Three Dimensional] animation driven by audio data
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • H04L 51/10: User-to-user messaging characterised by the inclusion of multimedia information
    • H04L 51/52: User-to-user messaging for supporting social networking services
    • H04L 67/125: Protocols for special-purpose networking environments involving control of end-device applications over a network
    • H04M 2201/12: Counting circuits
    • H04M 2201/14: Delay circuits; timers
    • H04M 2203/252: User interface aspects where a voice mode is enhanced with visual information

Definitions

  • Embodiments of this application relate to the field of computer programs, including a voice message display method and apparatus in an application (APP), a computer device, and a computer-readable storage medium.
  • Social APPs are among the APPs most commonly used on mobile terminals. Social APPs mainly use text and pictures as communication media, but emerging social APPs use voice messages as a communication medium. In social APPs that use voice messages as a communication medium, the voice messages are displayed as a plurality of voice cells arranged in reverse chronological order of their upload times, each voice cell corresponding to one voice message. Further, each voice cell can have a corresponding rounded rectangular box, and a user may click the voice cell to play the voice message.
  • Embodiments of this application provide a voice message display method and apparatus in an APP, a computer device, and a computer-readable storage medium.
  • a voice message display method in an APP is provided that can be performed by a terminal.
  • the method can include starting an APP according to an operation signal, obtaining n voice messages published by at least one user account, where n is a positive integer, and displaying a voice message presentation interface of the APP, the voice message presentation interface displaying the voice message in a virtual world, the voice message being displayed by using a visible element in the virtual world as a carrier.
  • a voice message display apparatus in an APP can also be provided.
  • the apparatus can include processing circuitry that can be configured to start an APP according to a start operation, obtain n voice messages published by at least one user account, where n is a positive integer, and display a voice message presentation interface of the APP, the voice message presentation interface displaying the voice message in a virtual world, the voice message being displayed by using a visible element in the virtual world as a carrier.
  • a computer device including a memory and a processor, the memory storing at least one program that, when executed by the processor, causes the processor to implement the foregoing voice message display method in an APP.
  • a non-transitory computer-readable storage medium storing at least one program, the at least one program being loaded and executed by the processor to implement the foregoing voice message display method in an APP.
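  • The claimed flow of starting the APP, obtaining n voice messages, and binding each message to a visible carrier in the virtual world can be sketched as follows. This is a minimal illustration, not the patented implementation; all class and function names are assumptions.

```python
from dataclasses import dataclass


@dataclass
class VoiceMessage:
    message_id: str
    user_account: str
    duration_seconds: float


def display_in_virtual_world(messages):
    """Obtain n voice messages (n a positive integer) published by at
    least one user account and bind each one to a visible element in the
    virtual world that serves as its carrier."""
    n = len(messages)
    if n < 1:
        raise ValueError("n must be a positive integer")
    # One carrier (e.g. a virtual character) per voice message.
    return {m.message_id: f"carrier-{i}" for i, m in enumerate(messages)}
```
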
  • FIG. 1 is a schematic diagram of an interface of a voice message display method provided in the related art.
  • FIG. 2 is a structural block diagram of a computer system according to an exemplary embodiment of this application.
  • FIG. 3 is a structural block diagram of a terminal according to an exemplary embodiment of this application.
  • FIG. 4 is a structural block diagram of a server according to an exemplary embodiment of this application.
  • FIG. 5 is a schematic diagram of an interface of a voice message display method according to an exemplary embodiment of this application.
  • FIG. 6 is a flowchart of a voice message display method according to an exemplary embodiment of this application.
  • FIG. 7 is a flowchart of a voice message display method according to an exemplary embodiment of this application.
  • FIG. 8 is a schematic diagram of an interface of a voice message display method according to an exemplary embodiment of this application.
  • FIG. 9 is a schematic diagram of layering of a voice message display method according to an exemplary embodiment of this application.
  • FIG. 10 is a diagram of a correspondence between a character model of a bird and a message duration according to an exemplary embodiment of this application.
  • FIG. 11 is a diagram of a correspondence between a character model of a bird and a published duration according to an exemplary embodiment of this application.
  • FIG. 12 is a diagram of a correspondence between a character model of a bird and a published duration according to another exemplary embodiment of this application.
  • FIG. 13 is a flowchart of a voice message display method according to another exemplary embodiment of this application.
  • FIG. 14 is a schematic diagram of an interface of a voice message display method according to another exemplary embodiment of this application.
  • FIG. 15 is a schematic diagram of an interface of a voice message display method according to another exemplary embodiment of this application.
  • FIG. 16 is a schematic diagram of an interface of a voice message display method according to another exemplary embodiment of this application.
  • FIG. 17 is a schematic diagram of an interface of a voice message display method according to another exemplary embodiment of this application.
  • FIG. 18 is a flowchart of a voice message display method according to another exemplary embodiment of this application.
  • FIG. 19 is a schematic diagram of an interface of a voice message display method according to another exemplary embodiment of this application.
  • FIG. 20 is a schematic diagram of an interface of a voice message display method according to another exemplary embodiment of this application.
  • FIG. 21 is a flowchart of a voice message display method according to another exemplary embodiment of this application.
  • FIG. 22 is a flowchart of a voice message display method according to another exemplary embodiment of this application.
  • FIG. 23 is a schematic diagram of an interface of a voice message display method according to another exemplary embodiment of this application.
  • FIG. 24 is a schematic diagram of an interface of a voice message display method according to another exemplary embodiment of this application.
  • FIG. 25 is a flowchart of a voice message display method according to another exemplary embodiment of this application.
  • FIG. 26 is a block diagram of a voice message display apparatus according to another exemplary embodiment of this application.
  • FIG. 27 is a block diagram of a voice message display apparatus according to another exemplary embodiment of this application.
  • a voice social APP is a social APP based on voice messages and is also referred to as a sound social APP.
  • Voice social APPs account for 7.5% of a plurality of social APPs in a ranking list of free social APPs of a specific APP store.
  • Conventional radio APPs have also added social attributes, hoping to occupy a place in the vertical market of voice social APPs.
  • Voice messages can be displayed by using a feed stream in voice social APPs. The feed stream combines a plurality of message sources to which a user actively subscribes into a content aggregator, helping the user continuously obtain the latest content from the subscribed sources.
  • the feed stream is usually displayed in a form of a timeline in a user interface (UI).
  • As shown in FIG. 1, in a feed stream 10, a plurality of voice cells 12 are sorted and displayed in chronological order of publishing times, and the voice cells 12 and voice messages are in a one-to-one correspondence.
  • Each voice cell 12 is displayed as a rectangular box, each rectangular box is provided with a play button 14 and a voice print 16, and the voice print 16 corresponds to the voice message.
  • When a user clicks the play button 14 on a specific voice cell 12, playback of the voice message corresponding to that voice cell 12 is triggered.
  • However, the voice cells 12 are basically the same in terms of presentation effect on the interface and may differ from each other only in their voice prints, so the user cannot accurately distinguish played voice messages from unplayed ones.
  • To find a particular message, the user needs to drag the feed stream up and down and search through the plurality of voice cells 12 one by one. More time and more operation steps are therefore consumed to sift through the voice messages, resulting in relatively low man-machine interaction efficiency.
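  • The related-art feed stream described above can be sketched as a simple sort over voice cells. The dict keys below are illustrative assumptions, not fields named by the patent.

```python
def build_feed_stream(voice_cells):
    """Related-art feed stream: one voice cell per voice message,
    sorted in reverse chronological order of publishing time
    (newest first)."""
    return sorted(voice_cells, key=lambda c: c["published_at"], reverse=True)
```
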
  • the embodiments of this application provide an improved display solution for a voice message by constructing a virtual world and displaying a voice message through a virtual character in the virtual world, each voice message corresponding to a respective virtual character.
  • the virtual world is a virtual forest world
  • the virtual character is a bird in the virtual forest world
  • each voice message corresponds to a respective bird
  • display manners of some birds are different.
  • the virtual world is a virtual ocean world
  • the virtual character is a fish in the virtual ocean world
  • each voice message corresponds to a respective fish
  • display manners of some fish are different. Therefore, the user can distinguish different voice messages according to virtual characters having different display manners relatively easily. Even played voice messages and unplayed voice messages can be distinguished according to the different display manners.
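  • The mapping from a voice message to its own virtual character, with a display manner that separates played from unplayed messages, could look like the following sketch. The specific manner names ("resting", "singing") are hypothetical and only illustrate that the manners differ.

```python
def assign_virtual_character(message, world="forest"):
    """Map a voice message to its own virtual character. The species
    depends on the chosen virtual world (birds in the forest world,
    fish in the ocean world), and the display manner distinguishes
    played messages from unplayed ones."""
    species = {"forest": "bird", "ocean": "fish"}[world]
    manner = "resting" if message["played"] else "singing"  # hypothetical manners
    return {"message_id": message["id"], "species": species, "display_manner": manner}
```
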
  • FIG. 2 is a structural block diagram of a computer system 200 according to an exemplary embodiment of this application.
  • the computer system 200 may be an instant messaging system, a team voice chat system, or another APP system having a social attribute. Of course, this is not limited in this embodiment of this application.
  • the computer system 200 can include a first terminal 220 , a server cluster 240 , and a second terminal 260 .
  • the first terminal 220 is connected to the server cluster 240 through a wireless network or wired network.
  • the first terminal 220 may be at least one of a smartphone, a game console, a desktop computer, a tablet computer, an ebook reader, a Moving Picture Experts Group Audio Layer III (MP3) player, a Moving Picture Experts Group Audio Layer IV (MP4) player, and a portable laptop computer.
  • An APP supporting a voice message is installed and run on the first terminal 220 .
  • the APP may be any one of a voice social APP, an instant messaging APP, a team voice APP, a social APP aggregating people based on topics, channels, or circles, and a social APP based on shopping.
  • the first terminal 220 is a terminal used by a first user, and a first user account logs in to an APP run on the first terminal 220 .
  • the first terminal 220 is connected to the server cluster 240 through a wireless network or a wired network.
  • the server cluster 240 includes at least one of one server, a plurality of servers, a cloud computing platform, and a virtualization center.
  • the server cluster 240 is configured to provide a backend service for the APP supporting a voice message.
  • the server cluster 240 takes on primary computing work, and the first terminal 220 and the second terminal 260 take on secondary computing work; alternatively, the server cluster 240 takes on secondary computing work, and the first terminal 220 and the second terminal 260 take on primary computing work; alternatively, collaborative computing is performed by using a distributed computing architecture among the server cluster 240 , the first terminal 220 , and the second terminal 260 .
  • the server cluster 240 can include an access server 242 and a message forwarding server 244 .
  • the access server 242 is configured to provide an access service and an information receiving/transmitting service for the first terminal 220 and the second terminal 260 , and forward a message, such as a voice message, a text message, a picture message, or a video message, between a terminal and the message forwarding server 244 .
  • the message forwarding server 244 is configured to provide a background service for the APP, for example, at least one of a friend adding service, a text message forwarding service, a voice message forwarding service, and a picture message forwarding service. There may be one or more message forwarding servers 244 .
  • When there are a plurality of message forwarding servers 244, at least two of them are configured to provide different services, and/or at least two of them are configured to provide the same service, for example, in a load-balancing manner. Of course, this is not limited in this embodiment of this application.
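  • One simple way to provide the same service in a load-balancing manner, as described above, is round-robin selection across the message forwarding servers. This is an illustrative sketch; the patent does not specify a balancing algorithm.

```python
import itertools


class MessageForwardingPool:
    """Pick a message forwarding server for each request in round-robin
    fashion across servers that provide the same service."""

    def __init__(self, servers):
        if not servers:
            raise ValueError("at least one server is required")
        self._cycle = itertools.cycle(servers)

    def pick(self):
        # Each call hands back the next server in the rotation.
        return next(self._cycle)
```
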
  • An APP supporting a voice message is installed and run on the second terminal 260 .
  • the APP may be any one of a voice social APP, an instant messaging APP, a team voice APP, a social APP aggregating people based on topics, channels, or circles, and a social APP based on shopping.
  • the second terminal 260 is a terminal used by a second user. A second user account logs in to an APP on the second terminal 260.
  • the first user account and the second user account are in a virtual social network, and the virtual social network provides a propagation approach for voice messages between the first user account and the second user account.
  • the virtual social network may be provided by the same social platform, or may be provided by a plurality of social platforms having an association relationship (for example, an authorized login relationship) cooperatively.
  • a specific form of the virtual social network is not limited in this embodiment of this application.
  • the first user account and the second user account may belong to the same team or the same organization, have a friend relationship, or have a temporary communication permission.
  • the first user account and the second user account may also be in an unfamiliar relationship.
  • the virtual social network provides a one-way message propagation approach or a two-way message propagation approach between the first user account and the second user account, to help voice messages propagate between different user accounts.
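  • The one-way and two-way propagation approaches described above can be modeled as directed edges in the virtual social network; a two-way approach is simply a pair of one-way entries. The graph representation below is an assumption for illustration.

```python
def can_propagate(approaches, sender, receiver):
    """A voice message may propagate from sender to receiver only if the
    virtual social network records an approach (e.g. friendship, same
    team or organization, or a temporary communication permission) in
    that direction."""
    return receiver in approaches.get(sender, set())
```
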
  • The APPs installed on the first terminal 220 and the second terminal 260 may be the same, or may be the same type of APP on different operating system platforms, or may be different APPs that support the same type of voice message.
  • The different operating systems can include an Apple operating system, an Android operating system, a Linux operating system, a Windows operating system, and the like.
  • the first terminal 220 may generally refer to one of a plurality of terminals
  • the second terminal 260 may generally refer to one of a plurality of terminals
  • description is made by using only the first terminal 220 and the second terminal 260 as an example.
  • Terminal types of the first terminal 220 and the second terminal 260 are the same or different.
  • the terminal type includes at least one of a smartphone, a game console, a desktop computer, a tablet computer, an ebook reader, an MP3 player, an MP4 player, and a laptop computer. In the following embodiments, description is made by using an example in which the first terminal 220 and/or the second terminal 260 is a smartphone.
  • the computer system further includes another terminal 280 .
  • the quantity and the device types of the terminals are not limited in this embodiment of this application.
  • FIG. 3 is a structural block diagram of a terminal 300 according to an exemplary embodiment of this application.
  • the terminal 300 may be a smartphone, a tablet computer, an MP3 player, an MP4 player, a notebook computer, or a desktop computer.
  • the terminal 300 may also be referred to as user equipment, a portable terminal, a laptop terminal, a desktop terminal, or another name.
  • the terminal 300 may be a first terminal or a second terminal.
  • the terminal 300 includes a processor 301 and a memory 302 .
  • the processor 301 may include one or more processing cores, and for example, may be a 4-core processor or an 8-core processor.
  • the processor 301 may be implemented by using at least one hardware form of digital signal processing (DSP), a field-programmable gate array (FPGA), and a programmable logic array (PLA).
  • the processor 301 may alternatively include a main processor and a coprocessor.
  • the main processor is a processor configured to process data in an awake state, also referred to as a central processing unit (CPU), and the coprocessor is a low-power processor configured to process data in a standby state.
  • the processor 301 may be integrated with a graphics processing unit (GPU).
  • the GPU is configured to be responsible for rendering and drawing content to be displayed by a display screen.
  • the processor 301 may further include an artificial intelligence (AI) processor.
  • the memory 302 may include one or more computer-readable storage media, and the computer-readable storage media may be non-transitory.
  • the memory 302 may further include a high-speed random access memory, and a non-volatile memory such as one or more magnetic disk storage devices and a flash storage device.
  • the non-transitory computer-readable storage medium in the memory 302 is configured to store at least one instruction, and the at least one instruction is used for being executed by the processor 301 to implement the voice message display method in an APP provided in the method embodiments of this application.
  • the terminal 300 further optionally includes a peripheral interface 303 and at least one peripheral.
  • the processor 301 , the memory 302 , and the peripheral interface 303 may be connected through a bus or a signal cable.
  • Each peripheral may be connected to the peripheral interface 303 through a bus, a signal cable, or a circuit board.
  • the peripheral includes: at least one of a radio frequency (RF) circuit 304 , a display screen 305 , a camera component 306 , an audio circuit 307 , a positioning component 308 , and a power supply 309 .
  • the peripheral interface 303 may be configured to connect at least one peripheral related to input/output (I/O) to the processor 301 and the memory 302 .
  • the display screen 305 is configured to display a UI.
  • the UI may include a graph, text, an icon, a video, and any combination thereof.
  • the display screen 305 is further capable of obtaining a touch signal on or above a surface of the display screen 305 .
  • the camera component 306 is configured to obtain an image or a video.
  • the audio circuit 307 may include a microphone and a speaker.
  • the microphone is configured to collect sound waves from the user and the environment, convert the sound waves into electrical signals, and input the electrical signals into the processor 301 for processing, or into the RF circuit 304 to implement speech communication.
  • the positioning component 308 is configured to position a current geographic location of the terminal 300 for implementing navigation or a location based service (LBS).
  • the power supply 309 is configured to supply power for various components in the terminal 300 .
  • the power supply 309 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery.
  • the terminal 300 may further include one or more sensors 310 .
  • the one or more sensors 310 include, but are not limited to, an acceleration sensor 311 , a gyroscope sensor 312 , a pressure sensor 313 , a fingerprint sensor 314 , an optical sensor 315 , and a proximity sensor 316 .
  • the acceleration sensor 311 may detect accelerations on three coordinate axes of a coordinate system established by the terminal 300 .
  • the gyroscope sensor 312 may detect a body direction and a rotation angle of the terminal 300 .
  • the gyroscope sensor 312 may cooperate with the acceleration sensor 311 to collect a 3D action by the user on the terminal 300 .
  • the pressure sensor 313 may be disposed on a side frame of the terminal 300 or a lower layer of the display screen 305 .
  • When the pressure sensor 313 is disposed on the side frame, a holding signal of the user on the terminal 300 may be detected, and the processor 301 may perform left/right hand identification or a shortcut operation according to the holding signal collected by the pressure sensor 313.
  • the processor 301 controls an operable control on the UI according to a pressure operation performed by the user on the display screen 305.
  • the fingerprint sensor 314 is configured to collect a user's fingerprint, and the processor 301 identifies a user's identity according to the fingerprint collected by the fingerprint sensor 314 , or the fingerprint sensor 314 identifies a user's identity according to the collected fingerprint.
  • the optical sensor 315 is configured to collect ambient light intensity.
  • the proximity sensor 316 also referred to as a distance sensor, is usually disposed on a front panel of the terminal 300 .
  • the proximity sensor 316 is configured to collect a distance between a user and the front surface of the terminal 300 .
  • the memory 302 further includes the following program modules (or instruction sets), or a subset or a superset thereof: an operating system 321; a communication module 322; a contact/motion module 323; a graphics module 324; a tactile feedback module 325; a text input module 326; a GPS module 327; a digital assistant client module 328; a data, user, and model module 329; and APPs 330: a contact module 330-1, a telephone module 330-2, a video conference module 330-3, an email module 330-4, an instant messaging module 330-5, a fitness support module 330-6, a camera module 330-7, an image management module 330-8, a multimedia player module 330-9, a notepad module 330-10, a map module 330-11, a browser module 330-12, and so on.
  • the memory 302 further includes an APP 330-22 supporting a voice message.
  • the APP 330-22 may be configured to implement the voice message display method in an APP in the following method embodiments.
  • FIG. 3 does not constitute a limitation on the terminal 300 , and more or fewer components than those shown in the figure may be included, or some components may be combined, or a different component deployment may be used.
  • This application further provides a non-transitory computer-readable storage medium, the computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the voice message display method in an APP according to the embodiments of this application.
  • FIG. 4 is a schematic structural diagram of a server according to an exemplary embodiment of this application.
  • the server may be implemented as any server in the foregoing server cluster 240 .
  • a server 400 includes a central processing unit (CPU) 401 , a system memory 404 including a random access memory (RAM) 402 and a read-only memory (ROM) 403 , and a system bus 405 connecting the system memory 404 and the CPU 401 .
  • the server 400 further includes a basic input/output system (I/O system) 406 for transmitting information between components in a computer, and a mass storage device 407 configured to store an operating system 413 , a client 414 , and another program module 415 .
  • the basic I/O system 406 includes a display 408 configured to display information and an input device 409 such as a mouse or a keyboard that is configured for information inputting by a user.
  • the display 408 and the input device 409 are both connected to the CPU 401 by an input/output (I/O) controller 410 connected to the system bus 405 .
  • the basic I/O system 406 may further include the input/output controller 410 , to receive and process inputs from a plurality of other devices, such as the keyboard, the mouse, or an electronic stylus.
  • the input/output controller 410 further provides an output to a display, a printer or another type of output device.
  • the mass storage device 407 is connected to the CPU 401 by using a mass storage controller (not shown) connected to the system bus 405 .
  • the mass storage device 407 and an associated computer-readable medium provide non-volatile storage for the server 400 . That is, the mass storage device 407 may include a computer-readable medium (not shown) such as a hard disk or a compact disc ROM (CD-ROM) drive.
  • the computer-readable medium may include a computer storage medium and a communication medium.
  • the computer storage medium includes volatile and non-volatile media, and removable and non-removable media implemented by using any method or technology and configured to store information such as a computer-readable instruction, a data structure, a program module, or other data.
  • the computer storage medium includes a RAM, a ROM, an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a flash memory or another solid-state memory technology, a CD-ROM, a digital versatile disc (DVD) or another optical memory, a tape cartridge, a magnetic cassette, a magnetic disk memory, or another magnetic storage device.
  • the server 400 may further be connected, through a network such as the Internet, to a remote computer on the network and run. That is, the server 400 may be connected to a network 412 by using a network interface unit 411 connected to the system bus 405 , or may be connected to another type of network or remote computer system (not shown) by using the network interface unit 411 .
  • FIG. 5 and FIG. 6 respectively show a schematic interface diagram and a flowchart of a voice message display method in an APP according to an exemplary embodiment of this application.
  • a user installs an APP supporting a capability of transmitting and/or receiving a voice message on a terminal. After the APP is installed, an icon of the APP is displayed at a home page of the terminal.
  • a home page 50 of the terminal displays a program icon 51 of the APP.
  • the user performs a start operation on the program icon 51 of the APP.
  • the start operation may be a click operation acting on the program icon 51 of the APP.
  • the method can start an APP according to a start operation.
  • a terminal calls a start process of the APP, and starts the APP to enter a foreground running state through the start process.
  • a user account is logged in to at the server.
  • the user account is used for uniquely identifying each user, a first user account is logged in to in the APP, and other user accounts are logged in to in APPs in other terminals, for example, a second user account and a third user account.
  • Each user publishes (or uploads or transmits) a generated voice message in a respective APP.
  • the method can obtain n voice messages published by at least one user account.
  • the APP obtains n voice messages published by at least one user account from the server, and the at least one user account is usually a user account other than the first user account.
  • the case in which the n voice messages include a voice message uploaded by the first user account is not excluded.
  • n is at least one; in most embodiments, n is a positive integer greater than 1.
  • the voice message refers to a message transmitting information through a voice signal.
  • the voice message only includes a voice signal.
  • the voice message refers to a message transmitting information through a voice signal, for example, the voice message includes the voice signal and supplementary explanatory text; in another example, the voice message includes the voice signal and a supplementary explanatory picture; and in another example, the voice message includes the voice signal, the supplementary explanatory text, and the supplementary explanatory picture.
  • an example in which the voice message only includes the voice signal is used for description.
  • the APP may first obtain a message identifier of the voice message, and obtain message content of the voice message from the server according to the message identifier when the voice message needs to be played.
  • the APP may alternatively directly obtain the message content of the voice message and buffer the message content locally.
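The two retrieval strategies above (lazy fetching by message identifier versus buffering the content locally) can be sketched as follows. This is an illustrative sketch only; the names `VoiceMessage`, `fetch_content`, and `play` are assumptions, not from the patent.

```python
class VoiceMessage:
    def __init__(self, message_id, content=None):
        self.message_id = message_id
        self.content = content  # None until fetched in the lazy strategy

def fetch_content(message_id, server):
    """Simulate requesting message content from the server by identifier."""
    return server[message_id]

def play(message, server):
    # Lazy strategy: fetch the content only when playback is requested,
    # then buffer it locally for later replays.
    if message.content is None:
        message.content = fetch_content(message.message_id, server)
    return message.content

server = {"m1": b"audio-bytes-1"}
msg = VoiceMessage("m1")            # only the message identifier was obtained
assert play(msg, server) == b"audio-bytes-1"
assert msg.content is not None      # content is now buffered locally
```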
  • the method can display a voice message presentation interface of the APP, the voice message presentation interface displaying the voice message in a virtual world, the voice message being displayed by using a visible element in the virtual world as a carrier.
  • the APP provides various UIs, and the voice message presentation interface is one of the various UIs.
  • a voice message presentation interface is displayed by default.
  • FIG. 5 after a program icon 51 of the APP is clicked, a voice message presentation interface 54 starts to be displayed by default.
  • a home page is displayed by default, the home page provides various function options, and the user needs to select among the various function options, to control the APP to display the voice message presentation interface 54 .
  • FIG. 8 after an APP is started, a home page is displayed by default, the home page displays a list expansion function option 52 , and list expansion is short for expanding a friend list.
  • the list expansion function interface includes three tab pages: List expansion, List expansion group, and Voice forest.
  • a tab page named List expansion is displayed first by default in an initial state.
  • the APP jumps to display a voice message presentation interface 54. It should be understood that a display level and a jump path of the voice message presentation interface 54 in the APP are not limited in this embodiment.
  • the voice message presentation interface is used for displaying a UI of a voice message published by at least one user account.
  • the voice message presentation interface is a UI used for displaying a voice message published by an unfamiliar user account.
  • the voice message presentation interface is a UI used for displaying voice messages published by user accounts in the same topic (or the channel, or the circle, or the theme).
  • the voice message presentation interface is a UI used for displaying voice messages published by user accounts belonging to the same area, for example, a current city or school.
  • a voice message in a virtual world is displayed in the voice message presentation interface, and the voice message is displayed by using a visible element in the virtual world as a carrier.
  • the virtual world may be a two-dimensional world, a 2.5-dimensional world, or a three-dimensional world.
  • the visible element may be any object or material observable in the virtual world.
  • the visible element includes, but is not limited to: at least one of cloud, fog, thunder and lightning, a fluid, a static object, a plant, an animal, a virtual image, and a cartoon image.
  • the APP may display the voice message as the visible element in the virtual world, or may associate or mount the voice message on a peripheral side of the visible element in the virtual world for display. This is not limited in this embodiment.
  • this step may include the following steps.
  • step 603 a the method can obtain virtual characters respectively corresponding to the n voice messages in the virtual world.
  • step 603 b the method can generate a scene screen of the virtual world, the scene screen displaying the virtual characters and the voice messages corresponding to the virtual characters.
  • step 603 c the method can display the voice message presentation interface of the APP according to the scene screen of the virtual world.
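Steps 603a through 603c can be sketched as a small pipeline. The helper names `assign_character` and `render` are assumptions standing in for the APP's internals.

```python
def build_presentation(voice_messages, assign_character, render):
    # Step 603a: obtain a virtual character for each of the n voice messages
    characters = [assign_character(m) for m in voice_messages]
    # Step 603b: generate the scene screen, pairing each virtual character
    # with its corresponding voice message
    scene = list(zip(characters, voice_messages))
    # Step 603c: display the presentation interface according to the scene
    return render(scene)

messages = ["msg-A", "msg-B"]
ui = build_presentation(messages, lambda m: f"bird-for-{m}", lambda s: s)
assert ui == [("bird-for-msg-A", "msg-A"), ("bird-for-msg-B", "msg-B")]
```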
  • the APP needs to obtain virtual characters respectively corresponding to the n voice messages, and each voice message corresponds to a virtual character.
  • the virtual character is an individual element observable in the virtual world. That is, the virtual character is an element that can be individually and clearly distinguished at a visible angle.
  • the virtual character may be a living character in the virtual world.
  • the virtual character is a character displayed by using a character model in a form of a cartoon, a plant and/or an animal. Schematically, the virtual character is at least one type of various flower characters, mammal characters, bird characters, reptile characters, amphibian characters, fish characters, dinosaur characters, animation and comic characters, and other fictional characters.
  • the virtual character is a character in the virtual world.
  • the virtual world may be at least one of a two-dimensional virtual world, a 2.5-dimensional virtual world, and a three-dimensional virtual world.
  • Classified according to world type, the virtual world may be any one of a virtual forest world, a virtual ocean world, a virtual aquarium world, a virtual space world, a virtual animation and comic world, and a virtual fantasy world.
  • the virtual character may be stored locally in a form of a material library; or may be provided by a server for an APP.
  • the server provides virtual characters of n voice messages for the APP through a web page file.
  • the server may also transmit a virtual character corresponding to each voice message to the APP after determining the virtual character corresponding to the voice message.
  • the APP simultaneously receives n voice messages and virtual characters corresponding to the voice messages that are transmitted by the server.
  • the voice message presentation interface 54 displays a scene screen of a virtual world “Voice forest”, and the virtual world “Voice forest” is a two-dimensional world.
  • There are four birds standing in the tree 55, and the four birds respectively correspond to: a voice message 56 published by a user A, a voice message 57 published by a user B, a voice message 58 published by a user C, and a voice message 59 published by a user D.
  • the four birds have different bird character images, and the user can quickly distinguish different voice messages according to the birds having four different character images.
  • the “different voice messages” herein refer to voice messages that do not belong to the same voice message, and may be different voice messages transmitted by different users or different voice messages transmitted by the same user.
  • the tree 55 includes three triangles vertically overlapping each other, the second triangle and the third triangle from top to bottom have the same shape and the same size, and the first triangle located at the top is slightly smaller than the other two triangles.
  • a voice message corresponding to the first bird 56 is displayed to the right of the first triangle
  • voice messages corresponding to the second bird 57 and the third bird 58 are displayed to the left of the second triangle in a superposed manner
  • a voice message corresponding to the fourth bird is displayed to the right of the third triangle in a superposed manner.
  • each voice message is represented by using a message box
  • the message box may be a rectangular box, a rounded rectangular box, a bubble box, a cloud box, or any other box.
  • a length of the message box is in proportion to a message length of the voice message.
  • the message length of the voice message may alternatively be displayed by using text above the message box. For example, a number “4” represents that the message length is 4 seconds, and a number “8” represents that the message length is 8 seconds.
  • the message box and the virtual character are displayed adjacently. When the virtual character is located to the left of a triangle, the message box is located to the right of the virtual character; and when the virtual character is located to the right of the triangle, the message box is located to the left of the virtual character.
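The layout rules above (box length proportional to message length, and the box placed on the side of the character opposite the triangle) can be sketched as follows. The pixel scale is an assumption for illustration.

```python
def message_box_side(character_side):
    # The box sits on the side of the virtual character opposite the
    # triangle: character on the left -> box on the right, and vice versa.
    return "right" if character_side == "left" else "left"

def box_width(message_seconds, pixels_per_second=10):
    # The box length is in proportion to the message length; the scale of
    # 10 pixels per second is an assumed value, not from the patent.
    return message_seconds * pixels_per_second

assert message_box_side("left") == "right"
assert message_box_side("right") == "left"
assert box_width(4) == 40   # a 4-second message gets a 40-pixel box
```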
  • a terminal plays corresponding message content.
  • the terminal displays the voice message presentation interface in a multi-layer superposition and rendering manner.
  • the terminal renders an ambient atmosphere layer, a background visual layer, and a voice element layer separately, and superposes the ambient atmosphere layer, the background visual layer, and the voice element layer as the voice message presentation interface for display.
  • the ambient atmosphere layer includes sky and ground in the virtual forest world
  • the background visual layer includes a tree in the virtual forest world
  • the voice element layer includes a virtual character (for example, a bird) and a voice message corresponding to the virtual character.
  • the terminal renders an ambient atmosphere layer 181 , a background visual layer 182 , and a voice element layer 183 separately.
  • the background visual layer 182 is located above the ambient atmosphere layer 181
  • the voice element layer 183 is located above the background visual layer 182 .
  • the ambient atmosphere layer 181 includes the sky in the virtual forest world, and the sky includes a blue sky background and a plurality of cloud patterns.
  • the ambient atmosphere layer 181 further includes the ground in the virtual forest world, and the ground includes a lawn.
  • the tree in the background visual layer 182 can include a plurality of triangles vertically overlapping each other, and each triangle is used for displaying a virtual character and a voice message corresponding to the virtual character. Therefore, a quantity of the triangles and a quantity of the voice messages are the same. In other words, a quantity of the triangles and a quantity of the birds are the same.
  • a quantity of virtual characters in the voice element layer 183 and a quantity of voice messages obtained by the terminal are the same, and a display parameter of the virtual character is related to a message attribute of the corresponding voice message.
  • the terminal may maintain the ambient atmosphere layer unchanged, redraw the background visual layer and the voice element layer according to a quantity of voice messages after the change, and then generate a new voice message presentation interface for refreshing and display after the ambient atmosphere layer, the background visual layer, and the voice element layer are superposed.
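The layered refresh described above can be sketched as follows: the ambient atmosphere layer is kept, and the background visual layer and the voice element layer are redrawn for the new message quantity before the three layers are superposed. The data shapes are assumptions for illustration.

```python
def compose(ambient, background, voice_layer):
    # Superpose bottom-to-top: ambient atmosphere layer, background visual
    # layer, and voice element layer.
    return {"layers": [ambient, background, voice_layer]}

def refresh(ambient, messages):
    # The ambient atmosphere layer is maintained unchanged; the background
    # visual layer (one triangle per message) and the voice element layer
    # (one virtual character per message) are redrawn to match the new
    # quantity of voice messages.
    background = {"triangles": len(messages)}
    voice_layer = {"characters": [f"bird-{m}" for m in messages]}
    return compose(ambient, background, voice_layer)

ui = refresh({"sky": "blue", "ground": "lawn"}, ["m1", "m2", "m3"])
assert ui["layers"][1] == {"triangles": 3}
assert len(ui["layers"][2]["characters"]) == 3
```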
  • the method can receive a trigger operation of at least one of the virtual character and the voice message in the voice message presentation interface, the trigger operation including at least one of a click operation and a slide operation.
  • the virtual character is further used for playing corresponding message content when being triggered.
  • the same voice message presentation interface may display a plurality of virtual characters, and a target virtual character is one of the plurality of virtual characters. The user triggers the target virtual character to play a voice message corresponding to the target virtual character.
  • the user may click the virtual character, and when the virtual character is clicked, a voice message corresponding to the virtual character is triggered to be played. In some other embodiments, the user may slide the virtual character, and when the virtual character is slid, a voice message corresponding to the virtual character is triggered to be played.
  • the method can play a voice message corresponding to the virtual character according to a trigger signal.
  • the terminal plays the voice message according to the trigger signal.
  • the user may trigger the virtual character again to suspend playback of the voice message, or replay the voice message.
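The trigger behavior above (a click or slide starts playback; triggering the character again suspends it) can be sketched as a simple toggle. The class name and its toggle semantics are assumptions; the patent also permits a second trigger to replay instead of suspend.

```python
class CharacterPlayer:
    # Hypothetical sketch: triggering the virtual character starts playback
    # of its voice message; triggering it again suspends playback.
    def __init__(self, message):
        self.message = message
        self.playing = False

    def trigger(self):
        self.playing = not self.playing
        return self.playing

p = CharacterPlayer("voice-1")
assert p.trigger() is True    # first click or slide starts playback
assert p.trigger() is False   # triggering again suspends playback
```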
  • step 604 and step 605 are optional steps, depending on the operations performed by the user on the voice message presentation interface.
  • a voice message presentation interface of an APP is displayed, and a voice message in a virtual world is displayed in the voice message presentation interface, and the voice message is displayed by using a visible element in the virtual world as a carrier.
  • because visible elements in the virtual world may be different, and types of the visible elements may also be diversified, this technical solution can resolve the problems in the related art that presentation effects of voice messages on an interface are basically the same and that it is difficult for a user to accurately distinguish voice messages.
  • when determining virtual characters respectively corresponding to the n voice messages in the virtual world, the APP may determine a virtual character corresponding to a voice message according to a message attribute of the voice message.
  • the message attribute includes a message content length of the voice message.
  • the message content length refers to a duration required for normally playing message content of the voice message.
  • the terminal determines a character parameter of the virtual character corresponding to the voice message according to the message content length of the voice message.
  • the character parameter includes at least one of a type, an animation style, an animation frequency, and a special character effect of a character model.
  • the type of the character model refers to a character model with different external forms, for example, a sparrow and a peacock;
  • the animation style refers to different animations made by using the same character model, for example, raising the head, shaking the legs, spreading the wings, and wagging the tail;
  • the special character effects refer to different special effects of the same character model during display, for example, different feather colors and different appearances.
  • a plurality of different message content length ranges are preset in the APP, and the message content length ranges correspond to different character models respectively.
  • five different message content length ranges are set: a range A of “1-2 seconds”, a range B of “2-5 seconds”, a range C of “5-10 seconds”, a range D of “10-30 seconds”, and a range E of “more than 30 seconds”.
  • the range A corresponds to a bird A
  • the range B corresponds to a bird B
  • the range C corresponds to a bird C
  • the range D corresponds to a bird D
  • the range E corresponds to a bird E.
  • when a message content length of a voice message falls within the range D, the terminal determines a virtual character corresponding to the voice message as the bird D; and when a message content length of a voice message is 35 seconds, the terminal determines a virtual character corresponding to the voice message as the bird E.
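The mapping from message content length to character model described above can be sketched as a range lookup. The exact handling of boundary values (a message of exactly 2, 5, 10, or 30 seconds) is an assumption, since the patent's ranges overlap at their endpoints.

```python
import bisect

# Upper bounds (seconds) of the ranges A-D; longer messages fall in range E.
RANGE_BOUNDS = [2, 5, 10, 30]
BIRDS = ["bird A", "bird B", "bird C", "bird D", "bird E"]

def bird_for_length(seconds):
    # bisect_left finds the first range whose upper bound is >= seconds.
    return BIRDS[bisect.bisect_left(RANGE_BOUNDS, seconds)]

assert bird_for_length(15) == "bird D"   # falls in range D, "10-30 seconds"
assert bird_for_length(35) == "bird E"   # more than 30 seconds
```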
  • the message attribute includes a published duration of the voice message.
  • the published duration is a duration of timing starting from an upload time of the voice message.
  • the terminal determines a character parameter of the virtual character corresponding to the voice message according to the published duration of the voice message.
  • a plurality of different published duration ranges are preset, and the published duration ranges correspond to different character models respectively.
  • six different published duration ranges are preset: a range 1 of “ ⁇ 1 minute”, a range 2 of “1 minute to 30 minutes”, a range 3 of “30 minutes to 1 hour”, a range 4 of “1 hour to 4 hours”, a range 5 of “4 hours to 24 hours”, and a range 6 of “more than 24 hours”.
  • the range 1 corresponds to a bird 1
  • the range 2 corresponds to a bird 2
  • the range 3 corresponds to a bird 3
  • the range 4 corresponds to a bird 4
  • the range 5 corresponds to a bird 5
  • the range 6 corresponds to a bird 6.
  • when a published duration of a voice message falls within the range 2, the terminal determines a virtual character corresponding to the voice message as the bird 2; and when a published duration of a voice message is 4 hours and 11 minutes, the terminal determines a virtual character corresponding to the voice message as the bird 5.
  • the message attribute includes an unplayed duration of a voice message in an unplayed state after the voice message is displayed.
  • the unplayed duration refers to a duration of a voice message starting from a display start moment to a current moment, the display start moment being a moment at which the voice message starts to be displayed in the voice message presentation interface.
  • the terminal determines a character parameter of the virtual character corresponding to the voice message according to the unplayed duration of the voice message. For example, a plurality of different unplayed duration ranges are preset, and the unplayed duration ranges correspond to different character models respectively.
  • the animation style and/or the special character effect of the virtual character may alternatively be determined according to the message attribute of the voice message.
  • the terminal determines a character feature of the virtual character according to the published duration of the voice message, and a longer published duration of the voice message indicates a larger and brighter halo around the virtual character.
  • the message attribute may include a gender attribute (for example, a male voice or a female voice), an age attribute (for example, under 18 years old, 18-25 years old, 25-30 years old, 30-35 years old, and over 35 years old), and a category classification (for example, a childlike voice, a loli voice, an uncle voice, and a magnetic male voice) of the voice message.
  • the attribute included in the message attribute is not limited in this embodiment.
  • duration ranges may be divided differently. For example, for the published duration, four different duration ranges are divided as “1-5 seconds”, “6-10 seconds”, “10-30 seconds”, and “30-60 seconds”. There may be two or more virtual characters corresponding to the same duration range, for example, as shown in FIG. 12, each range corresponds to character models of three different birds. In this case, for a voice message corresponding to one duration range, a character model may be randomly selected from a plurality of character models corresponding to the duration range as a character model corresponding to the voice message.
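The random selection among a plurality of character models for one duration range can be sketched as follows. The range keys and model names are illustrative assumptions.

```python
import random

# Assumed mapping: each duration range offers three candidate bird models,
# and one is selected at random for each message falling in that range.
RANGE_MODELS = {
    "1-5 seconds": ["bird A1", "bird A2", "bird A3"],
    "6-10 seconds": ["bird B1", "bird B2", "bird B3"],
}

def pick_model(duration_range, rng=random):
    # Randomly select one character model among those of the range.
    return rng.choice(RANGE_MODELS[duration_range])

rng = random.Random(0)   # seeded here only to make the sketch reproducible
assert pick_model("1-5 seconds", rng) in RANGE_MODELS["1-5 seconds"]
```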
  • a virtual character corresponding to a voice message is determined according to a message attribute of the voice message, so that when observing different virtual characters, a user can learn of a message attribute of a voice message corresponding to the virtual character at the same time, and the message attribute may be any one of a message content length, a published duration, and an unplayed duration. Therefore, information related to a voice message is transmitted to the user by using graphical elements while reducing, as far as possible, text information read by the user.
  • the user may further establish a friend relationship with a user publishing the voice message, or add the voice message to a favorites list.
  • when the user wants to play a voice message 57 published by a user B in the voice message presentation interface 54, the user performs a trigger operation on the virtual character or the voice message 57, and the trigger operation may be a click operation or a slide operation.
  • the terminal displays a playback window 60 of the voice message 57 according to the trigger operation, and a friend-add control 61 and a favorite control 62 corresponding to the virtual character are displayed in the playback window 60.
  • the APP displays a friend-add control corresponding to the virtual character, and the user may perform a first operation on the friend-add control when the user wants to add a message publisher that the user is interested in.
  • the APP receives the first operation acting on the friend-add control, and when the friend-add control is represented by using a friend-add button, the first operation may be a click operation acting on the friend-add button, and the APP establishes a friend relationship with a user account corresponding to the virtual character according to the first operation.
  • the friend-add control 61 may be an “Add friend” button 61 .
  • the “Add friend” button 61 is configured to establish a friend relationship between a current user and a user B.
  • the terminal may jump to a friend-add page 63, and then the user adds a friend at the friend-add page 63.
  • the friend-add page 63 displays a “Confirm” button 632 and a “Cancel” button 631 .
  • the friend-add control 61 may alternatively be displayed in another manner, for example, be displayed near a virtual character in the voice message presentation interface 54 in a form of a mini pull-down menu, and this is not limited in this embodiment.
  • the APP interacts with the server according to the first operation, and establishes a friend relationship with a user account corresponding to the virtual character through the server.
  • the user is further required to input verification information when establishing a friend relationship, and the friend relationship with the user account corresponding to the virtual character is established after the verification information is successfully verified by that user account.
  • the APP displays a favorite control corresponding to the virtual character, and the user may perform a second operation on the favorite control when the user wants to add a voice message that the user is interested in to favorites.
  • the APP receives the second operation acting on the favorite control, and the second operation may be a click operation acting on the favorite control.
  • the APP adds a voice message corresponding to the virtual character to favorites according to the second operation.
  • the favorite control 62 is implemented by using a heart-shaped button.
  • the favorite control 62 is configured to add a voice message corresponding to the virtual character to favorites.
  • the terminal may jump to a “Favorites list” page 64 , and the “Favorites list” page 64 displays all voice messages that have been added by the user to favorites, then the user may repeatedly listen to the voice messages that have been added to favorites at the page.
  • the terminal adopts a distinctly displaying manner for a virtual character corresponding to a voice message that has been added to favorites, and the distinctly displaying manner includes: at least one of changing a color, adding an accessory, and adding a special animation effect. For example, a hat or heart-shaped mark is displayed above a bird corresponding to the voice message that has been added to favorites.
  • a voice message corresponding to a virtual character is triggered to be played through a trigger signal acting on the virtual character, so that a user may directly use the virtual character as a character for man-machine interaction, and a sense of immersion in the virtual world is improved when the voice message is played, thereby improving man-machine interaction efficiency during man-machine interaction.
  • a friend-add control is displayed when the voice message is played, so that the user may directly establish a friend relationship with an unfamiliar user account when hearing a voice message that the user is interested in, thereby increasing a friend addition rate between interested users in a social APP.
  • a favorite control is displayed in the voice message presentation interface, so that the user adds, in a case of hearing a voice message that the user is interested in, the voice message into a favorites list, to repeatedly listen to the voice message in a later time.
  • the voice message displayed in the voice message presentation interface 54 may not be clicked and played by the user in a timely manner. In some possible embodiments, to help the user distinguish a voice message that has been displayed but not yet played (referred to as an unplayed voice message for short) from a played voice message, step 606 to step 609 are further included, as shown in FIG. 16.
  • the method can control, in a case that an unplayed duration of a first voice message in the voice message presentation interface does not reach an unplayed threshold, a first virtual character corresponding to the first voice message to perform a preset reminder action.
  • the preset reminder action includes, but is not limited to: at least one of shaking the entire body, shaking the head, shaking the limbs, twittering, and pecking feathers.
  • shaking the entire body refers to shaking a character model left and right and/or up and down by using the character model as a central point.
  • the terminal controls the bird 57 and the bird 58 to shake their bodies, to prompt the user that the voice messages corresponding to the bird 57 and the bird 58 are unplayed voice messages.
  • the mechanism may be executed periodically. For example, an unplayed threshold is 10 seconds. If a voice message corresponding to the first virtual character is in an unplayed state all the time, the terminal controls the first virtual character corresponding to the first voice message to shake the body every 10 seconds.
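The periodic reminder mechanism above (with a 10-second unplayed threshold, the first virtual character shakes once per elapsed threshold period) can be sketched as a simple counter. Integer-division semantics at exact multiples of the threshold are an assumption.

```python
def reminder_count(unplayed_seconds, threshold=10):
    # Number of times the preset reminder action (e.g. shaking the body)
    # has been performed while the message remained unplayed: once each
    # time a full threshold period elapses.
    return unplayed_seconds // threshold

assert reminder_count(35) == 3   # reminders fired at 10 s, 20 s, and 30 s
assert reminder_count(9) == 0    # threshold not yet reached
```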
  • the method can control, in a case that unplayed durations of a first voice message and a second voice message in the voice message presentation interface do not reach an unplayed threshold, a first virtual character and a second virtual character to exchange locations, the first virtual character being a virtual character corresponding to the first voice message, and the second virtual character being a virtual character corresponding to the second voice message.
  • the first virtual character and the second virtual character are any two virtual characters displayed in the voice message presentation interface.
  • the first voice message and the second voice message may be two voice messages having the closest locations in the voice message presentation interface.
  • three voice messages are displayed in a voice message presentation interface 40 .
  • a voice message located at the top of a tree is a played voice message
  • a first voice message 51 and a second voice message 52 located at the middle and lower parts of the tree are unplayed voice messages
  • a bird corresponding to the first voice message 51 and a bird corresponding to the second voice message 52 are controlled to exchange locations.
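The location exchange above can be sketched as follows. Treating "closest locations" as adjacency in display order is an assumption for illustration.

```python
def exchange_unplayed(slots):
    """slots: list of (message_id, played) tuples in display order.
    The two unplayed messages with the closest locations (assumed here to
    be the first adjacent unplayed pair) have their virtual characters
    exchange locations."""
    unplayed = [i for i, (_, played) in enumerate(slots) if not played]
    if len(unplayed) >= 2:
        i, j = unplayed[0], unplayed[1]
        slots[i], slots[j] = slots[j], slots[i]
    return slots

# The top-of-tree message is played; the middle and lower ones are not.
slots = [("top", True), ("middle", False), ("lower", False)]
assert exchange_unplayed(slots) == [
    ("top", True), ("lower", False), ("middle", False)
]
```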
  • the method can replace, in a case that a third voice message in the voice message presentation interface changes from an unplayed state to a played state, the third voice message with a fourth voice message that is in an unplayed state in the voice message presentation interface, and replace a third virtual character with a fourth virtual character, the fourth virtual character being a virtual character corresponding to the fourth voice message, and the third virtual character being a virtual character corresponding to the third voice message.
  • the fourth voice message is a voice message obtained newly, and the fourth virtual character is a virtual character determined according to the fourth voice message.
  • the fourth voice message is a voice message that has not been displayed.
  • the method can remove the third voice message and the third virtual character out of the voice message presentation interface, or move the third voice message and the third virtual character into a designated region in the voice message presentation interface.
  • when the terminal replaces the third virtual character with the fourth virtual character, the terminal may remove the third virtual character out of the voice message presentation interface. For example, if the third virtual character is a bird, the bird may fly out of the voice message presentation interface.
  • when the terminal replaces the third virtual character with the fourth virtual character, the terminal may move the third virtual character into a designated region in the voice message presentation interface. For example, if the third virtual character is a bird, the bird may fly to a lawn under a tree in the virtual forest world.
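The replacement of a played (third) message by an unplayed (fourth) message and its character can be sketched as follows. The slot representation and backlog name are assumptions.

```python
def replace_played(displayed, backlog):
    # When a displayed message changes to the played state, the next
    # unplayed message from the backlog (the "fourth" message) and its
    # virtual character take over the slot; the old character is removed,
    # e.g. the bird flies out of the presentation interface.
    for i, slot in enumerate(displayed):
        if slot["played"] and backlog:
            displayed[i] = backlog.pop(0)
    return displayed

displayed = [{"id": "m3", "played": True}, {"id": "m1", "played": False}]
backlog = [{"id": "m4", "played": False}]
out = replace_played(displayed, backlog)
assert out[0]["id"] == "m4"   # the fourth message took the played slot
assert out[1]["id"] == "m1"   # the unplayed message stayed in place
```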
  • Step 606 may be independently implemented as an embodiment
  • step 607 may be independently implemented as an embodiment
  • step 608 and step 609 may be independently implemented as an embodiment.
  • a third voice message changes from an unplayed state to a played state, and the third voice message is replaced with a fourth voice message that is in an unplayed state, so that a voice message presentation interface may automatically add and display an unplayed voice message, to help a user clearly distinguish a played voice message and an unplayed voice message and play the unplayed voice message more conveniently, thereby improving efficiency of man-machine interaction between the user and the terminal.
  • a first virtual character corresponding to the first voice message is controlled to perform a preset reminder action, so that a user can distinguish a played voice message and an unplayed voice message in a voice message presentation interface, to preferentially control the unplayed voice message for playback.
  • a first virtual character and a second virtual character are controlled to exchange locations in a voice message presentation interface, so that a user can distinguish a played voice message and an unplayed voice message in the voice message presentation interface, to preferentially control the unplayed voice message for playback.
  • the quantity of voice messages displayed in the voice message presentation interface 54 is limited, and is, for example, 4, 6, or 8. In some possible embodiments, a user has a requirement for refreshing and redisplaying the voice message presentation interface 54.
  • the method can receive a refresh operation in the voice message presentation interface.
  • the refresh operation includes, but is not limited to, a pull-down operation and/or a pull-up operation.
  • the terminal has a touch screen, and the user pulls down the voice message presentation interface on the touch screen, or pulls up the voice message presentation interface on the touch screen.
  • the method can obtain m other voice messages according to the refresh operation.
  • the terminal obtains m other voice messages from the server according to the refresh operation, m being a positive integer.
  • the other voice messages are voice messages published by at least one user account between the last obtaining of a voice message and a current moment.
  • the other voice messages are voice messages sifted from a voice message library by the server according to a sifting condition.
  • the method can display an unplayed voice message in the n voice messages, a virtual character corresponding to the unplayed voice message, the m other voice messages, and virtual characters corresponding to the other voice messages in the voice message presentation interface at the same time.
  • the terminal displays the unplayed voice message in the n voice messages and m other voice messages in the voice message presentation interface at the same time. That is, the terminal preferentially displays an unplayed voice message.
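The priority rule above — show the newly fetched messages together with only the still-unplayed ones — can be sketched as a simple merge; the dictionary field names are assumptions:

```python
def merge_after_refresh(current, new_messages):
    """Keep only unplayed messages from the current n messages and prepend
    the m newly fetched ones, so unplayed content is displayed first."""
    unplayed = [m for m in current if not m["played"]]
    return new_messages + unplayed

current = [
    {"id": "a", "played": True},    # played before the refresh
    {"id": "b", "played": False},   # still unplayed
]
fresh = [{"id": "c", "played": False}]  # the m other voice messages
shown = merge_after_refresh(current, fresh)
# new messages land on the upper part of the tree, older unplayed ones below
```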
  • in step 703, using an example in which the virtual world is a virtual forest world, and the virtual character is a bird in a tree in the virtual forest world, there are at least two different implementations.
  • the n voice messages are all unplayed voice messages.
  • a scene screen of the virtual forest world is displayed in the voice message presentation interface, the scene screen including a tree located in the virtual forest world, the m other voice messages being displayed on an upper part of the tree, and the unplayed voice message in the n voice messages and the virtual character corresponding to the unplayed voice message being displayed on middle and lower parts of the tree.
  • the terminal determines a height of the tree according to a first message quantity of the unplayed voice messages in the n voice messages and a second message quantity of the m other voice messages. A sum of the first message quantity and the second message quantity is positively correlated to the height of the tree.
  • the tree in the virtual forest world is formed by overlapping a plurality of triangles. Sizes of triangles located in the middle part of the tree are the same, and a size of a triangle located at the top of the tree is 70% of the sizes of the triangles in the middle part of the tree.
  • each triangle is used for displaying a bird and a voice message corresponding to the bird, and virtual characters on triangles that are vertically adjacent alternate left and right.
  • the tree needs to have k triangles. Because m other voice messages are newly obtained in this step, and the n voice messages before the refreshing are all unplayed voice messages, m triangles need to be added.
  • the terminal copies a triangle x located at the bottom of the tree as a unit, and translates the copied triangle down by 2/3 of the height of triangle x. Because the m other voice messages are newly obtained voice messages, the terminal may display the m other voice messages on an upper part of the tree, and the n voice messages obtained before the refreshing are displayed on middle and lower parts of the tree.
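The triangle layout described in the preceding bullets might be sketched as follows, assuming one triangle per message, a unit triangle height, the 70% scaling for the topmost triangle, and each added triangle being a copy translated down by 2/3 of the triangle height; the exact geometry in the patent figures may differ:

```python
def layout_tree(message_count, base_height=1.0):
    """Lay out one triangle per displayed message: the topmost triangle is
    70% of the middle-triangle size, birds on vertically adjacent triangles
    alternate left/right, and each next triangle is translated down by 2/3
    of the triangle height so neighbouring triangles overlap."""
    triangles = []
    y = 0.0
    for i in range(message_count):
        scale = 0.7 if i == 0 else 1.0          # topmost triangle is 70% size
        side = "left" if i % 2 == 0 else "right"
        triangles.append({"y": y, "scale": scale, "bird_side": side})
        y += base_height * 2 / 3                # translate the copy down by 2/3
    return triangles

tree = layout_tree(4)
tree_height = tree[-1]["y"] + 1.0  # grows with the number of displayed messages
```

The height computation reflects the positive correlation stated above: more unplayed and newly fetched messages mean more triangles, hence a taller tree.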
  • some voice messages in the n voice messages are unplayed voice messages, and the remaining voice messages are played voice messages.
  • the terminal may, when displaying the m other voice messages and the unplayed voice messages in the n voice messages, cancel displaying of the played voice messages in the n voice messages, and cancel displaying of the virtual characters corresponding to the played voice messages.
  • the terminal may display the virtual characters corresponding to the played voice messages in the n voice messages on a lawn under the tree.
  • a voice message presentation interface displays an unplayed voice message in n voice messages and the m other voice messages at the same time, so that the voice message presentation interface preferentially displays the unplayed voice messages during a refresh process, to help a user clearly distinguish a played voice message and an unplayed voice message, and the user intuitively listens to the unplayed voice messages in the tree after the refresh operation, thereby improving efficiency of man-machine interaction between the user and the terminal.
  • the tree in the virtual forest world is a tree formed by overlapping a plurality of triangles.
  • the tree in the virtual forest world may alternatively be a tree with curved edges and a plurality of branches.
  • the user may also slide up or down to view another topic E and another topic F, and the user may select a tree corresponding to the “topic E”, and then view a bird and a voice message in the tree. Voice messages in the tree belong to the same topic E.
  • a corridor following-type tree arrangement manner shown in FIG. 22 and a three-dimensional surrounding-type tree arrangement manner shown in FIG. 23 may be further adopted.
  • the trees in the virtual forest world are arranged and displayed annularly, and each tree corresponds to a topic.
  • the tree arrangement manner in the virtual forest world is not limited in this embodiment of this application.
  • the virtual world is a virtual forest world
  • the virtual character is a virtual bird.
  • a specific form of the virtual world is not limited in this embodiment, but in other embodiments, as shown in FIG. 24 , the virtual world may alternatively be a virtual ocean world, and the virtual character may be a virtual fish.
  • the voice message is a sub-function in the APP
  • the virtual world is a virtual forest world
  • the virtual character is a bird
  • a terminal displays a voice message presentation interface.
  • the voice message presentation interface is an interface provided by the instant messaging program.
  • a user triggers a user operation on the instant messaging program, and the instant messaging program displays the voice message presentation interface according to the user operation.
  • the instant messaging program provides a “list expansion” function 52 in a function list, and “list expansion” refers to expanding a friend list.
  • the list expansion function page includes three tab pages: List expansion, List expansion group, and Voice forest.
  • the instant messaging program displays a voice message presentation interface “Voice forest”.
  • a “Voice forest” interface 54 displays a blue sky and white cloud background, a tree, and four birds in the tree.
  • when the instant messaging program needs to display the “Voice forest” interface 54, the instant messaging program obtains n voice messages from the server.
  • the instant messaging program determines virtual characters corresponding to voice messages in the virtual world according to message attributes of the voice messages.
  • if the instant messaging program does not store materials of virtual characters locally, the instant messaging program obtains the virtual characters corresponding to the voice messages in the virtual world from the server.
  • the server transmits the n voice messages, the virtual characters corresponding to the n voice messages, and the display materials of the virtual world to the instant messaging program together, and then the instant messaging program displays the voice message presentation interface according to the data.
  • the n voice messages transmitted by the server to the terminal are sifted according to a preset condition.
  • the preset condition includes, but is not limited to, at least one of the following conditions: 1. being a voice message of an unfamiliar user account of the current user account; 2. having a gender different from that of the current user account; 3. having a user portrait the same as or similar to that of the current user account; 4. a large quantity of times of adding a friend; and 5. a large quantity of times of being added as a friend.
  • the voice messages may be scored according to a weight corresponding to each condition, and n voice messages are selected in descending order of scores and are pushed to the instant messaging program.
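Such weighted scoring might look like the sketch below; the weight values and feature names are invented for illustration, since the patent does not specify concrete values:

```python
# Hypothetical weights for the sifting conditions listed above.
WEIGHTS = {
    "is_stranger": 2.0,           # unfamiliar user account
    "different_gender": 1.5,
    "similar_portrait": 1.0,
    "adds_friends_often": 0.5,
    "added_as_friend_often": 0.5,
}

def score(message_features):
    """Score a candidate voice message as a weighted sum of the conditions it meets."""
    return sum(w for cond, w in WEIGHTS.items() if message_features.get(cond))

def select_top_n(candidates, n):
    """Pick the n highest-scoring messages to push to the instant messaging program."""
    return sorted(candidates, key=score, reverse=True)[:n]

library = [
    {"id": 1, "is_stranger": True, "different_gender": True},   # score 3.5
    {"id": 2, "similar_portrait": True},                        # score 1.0
    {"id": 3, "is_stranger": True, "added_as_friend_often": True},  # score 2.5
]
top2 = select_top_n(library, 2)
```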
  • Logic according to which the server provides n voice messages to the terminal is not limited in this embodiment.
  • on the terminal, the user browses and clicks a voice message that the user is interested in.
  • a “Voice forest” interface 73 preferentially displays a top end part of the tree. The user may view middle and lower parts of the tree and a lawn under the tree in a sliding-up manner.
  • if the “Voice forest” interface 73 displays a voice message that the user is interested in, and the user clicks the voice message or clicks the virtual character corresponding to the voice message, the instant messaging program plays the voice message.
  • the terminal receives a pull-down refresh operation of a user.
  • the user may further perform a pull-down refresh on the instant messaging program, and the pull-down refresh is a refresh operation of triggering a refresh by using a sliding-down signal.
  • a specific form of a refresh operation is not limited in this embodiment, and the pull-down refresh is only used as an example.
  • the instant messaging program obtains m other voice messages from the server according to the refresh operation.
  • in step 804, the terminal transmits an instruction to the server.
  • the instant messaging program transmits a refresh instruction to the server.
  • the server calculates a quantity of unplayed voice messages after the refresh.
  • the server selects m other voice messages for the terminal again.
  • the server determines a quantity of unplayed voice messages after the refresh according to a sum of the unplayed voice messages in the n voice messages before the refresh and the m other voice messages.
  • the server copies a triangle in a background tree in a case that the quantity of unplayed voice messages is greater than a quantity of the unplayed voice messages before the refresh.
  • the tree in the virtual forest world is formed by overlapping a plurality of triangles, and one or more voice messages and a virtual character corresponding to the voice message are displayed on each triangle.
  • the server copies the triangle in the background tree in a case that the quantity of the voice messages after the refresh is greater than the quantity of the messages before the refresh, to increase the height of the background tree. For this process, refer to FIG. 19 and FIG. 20 .
  • in step 807, the server reduces the quantity of triangles in the background tree in a case that the quantity of unplayed voice messages is less than the quantity of unplayed voice messages before the refresh.
  • the server reduces the quantity of the triangles in the background tree in a case that the quantity of the voice messages after the refresh is less than the quantity of the messages before the refresh, to reduce the height of the background tree.
  • the server transmits data to the terminal.
  • the server transmits the m other voice messages to the terminal; and the server transmits the m other voice messages and the unplayed voice messages in the n voice messages before the refresh to the terminal.
  • the server transmits the m other voice messages, the unplayed voice messages in the n voice messages before the refresh, and the played voice messages in the n voice messages before the refresh to the terminal.
  • the server transmits m other voice messages and virtual characters corresponding to the other voice messages to the terminal.
  • the server transmits m other voice messages and virtual characters corresponding to the other voice messages, unplayed voice messages in n voice messages before the refresh and virtual characters corresponding to the unplayed voice messages, and played voice messages in n voice messages before the refresh and virtual characters corresponding to the played voice messages to the terminal.
  • the server transmits the messages to the terminal in a sequence of “a new voice message → an unplayed voice message before the refresh → a played voice message before the refresh”. Additionally, the server may further transmit a tree after an increase or reduction to the terminal.
  • in step 809, the terminal redraws a background visual rendering layer and adjusts locations of the voice messages.
  • the terminal redraws the background visual rendering layer according to the tree after the increase or reduction, and adjusts locations of the voice messages in a voice element layer according to the voice messages transmitted by the server in order.
  • the terminal superposes the ambient atmosphere layer, the re-rendered background visual rendering layer, and the re-rendered voice element layer, to obtain a voice message presentation interface after the refresh.
  • in step 810, the user browses on the terminal and clicks a voice message that the user is interested in.
  • the m other voice messages are displayed at the top of the tree
  • the unplayed voice messages in the n voice messages before the refresh are displayed on the middle and lower parts of the tree
  • the played voice messages in the n voice messages before the refresh are displayed on the lawn under the tree.
  • the user that makes a selection may view the middle and lower parts of the tree and the lawn under the tree in a sliding-up manner.
  • the “Voice forest” interface 73 preferentially displays the top end part of the tree, and displays the middle and lower parts of the tree and the lawn under the tree when the user slides up.
  • if the “Voice forest” interface 73 displays a voice message that the user is interested in, and the user clicks the voice message or clicks the virtual character corresponding to the voice message, the instant messaging program plays the voice message.
  • in step 811, the terminal detects that the voice message presentation interface stays for more than three seconds.
  • the instant messaging program starts timing a displayed duration of the voice message presentation interface. If the displayed duration of the voice message presentation interface is longer than three seconds, step 812 is performed.
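The stay-duration check can be sketched with a simple timer; the class and method names are hypothetical, and a simulated clock keeps the example deterministic:

```python
import time

STAY_THRESHOLD_SECONDS = 3.0  # the three-second threshold named above

class StayTimer:
    """Track how long the presentation interface has been displayed and
    decide when to send the stay trigger instruction to the server."""
    def __init__(self, now=time.monotonic):
        self._now = now
        self._shown_at = None

    def on_interface_shown(self):
        self._shown_at = self._now()

    def should_trigger(self):
        if self._shown_at is None:
            return False
        return self._now() - self._shown_at > STAY_THRESHOLD_SECONDS

# simulated clock so the behaviour is deterministic
clock = iter([0.0, 2.0, 3.5])
timer = StayTimer(now=lambda: next(clock))
timer.on_interface_shown()       # interface shown at t = 0.0
early = timer.should_trigger()   # checked at t = 2.0, below the threshold
late = timer.should_trigger()    # checked at t = 3.5, above the threshold
```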
  • in step 812, the terminal transmits an instruction to the server.
  • the terminal transmits a stay trigger instruction to the server, and the stay trigger instruction is used for triggering the server to recognize the unplayed voice messages in the voice message presentation interface.
  • in step 813, the server recognizes an unplayed voice message in the current interface.
  • the server stores playback records of all voice messages, and the server recognizes the unplayed voice messages in the voice message presentation interface according to the playback records.
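Recognizing unplayed messages from playback records reduces to a set-difference check, sketched here with assumed message-ID lists:

```python
def recognize_unplayed(displayed_ids, playback_records):
    """Return IDs of displayed messages with no playback record, so the
    birds corresponding to them can be shaken as a reminder."""
    played = set(playback_records)
    return [mid for mid in displayed_ids if mid not in played]

unplayed = recognize_unplayed(["m1", "m2", "m3"], playback_records=["m2"])
```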
  • in step 814, the server performs a shake by using the bird image as a center.
  • for a bird corresponding to an unplayed voice message, the server generates a shake instruction and/or a shake animation material for performing a shake by using the bird image as a center.
  • in step 815, the server transmits data to the terminal.
  • the server transmits a shake instruction and/or a shake animation material for shaking the unplayed voice messages to the terminal.
  • in step 816, the terminal redraws an animation of the voice element layer.
  • when receiving a shake instruction of an unplayed voice message transmitted by the server, the terminal redraws the virtual character corresponding to the unplayed voice message in the voice element layer according to the shake animation material stored locally.
  • when receiving a shake animation material of an unplayed voice message transmitted by the server, the terminal redraws the virtual character corresponding to the unplayed voice message in the voice element layer according to the shake animation material.
  • the server executes a part of calculation logic
  • the foregoing calculation logic executed by the server may alternatively be executed by the terminal.
  • this is not limited in this embodiment of this application.
  • FIG. 6, FIG. 7, FIG. 15, FIG. 18, and FIG. 25 are schematic flowcharts of a voice message display method according to an embodiment. It is to be understood that although the steps in the flowcharts of FIG. 6, FIG. 7, FIG. 15, FIG. 18, and FIG. 25 are sequentially displayed in accordance with instructions of arrows, the steps are not necessarily performed sequentially in the order indicated by the arrows. Unless otherwise explicitly specified in this application, execution of the steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in FIG. 6, FIG. 7, FIG. 15, FIG. 18, and FIG. 25 may include a plurality of sub-steps or a plurality of stages.
  • the sub-steps or stages are not necessarily performed at the same moment but may be performed at different moments.
  • the sub-steps or stages are not necessarily performed sequentially, but may be performed in turn or alternately with another step or at least some of sub-steps or stages of the other step.
  • FIG. 26 is a block diagram of a voice message display apparatus according to an exemplary embodiment of this application.
  • the voice message display apparatus is configured in a terminal on which an APP is installed; the voice message display apparatus may be implemented through software or hardware, and the APP is capable of receiving a voice message.
  • the apparatus includes a processing module 920 and a display module 940 . As described above, one or more of the modules described in this specification can be implemented by processing circuitry.
  • the processing module 920 is configured to start the APP according to a start operation.
  • the processing module 920 is configured to obtain n voice messages published by at least one user account, where n is a positive integer.
  • the display module 940 is configured to display a voice message presentation interface of the APP, the voice message presentation interface displaying the voice message in a virtual world, the voice message being displayed by using a visible element in the virtual world as a carrier.
  • the visible element includes a virtual character.
  • the display module 940 is configured to obtain virtual characters respectively corresponding to the n voice messages in the virtual world; generate a scene screen of the virtual world, the scene screen displaying the virtual characters and the voice messages corresponding to the virtual characters; and display the voice message presentation interface of the APP according to the scene screen of the virtual world. At least one of the virtual character and the voice message is used for playing corresponding message content when a trigger operation is received.
  • the processing module 920 is configured to determine a virtual character corresponding to the voice message according to a message attribute of the voice message. In a further embodiment, the processing module 920 can be configured to determine a character parameter of the virtual character corresponding to the voice message according to a message duration of the voice message. Alternatively, the processing module 920 is configured to determine a character parameter of the virtual character corresponding to the voice message according to a published duration of the voice message, the published duration being a duration of timing starting from an upload time of the voice message.
  • the processing module 920 is configured to determine a character parameter of the virtual character corresponding to the voice message according to an unplayed duration of the voice message in an unplayed state after the voice message is displayed, the character parameter including at least one of a type, an animation style, an animation frequency, and a special character effect of a character model.
  • the foregoing apparatus can further include a man-machine interaction module 960 and a playback module 980 .
  • the man-machine interaction module 960 is configured to receive a trigger operation of at least one of the virtual character and the voice message in the voice message presentation interface, the trigger operation including at least one of a click operation and a slide operation.
  • the playback module 980 is configured to play the voice message corresponding to the virtual character according to the trigger operation.
  • the man-machine interaction module 960 is further configured to receive operations related to the user.
  • the man-machine interaction module is further configured to receive a start operation, to help the processing module 920 start the APP according to the start operation.
  • the display module 940 is further configured to display a friend-add control corresponding to the virtual character.
  • the man-machine interaction module 960 is further configured to receive a first operation used for triggering the friend-add control.
  • the processing module 920 is further configured to establish a friend relationship with a user account corresponding to the virtual character according to the first operation.
  • the display module 940 is further configured to display a favorite control corresponding to the virtual character.
  • the man-machine interaction module 960 is further configured to receive a second operation used for triggering the favorite control.
  • the processing module 920 is further configured to add the voice message corresponding to the virtual character to favorites according to the second operation.
  • the processing module 920 can be further configured to control, in a case that an unplayed duration of a first voice message in the voice message presentation interface does not reach an unplayed threshold, a first virtual character corresponding to the first voice message to perform a preset reminder action.
  • the processing module 920 is further configured to control, in a case that unplayed durations of a first voice message and a second voice message in the voice message presentation interface do not reach an unplayed threshold, a first virtual character and a second virtual character to exchange locations, the first virtual character being a virtual character corresponding to the first voice message, and the second virtual character being a virtual character corresponding to the second voice message.
  • the processing module 920 is further configured to replace, in a case that a third voice message in the voice message presentation interface changes from an unplayed state to a played state, the third voice message with a fourth voice message that is in an unplayed state in the voice message presentation interface, and replace a third virtual character with a fourth virtual character, the fourth virtual character being a virtual character corresponding to the fourth voice message, and the third virtual character being a virtual character corresponding to the third voice message, and remove the third voice message and the third virtual character out of the voice message presentation interface, or move the third voice message and the third virtual character into a designated region in the voice message presentation interface.
  • the man-machine interaction module 960 is further configured to receive a refresh operation in the voice message presentation interface.
  • the processing module 920 is further configured to obtain m other voice messages according to the refresh operation.
  • the display module 940 is further configured to display an unplayed voice message in the n voice messages, a virtual character corresponding to the unplayed voice message, the m other voice messages, and virtual characters corresponding to the other voice messages in the voice message presentation interface at the same time.
  • the virtual world is a virtual forest world.
  • the display module 940 is configured to display a scene screen of the virtual forest world in the voice message presentation interface, the scene screen including a tree located in the virtual forest world, the m other voice messages and the virtual characters corresponding to the other voice messages being displayed on an upper part of the tree, and the unplayed voice message in the n voice messages and the virtual character corresponding to the unplayed voice message being displayed on middle and lower parts of the tree.
  • the virtual character can be a bird in the tree located in the virtual forest world.
  • the processing module 920 is further configured to determine a height of the tree according to a first message quantity of the unplayed voice messages in the n voice messages and a second message quantity of the m other voice messages.
  • a sum of the first message quantity and the second message quantity is positively correlated to the height of the tree.
  • the display module 940 is further configured to cancel displaying of the played voice messages in the n voice messages and the virtual characters corresponding to the played voice messages.
  • the display module 940 is further configured to display the virtual characters corresponding to the played voice messages in the n voice messages on the lawn under the tree.
  • the virtual world is a virtual forest world.
  • the display module 940 is further configured to render the ambient atmosphere layer, the background visual layer, and the voice element layer respectively, the ambient atmosphere layer including the sky and ground in the virtual forest world, the background visual layer including the tree in the virtual forest world, and the voice element layer including the virtual character and the voice message corresponding to the virtual character, and superpose the ambient atmosphere layer, the background visual layer, and the voice element layer to be displayed as the voice message presentation interface.
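The three-layer superposition described above might look like the following sketch, where the layer names come from the description and the `draw` callback is a stand-in for the real renderer:

```python
def render_presentation_interface(draw):
    """Render the three layers bottom-to-top and superpose them into the
    voice message presentation interface."""
    layers = [
        ("ambient_atmosphere", ["sky", "ground"]),        # bottom layer
        ("background_visual", ["tree"]),                  # middle layer
        ("voice_element", ["birds", "voice_messages"]),   # top layer
    ]
    frame = []
    for name, elements in layers:  # later layers are drawn over earlier ones
        frame.extend(draw(name, elements))
    return frame

# a trivial draw callback that just records what would be painted, in order
frame = render_presentation_interface(
    lambda name, elements: [f"{name}:{e}" for e in elements]
)
```

Keeping the layers separate means a refresh or a shake animation only needs to redraw the background visual layer or the voice element layer, not the whole interface.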
  • the virtual character may be any animal in the virtual forest.
  • the voice message presentation interface is a UI used for displaying a voice message published by an unfamiliar user account.
  • the voice message presentation interface is a UI used for displaying voice messages published by user accounts in the same topic.
  • the voice message presentation interface is a UI used for displaying voice messages published by user accounts belonging to the same area.
  • This application further provides a non-transitory computer-readable storage medium, the storage medium storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the voice message display method according to the foregoing method embodiments.
  • this application further provides a computer program product including instructions, and the computer program product, when run on a computer device, causes the computer device to perform the voice message display method provided in the foregoing method embodiments.
  • the program may be stored in a computer-readable storage medium.
  • the storage medium mentioned above may be a read-only memory, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • User Interface Of Digital Computer (AREA)
US17/096,683 2018-09-30 2020-11-12 Voice message display method and apparatus in application, computer device, and computer-readable storage medium Active 2041-03-31 US11895273B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/527,237 US20240098182A1 (en) 2018-09-30 2023-12-01 Voice message display

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201811159451.4A CN110971502B (zh) 2018-09-30 2018-09-30 应用程序中的声音消息显示方法、装置、设备及存储介质
CN201811159451.4 2018-09-30
PCT/CN2019/106116 WO2020063394A1 (zh) 2018-09-30 2019-09-17 应用程序中的声音消息显示方法、装置、计算机设备及计算机可读存储介质

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/106116 Continuation WO2020063394A1 (zh) 2018-09-30 2019-09-17 应用程序中的声音消息显示方法、装置、计算机设备及计算机可读存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/527,237 Continuation US20240098182A1 (en) 2018-09-30 2023-12-01 Voice message display

Publications (2)

Publication Number Publication Date
US20210067632A1 US20210067632A1 (en) 2021-03-04
US11895273B2 true US11895273B2 (en) 2024-02-06

Family

ID=69951188

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/096,683 Active 2041-03-31 US11895273B2 (en) 2018-09-30 2020-11-12 Voice message display method and apparatus in application, computer device, and computer-readable storage medium
US18/527,237 Pending US20240098182A1 (en) 2018-09-30 2023-12-01 Voice message display

Family Applications After (1)

Application Number Title Priority Date Filing Date
US18/527,237 Pending US20240098182A1 (en) 2018-09-30 2023-12-01 Voice message display

Country Status (3)

Country Link
US (2) US11895273B2 (zh)
CN (2) CN113965542B (zh)
WO (1) WO2020063394A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570100B (zh) * 2016-10-31 2019-02-26 腾讯科技(深圳)有限公司 信息搜索方法和装置
CN112860214B (zh) * 2021-03-10 2023-08-01 北京车和家信息技术有限公司 基于语音会话的动画展示方法、装置、存储介质及设备
CN113489833B (zh) * 2021-06-29 2022-11-04 维沃移动通信有限公司 信息播报方法、装置、设备及存储介质
CN117319340A (zh) * 2022-06-23 2023-12-29 腾讯科技(深圳)有限公司 语音消息的播放方法、装置、终端及存储介质

Citations (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070218986A1 (en) * 2005-10-14 2007-09-20 Leviathan Entertainment, Llc Celebrity Voices in a Video Game
US20070218987A1 (en) * 2005-10-14 2007-09-20 Leviathan Entertainment, Llc Event-Driven Alteration of Avatars
US20080163075A1 (en) * 2004-01-26 2008-07-03 Beck Christopher Clemmett Macl Server-Client Interaction and Information Management System
US20090210483A1 (en) * 2008-02-15 2009-08-20 Sony Ericsson Mobile Communications Ab Systems Methods and Computer Program Products for Remotely Controlling Actions of a Virtual World Identity
US20100070501A1 (en) * 2008-01-15 2010-03-18 Walsh Paul J Enhancing and storing data for recall and use using user feedback
US20100070980A1 (en) * 2008-09-16 2010-03-18 Fujitsu Limited Event detection system, event detection method, and program
WO2010075628A1 (en) 2008-12-29 2010-07-08 Nortel Networks Limited User interface for orienting new users to a three dimensional computer-generated virtual environment
US20100211638A1 (en) * 2007-07-27 2010-08-19 Goojet Method and device for creating computer applications
US7961854B1 (en) * 2005-10-13 2011-06-14 Tp Lab, Inc. System to record and analyze voice message usage information
JP2012129663A (ja) * 2010-12-13 2012-07-05 Ryu Hashimoto Utterance instruction device
US20120185547A1 (en) * 2011-01-18 2012-07-19 Steven Hugg System and method for the transmission and management of short voice messages
CN103209201A (zh) 2012-01-16 2013-07-17 Shanghai Nali Information Technology Co., Ltd. Virtual avatar interaction system and method based on social relationships
US20130268490A1 (en) * 2012-04-04 2013-10-10 Scribble Technologies Inc. System and Method for Generating Digital Content
US20140330550A1 (en) * 2006-09-05 2014-11-06 Aol Inc. Enabling an im user to navigate a virtual world
US8924880B2 (en) * 2007-04-20 2014-12-30 Yp Interactive Llc Methods and systems to facilitate real time communications in virtual reality
US20150032675A1 (en) * 2013-07-25 2015-01-29 In The Chat Communications Inc. System and method for managing targeted social communications
US20160110922A1 (en) * 2014-10-16 2016-04-21 Tal Michael HARING Method and system for enhancing communication by using augmented reality
US20160142361A1 (en) * 2014-11-15 2016-05-19 Filmstrip, Inc. Image with audio conversation system and method utilizing social media communications
US20160302711A1 (en) * 2015-01-29 2016-10-20 Affectomatics Ltd. Notifying a user about a cause of emotional imbalance
CN106171038A (zh) * 2014-03-12 2016-11-30 Tencent Technology (Shenzhen) Co., Ltd. Method and apparatus for controlling peripheral devices through a social network platform
CN106774830A (zh) * 2016-11-16 2017-05-31 NetEase (Hangzhou) Network Co., Ltd. Virtual reality system, voice interaction method, and apparatus
CN107026894A (zh) * 2016-01-04 2017-08-08 Rockwell Automation Technologies, Inc. Delivery of automated notifications by industrial assets
CN107222398A (zh) * 2017-07-24 2017-09-29 Guangzhou Tencent Technology Co., Ltd. Social message control method and apparatus, storage medium, and computer device
CN107623622A (zh) 2016-07-15 2018-01-23 Zhangying Information Technology (Shanghai) Co., Ltd. Method for sending voice animation and electronic device
CN107707538A (zh) 2017-09-27 2018-02-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Data transmission method and apparatus, mobile terminal, and computer-readable storage medium
US20180097749A1 (en) * 2016-10-03 2018-04-05 Nohold, Inc. Interactive virtual conversation interface systems and methods
US20180104594A1 (en) * 2011-09-14 2018-04-19 Steelseries Aps Apparatus for adapting virtual gaming with real world information
CN108037869A (zh) 2017-12-07 2018-05-15 Beijing Xiaomi Mobile Software Co., Ltd. Message display method and apparatus, and terminal
US10638256B1 (en) * 2016-06-20 2020-04-28 Pipbin, Inc. System for distribution and display of mobile targeted augmented reality content
US10963648B1 (en) * 2006-11-08 2021-03-30 Verizon Media Inc. Instant messaging application configuration based on virtual world activities

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8311188B2 (en) * 2008-04-08 2012-11-13 Cisco Technology, Inc. User interface with voice message summary
CN101697559A (zh) * 2009-10-16 2010-04-21 Shenzhen Huawei Communication Technologies Co., Ltd. Message display method, apparatus, and terminal
CN102780646B (zh) * 2012-07-19 2015-12-09 Shanghai Liangming Technology Development Co., Ltd. Method for implementing sound icons in instant messaging, client, and system
CN102821065A (zh) * 2012-08-13 2012-12-12 Shanghai Liangming Technology Development Co., Ltd. Method for outputting display shapes of audio messages in instant messaging, client, and system
CN104104703B (zh) * 2013-04-09 2018-02-13 Guangzhou Huaduo Network Technology Co., Ltd. Multi-user audio and video interaction method, client, server, and system
CN104732975A (zh) * 2013-12-20 2015-06-24 Huawei Technologies Co., Ltd. Voice instant messaging method and apparatus
CN105049318B (zh) * 2015-05-22 2019-01-08 Tencent Technology (Shenzhen) Co., Ltd. Message sending method and apparatus, and message processing method and apparatus
CN106817349B (zh) * 2015-11-30 2020-04-14 Xiamen Heijing Technology Co., Ltd. Method and apparatus for animating a communication interface during communication


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
International Search Report issued in International Application No. PCT/CN2019/106116, dated Nov. 28, 2019 (with English translation) (5 pgs).
Office Action issued in corresponding Chinese Patent Application No. 201811159451.4 dated Mar. 18, 2021, 8 pages.
Written Opinion of the International Searching Authority issued in International Application No. PCT/CN2019/106116, dated Nov. 28, 2019 (3 pgs).

Also Published As

Publication number Publication date
WO2020063394A1 (zh) 2020-04-02
CN113965542A (zh) 2022-01-21
US20240098182A1 (en) 2024-03-21
CN113965542B (zh) 2022-10-04
CN110971502B (zh) 2021-09-28
CN110971502A (zh) 2020-04-07
US20210067632A1 (en) 2021-03-04

Similar Documents

Publication Publication Date Title
US11895273B2 (en) Voice message display method and apparatus in application, computer device, and computer-readable storage medium
US11741949B2 (en) Real-time video conference chat filtering using machine learning models
JP2023506699A (ja) Reminder method, apparatus, device, and computer program in group session
EP4315840A1 (en) Presenting participant reactions within virtual conferencing system
US11456887B1 (en) Virtual meeting facilitator
KR20180129886A (ko) 지속적 컴패니언 디바이스 구성 및 전개 플랫폼
KR102557732B1 (ko) 가상펫의 정보 표시 방법, 장치, 단말기, 서버 및 저장매체
US20150279371A1 (en) System and Method for Providing an Audio Interface for a Tablet Computer
US20230086248A1 (en) Visual navigation elements for artificial reality environments
US20240091655A1 (en) Method, apparatus, electronic device and storage medium for game data processing
US20230343056A1 (en) Media resource display method and apparatus, device, and storage medium
US20220139041A1 (en) Representations in artificial realty
JP2022133254A (ja) Integrated input/output (I/O) for three-dimensional (3D) environments
US20230072463A1 (en) Contact information presentation
CN112416207A (zh) Information content display method and apparatus, device, and medium
CN108055191A (zh) Information processing method, terminal, and computer-readable storage medium
CN114885199B (zh) Real-time interaction method and apparatus, electronic device, storage medium, and system
US20220410004A1 (en) Automatically generated enhanced activity and event summaries for gameplay sessions
US20230298290A1 (en) Social interaction method and apparatus, device, storage medium, and program product
CN112138410B (zh) Virtual object interaction method and related apparatus
US20240203080A1 (en) Interaction data processing
US11972173B2 (en) Providing change in presence sounds within virtual working environment
WO2024041270A1 (zh) Interaction method and apparatus in virtual scene, device, and storage medium
US20230041497A1 (en) Mood oriented workspace
US20230352054A1 (en) Editing video captured by electronic devices

Legal Events

Date Code Title Description
AS Assignment

Owner name: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED, CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DAI, WEI;LU, KUN;ZHONG, QINGHUA;AND OTHERS;SIGNING DATES FROM 20201104 TO 20201111;REEL/FRAME:054353/0708

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE