CN111582862B - Information processing method, device, system, computer equipment and storage medium - Google Patents

Information processing method, device, system, computer equipment and storage medium

Info

Publication number
CN111582862B
Authority
CN
China
Prior art keywords
description information
virtual article
virtual
information
article package
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010593270.3A
Other languages
Chinese (zh)
Other versions
CN111582862A (en)
Inventor
施国演
李建立
俞清源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority claimed from CN202010593270.3A
Publication of CN111582862A
Application granted
Publication of CN111582862B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/30Payment architectures, schemes or protocols characterised by the use of specific devices or networks
    • G06Q20/36Payment architectures, schemes or protocols characterised by the use of specific devices or networks using electronic wallets or electronic money safes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/28Constructional details of speech recognition systems
    • G10L15/30Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
    • H04L51/10Multimedia information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/52User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services

Abstract

The application discloses an information processing method, device, system, computer equipment and storage medium, and relates to the field of artificial intelligence. The method comprises the following steps: displaying a first user interface, wherein the first user interface displays a receiving interface of a virtual article package; responding to a triggering operation on the receiving interface, displaying a second user interface, wherein the second user interface comprises target description information of the virtual article package; receiving a voice fragment input by a first user account, wherein the voice fragment is used for matching with the target description information to request to receive the virtual articles in the virtual article package; and receiving the virtual articles in the virtual article package. The method enriches the interaction modes by which a user receives a virtual article package.

Description

Information processing method, device, system, computer equipment and storage medium
Technical Field
The embodiment of the application relates to the field of artificial intelligence, in particular to an information processing method, an information processing device, an information processing system, computer equipment and a storage medium.
Background
With the development of network technology, various virtual articles have appeared, such as equipment and pets in network games. In social software, a user may send a virtual article package containing virtual articles, and other users may receive the virtual article package to obtain the virtual articles in it.
In the related art, a user may send a virtual item package in a group chat, and other users in the group chat may click on a link of the virtual item package to get a virtual item in the virtual item package.
In the related art, a user can only click on the link of a virtual article package to pick up the virtual articles therein, so the pick-up mode is limited to a single form.
Disclosure of Invention
The embodiment of the application provides an information processing method, an information processing device, an information processing system, computer equipment and a storage medium, which can enrich the modes in which a user picks up virtual article packages. The technical scheme is as follows:
in one aspect, there is provided an information processing method, the method including:
displaying a first user interface, wherein the first user interface displays a receiving interface of a virtual article package;
responding to the triggering operation of the receiving interface, and displaying a second user interface, wherein the second user interface comprises target description information of the virtual article package;
receiving a voice fragment input by a first user account, wherein the voice fragment is used for matching with the target description information to request to receive virtual articles in the virtual article package;
and receiving the virtual articles in the virtual article package.
In another aspect, there is provided an information processing method including:
receiving a matching request sent by a first client, wherein the matching request comprises a first user account, a voice fragment and an identifier of a virtual article package;
acquiring target description information of the virtual article package according to the identification;
and responding to the voice fragment and the target description information to be matched, and sending a virtual article package receiving result to the first client, wherein the virtual article package receiving result comprises virtual articles in the virtual article package received by the first user account.
In another aspect, there is provided an information processing method including:
receiving an operation instruction input by a second user account;
determining a description information set corresponding to the virtual article package according to the operation instruction, wherein the description information set comprises at least two description information, and the description information is used for indicating a picking mode of the virtual article package;
receiving parameter information of the virtual article package input by the second user account, wherein the parameter information is used for generating the virtual article package, and at least one virtual article is carried in the virtual article package;
Displaying a fourth user interface, wherein the fourth user interface displays the virtual article package sent by the second user account, and the virtual article package is generated according to the description information set and the parameter information.
In another aspect, there is provided an information processing method including:
receiving a sending request of a virtual article package sent by a second user account, wherein the sending request carries a description information set and parameter information, the description information set is used for indicating a picking mode of the virtual article package, the parameter information is used for generating the virtual article package, and the virtual article package carries at least one virtual article;
generating the virtual article package according to the description information set and the parameter information;
and sending the receiving interface of the virtual article package to at least one user account.
In another aspect, there is provided an information processing apparatus including:
the first display module is used for displaying a first user interface, and the first user interface displays a receiving interface of the virtual article package;
the first display module is further used for responding to the triggering operation of the pickup interface and displaying a second user interface, and the second user interface comprises target description information of the virtual article package;
The acquisition module is used for receiving a voice fragment input by a first user account, and the voice fragment is used for matching with the target description information to request to receive the virtual article in the virtual article package;
the first receiving module is used for receiving the virtual articles in the virtual article package.
In another aspect, there is provided an information processing apparatus including:
the second receiving module is used for receiving a matching request sent by the first client, wherein the matching request comprises a first user account, a voice fragment and an identifier of a virtual article package;
the acquisition module is used for acquiring the target description information of the virtual article package according to the identification;
and the second sending module is used for responding to the matching of the voice fragment and the target description information and sending a virtual article package receiving result to the first client, wherein the virtual article package receiving result comprises virtual articles in the virtual article package received by the first user account.
In another aspect, there is provided an information processing apparatus including:
the interaction module is used for receiving an operation instruction input by the second user account;
the second determining module is used for determining a description information set corresponding to the virtual article package according to the operation instruction, wherein the description information set comprises at least two description information, and the description information is used for indicating a picking mode of the virtual article package;
The interaction module is further configured to receive parameter information of the virtual article package input by the second user account, where the parameter information is used to generate the virtual article package, and the virtual article package carries at least one virtual article;
the second display module is used for displaying a fourth user interface, the fourth user interface displays the virtual article package sent by the second user account, and the virtual article package is generated according to the description information set and the parameter information.
In another aspect, there is provided an information processing apparatus including:
a fourth receiving module, configured to receive a transmission request of a virtual article packet sent by a second user account, where the transmission request carries a description information set and parameter information, the description information set is used to indicate a pickup mode of the virtual article packet, the parameter information is used to generate the virtual article packet, and the virtual article packet carries at least one virtual article;
the second generation module is used for generating the virtual article package according to the description information set and the parameter information;
and the fourth sending module is used for sending the receiving interface of the virtual article package to at least one user account.
In another aspect, there is provided an information processing system, the system comprising: the system comprises a first client, a server connected with the first client through a wired network or a wireless network and a second client connected with the server through the wired network or the wireless network;
the first client includes the first information processing apparatus as described in the above aspect;
the server includes the second or fourth information processing apparatus as described in the above aspect;
the second client includes the third information processing apparatus as described in the above aspect.
In another aspect, a computer device is provided, the computer device including a processor and a memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement the information processing method as described in the above aspect.
In another aspect, there is provided a computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by a processor to implement the information processing method as described in the above aspect.
In another aspect, embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device performs the information processing method provided in the above-described alternative implementation.
The technical solutions provided in the embodiments of the present application include at least the following beneficial effects:
by setting description information for the virtual article package, the user inputs a voice fragment matching the description information, and when the voice fragment is successfully matched with the description information, the user can pick up the virtual articles in the virtual article package. For example, a question, a picture, a video or a piece of music may be used as the description information, so that the user answers with a piece of speech according to the description information; the server recognizes the user's speech to determine whether what the user said is consistent with the description information, and when it is consistent, the user obtains the virtual articles in the virtual article package. Therefore, the methods by which users receive virtual article packages are enriched, the sending and receiving of virtual article packages among users are promoted, the circulation of virtual articles among users is promoted, and the utilization rate of virtual articles is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
fig. 2 is a schematic structural diagram of a terminal according to an exemplary embodiment of the present application;
FIG. 3 is a user interface diagram of an information processing method provided in an exemplary embodiment of the present application;
FIG. 4 is a user interface diagram of an information processing method provided in another exemplary embodiment of the present application;
FIG. 5 is a user interface diagram of an information processing method provided in another exemplary embodiment of the present application;
FIG. 6 is a user interface diagram of an information processing method provided in another exemplary embodiment of the present application;
FIG. 7 is a method flow diagram of an information processing method provided by another exemplary embodiment of the present application;
FIG. 8 is a method flow diagram of an information processing method provided by another exemplary embodiment of the present application;
FIG. 9 is a method flow diagram of an information processing method provided by another exemplary embodiment of the present application;
FIG. 10 is a user interface diagram of an information processing method provided in another exemplary embodiment of the present application;
FIG. 11 is a user interface diagram of an information processing method provided in another exemplary embodiment of the present application;
FIG. 12 is a method flow diagram of an information processing method provided by another exemplary embodiment of the present application;
FIG. 13 is a user interface diagram of an information processing method provided in another exemplary embodiment of the present application;
FIG. 14 is a user interface diagram of an information processing method provided in another exemplary embodiment of the present application;
FIG. 15 is a user interface diagram of an information processing method provided in another exemplary embodiment of the present application;
FIG. 16 is a method flow diagram of an information processing method provided by another exemplary embodiment of the present application;
fig. 17 is a block diagram of an information processing apparatus provided in another exemplary embodiment of the present application;
fig. 18 is a block diagram of an information processing apparatus provided in another exemplary embodiment of the present application;
fig. 19 is a block diagram of an information processing apparatus provided in another exemplary embodiment of the present application;
Fig. 20 is a block diagram of an information processing apparatus provided in another exemplary embodiment of the present application;
FIG. 21 is a block diagram of a terminal provided by an exemplary embodiment of the present application;
fig. 22 is a block diagram of a server according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Virtual article: a virtual resource capable of being circulated. Illustratively, the virtual article is a virtual resource that can be exchanged for goods. By way of example, the virtual article may be funds, shares, game gear, game materials, game pets, game coins, icons, memberships, titles, value-added services, points, gold ingots, beans, gift certificates, redemption certificates, coupons, greeting cards, money, and the like.
Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Key speech technologies (Speech Technology) include automatic speech recognition (ASR), speech synthesis (TTS) and voiceprint recognition. Enabling a computer to listen, see, speak and feel is a future direction of human-computer interaction, and voice is expected to become one of the best human-computer interaction modes in the future.
Natural language processing (Natural Language Processing, NLP) is an important direction in the fields of computer science and artificial intelligence. It studies various theories and methods that enable effective communication between humans and computers in natural language. Natural language processing is a science that integrates linguistics, computer science and mathematics. Research in this field therefore involves natural language, i.e., the language that people use daily, so it is closely related to the study of linguistics. Natural language processing techniques typically include text processing, semantic understanding, machine translation, question answering, knowledge graph techniques, and the like.
Referring to fig. 1, a schematic diagram of an implementation environment provided in one embodiment of the present application is shown. The implementation environment may include: a first terminal 10, a server 20 and a second terminal 30.
The first terminal 10 may be an electronic device such as a cell phone, desktop computer, tablet computer, game console, e-book reader, multimedia playing device, wearable device, MP3 (Moving Picture Experts Group Audio Layer III) player, MP4 (Moving Picture Experts Group Audio Layer IV) player, laptop portable computer, etc. The first terminal 10 may have installed therein a first client of an application program capable of receiving virtual article packages, for example, a financial program, a social program, a shopping program, a game program, a video program, an audio program, and the like.
The second terminal 30 may be an electronic device such as a cell phone, desktop computer, tablet computer, game console, electronic book reader, multimedia playing device, wearable device, MP3 player, MP4 player, laptop portable computer, etc. The second terminal 30 may have installed therein a second client of an application program capable of sending virtual article packages, for example, a financial program, a social program, a shopping program, a game program, a video program, an audio program, and the like.
The server 20 is used to provide background services for clients of applications in the first terminal 10 or the second terminal 30, such as applications capable of receiving virtual packages. For example, the server 20 may be a background server of the application program described above (e.g., an application program capable of receiving virtual package of items). The server 20 may be a server, a server cluster comprising a plurality of servers, or a cloud computing service center.
The first terminal 10, the second terminal 30 and the server 20 can communicate with each other via the network 40. The network 40 may be a wired network or a wireless network.
Illustratively, a first client of an application program (such as a social program) capable of receiving a virtual article package is installed in the first terminal 10. When the first terminal 10 receives the user's virtual article package receiving operation, it may send a virtual article package receiving request to the server 20 through the network 40. After receiving the request, the server 20 transfers a certain amount of virtual articles to the user's account and sends the transfer result to the first terminal 10 through the network 40; the first terminal 10 then displays the increased amount of virtual articles in the user's account in its interface, completing the information processing process.
For example, a second client of an application program (such as a social program) capable of sending a virtual article package is installed in the second terminal 30. When the second terminal 30 receives the user's virtual article package sending operation, it sends a virtual article package sending request to the server 20 through the network 40. After receiving the request, the server 20 transfers a certain amount of virtual articles out of the user's account and sends the transfer result to the second terminal 30 through the network 40; the second terminal 30 then displays the reduced amount of virtual articles in the user's account in its interface, completing the information processing process.
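As a non-limiting illustration of the two exchanges above, the following sketch shows one way a server could handle a receive request and a send request. The message fields, function names and the in-memory account store are assumptions made for this sketch only; the embodiment does not prescribe a concrete wire format.

```python
# Minimal illustrative sketch of the receive/send exchanges described above.
# All field names and the in-memory stores are assumptions for illustration.

accounts = {"user_a": 100, "user_b": 100}     # hypothetical balances of virtual articles
packages = {"pkg_1": {"remaining": 5}}        # hypothetical virtual article packages

def handle_receive_request(user_id: str, package_id: str) -> dict:
    """Server 20: transfer virtual articles from a package to the user's account."""
    pkg = packages.get(package_id)
    if pkg is None or pkg["remaining"] <= 0:
        return {"ok": False, "reason": "package empty or missing"}
    amount = 1                                # amount decided by the server
    pkg["remaining"] -= amount
    accounts[user_id] += amount
    return {"ok": True, "amount": amount, "balance": accounts[user_id]}

def handle_send_request(user_id: str, amount: int) -> dict:
    """Server 20: deduct virtual articles from the sender and create a package."""
    if accounts[user_id] < amount:
        return {"ok": False, "reason": "insufficient virtual articles"}
    accounts[user_id] -= amount
    package_id = f"pkg_{len(packages) + 1}"
    packages[package_id] = {"remaining": amount}
    return {"ok": True, "package_id": package_id, "balance": accounts[user_id]}

# First terminal 10: request to receive, then display the returned balance.
print(handle_receive_request("user_a", "pkg_1"))
# Second terminal 30: request to send, then display the reduced balance.
print(handle_send_request("user_b", 3))
```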
In the embodiment of the method, the execution subject of each step may be a terminal. Referring to fig. 2, a schematic structural diagram of a terminal according to an embodiment of the present application is shown. The terminal may include: motherboard 110, external input/output device 120, memory 130, external interface 140, touch system 150, and power supply 160.
Wherein, the motherboard 110 has integrated therein processing elements such as a processor and a controller.
The external input/output device 120 may include a display component (such as a display screen), a sound playing component (such as a speaker), a sound collecting component (such as a microphone), various types of keys, and the like.
The memory 130 has stored therein program codes and data.
The external interface 140 may include a headset interface, a charging interface, a data interface, and the like.
The touch system 150 may be integrated in a display component or key of the external input/output device 120, and the touch system 150 is used to detect a touch operation performed by a user on the display component or key.
The power supply 160 is used to power the other various components in the mobile terminal.
In embodiments of the present application, the processor in motherboard 110 may generate a user interface (e.g., a virtual item receiving interface) by executing or invoking program code and data stored in memory, and present the generated user interface (e.g., virtual item receiving interface) via external output/input device 120. During presentation of a user interface (e.g., a virtual item receiving interface), touch operations (e.g., virtual item receiving operations) performed when a user interacts with the user interface (e.g., virtual item receiving interface) may be detected by touch system 150 and responded to.
By way of example, the present application provides an information processing method, and this embodiment is described by taking an example of application of the method in a scenario of sending and receiving a red packet in a social program.
In a social program, a user can send a red packet in a group chat, and this embodiment provides a voice relay red packet. The voice relay red packet is described by taking as an example that the first user account is a receiver of the voice relay red packet and the second user account is its sender. For example, when sending the voice relay red packet, the sender selects a text segment that contains at least two sentences, and other users pick up the voice relay red packet by reading aloud at least one sentence in the text segment. Illustratively, the sentences in the text have an order, and the client recommends the sentence to be read to the user according to the pickup progress of the red packet; after a user successfully reads a sentence and gets a red packet, other users can no longer get a red packet with that sentence. When a user successfully reads out a sentence and gets a red packet, the server also combines the audio of the sentence read by this user with the audio of the sentences read by the users who previously got the red packet, obtaining a piece of audio data, and sends a multimedia message to the client. The multimedia message carries the current relay progress and the red packet ID of the voice relay red packet. By triggering the multimedia message, the user can control the client to request the detail page of the voice relay red packet from the server according to the red packet ID and the current relay progress, and jump to that detail page; the detail page includes a play control for the audio data, and the user triggers the play control to play the audio data, so that the user can hear a piece of audio completed jointly by a plurality of users.
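As a non-limiting sketch, the relay state described above could be tracked as follows; the class and field names are assumptions made for illustration, not an API defined by the embodiment.

```python
# Illustrative sketch of how a voice relay red packet could track its relay progress.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VoiceRelayRedPacket:
    red_packet_id: str
    sentences: List[str]                      # ordered sentences of the selected text segment
    amounts: List[float]                      # one sub-packet amount per sentence
    progress: int = 0                         # index of the next sentence to be read aloud
    audio_clips: List[bytes] = field(default_factory=list)  # clips of users who picked up

    def next_sentence(self) -> Optional[str]:
        """Sentence the client recommends to the next user, or None if all are taken."""
        if self.progress >= len(self.sentences):
            return None
        return self.sentences[self.progress]

    def pick_up(self, user_audio: bytes) -> Optional[float]:
        """Record a successful pickup: store the clip and advance the relay progress."""
        if self.progress >= len(self.sentences):
            return None
        amount = self.amounts[self.progress]
        self.audio_clips.append(user_audio)   # later concatenated into the relay audio data
        self.progress += 1
        return amount

packet = VoiceRelayRedPacket("rp_1",
                             ["before your action", "i am the only monarch of the personal menu"],
                             [0.08, 0.05])
print(packet.next_sentence())                 # "before your action"
packet.pick_up(b"...recorded audio...")
print(packet.next_sentence())                 # the second sentence is recommended next
```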
For the sending process of the voice relay red packet, fig. 3 shows a schematic diagram of the user interfaces corresponding to a group for the second user account. As shown in (1) of fig. 3, after entering the user interface of a group chat, the user may click the red packet sending control 301 to pop up a red packet selection interface; in the red packet selection interface, the user may click the voice relay red packet 302 and enter the editing interface of the voice relay red packet. As shown in (2) of fig. 3, the user may edit parameter information of the voice relay red packet in this user interface, for example, the number of virtual articles 303 in the red packet, the relay content 304 and the number of red packets 305. Illustratively, the user may click the relay content selection control to enter the relay content selection interface. As shown in (3) of fig. 3, a plurality of selectable relay contents are provided to the user, and the user may also edit the relay content by himself or herself. For example, the user may select the second relay content 306 entitled "won politics", which includes two sentences, "before your action" and "i am the only monarch of the personal menu"; after the user selects the second relay content 306, the interface jumps back to the editing interface of the voice relay red packet. At this time, as shown in (4) of fig. 3, the second relay content is selected as the relay content 304. Illustratively, the number of red packets 305 is automatically determined according to the number of sentences in the relay content 304 selected by the user; for example, the second relay content 306 includes two sentences, so the number of red packets 305 is two. After completing the editing of the parameter information of the voice relay red packet, the user may click the send control 307 to enter the payment interface of the voice relay red packet. As shown in (1) of fig. 4, the user may complete the payment for the voice relay red packet at the payment interface 308, and the interface jumps back to the group chat user interface after the payment succeeds. As shown in (2) of fig. 4, the second user account 309 sends a voice relay red packet 310 in the group chat.
For the receiving process of the voice relay red packet, fig. 5 shows a schematic diagram of the user interfaces corresponding to a group for the first user account. As shown in (1) of fig. 5, the second user account 309 sends a voice relay red packet 310 in the group chat, and the user clicks the voice relay red packet 310 to pop up the pre-fetching interface of the voice relay red packet. As shown in (2) of fig. 5, the sentences in the second relay content are displayed in the pre-fetching interface, and the client recommends to the user the first sentence in the second relay content, "before your action", according to the current pickup progress of the voice relay red packet. Illustratively, the user may press and hold the voice input control 311 to read the currently selected sentence "before your action". When the voice input control 311 is released, the client automatically uploads the collected voice fragment to the server for matching. If the matching succeeds, the client enters the red packet grabbing process and sends a red packet grabbing request to the server; the server determines according to the request whether the first user account can grab the red packet, and if the grabbing succeeds, the client jumps to the successful receiving interface. As shown in (3) of fig. 5, the number 312 of virtual articles that the first user account receives from the voice relay red packet is displayed on the successful receiving interface: 0.08. At the same time, the first user account also sends a multimedia message in the group chat, and the user can see this multimedia message in the user interface of the group chat after exiting the successful receiving interface. As shown in (4) of fig. 5, the first user account 314 sends a multimedia message 313 in the group chat. Illustratively, the multimedia message 313 is used to play audio data synthesized by the server from the voice fragments of at least one user according to the current relay progress of the voice relay red packet. The multimedia message 313 may be at least one of a voice message, a video message and a link message. Taking the multimedia message 313 as a link message as an example, the multimedia message includes the red packet ID of the voice relay red packet and its current relay progress; by clicking the link message, the user can request the audio data preview interface of the voice relay red packet from the server according to the multimedia message and then jump to that interface. As shown in fig. 6, at the audio data preview interface, the user can play the audio data synthesized by the server by clicking the play control 315. The server may also combine the audio data with preset video frames, for example, to form the video data 316 shown in fig. 6.
As shown in fig. 7, this embodiment provides a method for information interaction between the second client corresponding to the second user account and the servers when sending a voice relay red packet. Exemplary servers include a red packet server, a message server and a configuration server. The red packet server performs the logical operations in the red packet sending and receiving process. The message server performs the logical operations in the process of sending and receiving messages in the social program. The configuration server is used for data updates of the social program. The method includes the following steps.
In step 401, the second client requests the configuration server to download/update the configuration.
Illustratively, when the social program performs a version update or a function update, the second client needs to acquire update information from the configuration server. For example, when the social program adds the voice relay packet function, the second client needs to request configuration data corresponding to the voice relay packet from the configuration server, so as to complete updating of the social program, so that the second client can send or receive the voice relay packet.
Step 402, the configuration server issues voice relay configuration data to the second client.
And the second client finishes updating after receiving the voice relay configuration data.
Step 403, the second client receives the user's editing of the voice relay red packet at the voice relay red packet editing interface: selecting the relay question, filling in the number of red packets and filling in the amount of the red packets, and then receives the user's operation of triggering the send control.
In step 404, when the user triggers the sending control, the second client sends information such as the ID (IDentity) of the relay question selected by the user and the ID of the second user account to the red packet server, and requests to send the voice relay red packet to the red packet server.
Step 405, after receiving the information sent by the second client, the red packet server generates an order of the voice relay red packet, and sends the ID of the red packet order to the second client.
In step 406, the second client performs the payment operation for the red packet order according to the ID of the red packet order, and receives the payment password input by the user.
In step 407, the second client sends information such as ID, payment password, etc. of the red packet order to the red packet server.
In step 408, the red packet server verifies the information such as ID, payment password, etc. of the red packet order sent by the second client, and after the verification is passed, sends the payment result to the second client.
Step 409, after the verification is passed, the red packet server sends a request for sending a message to the message server, requesting the message server to send a message of the voice relay red packet.
In step 410, after receiving the message sending request sent by the red packet server, the message server sends a voice relay red packet message to the second client, where the message includes information such as the ID of the voice relay red packet, an authentication key, and the relay progress of the voice relay red packet.
The second client displays the voice relay red packet sent by the second user account on the user interface according to the voice relay red packet message, and completes the sending process of the voice relay red packet.
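As a non-limiting sketch of the sending flow in steps 403-410, the snippet below walks through order creation, payment verification and message push. The function names, payload fields and the in-memory order store are assumptions made for illustration only.

```python
# Illustrative sketch of the sending flow of steps 403-410 (all names assumed).
import uuid

orders = {}          # red packet server: order id -> order data

def create_order(question_id: str, sender_id: str, count: int, amount: float) -> str:
    """Steps 404-405: the red packet server generates an order for the voice relay red packet."""
    order_id = uuid.uuid4().hex
    orders[order_id] = {"question_id": question_id, "sender": sender_id,
                        "count": count, "amount": amount, "paid": False}
    return order_id

def verify_password(sender_id: str, password: str) -> bool:
    # Placeholder check; a real system would verify against the payment service.
    return bool(password)

def pay_order(order_id: str, password: str) -> bool:
    """Steps 406-408: verify the order ID and payment password, then mark the order paid."""
    order = orders.get(order_id)
    if order is None or not verify_password(order["sender"], password):
        return False
    order["paid"] = True
    return True

def push_red_packet_message(order_id: str) -> dict:
    """Steps 409-410: the message server pushes the voice relay red packet message."""
    order = orders[order_id]
    return {"red_packet_id": order_id, "auth_key": uuid.uuid4().hex,
            "relay_progress": 0, "question_id": order["question_id"]}

order_id = create_order(question_id="q_2", sender_id="second_user", count=2, amount=0.13)
if pay_order(order_id, password="******"):
    print(push_red_packet_message(order_id))
```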
As shown in fig. 8, this embodiment further provides a method for information interaction between the first client corresponding to the first user account and the servers when receiving the voice relay red packet. Exemplary servers include a red packet server, a message server and a speech recognition server. The red packet server performs the logical operations in the red packet sending and receiving process. The message server performs the logical operations in the process of sending and receiving messages in the social program. The speech recognition server is used for recognizing and matching voice fragments. The method includes the following steps.
In step 501, the first client receives an operation of clicking the voice relay packet by the user, and displays a pre-fetching interface of the voice relay packet.
In step 502, the first client records a voice fragment of the user reading the specified paragraph, in response to the user long-pressing the record button.
Step 503, after the recording is completed, the first client sends the current relay progress of the voice relay red packet, the ID of the voice relay red packet, and the recorded voice clip to the red packet server.
Illustratively, the relay progress includes an identifier of the paragraph that the user should read. The red packet server acquires the relay question of the voice relay red packet according to the ID of the voice relay red packet, and determines the paragraph to be recognized according to the identifier of the paragraph.
Step 504, the red packet server uploads the recorded voice fragment, the relay question, the paragraph to be recognized, the ID of the voice relay red packet, etc. to the speech recognition server.
In step 505, the speech recognition server converts the speech segment into text or pinyin, and matches the text or pinyin with the paragraph to be recognized.
Illustratively, the speech recognition server uses a speech recognition algorithm to convert the voice fragment into text or pinyin.
In step 506, the speech recognition server returns the matching result to the red packet server.
In step 507, the red packet server returns a matching result to the first client.
Step 508, if the matching result indicates success, the first client determines that the first user account can grab the voice relay red packet, and sends a request for grabbing the voice relay red packet to the red packet server, where the request includes information such as the ID of the voice relay red packet, the ID of the first user account, an authentication key, the ID of the relay question, and the relay progress.
If the matching fails, the first client displays a prompt indicating the voice matching failure, and resets the record button in the pre-fetching interface so that the user can record the voice fragment again.
Step 509, the red packet server verifies the red packet grabbing request sent by the first client, and synthesizes the relay audio data. The relay audio data is obtained by the server synthesizing, according to the relay progress of the voice relay red packet, the voice fragments of the at least one user who has picked up the voice relay red packet.
Illustratively, the red packet server generates a red packet grabbing result when the information in the grabbing request is verified to be correct. The red packet server may determine the number of virtual articles grabbed according to the matching rate of the voice fragment; for example, the higher the matching rate, the more virtual articles the user grabs.
Step 510, the red packet server returns the red packet grabbing result to the first client. The result includes information such as the amount of virtual articles grabbed and the relay audio data, and the updated relay progress of the voice relay red packet is sent to the first client.
In step 511, after receiving the grabbing result, the first client, in response to the red packet being grabbed successfully, sends a multimedia message to the message server, where the multimedia message includes the ID of the voice relay red packet, the current relay progress, the recording time of the voice fragment, the relay audio data, and so on.
Step 512, the message server pushes the multimedia message to clients of other users in the group chat.
According to information such as the ID of the voice relay red packet and the current relay progress carried in the multimedia message, a client can request the detail page of the voice relay red packet from the server; the detail page contains the relay audio data, and the user can play the relay audio data on the detail page.
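As a non-limiting sketch of the matching and grabbing flow in steps 503-510: speech_to_text() stands in for the speech recognition server, and the matching threshold and scoring rule below are assumptions for illustration, not mandated by the embodiment.

```python
# Illustrative sketch of matching the recorded voice fragment against the paragraph
# to be recognized and deciding the grabbed amount from the matching rate.
from difflib import SequenceMatcher

def speech_to_text(voice_fragment: bytes) -> str:
    """Placeholder for the speech recognition server (ASR to text or pinyin)."""
    return "before your action"               # assumed recognition result

def match_fragment(voice_fragment: bytes, target_paragraph: str) -> float:
    """Step 505: convert the fragment and compare it with the target paragraph."""
    recognized = speech_to_text(voice_fragment)
    return SequenceMatcher(None, recognized.lower(), target_paragraph.lower()).ratio()

def grab_red_packet(matching_rate: float, base_amount: float, threshold: float = 0.8) -> float:
    """Steps 508-510: the grab succeeds above a threshold; a higher matching rate
    yields more virtual articles (one possible rule, not the only one)."""
    if matching_rate < threshold:
        return 0.0
    return round(base_amount * matching_rate, 2)

rate = match_fragment(b"...recorded audio...", "before your action")
print(grab_red_packet(rate, base_amount=0.10))   # 0.10 when the match is perfect
```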
Illustratively, based on the concepts of the present application, alternative implementations are not limited to the voice relay red packet described above. For example, the relay question in the voice relay red packet can be replaced by other types of data such as questions, pictures, audio, video, and the like. In addition to inputting voice fragments, users may also pick up virtual article packages by drawing patterns, inputting text, recording videos, sharing links, and the like. Illustratively, the alternative forms of the relay question and the ways of picking up the virtual article package may be combined arbitrarily to obtain new methods of picking up the virtual article package; several alternative exemplary embodiments are given below.
In an alternative exemplary embodiment, the relay question may be a question described by text, by voice, by video or by pictures. To pick up the virtual article package, the user needs to answer the question correctly, and may answer it in various ways such as voice, picture, audio or video. After receiving the user's answer, if the answer is voice, audio or video, the server may perform speech recognition and semantic recognition on the answer by using a speech recognition algorithm or a semantic recognition algorithm to obtain the text or the semantics of the answer, and then match the text or semantics against the correct answer; if the text is consistent with the correct answer or the semantics are close, the answer is determined to be correct and the virtual article package can be picked up. If the answer is a picture or a video, a picture recognition algorithm or a text extraction algorithm can be used to obtain the picture content or text in the answer and match it against the correct answer; if the picture content is similar or the text is the same, the answer is determined to be correct and the virtual article package can be picked up. By way of example, there may be a large number of questions in the relay title, and the user may select one of them to answer. By way of example, the question may ask the user to describe the content presented in a picture, video or audio, to classify the picture, video or audio, to answer how many people are included in the picture, video or audio, or to answer with an idiom, a line of poetry, a person's name, etc. based on the picture, video or audio.
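As a non-limiting sketch of the answer checking described above: an exact text match, with a semantic-similarity fallback. The token-overlap "similarity" below is a placeholder assumption standing in for a real semantic recognition model.

```python
# Illustrative sketch of checking a user's answer against the correct answer.
def semantic_similarity(a: str, b: str) -> float:
    """Crude token-overlap similarity, used here only as a placeholder."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def answer_is_correct(user_answer_text: str, correct_answer: str,
                      similarity_threshold: float = 0.7) -> bool:
    if user_answer_text.strip().lower() == correct_answer.strip().lower():
        return True                            # text is consistent with the correct answer
    return semantic_similarity(user_answer_text, correct_answer) >= similarity_threshold

print(answer_is_correct("the great wall", "The Great Wall"))        # True (exact match)
print(answer_is_correct("a long wall in china", "The Great Wall"))  # False with this crude placeholder
```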
In an alternative exemplary embodiment, the relay title may also be a picture containing a plurality of patterns. The user may select one of the patterns to draw or photograph, and when the similarity between the pattern drawn or the picture photographed by the user and the pattern in the original picture is greater than a threshold, it is determined that the user may pick up the virtual article package. The server may also recognize the picture drawn or photographed by the user with a picture recognition algorithm to obtain the content or key features of the picture, and if the content or key features are correct, the user may pick up the virtual article package. The server may also compose the patterns drawn by multiple users into a new picture, and send the composed picture to the clients.
In an alternative exemplary embodiment, the pictures can be replaced by videos accordingly, so that each user records a video; whether the video content recorded by the user conforms to the specified content is then matched, and if so, the virtual article package can be picked up. The server can also synthesize the videos recorded by multiple users into a new video, and send the synthesized video to the clients.
In an alternative exemplary embodiment, the method may also be used to encourage students to complete homework questions quickly and correctly. The relay questions are replaced by a plurality of questions of the corresponding subject for students to answer, and the students who answer fastest can obtain the virtual articles corresponding to the questions as rewards.
In summary, the method provided in this embodiment provides a voice relay red packet, so that a user can pick up the red packet by reading aloud at least one paragraph of the relay question corresponding to the voice relay red packet. This enriches the ways in which users pick up red packets and promotes users' sending and picking up of red packets. The relay audio data is obtained by synthesizing the voice fragments of a plurality of users and is sent to the clients, so that users can enjoy the relay question completed together with other users, which improves the interactivity of the voice relay red packet and enriches the ways of displaying the voice fragments of red packet users.
Fig. 9 is a method flowchart of an information processing method provided in an exemplary embodiment of the present application. The method is described by taking the execution body as the second client in the second terminal 30 shown in fig. 1, where the second terminal 30 runs a second client supporting the sending of virtual article packages. The method includes at least the following steps.
Step 601, receiving an operation instruction input by a second user account.
Step 602, determining a description information set corresponding to the virtual article package according to the operation instruction, where the description information set includes at least two description information, and the description information is used to indicate a pickup mode of the virtual article package.
For example, when the user wants to send a virtual article package, the user enters the editing interface of the virtual article package, which includes a confirmation control of the description information set. By triggering the confirmation control, the user enters a selection interface or an editing interface of the description information set, so that the second client obtains the description information set corresponding to the virtual article package. For example, as shown in (2) of fig. 3, the user determines the description information set corresponding to the virtual article package by operating on the relay content 304 in the editing interface.
A virtual article package is a collection of virtual articles and includes at least one unit of virtual article. For example, the virtual article package may be a virtual red packet, an e-mail, an electronic gift package, or the like.
The virtual article is a virtual resource that can be circulated. Illustratively, the virtual article is a virtual resource that can be exchanged for goods. By way of example, the virtual article may be funds, shares, game gear, game materials, game pets, game coins, icons, memberships, titles, value-added services, points, gold ingots, beans, gift certificates, redemption certificates, coupons, greeting cards, money, and the like.
Illustratively, the description information set includes at least two description information for describing a pickup manner of the virtual package. For example, a user who wants to get a virtual package needs to input a voice clip according to the description information of the virtual package, and when the voice clip matches with the description information, the user can get the virtual package. Thus, when sending the virtual package, the second user account needs to specify a description information set for the virtual package, so that other user accounts can get the virtual package according to the description information set. Illustratively, the descriptive information set further includes a title name for enabling a user to quickly learn the contents of the descriptive information set.
Illustratively, the present embodiment does not impose restrictions on the information type of the description information. For example, the descriptive information may be: at least one of text information, picture information, audio information and video information.
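As a non-limiting illustration, a description information set of this kind could be represented as follows; the class and field names are assumptions made for this sketch only, not structures defined by the embodiment.

```python
# Illustrative sketch of a description information set whose entries may be
# text, picture, audio or video.
from dataclasses import dataclass
from typing import List

@dataclass
class DescriptionInfo:
    info_type: str        # "text", "picture", "audio" or "video"
    content: bytes        # raw payload, or encoded text for the "text" type

@dataclass
class DescriptionInfoSet:
    title: str                        # title name shown to the user
    items: List[DescriptionInfo]      # at least two pieces of description information

relay_set = DescriptionInfoSet(
    title="won politics",
    items=[DescriptionInfo("text", "before your action".encode()),
           DescriptionInfo("text", "i am the only monarch of the personal menu".encode())])
```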
Illustratively, the operation instruction is an instruction generated by the user's operation of selecting the description information set. For example, the user may select a desired description information set from the candidate description information set list provided by the client, or may edit a description information set by himself or herself.
Illustratively, when the descriptive information set is selected by the user from the candidate descriptive information sets, the method further comprises, prior to step 601: displaying a fifth user interface, the fifth user interface comprising a candidate descriptive information set list, the candidate descriptive information set list comprising at least one candidate descriptive information set; step 601 further comprises: at least one candidate descriptive information set in the candidate descriptive information set list is determined as a descriptive information set of the virtual package according to the operation instruction.
Illustratively, the fifth user interface is a presentation interface of the candidate descriptive information set list. The fifth user interface is used for displaying the candidate descriptive information set list provided by the client to the user. For example, the candidate descriptive information set list may be a candidate descriptive information set list generated by the second client from the locally stored at least one candidate descriptive information set; or generating a candidate descriptive information set list according to at least one candidate descriptive information set sent by the server; or generating a candidate descriptive information set list according to the at least one candidate descriptive information set stored locally and the at least one candidate descriptive information set sent by the server; or collecting attribute information of the second user account, wherein the attribute information comprises at least one of object attribute information of the second user account, group attribute information of a group to which the second user account belongs and object attribute information of other user accounts except the second user account in the group; generating a third candidate description information set according to the attribute information; and generating a candidate descriptive information set list according to the third candidate descriptive information set.
For example, the attribute information of the second user account in the group includes the name of a person the second user account has been learning about; a description information set corresponding to the encyclopedia entry of that name is acquired from the server, and a candidate description information set list is generated according to that description information set.
For another example, information such as the historical music playing records of the second user account, the classification of the group, and records in the group may be collected, and a plurality of pieces of music that may interest the second user account are obtained by using AI technology; these pieces of music are used as description information to generate a candidate description information set list. For example, if the second user account often listens to songs of singer A, the classification of group A is rock music, and the topics discussed in the group focus on a new album released by singer A, then the rock music in the new album released by singer A is used as description information to generate the candidate description information set list.
For example, when the second user account is to select the description information set, the client may further request the server to obtain the candidate description information set. After receiving the request, the server sends a second candidate description information set stored locally to a second user account; or collecting attribute information of the second user account, wherein the attribute information comprises at least one of object attribute information of the second user account, group attribute information of a group to which the second user account belongs and object attribute information of other user accounts except the second user account in the group; generating a third candidate description information set according to the attribute information; and sending the third candidate descriptive information set to the second user account.
For example, the second client may determine, according to its configuration, whether to display the locally stored candidate description information sets, or whether to preferentially display the candidate description information sets acquired from the server. Illustratively, the second client preferentially displays the candidate description information sets obtained from the server; when the second client obtains a new candidate description information set from the server, the new candidate description information set is stored locally for quick reading next time. Illustratively, this reading preference is obtained by the second client from the configuration on the server. For example, the second client may generate the candidate description information set list in order of update time from most recent to oldest, so as to encourage the user to select a new candidate description information set as the description information set.
Illustratively, as shown in (1) in fig. 10, in the editing interface of the virtual package, a selection control 701 related to the description information set is included, and when the user triggers the selection control 701, the second client displays a fifth user interface shown in (2) in fig. 10, where the fifth user interface includes a candidate description information set list, and the candidate description information set list includes: a first candidate descriptive information set 702, a second candidate descriptive information set 703 and a third candidate descriptive information set 704. The user may select one of the candidate descriptive information sets as the descriptive information set of the virtual package.
Illustratively, when the descriptive information set is edited by the user, the method further includes, before step 601: displaying a sixth user interface, the sixth user interface including an edit control; step 601 further comprises: according to the operation instruction on the editing control, at least two pieces of description information input by the second user account are obtained, wherein the description information comprises at least one of text information, picture information, audio information and video information; the at least two pieces of descriptive information are determined as a set of descriptive information for the virtual package.
Illustratively, the sixth user interface is an editing interface for the description information and is used for acquiring text information input by the user. The user may input text information by triggering the edit control on the sixth user interface. For example, the user may input each piece of description information in the description information set separately. Alternatively, after the user finishes the input, the second client may automatically segment the input text to obtain a plurality of pieces of description information.
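A minimal sketch of the automatic segmentation mentioned above is given below; splitting on newlines and sentence-ending punctuation is an assumption, since the disclosure does not fix a particular segmentation rule.

```python
import re

def split_into_descriptions(text):
    """Split user-entered text into individual pieces of description
    information; splitting on newlines and sentence-ending punctuation
    is an assumed rule, not one fixed by the disclosure."""
    pieces = re.split(r"[\n。！？!?.]+", text)
    return [p.strip() for p in pieces if p.strip()]

print(split_into_descriptions("Good morning! You look great today. See you soon"))
# -> ['Good morning', 'You look great today', 'See you soon']
```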
For example, the editing control may also be used to acquire picture information, audio information, or video information uploaded by the user, and the uploaded picture information, audio information, or video information is determined as description information in the description information set.
Illustratively, as shown in (1) in fig. 11, in the editing interface of the virtual package, a selection control 701 related to the description information set is included, when the user triggers the selection control 701, the second client displays a sixth user interface shown in (2) in fig. 11, where the sixth user interface includes an editing control 705, and the user can input a plurality of description information in the editing control 705 to form the description information set. After editing is completed, the user may click on the confirmation control 706 to determine the input text information as the description information set of the virtual package, and control the second client to jump back to the editing interface of the virtual package.
In step 603, receiving parameter information of a virtual article package input by the second user account, where the parameter information is used to generate a virtual article package, and the virtual article package carries at least one virtual article.
Illustratively, at the editing interface of the virtual article package, the user also needs to edit other parameter information of the virtual article package. Illustratively, the parameter information includes at least one of: the type of the virtual article package, the name of the virtual article package, the sending amount of the virtual article package, the retrievable number of the virtual article package, the sending time of the virtual article package, the user accounts allowed to retrieve the virtual article package, and the distribution mode of the virtual articles in the virtual article package. For example, the virtual article package may be divided into multiple types according to the manner of retrieval, such as an ordinary virtual article package, a virtual article package dedicated to a certain user account, a virtual article package opened by a password, a lucky-draw virtual article package whose amounts are split randomly, and so on. The name of the virtual article package is text that can be edited arbitrarily by the user and can be displayed on the link of the virtual article package. The sending amount refers to the number of virtual articles in the virtual article package. The retrievable number refers to the number of times the virtual article package can be retrieved. The sending time is used for sending the virtual article package at a scheduled time or periodically. The virtual article distribution mode includes at least one of random distribution, average distribution, arithmetic-progression distribution, distribution according to the similarity between the voice segment and the target description information, and the like. Illustratively, the description information set is also one item of the parameter information of the virtual article package.
Illustratively, the parameter information includes a type identifier and virtual article parameters. The type identifier is used to identify, among at least two virtual article package types, the type of the virtual article package generated this time. The virtual article parameters include at least one of the number of virtual article packages, the total number of virtual articles, the number of virtual articles in a single virtual article package, and the manner of dividing the number of virtual articles in a single virtual article package.
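The parameter fields above lend themselves to a simple data structure, and the "random distribution" mode can be realised with the familiar double-average split. The sketch below is only an illustration under those assumptions; the field names, the split rule, and the example amounts are not taken from the disclosure.

```python
import random
from dataclasses import dataclass, field

@dataclass
class PackageParams:
    """Illustrative container for the parameter information; the field names
    are assumptions, not the patent's actual data format."""
    type_id: str                  # e.g. "ordinary", "password", "lucky_draw"
    total_amount: int             # total amount of virtual articles, in smallest units
    count: int                    # how many times the package can be retrieved
    split_mode: str = "random"    # "random" or "average"
    description_set: list = field(default_factory=list)

def split_amounts(params: PackageParams):
    """Divide the total amount into `count` shares according to the split mode.
    The 'double average' draw below is one common way to realise random
    distribution; it is not taken from the disclosure."""
    remaining, shares = params.total_amount, []
    for i in range(params.count, 0, -1):
        if i == 1:
            share = remaining                               # last share takes the rest
        elif params.split_mode == "average":
            share = remaining // i
        else:
            share = random.randint(1, max(1, 2 * remaining // i - 1))
        shares.append(share)
        remaining -= share
    return shares

params = PackageParams(type_id="lucky_draw", total_amount=1000, count=5,
                       description_set=["Good morning", "Have a nice day"])
print(split_amounts(params))  # five shares summing to 1000
```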
The second client receives the parameter information input by the user, and generates a virtual article package according to the parameter information and the selected description information set. The second client sends a sending request of the virtual article package to the server according to the description information set and the parameter information; and responding to the received successful sending result sent by the server, and displaying a fourth user interface.
The sending request includes the second user account, the description information set, and the parameter information. The server generates an order for the virtual article package according to the sending request and returns the identifier of the order to the second client for the payment operation; the second client sends verification information, such as the order identifier and a payment password, to the server, and after the verification succeeds, the server returns a successful sending result to the second client. The successful sending result includes at least one of an identifier of the virtual article package, the description information set of the virtual article package, the relay progress of the virtual article package, and an identity verification key of the virtual article package. The second client displays, on the user interface according to the successful sending result, the virtual article package sent by the second user account.
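To make the order-and-verify exchange above concrete, here is a minimal in-memory sketch. The JSON-like field names, the uuid-based order identifier, and the placeholder password check are all assumptions; a real deployment would involve a payment service and persistent storage.

```python
import uuid

# In-memory stand-in for the server's order store.
ORDERS = {}

def server_create_order(send_request):
    """Create an order for the virtual article package and return its identifier."""
    order_id = str(uuid.uuid4())
    ORDERS[order_id] = {"request": send_request, "paid": False}
    return {"order_id": order_id}

def server_verify_payment(order_id, pay_password):
    """Verify the payment information and return a successful sending result."""
    order = ORDERS.get(order_id)
    if order is None or pay_password != "123456":           # placeholder check only
        return {"ok": False}
    order["paid"] = True
    return {"ok": True,
            "package_id": "pkg-" + order_id[:8],
            "description_set": order["request"]["description_set"],
            "relay_progress": 0}

# Second-client side of the exchange.
send_request = {"account": "user-2",
                "description_set": ["Good morning", "Have a nice day"],
                "params": {"total_amount": 1000, "count": 5}}
order = server_create_order(send_request)
result = server_verify_payment(order["order_id"], "123456")
print(result["ok"], result["package_id"])
```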
The server receives a sending request of a virtual article package sent by a second user account, wherein the sending request carries a description information set and parameter information, the description information set is used for indicating a picking mode of the virtual article package, the parameter information is used for generating the virtual article package, and at least one virtual article is carried in the virtual article package; generating the virtual article package according to the description information set and the parameter information; and sending the receiving interface of the virtual article package to at least one user account.
Step 604, displaying a fourth user interface, where the fourth user interface displays a virtual article package sent by the second user account, and the virtual article package is generated according to the description information set and the parameter information.
For example, after the virtual package is successfully sent, the second client displays the virtual package sent by the second user account on the user interface.
The fourth user interface is a chat interface displayed on the second client, that is, the fourth user interface is a chat interface corresponding to the second user account for sending the virtual package. For example, as shown in (2) of fig. 4, a fourth user interface is provided, on which a virtual package sent by the second user account 309 is displayed.
For example, after the second user account sends the virtual article package, when another user account retrieves the virtual article package, the second client of the second user account also receives a corresponding receiving message. Illustratively, step 604 further comprises, after: displaying a multimedia message sent by the first user account, where the multimedia message is used for playing the voice segment corresponding to the first user account. The virtual article package has a corresponding description information set, and the description information in the description information set has corresponding sequence identifiers; the multimedia message is used for playing audio data obtained by the server sequentially synthesizing, according to the sequence identifiers, at least one voice segment corresponding to the relayed description information.
The second user account may also trigger the multimedia message, for example. Illustratively, the second client receives a first trigger operation on the multimedia message and plays the multimedia message according to the first trigger operation; or the second client receives a second trigger operation on the multimedia message and saves (favorites) the multimedia message according to the second trigger operation; or the second client receives a third trigger operation on the multimedia message and shares the multimedia message according to the third trigger operation; or the second client receives a fourth trigger operation on the multimedia message and displays the playing page of the audio data according to the fourth trigger operation.
In summary, according to the method provided by this embodiment, description information is set for the virtual article package, so that the user retrieves the virtual articles in the virtual article package according to the description information. For example, a question, a picture, a video, a piece of music, or the like may be used as the description information, so that the user answers with a piece of speech according to the description information; the server determines, by recognizing the user's speech, whether what the user said is consistent with the description information, and when it is consistent, the user obtains the virtual articles in the virtual article package. Therefore, the ways in which users retrieve virtual article packages are enriched, the sending and receiving of virtual article packages among users are promoted, the circulation of virtual articles among users is promoted, and the utilization rate of the virtual articles is improved.
According to the method provided by the embodiment, the candidate description information set list is provided for the user, so that the user can directly select one description information set from the candidate description information set list to send the virtual article package, the operation of inputting the description information set by the user is simplified, and the sending efficiency of the virtual article package is improved.
According to the method provided by this embodiment, an editing control is provided for the user, so that the user can independently edit the description information set through the editing control; this increases the degree to which the user can customize the virtual article package and makes the ways of retrieving the virtual article package more diverse.
Fig. 12 is a method flowchart of an information processing method provided in another exemplary embodiment of the present application. The method is described as being executed by the first client in the first terminal 10 and the server 20 shown in fig. 1, where a first client supporting retrieval of the virtual article package runs in the first terminal 10. The method includes at least the following steps.
In step 801, a first client displays a first user interface, where the first user interface displays a pickup interface for a virtual package.
Illustratively, the first user interface is an interface on which a sent virtual article package is displayed; for example, the first user interface may be a chat interface of the first user account.
Illustratively, the pickup interface is configured to receive a request from a first user account to pick up a virtual package of items. Illustratively, the pickup interface may be at least one of a link, a two-dimensional code, and a password.
For example, a two-dimensional code of the virtual article package is displayed on the first user interface, and the user can scan the two-dimensional code to get the virtual article package. For another example, a link to the virtual package is displayed on the first user interface, and the user may click on the link to get the virtual package. For another example, the first user interface displays a password of the virtual package, and the user may copy the password into a designated application program to access the retrieval interface of the virtual package.
For example, as shown in fig. 13, a first user interface is provided, in which a link 901 to a virtual package sent by the second user account 309 is displayed, and a user may click on the link 901 to get the virtual package.
In step 802, the first client responds to the triggering operation of the pickup interface, and displays a second user interface, wherein the second user interface comprises target description information of the virtual article package, and the target description information is used for describing the pickup mode of the virtual article package.
Illustratively, the second user interface is for retrieving the virtual package of items, i.e., the second user interface is a retrieval interface for the virtual package of items. For example, the second user interface may be displayed at an upper layer of the first user interface, entirely covering the first user interface, or partially covering the first user interface.
Illustratively, the triggering operation includes at least one of clicking, double clicking, dragging, sliding, pressing, scanning, copying, pasting, and searching. For example, in response to a trigger operation on the pickup interface, the first client determines that the first user account requests to retrieve the virtual article package and displays the second user interface, where the second user interface is used to receive a voice segment input by the first user account.
Illustratively, when the user triggers the pickup interface, the first client obtains the target description information of the virtual article package. The target description information is used for informing the user of the retrieval manner of the virtual article package, so that the user inputs a voice segment according to the target description information. The target description information may be, for example, at least one of text information, picture information, audio information, and video information.
For example, as shown in fig. 13, in response to the user clicking the link 901 of the virtual article package, a second user interface as shown in fig. 14 is displayed, in which the target description information 902 of the virtual article package, "before your action", is included.
In step 803, the first client receives a voice segment input by the first user account, where the voice segment is used to match with the target description information to request to receive the virtual article in the virtual article package.
Illustratively, the speech segment is user-entered while the target descriptive information is displayed on the second user interface. Illustratively, in order to successfully pick up the virtual package, the user needs to input a voice clip according to the target description information, so that the input voice clip can be matched with the target description information. Illustratively, the speech segment has a maximum duration, and the duration of the speech segment input by the user is less than the maximum duration.
Illustratively, the second user interface further includes a voice input control thereon, and step 803 further includes: the first client responds to triggering operation of the voice input control, and collects voice fragments.
Illustratively, the user's triggering operation on the voice input control may be pressing the voice input control, or an operation of clicking the voice input control. For example, as shown in fig. 14, a voice input control 311 is further displayed on the second user interface; the user may press the voice input control 311 to make the first client start recording, and release the voice input control 311 to make the first client stop recording and send the recorded voice segment to the server for matching.
In step 804, the first client sends a matching request to the server, where the matching request includes the speech segment and the identification of the virtual package.
Illustratively, after the first client collects the voice segment, it automatically sends a matching request to the server, where the matching request includes the first user account, the voice segment, and the ID of the virtual article package. For example, the matching request may further include the target description information, so that the server matches the voice segment with the target description information. The matching request is used to request the server to match the voice segment with the target description information, so that the virtual article package is retrieved after the matching succeeds.
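As a small illustration of what such a matching request might look like on the wire, the sketch below serialises the fields named above into JSON; the field names and the base64 encoding of the recorded audio are assumptions, not the patent's actual format.

```python
import base64
import json

def build_matching_request(account_id, package_id, wav_bytes, target_description=None):
    """Serialise a matching request; field names and the base64 audio
    encoding are assumptions for illustration."""
    request = {
        "account_id": account_id,
        "package_id": package_id,
        "voice_segment": base64.b64encode(wav_bytes).decode("ascii"),
    }
    if target_description is not None:       # optional, as noted in the text
        request["target_description"] = target_description
    return json.dumps(request)

print(build_matching_request("user-1", "pkg-42", b"\x00\x01", "Good morning"))
```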
In step 805, the server receives a matching request sent by the first client, where the matching request includes a first user account, a voice segment, and an identifier of the virtual package.
In step 806, the server obtains the target description information of the virtual package according to the identifier.
Illustratively, after receiving the matching request, the server obtains the target description information corresponding to the virtual article package according to the ID of the virtual article package carried in the matching request.
In step 807, the server sends a virtual package receiving result to the first client in response to the voice segment matching the target description information, where the virtual package receiving result includes a virtual object in the virtual package received by the first user account.
Illustratively, the server matches the voice clip with the target description information to obtain a matching result. When the matching is successful, the server determines that the first user account can pick up the virtual article package, and sends a virtual article package receiving result to the first client, so that the first client displays the first user account to receive the virtual article package according to the virtual article package receiving result.
For example, the server may determine the similarity between the speech segment and the target description information according to speech or semantics, and determine that the speech segment matches the target description information when the similarity reaches a threshold.
Illustratively, the target description information includes: at least one of text information, picture information, audio information, and video information. The voice segment matching the target description information includes at least one of the following: the first text indicated by the voice segment is the same as the second text indicated by the target description information; or the semantic similarity between the first semantics indicated by the voice segment and the second semantics indicated by the target description information is greater than a threshold; or the answer indicated by the voice segment includes a correct answer to the question indicated by the target description information.
For example, the server may perform speech recognition on the speech segment to obtain a first text indicated by the speech segment; responding to the fact that the first text indicated by the voice fragment is the same as the second text indicated by the target description information, and sending a virtual article package receiving result to the first client; or, in response to the semantic similarity between the first semantic meaning indicated by the voice fragment and the second semantic meaning indicated by the target description information being greater than a threshold value, sending a virtual article package receiving result to the first client; or, in response to the answer indicated by the voice segment including a correct answer to the question indicated by the target descriptive information, sending a virtual package receiving result to the first client.
Illustratively, the speech segments may be identified by the first client and matched to the target descriptive information.
For example, the first client performs audio-to-text processing on the voice clip to obtain a first text; extracting a first word embedded vector of a first text, and calling a first semantic analysis model to analyze the first word embedded vector to obtain a first semantic; calling a text extraction model to extract a second text from the target description information; extracting a second word embedded vector of the second text, and calling a second semantic analysis model to analyze the second word embedded vector to obtain second semantics; invoking a semantic similarity model to calculate semantic similarity between the first semantic and the second semantic; in response to the semantic similarity being greater than the threshold, virtual items in the virtual item package are received from the server.
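The pipeline above names several learned models (audio-to-text, word embedding, semantic analysis, semantic similarity). A deliberately simplified stand-in is sketched below: a stubbed speech-to-text step, a bag-of-words "embedding", and cosine similarity take the place of those models purely to show the control flow; the threshold value is likewise illustrative.

```python
import math
from collections import Counter

def speech_to_text(voice_segment_bytes):
    # Stub for the audio-to-text step; a real client would call an ASR model.
    return "you look very beautiful today"

def embed(text):
    # Toy stand-in for the word-embedding / semantic-analysis models:
    # a bag-of-words vector keyed by lower-cased tokens.
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    dot = sum(a[token] * b[token] for token in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

THRESHOLD = 0.6   # illustrative value; the text only requires "greater than a threshold"

def matches(voice_segment_bytes, target_description):
    first_semantic = embed(speech_to_text(voice_segment_bytes))
    second_semantic = embed(target_description)
    return cosine_similarity(first_semantic, second_semantic) > THRESHOLD

print(matches(b"...", "you are very beautiful"))  # True with the stubbed recognition
```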
For another example, the first client performs audio-to-text processing on the voice segment to obtain an answer text; extracts an answer word embedding vector of the answer text; extracts a question text from the target description information; extracts a question word embedding vector of the question text; invokes a question-answer model to predict whether the answer word embedding vector is a correct answer to the question word embedding vector; and receives the virtual articles in the virtual article package from the server in response to the prediction result of the question-answer model being a correct answer.
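For the question-answer branch, the sketch below replaces the question-answer model with a lookup against a stored answer key and a case-insensitive comparison; the key, the question normalisation, and the example entry are assumptions for illustration only.

```python
# Answer key for questions used as target description information; the
# entries and the normalisation below are illustrative assumptions.
CORRECT_ANSWERS = {"what do you call dad's mom": "grandma"}

def question_answer_matches(question_text, answer_text):
    """Return True if the recognised answer matches the stored correct answer.
    A case-insensitive comparison stands in for the learned question-answer
    model named in the text."""
    key = question_text.strip().lower().rstrip("?")
    expected = CORRECT_ANSWERS.get(key)
    return expected is not None and answer_text.strip().lower() == expected

print(question_answer_matches("What do you call dad's mom?", "Grandma"))  # True
```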
For example, the server may also recognize the speech segment and match the speech segment with the target description information.
For example, the server performs audio-to-text processing on the voice clip to obtain a first text; extracting a first word embedded vector of a first text, and calling a first semantic analysis model to analyze the first word embedded vector to obtain a first semantic; calling a text extraction model to extract a second text from the target description information; extracting a second word embedded vector of the second text, and calling a second semantic analysis model to analyze the second word embedded vector to obtain second semantics; invoking a semantic similarity model to calculate semantic similarity between the first semantic and the second semantic; and sending a virtual article package receiving result to the first client in response to the semantic similarity being greater than the threshold.
For another example, the server performs audio-to-text processing on the voice segment to obtain an answer text; extracts an answer word embedding vector of the answer text; extracts a question text from the target description information; extracts a question word embedding vector of the question text; invokes a question-answer model to predict whether the answer word embedding vector is a correct answer to the question word embedding vector; and sends a virtual article package receiving result to the first client in response to the prediction result of the question-answer model being a correct answer.
For example, the server performs voice recognition on the voice segment to obtain a text "good morning", and the target description information is "good morning", so that the text of the voice segment is the same as the text of the target description information, and the voice segment is matched with the target description information.
For another example, the server performs semantic recognition on the voice segment to obtain the semantics "you are very beautiful", and the semantics of the target description information are "you are very beautiful", so the semantics of the voice segment are highly similar to the semantics of the target description information, and the voice segment matches the target description information. For example, a semantic recognition algorithm may be used to recognize the semantics of the voice segment and obtain a semantic vector of the voice segment; the distance between the semantic vector of the voice segment and the semantic vector of the target description information is then calculated, and when the distance is less than a threshold, it is determined that the voice segment matches the semantics of the target description information.
For another example, the server performs voice or semantic recognition on the voice segment to obtain the text or semantics of the voice segment, and the target description information is a question corresponding to a correct answer; if the text of the voice segment is the same as the correct answer, the voice segment matches the target description information, or if the semantics of the voice segment are similar to the semantics of the correct answer, the voice segment matches the target description information. For example, the target description information is "what do you call dad's mom", the correct answer is "grandma", and the result of the server's voice recognition on the voice segment is "granny"; the server obtains the semantic vector of "grandma" and the semantic vector of "granny" respectively, and because the distance between the two semantic vectors is small, the server determines that the voice segment matches the target description information.
For example, when the target description information is a question, audio information, video information, or picture information, a correct answer corresponding to the target description information is stored in the server, or the server may perform picture recognition, voice recognition, semantic recognition, text recognition, or the like on the question, the audio information, the video information, or the picture information to obtain a correct answer corresponding to the target description information, calculate a similarity between the voice segment and the correct answer, and determine that the voice segment matches the target description information when the similarity is greater than a threshold.
Illustratively, the target description information corresponds to a number of times for which it can be used to open the virtual article package; when the number of times the virtual article package has been opened using the target description information reaches the opening-times threshold, the first user account cannot receive the virtual article package even if the voice segment matches the target description information. Therefore, the server sends the virtual article package receiving result to the first client in response to the voice segment matching the target description information and the number of times the virtual article package has been opened according to the target description information being smaller than the threshold. For example, the threshold is 1, that is, the target description information can be used by only one user account to open the virtual article package once.
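A minimal sketch of that server-side grant decision might look as follows; the in-memory counters, the is_match() stub, and the threshold of 1 are illustrative assumptions.

```python
# (package identifier, description sequence identifier) -> times opened
OPEN_COUNTS = {}
OPEN_LIMIT = 1            # example threshold of 1, as in the text

def is_match(voice_segment, target_description):
    return True           # stand-in for the speech/semantic matching step

def try_open(package_id, description_id, voice_segment, target_description):
    """Grant the virtual article only if the segment matches and the
    description has not yet been used up."""
    key = (package_id, description_id)
    if not is_match(voice_segment, target_description):
        return {"granted": False, "reason": "no match"}
    if OPEN_COUNTS.get(key, 0) >= OPEN_LIMIT:
        return {"granted": False, "reason": "description already used"}
    OPEN_COUNTS[key] = OPEN_COUNTS.get(key, 0) + 1
    return {"granted": True}

print(try_open("pkg-42", "001", b"...", "Good morning"))   # granted
print(try_open("pkg-42", "001", b"...", "Good morning"))   # refused on second use
```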
In step 808, the first client receives the virtual articles in the virtual article package.
Illustratively, in response to receiving the virtual article package receiving result sent by the server, the first client displays the virtual article package receiving result on the second user interface, where the virtual article package receiving result is sent by the server after the server performs voice recognition or semantic recognition on the voice segment to obtain a recognition result and the recognition result matches the target description information.
The first client may also display the received result of the virtual package on a third user interface. That is, the first client jumps from the second user interface (the virtual package pickup interface) to the third user interface (the virtual package pickup success interface), and displays the virtual package picked up by the first user account on the third user interface. The virtual article package receiving result comprises at least one of the number of the received virtual articles, the similarity of the voice fragments and the target description information and the number of the remaining virtual articles in the virtual article package.
For example, as shown in fig. 15, a third user interface is shown in which the number 903 of virtual articles received by the first user account is displayed: 0.08 yuan.
In summary, according to the method provided by this embodiment, description information is set for the virtual article package, so that the user inputs, according to the description information, a voice segment matching the description information, and when the voice segment is successfully matched with the description information, the user can retrieve the virtual articles in the virtual article package. For example, a question, a picture, a video, a piece of music, or the like may be used as the description information, so that the user answers with a piece of speech according to the description information; the server determines, by recognizing the user's speech, whether what the user said is consistent with the description information, and when it is consistent, the user obtains the virtual articles in the virtual article package. Therefore, the ways in which users retrieve virtual article packages are enriched, the sending and receiving of virtual article packages among users are promoted, the circulation of virtual articles among users is promoted, and the utilization rate of the virtual articles is improved.
According to the method provided by this embodiment, the user's voice segment is recognized by voice recognition or semantic recognition, and whether the voice segment matches the target description information is judged according to the recognition result, so that retrieval of the virtual article package is more intelligent, the server's ability to recognize voice segments is improved, and the probability that the user's voice segment opens the virtual article package is increased.
Illustratively, the virtual package corresponds to a set of descriptive information from which the target descriptive information is determined. Illustratively, the description information in the description information set has an order, and the client receives the virtual package in the order relay. The server also synthesizes the voice segments corresponding to the description information into audio data according to the order of the description information.
Fig. 16 is a method flowchart of an information processing method provided in another exemplary embodiment of the present application. The method is described as being executed by the first client in the first terminal 10 and the server 20 shown in fig. 1, where a first client supporting retrieval of the virtual article package runs in the first terminal 10. Unlike the embodiment shown in fig. 12, step 802 includes steps 8021 to 8022, step 901 is further included after step 806, and step 809 is further included after step 808.
In step 8021, the first client determines at least one of the description information sets as target description information in response to a trigger operation on the pickup interface.
The virtual package is illustratively associated with a description information set that includes at least two description information, the target description information being at least one selected from the description information set.
In an exemplary embodiment, when the first client receives the trigger operation on the pickup interface, the first client randomly selects one piece of description information from the description information set corresponding to the virtual article package as the target description information. Illustratively, the first client may also determine, as the target description information, the description information having the largest or smallest number of words in the description information set. The first client may also determine the target description information according to a locally stored history of description information in the description information set previously selected by the first user account.
Illustratively, the description information in the description information set corresponds to sequence identifiers, where the sequence identifiers are used to determine the order of the description information, and the client may cause user accounts to open the virtual article package with the description information in that order. Step 8021 further comprises, before: the first client receives the relay progress of the virtual article package sent by the server, where the relay progress includes the sequence identifier of the ith description information, and i is a positive integer.
Illustratively, the server, in response to at least one user account successfully retrieving the virtual article package, sends the relay progress of the virtual article package to the first client, where the relay progress includes the sequence identifier of the ith description information to which the virtual article package has been relayed, the relay progress is used to assist the first client in determining the (i+1)th description information in the description information set as the target description information, and i is a positive integer.
For example, each time a user account successfully retrieves the virtual article package, the server synchronizes the receiving progress of the virtual article package with the client, where the receiving progress includes the sequence identifier of the description information used by the user account that successfully retrieved the virtual article package; the first client can determine, according to the sequence identifier, the relay progress of the current virtual article package (which description information it has been relayed to), so that, based on the relay progress, the client lets the user account open the virtual article package with the next description information.
Step 8021 further comprises: the first client responds to the triggering operation of the acquisition interface, and determines the (i+1) descriptive information in the descriptive information set as target descriptive information according to the sequential identification of the (i) descriptive information.
When the virtual article package has been relayed to the ith description information, the first client displays the (i+1)th description information to the user as the target description information, so that the user inputs a voice segment according to the (i+1)th description information.
In response to a trigger operation to the pickup interface, the (i+1) th description information in the description information set is determined as the target description information according to the sequential identification of the (i) th description information.
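A compact sketch of how a client could derive the target description information from the relay progress is given below; the list and dictionary shapes, and the order_id values, are assumptions for illustration.

```python
description_set = [
    {"order_id": "001", "text": "Good morning"},
    {"order_id": "002", "text": "Have a nice day"},
    {"order_id": "003", "text": "See you tonight"},
]

def next_target(description_set, relayed_order_id):
    """Return the (i+1)-th description given that the package has been
    relayed to the description whose sequence identifier is the i-th one."""
    order_ids = [d["order_id"] for d in description_set]
    i = order_ids.index(relayed_order_id)
    if i + 1 >= len(description_set):
        return None                     # the relay has already reached the end
    return description_set[i + 1]

print(next_target(description_set, "001")["text"])   # -> "Have a nice day"
```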
In step 8022, the first client displays a second user interface.
Step 901, the server synthesizes at least one voice segment corresponding to the relayed description information according to the sequence identification to obtain audio data; generating a multimedia message according to the identifier of the virtual article package and the sequence identifier corresponding to the relayed description information, wherein the multimedia message is used for playing audio data.
The multimedia message is used for playing the audio data obtained by sequentially synthesizing at least one voice segment corresponding to the relayed description information according to the sequence identifier.
After the voice segment is successfully matched with the target description information, the server sequentially synthesizes the voice segment with the other successfully matched voice segments corresponding to the virtual article package to obtain the multimedia message. The multimedia message is then sent to the first client as part of the virtual article package receiving result, so that the first client displays the multimedia message.
For example, since the client uses the description information to open the virtual package according to the sequential identification of the description information, when the voice segments of the user are successfully matched with the i-th description information, i-1 voice segments successfully matched with the i-1 description information are already stored in the server, and the server can splice the voice segments corresponding to the description information according to the sequential identification of the i description information to obtain a piece of audio data. For example, the server may further reprocess the spliced audio data, for example, adding background music, or adjusting various audio parameters of the audio data, or combining the audio data with a preset frame to form video data.
For example, the description information set includes 5 pieces of description information, their sequence identifiers are 001, 002, 003, 004, 005 respectively, and when the voice fragments of the user are successfully matched with the description information with the sequence identifier of 003, the server stores the audio fragments successfully matched with the sequence identifiers of 001 and 002, and the server sequentially splices the three voice fragments according to the sequence identifiers of 001, 002 and 003 to obtain the audio data.
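The splicing step above, applied to WAV voice segments, could be sketched with the standard-library wave module as follows; the file names are placeholders, and the sketch assumes all segments share the same channel count, sample width, and sample rate.

```python
import wave

def splice_segments(segment_paths, output_path):
    """Concatenate WAV voice segments in the given order (here, the order of
    their sequence identifiers). Assumes every segment shares the same
    channel count, sample width and sample rate."""
    params, frames = None, []
    for path in segment_paths:
        with wave.open(path, "rb") as segment:
            if params is None:
                params = segment.getparams()
            frames.append(segment.readframes(segment.getnframes()))
    with wave.open(output_path, "wb") as out:
        out.setparams(params)
        for chunk in frames:
            out.writeframes(chunk)

# Placeholder file names keyed by sequence identifier, as in the example above.
splice_segments(["segment_001.wav", "segment_002.wav", "segment_003.wav"],
                "relay_audio.wav")
```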
For example, if the client does not use the description information to open the virtual article package strictly in the order of the sequence identifiers, then when the user's voice segment is successfully matched with the ith description information, some of the previous i-1 pieces of description information may not yet have matched voice segments; in this case, the server may splice the user's voice segment with original voice segments pre-stored on the server to obtain the audio data. For example, the description information set is two lines of dialogue of a cartoon character; when the user skips the first line and matches with the second line, the server may splice the original dubbed voice segment of the cartoon character's first line with the voice segment input by the user to obtain the audio data.
By way of example, the multimedia message may include at least one of audio data, video data, and a link; further, the multimedia message may include text information, picture information, and the like.
For example, when the multimedia message is audio data or video data, the multimedia message may directly contain the audio data synthesized by the server. When the multimedia message is a link, the multimedia message includes information such as the identifier of the virtual article package and the current relay progress of the virtual article package; when the user clicks the link, the client requests the playing page of the audio data from the server according to the information contained in the multimedia message, and the user can play the audio data synthesized by the server on the playing page.
In step 809, the first client displays, on the first user interface or the second user interface, a multimedia message sent by the first user account, where the multimedia message is used to play the voice clip.
The first client receives the virtual article package receiving result sent by the server, and displays the multimedia message on the user interface according to the multimedia message in the virtual article package receiving result. For example, the first client may display the multimedia message in the chat interface (the first user interface) or in the retrieval interface of the virtual article package (the second user interface).
Illustratively, the multimedia message is at least one of a voice message, a video message, a link message, which may be used to play audio data synthesized by the server. Illustratively, the multimedia message is a link message that the user clicks on to jump to the audio data preview interface, play the audio data in the preview interface, or play video data containing the audio data.
For example, the multimedia message may receive a trigger operation by the user. For example, the first client receives a triggering operation on the multimedia message on the first user interface or the second user interface; and playing the voice fragments or the audio data according to the triggering operation.
In summary, by setting the description information set for the virtual package, the method provided in this embodiment enables the user to get the virtual package according to at least one description information in the description information set, so that the user can get the virtual package through different description information, enrich the ways in which the user gets the virtual package, promote the sending and receiving of the virtual package between the users, promote the circulation of the virtual package, and improve the utilization rate of the virtual package.
According to the method provided by this embodiment, the voice segments of a plurality of users are synthesized to obtain relay audio data, and the relay audio data is sent to the client, so that a user can listen to the relay recording completed together with other users; this improves the interactivity of the voice relay red packet and enriches the ways in which the users' voice segments for the red packet are presented.
The following are device embodiments of the present application, reference being made to the above-described method embodiments for details of the device embodiments that are not described in detail.
Fig. 17 is a block diagram of an information processing apparatus provided in an exemplary embodiment of the present application. The device comprises:
a first display module 1701 configured to display a first user interface, where the first user interface displays a pickup interface of a virtual article package;
the first display module 1701 is further configured to display a second user interface in response to a triggering operation on the pickup interface, where the second user interface includes target description information of the virtual article package;
the first collection module 1702 is configured to receive a voice segment input by a first user account, where the voice segment is configured to match the target description information to request to receive a virtual article in the virtual article package;
A first receiving module 1705 is configured to receive the virtual articles in the virtual article package.
In an exemplary embodiment, the virtual package corresponds to a description information set, the description information set including at least two description information; the apparatus further comprises:
a first determining module 1704, configured to determine, in response to a trigger operation on the pickup interface, at least one of the description information in the description information set as the target description information;
the first display module 1701 is further configured to display the second user interface.
In an exemplary embodiment, the description information in the description information set corresponds to a sequential identification;
the first receiving module 1705 is further configured to receive a relay progress of the virtual article package sent by the server, where the relay progress includes the sequence identifier of the ith description information, and i is a positive integer;
the first determining module 1704 is further configured to determine, in response to a trigger operation on the pickup interface, an i+1st one of the description information sets as the target description information according to the sequential identifier of the i-th one of the description information.
In an exemplary embodiment, the target description information includes: at least one of text information, picture information, audio information, and video information;
the voice segment matching the target description information includes at least one of the following:
the first text indicated by the voice fragment is the same as the second text indicated by the target description information;
or alternatively,
the semantic similarity between the first semantic meaning indicated by the voice fragment and the second semantic meaning indicated by the target description information is greater than a threshold value;
or alternatively,
the answers indicated by the speech segments include correct answers to questions indicated by the target descriptive information.
In an exemplary embodiment, the apparatus further comprises:
a first recognition module 1707, configured to perform audio-to-text processing on the speech segment to obtain the first text; extracting a first word embedded vector of the first text, and calling a first semantic analysis model to analyze the first word embedded vector to obtain the first semantic;
the first recognition module 1707 is further configured to invoke a text extraction model to extract the second text from the target description information; extracting a second word embedded vector of the second text, and calling a second semantic analysis model to analyze the second word embedded vector to obtain the second semantic;
A first matching module 1708, configured to invoke a semantic similarity model to calculate a semantic similarity between the first semantic and the second semantic;
the first receiving module 1705 is further configured to receive the virtual articles in the virtual article package from a server in response to the semantic similarity being greater than the threshold.
In an exemplary embodiment, the apparatus further comprises:
the first recognition module is used for carrying out audio-frequency text conversion processing on the voice fragments to obtain answer texts; extracting an answer word embedding vector of the answer text;
the first recognition module is further used for extracting a question text from the target description information; extracting a question word embedding vector of the question text;
the first matching module is used for calling a question-answer model to predict whether the answer word embedding vector belongs to a correct answer of the question word embedding vector;
the first receiving module 1705 is further configured to receive, from a server, the virtual articles in the virtual article package in response to the prediction result of the question-answer model being the correct answer.
In one exemplary embodiment, the pickup interface includes: at least one of a link and a two-dimensional code.
In an exemplary embodiment, the second user interface further comprises a voice input control;
the first collection module 1702 is further configured to collect the speech segment in response to a triggering operation of the speech input control.
In an exemplary embodiment, the apparatus further comprises:
a first sending module 1706, configured to send a matching request to a server, where the matching request includes the speech segment and an identifier of the virtual article package;
a first receiving module 1705, configured to receive a virtual article package receiving result sent by the server;
the first receiving module 1705 is further configured to receive, in response to receiving the virtual article package receiving result sent by the server, the virtual articles in the virtual article package, where the virtual article package receiving result is sent after the server performs speech recognition or semantic recognition on the speech segment to obtain a recognition result and the recognition result matches the target description information.
In an exemplary embodiment, the first display module 1701 is further configured to display, on the first user interface or the third user interface, a multimedia message sent by the first user account, where the multimedia message is used to play the voice clip.
In an exemplary embodiment, the virtual package corresponds to a description information set, and the description information in the description information set has a corresponding sequence identifier;
the multimedia message is used for playing the audio data obtained by sequentially synthesizing at least one voice segment corresponding to the relayed description information according to the sequence identifier by the server.
In an exemplary embodiment, the apparatus further comprises:
a first interaction module 1709 for receiving a triggering operation on the multimedia message on the first user interface or the second user interface;
a first playing module 1703, configured to play the multimedia message according to the triggering operation.
Fig. 18 is a block diagram of an information processing apparatus provided in another exemplary embodiment of the present application. The device comprises:
the second receiving module 1801 is configured to receive a matching request sent by the first client, where the matching request includes a first user account, a voice segment, and an identifier of the virtual package;
an obtaining module 1802, configured to obtain target description information of the virtual article package according to the identifier;
and the second sending module 1803 is configured to send, to the first client, a virtual article package receiving result in response to the voice segment being matched with the target description information, where the virtual article package receiving result includes a virtual article in the virtual article package received by the first user account.
In an exemplary embodiment, the virtual item package corresponds to a description information set, the description information set including at least two description information, and the target description information includes at least one of the description information in the description information set.
In an exemplary embodiment, the description information in the description information set corresponds to a sequential identification;
the second sending module 1803 is further configured to send, in response to successful receipt of the virtual article package by at least one user account, a relay progress of the virtual article package to the first client, where the relay progress includes the sequential identifier of the ith descriptive information to which the virtual article package is relayed, and the relay progress is used to assist the first client to determine the (i+1) th descriptive information in the descriptive information set as the target descriptive information, where i is a positive integer.
In an exemplary embodiment, the target description information includes: at least one of text information, picture information, audio information, and video information; the apparatus further comprises:
a second recognition module 1804, configured to perform speech recognition on the speech segment to obtain a first text indicated by the speech segment;
The second sending module 1803 is further configured to send, to the first client, a virtual package receiving result in response to the first text indicated by the speech segment being the same as the second text indicated by the target description information;
or alternatively,
the second sending module 1803 is further configured to send, to the first client, a virtual article package receiving result in response to a semantic similarity between a first semantic indicated by the speech segment and a second semantic indicated by the target description information being greater than a threshold;
or alternatively,
the second sending module 1803 is further configured to send, to the first client, a virtual package receiving result in response to the answer indicated by the speech segment including a correct answer to the question indicated by the target description information.
The second recognition module 1804 is further configured to perform audio-to-text processing on the speech segment to obtain the first text; extracting a first word embedded vector of the first text, and calling a first semantic analysis model to analyze the first word embedded vector to obtain the first semantic;
the second recognition module 1804 is further configured to invoke a text extraction model to extract the second text from the target description information; extracting a second word embedded vector of the second text, and calling a second semantic analysis model to analyze the second word embedded vector to obtain the second semantic;
A second matching module 1806, configured to invoke a semantic similarity model to calculate a semantic similarity between the first semantic and the second semantic;
the second sending module 1803 is further configured to send, in response to the semantic similarity being greater than the threshold, a virtual package receiving result to the first client.
In an exemplary embodiment, the apparatus further comprises:
the second recognition module 1804 is further configured to perform audio-to-text processing on the speech segment to obtain an answer text; extracting an answer word embedding vector of the answer text;
the second recognition module 1804 is further configured to extract question text from the target description information; extracting a question word embedding vector of the question text;
a second matching module 1806, configured to invoke a question-answer model to predict whether the answer word embedding vector belongs to a correct answer of the question word embedding vector;
the second sending module 1803 is further configured to send a virtual package receiving result to the first client in response to the prediction result of the question-answer model being the correct answer.
In an exemplary embodiment, the description information in the description information set corresponds to a sequential identifier, and the virtual package receiving result includes a multimedia message; the apparatus further comprises:
A synthesizing module 1805, configured to sequentially synthesize at least one voice segment corresponding to the relayed description information according to the sequence identifier to obtain audio data; generating the multimedia message according to the identifier of the virtual article package and the sequence identifier corresponding to the relayed description information, wherein the multimedia message is used for playing audio data.
Fig. 19 is a block diagram of an information processing apparatus provided in another exemplary embodiment of the present application. The device comprises:
the second interaction module 1901 is configured to receive an operation instruction input by the second user account;
a second determining module 1906, configured to determine, according to the operation instruction, a description information set corresponding to a virtual package, where the description information set includes at least two pieces of description information, and the description information is used to indicate a pickup manner of the virtual package;
the second interaction module 1901 is further configured to receive parameter information of the virtual package input by the second user account, where the parameter information is used to generate the virtual package, and the virtual package carries at least one virtual article;
the second display module 1902 is configured to display a fourth user interface, where the fourth user interface displays the virtual package sent by the second user account, and the virtual package is generated according to the description information set and the parameter information.
In an exemplary embodiment, the apparatus further comprises:
the second display module 1902 is further configured to display a fifth user interface, where the fifth user interface includes a candidate descriptive information set list, and the candidate descriptive information set list includes at least one candidate descriptive information set;
the second determining module 1906 is further configured to determine at least one of the candidate descriptive information sets in the candidate descriptive information set list as the descriptive information set of the virtual package according to the operation instruction.
In an exemplary embodiment, the apparatus further comprises:
a first generation module 1903, configured to generate the candidate descriptive information set list according to a first candidate descriptive information set stored locally;
or alternatively,
the first generating module 1903 is further configured to generate the candidate description information set list according to a second candidate description information set sent by the server;
or alternatively,
the first generating module 1903 is further configured to generate the candidate description information set list according to the first candidate description information set and the second candidate description information set;
or alternatively,
a second collection module 1907, configured to collect attribute information of the second user account, where the attribute information includes at least one of object attribute information of the second user account, group attribute information of a group to which the second user account belongs, and object attribute information of other user accounts in the group except for the second user account;
The first generating module 1903 is further configured to generate a third candidate description information set according to the attribute information; and generating the candidate descriptive information set list according to the third candidate descriptive information set.
In an exemplary embodiment, the apparatus further comprises:
the second display module 1902 is further configured to display a sixth user interface, where the sixth user interface includes an editing control;
the second interaction module 1901 is further configured to obtain, according to the operation instruction, at least two pieces of description information input by the second user account, where the description information includes at least one of text information, picture information, audio information, and video information;
the second determining module 1906 is further configured to determine the at least two pieces of description information as the description information set of the virtual package.
In an exemplary embodiment, the parameter information includes: type identification and virtual item parameters;
the type identifier is used for identifying the type of the virtual article package generated at this time in at least two virtual article package types;
the virtual article parameters include: at least one of the number of virtual article packages, the total number of virtual articles, the number of virtual articles in a single virtual article package, and the division manner of the number of virtual articles in a single virtual article package.
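As an informal illustration of how such parameter information might be represented (the field and enumeration names are invented for this sketch and are not defined by this application):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class PackageType(Enum):
    FIXED = 1    # each virtual article package carries a fixed number of virtual articles
    RANDOM = 2   # the number of virtual articles per package is divided randomly

@dataclass
class VirtualPackageParams:
    type_id: PackageType               # identifies the package type among at least two types
    package_count: int                 # number of virtual article packages
    total_articles: int                # total number of virtual articles
    per_package: Optional[int] = None  # number of virtual articles in a single package, if fixed
    division_manner: str = "equal"     # how the number per package is divided ("equal" or "random")
```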
In an exemplary embodiment, the apparatus further comprises:
a third sending module 1905, configured to send a sending request of the virtual package to a server according to the description information set and the parameter information;
a third receiving module 1904, configured to receive a successful sending result sent by the server;
the second display module 1902 is further configured to display the fourth user interface in response to receiving a successful transmission result sent by the server.
In an exemplary embodiment, the second display module 1902 is further configured to display a multimedia message sent by a first user account, where the multimedia message is used to play a voice segment corresponding to the first user account.
In an exemplary embodiment, the virtual package corresponds to a description information set, and the description information in the description information set has a corresponding sequence identifier;
the multimedia message is used for playing the audio data obtained by sequentially synthesizing at least one voice segment corresponding to the relayed description information according to the sequence identifier by the server.
In an exemplary embodiment, the apparatus further comprises:
the second interaction module 1901 is further configured to receive a first trigger operation on the multimedia message;
A second playing module 1908, configured to play the multimedia message according to the first trigger operation;
or alternatively,
the second interaction module 1901 is further configured to receive a second trigger operation on the multimedia message;
a collection module 1909 for collecting the multimedia message according to the second trigger operation;
or alternatively,
the second interaction module 1901 is further configured to receive a third trigger operation on the multimedia message;
and a sharing module 1910, configured to share the multimedia message according to the third triggering operation.
Fig. 20 is a block diagram of an information processing apparatus provided in another exemplary embodiment of the present application. The device comprises:
a fourth receiving module, configured to receive a transmission request of a virtual article packet sent by a second user account, where the transmission request carries a description information set and parameter information, the description information set is used to indicate a pickup mode of the virtual article packet, the parameter information is used to generate the virtual article packet, and the virtual article packet carries at least one virtual article;
the second generation module is used for generating the virtual article package according to the description information set and the parameter information;
And the fourth sending module is used for sending the receiving interface of the virtual article package to at least one user account.
In an exemplary embodiment, the parameter information includes: type identification and virtual item parameters;
the type identifier is used for identifying the type of the virtual article package generated at this time in at least two virtual article package types;
the virtual article parameters include: at least one of the number of virtual article packages, the total number of virtual articles, the number of virtual articles in a single virtual article package, and the division manner of the number of virtual articles in a single virtual article package.
In an exemplary embodiment, the apparatus further comprises:
the fourth sending module is further configured to send a locally stored second candidate description information set to the second user account;
or alternatively,
the third acquisition module is used for acquiring attribute information of the second user account, wherein the attribute information comprises at least one of object attribute information of the second user account, group attribute information of a group to which the second user account belongs and object attribute information of other user accounts except the second user account in the group;
The second generation module is further used for generating a third candidate description information set according to the attribute information;
the fourth sending module is further configured to send the third candidate description information set to the second user account.
It should be noted that: in the information processing apparatus and the transmitting apparatus provided in the above embodiments, the division into the above functional modules is merely used as an example. In practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to perform all or part of the functions described above. In addition, the information processing apparatus provided in the foregoing embodiments and the information processing method embodiments belong to the same concept; the specific implementation process thereof is detailed in the method embodiments and is not described herein again.
Fig. 21 shows a block diagram of a terminal 2000 according to an exemplary embodiment of the present application. The terminal 2000 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 2000 may also be referred to by other names such as user device, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 2000 includes: a processor 2001 and a memory 2002.
The processor 2001 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 2001 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 2001 may also include a main processor and a coprocessor; the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 2001 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed by the display screen. In some embodiments, the processor 2001 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 2002 may include one or more computer-readable storage media, which may be non-transitory. Memory 2002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 2002 is used to store at least one instruction for execution by processor 2001 to implement the information processing methods provided by the method embodiments herein.
In some embodiments, the terminal 2000 may further optionally include: a peripheral interface 2003 and at least one peripheral. The processor 2001, memory 2002, and peripheral interface 2003 may be connected by a bus or signal line. The respective peripheral devices may be connected to the peripheral device interface 2003 through a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 2004, a touch display 2005, a camera assembly 2006, audio circuitry 2007, and a power supply 2008.
The peripheral interface 2003 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 2001 and the memory 2002. In some embodiments, the processor 2001, the memory 2002, and the peripheral interface 2003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 2001, the memory 2002, and the peripheral interface 2003 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 2004 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 2004 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 2004 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 2004 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 2004 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the World Wide Web, metropolitan area networks, intranets, the various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 2004 may also include NFC (Near Field Communication) related circuitry, which is not limited in this application.
The touch display 2005 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. The touch display 2005 also has the ability to collect touch signals at or above its surface. The touch signal may be input to the processor 2001 as a control signal for processing. At this point, the touch display 2005 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one touch display 2005, disposed on the front panel of the terminal 2000; in other embodiments, there may be at least two touch display screens 2005, disposed on different surfaces of the terminal 2000 or in a folded design; in still other embodiments, the touch display 2005 may be a flexible display disposed on a curved surface or a folded surface of the terminal 2000. Furthermore, the touch display 2005 may be arranged in an irregular, non-rectangular pattern, i.e., an irregularly shaped screen. The touch display 2005 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 2006 is used to capture images or video. Optionally, the camera assembly 2006 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 2006 may also include a flash. The flash may be a single-color temperature flash or a dual-color temperature flash. A dual-color temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
Audio circuitry 2007 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 2001 for processing, or inputting the electric signals to the radio frequency circuit 2004 for voice communication. For purposes of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different portions of the terminal 2000. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is then used to convert electrical signals from the processor 2001 or the radio frequency circuit 2004 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuit 2007 may also include a headphone jack.
Power supply 2008 is used to power the various components in terminal 2000. The power source 2008 may be alternating current, direct current, disposable battery, or rechargeable battery. When power supply 2008 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 2000 can further include one or more sensors 2009. The one or more sensors 2009 include, but are not limited to: acceleration sensor 2010, gyro sensor 2011, pressure sensor 2012, optical sensor 2013, and proximity sensor 2014.
The acceleration sensor 2010 may detect the magnitudes of accelerations on three coordinate axes of a coordinate system established with the terminal 2000. For example, the acceleration sensor 2010 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 2001 may control the touch display 2005 to perform information processing in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 2010. Acceleration sensor 2010 may also be used for gathering motion data for a game or user.
The gyro sensor 2011 may detect a body direction and a rotation angle of the terminal 2000, and the gyro sensor 2011 may collect a 3D motion of the user to the terminal 2000 in cooperation with the acceleration sensor 2010. The processor 2001 may implement the following functions based on the data collected by the gyro sensor 2011: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
Pressure sensor 2012 may be disposed at a side frame of terminal 2000 and/or an underlying layer of touch display 2005. When the pressure sensor 2012 is disposed at a side frame of the terminal 2000, a grip signal of the terminal 2000 by a user may be detected, and the processor 2001 performs left-right hand recognition or quick operation according to the grip signal collected by the pressure sensor 2012. When the pressure sensor 2012 is disposed below the touch display 2005, control of the operability control on the UI interface is achieved by the processor 2001 according to a user's pressure operation on the touch display 2005. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 2013 is used to collect the ambient light intensity. In one embodiment, the processor 2001 may control the display brightness of the touch display 2005 based on the ambient light intensity collected by the optical sensor 2013. Specifically, when the intensity of the ambient light is high, the display brightness of the touch display 2005 is turned up; when the ambient light intensity is low, the display brightness of the touch display 2005 is turned down. In another embodiment, the processor 2001 may also dynamically adjust the shooting parameters of the camera assembly 2006 based on the ambient light intensity collected by the optical sensor 2013.
A proximity sensor 2014, also referred to as a distance sensor, is typically provided at the front panel of the terminal 2000. The proximity sensor 2014 is used to collect a distance between a user and the front surface of the terminal 2000. In one embodiment, when the proximity sensor 2014 detects that the distance between the user and the front surface of the terminal 2000 becomes gradually smaller, the processor 2001 controls the touch display 2005 to switch from the bright screen state to the off screen state; when the proximity sensor 2014 detects that the distance between the user and the front surface of the terminal 2000 gradually increases, the processor 2001 controls the touch display 2005 to switch from the off-screen state to the on-screen state.
It will be appreciated by those skilled in the art that the structure shown in fig. 21 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
Referring to fig. 22, a schematic diagram of a server according to an embodiment of the present invention is shown, where the server may be used to implement the information processing method performed by the server according to the above embodiment. The server 2100 includes a central processing unit (CPU, Central Processing Unit) 2101, a system memory 2104 including a random access memory (RAM, Random Access Memory) 2102 and a read-only memory (ROM, Read-Only Memory) 2103, and a system bus 2105 connecting the system memory 2104 and the central processing unit 2101. The server 2100 also includes a basic input/output system (I/O system) 2106 to facilitate the transfer of information between the various devices within the computer, and a mass storage device 2107 for storing an operating system 2113, application programs 2114 and other program modules 2115.
The basic input/output system 2106 includes a display 2108 for displaying information and an input device 2109, such as a mouse, keyboard, or the like, for user input of information. Wherein the display 2108 and the input device 2109 are connected to the central processing unit 2101 via an input/output controller 2110 connected to a system bus 2105. The basic input/output system 2106 may also include an input/output controller 2110 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input/output controller 2110 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 2107 is connected to the central processing unit 2101 through a mass storage controller (not shown) connected to the system bus 2105. The mass storage device 2107 and its associated computer-readable media provide non-volatile storage for the server 2100. That is, the mass storage device 2107 may include a computer readable medium (not shown) such as a hard disk or CD-ROM (Compact Disc Read-Only Memory) drive.
The computer readable medium may include computer storage media and communication media without loss of generality. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory or other solid state memory technology, CD-ROM (Compact Disc Read-Only Memory), DVD (Digital Video Disc) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will recognize that the computer storage medium is not limited to the ones described above. The system memory 2104 and mass storage device 2107 described above may be referred to collectively as memory.
According to various embodiments of the present invention, the server 2100 may also operate by being connected, via a network such as the Internet, to remote computers on the network. That is, the server 2100 may be connected to the network 2112 through a network interface unit 2111 connected to the system bus 2105, or the network interface unit 2111 may be used to connect to other types of networks or remote computer systems (not shown).
The memory also includes one or more programs stored in the memory and configured to be executed by the one or more central processing units 2101. The one or more programs described above include instructions for:
receiving a matching request sent by a first client, wherein the matching request comprises a first user account, a voice fragment and an identifier of a virtual article package;
acquiring target description information of the virtual article package according to the identification;
and, in response to the voice fragment matching the target description information, sending a virtual article package receiving result to the first client, wherein the virtual article package receiving result comprises the virtual article in the virtual article package received by the first user account.
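A highly simplified, hypothetical sketch of this server-side flow is given below; the request fields and the storage, matching, and allocation helpers are assumptions made for the sketch and are passed in rather than defined here:

```python
def handle_matching_request(request: dict,
                            get_target_description,
                            matches,
                            allocate_article) -> dict:
    """Handle a matching request carrying the first user account, a voice segment,
    and the identifier of the virtual article package."""
    account = request["first_user_account"]
    voice_segment = request["voice_segment"]
    package_id = request["package_id"]

    target_description = get_target_description(package_id)   # target description information
    if matches(voice_segment, target_description):
        article = allocate_article(package_id, account)       # virtual article received by the account
        return {"status": "received", "virtual_article": article}
    return {"status": "mismatch", "hint": "voice matching failed"}
```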
The virtual article package corresponds to a description information set, the description information set comprises at least two description information, and the target description information comprises at least one description information in the description information set.
The description information in the description information set corresponds to the sequential identification;
and, in response to at least one user account successfully receiving the virtual article package, sending a relay progress of the virtual article package to the first client, wherein the relay progress comprises a sequential identification of the ith descriptive information to which the virtual article package is relayed, and the relay progress is used for assisting the first client in determining the (i+1)th descriptive information in the descriptive information set as target descriptive information, and i is a positive integer.
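For illustration, the client-side selection of the target descriptive information from the relay progress could be as simple as the following sketch (1-based sequence identifiers are assumed):

```python
def pick_target_description(description_set: list, relay_progress: int) -> str:
    """relay_progress is the sequence identifier i of the i-th (last relayed) description."""
    next_index = relay_progress        # 1-based identifier i -> 0-based index of the (i+1)th item
    if next_index >= len(description_set):
        raise ValueError("every piece of description information in the set has been relayed")
    return description_set[next_index]
```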
The target description information includes: at least one of text information, picture information, audio information and video information;
performing voice recognition on the voice fragment to obtain a first text indicated by the voice fragment;
responding to the fact that the first text indicated by the voice fragment is the same as the second text indicated by the target description information, and sending a virtual article package receiving result to the first client;
or alternatively,
responding to the fact that the semantic similarity between the first semantic indicated by the voice fragment and the second semantic indicated by the target description information is larger than a threshold value, and sending a virtual article package receiving result to the first client;
or alternatively,
and transmitting a virtual article package receiving result to the first client in response to the answer indicated by the voice segment including a correct answer to the question indicated by the target descriptive information.
Performing audio-to-text processing on the voice fragment to obtain a first text; extracting a first word embedding vector of the first text, and calling a first semantic analysis model to analyze the first word embedding vector to obtain a first semantic;
calling a text extraction model to extract a second text from the target description information; extracting a second word embedding vector of the second text, and calling a second semantic analysis model to analyze the second word embedding vector to obtain a second semantic;
Invoking a semantic similarity model to calculate semantic similarity between the first semantic and the second semantic;
and sending a virtual article package receiving result to the first client in response to the semantic similarity being greater than the threshold.
Performing audio-to-text processing on the voice fragments to obtain answer texts; extracting an answer word embedding vector of an answer text;
extracting a problem text from the target description information; extracting a question word embedding vector of a question text;
invoking a question-answer model to predict whether the answer word embedding vector belongs to a correct answer of the question word embedding vector;
and responding to the prediction result of the question-answer model as belonging to the correct answer, and sending a virtual article package receiving result to the first client.
The description information in the description information set corresponds to the sequential identification, and the virtual article package receiving result comprises a multimedia message;
and sequentially synthesizing at least one voice fragment corresponding to the relayed description information according to the sequence identifier to obtain audio data, and generating a multimedia message according to the identifier of the virtual article package and the sequence identifier corresponding to the relayed description information, wherein the multimedia message is used for playing the audio data.
Receiving a sending request of a virtual article package sent by a second user account, wherein the sending request carries a description information set and parameter information, the description information set is used for indicating a picking mode of the virtual article package, the parameter information is used for generating the virtual article package, and the virtual article package carries at least one virtual article;
Generating a virtual article package according to the description information set and the parameter information;
and sending the receiving interface of the virtual article package to at least one user account.
The parameter information includes: type identification and virtual item parameters;
the type identifier is used for identifying the type of the virtual article package generated at this time in at least two virtual article package types;
the virtual article parameters include: at least one of the number of virtual package, the total number of virtual articles, the number of virtual articles in a single virtual package, and the manner of dividing the number of virtual articles in a single virtual package.
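To illustrate one possible division manner (this application does not prescribe a specific algorithm), the sketch below splits the total number of virtual articles across the packages either evenly or randomly; the function and parameter names are hypothetical:

```python
import random

def divide_articles(total: int, package_count: int, manner: str = "equal") -> list:
    """Split `total` virtual articles across `package_count` packages (assumes total >= package_count)."""
    if manner == "equal":
        base, remainder = divmod(total, package_count)
        return [base + (1 if i < remainder else 0) for i in range(package_count)]
    # random division: every package receives at least one virtual article
    cuts = sorted(random.sample(range(1, total), package_count - 1))
    bounds = [0] + cuts + [total]
    return [bounds[i + 1] - bounds[i] for i in range(package_count)]
```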
Transmitting a locally stored second candidate descriptive information set to a second user account;
or alternatively,
acquiring attribute information of a second user account, wherein the attribute information comprises at least one of object attribute information of the second user account, group attribute information of a group to which the second user account belongs and object attribute information of other user accounts except the second user account in the group; generating a third candidate description information set according to the attribute information; and sending the third candidate descriptive information set to the second user account.
The present application also provides a computer device including a processor and a memory, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, where the at least one instruction, the at least one program, the set of codes, or the set of instructions are loaded and executed by the processor to implement an information processing method provided in any of the above-described exemplary embodiments.
The present application also provides a computer readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the information processing method provided in any of the above-described exemplary embodiments.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device performs the information processing method provided in the above-described alternative implementation.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description is merely of preferred embodiments of the present invention and is not intended to limit the present invention. Any modifications, equivalents, improvements, and the like made within the spirit and principle of the present invention shall be included within the protection scope of the present invention.

Claims (34)

1. An information processing method, characterized in that the method comprises:
displaying a first user interface, wherein the first user interface displays a receiving interface of a virtual article package; the virtual article package corresponds to a description information set, and the description information set comprises at least two pieces of description information in sequence; the description information in the description information set corresponds to a sequential identifier;
receiving a relay progress of the virtual article package, wherein the relay progress comprises the sequence identification of the ith descriptive information, and i is a positive integer;
in response to a trigger operation of the pickup interface, determining the (i+1) th descriptive information in the descriptive information set as target descriptive information according to the sequence identifier of the (i) th descriptive information;
displaying a second user interface, wherein the second user interface comprises the target description information of the virtual article package and other description information in the description information set; the target description information is determined in the description information set according to the relay progress of the virtual article package; the other descriptive information is descriptive information other than the target descriptive information in the descriptive information set; the description information is used for picking up virtual articles in the virtual article package, and one piece of description information corresponds to one virtual article;
Receiving a voice fragment input by a first user account, wherein the voice fragment is used for matching with the target description information to request to receive virtual articles in the virtual article package; the matching result of the voice fragment and the target description information is used for determining whether the first user account can acquire the virtual article in the virtual article package;
receiving the virtual article in the virtual article package under the condition that the voice fragment is successfully matched with the target description information; displaying prompt information of voice matching failure under the condition that the voice segment fails to match with the target description information;
the multimedia message sent by the first user account is displayed on the first user interface or the second user interface, the multimedia message is used for playing audio data obtained by sequentially synthesizing at least one voice segment corresponding to the relayed description information according to the sequence of the description information, the multimedia message is displayed in a client corresponding to the user account in the group chat, and the user account in the group chat comprises the first user account for receiving the virtual article package, the second user account for sending the virtual article package and other user accounts.
2. The method of claim 1, wherein the target description information comprises: at least one of text information, picture information, audio information, and video information;
the way that the voice fragment is matched with the target description information comprises the following steps:
the first text indicated by the voice fragment is the same as the second text indicated by the target description information;
or alternatively,
the semantic similarity between the first semantic meaning indicated by the voice fragment and the second semantic meaning indicated by the target description information is greater than a threshold value;
or alternatively,
the answers indicated by the speech segments include correct answers to questions indicated by the target descriptive information.
3. The method of claim 2, wherein the receiving the virtual article in the virtual article package comprises:
performing audio-to-text processing on the voice fragment to obtain the first text; extracting a first word embedding vector of the first text, and calling a first semantic analysis model to analyze the first word embedding vector to obtain the first semantic;
invoking a text extraction model to extract the second text from the target description information; extracting a second word embedding vector of the second text, and calling a second semantic analysis model to analyze the second word embedding vector to obtain the second semantic;
Invoking a semantic similarity model to calculate semantic similarity between the first semantic and the second semantic;
the virtual items in the virtual item package are received from a server in response to the semantic similarity being greater than the threshold.
4. The method of claim 2, wherein the receiving the virtual article in the virtual article package comprises:
performing audio-to-text processing on the voice fragments to obtain answer texts; extracting an answer word embedding vector of the answer text;
extracting a question text from the target description information; extracting a question word embedding vector of the question text;
invoking a question-answer model to predict whether the answer word embedding vector belongs to a correct answer of the question word embedding vector;
and receiving the virtual articles in the virtual article package from a server in response to the prediction result of the question-answer model being the correct answer.
5. The method of claim 1, wherein the pickup interface comprises: at least one of a link and a two-dimensional code.
6. The method of claim 1, wherein the second user interface further comprises a voice input control;
The receiving the voice segment input by the first user account includes:
and responding to the triggering operation of the voice input control, and collecting the voice fragments.
7. The method of claim 1, wherein the receiving the virtual article in the virtual article package comprises:
sending a matching request to a server, wherein the matching request comprises the voice fragment and the identifier of the virtual article package;
receiving the virtual article in the virtual article package in response to receiving a virtual article package receiving result sent by the server, wherein the virtual article package receiving result is obtained by the server performing voice recognition or semantic recognition on the voice fragment to obtain a recognition result, and is sent in response to the recognition result matching the target description information.
8. The method of claim 1, wherein the description information in the set of description information has a corresponding sequential identification;
the multimedia message is used for playing audio data obtained by the server sequentially synthesizing the at least one voice segment corresponding to the relayed description information according to the sequence identification.
9. The method according to claim 1, wherein the method further comprises:
Receiving, on the first user interface or the second user interface, a triggering operation on the multimedia message;
and playing the voice fragment according to the triggering operation.
10. An information processing method, characterized in that the method comprises:
responding to successful receiving of the virtual article package by at least one user account, and sending relay progress of the virtual article package to a first client; the virtual article package corresponds to a description information set, and the description information set comprises at least two pieces of description information in sequence; the description information in the description information set corresponds to a sequential identifier; the relay progress comprises the sequence identifier of the ith descriptive information to which the virtual article package is relayed, and is used for assisting the first client to determine the (i+1) th descriptive information in the descriptive information set as target descriptive information, wherein i is a positive integer;
receiving a matching request sent by a first client, wherein the matching request comprises a first user account, a voice fragment and an identifier of a virtual article package;
acquiring target description information of the virtual article package and other description information in the description information set according to the identification; the target description information is determined in the description information set according to the relay progress of the virtual article package; the other descriptive information is descriptive information other than the target descriptive information in the descriptive information set; the description information is used for picking up virtual articles in the virtual article package, and one piece of description information corresponds to one virtual article;
Responding to the fact that the voice fragment is matched with the target description information, and sending a virtual article package receiving result to the first client side under the condition that the voice fragment is successfully matched with the target description information, wherein the virtual article package receiving result comprises virtual articles in the virtual article package received by the first user account; under the condition that the matching of the voice fragment and the target description information fails, sending prompt information of voice matching failure to the first client; the matching result of the voice fragment and the target description information is used for determining whether the first user account can acquire the virtual article in the virtual article package;
the virtual article package receiving result comprises a multimedia message, wherein the multimedia message is used for playing audio data obtained by sequentially synthesizing at least one voice segment corresponding to the relayed description information according to the sequence of the description information, the multimedia message is displayed in a client corresponding to a user account in group chat, and the user account in group chat comprises the first user account for receiving the virtual article package, a second user account for sending the virtual article package and other user accounts.
11. The method of claim 10, wherein the target description information includes at least one of the description information in the set of description information.
12. The method according to claim 10 or 11, wherein the target description information comprises: at least one of text information, picture information, audio information and video information;
the responding to the voice segment matching with the target description information, sending a virtual article package receiving result to the first client, comprising:
performing voice recognition on the voice fragment to obtain a first text indicated by the voice fragment;
responding to the fact that the first text indicated by the voice fragment is the same as the second text indicated by the target description information, and sending a virtual article package receiving result to the first client;
or alternatively,
responding to the fact that the semantic similarity between the first semantic indicated by the voice fragment and the second semantic indicated by the target description information is larger than a threshold value, and sending a virtual article package receiving result to the first client;
or alternatively,
and responding to the answer indicated by the voice fragment to comprise the correct answer of the question indicated by the target description information, and sending a virtual article package receiving result to the first client.
13. The method of claim 12, wherein the sending, to the first client, a virtual package receipt result in response to the semantic similarity of the first semantic indicated by the speech segment to the second semantic indicated by the target descriptive information being greater than a threshold, comprises:
performing audio-to-text processing on the voice fragment to obtain the first text; extracting a first word embedding vector of the first text, and calling a first semantic analysis model to analyze the first word embedding vector to obtain the first semantic;
invoking a text extraction model to extract the second text from the target description information; extracting a second word embedding vector of the second text, and calling a second semantic analysis model to analyze the second word embedding vector to obtain the second semantic;
invoking a semantic similarity model to calculate semantic similarity between the first semantic and the second semantic;
and sending a virtual article package receiving result to the first client in response to the semantic similarity being greater than the threshold.
14. The method of claim 12, wherein the sending a virtual package receipt to the first client in response to the answer indicated by the speech segment including a correct answer to the question indicated by the target descriptive information comprises:
Performing audio-to-text processing on the voice fragments to obtain answer texts; extracting an answer word embedding vector of the answer text;
extracting a question text from the target description information; extracting a question word embedding vector of the question text;
invoking a question-answer model to predict whether the answer word embedding vector belongs to a correct answer of the question word embedding vector;
and responding to the prediction result of the question-answer model as belonging to the correct answer, and sending a virtual article package receiving result to the first client.
15. The method of claim 11, wherein the description information in the set of description information corresponds to a sequential identification, and the virtual package reception result includes a multimedia message; the method further comprises the steps of:
sequentially synthesizing at least one voice segment corresponding to the relayed description information according to the sequence identifier to obtain audio data;
generating the multimedia message according to the identifier of the virtual article package and the sequence identifier corresponding to the relayed description information, wherein the multimedia message is used for playing the audio data.
16. An information processing method, characterized in that the method comprises:
Receiving an operation instruction input by a second user account;
determining a description information set corresponding to the virtual article package according to the operation instruction, wherein the description information set comprises at least two pieces of description information in sequence, and the description information is used for indicating a picking mode of the virtual article package;
receiving parameter information of the virtual article package input by the second user account, wherein the parameter information is used for generating the virtual article package, and at least one virtual article is carried in the virtual article package;
displaying a fourth user interface, wherein the fourth user interface displays the virtual article package sent by the second user account, and the virtual article package is generated according to the description information set and the parameter information;
displaying a multimedia message sent by a first user account, wherein the multimedia message is used for playing audio data obtained by sequentially synthesizing at least one voice fragment corresponding to relayed description information according to the sequence of the description information, the multimedia message is displayed in a client corresponding to a user account in a group chat, and the user account in the group chat comprises the first user account for receiving the virtual article package, a second user account for sending the virtual article package and other user accounts;
Wherein, the description information in the description information set corresponds to a sequential identifier; the first user account receives the virtual article package according to target description information, and the target description information comprises the (i+1) th description information in the description information set under the condition that the relay progress of the virtual article package is the sequence identification of the (i) th description information, wherein i is a positive integer; the target description information is determined in the description information set according to the relay progress of the virtual article package; and the matching result of the voice fragment and the target description information is used for determining whether the first user account can acquire the virtual article in the virtual article package.
17. The method of claim 16, wherein the method further comprises:
displaying a fifth user interface, the fifth user interface comprising a list of candidate descriptive information sets, the list of candidate descriptive information sets comprising at least one candidate descriptive information set;
the determining the description information set corresponding to the virtual article package according to the operation instruction comprises the following steps:
and determining at least one candidate descriptive information set in the candidate descriptive information set list as the descriptive information set of the virtual article package according to the operation instruction.
18. The method of claim 17, wherein prior to displaying the fifth user interface, further comprising:
generating the candidate descriptive information set list according to a first candidate descriptive information set stored locally;
or alternatively,
generating a candidate descriptive information set list according to a second candidate descriptive information set sent by a server;
or alternatively,
generating the candidate descriptive information set list according to the first candidate descriptive information set and the second candidate descriptive information set;
or alternatively,
acquiring attribute information of the second user account, wherein the attribute information comprises at least one of object attribute information of the second user account, group attribute information of a group to which the second user account belongs and object attribute information of other user accounts except the second user account in the group; generating a third candidate descriptive information set according to the attribute information; and generating the candidate descriptive information set list according to the third candidate descriptive information set.
19. The method of claim 16, wherein the method further comprises:
displaying a sixth user interface, the sixth user interface comprising an edit control;
The determining the description information set corresponding to the virtual article package according to the operation instruction comprises the following steps:
obtaining, according to the operation instruction on the editing control, at least two pieces of description information input by the second user account, wherein the description information comprises at least one of text information, picture information, audio information and video information;
determining the at least two pieces of descriptive information as the descriptive information set of the virtual package.
20. The method according to any one of claims 16 to 19, wherein the parameter information comprises: type identification and virtual item parameters;
the type identifier is used for identifying the type of the virtual article package generated at this time in at least two virtual article package types;
the virtual article parameters include: at least one of the number of virtual article packages, the total number of virtual articles, the number of virtual articles in a single virtual article package, and the division manner of the number of virtual articles in a single virtual article package.
21. The method of any one of claims 16 to 19, wherein displaying a fourth user interface comprises:
sending a sending request of the virtual article package to a server according to the description information set and the parameter information;
And responding to the received successful sending result sent by the server, and displaying the fourth user interface.
22. The method according to any one of claims 16 to 19, further comprising:
and displaying a multimedia message sent by the first user account, wherein the multimedia message is used for playing a voice fragment corresponding to the first user account.
23. The method of claim 22, wherein the virtual package corresponds to a set of descriptive information, the descriptive information in the set of descriptive information having a corresponding sequential identification;
the multimedia message is used for playing audio data obtained by the server sequentially synthesizing the at least one voice segment corresponding to the relayed description information according to the sequence identification.
24. The method of claim 22, wherein the method further comprises:
receiving a first trigger operation on the multimedia message; playing the multimedia message according to the first triggering operation;
or alternatively,
receiving a second trigger operation on the multimedia message; collecting the multimedia message according to the second triggering operation;
or alternatively,
receiving a third trigger operation on the multimedia message; and sharing the multimedia message according to the third triggering operation.
25. An information processing method, characterized in that the method comprises:
receiving a sending request of a virtual article package sent by a second user account, wherein the sending request carries a description information set and parameter information, the description information set is used for indicating a pickup mode of the virtual article package, the parameter information is used for generating the virtual article package, and the virtual article package carries at least one virtual article; the description information set comprises at least two pieces of description information in sequence, and the description information is used for indicating the pickup mode of the virtual article package;
generating the virtual article package according to the description information set and the parameter information;
sending a pickup interface of the virtual article package to at least one user account;
sending a multimedia message sent by a first user account, wherein the multimedia message is used to play audio data obtained by sequentially synthesizing at least one voice segment corresponding to the relayed description information according to the sequence of the description information, the multimedia message is displayed in clients corresponding to user accounts in a group chat, and the user accounts in the group chat comprise the first user account that receives the virtual article package, the second user account that sends the virtual article package, and other user accounts;
wherein the description information in the description information set corresponds to a sequence identifier; the first user account receives the virtual article package according to target description information, and the target description information comprises the (i+1)-th description information in the description information set under the condition that the relay progress of the virtual article package is the sequence identifier of the i-th description information, wherein i is a positive integer; and a matching result of a voice segment and the target description information is used to determine whether the first user account can acquire the virtual article in the virtual article package.
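The indexing rule shared by claims 25, 28, 30 and 31, where a relay progress of i selects the (i+1)-th description information as the target, translates directly into code. The sketch below assumes 1-based sequence identifiers and uses an exact string comparison as a placeholder for the voice matching step; both are simplifications, and the function name pick_up is invented.

```python
from typing import List, Tuple


def pick_up(descriptions: List[str], relay_progress_i: int, transcript: str) -> Tuple[bool, str]:
    # descriptions[0] holds the 1st description information, so the (i+1)-th
    # description information sits at index i when i is the 1-based sequence
    # identifier carried by the relay progress.
    if relay_progress_i >= len(descriptions):
        raise IndexError("every description information has already been relayed")
    target = descriptions[relay_progress_i]
    matched = transcript.strip() == target.strip()  # placeholder for the real matcher
    return matched, target
```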
26. The method of claim 25, wherein the parameter information comprises: a type identifier and virtual article parameters;
the type identifier is used to identify, among at least two virtual article package types, the type of the virtual article package generated this time;
the virtual article parameters comprise at least one of: the number of virtual article packages, the total number of virtual articles, the number of virtual articles in a single virtual article package, and the manner of dividing the number of virtual articles in a single virtual article package.
27. The method of claim 25, wherein the method further comprises:
sending a locally stored second candidate description information set to the second user account;
or,
acquiring attribute information of the second user account, wherein the attribute information comprises at least one of object attribute information of the second user account, group attribute information of a group to which the second user account belongs, and object attribute information of user accounts other than the second user account in the group; generating a third candidate description information set according to the attribute information; and sending the third candidate description information set to the second user account.
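The second branch of claim 27 generates a third candidate description information set from attribute information, but the patent does not fix the generation rule. The sketch below assumes a simple template-filling rule; the templates and attribute keys are invented examples.

```python
from typing import Dict, List


def generate_candidate_descriptions(attributes: Dict[str, str]) -> List[str]:
    # Fill a few fixed templates with object/group attribute values to produce
    # a candidate description information set (an assumed rule, for illustration).
    templates = [
        "Best wishes to the {group_name} group!",
        "Happy {festival}, {nickname}!",
        "Good luck from {nickname} to everyone in {group_name}!",
    ]
    candidates: List[str] = []
    for template in templates:
        try:
            candidates.append(template.format(**attributes))
        except KeyError:
            continue  # skip templates whose attributes are missing
    return candidates
```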
28. An information processing apparatus, characterized in that the apparatus comprises:
a first display module, configured to display a first user interface, wherein the first user interface displays a pickup interface of a virtual article package; the virtual article package corresponds to a description information set, the description information set comprises at least two pieces of description information in sequence, and the description information in the description information set corresponds to a sequence identifier;
a first receiving module, configured to receive a relay progress of the virtual article package, wherein the relay progress comprises the sequence identifier of the i-th description information, and i is a positive integer;
a first determining module, configured to determine, in response to a trigger operation on the pickup interface, the (i+1)-th description information in the description information set as target description information according to the sequence identifier of the i-th description information;
the first display module is further configured to display a second user interface, wherein the second user interface comprises the target description information of the virtual article package and other description information in the description information set; the target description information is determined in the description information set according to the relay progress of the virtual article package; the other description information is description information other than the target description information in the description information set; the description information is used for picking up virtual articles in the virtual article package, and one piece of description information corresponds to one virtual article;
an acquisition module, configured to receive a voice segment input by a first user account, wherein the voice segment is used for matching with the target description information to request pickup of the virtual article in the virtual article package; the matching result of the voice segment and the target description information is used to determine whether the first user account can acquire the virtual article in the virtual article package;
the first receiving module is further configured to receive the virtual article in the virtual article package under the condition that the voice segment is successfully matched with the target description information, and to display prompt information of a voice matching failure under the condition that the voice segment fails to match the target description information;
the first display module is further configured to display, on the first user interface or the second user interface, a multimedia message sent by the first user account, wherein the multimedia message is used to play audio data obtained by sequentially synthesizing at least one voice segment corresponding to the relayed description information according to the sequence of the description information, and the multimedia message is displayed in clients corresponding to user accounts in a group chat, wherein the user accounts in the group chat comprise the first user account that receives the virtual article package, a second user account that sends the virtual article package, and other user accounts.
29. An information processing apparatus, characterized in that the apparatus comprises:
a second sending module, configured to send, in response to a virtual article package being successfully received by at least one user account, a relay progress of the virtual article package to a first client; the virtual article package corresponds to a description information set, the description information set comprises at least two pieces of description information in sequence, and the description information in the description information set corresponds to a sequence identifier; the relay progress comprises the sequence identifier of the i-th description information to which the virtual article package has been relayed, and is used for assisting the first client in determining the (i+1)-th description information in the description information set as target description information, wherein i is a positive integer;
a second receiving module, configured to receive a matching request sent by the first client, wherein the matching request comprises a first user account, a voice segment and an identifier of the virtual article package;
an acquisition module, configured to acquire the target description information of the virtual article package and other description information in the description information set according to the identifier; the target description information is determined in the description information set according to the relay progress of the virtual article package; the other description information is description information other than the target description information in the description information set; the description information is used for picking up virtual articles in the virtual article package, and one piece of description information corresponds to one virtual article;
the second sending module is further configured to send a virtual article package receiving result to the first client under the condition that the voice segment is successfully matched with the target description information, wherein the virtual article package receiving result comprises the virtual articles in the virtual article package received by the first user account; and to send prompt information of a voice matching failure to the first client under the condition that the matching of the voice segment with the target description information fails; the matching result of the voice segment and the target description information is used to determine whether the first user account can acquire the virtual article in the virtual article package;
wherein the virtual article package receiving result comprises a multimedia message, the multimedia message is used to play audio data obtained by sequentially synthesizing at least one voice segment corresponding to the relayed description information according to the sequence of the description information, the multimedia message is displayed in clients corresponding to user accounts in a group chat, and the user accounts in the group chat comprise the first user account that receives the virtual article package, a second user account that sends the virtual article package, and other user accounts.
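Claims 25 and 29 require only a matching result between the voice segment and the target description information and leave the matching technique open. One assumed stand-in is to transcribe the segment with any speech recognizer and compare the transcript to the target text against a similarity threshold, as in this sketch (the 0.85 threshold and the function name are assumptions, not values from the patent):

```python
import difflib


def match_transcript(transcript: str, target_description: str, threshold: float = 0.85) -> bool:
    # Compare the transcribed voice segment with the target description information;
    # the voice segment counts as successfully matched when the similarity ratio
    # reaches the assumed threshold.
    ratio = difflib.SequenceMatcher(None, transcript.strip(), target_description.strip()).ratio()
    return ratio >= threshold
```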
30. An information processing apparatus, characterized in that the apparatus comprises:
an interaction module, configured to receive an operation instruction input by a second user account;
a second determining module, configured to determine, according to the operation instruction, a description information set corresponding to a virtual article package, wherein the description information set comprises at least two pieces of description information in sequence, and the description information is used for indicating a pickup mode of the virtual article package;
the interaction module is further configured to receive parameter information of the virtual article package input by the second user account, wherein the parameter information is used to generate the virtual article package, and the virtual article package carries at least one virtual article;
a second display module, configured to display a fourth user interface, wherein the fourth user interface displays the virtual article package sent by the second user account, and the virtual article package is generated according to the description information set and the parameter information;
the second display module is further configured to display a multimedia message sent by a first user account, wherein the multimedia message is used to play audio data obtained by sequentially synthesizing at least one voice segment corresponding to the relayed description information according to the sequence of the description information, and the multimedia message is displayed in clients corresponding to user accounts in a group chat, wherein the user accounts in the group chat comprise the first user account that receives the virtual article package, the second user account that sends the virtual article package, and other user accounts;
wherein the description information in the description information set corresponds to a sequence identifier; the first user account receives the virtual article package according to target description information, and the target description information comprises the (i+1)-th description information in the description information set under the condition that the relay progress of the virtual article package is the sequence identifier of the i-th description information, wherein i is a positive integer; the target description information is determined in the description information set according to the relay progress of the virtual article package; and a matching result of a voice segment and the target description information is used to determine whether the first user account can acquire the virtual article in the virtual article package.
31. An information processing apparatus, characterized in that the apparatus comprises:
a fourth receiving module, configured to receive a sending request of a virtual article package sent by a second user account, wherein the sending request carries a description information set and parameter information, the description information set is used for indicating a pickup mode of the virtual article package, the parameter information is used to generate the virtual article package, and the virtual article package carries at least one virtual article; the description information set comprises at least two pieces of description information in sequence, and the description information is used for indicating the pickup mode of the virtual article package;
a second generation module, configured to generate the virtual article package according to the description information set and the parameter information;
a fourth sending module, configured to send a pickup interface of the virtual article package to at least one user account;
the fourth sending module is further configured to send a multimedia message sent by a first user account, wherein the multimedia message is used to play audio data obtained by sequentially synthesizing at least one voice segment corresponding to the relayed description information according to the sequence of the description information, and the multimedia message is displayed in clients corresponding to user accounts in a group chat, wherein the user accounts in the group chat comprise the first user account that receives the virtual article package, the second user account that sends the virtual article package, and other user accounts;
wherein the description information in the description information set corresponds to a sequence identifier; the first user account receives the virtual article package according to target description information, and the target description information comprises the (i+1)-th description information in the description information set under the condition that the relay progress of the virtual article package is the sequence identifier of the i-th description information, wherein i is a positive integer; and a matching result of a voice segment and the target description information is used to determine whether the first user account can acquire the virtual article in the virtual article package.
32. An information processing system, wherein the system comprises: a first client, a server connected to the first client through a wired network or a wireless network, and a second client connected to the server through the wired network or the wireless network;
the first client comprises the information processing apparatus according to claim 28;
the server comprises the information processing apparatus according to claim 29 or 31;
the second client comprises the information processing apparatus according to claim 30.
33. A computer device comprising a processor and a memory, wherein the memory has stored therein at least one program that is loaded and executed by the processor to implement the information processing method of any of claims 1 to 27.
34. A computer readable storage medium having stored therein at least one program loaded and executed by a processor to implement the information processing method according to any one of claims 1 to 27.
CN202010593270.3A 2020-06-26 2020-06-26 Information processing method, device, system, computer equipment and storage medium Active CN111582862B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010593270.3A CN111582862B (en) 2020-06-26 2020-06-26 Information processing method, device, system, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111582862A (en) 2020-08-25
CN111582862B (en) 2023-06-27

Family

ID=72114662

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010593270.3A Active CN111582862B (en) 2020-06-26 2020-06-26 Information processing method, device, system, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111582862B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111966441A (en) * 2020-08-27 2020-11-20 腾讯科技(深圳)有限公司 Information processing method and device based on virtual resources, electronic equipment and medium
CN112231577B (en) * 2020-11-06 2022-06-03 重庆理工大学 Recommendation method fusing text semantic vector and neural collaborative filtering
CN112364144B (en) * 2020-11-26 2024-03-01 北京汇钧科技有限公司 Interaction method, device, equipment and computer readable medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107426275B (en) * 2017-04-14 2020-08-21 阿里巴巴集团控股有限公司 Resource transmission method and device

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105610544A (en) * 2015-12-18 2016-05-25 福建星海通信科技有限公司 Voice data transmission method and device
WO2017152788A1 (en) * 2016-03-11 2017-09-14 阿里巴巴集团控股有限公司 Resource allocation method and device
CN107808282A (en) * 2016-09-09 2018-03-16 腾讯科技(深圳)有限公司 Virtual objects packet transmission method and device
CN107657471A (en) * 2016-09-22 2018-02-02 腾讯科技(北京)有限公司 A kind of methods of exhibiting of virtual resource, client and plug-in unit
CN108011905A (en) * 2016-10-27 2018-05-08 财付通支付科技有限公司 Virtual objects packet transmission method, method of reseptance, apparatus and system
WO2018108035A1 (en) * 2016-12-13 2018-06-21 腾讯科技(深圳)有限公司 Information processing and virtual resource exchange method, apparatus, and device
CN106845958A (en) * 2017-01-07 2017-06-13 上海洪洋通信科技有限公司 A kind of interactive red packet distribution method and system
CN107171933A (en) * 2017-04-28 2017-09-15 北京小米移动软件有限公司 Virtual objects packet transmission method, method of reseptance, apparatus and system
CN107492034A (en) * 2017-08-24 2017-12-19 维沃移动通信有限公司 A kind of resource transfers method, server, receiving terminal and transmission terminal
CN108305057A (en) * 2018-01-22 2018-07-20 平安科技(深圳)有限公司 Dispensing apparatus, method and the computer readable storage medium of electronics red packet
CN110084579A (en) * 2018-01-26 2019-08-02 百度在线网络技术(北京)有限公司 Method for processing resource, device and system
CN108401079A (en) * 2018-02-11 2018-08-14 贵阳朗玛信息技术股份有限公司 A kind of method and device for robbing red packet by voice in IVR platforms
CN109727004A (en) * 2018-03-07 2019-05-07 中国平安人寿保险股份有限公司 Distributing method, user equipment, storage medium and the device of electronics red packet
WO2020000766A1 (en) * 2018-06-29 2020-01-02 北京金山安全软件有限公司 Blockchain red packet processing method and apparatus, and electronic device and medium
CN110152307A (en) * 2018-07-17 2019-08-23 腾讯科技(深圳)有限公司 Virtual objects distribution method, device and storage medium
CN110288328A (en) * 2019-06-25 2019-09-27 腾讯科技(深圳)有限公司 Virtual objects sending method, method of reseptance, device, equipment and storage medium
CN110675133A (en) * 2019-09-30 2020-01-10 北京金山安全软件有限公司 Red packet robbing method and device, electronic equipment and readable storage medium
CN110728558A (en) * 2019-10-16 2020-01-24 腾讯科技(深圳)有限公司 Virtual article package sending method, device, equipment and storage medium
CN111031174A (en) * 2019-11-29 2020-04-17 维沃移动通信有限公司 Virtual article transmission method and electronic equipment
CN111050222A (en) * 2019-12-05 2020-04-21 腾讯科技(深圳)有限公司 Virtual article issuing method, device and storage medium
CN111126980A (en) * 2019-12-30 2020-05-08 腾讯科技(深圳)有限公司 Virtual article sending method, processing method, device, equipment and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ba Zhichao; Li Gang; Mao Jin; Xu Jian. Analysis of the network structure, behavior and evolution of information exchange within WeChat groups: a conversation analysis perspective. 情报学报 (Journal of the China Society for Scientific and Technical Information), (10), 43-55. *

Also Published As

Publication number Publication date
CN111582862A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
CN108304441B (en) Network resource recommendation method and device, electronic equipment, server and storage medium
CN111582862B (en) Information processing method, device, system, computer equipment and storage medium
CN111031386B (en) Video dubbing method and device based on voice synthesis, computer equipment and medium
CN108270794B (en) Content distribution method, device and readable medium
CN112511850B (en) Wheat connecting method, live broadcast display device, equipment and storage medium
CN112749956A (en) Information processing method, device and equipment
CN112261481B (en) Interactive video creating method, device and equipment and readable storage medium
CN111359209B (en) Video playing method and device and terminal
CN109640125A (en) Video content processing method, device, server and storage medium
CN111368127B (en) Image processing method, image processing device, computer equipment and storage medium
CN112115282A (en) Question answering method, device, equipment and storage medium based on search
CN112068762A (en) Interface display method, device, equipment and medium of application program
CN111935516B (en) Audio file playing method, device, terminal, server and storage medium
CN111581958A (en) Conversation state determining method and device, computer equipment and storage medium
CN111339938A (en) Information interaction method, device, equipment and storage medium
CN112188228A (en) Live broadcast method and device, computer readable storage medium and electronic equipment
CN111031391A (en) Video dubbing method, device, server, terminal and storage medium
CN111432245B (en) Multimedia information playing control method, device, equipment and storage medium
CN110493635B (en) Video playing method and device and terminal
CN110099360A (en) Voice message processing method and device
CN111949116B (en) Method, device, terminal and system for picking up virtual article package and sending method
CN111835621A (en) Session message processing method and device, computer equipment and readable storage medium
CN110798327A (en) Message processing method, device and storage medium
CN111131867B (en) Song singing method, device, terminal and storage medium
CN112069350A (en) Song recommendation method, device, equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40027972; Country of ref document: HK)
GR01 Patent grant