CN111598651A - Item donation system, item donation method, item donation device, item donation equipment and item donation medium


Info

Publication number
CN111598651A
Authority
CN
China
Prior art keywords
donation
information
client
server
donated
Prior art date
Legal status
Pending
Application number
CN202010350823.2A
Other languages
Chinese (zh)
Inventor
陈春勇
高萌
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010350823.2A
Publication of CN111598651A
Status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/26 Government or public services

Abstract

The application discloses an item donation system, method, apparatus, device and medium, belonging to the field of human-computer interaction. The system comprises a first client, a server and a second client. The first client is used for collecting a live broadcast stream corresponding to a live donation process, the live broadcast stream including donation demand information of an audio and video type. The server is used for calling an information identification model to identify donation index information corresponding to the donation demand information and sending a search request to an e-commerce platform according to the donation index information; generating a first donation list according to a search result of the e-commerce platform; sending the first donation list to the first client; and, in response to receiving a confirmation request sent by the first client, sending a second donation list to at least one second client. The second client is used for donating the donation to the first client. Corresponding donations are thus automatically matched for the donated object, which simplifies the donation process.

Description

Item donation system, item donation method, item donation device, item donation equipment and item donation medium
Technical Field
The present application relates to the field of human-computer interaction, and in particular, to a system, a method, an apparatus, a device, and a medium for donation of goods.
Background
With the development of Internet technology, traditional offline public welfare donation activities are gradually moving online, and donors can donate goods to donated objects (recipients) through online donation.
Taking a public welfare activity in live broadcast form as an example, the donor expresses support by donating virtual items provided in the live broadcast room to the donated object; the live broadcast platform calculates the cash amount represented by the virtual items donated by the donor, uses that amount to purchase the items the donated object needs, and then provides the items to the corresponding donated object.
In this situation, the platform needs to set up in advance virtual items corresponding to the items the donated objects need. Because the demands of different donated objects differ, the donation process becomes rather cumbersome.
Disclosure of Invention
The embodiment of the application provides a system, a method, a device, equipment and a medium for donation of goods, wherein the donation process is simplified by identifying donated objects in a live broadcast stream and automatically matching corresponding donated goods for the donated objects. The technical scheme is as follows:
according to one aspect of the present application, there is provided an item donation system, the system including: the system comprises a first client, a server and a second client, wherein the server is respectively connected with the first client and the second client through a network;
the first client is used for collecting a live broadcast stream corresponding to a live broadcast donation process, wherein the live broadcast stream comprises donation demand information of an audio and video type;
the server is used for calling an information identification model to identify donation index information corresponding to the donation demand information and sending a search request to an electronic commerce platform according to the donation index information, wherein the information identification model is a machine learning model with a donation index information identification function;
the server is used for generating a first donation list according to the search result of the electronic commerce platform, wherein the first donation list comprises at least one purchase link of donations searched according to the donation index information;
the server is used for sending the first donation list to the first client;
the server is used for responding to the confirmation request sent by the first client, and sending a second donation list to at least one second client, wherein the second donation list is a subset of the first donation list;
the second client is configured to select at least one purchase link of the donation from the second donation list, and donate the donation to the first client.
According to another aspect of the present application, there is provided a method for donation of an item, the method being applied to a first client, the method including:
displaying a live broadcast stream collected in a live broadcast donation process, wherein a picture of the live broadcast stream comprises a donation object, and the live broadcast stream comprises donation demand information of an audio and video type;
displaying at least one donation matched with the donation demand information in response to receiving a first donation list sent by a server;
in response to receiving a confirmation operation on the first donation list, displaying a second donation list, the second donation list including the confirmed donations;
generating feedback information in response to receiving a receiving operation, the feedback information including at least one of text information, video information, audio information, and image information, the receiving operation being configured to receive the donation from the second client.
According to another aspect of the present application, there is provided a method for donation of an item, the method being applied to a second client, the method including:
displaying a live broadcast stream corresponding to a donated object acquired in a live broadcast donation process, wherein the live broadcast stream comprises donation demand information of an audio and video type;
displaying a second donation list in response to the first client confirming the donation matching the donation demand information, wherein the second donation list comprises at least one purchase link of the donation;
in response to receiving a donation operation on the second donation list, donating the donation to the first client.
According to another aspect of the present application, there is provided a donation device for items, the device comprising:
the first display module is used for displaying a live broadcast stream acquired in a live broadcast donation process, wherein a picture of the live broadcast stream includes a donated object, and the live broadcast stream includes donation demand information of an audio and video type;
the first display module is used for responding to the first donation list sent by the server and displaying at least one donation matched with the donation demand information;
the first display module, configured to display a second donation list in response to receiving a confirmation operation on the first donation list, the second donation list including the confirmed donations;
the generating module is used for generating feedback information in response to receiving a receiving operation, wherein the feedback information includes at least one of text information, video information, audio information and image information, and the receiving operation is used for receiving the donations donated by the second client.
According to another aspect of the present application, there is provided a donation device for items, the device comprising:
the second display module is used for displaying a live broadcast stream corresponding to the donated object acquired in the live broadcast donation process, wherein the live broadcast stream comprises donation demand information of an audio and video type;
the second display module is configured to display a second donation list in response to the first client confirming the donation matching the donation demand information, where the second donation list includes at least one purchase link of the donation;
a sending module, configured to donate the donation to the first client in response to receiving the donation operation on the second donation list.
According to another aspect of the present application, there is provided a computer device comprising: a processor and a memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement a method of donation of items as described above.
According to another aspect of the present application, a computer-readable storage medium is provided, having stored thereon a computer program which, when being executed by a processor, carries out the method of donation of an item as described above.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
By identifying the donation index information corresponding to the live stream of the first client (the client of the donated object) during the live donation process, relevant donations are automatically searched on the e-commerce platform according to the donation index information. When the first client confirms that a donation is an item it needs, the second client can donate that donation to the first client. The donations change as the donation demand information changes, so technicians do not need to set up donations corresponding to the donation demand information before the live donation starts, which simplifies the donation process. Meanwhile, donations are made only after confirmation by the first client, which avoids the waste caused by donating unnecessary goods to the donated object.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a block diagram of a method for donation of items provided by an exemplary embodiment of the present application;
FIG. 2 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 3 is a schematic view of an item donation system provided by an exemplary embodiment of the present application;
FIG. 4 is a flow chart of a method of donation of items provided by an exemplary embodiment of the present application;
FIG. 5 is a flow chart of a method of donation of items provided by another exemplary embodiment of the present application;
FIG. 6 is a schematic view of a live interface provided by an exemplary embodiment of the present application;
FIG. 7 is a schematic view of a live interface provided by another exemplary embodiment of the present application;
FIG. 8 is a diagrammatic illustration of an interface for a donation amount provided by an exemplary embodiment of the present application;
FIG. 9 is a diagram of an information recognition model provided by an exemplary embodiment of the present application identifying donation requirement information of an audio type;
FIG. 10 is a schematic illustration of audio frame segmentation provided by an exemplary embodiment of the present application;
FIG. 11 is a diagrammatic illustration of an interface for feedback information sent by donated objects as provided by an exemplary embodiment of the present application;
FIG. 12 is a schematic diagram of a blockchain system applied to a distributed system according to an exemplary embodiment of the present application;
FIG. 13 is a block structure diagram provided by an exemplary embodiment of the present application;
FIG. 14 is a flow chart of a method for donation of an item in conjunction with a first client as provided by an exemplary embodiment of the present application;
FIG. 15 is a schematic view of a live interface incorporating scene recognition provided by an exemplary embodiment of the present application;
FIG. 16 is a flow chart of a method for donation of an item in conjunction with a second client as provided by an exemplary embodiment of the present application;
FIG. 17 is a block diagram illustrating the process of identifying audio-type donation requirement information according to an exemplary embodiment of the present application;
FIG. 18 is a block diagram of a process for identifying donation requirement information for image types provided by an exemplary embodiment of the present application;
FIG. 19 is a block diagram of a donation device for items provided by an exemplary embodiment of the present application;
FIG. 20 is a block diagram of a donation device for items provided by another exemplary embodiment of the present application;
FIG. 21 is a block diagram of a donation device for items in conjunction with a server as provided by an exemplary embodiment of the present application;
FIG. 22 is a block diagram of a server provided by an exemplary embodiment of the present application;
FIG. 23 is a block diagram of a computer device provided in an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are described:
blockchain (Blockchain) refers to an intelligent peer-to-peer network that uses a distributed database to identify, disseminate, and document information. The block chain technology is based on a decentralized peer-to-peer network, and combines a cryptography principle, time sequence data and a consensus mechanism by using an open source program to ensure the consistency and the persistence of each node in a distributed database, so that information can be immediately verified, traceable, difficult to tamper and incapable of being shielded, and a block chain forms a sharing system with high privacy, high efficiency and safety. Each data block in the block chain contains information of a batch of network transactions, and the information is used for verifying the validity (anti-counterfeiting) of the information and generating a next block. The blockchain may include a blockchain underlying platform, platform product services, and application service layers.
Fig. 1 illustrates a donation flow framework diagram of donations provided by an exemplary embodiment of the present application. The donated object corresponds to the first client, and the donor corresponds to the second client. The process comprises the following steps:
and step 111, starting live broadcast in the public service live broadcast room.
When the donated object starts a live broadcast in the public service live broadcast room, the server acquires the live broadcast stream from the first client and identifies the donation demand information in the audio and video frames.
The donation demand information includes two types: 1. when the donated object is an individual, the donated object speaks the donation demand information, that is, the donated object states the items needed; 2. when the donated object is an institution or a unit (such as a school), the donation demand information is determined by the scene in which the donated object is located.
The case of donation according to the two kinds of donation demand information will be explained.
1. The donation demand information is spoken by the donated objects.
In step 112a, the donated object introduces the donation demand by speaking the needed items.
In step 113a, the voice of the donated object is converted into text, and the donation index information is screened out.
The server obtains the voice of the donated object, and calls the information identification model to identify the donation index information corresponding to the voice, such as keywords of the donated items. It can be understood that there may be multiple donated objects in one live broadcast room.
2. The donation demand information is determined according to the scene in which the donated object is located.
Step 112b, identify the scene in which the donated object is located.
And the server acquires a scene image from the live stream and calls the convolutional neural network to identify a scene corresponding to the scene image.
And 113b, identifying the scene and matching the donation index information corresponding to the scene.
The server matches the donation index information corresponding to the scene. For example, if the scene corresponding to the scene image is an outdoor scene, the server automatically matches the donation index information corresponding to the outdoor scene, such as keywords related to sports equipment.
And step 114, searching the donations on the electronic commerce platform through the donation index information.
When the server obtains the donation index information, it searches the e-commerce platform for the physical items to be donated according to the donation index information.
At step 115, recommendations are made based on attributes of various aspects of the donation.
After the search results fed back by the e-commerce platform are obtained, the physical items are sorted according to their attributes, such as sales volume, positive review rate and merchant credit rating. Illustratively, the donation index information is a keyword.
In step 116, the donated subject confirms the donation information.
The server sends the first donation list to the first client; after the first client confirms which donations in the first donation list it needs, the server binds the purchase links of those donations with the donated object identifier. When the server identifies that the live stream displayed at the second client corresponds to the donated object identifier, it automatically sends the purchase links of the corresponding donations to the second client.
Step 117, extracting key images of the donations and converting the key images into the donations of the live broadcast room.
The server extracts key images of the physical items, generates icons corresponding to the donations in the live broadcast room, and generates the first donation list.
And step 118, identifying the face information or the live scene of the donated person, and binding the donation demand information with the donated object.
When the user of the second client watches the live broadcast containing the donated people, the server automatically recommends the donations matched with the donated people according to the binding relationship.
Step 119, the donor donates the donation.
The second client may donate the donation to the first client by selecting the purchase link for the donation.
The donation process is as follows:
and step 121, when the value of the donation reaches the calibration value, the server automatically places an order.
And step 122, informing the electronic commerce platform of the delivery of the goods.
And when the value of the donation donated by the second client reaches the calibrated value, the server automatically sends a purchase request to the e-commerce platform to purchase the donation.
After the donation is completed, the process of storing the donation record is as follows:
in step 123, the donated subjects receive the donation and perform feedback.
When the first client receives the physical goods, it can give feedback on the donation process, for example by making a thank-you video, recording a thank-you voice message, or writing a thank-you card or letter.
At step 124, the donation records are generated into blocks.
In some embodiments, the server generates a donation record based on the feedback information of the first client and generates a block based on the donation record.
In step 125, the blocks storing the donation records are uplink stored.
The blocks storing the donation records are stored in the blockchain, so that the donation records of the donated objects are available to the live broadcast platforms, which avoids the resource waste caused by a donated object obtaining the same donation from multiple platforms at the same time.
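A minimal sketch of how a donation record could be packed into a hash-linked block is given below; the field names and the SHA-256 hashing are illustrative assumptions, since this description only requires that each block holds a batch of records and links to its predecessor.

```python
import hashlib
import json
import time

class Block:
    """A minimal block holding a batch of donation records (illustrative only)."""

    def __init__(self, records, previous_hash):
        self.timestamp = time.time()
        self.records = records              # e.g. a list of donation-record dicts
        self.previous_hash = previous_hash  # links this block to the previous block
        self.hash = self.compute_hash()

    def compute_hash(self):
        payload = json.dumps(
            {"timestamp": self.timestamp,
             "records": self.records,
             "previous_hash": self.previous_hash},
            sort_keys=True)
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Chaining two blocks: tampering with a record in the first block changes its
# hash and breaks the link stored in the second block, which is what keeps the
# donation records traceable and difficult to tamper with.
genesis = Block([{"recipient": "school_x", "item": "schoolbag", "qty": 20}], previous_hash="0")
block_1 = Block([{"recipient": "student_a", "item": "sneakers", "qty": 2}], previous_hash=genesis.hash)
```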
The donation flow framework provided by this embodiment can automatically identify the donation demand information according to the voice of the donated object or the scene in which the donated object is located. When the donated object changes, the framework automatically switches to the items the new donated object needs, which simplifies the donation process and avoids the waste caused by donating items the donated object does not need. Meanwhile, the donation records are stored in the blockchain to ensure that the donation process is open and transparent.
Fig. 2 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 100 includes: a first terminal 120, a server 140, and a second terminal 160.
The first terminal 120 is installed and operated with an application program supporting video playback. The application may be any one of a live application and a social application. The first terminal 120 is a terminal used by the donated object, and the first terminal 120 corresponds to a first client. The first client collects the live broadcast stream corresponding to the donated object, and is further configured to send item information of items to be donated to the server 140 upon receiving a confirmation operation from the donated object; for example, if the donated object confirms that 20 bags are needed, the first client sends the item information (20 bags) to the server 140.
The first terminal 120 is connected to the server 140 through a wireless network or a wired network.
The server 140 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. Illustratively, the server 140 includes a processor 144 and a memory 142, the memory 142 further includes a receiving module 1421, a control module 1422 and a sending module 1423, the receiving module 1421 is configured to receive a request from a terminal, such as a confirmation request from the first terminal 120 or a donation request from the second terminal 160; the control module 1422 is configured to identify donation index information corresponding to the donation demand information according to the donation demand information in the live stream, and search a purchase link of a donation according to the donation index information; the transmitting module 1423 is configured to transmit item information to the terminal, such as a purchase link of the searched donation to the first terminal 120 or a second donation list to the second terminal 160. The server 140 is configured to provide a background service for the video playing application, such as providing a picture rendering service for the application. Alternatively, the server 140 undertakes primary computational work and the first and second terminals 120, 160 undertake secondary computational work; alternatively, the server 140 undertakes the secondary computing work and the first terminal 120 and the second terminal 160 undertakes the primary computing work; alternatively, the server 140, the first terminal 120, and the second terminal 160 perform cooperative computing by using a distributed computing architecture. In some embodiments, the server 140 is connected to a server of an e-commerce platform, which may be any one of a shopping platform, a second-hand transaction platform, and a group purchase platform, via a wired network or a wireless network.
The second terminal 160 is connected to the server 140 through a wireless network or a wired network.
The second terminal 160 is installed and operated with an application program supporting video playback. The application may be any one of a live application and a social application. The second terminal 160 is a terminal used by a donor, and the second terminal 160 corresponds to a second client. The second client displays the live broadcast stream corresponding to the donated object, and is further configured to receive a second donation list sent by the server 140, where the second donation list is a donation list confirmed by the first client, and when the second client receives a donation operation, donates at least one donation in the second donation list to the first client.
Optionally, the user account of the donated object and the user account of the donated person may be in a non-friend relationship, or have a temporary communication right in the application program.
Alternatively, the applications installed on the first terminal 120 and the second terminal 160 are the same, or the applications installed on the two terminals are the same type of application of different operating system platforms. The first terminal 120 may generally refer to one of a plurality of terminals, and the second terminal 160 may generally refer to one of a plurality of terminals, and this embodiment is only illustrated by the first terminal 120 and the second terminal 160. The device types of the first terminal 120 and the second terminal 160 are the same or different, and include: at least one of a smartphone, a tablet, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer. The following embodiments are illustrated with the terminal comprising a smartphone.
Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, the number of the terminals may be only one, or several tens or hundreds of the terminals, or more. The number of terminals and the type of the device are not limited in the embodiments of the present application.
Fig. 3 shows a framework diagram of the item donation system provided by an exemplary embodiment of the present application, where the system 10 includes a first client 101, a server 102 and a second client 103, the server 102 is connected to the first client 101 and the second client 103 respectively through a network, and the server may be a server 140 as shown in fig. 2.
The first client 101 is a client corresponding to a donated object, and the first client 101 is configured to collect a live stream corresponding to the donated object, where the live stream includes donation demand information of an audio/video type. In some embodiments, the first client 101 is further configured to record feedback information, such as a thank you voice or thank you video, after receiving the donation, for expressing the thank you to the second client 103 for the donation.
The server 102 is configured to invoke an information identification model to identify the donation demand information corresponding to the first client, obtain donation index information corresponding to the donation demand information, and then search for donations on the e-commerce platform by using the donation index information, where the information identification model is a machine learning model with a donation index information identification function. Illustratively, the e-commerce platform includes any one of a shopping platform, a second-hand transaction platform and a group purchase platform.
The server 102 generates a first donation list according to the searched commodity information; the first donation list includes a purchase link of at least one donation. The server 102 sends the first donation list to the first client 101, and the donated object confirms which donations in the first donation list it needs. The server 102 then sends a second donation list to at least one second client 103 according to the confirmation result of the first client, the second donation list being a subset of the first donation list.
The second client 103 is configured to select a purchase link for at least one donation from the second donation list to donate the donation to the first client 101. In some embodiments, the donations from each second client 103 to the first client 101 are the same or different, or different donations of the same type.
In the item donation system provided by this embodiment, the server identifies the donation index information contained in the live stream corresponding to the donated object and matches corresponding donations for the donated object, so technicians do not need to set up corresponding donations for the donated object before the live broadcast, which simplifies the donation process. Meanwhile, because donations are made only after confirmation by the donated object, users of the second client donate items the donated object actually needs, which avoids the waste caused by donating unnecessary items.
Fig. 4 shows a flow chart of a method of donation of an item provided by an exemplary embodiment of the present application. The method is applied to a computer system 100 as shown in fig. 2. The method comprises the following steps:
step 401, a first client collects a live broadcast stream corresponding to a live broadcast donation process, where the live broadcast stream includes donation demand information of an audio and video type.
The first client is a client corresponding to the donated object, the donated object communicates with the donator in a live broadcast mode, and the donation demand information in the live broadcast stream can be donation information of a video type, donation information of an audio type or donation information of a combination of the audio type and the video type.
Step 402, the server calls an information identification model to identify donation index information corresponding to the donation demand information, and sends a search request to the e-commerce platform according to the donation index information, wherein the information identification model is a machine learning model with a donation index information identification function.
The server acquires the live broadcast stream of the first client, and calls an information identification model to identify the donation demand information in the live broadcast stream. The donation demand information may be voice information spoken by the donated object or scene information corresponding to the donated object. For example, the donated object is a student in a poor mountain area who says that a bag is needed, and the server identifies the donation demand information as voice information spoken by the donated object; for another example, the donated object is a school in a remote mountain area, the live stream shows many students exercising on a playground, the server identifies the scene corresponding to the donated object as a playground, and the donation demand information is the scene information corresponding to the donated object. In addition, when the donated object is a disabled person, the donation demand information may also be sign language information signed by the donated object, or lip language information made by the donated object; for example, deaf-mute students sign to indicate books, and the server identifies the donation demand information as the sign language information signed by the donated object.
The server calls an information identification model with a donation index information identification function to identify the donation demand information, obtaining the donation index information corresponding to the donation demand information. In some embodiments, the donation index information may be a keyword, a code representing information, or a picture.
Illustratively, the keyword may be a key field spoken by the donated object about a certain item, such as the donated object saying "stationery", or the donated object saying "pencil".
Illustratively, the code representing the information may be a bar code of the merchandise, such as a book containing a bar code held by the donated object, the bar code being recognized by the server, and the bar code being used to search the electronic commerce platform for the book name corresponding to the bar code.
Illustratively, the picture is a picture with commodity characteristic information, such as that the donated object holds a pen, the server identifies that the object held by the donated object is the pen, and the pen is searched on the e-commerce platform.
Taking the donation index information as a keyword as an example, if the donated object says "needs one bag", the server calls the information identification model to identify the donation index information as the keyword "bag"; or the scene information corresponding to the donated object is a playground, and the server calls the information identification model to identify that the donated object index information is the keyword 'sports shoes', sports clothes ', sports equipment' or the like.
And the server sends a search request to the electronic commerce platform according to the identified donation index information, wherein the search request carries donation index information, and the donation index information is used for searching for a purchase link of the donation.
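As a rough sketch of this step, the request below carries the donation index information as keywords; the endpoint URL, payload fields and response format are placeholders, not the API of any actual e-commerce platform.

```python
import requests

def send_search_request(keywords, platform_url="https://ecommerce.example.com/api/search"):
    """Send a search request carrying the donation index information (keywords).

    The endpoint and payload fields are hypothetical; a real e-commerce
    platform defines its own search API.
    """
    payload = {
        "keywords": keywords,  # donation index information, e.g. ["schoolbag"]
        "fields": ["item_id", "title", "price", "sales", "rating", "purchase_link"],
    }
    response = requests.post(platform_url, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()     # candidate donations, each with a purchase link
```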
In step 403, the server generates a first donation list according to the search result of the e-commerce platform, wherein the first donation list includes a purchase link of at least one donation searched according to the donation index information.
The e-commerce platform sends the search results to the server, and in some embodiments, the server sorts the search results, or the e-commerce platform sends the sorted search results to the server. The search results include a purchase link for the donation.
Illustratively, the server sorts the search results: the server sorts the purchase links of the donations according to factors such as sales volume, positive review rate, delivery speed and price, thereby generating the first donation list.
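A minimal sketch of this ranking step is shown below, assuming each search result carries sales, rating, delivery and price fields; the weights are illustrative, since the description only lists the factors without specifying how they are combined.

```python
def build_first_donation_list(search_results, top_n=10):
    """Rank candidate donations and keep the purchase links for the first donation list."""
    def score(item):
        # Higher sales and rating are favored; slower delivery and higher price are penalized.
        return (0.4 * item.get("sales", 0)
                + 0.3 * item.get("rating", 0)
                - 0.2 * item.get("delivery_days", 0)
                - 0.1 * item.get("price", 0))

    ranked = sorted(search_results, key=score, reverse=True)
    return [{"item_id": it["item_id"], "title": it["title"], "purchase_link": it["purchase_link"]}
            for it in ranked[:top_n]]
```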
At step 404, the server sends a first donation list to the first client.
And the server sends the generated first donation list to a first client corresponding to the donated object.
In response to receiving the confirmation request sent by the first client, the server sends a second donation list to at least one second client, the second donation list being a subset of the first donation list, step 405.
The donated objects confirm the required donation by the first client. Illustratively, the first donation list is displayed on a terminal used by donated subjects who, by clicking, confirm the required donations. In some embodiments, when the terminal used by the donated object is a computer device connected to an external input device, the donated object confirms the required donation through the external input device, for example, the terminal is a desktop computer connected to a mouse, and the donated object confirms the required donation by clicking the mouse. In other embodiments, the donated objects can also confirm the donations in a voice instruction manner, if the donated objects use a smart phone to carry out live broadcast, a first donation list is displayed on a display screen of the smart phone, the donated objects speak the names of the confirmed donations, the smart phone collects the voice of the donated objects and identifies the voice, and therefore the confirmation instruction of the donated objects is obtained.
The donations selected by the donated objects constitute a second donation list. And the server sends the second donation list to a second client corresponding to the user watching the live broadcast of the first client.
At step 406, the second client selects at least one purchase link for the donation from the second donation list to donate the donation to the first client.
In some embodiments, the server generates a second donation list from the donations selected by the first client, and when the second client receives the second donation list, the user of the second client selects donations from it to donate to the first client.
The user may select one or more donations. In some embodiments, the donations in the live room need to be purchased by paying virtual currency, and the user needs to purchase virtual currency by paying the currency used in the real world, and then purchase the donations using the virtual currency.
In some embodiments, the user may pay a partial amount, or the entire amount, of the price of the donation as the donation's price in the live bay. In one example, the price of a bag in the live room is one hundred dollars (virtual currency), and a user may choose to pay any amount less than one hundred dollars, such as one user paying fifty dollars, other users paying the remaining amount of dollars, multiple users completing the bag payment process together, or the user choosing to pay the full amount.
In summary, in the method provided by this embodiment, the donation index information corresponding to the live stream of the first client (the client of the donated object) is identified during the live donation process, and relevant donations are automatically searched on the e-commerce platform according to the donation index information. When the first client confirms that a donation is an item it needs, the second client can donate that donation to the first client. The donations change as the donation demand information changes, so a technician does not need to set up donations corresponding to the donation demand information before the live donation starts, which simplifies the donation process. Meanwhile, donations are made only after confirmation by the first client, which avoids the waste caused by donating unnecessary goods to the donated object.
The method of donation of items is described in conjunction with a User Interface (UI).
Fig. 5 shows a flow chart of a method of donation of an item provided by an exemplary embodiment of the present application. The method is applied to a computer system 100 as shown in fig. 2. The method comprises the following steps:
step 501, a first client collects a live broadcast stream corresponding to a live broadcast donation process, wherein the live broadcast stream comprises donation demand information of an audio and video type.
As shown in fig. 6, the live interface 20 of the live stream collected by the first client includes a live stream corresponding to the donated object, the donated object can transfer the donation demand information to the second client through the live stream, and the live stream may include voice information or scene information corresponding to the donated object. In some embodiments, comments of the user watching the live are also displayed on the live interface 20.
In step 502, the server obtains the type of donation demand information.
In some embodiments, the type of the donation demand information includes at least one of an audio type and an image type.
In step 503, the server determines an information identification model according to the type of the donation demand information.
The server calls different information identification models to identify different types of donation demand information, and illustratively, the information identification model called by the server includes at least one of the following two models: an audio information recognition model for recognizing audio information and an image information recognition model for recognizing image information. And the server determines a corresponding information identification model according to the type of the donation demand information.
For example, if the type of the donation demand information is an audio type, the audio information recognition model invoked by the server has the function of recognizing the donation index information from audio frames; if the type is an image type, the image information recognition model invoked by the server has the function of recognizing the donation index information from an image.
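A minimal dispatch sketch follows, assuming the recognition models are registered in a dictionary keyed by the type of the donation demand information; the function and key names are illustrative.

```python
def identify_donation_index(demand_type, payload, models):
    """Call the information recognition model registered for this demand type.

    `models` is assumed to map a type string to a callable model,
    e.g. {"audio": audio_recognition_model, "image": image_recognition_model}.
    """
    try:
        model = models[demand_type]
    except KeyError:
        raise ValueError(f"no information recognition model for type: {demand_type}")
    return model(payload)  # donation index information, e.g. a list of keywords
```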
Step 504, the server calls the information identification model to identify the donation demand information, and donation index information corresponding to the donation demand information is obtained, and the donation index information is used for searching for a purchase link of a donation.
The server calls an information identification model corresponding to the type of the donation demand information.
The relationship among the donated object identifier, the type of the donation demand information and the donation index information is shown in Table 1.
Table 1 (provided as an image in the original publication)
In one example, the type of the donation demand information includes an audio type.
Step 504 may be replaced with the following steps:
step 504a, in response to that the type of the donation demand information is an audio type, the server calls an information identification model to process an audio frame corresponding to the donated object identifier in the live stream, so as to obtain donation index information corresponding to the audio frame, where the donation index information is used for searching a purchase link of a donation.
In some embodiments, the information recognition model is named a speech-to-text model or an audio recognition model, and has a function of recognizing the donation index information from the audio frame, and the name of the model is not limited in the embodiments of the present application.
The donated object identifier is used for uniquely identifying the donated object, and the donated object identifier may include a character string of at least one character of numbers, letters and symbols.
The donated object identifier may be a live broadcast room identifier, such as the room number of a live broadcast room, or a donated user account, such as the live broadcast account used by the donated person when broadcasting. The live broadcast account may be an account registered by the donated person in the application program, or an account of another application program: the donated person performs an authorization operation in the live broadcast application program, authorizing the live broadcast application to use the account of the other application program. In some embodiments, when the live broadcast room includes a plurality of donated objects, the donated object identifiers are the user accounts (live broadcast accounts) of the respective donated objects; for example, a teacher conducts a public welfare donation live broadcast with two students in the live broadcast room, and when identifying the donated objects, the server identifies the user account of each student in the live broadcast room.
In some embodiments, the server invokes a feature extraction model to process the audio frame corresponding to the donated object identifier, so as to obtain a feature vector of the audio frame, where the feature extraction model is a pre-trained model and is a machine learning model with a feature extraction function. In other embodiments, the server invokes the information recognition model to extract features of the audio frames to obtain feature vectors of the audio frames. In the process of obtaining the feature vector of the audio frame, the feature vector of each audio frame can be obtained by segmenting the audio frame, and then the feature vectors corresponding to each audio frame are spliced to obtain the feature vector of the whole audio frame, or the feature vector of the whole audio frame can be directly obtained.
And the server calls the information identification model to process the feature vector of the audio frame to obtain donation index information corresponding to the audio frame.
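The audio branch described above can be sketched as follows, treating the feature-extraction model and the information recognition model as opaque callables; the frame length and all names are assumptions made for illustration.

```python
import numpy as np

FRAME_LENGTH = 400  # samples per audio frame (illustrative, e.g. 25 ms at 16 kHz)

def split_into_frames(audio_samples):
    """Segment the audio corresponding to the donated-object identifier into frames."""
    samples = np.asarray(audio_samples, dtype=np.float32)
    n_frames = len(samples) // FRAME_LENGTH
    return [samples[i * FRAME_LENGTH:(i + 1) * FRAME_LENGTH] for i in range(n_frames)]

def extract_donation_keywords(audio_samples, feature_model, recognition_model):
    """Per-frame feature vectors are extracted, spliced into one vector, and passed
    to the information recognition model, which returns donation index information."""
    frames = split_into_frames(audio_samples)
    frame_features = [feature_model(frame) for frame in frames]  # one vector per frame
    if not frame_features:
        return []
    spliced = np.concatenate(frame_features)                     # feature vector of the whole audio
    return recognition_model(spliced)                            # e.g. ["schoolbag", "stationery"]
```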
As shown in fig. 7 (a), a donated object is displayed on the live broadcast interface 24; illustratively, the voice of the donated object includes the keywords of a bag, stationery, clothes and shoes. The server invokes the information recognition model to recognize the audio frames of the donated object and generates a first donation list 25. On the first donation list 25, purchase links corresponding to a bag, stationery, clothes and shoes are displayed. Donated object identifier 26 is the donated user account used when the donated object broadcasts live.
In one example, the type of donation demand information includes an image type and the information recognition model includes a convolutional neural network.
Step 504 may be replaced with the following steps:
and step 504b, in response to that the type of the donation demand information is an image type, the server calls a convolutional neural network to identify a scene image corresponding to the donated object identifier in the live stream to obtain a scene corresponding to the scene image, and the scene image represents the scene where the donated object is located.
The information identification model may also be another machine learning model having a function of identifying donation index information from the image. The information recognition model may be formed by other neural networks, and the embodiment of the present application is described by taking a convolutional neural network as an example.
Convolutional Neural Networks (CNNs) are a class of feed-forward Neural Networks that include Convolutional computations and have a deep structure, and are constructed to mimic biological visual mechanisms and can perform supervised learning and unsupervised learning. The convolutional neural network comprises at least two neural network layers, wherein each neural network layer comprises a plurality of neurons which are arranged in a layered mode, the neurons in the same layer are not connected with one another, and information between the layers is transmitted only in one direction.
The donated object identifier is used for uniquely identifying the donated object, and the donated object identifier may include a character string of at least one character of numbers, letters and symbols. Under the image type donation demand information, the donated object identifier can be a live broadcast room identifier, such as a room number of a live broadcast room, or a unit identifier corresponding to a unit initiating live broadcast, and if the unit initiating live broadcast is an X elementary school, the donated object identifier is an official account corresponding to an X elementary school.
In some embodiments, the server invokes an image feature extraction model to process a scene image corresponding to the donated object identifier, so as to obtain a feature vector of the scene image, where the scene image is obtained from a live stream. The image feature extraction model is a model trained in advance, and is a machine learning model with an image feature extraction function. In other embodiments, the server calls the convolutional neural network to extract the features of the scene image to obtain the feature vectors of the scene image, or the server calls the information recognition model constructed by other neural networks to extract the features of the scene image to obtain the feature vectors of the scene image.
In some embodiments, before extracting the feature vectors of the scene image, the scene image needs to be preprocessed, where the preprocessing refers to performing operations such as denoising and smooth transformation on the image to enhance important features of the image.
And the server calls the convolutional neural network to process the feature vector of the scene image to obtain a scene corresponding to the scene image. The scene is a scene in which the donated object is located, for example, the scene is an indoor scene in which students are sitting in a classroom to study in a live stream, or the scene is an outdoor scene in which students are operationally active in a live stream.
In step 504c, the server matches the donation index information corresponding to the scenario, the donation index information being used to search for a purchase link for a donation.
Illustratively, the article donation system is provided with a matching scene library, donation index information corresponding to a plurality of scenes is stored in the matching scene library, and the server matches the scenes with the donation index information to obtain donation index information corresponding to the scenes. For example, if the scene in the live stream is the scene of the student learning in the classroom, the donation index information matched by the server may be keywords about stationery or keywords about teaching aids (such as blackboard, chalk, etc.).
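A minimal sketch of the matching scene library is given below, assuming the convolutional neural network is wrapped as a callable that returns a scene label; the library contents are taken from the examples in this description and are not exhaustive.

```python
# Illustrative matching scene library: scene label -> donation index keywords.
SCENE_KEYWORD_LIBRARY = {
    "classroom":  ["stationery", "blackboard", "chalk"],
    "playground": ["sports shoes", "sportswear", "sports equipment"],
    "outdoor":    ["sports shoes", "roller skates", "skipping rope", "football"],
}

def match_donation_index(scene_image, scene_classifier):
    """Classify the scene image and look up the donation index information for that scene.

    `scene_classifier` stands in for the trained convolutional neural network.
    """
    scene_label = scene_classifier(scene_image)        # e.g. "playground"
    return SCENE_KEYWORD_LIBRARY.get(scene_label, [])
```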
In step 505, the server generates a first donation list according to the search result of the e-commerce platform, wherein the first donation list comprises a purchase link of at least one donation searched according to the donation index information.
Step 505 may also be replaced with the following steps:
in step 5051, the server obtains item information of the donation according to the search result.
The server sends a search request to the e-commerce platform according to the donation index information, where the search request carries the donation index information, and the e-commerce platform searches for donations according to the donation index information and feeds the search result back to the server. The item information of the donation acquired by the server includes the price of the item, the sales volume of the item, the positive review rate of the item, the picture of the item, the distribution range of the item, the logistics transportation speed of the item, and the like. Illustratively, the server may sort the donations according to the item information.
In step 5052, the server extracts a key image corresponding to the donation from the item information, where the key image represents an attribute of the donation.
The key image refers to an image characterizing the attribute of the donation, having a donation feature, so that the user can determine the name of the item from the image. Illustratively, the server may invoke an image extraction model to extract a key image corresponding to the donation from the item information, where the image extraction model is a machine learning model with a function of extracting the key image, and the image extraction model may be a pre-trained model.
In step 5053, the server binds the key image with the purchase link of the donation to obtain a first binding relationship.
The server turns the key images and the purchase links of the donations into donations (or public welfare gifts) of the live broadcast room, and these are displayed in the donation list of the live broadcast room in the form of icons, where an icon may be the extracted key image or an item mark generated from the key image, such as a simple line drawing of the physical item.
At step 5054, the server generates a first donation list according to the first binding relationship.
The server corresponds the first binding relationship to a first donation list in which a purchase link for at least one donation corresponding to the donated object is displayed.
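One possible representation of the first binding relationship is sketched below; the field names are assumptions, and in this sketch the live-room icon simply reuses the key image.

```python
from dataclasses import dataclass

@dataclass
class DonationEntry:
    """One entry of the first donation list: a key image bound to a purchase link."""
    item_id: str
    key_image: bytes      # extracted key image characterizing the donation
    purchase_link: str
    icon: bytes           # icon shown in the live broadcast room

def build_first_binding(items):
    """items: iterable of dicts with 'item_id', 'key_image' and 'purchase_link' keys."""
    return [DonationEntry(item_id=it["item_id"],
                          key_image=it["key_image"],
                          purchase_link=it["purchase_link"],
                          icon=it["key_image"])  # simplest case: reuse the key image as the icon
            for it in items]
```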
As shown in fig. 7 (b), a plurality of students performing activities outdoors are displayed on the live broadcast interface 27, the server calls the information recognition model to recognize that the scene image corresponding to the donated object identifier is an outdoor scene, a first donation list 28 is generated, and purchase links corresponding to sports shoes, roller skates, skipping ropes and football are displayed on the first donation list 28. Donated object identification 29 is the room number of the live room or the official account number corresponding to the X school.
At step 506, the server sends a first donation list to the first client.
In step 507, in response to receiving the confirmation request sent by the first client, the server sends a second donation list to at least one second client, the second donation list being a subset of the first donation list.
Step 507 may be replaced by the following steps:
step 5071, in response to receiving the confirmation request sent by the first client, the server obtains a donated object identifier corresponding to the live broadcast stream, where the confirmation request carries at least one item identifier of the donated object, and the donated object identifier includes at least one of a donated user account and a live broadcast room identifier.
The donated object confirms the required donations through the first client. The server obtains the donated object identifier, which may be a donated user account, such as the user account used by the donated person for the live broadcast, or a user account of the donated person on another application program on which the donated person has performed an authorization operation for the live broadcast application, so that the live broadcast can be performed using that account.
As shown in fig. 6, the live interface 20 is a live interface of the first client. After selecting the required donations, the first client confirms them; for example, the shoes and the bag carrying the selection mark 22 are the confirmed donations.
Step 5072, the server binds the donated object identifier and the donated item identifier to obtain a second binding relationship.
Illustratively, when the donated object identifier is a donated user account, the second binding relationship may be that donated person A requires two pairs of athletic shoes. When the donated object identifier is a live broadcast room identifier, the second binding relationship may be that Primary School X requires 20 desks.
Step 5073, in response to the second client displaying the live stream corresponding to the donated object identifier, the server sends a second donation list to the second client according to the second binding relationship, where the second donation list includes a purchase link of at least one donation.
When the server detects that a second client is watching the live broadcast corresponding to the donated object identifier, the server sends the second donation list to that second client according to the second binding relationship; it can be understood that the server sends the second donation list to at least one second client.
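The following sketch illustrates steps 5072 and 5073 under assumed data shapes: the second binding relationship is modelled as a plain dict from donated-object identifier to confirmed item identifiers, and the first donation list as a list of dicts with an "item_id" field. All names are assumptions made for this example.

```python
# Illustrative sketch only, not the embodiment's actual implementation.
second_binding = {}   # donated_object_id -> list of confirmed item_ids

def bind_confirmation(donated_object_id, confirmed_item_ids):
    """Step 5072: bind the donated object identifier to the item identifiers."""
    second_binding[donated_object_id] = list(confirmed_item_ids)

def second_list_for_viewer(viewed_donated_object_id, first_donation_list):
    """Step 5073: when a second client watches this donated object's live
    stream, return the second donation list (a subset of the first list)."""
    confirmed = set(second_binding.get(viewed_donated_object_id, []))
    return [d for d in first_donation_list if d["item_id"] in confirmed]
```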
At step 5074, the second client displays a second donation list.
The second client displays the second donation list confirmed by the first client; the second donation list is a subset of the first donation list. In some embodiments, the donations selected by the first client directly constitute the second donation list; alternatively, the server generates a further list according to that selection and names this list the second donation list.
In step 508, the second client selects at least one purchase link for the donation from the second donation list to donate the donation to the first client.
As shown in fig. 8 (a), the live interface 30 is the live interface on the second client. A second donation list 31 is displayed on the live interface 30; the second donation list 31 is the list obtained after the confirmation of the first client. Illustratively, the user clicks a UI control on the second donation list 31 to display the amount that may be donated. For example, the UI control 32 represents that the donation is a schoolbag; when the user clicks the UI control 32, the donation amount of 88 coins is displayed, taking as an example that the money paid by the user is virtual currency circulating within the live broadcast rooms. The user may also click the random amount control 33, as shown in fig. 8 (b), to switch the donation amount from 88 to a randomly chosen amount such as 666 or 188, and may also donate the total amount of a donation at once, for example the total donation amount of the schoolbag, 860 coins. In some embodiments, the user may also enter the donation amount manually.
When the user clicks the donation control 34, the selected amount is donated to the first client. It can be understood that, to donate, the user needs to purchase the virtual currency circulating in the live broadcast room, and the donation amount is also calculated in that virtual currency.
Because the user of the second client settles the donation amount at the time of donation, the donation process further includes the following steps:
step 5081, in response to the value of the donation donated by the second client reaching the calibrated value, the server sends a purchase request to the e-commerce platform, where the purchase request carries the donated object identifier, the item identifier of the donation and the receiving address corresponding to the donated object.
In other words, in response to the total value of the donations donated by at least one second client reaching the calibrated value, the server sends a purchase request to the e-commerce platform, the purchase request carrying the donated object identifier, the item identifier of the donation and the receiving address corresponding to the donated object.
Illustratively, as shown in fig. 8, the value corresponding to the schoolbag is 860 gold coins, user a selects the schoolbag and donates 88 gold coins, user b selects the schoolbag and donates 300 gold coins, user c selects the schoolbag and donates 12 gold coins, user d selects the schoolbag and donates 460 gold coins, after user d donates, the donated value reaches the calibrated value of the schoolbag, and the server sends a purchase request to the e-commerce platform. As shown in (c) of fig. 8, the UI control 35 corresponding to the schoolbag in the second donation list 31 displayed on the second client displays the word "order placed" indicating that the schoolbag has been purchased on the e-commerce platform.
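A minimal sketch of this crowd-funded purchase step is shown below, assuming per-item virtual-coin totals and a calibrated value per item; the purchase-request call itself is left as a placeholder, and all names are illustrative.

```python
# Illustrative sketch only: accumulate donations until the calibrated value
# is reached, at which point the server would send the purchase request.
donated_totals = {}   # item_id -> coins donated so far

def record_donation(item_id: str, amount: int, calibrated_value: int) -> bool:
    """Return True when the donated total reaches the calibrated value."""
    donated_totals[item_id] = donated_totals.get(item_id, 0) + amount
    return donated_totals[item_id] >= calibrated_value

# Example from the description: 88 + 300 + 12 + 460 coins reach the 860-coin
# calibrated value of the schoolbag, so the last call returns True.
for coins in (88, 300, 12, 460):
    reached = record_donation("schoolbag", coins, calibrated_value=860)
print(reached)  # True -> send the purchase request to the e-commerce platform
```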
In some embodiments, the purchase request further includes the number of donations and the brand of donation.
Step 5082, in response to receiving the payment amount corresponding to the donation sent by the e-commerce platform, the server transfers the payment money from the account corresponding to the second client to the account corresponding to the e-commerce platform, wherein the payment amount is calculated by the e-commerce platform according to the purchase request.
In other words, in response to receiving the payment amount corresponding to the donation sent by the e-commerce platform, the server transfers the payment money from the account corresponding to the at least one second client to the account corresponding to the e-commerce platform, the payment amount being calculated by the e-commerce platform according to the purchase request.
The e-commerce platform calculates the amount of money to be paid for the purchase of the donations according to the unit price corresponding to the donations required in the purchase request and the number of the donations, and in some embodiments, the amount of money to be paid for the purchase of the donations may also be calculated by the server. The server performs a transfer operation of payment money.
Step 5083, after the money transfer is successful, the server receives a purchase order corresponding to the donation sent by the e-commerce platform, and the purchase order is used for the e-commerce platform to distribute the donation to the first client.
The e-commerce platform feeds the payment result back to the server. After the payment succeeds, the e-commerce platform sends the purchase order corresponding to the donation to the server and distributes the purchased donation to the first client according to the purchase order.
In summary, in the method provided in this embodiment, the server obtains the live stream of the first client (the client of the donated object) during the live donation process, identifies the donation demand information by calling the information identification model corresponding to the type of the donation demand information, obtains the donation index information corresponding to the donation demand information, and automatically searches the e-commerce platform for relevant donations according to the donation index information. After the first client confirms that a donation is a required item, the second client donates the donation to the first client. Even across different live donation sessions, the required items of the donated object can be obtained accurately, so a technician does not need to configure donations corresponding to the donation demand information before the live donation starts, which simplifies the donation process. Meanwhile, donations can only be made after the first client confirms, avoiding the waste caused by donating unnecessary goods to the donated object.
When the donation demand information is of an audio type, the donation index information corresponding to the donation demand information is obtained by identifying the voice of the donated object, and the server can further automatically search according to the donation index information to obtain the purchase link of the items needed by the donated object, so that the corresponding donation is accurately matched with the donated object.
When the donation demand information is of an image type, the scene where the donated object is located in the live broadcast stream is identified by calling the convolutional neural network and the classifier, donation index information corresponding to the scene is obtained, the server can further automatically search according to the donation index information, a purchase link of an article needed by the donated object is obtained, and the corresponding donation is accurately matched for the donated object.
By extracting the key image of the donation and generating the public welfare gift in the live broadcast room, the server can intelligently recommend the required items to the first client, and the first client displays the items recommended by the server (the first donation list) in a more intuitive way.
After the first client confirms, the donated object identifier and the donation demand information are bound, when the server identifies that the live stream displayed by the second client is the live stream corresponding to the donated object identifier, a second donation list corresponding to the donated object is automatically displayed on the second client, so that the second client can select donations to be donated more intuitively.
When the value of the donation donated by the second client reaches the calibrated value, the server automatically sends a purchase request to the e-commerce platform to directly purchase the needed goods of the donated object, and the donation process is simplified.
The above embodiments relate to identifying donation index information according to two types of donation demand information, including: 1. obtaining donation index information according to the donation demand information of the audio type; 2. and obtaining donation index information according to the donation demand information of the image type.
The first case will be explained: and obtaining donation index information according to the donation demand information of the audio type.
In one example, the information recognition model includes an acoustic model and a language model, and the obtaining and recognizing process of the donation index information includes the following steps:
step 1, the server matches the audio frame with a reference voice template to obtain voice information corresponding to the audio frame.
Before voice recognition starts, Voice Activity Detection (VAD) is performed on the audio frames in order to recognize and eliminate long silent periods from the voice signal stream, saving call resources without reducing service quality.
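A minimal energy-threshold VAD sketch is shown below. The embodiment does not specify which VAD algorithm is used, so the thresholding approach and all names here are assumptions made only to illustrate the idea of discarding silent frames.

```python
# Illustrative sketch only: keep frames whose mean energy exceeds a threshold.
import numpy as np

def drop_silence(frames: np.ndarray, threshold: float = 1e-3) -> np.ndarray:
    """frames: array of shape (num_frames, samples_per_frame)."""
    energy = np.mean(frames ** 2, axis=1)
    return frames[energy > threshold]
```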
The purpose of speech recognition is to convert speech into text, i.e. to input a speech signal (i.e. an audio frame), and to determine a text sequence (consisting of words or words) such that the text sequence matches the speech signal to the highest degree, which is expressed in terms of probability.
As shown in fig. 9, in the framework of the recognition process combining an acoustic model and a language model, the acoustic model is first trained using a speech database that stores a large number of reference speech templates; similarly, the language model is trained using a text database that stores a large number of word strings. The embodiment of the present application does not limit the training mode of the models.
Illustratively, the server matches the audio frame against the reference speech templates until matching speech information corresponding to the audio frame is obtained; the speech information indicates what the donated object said in the audio frame. The speech information is the result of the server converting the audio signal into a digital signal recognizable by the computer.
And 2, the server calls the language model to process the voice information to obtain a character sequence corresponding to the voice information.
The language model is used for converting the voice information into a text sequence, namely what the text corresponding to the voice information is.
And 3, the server calls the acoustic model to process the feature vector and the character sequence of the audio frame to obtain the similarity probability of the audio frame and the character sequence.
The acoustic model is used for identifying the similarity probability of the audio frame and the text sequence according to the feature vector of the audio frame and the text sequence, i.e. determining what the text corresponding to the audio frame is. The acoustic model calculates the probability of the speech signal given the text sequence, i.e. how likely this utterance is to occur.
When calculating the probability of the speech signal given a text sequence, the acoustic model needs to know the pronunciation of each character, and converts a single character (or word) into the corresponding phonemes (i.e. the elements representing its pronunciation) by using a dictionary model (Lexicon). The acoustic model also needs to know the start and end times of each phoneme, for which the audio frame needs to be divided, including the following steps:
and S1, the server divides the audio frames to obtain each segmented audio frame.
The boundary points of the phonemes are determined by Dynamic Time Warping (DTW). DTW is an algorithm that solves the problem of matching templates of different pronunciation lengths: when comparing two pieces of audio (for example, the audio frame in this embodiment and a reference speech template), the same person may utter the same content at different speeds, and different people speak the phonemes of the same word at different speeds, so the similarity of the two pieces of audio must be computed along the time axis. The DTW algorithm warps one of the two pieces of audio in time so that the two are aligned, after which the similarity between them can be calculated accurately.
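A textbook DTW sketch is given below as an illustration of this alignment step; the local-distance choice (Euclidean) and the function name are assumptions, not details taken from the embodiment.

```python
# Illustrative sketch only: classic DTW between two feature sequences.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """a, b: feature sequences of shape (len_a, dim) and (len_b, dim);
    returns the cumulative alignment cost."""
    len_a, len_b = len(a), len(b)
    cost = np.full((len_a + 1, len_b + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, len_a + 1):
        for j in range(1, len_b + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])    # local distance
            cost[i, j] = d + min(cost[i - 1, j],        # insertion
                                 cost[i, j - 1],        # deletion
                                 cost[i - 1, j - 1])    # match
    return float(cost[len_a, len_b])
```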
In some embodiments, the audio frame is divided into a number of segments, each of which is converted into a corresponding feature vector by a series of operations such as Fourier transforms. As shown in fig. 10, each frame is 25 ms long and overlaps the next frame by 15 ms (25 − 10 = 15), i.e. the frame shift is 10 ms.
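The framing described above can be sketched as follows, assuming a 16 kHz sample rate (which the embodiment does not specify); the 25 ms window and 10 ms hop come from the description.

```python
# Illustrative sketch only: 25 ms windows with a 10 ms hop (15 ms overlap).
import numpy as np

def split_into_frames(signal: np.ndarray, sample_rate: int = 16000,
                      frame_ms: int = 25, hop_ms: int = 10) -> np.ndarray:
    frame_len = int(sample_rate * frame_ms / 1000)   # 400 samples at 16 kHz
    hop_len = int(sample_rate * hop_ms / 1000)       # 160 samples at 16 kHz
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop_len):
        frames.append(signal[start:start + frame_len])
    return np.array(frames)
```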
And S2, processing each section of audio by the server to obtain a feature vector corresponding to each section of audio frame.
In some embodiments, a feature extraction model is invoked to extract feature vectors for segments of the audio frame.
And S3, the server obtains the feature vector of the audio frame according to the feature vector of each audio frame.
The feature vectors of the audio segments are spliced, taking the physiological characteristics of the human ear into account, to obtain the feature vector of the whole audio frame.
And 4, the server determines donation index information corresponding to the audio according to the similarity probability, wherein the donation index information is used for searching for a purchase link of the donation.
Illustratively, if the similarity probability between the audio frame and the text sequence is 90%, the audio frame and the text sequence may be determined to match, and the text sequence is output; the output text is the donation index information.
In summary, the donation index information corresponding to the donation demand information is identified by calling the relevant models to convert the speech of the donated object into text: the audio frame is segmented, the language model and the acoustic model are called, and the audio frame is matched against the text sequence to obtain the text corresponding to the speech of the donated object, so that the server can accurately search for the purchase link of the donation according to that text.
In one example, the convolutional neural network includes a feature extractor and a classifier, and the obtaining identification process of the donation index information includes the following steps:
and step 11, responding to the fact that the type of the donation demand information is an image type, and calling a feature extractor by the server to preprocess the scene image to obtain a joint vector of the scene image.
In some embodiments, the feature extractor includes a convolutional layer and a convergence layer. Step 11 may be replaced by the following steps:
s11, the server calls the convolution layer to preprocess the scene image to obtain a pixel block corresponding to the scene image, wherein the pixel block comprises the height, the width and the color of the scene image.
The convolution layer splits the live scene picture into 3 x 3 or 5 x 5 pixel blocks, then arranges the output values as a group of images that numerically represent the content of each region of the picture, with the axes representing height, width and color respectively, thereby obtaining a three-dimensional numerical representation of each image block.
And S12, the server calls the convergence layer to combine the pixel blocks with the sampling function to obtain the joint vector of the scene image.
The convergence layer combines the spatial dimension of the three-dimensional (or four-dimensional) image group with the sampling function to output a joint array only containing relatively important parts in the image.
And step 12, the server calls a classifier to classify the joint vector to obtain a scene corresponding to the scene image.
The classifier applies recognition rules obtained through training; feature categories are obtained through these rules, which allows the image recognition to reach a high recognition rate. Related labels and categories are thus formed, and the classification decision identifies the scene category of the live broadcast room.
In summary, the convolutional neural network includes a feature extractor and a classifier, the feature extractor is used to obtain a joint vector of the scene image, and the classifier is used to classify the joint vector to obtain a scene corresponding to the scene image, so that the convolutional neural network can accurately identify the scene corresponding to the scene image.
The feature extractor comprises a convolution layer and a convergence layer, the convolution layer is used for preprocessing a scene image to obtain a pixel block of the scene image, the convergence layer is used for combining the pixel block with a sampling function to obtain a joint vector of the scene image, so that the classifier can accurately classify the joint vector, and the accuracy of an output result is ensured.
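A minimal PyTorch sketch of the structure described above is shown below: convolution layers play the role of the feature extractor, pooling ("convergence") layers keep the relatively important parts, and a fully connected classifier outputs the scene category. The layer sizes, input resolution and the number of scene classes are illustrative assumptions, not values from the embodiment.

```python
# Illustrative sketch only: a small scene classifier.
import torch
import torch.nn as nn

class SceneClassifier(nn.Module):
    def __init__(self, num_scenes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3x3 pixel blocks
            nn.ReLU(),
            nn.MaxPool2d(2),                             # convergence layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                                # joint vector
            nn.Linear(32 * 56 * 56, num_scenes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A 224x224 scene image from the live stream -> scores for each scene class.
scores = SceneClassifier()(torch.randn(1, 3, 224, 224))
```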
In an alternative embodiment based on fig. 5, after receiving the donation, the first client may send feedback information to the server, and the server generates the donation record according to the feedback information, and the process includes the following steps:
in step 509, the first client sends feedback information to the server in response to receiving the donation, where the feedback information includes at least one of text information, video information, audio information, and image information.
As shown in fig. 11 (a), the user of the second client can view the donation records in which he or she has participated; a plurality of donation records are displayed on the donation record interface 40. A donation record is the feedback information that the first client sends to the server after receiving the donation. Illustratively, the user of the second client clicks the donation record 41 corresponding to the schoolbag, and the feedback information interface 42 shown in fig. 11 (b) is displayed, the feedback information interface 42 including the thank-you video recorded by the donated object of the first client and the written thank-you note.
Step 510, in response to receiving the feedback information, the server generates a donation record, wherein the donation record includes at least one of a live platform identifier, a donated object identifier and an item identifier of a donation.
In some embodiments, the donation record also includes the name of the donated object, the number of donations, the time of donation, and the name of the donor (among other information). In one example, the donation record generated by the server is: donor a donated twenty desks to X primary school (donated object b) on 10 April 2020. In some embodiments, the donation is donated collectively by multiple donors, and the donation record records the names (and other information) of all donors who contributed to the donation.
In summary, after the donated objects receive the entity items, the server generates the donation records by receiving the feedback information sent by the first client, so that the first client and the second client can conveniently query the donation records.
In an alternative embodiment based on fig. 5, the server may store the generated donation records into the blockchain.
The process comprises the following steps:
step 511, the server generates target data according to the donation record, wherein the target data includes at least one of a live broadcast platform identifier, a donated object identifier and an item identifier of the donation.
The server may associate the local live broadcast platform identifier, the donated object identifier and the donation record to generate the target data. The data format of the target data is a Key-Value pair (KV). The server may generate the key element of the key-value pair from the live broadcast platform identifier, generate the value element from the donated object identifier and the donation record, and associate the key element with the value element to generate the target data.
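A minimal sketch of this key-value form is shown below; the field names and the JSON encoding of the value are assumptions made only to illustrate the association of key and value elements.

```python
# Illustrative sketch only: target data as a key-value pair.
import json

def build_target_data(platform_id: str, donated_object_id: str,
                      donation_record: dict) -> dict:
    key = platform_id                       # key element: platform identifier
    value = {"donated_object_id": donated_object_id,
             "donation_record": donation_record}
    return {key: json.dumps(value, ensure_ascii=False)}

target_data = build_target_data(
    "live-platform-001", "x-primary-school",
    {"item": "desk", "quantity": 20, "time": "2020-04-10"})
```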
Step 512, the server sends the target data to a blockchain node in the blockchain network.
Taking the distributed system as a blockchain system as an example, fig. 12 is a schematic structural diagram of a distributed system 300 applied to a blockchain system according to an exemplary embodiment of the present application. The system is formed by a plurality of nodes 400 (computing devices of any form in the access network, such as servers and user terminals) and a client 500; a peer-to-peer (P2P) network is formed between the nodes, the P2P protocol being an application-layer protocol running on top of the Transmission Control Protocol (TCP). In a distributed system, any machine, such as a server or a terminal, can join and become a node; a node comprises a hardware layer, an intermediate layer, an operating system layer and an application layer.
Referring to the functions of each node in the blockchain system shown in fig. 12, the functions involved include:
1) routing, a basic function that a node has, is used to support communication between nodes.
Besides the routing function, the node may also have the following functions:
2) the application, which is deployed in the blockchain to implement specific services according to actual business requirements. It records the data involved in implementing those functions to form record data, carries a digital signature in the record data to indicate the source of the task data, and sends the record data to the other nodes in the blockchain system, so that the other nodes add the record data to a temporary block when the source and integrity of the record data are verified successfully.
For example, the services implemented by the application include:
2.1) the wallet, which provides the function of conducting transactions in electronic money, including initiating a transaction, i.e. sending the transaction record of the current transaction to the other nodes in the blockchain system; after the other nodes verify it successfully, the record data of the transaction is stored in a temporary block of the blockchain as an acknowledgement that the transaction is valid. The wallet also supports querying the electronic money remaining at an electronic money address. For example, the target data to be added (the donation record) is sent to the blockchain system, the other nodes in the blockchain system verify the transaction (i.e. the target data added to the blockchain), and the transaction is stored in the blockchain only after the other nodes verify it successfully.
2.2) the shared ledger, which provides functions such as storage, query and modification of account data. The record data of operations on the account data is sent to the other nodes in the blockchain system; after the other nodes verify its validity, the record data is stored in a temporary block as an acknowledgement that the account data is valid, and a confirmation may be sent to the node that initiated the operation. For example, the donation record is written into the shared ledger, i.e. stored into the blockchain, after passing the verification of the other nodes.
2.3) the smart contract, a computerized agreement that can enforce the terms of a contract, implemented by code deployed on the shared ledger that executes when certain conditions are met, so that transactions are completed automatically according to actual business requirements, such as querying the logistics status of goods purchased by a buyer and transferring the buyer's electronic money to the merchant's address after the buyer signs for the goods. Of course, smart contracts are not limited to contracts for executing trades; they may also execute contracts that process received information. For example, after receiving the money transferred by the server, the e-commerce platform triggers the smart contract to execute the delivery and logistics processes of the donation according to the terms of the contract, and delivers the donation to the donated object.
3) the blockchain, which comprises a series of blocks connected to one another in the chronological order of their generation; once a new block is added to the blockchain it cannot be removed, and the blocks record the data submitted by the nodes in the blockchain system. The blockchain in this application is a shareholder blockchain; for example, each donation record generated during the donation process is stored in the blockchain.
Referring to fig. 2, fig. 2 is a schematic diagram of a block structure provided in an exemplary embodiment of the present application. Each block includes the hash value of the transaction records stored in that block (the hash value of the block) and the hash value of the previous block, and the blocks are connected by these hash values to form the blockchain. A block may also include information such as a timestamp of the time the block was generated. A blockchain is essentially a decentralized database, a string of data blocks associated using cryptography; each data block contains information used to verify the validity of its content (anti-counterfeiting) and to generate the next block.
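The block structure just described can be sketched as follows; the choice of SHA-256 and the field layout are assumptions, since the embodiment does not name a particular hash algorithm.

```python
# Illustrative sketch only: blocks chained by the previous block's hash.
import hashlib
import json
import time
from dataclasses import dataclass

@dataclass
class Block:
    prev_hash: str
    record_data: dict
    timestamp: float
    block_hash: str

def make_block(prev_hash: str, record_data: dict) -> Block:
    payload = json.dumps(record_data, sort_keys=True) + prev_hash
    block_hash = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return Block(prev_hash, record_data, time.time(), block_hash)

genesis = make_block("0" * 64, {"note": "genesis"})
block_1 = make_block(genesis.block_hash, {"donation": "20 desks to X school"})
```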
Step 513, in response to the consensus node in the blockchain network agreeing on the target data, storing the target data in the blockchain.
The blockchain nodes broadcast the target data to be chained in the blockchain network; after receiving the broadcast, the consensus nodes in the blockchain network perform the consensus operation, and when the target data passes consensus, the blockchain nodes generate a data block according to the corresponding chained hash table. The main purpose of the consensus nodes is to verify the authenticity of the transmitted content and to avoid the situation in which the same donated person receives multiple donations on multiple platforms at the same time.
It is to be understood that the above embodiments may be implemented individually or in free combination.
In conclusion, storing the donation records on the blockchain makes the donation process transparent and open and hard to tamper with, and avoids the waste of resources caused by a donated object initiating multiple donations on multiple platforms within a certain period.
The method of donation of items from the perspective of the first client is described below.
Fig. 14 shows a flow chart of a method of donation of an item as provided by an exemplary embodiment of the present application. The method can be applied to the first terminal 120 or the second terminal 160 in the computer system 100 as shown in fig. 2, or other terminals in the computer system 100, the method is applied to the first client, and the method comprises the following steps:
step 1401, displaying a live broadcast stream collected in a live broadcast donation process, wherein a picture of the live broadcast stream comprises a donation object, and the live broadcast stream comprises donation demand information of an audio and video type.
The first client corresponds to a donated object. Illustratively, when the donated object is a single donated person, the first client displays the live interface 20 shown in fig. 6, where the donated object speaks the donation demand information, the donation demand information being of the audio type, e.g. the donated object needs a schoolbag. Illustratively, when the donated object is X primary school, the first client may display the live interface 50 shown in fig. 15 (a) or the live interface 53 shown in fig. 15 (b), and the donated object identifier is the live broadcast account 51 or the live broadcast account 54 corresponding to X primary school. In the live interface 50, two students playing ping-pong are shown, the donation demand information is of the image type, and the server calls the convolutional neural network to recognize that the scene corresponding to the scene image is a ping-pong scene (an outdoor scene); in the live interface 53, a teacher giving a class is shown, the donation demand information is of the image type, and the server calls the convolutional neural network to recognize that the scene corresponding to the scene image is a classroom scene (an indoor scene).
Step 1402, in response to receiving the first donation list sent by the server, displays at least one donation matching the donation demand information.
When the type of the donation demand information is an audio type, the server calls an information identification model to identify the words of the donated objects as a character sequence, the server obtains donation index information according to the character sequence, searches donations on an e-commerce platform according to the donation index information, and finally generates a first donation list 21 shown in fig. 6 according to the search result. A plurality of donations matching the donation demand information are displayed on the first donation list 21.
Step 1403, in response to receiving the confirmation operation on the first donation list, displays a second donation list, the second donation list including the confirmed donations.
The validation operation is used for the donated subjects to select their own required donations. Illustratively, the donated subjects have selected a bag and shoes, which are correspondingly displayed with selection indicia 22 for indicating which donations have been selected by the donated subjects. The selected donations constitute a second donation list. The second donation list is a subset of the first donation list, and in some embodiments, the second donation list is displayed in the first donation list as a portion of the first donation list; in other embodiments, the second donation list is additionally displayed in the live interface.
Step 1404, generating feedback information in response to receiving the receiving operation, the feedback information including at least one of text information, video information, audio information, and image information, the receiving operation being configured to receive the donation from the second client.
After the donated object receives the donation, the donated object can record thank you videos or thank you voices, and can also fill thank you greeting cards, thank you letters and the like as feedback information.
In summary, the method provided in this embodiment helps the donated object confirm the donation needed by the donated object by displaying the first donation list on the first client, so that the donated object participates in the donation process in a more intuitive manner.
The method of donation of items from the perspective of the second client is described below.
Fig. 16 shows a flow chart of a method of donation of an item provided by an exemplary embodiment of the present application. The method can be applied to the first terminal 120 or the second terminal 160 in the computer system 100 as shown in fig. 2, or other terminals in the computer system 100, and the method is applied to the second client, and the method comprises the following steps:
step 1601, displaying a live broadcast stream corresponding to the donated objects collected in the live broadcast donation process, wherein the live broadcast stream includes donation demand information of an audio and video type.
It can be understood that the live interfaces corresponding to fig. 6 and fig. 15 may also be live interfaces corresponding to the second client.
Step 1602, in response to the first client confirming the donation matching the donation demand information, displaying a second donation list, where the second donation list includes at least one purchase link of the donation.
Illustratively, after the first client confirms, the second donation list 52 shown in fig. 15 (a) is displayed, the server calls a convolutional neural network to identify that the scene is a ping-pong scene, the donation index information matched for the scene is a keyword "sports equipment", the server generates the first donation list according to the search result, and after the confirmation operation of the first client, the second donation list 52 is sent to the second client. Similarly, as shown in fig. 15 (b), when the server identifies that the scene is a classroom scene, the matching donation index information for the scene is the keyword "classroom supplies", and finally the second client displays the second donation list 55.
Step 1603, in response to receiving a donation operation on the second donation list, donates a donation to the first client.
Illustratively, the user of the second client clicks the UI control corresponding to the chalk to donate the chalk to the first client. For the donation process, refer to steps 5081 to 5083, which are not repeated here.
It is to be understood that the embodiments of the donation method for items described from the client's point of view and from the server's point of view may be implemented in combination, in which case the embodiments described in the respective points of view are associated with each other.
In summary, in the method provided in this embodiment, the second donation list is displayed on the second client, so that the donor can quickly determine the donation to the donated object, and the donor participates in the donation process in a more intuitive manner.
A method of donation of an item is illustrated, in one example: the process of identifying the index information corresponding to the donation demand information is as follows:
1. the donation demand information is audio-type donation demand information.
Taking the example that the donated object is a donated person, as shown in fig. 17, it shows a frame diagram of a donation flow process of an item provided by an exemplary embodiment of the present application, and the method includes the following steps:
step 1701, the donated object introduces and speaks the donation demand information.
After the public welfare live broadcast begins, the donated person can interact with the audience in the live broadcast room and introduce his or her own situation and demands; the item donation system automatically identifies and extracts the user's donation demand information to obtain the donation index information (such as item keywords) corresponding to the donation demand information.
Step 1702, face recognition, recording donated object information.
When the speech of the donated person is detected, the hardware device of the donation system sends the collected audio and video to the server. After receiving the audio frames, the server decompresses and transcodes them and performs similarity matching against the language model library. Finally, the feature-extracted audio data is converted into text output through the acoustic model, the dictionary model and the language model.
Step 1703, donations search.
After the text is output, the first client requests Content Distribution Network (CDN) data from the server; the server pulls big data from the cloud repository, links in AI technology, and queries donation-related content on the network against the generated text to perform similar-item matching, thereby screening out the donation index information, such as keywords in the speech, and searching according to the donation index information.
Step 1704, donation confirmation.
After the donated object confirms, a donation list is sent to the second client, the donation list including at least one purchase link of a donation.
2. The donation demand information is image-type donation demand information.
Taking the donated object as an example of a school, as shown in fig. 18, it shows a frame diagram of a donation flow of an item provided by an exemplary embodiment of the present application, and the method includes the following steps:
step 1801, the environment in which the donated object is located.
When the scene of the live stream of the live broadcast room is identified as an outdoor scene, the donations are automatically configured as items related to the corresponding outdoor sport, such as rope-skipping items like "electronic skipping rope" and "skipping rope"; when the scene of the live stream switches to a table tennis scene, the donations are automatically configured as items related to table tennis, such as a table tennis table, table tennis balls and table tennis bats; similarly, when the scene of the live stream is identified as a classroom scene, the corresponding donations are items related to learning or the classroom, such as "chalk", "desk" and "blackboard".
At step 1802, a scene image is identified.
Scene recognition is mainly realized through a convolutional neural network, exploiting the principle that adjacent pixels in the same image are strongly correlated and strongly similar: two adjacent pixels in an image are more related than two separated pixels. The scene recognition process comprises information acquisition, preprocessing, feature extraction and selection, classifier design, and classification decision.
The information acquisition means that information such as light or sound is converted into electric information through a sensor, that is, basic information of a live broadcast room scene is acquired and converted into information which can be recognized by a machine through a convolutional neural network.
The preprocessing mainly refers to operations such as denoising, smoothing and transformation in image processing, so as to strengthen important features (feature vectors) of the live broadcast images.
Feature extraction and selection refers to extracting and selecting the features required for pattern recognition. In the implementation of the convolutional neural network, the network is divided into two kinds of layers: convolution layers and convergence layers. The convolution layer splits the live-room scene picture into 3 x 3 or 5 x 5 small pixel blocks and then arranges these output values as a group of images that numerically represent the content of each region of the picture, with the axes representing height, width and color respectively, thereby obtaining a three-dimensional numerical representation of each image block. The convergence layer combines the spatial dimension of the three-dimensional (or four-dimensional) image group with the sampling function to output a joint array (joint vector) containing only the relatively important parts of the image.
Classifier design refers to obtaining recognition rules through training; feature categories are obtained through these rules, which allows the image recognition to reach a high recognition rate. Related labels and categories are thus formed, and the classification decision identifies the scene category of the live broadcast room.
Step 1803, conversion and extraction of the donation index information.
And the server calls the convolutional neural network to obtain donation index information corresponding to the scene image.
Step 1804, donations search and recommendation.
Based on the donation index information identified in the two manners above, the server sends a request to the server of the relevant e-commerce platform, for example a keyword search. The search result is combined with the comprehensive recommendation ranking of the e-commerce platform (such as price, sales volume, favorable reviews and logistics), the item with the highest comprehensive evaluation is recommended, and fields such as the purchase link and attributes are returned to the server. The server passes the data back to the client, and the donated object can see the intelligently recommended items in the live broadcast room. When the donated object sees the recommended items, the items need to be confirmed; only after confirmation do the items become donations in the live broadcast room. If the donated object is not satisfied with the recommended goods, it may reselect (repeating the above operations).
Step 1705 to step 1711 and step 1805 to step 1811 follow this principle: after the donated object confirms the donations, the binding relationship between the identification information of the donated object and the corresponding donations is established based on face recognition and scene recognition, i.e. when the item donation system recognizes the donated object appearing in the live broadcast room, the corresponding donations are displayed on the donation list; or, when the corresponding scene is recognized, the donation list preferentially matches the donations of that scene.
The technical principle of scene recognition is the same as above, and the realization principle of face recognition is as follows:
after the donated goods are confirmed by the donated object, the client monitors and recognizes facial features of the donated object in real time. The donated object is taken as an example as a donated person.
Face Recognition is mainly divided into three processes of Face Detection (FD), Feature Extraction (FE), and Face Recognition (FR).
1.1, face detection
During the live donation process, the client (the first client) detects and extracts face images from the live video stream, using Haar features and the Adaboost algorithm. Haar features reflect the grey-level changes of the image: pixels are divided into modules and difference-value features are calculated, including edge features, linear features, central features and diagonal features. The Adaboost algorithm trains a weak learner on the whole training set and then keeps learning from the errors of the previous weak learner, thereby constructing a classifier with a better classification effect. A cascade of classifiers is trained to classify each block of pixels in the image; if a rectangular region passes the cascade classifier, it is judged to be a face image. During detection, the position and scale of the detection window are continuously adjusted within the picture to find the face.
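A common way to run this Haar-plus-cascade-classifier detection is OpenCV's pretrained frontal-face cascade, shown below as a hedged illustration; the patent does not state that OpenCV is used, and the parameter values are assumptions.

```python
# Illustrative sketch only: Haar cascade face detection on a video frame.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    """frame: a BGR image taken from the live video stream."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # The detection window is scanned at multiple positions and scales,
    # matching the "adjust position and proportion" step described above.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```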
1.2, feature extraction
After the face of the donated person is detected, the expression, posture and action features of the donated person are extracted. Feature extraction means characterizing the face information with numbers, which are the features to be extracted. Common facial features fall into two categories: geometric features and characterization features. Geometric features are the geometric relationships between facial features such as the eyes, nose and mouth, for example distances, areas and angles. Characterization features extract global or local features from the grey-level information of the face image through certain algorithms. A common feature extraction algorithm is the Local Binary Pattern (LBP) algorithm. The LBP algorithm first divides the image into regions, thresholds the pixels in each neighbourhood against the centre value, and treats the result as a binary number.
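A minimal, textbook LBP sketch is given below as an illustration of that thresholding step; the 3x3 neighbourhood and the bit ordering are the conventional choices and are assumptions rather than details from the embodiment.

```python
# Illustrative sketch only: basic Local Binary Pattern over 3x3 neighbourhoods.
import numpy as np

def lbp_image(gray: np.ndarray) -> np.ndarray:
    """gray: 2-D array of grey values; returns the LBP code of each pixel."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Offsets of the 8 neighbours, walked clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            center = gray[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if gray[i + di, j + dj] >= center:
                    code |= 1 << bit
            out[i - 1, j - 1] = code
    return out
```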
1.3 face recognition
The client sends the facial feature information of the donated people to the server, and after the same face information is detected next time, donations bound by the face can be preferentially displayed.
The donation process is as follows:
when a donor (the user corresponding to the second client) wants to donate, the donor clicks the donation list to see the donations currently matched and required by the donated person, and clicks to donate. The donor can donate the full amount required for a donation at once or donate a random amount; when the donated amount of a donation reaches its calibrated price, the item donation system automatically places an order for the donation on the e-commerce platform, the server sends the donated person's name, receiving address, order price, order quantity and other information to the e-commerce platform, and the e-commerce platform completes the subsequent logistics service. The order and logistics information can be recorded in the donation record so that both parties to the donation can obtain the corresponding information in time.
The process after donation is as follows:
when the donated object is received by the donator, the thank you video can be recorded and uploaded to the completed donation record, and the subsequent donation history of the donator can be checked through the live broadcast room.
The donor receives confirmation and feedback after the donated object confirms receipt. For each completed donation record (live broadcast platform identifier, donated object identifier, donor name, donated item, donated quantity, donation time, etc.), a block is generated for uplink. After the uplink succeeds, the donated object cannot initiate another donation within a period of time, and donors can also view the historical donation information. This prevents the same donated object from obtaining the same donation on multiple platforms or within the same period.
The rules for data uplink are as follows:
1) The live broadcast platform identifier, the donated object identifier and the donation record are associated to generate the target data. The live broadcast platform identifier uniquely identifies one platform and may be a character string containing at least one of digits, letters and symbols.
The server can associate the local live broadcast platform identifier, the donated object identifier and the donation record to generate the target data. The data format of the target data is a key-value pair. The server can generate the key element of the key-value pair from the live broadcast platform identifier, generate the value element from the donated object identifier and the donation record, and associate the key element with the value element to generate the target data.
2) The target data is uploaded to a blockchain node in the blockchain network; the uploaded target data is used to instruct the blockchain node to write the target data into a data block.
The blockchain node is a data-processing node in the blockchain network and can receive and process externally transmitted data. When the blockchain node has performed a series of processing on the externally transmitted target data and obtained the data to be stored, it can send the data to be stored to the consensus nodes in the blockchain network for the consensus operation, and the target data can be written into a data block after consensus is completed. The server may upload the target data to the blockchain node through a network connection. Within a preset time period, the blockchain node may write the received target data corresponding to multiple live broadcast platform identifiers and donated object identifiers into a block together.
3) The uploaded target data is used to instruct the blockchain node to perform a hash operation on the target data, store the target data onto the hash chain corresponding to the live broadcast platform identifier in a chained hash table according to the result of the hash operation, and, after the chained hash table passes consensus, generate a block of the target data according to the chained hash table. The chained hash table contains a hash chain for each of one or more live broadcast platform identifiers. When the blockchain node stores the target data, it can store it in the form of the chained hash table; the target data corresponding to each platform identifier may be stored on the same hash chain of the chained hash table. The blockchain node can pass the key element of the target data into a hash function, which determines, by hashing, which hash chain the target data corresponds to and its specific position on that chain.
For example, a hash function is defined that maps a key value k to a position x in the chained hash table; x is called the hash code of k, expressed in functional form as h(k) = x. The purpose of this hash function is to distribute the key elements as evenly and randomly as possible across the chained hash table.
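A minimal sketch of such a hash function h(k) = x is shown below; the use of SHA-256 followed by a modulo over the number of chains is an illustrative choice, and the number of chains is an assumption.

```python
# Illustrative sketch only: place target data on a chain of the chained
# hash table according to the hash of its key element.
import hashlib

def chain_index(key: str, num_chains: int) -> int:
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_chains   # h(k) = x

chained_hash_table = [[] for _ in range(8)]   # 8 hash chains, one per chain index

def store_target_data(key: str, target_data: dict) -> None:
    x = chain_index(key, len(chained_hash_table))
    chained_hash_table[x].append(target_data)
```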
4) The blockchain nodes broadcast the target data to be chained in the blockchain network; after receiving the broadcast, the consensus nodes in the blockchain network perform the consensus operation, and when the target data passes consensus, the blockchain nodes generate a data block according to the corresponding chained hash table. The main purpose of the consensus nodes is to verify the authenticity of the transmitted content and to avoid the situation in which the same donated object obtains multiple identical donations on multiple live broadcast platforms at the same time.
In conclusion, with the help of voice recognition, face recognition, image recognition and blockchain technology, the donations needed by donated objects are matched for them in a live broadcast manner; when the donated object changes, the item donation system automatically recognizes the change and replaces the donations in the live broadcast room, which simplifies the donation process. Meanwhile, completed donation records are stored in the blockchain, which prevents a donated object from obtaining the same donation repeatedly within the same period and makes the public information of the public welfare donation transparent.
The following are embodiments of an apparatus of the present application that may be used to perform embodiments of the methods of the present application. For details which are not disclosed in the device embodiments of the present application, reference is made to the method embodiments of the present application.
Fig. 19 shows a block diagram of a donation device for items according to an embodiment of the present application, the device including:
a first display module 1910, configured to display a live stream acquired in a live donation process, where a picture of the live stream includes a donated object, and the live stream includes donation demand information of an audio and video type;
the first display module 1910 is configured to display at least one donation matching the donation demand information in response to receiving the first donation list sent by the server;
the first display module 1910 configured to display a second donation list in response to receiving a confirmation operation on the first donation list, the second donation list including the confirmed donations;
the generating module 1920 is configured to generate feedback information in response to receiving the receiving operation, where the feedback information includes at least one of text information, video information, audio information, and image information, and the receiving operation is configured to receive the donation from the second client.
Fig. 20 shows a block diagram of a donation device for items provided by an exemplary embodiment of the present application, the device including:
a second display module 2010, configured to display a live stream corresponding to a donated object acquired in a live donation process, where the live stream includes donation demand information of an audio and video type;
the second display module 2010, configured to display a second donation list in response to the first client confirming the donation matching the donation demand information, where the second donation list includes at least one purchase link of the donation;
a sending module 2020 for, in response to receiving the donation operation on the second donation list, donating the donation to the first client.
Fig. 21 shows a block diagram of a donation device for items provided by an exemplary embodiment of the present application, the device including:
the information identification model 2110 is used for identifying donation index information corresponding to the donation demand information and sending a search request to the e-commerce platform according to the donation index information, and the information identification model 2110 is a machine learning model with a donation index information identification function;
a generating module 2120, configured to generate a first donation list according to a search result of the e-commerce platform, where the first donation list includes a purchase link of at least one donation searched according to the donation index information;
a sending module 2130, configured to send a first donation list to a first client;
the sending module 2130 is configured to send, to the at least one second client, a second donation list in response to receiving the confirmation request sent by the first client, where the second donation list is a subset of the first donation list.
In an optional embodiment, the apparatus further comprises an obtaining module 2140 and a processing module 2150;
the obtaining module 2140 is configured to obtain a type of the donation demand information;
the processing module 2150, configured to determine the information identification model 2110 according to the type of the donation demand information; and calling the information identification model 2110 to identify the donation demand information to obtain donation index information corresponding to the donation demand information, wherein the donation index information is used for searching a purchase link of the donation.
In an alternative embodiment, the types of the donation demand information include an audio type;
the information identification model 2110 is configured to, in response to the type of the donation demand information being the audio type, process the audio frame corresponding to the donated object identifier in the live broadcast stream to obtain the donation index information corresponding to the audio frame, the donation index information being used to search for the purchase link of the donation.
In an alternative embodiment, the information recognition model 2110 includes an acoustic model 21101 and a language model 21102, the apparatus includes a matching module 2160;
the matching module 2160 is used for matching the audio frame with the reference voice template to obtain the voice information corresponding to the audio frame; the processing module 2150 is configured to invoke the language model 21102 to process the voice information, so as to obtain a text sequence corresponding to the voice information; calling an acoustic model 21101 to process the feature vector and the character sequence of the audio frame to obtain the similarity probability of the audio frame and the character sequence; and determining donation index information corresponding to the audio frame according to the similarity probability, wherein the donation index information is used for searching for a purchase link of the donation.
In an optional embodiment, the processing module 2150 is configured to divide an audio frame to obtain segmented audio frames; processing each section of audio frame to obtain a feature vector corresponding to each section of audio frame; and obtaining the feature vector of the audio frame according to the feature vector of each section of audio frame.
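A minimal sketch of this segmentation step is given below, assuming raw audio samples as a NumPy array and a toy spectral feature; the actual feature type used by the information identification model 2110 is not fixed by this embodiment.

```python
import numpy as np


def audio_feature_vector(samples: np.ndarray, segment_len: int = 400) -> np.ndarray:
    """Split the audio frame into fixed-length segments, compute a per-segment
    feature vector (a toy spectral magnitude here), and pool them into the
    feature vector of the whole frame."""
    segments = [samples[i:i + segment_len]
                for i in range(0, len(samples) - segment_len + 1, segment_len)]
    if not segments:                       # frame shorter than one segment
        segments = [samples]
    per_segment = [np.abs(np.fft.rfft(seg, n=segment_len))[:32] for seg in segments]
    return np.mean(per_segment, axis=0)    # pooled feature vector for the audio frame
```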
In an alternative embodiment, the type of the donation demand information includes an image type, and the information recognition model 2110 includes a convolutional neural network 21103;
the processing module 2150 is configured to, in response to the type of the donation demand information being an image type, invoke the convolutional neural network 21103 to identify a scene image corresponding to the donated object identifier in the live stream, so as to obtain the scene corresponding to the scene image, where the scene image represents the scene in which the donated object is located; the matching module 2160 is configured to match the scene to the corresponding donation index information, where the donation index information is used to search for a purchase link of the donation.
In an alternative embodiment, the convolutional neural network 21103 comprises a feature extractor and a classifier;
the processing module 2150 is configured to, in response to that the type of the donation demand information is an image type, invoke a feature extractor to perform preprocessing on a scene image to obtain a joint vector of the scene image; and calling a classifier to classify the joint vector to obtain a scene corresponding to the scene image.
In an alternative embodiment, the feature extractor includes a convolutional layer and a convergence layer;
the processing module 2150 is configured to call the convolutional layer to perform preprocessing on the scene image, so as to obtain a pixel block corresponding to the scene image, where the pixel block includes the height, width, and color of the scene image; and calling a convergence layer to combine the pixel blocks with the sampling function to obtain a joint vector of the scene image.
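The structure just described, a convolutional layer followed by a convergence (pooling) layer whose output joint vector feeds the classifier, can be sketched as below using PyTorch as a stand-in. The layer sizes, input resolution, and number of scene classes are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class SceneCNN(nn.Module):
    def __init__(self, num_scenes: int = 10):
        super().__init__()
        self.features = nn.Sequential(                      # feature extractor
            nn.Conv2d(3, 16, kernel_size=3, padding=1),     # convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),                                 # convergence (pooling) layer
        )
        self.classifier = nn.Linear(16 * 112 * 112, num_scenes)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        joint = self.features(image).flatten(1)   # joint vector of the scene image
        return self.classifier(joint)             # scores over scene classes


# usage sketch: a single 224x224 RGB scene image
scores = SceneCNN()(torch.randn(1, 3, 224, 224))
scene_index = scores.argmax(dim=1)
```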
In an optional embodiment, the obtaining module 2140 is configured to obtain item information of a donation according to the search result; the processing module 2150 is configured to extract a key image corresponding to the donation from the item information, where the key image represents an attribute of the donation; binding the key image with a purchase link of a donation to obtain a first binding relationship; the generating module 2120 is configured to generate a first donation list according to the first binding relationship.
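The first binding relationship can be pictured as a simple mapping from key image to purchase link, as in the sketch below; the field names of the search result and the key-image extractor are assumptions.

```python
def generate_first_donation_list(search_results, extract_key_image):
    """Bind the key image of each donation to its purchase link (the first
    binding relationship) and return the bindings as the first donation list."""
    bindings = []
    for item in search_results:
        key_image = extract_key_image(item["item_info"])   # image conveying the item's attributes
        bindings.append({"key_image": key_image,
                         "purchase_link": item["purchase_link"]})
    return bindings
```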
In an optional embodiment, the obtaining module 2140 is configured to, in response to receiving a confirmation request sent by the first client, obtain a donated object identifier corresponding to the live broadcast stream, where the confirmation request carries an item identifier of at least one donated item, and the donated object identifier includes at least one of a donated user account and a live broadcast room identifier; the processing module 2150 is configured to bind the donated object identifier and the donated item identifier to obtain a second binding relationship; the sending module 2130 is configured to respond to a live stream corresponding to the donated object identifier displayed by the second client, and send a second donation list to the second client according to the second binding relationship, where the second donation list includes at least one purchase link of a donation;
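Likewise, the second binding relationship and the push of the second donation list can be sketched as follows; the in-memory store, the catalog lookup, and the push callable are placeholders for whatever storage and messaging the server actually uses.

```python
second_bindings = {}   # donated object identifier -> confirmed item identifiers


def on_confirmation_request(donated_object_id, confirmed_item_ids):
    """Record the second binding relationship when the first client confirms."""
    second_bindings[donated_object_id] = list(confirmed_item_ids)


def on_second_client_opens_stream(donated_object_id, catalog, push):
    """When a second client displays this live stream, push the matching
    second donation list (purchase links of the confirmed donations)."""
    item_ids = second_bindings.get(donated_object_id, [])
    push([catalog[item_id] for item_id in item_ids])
```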
in an alternative embodiment, the apparatus includes a receiving module 2170;
the sending module 2130 is configured to send a purchase request to the e-commerce platform when the value of the donations donated by the second client reaches the calibrated value, where the purchase request carries the donated object identifier, the item identifier of the donation, and the receiving address corresponding to the donated object; and, in response to receiving the payment amount corresponding to the donation sent by the e-commerce platform, transfer the payment amount from the account corresponding to the second client to the account corresponding to the e-commerce platform, where the payment amount is calculated by the e-commerce platform according to the purchase request;
the receiving module 2170 is configured to receive, after the payment transfer succeeds, a purchase order corresponding to the donation sent by the e-commerce platform, where the purchase order is used by the e-commerce platform to deliver the donation to the first client.
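The purchase flow handled by the sending module 2130 and the receiving module 2170 can be summarized by the sketch below; the `ecommerce` and `payments` objects and their methods are hypothetical stand-ins for the e-commerce platform and payment channel interfaces.

```python
def settle_donation(donated_value, threshold_value, order, ecommerce, payments):
    """Once the donated value reaches the threshold, purchase the item and
    hand the purchase order back for delivery to the first client."""
    if donated_value < threshold_value:
        return None                                    # keep accumulating donations
    amount = ecommerce.request_purchase(               # carries object id, item id, address
        donated_object_id=order["donated_object_id"],
        item_id=order["item_id"],
        address=order["address"],
    )
    payments.transfer(src=order["second_client_account"],
                      dst=ecommerce.account,
                      amount=amount)
    return ecommerce.fetch_purchase_order(order["item_id"])
```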
In an optional embodiment, the generating module 2120 is configured to generate a donation record in response to receiving the feedback information, where the donation record includes at least one of a live platform identifier, a donated object identifier, and an item identifier of a donation.
In an alternative embodiment, the apparatus includes a memory module 2180;
the generating module 2120 is configured to generate target data according to the donation record, where the target data includes at least one of a live broadcast platform identifier, a donated object identifier, and an item identifier of a donation; the sending module 2130 is configured to send the target data to a blockchain node in the blockchain network;
the storage module 2180 is configured to store the target data in the blockchain in response to a consensus node in the blockchain network agreeing on the target data.
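How the donation record could be packaged as target data and appended to the blockchain after consensus is illustrated below; the consensus callback and block layout are placeholders and do not reflect a specific consensus protocol.

```python
import hashlib
import json
import time


def make_target_data(live_platform_id, donated_object_id, item_id):
    """Package a donation record as target data."""
    return {"live_platform_id": live_platform_id,
            "donated_object_id": donated_object_id,
            "item_id": item_id,
            "timestamp": int(time.time())}


def append_block(chain, target_data, consensus_ok):
    """Append the target data to the chain only after the consensus nodes agree."""
    if not consensus_ok(target_data):
        return chain
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(target_data, sort_keys=True) + prev_hash
    chain.append({"data": target_data,
                  "prev_hash": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain
```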
Fig. 22 shows a schematic structural diagram of a server provided in an exemplary embodiment of the present application. The server may be the server 140 in the computer system 100 shown in fig. 2. Specifically:
the server 2200 includes a Central Processing Unit (CPU)2201, a system Memory 2204 including a Random Access Memory (RAM) 2202 and a Read Only Memory (ROM) 2203, and a system bus 2205 connecting the system Memory 2204 and the central processing unit 2201. The server 2200 also includes a basic input/output system (I/O system) 2206 to facilitate information transfer between devices within the computer, and a mass storage device 2207 to store an operating system 2213, application programs 2214, and other program modules 2215.
The basic input/output system 2206 includes a display 2208 for displaying information and an input device 2209, such as a mouse, keyboard, etc., for a user to input information. Wherein the display 2208 and the input device 2209 are both connected to the central processing unit 2201 through an input output controller 2210 connected to the system bus 2205. The basic input/output system 2206 may also include an input/output controller 2210 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 2210 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 2207 is connected to the central processing unit 2201 through a mass storage controller (not shown) connected to the system bus 2205. The mass storage device 2207 and its associated computer-readable media provide non-volatile storage for the server 2200. That is, the mass storage device 2207 may include a computer-readable medium (not shown) such as a hard disk or Compact disk Read Only Memory (CD-ROM) drive.
Computer-readable media may include computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other solid-state memory technology such as Solid State Drives (SSD), CD-ROM, Digital Versatile Disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. The random access memory may include resistive Random Access Memory (ReRAM) and Dynamic Random Access Memory (DRAM). Of course, those skilled in the art will appreciate that computer storage media is not limited to the foregoing. The system memory 2204 and mass storage device 2207 described above may be collectively referred to as memory.
According to various embodiments of the present application, the server 2200 may also run by connecting to a remote computer over a network, such as the Internet. That is, the server 2200 may be connected to the network 2212 through a network interface unit 2211 connected to the system bus 2205, or may be connected to other types of networks or remote computer systems (not shown) using the network interface unit 2211.
The memory further includes one or more programs, and the one or more programs are stored in the memory and configured to be executed by the CPU.
Fig. 23 shows a block diagram of a computer device 2300 according to an exemplary embodiment of the present application. The computer device 2300 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The computer device 2300 may also be referred to by other names such as user device, portable computer device, laptop computer device, desktop computer device, and the like.
Generally, computer device 2300 includes: a processor 2301 and a memory 2302.
The processor 2301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 2301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 2301 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also called a Central Processing Unit (CPU); a coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 2301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 2301 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 2302 may include one or more computer-readable storage media, which may be non-transitory. Memory 2302 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 2302 is used to store at least one instruction for execution by the processor 2301 to implement the method of donation of an item as provided by the method embodiments herein.
In some embodiments, computer device 2300 may also optionally include: a peripheral interface 2303 and at least one peripheral. The processor 2301, memory 2302, and peripheral interface 2303 may be connected by bus or signal lines. Various peripheral devices may be connected to peripheral interface 2303 by buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 2304, a touch display 2305, a camera 2306, an audio circuit 2307, a positioning component 2308, and a power supply 2309.
The peripheral interface 2303 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 2301 and the memory 2302. In some embodiments, the processor 2301, memory 2302, and peripheral interface 2303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 2301, the memory 2302, and the peripheral device interface 2303 can be implemented on separate chips or circuit boards, which are not limited by this embodiment.
The radio frequency circuit 2304 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 2304 communicates with communication networks and other communication devices via electromagnetic signals. The RF circuit 2304 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 2304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, etc. The radio frequency circuit 2304 may communicate with other computer devices through at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wi-Fi (Wireless Fidelity) networks. In some embodiments, the RF circuit 2304 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 2305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 2305 is a touch display screen, the display screen 2305 also has the ability to capture touch signals on or over the surface of the display screen 2305. The touch signal may be input to the processor 2301 as a control signal for processing. At this point, the display 2305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 2305 may be one, providing a front panel of the computer device 2300; in other embodiments, the display screen 2305 may be at least two, each disposed on a different surface of the computer device 2300 or in a folded design; in still other embodiments, display 2305 may be a flexible display disposed on a curved surface or on a folded surface of computer device 2300. Even more, the display screen 2305 may be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display 2305 may be made of LCD (Liquid Crystal Display), OLED (organic light-Emitting Diode), or other materials.
The camera assembly 2306 is used to capture images or video. Optionally, camera assembly 2306 includes a front camera and a rear camera. Generally, a front camera is disposed on a front panel of a computer apparatus, and a rear camera is disposed on a rear surface of the computer apparatus. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 2306 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 2307 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 2301 for processing or inputting the electric signals into the radio frequency circuit 2304 to realize voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location on computer device 2300. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 2301 or the radio frequency circuit 2304 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuit 2307 may also include a headphone jack.
The location component 2308 is used to locate the current geographic location of the computer device 2300 for navigation or LBS (Location Based Service). The positioning component 2308 may be a positioning component based on the United States GPS (Global Positioning System), the Chinese BeiDou system, the Russian GLONASS system, or the European Union Galileo system.
The power supply 2309 is used to supply power to various components in the computer device 2300. The power source 2309 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When the power source 2309 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, computer device 2300 also includes one or more sensors 2310. The one or more sensors 2310 include, but are not limited to: an acceleration sensor 2311, a gyro sensor 2312, a pressure sensor 2313, a fingerprint sensor 2314, an optical sensor 2315, and a proximity sensor 2316.
The acceleration sensor 2311 can detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the computer device 2300. For example, the acceleration sensor 2311 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 2301 may control the touch display screen 2305 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 2311. The acceleration sensor 2311 may also be used for game or user motion data acquisition.
The gyro sensor 2312 may detect the body direction and the rotation angle of the computer device 2300, and the gyro sensor 2312 may cooperate with the acceleration sensor 2311 to acquire the 3D motion of the user on the computer device 2300. The processor 2301 may implement the following functions according to the data collected by the gyro sensor 2312: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 2313 can be disposed on the side bezel of computer device 2300 and/or on the lower layers of touch display screen 2305. When the pressure sensor 2313 is arranged on the side frame of the computer device 2300, the holding signal of the user to the computer device 2300 can be detected, and the processor 2301 performs left-right hand identification or quick operation according to the holding signal collected by the pressure sensor 2313. When the pressure sensor 2313 is disposed at the lower layer of the touch display screen 2305, the processor 2301 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 2305. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 2314 is used for collecting a fingerprint of the user, and the processor 2301 identifies the user according to the fingerprint collected by the fingerprint sensor 2314, or the fingerprint sensor 2314 identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 2301 authorizes the user to perform relevant sensitive operations including unlocking a screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 2314 may be provided on the front, back or side of the computer device 2300. When a physical key or vendor Logo is provided on the computer device 2300, the fingerprint sensor 2314 may be integrated with the physical key or vendor Logo.
The optical sensor 2315 is used to collect ambient light intensity. In one embodiment, the processor 2301 may control the display brightness of the touch display screen 2305 based on the ambient light intensity collected by the optical sensor 2315. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 2305 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 2305 is turned down. In another embodiment, the processor 2301 may also dynamically adjust the shooting parameters of the camera assembly 2306 according to the intensity of ambient light collected by the optical sensor 2315.
The proximity sensor 2316, also known as a distance sensor, is typically disposed on the front panel of the computer device 2300. The proximity sensor 2316 is used to capture the distance between the user and the front of the computer device 2300. In one embodiment, the processor 2301 controls the touch display screen 2305 to switch from a bright-screen state to a dark-screen state when the proximity sensor 2316 detects that the distance between the user and the front surface of the computer device 2300 gradually decreases; when the proximity sensor 2316 detects that the distance between the user and the front surface of the computer device 2300 gradually increases, the processor 2301 controls the touch display screen 2305 to switch from the dark-screen state back to a bright-screen state.
Those skilled in the art will appreciate that the architecture shown in FIG. 23 is not intended to be limiting of the computer device 2300, and may include more or fewer components than those shown, or may combine certain components, or may employ a different arrangement of components.
Embodiments of the present application further provide a computer device, including: a processor and a memory, where the memory has stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, and the at least one instruction, at least one program, set of codes, or set of instructions is loaded and executed by the processor to implement the method of donation of an item in the above embodiments.
Embodiments of the present application also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of donation of an item in the above-described embodiments.
It should be understood that reference to "a plurality" herein means two or more. "And/or" describes the association relationship of the associated objects, meaning that there may be three relationships; e.g., A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A system for donating goods, the system comprising: the system comprises a first client, a server and a second client, wherein the server is respectively connected with the first client and the second client through a network;
the first client is used for collecting a live broadcast stream corresponding to a live broadcast donation process, wherein the live broadcast stream comprises donation demand information of an audio and video type;
the server is used for calling an information identification model to identify donation index information corresponding to the donation demand information and sending a search request to an electronic commerce platform according to the donation index information, wherein the information identification model is a machine learning model with a donation index information identification function;
the server is used for generating a first donation list according to the search result of the electronic commerce platform, wherein the first donation list comprises at least one purchase link of donations searched according to the donation index information;
the server is used for sending the first donation list to the first client;
the server is used for responding to the confirmation request sent by the first client, and sending a second donation list to at least one second client, wherein the second donation list is a subset of the first donation list;
the second client is configured to select at least one purchase link of the donation from the second donation list, and donate the donation to the first client.
2. The system of claim 1, wherein the server is configured to:
obtaining the type of the donation demand information;
determining the information identification model according to the type of the donation demand information;
and calling the information identification model to identify the donation demand information to obtain donation index information corresponding to the donation demand information, wherein the donation index information is used for searching a purchase link of the donation.
3. The system of claim 2, wherein the types of the donation demand information include an audio type;
the server is configured to, in response to that the type of the donation demand information is the audio type, invoke the information recognition model to process an audio frame corresponding to a donated object identifier in the live stream, so as to obtain donation index information corresponding to the audio frame, where the donation index information is used to search for a purchase link of the donation.
4. The system of claim 3, wherein the information recognition model comprises an acoustic model and a language model; the server is configured to:
matching the audio frame with a reference voice template to obtain voice information corresponding to the audio frame; calling the language model to process the voice information to obtain a character sequence corresponding to the voice information;
calling the acoustic model to process the feature vector of the audio frame and the character sequence to obtain the similarity probability of the audio frame and the character sequence;
and determining donation index information corresponding to the audio frame according to the similarity probability, wherein the donation index information is used for searching a purchase link of the donation.
5. The system of claim 2, wherein the type of the donation request information includes an image type, the information identification model includes a convolutional neural network;
the server is configured to:
in response to the fact that the type of the donation demand information is the image type, calling the convolutional neural network to identify a scene image corresponding to the donated object identifier in the live stream to obtain a scene corresponding to the scene image, wherein the scene image represents the scene where the donated object is located;
and matching donation index information corresponding to the scene, wherein the donation index information is used for searching for a purchase link of the donation.
6. The system of claim 5, wherein the convolutional neural network comprises a feature extractor and a classifier; the server is configured to:
in response to the fact that the type of the donation demand information is the image type, calling the feature extractor to preprocess the scene image to obtain a joint vector of the scene image;
and calling the classifier to classify the joint vector to obtain a scene corresponding to the scene image.
7. The system of any one of claims 1 to 6, wherein the server is configured to:
acquiring the item information of the donation according to the search result;
extracting a key image corresponding to the donation from the item information, wherein the key image represents the attribute of the donation;
binding the key image with the purchase link of the donation to obtain a first binding relationship;
and generating the first donation list according to the first binding relationship.
8. The system according to any one of claims 1 to 6, wherein the server is configured to, in response to receiving a confirmation request sent by the first client, obtain a donated object identifier corresponding to the live broadcast stream, where the confirmation request carries an item identifier of at least one donation, and the donated object identifier includes at least one of a donated user account and a live broadcast room identifier;
the server is used for binding the donated object identifier with the item identifier of the donation to obtain a second binding relationship;
the server is configured to respond to the second client displaying the live stream corresponding to the donated object identifier, and send the second donation list to the second client according to the second binding relationship, where the second donation list includes at least one purchase link of the donation;
the second client is used for displaying the second donation list.
9. The system of any one of claims 1 to 6, wherein the server is configured to:
generating target data according to the donation records, wherein the target data comprises at least one of a live broadcast platform identifier, a donated object identifier and a donated item identifier;
transmitting the target data to a blockchain node in the blockchain network;
in response to a consensus node in the blockchain network agreeing on the target data, storing the target data in a blockchain.
10. A method for donation of an item, the method being applied to a first client, the method comprising:
displaying a live broadcast stream collected in a live broadcast donation process, wherein a picture of the live broadcast stream comprises a donation object, and the live broadcast stream comprises donation demand information of an audio and video type;
displaying at least one donation matched with the donation demand information in response to receiving a first donation list sent by a server;
in response to receiving a confirmation operation on the first donation list, displaying a second donation list, the second donation list including the confirmed donations;
generating feedback information in response to receiving a receiving operation, the feedback information including at least one of text information, video information, audio information, and image information, the receiving operation being configured to receive the donation from the second client.
11. A method for donation of an item, the method being applied to a second client, the method comprising:
displaying a live broadcast stream corresponding to a donated object acquired in a live broadcast donation process, wherein the live broadcast stream comprises donation demand information of an audio and video type;
displaying a second donation list in response to the first client confirming the donation matching the donation demand information, wherein the second donation list comprises at least one purchase link of the donation;
in response to receiving a donation operation on the second donation list, donating the donation to the first client.
12. A device for donation of items, the device comprising:
the system comprises a first display module, a second display module and a third display module, wherein the first display module is used for displaying a live broadcast stream acquired in a live broadcast donation process, a picture of the live broadcast stream comprises a donation object, and the live broadcast stream comprises donation demand information of an audio and video type;
the first display module is used for responding to the first donation list sent by the server and displaying at least one donation matched with the donation demand information;
the first display module, configured to display a second donation list in response to receiving a confirmation operation on the first donation list, the second donation list including the confirmed donations;
the generating module is used for responding to the received receiving operation, generating feedback information, wherein the feedback information comprises at least one of character information, video information, audio information and image information, and the receiving operation is used for receiving the donations donated by the second client.
13. A device for donation of items, the device comprising:
the second display module is used for displaying a live broadcast stream corresponding to the donated object acquired in the live broadcast donation process, wherein the live broadcast stream comprises donation demand information of an audio and video type;
the second display module is configured to display a second donation list in response to the first client confirming the donation matching the donation demand information, where the second donation list includes at least one purchase link of the donation;
a sending module, configured to donate the donation to the first client in response to receiving the donation operation on the second donation list.
14. A computer device, characterized in that the computer device comprises: a processor and a memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the method of donation of an item according to any one of claims 10 and 11.
15. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method of donation of an item according to any one of claims 10 and 11.
CN202010350823.2A 2020-04-28 2020-04-28 Item donation system, item donation method, item donation device, item donation equipment and item donation medium Pending CN111598651A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010350823.2A CN111598651A (en) 2020-04-28 2020-04-28 Item donation system, item donation method, item donation device, item donation equipment and item donation medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010350823.2A CN111598651A (en) 2020-04-28 2020-04-28 Item donation system, item donation method, item donation device, item donation equipment and item donation medium

Publications (1)

Publication Number Publication Date
CN111598651A true CN111598651A (en) 2020-08-28

Family

ID=72190848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010350823.2A Pending CN111598651A (en) 2020-04-28 2020-04-28 Item donation system, item donation method, item donation device, item donation equipment and item donation medium

Country Status (1)

Country Link
CN (1) CN111598651A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114390329A (en) * 2020-10-16 2022-04-22 青岛聚看云科技有限公司 Display device and image recognition method
CN114390329B (en) * 2020-10-16 2023-11-24 青岛聚看云科技有限公司 Display device and image recognition method
CN112102139A (en) * 2020-11-10 2020-12-18 工福(北京)科技发展有限公司 Idle goods transaction poverty alleviation management method and system
CN112714333A (en) * 2020-12-29 2021-04-27 维沃移动通信有限公司 Multimedia data processing method and electronic equipment

Similar Documents

Publication Publication Date Title
US11715473B2 (en) Intuitive computing methods and systems
US10666784B2 (en) Intuitive computing methods and systems
KR101832693B1 (en) Intuitive computing methods and systems
KR101796008B1 (en) Sensor-based mobile search, related methods and systems
US9256806B2 (en) Methods and systems for determining image processing operations relevant to particular imagery
Yang et al. Benchmarking commercial emotion detection systems using realistic distortions of facial image datasets
CN111598651A (en) Item donation system, item donation method, item donation device, item donation equipment and item donation medium
CN112862516A (en) Resource delivery method and device, electronic equipment and storage medium
CN111897996A (en) Topic label recommendation method, device, equipment and storage medium
CN112116391A (en) Multimedia resource delivery method and device, computer equipment and storage medium
CN111614924A (en) Computer system, resource sending method, device, equipment and medium
CN116018608A (en) Electronic commerce label in multimedia content
CN113486260B (en) Method and device for generating interactive information, computer equipment and storage medium
Lamichhane INSTITUTE OF ENGINEERING THAPATHALI CAMPUS

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40028392

Country of ref document: HK

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination