CN114827651A - Information processing method, information processing device, electronic equipment and storage medium


Publication number
CN114827651A
CN114827651A
Authority
CN
China
Prior art keywords
information
target object
live
introduction
content data
Legal status
Granted
Application number
CN202210440270.9A
Other languages
Chinese (zh)
Other versions
CN114827651B (en)
Inventor
刘洋 (Liu Yang)
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210440270.9A
Publication of CN114827651A
Application granted
Publication of CN114827651B
Current status: Active

Classifications

    • H04N 21/2187: Selective content distribution; servers specifically adapted for the distribution of content; source of audio or video content; live feed
    • G06F 16/7834: Information retrieval of video data; retrieval characterised by metadata automatically derived from the content, using audio features
    • G06F 16/7837: Information retrieval of video data; retrieval characterised by metadata automatically derived from the content, using objects detected or recognised in the video content
    • G06F 16/7844: Information retrieval of video data; retrieval characterised by metadata automatically derived from the content, using original textual content or text extracted from visual content or transcript of audio data
    • G06F 18/25: Pattern recognition; analysing; fusion techniques
    • H04N 21/4316: Client devices; generation of visual interfaces for content selection or interaction; displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N 21/440236: Client devices; processing of video elementary streams; reformatting operations by media transcoding, e.g. audio is converted into text

Abstract

The disclosure relates to an information processing method and apparatus, an electronic device, and a storage medium, and belongs to the field of internet technologies. The method includes the following steps: in response to detecting, in a live interface, that introduction of a target object has started, acquiring live content data that introduces the target object; sending the live content data; receiving key information of the target object, where the key information includes attribute information obtained by performing information extraction on the live content data, the attribute information characterizing transaction attributes of the target object; and presenting the key information in the live interface in response to a prediction result that introduction of the target object is ending. By visually presenting the key information of the target object in the live interface, the method and apparatus convey information accurately and make it convenient for a user to learn the information of the target object accurately outside the introduction period.

Description

Information processing method, information processing device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to an information processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of internet technology, users can log in to a live-streaming application, enter a live room of interest, and watch the live program.
At present, in a live room, the attributes of a target object are conveyed mainly through the anchor's spoken introduction. A user who misses the time at which the anchor introduces the target object has to inquire repeatedly about the attributes of the target object, which prevents the user from learning the information quickly.
Disclosure of Invention
The embodiments of the present disclosure provide an information processing method and apparatus, an electronic device, and a storage medium, so that information can be conveyed quickly and accurately.
According to an aspect of the embodiments of the present disclosure, there is provided an information processing method including:
in response to detecting introduction start of a target object in a live interface, acquiring live content data of the target object;
sending the live content data;
receiving key information of the target object, wherein the key information comprises attribute information obtained by extracting information of the live content data, and the attribute information is used for representing transaction attributes of the target object;
and presenting the key information in the live interface in response to a prediction result of the introduction ending of the target object.
In one exemplary embodiment, the receiving key information of the target object includes:
receiving key information of the target object, wherein the key information comprises fusion information of first attribute information and second attribute information, the first attribute information is obtained by performing information extraction on current live content data, and the second attribute information is obtained by performing information extraction on historical live content data.
In an exemplary embodiment, the presenting the key information in the live interface includes:
presenting the key information in a selected area of the live interface; or,
presenting an operable control and prompt information in a selected area of the live interface, wherein the operable control triggers presentation of the key information in response to a trigger operation, and the prompt information indicates that the operable control can be triggered to present the key information.
In one exemplary embodiment, the method further comprises:
detecting introduction intention information for introducing the target object in the live broadcast interface, wherein the introduction intention information is used for representing that the target object is in a state to be introduced;
in response to successful detection of the introduction intent information, it is determined that introduction start of the target object is detected in the live interface.
In one exemplary embodiment, the successful detection of the introduction intention information includes at least one of:
detecting that the current time in the live interface is matched with the subscription time for starting introducing the target object;
detecting that the live broadcast position information calibrated in the live broadcast interface is matched with forecast position information for starting introduction of the target object;
detecting that the live content data contains a content feature characterizing the start of introduction of the target object;
detecting that the live content data contains a semantic format characterizing the start of introduction of the target object.
In an exemplary embodiment, the obtaining live content data of the target object includes:
acquiring at least one of audio data and image data introducing the target object, wherein the audio data comprises semantic content describing the target object, and the image data comprises graphic content presenting the target object.
In one exemplary embodiment, the method further comprises: if the audio data is acquired, converting the audio data into text data containing the semantic content.
In one exemplary embodiment, the method further comprises:
acquiring object data of the target object;
initiating fusion of the object data and the live content data such that the key information includes information extraction of the object data.
In an exemplary embodiment, the initiating the fusion of the object data and the live content data comprises:
sending the object data and the live content data to a server side for performing information extraction on the object data and the live content data.
In one exemplary embodiment, the method further comprises:
detecting ending intention information for ending introduction of the target object in the live interface, wherein the ending intention information is used for representing that introduction of the target object is about to be completed;
in response to successful detection of the end intention information, obtaining the prediction result of the end of introduction of the target object.
In one exemplary embodiment, the successful detection of the end intent information includes at least one of:
detecting that the current time in the live interface is matched with the subscription time for finishing introducing the target object;
detecting that the live broadcasting position information calibrated in the live broadcasting interface is matched with the forecast position information for ending introduction of the target object;
detecting that the live content data contains a content feature characterizing the end of introduction of the target object;
detecting that the live content data contains a semantic format characterizing the end of introduction of the target object.
According to another aspect of the embodiments of the present disclosure, there is provided an information processing apparatus. The information processing apparatus includes:
the acquisition module is configured to respond to the detection of introduction start of a target object in a live interface and acquire live content data of the target object;
a sending module configured to send the live content data;
the receiving module is configured to receive key information of the target object, wherein the key information comprises attribute information obtained by extracting information of the live content data, and the attribute information is used for representing transaction attributes of the target object;
a presentation module configured to present the key information in the live interface in response to a result of prediction of an end of introduction of the target object.
In an exemplary embodiment, the receiving module is configured to receive key information of the target object, where the key information includes fusion information of first attribute information and second attribute information, the first attribute information is obtained by performing information extraction on current live content data, and the second attribute information is obtained by performing information extraction on historical live content data.
In an exemplary embodiment, the presenting module is configured to present the key information in a selected area of the live interface; or,
present an operable control and prompt information in a selected area of the live interface, wherein the operable control triggers presentation of the key information in response to a trigger operation, and the prompt information indicates that the operable control can be triggered to present the key information.
In an exemplary embodiment, the obtaining module is configured to detect introduction intention information for introducing the target object in the live interface, where the introduction intention information is used to represent that the target object is in a state to be introduced; in response to successful detection of the introduction intent information, it is determined that introduction start of the target object is detected in the live interface.
In one exemplary embodiment, the successful detection of the introduction intention information includes at least one of:
detecting that the current time in the live interface is matched with the subscription time for starting introducing the target object;
detecting that the live broadcast position information calibrated in the live broadcast interface is matched with forecast position information for starting introduction of the target object;
detecting that the live content data contains a content feature characterizing the start of introduction of the target object;
detecting that the live content data contains a semantic format characterizing the start of introduction of the target object.
In an exemplary embodiment, the obtaining module is configured to obtain at least one of audio data and image data introducing the target object, wherein the audio data includes semantic content describing the target object, and the image data includes graphical content representing the target object.
In an exemplary embodiment, the obtaining module is configured to, if the audio data is obtained, convert the audio data into text data containing the semantic content.
In an exemplary embodiment, the obtaining module is configured to obtain object data of the target object; initiating fusion of the object data and the live content data such that the key information includes information extraction of the object data.
In an exemplary embodiment, the obtaining module is configured to send the object data and the live content data to a server for performing information extraction on the object data and the live content data.
In an exemplary embodiment, the presentation module is configured to detect ending intention information for ending introduction of the target object in the live interface, wherein the ending intention information is used for representing that introduction of the target object is about to be completed; in response to successful detection of the end intention information, obtaining the prediction result of the end of introduction of the target object.
In one exemplary embodiment, the successful detection of the end intent information includes at least one of:
detecting that the current time in the live interface is matched with the subscription time for finishing introducing the target object;
detecting that the live broadcast position information calibrated in the live broadcast interface is matched with forecast position information for finishing introducing the target object;
detecting that the live content data contains a content feature characterizing the end of introduction of the target object;
detecting that the live content data contains a semantic format characterizing the end of introduction of the target object.
According to another aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the information processing method described above.
According to another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the information processing method described above.
According to another aspect of the embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the information processing method described above.
The technical solutions provided by the embodiments of the present disclosure yield at least the following beneficial effects: in response to detecting, in the live interface, that introduction of the target object has started, live content data of the target object is acquired; the live content data is sent; key information of the target object is received, where the key information includes attribute information obtained by performing information extraction on the live content data and characterizing transaction attributes of the target object; and the key information is presented in the live interface in response to a prediction result that introduction of the target object is ending, so that the key information of the target object is visually presented in the live interface and the information is conveyed accurately. Moreover, visually presenting the key information of the target object in the live interface makes it convenient for a user to learn the information of the target object accurately outside the introduction period.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a diagram of a live application environment, shown in accordance with an exemplary embodiment;
FIG. 2 is a flow diagram illustrating an information processing method according to an example embodiment;
FIG. 3 is a flow diagram illustrating an information processing procedure in accordance with an illustrative embodiment;
FIG. 4 is a schematic diagram of an information processing architecture shown in accordance with an exemplary embodiment;
FIG. 5 is a first diagram illustrating the presentation of key information in a live interface in accordance with an illustrative embodiment;
FIG. 6 is a second diagram illustrating the presentation of key information in a live interface in accordance with an illustrative embodiment;
FIG. 7 is a block diagram illustrating an information processing apparatus according to an exemplary embodiment;
FIG. 8 is a block diagram of an electronic device shown in accordance with an exemplary embodiment;
FIG. 9 is a block diagram illustrating an information processing apparatus according to an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, etc.) referred to in the present disclosure are information and data authorized by the user or sufficiently authorized by each party.
The information processing method provided by the present disclosure can be applied to the application environment shown in FIG. 1, in which at least one viewer side 11 communicates with a server side 12 through a network, and an anchor side 13 communicates with the server side 12 through the network. The viewer side 11 runs an application program that can be used to watch live broadcasts, and the anchor side 13 runs an application program that can be used to conduct live broadcasts. It will be appreciated that the application for watching the live broadcast and the application for conducting the live broadcast may be the same application or different applications. Both the viewer side 11 and the anchor side 13 may display a live interface of the live room. In response to detecting that the anchor introduces a target object, the anchor side 13 collects live content data related to the target object (preferably the real-time voice stream or picture data in the live signal stream) produced by the anchor, and sends the live content data to the server side 12. The server side 12 performs information extraction on the live content data to generate key information, where the key information includes attribute information obtained by performing information extraction on the live content data, and the attribute information characterizes transaction attributes of the target object (e.g., the name of the target object, the price of the target object, and so on). The viewer side 11 and the anchor side 13 receive the key information from the server side 12 and each display it in their respective live interfaces. The viewer side 11 may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, and so on; the same applies to the anchor side 13. The server side 12 may be implemented as an independent server or as a server cluster composed of a plurality of servers.
FIG. 2 is a flow chart illustrating an information processing method according to an example embodiment. The method shown in FIG. 2 is preferably performed by the anchor side of a live application. The method comprises the following steps:
step 101: and in response to detecting the introduction of the target object in the live interface, acquiring live content data for introducing the target object.
In an exemplary embodiment, the acquiring live content data of the target object in step 101 includes: acquiring at least one of audio data and image data introducing the target object. The audio data contains semantic content describing the target object, and the image data contains graphic content presenting the target object. For example, the data source of the live content data may be the anchor's live signal in the live interface.
In this way, various types of live content data related to the target object can be acquired, which facilitates the subsequent extraction of content-rich key information.
In an exemplary embodiment, if the audio data is acquired, the method further includes: converting the audio data into text data containing the semantic content.
Converting the audio data into text data containing the semantic content makes it convenient to generate key information from that semantic content during subsequent information extraction.
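As a concrete illustration, this conversion step might look like the following minimal sketch, in which the `transcribe` callable is a placeholder for whatever speech-recognition backend is used (the disclosure does not name one):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TextSegment:
    text: str       # semantic content recognized from the audio
    start_ms: int   # where the chunk sits in the live signal stream
    end_ms: int

def audio_to_text(audio_chunk: bytes, start_ms: int, end_ms: int,
                  transcribe: Callable[[bytes], str]) -> TextSegment:
    """Convert one chunk of live audio into text data carrying its
    semantic content; any ASR service can stand behind `transcribe`."""
    return TextSegment(text=transcribe(audio_chunk),
                       start_ms=start_ms, end_ms=end_ms)
```

Keeping the time span alongside the text also makes it easy to separate current live content data from historical live content data later on.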
In an exemplary embodiment, further comprising: detecting introduction intention information for introducing a target object in a live broadcast interface, wherein the introduction intention information is used for representing that the target object is in a state to be introduced; in response to successful detection of the introduction intent information, it is determined that introduction start of the target object is detected in the live interface.
The present disclosure may detect the introduction intention information in various ways. For example, successful detection of introductory intent information includes at least one of:
(1) Detecting that the current time in the live interface matches the subscription time for starting introduction of the target object:
For example, the anchor manually sets, or the live client automatically sets, eight o'clock every evening as the subscription time of the target object. When the current time of the live interface is detected to be eight o'clock in the evening, it is determined that the anchor has an intention to introduce the target object at this moment, and the introduction intention information is determined to be successfully detected.
(2) Detecting that the live position information calibrated in the live interface matches the forecast position information for starting introduction of the target object:
For example, the anchor manually sets, or the live client automatically sets, outdoor place A as the forecast position information for the live-commerce introduction of target objects of the outdoor product type. When the current position information of the live interface is detected to be consistent with outdoor place A, it is determined that the anchor has an intention to introduce the target object in the live sale at this moment, and the introduction intention information is determined to be successfully detected.
(3) Detecting that the live content data contains a content feature characterizing the start of introduction of the target object:
For example, the anchor manually sets, or the live client automatically sets, "please note below" as the content feature marking the start of introduction. The live content data of the anchor's live signal is detected in real time; when the live content data is detected to contain the content feature "please note below", it is determined that the anchor has an intention to introduce the target object in the live sale at this moment, and the introduction intention information is determined to be successfully detected.
(4) Detecting that the live content data contains a semantic format characterizing the start of introduction of the target object:
For example, the anchor manually sets, or the live client automatically sets, "begin introducing xxx" as the semantic format marking the start of introduction, where "xxx" is the name of the target object. The live content data of the anchor's live signal is monitored in real time; when the live content data is detected to contain this semantic format, it is determined that the anchor has an intention to introduce the target object in the live sale at this moment, and the introduction intention information is determined to be successfully detected.
The above describes typical examples of detecting the introduction intention information; those skilled in the art will appreciate that these examples are merely illustrative and are not intended to limit the scope of the embodiments of the present disclosure. The sketch below shows how the four detection routes can be combined.
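A minimal sketch of such a combined check, assuming all cue phrases, times, and formats are configurable values chosen by the anchor or the live client (the literals below are taken from the examples above, not fixed by the disclosure):

```python
import re
from datetime import datetime, time
from typing import Optional

def introduction_intent_detected(
    now: datetime,
    subscribed_start: Optional[time],   # route (1): preset subscription time
    live_location: Optional[str],       # route (2): calibrated live position
    forecast_location: Optional[str],
    transcript: str,                    # routes (3)/(4): text of live content data
    start_marker: str = "please note below",                  # assumed content feature
    start_pattern: str = r"begin introducing (?P<name>\S+)",  # assumed semantic format
) -> bool:
    """Return True as soon as any of the four start-of-introduction cues fires."""
    if subscribed_start is not None and \
            (now.hour, now.minute) == (subscribed_start.hour, subscribed_start.minute):
        return True                                           # (1) time match
    if live_location is not None and live_location == forecast_location:
        return True                                           # (2) position match
    if start_marker in transcript:
        return True                                           # (3) content feature
    return re.search(start_pattern, transcript) is not None   # (4) semantic format
```

The same structure, with end-of-introduction cues substituted, serves for the ending intention detection described later.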
Step 102: and sending the live content data.
The anchor side can send the live content data in the live interface to the server side through the network. For example, the live content data may specifically include audio data or image data of the target object.
In one embodiment, the live content data contains audio data whose semantic content describes the target object, and the server performs speech-to-text conversion processing on the audio data to generate a text signal containing the semantic content. In another embodiment, the anchor side performs speech-to-text conversion processing on the audio data containing semantic content describing the target object to generate a text signal containing the semantic content, and then sends the text signal to the server.
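One way this sending step could look, assuming an HTTP transport and a hypothetical `/live/content` endpoint (the disclosure does not fix a transport or payload shape):

```python
import json
import urllib.request

def send_live_content(server_url: str, room_id: str,
                      text: str = "", image_b64: str = "") -> None:
    """Upload one batch of live content data for information extraction.
    The endpoint, field names, and JSON encoding are all assumptions."""
    payload = json.dumps({
        "room_id": room_id,
        "text": text,        # speech-to-text output, if converted on the anchor side
        "image": image_b64,  # base64-encoded frame from the live signal, if any
    }).encode("utf-8")
    request = urllib.request.Request(
        f"{server_url}/live/content", data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    urllib.request.urlopen(request).close()
```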
Step 103: and receiving key information of the target object, wherein the key information comprises attribute information obtained by extracting information of the live content data, and the attribute information is used for representing the transaction attribute of the target object.
In an exemplary implementation, the server may generate the key information based on audio data that is sent from the live broadcast and contains semantic content describing the target object. For example, the server performs a speech-to-text conversion process on the audio data to determine a text signal containing the semantic content, performs a natural language process on the text signal to extract attribute information for characterizing transaction attributes of the target object, and generates key information having a predetermined template format or a free format, wherein the key information contains the attribute information.
In an exemplary implementation, the server may generate the key information based on a text signal containing the semantic content sent by the live broadcast. For example, the server performs natural language processing on a text signal sent by the live broadcast end to extract attribute information for representing transaction attributes of the target object, and then generates key information in a predetermined template format or a free format, where the key information includes the attribute information.
In the above embodiments, the natural language processing specifically includes: extracting attribute information of the target object from the text signal by analyzing the text signal in real time. For example, the attribute information includes: the quantity of the target object, a purchase link of the target object, shipping information of the target object, and the like.
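As a toy illustration of this extraction, the sketch below uses regular expressions in place of a full natural language processing service; every pattern and field name is an assumption made for the example:

```python
import re

# Illustrative patterns for a few transaction attributes; a real service
# would use trained NLP models rather than hand-written regexes.
ATTRIBUTE_PATTERNS = {
    "quantity":      r"(?:only|limited to)\s+(\d+)\s+(?:units|pieces)",
    "purchase_link": r"(https?://\S+)",
    "shipping":      r"(free shipping|ships (?:today|tomorrow))",
}

def extract_transaction_attributes(text: str) -> dict:
    """Extract transaction attributes of the target object from the text
    signal obtained from the live audio."""
    attributes = {}
    for name, pattern in ATTRIBUTE_PATTERNS.items():
        match = re.search(pattern, text, flags=re.IGNORECASE)
        if match:
            attributes[name] = match.group(1)
    return attributes

# e.g. extract_transaction_attributes("only 50 units left, free shipping")
# -> {"quantity": "50", "shipping": "free shipping"}
```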
In an exemplary implementation, the server may generate the key information based on image data, which is sent from the live broadcast and contains the graphic content of the target object. For example, the server performs image recognition processing on the image data to determine the graphic content, and then generates key information having a predetermined template format or a free format, where the key information includes the graphic content.
In an exemplary embodiment, the method further comprises: acquiring object data of the target object; and initiating fusion of the object data with the live content data, so that the key information includes the result of information extraction on the object data. Initiating the fusion of the object data and the live content data comprises: sending the object data and the live content data to the server for information extraction on the object data and the live content data. The object data of the target object describes the inherent attributes of the target object; for example, the object data includes the name of the target object and/or the price of the target object, and so on. Accordingly, the key information of the target object includes: the name of the target object, the price of the target object, the quantity of the target object, associated objects of the target object, and/or a purchase link of the target object, and so forth.
And after the server generates the key information of the target object, the server sends the key information to the live broadcast end through the network.
In an exemplary embodiment, the receiving key information of the target object in step 103 includes: and receiving key information of the target object, wherein the key information comprises fusion information of first attribute information and second attribute information, the first attribute information is obtained by extracting information of current live content data, and the second attribute information is obtained by extracting information of historical live content data. The time when the user account enters the live broadcast room can be detected, and the current live broadcast content data and the historical live broadcast content data are distinguished based on the time.
For example, assume that the anchor Zhang San is currently streaming and that the viewer Li Si enters Zhang San's live room after the stream has been running for some time (e.g., 15 minutes). The moment at which Li Si enters Zhang San's live room is determined. Based on this moment, the server side determines the first attribute information and the second attribute information, where the first attribute information is the attribute information obtained by performing information extraction on the live content data within the time period from the moment Li Si entered Zhang San's live room to the current moment (i.e., the current live content data), and the second attribute information is the attribute information obtained by performing information extraction on the live content data within the time period from the moment Zhang San started streaming to the moment Li Si entered the live room (i.e., the historical live content data). The server side fuses the first attribute information and the second attribute information (for example, through information complementation, information deduplication, and the like) to obtain the key information.
In this way, fusion information containing the first attribute information and the second attribute information is received, where the first attribute information is obtained by performing information extraction on the current live content data and the second attribute information is obtained by performing information extraction on the historical live content data, so that a user can accurately learn the information of the target object outside the introduction period.
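Read this way, the fusion could reduce to a keyed merge in which historical values fill gaps and current values win on duplicate keys; a minimal sketch under that assumption:

```python
def fuse_attributes(current: dict, historical: dict) -> dict:
    """Fuse first attribute information (from current live content data)
    with second attribute information (from historical live content data):
    historical values complement missing fields, and duplicate keys keep
    the current value, which deduplicates the information."""
    fused = dict(historical)  # start from the historical attributes...
    fused.update(current)     # ...and let current attributes override duplicates
    return fused

# e.g. fuse_attributes({"price": "99"}, {"price": "109", "gift": "stickers"})
# -> {"price": "99", "gift": "stickers"}
```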
Step 104: and presenting the key information in the live interface in response to a prediction result of the introduction ending of the target object.
In an exemplary embodiment, further comprising: detecting ending intention information for ending introduction of the target object in a live interface, wherein the ending intention information is used for representing that introduction of the target object is about to be completed; in response to successful detection of the end intention information, the prediction result of the end of introduction of the target object is obtained.
The present disclosure may detect the end intention information in various ways. For example, successful detection of the end intent information includes at least one of:
(1) Detecting that the current time in the live interface matches the subscription time for ending introduction of the target object:
For example, the anchor manually sets, or the live client automatically sets, twelve o'clock every night as the subscription time for ending introduction of the target object. When the current time of the live interface is twelve o'clock at night, it is determined that the anchor has an intention to end introducing the target object at this moment, and the ending intention information is determined to be successfully detected.
(2) Detecting that the live position information calibrated in the live interface matches the forecast position information for ending introduction of the target object:
For example, the anchor manually sets, or the live client automatically sets, indoor place B as the forecast position information for ending introduction of target objects of the outdoor product type. When the current position information of the live interface is detected to be consistent with indoor place B, it is determined that the anchor has an intention to end introducing the target object at this moment, and the ending intention information is determined to be successfully detected.
(3) Detecting that the live content data contains a content feature characterizing the end of introduction of the target object:
For example, the anchor manually sets, or the live client automatically sets, "waiting for future" as the content feature marking the end of introduction. The live content data of the anchor's live signal is monitored in real time; when the live content data is detected to contain the content feature "waiting for future", it is determined that the anchor has an intention to end introducing the target object in the live sale at this moment, and the ending intention information is determined to be successfully detected.
(4) Detecting that the live content data contains a semantic format characterizing the end of introduction of the target object:
For example, the anchor manually sets, or the live client automatically sets, "stop introducing xxx" as the semantic format marking the end of introduction, where "xxx" is the name of the target object. The live content data of the anchor's live signal is monitored in real time; when the live content data is detected to contain this semantic format, it is determined that the anchor has an intention to end introducing the target object in the live sale at this moment, and the ending intention information is determined to be successfully detected.
The above describes typical examples of detecting the ending intention information; those skilled in the art will appreciate that these examples are merely illustrative and are not intended to limit the scope of the embodiments of the present disclosure.
In an exemplary embodiment, the presenting the key information in the live interface in step 104 includes: presenting the key information in a selected area of the live interface; or presenting an operable control and prompt information in a selected area of the live interface, where the operable control triggers presentation of the key information in response to a trigger operation, and the prompt information indicates that the operable control can be triggered to present the key information. Preferably, the selected area comprises a dynamic window in the live interface.
FIG. 3 is a flow diagram illustrating an information processing procedure in accordance with an exemplary embodiment. The flow shown in fig. 3 is preferably performed by the anchor side. As shown in fig. 3, the process includes:
step 201: and judging whether introduction to the target object is started, if so, executing the step 202 and the subsequent steps, and otherwise, returning to execute the step 201.
Here, it is determined whether the anchor of the anchor has an introduction intention based on whether a subscription time for starting introduction of the target object is detected, whether current position information matches position information of the starting introduction target object, whether a content feature of the starting introduction target object is included in content data in a live interface, whether a semantic format of the starting introduction target object is included in the content data, and the like, wherein when the introduction intention exists, introduction for the target object is determined to start.
Step 202: the timer starts counting time.
Step 203: audio files in the live signal stream are converted to text information.
Step 204: the timer continues to count until a predetermined time (e.g., 10 seconds) has elapsed.
Step 205: determine whether the introduction of the target object is about to end; if so, execute step 207 and the subsequent steps; otherwise, execute step 206 and the subsequent steps.
Here, whether the anchor has an intention to end the introduction is determined based on, for example, whether the subscription time for ending introduction of the target object is detected, whether the current position information coincides with the expected position information for ending introduction of the target object, whether the live content data contains the content feature for ending introduction of the target object, and whether the live content data contains the semantic format for ending introduction of the target object. When the intention to end the introduction exists, it is determined that the introduction of the product is about to end.
Step 206: acquire object data of the target object, upload the object data and the text information from step 203 to the server, and return to step 202.
That is, the text information and the object data of the target object collected in the interval from when the timer started counting to when the predetermined time elapsed are sent to the server.
Step 207: notify the server that the introduction is ending, and acquire the key information from the server. Here, the server performs information extraction on the text information and the object data to generate the key information. The key information preferably has a predetermined template format to facilitate management and normalized presentation.
Step 208: present the key information, for example: displaying key information in text format in the form of subtitles, or displaying key information in graphic format in the form of a layer, and so on.
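Steps 201 to 208 amount to a timed collection loop. The sketch below strings them together; every callable passed in is a placeholder for logic described elsewhere in this disclosure, and the 10-second interval is only the example value from step 204:

```python
import time

def run_collection_loop(introduction_started, introduction_ending,
                        transcribe_segment, fetch_object_data,
                        upload, fetch_key_info, present,
                        interval_s: float = 10.0) -> None:
    """Illustrative driver for steps 201-208; not taken verbatim
    from the disclosure."""
    while not introduction_started():              # step 201: wait for intro start
        time.sleep(1)
    while True:
        started_at = time.monotonic()              # step 202: timer starts counting
        text = transcribe_segment()                # step 203: audio file -> text
        elapsed = time.monotonic() - started_at
        time.sleep(max(0.0, interval_s - elapsed)) # step 204: wait out the interval
        if introduction_ending():                  # step 205: about to end?
            present(fetch_key_info())              # steps 207-208: fetch and present
            break
        upload(fetch_object_data(), text)          # step 206: upload, then loop to 202
```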
FIG. 4 is a schematic diagram of an information processing architecture shown in accordance with an example embodiment.
The information processing architecture comprises an anchor logic processing unit located at the anchor side and a server logic processing unit located at the server side.
The anchor logic processing unit monitors the live process in real time and triggers the information presentation processing of the embodiments of the present disclosure when it finds that the anchor has started introducing a new target object (such as a product); whether the anchor has started introducing a new target object can be identified automatically by an artificial intelligence algorithm on the anchor side. In response to determining that the anchor has started introducing a new target object, the anchor logic processing unit collects, in real time, live content data containing the transaction attributes of the target object (e.g., an audio file in which the anchor introduces a gift accompanying the target object, an audio file in which the anchor gives the purchase link of the target object, or the text information obtained after the anchor's audio is converted into text) and sends it intermittently (every predetermined time) to the server logic processing unit. The anchor logic processing unit can also send object data describing the inherent attributes of the target object introduced by the anchor (such as the name of the target object, the price of the target object, and the like) to the server logic processing unit at the same time, so that the server logic processing unit can conveniently assemble information summarizing both the transaction attributes and the inherent attributes of the target object.
The server logic processing unit sends the text information converted from the anchor's audio file (the conversion work can be performed at the server side or at the anchor side) to a natural language processing service to extract the transaction attributes in the text information; for example, the transaction attributes include a gift accompanying the target object and/or a purchase link of the target object, and so on. Likewise, the server logic processing unit sends the object data of the target object (typically in text format) to the natural language processing service to extract the inherent attributes in the object data; for example, the inherent attributes include the name of the target object and/or the price of the target object, and so on. The server logic processing unit then arranges the transaction attributes and the inherent attributes into key information with a template format and sends the key information to the anchor side and the viewer side. After receiving the key information, the anchor side and the viewer side each display it in their respective live interfaces when the product introduction in the live stream is about to end.
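The arrangement into template-format key information might look like the following, where the field set mirrors the example rendered in FIG. 6 and is otherwise an assumption:

```python
# Field order follows the example display in FIG. 6; "-" marks missing values.
KEY_INFO_TEMPLATE = (
    "Trade name: {name}\n"
    "Quantity: {quantity}\n"
    "Model: {model}\n"
    "Gift: {gift}\n"
    "Gift quantity: {gift_quantity}\n"
    "Link: {link}\n"
    "Price: {price}"
)
FIELDS = ("name", "quantity", "model", "gift", "gift_quantity", "link", "price")

def arrange_key_information(transaction_attrs: dict, inherent_attrs: dict) -> str:
    """Merge the extracted transaction attributes with the inherent
    attributes from the object data and render them in the template format."""
    merged = {**inherent_attrs, **transaction_attrs}
    return KEY_INFO_TEMPLATE.format(
        **{field: merged.get(field, "-") for field in FIELDS})
```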
FIG. 5 is a first diagram illustrating the presentation of key information in a live interface, according to an example embodiment.
In FIG. 5, the live page includes an anchor avatar 61, an anchor user name 62, a follow button 63, avatars 64 of some viewers, a viewer count 65, and a comment area 67. The live page also includes a control 66, in which the prompt phrase "key information" is presented. When the control 66 is triggered, the key information can be presented in further detail. The key information includes the transaction attributes of the target object and the inherent attributes of the target object.
FIG. 6 is a second diagram illustrating the presentation of key information in a live interface, according to an example embodiment.
In FIG. 6, the live page includes an anchor avatar 61, an anchor user name 62, a follow button 63, avatars 64 of some viewers, a viewer count 65, and a comment area 67. The live page further includes an enlarged form 68 generated after the control 66 is triggered. The enlarged form 68 shows key information including the transaction attributes and the inherent attributes of the target object, specifically: "trade name: xxx; quantity: xxx; model: xxx; gift: xxx; gift quantity: xxx; link: xxx; price: xxx", thereby making it convenient for users watching the live stream to learn the product information.
Fig. 7 is a block diagram illustrating an information processing apparatus according to an exemplary embodiment. The information processing apparatus 400 includes:
an obtaining module 401 configured to, in response to detecting that introduction of a target object has started in a live interface, obtain live content data of the target object;
a sending module 402 configured to send the live content data;
a receiving module 403 configured to receive key information of the target object, where the key information includes attribute information obtained by performing information extraction on the live content data, and the attribute information characterizes a transaction attribute of the target object;
a presentation module 404 configured to present the key information in the live interface in response to a prediction result that introduction of the target object is ending.
In an exemplary embodiment, the receiving module 403 is configured to receive key information of the target object, where the key information includes fusion information of first attribute information and second attribute information, where the first attribute information is obtained by performing information extraction on current live content data, and the second attribute information is obtained by performing information extraction on historical live content data.
In an exemplary embodiment, the obtaining module 401 is configured to detect introduction intention information for introducing a target object in a live interface, where the introduction intention information is used to represent that the target object is in a state to be introduced; in response to successful detection of the introduction intent information, it is determined that introduction start of the target object is detected in the live interface.
In one exemplary embodiment, the successful detection of the introduction intention information includes at least one of: detecting that the current time in the live interface matches the subscription time for starting introduction of the target object; detecting that the live position information calibrated in the live interface matches the forecast position information for starting introduction of the target object; detecting that the live content data contains a content feature characterizing the start of introduction of the target object; and detecting that the live content data contains a semantic format characterizing the start of introduction of the target object.
In an exemplary embodiment, the obtaining module 401 is configured to obtain at least one of audio data and image data of an introduction target object, where the audio data includes semantic content describing the target object, and the image data includes graphic content representing the target object.
In an exemplary embodiment, the obtaining module 401 is configured to, if the audio data is obtained, convert the audio data into text data containing semantic content.
In an exemplary embodiment, the obtaining module 401 is configured to obtain object data of the target object, and initiate fusion of the object data with the live content data, so that the key information includes the result of information extraction on the object data.
In an exemplary embodiment, the obtaining module 401 is configured to send the object data and the live content data to a server for performing information extraction on the object data and the live content data.
In an exemplary embodiment, the presenting module 404 is configured to detect ending intention information for ending introduction of the target object in the live interface, where the ending intention information is used for representing that introduction of the target object is about to be completed; in response to successful detection of the end intention information, a prediction of an end of introduction of the target object is obtained.
In one exemplary embodiment, the successful detection of the ending intention information includes at least one of: detecting that the current time in the live interface matches the subscription time for ending introduction of the target object; detecting that the live position information calibrated in the live interface matches the forecast position information for ending introduction of the target object; detecting that the live content data contains a content feature characterizing the end of introduction of the target object; and detecting that the live content data contains a semantic format characterizing the end of introduction of the target object.
In an exemplary embodiment, the presentation module 404 is configured to present the key information in a selected area of the live interface; or, presenting an operable control and prompt information in a selected area of the live interface, wherein the operable control triggers the presentation of the key information in response to the triggering operation, and the prompt information is used for indicating that the key information is triggered to be presented by the operable control.
In summary, the technical solutions provided by the embodiments of the present disclosure yield at least the following beneficial effects: in response to detecting, in the live interface, that introduction of the target object has started, live content data of the target object is acquired; the live content data is sent; key information of the target object is received, where the key information includes attribute information obtained by performing information extraction on the live content data and characterizing transaction attributes of the target object; and the key information is presented in the live interface in response to a prediction result that introduction of the target object is ending, so that the key information of the target object is visually presented in the live interface and the information is conveyed accurately. Moreover, visually presenting the key information of the target object in the live interface makes it convenient for a user to learn the information of the target object accurately outside the introduction period.
The embodiments of the present disclosure also provide an electronic device. FIG. 8 is a block diagram of an electronic device according to an exemplary embodiment. As shown in FIG. 8, the electronic device 600 may include: a processor 601; and a memory 602 for storing instructions executable by the processor 601; where the processor 601 is configured to implement, when the executable instructions stored in the memory 602 are executed, the information processing method provided by the embodiments of the present disclosure.
It is understood that the electronic device 600 may be a server or a terminal device, and in particular applications, the terminal device may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
FIG. 9 is a block diagram illustrating an information processing apparatus according to an exemplary embodiment. For example, the apparatus 700 may be a smart phone, a tablet computer, a Moving Picture Experts Group Audio Layer III (MP3) player, a Moving Picture Experts Group Audio Layer IV (MP4) player, a notebook computer, or a desktop computer. The apparatus 700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the apparatus 700 includes: a processor 701 and a memory 702.
The processor 701 may include one or more processing cores, for example a 4-core or an 8-core processor. The processor 701 may be implemented in at least one of the hardware forms of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 701 may also include a main processor and a coprocessor: the main processor processes data in the awake state and is also called a Central Processing Unit (CPU); the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 701 may be integrated with a Graphics Processing Unit (GPU), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 701 may further include an Artificial Intelligence (AI) processor for handling computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices.
In some embodiments, a non-transitory computer readable storage medium in the memory 702 is used to store at least one instruction for execution by the processor 701 to implement the information processing methods provided by the various embodiments of the present disclosure. In some embodiments, the apparatus 700 may further include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, touch screen display 705, camera assembly 706, audio circuitry 707, positioning assembly 708, and power source 709.
The peripheral interface 703 may be used to connect at least one Input/Output (I/O) related peripheral to the processor 701 and the memory 702. In some embodiments, processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 704 is used for receiving and transmitting Radio Frequency (RF) signals, also called electromagnetic signals. The radio frequency circuit 704 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 704 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wireless Fidelity (WiFi) networks. In some embodiments, the radio frequency circuit 704 may also include Near Field Communication (NFC) related circuitry, which is not limited by this disclosure.
The display screen 705 is used to display a User Interface (UI), which may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, it also has the ability to capture touch signals on or over its surface; such a touch signal may be input to the processor 701 as a control signal for processing. In that case, the display 705 may also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 705, disposed on the front panel of the device 700; in other embodiments, there may be at least two displays 705, respectively disposed on different surfaces of the device 700 or in a folded design; in still other embodiments, the display 705 may be a flexible display disposed on a curved or folded surface of the device 700. The display 705 may even be arranged in an irregular, non-rectangular shape, i.e., a shaped screen. The display 705 may be a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The camera assembly 706 is used to capture images or video. Optionally, the camera assembly 706 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal and the rear camera on its rear surface. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, Virtual Reality (VR) shooting, or other fusion shooting functions. In some embodiments, the camera assembly 706 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash; the latter is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 707 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electrical signals, and inputs them to the processor 701 for processing or to the radio frequency circuit 704 for voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided at different locations of the device 700; the microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker converts electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker; a piezoelectric ceramic speaker can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans, for example for distance measurement. In some embodiments, the audio circuit 707 may also include a headphone jack.
The positioning component 708 is used to locate the current geographic position of the device 700 to implement navigation or Location Based Services (LBS). The positioning component 708 may be based on the Global Positioning System (GPS) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 709 is used to provide power to various components in device 700. The power source 709 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When power source 709 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also support a fast charge technique.
In some embodiments, the device 700 also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitude of acceleration along the three coordinate axes of the coordinate system established with the device 700; for example, it may be used to detect the components of gravitational acceleration along the three axes. The processor 701 may control the touch display 705 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 711. The acceleration sensor 711 may also be used to collect game or user motion data.
The gyro sensor 712 may detect a body direction and a rotation angle of the device 700, and the gyro sensor 712 may cooperate with the acceleration sensor 711 to acquire a 3D motion of the device 700 by the user. From the data collected by the gyro sensor 712, the processor 701 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 713 may be disposed on a side bezel of the device 700 and/or beneath the touch display 705. When disposed on a side bezel, it can detect the user's grip signal on the device 700, and the processor 701 may perform left/right-hand recognition or shortcut operations according to the grip signal. When disposed beneath the touch display 705, the processor 701 controls operability controls on the UI according to the user's pressure operation on the touch display 705. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 714 is used for collecting a fingerprint of a user, and the processor 701 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 714 may be disposed on the front, back, or side of the device 700. When a physical key or vendor Logo is provided on the device 700, the fingerprint sensor 714 may be integrated with the physical key or vendor Logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the touch display 705 based on the ambient light intensity collected by the optical sensor 715: when the ambient light intensity is high, the display brightness of the touch display 705 is increased; when it is low, the display brightness is decreased. In another embodiment, the processor 701 may also dynamically adjust the shooting parameters of the camera assembly 706 based on the ambient light intensity collected by the optical sensor 715.
The proximity sensor 716, also known as a distance sensor, is typically provided on the front panel of the device 700 and is used to capture the distance between the user and the front of the device 700. In one embodiment, when the proximity sensor 716 detects that this distance gradually decreases, the processor 701 controls the touch display 705 to switch from the bright-screen state to the dark-screen state; when it detects that the distance gradually increases, the processor 701 controls the touch display 705 to switch from the dark-screen state to the bright-screen state.
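As a hedged illustration, the light- and proximity-driven display control described in the two preceding paragraphs might reduce to logic like the following sketch; the lux threshold, brightness values, and callback names are invented for this example and are not part of the disclosed device.

```python
from typing import Callable

def adjust_display(ambient_light_lux: float,
                   user_distance_trend: str,
                   set_brightness: Callable[[float], None],
                   set_screen_state: Callable[[str], None]) -> None:
    """Illustrative sensor-driven display control (thresholds assumed)."""
    # Optical sensor: raise brightness in bright surroundings,
    # lower it in dim surroundings (500 lux is an assumed cutoff).
    set_brightness(0.9 if ambient_light_lux > 500.0 else 0.3)
    # Proximity sensor: darken the screen as the user approaches the
    # front panel, and wake it as the user moves away.
    if user_distance_trend == "decreasing":
        set_screen_state("dark")
    elif user_distance_trend == "increasing":
        set_screen_state("bright")
```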
Those skilled in the art will appreciate that the above-described configurations are not intended to be limiting of the apparatus 700 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In addition, the embodiments of the present disclosure also provide a non-transitory computer-readable storage medium; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the steps of the information processing method provided by the embodiments of the present disclosure. The computer-readable storage medium may include, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing; this enumeration does not limit the scope of the disclosure. In the disclosed embodiments, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In addition, the embodiment of the present disclosure further provides a computer program product, and when instructions in the computer program product are executed by a processor of an electronic device, the electronic device is enabled to execute the steps of the information processing method.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (10)

1. An information processing method characterized by comprising:
in response to detecting introduction start of a target object in a live interface, acquiring live content data of the target object;
sending the live content data;
receiving key information of the target object, wherein the key information comprises attribute information obtained by performing information extraction on the live content data, and the attribute information is used for characterizing transaction attributes of the target object;
and presenting the key information in the live interface in response to a prediction result of the introduction ending of the target object.
2. The method of claim 1, wherein
the receiving key information of the target object comprises:
receiving key information of the target object, wherein the key information comprises fusion information of first attribute information and second attribute information, the first attribute information being obtained by performing information extraction on current live content data, and the second attribute information being obtained by performing information extraction on historical live content data.
3. The method according to claim 1 or 2, wherein
the presenting the key information in the live interface comprises:
presenting the key information in a selected area of the live interface; or
presenting an operable control and prompt information in a selected area of the live interface, wherein the operable control triggers presentation of the key information in response to a trigger operation, and the prompt information is used for indicating that presentation of the key information is triggered through the operable control.
4. The method of claim 1, further comprising:
detecting, in the live interface, introduction intention information for introducing the target object, wherein the introduction intention information is used for characterizing that the target object is in a to-be-introduced state;
and in response to successful detection of the introduction intention information, determining that the introduction start of the target object is detected in the live interface.
5. The method of claim 4, wherein the successful detection of the introduction intention information comprises at least one of:
detecting that the current time in the live interface matches the subscribed time for starting the introduction of the target object;
detecting that the live position information calibrated in the live interface matches the forecast position information for starting the introduction of the target object;
detecting that the live content data contains a content feature for characterizing the start of introducing the target object;
detecting that the live content data contains a semantic format for characterizing the start of introducing the target object.
6. The method of claim 1, wherein the obtaining live content data of the target object comprises:
acquiring at least one of audio data and image data introducing the target object, wherein the audio data comprises semantic content describing the target object, and the image data comprises graphic content presenting the target object.
7. An information processing apparatus characterized by comprising:
an acquisition module configured to acquire live content data of a target object in response to detecting introduction start of the target object in a live interface;
a sending module configured to send the live content data;
a receiving module configured to receive key information of the target object, wherein the key information comprises attribute information obtained by performing information extraction on the live content data, and the attribute information is used for characterizing transaction attributes of the target object;
a presentation module configured to present the key information in the live interface in response to a prediction result that the introduction of the target object is ending.
8. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the information processing method of any one of claims 1 to 6.
9. A computer-readable storage medium having stored thereon computer instructions, wherein the computer instructions, when executed by a processor, implement the information processing method of any one of claims 1 to 6.
10. A computer program product comprising a computer program which, when executed by a processor, implements the information processing method of any one of claims 1 to 6.
CN202210440270.9A 2022-04-25 2022-04-25 Information processing method, information processing device, electronic equipment and storage medium Active CN114827651B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210440270.9A CN114827651B (en) 2022-04-25 2022-04-25 Information processing method, information processing device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114827651A true CN114827651A (en) 2022-07-29
CN114827651B CN114827651B (en) 2023-12-01

Family

ID=82506902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210440270.9A Active CN114827651B (en) 2022-04-25 2022-04-25 Information processing method, information processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114827651B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017166517A1 (en) * 2016-03-30 2017-10-05 乐视控股(北京)有限公司 Method and device for interaction in live broadcast
CN110139161A (en) * 2018-02-02 2019-08-16 阿里巴巴集团控股有限公司 Information processing method and device in live streaming
CN112399200A (en) * 2019-08-13 2021-02-23 腾讯科技(深圳)有限公司 Method, device and storage medium for recommending information in live broadcast
CN111601145A (en) * 2020-05-20 2020-08-28 腾讯科技(深圳)有限公司 Content display method, device and equipment based on live broadcast and storage medium
CN111652678A (en) * 2020-05-27 2020-09-11 腾讯科技(深圳)有限公司 Article information display method, device, terminal, server and readable storage medium
CN113315979A (en) * 2020-08-10 2021-08-27 阿里巴巴集团控股有限公司 Data processing method and device, electronic equipment and storage medium
CN112104899A (en) * 2020-09-11 2020-12-18 腾讯科技(深圳)有限公司 Information recommendation method and device in live broadcast, electronic equipment and storage medium
CN112561631A (en) * 2020-12-08 2021-03-26 北京达佳互联信息技术有限公司 Information display method and device, electronic equipment and storage medium
CN112818674A (en) * 2021-01-29 2021-05-18 广州繁星互娱信息科技有限公司 Live broadcast information processing method, device, equipment and medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116107684A (en) * 2023-04-12 2023-05-12 天津中新智冠信息技术有限公司 Page amplification processing method and terminal equipment
CN116107684B (en) * 2023-04-12 2023-08-15 天津中新智冠信息技术有限公司 Page amplification processing method and terminal equipment

Also Published As

Publication number Publication date
CN114827651B (en) 2023-12-01

Similar Documents

Publication Publication Date Title
CN109600678B (en) Information display method, device and system, server, terminal and storage medium
CN110267067B (en) Live broadcast room recommendation method, device, equipment and storage medium
CN109040297B (en) User portrait generation method and device
CN111083516B (en) Live broadcast processing method and device
CN112672176B (en) Interaction method, device, terminal, server and medium based on virtual resources
CN110572711B (en) Video cover generation method and device, computer equipment and storage medium
WO2019114514A1 (en) Method and apparatus for displaying pitch information in live broadcast room, and storage medium
CN110865754B (en) Information display method and device and terminal
CN111355974A (en) Method, apparatus, system, device and storage medium for virtual gift giving processing
CN110213612B (en) Live broadcast interaction method and device and storage medium
CN110418152B (en) Method and device for carrying out live broadcast prompt
CN110533585B (en) Image face changing method, device, system, equipment and storage medium
CN110139143B (en) Virtual article display method, device, computer equipment and storage medium
CN110933468A (en) Playing method, playing device, electronic equipment and medium
CN110290392B (en) Live broadcast information display method, device, equipment and storage medium
CN111061405B (en) Method, device and equipment for recording song audio and storage medium
CN111445901A (en) Audio data acquisition method and device, electronic equipment and storage medium
CN110996167A (en) Method and device for adding subtitles in video
CN112788359A (en) Live broadcast processing method and device, electronic equipment and storage medium
CN113259702A (en) Data display method and device, computer equipment and medium
CN111031391A (en) Video dubbing method, device, server, terminal and storage medium
CN110798327B (en) Message processing method, device and storage medium
CN112468884A (en) Dynamic resource display method, device, terminal, server and storage medium
CN114827651B (en) Information processing method, information processing device, electronic equipment and storage medium
CN109819308B (en) Virtual resource acquisition method, device, terminal, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant