CN113301412B - Information display method, information processing method, device and system - Google Patents

Information display method, information processing method, device and system

Info

Publication number
CN113301412B
CN113301412B (granted publication) · Application CN202010340430.3A
Authority
CN
China
Prior art keywords
video picture
product
area
server
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010340430.3A
Other languages
Chinese (zh)
Other versions
CN113301412A (en)
Inventor
赵立冬 (Zhao Lidong)
黄梦华 (Huang Menghua)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202010340430.3A
Publication of CN113301412A
Application granted
Publication of CN113301412B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
            • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; operations thereof
              • H04N 21/21: Server components or server architectures
                • H04N 21/218: Source of audio or video content, e.g. local disk arrays
                  • H04N 21/2187: Live feed
            • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; operations thereof
              • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; client middleware
                • H04N 21/431: Generation of visual interfaces for content selection or interaction; content or additional data rendering
                  • H04N 21/4312: Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
                    • H04N 21/4316: Generation of visual interfaces for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
                • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
                  • H04N 21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
              • H04N 21/47: End-user applications
                • H04N 21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
                  • H04N 21/4788: Supplemental services communicating with other users, e.g. chatting
                • H04N 21/488: Data services, e.g. news ticker
                  • H04N 21/4882: Data services for displaying messages, e.g. warnings, reminders
    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 20/00: Scenes; scene-specific elements
            • G06V 20/40: Scenes; scene-specific elements in video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of the present application provide an information display method, an information processing method, and corresponding devices and systems. A first area is provided in the live interface of a live broadcast room and displays a first video picture captured by an anchor terminal. A second area is provided in the same live interface and displays a second video picture, which is obtained by identifying a target object in a third video picture and, based on the target object, fusing a product effect of a target product associated with the live broadcast room into the third video picture; the third video picture is captured by a first user terminal. The technical solution of the embodiments improves the product promotion effect.

Description

Information display method, information processing method, device and system
Technical Field
Embodiments of the present application relate to the field of computer application technology, and in particular to an information display method, an information processing method, and related devices and systems.
Background
With the development of internet and streaming-media technology, live webcasting has grown rapidly and attracted more and more viewers. In a live webcast, information is produced and distributed on site, synchronously with the occurrence and development of an event, forming a two-way information distribution mode over the network.
Because live webcasting brings substantial user traffic, many product providers choose it as a way to introduce their products. In an e-commerce scenario, for example, combining live streaming with an e-commerce platform guides users to learn about goods more thoroughly and quickly, so that purchases can be completed sooner.
At present, however, a product is introduced through the anchor's explanation, supplemented by an on-site trial when necessary. A user can learn about the product only through that explanation and cannot actually experience it, so the product promotion effect is not good enough.
Disclosure of Invention
Embodiments of the present application provide an information display method, an information processing method, and corresponding devices and systems, so as to solve the technical problem in the prior art that the product promotion effect is not good enough.
In a first aspect, an embodiment of the present application provides an information display method, including:
providing a first area in a live interface of a live broadcasting room;
displaying a first video picture in the first area; the first video picture is captured by an anchor terminal;
providing a second area in the live interface; displaying a second video picture in the second area; the second video picture is obtained by identifying a target object from a third video picture and fusing a product effect of a target product associated with the live broadcast room into the third video picture based on the target object; and the third video picture is acquired by the first user terminal.
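The fusion step in this first aspect can be pictured with a toy sketch: a product-effect value is alpha-blended into the bounding-box region where a target object was detected. This is purely illustrative, not the patent's algorithm; the frame is a small grayscale grid and all names (`blend_effect`, `box`) are hypothetical.

```python
def blend_effect(frame, box, effect_value, alpha=0.5):
    """Return a copy of `frame` with `effect_value` alpha-blended into
    the bounding box (x0, y0, x1, y1) where the target object sits."""
    x0, y0, x1, y1 = box
    out = [row[:] for row in frame]          # leave the input frame intact
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = round((1 - alpha) * frame[y][x] + alpha * effect_value)
    return out

# third video picture: a 4x4 grayscale frame captured by the first user terminal
third_picture = [[10] * 4 for _ in range(4)]
# second video picture: product effect fused into the detected region
second_picture = blend_effect(third_picture, (1, 1, 3, 3), 200, alpha=0.5)
print(second_picture[1][1])  # 105 (blended)
print(second_picture[0][0])  # 10 (outside the region, unchanged)
```

A real implementation would detect the region with a vision model and blend a textured effect image rather than a single value; the sketch only shows where the "based on the target object" fusion happens.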
In a second aspect, an embodiment of the present application provides an information display method, including:
a first user end provides a first area and a first interactive control in a live broadcast interface, and displays a first video picture in the first area;
in response to a triggering operation on the first interactive control, establishing an interactive connection channel with an anchor end through a server end, and providing a second area in the live broadcast interface;
collecting a third video picture and sending the third video picture to the server;
displaying trial prompt information corresponding to at least one product associated with the live broadcast room in the live broadcast interface;
in response to a trigger operation on any trial prompt information, sending a trial request to the server based on the target product corresponding to that trial prompt information, so that the server identifies a target object corresponding to the target product in the third video picture and, based on the target object, fuses a product effect of the target product into the third video picture to obtain a second video picture;
displaying the second video picture in the second area.
In a third aspect, an embodiment of the present application provides an information display method, including:
the anchor terminal provides a first area and second interactive controls corresponding to different users in a live broadcast interface, and displays a first video picture in the first area;
in response to a triggering operation on any second interactive control, establishing, through a server side, an interactive connection channel with the first user side corresponding to that second interactive control, and providing a second area in the live broadcast interface;
displaying trial prompt information corresponding to at least one product associated with the live broadcast room in the live broadcast interface;
in response to a trigger operation on any trial prompt information, sending a trial request to the server based on the target product corresponding to that trial prompt information, so that the server identifies a target object corresponding to the target product in a third video picture captured by the first user side and, based on the target object, fuses a product effect of the target product into the third video picture to obtain a second video picture;
displaying the second video picture in the second area.
In a fourth aspect, an embodiment of the present application provides an information display method, including:
the second user side obtains a first video picture and a second video picture provided by the server side; the first video picture is captured by an anchor terminal; the second video picture is obtained by identifying a target object from a third video picture and fusing a product effect of a target product associated with the live broadcast room into the third video picture based on the target object; the third video picture is captured by the first user terminal;
providing a first area and a second area in a live broadcast interface;
displaying the first video picture in the first area and displaying the second video picture in the second area.
In a fifth aspect, an embodiment of the present application provides an information processing method, including:
establishing an interactive connection channel between a first user terminal and an anchor terminal;
acquiring a first video picture acquired by the anchor terminal;
acquiring a third video picture acquired by the first user side, and identifying a target object in the third video picture;
based on the target object, fusing the product effect of the target product associated with the live broadcast room into the third video picture to obtain a second video picture;
and providing the first video picture and the second video picture to a second user end so as to enable the second user end to display the first video picture and the second video picture.
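The server-side flow of this fifth aspect (establish the channel, take the anchor's first picture and the first user's third picture, identify the target object, fuse the product effect, provide both pictures to viewers) can be sketched end to end. The detector and fusion below are deliberately trivial stand-ins, since the patent does not specify them, and all function names are hypothetical.

```python
def identify_target(frame):
    """Stand-in detector: the target object is every cell labeled 'face'."""
    return [(y, x) for y, row in enumerate(frame)
                   for x, v in enumerate(row) if v == "face"]

def fuse_effect(frame, targets, effect):
    """Fuse the product effect into the frame at the detected positions."""
    out = [row[:] for row in frame]
    for y, x in targets:
        out[y][x] = effect
    return out

def serve(first_picture, third_picture, effect="lipstick"):
    """Server flow: identify the target object in the first user's third
    picture, fuse the product effect to obtain the second picture, and
    return both pictures for delivery to second-user viewers."""
    targets = identify_target(third_picture)
    second_picture = fuse_effect(third_picture, targets, effect)
    return {"first": first_picture, "second": second_picture}

out = serve([["anchor"]], [["bg", "face"], ["face", "bg"]])
print(out["second"])  # [['bg', 'lipstick'], ['lipstick', 'bg']]
```

The point of the sketch is the division of labor: the server, not the user terminal, performs identification and fusion, so every viewer receives the same already-fused second picture.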
In a sixth aspect, an embodiment of the present application provides an information display method, including:
a first user end provides a first area and a mic-connect control in a live broadcast interface, and displays a first video picture in the first area;
in response to a triggering operation on the mic-connect control, establishing a mic-connect interaction channel with an anchor end through a server end, and providing a second area in the live broadcast interface;
collecting a third video picture and sending the third video picture to the server;
displaying makeup-trial prompt information corresponding to at least one makeup product associated with the live broadcast room in the live broadcast interface;
in response to a trigger operation on any makeup-trial prompt information, sending a makeup-trial request to the server based on the target makeup product corresponding to that prompt information, so that the server identifies the face feature part corresponding to the target makeup product in the third video picture and, based on the face feature part, fuses a makeup effect image of the target makeup product into the third video picture to obtain a makeup-trial picture;
displaying the makeup-trial picture in the second area.
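The makeup-trial request/response exchange in this sixth aspect could be carried as simple messages between the first user end and the server. The field names, the product catalog, and the mapping from product to face feature part below are illustrative assumptions, not part of the patent.

```python
def make_trial_request(room_id, product_id, user_id):
    """Message sent by the first user end when a makeup-trial prompt is tapped."""
    return {"type": "makeup_trial", "room": room_id,
            "product": product_id, "user": user_id}

def handle_trial_request(req, catalog):
    """Server side: look up which face feature part the target makeup
    product applies to, so the fusion step knows what to identify."""
    feature = catalog[req["product"]]
    return {"type": "trial_accepted", "product": req["product"],
            "target_feature": feature}

# hypothetical catalog mapping products to the feature parts they target
catalog = {"lipstick-01": "lips", "eyeshadow-02": "eyelids"}
req = make_trial_request("room-7", "lipstick-01", "user-42")
resp = handle_trial_request(req, catalog)
print(resp["target_feature"])  # lips
```

Keying the request on a product identifier rather than on raw pixels keeps the user end thin: it only reports which prompt was triggered, and the server decides what to detect and fuse.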
In a seventh aspect, an embodiment of the present application provides an information display method, including:
the anchor end provides a first area and mic-connect controls corresponding to different users in a live broadcast interface, and displays a first video picture in the first area;
in response to a triggering operation on any mic-connect control, establishing, through a server end, a mic-connect interaction channel with the first user side corresponding to that mic-connect control, and providing a second area in the live broadcast interface;
displaying makeup-trial prompt information corresponding to at least one makeup product associated with the live broadcast room in the live broadcast interface;
in response to a trigger operation on any makeup-trial prompt information, sending a makeup-trial request to the server based on the target makeup product corresponding to that prompt information, so that the server identifies the face feature part corresponding to the target makeup product in a third video picture captured by the first user side and, based on the face feature part, fuses a makeup effect image of the target makeup product into the third video picture to obtain a makeup-trial picture;
displaying the makeup-trial picture in the second area.
In an eighth aspect, an embodiment of the present application provides an information display method, including:
the second user side obtains a first video picture and a makeup-trial picture provided by the server side; the first video picture is captured by an anchor terminal; the makeup-trial picture is obtained by identifying a face feature part from a third video picture and fusing a makeup effect picture of a target makeup product associated with the live broadcast room into the third video picture based on the face feature part; the third video picture is captured by the first user terminal;
providing a first area and a second area in a live broadcast interface;
displaying the first video picture in the first area, and displaying the makeup-trial picture in the second area.
In a ninth aspect, an embodiment of the present application provides an information processing method, including:
establishing a mic-connect interaction channel between a first user terminal and an anchor terminal;
acquiring a first video picture acquired by the anchor terminal;
acquiring a third video picture captured by the first user side, and identifying a face feature part in the third video picture;
based on the face feature part, fusing a makeup effect picture of a target makeup product associated with a live broadcast room into the third video picture to obtain a makeup-trial picture;
and providing the first video picture and the makeup-trial picture to a second user end so that the second user end can display the first video picture and the makeup-trial picture.
In a tenth aspect, an embodiment of the present application provides a live webcast system, which includes a server, an anchor, a first user, and a second user;
the server is used for establishing an interactive connection channel between the anchor terminal and the first user terminal; acquiring a first video picture captured by the anchor terminal; acquiring a third video picture captured by the first user side, and identifying a target object in the third video picture; and, based on the target object, fusing the product effect of the target product associated with the live broadcast room into the third video picture to obtain a second video picture;
the first user side is used for acquiring a third video picture after an interactive connection channel with the anchor side is established, and sending the third video picture to the server side; acquiring the first video picture and the second video picture from the server side, and displaying the first video picture and the second video picture;
the second user side is used for acquiring the first video picture and the second video picture from the server side and displaying the first video picture and the second video picture.
In an eleventh aspect, an embodiment of the present application provides an information display device, including:
the first display module is used for providing a first area in a live interface of a live broadcast room and displaying a first video picture in the first area; the first video picture is captured by an anchor terminal;
the second display module is used for providing a second area in the live broadcast interface; displaying a second video picture in the second area; the second video picture is obtained by identifying a target object from a third video picture and fusing a product effect of a target product associated with the live broadcast room into the third video picture based on the target object; and the third video picture is acquired by the first user terminal.
In a twelfth aspect, an embodiment of the present application provides an information display apparatus, including:
the third display module is used for providing a first area and a first interactive control in a live broadcast interface and displaying a first video picture in the first area;
the first response module is used for responding to the triggering operation of the first interactive control, establishing an interactive connection channel with a main broadcast end through a server end, and providing a second area in the live broadcast interface;
the first acquisition module is used for acquiring a third video picture and sending the third video picture to the server;
the fourth display module is used for displaying trial prompt information corresponding to at least one product associated with the live broadcast room in the live broadcast interface; in response to a triggering operation on any trial prompt information, sending a trial request to the server based on the target product corresponding to that trial prompt information, so that the server identifies a target object corresponding to the target product in the third video picture and, based on the target object, fuses a product effect of the target product into the third video picture to obtain a second video picture; and displaying the second video picture in the second area.
In a thirteenth aspect, an embodiment of the present application provides an information display apparatus, including:
the fifth display module is used for providing a first area and second interactive controls corresponding to different users in a live broadcast interface and displaying a first video picture in the first area;
the second response module is used for, in response to a triggering operation on any second interactive control, establishing, through the server end, an interactive connection channel with the first user end corresponding to that second interactive control, and providing a second area in the live broadcast interface;
a sixth display module, configured to display trial prompt information corresponding to each of at least one product associated with the live broadcast room in the live broadcast interface; in response to a trigger operation on any trial prompt information, send a trial request to the server based on the target product corresponding to that trial prompt information, so that the server identifies a target object corresponding to the target product in a third video picture captured by the first user side and, based on the target object, fuses a product effect of the target product into the third video picture to obtain a second video picture; and display the second video picture in the second area.
In a fourteenth aspect, an embodiment of the present application provides an information display apparatus, including:
the first acquisition module is used for acquiring a first video picture and a second video picture provided by the server; the first video picture is captured by an anchor terminal; the second video picture is obtained by identifying a target object from a third video picture and fusing a product effect of a target product associated with the live broadcast room into the third video picture based on the target object; the third video picture is captured by the first user terminal;
the seventh display module is used for providing a first area and a second area in a live broadcast interface; displaying the first video picture in the first area and displaying the second video picture in the second area.
In a fifteenth aspect, an embodiment of the present application provides an information processing apparatus, including:
the interactive triggering module is used for establishing an interactive connection channel between the first user terminal and the anchor terminal;
the second acquisition module is used for acquiring a first video picture acquired by the anchor terminal; acquiring a third video picture acquired by the first user side;
the identification module is used for identifying a target object in the third video picture;
the fusion module is used for fusing the product effect of the target product associated with the live broadcast room into the third video picture to obtain a second video picture based on the target object;
and the providing module is used for providing the first video picture and the second video picture to a second user end so as to enable the second user end to display the first video picture and the second video picture.
In a sixteenth aspect, an embodiment of the present application provides a live interface, including a first area and a second area;
the first area is used for displaying a first video picture; the first video picture is captured by an anchor terminal;
the second area is used for displaying a second video picture; the second video picture is obtained by identifying a target object from a third video picture and fusing a product effect of a target product associated with the live broadcast room into the third video picture based on the target object position; and the third video picture is acquired by the first user terminal.
In a seventeenth aspect, an embodiment of the present application provides an electronic device, including a storage component, a display component, and a processing component; the storage component stores one or more computer program instructions; the one or more computer program instructions are for being invoked and executed by the processing component to implement the information display method of the first aspect.
In an eighteenth aspect, embodiments of the present application provide a computing device, comprising a processing component and a storage component;
the storage component stores one or more computer instructions; the one or more computer instructions are called and executed by the processing component to implement the information processing method according to the fifth aspect.
In a nineteenth aspect, an embodiment of the present application provides a computer storage medium storing a computer program, where the computer program, when executed by a computer, implements the information display method according to the first aspect.
A twentieth aspect provides a computer storage medium storing a computer program which, when executed by a computer, implements the information processing method according to the fifth aspect.
In the embodiments of the present application, the anchor terminal and a user side can establish an interactive connection channel, so that a first area and a second area can be provided in the live interface. The first area displays the first video picture captured by the anchor terminal; the second area displays the second video picture, which is generated from the third video picture captured by the user side: a target object in the third video picture is identified and, based on that target object, the product effect of a target product associated with the live broadcast room is fused into the third video picture to obtain the second video picture. The second video picture thereby achieves a display effect combining the virtual and the real, realizing a virtual trial of the target product. A user can watch the anchor's explanation and, at the same time, watch the virtual trial effect of the product, which improves the user experience, improves the product promotion effect, and realizes effective information recommendation.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram illustrating an embodiment of a live webcasting system provided in the present application;
FIG. 2 is a flow chart illustrating one embodiment of an information display method provided herein;
fig. 3a to fig. 3c respectively show display diagrams of a live interface in an actual application of the embodiment of the present application;
FIG. 4 is a flow chart illustrating a further embodiment of an information display method provided by the present application;
fig. 5a to 5f respectively show display diagrams of a live interface in an actual application of the embodiment of the present application;
FIG. 6 is a flow chart illustrating a further embodiment of an information display method provided by the present application;
FIG. 7 is a flow chart illustrating a further embodiment of an information display method provided by the present application;
FIG. 8 is a schematic structural diagram illustrating an embodiment of an information processing method provided by the present application;
FIG. 9 is a schematic diagram illustrating an embodiment of an information display device provided by the present application;
FIG. 10 is a schematic diagram illustrating a structure of another embodiment of an information display device provided in the present application;
FIG. 11 is a schematic diagram illustrating a structure of another embodiment of an information display device provided by the present application;
FIG. 12 is a schematic diagram illustrating a structure of another embodiment of an information display device provided by the present application;
FIG. 13 is a schematic diagram illustrating an embodiment of an electronic device provided by the present application;
FIG. 14 is a schematic diagram illustrating an embodiment of an information processing apparatus provided by the present application;
FIG. 15 illustrates a schematic diagram of one embodiment of a computing device provided herein.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Some of the flows described in the specification, claims, and drawings of this application contain operations that appear in a particular order, but it should be clearly understood that these operations may be executed out of the order in which they appear herein, or in parallel. Operation numbers such as 101 and 102 are merely used to distinguish different operations; the numbers themselves do not imply any execution order. In addition, the flows may include more or fewer operations, and these operations may be executed sequentially or in parallel. The terms "first", "second", and so on in this document are used to distinguish different messages, devices, modules, etc.; they do not represent a sequential order, nor do they limit the types of "first" and "second".
The technical scheme of the embodiment of the application is mainly applied to a live webcast scene for popularizing products in a live broadcast mode, for example, an E-commerce live broadcast scene for recommending commodities in a live broadcast mode.
Because live webcasting brings substantial user traffic, many product providers choose it as a way to introduce products to users. In a live webcast scenario, the live content of different anchors is usually distinguished by live broadcast rooms, and a user enters a given live broadcast room through a user terminal to watch the products recommended by that room's anchor. While explaining a product on site, the anchor may try it out when necessary to help viewers understand it: trying on makeup when the product is a cosmetic, trying on clothing when it is apparel, trying on accessories, and so on. The user can thus learn about the product only through the anchor's explanation or the anchor's trial effect, and still has no first-hand impression of it. The product promotion effect therefore suffers, which in turn affects the product conversion rate; in an e-commerce live scenario, if the goods the anchor explains cannot effectively attract a user, the user will not place an order, which affects the purchase rate of the goods.
In order to effectively improve the product popularization effect, the inventor proposes the technical scheme of the present application through a series of researches, and the technical scheme in the embodiment of the present application will be clearly and completely described below with reference to the drawings in the embodiment of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The technical solution in this embodiment of the present application can be applied to a live webcast system as shown in fig. 1. The live webcast system mainly includes an anchor terminal 101, a server terminal 102, and user terminals, where the user terminals may include a first user terminal 103 and a second user terminal 104. It can be understood that in a live webcast scenario, the system may have at least one first user terminal 103 and at least one second user terminal 104. A viewing user registers a user account through a user terminal and can then watch the live video.
It should be noted that the anchor terminal and the user terminal shown in fig. 1 may be implemented in a mobile phone, a tablet computer, or other electronic devices with a video capture function in practical applications, and are not limited to the device shapes shown in fig. 1.
In a typical implementation, the anchor terminal is responsible for capturing the sound and/or picture of the live broadcast site where the anchor is located in real time to obtain the live content, and for uploading the live content to the server terminal; the live content can include video pictures, audio data, and the like. The live content of different anchors can be distinguished by live broadcast rooms: an anchor can first apply to the server terminal for a live broadcast room through the anchor terminal, and then record and upload live content in real time. A user can request to enter a certain live broadcast room through a user terminal; the server terminal sends the live content of that room to the user terminal, and the user terminal plays it, displaying the video pictures of the live content in a live interface of the live broadcast room.
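The live-room model described above can be sketched as a simple registry on the server terminal: an anchor applies for a room, uploads content to it, and viewers who have entered the room fetch that content for playback. This is only an illustrative sketch; the class and method names are assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the live-room model: anchor applies for a room,
# uploads live content; viewers enter the room and fetch content to play.
class LiveServer:
    def __init__(self):
        self.rooms = {}   # room_id -> {"anchor": ..., "content": [...], "viewers": {...}}
        self._next_id = 1

    def open_room(self, anchor_id):
        """Anchor terminal applies to the server for a live broadcast room."""
        room_id = self._next_id
        self._next_id += 1
        self.rooms[room_id] = {"anchor": anchor_id, "content": [], "viewers": set()}
        return room_id

    def enter_room(self, room_id, user_id):
        """A user terminal requests to enter a certain live broadcast room."""
        self.rooms[room_id]["viewers"].add(user_id)

    def upload(self, room_id, frame):
        """Anchor terminal uploads live content (video pictures, audio, ...)."""
        self.rooms[room_id]["content"].append(frame)

    def fetch(self, room_id):
        """Viewers obtain the room's live content for playback."""
        return list(self.rooms[room_id]["content"])

server = LiveServer()
room = server.open_room("anchor-1")
server.enter_room(room, "user-A")
server.upload(room, "frame-0")
print(server.fetch(room))  # ['frame-0']
```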
The anchor can explain one or more products that need recommend on the live broadcast scene, and the user can obtain the relevant information of product from the live broadcast content that the user played through the user side.
In this embodiment of the application, as shown in fig. 1, a server 102 may establish an interactive connection channel between an anchor terminal 101 and a first user terminal 103, where the anchor terminal 101 is configured to acquire a first video picture; after the first user terminal 103 establishes an interactive connection channel with the anchor terminal 101, a third video picture can be collected and sent to the server terminal 102; the server 102 can identify a feature of a target object in the third video picture, and based on the feature, fuse a product effect graph of a target product associated with the live broadcast room into the third video picture to obtain a second video picture;
the server terminal 102 may provide the first video picture and the second video picture to the second user terminal 104, so that the second user terminal 104 can display the first video picture and the second video picture in the live interface.
Of course, the server 102 may also provide the first video frame and the second video frame to the first user terminal 103, and the first user terminal 103 may display the first video frame and the second video frame in the live interface.
The server 102 may also provide the first video frame and the second video frame to the anchor terminal 101, or only provide the second video frame to the anchor terminal 101, so that the anchor terminal 101 can display the first video frame and the second video frame in the live interface.
In the embodiment of the application, the first video picture and the second video picture can both be displayed in the live interface, and the second video picture is a live picture obtained by fusing the product effect graph into the third video picture. A viewer can therefore not only watch the anchor's explanation in the live interface but also watch the virtual trial effect of the product, which improves the user's viewing experience and thus the product promotion effect.
As can be seen from the above description, the first user terminal 103 and the second user terminal 104 differ only in that the first user terminal 103 is a user terminal that establishes an interactive connection channel with the anchor terminal 101 to implement a multi-party co-streaming ("lian mai") live video scenario: its user can view not only the video picture of the site where the anchor terminal is located from the live interface, but also the video picture of the site where the first user terminal is located. Of course, the first user terminal can also collect audio data and the like of its own site for interactive transmission, which is not specifically limited in the present application.
The user terminals can be configured in electronic devices such as mobile phones, tablet computers, computers, and smart watches. The server terminal can be implemented by a Content Delivery Network (CDN) system or another processing system. The anchor terminal can consist of an electronic device with a capture function and an Open Broadcaster Software (OBS) streaming function, for example, a smart device such as a mobile phone or tablet with a camera; the first user terminal can likewise consist of an electronic device with a capture function and an OBS streaming function.
Of course, the present application is not limited to implementing live webcasting with the above technical solution. In addition, as those skilled in the art can understand, the live content may need to be encoded, transcoded, compressed, and so on before being uploaded to the server terminal, and correspondingly the user terminal may need to decode, decompress, and so on before playing the live data; these steps are the same as in the prior art and are not described again.
The user side and the anchor side can be independent application programs, and can also be functional modules integrated in other application programs and the like.
Based on the network live broadcast system shown in fig. 1, as shown in fig. 2, an embodiment of an information display method provided in the embodiment of the present application is a flowchart, and the method may include the following steps:
201: a first zone is provided in a live interface of a live room.
The solution of the embodiment shown in fig. 2 may be performed by the anchor side, the first user side or the second user side.
202: a first video picture is displayed in the first area.
The first video picture is captured by the anchor terminal. The anchor terminal can capture the picture of the live broadcast site in real time, and can also capture sound in real time, to generate the live content of the live broadcast room and upload it to the server terminal; the live content includes the first video picture.
After entering the live broadcasting room, the first user end and the second user end can obtain the live broadcasting content from the server end and play the live broadcasting content, wherein the live broadcasting content comprises the first video picture, namely, a first area is provided in a live broadcasting interface to display the first video picture.
And the anchor terminal can also provide a live interface and display the captured first video picture in the first area of the live interface.
203: a second region is provided in the live interface.
204: and displaying the second video picture in the second area.
The second video picture is obtained by identifying a target object from the third video picture and fusing a product effect of a target product associated with the live broadcast room into the third video picture based on the target object; and the third video picture is acquired by the first user terminal. The first user end and the anchor end can establish an interactive connection channel, so that the first user end is triggered to acquire a third video picture.
As an optional manner, fusing the product effect of the target product to the third video frame may be obtained by performing corresponding image processing on the target object in the third video frame based on the product effect.
Alternatively, the product effect of the target product can be represented by a product effect graph. The second video picture may then specifically be obtained by identifying a target object from the third video picture and fusing the product effect graph of the target product associated with the live broadcast room into the third video picture based on the target object; optionally, the target object identified from the third video picture may specifically be a target object associated with the target product.
Since different products may correspond to different feature portions in different objects, the second video frame may be obtained by fusing a product effect diagram of a target product associated with a live broadcast room to a third video frame based on the feature portion of the target object.
Alternatively, the characteristic portion of the target object may be determined based on a user selection operation.
Alternatively, the feature portion of the target object corresponding to the target product may be identified from the third video image in combination with the target product. Therefore, the second video picture can be obtained by identifying a characteristic part of the target object from the third video picture and fusing a product effect graph of the target product associated with the live broadcast room into the third video picture based on the characteristic part.
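As a rough illustration of the fusion step described above (not the patent's actual algorithm), the product effect graph can be blended over the identified feature part of the third video picture. In this sketch a frame is a small grid of RGB tuples; the function name, region coordinates, and alpha value are all assumptions made for illustration.

```python
# Illustrative sketch: alpha-blend a product effect map ("effect") over a
# video frame ("frame") at the identified feature region (top, left).
def fuse_effect(frame, effect, top, left, alpha=0.6):
    """Return a new frame with `effect` blended over `frame` at (top, left)."""
    out = [row[:] for row in frame]          # copy; the source frame is untouched
    for i, erow in enumerate(effect):
        for j, (er, eg, eb) in enumerate(erow):
            fr, fg, fb = frame[top + i][left + j]
            out[top + i][left + j] = (
                round(alpha * er + (1 - alpha) * fr),
                round(alpha * eg + (1 - alpha) * fg),
                round(alpha * eb + (1 - alpha) * fb),
            )
    return out

# A 2x2 grey frame and a 1x1 red "lipstick" effect map fused at region (0, 0).
frame = [[(100, 100, 100), (100, 100, 100)],
         [(100, 100, 100), (100, 100, 100)]]
effect = [[(200, 0, 0)]]
second = fuse_effect(frame, effect, top=0, left=0)
print(second[0][0])  # (160, 40, 40)
```

A real implementation would identify the feature region with computer vision and blend per-pixel alpha from the effect map; the principle of compositing virtual content onto the real shot is the same.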
The live broadcast room can be associated with at least one product. The at least one product can be published in advance by the anchor; the anchor can provide the at least one product when applying for the live broadcast room, and the server terminal can establish the association between the at least one product and the live broadcast room. Of course, the at least one product associated with the live broadcast room may also consist of the products the anchor has explained historically as well as the product currently being explained.
Wherein the target product may be a product currently taught by the anchor or any one of the products selected by the anchor or the first user from the at least one product.
The target object may refer to the trial object corresponding to the target product, and the feature part of the target object may refer to the trial part of the target object corresponding to the target product. For example, if the target product is a product applied to a human body, the target object may be a human body, and the feature part may be determined according to the trial part corresponding to the particular target product: if the target product is lipstick, the feature part is the lips; when the target product is liquid foundation, the feature part is the face; when the target product is a jacket, the feature part is the upper body; when the target product is a hat, the feature part is the head; and so on. Of course, the target product may also be a product suitable for other objects. For example, when the target product is a television, the target object may be a wall or a television cabinet, and the feature part may be a blank area of the wall or an empty area of the television cabinet. Assuming the viewing user corresponding to the first user terminal is the first user and the target object is a human body, the third video picture may be obtained by the first user operating the capture device of the first user terminal to shoot himself or herself, or to shoot another user at the site where the first user is located; if the target object is a non-human article, the first user may operate the capture device of the first user terminal to shoot the article. The target object can thus be identified from the third video picture, and the feature part corresponding to the target product determined.
The product effect graph may be generated based on the product form of the target product in its actual use state. For example, when the target product is a cosmetic such as lipstick, the product effect graph may be a lip effect graph with the lipstick's color; the feature part of the target object is the lips, and mapping the lip effect graph onto the lips of the target object forms a lip-makeup effect in which the target object wears the lipstick color. Because the target object is a real shot subject while the product effect graph is virtual content, superimposing the product effect graph on the target object forms a combined real-and-virtual display effect, as if the target product were actually in use, giving the user a more intuitive impression and improving the user experience.
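The product-to-feature-part correspondence described in the two paragraphs above can be pictured as a simple lookup table. The categories and part names below are the examples given in the text, not an exhaustive or authoritative list from the patent.

```python
# Minimal sketch of the product -> trial feature-part mapping described above.
FEATURE_PART = {
    "lipstick": "lips",
    "foundation": "face",
    "jacket": "upper body",
    "hat": "head",
    "television": "blank wall / TV-cabinet area",
}

def feature_part_for(product_category):
    """Return the feature part that the product effect graph is fused onto."""
    return FEATURE_PART.get(product_category, "unknown")

print(feature_part_for("lipstick"))  # lips
```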
The first user side can establish an interactive connection channel with the anchor side through the server side, after the interactive connection channel with the anchor side is established, the first user side can acquire pictures of a site where the first user side is located, sound and the like can be included to form interactive content, and the third video picture is included in the interactive content.
The operations of identifying the feature part of the target object in the third video picture and fusing in the product effect graph to obtain the second video picture can be executed either by the first user terminal or by the server terminal. When executed by the first user terminal, the first user terminal can send the second video picture to the server terminal, and the server terminal can provide the second video picture to the second user terminal and the anchor terminal; when executed by the server terminal, the server terminal can provide the second video picture to the first user terminal, the second user terminal, and the anchor terminal respectively.
After the first user side, the second user side or the main broadcasting side obtains the second video picture, the second video picture can be displayed in a second area in the live broadcasting interface.
The interactive content acquired by the first user terminal can also be sent to the server terminal in real time, and the server terminal can provide the interactive content for the anchor terminal and the second user terminal. Therefore, after the first user side, the second user side or the main broadcasting side obtains the third video picture, the second area can be provided in the live broadcasting interface, and the third video picture can be displayed first before the second video picture is displayed in the second area.
Providing the second area in the live interface may mean displaying the second area overlapped on the first area, or displaying the first area overlapped on the second area. Of course, the first area and the second area may also be displayed separately in the live interface without covering each other, which is not limited in this application.
It should be noted that, as those skilled in the art will understand, when an interactive connection channel is established between the anchor terminal and the first user terminal, two streams of data are captured in real time, one by the anchor terminal and one by the first user terminal. The first video picture and the second video picture may therefore be mixed into a single stream before transmission; after the anchor terminal, the first user terminal, or the second user terminal obtains the mixed-stream picture, the first area and the second area may be provided in the live interface at the same time, with the first video picture of the mixed stream displayed continuously in the first area and the second video picture of the mixed stream displayed in the second area.
Similarly, the first video picture and the third video picture may be mixed before transmission. After the anchor terminal, the first user terminal, or the second user terminal obtains the mixed-stream picture, the first area and the second area may be provided in the live interface at the same time, the first video picture of the mixed stream may continue to be displayed in the first area, and the third video picture of the mixed stream may be displayed in the second area.
In practical applications, the stream mixing may be performed by the anchor terminal, which then sends the mixed-stream picture to the server terminal for the server terminal to provide to the second user terminal or the first user terminal; or the anchor terminal performs the stream mixing and sends the mixed-stream picture to the server terminal to be provided to the second user terminal, while the first user terminal independently performs its own stream mixing to obtain the mixed-stream picture; or the anchor terminal, the first user terminal, and the second user terminal each independently perform stream mixing to obtain the mixed-stream picture. The implementation is not specifically limited here; this application takes video pictures as an example only. The audio data captured respectively by the anchor terminal and the first user terminal may likewise be mixed before transmission, in the same way as in common live co-streaming technology, and is not described again here.
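The stream-mixing step above can be pictured as composing the two frames into one picture before transmission, for example as a picture-in-picture overlay. The following is a minimal sketch under that assumption, using characters in place of pixels; the function name and layout are illustrative, not taken from the patent.

```python
# Sketch of stream mixing: compose the first video frame and a second,
# smaller frame into one mixed frame (picture-in-picture layout).
def mix_frames(first, second, top, left):
    """Overlay the small grid `second` onto `first` at (top, left)."""
    mixed = [row[:] for row in first]
    for i, row in enumerate(second):
        for j, px in enumerate(row):
            mixed[top + i][left + j] = px
    return mixed

first = [["A"] * 4 for _ in range(3)]   # anchor terminal's first video frame
second = [["B", "B"], ["B", "B"]]       # first user terminal's second video frame
mixed = mix_frames(first, second, top=0, left=2)
for row in mixed:
    print("".join(row))
# AABB
# AABB
# AAAA
```

Whichever terminal performs the mixing, the receiving terminal then only has to split the mixed picture into the first and second areas of the live interface.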
In this embodiment, the first video picture and the second video picture can both be displayed in the live interface, and the second video picture is a live picture obtained by fusing the product effect graph into the third video picture, so that the viewer can not only watch the anchor's explanation in the live interface but also watch the virtual trial effect of the product. This improves the user's viewing experience and thus helps improve the product promotion effect.
For ease of understanding, fig. 3a to fig. 3c respectively show possible display diagrams of the live interface in practical applications. Fig. 3a shows the live interface when no interactive connection channel has been established between the anchor terminal and the first user terminal: the live interface 300 includes a first area 301, which may fill the whole live interface and is used to display the first video picture captured by the anchor terminal.
Fig. 3b shows a schematic view of a live interface when the anchor terminal establishes an interactive connection channel with the first user terminal, where the live interface may simultaneously provide a first area 301 and a second area 302, and optionally, the first area 301 may be displayed in a superimposed manner in the second area 302, covering a part of the second area, and in order not to affect the display effect of the second area 302, the first area 301 may be displayed in a superimposed manner in a boundary area of the second area 302.
The first area 301 continues to display the first video picture, and after the first user terminal establishes the interactive connection channel with the anchor terminal, a third video picture can be acquired in real time and can be displayed first in the second area 302.
Then, based on the product effect map of the target product, the target object and the feature portion thereof corresponding to the target product may be identified from the third video picture, so that the product effect map may be fused with the third video picture to obtain a second video picture, and thus the second video picture may be displayed in the second area 302, as shown in fig. 3 c.
It should be noted that the live interface does not only provide the first area and the second area; it may also include other content, which may differ according to the terminal (the anchor terminal, the first user terminal, or the second user terminal) displaying the live interface. For example, the live interface may further include a bullet-screen display area, a bullet-screen sending control, a forwarding control, a favorites control, a like control, introductory content about the anchor, a red-packet claiming control that may be provided in an e-commerce live scenario, and other content arranged according to actual scenario requirements.
In some embodiments, after the second video screen is displayed in the second area, the method may further include:
displaying an operation control corresponding to a target product in a live interface;
and responding to the trigger operation aiming at the operation control, and performing corresponding processing on the target product.
In practical application, the operation controls corresponding to the target product can be displayed only in the live interfaces corresponding to the first user side and the second user side, so that a user can conveniently execute trigger operation; of course, in some implementations, in order to reduce the processing amount, the display contents in the live interfaces corresponding to the anchor, the first user, and the second user may also be the same.
Optionally, the operation control may include a processing control corresponding to at least one processing type; responding to the triggering operation for the operation control, performing corresponding processing on the target product may include: and responding to the triggering operation of the processing control aiming at any processing type, and performing corresponding processing on the target product according to the processing type.
In an e-commerce live scenario, the target product is a commodity that can be transacted. The processing controls of at least one processing type can include a transaction control and/or an add-to-cart control, and the like. The processing triggered by the transaction control is an order-placing operation; the add-to-cart control is a control requesting to add the product to the shopping cart, and the processing it triggers can be the operation of adding the product to the shopping cart.
Thus, in one implementation, the processing control may include an add-to-cart control; in response to a triggering operation on the processing control of any processing type, performing corresponding processing on the target product according to the processing type may include:
adding the target product to the shopping cart in response to a triggering operation on the add-to-cart control.
In another implementation, the processing control includes a transaction control; in response to the triggering operation of the processing control for any processing type, performing corresponding processing on the target product according to the processing type may include:
and generating an ordering request based on the target product in response to the triggering operation aiming at the transaction control.
In addition, the operation control can also include an attribute selection control for providing product attribute options, so that the user can select the desired product attributes; the product attributes can include quantity, specification, size, and the like. In response to a triggering operation on the processing control of any processing type, performing corresponding processing on the target product according to the processing type may include:
and responding to the selection operation of the product attribute options provided in the attribute selection control and the triggering operation of the processing control of any processing type, and performing corresponding processing on the target product with the selected product attributes according to the processing type.
For example, the target product having the selected product attribute may be added to a shopping cart, or an order placement request may be generated based on the target product having the selected product attribute, or the like.
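The processing types described above can be sketched as a small dispatch: an add-to-cart control adds the target product to the shopping cart, a transaction control generates an order-placement request, and any selected product attributes travel with either action. All names below are assumptions made for illustration.

```python
# Illustrative dispatch of the processing types: add-to-cart vs. place-order,
# carrying selected product attributes (quantity, specification, size, ...).
def handle_control(action, product, cart, orders, attributes=None):
    item = {"product": product, "attributes": attributes or {}}
    if action == "add_to_cart":
        cart.append(item)                                   # add to shopping cart
    elif action == "place_order":
        orders.append({"request": "place_order", **item})   # order-placing request
    else:
        raise ValueError(f"unknown processing type: {action}")

cart, orders = [], []
handle_control("add_to_cart", "lipstick-01", cart, orders,
               attributes={"shade": "coral", "quantity": 1})
handle_control("place_order", "lipstick-01", cart, orders)
print(len(cart), len(orders))  # 1 1
```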
It can be understood that the target product generally corresponds to an actual physical product, and the target product added to the shopping cart described herein refers to a virtual form of the target product in the network environment, and the target product related in the order placing request generated based on the target product refers to a virtual form of the target product in the network environment, and after the order placing is successful, the actual physical product is distributed to the receiving address in a logistics form, which is a common network transaction mode, and is not described herein again.
In the embodiment of the application, the operation control for the target product is displayed in the live interface, so that a user can directly trigger processing operations for the target product in the live interface without jumping to a product description page; this improves processing convenience and does not interrupt the viewing of live content. The viewing users corresponding to the first user terminal and the second user terminal can not only watch the anchor's explanation in the live broadcast room, but also view the virtual trial effect of the product, which increases the live appeal of the product, raises user interest, and helps improve the product conversion rate; being able to initiate processing operations on the product directly from the live interface is a convenient processing mode that further helps improve the conversion rate.
In addition, for the trigger operation of the operation control, a product description page corresponding to the target product can also be displayed. Therefore, in some embodiments, in response to the triggering operation for the operation control, performing corresponding processing on the target product may include:
and responding to the triggering operation aiming at the operation control, and displaying the target product into a corresponding product description page.
Optionally, the product description page can be jumped to from the live interface to display the product description page.
As can be seen from the foregoing description, the technical solution of the embodiment shown in fig. 2 may be executed by the anchor terminal, the first user terminal, or the second user terminal, so that the display of the live interface in the anchor terminal, the first user terminal, or the second user terminal may be implemented.
In one implementation, the technical solution of the embodiment shown in fig. 2 may be executed by the first user end, and therefore, in some embodiments, the method may further include:
providing a first interaction control in a live interface;
responding to the trigger operation aiming at the first interactive control, and establishing an interactive connection channel with the anchor terminal through the server terminal;
and collecting a third video picture, and sending the third video picture to the server.
Optionally, the providing the second area in the live interface may be providing the second area in the live interface after the first user establishes an interactive connection channel with the anchor through the server.
Alternatively, the third video picture may be displayed first before the second video picture is displayed in the second area.
And after obtaining the third video picture, the server can provide the third video picture to the anchor terminal and the second user terminal, and simultaneously the server can trigger the live interface of the anchor terminal or the second user terminal to provide a second area, so that the anchor terminal or the second user terminal can display the third video picture in the second area of the respective displayed live interface.
In response to the triggering operation on the first interaction control, establishing the interactive connection channel with the anchor terminal through the server terminal can be performed in the following manner:
and responding to the triggering operation of the first user for the first interaction control, sending a first interaction request to the server side so that the server side sends first interaction prompt information to the anchor side, and establishing an interaction connection channel between the first user side and the anchor side based on a first confirmation request fed back by the anchor side.
Optionally, the server receives the first confirmation request fed back by the anchor terminal, and may feed back the first interaction instruction to the first user terminal after the interactive connection channel between the first user terminal and the anchor terminal is established.
The first user terminal can respond to the first interaction instruction, provide the second area in the live broadcast interface, acquire the third video picture, send the third video picture to the server terminal and the like.
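The handshake in the preceding paragraphs can be pictured as server-mediated signaling: the first user terminal sends an interaction request, the server forwards prompt information to the anchor terminal, and on the anchor's confirmation the server establishes the channel and feeds an interaction instruction back to the first user terminal. The following is a schematic sketch; message and class names are illustrative assumptions.

```python
# Sketch of the interaction-channel handshake mediated by the server terminal.
class SignalingServer:
    def __init__(self):
        self.pending = {}     # user_id -> anchor_id awaiting confirmation
        self.channels = set() # established (user_id, anchor_id) channels

    def interaction_request(self, user_id, anchor_id):
        """First user terminal requests interaction; server prompts the anchor."""
        self.pending[user_id] = anchor_id
        return {"to": anchor_id, "type": "interaction_prompt", "from": user_id}

    def confirm(self, anchor_id, user_id):
        """Anchor terminal feeds back a confirmation request; channel is set up."""
        if self.pending.get(user_id) == anchor_id:
            del self.pending[user_id]
            self.channels.add((user_id, anchor_id))
            # server feeds the interaction instruction back to the first user terminal
            return {"to": user_id, "type": "interaction_instruction"}
        return None

srv = SignalingServer()
prompt = srv.interaction_request("user-A", "anchor-1")
reply = srv.confirm("anchor-1", "user-A")
print(("user-A", "anchor-1") in srv.channels)  # True
```

The anchor-initiated variant described next simply reverses the roles of requester and confirmer over the same server-mediated flow.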
In addition, in some embodiments, the anchor terminal may also actively initiate an interaction request to a first user terminal. Second interaction controls corresponding to different users may be provided in the live interface of the anchor terminal; in response to the anchor's triggering operation on any second interaction control, the anchor terminal may send a second interaction request to the server terminal, the server terminal may send second interaction prompt information to the first user terminal corresponding to that second interaction control, and the interactive connection channel between the first user terminal and the anchor terminal is established based on a second confirmation request fed back by the first user terminal.
Therefore, in some embodiments, after the first user terminal provides the first area in the live interface and displays the first video frame in the first area, the method may further include:
the first user end displays second interaction prompt information in a live broadcast interface; the second interaction prompt message is sent by the server side after receiving a second interaction request of the anchor side;
responding to the confirmation operation aiming at the second interaction prompt message, and sending a second confirmation request to the server so that the server establishes an interaction connection channel between the first user terminal and the anchor terminal based on the second confirmation request;
and capturing a third video picture and sending the third video picture to the server terminal, so that the server terminal can, based on a trial request sent by the anchor terminal for a target product associated with the live broadcast room, identify from the third video picture the feature part of the target object corresponding to the target product, and fuse a product effect graph of the target product into the third video picture based on the feature part to obtain the second video picture.
Optionally, after the interactive connection channel between the first user terminal and the anchor terminal is established, the server terminal may feed back a second interaction instruction to the first user terminal, and the first user terminal may, in response to the second interaction instruction, provide the second area in the live interface, capture the third video picture, and so on.
As an alternative, before displaying the second video picture in the second area, the method may further include:
displaying trial prompt information corresponding to at least one product associated with a live broadcast room in a live broadcast interface;
and responding to the triggering operation aiming at any trial prompt information, sending a trial request to the server based on the target product corresponding to any trial prompt information so that the server can identify the target object characteristic part corresponding to the target product in the third video picture, and fusing the product effect graph of the target product to the third video picture based on the characteristic part to obtain the second video picture.
In order to facilitate the user to select the target product, trial prompt information corresponding to at least one product associated with the live broadcast room can be provided in the live broadcast interface for the user to select, and the corresponding target product can be determined based on any trial prompt information selected by the user.
The trial prompt information can be used for prompting the user to perform virtual trial of the product and the like.
After the server obtains the second video image, the second video image can be provided to the first user side, the second user side and the anchor side respectively. Therefore, the first user side can obtain the second video picture from the server side and display the second video picture in the second area.
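The server-side fusion described above (identifying a feature part and blending a product effect graph into the third video picture) can be sketched as a simple alpha blend. This is an illustrative reading, assuming the effect graph carries an alpha channel and that the recogniser supplies an anchor position; the patent does not specify a fusion algorithm:

```python
import numpy as np

def fuse_effect(frame, effect_rgba, top_left):
    """Alpha-blend an RGBA product-effect image into a video frame.

    frame:       H x W x 3 uint8 frame (the third video picture)
    effect_rgba: h x w x 4 uint8 effect graph with an alpha channel
    top_left:    (row, col) anchor, e.g. the detected feature part
    """
    out = frame.copy()
    h, w = effect_rgba.shape[:2]
    r, c = top_left
    region = out[r:r + h, c:c + w].astype(np.float32)
    rgb = effect_rgba[..., :3].astype(np.float32)
    alpha = effect_rgba[..., 3:4].astype(np.float32) / 255.0
    # Weighted blend: opaque effect pixels replace the frame,
    # transparent ones leave it untouched.
    out[r:r + h, c:c + w] = (alpha * rgb + (1 - alpha) * region).astype(np.uint8)
    return out
```

The fused result is the second video picture that the server then distributes to the three ends.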
As another alternative, before displaying the second video picture in the second area, the method further comprises:
displaying trial prompt information corresponding to at least one product associated with the live broadcast room in a live broadcast interface;
responding to the trigger operation aiming at any trial prompt information, and acquiring a product effect graph of a target product from the server based on the target product corresponding to any trial prompt information;
and fusing the product effect picture into a third video picture to obtain a second video picture.
That is, the second video picture may be obtained by the first user side through fusion, optionally, after the first user side obtains the second video picture, the second video picture may be sent to the server side, and the server side may provide the second video picture to the anchor side or the second user side.
In some embodiments, the server may provide the second video picture only to a second user end that meets a predetermined requirement.
The predetermined requirement may be that the user account is a predetermined account, for example a high-level account. In practical applications, a high-level account may be obtained by payment, or may be a user account whose cumulative login duration exceeds a certain threshold, whose product purchase rate exceeds a certain probability, and the like.
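The predetermined-requirement check can be sketched as a simple predicate; the thresholds and account fields below are purely illustrative, since the text leaves the exact criteria open:

```python
from dataclasses import dataclass

@dataclass
class UserAccount:
    is_paid_premium: bool   # high-level account obtained by payment
    login_hours: float      # cumulative login duration
    purchase_rate: float    # product purchase rate

# Hypothetical thresholds; the patent names the criteria but not the values.
MIN_LOGIN_HOURS = 100.0
MIN_PURCHASE_RATE = 0.3

def meets_predetermined_requirement(acct: UserAccount) -> bool:
    """A second user end receives the second video picture only if its
    account satisfies at least one of the example criteria in the text."""
    return (acct.is_paid_premium
            or acct.login_hours >= MIN_LOGIN_HOURS
            or acct.purchase_rate >= MIN_PURCHASE_RATE)
```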
Referring to fig. 4, which is a flowchart illustrating an information display method from the perspective of the first user terminal, the method may include the following steps:
401: and providing a first area and a first interactive control in the live interface, and displaying a first video picture in the first area.
402: and responding to the triggering operation aiming at the first interactive control, establishing an interactive connection channel with the main broadcasting end through the server end, and providing a second area in the live broadcasting interface.
Wherein providing the second area in the live interface may include: and providing a second area and the first area in the live interface, and displaying the first area in the second area in an overlapping manner so as to cover partial area in the second area.
Of course, the second area may be displayed in the first area in an overlapping manner to cover a partial area in the first area. In addition, the first area and the second area may also be displayed in a split manner in the live interface, and the first area and the second area are not covered with each other, and the like.
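One way to realise the overlapping layout above is to compute window rectangles for the two areas; the scale, margin, and corner choice below are assumptions, not taken from the text:

```python
def live_layout(interface_w, interface_h, small_scale=0.25, margin=8):
    """Sketch of the overlapping layout: the second area fills the live
    interface while the first area floats over it as a small window
    anchored to the top-right corner. Rectangles are (x, y, w, h)."""
    w = int(interface_w * small_scale)
    h = int(interface_h * small_scale)
    return {
        "second_area": (0, 0, interface_w, interface_h),         # big window
        "first_area": (interface_w - w - margin, margin, w, h),  # small window
    }
```

Swapping the two entries would give the reverse case in which the second area overlays the first.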
403: and collecting a third video picture and sending the third video picture to the server.
Optionally, after acquiring the third video picture, the third video picture may be displayed in the second area.
404: and displaying trial prompt information corresponding to at least one product associated with the live broadcast room in a live broadcast interface.
Optionally, a trial control may be displayed in the live interface, and step 404 is executed after the trigger operation for the trial control is detected.
405: and responding to the triggering operation aiming at any trial prompt information, sending a trial request to the server based on the target product corresponding to any trial prompt information so that the server can identify the target object corresponding to the target product in the third video picture, fusing the product effect of the target product into the third video picture based on the target object to obtain a second video picture, and providing the second video picture to the first user side.
Optionally, the server may specifically identify a target object feature portion corresponding to a target product in the third video image, fuse a product effect graph of the target product into the third video image based on the feature portion to obtain a second video image, and provide the second video image to the first user side.
406: a second video picture is displayed in the second area.
Optionally, after the second video picture is displayed in the second area, an operation control corresponding to the target product may be provided in the live interface, so that corresponding processing on the target product may be directly triggered.
For convenience of understanding, fig. 5a to fig. 5b show interface display diagrams of the live interface provided by a first user end in a practical application. The live interface 500 of fig. 5a includes a first area 501, in which a first video picture is displayed. In addition, the live interface 500 may also include a first interaction control 502, which may be used to prompt the user to interact with the anchor so as to establish an interaction connection channel.
After a trigger operation for the first interactive control is detected, an interactive connection channel with the anchor terminal may be established through the server terminal. As shown in fig. 5b, a second area 502 may be provided in the live interface 500, and at the same time the first area 501 may be displayed in an overlapping manner in a boundary area of the second area 502, so that the first area 501 is displayed as a small window and the second area 502 as a large window.
After the first user terminal and the anchor terminal establish the interactive connection channel, the first user terminal can capture the scene where its user is located to obtain a third video picture, which, as shown in fig. 5b, can be displayed in the live interface 500. In addition, a trial control 503 may be displayed in the live interface 500. After a trigger operation for the trial control 503 is detected, as shown in fig. 5c, trial prompt information 504 corresponding to at least one product may be displayed in the live interface 500, and the user may select any one item of the trial prompt information 504. Based on the target product corresponding to the selected trial prompt information, the target object corresponding to the target product in the third video picture can be identified; here it is assumed that the target product is a hat and the target object is a human body. The feature part of the human body corresponding to the target product is then identified, assumed here to be the head, so that the product effect graph 505 corresponding to the target product can be mapped onto the head of the human body, thereby forming the virtual reality effect of the human body wearing the hat and obtaining a second video picture.
In this way, the second video picture can be displayed in real time in the second area of the live interface, so that the user can conveniently view the virtual trial effect of the target product. Of course, the user may also trigger other trial prompt information to switch to the virtual trial effects of different products and update the second video picture.
In addition, as shown in fig. 5c, operation controls corresponding to the target product may also be provided in the live interface 500, such as a purchase adding control 506 and a transaction control 507.
In addition, as shown in fig. 5b and fig. 5c, an interaction canceling control 508 may also be displayed in the live interface, and when a trigger operation for the interaction canceling control is detected, an interaction connection channel between the first user side and the anchor side may be disconnected.
Fig. 5d to 5f are schematic interface diagrams of the live interface provided by the second user end. Fig. 5d shows the interface before the first user end establishes an interactive connection channel with the anchor end, and may be the same as fig. 5a. Fig. 5e shows the interface displaying the third video picture after the interactive connection channel is established; it differs from fig. 5b in that the trial control, the interaction canceling control, and the like may be absent. Fig. 5f shows the interface displaying the second video picture after the interactive connection channel is established; it differs from fig. 5c in that the interaction canceling control and the like may be absent.
It should be noted that one first user end, or multiple first user ends at the same time, may be allowed to establish interactive connection channels with the anchor end. When multiple first user ends are allowed to establish interactive connection channels with the anchor end, multiple second areas, each corresponding to one first user end, may be provided in the live interface, and the multiple second areas may be displayed in an overlapping manner in the first area. The operations performed for each first user end are the same and are detailed in the foregoing, so they are not repeated here.
In another implementation, the technical solution of the embodiment shown in fig. 2 may be executed by the anchor side, and therefore, in some embodiments, the method may further include:
displaying second interactive controls corresponding to different users in a live broadcast interface;
and responding to the trigger operation of the anchor aiming at any second interactive control, and establishing an interactive connection channel corresponding to any second interactive control and the first user side through the server side.
That is, the anchor terminal can actively initiate an interactive operation with the first user terminal.
Optionally, in response to a trigger operation of the anchor for any second interactive control, establishing, by the server, an interactive connection channel of the first user side corresponding to any second interactive control may include:
and responding to the trigger operation of the anchor to any second interactive control, sending a second interactive request to the server so that the server sends second interactive prompt information to the first user side corresponding to any second interactive control, and establishing an interactive connection channel between the first user side and the anchor side based on a second confirmation request fed back by the first user side.
The first user side can display the second interaction prompt information in the live broadcast interface after obtaining the second interaction prompt information, and can feed back a second confirmation request to the server side after detecting the confirmation operation aiming at the second interaction prompt information.
After the server establishes the interactive connection channel between the first user side and the anchor side, a second interactive instruction can be fed back to the first user side, and the first user side responds to the second interactive instruction, can acquire a third video picture and sends the third video picture to the server.
The server side can provide the third video picture to the anchor side and the second user side, the anchor side and the second user side can provide the second area in the respective live interface, and the third video picture can be displayed in the second area firstly until the second video picture is obtained.
In some embodiments, after the anchor provides the second area in the live interface, the method further includes:
displaying trial prompt information corresponding to at least one product associated with the live broadcast room in a live broadcast interface;
and responding to the triggering operation aiming at any trial prompt information, sending a trial request to the server based on the target product corresponding to any trial prompt information so that the server can identify the characteristic part of the target object in the third video picture, and fusing the product effect graph of the target product to the third video picture based on the characteristic part to obtain the second video picture.
The server can provide the second video picture to the anchor terminal, the first user terminal and the second user terminal.
Optionally, the server end may specifically perform mixed-flow processing on the first video picture and the second video picture, and then provide the mixed-flow picture to the anchor end, the first user end, and the second user end, so that the anchor end, after obtaining the mixed-flow picture, can display the second video picture in the second area and the first video picture in the first area.
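The mixed-flow processing can be sketched as compositing the two pictures into one frame before pushing downstream; a real implementation would also align timestamps and re-encode, which this side-by-side sketch omits:

```python
import numpy as np

def mix_streams(first_picture, second_picture):
    """Side-by-side mixed-flow sketch: crop both pictures to a common
    height and concatenate the anchor's first video picture with the
    fused second video picture into one frame for distribution."""
    h = min(first_picture.shape[0], second_picture.shape[0])
    return np.concatenate([first_picture[:h], second_picture[:h]], axis=1)
```

Each receiving end can then render the mixed frame directly instead of pulling and compositing two separate streams.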
As can be seen from the foregoing description, it may also be that the first user terminal actively initiates an interactive operation with the anchor terminal, and therefore, in some embodiments, the method may further include:
displaying first interaction prompt information in a live interface, wherein the first interaction prompt information is sent by a server side after receiving a first interaction request of a first user side;
and responding to the confirmation operation aiming at the first interaction prompt message, sending a first confirmation request to the server side, so that the server side establishes an interaction connection channel between the anchor side and the first user side based on the first confirmation request, and indicating the first user side to collect a third video picture.
Optionally, after the server establishes the interactive connection channel between the anchor terminal and the first user terminal, the server may send a first interactive instruction to the first user terminal, and the first user terminal may provide a first area in a live interface thereof in response to the first interactive instruction, and acquire a third video picture, and the like.
As shown in fig. 6, which is a flowchart of an information display method described from the perspective of the anchor side, the method may include the following steps:
601: and providing a first area and second interactive controls corresponding to different users in a live interface, and displaying a first video picture in the first area.
602: and responding to the triggering operation aiming at any second interactive control, establishing an interactive connection channel corresponding to any second interactive control and the first user side through the server side, and providing a second area in the live broadcast interface.
603: and displaying trial prompt information corresponding to at least one product associated with the live broadcast room in a live broadcast interface.
604: and responding to the triggering operation aiming at any trial prompt information, sending a trial request to the server based on a target product corresponding to any trial prompt information so that the server can identify a target object corresponding to the target product in a third video picture acquired by the first user side, fusing the product effect of the target product into the third video picture based on the target object to obtain a second video picture, and providing the second video picture to the anchor terminal.
Optionally, the server may specifically identify a target object feature part corresponding to a target product in a third video image acquired by the first user, fuse a product effect graph of the target product into the third video image based on the feature part to obtain a second video image, and provide the second video image to the anchor terminal.
605: a second video picture is displayed in the second area.
Referring to fig. 7, which is a flowchart of an information display method described from the perspective of the second user side, the method may include the following steps:
701: and acquiring a first video picture and a second video picture provided by the server.
The first video picture is acquired by a main broadcasting terminal; the second video picture is obtained by identifying a target object from the third video picture and fusing the product effect of a target product associated with the live broadcast room into the third video picture based on the target object; and the third video picture is acquired by the first user terminal.
Optionally, the second video picture may specifically be obtained by identifying a feature of the target object from the third video picture, and fusing a product effect graph of the target product associated with the live broadcast room to the third video picture based on the feature;
702: and providing a first area and a second area in the live interface.
703: a first video picture is displayed in the first area, and a second video picture is displayed in the second area.
Optionally, an operation control corresponding to the target product may be further provided in the live interface, and therefore, in some embodiments, the method may further include:
displaying an operation control corresponding to a target product in a live interface;
and responding to the triggering operation aiming at the operation control, and correspondingly processing the target product.
The specific implementation form of the operation control may be detailed as above, and is not described herein again.
Fig. 8 is a flowchart of an embodiment of an information processing method provided in an embodiment of the present application, and the embodiment is described from the perspective of server side execution, where the method may include the following steps:
801: and establishing an interactive connection channel between the first user terminal and the anchor terminal.
As an alternative, establishing the interactive connection channel between the first user end and the anchor end may include:
receiving a first interaction request sent by a first user end, sending first interaction prompt information to a main broadcast end so that the main broadcast end can output the first interaction prompt information, and generating a first confirmation request based on confirmation operation aiming at the first interaction prompt information;
and receiving a first confirmation request sent by the anchor terminal, and establishing an interactive connection channel between the first user terminal and the anchor terminal.
The first interaction request can be generated by providing a first interaction control in a live interface of the first user terminal and responding to a trigger operation aiming at the first interaction control.
After the server establishes an interactive connection channel between the first user terminal and the anchor terminal, a first interactive instruction can be sent to the first user terminal.
As another alternative, establishing the interactive connection channel between the first user end and the anchor end may include:
receiving a second interaction request sent by the anchor terminal, sending second interaction prompt information to the corresponding first user terminal so that the first user terminal can output the second interaction prompt information, and generating a second confirmation request based on confirmation operation aiming at the second interaction prompt information;
and receiving a second confirmation request sent by the first user terminal, and establishing an interactive connection channel between the first user terminal and the anchor terminal.
The second interaction request may be generated by displaying second interaction controls corresponding to different users in a live interface of the anchor terminal and responding to a trigger operation for any one of the second interaction controls.
After the server establishes the interactive connection channel between the first user terminal and the anchor terminal, a second interactive instruction can be sent to the first user terminal.
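Both handshakes the server end supports (the user-initiated first request/confirmation and the anchor-initiated second request/confirmation) can be sketched with an in-memory signalling object. All names and message fields here are illustrative, not taken from the patent:

```python
class SignalingServer:
    """Minimal sketch of the server end's two channel-establishment flows."""

    def __init__(self):
        self.channels = set()  # established (user_id, anchor_id) pairs
        self.pending = {}      # request id -> (user_id, anchor_id)

    # --- first-user-initiated flow (steps around 801) ---
    def first_interaction_request(self, user_id, anchor_id):
        req = f"req-{user_id}-{anchor_id}"
        self.pending[req] = (user_id, anchor_id)
        # forward first interaction prompt information to the anchor end
        return {"to": anchor_id, "prompt": "first_interaction_prompt", "req": req}

    def first_confirmation(self, req):
        user_id, anchor_id = self.pending.pop(req)
        self.channels.add((user_id, anchor_id))
        # the first interactive instruction tells the first user end
        # to start collecting the third video picture
        return {"to": user_id, "instruction": "first_interactive_instruction"}

    # --- anchor-initiated flow ---
    def second_interaction_request(self, anchor_id, user_id):
        req = f"req-{anchor_id}-{user_id}"
        self.pending[req] = (user_id, anchor_id)
        return {"to": user_id, "prompt": "second_interaction_prompt", "req": req}

    def second_confirmation(self, req):
        user_id, anchor_id = self.pending.pop(req)
        self.channels.add((user_id, anchor_id))
        return {"to": user_id, "instruction": "second_interactive_instruction"}
```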
802: and acquiring a first video picture acquired by the anchor terminal.
The anchor terminal collects the first video picture in real time and uploads it to the server terminal, so the timing of step 802 is not limited to the order in which it appears in this embodiment.
803: and acquiring a third video picture acquired by the first user side, and identifying a target object in the third video picture.
After the first user side and the anchor side establish an interactive connection channel, a third video picture can be acquired in real time and sent to the server side.
The server side can also provide a third video picture to the anchor side and the second user side, so that the third video picture can be displayed in a live interface of the anchor side and a live interface of the second user side.
As an alternative, identifying the target object in the third video picture may include:
sending trial prompt information of at least one product to a first user end so that the first user end can output the trial prompt information of the at least one product, and sending a trial request aiming at a target product corresponding to any one of the trial prompt information based on the trigger operation of any one of the trial prompt information;
receiving a trial request sent by a first user terminal;
and identifying a target object corresponding to the target product in the third video picture.
The trial request may include information such as a product identifier of the target product, so as to determine the corresponding target product.
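A hypothetical wire format for such a trial request might look as follows; the field names are assumptions, as the text only says the request may include information such as a product identifier:

```python
# Illustrative trial-request payload sent from the first user end
# to the server end; room and product identifiers are made up.
trial_request = {
    "type": "trial_request",
    "room_id": "room-001",          # live broadcast room
    "product_id": "lipstick-702",   # product identifier of the target product
    "sender": "first_user",
}

def target_product_of(request):
    """Resolve the target product the server must fuse into the picture."""
    assert request["type"] == "trial_request"
    return request["product_id"]
```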
As another alternative, identifying the target object in the third video picture may include:
sending trial prompt information of at least one product to the anchor terminal, so that the anchor terminal can output the trial prompt information of the at least one product, and send a trial request for the target product corresponding to any trial prompt information based on a trigger operation for that trial prompt information;
receiving the trial request sent by the anchor terminal;
and identifying a target object corresponding to the target product in the third video picture.
The trial request may include information such as a product identifier of the target product, so as to determine the corresponding target product.
804: and based on the target object, fusing the product effect of the target product associated with the live broadcast room into the third video picture to obtain a second video picture.
Optionally, in step 803, specifically, a feature portion of a target object corresponding to a target product in the third video image may be identified, and in step 804, a product effect graph of the target product associated with the live broadcast may be fused to the third video image based on the feature portion to obtain the second video image.
805: and providing the first video picture and the second video picture to the second user end so as to enable the second user end to display the first video picture and the second video picture.
The second user terminal may provide a first area and a second area in its live interface, so as to display the first video picture in the first area and the second video picture in the second area.
In addition, the server can also provide the first video picture and the second video picture to the first user terminal and the anchor terminal, so that the anchor terminal and the first user terminal can display the first video picture and the second video picture.
The anchor terminal or the first user terminal can provide a first area and a second area in a live interface thereof, so that the first video picture is displayed in the first area and the second video picture is displayed in the second area.
The server side may perform mixed-flow processing on the first video picture and the second video picture and provide the resulting mixed-flow picture to the second user end; alternatively, the second user end may pull the first video picture and the second video picture from the server side and then perform the mixed-flow processing and display the result locally.
It should be noted that the live interface of the anchor terminal, the first user terminal, or the second user terminal may be generated and provided by the server terminal, and of course, the anchor terminal, the first user terminal, or the second user terminal may locally generate respective corresponding live interfaces in combination with the display content provided by the server terminal.
In some embodiments, the method may further comprise:
providing an operation control corresponding to the target product for the second user end to display the operation control of the target product;
and receiving a processing request which is sent by a second user side and is triggered based on the operation control, and carrying out corresponding processing on the target product.
The specific implementation manner of the operation control can be detailed in the foregoing, and is not described herein again.
In addition, the server side can also provide the first user side with an operation control corresponding to the target product, so that the first user side can display the operation control of the target product; and receiving a processing request which is sent by the first user side and is triggered based on the operation control, and carrying out corresponding processing on the target product.
In a practical application, the technical solution in the embodiment of the present application may be applied to a scene of promoting or selling a beauty product in a live webcast form, and therefore, as another embodiment, the present application further provides an information display method, including:
providing a first area in a live interface of a live broadcast room;
displaying a first video picture in the first area; the first video picture is acquired by a main broadcasting terminal;
providing a second area in the live interface; displaying a makeup trial picture in the second area; the makeup trial picture is obtained by identifying a face characteristic part from a third video picture and fusing a makeup effect picture of a target makeup product associated with the live broadcast room into the third video picture based on the face characteristic part; and the third video picture is acquired by the first user terminal.
When the product is a makeup product, the second video picture obtained by fusion is specifically a makeup trying picture, face recognition is carried out on the user in the third video picture, and the face feature part corresponding to the target makeup product can be determined.
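The correspondence between a makeup product and the face feature part to detect can be sketched as a lookup table; the categories and feature names below are illustrative:

```python
# Hypothetical mapping from product category to the feature part the
# recogniser must locate in the third video picture.
PRODUCT_FEATURE_MAP = {
    "lipstick": "lips",
    "eyeshadow": "eyelids",
    "blush": "cheeks",
    "hat": "head",  # the non-makeup hat scenario described earlier
}

def feature_part_for(product_category):
    """Determine which target object feature part to detect for a
    given target product; returns None for unknown categories."""
    return PRODUCT_FEATURE_MAP.get(product_category)
```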
As still another embodiment, the present application further provides an information display method including:
a first user end provides a first area and a microphone connecting control in a live broadcast interface, and displays a first video picture in the first area;
responding to the trigger operation aiming at the microphone connecting control, establishing a microphone connecting interaction channel with a main broadcasting end through a server end, and providing a second area in the live broadcasting interface;
collecting a third video picture, and sending the third video picture to the server;
displaying makeup test prompt information corresponding to at least one makeup product associated with the live broadcast room in the live broadcast interface;
responding to a trigger operation aiming at any makeup trying prompt message, sending a makeup trying request to a server based on a target makeup product corresponding to any makeup trying prompt message so that the server can identify a face feature part corresponding to the target makeup product in the third video picture, and fusing a makeup effect image of the target makeup product to the third video picture based on the face feature part to obtain a makeup trying picture;
and displaying the makeup trying picture in the second area.
As still another embodiment, the present application further provides an information display method including:
the method comprises the steps that a main broadcast end provides a first area and microphone connecting controls corresponding to different users in a live broadcast interface, and a first video picture is displayed in the first area;
responding to the triggering operation aiming at any one microphone connecting control, establishing a microphone connecting interaction channel corresponding to the first user side of the any microphone connecting control through a server side, and providing a second area in the live broadcast interface;
displaying makeup trying prompt information corresponding to at least one makeup product related to the live broadcasting room in the live broadcasting interface;
responding to a trigger operation aiming at any makeup test prompt message, sending a makeup test request to a server based on a target makeup product corresponding to any makeup test prompt message so that the server can identify a face feature part corresponding to the target makeup product in a third video picture acquired by a first user side, and fusing a makeup effect image of the target makeup product into the third video picture based on the face feature part to obtain a makeup test picture;
and displaying the makeup trying picture in the second area.
As still another embodiment, the present application further provides an information display method including:
the second user side obtains a first video picture and a makeup trial picture provided by the server side; the first video picture is acquired by a main broadcasting terminal; the makeup trial picture is obtained by identifying a face characteristic part from a third video picture and fusing a makeup effect picture of a target makeup product associated with the live broadcast room into the third video picture based on the face characteristic part; the third video picture is acquired by the first user terminal;
providing a first area and a second area in a live interface;
displaying the first video picture in the first area, and displaying the makeup trial picture in the second area.
As still another embodiment, the present application further provides an information processing method including:
establishing a connecting wheat interaction channel between a first user terminal and a main broadcasting terminal;
acquiring a first video picture acquired by the anchor terminal;
acquiring a third video picture collected by the first user side, and identifying a face feature part in the third video picture;
based on the face feature part, fusing a makeup effect image of a target makeup product associated with the live broadcast room into the third video picture to obtain a makeup trial picture;
and providing the first video picture and the makeup trial picture to a second user end so that the second user end can display the first video picture and the makeup trial picture.
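The server-side steps above (identify a face feature part in the third video picture, fuse the makeup effect image into it, distribute the result) can be sketched as follows. This is an illustrative sketch only: the function names (`detect_lip_region`, `fuse_effect`) and the fixed-box "detector" are assumptions for demonstration, not part of the claimed method, which would use a real face-landmark model.

```python
import numpy as np

def detect_lip_region(frame):
    """Hypothetical face-feature detector: returns (y, x, h, w) of the
    lip part within the face area of the third video picture.
    A real system would run a face-landmark model here."""
    h, w = frame.shape[:2]
    # placeholder: assume the lips occupy a fixed box in the lower face
    return (int(h * 0.6), int(w * 0.4), int(h * 0.1), int(w * 0.2))

def fuse_effect(frame, effect_rgba, region):
    """Alpha-blend the makeup effect image onto the detected feature
    part to produce the makeup trial picture (the second video picture)."""
    y, x, rh, rw = region
    patch = frame[y:y + rh, x:x + rw].astype(np.float32)
    alpha = effect_rgba[..., 3:4] / 255.0          # per-pixel opacity
    blended = alpha * effect_rgba[..., :3] + (1 - alpha) * patch
    out = frame.copy()
    out[y:y + rh, x:x + rw] = blended.astype(frame.dtype)
    return out

# third video picture collected by the first user side (toy 120x160 frame)
third_picture = np.zeros((120, 160, 3), dtype=np.uint8)
region = detect_lip_region(third_picture)
# makeup effect image of the target product, sized to the region, RGBA
effect = np.zeros((region[2], region[3], 4), dtype=np.uint8)
effect[..., 0] = 200   # red channel of a hypothetical "color number A" lipstick
effect[..., 3] = 128   # ~50% opacity
trial_picture = fuse_effect(third_picture, effect, region)
```

The fused `trial_picture` would then be encoded and pushed to the second user side alongside the first video picture.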
That is, in a live broadcast scene for makeup products, the first interactive control or the second interactive control may specifically be a microphone connection control, the interactive connection channel may specifically be a microphone connection interaction channel, the product effect graph may be a makeup effect image, and the second video picture may specifically be a makeup trial picture fused with the makeup effect image. The target object may specifically be a face feature part; for other identical or corresponding steps, reference may be made to the descriptions in the foregoing embodiments, which are not repeated here.
At present, an anchor usually explains a makeup product at the live broadcast site and, if necessary, tries it on in person to promote the product, so a user cannot form an actual impression of how the product would look on himself or herself. With the technical solution of the embodiments of the present application, a user watching the live broadcast may request microphone connection interaction with the anchor, and of course the anchor may also invite a certain user to interact. After mutual agreement, the anchor terminal establishes a microphone connection interaction channel with the first user side corresponding to the connected user, and the first user side collects a third video picture of the scene where that user is located. The first video picture collected by the anchor terminal and the third video picture collected by the first user side can then be displayed simultaneously in the live interface, so that the anchor, the connected user, and the non-connected viewing users can all view both pictures. To promote a product effectively, the connected user or the anchor may select a certain target makeup product; if the target makeup product is, for example, a lipstick of color number A, face recognition may be performed on the third video picture to identify the lip part within the face area, and a lipstick effect image corresponding to color number A may then be mapped onto the lip part in the third video picture, so that the resulting makeup trial picture presents a virtual trial effect of the lipstick on the connected user's lips.
The makeup trial picture is displayed in the live interface, and the anchor, the connected user, and the viewing users can all view the virtual trial effect of the color-A lipstick through it, producing an impression close to an actual try-on. The interacting user and the viewing users thereby gain a further understanding of the target product, which can stimulate them to purchase it, improving the product conversion rate and the promotion effect.
Of course, the present application is not limited to the promotion of makeup products. Other tangible products, such as clothing, shoes and hats, accessories, or electronic devices, can also be promoted with the technical solutions of the embodiments of the present application. The target object is likewise not limited to the human body: as long as an object on which the target product can be tried exists within the collection range of the first user side, the technical solutions of the embodiments can be applied, so that the product can be tried virtually, users form an actual experience impression, the promotion effect is improved, and the product conversion rate increases.
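As a minimal illustration of this generalization, the mapping from a target product to the target object and characteristic part to be identified might be table-driven. The categories and part names below are illustrative assumptions, not terms from the claims:

```python
# hypothetical mapping from product category to the target object and
# characteristic part the server side should identify in the third video picture
TRY_ON_TARGETS = {
    "lipstick":  ("face", "lip part"),
    "eyeshadow": ("face", "eyelid part"),
    "hat":       ("human body", "head"),
    "shoes":     ("human body", "feet"),
    "earrings":  ("human body", "ears"),
}

def target_for(product_category):
    """Return (target_object, characteristic_part) for a product, or None
    when no corresponding try-on object exists in the collection range."""
    return TRY_ON_TARGETS.get(product_category)

print(target_for("lipstick"))   # ('face', 'lip part')
```

A product with no entry simply yields no trial prompt, matching the condition that a try-on object must exist within the first user side's collection range.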
As can be seen from the embodiment shown in fig. 1, the present application further provides a live webcast system, which may include a server side, an anchor terminal, a first user side, and a second user side;
the server is used for establishing an interactive connection channel between the anchor terminal and the first user terminal; acquiring a first video picture acquired by a main broadcasting end; acquiring a third video picture acquired by the first user side, and identifying a target object in the third video picture; based on the target object, fusing the product effect of the target product associated with the live broadcast room into a third video picture to obtain a second video picture; optionally, the server may specifically identify a feature of a target object in the third video frame, and based on the feature, fuse a product effect graph of a target product associated with the live broadcast room into the third video frame to obtain the second video frame;
the first user side is used for acquiring a third video picture after an interactive connection channel with the anchor side is established, and sending the third video picture to the server side; acquiring a first video picture and a second video picture from a server, and displaying the first video picture and the second video picture;
and the second user side is used for acquiring the first video picture and the second video picture from the server side and displaying the first video picture and the second video picture.
Optionally, the anchor terminal may also acquire the second video picture from the server terminal, and display the first video picture and the second video picture.
In yet another embodiment, after collecting the third video picture, the first user side obtains a product effect graph of a target product associated with the live broadcast room from the server side, identifies the characteristic part of the target object in the third video picture, fuses the product effect graph into the third video picture based on the characteristic part to obtain the second video picture, and provides the second video picture to the server side. The first user side obtains the first video picture from the server side and can display the first video picture and the second video picture simultaneously.
The operations that the server, the first user, the second user, and the anchor may perform have been described in detail in the foregoing embodiments, and will not be described repeatedly herein.
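Under the division of roles above, the server side's frame routing can be sketched as follows. This is an illustrative sketch; the class and method names are assumptions, and the feature identification and fusion step is replaced by a stand-in:

```python
class LiveServer:
    """Routes the first video picture (from the anchor terminal) and the
    second video picture (the fused trial picture) to the client sides."""

    def __init__(self):
        self.first_picture = None    # collected by the anchor terminal
        self.second_picture = None   # fused makeup trial picture
        self.channels = set()        # established interaction channels

    def establish_channel(self, first_user_id):
        # interactive connection channel between anchor and first user side
        self.channels.add(first_user_id)

    def push_first_picture(self, frame):
        self.first_picture = frame

    def push_third_picture(self, first_user_id, frame, effect=None):
        # only a user with an established channel may push the third picture
        if first_user_id not in self.channels:
            raise PermissionError("no interaction channel established")
        # stand-in for feature identification + product-effect fusion
        self.second_picture = (frame, effect)

    def pictures_for_viewer(self):
        # the second user side (and optionally the anchor) pulls both
        return self.first_picture, self.second_picture

server = LiveServer()
server.establish_channel("user-1")
server.push_first_picture("frame-A")
server.push_third_picture("user-1", "frame-B", effect="lipstick-A")
```

The permission check mirrors the requirement that the third video picture is only collected and processed after the interaction channel is established.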
Fig. 9 is a schematic structural diagram of an embodiment of an information display apparatus provided in the present application, where the apparatus may be configured in an electronic device, such as a mobile phone, a tablet computer, a notebook computer, and the like, to implement corresponding functional operations, and the apparatus may include:
a first display module 901, configured to provide a first area in a live interface of a live broadcast room; displaying a first video picture in a first area; the first video picture is acquired by a main broadcasting terminal;
a second display module 902, configured to provide a second area in the live interface; displaying a second video picture in a second area; the second video picture is obtained by identifying a target object from the third video picture and fusing the product effect of a target product associated with the live broadcast room into the third video picture based on the target object; and the third video picture is acquired by the first user terminal.
In some embodiments, the second display module is further configured to display an operation control corresponding to the target product in the live interface; the apparatus may further include:
and the first processing module is used for responding to the triggering operation aiming at the operation control and correspondingly processing the target product.
In some embodiments, the operational controls include processing controls corresponding to at least one processing type;
the first processing module may be specifically configured to, in response to a trigger operation for a processing control of any processing type, perform corresponding processing on the target product according to the processing type.
In some embodiments, the processing controls may include an add-to-cart control.
The first processing module may be specifically configured to add the target product to a shopping cart in response to a trigger operation on the add-to-cart control.
In some embodiments, the processing control comprises a transaction control;
the first processing module may be specifically configured to generate an order placement request based on the target product in response to a triggering operation for the transaction control.
In some embodiments, the first processing module may be specifically configured to, in response to a trigger operation on the operation control, display a product description page corresponding to the target product.
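A minimal dispatch for these processing types (add to cart, place an order, open the description page) might look as follows. The control-type strings and the page path are illustrative assumptions, not from the patent:

```python
def handle_operation(control_type, target_product, cart, orders):
    """Dispatch a trigger operation on an operation control to the
    corresponding processing of the target product."""
    if control_type == "add_to_cart":
        cart.append(target_product)
        return "added"
    if control_type == "place_order":
        orders.append({"product": target_product, "status": "pending"})
        return "order_created"
    if control_type == "details":
        return f"/products/{target_product}"   # product description page
    raise ValueError(f"unknown processing type: {control_type}")

cart, orders = [], []
handle_operation("add_to_cart", "lipstick-A", cart, orders)
page = handle_operation("details", "lipstick-A", cart, orders)
```

Each branch corresponds to one processing control in the embodiments above; new processing types would add further branches or a lookup table.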
In some embodiments, the first display module is further configured to provide a first interactive control in the live interface; responding to the trigger operation aiming at the first interactive control, and establishing an interactive connection channel with the anchor terminal through the server terminal;
the apparatus may further include:
and the first acquisition module is used for acquiring the third video picture and sending the third video picture to the server.
In some embodiments, the establishing, by the first display module and in response to the triggering operation of the first user for the first interactive control, an interactive connection channel with the anchor terminal through the server terminal may include: and responding to the triggering operation of the first user for the first interaction control, sending a first interaction request to the server side so that the server side sends first interaction prompt information to the anchor side, and establishing an interaction connection channel between the first user side and the anchor side based on a first confirmation request fed back by the anchor side.
In some embodiments, the second display module may be further configured to display a third video picture in the second area before displaying the second video picture in the second area.
In some embodiments, the second display module is further configured to display trial prompt information corresponding to each of at least one product associated with the live broadcast room in the live broadcast interface; and responding to the triggering operation aiming at any trial prompt information, sending a trial request to the server based on the target product corresponding to any trial prompt information so that the server can identify the characteristic part of the target object corresponding to the target product in the third video picture, and fusing the product effect graph of the target product into the third video picture based on the characteristic part to obtain the second video picture.
In some embodiments, the second display module is further configured to display trial prompt information corresponding to at least one product associated with the live broadcast room in the live broadcast interface; responding to the triggering operation aiming at any trial prompt information, and acquiring a product effect graph of a target product from the server based on the target product corresponding to any trial prompt information; and fusing the product effect picture into a third video picture to obtain a second video picture.
In addition, the second display module is further configured to send the second video picture to the server side, so that the server side provides the second video picture to a second user side meeting a predetermined requirement.
In some embodiments, the first display module is further configured to display a second interactive prompt message in the live interface; the second interaction prompt message is sent by the server side after receiving a second interaction request of the anchor side; responding to the confirmation operation aiming at the second interaction prompt message, and sending a second confirmation request to the server;
the apparatus may further include:
and the second acquisition module is used for acquiring a third video picture, sending the third video picture to the server, identifying a target object characteristic part corresponding to the target product from the third video picture by the server based on a trial request of the anchor terminal for the target product associated with the live broadcast room, and fusing a product effect graph of the target product to the third video picture based on the characteristic part to obtain a second video picture.
In some embodiments, the first display module is further configured to display a second interactive control corresponding to a different user in the live interface; and responding to the trigger operation of the anchor aiming at any second interactive control, and establishing an interactive connection channel of the first user end corresponding to any second interactive control through the server end.
In some embodiments, the establishing, by the first display module and in response to a trigger operation of the anchor for any one of the second interactive controls, an interactive connection channel of the first user side corresponding to any one of the second interactive controls through the server side includes: and responding to the trigger operation of the anchor to any second interactive control, sending a second interactive request to the server so that the server sends second interactive prompt information to the first user side corresponding to any second interactive control, and establishing an interactive connection channel between the first user side and the anchor side based on a second confirmation request fed back by the first user side.
In some embodiments, the second display module is further configured to display trial prompt information corresponding to the at least one product associated with the live broadcast room in the live broadcast interface; and responding to the triggering operation aiming at any trial prompt information, sending a trial request to the server based on the target product corresponding to any trial prompt information so that the server can identify the characteristic part of the target object in the third video picture, and fusing the product effect graph of the target product to the third video picture based on the characteristic part to obtain the second video picture.
In some embodiments, the first display module is further configured to display a first interaction prompt message in the live interface, where the first interaction prompt message is sent by the server when the server receives a first interaction request from the first user; and responding to the confirmation operation aiming at the first interactive prompt message, sending a first confirmation request to the server side so that the server side can establish an interactive connection channel between the anchor side and the first user side, and indicating the first user side to collect a third video picture.
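The two symmetric handshakes described above (the first user side initiating toward the anchor, and the anchor initiating toward a first user side) both follow a request / prompt / confirm pattern, which can be sketched as follows. Method names and identifiers are illustrative assumptions:

```python
class HandshakeServer:
    """request -> prompt -> confirm -> channel, as in the first and
    second interaction flows described above."""

    def __init__(self):
        self.pending = {}    # (initiator, peer) -> prompt delivered to peer
        self.channels = []   # established interaction channels

    def interaction_request(self, initiator, peer):
        # the server side forwards an interaction prompt message to the peer
        self.pending[(initiator, peer)] = f"prompt from {initiator}"
        return self.pending[(initiator, peer)]

    def confirm(self, initiator, peer):
        # the peer's confirmation request establishes the channel
        if (initiator, peer) not in self.pending:
            raise KeyError("no pending interaction request")
        del self.pending[(initiator, peer)]
        self.channels.append((initiator, peer))

s = HandshakeServer()
# first interaction flow: first user side requests, anchor confirms
s.interaction_request("first-user", "anchor")
s.confirm("first-user", "anchor")
# second interaction flow: anchor requests, first user side confirms
s.interaction_request("anchor", "first-user")
s.confirm("anchor", "first-user")
```

Only after `confirm` succeeds would the server instruct the first user side to begin collecting the third video picture.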
Optionally, the second display module providing the second area in the live interface may include: providing the second area in the live interface, with the first area displayed in the second area in an overlapping manner.
The information display apparatus shown in fig. 9 may execute the information display method shown in the embodiment shown in fig. 2, and the implementation principle and the technical effect are not repeated. The specific manner in which each module and unit of the information display device in the above embodiments perform operations has been described in detail in the embodiments related to the method, and will not be described in detail here.
Fig. 10 is a schematic structural diagram of an embodiment of an information display apparatus provided in the present application, where the apparatus may be configured in an electronic device, such as a mobile phone, a tablet computer, a notebook computer, and the like, to implement corresponding functional operations, and the apparatus may include:
a third display module 1001, configured to provide a first area and a first interactive control in a live interface, and display a first video frame in the first area;
a first response module 1002, configured to, in response to a trigger operation for a first interactive control, establish an interactive connection channel with a main broadcast terminal through a server, and provide a second area in a live broadcast interface;
the first acquisition module 1003 is configured to acquire a third video image and send the third video image to the server;
a fourth display module 1004, configured to display trial prompt information corresponding to at least one product associated with the live broadcast room in the live broadcast interface; responding to the trigger operation aiming at any trial prompt information, sending a trial request to the server based on a target product corresponding to any trial prompt information so that the server can identify a target object corresponding to the target product in the third video picture, and fusing the product effect of the target product to the third video picture based on the target object to obtain a second video picture; a second video picture is displayed in the second area.
The information display apparatus shown in fig. 10 may execute the information display method shown in the embodiment shown in fig. 4, and the implementation principle and the technical effect are not repeated. The specific manner in which each module and unit of the information display device in the above embodiments perform operations has been described in detail in the embodiments related to the method, and will not be described in detail here.
Fig. 11 is a schematic structural diagram illustrating another embodiment of an information display apparatus provided in the present application, where the apparatus may be configured in an electronic device, such as a mobile phone, a tablet computer, a notebook computer, or the like, to implement corresponding functional operations, and the apparatus may include:
a fifth display module 1101, configured to provide a first area and second interactive controls corresponding to different users in a live interface, and display a first video frame in the first area;
a second response module 1102, configured to, in response to a trigger operation for any second interactive control, establish, through a server, an interactive connection channel with a first user side corresponding to any second interactive control, and provide a second area in a live interface;
a sixth display module 1103, configured to display trial prompt information corresponding to each of at least one product associated with the live broadcast room in the live broadcast interface; responding to the trigger operation aiming at any trial prompt information, sending a trial request to the server based on a target product corresponding to any trial prompt information so that the server can identify a target object corresponding to the target product in a third video picture acquired by the first user side, and fusing the product effect of the target product into the third video picture to obtain a second video picture based on the target object; a second video picture is displayed in the second area.
The information display apparatus in fig. 11 may execute the information display method described in the embodiment shown in fig. 6, and the implementation principle and the technical effect are not repeated. The specific manner in which each module and unit of the information display device in the above embodiments perform operations has been described in detail in the embodiments related to the method, and will not be described in detail here.
Fig. 12 is a schematic structural diagram illustrating a further embodiment of an information display apparatus provided in the present application, where the apparatus may be configured in an electronic device, such as a mobile phone, a tablet computer, a notebook computer, etc., to implement corresponding functional operations, and the apparatus may include:
a first obtaining module 1201, configured to obtain a first video picture and a second video picture provided by a server; the first video picture is acquired by a main broadcasting terminal; the second video picture is obtained by identifying a target object from the third video picture and fusing a product effect graph of a target product associated with the live broadcast room into the third video picture based on the target object; the third video picture is acquired by the first user terminal;
a seventh display module 1202, configured to provide the first area and the second area in the live interface; a first video picture is displayed in the first region, and a second video picture is displayed in the second region.
The information display apparatus shown in fig. 12 may execute the information display method shown in the embodiment shown in fig. 7, and the implementation principle and the technical effect are not repeated. The specific manner in which each module and unit of the information display device in the above embodiments perform operations has been described in detail in the embodiments related to the method, and will not be described in detail here.
Fig. 13 shows a schematic structural diagram of an embodiment of an electronic device provided in the present application, where the electronic device may include a storage component 1301, a display component 1302, and a processing component 1303; storage component 1301 stores one or more computer program instructions; the one or more computer program instructions are called and executed by the processing component 1303, so as to implement the information display method according to the embodiment shown in fig. 2.
The processing component 1303 may include one or more processors executing computer instructions to perform all or part of the steps of the above method. Of course, the processing component may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components configured to perform the above-described methods.
The storage component 1301 is configured to store various types of data to support operations in the electronic device. The storage component may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
The display component 1302 may be an Electroluminescent (EL) element, a liquid crystal display or a microdisplay of similar structure, or a laser scanning display that projects directly onto the retina or a similar display.
Of course, the electronic device may also comprise other components, such as input/output interfaces, communication components, and the like.
In practical applications, the electronic device may be an intelligent terminal such as a mobile phone, a tablet computer, a computer, or a smart watch.
The electronic device may be configured as the anchor terminal, the first user terminal, or the second user terminal. When configured as the first user terminal, the electronic device may specifically execute the information display method shown in fig. 4; when configured as the anchor terminal, the information display method shown in fig. 6; and when configured as the second user terminal, the information display method shown in fig. 7.
In addition, the embodiment of the application also provides a live interface, which can comprise a first area and a second area;
the first area is used for displaying a first video picture; the first video picture is acquired by a main broadcasting terminal;
the second area is used for displaying a second video picture; the second video picture is obtained by identifying a target object from a third video picture and fusing the product effect of a target product associated with the live broadcast room into the third video picture based on the target object; and the third video picture is collected by the first user side.
Possible implementations of the live interface can be shown in fig. 3a to fig. 3c and fig. 5a to fig. 5f, but the application is not limited thereto.
In addition, an embodiment of the present application further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a computer, the information display method of the embodiment shown in fig. 2, fig. 4, fig. 6, or fig. 7 may be implemented.
Fig. 14 is a schematic structural diagram of an embodiment of an information processing apparatus according to an embodiment of the present application, where the apparatus may include:
an interactive triggering module 1401, configured to establish an interactive connection channel between a first user side and a host side;
a second obtaining module 1402, configured to obtain a first video picture collected by a anchor terminal; acquiring a third video picture acquired by the first user side;
an identifying module 1403, configured to identify a target object in the third video picture;
a fusion module 1404, configured to fuse, based on the target object, a product effect of a target product associated with the live broadcast room into a third video frame to obtain a second video frame;
the providing module 1405 is configured to provide the first video frame and the second video frame to the second user end, so that the second user end can display the first video frame and the second video frame.
In some embodiments, the interaction triggering module may be specifically configured to receive a first interaction request sent by the first user side, and send first interaction prompt information to the anchor terminal, so that the anchor terminal outputs the first interaction prompt information and generates a first confirmation request based on a confirmation operation for the first interaction prompt information; and receive the first confirmation request sent by the anchor terminal, and establish an interactive connection channel between the first user side and the anchor terminal.
In some embodiments, the interaction triggering module may be specifically configured to receive a second interaction request sent by the anchor terminal, send second interaction prompt information to the corresponding first user terminal, so that the first user terminal outputs the second interaction prompt information, and generate a second confirmation request based on a confirmation operation for the second interaction prompt information; and receiving a second confirmation request sent by the first user terminal, and establishing an interactive connection channel between the first user terminal and the anchor terminal.
In some embodiments, the identification module is specifically configured to send trial prompt information of at least one product to the first user end, so that the first user end outputs the trial prompt information of the at least one product, and send a trial request for a target product corresponding to any one of the trial prompt information based on a trigger operation of any one of the trial prompt information; receiving a trial request sent by a first user terminal; and identifying the characteristic part of the target object corresponding to the target product in the third video picture.
In some embodiments, the identification module is specifically configured to send trial prompt information of at least one product to the anchor terminal, so that the anchor terminal outputs the trial prompt information of the at least one product, and based on a trigger operation of any one of the trial prompt information, sends a trial request for a target product corresponding to any one of the trial prompt information; receiving a trial request sent by a main broadcasting terminal; and identifying the characteristic part of the target object corresponding to the target product in the third video picture.
In some embodiments, the providing module is further configured to provide the first video frame and the second video frame to the first user terminal and the anchor terminal, so that the anchor terminal and the first user terminal can display the first video frame and the second video frame.
In some embodiments, the providing module is further configured to provide an operation control corresponding to the target product to the second user end, so that the second user end displays the operation control of the target product; and receiving a processing request which is sent by a second user side and is triggered based on the operation control, and carrying out corresponding processing on the target product.
The information processing apparatus shown in fig. 14 may execute the information processing method shown in the embodiment shown in fig. 8, and details of the implementation principle and the technical effect are not repeated. The specific manner in which each module and unit of the information display device in the above embodiments perform operations has been described in detail in the embodiments related to the method, and will not be described in detail here.
In one possible design, the information processing apparatus in the embodiment shown in fig. 14 may be implemented as a computing device, which may be a component device in the server in the embodiment shown in fig. 1, as shown in fig. 15, and may include a storage component 1501 and a processing component 1502;
the storage component 1501 stores one or more computer instructions for the processing component 1502 to invoke for execution to implement the information processing method shown in fig. 8.
Among other things, the processing component 1502 may include one or more processors executing computer instructions to perform all or some of the steps of the methods described above. Of course, the processing component may also be implemented as one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components configured to perform the above-described methods.
The storage component 1501 is configured to store various types of data to support operations on the computing device. The storage component may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
Of course, the computing device may also include other components, such as input/output interfaces, communication components, and the like.
The input/output interface provides an interface between the processing components and peripheral interface modules, which may be output devices, input devices, etc.
The communication component is configured to facilitate wired or wireless communication between the computing device and other devices, and the like.
In practical applications, the computing device may be a physical device or an elastic computing host provided by a cloud computing platform; in the latter case the computing device is a cloud server, and the processing component, the storage component, and so on may be basic server resources rented or purchased from the cloud computing platform.
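As a rough illustration of the storage-component/processing-component arrangement described above for fig. 15, the following sketch models one hypothetical way a processing component could invoke "computer instructions" held by a storage component. All class and method names here are illustrative assumptions and do not correspond to any actual implementation in this application.

```python
# Hypothetical sketch of the fig. 15 arrangement: a storage component holds
# "computer instructions" (modeled here as callables) that a processing
# component looks up and executes. Names are illustrative only.

class StorageComponent:
    def __init__(self):
        self._instructions = {}

    def store(self, name, instruction):
        # Store one "computer instruction" under a name.
        self._instructions[name] = instruction

    def load(self, name):
        return self._instructions[name]


class ProcessingComponent:
    def __init__(self, storage):
        self.storage = storage

    def invoke(self, name, *args):
        # The processing component calls into the stored instructions,
        # analogous to a processor executing instructions from memory.
        return self.storage.load(name)(*args)


storage = StorageComponent()
storage.store("information_processing", lambda frame: f"processed:{frame}")

processor = ProcessingComponent(storage)
result = processor.invoke("information_processing", "frame-001")
# result == "processed:frame-001"
```

In this toy model, swapping the stored callable corresponds to loading different computer instructions into the storage component without changing the processing component.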
In addition, an embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a computer, implements the information processing method of the embodiment shown in fig. 8.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (43)

1. An information display method, comprising:
providing a first area in a live interface of a live broadcast room;
displaying a first video picture in the first area; the first video picture is acquired by a main broadcasting terminal;
providing a second area in the live interface; displaying a second video picture in the second area; the second video picture is obtained by identifying a target object from a third video picture and fusing a product effect of a target product associated with the live broadcast room into the third video picture based on the target object; the third video picture is acquired by the first user terminal; wherein the target object is a real shooting object.
2. The method of claim 1, further comprising:
displaying an operation control corresponding to the target product in the live broadcast interface;
and responding to the trigger operation aiming at the operation control, and correspondingly processing the target product.
3. The method of claim 2, wherein the operational controls include process controls corresponding to at least one process type;
the responding to the triggering operation aiming at the operation control, and the corresponding processing of the target product comprises the following steps:
and responding to the triggering operation of the processing control aiming at any processing type, and performing corresponding processing on the target product according to the processing type.
4. The method of claim 3, wherein the processing control comprises an add-to-cart control;
the responding to the triggering operation of the processing control aiming at any processing type, and the corresponding processing of the target product according to the processing type comprises the following steps:
and responding to a triggering operation for the add-to-cart control, adding the target product to a shopping cart.
5. The method of claim 3, wherein the processing control comprises a transaction control;
the responding to the triggering operation of the processing control aiming at any processing type, and the corresponding processing of the target product according to the processing type comprises the following steps:
and generating an ordering request based on the target product in response to a triggering operation for the transaction control.
6. The method of claim 2, wherein the performing the corresponding processing on the target product in response to the triggering operation for the operation control comprises:
and responding to the triggering operation for the operation control, displaying a product description page corresponding to the target product.
7. The method of claim 1, further comprising:
providing a first interaction control in the live interface;
responding to the triggering operation aiming at the first interactive control, and establishing an interactive connection channel with a main broadcasting end through a server end;
and acquiring a third video picture, and sending the third video picture to the server.
8. The method of claim 7, wherein the establishing, by the server, an interactive connection channel with the anchor in response to the triggering operation of the first user on the first interactive control comprises:
and responding to the triggering operation of the first user for the first interaction control, sending a first interaction request to a server side for the server side to send first interaction prompt information to an anchor side, and establishing an interaction connection channel between the first user side and the anchor side based on a first confirmation request fed back by the anchor side.
9. The method of claim 7, wherein prior to displaying the second video picture in the second region, the method further comprises:
displaying the third video picture in the second region.
10. The method of claim 7, wherein prior to displaying the second video picture in the second region, the method further comprises:
displaying trial prompt information corresponding to at least one product associated with the live broadcast room in the live broadcast interface;
responding to a triggering operation aiming at any trial prompt information, sending a trial request to a server based on a target product corresponding to any trial prompt information so that the server can identify a target object characteristic part corresponding to the target product in the third video picture, and fusing a product effect graph of the target product to the third video picture based on the characteristic part to obtain the second video picture.
11. The method of claim 7, wherein prior to displaying the second video picture in the second region, the method further comprises:
displaying trial prompt information corresponding to at least one product associated with the live broadcast room in the live broadcast interface;
responding to the trigger operation aiming at any trial prompt information, and acquiring a product effect graph of a target product from a server based on the target product corresponding to any trial prompt information;
and fusing the product effect graph into the third video picture to obtain the second video picture.
12. The method of claim 11, further comprising:
and sending the second video picture to a server side so that the server side can provide the second video picture to a second user side meeting the preset requirement.
13. The method of claim 1, further comprising:
displaying second interaction prompt information in the live broadcast interface; the second interaction prompt message is sent by the server side after receiving a second interaction request of the anchor side;
responding to the confirmation operation aiming at the second interaction prompt message, and sending a second confirmation request to the server;
acquiring a third video picture, sending the third video picture to the server, so that the server can identify a target object characteristic part corresponding to a target product from the third video picture based on a trial request which is sent by the anchor terminal and aims at the target product associated with the live broadcast room, and fusing a product effect graph of the target product to the third video picture based on the characteristic part to obtain a second video picture.
14. The method of claim 1, wherein prior to providing the second region in the live interface, the method further comprises:
displaying second interaction controls corresponding to different users in the live interface;
and responding to the trigger operation of the anchor aiming at any second interactive control, and establishing an interactive connection channel of the first user end corresponding to any second interactive control through the server end.
15. The method of claim 14, wherein the establishing, by the server, an interactive connection channel of the first user side corresponding to any one of the second interactive controls in response to a trigger operation of an anchor for the any one of the second interactive controls comprises:
and responding to the trigger operation of the anchor to any second interactive control, sending a second interactive request to a server, so that the server sends second interactive prompt information to a first user end corresponding to any second interactive control, and establishing an interactive connection channel between the first user end and the anchor based on a second confirmation request fed back by the first user end.
16. The method of claim 14, wherein prior to displaying the second video picture in the second region, the method further comprises:
displaying trial prompt information corresponding to at least one product related to the live broadcast room in the live broadcast interface;
responding to a trigger operation aiming at any trial prompt information, sending a trial request to a server based on a target product corresponding to any trial prompt information so that the server can identify the characteristic part of the target object in the third video picture, and fusing the product effect graph of the target product to the third video picture based on the characteristic part to obtain the second video picture.
17. The method of claim 1, wherein prior to providing the second region in the live interface, the method further comprises:
displaying first interaction prompt information in the live broadcast interface, wherein the first interaction prompt information is sent by a server side after receiving a first interaction request of a first user side;
and responding to the confirmation operation aiming at the first interaction prompt message, and sending a first confirmation request to the server side so that the server side can establish an interaction connection channel between the anchor side and the first user side, and indicating the first user side to collect a third video picture.
18. The method of claim 1, wherein providing the second area in the live interface comprises:
and providing a second area in the live broadcast interface, and displaying the first area in the second area in an overlapping manner.
19. An information display method, comprising:
a first user end provides a first area and a first interaction control in a live interface, and displays a first video picture in the first area;
responding to the triggering operation aiming at the first interactive control, establishing an interactive connection channel with a main broadcasting end through a server end, and providing a second area in the live broadcasting interface;
collecting a third video picture and sending the third video picture to the server;
displaying trial prompt information corresponding to at least one product associated with a live broadcast room in the live broadcast interface;
responding to a trigger operation aiming at any trial prompt information, sending a trial request to a server based on a target product corresponding to the any trial prompt information so that the server can identify a target object corresponding to the target product in the third video picture, and fusing a product effect of the target product into the third video picture based on the target object to obtain a second video picture; wherein the target object is a real shooting object;
displaying the second video picture in the second area.
20. An information display method, comprising:
the anchor terminal provides a first area and second interactive controls corresponding to different users in a live broadcast interface, and displays a first video picture in the first area;
responding to the triggering operation for any second interactive control, establishing, through a server side, an interactive connection channel with a first user side corresponding to the any second interactive control, and providing a second area in the live broadcast interface;
displaying trial prompt information corresponding to at least one product associated with a live broadcast room in the live broadcast interface;
responding to a triggering operation aiming at any trial prompt information, sending a trial request to a server based on a target product corresponding to any trial prompt information so that the server can identify a target object corresponding to the target product in a third video picture acquired by a first user side, and fusing a product effect of the target product into the third video picture based on the target object to obtain a second video picture; wherein the target object is a real shooting object;
displaying the second video picture in the second area.
21. An information display method, comprising:
the second user side obtains a first video picture and a second video picture provided by the server side; the first video picture is acquired by a main broadcasting terminal; the second video picture is obtained by identifying a target object from a third video picture and fusing a product effect of a target product associated with a live broadcast room into the third video picture based on the target object; the third video picture is acquired by the first user terminal; wherein the target object is a real shooting object;
providing a first area and a second area in a live broadcast interface;
displaying the first video picture in the first area and the second video picture in the second area.
22. An information processing method characterized by comprising:
establishing an interactive connection channel between a first user terminal and a main broadcasting terminal;
acquiring a first video picture acquired by the anchor terminal;
acquiring a third video picture acquired by the first user side, and identifying a target object in the third video picture; wherein the target object is a real shooting object;
based on the target object, fusing the product effect of the target product associated with the live broadcast room into the third video picture to obtain a second video picture;
and providing the first video picture and the second video picture to a second user end so as to enable the second user end to display the first video picture and the second video picture.
23. The method of claim 22, wherein establishing the interactive connection channel between the first user terminal and the anchor terminal comprises:
receiving a first interaction request sent by a first user end, sending first interaction prompt information to a main broadcast end so that the main broadcast end can output the first interaction prompt information, and generating a first confirmation request based on confirmation operation aiming at the first interaction prompt information;
and receiving the first confirmation request sent by the anchor terminal, and establishing an interactive connection channel between the first user terminal and the anchor terminal.
24. The method of claim 22, wherein establishing the interactive connection channel between the first user terminal and the anchor terminal comprises:
receiving a second interaction request sent by a main broadcasting end, sending second interaction prompt information to a corresponding first user end so that the first user end can output the second interaction prompt information, and generating a second confirmation request based on confirmation operation aiming at the second interaction prompt information;
and receiving the second confirmation request sent by the first user terminal, and establishing an interactive connection channel between the first user terminal and the anchor terminal.
25. The method of claim 22, wherein the identifying the target object in the third video picture comprises:
sending trial prompt information of at least one product to the first user side so that the first user side can output the trial prompt information of the at least one product, and sending a trial request aiming at a target product corresponding to any one of the trial prompt information based on the triggering operation of any one of the trial prompt information;
receiving the trial request sent by the first user terminal;
and identifying a target object corresponding to the target product in the third video picture.
26. The method of claim 22, wherein the identifying the target object in the third video picture comprises:
sending trial prompt information of at least one product to the anchor terminal so that the anchor terminal can output the trial prompt information of the at least one product, and sending a trial request aiming at a target product corresponding to any one of the trial prompt information based on the trigger operation of any one of the trial prompt information;
receiving the trial request sent by the anchor terminal;
and identifying a target object corresponding to the target product in the third video picture.
27. The method of claim 22, further comprising:
and providing the first video picture and the second video picture to the first user terminal and the anchor terminal so that the anchor terminal and the first user terminal can display the first video picture and the second video picture.
28. The method of claim 22, further comprising:
providing the operation control corresponding to the target product to the second user end so that the second user end can display the operation control of the target product;
and receiving a processing request which is sent by the second user side and triggered based on the operation control, and carrying out corresponding processing on the target product.
29. An information display method, comprising:
a first user end provides a first area and a microphone connecting control in a live broadcast interface, and displays a first video picture in the first area; the first video picture is acquired by a main broadcasting terminal;
responding to the triggering operation aiming at the microphone connecting control, establishing a microphone connecting interaction channel with a main broadcasting end through a server end, and providing a second area in the live broadcasting interface;
collecting a third video picture, and sending the third video picture to the server;
displaying makeup test prompt information corresponding to at least one makeup product associated with a live broadcast room in the live broadcast interface;
responding to a trigger operation for any makeup test prompt information, sending a makeup trial request to a server based on a target makeup product corresponding to the any makeup test prompt information, so that the server can identify a face feature part corresponding to the target makeup product in the third video picture, and fusing a makeup effect image of the target makeup product into the third video picture based on the face feature part to obtain a makeup trial picture;
and displaying the makeup trial picture in the second area.
30. An information display method, comprising:
the method comprises the steps that a main broadcast end provides a first area and microphone connecting controls corresponding to different users in a live broadcast interface, and a first video picture is displayed in the first area; the first video picture is acquired by the anchor terminal;
responding to the triggering operation for any one microphone connecting control, establishing, through a server side, a microphone connecting interaction channel with a first user side corresponding to the any microphone connecting control, and providing a second area in the live broadcast interface;
displaying makeup test prompt information corresponding to at least one makeup product associated with a live broadcast room in the live broadcast interface;
responding to a trigger operation aiming at any makeup test prompt message, sending a makeup test request to a server based on a target makeup product corresponding to any makeup test prompt message so that the server can identify a face feature part corresponding to the target makeup product in a third video picture acquired by a first user side, and fusing a makeup effect image of the target makeup product into the third video picture based on the face feature part to obtain a makeup test picture;
displaying the makeup trial picture in the second area.
31. An information display method, comprising:
the second user side obtains a first video picture and a makeup trial picture provided by the server side; the first video picture is acquired by a main broadcasting terminal; the makeup trial picture is obtained by identifying a face characteristic part from a third video picture and fusing a makeup effect picture of a target makeup product associated with a live broadcast room into the third video picture based on the face characteristic part; the third video picture is acquired by the first user terminal;
providing a first area and a second area in a live interface;
displaying the first video picture in the first area, and displaying the makeup trial picture in the second area.
32. An information processing method characterized by comprising:
establishing a connecting microphone interaction channel between a first user terminal and a main broadcasting terminal;
acquiring a first video picture acquired by the anchor terminal;
acquiring a third video image acquired by the first user side, and identifying a face characteristic part in the third video image;
based on the face feature part, fusing a makeup effect picture of a target makeup product associated with a live broadcast room into the third video picture to obtain a makeup trial picture;
and providing the first video picture and the makeup trial picture to a second user end so that the second user end can display the first video picture and the makeup trial picture.
33. A network live broadcast system is characterized by comprising a server, a main broadcast end, a first user end and a second user end;
the server is used for establishing an interactive connection channel between the anchor terminal and the first user terminal; acquiring a first video picture acquired by the anchor terminal; acquiring a third video picture acquired by the first user side, and identifying a target object in the third video picture; based on the target object, fusing a product effect of a target product associated with a live broadcast room into the third video picture to obtain a second video picture; wherein the target object is a real shooting object;
the first user side is used for acquiring a third video picture after an interactive connection channel with the anchor side is established, and sending the third video picture to the server side; acquiring the first video picture and the second video picture from the server side, and displaying the first video picture and the second video picture;
the second user side is used for acquiring the first video picture and the second video picture from the server side and displaying the first video picture and the second video picture.
34. An information display device, comprising:
the first display module is used for providing a first area in a live interface of a live broadcast room; displaying a first video picture in the first area; the first video picture is acquired by a main broadcasting terminal;
the second display module is used for providing a second area in the live interface; displaying a second video picture in the second area; the second video picture is obtained by identifying a target object from a third video picture and fusing a product effect of a target product associated with the live broadcast room into the third video picture based on the target object; the third video picture is acquired by the first user terminal; wherein the target object is a real shooting object.
35. An information display device characterized by comprising:
the third display module is used for providing a first area and a first interactive control in a live interface and displaying a first video picture in the first area;
the first response module is used for responding to the triggering operation aiming at the first interactive control, establishing an interactive connection channel with a main broadcast end through a server end, and providing a second area in the live broadcast interface;
the first acquisition module is used for acquiring a third video picture and sending the third video picture to the server;
the fourth display module is used for displaying trial prompt information corresponding to at least one product associated with the live broadcast room in the live broadcast interface; responding to a trigger operation aiming at any trial prompt information, sending a trial request to a server based on a target product corresponding to the any trial prompt information so that the server can identify a target object corresponding to the target product in the third video picture, and fusing a product effect of the target product into the third video picture based on the target object to obtain a second video picture; displaying the second video picture in the second area; wherein the target object is a real shooting object.
36. An information display device characterized by comprising:
the fifth display module is used for providing a first area and second interactive controls corresponding to different users in a live broadcast interface and displaying a first video picture in the first area;
the second response module is used for responding to the triggering operation aiming at any second interactive control, establishing an interactive connection channel corresponding to a first user end of any second interactive control through the server end, and providing a second area in the live broadcast interface;
the sixth display module is used for displaying trial prompt information corresponding to at least one product related to the live broadcast room in the live broadcast interface; responding to a trigger operation aiming at any trial prompt information, sending a trial request to a server based on a target product corresponding to the any trial prompt information so that the server can identify a target object corresponding to the target product in a third video picture acquired by a first user side, and fusing a product effect of the target product into the third video picture based on the target object to obtain a second video picture; displaying the second video picture in the second area; wherein the target object is a real shooting object.
37. An information display device, comprising:
the first acquisition module is used for acquiring a first video picture and a second video picture provided by the server; the first video picture is acquired by a main broadcasting terminal; the second video picture is obtained by identifying a target object from a third video picture and fusing a product effect of a target product associated with a live broadcast room into the third video picture based on the target object; the third video picture is acquired by the first user terminal; the target object is a real shooting object;
the seventh display module is used for providing a first area and a second area in a live interface; displaying the first video picture in the first area and displaying the second video picture in the second area.
38. An information processing apparatus characterized by comprising:
the interactive triggering module is used for establishing an interactive connection channel between the first user terminal and the anchor terminal;
the second acquisition module is used for acquiring a first video picture acquired by the anchor terminal; acquiring a third video picture acquired by the first user side;
the identification module is used for identifying a target object in the third video picture; wherein the target object is a real shooting object;
the fusion module is used for fusing the product effect of the target product associated with the live broadcast room into the third video picture to obtain a second video picture based on the target object;
and the providing module is used for providing the first video picture and the second video picture to a second user end so that the second user end can display the first video picture and the second video picture.
39. A live broadcast interface is characterized by comprising a first area and a second area;
the first area is used for displaying a first video picture; the first video picture is acquired by a main broadcasting terminal;
the second area is used for displaying a second video picture; the second video picture is obtained by identifying a target object from a third video picture and fusing a product effect of a target product associated with a live broadcast room into the third video picture based on the target object; the third video picture is acquired by the first user terminal; wherein the target object is a real shooting object.
40. An electronic device is characterized by comprising a storage component, a display component and a processing component; the storage component stores one or more computer program instructions; the one or more computer program instructions are called and executed by the processing component to realize the information display method according to any one of claims 1 to 18.
41. A computing device comprising a processing component and a storage component;
the storage component stores one or more computer instructions; the one or more computer instructions for execution by the processing component to perform the information processing method of claim 22.
42. A computer storage medium storing a computer program that, when executed by a computer, implements the information display method according to any one of claims 1 to 18.
43. A computer storage medium characterized by storing a computer program that realizes the information processing method according to claim 22 when executed by a computer.
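The server-side flow recited in claims 22 and 32 — identify a target object (e.g. a face feature part) in the first user's captured frame, then fuse a product effect into that frame to obtain the second video picture — can be sketched in miniature as follows. This is a hypothetical illustration under simplifying assumptions: frames are tiny grayscale nested lists, the "detector" is a bounding box of nonzero pixels, and the "effect" is a flat alpha blend; none of these stand-in functions correspond to the actual recognition or rendering used in the patented system.

```python
# Toy sketch of the claim-22 pipeline: locate a target region in the
# first user's frame, alpha-blend a product "effect" over that region,
# and produce the fused (second) video picture. Real systems would use
# face detection and image rendering; these are illustrative stand-ins.

def identify_target_object(frame):
    # Stand-in detector: bounding box (top, left, bottom, right) of all
    # nonzero pixels, playing the role of locating a feature part.
    rows = [r for r, row in enumerate(frame) if any(row)]
    cols = [c for c in range(len(frame[0])) if any(row[c] for row in frame)]
    return (min(rows), min(cols), max(rows), max(cols))

def fuse_product_effect(frame, effect_value, alpha=0.5):
    # Blend a flat "product effect" into the detected region only,
    # leaving the rest of the frame untouched.
    top, left, bottom, right = identify_target_object(frame)
    fused = [row[:] for row in frame]  # copy so the source frame survives
    for r in range(top, bottom + 1):
        for c in range(left, right + 1):
            fused[r][c] = round((1 - alpha) * frame[r][c] + alpha * effect_value)
    return fused

# A 4x4 "third video picture" with a 2x2 bright region as the target.
third_video_picture = [
    [0, 0, 0, 0],
    [0, 100, 100, 0],
    [0, 100, 100, 0],
    [0, 0, 0, 0],
]
second_video_picture = fuse_product_effect(third_video_picture, 200, alpha=0.5)
# Inside the detected 2x2 region, 100 blends with 200 to 150;
# pixels outside the region are unchanged.
```

The same shape of pipeline covers the makeup case of claim 32 if the detector returns a face feature region and the effect value is replaced by a rendered makeup effect image.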
CN202010340430.3A 2020-04-26 2020-04-26 Information display method, information processing method, device and system Active CN113301412B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010340430.3A CN113301412B (en) 2020-04-26 2020-04-26 Information display method, information processing method, device and system


Publications (2)

Publication Number Publication Date
CN113301412A CN113301412A (en) 2021-08-24
CN113301412B true CN113301412B (en) 2023-04-18

Family

ID=77317975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010340430.3A Active CN113301412B (en) 2020-04-26 2020-04-26 Information display method, information processing method, device and system

Country Status (1)

Country Link
CN (1) CN113301412B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115776575A (en) * 2021-09-06 2023-03-10 北京字跳网络技术有限公司 Article display method and device, electronic equipment and storage medium
CN115767191A (en) * 2022-11-10 2023-03-07 北京字跳网络技术有限公司 Method, device, equipment and storage medium for live broadcast
CN117544795A (en) * 2023-11-03 2024-02-09 书行科技(北京)有限公司 Live broadcast information display method, management method, device, equipment and medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8381246B2 (en) * 2010-08-27 2013-02-19 Telefonaktiebolaget L M Ericsson (Publ) Methods and apparatus for providing electronic program guides
CN106682958A (en) * 2016-11-21 2017-05-17 汕头市智美科技有限公司 Method and device for trying on makeup virtually
CN106791904A (en) * 2016-12-29 2017-05-31 广州华多网络科技有限公司 Live purchase method and device
CN109874021B (en) * 2017-12-04 2021-05-11 腾讯科技(深圳)有限公司 Live broadcast interaction method, device and system


Similar Documents

Publication Publication Date Title
CN113301412B (en) Information display method, information processing method, device and system
CN105701217B (en) Information processing method and server
CN113965811B (en) Play control method and device, storage medium and electronic device
CN106792228B (en) Live broadcast interaction method and system
CN111405343A (en) Live broadcast interaction method and device, electronic equipment and storage medium
CN107911737B (en) Media content display method and device, computing equipment and storage medium
WO2012109666A1 (en) Contextual commerce for viewers of video programming
CN109525850A (en) A kind of live broadcasting method, apparatus and system
CN106792230B (en) Advertisement interaction method and system based on live video
KR101511297B1 (en) Apparatus and method for generating information about object and, server for shearing information
CN105704502A (en) Live video interactive method and device
US20170019720A1 (en) Systems and methods for making video discoverable
CN113573129A (en) Commodity object display video processing method and device
US20240106985A1 (en) Video distribution server, video distribution method and recording medium
CN111107434A (en) Information recommendation method and device
US20170131851A1 (en) Integrated media display and content integration system
US9538209B1 (en) Identifying items in a content stream
CN113301421B (en) Live broadcast segment display method and device, storage medium and electronic equipment
CN113784180A (en) Video display method, video pushing method, video display device, video pushing device, video display equipment and storage medium
KR101279849B1 (en) Realtime assistant method for tv shopping program using mobile computing device
CN115174953B (en) Event virtual live broadcast method, system and event live broadcast server
WO2017047288A1 (en) Video display system
JP5852171B2 (en) Content additional information provision system
CN113645474A (en) Interactive information processing method, interactive information display method and electronic equipment
KR20130083003A (en) Apparatus and method for electronic commerce using broadcasting image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant