CN110865754B - Information display method and device and terminal - Google Patents

Information display method and device and terminal

Publication number
CN110865754B
CN110865754B (application CN201911093543.1A)
Authority
CN
China
Prior art keywords
special effect
target
effect component
information
user
Prior art date
Legal status
Active
Application number
CN201911093543.1A
Other languages
Chinese (zh)
Other versions
CN110865754A (en)
Inventor
王倩
帕哈尔丁·帕力万
张永良
陈凤龙
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201911093543.1A
Publication of CN110865754A
Application granted
Publication of CN110865754B
Legal status: Active

Classifications

    • G06F3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F9/451: Execution arrangements for user interfaces
    • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/632: Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects

Abstract

The disclosure provides an information display method, an information display device, and a terminal, and belongs to the field of internet technology. The method comprises the following steps: displaying a first application interface of a target application on a terminal, the first application interface including a jump entry to a shooting interface; in response to receiving a trigger operation on the jump entry, displaying the shooting interface of the target application; acquiring current camera information; and, if the camera information meets a pop-up condition for popping up a special effect panel, displaying the special effect panel in the shooting interface, the panel including at least one target special effect component. Because the special effect panel is displayed automatically whenever the camera information meets the pop-up condition, the user is spared the manual operation of popping up the panel every time the shooting interface is entered, which improves the operating efficiency of the terminal.

Description

Information display method and device and terminal
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to an information display method, an information display device, and a terminal.
Background
With the development of internet technology, the number of entertainment applications on terminals keeps growing. For example, a short-video application installed on a terminal lets a user both watch short videos and shoot them. To make shooting more engaging, the short-video application can offer a variety of magic expressions for the user to choose from, so that the user can shoot entertaining short videos with them.
In the related art, a trigger button for the magic expression panel is displayed below the shooting interface of the short-video application. The user can tap this button to pop up the panel; accordingly, when the terminal detects that the user has tapped the button, it displays the magic expression panel below the shooting interface. The panel contains a number of predefined magic expressions, from which the user can select one to shoot with.
The problem with this approach is that every time the user enters the shooting interface, the user must tap the trigger button to pop up the magic expression panel, which costs time and effort and keeps the operating efficiency of the terminal low.
Disclosure of Invention
The embodiments of the disclosure provide an information display method, an information display device, and a terminal, solving the related-art problem that the user must tap the trigger button of the magic expression panel to pop it up every time the shooting interface is entered, which keeps the operating efficiency of the terminal low. The technical solution is as follows:
according to an aspect of the embodiments of the present disclosure, there is provided an information display method, including:
displaying a first application interface of a target application on a terminal, the first application interface including a jump entry to a shooting interface;
in response to receiving a trigger operation on the jump entry, displaying the shooting interface of the target application;
acquiring current camera information;
and, if the camera information meets a pop-up condition for popping up a special effect panel, displaying the special effect panel in the shooting interface, the special effect panel including at least one target special effect component.
In a possible implementation, the camera information includes an identifier of the currently active camera; if the identifier indicates that the currently active camera is the target camera, the camera information is determined to meet the pop-up condition for popping up the special effect panel; and/or,
the camera information includes first image data in the viewfinder of the shooting interface; if the first image data contains face information, the camera information is determined to meet the pop-up condition for popping up the special effect panel.
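The two pop-up conditions above (target camera active, or a face in the viewfinder frame) can be sketched as follows. This is an illustrative sketch, not code from the patent: the names `CameraInfo`, `TARGET_CAMERA_ID`, and `has_face` are hypothetical, and `has_face` stands in for a real face detector.

```python
from dataclasses import dataclass
from typing import Optional

TARGET_CAMERA_ID = "front"  # assumption: the front camera is the target camera


@dataclass
class CameraInfo:
    camera_id: str                # identifier of the currently active camera
    first_frame: Optional[bytes]  # first image data in the viewfinder, if any


def has_face(image_data: bytes) -> bool:
    """Placeholder for a real face detector (illustration only)."""
    return b"face" in image_data


def should_pop_up(info: CameraInfo) -> bool:
    # Condition 1: the currently active camera is the target camera.
    if info.camera_id == TARGET_CAMERA_ID:
        return True
    # Condition 2: the viewfinder frame contains face information.
    if info.first_frame is not None and has_face(info.first_frame):
        return True
    return False
```

Either condition alone triggers the panel, matching the "and/or" phrasing of the claim.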
In another possible implementation, acquiring the current camera information includes:
acquiring the current camera information when the shooting interface is displayed; or,
acquiring the current camera information when a camera-switch operation on the terminal is received.
In another possible implementation manner, the method further includes:
if a closing operation on the special effect panel by the user is detected within a preset time after the panel is displayed in the shooting interface, skipping the step of acquiring the current camera information the next time the shooting interface of the target application is displayed.
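A hypothetical sketch of this suppression behavior: if the user closes the panel within a grace period after it pops up, the automatic pop-up is disabled for the next visit to the shooting interface. The class name and the grace-period value are assumptions; the patent only says "a preset time".

```python
import time

CLOSE_GRACE_SECONDS = 5.0  # assumed value for the patent's "preset time"


class PanelState:
    def __init__(self):
        self.shown_at = None
        self.auto_popup_enabled = True

    def on_panel_shown(self):
        # Record when the panel was automatically displayed.
        self.shown_at = time.monotonic()

    def on_panel_closed(self):
        if (self.shown_at is not None
                and time.monotonic() - self.shown_at <= CLOSE_GRACE_SECONDS):
            # User dismissed the panel quickly: treat the pop-up as unwanted
            # and skip acquiring camera info / auto-popping next time.
            self.auto_popup_enabled = False
```

A quick dismissal is taken as a signal of user intent, so the terminal avoids repeating an unwelcome pop-up.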
In another possible implementation, displaying the special effect panel in the shooting interface includes:
acquiring the user identifier of the user currently logged in to the target application;
acquiring, according to the user identifier, at least one piece of multimedia information on which the user has acted in the target application and in which a special effect component is used;
acquiring, according to the at least one piece of multimedia information, the at least one target special effect component recommended for the user;
displaying a special effect template in the shooting interface, and loading the at least one target special effect component into the template to obtain the special effect panel.
In another possible implementation, acquiring, according to the user identifier, the at least one piece of multimedia information on which the user has acted in the target application and in which a special effect component is used includes:
acquiring, according to the user identifier, at least one piece of multimedia information that the user has watched in the target application and that uses a special effect component; and/or,
acquiring, according to the user identifier, at least one piece of multimedia information that the user has published in the target application and that uses a special effect component; and/or,
acquiring, according to the user identifier, at least one piece of multimedia information that the user has produced in the target application and that uses a special effect component.
In another possible implementation, acquiring, according to the at least one piece of multimedia information, the at least one target special effect component recommended for the user includes:
obtaining, from the at least one piece of multimedia information, the special effect components used in each piece, yielding a plurality of special effect components;
determining recommendation degrees of the plurality of special effect components according to the user's behavior on each piece of multimedia information;
selecting the at least one target special effect component from the plurality of special effect components according to their recommendation degrees.
In another possible implementation, the behavior includes viewing, publishing, and production; determining the recommendation degrees of the plurality of special effect components according to the user's behavior on each piece of multimedia information includes:
for any special effect component, separately acquiring a first number of multimedia items that use the component and were viewed, a second number of multimedia items that use the component and were published, and a third number of multimedia items that use the component and were produced;
computing a weighted sum of the first, second, and third numbers, with a first weight corresponding to viewing, a second weight corresponding to publishing, and a third weight corresponding to production, to obtain the recommendation degree of the component.
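The weighted-sum computation above can be sketched as follows. The concrete weight values and the record format are assumptions for illustration; the patent only specifies that each behavior type carries its own weight.

```python
from collections import defaultdict

# Assumed weights: producing with a component signals more interest than
# publishing, which signals more than merely viewing.
W_VIEW, W_PUBLISH, W_MAKE = 1.0, 3.0, 5.0


def recommendation_degrees(records):
    """records: iterable of (component_id, behavior) pairs, where behavior
    is 'view', 'publish', or 'make'. Returns {component_id: degree}."""
    counts = defaultdict(lambda: {"view": 0, "publish": 0, "make": 0})
    for component_id, behavior in records:
        counts[component_id][behavior] += 1

    weights = {"view": W_VIEW, "publish": W_PUBLISH, "make": W_MAKE}
    # Weighted sum of the per-behavior counts for each component.
    return {
        cid: sum(weights[b] * n for b, n in per_behavior.items())
        for cid, per_behavior in counts.items()
    }
```

The components would then be ranked by these degrees, possibly combined with a heat value, to choose the target components for the panel.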
In another possible implementation, selecting the at least one target special effect component from the plurality of special effect components according to their recommendation degrees includes:
determining a heat value for each of the plurality of special effect components;
selecting the at least one target special effect component from the plurality of special effect components according to each component's heat value and recommendation degree.
In another possible implementation, acquiring, according to the at least one piece of multimedia information, the at least one target special effect component recommended for the user includes:
obtaining, from the at least one piece of multimedia information, the special effect components used in each piece, yielding a plurality of special effect components;
determining at least one feature of the plurality of special effect components;
selecting, according to the at least one feature, the at least one target special effect component matching the at least one feature from a special effect component library, the library storing the special effect components of the target application.
In another possible implementation, selecting, according to the at least one feature, the at least one target special effect component matching the at least one feature from the special effect component library includes:
selecting, from the special effect component library, at least one target special effect component that matches the at least one feature and whose heat value exceeds a preset threshold.
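This feature-plus-heat selection step could look like the sketch below. The field names (`id`, `features`, `heat`) and the threshold value are invented for illustration; the patent specifies only "matches the at least one feature" and "heat value exceeds a preset threshold".

```python
HEAT_THRESHOLD = 0.5  # assumed value for the patent's "preset threshold"


def select_by_features(user_features, component_library):
    """component_library: list of dicts with 'id', 'features' (a set of
    feature tags), and 'heat' (a popularity score). Returns the ids of
    components sharing at least one feature with the user's features
    and whose heat exceeds the threshold."""
    user_features = set(user_features)
    return [
        comp["id"]
        for comp in component_library
        if comp["features"] & user_features and comp["heat"] > HEAT_THRESHOLD
    ]
```

Matching here means a non-empty feature intersection; a real system might instead score the overlap and rank by it.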
In another possible implementation, loading the at least one target special effect component into the special effect template to obtain the special effect panel includes:
sorting the at least one target special effect component according to the recommendation degree of each target special effect component;
loading the sorted target special effect component(s) into the template to obtain the special effect panel.
In another possible implementation, after displaying the special effect panel in the shooting interface, the method further includes:
acquiring first image data in the viewfinder of the shooting interface, and selecting a target special effect component from the at least one target special effect component;
applying the selected target special effect component to the first image data to obtain second image data;
displaying the second image data in the viewfinder.
In another possible implementation, selecting a target special effect component from the at least one target special effect component includes:
selecting the target special effect component with the highest recommendation degree, according to the recommendation degrees of the at least one target special effect component; or,
selecting, according to the first image data, the target special effect component that matches the first image data.
According to another aspect of the embodiments of the present disclosure, there is provided an information display apparatus, the apparatus including:
a first display module configured to display a first application interface of a target application on a terminal, the first application interface including a jump entry to a shooting interface, and, in response to receiving a trigger operation on the jump entry, display the shooting interface of the target application;
an acquisition module configured to acquire current camera information;
a second display module configured to display the special effect panel in the shooting interface if the camera information meets the pop-up condition for popping up the special effect panel, the special effect panel including at least one target special effect component.
In a possible implementation, the camera information includes an identifier of the currently active camera; the second display module is further configured to determine that the camera information meets the pop-up condition for popping up the special effect panel if the identifier indicates that the currently active camera is the target camera; and/or,
the camera information includes first image data in the viewfinder of the shooting interface; the second display module is further configured to determine that the camera information meets the pop-up condition for popping up the special effect panel if the first image data contains face information.
In another possible implementation, the acquisition module is further configured to acquire the current camera information when the shooting interface is displayed, or when a camera-switch operation on the terminal is received.
In another possible implementation, the acquisition module is further configured to stop acquiring the current camera information the next time the shooting interface of the target application is displayed, if a closing operation on the special effect panel by the user is detected within a preset time after the panel is displayed in the shooting interface.
In another possible implementation, the second display module is further configured to acquire the user identifier of the user currently logged in to the target application; acquire, according to the user identifier, at least one piece of multimedia information on which the user has acted in the target application and in which a special effect component is used; acquire, according to the at least one piece of multimedia information, the at least one target special effect component recommended for the user; and display a special effect template in the shooting interface and load the at least one target special effect component into the template to obtain the special effect panel.
In another possible implementation, the second display module is further configured to acquire, according to the user identifier, at least one piece of multimedia information that the user has watched in the target application and that uses a special effect component; and/or at least one piece that the user has published in the target application and that uses a special effect component; and/or at least one piece that the user has produced in the target application and that uses a special effect component.
In another possible implementation, the second display module is further configured to obtain, from the at least one piece of multimedia information, the special effect components used in each piece, yielding a plurality of special effect components; determine recommendation degrees of the plurality of special effect components according to the user's behavior on each piece of multimedia information; and select the at least one target special effect component from the plurality of special effect components according to their recommendation degrees.
In another possible implementation, the behavior includes viewing, publishing, and production;
the second display module is further configured to, for any special effect component, separately acquire a first number of multimedia items that use the component and were viewed, a second number of multimedia items that use the component and were published, and a third number of multimedia items that use the component and were produced; and compute a weighted sum of the first, second, and third numbers, with a first weight corresponding to viewing, a second weight corresponding to publishing, and a third weight corresponding to production, to obtain the recommendation degree of the component.
In another possible implementation, the second display module is further configured to determine a heat value for each of the plurality of special effect components, and to select the at least one target special effect component from the plurality of special effect components according to each component's heat value and recommendation degree.
In another possible implementation, the second display module is further configured to obtain, from the at least one piece of multimedia information, the special effect components used in each piece, yielding a plurality of special effect components; determine at least one feature of the plurality of special effect components; and select, according to the at least one feature, the at least one target special effect component matching the at least one feature from a special effect component library, the library storing the special effect components of the target application.
In another possible implementation, the second display module is further configured to select, from the special effect component library, at least one target special effect component that matches the at least one feature and whose heat value exceeds a preset threshold.
In another possible implementation, the second display module is further configured to sort the at least one target special effect component according to the recommendation degree of each target special effect component, and to load the sorted target special effect component(s) into the special effect template to obtain the special effect panel.
In another possible implementation, the second display module is further configured to acquire first image data in the viewfinder of the shooting interface, select a target special effect component from the at least one target special effect component, apply the selected target special effect component to the first image data to obtain second image data, and display the second image data in the viewfinder.
In another possible implementation, the second display module is further configured to select the target special effect component with the highest recommendation degree, according to the recommendation degrees of the at least one target special effect component; or to select, according to the first image data, the target special effect component that matches the first image data.
According to another aspect of the embodiments of the present disclosure, there is provided a terminal including a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the operations performed by the information display method described above.
According to another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing at least one instruction that is loaded and executed by a processor to implement the operations performed by the information display method described above.
According to another aspect of the embodiments of the present disclosure, there is provided a computer program product whose instructions, when executed by a processor of a computer device, enable the computer device to perform the operations performed by the information display method described above.
The technical scheme provided by the embodiment of the disclosure has the following beneficial effects:
in the embodiments of the disclosure, a first application interface of a target application is displayed on a terminal, the first application interface including a jump entry to a shooting interface; in response to receiving a trigger operation on the jump entry, the shooting interface of the target application is displayed; current camera information is acquired; and, if the camera information meets a pop-up condition for popping up the special effect panel, the special effect panel is displayed in the shooting interface, the panel including at least one target special effect component. Because the special effect panel is displayed automatically whenever the camera information meets the pop-up condition, the user is spared the manual operation of popping up the panel every time the shooting interface is entered, which improves the operating efficiency of the terminal.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a schematic diagram of an implementation environment shown in accordance with an example embodiment.
Fig. 2 is a flow chart illustrating an information display method according to an example embodiment.
Fig. 3 is a flow chart illustrating an information display method according to an example embodiment.
FIG. 4 is an interface diagram illustrating a target application, according to an example embodiment.
FIG. 5 is an interface diagram illustrating another target application, according to an example embodiment.
FIG. 6 is a block diagram illustrating an information display device according to an example embodiment.
Fig. 7 is a schematic diagram illustrating a structure of a terminal according to an exemplary embodiment.
Fig. 8 is a schematic diagram illustrating a configuration of a server according to an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The user information involved in the present disclosure is information authorized by the user or fully authorized by all parties concerned.
FIG. 1 is a schematic diagram of an implementation environment provided by embodiments of the present disclosure. Referring to FIG. 1, the implementation environment includes a terminal 101 and a server 102, connected via a wireless or wired network. A target application served by the server 102 may be installed on the terminal 101, through which the user of the terminal 101 can perform functions such as data transmission and message interaction.
The terminal 101 may be a computer, a mobile phone, a tablet computer, or another terminal. The target application may be built into the operating system of the terminal 101 or provided by a third party; for example, it may be a short-video application, a photo application, a camera application, a chat application, or the like. The server 102 is the backend server of the target application, and may accordingly be a short-video application server, a photo application server, a camera application server, a chat application server, or the like.
The user can take a picture, take a short video, or conduct a video chat, etc. through a target application installed on the terminal 101. When a user shoots a short video through a target application, the terminal 101 acquires current shooting information, and if the shooting information meets a pop-up condition for popping up a special effect panel, the terminal 101 sends a special effect component acquisition request to the server 102, wherein the request carries a user identifier of a currently logged-in user in the target application. The server 102 receives the special effect component acquisition request, acquires at least one target special effect component recommended for the user according to the user identification, and sends the at least one target special effect component to the terminal 101.
The terminal 101 receives at least one target special effect component recommended for the user, loads the at least one target special effect component into a special effect template to obtain a special effect panel, and displays the special effect panel in a shooting interface. The special effect panel can be a magic expression panel, a makeup panel or a beauty panel, and correspondingly, the target special effect component can be a magic expression, a makeup special effect, a beauty special effect and the like.
In a possible implementation manner, the server 102 may also sort the at least one target special effect component recommended for the user according to the recommendation degree of each target special effect component, and send the at least one sorted target special effect component to the terminal 101. After receiving the at least one sorted target special effect component, the terminal 101 loads it into the special effect template to obtain the special effect panel.
In a possible implementation manner, the terminal 101 further selects a target special effect component from at least one target special effect component, performs special effect processing on first image data in a viewfinder in the shooting interface according to the selected target special effect component to obtain second image data, and then displays the second image data in a viewfinder frame of the shooting interface to provide an effect preview for the user.
Fig. 2 is a flow chart illustrating a method of presenting information according to an example embodiment. Referring to fig. 2, the information presentation method may be applied to a terminal, and the embodiment includes:
in step 201, a first application interface of a target application is displayed on a terminal, and the first application interface comprises a jump interface of a shooting interface.
In step 202, in response to receiving a trigger operation for the jump interface, a shooting interface of the target application is displayed.
In step 203, current shooting information is acquired.
In step 204, if the camera shooting information meets the pop-up condition for popping up the special effect panel, the special effect panel is displayed in the shooting interface, and the special effect panel comprises at least one target special effect component.
In one possible implementation, the shooting information includes an identifier of the currently turned-on camera; if the currently turned-on camera is determined to be the target camera according to the camera identifier, it is determined that the shooting information meets the pop-up condition for popping up the special effect panel; and/or,
the shooting information includes first image data in a view frame of the shooting interface; if the first image data includes face information, it is determined that the shooting information meets the pop-up condition for popping up the special effect panel.
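As an illustrative sketch only (the function and constant names below are hypothetical, not from the disclosure), the two pop-up conditions above can be combined as follows:

```python
# Hypothetical sketch of the pop-up check described above. The names
# FRONT_CAMERA, camera_id, and has_face are illustrative assumptions.
FRONT_CAMERA = "front"  # assumed identifier of the target camera


def should_pop_up_panel(camera_id: str, has_face: bool) -> bool:
    """Return True when the shooting information meets the pop-up condition.

    The text says "and/or": either condition alone may trigger the pop-up;
    a stricter variant (see the later implementation) requires both.
    """
    return camera_id == FRONT_CAMERA or has_face
```

Whether the disjunction or the conjunction of the two conditions is used is an implementation choice; the disclosure describes both variants.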
In another possible implementation manner, acquiring the current shooting information includes:
when the shooting interface is displayed, acquiring the current shooting information; or,
when receiving the switching operation of the camera of the terminal, acquiring the current shooting information.
In another possible implementation manner, the method further includes:
and if a closing operation on the special effect panel by the user is detected within a preset time after the special effect panel is displayed in the shooting interface, the step of acquiring the current shooting information is not executed the next time the shooting interface of the target application is displayed.
In another possible implementation, displaying a special effects panel in a shooting interface includes:
acquiring a user identifier of a current login user in a target application;
acquiring at least one piece of multimedia information of a user, which acts in a target application and uses a special effect component, according to the user identification;
acquiring at least one target special effect component recommended for a user according to at least one piece of multimedia information;
and displaying the special effect template in the shooting interface, and loading at least one target special effect component into the special effect template to obtain a special effect panel.
In another possible implementation manner, acquiring, according to the user identifier, at least one piece of multimedia information in which a user acts in the target application and uses the special effect component, includes:
acquiring, according to the user identifier, at least one piece of multimedia information that the user has watched in the target application and that uses a special effect component; and/or,
acquiring, according to the user identifier, at least one piece of multimedia information that the user has published in the target application and that uses a special effect component; and/or,
acquiring, according to the user identifier, at least one piece of multimedia information that the user has made in the target application and that uses a special effect component.
In another possible implementation manner, obtaining at least one target special effect component recommended to a user according to at least one piece of multimedia information includes:
according to at least one piece of multimedia information, obtaining special effect components used in each piece of multimedia information to obtain a plurality of special effect components;
determining recommendation degrees of a plurality of special effect components according to the occurrence behavior of each multimedia information of the user;
and selecting at least one target special effect component from the plurality of special effect components according to the recommendation degrees of the plurality of special effect components.
In another possible implementation, the occurrence behaviors include watching, publishing, and making; determining the recommendation degrees of the plurality of special effect components according to the user's occurrence behavior on each piece of multimedia information includes:
for any special effect component, respectively acquiring a first number of pieces of multimedia information that use the special effect component and whose occurrence behavior is watching, a second number of pieces that use the special effect component and whose occurrence behavior is publishing, and a third number of pieces that use the special effect component and whose occurrence behavior is making;
and performing weighted summation on the first number, the second number, and the third number according to a first weight corresponding to watching, a second weight corresponding to publishing, and a third weight corresponding to making, to obtain the recommendation degree of the special effect component.
In another possible implementation, selecting at least one target special effect component from the plurality of special effect components according to the recommendation degrees of the plurality of special effect components includes:
determining a heat value for each of a plurality of special effect components;
at least one target special effect component is selected from the plurality of special effect components according to the heat value and the recommendation degree of each special effect component.
In another possible implementation manner, obtaining at least one target special effect component recommended to a user according to at least one piece of multimedia information includes:
according to at least one piece of multimedia information, obtaining special effect components used in each piece of multimedia information to obtain a plurality of special effect components;
determining at least one characteristic of the plurality of special effect components according to the plurality of special effect components;
and selecting at least one target special effect component matched with the at least one characteristic from a special effect component library according to the at least one characteristic, wherein the special effect component library is used for storing the special effect components of the target application.
In another possible implementation manner, selecting at least one target special effect component matching at least one feature from a special effect component library according to the at least one feature includes:
and selecting at least one target special effect component which is matched with the at least one characteristic and the heat value of which exceeds a preset threshold value from the special effect component library according to the at least one characteristic.
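The feature-matching selection with a heat-value threshold can be sketched as follows; the data shapes (dicts with `features` and `heat` fields) and names are illustrative assumptions, not the disclosure's actual data model:

```python
# Illustrative sketch: select components from a special effect component
# library that match at least one of the given features and whose heat
# value exceeds a preset threshold. Field names are assumptions.
def select_by_features(library, features, heat_threshold):
    """library: list of {"name": str, "features": set, "heat": float}."""
    wanted = set(features)
    return [
        comp["name"]
        for comp in library
        if comp["features"] & wanted and comp["heat"] > heat_threshold
    ]
```

For example, with a small library, components sharing any feature with the query and hot enough would be returned in library order.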
In another possible implementation, loading at least one target special effect component into a special effect template to obtain a special effect panel, including:
sequencing at least one target special effect component according to the recommendation degree of each target special effect component;
and loading the at least one ordered target special effect component into a special effect template to obtain a special effect panel.
In another possible implementation manner, after the special effect panel is displayed in the shooting interface, the method further includes:
acquiring first image data in a view frame in a shooting interface, and selecting a target special effect component from at least one target special effect component;
according to the selected target special effect component, carrying out special effect processing on the first image data to obtain second image data;
the second image data is displayed in the viewfinder frame.
In another possible implementation, selecting a target special effects component from at least one target special effects component includes:
selecting, according to the recommendation degree of the at least one target special effect component, the target special effect component with the highest recommendation degree from the at least one target special effect component; or,
according to the first image data, a target special effect component matched with the first image data is selected from at least one target special effect component.
In the embodiment of the disclosure, a first application interface of a target application is displayed on a terminal, wherein the first application interface comprises a jump interface of a shooting interface; in response to receiving a trigger operation on the jump interface, a shooting interface of the target application is displayed; current shooting information is acquired; and if the shooting information meets the pop-up condition for popping up the special effect panel, the special effect panel is displayed in the shooting interface, the special effect panel comprising at least one target special effect component. Because the special effect panel is automatically displayed in the shooting interface when the shooting information meets the pop-up condition, the user is saved the operation of manually opening the special effect panel every time the shooting interface is entered, and the operation efficiency of the terminal is improved.
Fig. 3 is a flow chart illustrating a method of presenting information according to an example embodiment. Referring to fig. 3, the embodiment includes:
in step 301, the terminal displays a first application interface of the target application, where the first application interface includes a jump interface of the shooting interface.
The first application interface may be the main interface of the target application, or any interface in the target application that contains a jump interface to the shooting interface. Referring to fig. 4, the jump interface is displayed in the upper right corner of the first application interface. Of course, the jump interface may also be displayed at other positions of the first application interface, for example, in the lower right corner or in the middle of the bottom of the first application interface; the present disclosure does not limit the display position of the jump interface.
The jump interface may be displayed as a name or an icon. For example, when the jump interface is displayed as a name, the name may be "beauty photo", "camera shooting", and the like. Referring to fig. 4, when the jump interface is displayed as an icon, the icon may be a graphic of a camera, or another graphic, for example, a plus-sign graphic, a lens graphic, and the like.
In step 302, the terminal displays a shooting interface of the target application in response to receiving the trigger operation of the jump interface.
In response to receiving the trigger operation on the jump interface, the terminal also turns on a camera, which may be a front-facing camera or a rear-facing camera. For example, the terminal may turn on the front-facing camera by default, or may turn on the rear-facing camera by default. The terminal may also choose which camera to turn on according to the camera start-up record of the target application: for example, if the start-up record shows that the camera was last in the front-facing state when the target application was last used, the terminal turns on the front-facing camera when it receives the trigger operation on the jump interface this time.
In step 303, the terminal acquires current shooting information, and if the shooting information meets the pop-up condition for popping up the special effect panel, the terminal acquires the user identifier of the currently logged-in user in the target application.
The shooting information may include camera information of the terminal and image information in the view frame of the shooting interface. The camera information may include a camera identifier, which is used to determine whether the camera currently turned on by the terminal is the front-facing camera or the rear-facing camera.
The pop-up condition is a condition preset by the terminal and used for determining whether the special effect panel pops up, for example, the pop-up condition may be that a camera opened by the terminal is a target camera, an image in a view finder of a shooting interface is a specific image, and the like. The target camera can be a front camera or a rear camera, and the specific image can be a human face and the like.
In one possible implementation, the shooting information includes the identifier of the currently turned-on camera. If the terminal determines, according to the camera identifier, that the currently turned-on camera is the target camera, the terminal determines that the shooting information meets the pop-up condition for popping up the special effect panel. In the embodiment of the disclosure, the target camera is taken as the front-facing camera for illustration. Since the probability that a user uses a special effect component is higher when the turned-on camera is the front-facing camera, the terminal automatically pops up the special effect panel only after determining that the currently turned-on camera is the front-facing camera. This reduces the probability of automatically popping up the special effect panel for users who do not need special effect components, facilitates use by the user, and improves user stickiness.
In another possible implementation, the shooting information includes the first image data in the view frame of the shooting interface. If the first image data includes face information, the terminal determines that the shooting information meets the pop-up condition for popping up the special effect panel. Because most special effect components in the special effect panel are designed around facial expressions, a user is most likely to need a special effect component when the first image data includes face information. The terminal therefore automatically pops up the special effect panel only after determining that the first image data includes face information, which reduces the probability of automatically popping up the special effect panel for users who do not need special effect components, facilitates use by the user, and improves user stickiness.
In another possible implementation manner, the shooting information includes both the identifier of the currently turned-on camera and the first image data in the view frame of the shooting interface. If the terminal determines, according to the camera identifier, that the currently turned-on camera is the target camera and the first image data includes face information, the terminal determines that the shooting information meets the pop-up condition for popping up the special effect panel. Because the probability that a user uses a special effect component is higher when the turned-on camera is the front-facing camera, and most special effect components are designed around facial expressions, the terminal automatically pops up the special effect panel only when both conditions hold. This further reduces the probability of automatically popping up the special effect panel for users who do not need special effect components, facilitates use by the user, and improves user stickiness.
When the shooting information acquired by the terminal includes the first image data in the view frame of the shooting interface, the terminal extracts at least one feature of the first image data and determines, according to the at least one feature, whether the first image data includes face information.
In a possible implementation manner, before acquiring the current shooting information, the terminal acquires the user's historical operation information on the special effect panel. If the terminal determines, according to the historical operation information, that a closing operation on the special effect panel by the user was detected within the preset time after the special effect panel was last displayed in the shooting interface, the terminal does not execute the step of acquiring the current shooting information when the shooting interface of the target application is displayed this time.
In a possible implementation manner, the terminal monitors for a closing operation on the special effect panel by the user within the preset time after the special effect panel is displayed in the shooting interface. If such a closing operation is detected, the step of acquiring the current shooting information is not executed the next time the shooting interface of the target application is displayed.
When the terminal detects a closing operation on the special effect panel by the user within the preset time after the special effect panel is displayed in the shooting interface, the closing operation may be recorded in the historical operation information. The preset time may be set according to actual conditions, for example, 3 seconds or 5 seconds.
In the embodiment of the disclosure, if the user closed the special effect panel within the preset time after it was displayed the last time the terminal showed the shooting interface, this indicates that the user does not like to use special effect components. The terminal therefore acquires the user's historical operation information on the special effect panel before acquiring the current shooting information. If the historical operation information shows that no closing operation on the special effect panel was detected within the preset time after the special effect panel was last displayed, the terminal executes the step of acquiring the current shooting information and then determines, based on the shooting information, whether to pop up the special effect panel. This reduces the probability of automatically popping up the special effect panel for users who do not need special effect components, facilitates use by the user, and improves user stickiness.
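The history-based suppression above can be sketched as follows; the record fields and constant value are illustrative assumptions (the disclosure only says the preset time may be, e.g., 3 or 5 seconds):

```python
# Hypothetical sketch: skip the shooting-information step if the user
# closed the special effect panel within the preset time last session.
# Record field names are illustrative, not from the disclosure.
PRESET_SECONDS = 3  # e.g. 3 or 5 seconds, per the text


def should_acquire_shooting_info(history):
    """history: past records like
    {"panel_shown_at": float, "closed_at": float or None}."""
    if not history:
        return True  # no record: keep the auto pop-up behavior
    last = history[-1]
    if last.get("closed_at") is None:
        return True  # panel was not closed last time
    # Closed quickly -> the user likely dislikes special effect components.
    return (last["closed_at"] - last["panel_shown_at"]) > PRESET_SECONDS
```

A quick close (within the window) suppresses the acquisition step; a late close or no close leaves the behavior unchanged.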
It should be noted that the terminal may acquire the current shooting information at either of the following two moments:
first, when the terminal displays a shooting interface, the terminal acquires current shooting information.
The application scenario corresponding to the step is that the user needs to enter the shooting interface, and correspondingly, when the terminal receives the triggering operation of the user on the skip interface and displays the shooting interface, the terminal acquires the current shooting information.
Secondly, when the terminal receives the switching operation of the camera of the terminal, the terminal acquires the current shooting information.
The application scenario corresponding to this step is that after the terminal displays the shooting interface, the user needs to switch the camera currently opened by the terminal, and correspondingly, when the terminal receives the switching operation of the camera of the terminal, the terminal acquires the current shooting information.
In step 304, the terminal obtains at least one piece of multimedia information of the user, which acts in the target application and uses the special effect component, according to the user identifier.
The multimedia information may include short videos and pictures, the occurrence behavior may include watching, publishing and making, and correspondingly, the at least one piece of multimedia information in which the user acts in the target application and uses the special effect component may include: the short videos and pictures shot by using the special effect component are watched by the user in the target application, the short videos and pictures shot by using the special effect component are published by the user in the target application, and the short videos and pictures shot by using the special effect component are manufactured by the user in the target application.
In a possible implementation manner, the server stores the user information and at least one piece of multimedia information on which the user has acted in the target application and that uses a special effect component. Accordingly, the step of the terminal acquiring this multimedia information according to the user identifier includes: the terminal sends a multimedia information acquisition request carrying the user identifier to the server; the server receives the acquisition request, searches for the at least one piece of multimedia information according to the user identifier, and sends it to the terminal. The server is the background server corresponding to the target application.
In a possible implementation manner, the terminal acquires at least one piece of multimedia information which is watched by the user in the target application and uses the special effect component according to the user identification; and/or the terminal acquires at least one piece of multimedia information which is issued by the user in the target application and uses the special effect component according to the user identification; and/or the terminal acquires at least one piece of multimedia information which is made and used by the user in the target application according to the user identification.
In the embodiment of the disclosure, the special effect components used in the multimedia information that a user has watched, made, and published are mostly the user's favorite special effect components. The terminal therefore acquires, according to the user identifier, at least one piece of multimedia information on which the user has acted in the target application and that uses a special effect component, and then acquires at least one target special effect component recommended to the user based on that multimedia information. The at least one target special effect component is thus most likely one the user wants to use, which facilitates use by the user and improves user stickiness.
In step 305, the terminal obtains at least one target special effect component recommended for the user according to the at least one multimedia message.
The terminal can implement this step in either of two ways. In the first way, the terminal obtains the special effect components used in the multimedia information, determines the recommendation degree of each special effect component according to the user's occurrence behavior on each piece of multimedia information, and recommends target special effect components to the user according to the recommendation degrees. In the second way, the terminal obtains the special effect components used in the multimedia information, and selects, from a special effect component library, target special effect components that match features of the obtained special effect components.
For the first implementation manner, the step of acquiring, by the terminal, at least one target special effect component recommended for the user according to the at least one piece of multimedia information may be implemented by the following steps (1) to (3).
(1) The terminal obtains the special effect components used in each multimedia message according to at least one multimedia message to obtain a plurality of special effect components.
In a possible implementation manner, the generation information of the multimedia information records special effect component information used in the multimedia information, and accordingly, the step may include: for any multimedia information, the terminal acquires the generation information of the multimedia information, acquires the special effect component information used in the multimedia information according to the generation information of the multimedia information, and acquires the special effect component used in the multimedia information according to the special effect component information.
The terminal obtains 10 special effect components according to step (1), namely "rose petals", "pig glasses", "leopard glasses", "rabbit ears", "cat ears", "split screen", "love heart", "balloon", "bubble", and "snowflake".
(2) And the terminal determines the recommendation degrees of the plurality of special effect components according to the occurrence behavior of the user on each multimedia message.
The step includes the following first and second steps:
first, for any special effect component, the terminal respectively obtains a first number of multimedia information which uses the special effect component and whose occurrence behavior is watching, a second number of multimedia information which uses the special effect component and whose occurrence behavior is publishing, and a third number of multimedia information which uses the special effect component and whose occurrence behavior is making.
For example, according to the first step in step (2), the terminal obtains that the first number of "rose petals" is 10, the second number is 1, and the third number is 3; the first number of "pig glasses" is 15, the second number is 2, and the third number is 2; the first number of "leopard glasses" is 5, the second number is 1, and the third number is 3; …; the first number of "snowflakes" is 20, the second number is 3, and the third number is 5.
Secondly, the terminal obtains the recommendation degree of the special effect component by weighting and summing the first quantity, the second quantity and the third quantity according to the first weight corresponding to watching, the second weight corresponding to releasing and the third weight corresponding to making.
The calculation of the recommendation degree can be expressed by the following formula (1):
recommendation degree = first number × first weight + second number × second weight + third number × third weight (1)
The weights corresponding to the different occurrence behaviors can be set as needed; for example, the weight corresponding to publishing is set to 0.5, the weight corresponding to making is set to 0.4, and the weight corresponding to watching is set to 0.1.
In the embodiment of the present disclosure, taking the first weight as 0.1, the second weight as 0.5, and the third weight as 0.4 as an example, the recommendation degrees of "rose petals", "pig glasses", "leopard glasses", "rabbit ears", "cat ears", "split screen", "love heart", "balloon", "bubble", and "snowflake" obtained according to formula (1) are 2.7, 3.3, 2.2, 1.2, 2.6, 4, 1.8, 5, 3, and 5.5, respectively.
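The example recommendation degrees can be reproduced directly from formula (1). The sketch below uses the counts and weights given in the text for three of the components; the function and variable names are illustrative:

```python
# Recommendation degree per formula (1): a weighted sum of the watching,
# publishing, and making counts. Counts and weights are the example
# values from the text; names are illustrative.
WEIGHTS = (0.1, 0.5, 0.4)  # (watching, publishing, making)

# (first number, second number, third number) for three components
COUNTS = {
    "rose petals": (10, 1, 3),
    "pig glasses": (15, 2, 2),
    "snowflake": (20, 3, 5),
}


def recommendation_degree(counts, weights=WEIGHTS):
    return sum(c * w for c, w in zip(counts, weights))


degrees = {name: round(recommendation_degree(c), 2) for name, c in COUNTS.items()}
# degrees matches the text: rose petals 2.7, pig glasses 3.3, snowflake 5.5
```

This reproduces, for example, 10 × 0.1 + 1 × 0.5 + 3 × 0.4 = 2.7 for "rose petals".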
(3) And the terminal selects at least one target special effect component from the plurality of special effect components according to the recommendation degrees of the plurality of special effect components.
This step can be implemented in two ways, namely a first way and a second way:
First, the terminal selects, according to the recommendation degrees of the plurality of special effect components, at least one target special effect component whose recommendation degree exceeds a threshold value.
For example, in the embodiment of the present disclosure, if the threshold is set to 3, the target special effect components selected from the plurality of special effect components are "pig glasses", "split screen", "balloon", and "snowflake", respectively.
Second, the terminal selects, according to the recommendation degrees of the plurality of special effect components, a preset number of target special effect components with the highest recommendation degrees.
For example, in the embodiment of the present disclosure, if the preset number is set to 5, the selected target special effect components are "pig glasses", "split screen", "balloon", "snowflake", and "bubble", respectively.
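Both selection strategies can be sketched against the example recommendation degrees from the text (function names and the dict layout are illustrative):

```python
# Sketch of the two selection strategies in step (3): by threshold and
# by top-N. The recommendation degrees are the example values from the
# text; function names are illustrative.
DEGREES = {
    "rose petals": 2.7, "pig glasses": 3.3, "leopard glasses": 2.2,
    "rabbit ears": 1.2, "cat ears": 2.6, "split screen": 4.0,
    "love heart": 1.8, "balloon": 5.0, "bubble": 3.0, "snowflake": 5.5,
}


def select_over_threshold(degrees, threshold):
    """First way: components whose recommendation degree exceeds threshold."""
    return {name for name, d in degrees.items() if d > threshold}


def select_top_n(degrees, n):
    """Second way: the n components with the highest recommendation degrees."""
    ranked = sorted(degrees.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:n]]
```

With threshold 3 this yields the four components named in the text, and with n = 5 it adds "bubble", matching the two examples above.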
In the embodiment of the disclosure, users like the special effect components in the multimedia information they view, make, and publish to different degrees; for example, a user may prefer the special effect components used in the multimedia information he or she has published. The terminal therefore determines the recommendation degrees of the plurality of special effect components according to the occurrence behavior of the user on each piece of multimedia information, and selects at least one target special effect component recommended for the user according to those recommendation degrees. This further improves the probability that a recommended target special effect component is one the user needs, thereby facilitating use and improving user stickiness.
In one possible implementation manner, the terminal determines a heat value of each of the plurality of special effect components, and selects at least one target special effect component from the plurality of special effect components according to the heat value and the recommendation degree of each of the plurality of special effect components.
The step of the terminal determining the heat value of each of the plurality of special effect components may include: for any special effect component, the terminal obtains the number of users who used the special effect component in the target application within a predetermined time, the number of pieces of multimedia information using the special effect component, and the number of clicked pieces of multimedia information using the special effect component, and calculates the heat value of the special effect component from these three quantities.
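The patent names the three inputs to the heat value but does not give the aggregation formula; a simple weighted sum is one plausible sketch, with entirely hypothetical weights:

```python
def heat_value(num_users, num_media, num_clicked,
               w_users=0.4, w_media=0.3, w_clicked=0.3):
    """Heat value of a special effect component within the predetermined time.

    The weighted-sum form and the weight values are assumptions; the patent
    only states that the heat value is calculated from these three counts.
    """
    return w_users * num_users + w_media * num_media + w_clicked * num_clicked

# Hypothetical counts: 10 users, 5 pieces of multimedia, 2 of them clicked.
print(heat_value(10, 5, 2))  # 6.1
```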
Based on the heat value and the recommendation degree of each special effect component, the step of selecting at least one target special effect component from the plurality of special effect components may be implemented in either of the following two ways:
First, the terminal sets weights for the heat value and the recommendation degree of the special effect components, calculates a weighted average for each special effect component according to these weights and its heat value and recommendation degree, and selects at least one target special effect component from the plurality of special effect components according to the magnitude of the weighted average.
Second, the terminal selects some of the plurality of special effect components as target special effect components according to the recommendation degree, and then selects, from the remaining special effect components, further target special effect components according to the heat value.
For example, the terminal selects, according to the recommendation degree, the special effect components whose recommendation degree is larger than a threshold value as target special effect components, and then selects, from the remaining special effect components, those whose heat value is larger than a preset threshold value as target special effect components.
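The first way (ranking by a weighted average of heat value and recommendation degree) can be sketched as follows; the weights and the per-component numbers are hypothetical:

```python
# Hypothetical weights; the patent says weights are set for the heat value and
# the recommendation degree but does not specify their values.
def combined_score(heat, recommendation, w_heat=0.4, w_rec=0.6):
    """Weighted average of a component's heat value and recommendation degree."""
    return w_heat * heat + w_rec * recommendation

# {name: (heat_value, recommendation_degree)} -- illustrative numbers.
components = {"pig glasses": (8.0, 3.3), "balloon": (2.0, 5.0), "rabbit ear": (1.0, 1.2)}

# Rank by the weighted average and keep the top n as target components.
def select_by_weighted_average(components, n=2):
    ranked = sorted(components, key=lambda c: combined_score(*components[c]),
                    reverse=True)
    return ranked[:n]

print(select_by_weighted_average(components))  # ['pig glasses', 'balloon']
```

Here "pig glasses" ranks first despite a lower recommendation degree because its high heat value dominates under these weights, illustrating how the weighting trades off personal preference against general popularity.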
In a possible implementation manner, the terminal may also directly obtain, from the special effect component library, a predetermined number of special effect components with the highest heat values as the target special effect components.
In the embodiment of the disclosure, because the heat value of a special effect component in the target application reflects how much users in general like the special effect component, the terminal determines the heat value of each of the plurality of special effect components and selects at least one target special effect component from the plurality of special effect components according to the heat value and the recommendation degree of each. This further improves the probability that a recommended target special effect component is one the user needs, thereby facilitating use and improving user stickiness.
For the second implementation manner, in which the terminal obtains at least one target special effect component recommended for the user according to at least one piece of multimedia information, the method can be implemented through the following steps (A) to (C).
(A) The terminal obtains the special effect components used in each piece of multimedia information according to the at least one piece of multimedia information to obtain a plurality of special effect components.
This step is the same as the step (1) in step 305, and is not described herein again.
(B) The terminal determines at least one characteristic of the plurality of special effect components according to the plurality of special effect components.
In one possible implementation, this includes: the terminal inputs the plurality of special effect components into a recognition model, and obtains the feature tag of at least one feature of the plurality of special effect components output by the recognition model.
(C) The terminal selects at least one target special effect component matched with the at least one feature from a special effect component library according to the at least one feature, wherein the special effect component library is used for storing the special effect components of the target application.
In one possible implementation, this includes: the terminal obtains the feature tag of at least one feature of each special effect component in the special effect component library, calculates a matching value for each special effect component in the library according to that feature tag and the feature tag of the at least one feature of the plurality of special effect components obtained in step (B), and selects at least one target special effect component from the library according to the matching values.
The step of the terminal calculating the matching value of each special effect component in the special effect component library may include: for any special effect component in the library, the terminal compares the feature tags corresponding to that special effect component with the feature tags corresponding to the plurality of special effect components obtained in step (B) one by one, and takes the number of the special effect component's feature tags that also appear among the feature tags of the plurality of special effect components as the matching value of that special effect component.
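The matching-value computation described above amounts to counting shared feature tags; a minimal sketch with hypothetical tags:

```python
def matching_value(library_tags, user_tags):
    """Number of a library component's feature tags that also appear among the
    feature tags of the components the user has interacted with."""
    return len(set(library_tags) & set(user_tags))

# Hypothetical tags derived from the user's components in step (B).
user_tags = {"cute", "ear", "funny"}

# A library component tagged "cute", "ear", "glasses" shares two tags.
print(matching_value(["cute", "ear", "glasses"], user_tags))  # 2
```

The terminal would then rank library components by this value and take the highest-scoring ones as the target special effect components.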
In the embodiment of the disclosure, because the features of the special effect components used in the multimedia information on which the user's behavior occurred are likely to be features the user favors, for example "beauty" and "ear", or "funny", the terminal determines at least one feature of the plurality of special effect components and selects at least one matching target special effect component from the special effect component library according to that feature. The recommended target special effect component is therefore very likely to be a special effect component the user needs, thereby facilitating use and improving user stickiness.
In a possible implementation manner, the terminal selects at least one target special effect component which is matched with at least one feature and has a heat value exceeding a preset threshold from the special effect component library according to the at least one feature.
This step may include: the terminal determines the heat value of each special effect component in the special effect component library, selects at least one special effect component with the heat value exceeding a preset threshold value as a target special effect component to be selected according to the heat value of each special effect component, and then selects at least one target special effect component matched with at least one characteristic from the target special effect components to be selected.
The method for determining the heat value of each special effect component in the special effect component library by the terminal is the same as the method for determining the heat value of each special effect component in the plurality of special effect components by the terminal, and the method for selecting at least one target special effect component matched with at least one characteristic from the target special effect components to be selected by the terminal is the same as the step (C), and the detailed description is omitted here.
It should be noted that the executing subject of step 305 may also be a server, and after obtaining at least one target special effect component recommended for the user, the server sends the at least one target special effect component to the terminal.
Another point to be noted is that steps 303 to 305 may be replaced by the following: the terminal acquires current camera information, and if the camera information meets the pop-up condition for popping up the special effect panel, the terminal selects at least one target special effect component matched with first image data from the special effect component library according to the first image data in the viewfinder frame of the shooting interface.
The method comprises the following steps that the terminal selects at least one target special effect component matched with first image data from a special effect component library according to the first image data in a viewing frame of a shooting interface, and comprises the following steps:
the terminal acquires at least one feature of the first image data according to the first image data, acquires object information of the first image data according to the at least one feature, acquires attribute information of each special effect component in the special effect component library, and selects at least one target special effect component matched with the first image data from the special effect component library according to the object information and the attribute information.
The step of the terminal selecting, according to the object information and the attribute information, at least one target special effect component matching the first image data from the special effect component library may be: the terminal selects, as the at least one target special effect component matched with the first image data, at least one special effect component in the library whose attribute information has a high similarity to the object information.
Wherein the object information may include: tags of the objects, the number of objects, the gender of the objects, environment information of the objects, and the like. The attribute information of the special effect component may include: an applicable object tag, an applicable number of objects, an applicable object gender, an applicable environment, and the like.
In the embodiment of the present disclosure, for example, the first image data includes two faces, one male and one female; the object information of the first image data acquired by the terminal may then be "face", "face-2", "male and female", "indoor", and the like.
Taking 3 special effect components in the special effect component library as an example, suppose the attribute information of the first special effect component is "hand", "hand-2", "indoor", the attribute information of the second is "face", "face-2", "male and female", and the attribute information of the third is "face", "face-1", "female". The terminal then selects, as the target special effect components matched with the first image data, the special effect components whose attribute information has a higher similarity to the object information: the second special effect component and the third special effect component.
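This example can be sketched as follows. The patent does not define the similarity measure; the sketch counts overlapping tags and weights the primary object label ("face", "hand") more heavily, which is one reading consistent with the second and third components being selected:

```python
def similarity(obj_info, attr_info, primary_weight=2):
    """Tag-overlap similarity; the weighting scheme here is an assumption."""
    score = 0
    for tag in attr_info:
        if tag in obj_info:
            # The primary object label matters more than secondary tags.
            score += primary_weight if tag in ("face", "hand") else 1
    return score

obj = {"face", "face-2", "male and female", "indoor"}
comps = {"first":  {"hand", "hand-2", "indoor"},
         "second": {"face", "face-2", "male and female"},
         "third":  {"face", "face-1", "female"}}

selected = [name for name, attrs in comps.items() if similarity(obj, attrs) >= 2]
print(selected)  # ['second', 'third'] -- matches the embodiment's example
```

With a flat (unweighted) overlap count the first and third components would tie, so some extra weight on the object label, or an equivalent rule, is needed to reproduce the stated selection.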
In the embodiment of the disclosure, the terminal selects at least one target special effect component matched with the first image data from the special effect component library according to the first image data in the viewfinder frame of the shooting interface. Because the first image data is the image data currently in the viewfinder frame, a target special effect component matched with it is very likely to be a special effect component the user currently needs, thereby facilitating use and improving user stickiness.
In step 306, the terminal displays a special effect template in the shooting interface, and loads at least one target special effect component into the special effect template to obtain a special effect panel.
In a possible implementation manner, the terminal sorts at least one target special effect component according to the recommendation degree of each target special effect component, and the terminal loads the sorted at least one target special effect component into a special effect template to obtain a special effect panel.
The step of the terminal sorting the at least one target special effect component according to the recommendation degree of each target special effect component includes: the terminal compares the recommendation degrees of the target special effect components and sorts the at least one target special effect component in order from high to low recommendation degree.
It should be noted that, in this step, the target special effect components may be sorted by the terminal or by the server. When the server performs the sorting, the step in which the terminal sorts the at least one target special effect component according to the recommendation degree of each target special effect component becomes: the server sorts the at least one target special effect component according to the recommendation degree of each target special effect component and sends the sorted at least one target special effect component to the terminal; the terminal receives the sorted at least one target special effect component.
In the embodiment of the disclosure, since a target special effect component with a higher recommendation degree is more likely to be used by the user, the terminal sorts the at least one target special effect component according to the recommendation degree and loads the sorted at least one target special effect component into the special effect template to obtain the special effect panel, thereby facilitating use and improving user stickiness.
In step 307, the terminal displays a special effect panel in the shooting interface, wherein the special effect panel comprises at least one target special effect component.
Referring to fig. 5, the special effects panel is displayed below the shooting interface. Of course, the terminal may also display the special effect panel above, to the left of, or to the right of the shooting interface, and may slide the special effect panel in from any edge of the shooting interface, for example, from the bottom. In the embodiment of the present disclosure, the display position and the display mode of the special effect panel are not limited.
In step 308, the terminal acquires first image data in a view frame in a shooting interface and selects a target special effect component from at least one target special effect component.
The first image data is image data in a view frame in a current shooting interface of the terminal.
In one possible implementation manner, the step of the terminal selecting the target special effect component from the at least one target special effect component includes: the terminal selects the target special effect component with the highest recommendation degree from the at least one target special effect component according to the recommendation degrees of the at least one target special effect component.
In the embodiment of the disclosure, because the target special effect component with the highest recommendation degree is most likely to be a special effect component the user needs, the terminal selects it from the at least one target special effect component according to the recommendation degrees and then performs special effect processing on the image in the viewfinder frame based on it, forming an effect preview for the user. The user is then very likely to shoot directly with this target special effect component, which facilitates use and improves user stickiness.
In another possible implementation manner, the terminal selects a target special effect component matched with the first image data from at least one target special effect component according to the first image data.
The implementation of this step is similar to the step in which the terminal selects at least one target special effect component matching the first image data from the special effect component library according to the first image data in the finder frame of the shooting interface, and details are not repeated here.
In the embodiment of the disclosure, the terminal selects a target special effect component matched with the first image data from at least one target special effect component according to the first image data, and then performs special effect processing on the image in the viewing frame based on the target special effect component, so as to form effect preview for a user.
In step 309, the terminal performs special effect processing on the first image data according to the selected target special effect component to obtain second image data, and displays the second image data in the view finder.
The step of the terminal performing special effect processing on the first image data according to the selected target special effect component to obtain the second image data may include: the terminal stores an image processing model that is provided with various image processing tools and can realize various image processing functions, such as "skin smoothing", "face thinning", filter adding, and real-time AR (Augmented Reality) special effects. The terminal inputs the first image data and the selected target special effect component into the image processing model, obtains the first image data rendered by the target special effect component as output by the image processing model, and takes the rendered first image data as the second image data.
For example, when the target special effect component selected by the terminal is "rabbit" and the first image data includes face information, the step of the terminal performing special effect processing on the first image data according to the selected target special effect component to obtain the second image data may include: the terminal inputs the first image data and the target special effect component "rabbit" into the image processing model to obtain the second image data output by the image processing model, where the second image data is the data obtained after the rabbit is superimposed on the human face.
In the embodiment of the disclosure, the terminal performs special effect processing on the first image data according to the selected target special effect component to obtain the second image data and displays the second image data in the viewfinder frame, so that the user can directly click the shooting button of the shooting interface to shoot. This saves the user the step of selecting a special effect component from the special effect panel and improves the operation efficiency of the terminal.
In the embodiment of the disclosure, a first application interface of a target application is displayed on a terminal, wherein the first application interface comprises a jump interface of a shooting interface; if the trigger operation of the jump interface is received, displaying a shooting interface of the target application; acquiring current camera shooting information; and if the shooting information meets the pop-up condition for popping up the special effect panel, displaying the special effect panel in the shooting interface, wherein the special effect panel comprises at least one target special effect component. When the shooting information meets the popping condition for popping up the special effect panel, the special effect panel is automatically displayed in the shooting interface, so that the operation of manually controlling the popping-up of the special effect panel when a user enters the shooting interface every time is saved, and the operation efficiency of the terminal is improved.
FIG. 6 is a block diagram illustrating an information display apparatus according to an exemplary embodiment. Referring to fig. 6, the apparatus includes:
a first display module 601 configured to display, on a terminal, a first application interface of a target application, the first application interface including a jump interface of a shooting interface; and to display the shooting interface of the target application in response to receiving a trigger operation on the jump interface.
An obtaining module 602 configured to perform obtaining current image capture information.
And a second display module 603 configured to perform displaying the special effect panel in the shooting interface if the shooting information satisfies a pop-up condition for popping up the special effect panel, wherein the special effect panel includes at least one target special effect component.
In one possible implementation, the camera information includes an identifier of the currently turned-on camera; the second display module 603 is further configured to determine, if the currently turned-on camera is determined to be the target camera according to the camera identifier, that the camera information meets the pop-up condition for popping up the special effect panel; and/or,
the camera information includes first image data in the viewfinder frame of the shooting interface; the second display module 603 is further configured to determine, if face information is included in the first image data, that the camera information satisfies the pop-up condition for popping up the special effect panel.
In another possible implementation manner, the obtaining module 602 is further configured to perform obtaining current image capturing information when the shooting interface is displayed; or when receiving the switching operation of the camera of the terminal, acquiring the current image pickup information.
In another possible implementation manner, the obtaining module 602 is further configured to stop obtaining the current image capture information when the shooting interface of the target application is displayed next time if the closing operation of the special effect panel by the user is detected within a preset time period after the special effect panel is displayed in the shooting interface.
In another possible implementation manner, the second display module 603 is further configured to perform obtaining a user identifier of a currently logged-in user in the target application; acquiring at least one piece of multimedia information of a user, which acts in a target application and uses a special effect component, according to the user identification; acquiring at least one target special effect component recommended for a user according to at least one piece of multimedia information; and displaying the special effect template in the shooting interface, and loading at least one target special effect component into the special effect template to obtain a special effect panel.
In another possible implementation manner, the second display module 603 is further configured to obtain, according to the user identifier, at least one piece of multimedia information that the user has viewed in the target application and that uses a special effect component; and/or obtain, according to the user identifier, at least one piece of multimedia information that the user has published in the target application and that uses a special effect component; and/or obtain, according to the user identifier, at least one piece of multimedia information that the user has made in the target application and that uses a special effect component.
In another possible implementation manner, the second display module 603 is further configured to execute obtaining, according to at least one piece of multimedia information, an effect component used in each piece of multimedia information, to obtain a plurality of effect components; determining recommendation degrees of a plurality of special effect components according to the occurrence behavior of each multimedia information of the user; and selecting at least one target special effect component from the plurality of special effect components according to the recommendation degrees of the plurality of special effect components.
In another possible implementation, the occurrence behaviors include viewing, publishing, and making;
the second display module 603 is further configured to obtain, for any one of the special effect components, a first number of pieces of multimedia information that use the special effect component and whose occurrence behavior is viewing, a second number whose occurrence behavior is publishing, and a third number whose occurrence behavior is making; and to carry out a weighted summation of the first, second, and third numbers according to a first weight corresponding to viewing, a second weight corresponding to publishing, and a third weight corresponding to making, to obtain the recommendation degree of the special effect component.
In another possible implementation, the second display module 603 is further configured to perform determining a heat value of each of the plurality of special effect components; at least one target special effect component is selected from the plurality of special effect components according to the heat value and the recommendation degree of each special effect component.
In another possible implementation manner, the second display module 603 is further configured to execute obtaining, according to at least one piece of multimedia information, an effect component used in each piece of multimedia information, to obtain a plurality of effect components; determining at least one characteristic of the plurality of special effect components according to the plurality of special effect components; and selecting at least one target special effect component matched with the at least one characteristic from a special effect component library according to the at least one characteristic, wherein the special effect component library is used for storing the special effect components of the target application.
In another possible implementation manner, the second display module 603 is further configured to execute selecting, according to the at least one feature, at least one target special effect component from the special effect component library, which matches the at least one feature and has a heat value exceeding a preset threshold.
In another possible implementation manner, the second display module 603 is further configured to perform sorting on at least one target special effect component according to the recommendation degree of each target special effect component; and loading the at least one ordered target special effect component into a special effect template to obtain a special effect panel.
In another possible implementation manner, the second display module 603 is further configured to perform acquiring first image data in a viewing frame in a shooting interface, and selecting a target special effect component from at least one target special effect component; according to the selected target special effect component, carrying out special effect processing on the first image data to obtain second image data; the second image data is displayed in the viewfinder frame.
In another possible implementation manner, the second display module 603 is further configured to execute selecting a target special effect component with the highest recommendation degree from the at least one target special effect component according to the recommendation degree of the at least one target special effect component; or, according to the first image data, selecting a target special effect component matched with the first image data from at least one target special effect component.
In the embodiment of the disclosure, a first application interface of a target application is displayed on a terminal, wherein the first application interface comprises a jump interface of a shooting interface; responding to the received trigger operation of the jump interface, and displaying a shooting interface of the target application; acquiring current camera shooting information; and if the shooting information meets the pop-up condition for popping up the special effect panel, displaying the special effect panel in the shooting interface, wherein the special effect panel comprises at least one target special effect component. When the shooting information meets the popping condition for popping up the special effect panel, the special effect panel is automatically displayed in the shooting interface, so that the operation of manually controlling the popping-up of the special effect panel when a user enters the shooting interface every time is saved, and the operation efficiency of the terminal is improved.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 7 shows a block diagram of a terminal 700 according to an exemplary embodiment of the present disclosure. The terminal 700 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so on.
In general, terminal 700 includes: a processor 701 and a memory 702.
The processor 701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to implement the information presentation method provided by the method embodiments of the present disclosure.
In some embodiments, the terminal 700 may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, touch screen display 705, camera assembly 706, audio circuitry 707, positioning assembly 708, and power source 709.
The peripheral interface 703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 701 and the memory 702. In some embodiments, processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 704 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 704 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuits, which are not limited by this disclosure.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, the display screen 705 also has the ability to capture touch signals on or over the surface of the display screen 705. The touch signal may be input to the processor 701 as a control signal for processing. At this point, the display 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 705, disposed on the front panel of the terminal 700; in other embodiments, there may be at least two displays 705, respectively disposed on different surfaces of the terminal 700 or in a folded design; in still other embodiments, the display 705 may be a flexible display disposed on a curved surface or a folded surface of the terminal 700. The display 705 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly shaped screen. The display 705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 706 is used to capture images or video. Optionally, camera assembly 706 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing or to the radio frequency circuit 704 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 700. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert an electric signal not only into a sound wave audible to humans, but also into a sound wave inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuitry 707 may also include a headphone jack.
The positioning component 708 is used to locate the current geographic position of the terminal 700 to implement navigation or LBS (Location Based Services). The positioning component 708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 709 is provided to supply power to various components of terminal 700. The power source 709 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When power source 709 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 700 also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the terminal 700. For example, the acceleration sensor 711 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 701 may control the touch screen 705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 711. The acceleration sensor 711 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 712 may detect a body direction and a rotation angle of the terminal 700, and the gyro sensor 712 may cooperate with the acceleration sensor 711 to acquire a 3D motion of the terminal 700 by the user. From the data collected by the gyro sensor 712, the processor 701 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 713 may be disposed on a side bezel of terminal 700 and/or an underlying layer of touch display 705. When the pressure sensor 713 is disposed on a side frame of the terminal 700, a user's grip signal on the terminal 700 may be detected, and the processor 701 performs right-left hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at a lower layer of the touch display 705, the processor 701 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 705. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 714 is used for collecting a fingerprint of a user, and the processor 701 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the identity of the user according to the collected fingerprint. When the user identity is identified as a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 714 may be disposed on the front, back, or side of the terminal 700. When a physical button or a vendor Logo is provided on the terminal 700, the fingerprint sensor 714 may be integrated with the physical button or the vendor Logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the touch display 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 705 is increased; when the ambient light intensity is low, the display brightness of the touch display 705 is turned down. In another embodiment, processor 701 may also dynamically adjust the shooting parameters of camera assembly 706 based on the ambient light intensity collected by optical sensor 715.
A proximity sensor 716, also referred to as a distance sensor, is typically disposed on a front panel of the terminal 700. The proximity sensor 716 is used to collect the distance between the user and the front surface of the terminal 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal 700 gradually decreases, the processor 701 controls the touch display 705 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal 700 gradually increases, the processor 701 controls the touch display 705 to switch from the dark-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 7 is not intended to be limiting of terminal 700 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Fig. 8 is a schematic structural diagram of a server according to an embodiment of the present disclosure. The server 800 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 801 and one or more memories 802, where the memory 802 stores at least one instruction, and the at least one instruction is loaded and executed by the processor 801 to implement the information presentation method according to the various method embodiments described above. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and the server may also include other components for implementing the functions of the device, which are not described herein again.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, including instructions executable by a processor in a terminal to perform the information presentation method in the above-described embodiments is also provided. For example, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, wherein instructions of the computer program product, when executed by a processor of a computer device, enable the computer device to perform the steps performed by the computer device in the above information presentation method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (28)

1. An information presentation method, the method comprising:
displaying a first application interface of a target application on a terminal, wherein the first application interface comprises a skip interface of a shooting interface;
responding to the received trigger operation of the jump interface, and displaying a shooting interface of the target application;
acquiring current camera shooting information;
if the shooting information meets the popping condition for popping up a special effect panel, displaying the special effect panel in the shooting interface, wherein the special effect panel comprises at least one target special effect component;
the shooting information comprises a currently opened camera identification and first image data in a view frame of the shooting interface, wherein if the currently opened camera is determined to be a target camera according to the camera identification and the first image data comprises face information, the shooting information is determined to meet the pop-up condition for popping up the special effect panel.
2. The information presentation method according to claim 1, wherein the acquiring current camera information comprises:
when the shooting interface is displayed, acquiring the current shooting information; or,
and when receiving the switching operation of the camera of the terminal, acquiring the current camera shooting information.
3. The information presentation method of claim 1, further comprising:
and if the closing operation of the special effect panel by the user is detected within the preset time after the special effect panel is displayed in the shooting interface, stopping executing the step of acquiring the current shooting information when the shooting interface of the target application is displayed next time.
4. The information presentation method of claim 1, wherein the displaying the special effects panel in the capture interface comprises:
acquiring a user identifier of a current login user in the target application;
acquiring at least one piece of multimedia information of the user, which acts in the target application and uses the special effect component, according to the user identification;
acquiring the at least one target special effect component recommended for the user according to the at least one piece of multimedia information;
displaying a special effect template in the shooting interface, and loading the at least one target special effect component into the special effect template to obtain the special effect panel.
5. The information presentation method according to claim 4, wherein the obtaining, according to the user identifier, at least one piece of multimedia information of the user that acts in the target application and uses a special effect component comprises:
acquiring, according to the user identification, at least one piece of multimedia information which is watched by the user in the target application and uses the special effect component; and/or,
acquiring, according to the user identification, at least one piece of multimedia information which is published by the user in the target application and uses the special effect component; and/or,
acquiring, according to the user identification, at least one piece of multimedia information which is produced by the user in the target application and uses the special effect component.
6. The information presentation method according to claim 4, wherein the obtaining of the at least one target special effect component recommended for the user according to the at least one multimedia information comprises:
according to the at least one piece of multimedia information, obtaining special effect components used in each piece of multimedia information to obtain a plurality of special effect components;
determining the recommendation degrees of the plurality of special effect components according to the occurrence behaviors of the user on each piece of multimedia information;
selecting the at least one target special effect component from the plurality of special effect components according to the recommendation degrees of the plurality of special effect components.
7. The information presentation method of claim 6, wherein said occurrence comprises viewing, publishing, and production; the determining the recommendation degrees of the plurality of special effect components according to the occurrence behavior of the user on each piece of multimedia information includes:
for any special effect component, respectively acquiring a first number of multimedia information which uses the special effect component and takes place behaviors as watching, a second number of multimedia information which uses the special effect component and takes place behaviors as publishing, and a third number of multimedia information which uses the special effect component and takes place behaviors as making;
and performing weighted summation on the first quantity, the second quantity, and the third quantity according to a first weight corresponding to viewing, a second weight corresponding to publishing, and a third weight corresponding to production, to obtain the recommendation degree of the special effect component.
8. The information presentation method of claim 6, wherein said selecting the at least one target special effects component from the plurality of special effects components according to the recommendation degrees of the plurality of special effects components comprises:
determining a heat value for each of the plurality of special effect components;
selecting the at least one target special effect component from the plurality of special effect components according to the heat value and the recommendation degree of each special effect component.
9. The information presentation method according to claim 4, wherein the obtaining of the at least one target special effect component recommended for the user according to the at least one multimedia information comprises:
according to the at least one piece of multimedia information, obtaining special effect components used in each piece of multimedia information to obtain a plurality of special effect components;
determining at least one feature of the plurality of special effects components from the plurality of special effects components;
according to the at least one feature, selecting the at least one target special effect component matched with the at least one feature from a special effect component library, wherein the special effect component library is used for storing special effect components of the target application.
10. The method of claim 9, wherein selecting the at least one target special effects component from a library of special effects components that matches the at least one feature based on the at least one feature comprises:
and according to the at least one characteristic, selecting at least one target special effect component which is matched with the at least one characteristic and the heat value of which exceeds a preset threshold value from a special effect component library.
11. The information presentation method of claim 4, wherein said loading the at least one target special effects component into the special effects template, resulting in the special effects panel, comprises:
sequencing the at least one target special effect component according to the recommendation degree of each target special effect component;
and loading the at least one ordered target special effect component into the special effect template to obtain the special effect panel.
12. The information presentation method of claim 1, wherein after displaying the special effects panel in the capture interface, the method further comprises:
acquiring first image data in a viewing frame in the shooting interface, and selecting a target special effect component from the at least one target special effect component;
according to the selected target special effect component, carrying out special effect processing on the first image data to obtain second image data;
displaying the second image data in the viewfinder frame.
13. The information presentation method of claim 12, wherein said selecting a target special effects component from the at least one target special effects component comprises:
selecting a target special effect component with the highest recommendation degree from the at least one target special effect component according to the recommendation degree of the at least one target special effect component; or,
according to the first image data, selecting a target special effect component matched with the first image data from the at least one target special effect component.
14. An information presentation device, comprising:
a first display module configured to execute a first application interface displaying a target application on a terminal, the first application interface including a jump interface of a shooting interface; responding to the received trigger operation of the jump interface, and displaying a shooting interface of the target application;
an acquisition module configured to perform acquisition of current imaging information;
the second display module is configured to display the special effect panel in the shooting interface if the shooting information meets the pop-up condition for popping up the special effect panel, wherein the special effect panel comprises at least one target special effect component;
the camera shooting information comprises a currently opened camera identification and first image data in a view frame of the shooting interface, and the second display module is configured to determine that the camera shooting information meets the pop-up condition for popping up the special effect panel if the currently opened camera is determined to be the target camera according to the camera identification and the first image data comprises face information.
15. The information presentation device of claim 14,
the acquisition module is further configured to acquire the current shooting information when the shooting interface is displayed; or when receiving a switching operation of a camera of the terminal, acquiring the current camera information.
16. The information presentation device of claim 14,
the obtaining module is further configured to stop obtaining the current shooting information when a shooting interface of the target application is displayed next time if a closing operation of the special effect panel by a user is detected within a preset time after the special effect panel is displayed in the shooting interface.
17. The information presentation device of claim 14,
the second display module is further configured to execute obtaining of a user identifier of a currently logged-in user in the target application; acquiring at least one piece of multimedia information of the user, which acts in the target application and uses the special effect component, according to the user identification; acquiring the at least one target special effect component recommended for the user according to the at least one piece of multimedia information; displaying a special effect template in the shooting interface, and loading the at least one target special effect component into the special effect template to obtain the special effect panel.
18. The information presentation device of claim 17,
the second display module is further configured to acquire, according to the user identification, at least one piece of multimedia information which is watched by the user in the target application and uses the special effect component; and/or acquire, according to the user identification, at least one piece of multimedia information which is published by the user in the target application and uses the special effect component; and/or acquire, according to the user identification, at least one piece of multimedia information which is produced by the user in the target application and uses the special effect component.
19. The information presentation device of claim 17,
the second display module is further configured to execute obtaining an effect component used in each piece of multimedia information according to the at least one piece of multimedia information, so as to obtain a plurality of effect components; determining the recommendation degrees of the plurality of special effect components according to the occurrence behaviors of the user on each piece of multimedia information; selecting the at least one target special effect component from the plurality of special effect components according to the recommendation degrees of the plurality of special effect components.
20. The information presentation device of claim 19,
the occurrence behavior comprises watching, publishing and making;
the second display module is further configured to acquire, for any one of the special effect components, a first quantity of multimedia information which uses the special effect component and whose occurrence behavior is viewing, a second quantity of multimedia information which uses the special effect component and whose occurrence behavior is publishing, and a third quantity of multimedia information which uses the special effect component and whose occurrence behavior is production; and perform weighted summation on the first quantity, the second quantity, and the third quantity according to a first weight corresponding to viewing, a second weight corresponding to publishing, and a third weight corresponding to production, to obtain the recommendation degree of the special effect component.
21. The information presentation device of claim 19,
the second display module further configured to perform determining a heat value for each of the plurality of special effects components; selecting the at least one target special effect component from the plurality of special effect components according to the heat value and the recommendation degree of each special effect component.
22. The information presentation device of claim 17,
the second display module is further configured to execute obtaining an effect component used in each piece of multimedia information according to the at least one piece of multimedia information, so as to obtain a plurality of effect components; determining at least one feature of the plurality of special effects components from the plurality of special effects components; according to the at least one feature, selecting the at least one target special effect component matched with the at least one feature from a special effect component library, wherein the special effect component library is used for storing special effect components of the target application.
23. The information presentation device of claim 22,
the second display module is further configured to execute the step of selecting at least one target special effect component which is matched with the at least one feature and has a heat value exceeding a preset threshold value from a special effect component library according to the at least one feature.
24. The information presentation device of claim 17,
the second display module is further configured to perform sorting of the at least one target special effect component according to the recommendation degree of each target special effect component; and loading the at least one ordered target special effect component into the special effect template to obtain the special effect panel.
25. The information presentation device of claim 14,
the second display module is further configured to execute acquiring first image data in a viewing frame in the shooting interface and selecting a target special effect component from the at least one target special effect component; according to the selected target special effect component, carrying out special effect processing on the first image data to obtain second image data; displaying the second image data in the viewfinder frame.
26. The information presentation device of claim 25,
the second display module is further configured to execute selecting a target special effect component with the highest recommendation degree from the at least one target special effect component according to the recommendation degree of the at least one target special effect component; or, according to the first image data, selecting a target special effect component matched with the first image data from the at least one target special effect component.
27. A terminal, comprising a processor and a memory, wherein the memory stores at least one instruction that is loaded and executed by the processor to perform operations performed by the information presentation method according to any one of claims 1 to 13.
28. A computer-readable storage medium, wherein at least one instruction is stored in the storage medium, and the instruction is loaded and executed by a processor to perform the operations of the information presentation method according to any one of claims 1 to 13.
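The flow the claims describe can be illustrated in code. The following is an illustrative sketch only, not the patented implementation: it filters a special effect component library by user features and a heat-value threshold (claim 23), orders the matches by recommendation degree to build the effect panel (claim 24), and applies the top-recommended component to the current viewfinder frame data (claims 25 and 26). All class names, field names, the threshold value, and the stand-in image transformation are hypothetical.

```python
from dataclasses import dataclass

HEAT_THRESHOLD = 0.5  # hypothetical preset heat-value threshold


@dataclass
class EffectComponent:
    name: str
    features: frozenset   # features this component is associated with
    heat: float           # popularity ("heat") value of the component
    recommendation: float # recommendation degree of the component


def select_targets(library, user_features):
    """Claim 23: keep components that match at least one user feature
    and whose heat value exceeds the preset threshold."""
    return [c for c in library
            if c.features & user_features and c.heat > HEAT_THRESHOLD]


def build_panel(targets):
    """Claim 24: order panel entries by descending recommendation degree."""
    return sorted(targets, key=lambda c: c.recommendation, reverse=True)


def apply_effect(first_image_data, component):
    """Claims 25-26: process first image data with the selected component to
    obtain second image data (a stand-in for real image processing)."""
    return {"pixels": first_image_data, "effect": component.name}


library = [
    EffectComponent("cat_ears", frozenset({"face"}), 0.9, 0.6),
    EffectComponent("big_eyes", frozenset({"face"}), 0.8, 0.9),
    EffectComponent("rainy_day", frozenset({"outdoor"}), 0.3, 0.7),
]
targets = select_targets(library, {"face"})
panel = build_panel(targets)
second = apply_effect([0, 1, 2], panel[0])  # panel[0] has the highest recommendation
print([c.name for c in panel], second["effect"])
```

In this sketch, `rainy_day` is excluded because it matches no user feature, and `big_eyes` is displayed first (and applied) because its recommendation degree is highest; claim 26's alternative strategy of matching against the first image data would replace the `panel[0]` selection with a per-frame match score.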
CN201911093543.1A 2019-11-11 2019-11-11 Information display method and device and terminal Active CN110865754B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911093543.1A CN110865754B (en) 2019-11-11 2019-11-11 Information display method and device and terminal

Publications (2)

Publication Number Publication Date
CN110865754A (en) 2020-03-06
CN110865754B (en) 2020-09-22

Family

ID=69654744

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911093543.1A Active CN110865754B (en) 2019-11-11 2019-11-11 Information display method and device and terminal

Country Status (1)

Country Link
CN (1) CN110865754B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111541936A (en) * 2020-04-02 2020-08-14 腾讯科技(深圳)有限公司 Video and image processing method and device, electronic equipment and storage medium
CN112135059B (en) * 2020-09-30 2021-09-28 北京字跳网络技术有限公司 Shooting method, shooting device, electronic equipment and storage medium
CN112672036A (en) * 2020-12-04 2021-04-16 北京达佳互联信息技术有限公司 Shot image processing method and device and electronic equipment
CN112597851A (en) * 2020-12-15 2021-04-02 泰康保险集团股份有限公司 Signature acquisition method and device, electronic equipment and storage medium
CN113110887B (en) * 2021-03-31 2023-07-21 联想(北京)有限公司 Information processing method, device, electronic equipment and storage medium
CN113115099B (en) * 2021-05-14 2022-07-05 北京市商汤科技开发有限公司 Video recording method and device, electronic equipment and storage medium
CN116156314A (en) * 2021-05-31 2023-05-23 荣耀终端有限公司 Video shooting method and electronic equipment
CN115379113A (en) * 2022-07-18 2022-11-22 北京达佳互联信息技术有限公司 Shooting processing method, device, equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447164A (en) * 2015-12-02 2016-03-30 小天才科技有限公司 Method and apparatus for automatically pushing chat expressions
CN108038102A (en) * 2017-12-08 2018-05-15 北京小米移动软件有限公司 Recommendation method, apparatus, terminal and the storage medium of facial expression image

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060150104A1 (en) * 2004-12-31 2006-07-06 Luigi Lira Display of user selected digital artworks as embellishments of a graphical user interface
KR101831516B1 (en) * 2016-06-08 2018-02-22 주식회사 시어스랩 Method and apparatus for generating image using multi-stiker
CN107277642B (en) * 2017-07-24 2020-09-15 硕诺科技(深圳)有限公司 Method for realizing interesting mapping based on video call data stream processing
CN107800871A (en) * 2017-09-27 2018-03-13 光锐恒宇(北京)科技有限公司 The display of special efficacy and querying method and device, terminal device and cloud server
CN107909629A (en) * 2017-11-06 2018-04-13 广东欧珀移动通信有限公司 Recommendation method, apparatus, storage medium and the terminal device of paster
CN109033276A (en) * 2018-07-10 2018-12-18 Oppo广东移动通信有限公司 Method for pushing, device, storage medium and the electronic equipment of paster
CN110177219A (en) * 2019-07-01 2019-08-27 百度在线网络技术(北京)有限公司 The template recommended method and device of video
CN110413818B (en) * 2019-07-31 2023-11-17 腾讯科技(深圳)有限公司 Label paper recommending method, device, computer readable storage medium and computer equipment

Also Published As

Publication number Publication date
CN110865754A (en) 2020-03-06

Similar Documents

Publication Publication Date Title
CN110865754B (en) Information display method and device and terminal
CN110572711B (en) Video cover generation method and device, computer equipment and storage medium
CN108737897B (en) Video playing method, device, equipment and storage medium
CN111083516B (en) Live broadcast processing method and device
CN111079012A (en) Live broadcast room recommendation method and device, storage medium and terminal
CN110278464B (en) Method and device for displaying list
CN110149332B (en) Live broadcast method, device, equipment and storage medium
CN109922356B (en) Video recommendation method and device and computer-readable storage medium
CN110533585B (en) Image face changing method, device, system, equipment and storage medium
CN110418152B (en) Method and device for carrying out live broadcast prompt
CN111880888B (en) Preview cover generation method and device, electronic equipment and storage medium
CN110147503B (en) Information issuing method and device, computer equipment and storage medium
US20230076109A1 (en) Method and electronic device for adding virtual item
CN109618192B (en) Method, device, system and storage medium for playing video
CN111127509A (en) Target tracking method, device and computer readable storage medium
CN111031391A (en) Video dubbing method, device, server, terminal and storage medium
CN112788359A (en) Live broadcast processing method and device, electronic equipment and storage medium
CN111192072A (en) User grouping method and device and storage medium
CN112100528A (en) Method, device, equipment and medium for training search result scoring model
CN109819308B (en) Virtual resource acquisition method, device, terminal, server and storage medium
CN111782950A (en) Sample data set acquisition method, device, equipment and storage medium
CN110891181B (en) Live broadcast picture display method and device, storage medium and terminal
CN110942426B (en) Image processing method, device, computer equipment and storage medium
CN110519614B (en) Method, device and equipment for interaction between accounts in live broadcast room
CN113613028A (en) Live broadcast data processing method, device, terminal, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant