CN112565806B - Virtual gift giving method, device, computer equipment and medium

Virtual gift giving method, device, computer equipment and medium

Info

Publication number
CN112565806B
CN112565806B (application CN202011403675.2A)
Authority
CN
China
Prior art keywords
face
target
virtual gift
target object
key point
Prior art date
Legal status
Active
Application number
CN202011403675.2A
Other languages
Chinese (zh)
Other versions
CN112565806A (en)
Inventor
陈文琼
谢欢
Current Assignee
Guangzhou Fanxing Huyu IT Co Ltd
Original Assignee
Guangzhou Fanxing Huyu IT Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Fanxing Huyu IT Co Ltd filed Critical Guangzhou Fanxing Huyu IT Co Ltd
Priority to CN202011403675.2A priority Critical patent/CN112565806B/en
Publication of CN112565806A publication Critical patent/CN112565806A/en
Application granted granted Critical
Publication of CN112565806B publication Critical patent/CN112565806B/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866 Management of end-user data
    • H04N21/25875 Management of end-user data involving end-user authentication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441 Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • H04N21/4415 Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Abstract

The embodiment of the application discloses a virtual gift giving method, a device, computer equipment and a medium, belonging to the technical field of live broadcasting. The method comprises the following steps: displaying a live interface of a live broadcast room, wherein the live interface comprises a plurality of object areas; responding to a selection operation of a target object area, and acquiring target object features corresponding to the target object area; and responding to a virtual gift giving operation, and initiating a virtual gift giving request to the target object corresponding to the target object area, wherein the virtual gift giving request carries the target object features. Presenting a virtual gift to a particular anchor in this way increases the interactivity and the flexibility of the interaction between the user and the anchor.

Description

Virtual gift giving method, device, computer equipment and medium
Technical Field
The embodiment of the application relates to the technical field of live broadcasting, in particular to a virtual gift giving method, a device, computer equipment and a medium.
Background
In the process of watching live broadcast, a user can give a virtual gift to a host, and a virtual gift special effect corresponding to the virtual gift is displayed in a live broadcast interface. For example, if the user gives a virtual gift "yacht" to the host, the special effect corresponding to "yacht" is displayed in the live interface.
However, if a plurality of anchors broadcast through the same live broadcast room, that is, if the live interface of the live broadcast room includes a plurality of anchors, how to present a virtual gift to one of the plurality of anchors becomes a problem to be solved.
Disclosure of Invention
The embodiment of the application provides a virtual gift giving method, a device, computer equipment and a medium, which improve the flexibility of interaction between a user and a host. The technical scheme is as follows:
in one aspect, a virtual gift-gifting method is provided, the method comprising:
displaying a live interface of a live broadcast room, wherein the live interface comprises a plurality of object areas;
responding to the selection operation of a target object area, and acquiring target object characteristics corresponding to the target object area;
and responding to the virtual gift giving operation, and initiating a virtual gift giving request to a target object corresponding to the target object area, wherein the virtual gift giving request carries the characteristics of the target object.
In one possible implementation manner, after the initiating of the virtual gift giving request to the target object corresponding to the target object area in response to the virtual gift giving operation, the method further includes:
displaying the virtual gift special effect corresponding to the virtual gift in the target object area.
In another possible implementation manner, the live interface includes a live broadcast picture, the live broadcast picture includes the plurality of object areas, and the acquiring, in response to the selection operation of the target object area, the target object features corresponding to the target object area includes:
responding to the triggering operation of the live broadcast picture, and determining a target position corresponding to the triggering operation;
identifying the live broadcast picture and determining an object area comprising the target position;
and determining the object area as the target object area, and acquiring the target object characteristics corresponding to the target object area.
In another possible implementation, the initiating a virtual gift-gifting request in response to a virtual gift-gifting operation includes:
displaying a virtual gift giving interface overlaid on the live broadcast interface, or switching from the live broadcast interface to the virtual gift giving interface;
and responding to the selection operation of any virtual gift in the virtual gift presentation interface, and initiating the virtual gift presentation request.
In another possible implementation, the virtual gift gifting interface includes special effect thumbnails corresponding to the plurality of virtual gifts; the initiating of the virtual gift presentation request in response to the selection operation of any virtual gift in the virtual gift presentation interface includes:
responding to the selection operation of the special effect thumbnail corresponding to any virtual gift, and initiating the virtual gift presentation request.
In another possible implementation, the virtual gift gifting interface includes a gifting control, and the initiating of the virtual gift gifting request in response to a selection operation of any virtual gift in the virtual gift gifting interface includes:
setting the selected virtual gift to a selected state in response to a selection operation of any one of the virtual gifts;
and responding to the triggering operation of the gift control, and initiating a gift request of the virtual gift in a selected state.
In another possible implementation manner, the target object area includes a target face area, and the acquiring, in response to a selection operation of the target object area, a target object feature corresponding to the target object area includes:
acquiring a plurality of face key points of the target face area in response to a selection operation of the target face area, wherein the face key points comprise at least one of face edge points or face organ edge points of the target face area;
and determining the target face characteristics of the target face area according to the positions of the plurality of face key points.
In another possible implementation manner, the determining the target face feature of the target face area according to the positions of the face keypoints includes at least one of the following:
determining a first face sub-feature of the target face region according to the abscissa of the plurality of face key points, wherein the first face sub-feature represents the transverse relative positions of the plurality of face key points;
and acquiring second face sub-features of the target face region according to the ordinate of the plurality of face key points, wherein the second face sub-features represent the longitudinal relative positions of the plurality of face key points.
In another possible implementation manner, the determining the target face feature of the target face area according to the positions of the face keypoints includes:
determining, according to the positions of a first face key point, a second face key point and a third face key point, a first distance between the first face key point and the second face key point, and a second distance between the first face key point and the third face key point;
and determining a first ratio between the first distance and the second distance as the target face feature.
In another possible implementation manner, the plurality of face key points include face edge points and face organ edge points, and the determining the target face feature of the target face area according to the positions of the plurality of face key points includes:
selecting a face key point positioned in a first face subarea or a second face subarea from the plurality of face key points;
and determining the target face characteristics of the target face area according to the positions of the selected plurality of face key points.
In another possible implementation manner, the selecting a face key point located in the first face sub-area or the second face sub-area from the plurality of face key points includes:
selecting, from the plurality of face key points, a first eye corner key point, a second eye corner key point, and a face edge key point located at the same height as the lower eyelid key point; or
and selecting a first nose bridge key point, a second nose bridge key point and a third nose bridge key point from the plurality of face key points.
In another possible implementation manner, the target object area includes a target face area, and the acquiring, in response to a selection operation of the target object area, a target object feature corresponding to the target object area includes:
responding to the selection operation of the target face area, and acquiring face shape parameters of the target face area, wherein the face shape parameters include at least one of an aspect ratio of the face length to the face width, a width ratio of the forehead width to the chin width, a jaw angle parameter, or a chin angle parameter;
and determining the face shape parameters as the target face features.
In another possible implementation manner, the face shape parameters include the jaw angle parameter, and the obtaining the face shape parameters of the target face region in response to the selection operation of the target face region includes:
in response to the selection operation of the target face region, determining, according to the position of a first chin key point, the position of a second chin key point and the position of a third chin key point, a first line segment corresponding to the first chin key point and the third chin key point, and a second line segment corresponding to the second chin key point and the third chin key point, wherein the first chin key point and the second chin key point are at the same height, and the third chin key point is the vertex among the plurality of chin key points;
and determining the jaw angle parameter according to the included angle between the first line segment and the second line segment.
In another possible implementation manner, the face shape parameters include the chin angle parameters, and the obtaining the face shape parameters of the target face region in response to the selection operation of the target face region includes:
responding to the selection operation of the target face area, and determining a third line segment corresponding to a first chin key point and a second chin key point and a fourth line segment corresponding to the second chin key point and the third chin key point according to the position of the first chin key point, the position of the second chin key point and the position of the third chin key point, wherein the first chin key point and the second chin key point are positioned at the same height, and the third chin key point is a vertex in a plurality of chin key points;
and determining the chin angle parameter according to the included angle between the third line segment and the fourth line segment.
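Both the jaw angle parameter and the chin angle parameter reduce to the included angle between two line segments defined by chin key points. A minimal sketch under that reading (the coordinates are illustrative; the patent does not prescribe an implementation):

```python
import math

def segment_angle(a1, a2, b1, b2):
    """Included angle, in degrees, between segment a1-a2 and segment b1-b2.
    Points are (x, y) tuples in live-picture pixel coordinates."""
    va = (a2[0] - a1[0], a2[1] - a1[1])
    vb = (b2[0] - b1[0], b2[1] - b1[1])
    cos_theta = (va[0] * vb[0] + va[1] * vb[1]) / (math.hypot(*va) * math.hypot(*vb))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

# First and second chin key points at the same height, third point the vertex.
p1, p2, p3 = (80.0, 150.0), (160.0, 150.0), (120.0, 190.0)
jaw_angle  = segment_angle(p3, p1, p3, p2)   # angle between the first and second line segments
chin_angle = segment_angle(p1, p2, p2, p3)   # angle between the third and fourth line segments
```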
In another possible implementation manner, the target object area includes a target human body area, and the acquiring, in response to a selection operation of the target object area, a target object feature corresponding to the target object area includes:
in response to a selection operation of the target human body region, acquiring a first human body length of a first human body sub-region and a second human body length of a second human body sub-region in the target human body region, and determining the ratio between the first human body length and the second human body length as a target human body feature of the target human body region; or
in response to a selection operation of the target human body region, acquiring the total human body length and the total human body width of the target human body region, and determining the ratio between the total human body length and the total human body width as a target human body feature of the target human body region; or
and responding to the selection operation of the target human body area, acquiring clothing features in the target human body area, and determining the clothing features as target human body features of the target human body area.
In another possible implementation manner, the live broadcast picture in the live broadcast interface includes a plurality of picture areas, and the responding to the selection operation of the target object area, obtaining the target object feature corresponding to the target object area includes:
and determining a target background area in the target picture area in response to the selection operation of the target picture area, and acquiring the target background characteristics of the target background area.
In another aspect, there is provided a virtual gift-gifting method, the method including:
receiving a virtual gift-giving request, wherein the virtual gift-giving request carries target object characteristics;
determining a plurality of object areas included in a live interface of a live room;
determining, among the object features corresponding to the plurality of object regions, the object feature matched with the target object feature, and determining the object region corresponding to the matched object feature as a target object region;
and displaying the virtual gift special effect corresponding to the virtual gift in the target object area.
In one possible implementation manner, the determining an object feature that matches the target object feature from the object features corresponding to the plurality of object regions, and determining the object region corresponding to the object feature as the target object region includes:
and respectively acquiring difference values between the object features of the object regions and the target object features, and determining the object region corresponding to the minimum difference value as the target object region.
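A minimal sketch of that minimum-difference matching (a scalar feature is assumed here; a vector feature would use e.g. a Euclidean distance instead of an absolute difference):

```python
def match_target_region(region_features, target_feature):
    """Return the index of the object region whose feature value differs
    least from the target object feature carried in the gift request."""
    differences = [abs(feature - target_feature) for feature in region_features]
    return differences.index(min(differences))

# Example: three face regions with ratio features; the request carried 1.52.
target_index = match_target_region([1.31, 1.52, 1.78], 1.52)  # -> 1
```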
In another possible implementation manner, the displaying, in the target object area, the virtual gift special effect corresponding to the virtual gift includes:
and displaying the virtual gift special effect at a target position in the target object area.
In another possible implementation manner, the number of virtual gifts is carried in the virtual gift giving request, and the displaying, in the target object area, the virtual gift special effect corresponding to the virtual gift includes:
in response to the number of virtual gifts being greater than a first reference number and less than a second reference number, displaying that number of virtual gift special effects in the target object area in a superimposed manner; or
and in response to the number of virtual gifts being greater than a third reference number, displaying the virtual gift special effect and text information corresponding to the virtual gift in the target object area, wherein the text information includes the number of virtual gifts.
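The reference numbers above select between display strategies. A sketch of that selection (the threshold values and mode names are assumptions for illustration; the patent does not fix them):

```python
def effect_display_mode(count: int, first_ref: int, second_ref: int, third_ref: int) -> str:
    """Choose how to render `count` gifts of the same kind."""
    if count > third_ref:
        return "effect_plus_text"      # one special effect plus text such as "x" + str(count)
    if first_ref < count < second_ref:
        return "superimposed_effects"  # render `count` stacked special effects
    return "single_effect"             # default for small counts
```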
In another possible implementation manner, the object area includes a face area, the live interface includes a live screen, and the determining a plurality of object areas included in the live interface of the live room includes:
and carrying out face recognition on the live broadcast picture, and determining the face areas.
In another possible implementation manner, after the virtual gift special effect corresponding to the virtual gift is displayed in the target face area, the method further includes:
and sending the live broadcast picture added with the virtual gift special effect to a live broadcast server, wherein the live broadcast server is used for publishing the live broadcast picture in the live broadcast room.
In another aspect, there is provided a virtual gift-gifting apparatus, the apparatus including:
The display module is used for displaying a live broadcast interface of the live broadcast room, wherein the live broadcast interface comprises a plurality of object areas;
the characteristic acquisition module is used for responding to the selection operation of the target object area and acquiring the target object characteristics corresponding to the target object area;
the request initiating module is used for responding to the virtual gift giving operation and initiating a virtual gift giving request to a target object corresponding to the target object area, wherein the virtual gift giving request carries the characteristics of the target object.
In one possible implementation, the apparatus further includes:
and the display module is used for displaying the virtual gift special effect corresponding to the virtual gift in the target object area.
In one possible implementation manner, the live interface includes a live screen, the live screen includes the plurality of object areas, and the feature obtaining module includes:
the position determining unit is used for responding to the triggering operation of the live broadcast picture and determining a target position corresponding to the triggering operation;
the region determining unit is used for identifying the live broadcast picture and determining an object region comprising the target position;
and the characteristic acquisition unit is used for determining the object area as the target object area and acquiring the target object characteristics corresponding to the target object area.
In another possible implementation manner, the request initiating module is configured to:
displaying a virtual gift giving interface overlaid on the live broadcast interface, or switching from the live broadcast interface to the virtual gift giving interface;
and responding to the selection operation of any virtual gift in the virtual gift presentation interface, and initiating the virtual gift presentation request.
In another possible implementation, the virtual gift gifting interface includes special effect thumbnails corresponding to the plurality of virtual gifts; the request initiating module is used for responding to the selection operation of the special effect thumbnail corresponding to any virtual gift and initiating the virtual gift giving request.
In another possible implementation, the virtual gift gifting interface includes a gifting control, and the request initiating module is configured to:
setting the selected virtual gift to a selected state in response to a selection operation of any one of the virtual gifts;
and responding to the triggering operation of the gift control, and initiating a gift request of the virtual gift in a selected state.
In another possible implementation manner, the target object area includes a target face area, and the feature acquisition module includes:
A key point obtaining unit, configured to obtain a plurality of face key points of the target face area in response to a selection operation of the target face area, where the face key points include at least one of a face edge point or a face organ edge point of the target face area;
and the feature acquisition unit is used for determining the target face features of the target face region according to the positions of the plurality of face key points.
In another possible implementation manner, the feature acquiring unit is configured to:
determining a first face sub-feature of the target face region according to the abscissa of the plurality of face key points, wherein the first face sub-feature represents the transverse relative positions of the plurality of face key points;
and acquiring second face sub-features of the target face region according to the ordinate of the plurality of face key points, wherein the second face sub-features represent the longitudinal relative positions of the plurality of face key points.
In another possible implementation manner, the feature acquiring unit is configured to:
determining, according to the positions of a first face key point, a second face key point and a third face key point, a first distance between the first face key point and the second face key point, and a second distance between the first face key point and the third face key point;
And determining a first ratio between the first distance and the second distance as the target face feature.
In another possible implementation manner, the plurality of face key points include face edge points and face organ edge points, and the feature acquiring unit is configured to:
selecting a face key point positioned in a first face subarea or a second face subarea from the plurality of face key points;
and determining the target face characteristics of the target face area according to the positions of the selected plurality of face key points.
In another possible implementation manner, the feature acquiring unit is configured to:
selecting, from the plurality of face key points, a first eye corner key point, a second eye corner key point, and a face edge key point located at the same height as the lower eyelid key point; or
and selecting a first nose bridge key point, a second nose bridge key point and a third nose bridge key point from the plurality of face key points.
In another possible implementation manner, the target object area includes a target face area, and the feature acquisition module includes:
a parameter obtaining unit, configured to obtain face shape parameters of the target face area in response to a selection operation of the target face area, where the face shape parameters include at least one of an aspect ratio of the face length to the face width, a width ratio of the forehead width to the chin width, a jaw angle parameter, or a chin angle parameter;
And the feature acquisition unit is also used for determining the face shape parameters as the target face features.
In another possible implementation manner, the face shape parameters include the jaw angle parameter, and the parameter acquiring unit is configured to:
in response to the selection operation of the target face region, determine, according to the position of a first chin key point, the position of a second chin key point and the position of a third chin key point, a first line segment corresponding to the first chin key point and the third chin key point, and a second line segment corresponding to the second chin key point and the third chin key point, where the first chin key point and the second chin key point are at the same height, and the third chin key point is the vertex among the plurality of chin key points;
and determining the jaw angle parameter according to the included angle between the first line segment and the second line segment.
In another possible implementation manner, the face shape parameter includes the chin angle parameter, and the parameter acquiring unit is configured to:
responding to the selection operation of the target face area, and determining a third line segment corresponding to a first chin key point and a second chin key point and a fourth line segment corresponding to the second chin key point and the third chin key point according to the position of the first chin key point, the position of the second chin key point and the position of the third chin key point, wherein the first chin key point and the second chin key point are positioned at the same height, and the third chin key point is a vertex in a plurality of chin key points;
And determining the chin angle parameter according to the included angle between the third line segment and the fourth line segment.
In another possible implementation manner, the target object area includes a target human body area, and the feature acquisition module includes:
a feature acquisition unit, configured to acquire, in response to a selection operation of the target human body region, a first human body length of a first human body sub-region and a second human body length of a second human body sub-region in the target human body region, and determine the ratio between the first human body length and the second human body length as a target human body feature of the target human body region; or
the feature acquisition unit is further configured to acquire, in response to a selection operation of the target human body region, the total human body length and the total human body width of the target human body region, and determine the ratio between the total human body length and the total human body width as a target human body feature of the target human body region; or
the feature acquisition unit is further configured to acquire clothing features in the target human body region in response to a selection operation of the target human body region, and determine the clothing features as target human body features of the target human body region.
In another possible implementation manner, the live broadcast picture in the live broadcast interface includes a plurality of picture areas, and the feature acquisition module includes:
and the characteristic acquisition unit is also used for responding to the selection operation of the target picture area, determining a target background area in the target picture area and acquiring the target background characteristic of the target background area.
In another aspect, there is provided a virtual gift-gifting apparatus, the apparatus including:
the request receiving module is used for receiving a virtual gift giving request, wherein the virtual gift giving request carries target object characteristics;
the region determining module is used for determining a plurality of object regions included in a live interface of the live broadcasting room;
the feature matching module is used for determining the object features matched with the target object features in the object features corresponding to the plurality of object regions, and determining the object region corresponding to the object features matched with the target object features as a target object region;
and the special effect display module is used for displaying the special effect of the virtual gift corresponding to the virtual gift in the target object area.
In one possible implementation manner, the feature matching module is configured to obtain difference values between object features of the plurality of object regions and the target object feature, and determine an object region corresponding to the minimum difference value as the target object region.
In another possible implementation manner, the special effect display module is configured to display the virtual gift special effect at a target location in the target object area.
In another possible implementation manner, the number of the virtual gifts is carried in the virtual gift presenting request, and the special effect display module is configured to:
in response to the number of virtual gifts being greater than a first reference number and less than a second reference number, display that number of virtual gift special effects in the target object area in a superimposed manner; or
and in response to the number of virtual gifts being greater than a third reference number, display the virtual gift special effect and text information corresponding to the virtual gift in the target object area, where the text information includes the number of virtual gifts.
In another possible implementation manner, the object area includes a face area, the live broadcast interface includes a live broadcast picture, and the area determining module is configured to perform face recognition on the live broadcast picture to determine a plurality of face areas.
In another possible implementation, the apparatus further includes:
and the picture transmitting module is used for transmitting the live picture added with the virtual gift special effect to a live server, and the live server is used for releasing the live picture in the live room.
In another aspect, a computer device is provided that includes a processor and a memory having stored therein at least one piece of program code that is loaded and executed by the processor to implement the operations performed in the virtual gift-gifting method of the above aspect.
In another aspect, there is provided a computer readable storage medium having stored therein at least one program code loaded and executed by a processor to implement the operations performed in the virtual gift-gifting method of the above aspect.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer program code stored in a computer readable storage medium, the computer program code being loaded and executed by a processor to implement the operations performed in the virtual gift-giving method as described in the above aspect.
The technical scheme provided by the embodiment of the application has the beneficial effects that at least:
the method, the device, the computer equipment and the medium provided by the embodiment of the application can select, from a plurality of face areas, the target face area to which the virtual gift is to be presented, so as to present the virtual gift to a specific anchor; the presentation request carries the target face features, which makes it convenient to display the virtual gift special effect in the corresponding target face area according to the target face features.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a flow chart of a virtual gift-gifting method provided by an embodiment of the present application;
FIG. 3 is a flow chart of another virtual gift-gifting method provided by an embodiment of the present application;
FIG. 4 is a flow chart of another virtual gift-gifting method provided by an embodiment of the present application;
fig. 5 is a schematic diagram of a face key point provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of another face key point provided in an embodiment of the present application;
FIG. 7 is a schematic view of a face according to an embodiment of the present application;
FIG. 8 is a schematic view of another face provided by an embodiment of the present application;
FIG. 9 is a schematic view of another face provided by an embodiment of the present application;
FIG. 10 is a schematic view of another face provided by an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a virtual gift-gifting apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural view of another virtual gift-gifting apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural view of another virtual gift-gifting apparatus according to an embodiment of the present application;
fig. 14 is a schematic structural view of another virtual gift-gifting apparatus according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the following detailed description of the embodiments of the present application will be given with reference to the accompanying drawings.
It is to be understood that the terms "first," "second," and the like, as used herein, may be used to describe various concepts, but are not limited by these terms unless otherwise specified. These terms are only used to distinguish one concept from another. For example, a first face key point may be referred to as a second face key point and a second face key point may be referred to as a first face key point without departing from the scope of the present application.
The terms "at least one", "a plurality", "each", "any" and the like as used herein, at least one includes one, two or more, a plurality includes two or more, each means each of the corresponding plurality, and any one means any of the plurality. For example, the plurality of virtual gifts includes 3 virtual gifts, and each virtual gift refers to each of the 3 virtual gifts, and any one refers to any one of the 3 virtual gifts, which may be the first, the second, or the third.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application. Referring to fig. 1, the implementation environment includes at least one audience terminal 101 (one is shown in fig. 1 as an example), an anchor terminal 102, and a live broadcast server 103. Each audience terminal 101 is connected to the live broadcast server 103 through a wireless or wired network, and the anchor terminal 102 is connected to the live broadcast server 103 through a wireless or wired network.
A target application served by the live broadcast server 103 is installed on the audience terminal 101 and the anchor terminal 102, and through the target application the audience terminal 101 and the anchor terminal 102 can implement functions such as data transmission and message interaction. Optionally, the audience terminal 101 and the anchor terminal 102 are computers, cell phones, tablets, or other terminals. Optionally, the target application is a target application in an operating system or a target application provided by a third party. For example, the target application is a live broadcast application having a live broadcast function, a virtual gift giving function, and the like; of course, the live broadcast application can also have other functions, such as a co-hosting (Lianmai) function, a comment function, and the like. Optionally, the live broadcast server 103 is a server, a server cluster composed of several servers, or a cloud computing service center.
The method provided by the embodiment of the application is applied to a scene in which a virtual gift is given to an anchor during live broadcasting. For example, if a user watching a live broadcast with a plurality of anchors wants to give a virtual gift to one of them, the virtual gift giving method provided by the embodiment of the application can be adopted: the user triggers the object area corresponding to the anchor to whom the virtual gift is to be given, then selects the virtual gift to be given, the virtual gift is given to that anchor, and the virtual gift special effect corresponding to the virtual gift is displayed in the object area of that anchor.
Fig. 2 is a flowchart of a virtual gift giving method according to an embodiment of the present application. The execution subject of the embodiment of the application is the audience terminal. Referring to fig. 2, the method includes the following steps:
201. Displaying a live interface of the live broadcast room, where the live interface includes a plurality of object areas.
The live interface comprises a plurality of object areas, wherein the object areas refer to areas where objects are located. Optionally, the object region includes a face region, a body region, a background region, or other region. Optionally, the live interface includes a live view, and the live view includes the plurality of object regions.
202. Responding to a selection operation of a target object area, and acquiring the target object features corresponding to the target object area.
The selection operation of the target object area refers to a single click operation, a double click operation, a long press operation, a frame selection operation or other operations performed on the target object area. The target object features are used to describe the target object, and object features of different objects are distinguished, so that different object regions can be distinguished based on the object features.
In one possible implementation manner, the audience terminal responds to a trigger operation on a live broadcast picture, determines a target position corresponding to the trigger operation, identifies the live broadcast picture, determines an object area including the target position, determines the object area as a target object area, and acquires a target object feature corresponding to the target object area. The triggering operation comprises a single click operation, a double click operation, a long press operation, a box selection operation or other operations.
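Step 202 amounts to a hit test of the trigger position against the recognized object areas. A minimal sketch of that hit test (the names and the rectangular-region assumption are illustrative, not prescribed by the patent):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ObjectRegion:
    object_id: str   # identifies the anchor shown in this region
    x: float         # left edge, in live-picture coordinates
    y: float         # top edge
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)

def find_target_region(regions: List[ObjectRegion],
                       tap_x: float, tap_y: float) -> Optional[ObjectRegion]:
    """Return the object region containing the tap position; None means the
    tap hit no region and is treated as an accidental touch."""
    for region in regions:
        if region.contains(tap_x, tap_y):
            return region
    return None
```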
203. Responding to a virtual gift giving operation, and initiating a virtual gift giving request to the target object corresponding to the target object area.
The target object is a target face, a target human body, a target background and the like, and the virtual gift giving request carries the characteristics of the target object.
The audience terminal initiates a virtual gift presentation request, that is, sends the virtual gift presentation request to the live broadcast server; the live broadcast server sends the virtual gift presentation request to the anchor terminal; the anchor terminal displays the virtual gift special effect corresponding to the virtual gift in the target object area according to the received virtual gift presentation request, and sends the live broadcast picture containing the virtual gift special effect to the audience terminal through the live broadcast server; the audience terminal then displays the live broadcast picture, that is, the audience terminal displays the virtual gift special effect corresponding to the virtual gift in the target object area.
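An illustrative shape for such a request follows. The patent only requires that the request carry the target object features and identify the gift; the field names here are assumptions:

```python
import json

def build_gift_request(room_id, gift_id, gift_count, target_features):
    """Serialize a virtual gift giving request (field names are assumptions)."""
    return json.dumps({
        "room_id": room_id,
        "gift_id": gift_id,
        "gift_count": gift_count,
        # Carried so the anchor terminal can re-locate the target object area
        # even if the live picture has changed since the viewer tapped it.
        "target_object_features": target_features,
    })

request_body = build_gift_request("room-42", "yacht", 1, [1.52])
```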
After determining the target object features of the target object area, the audience terminal displays a plurality of virtual gifts for the user to select from, and the user selects a virtual gift so as to give the selected virtual gift to the anchor.
The method provided by the embodiment of the application can select, from a plurality of object areas, the target object area to which the virtual gift is to be presented, so as to present the virtual gift to a specific anchor; the presentation request carries the target object features, which makes it convenient to display the virtual gift special effect in the corresponding target object area according to the target object features.
Fig. 3 is a flowchart of another virtual gift-gifting method according to an embodiment of the present application. The execution subject of the embodiment of the application is a anchor terminal. Referring to fig. 3, the method includes the steps of:
301. Receiving a virtual gift giving request, where the virtual gift giving request carries the target object features.
The anchor terminal determines the target object features from the received virtual gift presentation request, and subsequently determines, based on the target object features, the anchor to whom the virtual gift is presented.
302. Determining a plurality of object regions included in the live interface of the live broadcast room.
Since the live broadcast picture currently displayed at the anchor terminal may have changed relative to the picture displayed at the audience terminal when the trigger operation was performed, the plurality of object regions may have moved relative to their previous positions; therefore, the plurality of object regions in the current live interface of the anchor terminal need to be determined.
303. Determining, among the object features corresponding to the plurality of object regions, the object feature matched with the target object features, and determining the object region corresponding to the matched object feature as the target object region.
The object features are matched with the target object features, so that the object feature with the highest similarity to the target object features is determined among the object features; the object region corresponding to the object feature with the highest similarity is determined as the target object region, and the anchor corresponding to the target object region is determined as the anchor to whom the virtual gift is given.
304. Displaying the virtual gift special effect corresponding to the virtual gift in the target object area.
After determining the target object area, the anchor terminal determines the virtual gift to be presented according to the virtual gift information in the virtual gift giving request, and renders the virtual gift special effect on the target object area to display the virtual gift special effect corresponding to the virtual gift, while the other object areas in the live broadcast picture do not display the virtual gift special effect, so that the virtual gift is presented to a specific anchor.
According to the method provided by the embodiment of the application, the target object area corresponding to the target object features can be determined from the displayed plurality of object areas according to the target object features carried in the presentation request, so that the virtual gift special effect is displayed in the target object area and the virtual gift is presented to the specific anchor.
In the embodiment of the present application, the object area is taken as a face area as an example, and the following embodiment shown in fig. 4 is adopted to describe a display process of a virtual gift special effect.
Fig. 4 is a flowchart of another virtual gift giving method according to an embodiment of the present application. The interacting entities in the embodiment of the application are the audience terminal, the live broadcast server, and the anchor terminal. Referring to fig. 4, the method includes the following steps:
401. The audience terminal displays a live interface of the live broadcast room.
In one possible implementation, the live interface includes a live broadcast picture, where the live broadcast picture includes a plurality of face areas. Optionally, the live broadcast picture includes a plurality of divided areas, each area displaying a different face; for example, in a scene in which a plurality of anchors co-host, the live broadcast picture is divided into a plurality of areas, and each area displays the live picture of the corresponding anchor. Alternatively, the live broadcast picture includes one area in which a plurality of faces are displayed; for example, a plurality of anchors broadcast in the same live broadcast room, and during the live broadcast, images of the anchors are collected through the same anchor terminal.
In one possible implementation, the viewer terminal installs a live application through which a live interface of the live room is displayed.
402. The audience terminal, in response to a selection operation of a target face area, acquires the target face features corresponding to the target face area.
In the embodiment of the application, the user gives a virtual gift to an anchor through the audience terminal while watching the live broadcast; when there are a plurality of anchors, the user can select any anchor from the plurality of anchors and give the virtual gift to the selected anchor.
In one possible implementation manner, the user performs a triggering operation on the live broadcast picture, and the audience terminal, in response to the triggering operation on the live broadcast picture, determines the target position corresponding to the triggering operation; performs face recognition on the live broadcast picture to identify a plurality of face areas in the live broadcast picture, and determines, from the plurality of face areas, the face area including the target position; and determines this face area as the target face area and acquires the target face features corresponding to the target face area. In addition, if none of the face areas in the live broadcast picture includes the target position, the triggering operation is considered to have been generated by a misoperation of the user, and the subsequent virtual gift giving process is not executed.
The triggering operation is a single click operation, a double click operation, a long press operation, a frame selection operation or another operation; the target position is the position at which the user's finger performs the triggering operation; the target face area is the area in the live broadcast picture where the face of the anchor to whom the user wants to present the virtual gift is located; and the target face features are used to represent the features of that face.
In one possible implementation manner, the audience terminal determines the face areas in at least the following two manners:
First manner: the audience terminal detects face key points in the live broadcast picture and determines the face areas based on the detected face key points. For example, face key point detection is performed on the live broadcast picture to obtain a plurality of face key points in the live broadcast picture, and the area where the face key points belonging to the same face are located is determined as one face area. Optionally, the face key points are detected by a face key point detection algorithm, or by a face key point detection model, or in other ways.
Second manner: the audience terminal invokes a face recognition model to recognize the live broadcast picture and determines the face area of each face in the live broadcast picture.
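A sketch of the first manner, deriving a face area as the bounding box of the key points grouped to one face (the key point detector itself is abstracted behind a hypothetical helper, not an API from the patent):

```python
from typing import List, Tuple

Point = Tuple[float, float]

def face_region_from_keypoints(keypoints: List[Point]) -> Tuple[float, float, float, float]:
    """Bounding box (x, y, width, height) enclosing the key points that
    were grouped as belonging to one face."""
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    return min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)

# `detect_face_keypoints(frame)` stands in for any detector returning one key
# point list per face (e.g. a 68-point landmark model); it is hypothetical:
# face_regions = [face_region_from_keypoints(kps) for kps in detect_face_keypoints(frame)]
```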
After determining the selected target face region, in one possible implementation, the viewer terminal obtains a plurality of face key points of the target face region; and determining the target face characteristics of the target face area according to the positions of the plurality of face key points. The face key points comprise at least one of face edge points or face organ edge points of a target face area, the number of the obtained face key points is 5, 21, 49, 68 or 100, and the like, and the number of the face key points of each face is not limited. For example, referring to fig. 5, the number of face keypoints is 68.
The audience terminal determines the target face characteristics of the target face area according to the positions of the plurality of face key points, and the method comprises the following possible implementation modes:
in one possible implementation, because faces differ, their contours also differ, including at least one of: different face shapes, different shapes of the facial organs, different relative positions among the facial organs, or different relative positions of the facial organs and the face edge. These differences are manifested in the relative positions between the plurality of face key points of a face. For example, if the shapes of two faces are different, the relative positions of their face edge points are also different; if the shapes of the facial organs are different, the relative positions of the facial organ edge points are also different; and so on. The relative positions between the plurality of face key points are therefore used to represent the target face features.
Optionally, determining a first face sub-feature of the target face region according to the abscissa of the plurality of face key points, wherein the first face sub-feature represents the lateral relative positions of the plurality of face key points; or, according to the ordinate of the plurality of face key points, acquiring a second face sub-feature of the target face region, wherein the second face sub-feature represents the longitudinal relative positions of the plurality of face key points.
The embodiment of the application is only described by taking the example that the target face features comprise a first face sub-feature and a second face sub-feature, wherein the target face features comprise at least one first face sub-feature or at least one second face sub-feature. For example, according to the horizontal coordinates of the face key point 1, the face key point 2 and the face key point 3, a first face sub-feature 1 of the target face area is obtained; and acquiring the first face sub-feature 2 of the target face region according to the horizontal coordinates of the face key points 4, 5 and 6. The first face features include a first face sub-feature 1 and a first face sub-feature 2.
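One way to realize these sub-features follows; the patent only requires relative positions, so the normalization formula here is an assumption:

```python
from typing import List

def relative_positions(coords: List[float]) -> List[float]:
    """Relative positions of key points along one axis, normalized by the
    overall spread so the sub-feature is invariant to face size. Applied to
    abscissas it yields a first (lateral) face sub-feature; applied to
    ordinates, a second (longitudinal) one."""
    lo, hi = min(coords), max(coords)
    spread = (hi - lo) or 1.0  # guard against a degenerate spread of zero
    return [(c - lo) / spread for c in coords]

lateral_sub_feature = relative_positions([102.0, 126.0, 150.0])  # x-coordinates
```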
In another possible implementation manner, the audience terminal determines, according to the positions of a first face key point, a second face key point and a third face key point, a first distance between the first face key point and the second face key point, and a second distance between the first face key point and the third face key point; and determines a first ratio between the first distance and the second distance as the target face feature. The first face key point, the second face key point and the third face key point are each any one of the plurality of face key points.
It should be noted that, in the embodiment of the present application, only the determination of the target face feature according to the positions of the 3 face key points is described as an example, in another embodiment, other number of face key points may be adopted to determine the target face feature, for example, the target face feature is determined according to the positions of the 4 face key points; or determining the target face features according to the positions of the 6 face key points. The embodiment of the application does not limit the number of the key points of the human face.
In another possible implementation, the angle of a face displayed in the live broadcast picture may change, and the face key points detected for faces at different angles also differ; for example, if the face in the live broadcast picture is the left side face of the anchor, the face key points of the right side face of the anchor cannot be detected. At any angle, however, the face area includes the eyes and the bridge of the nose: the left side face includes the left eye, the right side face includes the right eye, and both include the bridge of the nose. Since the bridge of the nose lies on the midline of the face, the face key points on the facial midline can be detected whether the left side face or the right side face is detected.
To reduce the influence of face-angle changes on the determination of the target face feature, the target face feature is determined from the face key points in the region to which the eyes belong or the region to which the facial midline belongs. The viewer terminal selects, from the plurality of face key points, the face key points located in a first face sub-region or a second face sub-region, and determines the target face feature of the target face area according to the positions of the selected key points. The first face sub-region is the region to which the eyes belong (for example, the eyes and the area around them), and the second face sub-region is the region to which the facial midline belongs (for example, the area where the nose bridge is located).
In one possible implementation, the viewer terminal selects, from the plurality of face key points, a first eye corner key point, a second eye corner key point, and a face edge key point at the same height as the lower eyelid key point. The first and second eye corner key points belong to the same eye or to different eyes.
To obtain the target face feature more accurately, optionally, the first and second eye corner key points belong to the same eye, the two eye corner key points and the face edge key point are at the same height, and the face edge key point is on the same side as the eye corner key points: if the eye corner key points belong to the left eye, the face edge key point is a left face edge point; if they belong to the right eye, the face edge key point is a right face edge point. For example, referring to fig. 6, the first eye corner key point is C, the second eye corner key point is B, and the face edge key point is A; or the first eye corner key point is D, the second eye corner key point is E, and the face edge key point is F.
In one possible implementation, after selecting the first eye corner key point, the second eye corner key point and the face edge key point, the viewer terminal determines the target face feature using the implementation above in which the ratio between two distances is taken as the target face feature. Optionally, the first distance is the lateral distance between the face edge key point and the second eye corner key point, the second distance is the lateral distance between the second and first eye corner key points, and the ratio between the first distance and the second distance is determined as the target face feature; or the first distance is the lateral distance between the face edge key point and the first eye corner key point, the second distance is the lateral distance between the second and first eye corner key points, and the ratio between the first distance and the second distance is determined as the target face feature.
For example, referring to fig. 6, if the selected face key points are A, B and C, r1 = AB/BC is determined, and r1 is determined as the target face feature, where AB is the lateral distance between the face edge key point A and the second eye corner key point B, BC is the lateral distance between the second eye corner key point B and the first eye corner key point C, and r1 is the ratio between AB and BC. If the selected face key points are D, E and F, r1 = EF/DE is determined, and r1 is determined as the target face feature, where EF is the lateral distance between the face edge key point F and the second eye corner key point E, and DE is the lateral distance between the second eye corner key point E and the first eye corner key point D.
In another possible implementation, the audience terminal selects a first nose bridge keypoint, a second nose bridge keypoint, and a third nose bridge keypoint from a plurality of face keypoints. For example, referring to fig. 6, the first nose bridge keypoint is G, the second nose bridge keypoint is H, and the third nose bridge keypoint is I.
In one possible implementation, after selecting the first, second and third nose bridge key points, the viewer terminal determines the target face feature using the same distance-ratio approach. Optionally, the first distance is the longitudinal distance between the first and second nose bridge key points, the second distance is the longitudinal distance between the second and third nose bridge key points, and the ratio between the first distance and the second distance is determined as the target face feature; or the first distance is the longitudinal distance between the first and third nose bridge key points, the second distance is the longitudinal distance between the second and third nose bridge key points, and the ratio between the first distance and the second distance is determined as the target face feature.
For example, referring to fig. 6, if the selected face key points are G, H and I, r2 = GH/HI is determined, and r2 is determined as the target face feature, where GH is the longitudinal distance between the first nose bridge key point G and the second nose bridge key point H, HI is the longitudinal distance between the second nose bridge key point H and the third nose bridge key point I, and r2 is the ratio between GH and HI.
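Both ratios reduce to simple coordinate arithmetic. The sketch below assumes each key point is an (x, y) pixel coordinate; the positions chosen for the points A–C and G–I of fig. 6 are hypothetical.

```python
# Sketch of the distance-ratio features r1 (lateral) and r2 (longitudinal).
# Key points are (x, y) pixel coordinates; the values are hypothetical.

def lateral_ratio(edge, second, first):
    """r1 = |x_edge - x_second| / |x_second - x_first|, e.g. AB / BC."""
    return abs(edge[0] - second[0]) / abs(second[0] - first[0])

def longitudinal_ratio(top, middle, bottom):
    """r2 = |y_top - y_middle| / |y_middle - y_bottom|, e.g. GH / HI."""
    return abs(top[1] - middle[1]) / abs(middle[1] - bottom[1])

A, B, C = (40, 120), (75, 118), (110, 121)    # face edge, second corner, first corner
G, H, I = (128, 105), (127, 140), (126, 175)  # nose bridge key points, top to bottom

r1 = lateral_ratio(A, B, C)       # AB / BC
r2 = longitudinal_ratio(G, H, I)  # GH / HI
```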
In another possible implementation, the viewer terminal acquires face shape parameters of the target face area in response to the selection operation of the target face area, and determines the face shape parameters as the target face feature. The face shape parameters include at least one of an aspect ratio between face length and face width, a width ratio between forehead width and chin width, a jaw angle parameter, or a chin angle parameter.
In one possible implementation, when the face shape parameters include the aspect ratio between face length and face width, the viewer terminal recognizes the target face area to obtain its face length and face width, and determines a second ratio between the face length and the face width as the target face feature. The face length is the longest extent of the face area, and the face width is the widest extent of the face area. For example, referring to fig. 7, where the face width is W and the face length is H, r3 = H/W, and r3 is determined as the target face feature.
In another possible implementation, when the face shape parameters include the width ratio between forehead width and chin width, the viewer terminal recognizes the target face area to obtain its forehead width and chin width, and determines a third ratio between the forehead width and the chin width as the target face feature. For example, referring to fig. 8, where the forehead width is L1 and the chin width is L2, r4 = L1/L2, and r4 is determined as the target face feature.
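As an illustration, r3 and r4 are plain ratios of measurements reported by the face recognizer; the following sketch uses hypothetical pixel values.

```python
# Sketch of the face-shape ratios r3 (fig. 7) and r4 (fig. 8).
# How H, W, L1 and L2 are measured is left to the face recognizer;
# the values here are hypothetical pixel measurements.

H = 220.0   # face length: longest extent of the face area
W = 165.0   # face width: widest extent of the face area
L1 = 150.0  # forehead width
L2 = 95.0   # chin width

r3 = H / W    # aspect ratio of face length to face width
r4 = L1 / L2  # width ratio of forehead to chin
```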
In another possible implementation, when the face shape parameters include the jaw angle parameter, the viewer terminal determines, according to the position of the first chin key point, the position of the second chin key point and the position of the third chin key point, a first line segment corresponding to the first and second chin key points and a second line segment corresponding to the second and third chin key points, and determines the jaw angle parameter from the included angle between the first and second line segments. The first and second chin key points are located at the same height, and the third chin key point is the vertex among the chin key points, that is, the lowest of the chin key points. For example, referring to fig. 9, the first chin key point is A, the second chin key point is B, and the third chin key point is C; the first line segment is AB and the second line segment is BC; r5 = tan∠ABC, and the tangent r5 of ∠ABC is determined as the target face feature, or the cosine or sine of ∠ABC is determined as the target face feature, or the angle ∠ABC itself is directly determined as the target face feature.
In another possible implementation, when the face shape parameters include the chin angle parameter, the viewer terminal determines, according to the position of the first chin key point, the position of the second chin key point and the position of the third chin key point, a third line segment corresponding to the first and second chin key points and a fourth line segment corresponding to the second and third chin key points, and determines the chin angle parameter from the included angle between the third and fourth line segments. The first and second chin key points are located at the same height, and the third chin key point is the vertex among the plurality of chin key points, that is, the lowest of the chin key points. For example, referring to fig. 10, the first chin key point is F, the second chin key point is G, and the third chin key point is C; the third line segment is FG and the fourth line segment is CG; r6 = tan∠FGC, and the tangent r6 of ∠FGC is determined as the target face feature, or the cosine or sine of ∠FGC is determined as the target face feature, or the angle ∠FGC itself is directly determined as the target face feature.
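Both angle parameters reduce to the included angle at the shared vertex of two line segments (vertex B in fig. 9, vertex G in fig. 10). A minimal sketch with hypothetical coordinates:

```python
# Sketch of the jaw/chin angle parameters: the included angle between two
# line segments meeting at a shared vertex, computed with atan2.
# Key-point coordinates are hypothetical.
import math

def included_angle(p, vertex, q):
    """Angle p-vertex-q in radians."""
    a1 = math.atan2(p[1] - vertex[1], p[0] - vertex[0])
    a2 = math.atan2(q[1] - vertex[1], q[0] - vertex[0])
    ang = abs(a1 - a2)
    return min(ang, 2 * math.pi - ang)

A, B, C = (60, 200), (95, 200), (128, 245)  # fig. 9: segments AB and BC meet at B
angle = included_angle(A, B, C)
r5 = math.tan(angle)  # or math.cos(angle), math.sin(angle), or the angle itself
```

The same `included_angle` helper applies to fig. 10 by passing F, G and C, with G as the vertex.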
It should be noted that the target face feature of the target face area determined by the viewer terminal includes one or more of the features described above; for example, r1 alone, r1 and r2, or r3, r4, r5 and r6.
The above embodiment describes only the case in which the viewer terminal recognizes the live picture and acquires the target face feature. In another embodiment, the viewer terminal transmits the live picture at the moment of the user's trigger operation to the live server; the live server recognizes the picture, acquires the target face feature corresponding to the target face area, and returns it to the viewer terminal.
403. The viewer terminal transmits a virtual gift-gifting request to the live server in response to the virtual gift-gifting operation.
The virtual gift giving request carries the target face feature, so that terminals in the live room can subsequently display the virtual gift special effect corresponding to the virtual gift in the target face area according to the target face feature. Optionally, the virtual gift giving request carries virtual gift information so that the anchor terminal can determine which virtual gift is given. Optionally, the virtual gift giving request carries the number of virtual gifts so that the anchor terminal can determine how many virtual gifts are given.
In one possible implementation, the viewer terminal displays a virtual gift giving interface on an upper layer of the live interface, or switches from the live interface to the virtual gift giving interface, and initiates the virtual gift giving request in response to a selection operation on any virtual gift in that interface. For example, the giving interface is a floating window popped up above the live interface, or a new interface different from the live interface, or some other interface; this is not limited in the present application.
In one possible implementation, the virtual gift giving interface includes special-effect thumbnails corresponding to a plurality of virtual gifts, and the viewer terminal initiates the virtual gift giving request in response to a selection operation on the thumbnail of any virtual gift. A special-effect thumbnail is a thumbnail of the virtual gift special effect corresponding to a virtual gift; selecting the thumbnail selects the corresponding virtual gift. Optionally, the virtual gift special effect is a dynamic image and the thumbnail is a static image.
In addition, the virtual gift special effect may be text, a graphic, or another form of effect.
In one possible implementation, the virtual gift gifting interface includes a gifting control, and the viewer terminal sets the selected virtual gift to a selected state in response to a selection operation of any one of the virtual gifts; and responding to the triggering operation of the gift control, and initiating a gift request of the virtual gift in the selected state. The virtual gift in the selected state is the virtual gift to be presented. Alternatively, if the virtual gift presentation interface displays special effect thumbnails corresponding to the virtual gift, the viewer terminal sets the selected special effect thumbnail to a selected state in response to a selection operation of any special effect thumbnail.
404. And the live broadcast server sends the virtual gift giving request to the anchor terminal.
405. The anchor terminal receives the virtual gift-giving request.
The viewer terminal sends the virtual gift giving request to the anchor terminal through the live server. The anchor terminal receives the request, extracts the target face feature from it, and determines, based on the target face feature, the anchor to whom the virtual gift is given.
In one possible implementation, the virtual gift information is carried in the virtual gift giving request, and the live server delivers the request to the anchor terminal.
406. The anchor terminal determines a plurality of face areas included in a live interface of the live broadcasting room.
Since the anchor terminal's current live picture may have changed relative to the picture shown at the viewer terminal when the trigger operation was performed, the plurality of face areas must be determined anew in the anchor terminal's current live interface.
In one possible implementation, the anchor terminal performs face recognition on the live broadcast picture to determine a plurality of face areas. The embodiment of determining the plurality of face areas by face recognition is similar to the embodiment of determining the face areas by the viewer terminal in step 402, and will not be described herein.
407. The anchor terminal determines the face features matched with the target face features in the face features corresponding to the plurality of face regions, and determines the face region corresponding to the face features matched with the target face features as the target face region.
Matching a face feature against the target face feature means finding, among the face features, the one with the highest similarity to the target face feature. The face area corresponding to that feature is determined as the target face area, and the anchor corresponding to the target face area is taken to be the anchor to whom the virtual gift is given.
In one possible implementation, difference values between the face features of the plurality of face areas and the target face feature are obtained, yielding a plurality of difference values, and the face area corresponding to the smallest difference value is determined as the target face area. The difference value may be a difference, a squared difference, a standard deviation, or another numerical measure of difference. For example, if the target face features are r1 and r2 and a candidate's face features are r1′ and r2′, the difference value is (r1 − r1′)² + (r2 − r2′)², or the difference value is (r1 − r1′) + (r2 − r2′).
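As a sketch of this matching step, the following Python snippet picks the face region whose feature vector has the smallest squared difference from the target features carried in the gifting request; all feature values are hypothetical.

```python
# Sketch of step 407: choose the face region whose feature vector has the
# smallest squared difference from the target features in the gift request.

def difference(features, target):
    return sum((f - t) ** 2 for f, t in zip(features, target))

target = [0.82, 1.15]  # e.g. (r1, r2) taken from the virtual gift request
candidates = {
    "face_region_1": [0.97, 1.02],
    "face_region_2": [0.84, 1.13],
    "face_region_3": [0.71, 1.31],
}
target_region = min(candidates, key=lambda name: difference(candidates[name], target))
# target_region == "face_region_2"
```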
It should be noted that, the implementation of determining the face feature corresponding to each face area by the anchor terminal is similar to the implementation of determining the target face feature of the target face area in the above step 402, and will not be described herein.
408. And the anchor terminal displays the special effect of the virtual gift corresponding to the virtual gift in the target face area.
After determining the target face area, the anchor terminal determines the virtual gift from the virtual gift information in the giving request and renders the corresponding virtual gift special effect on the target face area; the other face areas in the live picture do not display the special effect.
In one possible implementation, the virtual gift giving request carries the number of virtual gifts; after receiving the request, the anchor terminal obtains this number and displays the virtual gift special effect accordingly.
Optionally, in response to the number of virtual gifts being greater than a first reference number and less than a second reference number, the anchor terminal superimposes and displays that number of virtual gift special effects in the target face area, the first reference number being less than the second. For example, if the first reference number is 1, the second reference number is 10, the given virtual gift is an "ear", and the request carries the number 5, then 5 pairs of "ears" are superimposed and displayed. As another example, if the special effect corresponding to the virtual gift is "fattening", the target face is enlarged by 10% of its original size when one virtual gift is received and by 50% when 5 are received.
Optionally, in response to the number of virtual gifts being greater than a third reference number, the anchor terminal displays the virtual gift special effect together with text information corresponding to the virtual gift in the target face area. The text information includes the number of virtual gifts, and the third reference number is not smaller than the second reference number. For example, if the third reference number is 10, the request carries the number 20, and the given virtual gift is a "yacht", the "yacht" special effect is displayed with "20×" shown above it or in a surrounding area, indicating that 20 "yachts" were given.
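The count-dependent display rule can be sketched as follows; the threshold values mirror the examples above (first reference number 1, second and third reference numbers 10) and are illustrative only.

```python
# Sketch of the count-dependent display rule in step 408. The reference
# numbers follow the examples in the text and are illustrative.

FIRST_REF, SECOND_REF, THIRD_REF = 1, 10, 10

def effects_to_display(count, effect):
    if FIRST_REF < count < SECOND_REF:
        return [effect] * count       # superimpose `count` copies
    if count > THIRD_REF:
        return [effect, f"{count}x"]  # one effect plus a text badge
    return [effect]

print(effects_to_display(5, "ear"))     # five superimposed "ears"
print(effects_to_display(20, "yacht"))  # "yacht" effect with a "20x" badge
```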
In one possible implementation, the virtual gift special effect is displayed at a target location in the target face area, the target location being a part such as the eyes, face, or forehead.
In one possible implementation, the virtual gift special effect has a corresponding display duration and is no longer displayed once that duration elapses.
It should be noted that the embodiment of the present application describes only the case in which one viewer terminal sends a virtual gift giving request. In another embodiment, multiple viewer terminals in the live room may each send such a request to the anchor terminal in the manner above; if several requests arrive at the anchor terminal at the same time, the anchor terminal displays the corresponding virtual gift special effects for all of them. Optionally, if multiple viewer terminals give the same virtual gift, the anchor terminal displays the special effect according to the gift count as described above; if they give different virtual gifts, the anchor terminal displays the special effect for each gift, either simultaneously or sequentially in a preset display order.
409. And the anchor terminal sends the live broadcast picture added with the virtual gift special effect to a live broadcast server.
410. The live broadcast server issues live broadcast pictures in a live broadcast room.
The anchor terminal combines the live picture with the virtual gift special effect to obtain a live picture with the special effect added, and sends it to the live server; the live server publishes the picture in the live room, and the viewer terminals in the room display it, realizing the giving of a virtual gift to a specific anchor.
With the method provided by the embodiment of the present application, the target face area to receive a virtual gift can be selected from a plurality of face areas, so the gift is given to a specific anchor; carrying the target face feature in the giving request makes it straightforward to display the virtual gift special effect in the corresponding target face area according to the target face feature.
Moreover, when acquiring face features, determining ratios of distances between different face key points, or determining features from the angles formed by different face key points, represents the corresponding face area accurately and improves the accuracy of the face features. Matching with accurate target face features ensures that the determined target face area is the face area to which the user intended to give the virtual gift.
The embodiment shown in fig. 4 takes the object area to be a face area. In another embodiment the object area is a human body area; the differences are that the viewer terminal acquires a target body feature, the virtual gift giving request sent to the anchor terminal carries that target body feature, and the anchor terminal determines the matching body feature from the target body feature and determines the corresponding body area as the target body area. The way the viewer terminal sends the giving request and the anchor terminal displays the corresponding virtual gift special effect is the same as in fig. 4.
The target human body feature is acquired in at least one of the following ways (the first two options are sketched in code after the list):
First: in response to the selection operation on the target body area, the viewer terminal acquires a first body length of a first body sub-region and a second body length of a second body sub-region within the target body area, and determines the ratio between the first and second body lengths as the target body feature, which represents a body-proportion characteristic. The first and second body sub-regions are two different regions of the target body; for example, the first is the region at and above the waist, and the second is the region below the waist.
Second: in response to the selection operation on the target body area, the viewer terminal acquires the total body length and total body width of the target body area, and determines the ratio between them as the target body feature. The total body length is the height of the target body, and this feature represents a body-type characteristic.
Third: in response to the selection operation on the target body area, the viewer terminal acquires clothing features within the target body area and determines them as the target body feature. The clothing features include outfit-matching features, clothing color features, accessory features, or other appearance features of the target body.
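The first two options are again simple ratios of measured lengths; a minimal sketch with hypothetical pixel values:

```python
# Sketch of the first two body-feature options: ratios of body lengths and
# widths measured from the detected body region. Pixel values are
# hypothetical.

upper_len = 92.0    # first body sub-region: waist and above
lower_len = 118.0   # second body sub-region: below the waist
body_proportion = upper_len / lower_len  # first option: body-proportion feature

total_len = upper_len + lower_len  # total body length (height of the target body)
total_width = 60.0
body_shape = total_len / total_width  # second option: body-type feature
```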
In one possible implementation, the body area includes a face area. If the viewer terminal detects a selection operation on the face area, it acquires the target face feature and performs the subsequent virtual gift special-effect display according to that feature; if it detects a selection operation on a part of the body area other than the face area, it acquires the target body feature and proceeds according to that feature.
In addition, the manner of determining the human body characteristics corresponding to the plurality of human body areas by the anchor terminal is the same as the manner of determining the target human body characteristics.
In another embodiment, where the live picture in the live interface includes a plurality of picture areas, the object area is a background area. The differences from the face-area case are that the viewer terminal acquires a target background feature, the virtual gift giving request sent to the anchor terminal carries that feature, and the anchor terminal determines the matching background feature from it and determines the corresponding background area as the target background area. The way the viewer terminal sends the giving request and the anchor terminal displays the corresponding virtual gift special effect is the same as in fig. 4.
The method for acquiring the target background features comprises the following steps: the audience terminal responds to the selection operation of the target picture area, determines a target background area in the target picture area, and acquires target background characteristics of the target background area. Wherein the target background feature is used to describe the background in the picture area.
In one possible implementation, where each anchor corresponds to one picture area, the viewer terminal performs a selection operation on a target picture area to give a virtual gift to the corresponding anchor, and the anchor terminal displays the corresponding virtual gift special effect in the background area, the body area, or the face area of that picture area.
In addition, the manner of determining the background features corresponding to the plurality of background areas by the anchor terminal is the same as the manner of determining the target background features.
Fig. 11 is a schematic structural diagram of a virtual gift-gifting apparatus according to an embodiment of the present application. Referring to fig. 11, the apparatus includes:
the display module 1101 is configured to display a live interface of a live broadcast room, where the live broadcast interface includes a plurality of object areas;
the feature acquisition module 1102 is configured to acquire a target object feature corresponding to a target object region in response to a selection operation of the target object region;
the request initiating module 1103 is configured to initiate a virtual gift-gifting request to a target object corresponding to the target object area in response to the virtual gift-gifting operation, where the virtual gift-gifting request carries a target object feature.
In one possible implementation, referring to fig. 12, the apparatus further includes:
the display module 1101 is further configured to display a virtual gift special effect corresponding to the virtual gift in the target object area.
In one possible implementation, the live interface includes a live screen, where the live screen includes a plurality of object regions, see fig. 12, and the feature obtaining module 1102 includes:
a position determining unit 1112, configured to determine a target position corresponding to a trigger operation in response to the trigger operation on the live view;
an area determination unit 1122 configured to identify a live view and determine an object area including a target position;
the feature obtaining unit 1132 is configured to determine the object area as a target object area, and obtain a target object feature corresponding to the target object area.
In another possible implementation, the request initiating module 1103 is configured to:
displaying a virtual gift giving interface on the upper layer of the live broadcast interface; or switching from the live broadcast interface to the virtual gift giving interface;
and responding to the selection operation of any virtual gift in the virtual gift presentation interface, and initiating a virtual gift presentation request.
In another possible implementation, the virtual gift gifting interface includes special effect thumbnails corresponding to a plurality of virtual gifts; the request initiating module 1103 is configured to initiate a virtual gift giving request in response to a selection operation of a special effect thumbnail corresponding to any virtual gift.
In another possible implementation, the virtual gift gifting interface includes a gifting control, a request initiation module 1103, for:
setting the selected virtual gift to a selected state in response to a selection operation of any one of the virtual gift;
and responding to the triggering operation of the gift control, and initiating a gift request of the virtual gift in the selected state.
In another possible implementation, the target object region includes a target face region, referring to fig. 12, and the feature acquisition module 1102 includes:
a key point obtaining unit 1142, configured to obtain, in response to a selection operation of the target face region, a plurality of face key points of the target face region, where the face key points include at least one of a face edge point or a face organ edge point of the target face region;
the feature acquiring unit 1132 is configured to determine a target face feature of the target face area according to the positions of the plurality of face key points.
In another possible implementation manner, referring to fig. 12, the feature acquiring unit 1132 is configured to:
determining a first face sub-feature of the target face area according to the abscissas of the plurality of face key points, wherein the first face sub-feature represents the lateral relative positions of the plurality of face key points; and
acquiring a second face sub-feature of the target face area according to the ordinates of the plurality of face key points, wherein the second face sub-feature represents the longitudinal relative positions of the plurality of face key points.
In another possible implementation manner, referring to fig. 12, the feature acquiring unit 1132 is configured to:
determining a first distance between the first face key point and the second face key point and a second distance between the first face key point and the third face key point according to the positions of the first face key point, the second face key point and the third face key point;
and determining a first ratio between the first distance and the second distance as the target face characteristic.
In another possible implementation manner, the plurality of face key points include a face edge point and a face organ edge point, see fig. 12, and the feature obtaining unit 1132 is configured to:
selecting a face key point positioned in the first face subarea or the second face subarea from a plurality of face key points;
and determining the target face characteristics of the target face area according to the positions of the selected plurality of face key points.
In another possible implementation manner, referring to fig. 12, the feature acquiring unit 1132 is configured to:
selecting, from the plurality of face key points, a first eye corner key point, a second eye corner key point and a face edge key point located at the same height as the lower eyelid key point; or
selecting, from the plurality of face key points, a first nose bridge key point, a second nose bridge key point and a third nose bridge key point.
In another possible implementation, the target object region includes a target face region, referring to fig. 12, and the feature acquisition module 1102 includes:
a parameter acquiring unit 1152 for acquiring face shape parameters of the target face area in response to the selection operation of the target face area, the face shape parameters including at least one of an aspect ratio between face length and face width, a width ratio between forehead width and chin width, a jaw angle parameter, or a chin angle parameter;
the feature acquisition unit 1132 is further configured to determine the face shape parameter as a target face feature.
In another possible implementation, the face shape parameters include a jaw angle parameter; referring to fig. 12, the parameter acquiring unit 1152 is configured to:
in response to the selection operation of the target face area, determine, according to the position of the first chin key point, the position of the second chin key point and the position of the third chin key point, a first line segment corresponding to the first and second chin key points and a second line segment corresponding to the second and third chin key points, wherein the first and second chin key points are located at the same height and the third chin key point is the vertex among the plurality of chin key points; and
determine the jaw angle parameter according to the included angle between the first line segment and the second line segment.
In another possible implementation, the face shape parameters include a chin angle parameter, see fig. 12, and a parameter acquiring unit 1152 for:
responding to the selection operation of the target face area, and determining a third line segment corresponding to the first chin key point and the second chin key point and a fourth line segment corresponding to the second chin key point and the third chin key point according to the position of the first chin key point, the position of the second chin key point and the position of the third chin key point, wherein the first chin key point and the second chin key point are positioned at the same height, and the third chin key point is a vertex in a plurality of chin key points;
and determining a chin angle parameter according to the included angle between the third line segment and the fourth line segment.
In another possible implementation, the target object region includes a target human body region, and the feature acquisition module 1102 includes:
the feature acquiring unit 1132 is further configured to, in response to the selection operation on the target human body area, acquire a first human body length of a first human body sub-region and a second human body length of a second human body sub-region in the target human body area, and determine the ratio between the first human body length and the second human body length as the target human body feature of the target human body area; or
the feature acquiring unit 1132 is further configured to, in response to the selection operation on the target human body area, acquire the total human body length and the total human body width of the target human body area, and determine the ratio between them as the target human body feature of the target human body area; or
the feature acquiring unit 1132 is further configured to, in response to the selection operation on the target human body area, acquire clothing features in the target human body area and determine the clothing features as the target human body feature of the target human body area.
In another possible implementation, the live view in the live interface includes a plurality of view areas, and the feature acquisition module 1102 includes:
the feature obtaining unit 1132 is further configured to determine a target background area in the target picture area in response to the selection operation of the target picture area, and obtain a target background feature of the target background area.
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present application, which is not described herein.
It should be noted that the virtual gift-gifting apparatus provided in the above embodiment is illustrated only with the division of the functional modules above when displaying the virtual gift special effect. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the computer device is divided into different functional modules to complete all or part of the functions described above. In addition, the virtual gift-gifting apparatus provided in the above embodiment belongs to the same concept as the virtual gift-gifting method embodiments; its detailed implementation is described in the method embodiments and is not repeated here.
Fig. 13 is a schematic structural diagram of another virtual gift-gifting apparatus according to an embodiment of the present application. Referring to fig. 13, the apparatus includes:
a request receiving module 1301, configured to receive a virtual gift-giving request, where the virtual gift-giving request carries a target object feature;
a region determination module 1302 for determining a plurality of object regions included in a live interface of a live room;
the feature matching module 1303 is configured to determine an object feature that matches a target object feature from object features corresponding to the plurality of object regions, and determine an object region corresponding to the object feature that matches the target object feature as a target object region;
and the special effect display module 1304 is configured to display a special effect of the virtual gift corresponding to the virtual gift in the target object area.
In one possible implementation manner, the feature matching module 1303 is configured to obtain difference values between object features of the plurality of object regions and the target object feature, and determine an object region corresponding to the minimum difference value as the target object region.
In another possible implementation, the special effects display module 1304 is configured to display the virtual gift special effects at a target location in the target object area.
In another possible implementation, the virtual gift giving request carries the number of virtual gifts, and the special effect display module 1304 is configured to:
in response to the number of virtual gifts being greater than a first reference number and less than a second reference number, superimposing and displaying that number of virtual gift special effects on the target object area; or
in response to the number of virtual gifts being greater than a third reference number, displaying the virtual gift special effect and text information corresponding to the virtual gift in the target object area, wherein the text information comprises the number of virtual gifts.
In another possible implementation, the object area includes a face area and the live interface includes a live picture, and the area determining module 1302 is configured to perform face recognition on the live picture and determine a plurality of face areas.
In another possible implementation, referring to fig. 14, the apparatus further includes:
the picture sending module 1305 is configured to send the live picture added with the virtual gift special effect to a live server, where the live server is configured to issue the live picture in the live room.
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present application, which is not described herein.
It should be noted that the virtual gift-gifting apparatus provided in the above embodiment is illustrated only with the division of the functional modules above when displaying the virtual gift special effect. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the computer device is divided into different functional modules to complete all or part of the functions described above. In addition, the virtual gift-gifting apparatus provided in the above embodiment belongs to the same concept as the virtual gift-gifting method embodiments; its detailed implementation is described in the method embodiments and is not repeated here.
The embodiment of the application also provides a computer device, which comprises a processor and a memory, wherein at least one program code is stored in the memory, and the at least one program code is loaded and executed by the processor to realize the operations executed in the virtual gift-gifting method of the embodiment.
In one possible implementation, the computer device is provided as a terminal. Fig. 15 is a schematic structural diagram of a terminal 1500 according to an embodiment of the present application. The terminal 1500 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1500 may also be referred to as a user device, a portable terminal, a laptop terminal, a desktop terminal, or the like.
The terminal 1500 includes: a processor 1501 and a memory 1502.
The processor 1501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1501 may be implemented in at least one hardware form of DSP (Digital Signal Processing ), FPGA (Field-Programmable Gate Array, field programmable gate array), PLA (Programmable Logic Array ). The processor 1501 may also include a main processor, which is a processor for processing data in an awake state, also called a CPU (Central Processing Unit ), and a coprocessor; a coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit, image processor) for taking care of rendering and rendering of content to be displayed by the display screen. In some embodiments, the processor 1501 may also include an AI (Artificial Intelligence ) processor for processing computing operations related to machine learning.
Memory 1502 may include one or more computer-readable storage media, which may be non-transitory. Memory 1502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1502 is used to store at least one program code for execution by processor 1501 to implement the virtual gift-gifting method provided by the method embodiments of the present application.
In some embodiments, the terminal 1500 may further optionally include: a peripheral interface 1503 and at least one peripheral device. The processor 1501, memory 1502 and peripheral interface 1503 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 1503 via a bus, signal lines, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1504, a display screen 1505, a camera assembly 1506, audio circuitry 1507, a positioning assembly 1508, and a power supply 1509.
A peripheral interface 1503 may be used to connect I/O (Input/Output) related at least one peripheral device to the processor 1501 and the memory 1502. In some embodiments, processor 1501, memory 1502, and peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 1501, the memory 1502, and the peripheral interface 1503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1504 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1504 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 1504 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1504 includes: antenna systems, RF transceivers, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and so forth. The radio frequency circuit 1504 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the world wide web, metropolitan area networks, intranets, generation mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity ) networks. In some embodiments, the radio frequency circuit 1504 may also include NFC (Near Field Communication, short range wireless communication) related circuits, which the present application is not limited to.
Display 1505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When display screen 1505 is a touch display screen, display screen 1505 also has the ability to collect touch signals at or above the surface of display screen 1505. The touch signal may be input to the processor 1501 as a control signal for processing. At this point, display 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1505 may be one, disposed on the front panel of the terminal 1500; in other embodiments, the display 1505 may be at least two, respectively disposed on different surfaces of the terminal 1500 or in a folded design; in other embodiments, display 1505 may be a flexible display disposed on a curved surface or a folded surface of terminal 1500. Even more, the display 1505 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display screen 1505 may be made of LCD (Liquid Crystal Display ), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. The front camera is arranged on the front panel of the terminal and the rear camera on the back. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, or a telephoto camera, so that the main camera can be fused with the depth-of-field camera to realize a background-blurring function, or with the wide-angle camera to realize panoramic shooting, Virtual Reality (VR) shooting, or other fused shooting functions. In some embodiments, the camera assembly 1506 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash combines a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuitry 1507 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, inputting the electric signals to the processor 1501 for processing, or inputting the electric signals to the radio frequency circuit 1504 for voice communication. For purposes of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different portions of the terminal 1500. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 1507 may also include a headphone jack.
The positioning component 1508 is for positioning a current geographic location of the terminal 1500 to enable navigation or LBS (Location Based Service, location-based services). The positioning component 1508 may be a positioning component based on the United states GPS (Global Positioning System ), the Beidou system of China, the Granati positioning system of Russia, or the Galileo positioning system of the European Union.
The power supply 1509 is used to power the various components in the terminal 1500. The power supply 1509 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 1509 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 1500 also includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to: acceleration sensor 1511, gyroscope sensor 1512, pressure sensor 1513, fingerprint sensor 1514, optical sensor 1515, and proximity sensor 1516.
The acceleration sensor 1511 may detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1500. For example, the acceleration sensor 1511 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1501 may control the display screen 1505 to display the user interface in a landscape view or a portrait view based on the gravitational acceleration signal acquired by the acceleration sensor 1511. The acceleration sensor 1511 may also be used for the acquisition of motion data of a game or user.
The gyro sensor 1512 may detect a body direction and a rotation angle of the terminal 1500, and the gyro sensor 1512 may collect 3D motion of the terminal 1500 by a user in cooperation with the acceleration sensor 1511. The processor 1501, based on the data collected by the gyro sensor 1512, may implement the following functions: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 1513 may be disposed on a side frame of the terminal 1500 and/or under the display 1505. When the pressure sensor 1513 is disposed on the side frame of the terminal 1500, a grip signal of the user on the terminal 1500 may be detected, and the processor 1501 performs left-right hand recognition or quick operation according to the grip signal collected by the pressure sensor 1513. When the pressure sensor 1513 is disposed at the lower layer of the display screen 1505, the processor 1501 realizes control of the operability control on the UI interface according to the pressure operation of the user on the display screen 1505. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1514 is used for collecting the fingerprint of the user, and the processor 1501 recognizes the identity of the user according to the collected fingerprint of the fingerprint sensor 1514, or the fingerprint sensor 1514 recognizes the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1501 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 1514 may be disposed on the front, back, or side of the terminal 1500. When a physical key or vendor Logo is provided on the terminal 1500, the fingerprint sensor 1514 may be integrated with the physical key or vendor Logo.
The optical sensor 1515 is used to collect the ambient light intensity. In one embodiment, processor 1501 may control the display brightness of display screen 1505 based on the intensity of ambient light collected by optical sensor 1515. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1505 is turned up; when the ambient light intensity is low, the display luminance of the display screen 1505 is turned down. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1515.
A proximity sensor 1516, also referred to as a distance sensor, is provided on the front panel of the terminal 1500. The proximity sensor 1516 is used to collect the distance between the user and the front of the terminal 1500. In one embodiment, when the proximity sensor 1516 detects a gradual decrease in the distance between the user and the front of the terminal 1500, the processor 1501 controls the display 1505 to switch from the on-screen state to the off-screen state; when the proximity sensor 1516 detects that the distance between the user and the front surface of the terminal 1500 gradually increases, the processor 1501 controls the display screen 1505 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 15 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
In another possible implementation, the computer device is provided as a server. Fig. 16 is a schematic diagram of a server according to an embodiment of the present application, where the server 1600 may have a relatively large difference due to different configurations or performances, and may include one or more processors (Central Processing Units, CPU) 1601 and one or more memories 1602, where at least one program code is stored in the memories 1602 and is loaded and executed by the processors 1601 to implement the methods according to the above-described method embodiments. Of course, the server may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing the functions of the device, which are not described herein.
The embodiment of the present application also provides a computer-readable storage medium having at least one program code stored therein, the at least one program code being loaded and executed by a processor to implement the operations performed in the virtual gift giving method of the above embodiment.
Embodiments of the present application also provide a computer program product or computer program comprising computer program code stored in a computer-readable storage medium; a processor loads and executes the computer program code to implement the operations performed in the virtual gift giving method of the above embodiments.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disc.
The foregoing is merely an optional embodiment of the present application and is not intended to limit the present application; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the embodiments of the present application shall fall within the protection scope of the present application.

Claims (23)

1. A virtual gift giving method, the method comprising:
displaying a live broadcast interface of a live broadcast room, wherein the live broadcast interface comprises a plurality of object areas, and a plurality of anchors perform live streaming in the live broadcast room;
acquiring, in response to a selection operation of a target object area, target object features corresponding to the target object area;
initiating, in response to a virtual gift giving operation, a virtual gift giving request to a target object corresponding to the target object area, wherein the virtual gift giving request carries the target object features;
wherein the target object features are received by an anchor terminal, and the anchor terminal is used for determining a plurality of object areas included in its current live broadcast interface, acquiring difference values between the object features of the plurality of object areas determined by the anchor terminal and the target object features, determining the object area corresponding to the minimum difference value among the plurality of difference values as the target object area, and determining that the target object area determined by the anchor terminal corresponds to the same target object as the target object area obtained by the selection operation, wherein the target object is the anchor to whom the virtual gift is given.
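To make the matching step of claim 1 concrete, the following is a minimal sketch, assuming object features are equal-length numeric vectors and using a sum of absolute component differences as the difference value; the claim itself fixes neither the feature representation nor the distance measure.

def feature_difference(a, b):
    # One possible difference value: sum of absolute component differences.
    return sum(abs(x - y) for x, y in zip(a, b))

def match_target_area(area_features, target_feature):
    # area_features: {area_id: feature_vector} for the object areas the
    # anchor terminal finds in its current live broadcast interface.
    # Returns the area whose feature differs least from the target feature
    # carried in the virtual gift giving request.
    return min(area_features,
               key=lambda aid: feature_difference(area_features[aid],
                                                  target_feature))

# Hypothetical example: two anchors' face areas versus the requested feature.
areas = {"left": [0.42, 1.31, 0.77], "right": [0.41, 1.30, 0.77]}
print(match_target_area(areas, [0.41, 1.30, 0.76]))  # -> "right"

Because both terminals compute the feature from the same face, the minimum-difference area on the anchor terminal should coincide with the area the viewer selected, even if the two terminals lay out the live picture differently.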
2. The method of claim 1, wherein after initiating the virtual gift giving request to the target object corresponding to the target object area in response to the virtual gift giving operation, the method further comprises:
displaying the virtual gift special effect corresponding to the virtual gift in the target object area.
3. The method according to claim 1, wherein the live broadcast interface includes a live broadcast picture, the live broadcast picture includes the plurality of object areas, and the acquiring, in response to the selection operation of the target object area, the target object features corresponding to the target object area comprises:
determining a target position corresponding to a triggering operation in response to the triggering operation on the live broadcast picture;
identifying the live broadcast picture and determining an object area that includes the target position;
and determining the object area as the target object area, and acquiring the target object features corresponding to the target object area.
4. The method of claim 1, wherein initiating the virtual gift giving request to the target object corresponding to the target object area in response to the virtual gift giving operation comprises:
displaying a virtual gift presentation interface on an upper layer of the live broadcast interface, or switching from the live broadcast interface to the virtual gift presentation interface;
and initiating the virtual gift giving request in response to a selection operation of any virtual gift in the virtual gift presentation interface.
5. The method according to claim 1, wherein the target object area includes a target face area, and the acquiring, in response to the selection operation of the target object area, the target object features corresponding to the target object area comprises:
acquiring a plurality of face key points of the target face area in response to a selection operation of the target face area, wherein the face key points comprise at least one of face edge points or facial organ edge points of the target face area;
and determining target face features of the target face area according to the positions of the plurality of face key points.
6. The method of claim 5, wherein determining the target face features of the target face area based on the positions of the plurality of face key points comprises at least one of:
determining a first face sub-feature of the target face area according to the abscissas of the plurality of face key points, wherein the first face sub-feature represents the lateral relative positions of the plurality of face key points;
and acquiring a second face sub-feature of the target face area according to the ordinates of the plurality of face key points, wherein the second face sub-feature represents the longitudinal relative positions of the plurality of face key points.
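A minimal sketch of these two sub-features follows; normalizing each coordinate by the size of the key points' bounding box is an assumption (the claim only requires that the sub-features represent relative positions), but it makes the feature insensitive to where the face sits in the picture.

def relative_positions(points):
    # points: list of (x, y) face key point coordinates in the live picture.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w = (max(xs) - min(xs)) or 1.0   # guard against a degenerate box
    h = (max(ys) - min(ys)) or 1.0
    first_sub_feature = [(x - min(xs)) / w for x in xs]    # lateral positions
    second_sub_feature = [(y - min(ys)) / h for y in ys]   # longitudinal positions
    return first_sub_feature, second_sub_feature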
7. The method of claim 5, wherein determining the target face features of the target face area based on the positions of the plurality of face key points comprises:
determining a first distance between a first face key point and a second face key point, and a second distance between the first face key point and a third face key point, according to the positions of the first face key point, the second face key point, and the third face key point;
and determining a first ratio between the first distance and the second distance as the target face feature.
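Concretely, the first ratio can be computed as in the sketch below; Euclidean distance is an assumption of this illustration, as the claim only names two distances among three key points. A ratio of distances is scale-invariant, so the feature can match across terminals even when the same face appears at different sizes in different live pictures.

import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def first_ratio(kp1, kp2, kp3):
    # First distance: first to second face key point.
    # Second distance: first to third face key point.
    return distance(kp1, kp2) / distance(kp1, kp3)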
8. The method of claim 5, wherein the plurality of face key points comprise face edge points and facial organ edge points, and wherein determining the target face features of the target face area based on the positions of the plurality of face key points comprises:
selecting face key points located in a first face sub-area or a second face sub-area from the plurality of face key points;
and determining the target face features of the target face area according to the positions of the selected face key points.
9. The method of claim 8, wherein selecting face key points located in the first face sub-area or the second face sub-area from the plurality of face key points comprises:
selecting, from the plurality of face key points, a first corner key point, a second corner key point, and a face edge key point located at the same height as the lower eyelid key point; or
selecting a first nose bridge key point, a second nose bridge key point, and a third nose bridge key point from the plurality of face key points.
10. The method according to claim 1, wherein the target object area includes a target face area, and the acquiring, in response to the selection operation of the target object area, the target object features corresponding to the target object area comprises:
acquiring face shape parameters of the target face area in response to a selection operation of the target face area, wherein the face shape parameters comprise at least one of an aspect ratio of face length to face width, a width ratio of forehead width to chin width, a jaw angle parameter, or a chin angle parameter;
and determining the face shape parameters as the target face features.
11. The method of claim 10, wherein the face shape parameters include the jaw angle parameter, and wherein acquiring the face shape parameters of the target face area in response to the selection operation of the target face area comprises:
in response to the selection operation of the target face area, determining a first line segment corresponding to a first chin key point and a second line segment corresponding to a third chin key point according to the positions of the first chin key point, the second chin key point, and the third chin key point, wherein the first chin key point and the second chin key point are at the same height, and the third chin key point is the vertex among a plurality of chin key points;
and determining the jaw angle parameter according to the included angle between the first line segment and the second line segment.
12. The method of claim 10, wherein the face shape parameters include the chin angle parameter, and wherein acquiring the face shape parameters of the target face area in response to the selection operation of the target face area comprises:
in response to the selection operation of the target face area, determining a third line segment corresponding to the first chin key point and the second chin key point, and a fourth line segment corresponding to the second chin key point and the third chin key point, according to the positions of the first chin key point, the second chin key point, and the third chin key point, wherein the first chin key point and the second chin key point are located at the same height, and the third chin key point is the vertex among the plurality of chin key points;
and determining the chin angle parameter according to the included angle between the third line segment and the fourth line segment.
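Both angle parameters come down to the included angle between two line segments, which can be sketched with a dot product as below. The shared end point and the degree output are assumptions of this illustration; the claims specify only which key points define the segments.

import math

def included_angle(p, shared, q):
    # Angle at `shared` between segments shared->p and shared->q, in degrees.
    v1 = (p[0] - shared[0], p[1] - shared[1])
    v2 = (q[0] - shared[0], q[1] - shared[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Hypothetical chin key points: kp1 and kp2 at the same height, kp3 the vertex.
kp1, kp2, kp3 = (100.0, 200.0), (140.0, 200.0), (120.0, 240.0)
chin_angle = included_angle(kp1, kp2, kp3)  # angle between kp2-kp1 and kp2-kp3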
13. The method according to claim 1, wherein the target object area includes a target human body area, and the acquiring, in response to the selection operation of the target object area, the target object features corresponding to the target object area comprises:
in response to a selection operation of the target human body area, acquiring a first human body length of a first human body sub-area and a second human body length of a second human body sub-area in the target human body area, and determining the ratio between the first human body length and the second human body length as a target human body feature of the target human body area; or
in response to a selection operation of the target human body area, acquiring a total human body length and a total human body width of the target human body area, and determining the ratio between the total human body length and the total human body width as a target human body feature of the target human body area; or
in response to a selection operation of the target human body area, acquiring clothing features in the target human body area, and determining the clothing features as target human body features of the target human body area.
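The first two features of claim 13 are plain ratios; the clothing feature is less constrained. One minimal sketch, assuming the clothing feature is a coarse, normalized color histogram over pixels of the clothing portion (the claim names "clothing features" without fixing a representation):

def clothing_histogram(pixels, bins=4):
    # pixels: iterable of (r, g, b) tuples with channels in 0..255.
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = sum(hist) or 1
    # Normalize so human body areas of different sizes stay comparable.
    return [count / total for count in hist]

A normalized histogram can then be compared across terminals with the same minimum-difference rule used for the other object features.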
14. The method according to claim 1, wherein the live broadcast picture in the live broadcast interface includes a plurality of picture areas, and the acquiring, in response to the selection operation of the target object area, the target object features corresponding to the target object area comprises:
determining a target background area in a target picture area in response to a selection operation of the target picture area, and acquiring target background features of the target background area.
15. A virtual gift giving method, the method comprising:
receiving a virtual gift giving request, wherein the virtual gift giving request carries target object features;
determining a plurality of object areas included in a live broadcast interface of a live broadcast room, wherein a plurality of anchors perform live streaming in the live broadcast room;
respectively acquiring difference values between object features of the plurality of object areas and the target object features, and determining the object area corresponding to the minimum difference value among the plurality of difference values as a target object area;
and displaying a virtual gift special effect corresponding to the virtual gift in the target object area, wherein the target object corresponding to the target object area is the anchor to whom the virtual gift is given.
16. The method of claim 15, wherein displaying the virtual gift special effect corresponding to the virtual gift in the target object area comprises:
and displaying the virtual gift special effect at a target position in the target object area.
17. The method of claim 15, wherein the virtual gift giving request carries the number of virtual gifts, and wherein displaying the virtual gift special effect corresponding to the virtual gift in the target object area comprises:
in response to the number of virtual gifts being greater than a first reference number and less than a second reference number, displaying that number of virtual gift special effects in the target object area in a superimposed manner; or
in response to the number of virtual gifts being greater than a third reference number, displaying the virtual gift special effect and text information corresponding to the virtual gift in the target object area, wherein the text information includes the number of virtual gifts.
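The count-dependent rule of claim 17 is a simple branch; below is a hypothetical sketch in which the reference numbers are made-up thresholds (the claim leaves their values open).

FIRST_REF, SECOND_REF, THIRD_REF = 1, 10, 10   # assumed values

def render_gift_effects(count, effect="gift_effect"):
    # Moderate counts: superimpose one special effect per gift.
    if FIRST_REF < count < SECOND_REF:
        return [effect] * count
    # Large counts: a single special effect plus text carrying the count.
    if count > THIRD_REF:
        return [effect, f"x{count}"]
    return [effect]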
18. The method of claim 15, wherein the object areas comprise face areas, the live broadcast interface comprises a live broadcast picture, and determining the plurality of object areas included in the live broadcast interface of the live broadcast room comprises:
performing face recognition on the live broadcast picture to determine a plurality of face areas.
19. The method of claim 15, wherein after displaying the virtual gift special effect corresponding to the virtual gift in the target object area, the method further comprises:
sending the live broadcast picture with the added virtual gift special effect to a live broadcast server, wherein the live broadcast server is used for publishing the live broadcast picture in the live broadcast room.
20. A virtual gift giving apparatus, the apparatus comprising:
a display module, which is used for displaying a live broadcast interface of a live broadcast room, wherein the live broadcast interface comprises a plurality of object areas, and a plurality of anchors perform live streaming in the live broadcast room;
a feature acquisition module, which is used for acquiring, in response to a selection operation of a target object area, target object features corresponding to the target object area;
a request initiation module, which is used for initiating, in response to a virtual gift giving operation, a virtual gift giving request, wherein the virtual gift giving request carries the target object features;
wherein the target object features are received by an anchor terminal, and the anchor terminal is used for determining a plurality of object areas included in its current live broadcast interface, acquiring difference values between the object features of the plurality of object areas determined by the anchor terminal and the target object features, determining the object area corresponding to the minimum difference value among the plurality of difference values as the target object area, and determining that the target object area determined by the anchor terminal corresponds to the same target object as the target object area obtained by the selection operation, wherein the target object is the anchor to whom the virtual gift is given.
21. A virtual gift giving apparatus, the apparatus comprising:
a request receiving module, which is used for receiving a virtual gift giving request, wherein the virtual gift giving request carries target object features;
an area determining module, which is used for determining a plurality of object areas included in a live broadcast interface of a live broadcast room, wherein a plurality of anchors perform live streaming in the live broadcast room;
a feature matching module, which is used for respectively acquiring difference values between the object features of the plurality of object areas and the target object features, and determining the object area corresponding to the minimum difference value among the plurality of difference values as a target object area;
a special effect display module, which is used for displaying a virtual gift special effect corresponding to the virtual gift in the target object area, wherein the target object corresponding to the target object area is the anchor to whom the virtual gift is given.
22. A computer device comprising a processor and a memory, wherein the memory stores at least one program code that is loaded and executed by the processor to implement the operations performed in the virtual gift giving method of any one of claims 1 to 14, or the operations performed in the virtual gift giving method of any one of claims 15 to 19.
23. A computer-readable storage medium having stored therein at least one program code, the at least one program code being loaded and executed by a processor to implement the operations performed in the virtual gift giving method of any one of claims 1 to 14, or the operations performed in the virtual gift giving method of any one of claims 15 to 19.
CN202011403675.2A 2020-12-02 2020-12-02 Virtual gift giving method, device, computer equipment and medium Active CN112565806B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011403675.2A CN112565806B (en) 2020-12-02 2020-12-02 Virtual gift giving method, device, computer equipment and medium


Publications (2)

Publication Number Publication Date
CN112565806A (en) 2021-03-26
CN112565806B (en) 2023-08-29

Family

ID=75048455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011403675.2A Active CN112565806B (en) 2020-12-02 2020-12-02 Virtual gift giving method, device, computer equipment and medium

Country Status (1)

Country Link
CN (1) CN112565806B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112933598B (en) * 2021-04-06 2022-12-20 腾讯科技(深圳)有限公司 Interaction method, device and equipment based on virtual gift and storage medium
CN114268808A (en) * 2021-12-29 2022-04-01 广州方硅信息技术有限公司 Live broadcast interactive information pushing method, system, device, equipment and storage medium
CN114449305A (en) * 2022-01-29 2022-05-06 上海哔哩哔哩科技有限公司 Gift animation playing method and device in live broadcast room

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295476A (en) * 2015-05-29 2017-01-04 腾讯科技(深圳)有限公司 Face key point localization method and device
CN108391153A (en) * 2018-01-29 2018-08-10 北京潘达互娱科技有限公司 Virtual present display methods, device and electronic equipment
CN109889858A (en) * 2019-02-15 2019-06-14 广州酷狗计算机科技有限公司 Information processing method, device and the computer readable storage medium of virtual objects
CN110493630A (en) * 2019-09-11 2019-11-22 广州华多网络科技有限公司 The treating method and apparatus of virtual present special efficacy, live broadcast system
WO2020021319A1 (en) * 2018-07-27 2020-01-30 Yogesh Chunilal Rathod Augmented reality scanning of real world object or enter into geofence to display virtual objects and displaying real world activities in virtual world having corresponding real world geography
CN111147877A (en) * 2019-12-27 2020-05-12 广州华多网络科技有限公司 Virtual gift presenting method, device, equipment and storage medium
CN111246232A (en) * 2020-01-17 2020-06-05 广州华多网络科技有限公司 Live broadcast interaction method and device, electronic equipment and storage medium
CN111444860A (en) * 2020-03-30 2020-07-24 东华大学 Expression recognition method and system


Also Published As

Publication number Publication date
CN112565806A (en) 2021-03-26

Similar Documents

Publication Publication Date Title
CN110971930B (en) Live virtual image broadcasting method, device, terminal and storage medium
CN110992493B (en) Image processing method, device, electronic equipment and storage medium
CN111464749B (en) Method, device, equipment and storage medium for image synthesis
CN112565806B (en) Virtual gift giving method, device, computer equipment and medium
CN108965922B (en) Video cover generation method and device and storage medium
US20220164159A1 (en) Method for playing audio, terminal and computer-readable storage medium
CN109947338B (en) Image switching display method and device, electronic equipment and storage medium
CN110740340B (en) Video live broadcast method and device and storage medium
CN109922356B (en) Video recommendation method and device and computer-readable storage medium
WO2022134632A1 (en) Work processing method and apparatus
CN111723803B (en) Image processing method, device, equipment and storage medium
CN111083513B (en) Live broadcast picture processing method and device, terminal and computer readable storage medium
CN112396076A (en) License plate image generation method and device and computer storage medium
CN112967261B (en) Image fusion method, device, equipment and storage medium
CN112637624B (en) Live stream processing method, device, equipment and storage medium
CN111369434B (en) Method, device, equipment and storage medium for generating spliced video covers
CN110942426B (en) Image processing method, device, computer equipment and storage medium
CN114594885A (en) Application icon management method, device and equipment and computer readable storage medium
CN114155132A (en) Image processing method, device, equipment and computer readable storage medium
CN114595019A (en) Theme setting method, device and equipment of application program and storage medium
CN112241987B (en) System, method, device and storage medium for determining defense area
CN108881715B (en) Starting method and device of shooting mode, terminal and storage medium
CN112052806A (en) Image processing method, device, equipment and storage medium
CN113592874A (en) Image display method and device and computer equipment
CN112399080A (en) Video processing method, device, terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant