CN106303662B - Image processing method and device in video live streaming - Google Patents

Image processing method and device in video live streaming

Info

Publication number
CN106303662B
CN106303662B CN201610763148.XA CN201610763148A CN 106303662 B
Authority
CN
China
Prior art keywords
video live streaming
video
image
time
predetermined condition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610763148.XA
Other languages
Chinese (zh)
Other versions
CN106303662A (en)
Inventor
黄丽如
程广
陈伟
肖媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN201610763148.XA priority Critical patent/CN106303662B/en
Publication of CN106303662A publication Critical patent/CN106303662A/en
Application granted granted Critical
Publication of CN106303662B publication Critical patent/CN106303662B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/61Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/61Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H04L65/612Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/70Media network packetisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/75Media network packet handling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/80Responding to QoS
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Abstract

The invention discloses an image processing method and device in video live streaming. The method comprises: determining that an object is triggered and displayed in a live video stream; monitoring whether a predetermined condition is met; when the predetermined condition is met, obtaining a video stream image from the current live stream; obtaining an object image of the object; and synthesizing at least the video stream image and the object image. The invention solves the technical problem in the prior art that an image containing both a video stream image and an object image cannot be accurately acquired.

Description

Image processing method and device in video live streaming
Technical field
The present invention relates to the field of image processing, and in particular to an image processing method and device in video live streaming.
Background art
In modern society, with the popularity and prevalence of network live-streaming platforms and video streaming, users can watch entertainment programs such as singing or dancing provided by a live streamer (for example, a network anchor, hereinafter referred to as the anchor) through a streaming platform. While watching a live program, users can express their appreciation of the anchor by giving virtual gifts with special effects. At the moment a dazzling virtual gift effect arrives, the surprised and delighted expression the anchor shows is often at its most beautiful and uniquely dazzling, and the gift giver gains great satisfaction and a sense of honor from bringing about that beautiful moment.
Generally, the display time of a gift effect is short, and within that short time the anchor's expression and movements are also fleeting. Therefore, in order to preserve that moment of the dazzling effect together with the anchor's expression and movements, anchors and users usually resort to capturing it by taking a screenshot or photographing the computer screen with a mobile phone. However, on the anchor side, either capture method distracts the anchor with an additional operation, so that the anchor's expression and movements at the moment of capture cannot be completely natural and well timed. On the user side, users can rarely capture the anchor and the effect at their best moment; moreover, because the effect plays only briefly, the anchor and the user often have no time to prepare and react before the gorgeous effect has already finished. Even if a user does capture the gorgeous effect, other factors (for example, improper use of the screenshot software, failure to focus, or camera shake when taking the photo) may leave the captured image blurry, coarse and imperfect. Further, even if the user has truly captured that gorgeous, perfect moment, unless the photo is shared, other users still cannot see and share it, so the beautiful moment cannot be preserved and displayed.
No effective solution has yet been proposed for the above problem.
Summary of the invention
The embodiments of the present invention provide an image processing method and device in video live streaming, so as at least to solve the technical problem in the prior art that an image containing both a video stream image and an object image cannot be accurately acquired.
According to one aspect of the embodiments of the present invention, an image processing method in video live streaming is provided, comprising: determining that an object is triggered and displayed in a live video stream; monitoring whether a predetermined condition is met; when the predetermined condition is met, obtaining a video stream image from the current live stream; obtaining an object image of the object; and synthesizing at least the video stream image and the object image.
Further, the predetermined condition includes at least one of the following: the similarity between a gesture made by the user in the live stream and a predetermined gesture is greater than or equal to a predetermined similarity; the ratio of the user's face image to the live stream image is within a predetermined range.
Further, if the predetermined condition includes that the similarity between the gesture made by the user in the live stream and the predetermined gesture is greater than or equal to the predetermined similarity, then after determining that the object is triggered and displayed in the live stream, the method further comprises: showing the predetermined gesture to the user.
Further, if the predetermined condition is not found to be met within a first predetermined time period, the predetermined condition is deemed to have been met at the end of the first predetermined time period.
Further, after determining that the object is triggered and displayed in the live stream, a first countdown is shown to the user, wherein the first countdown corresponds to the first predetermined time period.
Further, the first predetermined time period is the display duration of the object.
Further, after determining that the object is triggered and displayed in the live stream, the method further comprises: showing a second countdown to the user, wherein the predetermined condition is the end of a second predetermined time period, and the second countdown corresponds to the second predetermined time period.
Further, the second predetermined time period runs from the moment the object starts to be displayed to the moment the best display effect of the object appears.
Further, obtaining the video stream image in the current live stream and obtaining the object image of the object comprise: obtaining a screenshot of the video stream image and the object image in the current video playback window, wherein the video playback window is used to play the live video stream and the object.
Further, synthesizing at least the video stream image and the object image comprises: scaling the obtained video stream image according to a first preset ratio; scaling the obtained object image of the object according to a second preset ratio; and synthesizing at least the scaled video stream image and the scaled object image.
According to another aspect of the embodiments of the present invention, an image processing device in video live streaming is also provided, comprising: a determining module, configured to determine that an object is triggered and displayed in a live video stream; a monitoring module, configured to monitor whether a predetermined condition is met; a first obtaining module, configured to obtain a video stream image from the current live stream when the predetermined condition is met; a second obtaining module, configured to obtain an object image of the object; and a synthesis module, configured to synthesize at least the video stream image and the object image.
Further, the predetermined condition includes at least one of the following: the similarity between a gesture made by the user in the live stream and a predetermined gesture is greater than or equal to a predetermined similarity; the ratio of the user's face image to the live stream image is within a predetermined range.
Further, the device further comprises: a first display module, configured to show the predetermined gesture to the user after determining that the object is triggered and displayed in the live stream, in the case where the predetermined condition includes that the similarity between the gesture made by the user in the live stream and the predetermined gesture is greater than or equal to the predetermined similarity.
Further, if the predetermined condition is not found to be met within a first predetermined time period, the predetermined condition is deemed to have been met at the end of the first predetermined time period.
Further, the device further comprises: a second display module, configured to show a first countdown to the user after determining that the object is triggered and displayed in the live stream, wherein the first countdown corresponds to the first predetermined time period.
Further, the first predetermined time period is the display duration of the object.
Further, the device further comprises: a third display module, configured to show a second countdown to the user after determining that the object is triggered and displayed in the live stream, wherein the predetermined condition is the end of a second predetermined time period, and the second countdown corresponds to the second predetermined time period.
Further, the second predetermined time period runs from the moment the object starts to be displayed to the moment the best display effect of the object appears.
Further, the first obtaining module and the second obtaining module comprise: an obtaining unit, configured to obtain a screenshot of the video stream image and the object image in the current video playback window, wherein the video playback window is used to play the live video stream and the object.
Further, the synthesis module comprises: a first scaling unit, configured to scale the obtained video stream image according to a first preset ratio; a second scaling unit, configured to scale the obtained object image of the object according to a second preset ratio; and a synthesis unit, configured to synthesize at least the scaled video stream image and the scaled object image.
In the embodiments of the present invention, the following approach is adopted: it is determined that an object is triggered and displayed in a live video stream; under the trigger of a predetermined condition that occurs while the object is being displayed, a video stream image corresponding to the predetermined condition is obtained from the live stream; an object image of the object corresponding to the predetermined condition is obtained; and at least the video stream image and the object image are synthesized. By obtaining the video stream image from the live stream under the trigger of the predetermined condition, obtaining the object image corresponding to the predetermined condition, and then synthesizing the object image and the video stream image, an image containing both the object and the anchor is obtained. Compared with the prior-art approach of obtaining a picture by snapshot, this achieves the purpose of accurately acquiring a video stream image containing an object image, realizes the technical effect of improving the acquisition of video stream images containing object images, and thereby solves the technical problem in the prior art that an image containing both a video stream image and an object image cannot be accurately acquired.
Brief description of the drawings
The drawings described herein are used to provide a further understanding of the present invention and constitute a part of this application. The illustrative embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of the present invention. In the accompanying drawings:
Fig. 1 is a flowchart of an image processing method in video live streaming according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of an image processing device in video live streaming according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of another image processing device in video live streaming according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of an anchor-side video stream receiving unit according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a viewer-side video stream receiving unit according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a gift effect management unit according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of an effect animation display unit according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of a photo synthesis unit according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of another embodiment of a photo synthesis unit according to an embodiment of the present invention; and
Fig. 10 is a schematic diagram of a photo post-processing unit according to an embodiment of the present invention.
Specific embodiment
In order to enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second" and the like in the description, the claims and the above drawings are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present invention described herein can be implemented in an order other than those illustrated or described herein. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, product or device that comprises a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to such a process, method, product or device.
According to an embodiment of the present invention, an embodiment of an image processing method in video live streaming is provided. It should be noted that the steps shown in the flowcharts of the accompanying drawings may be executed in a computer system such as a set of computer-executable instructions, and, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from that described herein.
Fig. 1 is a flowchart of an image processing method in video live streaming according to an embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps:
Step S102: determine that an object is triggered and displayed in the live video stream.
The image processing method in video live streaming provided by the embodiment of the present invention can be applied to a network live-streaming platform or a video streaming platform. When applied to a network live-streaming platform, the video stream of the anchor's broadcast is played on one or more viewer terminals, and the above object may be a special-effect animation of a virtual gift.
In the embodiment of the present invention, when a viewer gives a virtual gift to the anchor, different effect animations are triggered depending on the conditions met by the gift. For example, for the virtual gift "lollipop", when the viewer gives 10 lollipops the effect animation may be a single sparkling lollipop, whereas when the viewer gives 60 lollipops the effect animation may be a spectacular shower of lollipops. Therefore, in the embodiment of the present invention, when the virtual gift given by the viewer reaches the effect trigger condition, it is determined that the effect animation of the virtual gift (that is, the above object) is triggered and displayed in the live stream.
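As an illustration of how such a trigger condition might be evaluated on the client, the short sketch below maps a gift and its quantity to the effect animation to be displayed. The gift name, the thresholds of 10 and 60, and the effect identifiers are assumed for illustration only and are not taken from the disclosure itself.

# Hypothetical mapping from (gift, quantity) to the triggered effect animation.
# Thresholds and effect identifiers are illustrative only.
LOLLIPOP_EFFECTS = [
    (60, "lollipop_shower"),     # many lollipops: a spectacular shower of lollipops
    (10, "sparkling_lollipop"),  # a few lollipops: a single sparkling lollipop
]

def triggered_effect(gift_name, quantity):
    """Return the effect animation id triggered by a gift, or None if no effect."""
    if gift_name != "lollipop":
        return None
    for threshold, effect_id in LOLLIPOP_EFFECTS:
        if quantity >= threshold:
            return effect_id
    return None

assert triggered_effect("lollipop", 60) == "lollipop_shower"
assert triggered_effect("lollipop", 10) == "sparkling_lollipop"
assert triggered_effect("lollipop", 3) is None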
Step S104: monitor whether a predetermined condition is met.
In the embodiment of the present invention, after it is determined that the object is triggered and displayed in the live stream, the anchor may make a gesture or expression matching the object. However, in order to ensure that the image synthesized in the following step S110 contains the best display effect of the object and the anchor's best gesture or expression, a predetermined condition may be set in advance in the embodiment of the present invention, wherein the predetermined condition includes at least one of the following: the similarity between a gesture made by the user in the live stream and a predetermined gesture is greater than or equal to a predetermined similarity; the ratio of the user's face image to the live stream image is within a predetermined range. Then, when the predetermined condition is met, the following step S106 is executed, that is, the video stream image in the current live stream is obtained.
Step S106: when the predetermined condition is met, obtain the video stream image in the current live stream.
In the embodiment of the present invention, after the object (that is, the effect animation) is triggered, the anchor client judges whether the predetermined condition is met. If the predetermined condition is met, the video stream image in the live stream at the current moment is obtained, and step S108 is executed to obtain the object image of the object.
Step S108: obtain the object image of the object.
Step S110: synthesize at least the video stream image and the object image.
In the embodiment of the present invention, after obtaining the object image and the video stream image, the anchor client can synthesize the video stream image and the object image into a single picture and save the synthesized picture. The synthesized image contains the best display effect of the object, the anchor's best gesture or expression, and so on.
In the embodiment of the present invention, by obtaining the video stream image from the live stream under the trigger of the predetermined condition, obtaining the object image corresponding to the predetermined condition, and then synthesizing the object image and the video stream image, an image containing both the object and the anchor is obtained. Compared with the prior-art approach of obtaining a picture by snapshot, this achieves the purpose of accurately acquiring a video stream image containing an object image, realizes the technical effect of improving the acquisition of video stream images containing object images, and thereby solves the technical problem in the prior art that an image containing both a video stream image and an object image cannot be accurately acquired.
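To make the flow of steps S102 to S110 concrete, the following minimal sketch strings the steps together on the anchor client. The stream, effect and condition_met helpers are assumed placeholders for the client's actual capture, effect and recognition components and are not part of the original disclosure.

import time

def wait_and_capture(stream, effect, condition_met, display_time_s):
    """Sketch of steps S102-S108: after the effect (object) is triggered, poll the
    live frames until the predetermined condition is met (step S104) or the
    effect's display time runs out, in which case the condition is deemed met.
    Returns the video stream image and the object image for step S110."""
    deadline = time.monotonic() + display_time_s
    frame = stream.current_frame()
    while time.monotonic() < deadline and not condition_met(frame):
        time.sleep(0.05)                   # poll roughly every 50 ms
        frame = stream.current_frame()
    effect_image = effect.current_image()  # step S108: object image at this moment
    return frame, effect_image             # composited in step S110

The returned pair would then be scaled and composited as described for step S110 and in the scaling sketch later in this description.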
As can be seen from the description of step S104, in the embodiment of the present invention the predetermined condition includes at least one of the following: the similarity between the gesture made by the user in the live stream and the predetermined gesture is greater than or equal to the predetermined similarity; and the ratio of the user's face image to the live stream image is within the predetermined range.
That is, the predetermined condition may be that the similarity between the gesture made by the user in the live stream and the predetermined gesture is greater than or equal to the predetermined similarity; or that the ratio of the user's face image to the live stream image is within the predetermined range; or a combination of the two, namely that the similarity between the gesture made by the user in the live stream and the predetermined gesture is greater than or equal to the predetermined similarity and the ratio of the user's face image to the live stream image is within the predetermined range. The three cases are introduced below.
Case 1: the similarity between the gesture made by the user in the live stream and the predetermined gesture is greater than or equal to the predetermined similarity.
After determining that the object is triggered and displayed in the video playback window, the anchor client can obtain the video stream images in the live stream, that is, obtain each video frame image. It then determines whether a gesture made by the user appears in the live stream, that is, whether a gesture made by the user appears in each video frame image. When it is determined that a gesture made by the user appears in the live stream, the similarity between the gesture made by the user and the predetermined gesture is calculated, and it is judged whether the calculated similarity is greater than or equal to the preset similarity. If the calculated similarity is greater than or equal to the preset similarity, it is determined that the predetermined condition is met, and the video stream image and the object image of the live stream at the current moment are obtained. Finally, the obtained video stream image and object image are synthesized to obtain the synthesized picture.
Case 2: the ratio of the user's face image to the live stream image is within the predetermined range.
After determining that the object is triggered and displayed in the window of the live stream, the anchor client can obtain the video stream images in the live stream, that is, obtain each video frame image. It then determines whether the ratio of the user's face image appearing in the live stream to the live stream image is within the predetermined range, that is, whether the predetermined face image of the user appears in each video frame. When it is determined that the ratio of the user's face image appearing in the live stream to the live stream image is within the predetermined range, it is determined that the predetermined condition is met. Here, the ratio of the user's face image to the live stream image is less than or equal to a first ratio value and greater than a second ratio value.
Preferably, in the embodiment of the present invention, the first ratio value is chosen as 2/3 and the second ratio value is chosen as 1/100. In other words, the region occupied by the face image in the video area is no more than 2/3 of the live stream image and no less than 1/100 of it. That is, when the face image is judged to satisfy the above condition, it is determined that the predetermined condition is met, and the obtained face image is a clear head shot.
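A minimal sketch of this condition check follows, using the 2/3 and 1/100 bounds from the preceding paragraphs; the gesture-similarity threshold of 0.8 and the helper functions are assumed values for illustration only.

FIRST_RATIO = 2 / 3     # face may occupy at most 2/3 of the live stream image
SECOND_RATIO = 1 / 100  # and must exceed 1/100 of it

def face_ratio_ok(face_area, frame_area):
    """Case 2: the face-to-frame area ratio lies within the predetermined range."""
    if frame_area <= 0 or face_area <= 0:
        return False
    ratio = face_area / frame_area
    return SECOND_RATIO < ratio <= FIRST_RATIO

def gesture_ok(similarity, predetermined_similarity=0.8):
    """Case 1: the gesture similarity reaches the predetermined similarity.
    The 0.8 default is an assumed illustrative value."""
    return similarity >= predetermined_similarity

def predetermined_condition_met(face_area, frame_area, gesture_similarity):
    """Either case alone satisfies the predetermined condition in this sketch;
    the combined case described below would require both to hold."""
    return face_ratio_ok(face_area, frame_area) or gesture_ok(gesture_similarity)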
From the above two preset conditions, it can be seen that if the predetermined condition is met, that is, a clear face image or a valid gesture is captured, a video stream image containing the valid gesture or the clear face image can be obtained directly from the live stream. If neither a clear face image nor a valid gesture is captured, the video stream image is obtained from the live stream when the countdown ends, where the countdown is introduced in detail in the following embodiments.
It should be noted that, in the embodiment of the present invention, in addition to Case 1 and Case 2 above, the predetermined condition also includes another case, namely: the similarity between the gesture made by the user in the live stream and the predetermined gesture is greater than or equal to the predetermined similarity, and the ratio of the user's face image to the live stream image is within the predetermined range.
In this case, after determining that the object is triggered and displayed in the video playback window, the anchor client can obtain the video stream images in the live stream, that is, obtain each video frame image. It then determines whether a gesture made by the user appears in the live stream, that is, whether a gesture made by the user appears in each video frame image. When it is determined that a gesture made by the user appears in the live stream, the similarity between the gesture made by the user and the predetermined gesture is calculated, and it is judged whether the calculated similarity is greater than or equal to the preset similarity.
Further, if the calculated similarity is greater than or equal to the preset similarity, it is determined whether the ratio of the user's face image appearing in the live stream to the live stream image is within the predetermined range. When the ratio of the appearing face image to the live stream image is within the predetermined range, it is determined that the above predetermined condition is met.
It should be noted that, in the embodiment of the present invention, after the object is displayed in the video playback window, it can also be judged whether the object displayed in the video playback window meets a picture synthesis condition; if the picture synthesis condition is met, the video stream image corresponding to the predetermined condition in the live stream and the object image corresponding to the predetermined condition are obtained. For example, suppose a user sends the anchor an effect animation of 66 lollipops; when a spectacular shower of lollipops appears in the video playback window, the picture synthesis condition is met. Suppose, however, that a user sends an effect animation of 10 lollipops; if only a single sparkling lollipop is displayed in the video playback window, the picture synthesis condition is not met. In that case, when it is detected that the next effect to be played in the effect queue is an effect animation that meets the picture synthesis condition, such as the spectacular shower of lollipops, the video stream image corresponding to the predetermined condition in the live stream and the object image corresponding to the predetermined condition are obtained, and the obtained images are synthesized.
In another optional embodiment of the present invention, if the predetermined condition includes that the similarity between the gesture made by the user in the live stream and the predetermined gesture is greater than or equal to the predetermined similarity, then after determining that the object is triggered and displayed in the live stream, the method further comprises: showing the predetermined gesture to the user.
Specifically, if the predetermined condition includes the condition that the similarity between the gesture made by the user in the live stream and the predetermined gesture is greater than or equal to the predetermined similarity, then when effect-animation picture synthesis is activated, a gesture is first selected, either as specified or at random, from a gesture feature database, and the anchor is prompted to make the corresponding gesture and prepare for the photo. For example, when anchor A receives a gift effect "may you be happy and prosperous" (that is, the object), then, while the effect animation is triggered, anchor A's client selects the designated fist-and-palm salute gesture (that is, a predetermined gesture) from the gesture feature database, displays a cartoon prompt icon of the salute below the video playback window on the anchor side, and prompts anchor A to make the corresponding gesture.
In an optional embodiment of the present invention, if the predetermined condition is not found to be met within a first predetermined time period, the predetermined condition is deemed to have been met at the end of the first predetermined time period, where the first time period is the display duration of the object. Specifically, in the embodiment of the present invention, if the predetermined condition has not been met when the display time of the object ends, the video stream image of the live stream and the object image of the object are obtained at the moment the display time of the object ends.
Optionally, after determining that the object is triggered and displayed in the live stream, a first countdown can also be shown to the user, wherein the first countdown corresponds to the first predetermined time period.
For example, anchor A receives a gift effect "may you be happy and prosperous" (that is, the object); then, while the effect animation is triggered, anchor A's client selects the designated fist-and-palm salute gesture (that is, the predetermined gesture) from the gesture feature database, displays a cartoon prompt icon of the salute below the video playback window on the anchor side, and prompts anchor A to make the corresponding gesture. The first countdown is then displayed on the anchor side, and the face and gesture in the live stream images are recognized before the first countdown ends, that is, before the display time of the object ends, where the first countdown is used to prompt the anchor that face and gesture recognition is in progress.
As can be seen from the above description, the predetermined condition in the above embodiments is that the similarity between the gesture made by the user in the live stream and the predetermined gesture is greater than or equal to the predetermined similarity, or that the ratio of the user's face image to the live stream image is within the predetermined range. In addition, the predetermined condition can also be the moment at which a second predetermined time period ends. That is, after determining that the object is triggered and displayed in the live stream, the method further comprises: showing a second countdown to the user, wherein the predetermined condition is the end of the second predetermined time period, and the second countdown corresponds to the second predetermined time period. Optionally, the second predetermined time period runs from the moment the object starts to be displayed to the moment the best display effect of the object appears.
In an alternative embodiment of the present invention, the second countdown can be set to be displayed before the video stream image and the object image in the live stream are obtained, wherein the timing duration of the second countdown is the second predetermined time period. When the second countdown ends, that is, at the moment the second predetermined time period ends, it is determined that the predetermined condition is met, and the video stream image and the object image in the live stream at that moment are obtained. It should be noted that, in the embodiment of the present invention, the second predetermined time period runs from the moment the object starts to be displayed to the moment the best display effect of the object appears, that is, from the moment the object starts to be displayed to the display moment of the best frame in the video stream file, where the best frame contains the best display effect of the anchor and the object.
At the moment the second countdown ends, obtaining the video stream image in the current live stream and obtaining the object image of the object comprise: obtaining a screenshot of the video stream image and the object image in the current video playback window, wherein the video playback window is used to play the live video stream and the object.
Specifically, at the moment the second predetermined time period ends, the video stream image is obtained from the video playback window, and at the same time the effect animation being played (that is, the object) is captured. The obtained video stream image and the captured effect animation are then synthesized to obtain the synthesized picture.
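The sketch below illustrates this second-countdown variant: a timer runs for the second predetermined time period and, when it fires, grabs the playback-window frame and the effect image for compositing. The stream, effect and on_captured helpers are assumed placeholders, not part of the original disclosure.

import threading

def schedule_best_frame_capture(stream, effect, seconds_to_best_effect, on_captured):
    """Run the second countdown from the moment the effect starts to the moment
    its best display effect appears, then capture both images."""
    def capture():
        frame = stream.current_frame()         # screenshot of the playback window
        effect_image = effect.current_image()  # the effect animation being played
        on_captured(frame, effect_image)       # hand off for synthesis
    timer = threading.Timer(seconds_to_best_effect, capture)
    timer.start()
    return timer  # the caller may cancel the timer if the live stream ends early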
In another optional embodiment of the present invention, synthesizing at least the video stream image and the object image comprises: scaling the obtained video stream image according to a first preset ratio, scaling the obtained object image corresponding to the predetermined condition according to a second preset ratio, and then synthesizing at least the scaled video stream image and the scaled object image.
If the size of the final synthesized picture is PictureSize(pw, ph), the size VideoSize(vw, vh) of the video stream image obtained from the live stream needs to be scaled according to the first preset ratio so that the scaled size matches the size of the final synthesized picture, and the result is then drawn onto the canvas. The first preset ratio includes a width scaling ratio and a height scaling ratio, where the width scaling ratio is kw = pw/vw and the height scaling ratio is kh = ph/vh.
It should be noted that, under normal circumstances, the width scaling ratio and the height scaling ratio should be the same, that is, kw = kh, in order to keep the image undistorted. If, in special cases, the width scaling ratio and the height scaling ratio are not the same, then, to ensure that all of the picture content can be scaled into the final synthesized picture, the smaller of the two is used as the scaling factor, that is, the chosen scaling ratio is k = min(kw, kh). When the video stream image is scaled by this ratio, the size of the generated video stream image is DestVideoSize(vw' = vw*k, vh' = vh*k).
After the video stream image has been drawn on the canvas, the object images (that is, the effect images) can be drawn in turn. There are two drawing effect types for the object image, namely effect type 1 and effect type 2, and both are added on the basis of the location information of the object image: EffectRect(x, y, w, h).
Effect type 1: the object image covers the entire video stream image. In this case, the location information of the object image is EffectRect(0, 0, w, h), and the object image can be scaled by a zoom factor k2 to obtain a better-fitting object image, that is: EffectRect'(0, 0, w', h') = EffectRect(0, 0, w, h) * k2.
Effect type 2: the object image occupies a specific position in the video stream image. First, the display range and location information of the object image are obtained from the video image, that is: EffectRect'(x', y', w', h'). However, since this location information and display range are relative to the original video stream image, the obtained location information and display range first need to be converted as follows: EffectRect''(x'', y'', w'', h'') = EffectRect'(x', y', w', h') * k. Then, the original size of the object image, EffectSize(w, h), is scaled according to a non-uniform zoom factor k3(α, β), so that the size of the scaled object image is: EffectSize''(w'', h'') = EffectSize(w, h) * k3(α, β). Finally, the effect is drawn at position (x'', y'') on the canvas.
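The arithmetic above can be summarized in the short sketch below. The 1280x720 frame, the 960x540 target photo and the example effect rectangle are assumed values, used only to demonstrate k = min(kw, kh), DestVideoSize and the effect type 2 conversion.

def scale_factor(pw, ph, vw, vh):
    """k = min(kw, kh): the smaller ratio, so the whole frame fits the photo."""
    kw, kh = pw / vw, ph / vh
    return min(kw, kh)

def scaled_video_size(vw, vh, k):
    """DestVideoSize(vw' = vw*k, vh' = vh*k)."""
    return round(vw * k), round(vh * k)

def effect_type2_placement(x, y, w, h, k, alpha, beta):
    """Effect type 2: (x, y) is the effect's position on the original frame and
    (w, h) its original size. The position is converted to canvas coordinates by
    multiplying by k, and the size is scaled by the non-uniform factor k3(alpha, beta)."""
    canvas_pos = (round(x * k), round(y * k))          # EffectRect'' position
    scaled_size = (round(w * alpha), round(h * beta))  # EffectSize''
    return canvas_pos, scaled_size

# Example: composite a 1280x720 frame into a 960x540 photo.
k = scale_factor(960, 540, 1280, 720)          # 0.75
print(scaled_video_size(1280, 720, k))         # (960, 540)
print(effect_type2_placement(100, 50, 200, 120, k, 0.5, 0.5))
# ((75, 38), (100, 60))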
The embodiments of the present invention also provide an image processing device in video live streaming. The image processing device is mainly used to execute the image processing method in video live streaming provided by the above content of the embodiments of the present invention. The image processing device in video live streaming provided by the embodiments of the present invention is specifically introduced below.
Fig. 2 is a schematic diagram of an image processing device in video live streaming according to an embodiment of the present invention. As shown in Fig. 2, the image processing device mainly includes a determining module 21, a monitoring module 23, a first obtaining module 25, a second obtaining module 27 and a synthesis module 29, wherein:
the determining module is configured to determine that an object is triggered and displayed in the live video stream.
The image processing method in video live streaming provided by the embodiment of the present invention can be applied to a network live-streaming platform or a video streaming platform. When applied to a network live-streaming platform, the video stream of the anchor's broadcast is played on one or more viewer terminals, and the above object may be a special-effect animation of a virtual gift.
In the embodiment of the present invention, when a viewer gives a virtual gift to the anchor, different effect animations are triggered depending on the conditions met by the gift. For example, for the virtual gift "lollipop", when the viewer gives 10 lollipops the effect animation may be a single sparkling lollipop, whereas when the viewer gives 60 lollipops the effect animation may be a spectacular shower of lollipops. Therefore, in the embodiment of the present invention, when the virtual gift given by the viewer reaches the effect trigger condition, it is determined that the effect animation of the virtual gift (that is, the above object) is triggered and displayed in the live stream.
The monitoring module is configured to monitor whether the predetermined condition is met.
In the embodiment of the present invention, after it is determined that the object is triggered and displayed in the live stream, the anchor may make a gesture or expression matching the object. However, in order to ensure that the image synthesized by the synthesis module below contains the best display effect of the object and the anchor's best gesture or expression, a predetermined condition may be set in advance in the embodiment of the present invention, wherein the predetermined condition includes at least one of the following: the similarity between a gesture made by the user in the live stream and a predetermined gesture is greater than or equal to a predetermined similarity; the ratio of the user's face image to the live stream image is within a predetermined range. Then, when the predetermined condition is met, the video stream image in the current live stream is obtained by the first obtaining module described below.
The first obtaining module is configured to obtain the video stream image in the current live stream when the predetermined condition is met.
In the embodiment of the present invention, after the object (that is, the effect animation) is triggered, the anchor client judges whether the predetermined condition is met; if the predetermined condition is met, the video stream image in the live stream at the current moment is obtained, and the object image of the object is then obtained. The second obtaining module is configured to obtain the object image corresponding to the predetermined condition.
The synthesis module is configured to synthesize at least the video stream image and the object image.
In the embodiment of the present invention, after obtaining the object image and the video stream image, the anchor client can synthesize the video stream image and the object image into a single picture and save the synthesized picture. The synthesized image contains the best display effect of the object, the anchor's best gesture or expression, and so on.
In the embodiment of the present invention, by obtaining the video stream image from the live stream under the trigger of the predetermined condition, obtaining the object image corresponding to the predetermined condition, and then synthesizing the object image and the video stream image, an image containing both the object and the anchor is obtained. Compared with the prior-art approach of obtaining a picture by snapshot, this achieves the purpose of accurately acquiring a video stream image containing an object image, realizes the technical effect of improving the acquisition of video stream images containing object images, and thereby solves the technical problem in the prior art that an image containing both a video stream image and an object image cannot be accurately acquired.
Optionally, the predetermined condition includes at least one of the following: the similarity between the gesture made by the user in the live stream and the predetermined gesture is greater than or equal to the predetermined similarity; the ratio of the user's face image to the live stream image is within the predetermined range.
Optionally, the device further comprises: a first display module, configured to show the predetermined gesture to the user after determining that the object is triggered and displayed in the live stream, in the case where the predetermined condition includes that the similarity between the gesture made by the user in the live stream and the predetermined gesture is greater than or equal to the predetermined similarity.
Optionally, if the predetermined condition is not found to be met within the first predetermined time period, the predetermined condition is deemed to have been met at the end of the first predetermined time period.
Optionally, the device further comprises: a second display module, configured to show a first countdown to the user after determining that the object is triggered and displayed in the live stream, wherein the first countdown corresponds to the first predetermined time period.
Optionally, the first predetermined time period is the display duration of the object.
Optionally, the device further comprises: a third display module, configured to show a second countdown to the user after determining that the object is triggered and displayed in the live stream, wherein the predetermined condition is the end of the second predetermined time period, and the second countdown corresponds to the second predetermined time period.
Optionally, the second predetermined time period runs from the moment the object starts to be displayed to the moment the best display effect of the object appears.
Optionally, the first obtaining module and the second obtaining module comprise: an obtaining unit, configured to obtain a screenshot of the video stream image and the object image in the current video playback window, wherein the video playback window is used to play the live video stream and the object.
Optionally, the synthesis module comprises: a first scaling unit, configured to scale the obtained video stream image according to the first preset ratio; a second scaling unit, configured to scale the obtained object image of the object according to the second preset ratio; and a synthesis unit, configured to synthesize at least the scaled video stream image and the scaled object image.
Fig. 3 is a schematic diagram of another image processing device in video live streaming according to an embodiment of the present invention. As shown in Fig. 3, the image processing device in video live streaming provided by the embodiment of the present invention includes: a video stream receiving unit 301, an effect management unit 302, an effect animation display unit 303, a photo synthesis unit 304 and a photo post-processing unit 305, wherein:
the video stream receiving unit 301 is configured to receive video stream images and to provide a video stream image obtaining interface for obtaining video stream images. The video stream receiving unit processes the video stream differently on the anchor side and on the viewer side: the anchor-side video stream receiving unit is described in detail in the embodiment of Fig. 4, and the viewer-side video stream receiving unit is described in detail in the embodiment of Fig. 5.
the effect management unit 302 is configured to receive the virtual gift data distributed by the network server when a viewer gives a gift to the anchor, to manage the basic information and effect information of the gift, and, when the given virtual gift meets the condition for displaying an effect, to start the gift effect module and trigger the gift effect.
the effect animation display unit 303 is configured to synthesize the video stream image and the object image and to display the synthesized image to the anchor and the viewers. The effect animation display unit includes a time control unit, an effect pre-processing unit, an effect display unit and an effect ending unit; the functions of these units are described in detail in the following embodiments.
the photo synthesis unit 304 is configured to process and synthesize the video stream image obtained from the video stream receiving unit and the gift effect image (that is, the object image) obtained from the gift effect management unit, and to generate the anchor's special-effect photo. The photo synthesis unit includes a countdown unit, a face and gesture recognition unit, a video image obtaining unit, an effect image obtaining unit and a photo image synthesis unit; the specific functions of each unit are described in detail in the following embodiments.
Optionally, under specific conditions the photo synthesis unit 304 can obtain the synthesized image data directly from the video and effect synthesis unit. In that case, the photo synthesis unit includes a countdown unit, a synthesized image obtaining unit and a synthesized image processing unit. Details are described in the implementation flowchart of Fig. 7.
the photo post-processing unit 305 is configured to collect, manage and display the user's photos; it obtains the current anchor's photos from the network server and displays them at a specific window position. Users can browse these photos, and the anchor to whom they belong can view and manage them.
Fig. 4 is a schematic diagram of an anchor-side video stream receiving unit according to an embodiment of the present invention. As shown in Fig. 4, the anchor-side video stream receiving unit is configured to collect the video stream information on the anchor side and to output the video stream information of the anchor side.
Specifically, the anchor-side video stream receiving unit includes:
a camera device 401, connected to the anchor-side computer through a USB port; this device collects the anchor's video stream information, generates video data and outputs it to the video stream collection unit.
a video stream collection unit 402, which implements the following two functions when acting on the anchor side:
Function 1: using existing video capture technology, the video data collected from the camera device is converted into a video mapping file that the live-streaming platform software can recognize and process.
Function 2: to facilitate network transmission, mainstream video coding techniques are used to encode the video mapping file into a video stream file in rtmp format, which is sent as output to the network server 403, so that viewer-side users can obtain the anchor's video stream data (a schematic sketch of these two functions is given after this unit list).
a network server 403, which receives the rtmp-format video stream data uploaded by the anchor side, stores the data briefly and implements the video data distribution function, so that viewer-side users can obtain and download the anchor's video data.
an anchor-side video receiving unit 404, which receives the video mapping file data output from the video stream collection unit and displays the video mapping file data, as the original video stream, in the video playback window of the local live-streaming software, so as to provide the anchor with a video preview.
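A schematic sketch of the two functions of the video stream collection unit is given below. The capture, encoding and upload helpers are assumed placeholders standing in for the actual capture driver, encoder and rtmp client; no concrete library API is implied.

class VideoStreamCollectionUnit:
    """Sketch of unit 402: turn camera data into a local mapping file for preview
    (function 1) and into an rtmp stream pushed to the network server (function 2)."""

    def __init__(self, camera, encoder, rtmp_client):
        self.camera = camera            # assumed camera capture helper
        self.encoder = encoder          # assumed video encoder helper
        self.rtmp_client = rtmp_client  # assumed rtmp upload helper

    def collect_for_preview(self):
        """Function 1: convert captured data into a video mapping file that the
        live-streaming software can display in the anchor's playback window."""
        raw = self.camera.read_frame()
        return self.encoder.to_mapping_file(raw)

    def collect_for_upload(self, mapping_file):
        """Function 2: encode the mapping file into an rtmp-format stream and
        send it to the network server for distribution to viewers."""
        packet = self.encoder.encode_rtmp(mapping_file)
        self.rtmp_client.send(packet)

The viewer-side units described next mirror this flow: the same collected stream is distributed by the network server and decoded for display on the viewer side.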
Fig. 5 is a schematic diagram of a viewer-side video stream receiving unit according to an embodiment of the present invention. As shown in Fig. 5, the viewer-side video receiving unit is configured to obtain the video stream information for the viewer side and to output the video stream information of the viewer side.
Specifically, the viewer-side video receiving unit includes:
a video stream collection unit 501, which is the same unit as the video stream collection unit 402 in Fig. 4; the video stream collection unit 501 is configured to collect video stream data from the anchor side and submit it to the network server 502.
a network server 502, which is the same unit as the network server 403 in Fig. 4; the network server 502 is configured to receive the rtmp-format video stream data uploaded by the anchor side, to store the data briefly and to implement the video stream data distribution function, so that viewer-side users can obtain and download the anchor's video stream data.
a viewer-side video stream receiving unit 503, which takes the rtmp-format video stream data distributed by the network server 502 as input, decodes the video stream data, stores the decoded video stream data in a system memory-mapped file and then, using existing video display technology, displays the obtained video stream data in the video playback window of the viewer-side live-streaming software.
Fig. 6 is a schematic diagram of a gift special effect management unit according to an embodiment of the present invention. As shown in Fig. 6, the special effect management unit includes a gift giving unit 601, a special effect animation unit 602 and a special effect animation adjustment unit 603, in which:
Gift giving unit 601 takes as input the viewer's click action of giving a virtual gift and outputs the information of the gift given by the user in the window. Different animation effects are triggered when the virtual gifts given meet different conditions; when the virtual gifts given by viewers reach a special effect trigger condition, the special effect animation unit is started.
Special effect animation unit 602 obtains the special effect animation information specified on the network server according to the information of the gift given by the viewer. In general, the special effect animation is closely tied to the gift the viewer gives. If the giving of the special effect gift "lollipop" reaches one condition (for example, the number of lollipops is 10), the special effect animation shown in the video playback window may be a single sparkling lollipop; if, however, the giving of the special effect gift "lollipop" reaches another condition (for example, the number of lollipops is 66), the special effect animation will be a large shower of lollipops. That is, different virtual gifts, on reaching different conditions, trigger different special effect animations, as sketched below.
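As a rough sketch of the mapping just described, the trigger conditions can be kept as a simple threshold table. The gift names, counts and animation identifiers below are illustrative assumptions, not the configuration actually used by the embodiment.

    # Map (gift, count) thresholds to special effect animations; values are assumed.
    EFFECT_RULES = {
        ("lollipop", 10): "single_sparkling_lollipop",
        ("lollipop", 66): "lollipop_shower",
    }

    def select_effect(gift_name, gift_count):
        """Return the animation for the highest threshold the gift count reaches."""
        best = None
        for (name, threshold), animation in EFFECT_RULES.items():
            if name == gift_name and gift_count >= threshold:
                if best is None or threshold > best[0]:
                    best = (threshold, animation)
        return best[1] if best else None

    print(select_effect("lollipop", 70))  # -> "lollipop_shower"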
Special effect animation adjustment unit 603 is used, after a special effect animation is triggered, to adjust the display size, display position, playback rate and start time of the special effect animation while keeping the animation free of distortion, and to send the adjusted special effect animation to the special effect animation display unit 302 for display.
Fig. 7 is a schematic diagram of a special effect animation display unit according to an embodiment of the present invention. As shown in Fig. 7, the special effect animation display unit includes a special effect pre-processing unit 701, a time control unit 702, a special effect display unit 703 and a special effect end unit 704, in which:
Special effect pre-processing unit 701 is used to obtain the special effect information from the network server and to adjust the size and position of the special effect animation according to the display resolution, the position information of the video playback window and the video area information.
For example, if a special effect animation is "flowers surround the anchor", the "flowers" animation needs to appear around the video area to achieve the surrounding effect; if a special effect animation is "Christmas snow", the "snow" animation needs to appear below the video area to achieve the snow effect. On a higher-resolution display the video playback window is larger, so the flowers and snow also become larger; on a lower-resolution display the video playback window is smaller, so the flowers and snow become smaller. In other words, the size of the special effect animation differs under different display resolutions, and the original special effect animation size therefore needs to be stretched or shrunk.
Further, the position of the video playback window on the display screen moves as the video playback window is moved. For example, when the user drags the window, stretches it, zooms it or maximizes it while a special effect animation is playing, the position of the animation on the display screen needs to be adjusted in real time.
Specifically, so that the special effect animation is not distorted when the video playback window is stretched or shrunk, the size of the special effect animation needs to be adjusted. Suppose the display size of the video stream data in the video playback window is videoSize(vw, vh), where vw is the width of the video playback window and vh is its height, and that the original size of a special effect animation is effectSize(ew, eh), where ew is the original width of the special effect animation and eh is its original height. The size of the special effect animation after processing by the special effect pre-processing unit is effectSizeDest(edw, edh), where edw is the adjusted width and edh is the adjusted height. The adjusted size keeps a proportional relationship with the video playback window, and this proportion can be obtained from the special effect management unit 302. Assume that the proportional relationship between the adjusted size and the video playback window is effectSizeDest(edw, edh) = videoSize(vw, vh) * ρ(ρw, ρh). Letting the special effect animation zoom factor be σ(σw, σh), this factor follows from the proportion as σ(σw, σh) = videoSize(vw, vh) * ρ(ρw, ρh) / effectSize(ew, eh); the special effect animation is then scaled by this factor to the target size, completing the adjustment of the special effect animation size, as illustrated below.
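A minimal sketch of the size adjustment above, assuming the proportion ρ is supplied by the special effect management unit and that sizes are plain (width, height) tuples; the helper names are illustrative only.

    def effect_scale_factor(video_size, effect_size, rho):
        """Compute the per-axis zoom factor sigma = videoSize * rho / effectSize."""
        vw, vh = video_size          # video playback window width/height
        ew, eh = effect_size         # original special effect animation width/height
        rho_w, rho_h = rho           # proportion of the window the effect should cover
        return (vw * rho_w / ew, vh * rho_h / eh)

    def scaled_effect_size(effect_size, sigma):
        ew, eh = effect_size
        sw, sh = sigma
        return (round(ew * sw), round(eh * sh))   # effectSizeDest(edw, edh)

    sigma = effect_scale_factor((1280, 720), (400, 300), (0.5, 0.5))
    print(scaled_effect_size((400, 300), sigma))  # -> (640, 360)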
Further, so that the special effect animation appears at the correct region of the video playback window, the position of the special effect animation also needs to be adjusted. In the embodiment of the present invention, four origin points at which a special effect animation may appear can be set according to the coordinate position of the room window and the coordinate position of the video playback window; they are origin1 (the upper-left corner of the room window of the net cast platform), origin2 (the centre of the room window of the net cast platform), origin3 (the upper-left corner of the video playback window of the net cast software) and origin4 (the centre of the video playback window of the net cast software).
In the embodiment of the present invention, each special effect animation can carry a relative offset offsize(ox, oy) set according to its own size; art staff use this value to fine-tune the position of the special effect animation by hand. Suppose the size of the room window of the net cast platform is roomSize(rw, rh), the position of the video playback window relative to the room window is videoPos(vx, vy), and the video playback window size is videoSize(vw, vh). The synthesis position effectPos(ex, ey) of the special effect animation can then be calculated for each of the four origin points, as follows (see the sketch after this list):
Origin1: the synthesis position is at the upper-left corner of the room, so effectPos(ex, ey) = offsize(ox, oy).
Origin2: the synthesis position is at the centre of the room, so effectPos(ex, ey) = offsize(ox, oy) + roomSize(rw, rh)/2.
Origin3: the synthesis position is offset relative to the video, so effectPos(ex, ey) = offsize(ox, oy) + videoPos(vx, vy).
Origin4: the synthesis position is offset relative to the centre of the video, so effectPos(ex, ey) = offsize(ox, oy) + (videoPos(vx, vy) + videoSize(vw, vh)/2).
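The four cases above can be expressed compactly. The sketch below assumes the offsets and sizes are plain (x, y) / (w, h) tuples obtained from the units described earlier; it is an illustration of the formulas, not the embodiment's code.

    def effect_position(origin, offset, room_size, video_pos, video_size):
        """Compute effectPos(ex, ey) for the four anchor points origin1..origin4."""
        ox, oy = offset              # offsize(ox, oy), hand-tuned per animation
        rw, rh = room_size           # roomSize(rw, rh)
        vx, vy = video_pos           # videoPos(vx, vy), relative to the room window
        vw, vh = video_size          # videoSize(vw, vh)
        if origin == 1:              # upper-left corner of the room window
            return (ox, oy)
        if origin == 2:              # centre of the room window
            return (ox + rw / 2, oy + rh / 2)
        if origin == 3:              # upper-left corner of the video playback window
            return (ox + vx, oy + vy)
        if origin == 4:              # centre of the video playback window
            return (ox + vx + vw / 2, oy + vy + vh / 2)
        raise ValueError("unknown origin point")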
Time control unit 702: after special effect animations are triggered, all special effects are queued in the chronological order in which they were triggered and placed into the special effect animation queue, and the special effect animations are displayed in turn under the control of the time unit. The time control unit triggers the display of each special effect and controls the duration of the display. When a special effect animation has finished playing, the special effect end unit 704 is called to reclaim resources and release memory. Further, after starting the special effect display unit, the time control unit judges whether the special effect animation meets the synthesis condition for a photo; if it does, an activation instruction is sent to the photo synthesis unit 304. Suppose a user sends the anchor the special effect animation of 66 lollipops: when a large shower of lollipops appears in the video playback window, the photo synthesis condition is met. Suppose, however, that the special effect animation of 10 lollipops is sent and only a single sparkling lollipop is shown in the video playback window: the photo synthesis condition is not met. When the next special effect to be played in the special effect queue is detected to be the "lollipop shower" special effect animation that meets the photo synthesis condition, the video streaming image corresponding to the predetermined condition in the net cast is obtained, the object image corresponding to the predetermined condition is obtained, and the obtained images are synthesized, as in the simplified sketch below.
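A simplified sketch of the queueing and photo-trigger logic described above; the effect objects, the meets_photo_condition flag and the unit objects are assumed stand-ins, not the patent's data structures.

    from collections import deque

    effect_queue = deque()   # effects queued in the order they were triggered

    def enqueue_effect(effect):
        effect_queue.append(effect)

    def play_next_effect(display_unit, photo_synthesis_unit):
        if not effect_queue:
            return
        effect = effect_queue.popleft()
        if effect.meets_photo_condition:       # e.g. the 66-lollipop animation
            photo_synthesis_unit.activate()    # start capturing video + effect images
        display_unit.show(effect)              # duration controlled by the time unit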
Special effect display unit 703 takes the output of the special effect pre-processing unit 701 as input and, under the control of the time control unit, displays the special effect animation above the video stream data.
Special effect end unit 704 is used to end and close the special effect animation, reclaim the resources used by the special effect animation, and reset the special effect animation state in preparation for playing the next special effect.
Fig. 8 is a schematic diagram of a photo synthesis unit according to an embodiment of the present invention. As shown in Fig. 8, in a specific embodiment the photo synthesis unit includes a gesture feature database 801, a gesture action reminding unit 802, a countdown unit 803, a face and gesture identification unit 804, a video image acquiring unit 805, a special effect image acquiring unit 806 and a photo image synthesis unit 807, in which:
In the embodiment of the present invention, the time point of photo synthesis determines whether the image content successfully "captures" the "best moment", and the time point of the best moment is calculated by the following units:
Gesture feature database 801 stores the feature data of various gestures and the cartoon prompt icon corresponding to each gesture, including but not limited to various actions and gestures, for example popular gestures such as the fist-and-palm salute, the "yeah" gesture and the heart gesture.
When the photo synthesis unit is activated, the gesture action reminding unit 802 is started first. This unit selects a specified gesture, or a random one, from the gesture feature database and prompts the anchor to make the corresponding gesture and prepare to be photographed. For example, when anchor A receives a virtual gift "may you be happy and prosperous" that can trigger a photo, then, at the same time as the special effect animation is triggered, the gesture action reminding unit selects the fist-and-palm salute from the gesture feature database and shows a cartoon prompt icon of that action below the video area on the anchor side, prompting the anchor to make the corresponding gesture.
When the photo synthesis unit is activated, the countdown unit 803 can also set the recognition time for the facial image and the anchor's gesture. For example, facial image and anchor gesture recognition starts when the countdown starts, and is closed after the countdown ends.
Face and gesture identification unit 804 detects and identifies the facial image and the anchor's gesture action in the video stream data before the countdown unit finishes.
Specifically, in the embodiment of the present invention, identification of the facial image and the anchor's gesture is carried out frame by frame: the facial image and the anchor's gesture in each video frame are identified separately, and when a reasonably clear facial image is recognized, or the similarity value between the anchor's gesture and the prearranged gesture meets the predetermined similarity value, the facial image data and the code of the anchor's gesture action are returned. The facial image data includes the rectangle where the face lies, and the rectangular extents of the eyes, nose, lips and ears relative to the face rectangle. Suppose the heart gesture, in which the index fingers and thumbs of both hands are combined into a heart shape, is successfully recognized: then gesture action code 1 is returned.
Video image acquiring unit 805 obtains the real-time video stream image from the video stream receiving unit and judges whether the facial image is the predetermined facial image: when the face rectangle where the facial image lies occupies no more than 2/3 and no less than 1/100 of the video area, it is judged to be the predetermined facial image, that is, a clear portrait.
Therefore, in the embodiment of the present invention, if a clear facial image is recognized and a valid gesture code is obtained, the video streaming image can be obtained from the video stream directly. If this information is not obtained at the same time, the video streaming image is obtained from the video stream when the countdown ends. In either case the video streaming image obtained by the video image acquiring unit is an image without special effect animation information, as in the sketch below.
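A minimal sketch of the per-frame decision above, assuming the face rectangle and video area are given as pixel areas and the gesture code is None when nothing has been recognized; the function names are illustrative.

    def is_clear_face(face_rect_area, video_area):
        """A face counts as the 'predetermined facial image' when its rectangle covers
        no more than 2/3 and no less than 1/100 of the video area."""
        ratio = face_rect_area / video_area
        return 1 / 100 <= ratio <= 2 / 3

    def should_capture_now(face_rect_area, video_area, gesture_code,
                           countdown_finished):
        # Capture immediately when a clear face and a recognized gesture are present;
        # otherwise fall back to capturing when the countdown ends.
        if gesture_code is not None and is_clear_face(face_rect_area, video_area):
            return True
        return countdown_finished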
Special effect image acquiring unit 806 obtains the stored image information of the special effect animation from the gift information database and then obtains the special effect image (that is, the above-mentioned object image) according to that image information.
It should be noted that, in this example, the special effect image has several sources, mainly:
1. A picture file stored on the network server, which needs to be downloaded from the network server and loaded into memory.
2. A frame of the special effect animation: the special effect image is obtained by taking the corresponding frame from the special effect animation.
Photo image synthesis unit 807 synthesizes the video streaming image and the special effect image into the final photo. If the size of the picture after final synthesis is PictureSize(pw, ph), then the size VideoSize(vw, vh) of the video streaming image obtained from the net cast needs to be scaled according to a first preset ratio, where the size after scaling is the size of the picture after final synthesis, and the result is then drawn onto the canvas. The first preset ratio includes a width scaling ratio and a height scaling ratio, where the width scaling ratio is kw = pw/vw and the height scaling ratio is kh = ph/vh.
It should be noted that, in the usual case, to keep the image undistorted the width scaling ratio and the height scaling ratio should be identical, that is, kw = kh. If, in a special case, the width scaling ratio and the height scaling ratio are not identical, then, to ensure that all of the picture content can be scaled into the picture after final synthesis, the smaller of the two is used as the scale factor, that is, the scaling ratio chosen is k = min(kw, kh). When the video streaming image is scaled by this ratio, the finally generated video streaming image size is DestVideoSize(vw' = vw*k, vh' = vh*k), as illustrated below.
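The scaling above can be summarized as follows, under the assumption that sizes are plain (width, height) tuples; the helper names are illustrative.

    def video_scale_factor(picture_size, video_size):
        """k = min(pw / vw, ph / vh), so the whole video image fits the final picture."""
        pw, ph = picture_size
        vw, vh = video_size
        return min(pw / vw, ph / vh)

    def scaled_video_size(video_size, k):
        vw, vh = video_size
        return (round(vw * k), round(vh * k))     # DestVideoSize(vw', vh')

    k = video_scale_factor((1080, 1080), (1920, 1080))
    print(scaled_video_size((1920, 1080), k))     # -> approximately (1080, 608)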
After the video streaming image has been drawn on the canvas, the object images (that is, the special effect images) can be drawn in turn. There are two drawing effects for object images, effect type 1 and effect type 2, and both are added on the basis of the position information of the object image: EffectRect(x, y, w, h).
Effect type 1: the object image covers the entire video streaming image, in which case the position information of the object image is EffectRect(0, 0, w, h). Here a better-suited object image can be obtained through a zoom factor k2, that is, EffectRect'(0, 0, w', h') = EffectRect(0, 0, w, h) * k2.
Effect type 2: the object image lies at a specific position in the video streaming image. First the range and position at which the object image is displayed are obtained from the video image, that is, EffectRect'(x', y', w', h'). Because this position information and display range are measured relative to the original video streaming image, the obtained position information and display range must first be converted as follows: EffectRect''(x'', y'', w'', h'') = EffectRect'(x', y', w', h') * k. Then the original size EffectSize(w, h) of the object image is scaled by a non-uniform zoom factor k3(α, β); the size of the object image after scaling is EffectSize''(w'', h'') = EffectSize(w, h) * k3(α, β). Finally, the effect is drawn at position (x'', y'') on the canvas, as in the sketch below.
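A sketch of the type-2 placement above, assuming the rectangles are plain tuples, k is the video scale factor computed earlier, and k3 is a separately chosen, possibly non-uniform factor; the function name is illustrative.

    def place_effect_type2(effect_rect, effect_size, k, k3):
        """Rescale the effect's position with the video factor k and its size with
        the non-uniform factor k3 = (alpha, beta); returns (x'', y'', w'', h'')."""
        x, y, w, h = effect_rect          # position/extent relative to the original video
        ew, eh = effect_size              # original object image size
        alpha, beta = k3
        x2, y2 = x * k, y * k             # position after scaling by the video factor k
        w2, h2 = ew * alpha, eh * beta    # size after non-uniform scaling by k3
        return (round(x2), round(y2), round(w2), round(h2))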
Fig. 9 is a schematic diagram of another embodiment of a photo synthesis unit according to an embodiment of the present invention. As shown in Fig. 9, in another embodiment, another implementation of image synthesis includes a countdown unit 901, a composite image acquiring unit 902 and a composite image processing unit 903, in which:
Countdown unit 901 is the same unit as the countdown unit in Fig. 8. Specifically, when the special effect animation is shown on the video playback window, a specified time of the special effect animation can be set as the moment to take the photo; the description of this time point can be obtained from the special effect gift unit, and it is generally the display time of the best-effect frame in the special effect animation. The countdown unit starts 3 to 5 seconds before this specified time point and shows the countdown information in the video playback window, prompting the anchor and the users that a photo will be taken after the countdown, and it starts the composite image acquiring unit when the countdown ends.
Composite image acquiring unit 902 obtains the video streaming image and the special effect image (that is, the object) from the video playback window. This unit can take a screenshot of the video playback window directly and capture the special effect animation being played at the same moment; the special effect animation captured at this moment is the best frame that was set. Here the image obtained by the composite image acquiring unit already contains both the video streaming image of the anchor end and the special effect image.
Composite image processing unit 903 scales the obtained composite image to the size that is finally required.
Fig. 10 is a schematic diagram of a photo post-processing unit according to an embodiment of the present invention. As shown in Fig. 10, the photo post-processing unit includes a photo upload unit 1001, a network server 1002 and a photo display unit 1003, in which:
Photo upload unit 1001 uploads the synthesized picture to the network server.
Network server 1002 receives the photo uploaded by the photo upload unit 1001, stores the photo, and provides query and download services.
Photo display unit 1003 obtains the stored picture information of the current anchor from the network server 1002 and displays it. Users can browse all the photos of the anchor, and the anchor can also delete and group photos while browsing them.
To sum up, the embodiment of the present invention provides an image processing method and device in net cast. Through the identification of the facial image and the gesture, the method and device make it possible, while a short-lived gift special effect is being shown, to automatically obtain and save a photo of the anchor's best pose together with the best special effect, so that, after the special effect photo is saved and displayed, the anchor and the viewers can look back on and relive that splendid moment.
Further, the image processing method and device in net cast provided by the embodiment of the present invention can be applied to traditional real-time video live streaming without modifying the net cast process or the video stream data.
Further, since the net cast process and the video stream data do not need to be modified, the technical solution provided by the present invention has high reusability and portability; that is, any audio/video playback system can, without much modification, easily load the device of the present invention and generate real-time special effect photos synthesized on the basis of video images.
The serial numbers of the above embodiments of the invention are only for description and do not represent the advantages or disadvantages of the embodiments.
In the above embodiments of the invention, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference can be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content can be realized in other ways. The apparatus embodiments described above are merely exemplary; for example, the division of the units may be a division by logical function, and there may be other division manners in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate members may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit can be realized either in the form of hardware or in the form of a software functional unit.
If the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions used to cause a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the method of each embodiment of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a removable hard disk, a magnetic disk or an optical disk.
The above is only a preferred embodiment of the present invention. It should be noted that, for those of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as within the protection scope of the present invention.

Claims (16)

1. An image processing method in net cast, characterized by comprising:
determining that an object is triggered and shown in the net cast;
monitoring whether a predetermined condition is met, wherein the predetermined condition comprises: a similarity between a gesture made by a user in the net cast and a prearranged gesture is greater than or equal to a predetermined similarity;
when the predetermined condition is met, obtaining a video streaming image in the current net cast;
obtaining an object image of the object;
synthesizing at least the video streaming image and the object image,
wherein the predetermined condition further comprises: a proportion of a facial image of the user in the net cast to the image of the net cast is within a predetermined range,
and wherein, after determining that the object is triggered and shown in the net cast, the method further comprises: showing the prearranged gesture to the user.
2. The method according to claim 1, characterized in that, if it is not monitored within a first predetermined time period that the predetermined condition is met, the predetermined condition is regarded as having been met at the end of the first predetermined time period.
3. The method according to claim 2, characterized in that the method further comprises:
after determining that the object is triggered and shown in the net cast, showing a first countdown to the user, wherein the first countdown corresponds to the first predetermined time period.
4. The method according to claim 2, characterized in that the first predetermined time period is the display time of the object.
5. The method according to claim 1, characterized in that, after determining that the object is triggered and shown in the net cast, the method further comprises:
showing a second countdown to the user, wherein the predetermined condition is that a second predetermined time period ends, and the second countdown corresponds to the second predetermined time period.
6. The method according to claim 5, characterized in that the second predetermined time period is from the moment the display of the object starts to the moment the best display effect of the object appears.
7. The method according to claim 5, characterized in that obtaining the video streaming image in the current net cast and obtaining the object image of the object comprise:
obtaining a screenshot of the video streaming image and the object image in the current video playback window, wherein the video playback window is used to live-broadcast the video stream and the object.
8. The method according to claim 1, characterized in that synthesizing at least the video streaming image and the object image comprises:
scaling the obtained video streaming image according to a first preset ratio;
scaling the obtained object image of the object according to a second preset ratio;
synthesizing at least the scaled video streaming image and the scaled object image.
9. An image processing device in net cast, characterized by comprising:
a determining module, configured to determine that an object is triggered and shown in the net cast;
a monitoring module, configured to monitor whether a predetermined condition is met, the predetermined condition comprising: a similarity between a gesture made by a user in the net cast and a prearranged gesture is greater than or equal to a predetermined similarity;
a first obtaining module, configured to obtain a video streaming image in the current net cast when the predetermined condition is met;
a second obtaining module, configured to obtain an object image of the object;
a synthesis module, configured to synthesize at least the video streaming image and the object image,
the predetermined condition further comprising: a proportion of a facial image of the user in the net cast to the image of the net cast is within a predetermined range,
the device further comprising: a first display module, configured to show the prearranged gesture to the user after determining that the object is triggered and shown in the net cast.
10. The device according to claim 9, characterized in that, if it is not monitored within a first predetermined time period that the predetermined condition is met, the predetermined condition is regarded as having been met at the end of the first predetermined time period.
11. The device according to claim 10, characterized in that the device further comprises:
a second display module, configured to show a first countdown to the user after determining that the object is triggered and shown in the net cast, wherein the first countdown corresponds to the first predetermined time period.
12. The device according to claim 10, characterized in that the first predetermined time period is the display time of the object.
13. The device according to claim 9, characterized in that the device further comprises:
a third display module, configured to show a second countdown to the user after determining that the object is triggered and shown in the net cast, wherein the predetermined condition is that a second predetermined time period ends, and the second countdown corresponds to the second predetermined time period.
14. The device according to claim 13, characterized in that the second predetermined time period is from the moment the display of the object starts to the moment the best display effect of the object appears.
15. The device according to claim 13, characterized in that the first obtaining module and the second obtaining module comprise:
an acquiring unit, configured to obtain a screenshot of the video streaming image and the object image in the current video playback window, wherein the video playback window is used to live-broadcast the video stream and the object.
16. The device according to claim 9, characterized in that the synthesis module comprises:
a first scaling unit, configured to scale the obtained video streaming image according to a first preset ratio;
a second scaling unit, configured to scale the obtained object image of the object according to a second preset ratio;
a synthesis unit, configured to synthesize at least the scaled video streaming image and the scaled object image.
CN201610763148.XA 2016-08-29 2016-08-29 Image processing method and device in net cast Active CN106303662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610763148.XA CN106303662B (en) 2016-08-29 2016-08-29 Image processing method and device in net cast

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610763148.XA CN106303662B (en) 2016-08-29 2016-08-29 Image processing method and device in net cast

Publications (2)

Publication Number Publication Date
CN106303662A CN106303662A (en) 2017-01-04
CN106303662B true CN106303662B (en) 2019-09-20

Family

ID=57674903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610763148.XA Active CN106303662B (en) 2016-08-29 2016-08-29 Image processing method and device in net cast

Country Status (1)

Country Link
CN (1) CN106303662B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106954080B (en) * 2017-03-17 2020-07-31 武汉斗鱼网络科技有限公司 Video live broadcast method and device and user terminal
CN106804007A (en) * 2017-03-20 2017-06-06 合网络技术(北京)有限公司 The method of Auto-matching special efficacy, system and equipment in a kind of network direct broadcasting
CN107124658B (en) * 2017-05-02 2019-10-11 北京小米移动软件有限公司 Net cast method and device
CN107124664A (en) * 2017-05-25 2017-09-01 百度在线网络技术(北京)有限公司 Exchange method and device applied to net cast
CN109040835A (en) * 2017-06-08 2018-12-18 合网络技术(北京)有限公司 Multimedia data processing method and device
CN107493515B (en) * 2017-08-30 2021-01-01 香港乐蜜有限公司 Event reminding method and device based on live broadcast
CN107786549B (en) * 2017-10-16 2019-10-29 北京旷视科技有限公司 Adding method, device, system and the computer-readable medium of audio file
CN107911736B (en) * 2017-11-21 2020-05-12 广州华多网络科技有限公司 Live broadcast interaction method and system
CN107911724B (en) * 2017-11-21 2020-07-07 广州华多网络科技有限公司 Live broadcast interaction method, device and system
CN108810602B (en) * 2018-03-30 2020-09-04 武汉斗鱼网络科技有限公司 Method and device for displaying information of live broadcast room and computer equipment
CN112347301A (en) * 2019-08-09 2021-02-09 北京字节跳动网络技术有限公司 Image special effect processing method and device, electronic equipment and computer readable storage medium
CN110830811B (en) * 2019-10-31 2022-01-18 广州酷狗计算机科技有限公司 Live broadcast interaction method, device, system, terminal and storage medium
CN112887631B (en) * 2019-11-29 2022-08-12 北京字节跳动网络技术有限公司 Method and device for displaying object in video, electronic equipment and computer-readable storage medium
CN110933454B (en) * 2019-12-06 2021-11-02 广州酷狗计算机科技有限公司 Method, device, equipment and storage medium for processing live broadcast budding gift
CN111355974A (en) * 2020-03-12 2020-06-30 广州酷狗计算机科技有限公司 Method, apparatus, system, device and storage medium for virtual gift giving processing
CN111629223B (en) * 2020-06-11 2022-09-13 网易(杭州)网络有限公司 Video synchronization method and device, computer readable storage medium and electronic device
CN111970533B (en) * 2020-08-28 2022-11-04 北京达佳互联信息技术有限公司 Interaction method and device for live broadcast room and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101742114A (en) * 2009-12-31 2010-06-16 上海量科电子科技有限公司 Method and device for determining shooting operation through gesture identification
CN105450642B (en) * 2015-11-17 2018-11-23 广州华多网络科技有限公司 It is a kind of based on the data processing method being broadcast live online, relevant apparatus and system

Also Published As

Publication number Publication date
CN106303662A (en) 2017-01-04

Similar Documents

Publication Publication Date Title
CN106303662B (en) Image processing method and device in net cast
CN108989830A (en) A kind of live broadcasting method, device, electronic equipment and storage medium
JP6397911B2 (en) Video broadcast system and method for distributing video content
CN105359501B (en) Automatic music video creation from photograph collection and intelligent picture library
US7397932B2 (en) Facial feature-localized and global real-time video morphing
CN108986192B (en) Data processing method and device for live broadcast
CN107682729A (en) It is a kind of based on live interactive approach and live broadcast system, electronic equipment
CN107105315A (en) Live broadcasting method, the live broadcasting method of main broadcaster's client, main broadcaster's client and equipment
CN107071580A (en) Data processing method and device
CN105898133A (en) Video shooting method and device
CN106303354B (en) Face special effect recommendation method and electronic equipment
CN107911736A (en) Living broadcast interactive method and system
KR20160128366A (en) Mobile terminal photographing method and mobile terminal
CN113067994B (en) Video recording method and electronic equipment
CN112905074B (en) Interactive interface display method, interactive interface generation method and device and electronic equipment
CN106162357B (en) Obtain the method and device of video content
CN113068053A (en) Interaction method, device, equipment and storage medium in live broadcast room
WO2023035897A1 (en) Video data generation method and apparatus, electronic device, and readable storage medium
CN108416832A (en) Display methods, device and the storage medium of media information
CN114900678B (en) VR end-cloud combined virtual concert rendering method and system
CN104244101A (en) Method and device for commenting multimedia content
CN107801061A (en) Ad data matching process, apparatus and system
CN109313653A (en) Enhance media
CN106131457A (en) A kind of GIF image generating method, device and terminal unit
CN107203646A (en) A kind of intelligent social sharing method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant