CN112188228A - Live broadcast method and device, computer readable storage medium and electronic equipment - Google Patents

Live broadcast method and device, computer readable storage medium and electronic equipment

Info

Publication number
CN112188228A
CN112188228A (application CN202011062067.XA)
Authority
CN
China
Prior art keywords
information
scene
anchor
live broadcast
live
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011062067.XA
Other languages
Chinese (zh)
Inventor
何志强
陈健生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202011062067.XA
Publication of CN112188228A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23412Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8146Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a live broadcast method and device, a computer-readable storage medium, and electronic equipment, relating to the technical field of network live broadcast. The live broadcast method includes the following steps: acquiring virtual scene information using a first process; acquiring, with the first process, real scene information determined by a second process, and extracting anchor portrait information from the real scene information; and fusing the virtual scene information with the anchor portrait information using the first process to generate live broadcast data. By working with two processes, the method fuses the anchor portrait with the virtual scene in real time, increases the diversity of live broadcast scenes, and lets the anchor broadcast in many different virtual scenes without changing the real broadcast location.

Description

Live broadcast method and device, computer readable storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of network live broadcast technologies, and in particular, to a live broadcast method and apparatus, a computer-readable storage medium, and an electronic device.
Background
As living standards continue to improve, more and more people watch live broadcasts in their daily lives.
Currently, live broadcast modes mainly include live broadcast in a real scene and live broadcast in a virtual scene. When broadcasting in a real scene, the scenes available to an anchor are limited, and broadcasting from the same scene for a long time makes the stream monotonous, which easily leads to user churn. Live broadcast in a virtual scene, by contrast, allows the scene to change with the content, increasing the diversity of broadcast scenes and improving user retention. For example, when the anchor's content leans literary, the background can be set to a virtual library; when the content is lively, the background can be set to a virtual bar, and so on.
Existing technologies for live broadcast in virtual scenes usually require the virtual scene to be designed and developed in advance and use virtual characters for the broadcast. However, broadcasting with a virtual character in a virtual scene has high development cost, offers too little interaction between the virtual character and users, produces a poor broadcast effect, and yields low user retention.
Disclosure of Invention
The present disclosure aims to provide a live broadcast method, a live broadcast apparatus, a computer-readable storage medium, and an electronic device, so as to overcome, at least to some extent, the problems of high cost, poor broadcast effect, and low user retention that affect virtual-scene live broadcast under the related technologies.
According to a first aspect of the present disclosure, there is provided a live broadcast method, including: acquiring virtual scene information using a first process; acquiring, with the first process, real scene information determined by a second process, and extracting anchor portrait information from the real scene information; and fusing the virtual scene information with the anchor portrait information using the first process to generate live broadcast data.
Optionally, acquiring the virtual scene information using the first process further includes: acquiring a preselected scene object; and determining the virtual scene information according to the scene object so that it can be acquired by the first process.
In an exemplary embodiment of the present disclosure, when the scene object is a picture preselected by the anchor, determining the virtual scene information according to the scene object includes: determining virtual scene elements according to the anchor's preselected picture; and determining the virtual scene information from the virtual scene elements.
Optionally, acquiring, with the first process, the real scene information determined by the second process includes: acquiring, with the second process, the real scene information collected in real time by a real-scene capture device.
Optionally, extracting the anchor portrait information from the real scene information includes: detecting the anchor portrait in the real scene information; and separating the anchor portrait from the real-scene background to obtain the anchor portrait information.
Optionally, detecting the anchor portrait in the real scene information includes: if multiple portraits are detected in the real scene information, computing the similarity of each portrait to a pre-stored anchor portrait; and extracting the portrait with the highest similarity as the anchor portrait information.
Optionally, fusing the virtual scene information with the anchor portrait information using the first process includes: determining the position of each virtual object in the virtual scene information; determining a target area for the anchor portrait in the virtual scene according to the positions of the virtual objects; and placing the anchor portrait in the target area.
Optionally, placing the anchor portrait in the target area includes: resizing the anchor portrait according to the size of the target area to obtain an adjusted anchor portrait; and placing the adjusted anchor portrait in the target area to obtain a live broadcast picture.
Optionally, fusing the virtual scene information with the anchor portrait information using the first process to generate live broadcast data includes: fusing the virtual scene information with the anchor portrait information using the first process to obtain a live broadcast picture; determining live broadcast audio corresponding to the live broadcast picture; and generating the live broadcast data from the live broadcast picture and the live broadcast audio.
Optionally, determining the live broadcast audio corresponding to the live broadcast picture includes: acquiring scene audio of the virtual scene using the first process; acquiring anchor audio of the real scene using the second process; sending the scene audio to the second process using the first process; and mixing, in the second process, the anchor audio of the real scene with the scene audio of the virtual scene to obtain the live broadcast audio corresponding to the live broadcast picture.
According to another aspect of the present disclosure, there is provided a live broadcast apparatus including an information acquisition module, an information extraction module, and an information fusion module.
Specifically, the information acquisition module may be configured to acquire virtual scene information using a first process; the information extraction module may be configured to acquire, with the first process, real scene information determined by a second process and extract anchor portrait information from the real scene information; and the information fusion module may be configured to fuse the virtual scene information with the anchor portrait information using the first process to generate live broadcast data.
Optionally, the information acquisition module may be configured to: acquire a preselected scene object; and determine the virtual scene information according to the scene object so that it can be acquired by the first process.
Optionally, the information acquisition module may be further configured to: determine virtual scene elements according to the anchor's preselected picture; and determine the virtual scene information from the virtual scene elements.
Optionally, the information extraction module may be configured to: acquire, with the second process, the real scene information collected in real time by a real-scene capture device.
Optionally, the information extraction module may be further configured to: detect the anchor portrait in the real scene information; and separate the anchor portrait from the real-scene background to obtain the anchor portrait information.
Optionally, the information extraction module may be further configured to: if multiple portraits are detected in the real scene information, compute the similarity of each portrait to a pre-stored anchor portrait; and extract the portrait with the highest similarity as the anchor portrait information.
Optionally, the information fusion module may be configured to: determine the position of each virtual object in the virtual scene information; determine a target area for the anchor portrait in the virtual scene according to the positions of the virtual objects; and place the anchor portrait in the target area.
Optionally, the information fusion module may be further configured to: resize the anchor portrait according to the size of the target area to obtain an adjusted anchor portrait; and place the adjusted anchor portrait in the target area to obtain a live broadcast picture.
Optionally, the information fusion module may be further configured to: fuse the virtual scene information with the anchor portrait information using the first process to obtain a live broadcast picture; determine live broadcast audio corresponding to the live broadcast picture; and generate the live broadcast data from the live broadcast picture and the live broadcast audio.
Optionally, the information fusion module may be further configured to: acquire scene audio of the virtual scene using the first process; acquire anchor audio of the real scene using the second process; send the scene audio to the second process using the first process; and mix, in the second process, the anchor audio with the scene audio to obtain the live broadcast audio corresponding to the live broadcast picture.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the live broadcast methods described above.
According to still another aspect of the present disclosure, there is provided an electronic device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform any of the live broadcast methods described above by executing the executable instructions.
In the technical solutions provided in some embodiments of the present disclosure, when an instruction to open a virtual scene is received from the anchor, a first process loads the specified virtual scene according to a scene object preselected by the anchor; meanwhile, a second process collects information from the real live scene and sends the real scene information to the first process, which extracts the anchor portrait information from it. When multiple portraits exist in the real scene, the correct anchor portrait can be extracted accurately; when the virtual scene content is rich, the position best suited for fusing the anchor portrait can be selected in the virtual scene; the anchor portrait is then resized, and the first process fuses the virtual scene with the anchor portrait. Compared with existing live broadcast technology, this method has three advantages. First, the dual-process design decouples the live broadcast service from the scene-fusion service, so the two can be developed in parallel by separate teams. Second, extracting the anchor portrait from the real background separates the portrait from the background, so a real anchor portrait can replace a virtual character; this removes the restriction that only virtual characters can broadcast in a virtual scene, increases the diversity of live broadcasts, and improves the broadcast effect. Third, fusing the anchor portrait with the virtual scene solves both the monotony of real-scene broadcasts and the poor effect of virtual-character broadcasts, improving user retention.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
fig. 1 schematically shows a flowchart of a live broadcast method according to an exemplary embodiment of the present disclosure;
fig. 2 schematically shows a flowchart of determining scene information from a scene object according to an exemplary embodiment of the present disclosure;
fig. 3 schematically shows a flowchart of extracting an anchor portrait from multiple portraits in a real scene according to an exemplary embodiment of the present disclosure;
fig. 4 schematically shows a flowchart of fusing an anchor portrait, a virtual scene, and audio information according to an exemplary embodiment of the present disclosure;
fig. 5 schematically shows an effect diagram of a live broadcast method according to an exemplary embodiment of the present disclosure;
fig. 6 schematically shows a flowchart of a method for implementing live broadcast using two processes according to an exemplary embodiment of the present disclosure;
fig. 7 schematically shows a block diagram of a live broadcast apparatus according to an exemplary embodiment of the present disclosure;
fig. 8 schematically shows a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the steps. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
As living standards continue to improve, more and more people watch live broadcasts, and live broadcast modes keep evolving.
Originally, an anchor could only broadcast in a real scene; now, an anchor can choose to broadcast in a real scene or in a virtual scene according to the broadcast content and requirements.
When broadcasting in a real scene, the scenes available to the anchor are limited, and broadcasting from the same scene for a long time makes the stream monotonous, which easily leads to user churn. Broadcasting in a virtual scene allows the scene to change with the content, increasing the diversity of broadcast scenes and improving user retention.
However, the related virtual-scene live broadcast technologies suffer from incomplete extraction of the anchor portrait and poor fusion of the virtual scene with the anchor portrait. For example, when multiple portraits are present in the real scene, the anchor portrait cannot be extracted accurately; and when the virtual scene is rich in content, the best position for fusing the anchor portrait cannot be determined. In view of this, a new live broadcast method is needed.
The steps of the live broadcast method of the exemplary embodiments of the present disclosure may generally be performed by a server, in which case the live broadcast apparatus described below may be configured within the server. The server may include a dedicated chip or be equipped with an independent GPU (Graphics Processing Unit). However, aspects of the present disclosure may also be implemented on terminal devices, which may include, but are not limited to, mobile phones, tablets, personal computers, and the like.
Fig. 1 schematically shows a flowchart of a live broadcast method of an exemplary embodiment of the present disclosure. Referring to fig. 1, the live broadcast method may include the following steps:
S12: Acquire virtual scene information using a first process.
In an exemplary embodiment of the present disclosure, the first process may be a scene process. The virtual scene information may include a virtual scene; in optional embodiments it may additionally include a virtual object, scene audio, or both. Specifically, the virtual scene may be a scene background such as a high-rise skyline, a wave-washed beach, or the banks of the Yellow River; the virtual object may be a scene decoration such as a heart-shaped balloon, a rocket, or a spacecraft; and the scene audio may be the sound effects and background music that come with the virtual scene. In an exemplary embodiment of the present disclosure, before the first process loads the virtual scene, scene parameters are determined from the preselected scene object. The specified virtual scene is loaded according to the preselected scene object, and the program that loads it may be developed with an existing rendering engine. The specified virtual scene may be one of multiple virtual scenes designed in advance. The scene object may be preselected by the anchor, by the audience (for example, the scene indicated by the numbers or characters most often entered in bullet-screen comments), or by the system (for example, a scene chosen at random by a system wheel). The scene object may be a set of parameters in the rendering engine or a picture selected by the anchor, with each parameter set or preselected picture corresponding to one virtual scene. It should be noted that the virtual scene in the present disclosure may be two-dimensional or three-dimensional.
Specifically, referring to fig. 2, when the preselected scene object 21 is a picture, a machine learning model may be used to recognize the picture and obtain scene parameters 23. The machine learning model may be a convolutional neural network; the present disclosure does not limit the type or parameters of the model. The scene parameters 23 include elements 25 such as the style, location, and color system of the picture content, and the corresponding virtual scene 27 is determined from this set of elements 25.
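As a concrete illustration of this picture-to-scene mapping, the following sketch classifies the anchor's preselected picture and looks up a scene preset. The classifier choice (a torchvision ResNet-18), the element labels, and the preset table are all assumptions for illustration; the patent fixes neither a model nor a taxonomy.

```python
# Hedged sketch: map a preselected picture to scene parameters (cf. fig. 2).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

_preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical preset table: scene elements -> virtual scene parameters.
SCENE_PRESETS = {
    "library": {"style": "quiet", "color_system": "warm", "scene_id": 101},
    "beach":   {"style": "open",  "color_system": "cool", "scene_id": 102},
}

def scene_parameters_from_picture(path: str) -> dict:
    """Recognize the picture with a CNN and return scene parameters."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    img = _preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        label_idx = int(model(img).argmax(dim=1))
    # A real system would map the class index to scene-element labels;
    # this toy rule only stands in for that mapping.
    element = "library" if label_idx % 2 == 0 else "beach"
    return SCENE_PRESETS[element]
```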
S14: Acquire, with the first process, real scene information determined by a second process, and extract anchor portrait information from the real scene information.
In an exemplary embodiment of the present disclosure, the second process is responsible for collecting video and/or audio information from the real scene in real time and sending it to the first process through shared memory; the two processes perform different tasks. In the present disclosure, the second process may be a live broadcast process. The real scene information includes anchor portrait information and anchor audio information. Note that the second process may collect either or both kinds of information, depending on the requirements of the broadcast. The real-scene capture device is any device that can record video and audio, including but not limited to a camera and a microphone. The anchor portrait information is the part of the anchor's body that appears in the real scene: if the camera captures only the anchor's face, the anchor portrait information is the face information; if it captures the face and arms, the anchor portrait information is the face and arms. The anchor audio information is the sound of the anchor during the real-scene broadcast, including the sound picked up by the microphone, background music being played, and so on.
In an optional embodiment, the first process and the second process belong to the same live broadcast client (live broadcast client software or application) and run on the same electronic device, that is, the device the anchor uses to broadcast, such as a mobile phone or a personal computer.
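The memory-sharing handoff between the two processes can be pictured with Python's standard shared-memory facility. This is a minimal sketch under assumed buffer names and frame sizes, with synchronization and cleanup (close/unlink on the creating side) omitted; the patent does not prescribe a particular shared-memory API.

```python
# Hedged sketch: pass camera frames from the live (second) process to the
# scene (first) process via shared memory; names and sizes are assumptions.
import numpy as np
from multiprocessing import shared_memory

FRAME_SHAPE = (720, 1280, 3)              # assumed camera resolution, 8-bit BGR
FRAME_BYTES = int(np.prod(FRAME_SHAPE))

def publish_frame(frame: np.ndarray, name: str = "live_frame"):
    """Second process: copy one captured frame into a named shared buffer."""
    shm = shared_memory.SharedMemory(name=name, create=True, size=FRAME_BYTES)
    np.ndarray(FRAME_SHAPE, dtype=np.uint8, buffer=shm.buf)[:] = frame
    return shm                             # caller keeps it alive, unlinks later

def read_frame(name: str = "live_frame") -> np.ndarray:
    """First process: attach to the shared buffer and take a private copy."""
    shm = shared_memory.SharedMemory(name=name)
    frame = np.ndarray(FRAME_SHAPE, dtype=np.uint8, buffer=shm.buf).copy()
    shm.close()
    return frame
```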
In an exemplary embodiment of the present disclosure, the first process acquires the real scene information determined by the second process, detects the anchor portrait in it, and separates the anchor portrait from the real-scene background to obtain the anchor portrait information. Specifically, the detection may use an image recognition method.
In this embodiment, the real-scene video frame may contain only the broadcasting anchor, or it may contain several people. When only the anchor is in the frame, the region where the anchor portrait is located can be detected by image recognition and then separated from the background to obtain the anchor portrait information; a sketch of this separation step follows. When several people are in the frame, the anchor portrait must first be singled out from the other portraits.
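The separation itself can be done with any person-segmentation network. Below is a minimal sketch using torchvision's pretrained DeepLabV3, which is our illustrative choice; the patent requires only "an image recognition method".

```python
# Hedged sketch: separate the anchor portrait from the real background with a
# generic person-segmentation model (illustrative choice, not the patent's).
import numpy as np
import torch
import torchvision.transforms as T
from torchvision.models.segmentation import (deeplabv3_resnet50,
                                             DeepLabV3_ResNet50_Weights)

_model = deeplabv3_resnet50(weights=DeepLabV3_ResNet50_Weights.DEFAULT).eval()
_prep = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
PERSON_CLASS = 15                       # "person" in the Pascal VOC label set

def extract_portrait(frame_rgb: np.ndarray) -> np.ndarray:
    """Return the frame with non-person pixels zeroed out."""
    inp = _prep(frame_rgb).unsqueeze(0)             # HWC uint8 -> 1CHW float
    with torch.no_grad():
        out = _model(inp)["out"][0]                 # upsampled to input size
    mask = (out.argmax(0) == PERSON_CLASS).numpy()
    return frame_rgb * mask[..., None]
```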
Fig. 3 schematically shows a flow of extracting the anchor portrait from multiple portraits in a real scene. Referring to fig. 3, when multiple portraits 33 are detected in the real scene 31, they are separated from the background and each is compared for similarity with a pre-stored anchor portrait; the portrait with the highest similarity is extracted as the anchor portrait information 35. The similarity is computed as follows: the anchor portrait is trained into a neural network in advance using machine learning; each detected portrait is then fed into the trained network, which outputs the similarity that the image is the anchor. When the input image is the anchor, the similarity computed by the network is far higher than when it is not, so the portrait with the highest similarity is the anchor portrait.
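The selection rule of fig. 3 reduces to a nearest-embedding comparison. The sketch below assumes some embedding network `embed` (face or body features) and uses cosine similarity, which is one common choice; the patent only specifies a neural network that outputs a similarity score.

```python
# Hedged sketch: pick the anchor among several separated portraits (cf. fig. 3).
from typing import Callable, List
import numpy as np

def pick_anchor(portraits: List[np.ndarray],
                anchor_embedding: np.ndarray,
                embed: Callable[[np.ndarray], np.ndarray]) -> np.ndarray:
    """Return the portrait whose embedding is closest to the stored anchor's."""
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    scores = [cosine(embed(p), anchor_embedding) for p in portraits]
    return portraits[int(np.argmax(scores))]       # highest similarity wins
```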
S16: Fuse the virtual scene information with the anchor portrait information using the first process to generate live broadcast data.
In an exemplary embodiment of the present disclosure, live broadcast data is the data obtained by fusing the virtual scene with the anchor portrait and/or the audio information.
Fig. 4 schematically illustrates fusing the anchor portrait, the virtual scene, and the audio information according to an exemplary embodiment of the present disclosure. Referring to fig. 4, the anchor portrait 401 is fused with the virtual scene 403 in the first process to obtain a live broadcast picture, which may preferably be sent to the second process through texture sharing. Optionally, audio information 405 may also be fused into the live broadcast picture to generate live broadcast data 411. Specifically, the first process acquires scene audio 407 and sends it to the second process through shared memory; the second process acquires anchor audio 409 through a microphone and mixes the scene audio 407 with the anchor audio 409 to generate the live broadcast audio; finally, the second process fuses the live broadcast picture with the live broadcast audio to generate the live broadcast data 411. Optionally, after generating the live broadcast data 411, the live broadcast client sends it to the live broadcast server, which provides it to the viewers' clients.
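The mixing step on the audio path of fig. 4 amounts to a gain-weighted sum of two PCM buffers. A minimal sketch, assuming int16 PCM at a common sample rate and illustrative gain values:

```python
# Hedged sketch: second process mixes anchor audio with scene audio (fig. 4).
import numpy as np

def mix_audio(anchor_pcm: np.ndarray, scene_pcm: np.ndarray,
              anchor_gain: float = 1.0, scene_gain: float = 0.4) -> np.ndarray:
    """Mix two int16 PCM buffers into one live-audio buffer, with clipping."""
    n = min(len(anchor_pcm), len(scene_pcm))       # align buffer lengths
    mixed = (anchor_pcm[:n].astype(np.int32) * anchor_gain
             + scene_pcm[:n].astype(np.int32) * scene_gain)
    return np.clip(mixed, -32768, 32767).astype(np.int16)
```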
It should be noted that the method in the above embodiment is only one option. The fusion of the anchor portrait with the virtual scene may be performed in the first process or in the second process; the live broadcast audio may be generated before, after, or at the same time as the live broadcast picture; and there are many ways to transmit data between processes, so achieving the same result with another inter-process transmission mode also falls within the scope of the inventive content of the present disclosure.
In an exemplary embodiment of the present disclosure, before the virtual scene is fused with the anchor portrait and/or the audio information, the position of each virtual object in the virtual scene information is determined first; a target area for the anchor portrait is then determined from those positions; and finally the anchor portrait is placed in the target area. Specifically, a target detection method from machine learning detects which virtual objects exist in the virtual scene and where they are, judges the most suitable position for the anchor portrait, and places the portrait there. For example, if the virtual scene contains a sea with a boat on it, then after the sea and the boat are detected, the boat is judged the more suitable place for the anchor portrait, and the portrait is placed at the boat's position.
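One way to derive a target area from the detected object boxes is to take the widest free horizontal band and anchor the portrait toward the scene floor. That heuristic is our assumption; the patent requires only that the area follow from the object positions.

```python
# Hedged sketch: choose a target area for the portrait from detected boxes.
from typing import List, Tuple

Box = Tuple[int, int, int, int]                    # (x1, y1, x2, y2)

def target_area(scene_w: int, scene_h: int, boxes: List[Box]) -> Box:
    """Return the widest horizontal gap between objects as the portrait area."""
    spans = sorted((x1, x2) for x1, _, x2, _ in boxes)
    best_gap, best_x, prev_end = 0, 0, 0
    for x1, x2 in spans + [(scene_w, scene_w)]:    # sentinel at right edge
        if x1 - prev_end > best_gap:
            best_gap, best_x = x1 - prev_end, prev_end
        prev_end = max(prev_end, x2)
    # Anchor the portrait to the lower part of the scene inside the free band.
    return (best_x, scene_h // 3, best_x + best_gap, scene_h)
```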
In another embodiment of the disclosure, when the size of the extracted anchor portrait does not match the size of the target area in the virtual scene, the anchor portrait is resized according to the size of the target area to obtain an adjusted anchor portrait; the adjusted anchor portrait is then placed in the target area, that is, rendered onto a designated patch, to obtain the live broadcast picture.
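Concretely, the resize-and-place step can be sketched with OpenCV as below; the patent's implementation renders the portrait onto an engine patch instead, so this is only an assumed pixel-level equivalent.

```python
# Hedged sketch: resize the extracted portrait and paste it into the target
# area of the virtual scene (pixel-level stand-in for the engine rendering).
import cv2
import numpy as np

def compose_live_frame(scene: np.ndarray, portrait: np.ndarray,
                       area: tuple) -> np.ndarray:
    """Paste the portrait, resized to area=(x1, y1, x2, y2), over the scene."""
    x1, y1, x2, y2 = area
    resized = cv2.resize(portrait, (x2 - x1, y2 - y1))
    frame = scene.copy()
    mask = resized.any(axis=2)          # segmentation left non-person pixels 0
    frame[y1:y2, x1:x2][mask] = resized[mask]
    return frame
```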
In another embodiment of the present disclosure, the anchor portrait may be rendered onto a custom patch by the rendering engine, with specific material parameters configured on the patch according to the characteristics of the anchor portrait, achieving the best fusion effect. Optionally, effects such as lighting can be automatically superimposed on the live broadcast picture by the rendering engine, further improving the fusion of the person with the scene.
In another embodiment of the present disclosure, the generated virtual live broadcast picture may be previewed in advance, so the anchor can choose a suitable virtual scene based on the preview. For example, before selecting a virtual scene, the anchor may watch the broadcast effect in that scene on the broadcasting client while viewers cannot yet see the picture; once the anchor decides to broadcast in that scene, the picture in the virtual scene becomes the live broadcast picture.
In another embodiment of the present disclosure, when a user presents a virtual gift to the anchor, a corresponding virtual object is added to the live broadcast picture according to the type of the gift; virtual objects can also be added according to the anchor's broadcasting habits. For example, when a user gives the anchor a balloon, a heart-shaped balloon effect may be generated in the live broadcast picture.
Fig. 5 schematically shows the effect of the live broadcast method of the present disclosure. When the anchor broadcasts on a holiday, the method can place the broadcast in a virtual scene with a festive atmosphere. Referring to fig. 5, when broadcasting on Valentine's Day, the anchor can choose a background of hearts formed from pink balloons, which heightens the holiday atmosphere, improves the broadcast effect, and attracts more users. When a user gives the anchor a gift, different scene objects, such as an orange flashing balloon or a red rose, can be triggered on the pink heart background according to the type of gift.
The live broadcast method of the present disclosure will be further explained with reference to fig. 6.
Fig. 6 schematically shows a flow for implementing the live broadcast method with two processes. In step S601, the first process loads the specified virtual scene. In this embodiment, the first process may be a scene process: a scene loader built on top of a rendering engine that can load and run different scene resources on instruction. In step S603, the second process monitors the real scene data. In this embodiment, the second process may be a live broadcast process: when it receives a broadcast instruction, it starts the scene process, which loads a specific scene according to specific parameters. In step S605, the second process sends the real-scene video data to the first process through shared memory, and in step S607 the first process separates the anchor portrait from the background of the video data. In step S609, the anchor portrait is fused with the virtual scene to obtain the live broadcast picture; during fusion, the most suitable position for the anchor portrait is found in the virtual scene in advance, the portrait is resized to that area, and the adjusted portrait is placed at the optimal position. In step S611, the first process sends the live broadcast picture to the second process through texture sharing. In steps S613 and S615, the first and second processes respectively acquire the scene audio of the virtual scene and the anchor audio of the real scene; in step S617 the first process sends the scene audio to the second process through shared memory. In step S619, the second process mixes the scene audio with the anchor audio to obtain the live broadcast audio. In step S621, the second process fuses the live broadcast picture with the live broadcast audio to obtain the live broadcast data. A compact sketch of this two-process pipeline follows.
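For orientation, here is a self-contained sketch of the two-process pipeline, with multiprocessing queues standing in for the patent's shared memory and shared textures, and with scene loading, portrait extraction, and audio handling reduced to stubs:

```python
# Hedged sketch of fig. 6: two cooperating processes; queues replace shared
# memory/textures, and all heavy steps are stubbed for brevity.
import multiprocessing as mp
import numpy as np

H, W = 720, 1280                                    # assumed frame size

def scene_process(frames: mp.Queue, pictures: mp.Queue) -> None:
    """First process: S601 load scene, S607 extract, S609/S611 fuse and send."""
    scene = np.full((H, W, 3), 40, np.uint8)        # S601: stub virtual scene
    for _ in range(3):                              # three frames for the demo
        frame = frames.get()                        # S605: real-scene video in
        portrait = frame                            # S607: extraction stubbed
        pictures.put(np.maximum(scene, portrait))   # S609/S611: fused picture

def live_process(frames: mp.Queue, pictures: mp.Queue) -> None:
    """Second process: S603 capture, then receive fused pictures back."""
    for _ in range(3):
        frames.put(np.zeros((H, W, 3), np.uint8))   # S603: stub camera frame
        live_picture = pictures.get()               # fused picture back (S611)
        # S613-S621 (audio mixing and final muxing) would happen here.

if __name__ == "__main__":
    frames, pictures = mp.Queue(), mp.Queue()
    p1 = mp.Process(target=scene_process, args=(frames, pictures))
    p2 = mp.Process(target=live_process, args=(frames, pictures))
    p1.start(); p2.start(); p1.join(); p2.join()
```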
It should be noted that although the various steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Further, a live broadcast apparatus is also provided in the present exemplary embodiment.
Fig. 7 schematically shows a block diagram of a live broadcast apparatus of an exemplary embodiment of the present disclosure. Referring to fig. 7, the live broadcast apparatus 7 according to an exemplary embodiment of the present disclosure may include an information acquisition module 71, an information extraction module 73, and an information fusion module 75.
Specifically, the information acquisition module 71 may be configured to acquire virtual scene information using a first process; the information extraction module 73 may be configured to acquire, with the first process, real scene information determined by a second process and extract anchor portrait information from it; and the information fusion module 75 may be configured to fuse the virtual scene information with the anchor portrait information using the first process to generate live broadcast data.
In an exemplary embodiment of the present disclosure, the information acquisition module may be configured to: acquire a preselected scene object; and determine the virtual scene information according to the scene object so that it can be acquired by the first process.
In an exemplary embodiment of the present disclosure, the information acquisition module may be further configured to: determine virtual scene elements according to the anchor's preselected picture; and determine the virtual scene information from the virtual scene elements.
In an exemplary embodiment of the present disclosure, the information extraction module may be configured to: acquire, with the second process, the real scene information collected in real time by a real-scene capture device.
In an exemplary embodiment of the present disclosure, the information extraction module may be further configured to: detect the anchor portrait in the real scene information; and separate the anchor portrait from the real-scene background to obtain the anchor portrait information.
In an exemplary embodiment of the present disclosure, the information extraction module may be further configured to: if multiple portraits are detected in the real scene information, compute the similarity of each portrait to a pre-stored anchor portrait; and extract the portrait with the highest similarity as the anchor portrait information.
In an exemplary embodiment of the present disclosure, the information fusion module may be configured to: determine the position of each virtual object in the virtual scene information; determine a target area for the anchor portrait in the virtual scene according to the positions of the virtual objects; and place the anchor portrait in the target area.
In an exemplary embodiment of the present disclosure, the information fusion module may be further configured to: resize the anchor portrait according to the size of the target area to obtain an adjusted anchor portrait; and place the adjusted anchor portrait in the target area to obtain a live broadcast picture.
In an exemplary embodiment of the present disclosure, the information fusion module may be further configured to: acquire audio information using the first process, the audio information including anchor audio information and live background audio information; and adjust the volume of the anchor audio information and of the live background audio information, then fuse them with the live broadcast picture to generate live broadcast data.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.
The program product for implementing the above method according to an embodiment of the present invention may employ a portable compact disc read only memory (CD-ROM) and include program codes, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical disk, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, all of which may generally be referred to herein as a "circuit," "module," or "system."
An electronic device 800 according to this embodiment of the invention is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is only an example and should not bring any limitations to the function and scope of use of the embodiments of the present invention.
As shown in fig. 8, electronic device 800 is in the form of a general purpose computing device. The components of the electronic device 800 may include, but are not limited to: the at least one processing unit 810, the at least one memory unit 820, a bus 830 connecting different system components (including the memory unit 820 and the processing unit 810), and a display unit 840.
The storage unit stores program code that can be executed by the processing unit 810, so that the processing unit 810 performs the steps according to various exemplary embodiments of the present invention described in the "exemplary methods" section above. For example, the processing unit 810 may perform step S611 shown in fig. 6.
the storage unit 820 may include readable media in the form of volatile memory units such as a random access memory unit (RAM)8201 and/or a cache memory unit 8202, and may further include a read only memory unit (ROM) 8203.
The storage unit 820 may also include a program/utility 8204 having a set (at least one) of program modules 8205, such program modules 8205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 830 may be any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 900 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 800, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 800 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 850. Also, the electronic device 800 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 860. As shown, the network adapter 860 communicates with the other modules of the electronic device 800 via the bus 830. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 800, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (13)

1. A live broadcast method, comprising:
acquiring virtual scene information by adopting a first process;
acquiring real scene information determined by a second process by adopting the first process, and extracting anchor portrait information from the real scene information;
and fusing the virtual scene information and the anchor portrait information by adopting the first process to generate live broadcast data.
2. The live broadcast method according to claim 1, wherein the live broadcast method further comprises:
acquiring a preselected scene object;
and determining the virtual scene information according to the scene object so as to be conveniently acquired by the first process.
3. The live broadcast method according to claim 2, wherein the scene object is a picture preselected by the anchor, and wherein determining the virtual scene information according to the scene object comprises:
determining virtual scene elements according to the anchor's preselected picture;
and determining the virtual scene information according to the virtual scene elements.
4. The live broadcast method according to claim 1, wherein the live broadcast method comprises:
and acquiring the real scene information acquired by real scene acquisition equipment in real time by adopting the second process.
5. The live broadcast method according to claim 1, wherein extracting anchor portrait information from the real scene information comprises:
detecting the anchor portrait from the real scene information;
and separating the anchor portrait from the background in the real scene to obtain the anchor portrait information.
6. The live broadcast method according to claim 5, wherein detecting the anchor portrait from the real scene information comprises:
if it is detected that the real scene information comprises a plurality of portraits, performing similarity calculation on the portraits with a pre-stored anchor portrait respectively;
and extracting the portrait with the highest similarity as the anchor portrait information.
7. The live broadcast method according to claim 1, wherein fusing the virtual scene information with the anchor portrait information using the first process comprises:
determining the position of each virtual object in the virtual scene information;
determining a target area of the anchor portrait in the virtual scene according to the position of each virtual object;
and configuring the anchor portrait in the target area.
8. The live broadcast method according to claim 7, wherein configuring the anchor portrait in the target area comprises:
adjusting the size of the anchor portrait according to the size of the target area to obtain an adjusted anchor portrait;
and configuring the adjusted anchor portrait in the target area to obtain a live broadcast picture.
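Claims 7-8 can be sketched together: pick a target area that avoids the virtual objects' rectangles, resize the portrait to it, and paste it in. The fixed portrait size, the scan step, and the direct overwrite (rather than alpha blending with a matte) are simplifications, not the patent's method.

```python
import cv2
import numpy as np

def choose_target(frame_w, frame_h, object_rects, w=320, h=480):
    # Claim-7 step: scan candidate x positions along the bottom of the frame
    # and return the first slot that overlaps no virtual object rectangle.
    y = frame_h - h
    for x in range(0, frame_w - w, 40):
        if all(x + w <= ox or x >= ox + ow or y + h <= oy or y >= oy + oh
               for (ox, oy, ow, oh) in object_rects):
            return (x, y, w, h)
    return (0, y, w, h)  # fallback: bottom-left corner

def place_anchor(virtual_frame, portrait, target):
    # Claim-8 step: adjust the portrait to the target size, then configure it there.
    x, y, w, h = target
    resized = cv2.resize(portrait, (w, h))
    virtual_frame[y:y + h, x:x + w] = resized
    return virtual_frame

scene = np.zeros((720, 1280, 3), dtype=np.uint8)
objects = [(0, 240, 300, 480)]                        # one virtual object, as (x, y, w, h)
anchor = np.full((400, 300, 3), 255, dtype=np.uint8)  # stand-in for the extracted portrait
target = choose_target(1280, 720, objects)
live_picture = place_anchor(scene, anchor, target)
```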
9. The live broadcast method according to any one of claims 1 to 8, wherein fusing the virtual scene information with the anchor portrait information using the first process to generate the live broadcast data comprises:
fusing the virtual scene information with the anchor portrait information using the first process to obtain a live broadcast picture;
determining live broadcast audio corresponding to the live broadcast picture;
and generating the live broadcast data from the live broadcast picture and the live broadcast audio.
10. The live broadcast method according to claim 9, wherein determining the live broadcast audio corresponding to the live broadcast picture comprises:
acquiring scene audio of the virtual scene using the first process;
acquiring anchor audio of the real scene using the second process;
sending the scene audio of the virtual scene to the second process using the first process;
and mixing, using the second process, the anchor audio of the real scene with the scene audio of the virtual scene to obtain the live broadcast audio corresponding to the live broadcast picture.
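Claim 10's mixing step can be sketched as a weighted sum of two mono sample buffers clipped to [-1, 1]. The gains, sample rate, and sine-wave stand-ins are illustrative; a production mixer would also need resampling and synchronization between the two processes' streams.

```python
import numpy as np

def mix_audio(anchor_audio, scene_audio, anchor_gain=0.8, scene_gain=0.4):
    # Weighted sum of the anchor's voice and the virtual scene audio,
    # truncated to the shorter buffer and clipped to the valid range.
    n = min(len(anchor_audio), len(scene_audio))
    mixed = anchor_gain * anchor_audio[:n] + scene_gain * scene_audio[:n]
    return np.clip(mixed, -1.0, 1.0).astype(np.float32)

t = np.linspace(0, 1, 48000, dtype=np.float32)
anchor = 0.5 * np.sin(2 * np.pi * 220 * t)   # stand-in for the anchor's microphone audio
scene = 0.3 * np.sin(2 * np.pi * 440 * t)    # stand-in for the virtual scene audio
live_audio = mix_audio(anchor, scene)
```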
11. A live broadcast apparatus, comprising:
an information acquisition module, configured to acquire virtual scene information using a first process;
an information extraction module, configured to acquire, using the first process, real scene information determined by a second process, and to extract anchor portrait information from the real scene information;
and an information fusion module, configured to fuse, using the first process, the virtual scene information with the anchor portrait information to generate live broadcast data.
12. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the live broadcast method of any one of claims 1-10.
13. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the live broadcast method of any one of claims 1-10 via execution of the executable instructions.
CN202011062067.XA: Live broadcast method and device, computer readable storage medium and electronic equipment. Filed 2020-09-30, priority date 2020-09-30; published as CN112188228A (status: pending).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011062067.XA CN112188228A (en) 2020-09-30 2020-09-30 Live broadcast method and device, computer readable storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN112188228A 2021-01-05 (en)

Family

ID=73948193

Country Status (1)

Country Link
CN (1) CN112188228A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110794952A (en) * 2018-08-01 2020-02-14 北京鑫媒世纪科技发展有限公司 Virtual reality cooperative processing method, device and system
WO2020052421A1 (en) * 2018-09-13 2020-03-19 腾讯科技(深圳)有限公司 Method for configuring virtual scene, device, storage medium, and electronic device
CN109660818A (en) * 2018-12-30 2019-04-19 广东彼雍德云教育科技有限公司 A kind of virtual interactive live broadcast system
CN110519611A (en) * 2019-08-23 2019-11-29 腾讯科技(深圳)有限公司 Living broadcast interactive method, apparatus, electronic equipment and storage medium
CN111432235A (en) * 2020-04-01 2020-07-17 网易(杭州)网络有限公司 Live video generation method and device, computer readable medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Duan Yongliang, Song Yanyan, Zhou Hongping, Dong Lihua (eds.): "All-Media Production and Broadcasting Technology" (《全媒体制播技术》), 30 September 2016, China Radio and Television Press *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115134616A (en) * 2021-03-29 2022-09-30 阿里巴巴新加坡控股有限公司 Live broadcast background control method, device, electronic equipment, medium and program product
CN115134616B (en) * 2021-03-29 2024-01-02 阿里巴巴新加坡控股有限公司 Live broadcast background control method, device, electronic equipment, medium and program product
CN113473207A (en) * 2021-07-02 2021-10-01 广州博冠信息科技有限公司 Live broadcast method and device, storage medium and electronic equipment
CN113706719A (en) * 2021-08-31 2021-11-26 广州博冠信息科技有限公司 Virtual scene generation method and device, storage medium and electronic equipment
CN114189743A (en) * 2021-12-15 2022-03-15 广州博冠信息科技有限公司 Data transmission method and device, electronic equipment and storage medium
CN114189743B (en) * 2021-12-15 2023-12-12 广州博冠信息科技有限公司 Data transmission method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210105)