CN114449303A - Live broadcast picture generation method and device, storage medium and electronic device - Google Patents
- Publication number
- CN114449303A (application number CN202210096666.6A)
- Authority
- CN
- China
- Prior art keywords
- live broadcast
- picture
- broadcast picture
- live
- distance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23418—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23424—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4788—Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Marketing (AREA)
- Databases & Information Systems (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Studio Devices (AREA)
Abstract
The invention discloses a live broadcast picture generation method and device, a storage medium and an electronic device. The method comprises the following steps: acquiring first visual angle information of a first live broadcast picture and second visual angle information of a second live broadcast picture, wherein the first live broadcast picture is obtained by shooting a first anchor object with a first image acquisition device, and the second live broadcast picture is obtained by shooting a second anchor object with a second image acquisition device; adjusting the first live broadcast picture into a third live broadcast picture and the second live broadcast picture into a fourth live broadcast picture according to the display area occupied by the first anchor object in the first live broadcast picture and the display area occupied by the second anchor object in the second live broadcast picture; and pushing the third live broadcast picture and the fourth live broadcast picture to the viewer clients for display. The technical scheme solves the problem of poor user experience caused in the prior art by directly splicing the live broadcast pictures of co-streaming (connected-mic) anchors.
Description
Technical Field
The invention relates to the technical field of computer vision, in particular to a live broadcast picture generation method and device, a storage medium and an electronic device.
Background
With the development of network technology and the live streaming industry, connected-mic live streaming (co-streaming), as a new form of live interaction, has attracted wide public attention. In the prior art, the live broadcast pictures of the multiple anchors participating in a co-stream are usually spliced directly; for example, the live broadcast pictures of two anchor objects or three anchor objects are spliced to obtain a target live broadcast picture.
However, because the distances between different anchors and the cameras in their respective live broadcast spaces differ, and the internal parameters of the cameras may also differ, the multiple anchors in the spliced target live broadcast picture are not on the same viewing-distance plane, so the target live broadcast picture looks incongruous, which results in poor user experience.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a live broadcast picture generation method and device, a storage medium and an electronic device, and aims to at least solve the problem of poor user experience caused by directly splicing live broadcast pictures.
According to an aspect of the embodiments of the present invention, there is provided a live view generating method, including: acquiring first visual angle information of a first live broadcast picture and second visual angle information of a second live broadcast picture, wherein the first live broadcast picture is a picture obtained by shooting a first anchor object by first image acquisition equipment, and the second live broadcast picture is a picture obtained by shooting a second anchor object by second image acquisition equipment; according to the display area occupied by the first anchor object in the first live broadcast picture and the display area occupied by the second anchor object in the second live broadcast picture, the first live broadcast picture is adjusted to be a third live broadcast picture, and the second live broadcast picture is adjusted to be a fourth live broadcast picture, wherein the display area occupied by the first anchor object in the third live broadcast picture and the display area occupied by the second anchor object in the fourth live broadcast picture reach the same viewing distance condition; and pushing the third live broadcast picture and the fourth live broadcast picture to the audience client for displaying.
Optionally, the adjusting the first live broadcast picture to a third live broadcast picture and the second live broadcast picture to a fourth live broadcast picture according to the display area occupied by the first anchor object in the first live broadcast picture and the display area occupied by the second anchor object in the second live broadcast picture includes: acquiring a first display proportion between the display area occupied by the first anchor object in the first live broadcast picture and the display area of the first live broadcast picture; acquiring a second display proportion between the display area occupied by the second anchor object in the second live broadcast picture and the display area of the second live broadcast picture; under the condition that the first display proportion does not reach the viewing distance condition, adjusting the first live broadcast picture according to the display proportion indicated by the viewing distance condition to obtain the third live broadcast picture; and under the condition that the second display proportion does not reach the viewing distance condition, adjusting the second live broadcast picture according to the display proportion indicated by the viewing distance condition to obtain the fourth live broadcast picture.
Optionally, the adjusting the first live broadcast picture to a third live broadcast picture and the second live broadcast picture to a fourth live broadcast picture according to the display area occupied by the first anchor object in the first live broadcast picture and the display area occupied by the second anchor object in the second live broadcast picture includes: acquiring first visual angle information between the first anchor object and the first image acquisition equipment according to the display area occupied by the first anchor object in the first live broadcast picture; acquiring second visual angle information between the second anchor object and the second image acquisition equipment according to the display area occupied by the second anchor object in the second live broadcast picture; and inputting the first visual angle information, the second visual angle information, the first live broadcast picture and the second live broadcast picture into a target picture generation network to generate a target viewing-distance live broadcast picture, wherein the target viewing-distance live broadcast picture comprises a third live broadcast picture obtained by adjusting the first live broadcast picture and a fourth live broadcast picture obtained by adjusting the second live broadcast picture, the first anchor object in the third live broadcast picture and the second anchor object in the fourth live broadcast picture are located on the same viewing-distance plane, and the target picture generation network is a deep learning network generated after training with a plurality of groups of sample live broadcast pictures and is used for adjusting the viewing distance of the anchor object displayed in the input live broadcast picture.
Optionally, the inputting the first visual angle information, the second visual angle information, the first live broadcast picture and the second live broadcast picture into the target picture generation network to generate the target viewing-distance live broadcast picture includes: in the target picture generation network, extracting a first object feature of the first anchor object in the first live broadcast picture and a second object feature of the second anchor object in the second live broadcast picture; determining a target viewing distance according to the first visual angle information and the second visual angle information; generating the third live broadcast picture based on the first object feature, the first live broadcast picture and the target viewing distance, and generating the fourth live broadcast picture based on the second object feature, the second live broadcast picture and the target viewing distance; and fusing the third live broadcast picture and the fourth live broadcast picture to generate the target viewing-distance live broadcast picture.
Optionally, the generating a third live view based on the first object feature, the first live view and the target view distance, and generating a fourth live view based on the second object feature, the second live view and the target view distance includes: adjusting the display size of a first image displayed by a first anchor object in a first live broadcast picture based on the target sight distance and the first object characteristics to generate and obtain a third live broadcast picture; and adjusting the display size of a second image displayed by the second anchor object in the second live broadcast picture based on the target visual distance and the second object characteristic so as to generate and obtain a fourth live broadcast picture.
Optionally, the fusing the third live broadcast picture and the fourth live broadcast picture to generate a target view distance live broadcast picture includes: adjusting the resolution of the third live broadcast picture to be a target resolution, and adjusting the resolution of the fourth live broadcast picture to be the target resolution; and splicing the third live broadcast picture under the target resolution and the fourth live broadcast picture under the target resolution to generate a target sight distance live broadcast picture.
Optionally, before the acquiring the first perspective information between the first anchor object and the first image capturing device and the second perspective information between the second anchor object and the second image capturing device, the method further includes: acquiring a plurality of groups of sample live broadcast pictures and the visual angle information corresponding to each sample live broadcast picture in each group of sample live broadcast pictures; inputting the plurality of groups of sample live broadcast pictures and the corresponding visual angle information into an initial picture adjustment network for training, wherein the initial picture adjustment network comprises a picture generation network and a discrimination network, the picture generation network is used for generating a reference viewing-distance live broadcast picture based on a group of sample live broadcast pictures, the reference viewing-distance live broadcast picture comprises the viewing-distance sub-pictures corresponding to each sample live broadcast picture in the group of sample live broadcast pictures, and the discrimination network is used for judging whether the viewing-distance sub-pictures in the reference viewing-distance live broadcast picture reach a target viewing distance condition; and under the condition that the picture adjustment network reaches a convergence condition, determining the picture generation network at the time the convergence condition is reached as the target picture generation network, wherein in the reference viewing-distance live broadcast picture output by the picture generation network when the convergence condition is reached, the anchor objects in the viewing-distance sub-pictures corresponding to the sample live broadcast pictures are located on the same viewing-distance plane.
Optionally, the acquiring the first perspective information between the first anchor object and the first image capturing device and the second perspective information between the second anchor object and the second image capturing device includes: acquiring first internal parameters of the first image acquisition equipment; displaying at least two reference images in the first terminal equipment used for displaying the first live broadcast picture, and determining a first viewing-distance parameter between the first anchor object and the first image acquisition equipment according to the observation distance between the first anchor object and the reference images; acquiring second internal parameters of the second image acquisition equipment; and displaying at least two reference images in the second terminal equipment used for displaying the second live broadcast picture, and determining a second viewing-distance parameter between the second anchor object and the second image acquisition equipment according to the observation distance between the second anchor object and the reference images.
According to another aspect of the present invention, there is also provided a live view generating apparatus including: the first acquisition unit is used for acquiring first visual angle information of a first live broadcast picture and second visual angle information of a second live broadcast picture, wherein the first live broadcast picture is a picture obtained by shooting a first anchor object by first image acquisition equipment, and the second live broadcast picture is a picture obtained by shooting a second anchor object by second image acquisition equipment; the first processing unit is used for adjusting the first live broadcast picture into a third live broadcast picture and adjusting the second live broadcast picture into a fourth live broadcast picture according to the display area occupied by the first anchor object in the first live broadcast picture and the display area occupied by the second anchor object in the second live broadcast picture, wherein the display area occupied by the first anchor object in the third live broadcast picture and the display area occupied by the second anchor object in the fourth live broadcast picture reach the same viewing distance condition; and the pushing unit is used for pushing the third live broadcast picture and the fourth live broadcast picture to the audience client for displaying.
Optionally, the first processing unit further includes: a first acquisition module, configured to acquire a first display proportion between the display area occupied by the first anchor object in the first live broadcast picture and the display area of the first live broadcast picture; a second acquisition module, configured to acquire a second display proportion between the display area occupied by the second anchor object in the second live broadcast picture and the display area of the second live broadcast picture; a first adjustment module, configured to adjust the first live broadcast picture according to the display proportion indicated by the viewing distance condition, under the condition that the first display proportion does not reach the viewing distance condition, to obtain the third live broadcast picture; and a second adjustment module, configured to adjust the second live broadcast picture according to the display proportion indicated by the viewing distance condition, under the condition that the second display proportion does not reach the viewing distance condition, to obtain the fourth live broadcast picture.
According to the embodiment of the invention, the first live broadcast picture is adjusted to the third live broadcast picture and the second live broadcast picture is adjusted to the fourth live broadcast picture according to the display area occupied by the first anchor object in the first live broadcast picture and the display area occupied by the second anchor object in the second live broadcast picture, so that the display area occupied by the first anchor object in the third live broadcast picture and the display area occupied by the second anchor object in the fourth live broadcast picture reach the same viewing distance condition. That is to say, by adjusting the display areas occupied by the first anchor object and the second anchor object in their respective pictures, the anchor objects in the third live broadcast picture and the fourth live broadcast picture are placed on the same viewing-distance plane, and the adjusted third and fourth live broadcast pictures are pushed to the clients where the viewers are located, so that the co-streaming anchors in the combined live broadcast picture are essentially on the same viewing-distance plane. This improves the realism of the co-streaming live picture and the viewing experience of the user, and solves the problem of poor user experience caused by directly splicing the co-streaming live broadcast pictures.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a schematic diagram of an application scene environment of an alternative live view generation method according to an embodiment of the present invention;
FIG. 2 is a flow diagram of an alternative live view generation method according to an embodiment of the present invention;
FIG. 3 is a diagram of an alternative method of generating a live view in accordance with an embodiment of the present invention;
FIG. 4 is a diagram of an alternative live broadcast picture generated without reaching the viewing distance condition, in accordance with an embodiment of the present invention;
fig. 5 is a schematic diagram of an optional screen adjustment according to the display scale of the first anchor object and the second anchor object according to the embodiment of the present invention;
FIG. 6 is a diagram illustrating an alternative adjustment of a live view according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating generation of a target viewing-distance live broadcast picture after an alternative viewing distance adjustment, according to an embodiment of the invention;
FIG. 8 is a schematic diagram of an alternative method for generating a first viewing-distance sub-picture and a second viewing-distance sub-picture, in accordance with embodiments of the present invention;
FIG. 9 is a schematic diagram of an alternative picture adjustment network according to an embodiment of the present invention;
FIG. 10 is a schematic illustration of an alternative determination of a first viewing-distance parameter and a second viewing-distance parameter in accordance with embodiments of the invention;
fig. 11 is a block diagram of a configuration of a live view generating apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of the embodiments of the present invention, a live broadcast picture generation method is provided. Optionally, as an optional implementation manner, the live broadcast picture generation method may be applied, but is not limited, to a live broadcast picture generation system in the application scenario shown in fig. 1. The live broadcast picture generation system may include, but is not limited to, an anchor device 102, a network 104, a server 106, a database 108, and a viewing device 110, among others. A target client (such as the client presenting the live interface shown in fig. 1, which may be an anchor version client of a live platform) runs on the viewing device 110. The viewing device 110 includes a human-computer interaction screen, a processor and a memory. The human-computer interaction screen is used for displaying a live interface of the anchor client (such as the live interface shown in fig. 1), and also for providing a human-computer interaction interface through which a user of the live broadcast software performs human-computer interaction operations during network live broadcasting. The processor is configured to generate an interaction instruction in response to the human-computer interaction operation and send the interaction instruction to the server 106. The memory is used for storing related attribute data, such as interface special effect information of the live interface and information of the different virtual gifts of the live platform. The anchor device 102 likewise includes a human-computer interaction screen for displaying a live interface of the viewer client, a processor, and a memory.
The specific process is as follows: in step S102, a first live broadcast picture and a second live broadcast picture are obtained; then, in steps S104 and S106, the first live broadcast picture and the second live broadcast picture are transmitted. The server 106 executes steps S108-S112: the first live broadcast picture is adjusted to a third live broadcast picture and the second live broadcast picture is adjusted to a fourth live broadcast picture according to the display area occupied by the first anchor object in the first live broadcast picture and the display area occupied by the second anchor object in the second live broadcast picture; the third live broadcast picture and the fourth live broadcast picture are synthesized into a mixed picture; and the mixed picture containing the third live broadcast picture and the fourth live broadcast picture is pushed to the viewer clients, so that a live broadcast picture containing the third live broadcast picture and the fourth live broadcast picture is displayed.
As another alternative, when the anchor device 102 has a relatively large computing and processing capacity, the above steps S108-S114 may also be performed by the anchor device 102. This is merely an example, and is not limited in this embodiment.
In this embodiment, a live view generating method is provided, and fig. 2 is a flowchart of a live view generating method according to an embodiment of the present invention, where the flowchart includes the following steps:
step S202, acquiring first visual angle information of a first live broadcast picture and second visual angle information of a second live broadcast picture, wherein the first live broadcast picture is a picture obtained by shooting a first anchor object by first image acquisition equipment, and the second live broadcast picture is a picture obtained by shooting a second anchor object by second image acquisition equipment;
step S204, according to the display area occupied by the first anchor object in the first live broadcast picture and the display area occupied by the second anchor object in the second live broadcast picture, the first live broadcast picture is adjusted to be a third live broadcast picture, and the second live broadcast picture is adjusted to be a fourth live broadcast picture, wherein the display area occupied by the first anchor object in the third live broadcast picture and the display area occupied by the second anchor object in the fourth live broadcast picture reach the same sight distance condition;
and step S206, pushing the third live broadcast picture and the fourth live broadcast picture to the audience client for displaying.
Specifically, as shown in fig. 3, before the anchor object starts live streaming, the image acquisition device in the live broadcast room is usually turned on: the first anchor object is photographed by the first image acquisition device to obtain the first live broadcast picture, and the second anchor object is photographed by the second image acquisition device to obtain the second live broadcast picture. The number, model and internal parameters of the first image acquisition device and the second image acquisition device are not limited here.
Because the models and internal parameters of the first image acquisition device (such as a camera) and the second image acquisition device may differ, and the distances between the first anchor object and the second anchor object and their respective image acquisition devices may also differ, the display area occupied by the first anchor object in the first live broadcast picture may differ from the display area occupied by the second anchor object in the second live broadcast picture.
As an alternative embodiment, suppose that the internal parameters of the first image acquisition device and the second image acquisition device are the same, the height and weight of the first anchor object and the second anchor object are also substantially equal, the first viewing angle distance between the first anchor object and the first image acquisition device is 10 meters, and the second viewing angle distance between the second anchor object and the second image acquisition device is 4 meters. In this case, the captured first live broadcast picture and second live broadcast picture are as shown in fig. 4. If the first live broadcast picture and the second live broadcast picture are spliced directly, the visual effect presented to the audience is that the two anchor objects in the spliced picture are located on different viewing-distance planes, so the live broadcast picture looks incongruous.
Based on the above problem, it may be considered to adjust the first display area occupied by the first anchor object in the first live broadcast picture and the second display area occupied by the second anchor object in the second live broadcast picture. Specifically, as shown in fig. 3, the first live broadcast picture is adjusted according to the first display area occupied by the first anchor object in the first live broadcast picture and the target viewing distance condition to obtain the adjusted third live broadcast picture; and the second live broadcast picture is adjusted according to the second display area occupied by the second anchor object in the second live broadcast picture and the target viewing distance condition to obtain the adjusted fourth live broadcast picture.
As can be seen from fig. 3, the display area occupied by the first anchor object in the adjusted third live view is substantially the same as the display area occupied by the second anchor object in the adjusted fourth live view, in other words, the display area occupied by the first anchor object in the third live view and the display area occupied by the second anchor object in the fourth live view reach the same viewing distance condition. And finally, carrying out picture fusion on the adjusted third live broadcast picture and the adjusted fourth live broadcast picture, and pushing the third live broadcast picture and the fourth live broadcast picture to audience clients.
It should be noted that the display area occupied by the first anchor object in the third live broadcast picture and the display area occupied by the second anchor object in the fourth live broadcast picture reach the same viewing distance condition, where the viewing distance condition includes, but is not limited to: the display proportion occupied by the first anchor object in the third live broadcast picture and the display proportion occupied by the second anchor object in the fourth live broadcast picture are both within a preset proportion range or equal to a preset proportion; or the distances between the first anchor object and the second anchor object and their respective image acquisition devices fall within a preset distance range or equal a preset distance, and the like.
As an optional implementation manner, the adjusting the first live view to the third live view and the second live view to the fourth live view according to the display area occupied by the first anchor object in the first live view and the display area occupied by the second anchor object in the second live view includes:
acquiring a first display proportion between a display area occupied by a first anchor object in a first live broadcast picture and the display area of the first live broadcast picture;
acquiring a second display proportion between a display area occupied by a second anchor object in a second live broadcast picture and the display area of the second live broadcast picture;
under the condition that the first display proportion does not reach the viewing distance condition, adjusting the first live broadcast picture according to the display proportion indicated by the viewing distance condition to obtain a third live broadcast picture;
and under the condition that the second display proportion does not reach the viewing distance condition, adjusting the second live broadcast picture according to the display proportion indicated by the viewing distance condition to obtain a fourth live broadcast picture.
In this embodiment, the specific process of performing picture adjustment on the first live broadcast picture and the second live broadcast picture includes:
1) judging whether the first display proportion and the second display proportion meet the viewing distance condition;
2) when one of the first display proportion and the second display proportion meets the viewing distance condition and the other does not, adjusting the display proportion that does not meet the condition; or,
under the condition that neither the first display proportion nor the second display proportion meets the viewing distance condition, adjusting both the first display proportion and the second display proportion;
3) adjusting the first live broadcast picture into the third live broadcast picture according to the adjusted first display proportion, and adjusting the second live broadcast picture into the fourth live broadcast picture according to the adjusted second display proportion.
Specifically, as shown in fig. 5, assume that the first display proportion between the display area occupied by the first anchor object in the first live broadcast picture and the display area of the first live broadcast picture is 70%, the second display proportion between the display area occupied by the second anchor object in the second live broadcast picture and the display area of the second live broadcast picture is 30%, and the viewing distance condition is that the display proportion of the first/second anchor object relative to the display area of its live broadcast picture is 50%. The process of picture adjustment for the first live broadcast picture and the second live broadcast picture then includes:
S1, comparing the first display proportion and the second display proportion respectively with the 50% display proportion in the viewing distance condition, and determining that neither the first display proportion nor the second display proportion meets the viewing distance condition;
S2, adjusting the first display proportion down from 70% to 50%, and adjusting the second display proportion up from 30% to 50%;
S3, determining the adjusted first live broadcast picture and the adjusted second live broadcast picture as the third live broadcast picture and the fourth live broadcast picture, respectively.
It is easy to understand that the viewing distance condition in this embodiment may be, besides a preset proportion, a preset proportion range for the anchor object in the live broadcast picture, for example 50% to 55%. In that case, the first display proportion only needs to be compared with the upper limit 55% and the lower limit 50% of the preset range, and when the first display proportion is greater than the upper limit or less than the lower limit, it is adjusted so that the adjusted first display proportion falls within the preset range of 50% to 55%. The second display proportion is adjusted with reference to the same method so that the adjusted second display proportion also falls within the preset range; the detailed process is not repeated here.
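For illustration only, the proportion check and adjustment described above can be sketched as follows in Python. The helper names, the default proportion range of 50% to 55%, and the use of a square root to convert an area proportion into a linear zoom factor are assumptions made for this example, not part of the patented scheme.

```python
import math

def display_proportion(anchor_area: float, frame_area: float) -> float:
    # Proportion of the live broadcast picture's display area occupied by the anchor object.
    return anchor_area / frame_area

def zoom_factor(proportion: float, low: float = 0.50, high: float = 0.55) -> float:
    # Linear zoom to apply to the picture so the anchor's *area* proportion lands in [low, high];
    # the square root converts an area ratio into a linear scale factor (an assumption here).
    if proportion > high:
        return math.sqrt(high / proportion)   # anchor too large -> zoom out
    if proportion < low:
        return math.sqrt(low / proportion)    # anchor too small -> zoom in
    return 1.0                                # viewing distance condition already met

# Example from the text: first anchor occupies 70%, second occupies 30%, condition is 50%.
print(zoom_factor(0.70, 0.50, 0.50))  # < 1, so the first live broadcast picture is shrunk
print(zoom_factor(0.30, 0.50, 0.50))  # > 1, so the second live broadcast picture is enlarged
```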
As another optional implementation manner, the adjusting the first live view to the third live view and the second live view to the fourth live view according to the display area occupied by the first anchor object in the first live view and the display area occupied by the second anchor object in the second live view further includes:
acquiring first visual angle information between the first anchor object and the first image acquisition equipment according to the display area occupied by the first anchor object in the first live broadcast picture;
acquiring second visual angle information between a second anchor object and second image acquisition equipment according to a display area occupied by the second anchor object in a second live broadcast picture;
and inputting the first visual angle information, the second visual angle information, the first live broadcast picture and the second live broadcast picture into a target picture generation network to generate a target viewing-distance live broadcast picture, wherein the target viewing-distance live broadcast picture comprises a third live broadcast picture obtained by adjusting the first live broadcast picture and a fourth live broadcast picture obtained by adjusting the second live broadcast picture, the first anchor object in the third live broadcast picture and the second anchor object in the fourth live broadcast picture are located on the same viewing-distance plane, and the target picture generation network is a deep learning network generated after training with a plurality of groups of sample live broadcast pictures and is used for adjusting the viewing distance of the anchor object displayed in the input live broadcast picture.
Specifically, as shown in fig. 6, the first perspective information between the first anchor object and the first image capturing device and the second perspective information between the second anchor object and the second image capturing device are respectively obtained from the first live broadcast picture and the second live broadcast picture. The first perspective information includes, but is not limited to, the first viewing angle distance between the first anchor object and the first image capturing device and the focal length of the first image capturing device when shooting the first anchor object; the second perspective information includes, but is not limited to, the second viewing angle distance between the second anchor object and the second image capturing device and the focal length of the second image capturing device when shooting the second anchor object, and so on.
And inputting the first visual angle information, the second visual angle information, the first live broadcast picture and the second live broadcast picture into a target picture generation network. In the target picture generation network, according to the sight distance condition, the sight distance of the first anchor object and the sight distance of the second anchor object are respectively adjusted, so that the first anchor object in the adjusted third live broadcast picture and the second anchor object in the adjusted fourth live broadcast picture are located on the same sight distance plane.
Further, in the target picture generation network, the third live broadcast picture and the fourth live broadcast picture are spliced, so that the target view-distance live broadcast picture as shown in fig. 6 can be obtained.
It should be noted that, in an actual co-streaming scenario, the number of anchor objects participating in the co-stream is not limited: it may be two, as in the foregoing embodiment, or more (for example, 3), in which case the number of live broadcast pictures corresponding to the anchor objects is also 3; for the specific process of generating the target viewing-distance live broadcast picture from the 3 live broadcast pictures, reference may be made to the embodiment shown in fig. 4. The viewing-distance parameters of the three live broadcast pictures are input into the target picture generation network, the three viewing-distance parameters are adjusted according to a preset value, 3 viewing-distance sub-pictures are generated correspondingly, and finally a target viewing-distance live broadcast picture in which the 3 anchor objects are essentially located on the same viewing-distance plane is generated.
It is understood that the target picture generation network in the above embodiment is a deep learning network generated by training with a plurality of groups of sample live broadcast pictures; for example, the generation network in a Generative Adversarial Network (GAN) is obtained by training with multiple groups of first live broadcast pictures, multiple groups of second live broadcast pictures, multiple groups of different camera viewing-distance parameters, and the like. With this generation network, the viewing angle distance of the anchor object in each live broadcast picture can be adjusted.
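The patent names a GAN whose generator serves as the target picture generation network but does not disclose its architecture. The following is a minimal, hypothetical PyTorch sketch of such adversarial training; the flattened-frame MLP generator and discriminator, the input dimensions, the real "same-plane" training pairs and the optimizer settings are illustrative assumptions rather than the patented implementation.

```python
import torch
import torch.nn as nn

FRAME_DIM = 64 * 64 * 3   # assumed flattened frame size per anchor picture
VIEW_DIM = 2              # assumed: [viewing distance, focal length] per picture

generator = nn.Sequential(      # maps two pictures + view info to two adjusted pictures
    nn.Linear(2 * FRAME_DIM + 2 * VIEW_DIM, 512), nn.ReLU(),
    nn.Linear(512, 2 * FRAME_DIM), nn.Sigmoid(),
)
discriminator = nn.Sequential(  # judges whether the sub-pictures reach the viewing distance condition
    nn.Linear(2 * FRAME_DIM, 256), nn.ReLU(),
    nn.Linear(256, 1), nn.Sigmoid(),
)
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCELoss()

def train_step(frames, view_info, real_same_plane_frames):
    # frames: (B, 2*FRAME_DIM); view_info: (B, 2*VIEW_DIM);
    # real_same_plane_frames: (B, 2*FRAME_DIM) sample pairs assumed to already share a viewing-distance plane.
    batch = frames.size(0)
    fake = generator(torch.cat([frames, view_info], dim=1))

    # Discriminator step: real same-plane pairs vs. generated pairs.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real_same_plane_frames), torch.ones(batch, 1)) \
           + bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make its output be judged as reaching the target viewing distance condition.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

At convergence, the generator alone would be kept as the target picture generation network, consistent with the training procedure summarized earlier in this document.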
Through the above embodiment provided by the present application, the target picture generation network is used to adjust the viewing angle distances of the anchor objects in the first live broadcast picture and the second live broadcast picture to the same viewing-distance plane, the target viewing-distance live broadcast picture is then generated from the adjusted third live broadcast picture and fourth live broadcast picture, and the target viewing-distance live broadcast picture is displayed on the clients where the viewers are located. In this way, the co-streaming anchors in the combined live broadcast picture are essentially on the same viewing-distance plane, which improves the realism of the co-streaming live picture and the user experience, and solves the problem of poor user experience caused by directly splicing the co-streaming live broadcast pictures.
In an optional embodiment, the inputting the first viewing angle information and the second viewing angle information, the first live view picture and the second live view picture into the target picture generation network to generate and obtain the target view distance live view picture includes:
in a target picture generation network, extracting a first object feature of a first anchor object in a first live broadcast picture and a second object feature of a second anchor object in a second live broadcast picture;
determining a target viewing distance according to the first viewing angle information and the second viewing angle information;
generating a third live broadcast picture based on the first object characteristics, the first live broadcast picture and the target sight distance, and generating a fourth live broadcast picture based on the second object characteristics, the second live broadcast picture and the target sight distance;
and fusing the third live broadcast picture and the fourth live broadcast picture to generate a target sight distance live broadcast picture.
Specifically, as shown in fig. 7, in this embodiment, feature extraction is first performed on the first anchor object in the first live broadcast picture to obtain the first object feature, where the first object feature includes, but is not limited to, the viewing angle distance between the first anchor object and the first image capturing device, and may also include internal parameters (such as the shooting focal length and pixels) of the image capturing device when the first live broadcast picture is shot. Similarly, feature extraction is performed on the second anchor object in the second live broadcast picture to obtain the second object feature, where the second object feature includes, but is not limited to, the viewing angle distance between the second anchor object and the second image capturing device, and may also include internal parameters (such as the shooting focal length and pixels) of the image capturing device when the second live broadcast picture is shot. Secondly, the first viewing angle distance and the second viewing angle distance are adjusted respectively according to the determined target viewing distance.
It should be noted that the first viewing angle distance is an estimated distance from the first anchor object to the camera, calculated from images of the first anchor object at any two different positions in the first live broadcast picture; similarly, the second viewing angle distance is an estimated distance from the second anchor object to the camera, calculated from images of the second anchor object at any two different positions in the second live broadcast picture. The two images of the first anchor object at different positions refer to two images acquired by the camera after the first anchor object, keeping the same posture, changes position within the first anchor's live broadcast space.
As an alternative implementation, the specific process of determining the target sight distance is as follows:
S1, acquiring the first viewing angle distance in the first visual angle information and the second viewing angle distance in the second visual angle information;
S2, calculating a compromise third viewing angle distance d3 according to the first viewing angle distance d1 and the second viewing angle distance d2, where d1 > d2;
S3, determining the third viewing angle distance d3 as the target viewing distance, or floating the third viewing angle distance downward by a preset value to obtain a fourth viewing angle distance d4 and determining d4 as the target viewing distance, where d2 < d4 ≤ d3.
In a specific embodiment, assuming that the estimated first perspective distance is 10 meters and the estimated second perspective distance is 5 meters, the calculated compromise distance is 7.5 meters; this distance of 7.5 may be directly determined as the target viewing distance, or any value greater than 5 and not greater than 7.5 may be determined as the target viewing distance. It follows that the target viewing distance is not a fixed distance value, but an arbitrary number within a specified value range.
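For illustration only, the target viewing distance selection described above might be sketched as follows. Taking the compromise distance d3 as the midpoint of d1 and d2 matches the worked example (10 m and 5 m giving 7.5 m) but is an assumption about the exact formula, as is the optional downward float.

```python
def target_viewing_distance(d1: float, d2: float, float_down: float = 0.0) -> float:
    # d1, d2: estimated viewing angle distances of the two anchors (order does not matter).
    d_far, d_near = max(d1, d2), min(d1, d2)
    d3 = (d_far + d_near) / 2.0          # compromise distance d3 (assumed to be the midpoint)
    d4 = d3 - float_down                 # optionally float d3 downward by a preset value
    # keep the target within the allowed range: d_near < d4 <= d3
    return max(min(d4, d3), d_near + 1e-6)

print(target_viewing_distance(10.0, 5.0))        # 7.5
print(target_viewing_distance(10.0, 5.0, 1.5))   # 6.0, still greater than 5 and at most 7.5
```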
Further, according to the target viewing distance, the first viewing angle distance, which is the relatively larger of the two anchor objects' viewing angle distances, is adjusted downward, i.e. from 10 to 7.5, while the relatively smaller second viewing angle distance is adjusted upward, i.e. from 5 to 7.5. Then, in the target picture generation network, the third live broadcast picture is generated from the adjusted first viewing angle distance and the first live broadcast picture, the fourth live broadcast picture is generated from the adjusted second viewing angle distance and the second live broadcast picture, and finally the third live broadcast picture and the fourth live broadcast picture are fused to obtain the target viewing-distance live broadcast picture.
According to the embodiment provided by the application, the viewing angle distances of the multiple anchor objects in the co-streaming scene are adjusted according to the target viewing distance, and the target viewing-distance live broadcast picture is generated from the multiple viewing-distance-adjusted live broadcast sub-pictures, so that the co-streaming anchor objects appear to be located in the same live broadcast space. That is, the live broadcast pictures of multiple live broadcast rooms are essentially on the same viewing-distance plane, which improves the viewing experience of the co-streaming live picture. In addition, since the target viewing distance may be any value in the specified value interval, the viewing distance adjustment of the anchor object has higher flexibility, which further improves the applicability of the viewing distance adjustment method.
In an optional embodiment, the generating a third live view based on the first object feature, the first live view and the target view distance, and generating a fourth live view based on the second object feature, the second live view and the target view distance includes:
adjusting the display size of the first avatar displayed by the first anchor object in the first live broadcast picture based on the target viewing distance and the first object feature, so as to generate the third live broadcast picture;
and adjusting the display size of the second avatar displayed by the second anchor object in the second live broadcast picture based on the target viewing distance and the second object feature, so as to generate the fourth live broadcast picture.
Specifically, as shown in fig. 8, assume that the first anchor object is displayed as a first avatar in the first live broadcast picture and the second anchor object is displayed as a second avatar in the second live broadcast picture, where the display width of the first avatar in the first live broadcast picture is w1 and its display height is h1, and the display width of the second avatar in the second live broadcast picture is w2 and its display height is h2. Assume further that the target viewing distance is obtained according to the method for determining the target viewing distance in the above embodiment and is mapped to the target display width w and the target display height h in the viewing-distance sub-picture.
The display width w1 of the first avatar is adjusted to the target display width w and its display height h1 is adjusted to the target display height h, and then the third live broadcast picture shown in fig. 8 is generated according to the target display width w, the target display height h and the first object feature. Based on the same principle, the display width w2 of the second avatar is adjusted to the target display width w and its display height h2 is adjusted to the target display height h, and then the fourth live broadcast picture shown in fig. 8 is generated according to the target display width w, the target display height h and the second object feature.
It can be understood that there is a mapping relationship between the target display width and target display height and the target viewing distance. In other words, once the target viewing distance is obtained using the method for determining the target viewing distance in the above embodiment, the target display width and target display height of the first avatar in the third live broadcast picture and the target display width and target display height of the second avatar in the fourth live broadcast picture in this embodiment are obtained.
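For illustration only, the resizing step described above could be sketched as follows using OpenCV. How the target viewing distance maps to the target display width and height, and how the avatar's current on-screen size is measured, are assumptions for this example.

```python
import cv2
import numpy as np

def adjust_to_target_size(frame: np.ndarray, avatar_w: int, avatar_h: int,
                          target_w: int, target_h: int) -> np.ndarray:
    # Zoom the whole live broadcast picture so the avatar's displayed size (avatar_w x avatar_h)
    # becomes the target display size (target_w x target_h) mapped from the target viewing distance.
    fx, fy = target_w / avatar_w, target_h / avatar_h
    return cv2.resize(frame, None, fx=fx, fy=fy)

# Hypothetical example: the first avatar measures 300x600 px, the target size is 200x400 px.
# frame3 = adjust_to_target_size(frame1, 300, 600, 200, 400)
```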
Through the above embodiments provided by the present application, the display size of the first avatar of the first anchor object in the first live broadcast picture and the display size of the second avatar of the second anchor object in the second live broadcast picture are respectively adjusted to the target display size, which ensures that the adjusted display sizes of the first avatar and the second avatar are the same. Therefore, after the third live broadcast picture and the fourth live broadcast picture, which have the same anchor avatar display size, are fused, the first avatar and the second avatar in the target viewing-distance live broadcast picture are on the same viewing-distance plane, achieving the technical effect of improving the viewing experience of the live broadcast audience.
As an optional implementation, the fusing the third live broadcast picture and the fourth live broadcast picture to generate the target view-distance live broadcast picture includes:
adjusting the resolution of the third live broadcast picture to be a target resolution, and adjusting the resolution of the fourth live broadcast picture to be the target resolution;
and splicing the third live broadcast picture under the target resolution and the fourth live broadcast picture under the target resolution to generate a target sight distance live broadcast picture.
In the process of generating the target view distance live broadcast picture based on the third live broadcast picture and the fourth live broadcast picture generated in the above embodiment, the following factors need to be considered: the shooting focal length and the resolution of the third live broadcast picture, the shooting focal length and the resolution of the fourth live broadcast picture, and the relationship between the two. The shooting focal length refers to the distance between the camera lens and the photosensitive element, and the resolution reflects the ability of the camera of the image acquisition device (for example, a camera) to resolve an image, namely the pixel count of the camera's image sensor; the shooting focal length and the resolution are key performance indexes of the camera.
In this embodiment, in order to ensure that the first image and the second image in the target view distance live broadcast picture are displayed on the same view distance plane, before generating the target view distance live broadcast picture, it is further required to ensure that the third live broadcast picture and the fourth live broadcast picture have the same resolution, and then generate the target view distance live broadcast picture with the same resolution, which includes the following specific steps:
1) determining a target resolution of a target sight distance live broadcast picture according to a first resolution of a third live broadcast picture and a second resolution of a fourth live broadcast picture;
2) respectively adjusting the first resolution of the third live broadcast picture to be a target resolution, and adjusting the second resolution of the fourth live broadcast picture to be the target resolution;
3) and splicing the adjusted third live broadcast picture and the adjusted fourth live broadcast picture, which now have the same target resolution, to generate the target sight distance live broadcast picture.
In the present embodiment, only the adjustment of the resolutions of the third live broadcast picture and the fourth live broadcast picture is described as an example; in practical application, other internal parameters of the cameras may also be adjusted. For example, by adjusting the shooting focal lengths of the cameras at the two ends, the adjusted display sizes of the first image in the third live broadcast picture and of the second image in the fourth live broadcast picture are made the same in the target sight distance live broadcast picture, and the consistency of parameters such as brightness and definition of the first image and the second image in the target sight distance live broadcast picture can also be ensured. Therefore, the display effect of the connected-mic live broadcast picture is improved, the viewing experience of the audience is improved, and the texture of the connected-mic live broadcast picture is improved.
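By way of illustration, the resolution unification and splicing of steps 1) to 3) above might look like the following sketch. The target-resolution policy (here simply the smaller of the two source resolutions) and all names are illustrative assumptions, not the patented implementation.

```python
import cv2
import numpy as np

def pick_target_resolution(a: np.ndarray, b: np.ndarray) -> tuple:
    # Step 1): choose a common (height, width), e.g. the smaller of the two
    # so that neither sub-picture has to be upsampled (illustrative policy only).
    return (min(a.shape[0], b.shape[0]), min(a.shape[1], b.shape[1]))

def splice_viewing_distance_frame(third: np.ndarray, fourth: np.ndarray) -> np.ndarray:
    th, tw = pick_target_resolution(third, fourth)
    third_r = cv2.resize(third, (tw, th))    # step 2): bring both pictures to the target resolution
    fourth_r = cv2.resize(fourth, (tw, th))
    return np.hstack([third_r, fourth_r])    # step 3): splice side by side into one connected-mic frame
```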
As an optional implementation manner, before the acquiring first perspective information between the first anchor object and the first image capturing device and acquiring second perspective information between the second anchor object and the second image capturing device, the method further includes:
acquiring a plurality of groups of sample live broadcast pictures and view angle information corresponding to each sample live broadcast picture in each group of sample live broadcast pictures;
inputting a plurality of groups of sample live broadcast pictures and corresponding visual angle information into an initial picture adjustment network for training, wherein the initial picture adjustment network comprises a picture generation network and a judgment network, the picture generation network is used for generating a reference visual distance live broadcast picture based on a group of sample live broadcast pictures, the reference visual distance live broadcast picture comprises visual distance sub-pictures corresponding to each sample live broadcast picture in the group of sample live broadcast pictures, and the judgment network is used for judging whether the visual distance sub-pictures in the reference visual distance live broadcast picture reach a target visual distance condition;
and under the condition that the picture adjusting network reaches the convergence condition, determining the picture generation network when the convergence condition is reached as a target picture generation network, wherein in the reference visual range live pictures output by the picture generation network when the convergence condition is reached, the anchor objects in the visual range sub-pictures corresponding to the sample live pictures are positioned on the same visual range plane.
In the present embodiment, the picture adjustment network is described by taking a Generative Adversarial Network (GAN) as an example.
Specifically, as shown in fig. 9, the picture adjustment network includes a generation network and a discrimination network, and the training process of the picture adjustment network based on training samples composed of the first live broadcast pictures and the second live broadcast pictures is as follows:
S1, inputting into the generation network a first training sample composed of a plurality of first live broadcast pictures p1, p2, …, pn together with the first visual angle information corresponding to each sample live broadcast picture, and a second training sample composed of a plurality of second live broadcast pictures q1, q2, …, qn together with the second visual angle information corresponding to each sample live broadcast picture;
S2, in the generation network, respectively adjusting the visual distance indicated by the first visual angle information in the first training sample and by the second visual angle information in the second training sample, to obtain the adjusted first visual angle information and the adjusted second visual angle information;
S3, generating a group of corresponding first reference sight distance live broadcast pictures according to the adjusted first visual angle information and the plurality of first live broadcast pictures p1, p2, …, pn; generating a group of corresponding second reference sight distance live broadcast pictures according to the adjusted second visual angle information and the plurality of second live broadcast pictures q1, q2, …, qn; and fusing the corresponding sample live broadcast pictures in the first reference sight distance live broadcast pictures and the second reference sight distance live broadcast pictures respectively to obtain a group of reference sight distance live broadcast pictures Q1, Q2, …, Qn;
S4, inputting the group of reference sight distance live broadcast pictures Q1, Q2, …, Qn into the discrimination network, and comparing the reference sight distance live broadcast pictures Q1, Q2, …, Qn with target sight distance live broadcast pictures Q1', Q2', …, Qn' in the discrimination network to obtain a discrimination result, wherein the target sight distance live broadcast pictures Q1', Q2', …, Qn' are a preset group of pictures;
Assume the discrimination result is T, T, T, …, F, wherein T indicates that the reference sight distance live broadcast picture Q1, obtained by fusing the sight-distance-adjusted first live broadcast picture p1 and the sight-distance-adjusted second live broadcast picture q1, is judged to be true, and F indicates that the reference sight distance live broadcast picture Qn, obtained by fusing the sight-distance-adjusted first live broadcast picture pn and the sight-distance-adjusted second live broadcast picture qn, is judged to be false.
That the reference sight distance live broadcast picture Q1 is judged true means that the first sight distance sub-picture generated from the sight-distance-adjusted first live broadcast picture p1 and the second sight distance sub-picture generated from the sight-distance-adjusted second live broadcast picture q1 both meet the preset target sight distance condition, wherein the target sight distance condition is that the distance between the first visual angle of the first anchor object in the third live broadcast picture and the second visual angle of the second anchor object in the fourth live broadcast picture is equal to the target visual distance.
S5, when the picture adjustment network meets the preset convergence condition, determining the picture generation network reaching the convergence condition as the target picture generation network;
The preset convergence condition may include, but is not limited to, the case in which m consecutive F values appear in the discrimination result. This is because, throughout the training process, the goal of the generation network G is to generate pictures as realistic as possible in order to deceive the discrimination network D, while the aim of the discrimination network D is to distinguish the pictures generated by G from real pictures as far as possible; that is, G and D form a dynamic game process.
In an ideal state, G can generate images realistic enough to deceive the discrimination network D, so that D can hardly judge whether an image generated by G is real; that is, training of the generation network can be regarded as complete when the discrimination network D has continuously judged m images output by the generation network G to be false.
It should be noted that the number of groups of sample live broadcast pictures input into the generation network depends on the number of live broadcast rooms participating in the connected-mic session; therefore, the embodiment of the present application does not limit the number of groups of sample live broadcast pictures, which may be two groups or any number greater than two.
Through the above embodiment provided by the application, the generative adversarial network GAN is used to train the initial picture adjustment network and obtain a target picture generation network meeting the convergence condition, so that the target picture generation network can be used to adjust the first anchor object in the first live broadcast picture and the second anchor object in the second live broadcast picture onto the same view distance plane. This improves the realism of the connected-mic live broadcast picture, improves the user experience of the live broadcast picture, and solves the problem of poor user experience caused by directly splicing the connected-mic live broadcast pictures.
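For readers less familiar with adversarial training, a single training step of a generation network G and a discrimination network D of the kind described in S1 to S5 can be sketched roughly as below. This is a generic PyTorch-style sketch under the assumptions that the discriminator ends with a sigmoid and that the generator consumes the paired live pictures plus their visual angle information; the module and argument names are invented for illustration and are not taken from the patent.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

def gan_train_step(G, D, opt_g, opt_d, frames_p, frames_q, view_p, view_q, target_frames):
    """One adversarial update: D learns to separate preset target viewing-distance
    pictures (label 1) from fused pictures produced by G (label 0); G then tries
    to make its fused pictures be judged as real."""
    real = torch.ones(target_frames.size(0), 1)
    fake = torch.zeros(target_frames.size(0), 1)

    fused = G(frames_p, frames_q, view_p, view_q)          # reference viewing-distance pictures Q1..Qn

    # --- update the discrimination network D ---
    loss_d = bce(D(target_frames), real) + bce(D(fused.detach()), fake)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- update the generation network G ---
    loss_g = bce(D(fused), real)                           # G wants D to output "true"
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# A stopping rule such as the one described above (e.g. m consecutive "false"
# judgments) would be checked outside this step over successive batches.
```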
As an optional implementation, the acquiring first perspective information between the first anchor object and the first image capturing device and acquiring second perspective information between the second anchor object and the second image capturing device includes:
acquiring a first internal parameter of the first image acquisition equipment; displaying at least two reference images in the first terminal equipment for displaying the first live broadcast picture, and determining a first sight distance parameter between the first anchor object and the first image acquisition equipment according to an observation distance between the first anchor object and the reference images;
acquiring a second internal reference of a second image acquisition device; and displaying at least two reference images in second terminal equipment for displaying a second live broadcast picture, and determining a second sight distance parameter between a second anchor object and second image acquisition equipment according to the observation distance between the second anchor object and the reference images.
Specifically, as shown in fig. 10, the determination of the first line-of-sight parameter between the first anchor object and the first image capturing device is taken as an example for explanation. Suppose A is the point where the first anchor object is located, p1 and p2 are images of the first anchor object at different positions in the live broadcast space, that is, p1 and p2 are reference images of the first anchor object, and C1 and C2 are the cameras capturing the images p1 and p2, respectively; the line segment L is the epipolar line formed by the epipolar plane C1C2A in the image p1, and the line segment L' is the epipolar line formed by the epipolar plane C1C2A in the image p2.
According to the epipolar geometry principle, the relationship between a three-dimensional space coordinate captured by a camera and the corresponding pixel coordinate is defined: the three-dimensional point A is transformed to the pixel coordinate x through a first projection matrix, or the point A is transformed to the pixel coordinate x' through a second projection matrix. A first observation distance from the point A to the reference image p1 is calculated according to the first projection matrix, and a second observation distance from the point A to the reference image p2 is calculated according to the second projection matrix; based on the first observation distance and the second observation distance, the first visual angle distance between the first anchor object and the first image acquisition device can be obtained.
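Written in the standard pinhole and epipolar notation (an assumption here, since the patent does not give the formulas explicitly), the two projections and the epipolar constraint read:

$$x \simeq P_1 A = K_1\,[R_1 \mid t_1]\begin{pmatrix}X\\ Y\\ Z\\ 1\end{pmatrix},\qquad x' \simeq P_2 A,\qquad x'^{\top} F\, x = 0,$$

where P1 and P2 are the first and second projection matrices, A = (X, Y, Z, 1) is the homogeneous coordinate of the three-dimensional point, and F is the fundamental matrix relating the two reference images.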
Based on the same principle, the three-dimensional point B at which the second anchor object is located is transformed to a pixel coordinate through a third projection matrix, or the point B is transformed to the corresponding pixel coordinate through a fourth projection matrix. A third observation distance from the point B to the reference image p3 is calculated according to the third projection matrix, and a fourth observation distance from the point B to the reference image p4 is calculated according to the fourth projection matrix; and the second visual angle distance between the second anchor object and the second image acquisition device is obtained based on the third observation distance and the fourth observation distance.
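As a rough numerical sketch of the above, under the assumptions that the projection matrices are known, that the anchor's pixel coordinates in the two reference images are matched, and that the final visual angle distance is taken simply as the mean of the two observation distances (a choice the patent does not prescribe), one could write:

```python
import numpy as np
import cv2

def camera_centre(P: np.ndarray) -> np.ndarray:
    """Camera centre C of a 3x4 projection matrix P (its right null vector)."""
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]
    return C[:3] / C[3]

def visual_angle_distance(P1: np.ndarray, P2: np.ndarray, x1, x2) -> float:
    """Triangulate the anchor point A from its pixels x1/x2 in the two reference
    images, then combine the two observation distances (here: their mean)."""
    pts1 = np.asarray(x1, dtype=np.float32).reshape(2, 1)
    pts2 = np.asarray(x2, dtype=np.float32).reshape(2, 1)
    Ah = cv2.triangulatePoints(P1.astype(np.float32), P2.astype(np.float32), pts1, pts2)
    A = (Ah[:3] / Ah[3]).ravel()                  # back from homogeneous coordinates
    d1 = np.linalg.norm(A - camera_centre(P1))    # first observation distance
    d2 = np.linalg.norm(A - camera_centre(P2))    # second observation distance
    return float((d1 + d2) / 2.0)
```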
It should be noted that, in the process of determining the first viewing angle distance according to the epipolar geometry principle, when the reference images of the first live view displayed in the first terminal device are selected, the number of the reference images may be two or more; similarly, in the process of determining the second viewing angle distance, when the reference images of the second live view displayed in the second terminal device are selected, the number of the reference images may be two or more; that is, in the embodiment of the present invention, the number of reference images is not limited.
Through the above embodiments provided by the present application, the first line-of-sight parameter between the first anchor object and the first image acquisition device and the second line-of-sight parameter between the second anchor object and the second image acquisition device are respectively calculated by using the epipolar geometry principle. The target visual distance is then calculated based on the determined first and second line-of-sight parameters, which provides favorable conditions for aligning the visual angle distances of the connected-mic anchor objects. The effectiveness of the sight distance adjustment is thereby improved, the problem of poor user experience caused by directly splicing the connected-mic live broadcast pictures is solved, and the audience's experience of the connected-mic live broadcast picture is improved.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a live view generating apparatus is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, and details of which have been already described are omitted. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware or a combination of software and hardware are also possible and contemplated.
Fig. 11 is a block diagram showing the structure of a live view generating apparatus according to an embodiment of the present invention, the apparatus including:
an obtaining unit 1102, configured to obtain a first live broadcast picture and a second live broadcast picture, where the first live broadcast picture is obtained by shooting a first anchor object by a first image capture device, and the second live broadcast picture is obtained by shooting a second anchor object by a second image capture device;
a first processing unit 1104, configured to adjust the first live view to a third live view and adjust the second live view to a fourth live view according to a display area occupied by the first anchor object in the first live view and a display area occupied by the second anchor object in the second live view, where the display area occupied by the first anchor object in the third live view and the display area occupied by the second anchor object in the fourth live view meet a same viewing distance condition;
the pushing unit 1106 is configured to push the third live broadcast picture and the fourth live broadcast picture to the viewer client for displaying.
Optionally, the processing unit 1104 further includes:
the first acquisition module is used for acquiring a first display proportion between a display area occupied by a first anchor object in a first live-broadcasting picture and the display area of the first live-broadcasting picture;
the second acquisition module is used for acquiring a second display proportion between a display area occupied by a second anchor object in a second live broadcast picture and the display area of the second live broadcast picture;
the first adjusting module is used for adjusting the first live broadcast picture according to the display proportion indicated by the line-of-sight condition under the condition that the first display proportion does not reach the line-of-sight condition to obtain a third live broadcast picture;
and the second adjusting module is used for adjusting the second live broadcast picture according to the display proportion indicated by the line-of-sight condition under the condition that the second display proportion does not reach the line-of-sight condition, so as to obtain a fourth live broadcast picture.
Optionally, the processing unit 1104 further includes:
the third acquisition module is used for acquiring first visual angle information between the first anchor object and the first image acquisition equipment according to a display area occupied by the first anchor object in the first live broadcast picture;
the fourth acquisition module is used for acquiring second visual angle information between the second anchor object and second image acquisition equipment according to a display area occupied by the second anchor object in the second live broadcast picture;
the first processing module is used for inputting the first visual angle information and the second visual angle information into a target picture generation network to generate and obtain a target visual distance live broadcast picture, wherein the target visual distance live broadcast picture comprises a third live broadcast picture after the first live broadcast picture is adjusted and a fourth live broadcast picture after the second live broadcast picture is adjusted, a first anchor object in the third live broadcast picture and a second anchor object in the fourth live broadcast picture are located on the same visual distance plane, and the target picture generation network is a deep learning network generated after training is carried out on the live broadcast pictures by utilizing a plurality of groups of samples and is used for adjusting the visual distance of the anchor object displayed in the input live broadcast picture.
Optionally, the inputting the first view angle information and the second view angle information, the first live view picture and the second live view picture into the target picture generation network to generate and obtain a target view distance live view picture includes:
the extraction module is used for extracting first object features of a first anchor object in a first live broadcast picture and second object features of a second anchor object in a second live broadcast picture in a target picture generation network;
the first determining module is used for determining the target visual distance according to the first visual angle information and the second visual angle information;
the second processing module is used for generating a third live broadcast picture based on the first object characteristics, the first live broadcast picture and the target sight distance, and generating a fourth live broadcast picture based on the second object characteristics, the second live broadcast picture and the target sight distance;
and the third processing module is used for fusing the third live broadcast picture and the fourth live broadcast picture to generate a target sight distance live broadcast picture.
Optionally, the generating a third live broadcast frame based on the first object feature, the first live broadcast frame and the target view distance, and generating a fourth live broadcast frame based on the second object feature, the second live broadcast frame and the target view distance includes:
the first adjusting submodule is used for adjusting the display size of a first image displayed by a first anchor object in a first live broadcast picture based on the target sight distance and the first object characteristics so as to generate and obtain a third live broadcast picture;
and the second adjusting submodule is used for adjusting the display size of a second image displayed by the second anchor object in the second live broadcast picture based on the target sight distance and the second object characteristic so as to generate and obtain a fourth live broadcast picture.
Optionally, the fusing the third live broadcast picture and the fourth live broadcast picture, and generating the target view distance live broadcast picture includes:
the third adjustment submodule is used for adjusting the resolution of the third live broadcast picture to be the target resolution and adjusting the resolution of the fourth live broadcast picture to be the target resolution;
and the first splicing submodule is used for splicing the third live broadcast picture under the target resolution and the fourth live broadcast picture under the target resolution so as to generate the target sight distance live broadcast picture.
Optionally, before the acquiring first perspective information between the first anchor object and the first image capturing device and second perspective information between the second anchor object and the second image capturing device, the method further includes:
the second acquisition unit is used for acquiring a plurality of groups of sample live broadcast pictures and view angle information corresponding to each sample live broadcast picture in each group of sample live broadcast pictures;
the second processing unit is used for inputting a plurality of groups of sample live broadcast pictures and corresponding visual angle information into an initial picture adjustment network for training, wherein the initial picture adjustment network comprises a picture generation network and a judgment network, the picture generation network is used for generating a reference visual distance live broadcast picture based on a group of sample live broadcast pictures, the reference visual distance live broadcast picture comprises visual distance sub-pictures corresponding to the live broadcast pictures of the samples in the group of sample live broadcast pictures, and the judgment network is used for judging whether the visual distance sub-pictures in the reference visual distance live broadcast picture reach a target visual distance condition;
and the determining unit is used for determining the picture generation network when the convergence condition is reached as a target picture generation network under the condition that the picture adjustment network reaches the convergence condition, wherein in the reference view distance live pictures output by the picture generation network when the convergence condition is reached, the main broadcast objects in the view distance sub-pictures respectively corresponding to the sample live pictures are positioned on the same view distance plane.
Optionally, the acquiring first perspective information between the first anchor object and the first image capturing device and second perspective information between the second anchor object and the second image capturing device includes:
the fifth acquisition module is used for acquiring the first internal parameter of the first image acquisition device; displaying at least two reference images in the first terminal equipment for displaying the first live broadcast picture, and determining a first sight distance parameter between the first anchor object and the first image acquisition equipment according to the observation distance between the first anchor object and the reference images;
the sixth acquisition module is used for acquiring a second internal parameter of the second image acquisition device; and displaying at least two reference images in second terminal equipment for displaying a second live broadcast picture, and determining a second sight distance parameter between a second anchor object and second image acquisition equipment according to the observation distance between the second anchor object and the reference images.
An embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, where the computer program is configured to, when executed, perform the steps in any of the above method embodiments.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
S1, acquiring first visual angle information of a first live broadcast picture and second visual angle information of a second live broadcast picture, wherein the first live broadcast picture is obtained by shooting a first anchor object by first image acquisition equipment, and the second live broadcast picture is obtained by shooting a second anchor object by second image acquisition equipment;
S2, adjusting the first live broadcast picture into a third live broadcast picture and adjusting the second live broadcast picture into a fourth live broadcast picture according to the display area occupied by the first anchor object in the first live broadcast picture and the display area occupied by the second anchor object in the second live broadcast picture, wherein the display area occupied by the first anchor object in the third live broadcast picture and the display area occupied by the second anchor object in the fourth live broadcast picture reach the same sight distance condition;
and S3, pushing the third live broadcast picture and the fourth live broadcast picture to the audience client for displaying.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
For specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and exemplary implementations, and details of this embodiment are not repeated herein.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, acquiring first visual angle information of a first live broadcast picture and second visual angle information of a second live broadcast picture, wherein the first live broadcast picture is obtained by shooting a first anchor object by first image acquisition equipment, and the second live broadcast picture is obtained by shooting a second anchor object by second image acquisition equipment;
S2, adjusting the first live broadcast picture into a third live broadcast picture and adjusting the second live broadcast picture into a fourth live broadcast picture according to the display area occupied by the first anchor object in the first live broadcast picture and the display area occupied by the second anchor object in the second live broadcast picture, wherein the display area occupied by the first anchor object in the third live broadcast picture and the display area occupied by the second anchor object in the fourth live broadcast picture reach the same sight distance condition;
and S3, pushing the third live broadcast picture and the fourth live broadcast picture to the audience client for displaying.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary embodiments, and details of this embodiment are not repeated herein.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented in a general purpose computing device, they may be centralized in a single computing device or distributed across a network of multiple computing devices, and they may be implemented in program code that is executable by a computing device, such that they may be stored in a memory device and executed by a computing device, and in some cases, the steps shown or described may be executed in an order different from that shown or described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps therein may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.
Claims (12)
1. A live-broadcast picture generation method is characterized by comprising the following steps:
acquiring a first live broadcast picture and a second live broadcast picture, wherein the first live broadcast picture is obtained by shooting a first anchor object by first image acquisition equipment, and the second live broadcast picture is obtained by shooting a second anchor object by second image acquisition equipment;
according to the display area occupied by the first anchor object in the first live broadcast picture and the display area occupied by the second anchor object in the second live broadcast picture, adjusting the first live broadcast picture into a third live broadcast picture and adjusting the second live broadcast picture into a fourth live broadcast picture, wherein the display area occupied by the first anchor object in the third live broadcast picture and the display area occupied by the second anchor object in the fourth live broadcast picture reach the same viewing distance condition;
and pushing the third live broadcast picture and the fourth live broadcast picture to a viewer client for displaying.
2. The method of claim 1, wherein adjusting the first live view to a third live view and the second live view to a fourth live view according to a display area occupied by the first anchor object in the first live view and a display area occupied by the second anchor object in the second live view comprises:
acquiring a first display proportion between a display area occupied by the first anchor object in the first live broadcast picture and the display area of the first live broadcast picture;
acquiring a second display proportion between a display area occupied by the second anchor object in the second live broadcast picture and the display area of the second live broadcast picture;
under the condition that the first display proportion does not reach the sight distance condition, adjusting the first live broadcast picture according to the display proportion indicated by the sight distance condition to obtain a third live broadcast picture;
and under the condition that the second display proportion does not reach the sight distance condition, adjusting the second live broadcast picture according to the display proportion indicated by the sight distance condition to obtain a fourth live broadcast picture.
3. The method of claim 1, wherein adjusting the first live view to a third live view and the second live view to a fourth live view according to a display area occupied by the first anchor object in the first live view and a display area occupied by the second anchor object in the second live view comprises:
acquiring first visual angle information between the first anchor object and the first image acquisition equipment according to a display area occupied by the first anchor object in the first live broadcast picture; acquiring second visual angle information between a second anchor object and second image acquisition equipment according to a display area occupied by the second anchor object in the second live broadcast picture; and inputting the first visual angle information and the second visual angle information, a first live broadcast picture and a second live broadcast picture into a target picture generation network to generate and obtain a target visual distance live broadcast picture, wherein the target visual distance live broadcast picture comprises a third live broadcast picture after the first live broadcast picture is adjusted and a fourth live broadcast picture after the second live broadcast picture is adjusted, the first anchor object in the third live broadcast picture and the second anchor object in the fourth live broadcast picture are positioned on the same visual distance plane, and the target picture generation network is a deep learning network generated after training is carried out by utilizing a plurality of groups of sample live broadcast pictures and is used for adjusting the visual distance of the anchor object displayed in the input live broadcast picture.
4. The method according to claim 3, wherein the inputting the first perspective information and the second perspective information, the first live view and the second live view into the target view generation network to generate a target view-distance live view comprises:
in the target picture generation network, extracting a first object feature of the first anchor object in the first live broadcast picture and a second object feature of the second anchor object in the second live broadcast picture;
determining a target visual distance according to the first visual angle information and the second visual angle information; generating the third live broadcast picture based on the first object feature, the first live broadcast picture and the target sight distance, and generating the fourth live broadcast picture based on the second object feature, the second live broadcast picture and the target sight distance;
and fusing the third live broadcast picture and the fourth live broadcast picture to generate the target sight distance live broadcast picture.
5. The method of claim 4, wherein generating the third live view based on the first object feature, the first live view, and the target line of sight, and generating the fourth live view based on the second object feature, the second live view, and the target line of sight comprises:
adjusting the display size of a first image displayed by the first anchor object in the first live broadcast picture based on the target sight distance and the first object characteristic so as to generate and obtain a third live broadcast picture;
and adjusting the display size of a second image displayed by the second anchor object in the second live broadcast picture based on the target sight distance and the second object characteristic so as to generate and obtain a fourth live broadcast picture.
6. The method of claim 4, wherein fusing the third live view and the fourth live view to generate the target line-of-sight live view comprises:
adjusting the resolution of the third live broadcast picture to be a target resolution, and adjusting the resolution of the fourth live broadcast picture to be the target resolution;
and splicing the third live broadcast picture under the target resolution and the fourth live broadcast picture under the target resolution to generate the target line-of-sight live broadcast picture.
7. The method of claim 3, further comprising, prior to said acquiring first perspective information between said first anchor object and said first image capture device and said acquiring second perspective information between said second anchor object and said second image capture device:
acquiring the multiple groups of sample live broadcast pictures and view angle information corresponding to each sample live broadcast picture in each group of sample live broadcast pictures;
inputting the multiple groups of sample live broadcast pictures and corresponding visual angle information into an initial picture adjustment network for training, wherein the initial picture adjustment network comprises a picture generation network and a judgment network, the picture generation network is used for generating a reference visual distance live broadcast picture based on a group of sample live broadcast pictures, the reference visual distance live broadcast picture comprises visual distance sub-pictures corresponding to each sample live broadcast picture in the group of sample live broadcast pictures, and the judgment network is used for judging whether the visual distance sub-pictures in the reference visual distance live broadcast pictures reach a target visual distance condition; and under the condition that the picture adjusting network reaches a convergence condition, determining the picture generation network when the convergence condition is reached as the target picture generation network, wherein in the reference view distance live pictures output by the picture generation network when the convergence condition is reached, the main broadcasting objects in the view distance sub-pictures corresponding to the sample live pictures are located on the same view distance plane.
8. The method of claim 4, wherein said obtaining first perspective information between the first anchor object and the first image acquisition device and said obtaining second perspective information between the second anchor object and the second image acquisition device comprises:
acquiring a first internal parameter of the first image acquisition device; displaying at least two reference images in a first terminal device for displaying the first live broadcast picture, and determining a first sight distance parameter between the first anchor object and the first image acquisition device according to an observation distance between the first anchor object and the reference images;
acquiring a second internal parameter of the second image acquisition device; and displaying the at least two reference images in a second terminal device for displaying the second live broadcast picture, and determining a second sight distance parameter between the second anchor object and the second image acquisition device according to the observation distance between the second anchor object and the reference images.
9. A live view generation apparatus, comprising:
the system comprises a first acquisition unit, a second acquisition unit and a display unit, wherein the first acquisition unit is used for acquiring a first live broadcast picture and a second live broadcast picture, the first live broadcast picture is obtained by shooting a first anchor object by first image acquisition equipment, and the second live broadcast picture is obtained by shooting a second anchor object by second image acquisition equipment;
a first processing unit, configured to adjust the first live view picture to a third live view picture and adjust the second live view picture to a fourth live view picture according to a display area occupied by the first anchor object in the first live view picture and a display area occupied by the second anchor object in the second live view picture, where a display area occupied by the first anchor object in the third live view picture and a display area occupied by the second anchor object in the fourth live view picture meet a same viewing distance condition;
and the pushing unit is used for pushing the third live broadcast picture and the fourth live broadcast picture to a viewer client for displaying.
10. The apparatus of claim 9, wherein the first processing unit further comprises:
the first acquisition module is used for acquiring a first display proportion between a display area occupied by the first anchor object in the first live-broadcasting picture and the display area of the first live-broadcasting picture;
a second obtaining module, configured to obtain a second display ratio between a display area occupied by the second anchor object in the second live broadcast picture and the display area of the second live broadcast picture;
a first adjusting module, configured to adjust the first live view according to the display ratio indicated by the viewing distance condition when the first display ratio does not reach the viewing distance condition, so as to obtain a third live view;
and the second adjusting module is used for adjusting the second live broadcast picture according to the display proportion indicated by the line-of-sight condition to obtain a fourth live broadcast picture under the condition that the second display proportion does not reach the line-of-sight condition.
11. A computer-readable storage medium, comprising a stored program, wherein the program when executed performs the method of any one of claims 1 to 8.
12. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 8 by means of the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210096666.6A CN114449303B (en) | 2022-01-26 | 2022-01-26 | Live broadcast picture generation method and device, storage medium and electronic device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210096666.6A CN114449303B (en) | 2022-01-26 | 2022-01-26 | Live broadcast picture generation method and device, storage medium and electronic device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114449303A true CN114449303A (en) | 2022-05-06 |
CN114449303B CN114449303B (en) | 2024-08-30 |
Family
ID=81370303
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210096666.6A Active CN114449303B (en) | 2022-01-26 | 2022-01-26 | Live broadcast picture generation method and device, storage medium and electronic device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114449303B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106792098A (en) * | 2016-12-28 | 2017-05-31 | 广州华多网络科技有限公司 | The company wheat of live platform live method and its system |
CN107147927A (en) * | 2017-04-14 | 2017-09-08 | 北京小米移动软件有限公司 | Live broadcasting method and device based on live even wheat |
CN110798697A (en) * | 2019-11-22 | 2020-02-14 | 广州华多网络科技有限公司 | Video display method, device and system and electronic equipment |
CN112672174A (en) * | 2020-12-11 | 2021-04-16 | 咪咕文化科技有限公司 | Split-screen live broadcast method, acquisition equipment, playing equipment and storage medium |
CN112752116A (en) * | 2020-12-30 | 2021-05-04 | 广州繁星互娱信息科技有限公司 | Display method, device, terminal and storage medium of live video picture |
CN113163219A (en) * | 2021-02-20 | 2021-07-23 | 李信海 | Live broadcast picture adjusting platform and method based on main object distribution |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115061617A (en) * | 2022-06-14 | 2022-09-16 | 深圳市万声文化科技有限公司 | Processing method and device of live broadcast picture, computer equipment and storage medium |
CN115334353A (en) * | 2022-08-11 | 2022-11-11 | 北京达佳互联信息技术有限公司 | Information display method and device, electronic equipment and storage medium |
CN115334353B (en) * | 2022-08-11 | 2024-03-12 | 北京达佳互联信息技术有限公司 | Information display method, device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114449303B (en) | 2024-08-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108737882B (en) | Image display method, image display device, storage medium and electronic device | |
CN106210861B (en) | Method and system for displaying bullet screen | |
CN114449303B (en) | Live broadcast picture generation method and device, storage medium and electronic device | |
KR20110002025A (en) | Method and apparatus for modifying a digital image | |
CN110427107A (en) | Virtually with real interactive teaching method and system, server, storage medium | |
US8675042B2 (en) | Image processing apparatus, multi-eye digital camera, and program | |
CN102783161A (en) | Disparity distribution estimation for 3D TV | |
CN105635675A (en) | Panorama playing method and device | |
US11533431B2 (en) | Method and device for generating a panoramic image | |
CN109661816A (en) | The method and display device of panoramic picture are generated and shown based on rendering engine | |
CN116860112B (en) | Combined scene experience generation method, system and medium based on XR technology | |
KR20150105069A (en) | Cube effect method of 2d image for mixed reality type virtual performance system | |
CN112468832A (en) | Billion-level pixel panoramic video live broadcast method, device, medium and equipment | |
CN106231411B (en) | Main broadcaster's class interaction platform client scene switching, loading method and device, client | |
CN110730340B (en) | Virtual audience display method, system and storage medium based on lens transformation | |
US7936936B2 (en) | Method of visualizing a large still picture on a small-size display | |
CN108898680B (en) | A kind of method and device automatically correcting interception picture in virtual three-dimensional space | |
CN106231350B (en) | Main broadcaster's class interaction platform method for changing scenes and its device | |
CN106096665A (en) | Dual pathways cloud data management platform | |
CN110198457B (en) | Video playing method and device, system, storage medium, terminal and server thereof | |
CN107491934B (en) | 3D interview system based on virtual reality | |
JP2020101897A (en) | Information processing apparatus, information processing method and program | |
CN110784728B (en) | Image data processing method and device and computer readable storage medium | |
CN100498840C (en) | Method of and scaling unit for scaling a three-dimensional model | |
CN112532964B (en) | Image processing method, device, apparatus and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |