CN115278193A - Panoramic video distribution method, device, equipment and computer storage medium - Google Patents


Info

Publication number
CN115278193A
Authority
CN
China
Prior art keywords
panoramic
image frame
video
information
terminal
Prior art date
Legal status (assumed, not a legal conclusion)
Pending
Application number
CN202110481689.4A
Other languages
Chinese (zh)
Inventor
肖哲
孔晓琨
Current Assignee (the listed assignees may be inaccurate)
China Mobile Communications Group Co Ltd
China Mobile Group Hebei Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Hebei Co Ltd
Priority date (assumed, not a legal conclusion)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Group Hebei Co Ltd
Priority to CN202110481689.4A
Publication of CN115278193A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/194: Transmission of image signals
    • H04N13/20: Image signal generators
    • H04N13/204: Image signal generators using stereoscopic image cameras
    • H04N13/243: Image signal generators using stereoscopic image cameras using three or more 2D image sensors

Abstract

The application discloses a panoramic video distribution method, apparatus, device, and computer storage medium. The method includes: when a panoramic video transmission connection is established with a terminal, receiving the terminal's playing parameters, which include position information, time information, and view information; determining an initially selected panoramic image frame from the panoramic video according to the position information and the time information; cropping the initially selected panoramic image frame according to the view information to obtain a target image frame; and distributing the target image frame to the terminal. By matching the terminal's playing parameters against the panoramic video and distributing only the cropped target image frame, the method can respond to user operations in time, and because the target image frame is obtained by cropping rather than by pre-partitioned stitching, the smoothness of panoramic video distribution can be guaranteed.

Description

Panoramic video distribution method, device, equipment and computer storage medium
Technical Field
The present application belongs to the field of video distribution technologies, and in particular, to a panoramic video distribution method, apparatus, device, and computer storage medium.
Background
A panoramic video is a video shot omnidirectionally (360 degrees) with a stereoscopic camera rig, and serves as a content carrier for Virtual Reality (VR) technology. Existing panoramic video is generally distributed via field-of-view (FOV) transmission: the panoramic video is pre-divided into multiple FOV partitions, which are then stitched together with the aid of a panoramic background stream. This approach adapts poorly to user operations in real time, so panoramic video distribution is not smooth.
Disclosure of Invention
Embodiments of the present application provide a panoramic video distribution method, apparatus, device, and computer storage medium, so as to solve the technical problem that panoramic video distribution is not smooth enough.
In a first aspect, an embodiment of the present application provides a panoramic video distribution method, where the method includes:
receiving playing parameters of a terminal under the condition of establishing panoramic video transmission connection with the terminal, wherein the playing parameters comprise position information, time information and view information;
according to the position information and the time information, determining an initial panoramic image frame from the panoramic video;
cropping the initially selected panoramic image frame according to the view information to obtain a target image frame;
and distributing the target image frame to the terminal.
In one embodiment, the panoramic video comprises N panoramic sub-videos, each of the panoramic sub-videos having associated therewith a shot point location, each of the panoramic sub-videos comprising at least one panoramic image frame, where N is an integer greater than or equal to 1,
the determining an initially selected panoramic image frame from the panoramic video according to the position information and the time information comprises:
determining a first panoramic sub-video matched with the position information from the panoramic video according to the corresponding relation between the preset position information and the shooting point location;
and according to the corresponding relation between preset time information and panoramic image frames, determining an initially selected panoramic image frame matched with the time information from at least one panoramic image frame included in the first panoramic sub-video.
In one embodiment, before receiving the play parameter of the terminal, the method further includes:
acquiring N panoramic sub-videos in a panoramic video and point location information associated with each panoramic sub-video, wherein the panoramic sub-videos are videos shot by each shooting point location in the panoramic video;
acquiring a frame sequence of each panoramic sub video, wherein the frame sequence comprises at least one panoramic image frame and a playing progress corresponding to each panoramic image frame;
and storing the panoramic sub video and the point location information in an associated manner, and storing the panoramic image frame and the playing progress.
In one embodiment, after the obtaining the sequence of frames for each panoramic sub-video, the method further comprises:
acquiring pixel coordinates of each pixel point of each panoramic image frame in a preset image coordinate system;
and storing the pixel point and the pixel coordinate in an associated manner.
In one embodiment, the cropping the preliminary panoramic image frame according to the view information to obtain a target image frame includes:
determining a central point from the initially selected panoramic image frame according to the field coordinate, wherein the central point is a pixel point of which the pixel coordinate is matched with the field coordinate;
and cutting the primary panoramic image frame based on the central point and the initial field angle to obtain a target image frame.
In one embodiment, the playing parameters further include a scaling ratio, and the cropping the preliminary panoramic image frame based on the central point and the initial field angle to obtain a target image frame includes:
obtaining a target field angle according to the scaling and the initial field angle;
and cutting the primary panoramic image frame based on the central point and the target field angle to obtain a target image frame.
In one embodiment, the initial field angle is obtained based on the human-eye field-of-view range at the terminal, and the scaling ratio is obtained based on operation information collected by a sensor of the terminal or based on a first input received by the terminal.
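One way to realize the target-field-angle computation and the subsequent crop can be sketched as follows; the linear relation between the scaling ratio and the field angle, and all names, are illustrative assumptions, since the patent only states that the target field angle is obtained from the scaling and the initial field angle:

```python
def target_field_angle(initial_fov_deg: float, zoom: float) -> float:
    """Derive the target field angle from the initial field angle and a
    scaling (zoom) ratio; zooming in (zoom > 1) narrows the field angle.
    This inverse-linear relation is an assumption, not the patent's formula."""
    return initial_fov_deg / zoom

def crop_bounds(center_x: int, center_y: int, fov_deg: float,
                px_per_deg: float, width: int, height: int):
    """Compute the pixel bounds of a crop window centered on the matched
    center point (center_x, center_y) spanning fov_deg degrees, clamped
    to the frame size. px_per_deg is the frame's angular resolution."""
    half = int(fov_deg * px_per_deg / 2)
    x0, x1 = max(0, center_x - half), min(width, center_x + half)
    y0, y1 = max(0, center_y - half), min(height, center_y + half)
    return x0, y0, x1, y1
```

For example, with an initial field angle of 110 degrees and a scaling ratio of 2, the target field angle would be 55 degrees, and the crop window shrinks accordingly.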
In a second aspect, an embodiment of the present application provides a panoramic video distribution apparatus, including:
the terminal comprises a receiving module, a display module and a display module, wherein the receiving module is used for receiving playing parameters of the terminal under the condition of establishing panoramic video transmission connection with the terminal, and the playing parameters comprise position information, time information and view information;
the determining module is used for determining an initial panoramic image frame from the panoramic video according to the position information and the time information;
the cutting module is used for cutting the primary selected panoramic image frame according to the visual field information to obtain a target image frame;
and the distribution module is used for distributing the target image frame to the terminal.
In a third aspect, an embodiment of the present application provides an electronic device, where the device includes:
a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the method described above.
In a fourth aspect, the present application provides a computer storage medium having computer program instructions stored thereon, which when executed by a processor implement the method described above.
The panoramic video distribution method, apparatus, device, and computer storage medium of the embodiments of the application receive a terminal's playing parameters, such as position information, time information, and view information, once a panoramic video transmission connection is established with the terminal; determine an initially selected panoramic image frame from the panoramic video according to the position information and the time information; and then crop the initially selected panoramic image frame according to the view information to obtain the target image frame finally distributed to the terminal. Because the playing parameters are matched against the panoramic video and only the cropped target image frame is distributed, user operations can be responded to in time, the severe stitching artifacts caused by partitioning the panoramic video and re-splicing it are effectively avoided, and the smoothness of panoramic video distribution can be guaranteed.
Drawings
In order to describe the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly introduced below; those skilled in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a panoramic video distribution method according to an embodiment of the present application;
fig. 2 is a schematic diagram of point location information of a panoramic sub video in an embodiment of the present application;
fig. 3 is a schematic coordinate diagram of each pixel point of a panoramic image frame in an embodiment of the present application;
FIG. 4 is a schematic diagram of a target image frame in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a panoramic video distribution apparatus according to another embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to still another embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application will be described in detail below, and in order to make objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative only and are not intended to be limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by illustrating examples thereof.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of another like element in a process, method, article, or apparatus that comprises the element.
In order to solve the prior art problems, embodiments of the present application provide a panoramic video distribution method, apparatus, device, and computer storage medium. First, a panoramic video distribution method provided by an embodiment of the present application is described below.
Fig. 1 shows a flowchart of a panoramic video distribution method according to an embodiment of the present application. The panoramic video distribution method can be applied to any scene related to panoramic video distribution, for example, can be applied to a scene in which a terminal such as a smart phone, a computer or a smart television requests to play a panoramic video, and can also be applied to a Virtual Reality (VR) scene, which is not specifically limited herein; for simplicity, the VR scene is mainly used as an example for the following description.
As shown in fig. 1, the panoramic video distribution method includes:
step S101, receiving playing parameters of a terminal under the condition of establishing panoramic video transmission connection with the terminal, wherein the playing parameters comprise position information, time information and view information;
step S102, determining an initial panoramic image frame from the panoramic video according to the position information and the time information;
step S103, cutting the initially selected panoramic image frame according to the visual field information to obtain a target image frame;
and step S104, distributing the target image frame to the terminal.
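The four steps above can be sketched as a minimal server-side routine; the data shapes, the dictionary index keyed by (position, time), and the rectangular view window are all illustrative assumptions, not the patent's data structures:

```python
def crop(frame, view):
    """Stand-in for step S103: slice a 2-D pixel grid to the view's
    (row_start, row_end, col_start, col_end) window."""
    r0, r1, c0, c1 = view
    return [row[c0:c1] for row in frame[r0:r1]]

def distribute(play_params, video_index):
    """Steps S101-S104 in miniature: the playing parameters arrive (S101),
    (position, time) selects the initially selected panoramic frame (S102),
    the view window crops it (S103), and the cropped target image frame is
    returned for distribution to the terminal (S104)."""
    frame = video_index[(play_params["position"], play_params["time"])]
    return crop(frame, play_params["view"])
```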
In this embodiment, establishing the panoramic video transmission connection with the terminal in step S101 may mean that a server interacts with the terminal: the terminal requests to play the panoramic video, and the server distributes the panoramic video to the terminal in response to the request.
The server may be any device that stores data related to the panoramic video, for example, a Content Delivery Network (CDN) device, and for simplicity of description, the following description will use the server as the CDN device.
The terminal may be a device capable of processing panoramic video data, such as a smartphone, a computer, a smart television, or a VR head-mounted display (HMD) device. Specifically, the terminal may include a sensor, a remote controller, buttons, a mouse, or the like, which may be used to acquire playing parameters such as position information and view information; the terminal may also be provided with playing software (an application program) for playing the panoramic video and acquiring the playing time.
The playback parameters may include position information, time information, and field of view information. The position information may be a spatial position where the user is located, which is acquired by the control device, for example, the user wears the VR head display device and enters a first space of the VR scene, where the position information corresponds to the first space, that is, the playing parameter indicates that the panoramic video data associated with the first space needs to be played.
The time information may be virtual control of the time dimension of the panoramic video, and may be, for example, a playing time axis of the panoramic video, and the playing time axis of the panoramic video may also be changed according to operations such as fast forward and fast backward. At this time, a specific panoramic image frame can be corresponded according to the playing time axis of the panoramic video, namely, the playing parameter indicates that the panoramic image frame matched with the current playing time axis needs to be played.
The visual field information may be a visual field range of the user, i.e., an area that the user's visual field can see. For example, when the user looks forward in the first space of the VR space, the user's visible range may be a partial region directly in front of the first space with reference to the user's orientation.
In step S102, an initially selected panoramic image frame may be determined from the panoramic video according to the position information and the time information. As described above, a panoramic video may include one or more different spatially associated panoramic sub-videos.
If there is only one panoramic sub-video, the panoramic video has only one space, the position information necessarily corresponds to that space, and that sub-video is the one matched with the position information. If there are multiple panoramic sub-videos, the space the user belongs to can be determined from the position information, and the panoramic sub-video associated with that space is the one matched with the position information.
The first panoramic sub video matched with the position information can be determined according to the position information, and then the initially selected panoramic image frame matched with the time information in the first panoramic sub video is determined according to the time information. Or determining all panoramic image frames matched with the time information in the panoramic video according to the time information, and then matching the panoramic image frames with all panoramic image frames according to the position information to obtain the panoramic image frame corresponding to the space to which the position information belongs as the initially selected panoramic image frame.
In step S103, the initially selected panoramic image frame may be cropped according to the view information to obtain a target image frame. After the initially selected panoramic image frame is determined, it is not directly distributed to the terminal; instead, it can be cropped according to the view information, i.e., the image-frame region within the user's visible range is cropped out of the panoramic image frame and used as the target image frame.
In step S104, the target image frame is distributed to the terminal. The CDN device may directly distribute the target image frame to the terminal, so that the target image frame may be displayed on a display screen of the terminal.
The panoramic video distribution method provided by the embodiment of the application receives a terminal's playing parameters, such as position information, time information, and view information, once a panoramic video transmission connection is established with the terminal; first determines an initially selected panoramic image frame from the panoramic video according to the position information and the time information; and then crops the initially selected panoramic image frame according to the view information to obtain the target image frame finally distributed to the terminal. Because the playing parameters are matched against the panoramic video and only the cropped target image frame is distributed, user operations can be responded to in time, the severe stitching artifacts caused by partitioning the panoramic video and re-splicing it are effectively avoided, and the smoothness of panoramic video distribution can be guaranteed.
Alternatively, in one embodiment, the panoramic video may include N panoramic sub-videos, each of which may have a shot point associated therewith, each panoramic sub-video including at least one panoramic image frame, where N is an integer greater than or equal to 1,
step S102, determining an initially selected panoramic image frame from the panoramic video according to the position information and the time information, which may include:
determining a first panoramic sub-video matched with the position information from the panoramic video according to the corresponding relation between the preset position information and the shooting point location;
and determining a primary panoramic image frame matched with the time information from at least one panoramic image frame included in the first panoramic sub-video according to the corresponding relation between the preset time information and the panoramic image frame.
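A minimal sketch of this two-level lookup, assuming the preset correspondences are stored as plain dictionaries (all names hypothetical):

```python
def select_initial_frame(position, time, point_for_position, subvideo_for_point):
    """Step 1: the preset position-to-shooting-point correspondence yields a
    shooting point, whose associated sub-video is the first panoramic sub-video.
    Step 2: the preset time-to-frame correspondence (here each sub-video is a
    dict mapping time to frame) yields the initially selected panoramic frame."""
    point = point_for_position[position]
    first_sub_video = subvideo_for_point[point]
    return first_sub_video[time]
```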
In this embodiment, the panoramic video may include N panoramic sub-videos, where N is an integer greater than or equal to 1, and each panoramic sub-video may be associated with a shooting point location. For example, the panoramic video may be composed of N panoramic sub-videos in different spaces, and specifically, each space may be shot by one shot point.
A shooting point may include a plurality of photosensitive sensors arranged circumferentially; for example, one shooting point may comprise three 120-degree wide-angle cameras whose combined coverage yields the panoramic sub-video of that space. A shooting point may also be a single rotating camera, or any other device capable of shooting panoramic video.
Each panoramic sub-video includes at least one panoramic image frame; for example, a panoramic sub-video may include "panoramic image frame 1", "panoramic image frame 2", "panoramic image frame 3", …, "panoramic image frame n".
And determining a first panoramic sub-video matched with the position information from the panoramic video according to the corresponding relation between the preset position information and the shooting point position. The preset corresponding relationship between the position information and the shooting point location may be that one shooting point location corresponds to one position information, and the position information may be a specific position point or may be in a position range.
According to the corresponding relation between the position information and the shooting point location, the shooting point location matched with the position information can be determined, and the panoramic sub-video associated with the shooting point location is the first panoramic sub-video matched with the position information.
And determining a primary panoramic image frame matched with the time information from at least one panoramic image frame included in the first panoramic sub-video according to the corresponding relation between the preset time information and the panoramic image frame. The preset correspondence between the time information and the panoramic image frame may be that one panoramic image frame corresponds to one time information.
For example, "panoramic image frame 1" corresponds to time T1, "panoramic image frame 2" corresponds to time T2, "panoramic image frame 3" corresponds to time T3, …, and "panoramic image frame n" corresponds to time Tn.
According to the corresponding relation between the time information and the panoramic image frames, the initially selected panoramic image frame matched with the time information can be determined from "panoramic image frame 1", "panoramic image frame 2", "panoramic image frame 3", …, "panoramic image frame n" included in the first panoramic sub-video determined in the previous step.
For example, if the time information is T3, the corresponding "panoramic image frame 3" may be determined as the initially selected panoramic image frame matching the time information.
According to the embodiment, the matched first panoramic sub-video is determined according to the position information, the initially selected panoramic image frame is determined from the first panoramic sub-video according to the time information, the target image frame is obtained by cutting according to the determined initially selected panoramic image frame, and then the target image frame is distributed to the terminal. The target image frame can be obtained by responding to the user operation in time according to the playing parameters, so that the smoothness of panoramic video distribution is ensured.
Optionally, in an embodiment, before receiving the playing parameter of the terminal, the panoramic video distribution method may further include:
acquiring N panoramic sub-videos in the panoramic video and point location information associated with each panoramic sub-video, wherein the panoramic sub-videos are videos shot by each shooting point location in the panoramic video;
acquiring a frame sequence of each panoramic sub video, wherein the frame sequence comprises at least one panoramic image frame and a playing progress corresponding to each panoramic image frame;
and storing the panoramic sub video and the point location information in an associated manner, and storing the panoramic image frame and the playing progress.
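The acquisition-and-storage steps above can be sketched as follows; the dataclass layout and field names are illustrative, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class PanoramicSubVideo:
    point: tuple                                  # (x, y, z) shooting-point coordinates
    frames: dict = field(default_factory=dict)    # playing progress t -> frame payload

def store_sub_videos(captures):
    """captures: iterable of (point, frame_sequence), where frame_sequence is a
    list of (t, frame). Stores each sub-video in association with its point
    location information, and each frame with its playing progress."""
    store = {}
    for point, frame_sequence in captures:
        sub_video = PanoramicSubVideo(point=point)
        for t, frame in frame_sequence:
            sub_video.frames[t] = frame
        store[point] = sub_video
    return store
```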
In this embodiment, N panoramic sub-videos in a panoramic video and point location information associated with each panoramic sub-video may be obtained. For example, a spatial coordinate system may be established for the panoramic video, as shown in fig. 2, the origin of coordinates of the spatial coordinate system may be a certain shooting point, so as to obtain coordinates of other shooting points in the spatial coordinate system. The coordinates of the shooting point in the space coordinate system can be used as the point information associated with the panoramic sub-video.
Or, a panoramic sub-video shot by each shooting point location can be obtained, and then the coordinates of the center point of the panoramic sub-video in the space coordinate system are used as the point location information associated with the panoramic sub-video.
For simplicity of description, the coordinates of the shooting point in the spatial coordinate system are taken as the point information associated with the panoramic sub-video.
For example, if the coordinate of "shooting point 1" in the spatial coordinate system is (x1, y1, z1), the point information associated with the panoramic sub-video shot at "shooting point 1" is (x1, y1, z1); if the coordinate of "shooting point 2" is (x2, y2, z2), its associated point information is (x2, y2, z2); …; and if the coordinate of "shooting point n" is (xn, yn, zn), its associated point information is (xn, yn, zn).
And a frame sequence of each panoramic sub video can be obtained, wherein the frame sequence comprises at least one panoramic image frame and a corresponding playing progress of each panoramic image frame. For example, a frame sequence of each panoramic sub-video may be obtained according to a video playing time or a video editing front-to-back order, and the frame sequence may include at least one panoramic image frame.
Wherein, the playing progress corresponding to each panoramic image frame can be represented by t. For example, t1 may represent the panoramic image frame at time t1, or may directly represent the t1 th panoramic image frame.
And storing the panoramic sub video and the point location information in an associated manner, and storing the panoramic image frame and the playing progress. Specifically, the panoramic sub video and the point location information corresponding to the panoramic video can be stored, and on the basis, each panoramic image frame in the panoramic sub video and the playing progress corresponding to each panoramic image frame are stored.
For example, the point location information and playing progress of the panoramic image frame shot by "shooting point 1" at time t1 are stored as (x1, y1, z1, t1); those of the frame shot by "shooting point 1" at time t2 are stored as (x1, y1, z1, t2); those of the frame shot by "shooting point 2" at time t1 are stored as (x2, y2, z2, t1); …; and those of the frame shot by "shooting point n" at time tn are stored as (xn, yn, zn, tn).
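Storing point location information together with playing progress amounts to indexing every frame by a combined (x, y, z, t) key, which can be sketched as (the nested-store shape is an illustrative assumption):

```python
def index_key(point, t):
    """Build the combined (x, y, z, t) index key: shooting-point coordinates
    plus playing progress jointly identify one panoramic image frame."""
    x, y, z = point
    return (x, y, z, t)

def build_index(store):
    """Flatten a {point: {t: frame}} store into a frame index keyed by (x, y, z, t),
    so an initially selected frame can be looked up in one step."""
    return {index_key(point, t): frame
            for point, frames in store.items()
            for t, frame in frames.items()}
```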
According to the embodiment of the application, after the point location information associated with the panoramic sub-video and the playing progress corresponding to the panoramic image frame are obtained, the panoramic sub-video and the point location information, the panoramic image frame and the playing progress are stored in an associated mode, indexing is carried out on the panoramic video, the corresponding initially-selected panoramic image frame can be conveniently and quickly searched according to the index information, and therefore the smoothness of distribution of the panoramic video is guaranteed. The index information may be used to indicate the point location information and the playing progress.
In an example, the position information of the terminal may be an initial position coordinate (X, Y, Z) obtained based on a user operation or on a preset position relationship; the initial position coordinate (X, Y, Z) matches one piece of initial point location information, for example (x2, y2, z2).
The position information may be changed according to a user operation collected by a sensor of the terminal, for example, a user action collected by a gyroscope in the VR device, and a user moves a position, so that a position coordinate is changed. The change of the position coordinates may be made in accordance with a virtual space motion such as "forward, backward, upward, and downward" input by a remote controller, a mouse, or the like. The position coordinates are changed, and the matching point location information may also be changed.
The time information of the terminal may be a time T obtained from a preset time axis, where T matches a playing progress, for example t1. The time information may also change according to instructions such as "fast forward" and "rewind" input via a remote controller, mouse, or the like, in which case the matched playing progress changes as well.
For example, if the point location information and playing progress matched with the terminal's initial position information and time information (X, Y, Z, T) are (x2, y2, z2, t1), the panoramic image frame corresponding to (x2, y2, z2, t1) may be used as the initially selected panoramic image frame. As the position moves and time advances, the match may become (x1, y1, z1, t2) at the next moment, at which point the panoramic image frame corresponding to (x1, y1, z1, t2) is used as the initially selected panoramic image frame.
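The matching just described can be sketched as a nearest-neighbour lookup over the stored index; the tie-breaking rule (nearest shooting point first, then nearest playing progress) and all names are assumptions for illustration:

```python
import math

# Hypothetical index as stored above: (x, y, z, t) -> frame id.
index = {
    (0.0, 0.0, 0.0, 1.0): "p1_t1",
    (0.0, 0.0, 0.0, 2.0): "p1_t2",
    (5.0, 0.0, 0.0, 1.0): "p2_t1",
}

def match_frame(idx, X, Y, Z, T):
    """Return the frame whose shooting point is nearest (X, Y, Z),
    preferring the playing progress nearest T."""
    key = min(idx, key=lambda k: (math.dist(k[:3], (X, Y, Z)), abs(k[3] - T)))
    return idx[key]

print(match_frame(index, 0.3, 0.1, 0.0, 1.9))  # -> p1_t2
```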
Optionally, in an embodiment, after obtaining the frame sequence of each panoramic sub-video, the panoramic video distribution method may further include:
acquiring pixel coordinates of each pixel point of each panoramic image frame in a preset image coordinate system;
and associating and storing the pixel points and the pixel coordinates.
In this embodiment, the pixel coordinates of each pixel point of each panoramic image frame in the preset image coordinate system may also be obtained. The preset image coordinate system may be a two-dimensional coordinate system established for each panoramic image frame.
As shown in fig. 3, a physical orientation of the shooting point's position, for example due north, may be selected as the starting point for establishing the image coordinate system; of course, other starting directions may also be used, which is not limited herein. The panoramic image frame is then unrolled in the image coordinate system, and the pixel coordinates of each pixel point in that coordinate system are obtained.
For example, the pixel coordinate of the first pixel point may be represented as (a1, b1), that of the second pixel point as (a2, b2), and so on, up to the nth pixel point, represented as (an, bn).
The pixel points and pixel coordinates are stored in association. Specifically, each pixel point in each panoramic image frame in each panoramic sub-video may be stored in association with the point location information, the playing progress, and the pixel coordinates.
For example, the point location information, playing progress, and pixel coordinates of the first pixel point in the panoramic image frame of the panoramic sub-video shot by shooting point location 1 at time t1 are stored as (x1, y1, z1, t1, a1, b1); those of the second pixel point in the frame shot by shooting point location 2 at time t1 are stored as (x2, y2, z2, t1, a2, b2); and so on, until those of the nth pixel point in the frame shot by shooting point location n at time tn are stored as (xn, yn, zn, tn, an, bn).
In this embodiment of the application, the pixel coordinate of each pixel point in the preset image coordinate system is also obtained, and each pixel point is stored in association with its pixel coordinate, further indexing the panoramic video. A specific pixel point in the corresponding initially selected panoramic image frame can then be located quickly from the index information, allowing the frame to be processed quickly around that pixel point, for example cropped to obtain the target image frame. This further ensures smooth panoramic video distribution and effectively saves transmission resources during distribution. The index information may also be used to indicate the pixel coordinates described above.
Optionally, the view information may include field-of-view coordinates and an initial field angle, and cropping the initially selected panoramic image frame according to the view information to obtain the target image frame may include:
according to the field coordinate, determining a central point from the initially selected panoramic image frame, wherein the central point is a pixel point with the pixel coordinate matched with the field coordinate;
and cutting the initially selected panoramic image frame based on the central point and the initial field angle to obtain a target image frame.
In this embodiment, the field-of-view coordinates may be used to characterize the focal point of the human eye. For example, an eye-tracking sensor can track the eye focus in real time, and the focus coordinates (A, B) can be saved as the field-of-view coordinates.
A central point is then determined from the initially selected panoramic image frame according to the field-of-view coordinates; the central point may be the pixel point whose pixel coordinate matches the field-of-view coordinates. For example, if the pixel coordinate matched with the field-of-view coordinates (A, B) is (a3, b3), the pixel point corresponding to (a3, b3) in the initially selected panoramic image frame may be taken as the central point, i.e., the third pixel point of the frame.
The initial field angle F may be preset according to actual conditions. For example, the field angle perceivable by the typical human eye is generally less than 120° in the horizontal direction and less than 60° in the vertical direction, so the initial field angle F may be preset to 120° horizontally and 60° vertically.
Alternatively, to accommodate operations such as the user rapidly shaking or nodding the head, the range of the initial field angle may be enlarged according to the terminal's capabilities, for example to 150° horizontally and 80° vertically, ensuring that the distributed region still covers the human field of view while the user's head moves rapidly.
In addition, the terminal may also set the initial field angle based on the differing fields of view it captures for different users. The initial field angle may further be associated with a resolution, which may be a preset fixed value or set to different values based on user selection.
The initially selected panoramic image frame is cropped based on the central point and the initial field angle to obtain the target image frame. For simplicity, taking an initial field angle of 120° horizontally and 60° vertically as an example, the frame is cropped over 120° horizontally and 60° vertically with the central point as the cropping center, and the resulting region is the target image frame.
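Assuming the panoramic frame uses an equirectangular layout (the full width spans 360° horizontally and the full height 180° vertically), the crop just described can be sketched as follows; edge wrap-around is merely clamped here, which a real implementation would handle properly:

```python
def crop_fov(frame_w, frame_h, cx, cy, h_fov=120.0, v_fov=60.0):
    """Return the pixel bounding box (left, top, right, bottom) of an
    h_fov x v_fov window centred on pixel (cx, cy) of an equirectangular
    panoramic frame covering 360 x 180 degrees."""
    half_w = int(frame_w * h_fov / 360.0) // 2
    half_h = int(frame_h * v_fov / 180.0) // 2
    left = max(0, cx - half_w)
    top = max(0, cy - half_h)
    right = min(frame_w, cx + half_w)
    bottom = min(frame_h, cy + half_h)
    return left, top, right, bottom

# On a 3840x1920 frame, a 120x60 degree window is 1280x640 pixels:
print(crop_fov(3840, 1920, 1920, 960))  # -> (1280, 640, 2560, 1280)
```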
In this embodiment of the application, the central point and the cropping range in the initially selected panoramic image frame can be obtained quickly from the field-of-view coordinates and the initial field angle, and only the cropped target image frame is distributed to the terminal, which ensures smooth panoramic video distribution and effectively saves transmission resources during distribution.
In one example, a gyroscope in the VR device worn by the user may capture rotations of the user's real head or body, characterizing a spatial transformation of the user's field of view and changing the field-of-view coordinates. Alternatively, virtual-space actions such as "rotate left/right, move up/down" input via a mouse, remote controller, or the like may transform the field of view at the user's virtual position, likewise changing the field-of-view coordinates. When the field-of-view coordinates change, the central point in the initially selected panoramic image frame may change accordingly.
Optionally, in an embodiment, the playing parameters may further include a zoom ratio, and cropping the initially selected panoramic image frame based on the central point and the initial field angle to obtain the target image frame may include:
obtaining a target field angle according to the zooming proportion and the initial field angle;
and cutting the initial panoramic image frame based on the central point and the target field angle to obtain a target image frame.
In this embodiment, the playing parameters may further include a zoom ratio, and the target field angle may be obtained from the zoom ratio and the initial field angle. When the zoom ratio magnifies, the area of the initially selected panoramic image frame seen by the user becomes smaller, i.e., the target field angle is smaller than the initial field angle; conversely, when the zoom ratio reduces, the area seen by the user becomes larger, i.e., the target field angle is larger than the initial field angle.
As shown in fig. 4, the target image frame is illustrated as a rectangle for ease of understanding. When the user selects a 2x zoom-in, the initial field angle is halved in both the horizontal and vertical directions, i.e., the target field angle covers 1/4 of the area of the initial field angle.
The initially selected panoramic image frame is then cropped based on the central point and the target field angle to obtain the target image frame. Taking a target field angle covering 1/4 of the initial field angle's area as an example, the central point is used as the cropping center and the target field angle as the cropping range; the resulting target image frame covers 1/4 of the area that cropping with the initial field angle would cover.
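The relationship between the zoom ratio and the target field angle can be written as a simple division, consistent with the 2x example above (halving each angle leaves 1/4 of the area):

```python
def target_fov(initial_h, initial_v, zoom):
    """Zooming in by `zoom` narrows both field angles by that factor;
    the cropped area shrinks to 1/zoom**2 of the initial window."""
    return initial_h / zoom, initial_v / zoom

print(target_fov(120.0, 60.0, 2.0))  # -> (60.0, 30.0)
```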
The method and the device can acquire the final target image frame by combining with the scaling, guarantee the smoothness of panoramic video distribution, and effectively save transmission resources in the panoramic video distribution process.
Optionally, in one embodiment, the initial field angle may be obtained by the terminal based on the human eye's field of view, and the zoom ratio may be obtained from operation information collected by a sensor of the terminal or from a first input received by the terminal.
In this embodiment, the initial field angle may be obtained from the human eye visual field range based on the terminal. For example, after the user wears the VR device, the VR device may analyze a visual field range of human eyes of the user through an eye tracking sensor or other sensor, so as to set an initial field angle based on the analyzed data.
The analyzed angle range may be used directly as the initial field angle, or a preset margin may be added to it to obtain the initial field angle, which compensates for possible analysis errors.
The zoom ratio may be obtained from operation information collected by a sensor of the terminal, for example a gyroscope on the VR device sensing the user's "zoom in" and "zoom out" gestures. It may also be derived from a first input received by the terminal, for example a zoom command entered via a remote controller, mouse, or the like.
In an embodiment, after the terminal is powered on, the user selects a panoramic video to play, and a corresponding initial play Uniform Resource Locator (URL), hereinafter URL1, is generated. URL1 only identifies the selected panoramic video content and does not carry the playing parameters of the terminal.
The terminal processes URL1 locally with the user's initial position, integrated field angle, field-of-view coordinates, zoom ratio, and time dimension (i.e., the playing parameters: position information, time information, view information, and zoom ratio, where the view information includes the field-of-view coordinates and the field angle), namely (X, Y, Z, T, A, B, F, L), to form URL2(X, Y, Z, T, A, B, F, L). URL2(X, Y, Z, T, A, B, F, L) represents the playing parameters of the terminal.
The terminal sends a play request to the CDN system according to URL2(X, Y, Z, T, A, B, F, L). After scheduling through its own mechanism, the CDN system directs the play request to the specific CDN device that will serve the terminal. The specific scheduling process can be implemented with the scheduling method of an existing video storage server system and is not described again here.
After receiving the play request URL2(X, Y, Z, T, A, B, F, L), the CDN device first uses (X, Y, Z, T) from URL2 to find the matching index information (x, y, z, t), and then finds the corresponding initially selected panoramic image frame according to (x, y, z, t) (that is, the initially selected panoramic image frame is determined from the panoramic video according to the position information and the time information).
The CDN device further finds the corresponding central point (a, b) in the initially selected panoramic image frame according to (A, B) in the play request URL2, performs the distribution-frame calculation with F and L, and completes cropping of the initially selected panoramic image frame (that is, the central point, the pixel point whose pixel coordinate matches the field-of-view coordinates, is determined from the initially selected panoramic image frame according to the field-of-view coordinates, and the frame is cropped based on the central point and the initial field angle to obtain the target image frame).
After the CDN device finishes processing the requested material, it distributes only the cropped target image frame to the terminal for display (that is, it distributes the target image frame to the terminal).
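One plausible way for the terminal to form URL2 from URL1 and the playing parameters is an ordinary query string; the parameter names below are hypothetical, since the patent does not specify a URL format:

```python
from urllib.parse import urlencode

def build_url2(url1, X, Y, Z, T, A, B, F, L):
    """Append the playing parameters (position, time, field-of-view
    coordinates, field angle, zoom) to the content URL selected by the user."""
    params = {"x": X, "y": Y, "z": Z, "t": T, "a": A, "b": B, "fov": F, "zoom": L}
    return url1 + "?" + urlencode(params)

print(build_url2("http://cdn.example.com/pano1", 1, 2, 3, 10, 50, 60, 120, 1.5))
# -> http://cdn.example.com/pano1?x=1&y=2&z=3&t=10&a=50&b=60&fov=120&zoom=1.5
```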
Fig. 5 is a schematic structural diagram of a panoramic video distribution apparatus according to another embodiment of the present application, and only a part related to the embodiment of the present application is shown for convenience of description.
Referring to fig. 5, the panoramic video distribution apparatus includes:
the receiving module is used for receiving playing parameters of the terminal under the condition of establishing panoramic video transmission connection with the terminal, wherein the playing parameters comprise position information, time information and view information;
the determining module is used for determining an initially selected panoramic image frame from the panoramic video according to the position information and the time information;
the cutting module is used for cutting the initially selected panoramic image frame according to the visual field information to obtain a target image frame;
and the distribution module is used for distributing the target image frame to the terminal.
Optionally, the panoramic video includes N panoramic sub-videos, each panoramic sub-video is associated with a shooting point location, each panoramic sub-video includes at least one panoramic image frame, where N is an integer greater than or equal to 1, and the determining module may include:
the first determining unit is used for determining a first panoramic sub-video matched with the position information from the panoramic video according to the corresponding relation between the preset position information and the shooting point location;
and the second determining unit is used for determining the initially selected panoramic image frame matched with the time information from at least one panoramic image frame included in the first panoramic sub video according to the corresponding relation between the preset time information and the panoramic image frame.
Optionally, the apparatus may further include:
the first acquisition module is used for acquiring N panoramic sub-videos in the panoramic video and point location information associated with each panoramic sub-video, wherein the panoramic sub-videos are videos shot by each shooting point location in the panoramic video;
the second acquisition module is used for acquiring a frame sequence of each panoramic sub video, wherein the frame sequence comprises at least one panoramic image frame and a playing progress corresponding to each panoramic image frame;
and the first storage module is used for storing the panoramic sub video and the point location information in an associated manner, and storing the panoramic image frame and the playing progress.
Optionally, the apparatus may further include:
the third acquisition module is used for acquiring the pixel coordinates of each pixel point of each panoramic image frame in a preset image coordinate system;
and the second storage module is used for associating and storing the pixel points and the pixel coordinates.
Optionally, the view information includes view coordinates and an initial field angle, and the cropping module may include:
the third determining unit is used for determining a central point from the initially selected panoramic image frame according to the field coordinate, wherein the central point is a pixel point of which the pixel coordinate is matched with the field coordinate;
and the cutting unit is used for cutting the initially selected panoramic image frame based on the central point and the initial field angle to obtain a target image frame.
Optionally, the playing parameter further includes a scaling, and the clipping unit may include:
the obtaining subunit is used for obtaining a target field angle according to the zooming proportion and the initial field angle;
and the cutting subunit is used for cutting the initial panoramic image frame based on the central point and the target field angle to obtain a target image frame.
Optionally, the initial field angle is obtained by the terminal based on the human eye's field of view, and the zoom ratio is obtained from operation information collected by a sensor of the terminal, or from a first input received by the terminal.
It should be noted that, the contents of information interaction, execution process, and the like between the above-mentioned devices/units are based on the same concept as that of the embodiment of the method of the present application, and are devices corresponding to the above-mentioned panoramic video distribution method.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 6 shows a hardware structure diagram of an electronic device according to still another embodiment of the present application.
The device may comprise a processor 601 and a memory 602 in which computer program instructions are stored.
The steps in any of the various method embodiments described above are implemented when the processor 601 executes a computer program.
Illustratively, a computer program may be partitioned into one or more modules/units, which are stored in memory 602 and executed by processor 601 to complete the application. One or more modules/units may be a series of computer program instruction segments capable of performing certain functions, the instruction segments describing the execution of a computer program in a device.
Specifically, the processor 601 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 602 may include mass storage for data or instructions. By way of example and not limitation, memory 602 may include a hard disk drive (HDD), floppy disk drive, flash memory, optical disk, magneto-optical disk, magnetic tape, or universal serial bus (USB) drive, or a combination of two or more of these. Memory 602 may include removable or non-removable (or fixed) media, where appropriate. The memory 602 may be internal or external to the integrated gateway disaster recovery device, where appropriate. In a particular embodiment, the memory 602 is non-volatile solid-state memory.
The memory may include read-only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, or electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions that, when executed (e.g., by one or more processors), perform the operations described with reference to the methods according to an aspect of the present disclosure.
The processor 601 implements any of the methods in the above embodiments by reading and executing computer program instructions stored in the memory 602.
In one example, the electronic device may also include a communication interface 603 and a bus 610. The processor 601, the memory 602, and the communication interface 603 are connected via a bus 610 to complete communication therebetween.
The communication interface 603 is mainly used for implementing communication between modules, apparatuses, units and/or devices in this embodiment.
Bus 610 includes hardware, software, or both that couple the components of the device to each other. By way of example and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a VESA Local Bus (VLB), or another suitable bus, or a combination of two or more of these. Bus 610 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the application, any suitable bus or interconnect is contemplated.
In addition, in combination with the methods in the foregoing embodiments, the embodiments of the present application may be implemented by providing a computer storage medium. The computer storage medium having computer program instructions stored thereon; the computer program instructions, when executed by a processor, implement any of the methods in the above embodiments.
It is to be understood that the present application is not limited to the particular arrangements and instrumentality described above and shown in the attached drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions or change the order between the steps after comprehending the spirit of the present application.
The functional blocks shown in the structural block diagrams above may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, they may be, for example, electronic circuits, application-specific integrated circuits (ASICs), suitable firmware, plug-ins, function cards, and so on. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber-optic media, radio frequency (RF) links, and so forth. The code segments may be downloaded via a computer network such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware for performing the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As described above, only the specific embodiments of the present application are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered within the scope of the present application.

Claims (10)

1. A panoramic video distribution method, comprising:
receiving playing parameters of a terminal under the condition of establishing panoramic video transmission connection with the terminal, wherein the playing parameters comprise position information, time information and view information;
determining an initially selected panoramic image frame from the panoramic video according to the position information and the time information;
cutting the initially selected panoramic image frame according to the view information to obtain a target image frame;
and distributing the target image frame to the terminal.
2. The method of claim 1, wherein the panoramic video comprises N panoramic sub-videos, each panoramic sub-video having a shot point associated therewith, each panoramic sub-video comprising at least one panoramic image frame, wherein N is an integer greater than or equal to 1,
the determining an initially selected panoramic image frame from the panoramic video according to the position information and the time information comprises:
determining a first panoramic sub-video matched with the position information from the panoramic video according to the corresponding relation between the preset position information and the shooting point location;
and according to the corresponding relation between preset time information and panoramic image frames, determining an initially selected panoramic image frame matched with the time information from at least one panoramic image frame included in the first panoramic sub-video.
3. The method according to claim 2, wherein before receiving the play parameter of the terminal, the method further comprises:
acquiring N panoramic sub-videos in a panoramic video and point location information associated with each panoramic sub-video, wherein the panoramic sub-videos are videos shot by each shooting point location in the panoramic video;
acquiring a frame sequence of each panoramic sub video, wherein the frame sequence comprises at least one panoramic image frame and a playing progress corresponding to each panoramic image frame;
and storing the panoramic sub video and the point location information in an associated manner, and storing the panoramic image frame and the playing progress.
4. The method of claim 3, wherein after obtaining the sequence of frames for each panoramic sub-video, the method further comprises:
acquiring pixel coordinates of each pixel point of each panoramic image frame in a preset image coordinate system;
and storing the pixel point and the pixel coordinate in an associated manner.
5. The method of claim 4, wherein the view information includes field of view coordinates and an initial field angle, and wherein cutting the initially selected panoramic image frame according to the view information to obtain a target image frame comprises:
determining a central point from the initially selected panoramic image frame according to the field coordinate, wherein the central point is a pixel point of which the pixel coordinate is matched with the field coordinate;
and cutting the primary panoramic image frame based on the central point and the initial field angle to obtain a target image frame.
6. The method of claim 5, wherein the playing parameters further comprise a zoom ratio, and wherein cropping the initially selected panoramic image frame based on the central point and the initial field angle to obtain the target image frame comprises:
obtaining a target field angle according to the zoom ratio and the initial field angle;
and cropping the initially selected panoramic image frame based on the central point and the target field angle to obtain the target image frame.
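Claims 5-6 describe a view-dependent crop: the field of view coordinate fixes the center pixel, and the zoom ratio shrinks the initial field angle to a target field angle that determines the crop window. The patent does not give the zoom formula; the sketch below assumes `target_fov = initial_fov / zoom` (zooming in narrows the view), an equirectangular frame with uniform angular resolution, and a square window — all of which are illustrative choices, not claims of the patent.

```python
def crop_target_frame(frame, center, initial_fov_deg, zoom, deg_per_px):
    """Crop a target image frame from the initially selected panoramic frame.

    frame           : 2-D list of pixels standing in for the panoramic image
    center          : (row, col) of the pixel matching the field of view coordinate
    initial_fov_deg : initial field angle in degrees
    zoom            : zoom ratio from the terminal (>1 narrows the view; assumed)
    deg_per_px      : angular resolution of the panorama, degrees per pixel
    """
    target_fov = initial_fov_deg / zoom        # assumed zoom relation (claim 6)
    half = int(target_fov / deg_per_px / 2)    # half-width of the window in pixels
    r, c = center
    return [row[max(c - half, 0):c + half + 1]
            for row in frame[max(r - half, 0):r + half + 1]]
```

A production implementation would also wrap the crop around the 360° seam of an equirectangular panorama rather than clamp at the frame edge as this sketch does.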
7. The method of claim 6, wherein the initial field angle is obtained by the terminal according to the field of view range of the human eye, and the zoom ratio is obtained based on operation information acquired by a sensor of the terminal or based on a first input received by the terminal.
8. A panoramic video distribution apparatus, characterized in that the apparatus comprises:
a receiving module, configured to receive playing parameters of a terminal in a case where a panoramic video transmission connection is established with the terminal, wherein the playing parameters comprise position information, time information and field of view information;
a determining module, configured to determine an initially selected panoramic image frame from the panoramic video according to the position information and the time information;
a cropping module, configured to crop the initially selected panoramic image frame according to the field of view information to obtain a target image frame;
and a distribution module, configured to distribute the target image frame to the terminal.
9. An electronic device, characterized in that the device comprises: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the method of any one of claims 1-7.
10. A computer storage medium having computer program instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1-7.
CN202110481689.4A 2021-04-30 2021-04-30 Panoramic video distribution method, device, equipment and computer storage medium Pending CN115278193A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110481689.4A CN115278193A (en) 2021-04-30 2021-04-30 Panoramic video distribution method, device, equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN115278193A 2022-11-01

Family

ID=83745629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110481689.4A Pending CN115278193A (en) 2021-04-30 2021-04-30 Panoramic video distribution method, device, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN115278193A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005250560A (en) * 2004-03-01 2005-09-15 Mitsubishi Electric Corp Landscape display device
US9778351B1 (en) * 2007-10-04 2017-10-03 Hrl Laboratories, Llc System for surveillance by integrating radar with a panoramic staring sensor
US20180020204A1 (en) * 2015-04-15 2018-01-18 Lytro, Inc. Data structures and delivery methods for expediting virtual reality playback
CN108174240A (en) * 2017-12-29 2018-06-15 哈尔滨市舍科技有限公司 Panoramic video playback method and system based on user location
CN112153401A (en) * 2020-09-22 2020-12-29 咪咕视讯科技有限公司 Video processing method, communication device and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHENG SHUHUI: "Research on video stitching technology for multiple cameras" *

Similar Documents

Publication Publication Date Title
CN108830894B (en) Remote guidance method, device, terminal and storage medium based on augmented reality
CN107820593B (en) Virtual reality interaction method, device and system
US9159169B2 (en) Image display apparatus, imaging apparatus, image display method, control method for imaging apparatus, and program
JP6621063B2 (en) Camera selection method and video distribution system
CN108932051B (en) Augmented reality image processing method, apparatus and storage medium
JP7017175B2 (en) Information processing equipment, information processing method, program
CN107404615B (en) Image recording method and electronic equipment
KR20160122702A (en) Information processing device, information processing method and program
CN111627116A (en) Image rendering control method and device and server
CN112165629B (en) Intelligent live broadcast method, wearable device and intelligent live broadcast system
US20180020203A1 (en) Information processing apparatus, method for panoramic image display, and non-transitory computer-readable storage medium
CN114205669B (en) Free view video playing method and device and electronic equipment
CN109587572B (en) Method and device for displaying product, storage medium and electronic equipment
CN108804161B (en) Application initialization method, device, terminal and storage medium
CN111818265B (en) Interaction method and device based on augmented reality model, electronic equipment and medium
JP2016195323A (en) Information processing apparatus, information processing method, and program
KR20180133052A (en) Method for authoring augmented reality contents based on 360 degree image and video
CN115278193A (en) Panoramic video distribution method, device, equipment and computer storage medium
CN110383819A (en) It generates the method for the directional information of omnidirectional images and executes the device of this method
CN112070903A (en) Virtual object display method and device, electronic equipment and computer storage medium
CN114007056A (en) Method and device for generating three-dimensional panoramic image
CN106101539A (en) A kind of self-shooting bar angle regulation method and self-shooting bar
US20200195844A1 (en) Method for street view service and apparatus for performing same method
CN105787988B (en) Information processing method, server and terminal equipment
KR102177876B1 (en) Method for determining information related to filming location and apparatus for performing the method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination