WO2023237095A1 - Surround-view-based video synthesis method, controller and storage medium - Google Patents

Surround-view-based video synthesis method, controller and storage medium Download PDF

Info

Publication number
WO2023237095A1
WO2023237095A1 · PCT/CN2023/099344 · CN2023099344W
Authority
WO
WIPO (PCT)
Prior art keywords
viewing angle
perspective
image
synthesis method
angle
Prior art date
Application number
PCT/CN2023/099344
Other languages
English (en)
French (fr)
Inventor
陈笑怡
李怀德
Original Assignee
咪咕视讯科技有限公司
咪咕文化科技有限公司
中国移动通信集团有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 咪咕视讯科技有限公司, 咪咕文化科技有限公司, 中国移动通信集团有限公司
Publication of WO2023237095A1

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23418 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23424 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping

Definitions

  • The present application relates to the field of video synthesis technology, and in particular to a video synthesis method, controller and storage medium based on a surround view.
  • At present, video synthesis for surround shooting mostly adopts a tolerance-based splicing method.
  • The synthesized video data is processed by a streaming server and a video codec, then transmitted to the broadcast end for full decoding before being presented to the user.
  • However, in the related art, the synthesis processing for surround shooting takes a long time, and the amount of synthesized data that needs to be transmitted is large.
  • Some low-end and mid-range terminal devices are prone to problems such as the device or its processor overheating when using free viewing angles, which is not conducive to the universality and application of the free-viewing-angle function in the ultra-high-definition field.
  • The technical purpose of the embodiments of this application is to provide a video synthesis method, controller and storage medium based on a surround view, so as to solve the problems that current terminal equipment is prone to overheating, or processor overheating, when using free viewing angles, and that the free-viewing-angle function cannot be universally applied in the ultra-high-definition field.
  • An embodiment of the present application provides a video synthesis method based on a surround view, applied to a client.
  • The method includes: after receiving video data pushed by a server, parsing and presenting images according to a first viewing angle range predetermined in the video data; after receiving a user's viewing-angle adjustment input, determining the user's adjusted second viewing angle range according to the input; and performing adaptive frame interpolation according to the second viewing angle range to obtain an adaptive image that satisfies the second viewing angle range, and rendering based on the adaptive image.
  • Optionally, determining the user's adjusted second viewing angle range according to the viewing-angle adjustment input includes: when the user's first input on the play box is received, popping up at least one viewing angle dial in the play box; adjusting the playback viewing angle range of the image according to the user's second input on the dial; and, when the user's third input is received, determining the current playback viewing angle range as the second viewing angle range.
  • Optionally, performing adaptive frame interpolation according to the second viewing angle range to obtain an adaptive image that satisfies the second viewing angle range includes: determining the rotation angle and rotation direction of the viewing-angle change according to the second and first viewing angle ranges; starting from the boundary corresponding to the rotation direction, traversing the image of each frame within the rotation angle range along that direction; and performing preset frame interpolation on images of adjacent frames to obtain the adaptive image.
  • Optionally, performing preset frame interpolation on images of adjacent frames to obtain the adaptive image includes: mapping the image of each frame to a cylinder or a sphere according to a preset first algorithm; extracting the projected feature points of the image on the cylinder or sphere; obtaining the distance difference of the projected feature points according to the correspondence between the projected feature points of two adjacent frames; when the distance difference is less than a threshold, solving the homography based on the projected feature points and performing splicing to obtain the adaptive image; and, when the distance difference is greater than or equal to the threshold, returning to the traversal step.
  • Optionally, after the step of rendering based on the adaptive image, the method further includes: recording the second viewing angle range as the first viewing angle range; and, when the user's viewing-angle adjustment input is received again, performing again the step of capturing the user's adjusted second viewing angle range according to the input.
  • Optionally, there is one viewing angle dial, wherein a first rotation direction of the dial corresponds to a preset first viewing-angle rotation direction, and a first unit rotation angle on the dial corresponds to a first preset unit viewing-angle rotation angle.
  • Optionally, there are at least two viewing angle dials, including a first dial and a second dial, wherein a second rotation direction of the first dial corresponds to a preset second viewing-angle rotation direction, and a second unit rotation angle on the first dial corresponds to a second preset unit viewing-angle rotation angle;
  • a third rotation direction of the second dial corresponds to the second rotation direction, and a third unit rotation angle on the second dial corresponds to a third preset unit viewing-angle rotation angle, the third preset unit angle being smaller than the second preset unit angle;
  • or, the third rotation direction of the second dial corresponds to a preset third viewing-angle rotation direction, the third unit rotation angle on the second dial corresponds to a third preset unit viewing-angle rotation angle, and the third viewing-angle rotation direction is perpendicular to the second viewing-angle rotation direction.
  • Another embodiment of the present application provides a video synthesis method based on a surround view, applied to a server. The method includes:
  • after receiving a data packet transmitted by signal from the shooting end, parsing the data packet to obtain video data;
  • predetermining, according to the shooting method, the first viewing angle range in which the video data is presented; and
  • when a video request is received, pushing the video data to the corresponding client.
  • Optionally, parsing the data packet to obtain the video data includes: decompressing the data packet to obtain the video data, automatically detecting the color curve of the images in the video data, and color-correcting the portions of two adjacent frames whose color difference exceeds a first difference value.
  • Optionally, parsing the data packet to obtain the video data includes: decompressing the data packet to obtain the video data, preloading and analyzing the surround angle of the video data, and, when the picture difference between two adjacent frames exceeds a second difference value, generating one transition frame and inserting it into the video data.
  • A further embodiment of the present application provides a controller applied to a client, the controller including:
  • a first processing module, configured to, after receiving video data pushed by a server, parse and present images according to the first viewing angle range predetermined in the video data;
  • a second processing module, configured to, after receiving a user's viewing-angle adjustment input, determine the user's adjusted second viewing angle range according to the input; and
  • a third processing module, configured to perform adaptive frame interpolation according to the second viewing angle range, obtain an adaptive image that satisfies the second viewing angle range, and render based on the adaptive image.
  • In the controller as described above, the second processing module includes:
  • a first sub-processing module, configured to pop up at least one viewing angle dial in the play box when the user's first input on the play box is received;
  • a second sub-processing module, configured to adjust the playback viewing angle range of the image according to the user's second input on the dial; and
  • a third sub-processing module, configured to determine the current playback viewing angle range as the second viewing angle range when the user's third input is received.
  • In the controller as described above, the third processing module includes:
  • a fourth sub-processing module, configured to determine the rotation angle and rotation direction of the viewing-angle change according to the second and first viewing angle ranges;
  • a fifth sub-processing module, configured to traverse, starting from the boundary corresponding to the rotation direction and along that direction, the image of each frame within the rotation angle range; and
  • a sixth sub-processing module, configured to perform preset frame interpolation on images of adjacent frames to obtain the adaptive image.
  • In the controller as described above, the sixth sub-processing module includes:
  • a first processing unit, configured to map the image of each frame to a cylinder or a sphere according to a preset first algorithm;
  • a second processing unit, configured to extract the projected feature points of the image on the cylinder or sphere;
  • a third processing unit, configured to obtain the distance difference of the projected feature points according to the correspondence between the projected feature points of two adjacent frames;
  • a fourth processing unit, configured to solve the homography based on the projected feature points and perform splicing to obtain the adaptive image when the distance difference is less than a threshold; and
  • a fifth processing unit, configured to, when the distance difference is greater than or equal to the threshold, return to the step of traversing, along the rotation direction and starting from the corresponding boundary angle, the image of each frame within the rotation angle range.
  • The controller as described above further includes:
  • a seventh processing module, configured to record the second viewing angle range as the first viewing angle range; and
  • an eighth processing module, configured to, when the user's viewing-angle adjustment input is received again, perform again the step of capturing the user's adjusted second viewing angle range according to the input.
  • Optionally, the controller has one viewing angle dial, wherein a first rotation direction of the dial corresponds to a preset first viewing-angle rotation direction, and a first unit rotation angle on the dial corresponds to a first preset unit viewing-angle rotation angle.
  • Optionally, the controller has at least two viewing angle dials, wherein a second rotation direction of the first dial corresponds to a preset second viewing-angle rotation direction, and a second unit rotation angle on the first dial corresponds to a second preset unit viewing-angle rotation angle;
  • a third rotation direction of the second dial corresponds to the second rotation direction, and a third unit rotation angle on the second dial corresponds to a third preset unit viewing-angle rotation angle, the third preset unit angle being smaller than the second preset unit angle;
  • or, the third rotation direction of the second dial corresponds to a preset third viewing-angle rotation direction, the third unit rotation angle on the second dial corresponds to a third preset unit viewing-angle rotation angle, and the third viewing-angle rotation direction is perpendicular to the second viewing-angle rotation direction.
  • Yet another embodiment of the present application provides a controller applied to a server, the controller including:
  • a fourth processing module, configured to parse data packets received by signal from the shooting end to obtain video data;
  • a fifth processing module, configured to predetermine, according to the shooting method, the first viewing angle range in which the video data is presented; and
  • a sixth processing module, configured to push the video data to the corresponding client when a video request is received.
  • In the controller as described above, the fourth processing module includes:
  • a seventh sub-processing module, configured to decompress the data packets to obtain the video data;
  • an eighth sub-processing module, configured to automatically detect the color curve of the images in the video data and color-correct the portions of two adjacent frames whose color difference exceeds the first difference value; and/or
  • a ninth sub-processing module, configured to preload and analyze the surround angle of the video data and, when the picture difference between two adjacent frames exceeds the second difference value, generate one transition frame and insert it into the video data.
  • Another embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above client-side surround-view video synthesis method, or the steps of the above server-side surround-view video synthesis method.
  • Another embodiment of the present application provides an electronic device including a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the above client-side surround-view video synthesis method, or the steps of the above server-side surround-view video synthesis method.
  • Another embodiment of the present application provides a chip including a processor and a communication interface coupled to the processor, the processor being configured to run programs or instructions to implement the steps of the above client-side surround-view video synthesis method, or the steps of the above server-side surround-view video synthesis method.
  • The embodiments of the present application, which provide a video synthesis method, controller and storage medium based on a surround view, have at least the following beneficial effects:
  • when the client presents images from the video data, it parses and presents only the predetermined first viewing angle range and treats the remaining data as redundant, thereby reducing the amount of computation, which improves fluency when presenting videos or image sequences and prevents the terminal device or its processor from overheating;
  • when the user adjusts the viewing angle, the adjusted second viewing angle range is determined from the viewing-angle adjustment input, and only the images within that range undergo adaptive frame interpolation to obtain an adaptive image satisfying the second viewing angle range for presentation. This likewise reduces computation, helps avoid stuttering caused by overly long intervals between two frames, ensures smooth image presentation, and supports the universal applicability and application of the free-viewing-angle function in the ultra-high-definition field.
  • Figure 1 is the first schematic flowchart of the client-side surround-view video synthesis method of this application;
  • Figure 2 is a schematic diagram of a change in the viewing angle range;
  • Figure 3 is the second schematic flowchart of the client-side surround-view video synthesis method of this application;
  • Figure 4 is the third schematic flowchart of the client-side surround-view video synthesis method of this application;
  • Figure 5 is the fourth schematic flowchart of the client-side surround-view video synthesis method of this application;
  • Figure 6 is the first schematic diagram of the client-side viewing angle dial of this application;
  • Figure 7 is the second schematic diagram of the client-side viewing angle dial of this application;
  • Figure 8 is the first schematic flowchart of the server-side surround-view video synthesis method of this application;
  • Figure 9 is a schematic structural diagram of the client-side controller of this application;
  • Figure 10 is a schematic structural diagram of the server-side controller of this application.
  • Referring to Figure 1, an embodiment of the present application provides a video synthesis method based on a surround view, applied to a client, including:
  • Step S101: after receiving video data pushed by the server, parse and present images according to the first viewing angle range predetermined in the video data;
  • Step S102: after receiving the user's viewing-angle adjustment input, determine the user's adjusted second viewing angle range according to the input;
  • Step S103: perform adaptive frame interpolation according to the second viewing angle range, obtain an adaptive image that satisfies the second viewing angle range, and render based on the adaptive image.
  • In this embodiment of the application, a surround-view video synthesis method applied to the client is provided. After receiving the required video data pushed by the server, the client parses and presents images according to the first viewing angle range predetermined in the video data, while video data for viewing angles outside the first range is not parsed but first treated as redundant data. This reduces the amount of computation, improves fluency when presenting videos or image sequences, and prevents the terminal device or its processor from overheating.
  • While the client is presenting images within the first viewing angle range, if the user's viewing-angle adjustment input is received, it is determined that the user is using the client's free-viewing-angle function. To present the image for the corresponding viewing angle, the adjusted second viewing angle range that the user wants is first determined from the input, and adaptive frame interpolation is then performed on the images within the second range to obtain an adaptive image that satisfies it. Only the images within the second range need processing, while video data for other viewing angles remains redundant, which reduces computation; and the adaptive frame interpolation helps avoid stuttering caused by overly long intervals between two frames, ensuring smooth presentation and supporting the universal applicability and application of the free-viewing-angle function in the ultra-high-definition field.
  • Referring to Figure 2, in one embodiment the viewing angle of the first viewing angle range is 2θ, where θ is a positive value such as 30 degrees or 60 degrees. Taking the center of the first range as 0 degrees and considering an offset in one direction only, the first viewing angle range can be written as [-θ, θ]. When the second range is obtained by rotating the first range by φ along the first direction, the second viewing angle range is [-(θ+φ), θ]. Both the first and second viewing angle ranges include the playback viewing angle of the terminal device.
  • It should be noted that when the user's viewing-angle adjustment input lasts a long time, the subsequent steps of determining the adjusted second viewing angle range, performing adaptive frame interpolation according to it, obtaining the adaptive image and rendering it can be executed in batches, with each batch not exceeding a preset unit time.
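  • As a minimal sketch of the range arithmetic above (the function name and the tuple representation are assumptions for illustration, not part of the application):

```python
def rotate_range(viewing_range, phi):
    """Extend a viewing angle range [lo, hi] (degrees) rotated by phi.

    Mirrors the example in the text: [-theta, theta] rotated by phi along
    the first direction yields [-(theta + phi), theta], i.e. the adjusted
    range covers both the old view and the newly exposed angles.
    """
    lo, hi = viewing_range
    if phi < 0:                # rotation toward the lower boundary
        return (lo + phi, hi)
    return (lo, hi + phi)      # rotation toward the upper boundary

theta, phi = 30.0, 30.0
first = (-theta, theta)               # [-30, 30]
second = rotate_range(first, -phi)    # [-60, 30] == [-(theta + phi), theta]
assert second == (-(theta + phi), theta)
```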
  • Referring to Figure 3, optionally, determining the user's adjusted second viewing angle range according to the viewing-angle adjustment input includes:
  • Step S301: when the user's first input on the play box is received, pop up at least one viewing angle dial in the play box;
  • Step S302: adjust the playback viewing angle range of the image according to the user's second input on the dial;
  • Step S303: when the user's third input is received, determine the current playback viewing angle range as the second viewing angle range.
  • In a specific embodiment of this application, when the first input on the play box is received, it can be determined that the user needs to adjust the viewing angle, and at least one viewing angle dial pops up in the play box so that the user can adjust the viewing angle through it. The first input includes, but is not limited to, clicking, repeatedly clicking or long-pressing a first preset position on the play box, or repeatedly clicking or long-pressing any position on the play box. Angle markings can be placed on the dial so that users can choose a suitable offset angle as needed.
  • Further, according to the user's second input on the dial, the playback viewing angle range of the image can be adjusted; the adjustment includes, but is not limited to, turning the range left and right and/or up and down, and the second input includes, but is not limited to, turning or clicking the dial.
  • When the user's third input is received, the playback viewing angle range currently selected by the user is determined as the second viewing angle range. The third input includes, but is not limited to, the user taking no action within a preset time, or clicking, repeatedly clicking or long-pressing a second preset position on or within the play box; the second preset position may be the same as the first preset position, or may be located on the dial.
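  • A minimal sketch of the described dial interaction follows; the class, method names and event semantics are assumptions for illustration, since the application does not prescribe an API:

```python
class ViewingAngleDial:
    """Hypothetical model of the play-box dial interaction described above."""

    def __init__(self, play_range, unit_angle=5.0):
        self.play_range = list(play_range)  # current playback range [lo, hi]
        self.unit_angle = unit_angle        # preset unit viewing-angle rotation per dial step
        self.visible = False

    def on_first_input(self):
        # click / repeated click / long press on the play box: pop up the dial
        self.visible = True

    def on_second_input(self, dial_steps):
        # turning the dial rotates the playback viewing angle range
        if self.visible:
            offset = dial_steps * self.unit_angle
            self.play_range = [self.play_range[0] + offset,
                               self.play_range[1] + offset]

    def on_third_input(self):
        # timeout or confirming press: freeze the second viewing angle range
        self.visible = False
        return tuple(self.play_range)

dial = ViewingAngleDial((-30.0, 30.0))
dial.on_first_input()
dial.on_second_input(-6)              # six dial units to the left: -30 degrees
second_range = dial.on_third_input()  # (-60.0, 0.0)
```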
  • Referring to Figure 4, optionally, performing adaptive frame interpolation according to the second viewing angle range to obtain an adaptive image that satisfies it includes:
  • Step S401: determine the rotation angle and rotation direction of the viewing-angle change according to the second and first viewing angle ranges;
  • Step S402: starting from the boundary corresponding to the rotation direction, traverse the image of each frame within the rotation angle range along that direction;
  • Step S403: perform preset frame interpolation on images of adjacent frames to obtain the adaptive image.
  • In a further embodiment, when performing adaptive frame interpolation according to the second viewing angle range, the rotation angle and rotation direction of the viewing-angle change are preferably determined from the adjusted second range and the pre-adjustment first range. Then, starting from the boundary corresponding to the rotation direction, the image of each frame within the rotation angle range is traversed along that direction; that is, the images of each frame within the newly required (not yet presented) angle range are obtained from the data that is not currently presented. This can also be understood as follows: suppose the presentable image for the current frame covers 360 degrees, and the currently presented image is a 60-degree view centered at 0 degrees, i.e. [-30°, 30°].
  • If the viewing angle is then turned 30° to the left, the images within [-60°, -30°) must be added to the existing image for presentation, so those images are first obtained from the redundant data as the image of each frame. Preset frame interpolation is then performed on the images of adjacent frames to obtain the adaptive image to be presented.
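  • The boundary-first traversal can be sketched as follows (the function and the per-frame angle step are assumptions for illustration):

```python
def frames_to_fetch(first_range, second_range, step=10.0):
    """Determine the rotation direction and which per-frame angles must be
    pulled from redundant data, starting at the boundary on the rotation side.

    Mirrors the worked example: first = (-30, 30), second = (-60, 30)
    gives direction "left" and angles covering [-60, -30).
    """
    lo1, hi1 = first_range
    lo2, hi2 = second_range
    if lo2 < lo1:                    # range grew toward lower angles
        direction, start, stop = "left", lo2, lo1
    elif hi2 > hi1:                  # range grew toward higher angles
        direction, start, stop = "right", hi1, hi2
    else:
        return "none", []
    angles, a = [], start
    while a < stop:                  # traverse from the boundary onward
        angles.append(a)
        a += step
    return direction, angles

direction, angles = frames_to_fetch((-30.0, 30.0), (-60.0, 30.0))
# direction == "left", angles == [-60.0, -50.0, -40.0], covering [-60, -30)
```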
  • Referring to Figure 5, further, performing preset frame interpolation on images of adjacent frames to obtain the adaptive image includes:
  • Step S501: map the image of each frame to a cylinder or sphere according to the preset first algorithm;
  • Step S502: extract the projected feature points of the image on the cylinder or sphere;
  • Step S503: obtain the distance difference of the projected feature points according to the correspondence between the projected feature points of two adjacent frames;
  • Step S504: when the distance difference is less than a threshold, solve the homography based on the projected feature points and perform splicing to obtain the adaptive image;
  • Step S505: when the distance difference is greater than or equal to the threshold, return to the step of traversing, along the rotation direction and starting from the corresponding boundary angle, the image of each frame within the rotation angle range.
  • In another embodiment of this application, the step of performing preset frame interpolation on images of adjacent frames to obtain the adaptive image to be presented is disclosed in detail. First, according to a preset first algorithm (for example a deformation algorithm such as warp, which uses transformation matrices to map images), the images obtained above are mapped to a cylinder or sphere. When the viewing angle only needs to rotate in one direction, the images may be mapped to either a cylinder or a sphere; when it needs to rotate in two mutually perpendicular directions, they can only be mapped to a sphere.
  • Next, the projected feature points of each frame's image on the cylinder or sphere are extracted. There may be multiple projected feature points, each recorded as n ∈ N_i, where i denotes the i-th frame image and N_i denotes the number of projected feature points on the i-th frame image. Because the number of feature points on each frame, or on adjacent frames, may fluctuate within a small range, the number of projected feature points per frame is not necessarily equal to the corresponding total number of feature points. The projected feature points can be obtained via the scale-invariant feature transform (SIFT), a local feature descriptor used in image processing that is scale-invariant and can detect key points in an image.
  • Further, the correspondence between the projected feature points on the cylinder or sphere in the two adjacent frames is computed, and the distance difference of the corresponding feature points is obtained, where N is the total number of projected feature points.
  • When the distance difference is less than a threshold, the two adjacent frames are close to each other and can transition smoothly during playback; the homography can then be solved from the projected feature points (that is, the image mapped onto the cylinder or sphere is restored) and spliced with the existing images to obtain the corresponding adaptive image.
  • When the distance difference is greater than or equal to the threshold, the two adjacent frames are determined to be far apart; without frame interpolation, problems such as stuttering or unsmooth playback would arise and affect the viewing experience. A frame is therefore interpolated between the two, and the process returns to reacquire the images so that the final images are presented smoothly.
  • It should be noted that the threshold can be set manually or computed from the fluency requirements of the device terminal; by changing the threshold, and in particular by lowering it, a clearer and smoother viewing effect can be achieved.
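  • A minimal sketch of steps S502–S505 using OpenCV follows. The cylinder/sphere mapping of step S501 is assumed to have been applied already, the mean displacement over matched SIFT points stands in for the distance difference (the application does not fix the exact statistic), and the threshold value and the naive overwrite splice are illustrative assumptions:

```python
import cv2
import numpy as np

def feature_distance(img_a, img_b, max_matches=200):
    """SIFT projected feature points on two adjacent (already mapped)
    frames and the mean displacement of the matched points."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY), None)
    kp_b, des_b = sift.detectAndCompute(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY), None)
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:max_matches]
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    return np.linalg.norm(pts_a - pts_b, axis=1).mean(), pts_a, pts_b

def splice_if_close(img_a, img_b, threshold=40.0):
    """If the adjacent frames are close enough, solve the homography from
    the matched points and warp one frame into the other's plane for
    splicing; otherwise return None to signal that a frame must be
    interpolated between them and the traversal repeated."""
    dist, pts_a, pts_b = feature_distance(img_a, img_b)
    if dist >= threshold:
        return None                    # too far apart: interpolate a frame first
    H, _ = cv2.findHomography(pts_b, pts_a, cv2.RANSAC, 5.0)
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (2 * w, h))
    canvas[:h, :w] = img_a             # naive overwrite splice for illustration
    return canvas
```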
  • Preferably, after the step of rendering based on the adaptive image, the method further includes: recording the second viewing angle range as the first viewing angle range; and, when the user's viewing-angle adjustment input is received again, performing again the step of capturing the user's adjusted second viewing angle range according to the input.
  • In another embodiment, a further video synthesis method is provided: after the user has adjusted the viewing angle once, the second viewing angle range at that point is recorded as the first viewing angle range, so that when the user needs to adjust the viewing angle again, the adjustment can be made on this basis. This avoids repeated computation that would result from re-adjusting from the original first viewing angle range.
  • Optionally, there is one viewing angle dial, wherein a first rotation direction of the dial corresponds to a preset first viewing-angle rotation direction, and a first unit rotation angle on the dial corresponds to a first preset unit viewing-angle rotation angle.
  • Referring to Figures 6 and 7, optionally, there are at least two viewing angle dials, wherein a second rotation direction of the first dial corresponds to a preset second viewing-angle rotation direction, and a second unit rotation angle on the first dial corresponds to a second preset unit viewing-angle rotation angle;
  • a third rotation direction of the second dial corresponds to the second rotation direction, and a third unit rotation angle on the second dial corresponds to a third preset unit viewing-angle rotation angle, the third preset unit angle being smaller than the second preset unit angle;
  • or, the third rotation direction of the second dial corresponds to a preset third viewing-angle rotation direction, the third unit rotation angle on the second dial corresponds to a third preset unit viewing-angle rotation angle, and the third viewing-angle rotation direction is perpendicular to the second viewing-angle rotation direction.
  • When there are at least two dials, the second dial (shown as the horizontal dial in Figure 6) can be a fine-grained complement to the first dial (shown as the vertical dial in Figure 6), allowing the user to make finer viewing-angle adjustments; alternatively, its rotation direction can be perpendicular to that of the first dial to enable 360-degree rotation on the sphere (shown as the horizontal dial in Figure 7).
  • Referring to Figure 8, another embodiment of the present application provides a video synthesis method based on a surround view, applied to a server, including:
  • Step S801: after receiving a data packet transmitted by signal from the shooting end, parse the data packet to obtain video data;
  • Step S802: predetermine, according to the shooting method, the first viewing angle range in which the video data is presented;
  • Step S803: when a video request is received, push the video data to the corresponding client.
  • In this embodiment, a video synthesis method applied to the server is provided. After receiving the data packet transmitted by signal from the shooting end, the server parses it to obtain the complete image sequence or video as the video data, and, based on the shooting method, predetermines the first viewing angle range in which the video data is presented. When the client requests and receives the video data, it can thus give priority to parsing and presenting the first viewing angle range, while video data for other viewing angles is not parsed but first treated as redundant data. This reduces computation, improves fluency when presenting videos or image sequences, and prevents the terminal device or its processor from overheating.
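  • The server-side flow of steps S801–S803 can be sketched as below; the packet parser, the shooting-method-to-range mapping and the push call are all hypothetical placeholders for illustration:

```python
videos = {}  # video_id -> (video_data, first_viewing_range)

def on_packet(video_id, packet, shooting_method, parse_packet, initial_range):
    """S801/S802: parse the received packet into video data and
    predetermine the first viewing angle range from the shooting method."""
    video_data = parse_packet(packet)             # hypothetical parser
    first_range = initial_range(shooting_method)  # hypothetical mapping
    videos[video_id] = (video_data, first_range)

def on_video_request(video_id, push):
    """S803: push the stored video data, with its predetermined first
    viewing angle range, to the requesting client."""
    video_data, first_range = videos[video_id]
    push(video_data, first_range)                 # hypothetical client push
```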
  • Preferably, after receiving the data packet transmitted by signal from the shooting end, decompressing the data packet to obtain the video data includes: decompressing the data packet to obtain the video data; automatically detecting the color curve of the images in the video data and color-correcting the portions of two adjacent frames whose color difference exceeds a first difference value; and/or preloading and analyzing the surround angle of the video data and, when the picture difference between two adjacent frames exceeds a second difference value, generating one transition frame and inserting it into the video data.
  • In another embodiment, after the data packet is received it is decompressed to obtain the video data. By detecting and correcting the colors in the images, the exposure of individual over-exposed frames in the sequence can be automatically lowered, partially eliminating or mitigating adverse effects, such as picture jitter or flicker, that objective factors in the "live shooting" and "signal transmission" stages (ambient lighting, shutter arrays, signal packet loss, and the like) inflict on the "broadcast presentation" stage, and reducing the interference of the raw data with the computations of subsequent steps.
  • The surround angle of the original images in the surround frame sequence can also be preloaded and analyzed to compute the picture difference between two adjacent frames; if the difference is too large, a transition frame is automatically generated and inserted so that the original angles play back smoothly. This processing of the raw video data helps the client quickly read the optimal raw data during presentation, and improves the efficiency of "recording, editing and playback" when other production systems handle the surround video.
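  • The two optional server-side passes can be sketched as follows; using the per-channel mean as the "color curve", a simple gain for the correction, mean absolute pixel difference for the picture difference, and a 50/50 blend for the transition frame are all assumptions, since the application does not fix these computations:

```python
import cv2
import numpy as np

def color_corrected(frames, first_diff=10.0):
    """Pull a frame whose mean color drifts past first_diff back toward
    its (already corrected) predecessor: sketch of the color-curve pass."""
    out = [frames[0]]
    for cur in frames[1:]:
        prev = out[-1]
        drift = np.abs(prev.mean(axis=(0, 1)) - cur.mean(axis=(0, 1)))
        if drift.max() > first_diff:
            gain = (prev.mean(axis=(0, 1)) + 1e-6) / (cur.mean(axis=(0, 1)) + 1e-6)
            cur = np.clip(cur.astype(np.float32) * gain, 0, 255).astype(np.uint8)
        out.append(cur)
    return out

def with_transitions(frames, second_diff=20.0):
    """Insert one blended transition frame wherever the picture difference
    between adjacent frames exceeds second_diff: sketch of the preload pass."""
    out = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        if np.abs(prev.astype(np.int16) - cur.astype(np.int16)).mean() > second_diff:
            out.append(cv2.addWeighted(prev, 0.5, cur, 0.5, 0))
        out.append(cur)
    return out
```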
  • Referring to Figure 9, a further embodiment of the present application provides a controller applied to a client, the controller including:
  • a first processing module 901, configured to, after receiving video data pushed by the server, parse and present images according to the first viewing angle range predetermined in the video data;
  • a second processing module 902, configured to, after receiving the user's viewing-angle adjustment input, determine the user's adjusted second viewing angle range according to the input;
  • a third processing module 903, configured to perform adaptive frame interpolation according to the second viewing angle range, obtain an adaptive image that satisfies the second viewing angle range, and render based on the adaptive image.
  • Optionally, in the controller as described above, the second processing module 902 includes:
  • a first sub-processing module, configured to pop up at least one viewing angle dial in the play box when the user's first input on the play box is received;
  • a second sub-processing module, configured to adjust the playback viewing angle range of the image according to the user's second input on the dial; and
  • a third sub-processing module, configured to determine the current playback viewing angle range as the second viewing angle range when the user's third input is received.
  • Optionally, in the controller as described above, the third processing module 903 includes:
  • a fourth sub-processing module, configured to determine the rotation angle and rotation direction of the viewing-angle change according to the second and first viewing angle ranges;
  • a fifth sub-processing module, configured to traverse, starting from the boundary corresponding to the rotation direction and along that direction, the image of each frame within the rotation angle range; and
  • a sixth sub-processing module, configured to perform preset frame interpolation on images of adjacent frames to obtain the adaptive image.
  • Optionally, in the controller as described above, the sixth sub-processing module includes:
  • a first processing unit, configured to map the image of each frame to a cylinder or a sphere according to a preset first algorithm;
  • a second processing unit, configured to extract the projected feature points of the image on the cylinder or sphere;
  • a third processing unit, configured to obtain the distance difference of the projected feature points according to the correspondence between the projected feature points of two adjacent frames;
  • a fourth processing unit, configured to solve the homography based on the projected feature points and perform splicing to obtain the adaptive image when the distance difference is less than a threshold; and
  • a fifth processing unit, configured to, when the distance difference is greater than or equal to the threshold, return to the step of traversing, along the rotation direction and starting from the corresponding boundary angle, the image of each frame within the rotation angle range.
  • Optionally, the controller as described above further includes:
  • a seventh processing module, configured to record the second viewing angle range as the first viewing angle range; and
  • an eighth processing module, configured to, when the user's viewing-angle adjustment input is received again, perform again the step of capturing the user's adjusted second viewing angle range according to the input.
  • Optionally, the controller has one viewing angle dial, wherein a first rotation direction of the dial corresponds to a preset first viewing-angle rotation direction, and a first unit rotation angle on the dial corresponds to a first preset unit viewing-angle rotation angle.
  • Optionally, the controller has at least two viewing angle dials, wherein a second rotation direction of the first dial corresponds to a preset second viewing-angle rotation direction, and a second unit rotation angle on the first dial corresponds to a second preset unit viewing-angle rotation angle;
  • a third rotation direction of the second dial corresponds to the second rotation direction, and a third unit rotation angle on the second dial corresponds to a third preset unit viewing-angle rotation angle, the third preset unit angle being smaller than the second preset unit angle;
  • or, the third rotation direction of the second dial corresponds to a preset third viewing-angle rotation direction, the third unit rotation angle on the second dial corresponds to a third preset unit viewing-angle rotation angle, and the third viewing-angle rotation direction is perpendicular to the second viewing-angle rotation direction.
  • The embodiment of the client-side controller of this application is the apparatus corresponding to the above embodiment of the client-side surround-view video synthesis method; all implementation means in the above method embodiment apply to this controller embodiment, and the same technical effects can be achieved.
  • Referring to Figure 10, yet another embodiment of the present application provides a controller applied to a server, the controller including:
  • a fourth processing module 1001, configured to parse data packets received by signal from the shooting end to obtain video data;
  • a fifth processing module 1002, configured to predetermine, according to the shooting method, the first viewing angle range in which the video data is presented;
  • a sixth processing module 1003, configured to push the video data to the corresponding client when a video request is received.
  • Preferably, in the controller as described above, the fourth processing module 1001 includes:
  • a seventh sub-processing module, configured to decompress the data packets to obtain the video data;
  • an eighth sub-processing module, configured to automatically detect the color curve of the images in the video data and color-correct the portions of two adjacent frames whose color difference exceeds the first difference value; and/or
  • a ninth sub-processing module, configured to preload and analyze the surround angle of the video data and, when the picture difference between two adjacent frames exceeds the second difference value, generate one transition frame and insert it into the video data.
  • The embodiment of the server-side controller of this application is the apparatus corresponding to the above embodiment of the server-side surround-view video synthesis method; all implementation means in the above method embodiment apply to this controller embodiment, and the same technical effects can be achieved.
  • Another embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above client-side surround-view video synthesis method, or the steps of the above server-side surround-view video synthesis method, and can achieve the same technical effects.
  • Another embodiment of the present application provides an electronic device including a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the above client-side surround-view video synthesis method, or the steps of the above server-side surround-view video synthesis method, and can achieve the same technical effects.
  • Another embodiment of the present application provides a chip including a processor and a communication interface coupled to the processor, the processor being configured to run programs or instructions to implement the steps of the above client-side surround-view video synthesis method, or the steps of the above server-side surround-view video synthesis method, and can achieve the same technical effects.

Abstract

Provided are a surround-view-based video synthesis method, a controller and a storage medium. The method applied to a client includes: after receiving video data pushed by a server, parsing and presenting images according to a first viewing angle range predetermined in the video data (S101); after receiving a user's viewing-angle adjustment input, determining the user's adjusted second viewing angle range according to the input (S102); and performing adaptive frame interpolation according to the second viewing angle range to obtain an adaptive image that satisfies the second viewing angle range, and rendering based on the adaptive image (S103).

Description

Surround-view-based video synthesis method, controller and storage medium
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 202210651322.7, filed in China on June 9, 2022, the entire contents of which are incorporated herein by reference.
Technical field
This application relates to the technical field of video synthesis, and in particular to a surround-view-based video synthesis method, controller and storage medium.
Background
At present, most video synthesis for surround shooting adopts a tolerance-based splicing method; the synthesized video data is processed by a streaming server and a video codec, transmitted to the broadcast end for full decoding, and then presented to the user. In the related art, however, the synthesis processing for surround shooting takes a long time and the amount of synthesized data to be transmitted is large; some low-end and mid-range terminal devices are prone to problems such as the device or its processor overheating when using free viewing angles, which is not conducive to the universality and application of the free-viewing-angle function in the ultra-high-definition field.
Summary
The technical purpose of the embodiments of this application is to provide a surround-view-based video synthesis method, controller and storage medium, so as to solve the problems that current terminal equipment is prone to overheating, or processor overheating, when using free viewing angles, and that the free-viewing-angle function cannot be universally applied in the ultra-high-definition field.
An embodiment of this application provides a surround-view-based video synthesis method, applied to a client, the method including:
after receiving video data pushed by a server, parsing and presenting images according to a first viewing angle range predetermined in the video data;
after receiving a user's viewing-angle adjustment input, determining the user's adjusted second viewing angle range according to the input;
performing adaptive frame interpolation according to the second viewing angle range to obtain an adaptive image that satisfies the second viewing angle range, and rendering based on the adaptive image.
Optionally, in the video synthesis method as described above, determining the user's adjusted second viewing angle range according to the viewing-angle adjustment input after it is received includes:
when the user's first input on the play box is received, popping up at least one viewing angle dial in the play box;
adjusting the playback viewing angle range of the image according to the user's second input on the dial;
when the user's third input is received, determining the current playback viewing angle range as the second viewing angle range.
Optionally, in the video synthesis method as described above, performing adaptive frame interpolation according to the second viewing angle range to obtain an adaptive image that satisfies it includes:
determining the rotation angle and rotation direction of the viewing-angle change according to the second and first viewing angle ranges;
starting from the boundary corresponding to the rotation direction, traversing the image of each frame within the rotation angle range along that direction;
performing preset frame interpolation on images of adjacent frames to obtain the adaptive image.
Optionally, in the video synthesis method as described above, performing preset frame interpolation on images of adjacent frames to obtain the adaptive image includes:
mapping the image of each frame to a cylinder or sphere according to a preset first algorithm;
extracting the projected feature points of the image on the cylinder or sphere;
obtaining the distance difference of the projected feature points according to the correspondence between the projected feature points of two adjacent frames;
when the distance difference is less than a threshold, solving the homography based on the projected feature points and performing splicing to obtain the adaptive image;
when the distance difference is greater than or equal to the threshold, returning to the step of traversing, along the rotation direction and starting from the corresponding boundary angle, the image of each frame within the rotation angle range.
Optionally, in the video synthesis method as described above, after the step of rendering based on the adaptive image, the method further includes:
recording the second viewing angle range as the first viewing angle range;
when the user's viewing-angle adjustment input is received again, performing again the step of capturing the user's adjusted second viewing angle range according to the input.
Optionally, in the video synthesis method as described above, there is one viewing angle dial;
wherein a first rotation direction of the dial corresponds to a preset first viewing-angle rotation direction, and a first unit rotation angle on the dial corresponds to a first preset unit viewing-angle rotation angle.
Optionally, in the video synthesis method as described above, there are at least two viewing angle dials;
wherein a second rotation direction of the first dial corresponds to a preset second viewing-angle rotation direction, and a second unit rotation angle on the first dial corresponds to a second preset unit viewing-angle rotation angle;
a third rotation direction of the second dial corresponds to the second rotation direction, and a third unit rotation angle on the second dial corresponds to a third preset unit viewing-angle rotation angle, the third preset unit angle being smaller than the second preset unit angle;
or, the third rotation direction of the second dial corresponds to a preset third viewing-angle rotation direction, and the third unit rotation angle on the second dial corresponds to a third preset unit viewing-angle rotation angle, wherein the third viewing-angle rotation direction is perpendicular to the second viewing-angle rotation direction.
Another embodiment of this application further provides a surround-view-based video synthesis method, applied to a server, the method including:
after receiving a data packet transmitted by signal from the shooting end, parsing the data packet to obtain video data;
predetermining, according to the shooting method, the first viewing angle range in which the video data is presented;
when a video request is received, pushing the video data to the corresponding client.
Optionally, in the video synthesis method as described above, decompressing the data packet to obtain the video data after the packet is received from the shooting end includes:
decompressing the data packet to obtain the video data;
automatically detecting the color curve of the images in the video data, and color-correcting the portions of two adjacent frames whose color difference exceeds a first difference value.
Optionally, in the video synthesis method as described above, decompressing the data packet to obtain the video data after the packet is received from the shooting end includes:
decompressing the data packet to obtain the video data;
preloading and analyzing the surround angle of the video data, and, when the picture difference between two adjacent frames exceeds a second difference value, generating one transition frame and inserting it into the video data.
A further embodiment of this application provides a controller, applied to a client, the controller including:
a first processing module, configured to, after receiving video data pushed by a server, parse and present images according to the first viewing angle range predetermined in the video data;
a second processing module, configured to, after receiving a user's viewing-angle adjustment input, determine the user's adjusted second viewing angle range according to the input;
a third processing module, configured to perform adaptive frame interpolation according to the second viewing angle range, obtain an adaptive image that satisfies the second viewing angle range, and render based on the adaptive image.
Optionally, in the controller as described above, the second processing module includes:
a first sub-processing module, configured to pop up at least one viewing angle dial in the play box when the user's first input on the play box is received;
a second sub-processing module, configured to adjust the playback viewing angle range of the image according to the user's second input on the dial;
a third sub-processing module, configured to determine the current playback viewing angle range as the second viewing angle range when the user's third input is received.
Optionally, in the controller as described above, the third processing module includes:
a fourth sub-processing module, configured to determine the rotation angle and rotation direction of the viewing-angle change according to the second and first viewing angle ranges;
a fifth sub-processing module, configured to traverse, starting from the boundary corresponding to the rotation direction and along that direction, the image of each frame within the rotation angle range;
a sixth sub-processing module, configured to perform preset frame interpolation on images of adjacent frames to obtain the adaptive image.
Optionally, in the controller as described above, the sixth sub-processing module includes:
a first processing unit, configured to map the image of each frame to a cylinder or sphere according to a preset first algorithm;
a second processing unit, configured to extract the projected feature points of the image on the cylinder or sphere;
a third processing unit, configured to obtain the distance difference of the projected feature points according to the correspondence between the projected feature points of two adjacent frames;
a fourth processing unit, configured to solve the homography based on the projected feature points and perform splicing to obtain the adaptive image when the distance difference is less than a threshold;
a fifth processing unit, configured to, when the distance difference is greater than or equal to the threshold, return to the step of traversing, along the rotation direction and starting from the corresponding boundary angle, the image of each frame within the rotation angle range.
Optionally, the controller as described above further includes:
a seventh processing module, configured to record the second viewing angle range as the first viewing angle range;
an eighth processing module, configured to, when the user's viewing-angle adjustment input is received again, perform again the step of capturing the user's adjusted second viewing angle range according to the input.
Optionally, in the controller as described above, there is one viewing angle dial;
wherein a first rotation direction of the dial corresponds to a preset first viewing-angle rotation direction, and a first unit rotation angle on the dial corresponds to a first preset unit viewing-angle rotation angle.
Optionally, in the controller as described above, there are at least two viewing angle dials;
wherein a second rotation direction of the first dial corresponds to a preset second viewing-angle rotation direction, and a second unit rotation angle on the first dial corresponds to a second preset unit viewing-angle rotation angle;
a third rotation direction of the second dial corresponds to the second rotation direction, and a third unit rotation angle on the second dial corresponds to a third preset unit viewing-angle rotation angle, the third preset unit angle being smaller than the second preset unit angle;
or, the third rotation direction of the second dial corresponds to a preset third viewing-angle rotation direction, and the third unit rotation angle on the second dial corresponds to a third preset unit viewing-angle rotation angle, wherein the third viewing-angle rotation direction is perpendicular to the second viewing-angle rotation direction.
Yet another embodiment of this application provides a controller, applied to a server, the controller including:
a fourth processing module, configured to parse a data packet received by signal from the shooting end to obtain video data;
a fifth processing module, configured to predetermine, according to the shooting method, the first viewing angle range in which the video data is presented;
a sixth processing module, configured to push the video data to the corresponding client when a video request is received.
Optionally, in the controller as described above, the fourth processing module includes:
a seventh sub-processing module, configured to decompress the data packet to obtain the video data;
an eighth sub-processing module, configured to automatically detect the color curve of the images in the video data and color-correct the portions of two adjacent frames whose color difference exceeds the first difference value;
and/or, a ninth sub-processing module, configured to preload and analyze the surround angle of the video data and, when the picture difference between two adjacent frames exceeds the second difference value, generate one transition frame and insert it into the video data.
Another embodiment of this application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above client-side surround-view video synthesis method, or the steps of the above server-side surround-view video synthesis method.
Another embodiment of this application provides an electronic device including a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the above client-side surround-view video synthesis method, or the steps of the above server-side surround-view video synthesis method.
Another embodiment of this application provides a chip including a processor and a communication interface coupled to the processor, the processor being configured to run programs or instructions to implement the steps of the above client-side surround-view video synthesis method, or the steps of the above server-side surround-view video synthesis method.
The surround-view-based video synthesis method, controller and storage medium provided by the embodiments of this application have at least the following beneficial effects:
When the client presents images from the video data, it parses and presents only the first viewing angle range predetermined in the video data and treats the other data as redundant, thereby reducing computation, improving fluency when presenting videos or image sequences, and preventing the terminal device or its processor from overheating. When the user adjusts the viewing angle, the adjusted second viewing angle range the user wants is determined from the viewing-angle adjustment input, and only the images within that range undergo adaptive frame interpolation to obtain an adaptive image satisfying the second viewing angle range for presentation. This likewise reduces computation, helps avoid stuttering caused by overly long intervals between two frames, ensures smooth image presentation, and supports the universal applicability and application of the free-viewing-angle function in the ultra-high-definition field.
Brief description of the drawings
Figure 1 is the first schematic flowchart of the client-side surround-view video synthesis method of this application;
Figure 2 is a schematic diagram of a change in the viewing angle range;
Figure 3 is the second schematic flowchart of the client-side surround-view video synthesis method of this application;
Figure 4 is the third schematic flowchart of the client-side surround-view video synthesis method of this application;
Figure 5 is the fourth schematic flowchart of the client-side surround-view video synthesis method of this application;
Figure 6 is the first schematic diagram of the client-side viewing angle dial of this application;
Figure 7 is the second schematic diagram of the client-side viewing angle dial of this application;
Figure 8 is the first schematic flowchart of the server-side surround-view video synthesis method of this application;
Figure 9 is a schematic structural diagram of the client-side controller of this application;
Figure 10 is a schematic structural diagram of the server-side controller of this application.
Detailed description
To make the technical problems to be solved, the technical solutions and the advantages of this application clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments. In the following description, specific details such as particular configurations and components are provided only to help a comprehensive understanding of the embodiments of this application. It should therefore be clear to those skilled in the art that various changes and modifications can be made to the embodiments described here without departing from the scope and spirit of this application. In addition, descriptions of known functions and constructions are omitted for clarity and conciseness.
It should be understood that references throughout the specification to "one embodiment" or "an embodiment" mean that a particular feature, structure or characteristic related to the embodiment is included in at least one embodiment of this application. Thus, occurrences of "in one embodiment" or "in an embodiment" throughout the specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
In the various embodiments of this application, it should be understood that the magnitude of the sequence numbers of the following processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following associated objects. In the embodiments provided in this application, it should be understood that "B corresponding to A" means that B is associated with A and can be determined according to A; however, determining B according to A does not mean determining B solely according to A, and B may also be determined according to A and/or other information.
Referring to Figure 1, an embodiment of this application provides a surround-view-based video synthesis method, applied to a client, including:
Step S101: after receiving video data pushed by the server, parse and present images according to the first viewing angle range predetermined in the video data;
Step S102: after receiving the user's viewing-angle adjustment input, determine the user's adjusted second viewing angle range according to the input;
Step S103: perform adaptive frame interpolation according to the second viewing angle range, obtain an adaptive image that satisfies the second viewing angle range, and render based on the adaptive image.
In this embodiment of the application, a surround-view video synthesis method applied to the client is provided. After receiving the required video data pushed by the server, the client parses and presents images according to the first viewing angle range predetermined in the video data, while video data for other viewing angles outside the first range is not parsed but first treated as redundant data, thereby reducing computation, improving fluency when presenting videos or image sequences, and preventing the terminal device or its processor from overheating.
While the client is presenting images within the first viewing angle range, if the user's viewing-angle adjustment input is received, it is determined that the user is using the client's free-viewing-angle function. To present the image for the corresponding viewing angle, the adjusted second viewing angle range the user wants is first determined from the input, and the images within the second range then undergo adaptive frame interpolation to obtain an adaptive image that satisfies it for presentation. Only the images within the second range need processing, while video data for other viewing angles remains redundant, which reduces computation; and the adaptive frame interpolation helps avoid stuttering caused by overly long intervals between two frames, ensuring smooth presentation and supporting the universal applicability and application of the free-viewing-angle function in the ultra-high-definition field.
Referring to Figure 2, in one embodiment the viewing angle of the first viewing angle range is 2θ, where θ is a positive value such as 30 degrees or 60 degrees. Taking the center of the first range as 0 degrees and considering an offset in one direction only, the first viewing angle range can be written as [-θ, θ]; when the second range is obtained by rotating the first range by φ along the first direction, the second viewing angle range is [-(θ+φ), θ]. Both the first and second viewing angle ranges already include the playback viewing angle of the terminal device.
It should be noted that when the user's viewing-angle adjustment input lasts a long time, the subsequent steps of determining the user's adjusted second viewing angle range, performing adaptive frame interpolation according to it, obtaining the adaptive image and rendering it can be executed in batches, with each batch not exceeding a preset unit time.
Referring to Figure 3, optionally, in the video synthesis method as described above, determining the user's adjusted second viewing angle range according to the viewing-angle adjustment input includes:
Step S301: when the user's first input on the play box is received, pop up at least one viewing angle dial in the play box;
Step S302: adjust the playback viewing angle range of the image according to the user's second input on the dial;
Step S303: when the user's third input is received, determine the current playback viewing angle range as the second viewing angle range.
In a specific embodiment of this application, when the user's first input on the play box is received, it can be determined that the user needs to adjust the viewing angle; at least one viewing angle dial then pops up in the play box so that the user can adjust the viewing angle through it. Optionally, the first input includes, but is not limited to, clicking, repeatedly clicking or long-pressing a first preset position on the play box, or repeatedly clicking or long-pressing any position on the play box. Angle markings can be placed on the dial so that users can choose a suitable offset angle as needed.
Further, according to the user's second input on the dial, the playback viewing angle range of the image can be adjusted; the adjustment includes, but is not limited to, turning the range left and right and/or up and down, and the second input includes, but is not limited to, turning or clicking the dial.
When the user's third input is received, the playback viewing angle range currently selected by the user is determined as the second viewing angle range, where the third input includes, but is not limited to, the user taking no action within a preset time, or clicking, repeatedly clicking or long-pressing a second preset position on or within the play box; the second preset position may be the same as the first preset position, or may be located on the dial.
Referring to Figure 4, optionally, in the video synthesis method as described above, performing adaptive frame interpolation according to the second viewing angle range to obtain an adaptive image that satisfies it includes:
Step S401: determine the rotation angle and rotation direction of the viewing-angle change according to the second and first viewing angle ranges;
Step S402: starting from the boundary corresponding to the rotation direction, traverse the image of each frame within the rotation angle range along that direction;
Step S403: perform preset frame interpolation on images of adjacent frames to obtain the adaptive image.
In a further embodiment of this application, when performing adaptive frame interpolation according to the second viewing angle range, the rotation angle and rotation direction of the viewing-angle change are preferably determined from the adjusted second range and the pre-adjustment first range. Then, starting from the boundary corresponding to the rotation direction, the image of each frame within the rotation angle range is traversed along that direction; that is, the images of each frame within the newly required (not yet presented) angle range are obtained from the data that is not currently presented. This can also be understood as follows: suppose the presentable image for the current frame covers 360 degrees, and the currently presented image is a 60-degree view centered at 0 degrees, i.e. [-30°, 30°]; if the viewing angle is then turned 30° to the left, the images within [-60°, -30°) must be added to the existing image for presentation, so those images are first obtained from the redundant data as the image of each frame. Preset frame interpolation is then performed on the images of adjacent frames to obtain the adaptive image to be presented.
Referring to Figure 5, further, in the video synthesis method as described above, performing preset frame interpolation on images of adjacent frames to obtain the adaptive image includes:
Step S501: map the image of each frame to a cylinder or sphere according to the preset first algorithm;
Step S502: extract the projected feature points of the image on the cylinder or sphere;
Step S503: obtain the distance difference of the projected feature points according to the correspondence between the projected feature points of two adjacent frames;
Step S504: when the distance difference is less than a threshold, solve the homography based on the projected feature points and perform splicing to obtain the adaptive image;
Step S505: when the distance difference is greater than or equal to the threshold, return to the step of traversing, along the rotation direction and starting from the corresponding boundary angle, the image of each frame within the rotation angle range.
In another embodiment of this application, the step of performing preset frame interpolation on images of adjacent frames to obtain the adaptive image to be presented is disclosed in detail. First, according to a preset first algorithm (for example a deformation algorithm such as warp, which uses transformation matrices to map images), the images obtained above are mapped to a cylinder or sphere; when the viewing angle only needs to rotate in one direction, the images may be mapped to either a cylinder or a sphere, and when it needs to rotate in two mutually perpendicular directions, they can only be mapped to a sphere.
Next, the projected feature points of each frame's image on the cylinder or sphere are extracted. There may be multiple projected feature points, each recorded as n ∈ N_i, where i denotes the i-th frame image and N_i denotes the number of projected feature points on the i-th frame image. Because the number of feature points on each frame, or on adjacent frames, may fluctuate within a small range, the number of projected feature points per frame is not necessarily equal to the corresponding total number of feature points. The projected feature points can be obtained via the scale-invariant feature transform (SIFT), a local feature descriptor used in image processing that is scale-invariant and can detect key points in an image.
Further, the correspondence between the projected feature points on the cylinder or sphere in the two adjacent frames is computed, and the distance difference of the corresponding feature points is obtained, where N is the total number of projected feature points.
When the distance difference is less than a threshold, the two adjacent frames are close to each other and can transition smoothly during playback; the homography can then be solved from the projected feature points (that is, the image mapped onto the cylinder or sphere is restored) and spliced with the existing images to obtain the corresponding adaptive image.
When the distance difference is greater than or equal to the threshold, the two adjacent frames are determined to be far apart; without frame interpolation, problems such as stuttering or unsmooth playback would arise and affect the viewing experience. A frame is therefore interpolated between the two, and the process returns to reacquire the images so that the final images are presented smoothly.
It should be noted that the threshold can be set manually or computed from the fluency requirements of the device terminal; by changing the threshold, and in particular by lowering it, a clearer and smoother viewing effect can be achieved.
Preferably, after the step of rendering based on the adaptive image, the method further includes:
recording the second viewing angle range as the first viewing angle range;
when the user's viewing-angle adjustment input is received again, performing again the step of capturing the user's adjusted second viewing angle range according to the input.
In another embodiment of this application, a further video synthesis method is provided: after the user has adjusted the viewing angle once, the second viewing angle range at that point is recorded as the first viewing angle range, so that when the user needs to adjust the viewing angle again, the adjustment can be made on this basis, avoiding repeated computation that would result from re-adjusting from the original first viewing angle range.
Optionally, in the video synthesis method as described above, there is one viewing angle dial; a first rotation direction of the dial corresponds to a preset first viewing-angle rotation direction, and a first unit rotation angle on the dial corresponds to a first preset unit viewing-angle rotation angle.
Referring to Figures 6 and 7, optionally, in the video synthesis method as described above, there are at least two viewing angle dials; a second rotation direction of the first dial corresponds to a preset second viewing-angle rotation direction, and a second unit rotation angle on the first dial corresponds to a second preset unit viewing-angle rotation angle; a third rotation direction of the second dial corresponds to the second rotation direction, and a third unit rotation angle on the second dial corresponds to a third preset unit viewing-angle rotation angle, the third preset unit angle being smaller than the second preset unit angle; or, the third rotation direction of the second dial corresponds to a preset third viewing-angle rotation direction, and the third unit rotation angle on the second dial corresponds to a third preset unit viewing-angle rotation angle, wherein the third viewing-angle rotation direction is perpendicular to the second viewing-angle rotation direction.
When there are at least two dials, the second dial (shown as the horizontal dial in Figure 6) can be a fine-grained complement to the first dial (shown as the vertical dial in Figure 6), allowing the user to make finer viewing-angle adjustments; alternatively, its rotation direction can be perpendicular to that of the first dial to enable 360-degree rotation on the sphere (shown as the horizontal dial in Figure 7).
Referring to Figure 8, another embodiment of this application further provides a surround-view-based video synthesis method, applied to a server, including:
Step S801: after receiving a data packet transmitted by signal from the shooting end, parse the data packet to obtain video data;
Step S802: predetermine, according to the shooting method, the first viewing angle range in which the video data is presented;
Step S803: when a video request is received, push the video data to the corresponding client.
In this embodiment, a video synthesis method applied to the server is provided. After receiving the data packet transmitted by signal from the shooting end, the server parses it to obtain the complete image sequence or video as the video data, and, based on the shooting method, predetermines the first viewing angle range in which the video data is presented. When the client requests and receives the video data, it can thus give priority to parsing and presenting the first viewing angle range, while video data for other viewing angles is not parsed but first treated as redundant data, thereby reducing computation, improving fluency when presenting videos or image sequences, and preventing the terminal device or its processor from overheating.
Preferably, in the video synthesis method as described above, decompressing the data packet to obtain the video data after the packet is received from the shooting end includes:
decompressing the data packet to obtain the video data;
automatically detecting the color curve of the images in the video data, and color-correcting the portions of two adjacent frames whose color difference exceeds a first difference value;
and/or preloading and analyzing the surround angle of the video data, and, when the picture difference between two adjacent frames exceeds a second difference value, generating one transition frame and inserting it into the video data.
In another embodiment of this application, after the data packet is received it is decompressed to obtain the video data. By detecting and correcting the colors in the images, the exposure of individual over-exposed frames in the sequence can be automatically lowered, partially eliminating or mitigating adverse effects, such as picture jitter or flicker, that objective factors in the "live shooting" and "signal transmission" stages (ambient lighting, shutter arrays, signal packet loss, and the like) inflict on the "broadcast presentation" stage, and reducing the interference of the raw data with the computations of subsequent steps. The surround angle of the original images in the surround frame sequence can also be preloaded and analyzed to compute the picture difference between two adjacent frames; if the difference is too large, a transition frame is automatically generated and inserted so that the original angles play back smoothly. This processing of the raw video data helps the client quickly read the optimal raw data during presentation, and improves the efficiency of "recording, editing and playback" when other production systems handle the surround video.
Referring to Figure 9, a further embodiment of this application provides a controller, applied to a client, the controller including:
a first processing module 901, configured to, after receiving video data pushed by the server, parse and present images according to the first viewing angle range predetermined in the video data;
a second processing module 902, configured to, after receiving the user's viewing-angle adjustment input, determine the user's adjusted second viewing angle range according to the input;
a third processing module 903, configured to perform adaptive frame interpolation according to the second viewing angle range, obtain an adaptive image that satisfies the second viewing angle range, and render based on the adaptive image.
Optionally, in the controller as described above, the second processing module 902 includes:
a first sub-processing module, configured to pop up at least one viewing angle dial in the play box when the user's first input on the play box is received;
a second sub-processing module, configured to adjust the playback viewing angle range of the image according to the user's second input on the dial;
a third sub-processing module, configured to determine the current playback viewing angle range as the second viewing angle range when the user's third input is received.
Optionally, in the controller as described above, the third processing module 903 includes:
a fourth sub-processing module, configured to determine the rotation angle and rotation direction of the viewing-angle change according to the second and first viewing angle ranges;
a fifth sub-processing module, configured to traverse, starting from the boundary corresponding to the rotation direction and along that direction, the image of each frame within the rotation angle range;
a sixth sub-processing module, configured to perform preset frame interpolation on images of adjacent frames to obtain the adaptive image.
Optionally, in the controller as described above, the sixth sub-processing module includes:
a first processing unit, configured to map the image of each frame to a cylinder or sphere according to a preset first algorithm;
a second processing unit, configured to extract the projected feature points of the image on the cylinder or sphere;
a third processing unit, configured to obtain the distance difference of the projected feature points according to the correspondence between the projected feature points of two adjacent frames;
a fourth processing unit, configured to solve the homography based on the projected feature points and perform splicing to obtain the adaptive image when the distance difference is less than a threshold;
a fifth processing unit, configured to, when the distance difference is greater than or equal to the threshold, return to the step of traversing, along the rotation direction and starting from the corresponding boundary angle, the image of each frame within the rotation angle range.
Optionally, the controller as described above further includes:
a seventh processing module, configured to record the second viewing angle range as the first viewing angle range;
an eighth processing module, configured to, when the user's viewing-angle adjustment input is received again, perform again the step of capturing the user's adjusted second viewing angle range according to the input.
Optionally, in the controller as described above, there is one viewing angle dial; a first rotation direction of the dial corresponds to a preset first viewing-angle rotation direction, and a first unit rotation angle on the dial corresponds to a first preset unit viewing-angle rotation angle.
Optionally, in the controller as described above, there are at least two viewing angle dials; a second rotation direction of the first dial corresponds to a preset second viewing-angle rotation direction, and a second unit rotation angle on the first dial corresponds to a second preset unit viewing-angle rotation angle; a third rotation direction of the second dial corresponds to the second rotation direction, and a third unit rotation angle on the second dial corresponds to a third preset unit viewing-angle rotation angle, the third preset unit angle being smaller than the second preset unit angle; or, the third rotation direction of the second dial corresponds to a preset third viewing-angle rotation direction, and the third unit rotation angle on the second dial corresponds to a third preset unit viewing-angle rotation angle, wherein the third viewing-angle rotation direction is perpendicular to the second viewing-angle rotation direction.
The embodiment of the client-side controller of this application is the apparatus corresponding to the above embodiment of the client-side surround-view video synthesis method; all implementation means in the above method embodiment apply to this controller embodiment, and the same technical effects can be achieved.
Referring to Figure 10, yet another embodiment of this application provides a controller, applied to a server, the controller including:
a fourth processing module 1001, configured to parse a data packet received by signal from the shooting end to obtain video data;
a fifth processing module 1002, configured to predetermine, according to the shooting method, the first viewing angle range in which the video data is presented;
a sixth processing module 1003, configured to push the video data to the corresponding client when a video request is received.
Preferably, in the controller as described above, the fourth processing module 1001 includes:
a seventh sub-processing module, configured to decompress the data packet to obtain the video data;
an eighth sub-processing module, configured to automatically detect the color curve of the images in the video data and color-correct the portions of two adjacent frames whose color difference exceeds the first difference value;
and/or, a ninth sub-processing module, configured to preload and analyze the surround angle of the video data and, when the picture difference between two adjacent frames exceeds the second difference value, generate one transition frame and insert it into the video data.
The embodiment of the server-side controller of this application is the apparatus corresponding to the above embodiment of the server-side surround-view video synthesis method; all implementation means in the above method embodiment apply to this controller embodiment, and the same technical effects can be achieved.
Another embodiment of this application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above client-side surround-view video synthesis method, or the steps of the above server-side surround-view video synthesis method, and can achieve the same technical effects.
Another embodiment of this application provides an electronic device including a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the above client-side surround-view video synthesis method, or the steps of the above server-side surround-view video synthesis method, and can achieve the same technical effects.
Another embodiment of this application provides a chip including a processor and a communication interface coupled to the processor, the processor being configured to run programs or instructions to implement the steps of the above client-side surround-view video synthesis method, or the steps of the above server-side surround-view video synthesis method, and can achieve the same technical effects.
In addition, reference numerals and/or letters may be repeated in different examples of this application. Such repetition is for simplicity and clarity and does not in itself indicate a relationship between the various embodiments and/or configurations discussed.
It should also be noted that, herein, relational terms such as first and second are used solely to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion.
The above are optional embodiments of this application. It should be pointed out that those of ordinary skill in the art can make several improvements and refinements without departing from the principles described in this application, and such improvements and refinements shall also fall within the scope of protection of this application.

Claims (15)

  1. A surround-view-based video synthesis method, applied to a client, the method comprising:
    after receiving video data pushed by a server, parsing and presenting images according to a first viewing-angle range predetermined in the video data;
    upon receiving a viewing-angle adjustment input from a user, determining, according to the viewing-angle adjustment input, a user-adjusted second viewing-angle range;
    performing adaptive frame interpolation according to the second viewing-angle range to obtain an adaptive image satisfying the second viewing-angle range, and presenting according to the adaptive image.
  2. The video synthesis method according to claim 1, wherein the determining, upon receiving the viewing-angle adjustment input from the user and according to the viewing-angle adjustment input, the user-adjusted second viewing-angle range comprises:
    upon receiving the user's first input on a playback window, popping up at least one viewing-angle dial inside the playback window;
    adjusting a playback viewing-angle range of the image according to the user's second input on the viewing-angle dial;
    upon receiving the user's third input, determining the current playback viewing-angle range as the second viewing-angle range.
  3. The video synthesis method according to claim 1, wherein the performing adaptive frame interpolation according to the second viewing-angle range to obtain the adaptive image satisfying the second viewing-angle range comprises:
    determining a rotation angle and a rotation direction of the viewing-angle change according to the second viewing-angle range and the first viewing-angle range;
    traversing, starting from the boundary corresponding to the rotation direction and following the rotation direction, every frame of image within the rotation angle range;
    applying preset frame interpolation to the images of adjacent frames to obtain the adaptive image.
  4. The video synthesis method according to claim 3, wherein the applying preset frame interpolation to the images of adjacent frames to obtain the adaptive image comprises:
    mapping each frame of the image onto a cylinder or sphere according to a preset first algorithm;
    extracting projected feature points of the image on the cylinder or sphere;
    obtaining a distance difference of the projected feature points according to the correspondence between the projected feature points of two adjacent frames;
    when the distance difference is less than a threshold, solving a homography from the projected feature points and performing stitching to obtain the adaptive image;
    when the distance difference is greater than or equal to the threshold, returning to the step of traversing, from the boundary angle corresponding to the rotation direction, every frame of image within the rotation angle range.
  5. The video synthesis method according to claim 1, wherein, after the step of presenting according to the adaptive image, the method further comprises:
    recording the second viewing-angle range as the first viewing-angle range;
    when a viewing-angle adjustment input from the user is received again, executing again the step of capturing, according to the viewing-angle adjustment input, the user-adjusted second viewing-angle range.
  6. The video synthesis method according to claim 2, wherein there is one viewing-angle dial;
    a first rotation direction of the viewing-angle dial corresponds to a preset first viewing-angle rotation direction, and a first unit rotation angle on the viewing-angle dial corresponds to a first preset unit viewing-angle rotation angle.
  7. The video synthesis method according to claim 2, wherein there are at least two viewing-angle dials, the at least two viewing-angle dials comprising a first viewing-angle dial and a second viewing-angle dial;
    wherein a second rotation direction of the first viewing-angle dial corresponds to a preset second viewing-angle rotation direction, and a second unit rotation angle on the first viewing-angle dial corresponds to a second preset unit viewing-angle rotation angle;
    a third rotation direction of the second viewing-angle dial corresponds to the second rotation direction, and a third unit rotation angle on the second viewing-angle dial corresponds to a third preset unit viewing-angle rotation angle, wherein the third preset unit viewing-angle rotation angle is smaller than the second preset unit viewing-angle rotation angle;
    or, the third rotation direction of the second viewing-angle dial corresponds to a preset third viewing-angle rotation direction, and the third unit rotation angle on the second viewing-angle dial corresponds to a third preset unit viewing-angle rotation angle, wherein the third viewing-angle rotation direction is perpendicular to the second viewing-angle rotation direction.
  8. A surround-view-based video synthesis method, applied to a server, the method comprising:
    after receiving a data packet transmitted by a capture end over a signal link, parsing the data packet to obtain video data;
    predetermining, according to a shooting method, a first viewing-angle range in which the video data is to be presented;
    upon receiving a video request, pushing the video data to the corresponding client.
  9. The video synthesis method according to claim 8, wherein the parsing the data packet after receiving it from the capture end over a signal link to obtain the video data comprises:
    decompressing the data packet to obtain the video data;
    automatically detecting color curves of images in the video data, and applying color correction to portions of two adjacent frames whose color difference exceeds a first difference value.
  10. The video synthesis method according to claim 8, wherein the parsing the data packet after receiving it from the capture end over a signal link to obtain the video data comprises:
    decompressing the data packet to obtain the video data;
    performing a preload analysis of a surround angle of the video data, and, when a picture difference between the images of two adjacent frames exceeds a second difference value, generating one transition frame and inserting it into the video data.
  11. A controller, applied to a client, the controller comprising:
    a first processing module, configured to, after receiving video data pushed by a server, parse and present images according to a first viewing-angle range predetermined in the video data;
    a second processing module, configured to, upon receiving a viewing-angle adjustment input from a user, determine, according to the viewing-angle adjustment input, a user-adjusted second viewing-angle range;
    a third processing module, configured to perform adaptive frame interpolation according to the second viewing-angle range to obtain an adaptive image satisfying the second viewing-angle range, and to present according to the adaptive image.
  12. A controller, applied to a server, the controller comprising:
    a fourth processing module, configured to parse a data packet after receiving it from a capture end over a signal link, obtaining video data;
    a fifth processing module, configured to predetermine, according to a shooting method, a first viewing-angle range in which the video data is to be presented;
    a sixth processing module, configured to push the video data to the corresponding client upon receiving a video request.
  13. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the client-side surround-view-based video synthesis method according to any one of claims 1 to 7, or implements the steps of the server-side surround-view-based video synthesis method according to any one of claims 8 to 10.
  14. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and runnable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the client-side surround-view-based video synthesis method according to any one of claims 1 to 7, or implement the steps of the server-side surround-view-based video synthesis method according to any one of claims 8 to 10.
  15. A chip comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is configured to run programs or instructions to implement the steps of the client-side surround-view-based video synthesis method according to any one of claims 1 to 7, or the steps of the server-side surround-view-based video synthesis method according to any one of claims 8 to 10.
PCT/CN2023/099344 2022-06-09 2023-06-09 Surround-view-based video synthesis method, controller, and storage medium WO2023237095A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210651322.7 2022-06-09
CN202210651322.7A CN115209181B (zh) Surround-view-based video synthesis method, controller, and storage medium

Publications (1)

Publication Number Publication Date
WO2023237095A1 (zh)

Family

ID=83576712

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/099344 WO2023237095A1 (zh) Surround-view-based video synthesis method, controller, and storage medium

Country Status (2)

Country Link
CN (1) CN115209181B (zh)
WO (1) WO2023237095A1 (zh)

Also Published As

Publication number Publication date
CN115209181A (zh) 2022-10-18
CN115209181B (zh) 2024-03-22

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23819258

Country of ref document: EP

Kind code of ref document: A1