WO2021139728A1 - Panoramic video processing method, apparatus, device, and storage medium - Google Patents

Panoramic video processing method, apparatus, device, and storage medium

Info

Publication number
WO2021139728A1
Authority
WO
WIPO (PCT)
Prior art keywords
point
interest
target
video
tracking
Prior art date
Application number
PCT/CN2021/070675
Other languages
English (en)
French (fr)
Inventor
李皓宇
姜文杰
陈聪
张伟俊
Original Assignee
影石创新科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 影石创新科技股份有限公司
Publication of WO2021139728A1


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/698 - Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44016 - Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 - End-user applications
    • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/4728 - End-user interface for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/695 - Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Definitions

  • The invention belongs to the field of computer technology, and in particular relates to a panoramic video processing method, apparatus, device, and storage medium.
  • A panoramic video is a dynamic video captured by a panoramic camera that contains 360-degree omnidirectional picture content. It turns static panoramic pictures into dynamic video images, and users can freely watch the dynamic video within the shooting angle range of the panoramic camera.
  • When a panoramic video is played, the 360-degree picture content cannot be completely displayed on the playback device at one time.
  • As the observer in front of the playback device, the user needs to select a suitable observation angle, which is the screen playback viewing angle of the current video.
  • The user can select the screen playback viewing angle in the following ways: in the first way, the user slides on the touch screen to select the viewing angle, and operates on the timeline of the panoramic video to return to an earlier moment and select another viewing angle; in the second way, the built-in gyroscope sensor of the playback device is matched with the user's body posture to find the corresponding viewing angle.
  • The embodiments of the present invention provide a panoramic video processing method, apparatus, device, and storage medium, aiming to solve the problem that the prior art cannot provide an effective panoramic video processing method, resulting in a poor panoramic video playback effect and a poor user experience.
  • In one aspect, an embodiment of the present invention provides a panoramic video processing method. The method includes the following steps: acquiring a panoramic video; tracking an interest point target in the panoramic video to obtain corresponding tracking information; determining, according to the tracking information, a screen playback viewing angle corresponding to the interest point target; and generating, according to the screen playback viewing angle corresponding to the interest point target and the panoramic video, a tracking video corresponding to the interest point target.
  • In another aspect, an embodiment of the present invention provides a panoramic video processing apparatus. The apparatus includes:
  • a panoramic video acquisition unit, configured to acquire a panoramic video;
  • an interest point tracking unit, configured to track an interest point target in the panoramic video to obtain corresponding tracking information;
  • a playback viewing angle determining unit, configured to determine, according to the tracking information, a screen playback viewing angle corresponding to the interest point target; and
  • an interest point video generating unit, configured to generate, according to the screen playback viewing angle corresponding to the interest point target and the panoramic video, a tracking video corresponding to the interest point target.
  • In another aspect, an embodiment of the present invention also provides a computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the steps of the above panoramic video processing method are implemented.
  • In another aspect, an embodiment of the present invention also provides a computer-readable storage medium that stores a computer program; when the computer program is executed by a processor, the steps of the above panoramic video processing method are implemented.
  • The embodiments of the present invention track the interest point targets in a panoramic video, determine the screen playback viewing angle corresponding to each interest point target according to its tracking information, and generate the tracking video corresponding to each target according to that viewing angle and the panoramic video. By watching the tracking videos corresponding to the different interest point targets, the user can fully understand the video content of every interest point target in the panoramic video without having to manually adjust the viewing angle of the panoramic video, which effectively improves the playback effect and user experience of panoramic video.
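To make the overall flow concrete, the following is a minimal Python sketch of the four-step pipeline summarized above (acquire, track, determine viewing angles, generate tracking videos). The names `Track`, `track_targets`, `position_to_view`, and `render_clip` are illustrative placeholders, not identifiers from the patent; concrete sketches of the individual steps appear alongside the detailed description below.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class Track:
    """Tracking information for one interest point target (output of step S102)."""
    target_id: int
    # frame index -> (x, y) image position of the target in the panoramic frame
    positions: Dict[int, Tuple[float, float]] = field(default_factory=dict)

def process_panoramic_video(
    frames: List,                                   # decoded panoramic frames (step S101)
    track_targets: Callable[[List], List[Track]],   # step S102: detect + track
    position_to_view: Callable,                     # step S103: image position -> viewing angle
    render_clip: Callable,                          # step S104: frames + angles -> tracking video
) -> Dict[int, object]:
    """Produce one tracking video per interest point target, in shooting-time order."""
    videos = {}
    for track in track_targets(frames):
        view_angles = {idx: position_to_view(pos, frames[idx].shape)
                       for idx, pos in sorted(track.positions.items())}
        videos[track.target_id] = render_clip(frames, view_angles)
    return videos
```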
  • FIG. 1 is an implementation flowchart of the panoramic video processing method provided by Embodiment 1 of the present invention;
  • FIG. 2 is an implementation flowchart of the panoramic video processing method provided by Embodiment 2 of the present invention;
  • FIG. 3 is an implementation flowchart of the panoramic video processing method provided by Embodiment 3 of the present invention;
  • FIG. 4 is a schematic structural diagram of the panoramic video processing apparatus provided by Embodiment 4 of the present invention;
  • FIG. 5 is a schematic structural diagram of the panoramic video processing apparatus provided by Embodiment 5 of the present invention;
  • FIG. 6 is a schematic diagram of a preferred structure of the panoramic video processing apparatus provided by Embodiment 5 of the present invention;
  • FIG. 7 is a schematic structural diagram of the panoramic video processing apparatus provided by Embodiment 6 of the present invention;
  • FIG. 8 is a schematic structural diagram of the panoramic video processing apparatus provided by Embodiment 6 of the present invention; and
  • FIG. 9 is a schematic structural diagram of the computer device provided by Embodiment 7 of the present invention.
  • FIG. 1 shows the implementation flow of the panoramic video processing method provided by Embodiment 1 of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown, which are described in detail as follows:
  • In step S101, a panoramic video is acquired.
  • The embodiments of the present invention are applicable to video processing systems or platforms that process panoramic videos.
  • When a panoramic video is produced, a group of cameras is usually used to simultaneously shoot a 360-degree picture around the shooting point, and the images are then stitched together in post-production to obtain the panoramic video. Therefore, a panoramic video that has already been stitched in post-production can be acquired directly, or the dynamic videos shot by the individual cameras before stitching can be acquired. If the unstitched dynamic videos from the individual cameras are acquired, the image stitching can be performed first; alternatively, in the subsequent step of tracking the interest point targets in the panoramic video, the video images captured by all cameras at the same moment can be taken each time and target tracking performed on them simultaneously.
  • In step S102, the interest point targets in the panoramic video are tracked to obtain corresponding tracking information.
  • Different types of interest point targets are set in advance; for example, an interest point target can be a person, a vehicle, an animal, a robot, and so on. Recognition and tracking models for the different interest point targets are trained in advance to recognize the different interest point targets in each video frame of the panoramic video and to track the recognized targets, obtaining their tracking information. The recognition of interest point targets and the training of the tracking models can use existing target recognition and tracking algorithms, which are not restricted here.
  • The tracking information of an interest point target includes the image position of the target in each video frame.
  • In step S103, the screen playback viewing angle corresponding to the interest point target is determined according to the tracking information.
  • Once the tracking information is obtained, the image position of the interest point target in each video frame of the panoramic video is known. Of course, if the target leaves the scene monitored by the panoramic camera, it will not appear in the video images captured during the period in which it is away.
  • According to the image position of the interest point target in the corresponding video frames, the screen playback viewing angle corresponding to the target can be determined, that is, the playback viewing angle of each video frame in which the target appears. This ensures that the target always appears on the playback screen when the video is played.
  • In step S104, a tracking video corresponding to the interest point target is generated according to the screen playback viewing angle corresponding to the target and the panoramic video.
  • After the screen playback viewing angle corresponding to the target is determined, that is, the playback viewing angle of each video frame in which the target appears, each such frame is obtained from the panoramic video, its playback viewing angle is set, and these frames are combined in shooting-time order into the tracking video corresponding to the target.
  • When multiple different interest point targets are recognized in the panoramic video, a tracking video corresponding to each interest point target can be generated.
  • Preferably, the number of tracking videos corresponding to each tracked interest point target is at least one, so that the dynamic video content of a single interest point target is individually presented through one or more tracking videos.
  • Further preferably, the video frames of the interest point target between its first appearance time and first disappearance time in the panoramic video are acquired, and these frames form the first tracking video of the target; the frames between the second appearance time and the second disappearance time form the second tracking video, and so on, so that multiple tracking videos of a single target can be generated and its dynamic video content individually presented through one or more tracking videos.
  • The first appearance time of an interest point target in the panoramic video is the time at which the target first appears in the panoramic video, and the first disappearance time is the time at which the target, after its first appearance, disappears from the panoramic video for longer than a preset duration. The second appearance time, second disappearance time, third appearance time, third disappearance time, and so on can be obtained in the same way, which will not be repeated here.
  • In the embodiments of the present invention, the interest point targets in the panoramic video are tracked, the screen playback viewing angle corresponding to each target is determined according to its tracking information, and the tracking video corresponding to each target is generated according to that viewing angle and the panoramic video, so that the user can fully understand the video content of every interest point target by watching the tracking videos corresponding to the different targets, without manually adjusting the viewing angle of the panoramic video, which effectively improves the playback effect and user experience of the panoramic video.
  • FIG. 2 shows the implementation flow of the panoramic video processing method provided by Embodiment 2 of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown, which are described in detail as follows:
  • In step S201, a panoramic video is acquired.
  • A panoramic video that has already been stitched in post-production can be acquired directly, or the dynamic videos shot by the individual cameras before stitching can be acquired. If the unstitched dynamic videos are acquired, the image stitching can be performed first; alternatively, in the subsequent step of tracking the interest point targets in the panoramic video, the video images captured by all cameras at the same moment can be taken each time and target tracking performed on them simultaneously.
  • In step S202, target recognition is performed on each video frame of the panoramic video in turn, the recognized interest point targets are tracked to obtain their tracking information, and at the same time the tracked interest point targets are added to a preset interest point target set.
  • The interest point target set is constructed in advance and is used to store the interest point targets recognized from the panoramic video. Different types of interest point targets are preset, and recognition and tracking models for the different targets are trained in advance.
  • Each video frame of the panoramic video is recognized and tracked in time order; whenever an interest point target that did not appear in any previous frame is recognized in the next frame, it is added to the interest point target set, so that the targets that have been tracked can be counted through the set.
  • Preferably, when interest point targets are recognized in the current video frame, they are screened according to preset screening conditions; if a target does not meet the conditions, it is not tracked, which is equivalent to the target not appearing in the video image. Screening the interest point targets in this way avoids an excessive number of targets, effectively improves the processing effect of the panoramic video, and avoids tracking videos of uneven quality.
  • Further preferably, the screening conditions include one or any combination of the size of the interest point target, the distance between the target and the camera, and the number of frames in which the target appears in the panoramic video, so as to improve the quality of the tracked targets and avoid unnecessarily tracking targets that are too small in the picture, too far away, or present for too short a time.
  • When the size of the interest point target is the screening condition, the targets whose size exceeds a first preset threshold are selected. When the distance between the target and the camera is the screening condition, the targets whose distance to the camera does not exceed a second preset threshold are selected. When the number of frames in which the target appears in the panoramic video is the screening condition, the targets whose frame count exceeds a third preset threshold are selected.
  • Preferably, after an interest point target has been recognized and its tracking has ended, the target is screened according to the duration for which it was tracked, so as to improve the quality of the subsequently generated tracking videos and avoid tracking videos that are too short. The current tracking of a target can be ended when the number of consecutive frames in which the target does not appear exceeds a preset frame-count threshold.
  • In step S203, the screen playback viewing angle corresponding to the interest point target is determined according to the tracking information.
  • Once the tracking information is obtained, the image position of the interest point target in each video frame of the panoramic video is known. Of course, if the target leaves the scene monitored by the panoramic camera, it will not appear in the video images captured during the period in which it is away. According to the image position of the target in each video frame, the screen playback viewing angle corresponding to the target can be determined, that is, the playback viewing angle of each frame in which the target appears, which ensures that the target always appears on the playback screen when the video is played.
  • In step S204, a tracking video corresponding to the interest point target is generated according to the screen playback viewing angle corresponding to the target and the panoramic video.
  • After the screen playback viewing angle corresponding to the target is determined, that is, the playback viewing angle of each video frame in which the target appears, each such frame is obtained from the panoramic video, its playback viewing angle is set, and these frames are combined in shooting-time order into the tracking video corresponding to the target. When multiple different interest point targets are recognized in the panoramic video, a tracking video corresponding to each target can be generated.
  • In the embodiments of the present invention, target recognition is performed on each video frame of the panoramic video in turn and the recognized interest point targets are tracked; the tracked targets are counted through the interest point target set; the screen playback viewing angle corresponding to each target is determined according to its tracking information; and the tracking video corresponding to each target is generated according to that viewing angle and the panoramic video, so that the user can fully understand the video content of every interest point target by watching the tracking videos corresponding to the different targets, without manually adjusting the viewing angle of the panoramic video, which effectively improves the playback effect and user experience of the panoramic video.
  • FIG. 3 shows the implementation flow of the panoramic video processing method provided by Embodiment 3 of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown, which are described in detail as follows:
  • Step S301: Acquire a panoramic video.
  • Step S302: Track the interest point targets in the panoramic video to obtain corresponding tracking information.
  • For step S301 and step S302, reference may be made to the detailed description of step S101 and step S102 in Embodiment 1, or to the detailed description of step S201 and step S202 in Embodiment 2, which will not be repeated here.
  • Step S303: Obtain, from the tracking information, the image position of the interest point target in the corresponding video images of the panoramic video.
  • The corresponding video images may be the video images in which the interest point target appears, or all video images within the time period during which the target is tracked.
  • Step S304: Determine, according to the image position of the interest point target, the screen playback viewing angle corresponding to the target, so as to ensure that the target is always in the middle of the screen when its tracking video is played.
  • When the image position of the interest point target in each video frame in which it appears (or in each video frame in which it is tracked) is obtained, that image position is placed at the center of the playback screen, and the screen playback viewing angle corresponding to the target can then be determined, that is, the playback viewing angle of each video frame in which the target appears (which can also be considered to include the playback viewing angle of each frame in which the target is tracked). This ensures that the target is always located at the center of the screen when its tracking video is played, which improves the panoramic video playback effect and the user experience.
  • In step S305, a tracking video corresponding to the interest point target is generated according to the screen playback viewing angle corresponding to the target and the panoramic video.
  • After the screen playback viewing angle corresponding to the target is obtained, that is, the playback viewing angle of each video frame in which the target appears (or of each frame in which it is tracked), the viewing angle of each tracked frame is set, and these frames are combined to generate the corresponding tracking video. When a playback request from the user is received, the corresponding tracking video is sent to the user for playback.
  • Preferably, when a video playback request from the user is received, the tracking videos of the interest point target that the user has selected to watch are spliced together in their time order in the panoramic video and then sent to the user, so that the user can see the complete tracking content of the target in a single video without opening the tracking videos one by one, which effectively improves the playback effect and user experience of the panoramic video. The video splicing here means splicing multiple tracking videos of the same interest point target selected by the user.
  • Preferably, after the tracking videos are obtained, all tracking videos are sorted by video duration and recommended to the user for viewing in the sorted order, so as to provide the user with high-quality tracking videos. When the videos are sorted from longest to shortest, they are recommended from first to last in the sorted order; when they are sorted from shortest to longest, they are recommended from last to first.
  • Preferably, after the tracking videos are obtained, the total tracking duration of each interest point target is counted, the targets are sorted from longest to shortest total tracking duration, and the sorted targets are recommended to the user in that order, so that the user can select the corresponding target for tracking video playback.
  • Preferably, when a search keyword entered by the user is received, the corresponding tracking videos are filtered according to the keyword and pushed to the user, which improves the relevance of the pushed tracking videos and effectively improves the user experience.
  • In the embodiments of the present invention, the interest point targets in the panoramic video are tracked, the image position of each target is obtained from the tracking information, and the screen playback viewing angle corresponding to each target is determined according to its image position, so that the target is always in the middle of the screen when its tracking video is played. The tracking video corresponding to each target is generated according to that viewing angle and the panoramic video, which improves the quality of the tracking videos and effectively improves the playback effect and user experience of the panoramic video.
  • FIG. 4 shows the structure of the panoramic video processing apparatus provided by Embodiment 4 of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown, including:
  • The panoramic video acquisition unit 41 is configured to acquire a panoramic video.
  • A panoramic video that has already been stitched in post-production can be acquired directly, or the dynamic videos shot by the individual cameras before stitching can be acquired. If the unstitched dynamic videos are acquired, the image stitching can be performed first; alternatively, in the subsequent step of tracking the interest point targets in the panoramic video, the video images captured by all cameras at the same moment can be taken each time and target tracking performed on them simultaneously.
  • The interest point tracking unit 42 is configured to track the interest point targets in the panoramic video to obtain corresponding tracking information.
  • Different types of interest point targets are set in advance, and recognition and tracking models for the different targets are trained in advance to recognize the different interest point targets in each video frame of the panoramic video and to track the recognized targets, obtaining their tracking information.
  • The process of recognizing and tracking the interest point targets can use existing target recognition and tracking algorithms, which are not limited here.
  • The tracking information of an interest point target includes the image position of the target in each video frame.
  • The playback viewing angle determining unit 43 is configured to determine, according to the tracking information, the screen playback viewing angle corresponding to the interest point target.
  • Once the tracking information is obtained, the image position of the interest point target in each video frame of the panoramic video is known. Of course, if the target leaves the scene monitored by the panoramic camera, it will not appear in the video images captured during the period in which it is away. According to the image position of the target in the corresponding video frames, the screen playback viewing angle corresponding to the target can be determined, that is, the playback viewing angle of each frame in which the target appears, which ensures that the target always appears on the playback screen when the video is played.
  • The interest point video generating unit 44 is configured to generate a tracking video corresponding to the interest point target according to the screen playback viewing angle corresponding to the target and the panoramic video.
  • After the screen playback viewing angle corresponding to the target is determined, that is, the playback viewing angle of each video frame in which the target appears, each such frame is obtained from the panoramic video, its playback viewing angle is set, and these frames are combined in shooting-time order into the tracking video corresponding to the target. When multiple different interest point targets are recognized in the panoramic video, a tracking video corresponding to each target can be generated.
  • Preferably, the number of tracking videos corresponding to each tracked interest point target is at least one, so that the dynamic video content of a single target is individually presented through one or more tracking videos.
  • Further preferably, the video frames of the interest point target between its first appearance time and first disappearance time in the panoramic video are acquired, and these frames form the first tracking video of the target; the frames between the second appearance time and the second disappearance time form the second tracking video, and so on, so that multiple tracking videos of a single target can be generated.
  • The first appearance time of an interest point target in the panoramic video is the time at which the target first appears in the panoramic video, and the first disappearance time is the time at which the target, after its first appearance, disappears from the panoramic video for longer than a preset duration. The second appearance time, second disappearance time, third appearance time, third disappearance time, and so on can be obtained in the same way, which will not be repeated here.
  • In the embodiments of the present invention, the interest point targets in the panoramic video are tracked, the screen playback viewing angle corresponding to each target is determined according to its tracking information, and the tracking video corresponding to each target is generated according to that viewing angle and the panoramic video, so that the user can fully understand the video content of every interest point target by watching the tracking videos corresponding to the different targets, without manually adjusting the viewing angle of the panoramic video, which effectively improves the playback effect and user experience of the panoramic video.
  • FIG. 5 shows the structure of the panoramic video processing apparatus provided by Embodiment 5 of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown, including:
  • The panoramic video acquisition unit 51 is configured to acquire a panoramic video.
  • A panoramic video that has already been stitched in post-production can be acquired directly, or the dynamic videos shot by the individual cameras before stitching can be acquired. If the unstitched dynamic videos are acquired, the image stitching can be performed first; alternatively, in the subsequent step of tracking the interest point targets in the panoramic video, the video images captured by all cameras at the same moment can be taken each time and target tracking performed on them simultaneously.
  • The target recognition and tracking unit 52 is configured to perform target recognition on each video frame of the panoramic video in turn and track the recognized interest point targets to obtain their tracking information, and at the same time add the tracked targets to a preset interest point target set.
  • The interest point target set is constructed in advance and is used to store the interest point targets recognized from the panoramic video. Different types of interest point targets are preset, and recognition and tracking models for the different targets are trained in advance.
  • Each video frame of the panoramic video is recognized and tracked in time order; whenever an interest point target that did not appear in any previous frame is recognized in the next frame, it is added to the interest point target set, so that the tracked targets can be counted through the set.
  • The playback viewing angle determining unit 53 is configured to determine, according to the tracking information, the screen playback viewing angle corresponding to the interest point target.
  • Once the tracking information is obtained, the image position of the interest point target in each video frame of the panoramic video is known. Of course, if the target leaves the scene monitored by the panoramic camera, it will not appear in the video images captured during the period in which it is away. According to the image position of the target in each video frame, the screen playback viewing angle corresponding to the target can be determined, that is, the playback viewing angle of each frame in which the target appears, which ensures that the target always appears on the playback screen when the video is played.
  • The interest point video generating unit 54 is configured to generate a tracking video corresponding to the interest point target according to the screen playback viewing angle corresponding to the target and the panoramic video.
  • After the screen playback viewing angle corresponding to the target is determined, that is, the playback viewing angle of each video frame in which the target appears, each such frame is obtained from the panoramic video, its playback viewing angle is set, and these frames are combined in shooting-time order into the tracking video corresponding to the target. When multiple different interest point targets are recognized in the panoramic video, a tracking video corresponding to each target can be generated.
  • Preferably, as shown in FIG. 6, the target recognition and tracking unit 52 includes:
  • the first screening unit 521, configured to screen the recognized interest point targets according to preset screening conditions.
  • When interest point targets are recognized in the current video frame, they are screened according to the preset screening conditions; if a target does not meet the conditions, it is not tracked, which is equivalent to the target not appearing in the video image.
  • Further preferably, the screening conditions include one or any combination of the size of the interest point target, the distance between the target and the camera, and the number of frames in which the target appears in the panoramic video, so as to improve the quality of the tracked targets and avoid unnecessarily tracking targets that are too small in the picture, too far away, or present for too short a time.
  • When the size of the interest point target is the screening condition, the targets whose size exceeds a first preset threshold are selected. When the distance between the target and the camera is the screening condition, the targets whose distance to the camera does not exceed a second preset threshold are selected. When the number of frames in which the target appears in the panoramic video is the screening condition, the targets whose frame count exceeds a third preset threshold are selected.
  • Preferably, the target recognition and tracking unit 52 further includes:
  • the second screening unit 522, configured to screen the interest point targets according to the duration for which they have been tracked.
  • After an interest point target has been recognized and its tracking has ended, the target is screened according to the duration for which it was tracked, so as to improve the quality of the subsequently generated tracking videos and avoid tracking videos that are too short. The current tracking of a target can be ended when the number of consecutive frames in which the target does not appear exceeds a preset frame-count threshold.
  • In the embodiments of the present invention, target recognition is performed on each video frame of the panoramic video in turn and the recognized interest point targets are tracked; the tracked targets are counted through the interest point target set; the screen playback viewing angle corresponding to each target is determined according to its tracking information; and the tracking video corresponding to each target is generated according to that viewing angle and the panoramic video, so that the user can fully understand the video content of every interest point target by watching the tracking videos corresponding to the different targets, without manually adjusting the viewing angle of the panoramic video, which effectively improves the playback effect and user experience of the panoramic video.
  • Embodiment 6:
  • FIG. 7 shows the structure of the panoramic video processing apparatus provided by Embodiment 6 of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown, including:
  • The panoramic video acquisition unit 71 is configured to acquire a panoramic video.
  • The interest point tracking unit 72 is configured to track the interest point targets in the panoramic video to obtain corresponding tracking information.
  • For the panoramic video acquisition unit 71 and the interest point tracking unit 72, reference may be made to the detailed description of unit 41 and unit 42 in Embodiment 4, or to the detailed description of unit 51 and unit 52 in Embodiment 5, which will not be repeated here.
  • The image position obtaining unit 73 is configured to obtain, from the tracking information, the image position of the interest point target in the corresponding video images of the panoramic video.
  • The corresponding video images may be the video images in which the interest point target appears, or all video images within the time period during which the target is tracked.
  • The playback viewing angle determining subunit 74 is configured to determine, according to the image position of the interest point target, the screen playback viewing angle corresponding to the target, so as to ensure that the target is always located in the middle of the screen when its tracking video is played.
  • When the image position of the target in each video frame in which it appears (or in each frame in which it is tracked) is obtained, that image position is placed at the center of the playback screen, and the screen playback viewing angle corresponding to the target can then be determined, that is, the playback viewing angle of each frame in which the target appears (which can also be considered to include the playback viewing angle of each frame in which the target is tracked). This ensures that the target is always located at the center of the screen when its tracking video is played, which improves the panoramic video playback effect and the user experience.
  • The interest point video generating unit 75 is configured to generate a tracking video corresponding to the interest point target according to the screen playback viewing angle corresponding to the target and the panoramic video.
  • After the screen playback viewing angle corresponding to the target is obtained, that is, the playback viewing angle of each video frame in which the target appears (or of each frame in which it is tracked), the viewing angle of each tracked frame is set, and these frames are combined to generate the corresponding tracking video. When a playback request from the user is received, the corresponding tracking video is sent to the user for playback.
  • Preferably, as shown in FIG. 8, the interest point video generating unit 75 includes:
  • the tracking video splicing unit 751, configured to splice the tracking videos selected by the user in their time order in the panoramic video and send the result to the user when a video playback request from the user is received.
  • When a video playback request from the user is received, the tracking videos of the interest point target that the user has selected to watch are spliced together in their time order in the panoramic video and then sent to the user, so that the user can see the complete tracking content of the target in a single video without opening the tracking videos one by one, which effectively improves the playback effect and user experience of the panoramic video. The video splicing here means splicing multiple tracking videos of the same interest point target selected by the user.
  • Preferably, the interest point video generating unit 75 further includes:
  • the tracking video sorting unit 752, configured to sort the tracking videos by video duration and recommend them to the user for viewing in the sorted order.
  • After the tracking videos are obtained, all tracking videos are sorted by video duration and recommended to the user for viewing in the sorted order, so as to provide the user with high-quality tracking videos. When the videos are sorted from longest to shortest, they are recommended from first to last in the sorted order; when they are sorted from shortest to longest, they are recommended from last to first.
  • Preferably, after the tracking videos are obtained, the total tracking duration of each interest point target is counted, the targets are sorted from longest to shortest total tracking duration, and the sorted targets are recommended to the user in that order, so that the user can select the corresponding target for tracking video playback.
  • Preferably, when a search keyword entered by the user is received, the corresponding tracking videos are filtered according to the keyword and pushed to the user, which improves the relevance of the pushed tracking videos and effectively improves the user experience.
  • In the embodiments of the present invention, the interest point targets in the panoramic video are tracked, the image position of each target is obtained from the tracking information, and the screen playback viewing angle corresponding to each target is determined according to its image position, so that the target is always in the middle of the screen when its tracking video is played. The tracking video corresponding to each target is generated according to that viewing angle and the panoramic video, which improves the quality of the tracking videos and effectively improves the playback effect and user experience of the panoramic video.
  • The units of the panoramic video processing apparatus may be implemented by corresponding hardware or software units; each unit may be an independent software or hardware unit, or the units may be integrated into a single software or hardware unit, which is not intended to limit the present invention.
  • FIG. 9 shows the structure of the computer device provided in the seventh embodiment of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown.
  • the computer device 9 in the embodiment of the present invention includes a processor 90, a memory 91, and a computer program 92 that is stored in the memory 91 and can run on the processor 90.
  • the processor 90 implements the steps in the foregoing method embodiments when the computer program 92 is executed, such as steps S101 to S104 shown in FIG. 1.
  • the processor 90 executes the computer program 92, the functions of the units in the foregoing device embodiments, such as the functions of the modules 41 to 44 shown in FIG. 4, are realized.
  • In the embodiments of the present invention, the interest point targets in the panoramic video are tracked, the screen playback viewing angle corresponding to each target is determined according to its tracking information, and the tracking video corresponding to each target is generated according to that viewing angle and the panoramic video, so that the user can fully understand the video content of every interest point target by watching the tracking videos corresponding to the different targets, without manually adjusting the viewing angle of the panoramic video, which effectively improves the playback effect and user experience of the panoramic video.
  • Embodiment 8:
  • A computer-readable storage medium stores a computer program; when the computer program is executed by a processor, the steps in the foregoing method embodiments are implemented, for example, steps S101 to S104 shown in FIG. 1. Alternatively, when the computer program is executed by the processor, the functions of the units in the foregoing apparatus embodiments, such as the functions of units 41 to 44 shown in FIG. 4, are realized.
  • In the embodiments of the present invention, the interest point targets in the panoramic video are tracked, the screen playback viewing angle corresponding to each target is determined according to its tracking information, and the tracking video corresponding to each target is generated according to that viewing angle and the panoramic video, so that the user can fully understand the video content of every interest point target by watching the tracking videos corresponding to the different targets, without manually adjusting the viewing angle of the panoramic video, which effectively improves the playback effect and user experience of the panoramic video.
  • the computer-readable storage medium in the embodiment of the present invention may include any entity or device or recording medium capable of carrying computer program code, such as ROM/RAM, magnetic disk, optical disk, flash memory and other memories.

Abstract

The present invention is applicable to the field of computer technology and provides a panoramic video processing method. The method includes: acquiring a panoramic video; tracking the interest points in the panoramic video to obtain corresponding tracking information; determining, according to the tracking information, the screen playback viewing angle corresponding to each interest point target; and generating, according to the screen playback viewing angle corresponding to the interest point target and the panoramic video, the tracking video corresponding to the interest point target. The user is thus provided with a panoramic video that has already been processed into tracking videos of the individual interest point targets, without having to manually select the viewing angle of the playback picture, which effectively improves the playback effect and user experience of the panoramic video.

Description

Panoramic video processing method, apparatus, device, and storage medium
Technical Field
The present invention belongs to the field of computer technology, and in particular relates to a panoramic video processing method, apparatus, device, and storage medium.
Background Art
A panoramic video is a dynamic video captured by a panoramic camera that contains 360-degree omnidirectional picture content. It turns static panoramic pictures into dynamic video images, and users can freely watch the dynamic video within the shooting angle range of the panoramic camera.
When a panoramic video is played, the 360-degree picture content cannot be completely displayed on the playback device at one time. As the observer in front of the playback device, the user needs to select a suitable observation angle, which is the screen playback viewing angle of the current video. The user can select the viewing angle in the following ways: in the first way, the user slides on the touch screen to select the viewing angle and operates on the timeline of the panoramic video to return to an earlier moment and select another viewing angle; in the second way, the built-in gyroscope sensor of the playback device is matched with the user's body posture to find the corresponding viewing angle.
It can be seen that the user has to adjust the viewing angle continuously while playing a panoramic video in order to see the content of interest, which is very inconvenient. For example, when the user wants to keep watching a particular target object, the viewing angle of the video must be adjusted continuously to prevent the target from disappearing from the current playback picture; when the position of the target keeps changing, continuously adjusting the viewing angle may also make the user feel dizzy. As another example, when the user is interested in multiple target objects, the panoramic video has to be watched repeatedly to find them, which consumes a great deal of the user's time.
Technical Problem
The embodiments of the present invention provide a panoramic video processing method, apparatus, device, and storage medium, aiming to solve the problem that the prior art cannot provide an effective panoramic video processing method, resulting in a poor panoramic video playback effect and a poor user experience.
Technical Solution
In one aspect, an embodiment of the present invention provides a panoramic video processing method. The method includes the following steps:
acquiring a panoramic video;
tracking an interest point target in the panoramic video to obtain corresponding tracking information;
determining, according to the tracking information, a screen playback viewing angle corresponding to the interest point target; and
generating, according to the screen playback viewing angle corresponding to the interest point target and the panoramic video, a tracking video corresponding to the interest point target.
In another aspect, an embodiment of the present invention provides a panoramic video processing apparatus. The apparatus includes:
a panoramic video acquisition unit, configured to acquire a panoramic video;
an interest point tracking unit, configured to track an interest point target in the panoramic video to obtain corresponding tracking information;
a playback viewing angle determining unit, configured to determine, according to the tracking information, a screen playback viewing angle corresponding to the interest point target; and
an interest point video generating unit, configured to generate, according to the screen playback viewing angle corresponding to the interest point target and the panoramic video, a tracking video corresponding to the interest point target.
In another aspect, an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the steps of the above panoramic video processing method are implemented.
In another aspect, an embodiment of the present invention further provides a computer-readable storage medium that stores a computer program; when the computer program is executed by a processor, the steps of the above panoramic video processing method are implemented.
Beneficial Effects
The embodiments of the present invention track the interest point targets in a panoramic video, determine the screen playback viewing angle corresponding to each interest point target according to its tracking information, and generate the tracking video corresponding to each target according to that viewing angle and the panoramic video. By watching the tracking videos corresponding to the different interest point targets, the user can fully understand the video content of every interest point target in the panoramic video without having to manually adjust the viewing angle of the panoramic video back and forth, which effectively improves the playback effect and user experience of the panoramic video.
Description of Drawings
FIG. 1 is an implementation flowchart of the panoramic video processing method provided by Embodiment 1 of the present invention;
FIG. 2 is an implementation flowchart of the panoramic video processing method provided by Embodiment 2 of the present invention;
FIG. 3 is an implementation flowchart of the panoramic video processing method provided by Embodiment 3 of the present invention;
FIG. 4 is a schematic structural diagram of the panoramic video processing apparatus provided by Embodiment 4 of the present invention;
FIG. 5 is a schematic structural diagram of the panoramic video processing apparatus provided by Embodiment 5 of the present invention;
FIG. 6 is a schematic diagram of a preferred structure of the panoramic video processing apparatus provided by Embodiment 5 of the present invention;
FIG. 7 is a schematic structural diagram of the panoramic video processing apparatus provided by Embodiment 6 of the present invention;
FIG. 8 is a schematic structural diagram of the panoramic video processing apparatus provided by Embodiment 6 of the present invention; and
FIG. 9 is a schematic structural diagram of the computer device provided by Embodiment 7 of the present invention.
Embodiments of the Invention
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it.
The specific implementation of the present invention is described in detail below with reference to specific embodiments:
Embodiment 1:
FIG. 1 shows the implementation flow of the panoramic video processing method provided by Embodiment 1 of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown, which are described in detail as follows:
In step S101, a panoramic video is acquired.
The embodiments of the present invention are applicable to video processing systems or platforms that process panoramic videos.
In the embodiments of the present invention, when a panoramic video is produced, a group of cameras is usually used to simultaneously shoot a 360-degree picture around the shooting point, and the images are then stitched together in post-production to obtain the panoramic video. Therefore, a panoramic video that has already been stitched in post-production can be acquired directly, or the dynamic videos shot by the individual cameras before stitching can be acquired. If the unstitched dynamic videos from the individual cameras are acquired, the image stitching can be performed first; alternatively, in the subsequent step of tracking the interest point targets in the panoramic video, the video images captured by all cameras at the same moment can be taken each time and target tracking performed on them simultaneously.
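As a concrete illustration of the second option (tracking on time-synchronized, unstitched footage), the sketch below reads one frame from each camera per time step using OpenCV. The per-camera file names and the assumption that the recordings are already frame-aligned are illustrative, not details given by the patent.

```python
import cv2

def synchronized_frames(camera_files):
    """Yield, for each time step, the list of frames captured by all cameras at that moment.

    Assumes the camera recordings are already frame-aligned (same start time and frame rate).
    """
    captures = [cv2.VideoCapture(path) for path in camera_files]
    try:
        while True:
            frames = []
            for cap in captures:
                ok, frame = cap.read()
                if not ok:          # any stream ending stops the synchronized iteration
                    return
                frames.append(frame)
            yield frames            # target tracking can be run on all of these together
    finally:
        for cap in captures:
            cap.release()

# Example: iterate over synchronized frames from four camera recordings (hypothetical paths).
# for frames in synchronized_frames(["cam0.mp4", "cam1.mp4", "cam2.mp4", "cam3.mp4"]):
#     ...
```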
In step S102, the interest point targets in the panoramic video are tracked to obtain corresponding tracking information.
In the embodiments of the present invention, different types of interest point targets are set in advance; for example, an interest point target can be a person, a vehicle, an animal, a robot, and so on. Recognition and tracking models for the different interest point targets are trained in advance to recognize the different interest point targets in each video frame of the panoramic video and to track the recognized targets, thereby obtaining their tracking information. The recognition of interest point targets and the training of the tracking models can use existing target recognition and tracking algorithms, which are not restricted here. The tracking information of an interest point target includes the image position of the target in each video frame.
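The patent leaves the choice of recognition and tracking algorithm open. As one possible realization, the sketch below associates per-frame detections into tracks with a simple intersection-over-union (IoU) rule; `detect_interest_targets(frame)` stands in for any pre-trained detector of the preset target types (people, vehicles, animals, robots, etc.) and is an assumption, not an algorithm specified by the patent.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def track_targets(frames, detect_interest_targets, iou_threshold=0.3):
    """Return {target_id: {frame_index: box}}: the image position of each target per frame."""
    tracks, last_box, next_id = {}, {}, 0
    for idx, frame in enumerate(frames):
        for box in detect_interest_targets(frame):      # hypothetical pre-trained detector
            # match the detection to the existing track with the best box overlap
            best_id, best_iou = None, iou_threshold
            for tid, prev in last_box.items():
                overlap = iou(box, prev)
                if overlap > best_iou:
                    best_id, best_iou = tid, overlap
            if best_id is None:                          # unseen target: add it to the target set
                best_id, next_id = next_id, next_id + 1
                tracks[best_id] = {}
            tracks[best_id][idx] = box
            last_box[best_id] = box
    return tracks
```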
In step S103, the screen playback viewing angle corresponding to the interest point target is determined according to the tracking information.
In the embodiments of the present invention, once the tracking information is obtained, the image position of the interest point target in each video frame of the panoramic video is known. Of course, if the target leaves the scene monitored by the panoramic camera, it will not appear in the video images captured during the period in which it is away. According to the image position of the target in the video frames in which it appears or is tracked, the screen playback viewing angle corresponding to the target can be determined, that is, the playback viewing angle of each video frame in which the target appears, which ensures that the target always appears on the playback screen when the video is played.
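For a panoramic frame stored in the common equirectangular layout, the image position of a target maps directly to a viewing direction: the horizontal pixel coordinate corresponds to yaw (longitude) and the vertical coordinate to pitch (latitude). The sketch below is one way to compute the viewing angle that keeps the target on screen; the equirectangular assumption is ours, the patent does not fix a projection.

```python
def position_to_view_angle(cx, cy, frame_width, frame_height):
    """Map the target's center pixel (cx, cy) in an equirectangular panoramic frame
    to a (yaw, pitch) viewing angle in degrees that centers the target on screen.

    yaw is in [-180, 180] (0 = frame center), pitch is in [-90, 90] (0 = horizon).
    """
    yaw = (cx / frame_width - 0.5) * 360.0
    pitch = (0.5 - cy / frame_height) * 180.0
    return yaw, pitch

# Example: a target centered at pixel (2880, 960) in a 3840x1920 frame
# sits 90 degrees to the right of the frame center, on the horizon.
print(position_to_view_angle(2880, 960, 3840, 1920))   # -> (90.0, 0.0)
```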
In step S104, a tracking video corresponding to the interest point target is generated according to the screen playback viewing angle corresponding to the target and the panoramic video.
In the embodiments of the present invention, after the screen playback viewing angle corresponding to the interest point target is determined, that is, the playback viewing angle of each video frame in which the target appears, each video frame in which the target appears is obtained from the panoramic video, the playback viewing angle of these frames is set, and the frames are combined in shooting-time order into the tracking video corresponding to the target. When multiple different interest point targets are recognized in the panoramic video, a tracking video corresponding to each target can be generated.
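One way to "set the playback viewing angle" of a frame is to render a normal perspective (pinhole) view of the equirectangular frame in the chosen direction and write those views out in shooting order. The sketch below does this with NumPy and OpenCV; the equirectangular input, the output field of view, and the output resolution are assumptions made for illustration, not parameters given by the patent.

```python
import numpy as np
import cv2

def render_view(equi_frame, yaw_deg, pitch_deg, fov_deg=90.0, out_w=1280, out_h=720):
    """Render a perspective view of an equirectangular frame looking at (yaw, pitch)."""
    h_eq, w_eq = equi_frame.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)   # pinhole focal length in pixels

    # Ray direction for every output pixel, in camera coordinates (x right, y up, z forward).
    u, v = np.meshgrid(np.arange(out_w), np.arange(out_h))
    d = np.stack([u - out_w / 2.0, out_h / 2.0 - v, np.full_like(u, f, dtype=float)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)

    # Rotate the rays: pitch about the x axis, then yaw about the y (vertical) axis.
    p, y = np.radians(pitch_deg), np.radians(yaw_deg)
    r_pitch = np.array([[1, 0, 0], [0, np.cos(p), np.sin(p)], [0, -np.sin(p), np.cos(p)]])
    r_yaw = np.array([[np.cos(y), 0, np.sin(y)], [0, 1, 0], [-np.sin(y), 0, np.cos(y)]])
    d = d @ (r_yaw @ r_pitch).T

    # Convert the rotated rays to longitude/latitude and then to equirectangular pixels.
    lon = np.arctan2(d[..., 0], d[..., 2])
    lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))
    map_x = ((lon / (2 * np.pi) + 0.5) * w_eq).astype(np.float32)
    map_y = ((0.5 - lat / np.pi) * h_eq).astype(np.float32)
    # Pixels sampled beyond the +/-180 degree seam fall back to the default border handling.
    return cv2.remap(equi_frame, map_x, map_y, cv2.INTER_LINEAR)

def write_tracking_video(equi_frames, view_angles, path, fps=30.0):
    """view_angles: {frame_index: (yaw, pitch)}; frames are written in shooting-time order."""
    writer = None
    for idx in sorted(view_angles):
        view = render_view(equi_frames[idx], *view_angles[idx])
        if writer is None:
            writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps,
                                     (view.shape[1], view.shape[0]))
        writer.write(view)
    if writer is not None:
        writer.release()
```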
Preferably, the number of tracking videos corresponding to each tracked interest point target is at least one, so that the dynamic video content of a single interest point target is individually presented through one or more tracking videos.
Further preferably, the video frames of the interest point target between its first appearance time and first disappearance time in the panoramic video are acquired, and these frames form the first tracking video of the target. The video frames between the second appearance time and the second disappearance time are acquired and form the second tracking video of the target. By analogy, multiple tracking videos of the interest point target can be generated, so that the dynamic video content of a single target is individually presented through one or more tracking videos. The first appearance time of an interest point target in the panoramic video is the time at which the target first appears in the panoramic video; the first disappearance time is the time at which the target, after its first appearance, disappears from the panoramic video for longer than a preset duration. The second appearance time, second disappearance time, third appearance time, third disappearance time, and so on can be obtained in the same way, which will not be repeated here.
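A minimal way to obtain these appearance/disappearance intervals from the tracking information is to split the sorted list of frame indices in which the target appears wherever the gap between consecutive appearances exceeds the preset duration. The frame rate and threshold value below are illustrative assumptions.

```python
def appearance_segments(frame_indices, fps=30.0, max_gap_seconds=2.0):
    """Split the frames in which a target appears into (start, end) segments.

    A new segment starts whenever the target is absent for longer than the preset
    duration, so each segment becomes one tracking video of the target.
    """
    max_gap = int(max_gap_seconds * fps)
    segments = []
    frames = sorted(frame_indices)
    start = prev = frames[0]
    for idx in frames[1:]:
        if idx - prev > max_gap:          # disappeared for longer than the preset duration
            segments.append((start, prev))
            start = idx
        prev = idx
    segments.append((start, prev))
    return segments

# Example: with a 2-second threshold at 30 fps, a 97-frame absence splits the track in two.
print(appearance_segments([0, 1, 2, 3, 100, 101, 102]))  # -> [(0, 3), (100, 102)]
```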
In the embodiments of the present invention, the interest point targets in the panoramic video are tracked, the screen playback viewing angle corresponding to each target is determined according to its tracking information, and the tracking video corresponding to each target is generated according to that viewing angle and the panoramic video. By watching the tracking videos corresponding to the different interest point targets, the user can fully understand the video content of every interest point target in the panoramic video without having to manually adjust the viewing angle of the panoramic video back and forth, which effectively improves the playback effect and user experience of the panoramic video.
Embodiment 2:
FIG. 2 shows the implementation flow of the panoramic video processing method provided by Embodiment 2 of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown, which are described in detail as follows:
In step S201, a panoramic video is acquired.
In the embodiments of the present invention, a panoramic video that has already been stitched in post-production can be acquired directly, or the dynamic videos shot by the individual cameras before stitching can be acquired. If the unstitched dynamic videos are acquired, the image stitching can be performed first; alternatively, in the subsequent step of tracking the interest point targets in the panoramic video, the video images captured by all cameras at the same moment can be taken each time and target tracking performed on them simultaneously.
In step S202, target recognition is performed on each video frame of the panoramic video in turn and the recognized interest point targets are tracked to obtain their tracking information; at the same time, the tracked interest point targets are added to a preset interest point target set.
In the embodiments of the present invention, the interest point target set is constructed in advance and is used to store the interest point targets recognized from the panoramic video. Different types of interest point targets are preset, and recognition and tracking models for the different targets are trained in advance.
In the embodiments of the present invention, the trained recognition and tracking models are used to recognize and track the interest point targets in each video frame of the panoramic video in time order. Whenever an interest point target that did not appear in any previous frame is recognized in the next frame, it is added to the interest point target set, so that the interest point targets that have already been tracked can be counted through the set.
Preferably, when interest point targets are recognized in the current video frame, they are screened according to preset screening conditions. If a target does not meet the conditions, it is not tracked, which is equivalent to the target not appearing in the video image. Screening the interest point targets in this way avoids an excessive number of targets, effectively improves the processing effect of the panoramic video, and avoids tracking videos of uneven quality.
Further preferably, the screening conditions include one or any combination of the size of the interest point target, the distance between the target and the camera, and the number of frames in which the target appears in the panoramic video, so as to improve the quality of the tracked targets and avoid unnecessarily tracking targets that are too small in the picture, too far away, or present for too short a time.
In the embodiments of the present invention, when the size of the interest point target is the screening condition, the targets whose size exceeds a first preset threshold are selected. When the distance between the target and the camera is the screening condition, the targets whose distance to the camera does not exceed a second preset threshold are selected. When the number of frames in which the target appears in the panoramic video is the screening condition, the targets whose frame count exceeds a third preset threshold are selected.
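The three conditions combine naturally into a single screening predicate. In the sketch below the target's on-screen size is approximated by its bounding-box area and the distance is assumed to come from the tracker or a depth estimate; the threshold values are illustrative, not values given in the patent.

```python
def passes_screening(box_area_px, distance_m, frames_appeared,
                     min_area_px=32 * 32,      # first preset threshold (size)
                     max_distance_m=20.0,      # second preset threshold (distance)
                     min_frames=30):           # third preset threshold (frame count)
    """Keep only targets that are large enough, close enough, and visible long enough."""
    return (box_area_px > min_area_px
            and distance_m <= max_distance_m
            and frames_appeared > min_frames)

# Example: a 40x40 px target, 8 m away, visible in 120 frames passes all three conditions.
print(passes_screening(40 * 40, 8.0, 120))   # -> True
```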
Preferably, after an interest point target has been recognized and its tracking has ended, the target is screened according to the duration for which it was tracked, so as to improve the quality of the subsequently generated tracking videos and avoid tracking videos that are too short. The current tracking of an interest point target can be ended when the number of consecutive frames in which the target does not appear exceeds a preset frame-count threshold.
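A minimal sketch of this termination-and-screening rule: tracking of a target is closed once it has been missing for more than a preset number of consecutive frames, and the finished track is kept only if its tracked duration is long enough. Both thresholds are placeholders, not values from the patent.

```python
def finalize_track(appearance_frames, current_frame,
                   max_missing_frames=60, min_track_frames=30):
    """Decide whether a track should be ended now and, if so, whether it is kept.

    Returns (ended, kept): 'ended' when the target has been absent for more than the
    preset number of consecutive frames, 'kept' when its tracked duration is long enough.
    """
    last_seen = max(appearance_frames)
    ended = current_frame - last_seen > max_missing_frames
    kept = ended and len(appearance_frames) >= min_track_frames
    return ended, kept

# Example: last seen at frame 100, now at frame 200 -> tracking ends;
# the track covers 45 frames, so it is kept for tracking-video generation.
print(finalize_track(list(range(56, 101)), 200))   # -> (True, True)
```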
In step S203, the screen playback viewing angle corresponding to the interest point target is determined according to the tracking information.
In the embodiments of the present invention, once the tracking information is obtained, the image position of the interest point target in each video frame of the panoramic video is known. Of course, if the target leaves the scene monitored by the panoramic camera, it will not appear in the video images captured during the period in which it is away. According to the image position of the target in each video frame of the panoramic video, the screen playback viewing angle corresponding to the target can be determined, that is, the playback viewing angle of each video frame in which the target appears, which ensures that the target always appears on the playback screen when the video is played.
In step S204, a tracking video corresponding to the interest point target is generated according to the screen playback viewing angle corresponding to the target and the panoramic video.
In the embodiments of the present invention, after the screen playback viewing angle corresponding to the target is determined, that is, the playback viewing angle of each video frame in which the target appears, each such frame is obtained from the panoramic video, its playback viewing angle is set, and these frames are combined in shooting-time order into the tracking video corresponding to the target. When multiple different interest point targets are recognized in the panoramic video, a tracking video corresponding to each target can be generated.
In the embodiments of the present invention, target recognition is performed on each video frame of the panoramic video in turn and the recognized interest point targets are tracked; the tracked targets are counted through the interest point target set; the screen playback viewing angle corresponding to each target is determined according to its tracking information; and the tracking video corresponding to each target is generated according to that viewing angle and the panoramic video. By watching the tracking videos corresponding to the different interest point targets, the user can fully understand the video content of every interest point target in the panoramic video without manually adjusting the viewing angle of the panoramic video back and forth, which effectively improves the playback effect and user experience of the panoramic video.
Embodiment 3:
FIG. 3 shows the implementation flow of the panoramic video processing method provided by Embodiment 3 of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown, which are described in detail as follows:
Step S301: Acquire a panoramic video.
Step S302: Track the interest point targets in the panoramic video to obtain corresponding tracking information.
In the embodiments of the present invention, for step S301 and step S302, reference may be made to the detailed description of step S101 and step S102 in Embodiment 1, or to the detailed description of step S201 and step S202 in Embodiment 2, which will not be repeated here.
Step S303: Obtain, from the tracking information, the image position of the interest point target in the corresponding video images of the panoramic video.
In the embodiments of the present invention, the corresponding video images may be the video images in which the interest point target appears, or all video images within the time period during which the target is tracked.
Step S304: Determine, according to the image position of the interest point target, the screen playback viewing angle corresponding to the target, so as to ensure that the target is always in the middle of the picture when the tracking video corresponding to the target is played.
In the embodiments of the present invention, when the image position of the interest point target in each video frame in which it appears (or in each video frame in which it is tracked) is obtained, that image position is placed at the center of the playback picture, and the screen playback viewing angle corresponding to the target can then be determined, that is, the playback viewing angle of each video frame in which the target appears (which can also be considered to include the playback viewing angle of each video frame in which the target is tracked). This ensures that the target is always located at the center of the picture when its tracking video is played, which improves the panoramic video playback effect and the user experience.
Step S305: Generate, according to the screen playback viewing angle corresponding to the interest point target and the panoramic video, the tracking video corresponding to the target.
In the embodiments of the present invention, after the screen playback viewing angle corresponding to the target is obtained (that is, the playback viewing angle of each video frame in which the target appears, or of each video frame in which the target is tracked), the playback viewing angle of each tracked video frame is set, and these tracked video frames are combined to generate the corresponding tracking video. When a playback request from the user is received, the corresponding tracking video is sent to the user for playback.
Preferably, when a video playback request from the user is received, the tracking videos of the interest point target that the user has selected to watch are spliced together in their time order in the panoramic video and then sent to the user, so that the user can see the complete tracking content of the target in a single video without opening the tracking videos one by one, which effectively improves the playback effect and user experience of the panoramic video. The video splicing here means splicing multiple tracking videos of the same interest point target selected by the user.
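A straightforward way to splice several tracking videos of the same target is to read them in chronological order and write all of their frames into one output file. The sketch below uses OpenCV and assumes the clips share the same resolution and frame rate; the file paths are illustrative.

```python
import cv2

def splice_tracking_videos(clip_paths, output_path):
    """Concatenate tracking videos (already sorted by their time order in the panorama)."""
    writer = None
    for path in clip_paths:
        cap = cv2.VideoCapture(path)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if writer is None:   # initialize the writer from the first clip's properties
                fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
                writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*"mp4v"),
                                         fps, (frame.shape[1], frame.shape[0]))
            writer.write(frame)
        cap.release()
    if writer is not None:
        writer.release()

# Example: splice the first and second tracking videos of one target (hypothetical paths).
# splice_tracking_videos(["target7_clip1.mp4", "target7_clip2.mp4"], "target7_full.mp4")
```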
Preferably, after the tracking videos are obtained, all tracking videos are sorted by video duration and recommended to the user for viewing in the sorted order, so as to provide the user with high-quality tracking videos. When the videos are sorted from longest to shortest, they are recommended from first to last in the sorted order; when they are sorted from shortest to longest, they are recommended from last to first.
Preferably, after the tracking videos are obtained, the total tracking duration of each interest point target is counted, the targets are sorted from longest to shortest total tracking duration, and the sorted targets are recommended to the user in that order, so that the user can select the corresponding target for tracking video playback.
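Both recommendation rules are simple sorts. The sketch below sorts individual tracking videos by duration and interest point targets by their total tracked duration; the record fields (`target_id`, `path`, `duration`) are illustrative names, not identifiers from the patent.

```python
from collections import defaultdict

def recommend(tracking_videos):
    """tracking_videos: list of dicts like {"target_id": 3, "path": "...", "duration": 12.5}.

    Returns (videos sorted longest-first, target ids sorted by total tracked duration).
    """
    videos_by_duration = sorted(tracking_videos, key=lambda v: v["duration"], reverse=True)

    total_per_target = defaultdict(float)
    for video in tracking_videos:
        total_per_target[video["target_id"]] += video["duration"]
    targets_by_total = sorted(total_per_target, key=total_per_target.get, reverse=True)

    return videos_by_duration, targets_by_total

# Example: target 2 has 20 s of tracking in total and is recommended before target 1.
clips = [{"target_id": 1, "path": "a.mp4", "duration": 12.0},
         {"target_id": 2, "path": "b.mp4", "duration": 8.0},
         {"target_id": 2, "path": "c.mp4", "duration": 12.0}]
print(recommend(clips)[1])   # -> [2, 1]
```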
Preferably, when a search keyword entered by the user is received, the corresponding tracking videos are filtered according to the keyword and pushed to the user, which improves the relevance of the pushed tracking videos and effectively improves the user experience.
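A minimal keyword filter only needs some text metadata attached to each tracking video, for example the recognized target category. The metadata fields below are assumptions for illustration; the patent does not specify how keywords are matched.

```python
def filter_by_keyword(tracking_videos, keyword):
    """Return the tracking videos whose metadata mentions the search keyword."""
    keyword = keyword.strip().lower()
    return [video for video in tracking_videos
            if keyword in video.get("category", "").lower()
            or keyword in video.get("description", "").lower()]

# Example: pushing only the tracking videos whose target was recognized as a vehicle.
videos = [{"path": "t1.mp4", "category": "person", "description": "runner near gate"},
          {"path": "t2.mp4", "category": "vehicle", "description": "red car passing"}]
print([v["path"] for v in filter_by_keyword(videos, "vehicle")])   # -> ['t2.mp4']
```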
In the embodiments of the present invention, the interest point targets in the panoramic video are tracked, the image position of each target is obtained from the tracking information, and the screen playback viewing angle corresponding to each target is determined according to its image position, so that the target is always in the middle of the picture when its tracking video is played. The tracking video corresponding to each target is generated according to that viewing angle and the panoramic video, which improves the quality of the tracking videos and effectively improves the playback effect and user experience of the panoramic video.
Embodiment 4:
FIG. 4 shows the structure of the panoramic video processing apparatus provided by Embodiment 4 of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown, including:
a panoramic video acquisition unit 41, configured to acquire a panoramic video.
In the embodiments of the present invention, a panoramic video that has already been stitched in post-production can be acquired directly, or the dynamic videos shot by the individual cameras before stitching can be acquired. If the unstitched dynamic videos are acquired, the image stitching can be performed first; alternatively, in the subsequent step of tracking the interest point targets in the panoramic video, the video images captured by all cameras at the same moment can be taken each time and target tracking performed on them simultaneously.
an interest point tracking unit 42, configured to track the interest point targets in the panoramic video to obtain corresponding tracking information.
In the embodiments of the present invention, different types of interest point targets are set in advance, and recognition and tracking models for the different targets are trained in advance to recognize the different interest point targets in each video frame of the panoramic video and to track the recognized targets, obtaining their tracking information. The recognition of interest point targets and the training of the tracking models can use existing target recognition and tracking algorithms, which are not restricted here. The tracking information of an interest point target includes the image position of the target in each video frame.
a playback viewing angle determining unit 43, configured to determine, according to the tracking information, the screen playback viewing angle corresponding to the interest point target.
In the embodiments of the present invention, once the tracking information is obtained, the image position of the interest point target in each video frame of the panoramic video is known. Of course, if the target leaves the scene monitored by the panoramic camera, it will not appear in the video images captured during the period in which it is away. According to the image position of the target in the video frames in which it appears or is tracked, the screen playback viewing angle corresponding to the target can be determined, that is, the playback viewing angle of each video frame in which the target appears, which ensures that the target always appears on the playback screen when the video is played.
an interest point video generating unit 44, configured to generate, according to the screen playback viewing angle corresponding to the interest point target and the panoramic video, a tracking video corresponding to the target.
In the embodiments of the present invention, after the screen playback viewing angle corresponding to the target is determined, that is, the playback viewing angle of each video frame in which the target appears, each such frame is obtained from the panoramic video, its playback viewing angle is set, and these frames are combined in shooting-time order into the tracking video corresponding to the target. When multiple different interest point targets are recognized in the panoramic video, a tracking video corresponding to each target can be generated.
Preferably, the number of tracking videos corresponding to each tracked interest point target is at least one, so that the dynamic video content of a single target is individually presented through one or more tracking videos.
Further preferably, the video frames of the interest point target between its first appearance time and first disappearance time in the panoramic video are acquired, and these frames form the first tracking video of the target; the frames between the second appearance time and the second disappearance time form the second tracking video; and so on, so that multiple tracking videos of a single target can be generated and its dynamic video content individually presented through one or more tracking videos. The first appearance time of a target in the panoramic video is the time at which it first appears in the panoramic video; the first disappearance time is the time at which the target, after its first appearance, disappears from the panoramic video for longer than a preset duration. The second appearance time, second disappearance time, third appearance time, third disappearance time, and so on can be obtained in the same way, which will not be repeated here.
In the embodiments of the present invention, the interest point targets in the panoramic video are tracked, the screen playback viewing angle corresponding to each target is determined according to its tracking information, and the tracking video corresponding to each target is generated according to that viewing angle and the panoramic video, so that the user can fully understand the video content of every interest point target in the panoramic video by watching the tracking videos corresponding to the different targets, without manually adjusting the viewing angle back and forth, which effectively improves the playback effect and user experience of the panoramic video.
实施例五:
图5示出了本发明实施例五提供的全景视频处理装置的结构,为了便于说明,仅示出了与本发明实施例相关的部分,其中包括:
全景视频获取单元51,用于获取全景视频。
在本发明实施例中,可直接获取已经后期图像拼接缝合好的全景视频,也可以获取未经后期图像拼接缝合的各个摄像机拍摄的动态视频。若获取的是未经后期图像拼接缝合的各个摄像机拍摄的动态视频,则可以先进行图像拼接缝合,或者在对全景视频中的兴趣点目标进行追踪的后续步骤中,每次取各个摄像头在同一时刻拍摄的视频图像同时进行目标跟踪。
目标识别跟踪单元52,用于依次对全景视频中每帧视频图像进行目标识别并对识别出的兴趣点目标进行跟踪,获得兴趣点目标的跟踪信息,同时将被跟踪的兴趣点目标添加至预设的兴趣点目标集合。
在本发明实施例中,预先构建兴趣点目标集合,兴趣点目标集合用于存储从全景视频中识别出的兴趣点目标。预先设置好不同类型的兴趣点目标,同时预先训练不同兴趣点目标的识别与跟踪模型。
在本发明实施例中,按照时间顺序,通过训练好的兴趣点目标的识别与跟踪模型,依次对全景视频中的每帧视频图像进行兴趣点目标的识别与跟踪,每在下一帧视频图像中识别出之前所有帧中未出现的兴趣点目标时,将该兴趣点目标加入兴趣点目标集合,以通过兴趣点目标集合统计已被跟踪的兴趣点目标。
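Maintaining such a target set reduces to an add-if-new check on each frame's detections; the following is a sketch only, with hypothetical target identifiers:

    def update_target_set(target_set, detected_ids):
        """Add every newly recognised point-of-interest target that has not
        appeared in any earlier frame; return the identifiers added this frame."""
        new_ids = [tid for tid in detected_ids if tid not in target_set]
        target_set.update(new_ids)
        return new_ids

    targets = set()
    print(update_target_set(targets, ["person_1", "car_2"]))     # both are new
    print(update_target_set(targets, ["person_1", "person_3"]))  # only person_3 is new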
A playback view angle determining unit 53, configured to determine, according to the tracking information, the playback view angle corresponding to the point-of-interest target.
In this embodiment of the present invention, once the tracking information is obtained, the image position of the point-of-interest target in each video frame of the panoramic video is known; of course, if the target leaves the scene monitored by the panoramic camera, it will not appear in the video frames captured during the period in which it is absent. According to the image position of the target in each video frame of the panoramic video, the playback view angle corresponding to the target can be determined, i.e., the playback view angle of each video frame in which the target appears, thereby ensuring that the target always appears in the playback picture when the video is played.
A point-of-interest video generating unit 54, configured to generate, according to the playback view angle corresponding to the point-of-interest target and the panoramic video, the tracking video corresponding to the point-of-interest target.
In this embodiment of the present invention, when the playback view angle corresponding to the point-of-interest target is determined, i.e., the playback view angle of each video frame in which the point-of-interest target appears, each video frame in which the target appears is obtained from the panoramic video, the playback view angles of these video frames are set, and these video frames are assembled in shooting-time order into the tracking video corresponding to that target. When multiple different point-of-interest targets are recognized in the panoramic video, a tracking video corresponding to each point-of-interest target can be generated.
Preferably, as shown in Fig. 6, the target recognition and tracking unit 52 includes:
A first filtering unit 521, configured to filter the recognized point-of-interest targets according to preset filtering conditions.
In this embodiment of the present invention, when point-of-interest targets are recognized in the current video frame, these recognized targets are filtered according to the preset filtering conditions. If a point-of-interest target does not meet the filtering conditions, it is not tracked, which is equivalent to the target not appearing in the video frame. Filtering the point-of-interest targets in this way both avoids an excessive number of targets and effectively improves the processing result of the panoramic video, preventing tracking videos of uneven quality.
Further preferably, the filtering conditions include one or any combination of: the size of the point-of-interest target, the distance between the point-of-interest target and the camera, and the number of frames in which the point-of-interest target appears in the panoramic video. This improves the quality of the tracked point-of-interest targets and avoids unnecessary tracking of targets that are too small in the picture, too far away, or present for too short a time.
In this embodiment of the present invention, when the size of the point-of-interest target is used as the filtering condition, the targets whose size exceeds a first preset threshold are selected. When the distance between the point-of-interest target and the camera is used as the filtering condition, the targets whose distance from the camera does not exceed a second preset threshold are selected. When the number of frames in which the point-of-interest target appears in the panoramic video is used as the filtering condition, the targets that appear in more frames than a third preset threshold are selected.
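A sketch of such a filter predicate is shown below; the three concrete threshold values are illustrative assumptions standing in for the first, second, and third preset thresholds described above:

    def passes_filter(box_area, distance_m, appearance_frames,
                      min_area=1500, max_distance_m=20.0, min_frames=30):
        """Keep a candidate target only if its on-screen size is large enough,
        it is close enough to the camera, and it appears in enough frames."""
        return (box_area > min_area
                and distance_m <= max_distance_m
                and appearance_frames > min_frames)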
Preferably, the target recognition and tracking unit 52 further includes:
A second filtering unit 522, configured to filter the point-of-interest targets according to the duration for which they have been tracked.
In this embodiment of the present invention, after a point-of-interest target has been recognized and its tracking has ended, the target is filtered according to the duration for which it was tracked, which improves the quality of the tracking videos generated subsequently and avoids tracking videos that are too short. The current tracking of a point-of-interest target may be ended when the number of consecutive frames in which the target fails to appear exceeds a preset frame-count threshold.
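Both rules can be sketched as simple checks; the thresholds and the frame-rate value below are illustrative assumptions, not values from the disclosure:

    def track_ended(missed_in_a_row, max_missed=15):
        """End the current tracking pass once the target has been absent for
        more than a preset number of consecutive frames."""
        return missed_in_a_row > max_missed

    def keep_track(tracked_frames, fps=30.0, min_seconds=2.0):
        """Discard tracks whose total tracked duration is too short to make a
        worthwhile tracking video."""
        return tracked_frames / fps >= min_seconds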
In this embodiment of the present invention, target recognition is performed on each video frame of the panoramic video in turn and the recognized point-of-interest targets are tracked; the tracked point-of-interest targets are counted through the point-of-interest target set; the playback view angle corresponding to each point-of-interest target is determined according to the tracking information of these targets; and the tracking video corresponding to each target is generated according to that playback view angle and the panoramic video. A user can thus watch the tracking videos corresponding to the different point-of-interest targets to gain a complete view of the video content of each target in the panoramic video, without having to adjust the playback view angle of the panoramic video back and forth manually, which effectively improves the playback effect of the panoramic video and the user experience.
Embodiment 6:
Fig. 7 shows the structure of the panoramic video processing apparatus provided by Embodiment 6 of the present invention. For ease of description, only the parts relevant to this embodiment of the present invention are shown, including:
A panoramic video obtaining unit 71, configured to obtain a panoramic video.
A point-of-interest tracking unit 72, configured to track the point-of-interest targets in the panoramic video to obtain the corresponding tracking information.
In this embodiment of the present invention, for the panoramic video obtaining unit 71 and the point-of-interest tracking unit 72, reference may be made to the detailed description of unit 41 and unit 42 in Embodiment 4, or to the detailed description of unit 51 and unit 52 in Embodiment 5, which is not repeated here.
An image position obtaining unit 73, configured to obtain, from the tracking information, the image position of the point-of-interest target in the corresponding video frames of the panoramic video.
In this embodiment of the present invention, the corresponding video frames may be the video frames in which the point-of-interest target appears, or all video frames within the period during which the point-of-interest target is tracked.
A playback view angle determining subunit 74, configured to determine, according to the image position of the point-of-interest target, the playback view angle corresponding to the point-of-interest target, so as to ensure that the target always stays at the center of the picture when the tracking video corresponding to the target is played.
In this embodiment of the present invention, when the image position of the point-of-interest target in each video frame in which it appears, or in each video frame in which it is tracked, is obtained, that image position is placed at the center of the playback picture, so that the playback view angle corresponding to the target can be determined, i.e., the playback view angle of each video frame in which the target appears (which can also be regarded as including the playback view angle of each video frame in which the target is tracked). This ensures that, when the tracking video of the point-of-interest target is played, the target always stays at the center of the picture, improving the playback effect of the panoramic video and the user experience.
A point-of-interest video generating unit 75, configured to generate, according to the playback view angle corresponding to the point-of-interest target and the panoramic video, the tracking video corresponding to the point-of-interest target.
In this embodiment of the present invention, after the playback view angle corresponding to the point-of-interest target is obtained (i.e., the playback view angle of each video frame in which the target appears, or of each video frame in which the target is tracked), the playback view angle of each video frame in which the target is tracked is set, and the corresponding tracking video can then be generated by combining these tracked video frames. When a playback request from the user is received, the corresponding tracking video is sent to the user for playback.
Preferably, as shown in Fig. 8, the point-of-interest video generating unit 75 includes:
A tracking video splicing unit 751, configured to, when a video playback request from the user is received, splice the tracking videos selected by the user in the chronological order in which they occur in the panoramic video and send the result to the user.
In this embodiment of the present invention, when a video playback request from the user is received, the tracking videos of the point-of-interest target that the user has chosen to watch are spliced together in the chronological order in which they occur in the panoramic video and then sent to the user, so that the user can see the complete tracked content of that target in a single video instead of opening its tracking videos one by one, which effectively improves the playback effect of the panoramic video and the user experience. The video splicing here refers to splicing together multiple tracking videos of the same point-of-interest target selected by the user.
Preferably, the point-of-interest video generating unit 75 further includes:
A tracking video sorting unit 752, configured to sort the tracking videos by video duration and recommend the tracking videos to the user for viewing in the sorted order.
In this embodiment of the present invention, after the tracking videos are obtained, all tracking videos are sorted by video duration and recommended to the user for viewing in the sorted order, so as to provide the user with high-quality tracking videos. When the videos are sorted from longest to shortest, they are recommended starting from the front of the sorted order; when they are sorted from shortest to longest, they are recommended starting from the back of the sorted order.
Preferably, after the tracking videos are obtained, the total tracked duration of each point-of-interest target is computed, the point-of-interest targets are sorted from the longest total tracked duration to the shortest, and the sorted targets are recommended to the user in that order, so that the user can select a target and play its tracking videos.
Preferably, when a search keyword entered by the user is received, the corresponding tracking videos are filtered according to the search keyword and pushed to the user, which makes the pushed tracking videos more targeted and effectively improves the user experience.
In this embodiment of the present invention, the point-of-interest targets in the panoramic video are tracked; the image position of each point-of-interest target is obtained from the tracking information; the playback view angle corresponding to the target is determined according to its image position, so as to ensure that the target always stays at the center of the picture when the tracking video corresponding to the target is played; and the tracking video corresponding to the target is generated according to that playback view angle and the panoramic video. This improves the quality of the tracking videos and thereby effectively improves the playback effect of the panoramic video and the user experience.
In this embodiment of the present invention, the units of the panoramic video processing apparatus may be implemented by corresponding hardware or software units; the units may be independent software or hardware units, or may be integrated into a single software or hardware unit, which is not intended to limit the present invention.
Embodiment 7:
Fig. 9 shows the structure of the computer device provided by Embodiment 7 of the present invention. For ease of description, only the parts relevant to this embodiment of the present invention are shown.
The computer device 9 of this embodiment of the present invention includes a processor 90, a memory 91, and a computer program 92 that is stored in the memory 91 and executable on the processor 90. When executing the computer program 92, the processor 90 implements the steps in each of the method embodiments described above, for example steps S101 to S104 shown in Fig. 1. Alternatively, when executing the computer program 92, the processor 90 implements the functions of the units in each of the apparatus embodiments described above, for example the functions of modules 41 to 44 shown in Fig. 4.
In this embodiment of the present invention, the point-of-interest targets in the panoramic video are tracked; the playback view angle corresponding to each point-of-interest target is determined according to the tracking information of these targets; and the tracking video corresponding to each target is generated according to that playback view angle and the panoramic video. A user can thus watch the tracking videos corresponding to the different point-of-interest targets to gain a complete view of the video content of each target in the panoramic video, without having to adjust the playback view angle of the panoramic video back and forth manually, which effectively improves the playback effect of the panoramic video and the user experience.
Embodiment 8:
In this embodiment of the present invention, a computer-readable storage medium is provided. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps in each of the method embodiments described above, for example steps S101 to S104 shown in Fig. 1; or, when executed by a processor, the computer program implements the functions of the units in each of the apparatus embodiments described above, for example the functions of modules 41 to 44 shown in Fig. 4.
In this embodiment of the present invention, the point-of-interest targets in the panoramic video are tracked; the playback view angle corresponding to each point-of-interest target is determined according to the tracking information of these targets; and the tracking video corresponding to each target is generated according to that playback view angle and the panoramic video. A user can thus watch the tracking videos corresponding to the different point-of-interest targets to gain a complete view of the video content of each target in the panoramic video, without having to adjust the playback view angle of the panoramic video back and forth manually, which effectively improves the playback effect of the panoramic video and the user experience.
The computer-readable storage medium of this embodiment of the present invention may include any entity or device capable of carrying computer program code, or a recording medium, such as ROM/RAM, a magnetic disk, an optical disc, a flash memory, or other storage.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (12)

  1. A panoramic video processing method, characterized in that the method comprises the following steps:
    obtaining a panoramic video;
    tracking a point-of-interest target in the panoramic video to obtain corresponding tracking information;
    determining, according to the tracking information, a playback view angle corresponding to the point-of-interest target;
    generating, according to the playback view angle corresponding to the point-of-interest target and the panoramic video, a tracking video corresponding to the point-of-interest target.
  2. The panoramic video processing method according to claim 1, characterized in that the step of tracking a point-of-interest target in the panoramic video comprises:
    performing target recognition on each video frame of the panoramic video in turn and tracking the recognized point-of-interest target, and at the same time adding the tracked point-of-interest target to a preset point-of-interest target set.
  3. The panoramic video processing method according to claim 2, characterized in that the step of performing target recognition on each video frame of the panoramic video in turn and tracking the recognized point-of-interest target comprises:
    filtering the recognized point-of-interest target according to a preset filtering condition, the preset filtering condition comprising one or any combination of a size of the point-of-interest target, a distance between the point-of-interest target and a camera, and a number of frames in which the point-of-interest target appears in the panoramic video.
  4. The panoramic video processing method according to claim 2, characterized in that the step of performing target recognition on each video frame of the panoramic video in turn and tracking the recognized point-of-interest target comprises:
    filtering the point-of-interest target according to a duration for which the point-of-interest target has been tracked.
  5. The panoramic video processing method according to claim 2, characterized in that the step of performing target recognition on each video frame of the panoramic video in turn and tracking the recognized point-of-interest target comprises:
    constructing a point-of-interest target set in advance, the point-of-interest target set being used to store point-of-interest targets recognized from the panoramic video;
    presetting different types of point-of-interest targets, and pre-training recognition and tracking models for the different point-of-interest targets;
    performing, in chronological order and through the pre-trained recognition and tracking models of the point-of-interest targets, point-of-interest target recognition and tracking on each video frame of the panoramic video in turn, and, whenever a point-of-interest target that did not appear in any previous frame is recognized in the next video frame, adding that point-of-interest target to the point-of-interest target set.
  6. The panoramic video processing method according to claim 1, characterized in that the step of determining the playback view angle corresponding to the point-of-interest target comprises:
    obtaining, from the tracking information, an image position of the point-of-interest target in corresponding video frames of the panoramic video;
    determining, according to the image position of the point-of-interest target, the playback view angle corresponding to the point-of-interest target.
  7. The panoramic video processing method according to claim 1, characterized in that the step of determining the playback view angle corresponding to the point-of-interest target comprises:
    obtaining an image position of the point-of-interest target in each video frame in which the point-of-interest target appears or in each video frame in which the point-of-interest target is tracked;
    placing the image position at the center of the playback picture, thereby obtaining the playback view angle corresponding to the point-of-interest target and ensuring that the point-of-interest target always stays at the center of the picture when the tracking video corresponding to the point-of-interest target is played.
  8. The panoramic video processing method according to claim 1, characterized in that the step of generating the tracking video corresponding to the point-of-interest target comprises:
    when a video playback request from the user is received, splicing the tracking videos selected by the user in the chronological order in which the tracking videos occur in the panoramic video and sending the spliced result to the user.
  9. The panoramic video processing method according to claim 1, characterized in that the step of generating the tracking video corresponding to the point-of-interest target comprises:
    sorting the tracking videos by video duration and recommending the tracking videos to the user for viewing in the sorted order.
  10. A panoramic video processing apparatus, characterized in that the apparatus comprises:
    a panoramic video obtaining unit, configured to obtain a panoramic video;
    a point-of-interest tracking unit, configured to track a point-of-interest target in the panoramic video to obtain corresponding tracking information;
    a playback view angle determining unit, configured to determine, according to the tracking information, a playback view angle corresponding to the point-of-interest target; and
    a point-of-interest video generating unit, configured to generate, according to the playback view angle corresponding to the point-of-interest target and the panoramic video, a tracking video corresponding to the point-of-interest target.
  11. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that, when the processor executes the computer program, the steps of the method according to any one of claims 1 to 7 are implemented.
  12. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 7 are implemented.
PCT/CN2021/070675 2020-01-07 2021-01-07 Panoramic video processing method, apparatus, device, and storage medium WO2021139728A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010015749.9A CN111182218A (zh) 2020-01-07 2020-01-07 Panoramic video processing method, apparatus, device, and storage medium
CN202010015749.9 2020-01-07

Publications (1)

Publication Number Publication Date
WO2021139728A1 true WO2021139728A1 (zh) 2021-07-15

Family

ID=70652548

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/070675 WO2021139728A1 (zh) 2020-01-07 2021-01-07 全景视频处理方法、装置、设备及存储介质

Country Status (2)

Country Link
CN (1) CN111182218A (zh)
WO (1) WO2021139728A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023116669A1 (zh) * 2021-12-22 2023-06-29 华为技术有限公司 Video generation system and method, and related apparatus
CN116567294A (zh) * 2023-05-19 2023-08-08 上海国威互娱文化科技有限公司 Panoramic video segmentation processing method and system

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111182218A (zh) * 2020-01-07 2020-05-19 影石创新科技股份有限公司 全景视频处理方法、装置、设备及存储介质
CN113473244A (zh) * 2020-06-23 2021-10-01 青岛海信电子产业控股股份有限公司 一种自由视点视频播放控制方法及设备
CN113472999B (zh) * 2020-09-11 2023-04-18 青岛海信电子产业控股股份有限公司 一种智能设备及其控制方法
CN115529451A (zh) * 2021-06-25 2022-12-27 北京金山云网络技术有限公司 数据的传输方法及装置、存储介质、电子设备
CN116366982A (zh) * 2021-12-27 2023-06-30 影石创新科技股份有限公司 视频处理方法、装置、计算机设备及存储介质
CN116862946A (zh) * 2022-03-25 2023-10-10 影石创新科技股份有限公司 运动视频生成方法、装置、终端设备以及存储介质
CN117197195B (zh) * 2023-09-15 2024-04-26 南京芯传汇电子科技有限公司 一种基于目标识别和跟踪的视频处理方法与系统

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105847379A (zh) * 2016-04-14 2016-08-10 乐视控股(北京)有限公司 全景视频运动方向追踪方法及追踪装置
CN105843541A (zh) * 2016-03-22 2016-08-10 乐视网信息技术(北京)股份有限公司 全景视频中的目标追踪显示方法和装置
US9947108B1 (en) * 2016-05-09 2018-04-17 Scott Zhihao Chen Method and system for automatic detection and tracking of moving objects in panoramic video
CN110225402A (zh) * 2019-07-12 2019-09-10 青岛一舍科技有限公司 智能保持全景视频中兴趣目标时刻显示的方法及装置
CN110324641A (zh) * 2019-07-12 2019-10-11 青岛一舍科技有限公司 全景视频中保持兴趣目标时刻显示的方法及装置
CN110570448A (zh) * 2019-09-07 2019-12-13 深圳岚锋创视网络科技有限公司 一种全景视频的目标追踪方法、装置及便携式终端
CN111182218A (zh) * 2020-01-07 2020-05-19 影石创新科技股份有限公司 全景视频处理方法、装置、设备及存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170180680A1 (en) * 2015-12-21 2017-06-22 Hai Yu Object following view presentation method and system
CN107888987B (zh) * 2016-09-29 2019-12-06 华为技术有限公司 一种全景视频播放方法及装置
CN106961597B (zh) * 2017-03-14 2019-07-26 深圳Tcl新技术有限公司 全景视频的目标追踪显示方法及装置
CN109698952B (zh) * 2017-10-23 2020-09-29 腾讯科技(深圳)有限公司 全景视频图像的播放方法、装置、存储介质及电子装置
CN107872731B (zh) * 2017-11-22 2020-02-21 三星电子(中国)研发中心 全景视频播放方法及装置
CN108848304B (zh) * 2018-05-30 2020-08-11 影石创新科技股份有限公司 一种全景视频的目标跟踪方法、装置和全景相机
CN109275020A (zh) * 2018-08-31 2019-01-25 惠州Tcl移动通信有限公司 存储器、移动终端及降低其存储视频的容量的方法


Also Published As

Publication number Publication date
CN111182218A (zh) 2020-05-19

Similar Documents

Publication Publication Date Title
WO2021139728A1 (zh) Panoramic video processing method, apparatus, device, and storage medium
US10771760B2 (en) Information processing device, control method of information processing device, and storage medium
CN109326310B (zh) Automatic video editing method and apparatus, and electronic device
JP7013139B2 (ja) Image processing apparatus, image generation method, and program
US10110850B1 Systems and methods for directing content generation using a first-person point-of-view device
CN112165590B (zh) Video recording implementation method and apparatus, and electronic device
JP6558587B2 (ja) Information processing apparatus, display apparatus, information processing method, program, and information processing system
CN107105315A (zh) Live streaming method, live streaming method for host client, host client, and device
US20120120201A1 (en) Method of integrating ad hoc camera networks in interactive mesh systems
CN105939481A (zh) Interactive three-dimensional virtual reality video program recording and live broadcasting method
US8484223B2 (en) Image searching apparatus and image searching method
US20230040548A1 (en) Panorama video editing method,apparatus,device and storage medium
JP2020174345A (ja) 画像を取り込むシステムおよびカメラ機器
JP2020086983A (ja) 画像処理装置、画像処理方法、及びプログラム
JP2010232814A (ja) 映像編集プログラムおよび映像編集装置
CN106470313A (zh) 影像产生系统及影像产生方法
CN110351579B (zh) 一种视频的智能剪辑方法
CN111741325A (zh) 视频播放方法、装置、电子设备及计算机可读存储介质
US11622099B2 (en) Information-processing apparatus, method of processing information, and program
JP5532645B2 (ja) Video editing program and video editing apparatus
CN112166599A (zh) Video editing method and terminal device
US20150375109A1 (en) Method of Integrating Ad Hoc Camera Networks in Interactive Mesh Systems
JP7282519B2 (ja) Image processing apparatus or image processing server
CN113971693A (zh) Live streaming picture generation method, system, apparatus, and electronic device
JP7375542B2 (ja) Control apparatus, control system, and control program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21738159

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09-12-2022)

122 Ep: pct application non-entry in european phase

Ref document number: 21738159

Country of ref document: EP

Kind code of ref document: A1