WO2022012585A1 - Video sharing and acquisition method, server, terminal device and medium

Video sharing and acquisition method, server, terminal device and medium

Info

Publication number
WO2022012585A1
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
video data
shooting
server
terminals
Prior art date
Application number
PCT/CN2021/106226
Other languages
English (en)
French (fr)
Inventor
金出武雄
黄锐
Original Assignee
深圳市人工智能与机器人研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市人工智能与机器人研究院
Priority to US18/014,158 (publication US20230262270A1)
Priority to JP2023501650A (publication JP7456060B2)
Publication of WO2022012585A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21: Server components or server architectures
    • H04N21/218: Source of audio or video content, e.g. local disk arrays
    • H04N21/2187: Live feed
    • H04N21/21805: Source of audio or video content enabling multiple viewpoints, e.g. using a plurality of cameras
    • H04N21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23424: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258: Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25808: Management of client data
    • H04N21/25841: Management of client data involving the geographical location of the client
    • H04N21/27: Server based end-user applications
    • H04N21/274: Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N21/2743: Video hosting of uploaded data from client
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41: Structure of client; Structure of client peripherals
    • H04N21/414: Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/41407: Specialised client platforms embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • H04N21/422: Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223: Cameras
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312: Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316: Generation of visual interfaces for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N21/45: Management operations performed by the client for facilitating the reception of or the interaction with the content, or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508: Management of client data or end-user data
    • H04N21/4524: Management of client data or end-user data involving the geographical location of the client
    • H04N21/47: End-user applications
    • H04N21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788: Supplemental services communicating with other users, e.g. chatting
    • H04N21/60: Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client
    • H04N21/65: Transmission of management data between client and server
    • H04N21/658: Transmission by the client directed to the server
    • H04N21/6582: Data stored in the client, e.g. viewing habits, hardware capabilities, credit card number
    • H04N21/6587: Control parameters, e.g. trick play commands, viewpoint selection

Definitions

  • the present application relates to the field of electronic technology, and in particular, to a video sharing and acquisition method, server, terminal device and medium.
  • the existing technologies mainly include the following types.
  • Live-streaming websites: videos are mainly shared in real time by a host through a live-streaming platform.
  • Video surveillance networks: this scenario is generally used in the security field. A video surveillance network is composed of multiple fixed cameras, and a terminal is then used to watch, in real time, the video images captured by each camera in the surveillance network.
  • When multiple video sources are shared with the same user, that user must record the addresses, user names, and passwords of all video sources, which is very inconvenient.
  • This scenario is not conducive to video sharing.
  • the camera of the video surveillance network is fixed, and the captured content cannot be changed flexibly.
  • Embodiments of the present application provide a video sharing and acquisition method, server, terminal device and medium, which are used to solve the problem of insufficient flexibility in video sharing.
  • the present application provides a video sharing method, including:
  • the server obtains video data and position points from N shooting terminals respectively, where the video data records the video shot by the shooting terminal, the position point records the position of the shooting terminal when it obtained the video data, and N is a positive integer greater than 1;
  • the server sends the position points obtained by the N shooting terminals to the M viewing terminals, where M is a positive integer greater than 1;
  • the server obtains a target position point from the Qth terminal, where the Qth terminal is one of the M viewing terminals, 1≤Q≤M, and the target position point is one of the position points obtained by the N shooting terminals;
  • the server sends the Jth video data to the Qth terminal according to the target position point, where the Jth video data is the video data shot by the Jth terminal at the target position point, the Jth terminal is one of the N shooting terminals, and 1≤J≤N.
  • an embodiment of the present application further provides a video acquisition method, including: the viewing terminal acquires at least one location point from the server, where the location points are uploaded to the server by N shooting terminals after shooting video data in the target area, and N is a positive integer greater than 1; the viewing terminal displays a map interface of the target area on its display interface, and the map interface includes the at least one location point; the viewing terminal obtains a target location point selected by the user from the map interface, the target location point being one of the at least one location point; the viewing terminal sends the target location point to the server; and the viewing terminal obtains the Jth video data from the server, where the Jth video data is the video data shot by the Jth terminal at the target location point in the target area, the Jth terminal is one of the N shooting terminals, and 1≤J≤N.
  • an embodiment of the present application further provides a video sharing device, including:
  • an acquisition unit, which is used to acquire video data and a position point respectively from N shooting terminals, where the video data records the video shot by the shooting terminal, the position point records the position of the shooting terminal when it acquired the video data, and N is a positive integer greater than 1;
  • the sending unit is configured to send the position points acquired by the acquisition unit from the N shooting terminals to the M viewing terminals, where M is a positive integer greater than 1;
  • the obtaining unit is further configured to obtain a target position point from the Qth terminal, where the Qth terminal is one of the M viewing terminals, 1≤Q≤M, and the target position point is one of the position points obtained by the N shooting terminals;
  • the sending unit is further configured to send the Jth video data to the Qth terminal according to the target position point obtained by the obtaining unit, where the Jth video data is the video data shot by the Jth terminal at the target position point, the Jth terminal is one of the N shooting terminals, and 1≤J≤N.
  • the sending unit is further configured to: send identification information of R shooting terminals to the Qth terminal, where the R shooting terminals are the terminals, among the N shooting terminals, that have shot video data at the target position point, R≤N, and the identification information is used to mark the corresponding shooting terminal;
  • the obtaining unit is further configured to: obtain identification information of a target terminal from the Qth terminal, where the target terminal is one of the R shooting terminals;
  • the sending unit is further configured to: send the target video data shot by the target terminal at the target position to the Qth terminal.
  • the device further includes a first splicing unit, and when the server determines, according to a first preset rule, that the Jth terminal is a popular terminal at the target location point and the Jth terminal is not the target terminal, the first splicing unit is used for:
  • splicing the Jth video data and the target video data into recommended video data;
  • the sending unit is further configured to send the recommended video data to the Qth terminal.
  • the device further includes a modeling unit and a comparison unit, where the modeling unit is used for: acquiring an environment model, and the environment model is used to record the shooting environments of the N shooting terminals;
  • the comparison unit is used for comparing the video data respectively shot by the N shooting terminals with the environment model obtained by the modeling unit, and determining the shooting angles of view of the N shooting terminals, where, optionally, the shooting angle of view is the angle of the terminal when shooting;
  • the sending unit is further configured to: send the shooting angles of view of the N shooting terminals to the M viewing terminals;
  • the obtaining unit is further configured to: obtain a target shooting angle of view from the Qth terminal, where the target shooting angle is one of the shooting angles of the N shooting terminals;
  • the sending unit is further configured to: send the Jth video data to the Qth terminal according to the target shooting angle of view, where the Jth video data is video data captured by the Jth terminal at the target shooting angle of view.
  • the device also includes a second splicing unit, and when the server determines that the target location point is a popular location point according to a second preset rule, the second splicing unit is used for:
  • P pieces of video data shot at the target location point are spliced into one piece of panoramic video data according to the shooting angles of view of the P pieces of video data, and the shooting pictures in the P pieces of video data are recorded in the panoramic video data, where P is a positive integer greater than 1;
  • the sending unit is also used to:
  • when the server acquires the target position point from the Qth terminal, the server sends the panoramic video data to the Qth terminal.
  • the obtaining unit is further configured to: obtain a time point from the N shooting terminals respectively, where the time point is used to record the time when the shooting terminal obtains the video data;
  • the sending unit is further configured to: send the time points obtained by the N shooting terminals to the M viewing terminals;
  • the obtaining unit is further configured to: obtain a target time point from the Qth terminal, where the target time point is one of the time points obtained by the N shooting terminals;
  • the sending unit is further configured to: send the Jth video data to the Qth terminal according to the target time point, where the Jth video data is video data acquired by the Jth terminal at the target time point.
  • the device further includes a third splicing unit, and the obtaining unit is further used for obtaining S pieces of video data from the video data sent by the N shooting terminals, where the S pieces of video data are all the video data shot at the target position point within a target time period;
  • the third splicing unit is used for splicing together the features recorded in the S pieces of video data to obtain fused video data, and the fused video data records all the features shot at the target position point within the target time period;
  • the sending unit is further configured to send the fused video data to the Qth terminal.
  • an embodiment of the present application further provides a video acquisition device, including:
  • a first acquisition unit, which is used to acquire at least one position point from the server, where the position points are uploaded to the server by N shooting terminals after shooting video data in the target area, and N is a positive integer greater than 1;
  • the display unit is used to display a map interface on a display interface, the map interface is a map interface of the target area, and the map interface includes the at least one location point acquired by the first acquisition unit;
  • a second acquisition unit which is used to acquire a target location point selected by the user from the map interface, where the target location point is one of the at least one location point;
  • a sending unit which is used to send the target location point obtained by the second obtaining unit to the server;
  • the first obtaining unit is further configured to obtain the Jth video data from the server, where the Jth video data is the video data shot by the Jth terminal at the target position point in the target area, the Jth terminal is one of the N shooting terminals, and 1≤J≤N.
  • the first obtaining unit is further configured to obtain the identification information of R shooting terminals from the server, where the R shooting terminals are the shooting terminals, among the N shooting terminals, that shot video data at the target position point, and 1≤R≤N;
  • the display unit is further configured to display the identification information of the R shooting terminals on the display interface;
  • the second obtaining unit is further configured to obtain the identification information of the Jth terminal selected by the user, where the Jth terminal is one of the R shooting terminals;
  • the sending unit is further configured to send, by the viewing terminal, the identification information of the Jth terminal to the server.
  • the first obtaining unit is further configured to: obtain the shooting angles of view of the N shooting terminals from the server;
  • the display unit is further configured to display the shooting angles of view of the R shooting terminals on the display interface of the viewing terminal;
  • the second obtaining unit is further configured to obtain a target shooting angle of view selected by the user, where the target shooting angle of view is one of the shooting angles of the R shooting terminals;
  • the sending unit is further configured to send the target shooting angle of view to the server, where the target shooting angle of view is used to request the server to send video data shot by the shooting terminal at the target shooting angle of view.
  • the first obtaining unit is further configured to: obtain at least one time point from the server, where the at least one time point is a time point when the N shooting terminals shoot video data;
  • the display unit is further configured to display the at least one time point on the display interface;
  • the second obtaining unit is further configured to obtain a target time point selected by the user, where the target time point is one of the at least one time point;
  • the sending unit is further configured to send the target time point to the server, where the target time point is used to request the server to send the video shot by the shooting terminal at the target time point.
  • an embodiment of the present application further provides a server, including: an interaction device, an input/output (I/O) interface, a processor, and a memory, where program instructions are stored in the memory; the interaction device is configured to obtain operation instructions input by the user; and the processor is configured to execute the program instructions stored in the memory and perform the video sharing method described in any of the above.
  • an embodiment of the present application further provides a terminal device, including: an interaction device, an input/output (I/O) interface, a processor, and a memory, where program instructions are stored in the memory; the interaction device is configured to obtain operation instructions input by the user; and the processor is configured to execute the program instructions stored in the memory and perform the video acquisition method described in any of the above.
  • an embodiment of the present application further provides a computer-readable storage medium storing instructions that, when executed on a computer device, cause the computer device to execute any one of the above video sharing methods.
  • an embodiment of the present application further provides a computer-readable storage medium storing instructions that, when executed on a computer device, cause the computer device to execute any one of the above video acquisition methods.
  • the embodiments of the present application have the following advantages:
  • An embodiment of the present application provides a video sharing method, which includes: a server obtains video data and position points from N shooting terminals respectively, where the video data records the video shot by the shooting terminal, the position point records the position of the shooting terminal when it obtained the video data, and N is a positive integer greater than 1; the server sends the position points obtained by the N shooting terminals to M viewing terminals, where M is a positive integer greater than 1; the server obtains a target position point from the Qth terminal, where the Qth terminal is one of the M viewing terminals, 1≤Q≤M, and the target position point is one of the position points obtained by the N shooting terminals; and the server sends the Jth video data to the Qth terminal according to the target position point, where the Jth video data is the video data shot by the Jth terminal at the target position point, the Jth terminal is one of the N shooting terminals, and 1≤J≤N.
  • In this way, multiple shooting terminals can share video images with multiple viewing terminals, and the users of the viewing terminals can independently select the video images shared by shooting terminals located at different positions, thereby realizing a many-to-many video sharing mode that can be applied to various scenarios and improves the richness and convenience of video sharing.
  • FIG. 1 is a system architecture diagram of a live-streaming website;
  • FIG. 2 is a system architecture diagram of a video surveillance network;
  • FIG. 3 is a system architecture diagram of a video sharing method provided by an embodiment of the present application;
  • FIG. 4a is a schematic diagram of an embodiment of a video sharing method provided by an embodiment of the present application;
  • FIG. 4b is a schematic diagram of an embodiment of a video sharing method provided by an embodiment of the present application;
  • FIG. 5a is a schematic diagram of another embodiment of a video sharing method provided by an embodiment of the present application;
  • FIG. 5b is a schematic diagram of another embodiment of a video sharing method provided by an embodiment of the present application;
  • FIG. 6 is a schematic diagram of another embodiment of a video sharing method provided by an embodiment of the present application;
  • FIG. 7a is a schematic diagram of another embodiment of a video sharing method provided by an embodiment of the present application;
  • FIG. 7b is a schematic diagram of another embodiment of a video sharing method provided by an embodiment of the present application;
  • FIG. 8a is a schematic diagram of another embodiment of a video sharing method provided by an embodiment of the present application;
  • FIG. 8b is a schematic diagram of another embodiment of a video sharing method provided by an embodiment of the present application;
  • FIG. 9a is a schematic diagram of another embodiment of a video sharing method provided by an embodiment of the present application;
  • FIG. 9b is a schematic diagram of another embodiment of a video sharing method provided by an embodiment of the present application;
  • FIG. 10a is a schematic diagram of another embodiment of a video sharing method provided by an embodiment of the present application;
  • FIG. 10b is a schematic diagram of another embodiment of a video sharing method provided by an embodiment of the present application;
  • FIG. 11a is a schematic diagram of a network architecture of a video sharing method provided by an embodiment of the present application;
  • FIG. 11b is a schematic diagram of a system architecture of a video sharing method provided by an embodiment of the present application;
  • FIG. 12 is a schematic diagram of a server provided by an embodiment of the present application;
  • FIG. 13 is a schematic diagram of another video sharing apparatus provided by an embodiment of the present application;
  • FIG. 14 is a schematic diagram of another video acquisition apparatus provided by an embodiment of the present application.
  • Embodiments of the present application provide a video sharing and acquisition method, a server, a terminal device, and a medium.
  • video sharing and transmission technologies mainly include the following types.
  • Live-streaming websites: this type of website is mainly used by a host to share videos in real time through the live-streaming platform.
  • This type of live-streaming website usually takes individuals as the main body and realizes one-to-many video sharing.
  • As shown in FIG. 1, the live broadcast website architecture includes a live broadcast terminal 101 and multiple viewing terminals 102. The live broadcast terminal 101 captures and generates shot images and shares them with the multiple viewing terminals 102, so that the viewing terminals 102 can watch the images.
  • However, this architecture offers only a single live broadcast perspective, which limits its richness and practical value.
  • A video surveillance network, which is generally used in the security field, consists of a plurality of fixed cameras 201, and a terminal 203 watches in real time the video images shot by each camera 201 in the surveillance network.
  • As shown in FIG. 2, the architecture of the video surveillance network includes multiple cameras 201 fixedly arranged at different locations, and a server 202 connected to the cameras 201.
  • The cameras 201 upload the shot videos to the server 202, the server 202 sends these videos to the terminal 203, and the terminal 203 watches the video images in a unified manner, which is a many-to-one video sharing mode. Video sharing can also be implemented in this working mode.
  • However, a user who wants to share the video needs to have the authority to access the server 202, and the user being shared with must record the addresses, user names, and passwords of all video sources and go through strict authentication steps, which is very inconvenient.
  • This scenario is not conducive to video sharing.
  • In addition, the cameras 201 of the video surveillance network are fixed, so the captured content cannot be changed flexibly and mobility is poor.
  • an embodiment of the present application provides a video sharing method, in which video data is collected by multiple shooting terminals, and then shared to multiple viewing terminals, thereby realizing a many-to-many video sharing method.
  • the methods provided by the embodiments of the present application may be used for real-time live video sharing, and may also be used for recorded and broadcast video sharing, which are not limited in the embodiments of the present application.
  • As shown in FIG. 3, the working system architecture of the method provided by the embodiments of the present application includes the following components.
  • Shooting terminals 301: the number of shooting terminals 301 can be multiple; in specific work, a shooting terminal 301 can be a smart phone, a tablet computer, a smart watch, smart glasses, a camera with communication functions, or another intelligent terminal capable of acquiring video data, such as a vehicle with communication and photographing capabilities, which is not limited in the embodiments of the present application.
  • the users of these shooting terminals 301 are users who need to share videos. These users can move freely with the shooting terminals 301 and shoot the content they want to share in different scenarios.
  • the server 302 is used for acquiring the video data uploaded by all the shooting terminals 301, and performing unified scheduling on the video data.
  • Viewing terminals 303: as shown in FIG. 3, the number of viewing terminals 303 may be multiple; in specific work, a viewing terminal 303 may be a smart phone, a tablet computer, a desktop computer, a smart watch, smart glasses, or a virtual reality (VR) device, which is not limited in the embodiments of the present application.
  • These viewing terminals 303 are used for viewing videos shared by the shooting terminal 301 , and the viewing terminal 303 sends a request to the server 302 to obtain the video data sent by the server 302 , and the video data is shot by the shooting terminal 301 .
  • Embodiment 1 of the video sharing method provided by the embodiments of the present application includes the following steps.
  • 401: The N shooting terminals each acquire their respective video data and position points.
  • For example, FIG. 4b is a plane map of a venue, where the location points A, B, C, etc. on the map are the shooting locations of the shooting terminals in the venue, and N shooting terminals perform video shooting in the venue, where N is a positive integer greater than 1. The Jth terminal is one of the N shooting terminals and shoots video data at point A on the map; at this time, the Jth terminal obtains the Jth video data and the position point A.
  • 402: The server obtains the video data and position points from the N shooting terminals respectively.
  • the N shooting terminals send their acquired video data and position points to the server.
  • For example, the Jth terminal sends the Jth video data and the position point A to the server, and the other terminals among the N shooting terminals do the same.
  • the server records the correspondence between the video data and the location points, so as to know at which location point each video data was shot.
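  • To make this bookkeeping concrete, the following minimal Python sketch (an illustration only, with hypothetical names; the patent does not specify an implementation) shows how a server could record the correspondence and serve lookups by position point:

```python
from collections import defaultdict

class VideoSharingServer:
    """Minimal sketch of the server-side bookkeeping in steps 402-406."""

    def __init__(self):
        # position point -> list of (terminal_id, video_data) shot there
        self.videos_by_position = defaultdict(list)

    def receive_upload(self, terminal_id, position_point, video_data):
        # Step 402: record the correspondence between video data and position.
        self.videos_by_position[position_point].append((terminal_id, video_data))

    def list_position_points(self):
        # Step 403: the position points sent to the M viewing terminals.
        return list(self.videos_by_position.keys())

    def videos_at(self, target_position):
        # Steps 405-406: look up the video data shot at the target position point.
        return self.videos_by_position[target_position]
```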
  • 403: The server sends the position points acquired by the N shooting terminals to the M viewing terminals.
  • M is a positive integer greater than 1; the server sends the acquired location points to the multiple viewing terminals, so that these viewing terminals can know at which location points video data shot by the shooting terminals can currently be seen.
  • The interface of a viewing terminal may be as shown in FIG. 4b, which is a map of an exhibition hall with some location points marked on it; these location points are the ones sent by the server to the viewing terminal, so that through this interface the user can intuitively know at which points in the exhibition hall video data shot by the shooting terminals can be seen.
  • 404: The Qth terminal acquires the target location point selected by the user.
  • the Qth terminal is one of the M viewing terminals.
  • For example, the interactive interface of the Qth terminal displays the interface shown in FIG. 4b to the user; if the user clicks location point A on the interface, the Qth terminal takes location point A as the target location point and sends it to the server.
  • 405: The server acquires the target location point from the Qth terminal.
  • the Qth terminal sends the target location point to the server, so that the server knows the video data that the Qth terminal expects to obtain.
  • 406: The server sends the Jth video data to the Qth terminal according to the target location point.
  • Specifically, according to the correspondence between video data and position points recorded in step 402, the server determines that the Jth video data was shot at the target position point and sends the Jth video data to the Qth terminal.
  • From the above flow, the server obtains video data and position points from N shooting terminals respectively, where the video data records the video shot by the shooting terminal, the position point records the position of the shooting terminal when it obtained the video data, and N is a positive integer greater than 1; the server sends the position points obtained by the N shooting terminals to M viewing terminals, where M is a positive integer greater than 1; the server obtains a target position point from the Qth terminal, where the Qth terminal is one of the M viewing terminals, 1≤Q≤M, and the target position point is one of the position points obtained by the N shooting terminals; and the server sends the Jth video data to the Qth terminal according to the target position point, where the Jth video data is the video data shot by the Jth terminal at the target position point, the Jth terminal is one of the N shooting terminals, and 1≤J≤N.
  • It should be noted that each shooting terminal may shoot video data at different positions, and multiple shooting terminals may shoot video data at the same position. Therefore, the target position point may have video data shot by multiple shooting terminals available for viewing; in this case, it is necessary to confirm with the user of the viewing terminal which video to watch. This situation is described below.
  • the second embodiment of the video sharing method provided by the embodiment of the present application includes the following steps.
  • 501: The N shooting terminals each acquire their respective video data and position points.
  • 502: The server obtains video data, location points, and identification information from the N shooting terminals respectively.
  • For the specific working method by which the server obtains the video data and the position points from the N shooting terminals, refer to step 402 above, which is not repeated here. Further, the N shooting terminals also send their respective identification information to the server, and the identification information uniquely identifies each shooting terminal.
  • 503: The server sends the position points and identification information obtained from the N shooting terminals to the M viewing terminals.
  • For the step of the server sending the location points to the M viewing terminals, refer to the description of step 403 above, which is not repeated here. Further, the server also sends the acquired identification information to the M viewing terminals.
  • Optionally, when the server sends the identification information of the N shooting terminals to the M viewing terminals, it can also send description information of the N shooting terminals. The description information is the information uploaded when a shooting terminal registers with the server, such as the shooting terminal's nickname, text introduction, or avatar; when sending the identification information, the server simultaneously obtains the description information of the shooting terminal identified by the identification information and sends the description information to the M viewing terminals respectively.
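  • As an illustration, the description information could be modeled as a small record attached to each identification (a hypothetical sketch; the field names are assumptions, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class ShootingTerminalProfile:
    """Description information uploaded when a shooting terminal registers."""
    terminal_id: str      # identification information, unique per terminal
    nickname: str = ""    # nickname shown to viewing-terminal users
    text_intro: str = ""  # short text introduction
    avatar: str = ""      # avatar image reference

# The server stores the profile at registration and sends it to viewing
# terminals together with the identification information.
profiles = {"terminal-J": ShootingTerminalProfile("terminal-J", nickname="hall-cam")}
```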
  • 504: The Qth terminal acquires the target location point and the target terminal selected by the user.
  • The Qth terminal is one of the M viewing terminals, and its interactive interface displays the interface shown in FIG. 5b to the user. The user clicks a position point; for example, if the user clicks position point A in the interface, the Qth terminal takes position point A as the target position point.
  • Further, the Qth terminal displays an option menu 5001 on the interactive interface; the option menu displays the identification information of the R shooting terminals that can be selected at position point A, where R is a positive integer less than or equal to N.
  • description information of the R shooting terminals may be displayed in the option menu 5001, and the description information may include information such as nicknames, text introductions, or avatars, so that the user of the viewing terminal can make a decision and select a target terminal to watch.
  • In this example, the Qth terminal obtains position point A as the user-selected target position point, and the Jth terminal as the target terminal.
  • 505: The server acquires the target location point and the identification information of the target terminal from the Qth terminal.
  • In step 504, the target position point selected by the user and obtained by the Qth terminal is position point A, and the target terminal is the Jth terminal; the Qth terminal therefore sends position point A and the identification information of the Jth terminal to the server, so that the server knows that the Qth terminal expects to obtain the video data shot by the Jth terminal at position point A.
  • 506: The server sends the target video data shot by the target terminal at the target location point to the Qth terminal.
  • In this example, the target position point selected by the Qth terminal is position point A and the target terminal is the Jth terminal, so the server sends to the Qth terminal the Jth video data shot by the Jth terminal at position point A, thereby completing the video acquisition.
  • In this embodiment, multiple shooting terminals may shoot video data at the same location; the identification information allows the user of the viewing terminal to choose among them, and it may further be accompanied by description information. This gives video sharing a social attribute: the user of the viewing terminal can select a shooting terminal of interest according to the description information and watch its video.
  • Embodiment 3 of the video sharing method provided by the embodiment of the present application includes the following steps.
  • For steps 601 to 602, refer to steps 401 to 402 above, which are not repeated here.
  • 603: The server sends the position points acquired by the N shooting terminals to the M viewing terminals.
  • For the specific implementation of this step, refer to step 403 above. It should be noted that the location points sent by the server to the M viewing terminals may be regarded as options available for subscription, informing the M viewing terminals that by selecting these position points they can obtain the video data shot by the shooting terminals there.
  • The server sends the position points acquired by the N shooting terminals to the M viewing terminals, and the M viewing terminals respectively send target position points back to the server; these target position points can be regarded as a kind of subscription information, used to subscribe from the server to the video data shot by the shooting terminals at the target location points.
  • the server can count the popularity of each shooting terminal according to the first preset rule.
  • the server determining that the Jth terminal is a popular terminal at the target location according to the first preset rule may include the following steps.
  • The server obtains the number of times each of the R shooting terminals at the target location point has been selected by viewing-terminal users, and sorts the terminals by these counts.
  • The server takes the top n of the R shooting terminals as popular terminals, where n is a positive integer greater than or equal to 1 that can be defined by the developer; this is not limited in this application.
  • For example, when n = 1, if the Jth terminal is the terminal most selected by viewing terminals among the R shooting terminals corresponding to the target position point, the Jth terminal is determined to be the popular terminal of the target position point.
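  • A minimal sketch of this counting is shown below (illustrative Python; the patent fixes no algorithm beyond counting and sorting). The same counting applied to position points instead of terminals implements the second preset rule of the fifth embodiment (popular location points):

```python
from collections import Counter

def top_n_popular(selections, n=1):
    """First preset rule (sketch): given the ids selected by viewing-terminal
    users at a target position point, return the n most-selected ids."""
    counts = Counter(selections)  # how often each id was selected
    return [item for item, _ in counts.most_common(n)]

# Example: terminal "J" was selected most often, so it is the popular terminal.
assert top_n_popular(["J", "K", "J", "L", "J"]) == ["J"]
```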
  • 604: The Qth terminal acquires the target location point and the target terminal selected by the user.
  • For the specific manner in which the Qth terminal acquires the user-selected target position point, refer to step 504 or 404 above, which is not repeated here. If the target location point selected by the user corresponds to video data from multiple shooting terminals, the interface shown in FIG. 5b is displayed on the interactive interface of the Qth terminal; for example, when the user clicks position point A, the Qth terminal uses position point A as the target position point.
  • Further, the Qth terminal displays an option menu on the interactive interface, and the option menu displays the identification information of the R shooting terminals available for selection at position point A, where R is a positive integer less than or equal to N.
  • description information of the R shooting terminals may be displayed in the option menu, and the description information may include information such as nicknames, text introductions, or avatars, so that the user of the viewing terminal can make a decision and select a target terminal to watch.
  • Further, the description information in the option menu can also be used to remind the user that a certain shooting terminal in the menu is a popular terminal; for example, a mark can be added after that terminal.
  • 605: The server acquires the target location point and the identification information of the target terminal from the Qth terminal.
  • the server sends the video data shot by the target terminal at the target position to the Qth terminal according to the target position and the identification information of the target terminal.
  • Optionally, if the Jth terminal among the above R shooting terminals is a popular terminal and the user does not select the Jth terminal as the target terminal, the following step 606 is performed.
  • 606: The server splices the Jth video data and the target video data into recommended video data.
  • The Jth video data is the video data shot by the Jth terminal, the popular terminal at the target location point, and the target video data is the video data shot by the target terminal selected by the user at the target location point.
  • In the recorded-broadcast scenario, the video data is not real-time, and the server sends video data stored locally on the server to the viewing terminal. In this case, the server edits the video: the Jth video data and the target video data are spliced together. For example, if the total duration of the recommended video data is 20 seconds, the first ten seconds use the video content of the target video data, and the last ten seconds use the video content of the Jth video data.
  • In the live broadcast scenario, the video data is transmitted in real time, and the server directly sends the live video stream to the viewing device. In this case, video splicing means switching between live video streams: after the target terminal's video stream has played for a preset duration, the server switches to the video stream of the Jth terminal. This preset switching between the two video streams constitutes the recommended video stream. A minimal sketch of both cases follows.
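  • The sketch below (illustrative Python under simplifying assumptions: clips are equal-length frame lists, streams are opaque handles) contrasts the two splicing modes:

```python
def splice_recorded(target_clip, popular_clip):
    """Recorded broadcast (sketch): the recommended video plays the first
    half of the target video, then the popular terminal's video, e.g. the
    first ten seconds of a 20-second video from one and the last ten
    seconds from the other. Assumes clips of equal length."""
    half = len(target_clip) // 2
    return target_clip[:half] + popular_clip[half:]

def pick_live_stream(target_stream, popular_stream, switch_after_s, now_s):
    """Live broadcast (sketch): splicing is a switch between live streams
    after the target terminal's stream has played for a preset duration."""
    return target_stream if now_s < switch_after_s else popular_stream
```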
  • 607: The server sends the recommended video data to the Qth terminal.
  • The server sends the recommended video data to the Qth terminal, so that when watching the video, the user of the Qth terminal sees not only the video shot by the self-selected target terminal but also the video shot by the popular terminal. Since the video shot by the popular terminal has been selected by most users, it may also be liked by the user of the Qth terminal, thereby ensuring the transmission efficiency of the video and improving the user experience.
  • In this embodiment, the popularity of the multiple shooting terminals at the target location point is judged by the first preset rule, and videos shot by popular terminals are recommended to viewing-terminal users through video splicing, which improves the sharing efficiency of videos, ensures that viewing-terminal users see more popular videos, and improves the user experience.
  • different shooting terminals may take different shooting angles when shooting.
  • For example, position point A is in an exhibition hall: if a shooting terminal shoots to the left, the first scene can be seen; if it shoots to the right, the second scene can be seen. Although the videos are shot at the same location, the content they display is completely different because of the different shooting angles. Therefore, as a preferred implementation, in addition to the position point, the user of the viewing terminal should also be allowed to select the shooting angle at the position point, so that the user can choose from richer content. This situation is described in detail below with reference to the accompanying drawings.
  • the fourth embodiment of the video sharing method provided by the embodiment of the present application includes the following steps.
  • 701: The server obtains an environment model.
  • the environment model is used to record shooting environments of N shooting terminals.
  • For example, the environment in the exhibition hall needs to be modeled in advance to obtain the environment model.
  • Specifically, the depth data and environment data of each position in the exhibition hall can be obtained by combining a depth camera with an infrared camera, a visual camera, and the like.
  • Modeling methods for the environment model belong to the prior art, and those skilled in the art can choose any modeling method according to actual needs; this is not repeated in the embodiments of the present application.
  • 702: The N shooting terminals each acquire their respective video data and position points.
  • In this embodiment, the N shooting terminals use different shooting angles at different positions in the shooting environment to obtain their respective video data, where the shooting environment is the environment modeled by the environment model, such as the exhibition hall mentioned above.
  • 703: The server obtains the video data and position points from the N shooting terminals respectively.
  • 704: The server compares the video data shot by the N shooting terminals with the environment model, and determines the shooting angles of the N shooting terminals.
  • The scene information of the shooting environment is recorded in the video data shot by the N shooting terminals; by matching this information against the environment model, the server can know which scene in the shooting environment the current video data captures. Combined with the position point information of the video data, the server can know the shooting position and shooting angle of each piece of video data.
  • the server can implement the matching between the video data and the environment model through the visual positioning technology.
  • the visual positioning technology can be implemented in the following manner.
  • Step 1: When a terminal uploads video data, it usually attaches the GPS address of the place where the upload occurs. In this application, the positioning accuracy is required to reach the meter level; the devices used as shooting terminals are smartphones and cameras; and the application scenarios are large venues such as gymnasiums, museums, and schools, where there are many occlusions, a large flow of people, and a largely invariant surrounding environment. Therefore, GPS positioning technology is initially selected for rough positioning.
  • other positioning manners in the prior art may also be used, which are not limited in this embodiment of the present application.
  • Step 2: The target area is the area photographed by the shooting terminals. The area is divided into a plurality of sub-areas, and the latitude and longitude range of each sub-area is recorded; data collection is performed at intervals of m meters in each sub-area. Further, the 360° direction is also discretized into k angles, image information and pose information at the k angles are collected, and the camera pose corresponding to each picture is recorded.
  • The camera pose is defined as (n, xi, yi, zi, θi, φi, v), where the x-axis is defined as the true north direction, the y-axis as the true east direction, and the z-axis as perpendicular to the ground; n is the sub-area where the camera is located; xi, yi, zi are the offsets of the position point on each axis relative to the center of the sub-area; θi is the angle to the true north direction, with clockwise positive; φi is the angle to the z-axis, with clockwise positive; and v is the moving speed.
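  • Transcribed directly into code, the pose record could look like the following sketch (a direct rendering of the definition above; the patent does not prescribe a data format):

```python
from dataclasses import dataclass

@dataclass
class CameraPose:
    """Pose (n, xi, yi, zi, theta_i, phi_i, v): x-axis = true north,
    y-axis = true east, z-axis perpendicular to the ground."""
    n: int          # index of the sub-area the camera is in
    xi: float       # offset from the sub-area center along x (true north)
    yi: float       # offset along y (true east)
    zi: float       # offset along z (height)
    theta_i: float  # angle to true north, clockwise positive
    phi_i: float    # angle to the z-axis, clockwise positive
    v: float        # moving speed

# Discretization of step 2: the 360-degree direction split into k angles.
k = 8
collection_angles = [i * 360.0 / k for i in range(k)]
```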
  • Step 3: According to the rough GPS positioning information, it can first be determined in which of the sub-areas divided in step 2 the shooting terminal is located; this sub-area is the target sub-area.
  • the target sub-region is one of the multiple sub-regions acquired in step 2.
  • the image information in each sub-area is preset in the server.
  • For example, if the target area is an exhibition hall, the staff obtain a panoramic image of the exhibition hall as the image information of the whole hall; after the discretization process of step 2, the image information of each sub-area can be obtained.
  • Then, a matching algorithm is used to extract the overall scene feature of the image in the video frame, and this feature is compared for similarity with the features of the data set to find the image with the highest similarity score, denoted image I; this image I serves as the positioning photo of the shooting terminal.
  • the pose information of the preset images in each sub-area is measured when the images are preset, and the pose information includes the position information and the shooting angle of view information.
  • In this way, the video data uploaded by the shooting terminal carries pose information, and the server and the viewing terminal can know the pose of the video data according to this pose information.
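  • The matching step can be pictured with the following sketch (illustrative Python; it assumes features are L2-normalized vectors and uses cosine similarity, whereas the patent leaves the matching algorithm open):

```python
import numpy as np

def locate(frame_feature, dataset):
    """Find image I: the preset image of the target sub-area whose scene
    feature is most similar to the video frame's feature, and return the
    pose measured for that image."""
    best_pose, best_score = None, -1.0
    for image_feature, pose in dataset:  # dataset: [(feature, pose), ...]
        score = float(np.dot(frame_feature, image_feature))  # cosine similarity
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose  # serves as the shooting terminal's pose information
```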
  • 705: The server sends the position points and shooting angles of the N shooting terminals to the M viewing terminals.
  • The server sends the acquired position points and shooting angles to the multiple viewing terminals, so that these viewing terminals can know where the shooting terminals whose video data can currently be seen are located, and which shooting angles are available.
  • 706: The Qth terminal acquires the target location point and the target shooting angle selected by the user.
  • The interface displayed by the Qth terminal may be as shown in FIG. 7b, which is a map of an exhibition hall. The map is marked with some location points, which are the location points sent by the server to the viewing terminal, so that through this interface the user can intuitively know at which positions in the exhibition hall video data shot by the shooting terminals can be seen.
  • In the interface, position point A is a selectable position point; at this position point, the Jth terminal shoots towards the angle of view A1, and another shooting terminal shoots towards the angle of view A2. The first arrow 7001 represents the angle of view A1, and the second arrow 7002 represents the angle of view A2. If the user selects the first arrow 7001 after selecting position point A, the Qth terminal knows that the target position point selected by the user is position point A and the target shooting angle is the angle of view A1; that is, the video shot by the Jth terminal at position point A is selected.
  • 707: The server acquires the target location point and the target shooting angle from the Qth terminal.
  • the Qth terminal sends the target position point and the target shooting angle to the server, so that the server knows the information of the video data that the Qth terminal expects to obtain.
  • 708: The server sends the Jth video data to the Qth terminal according to the target shooting angle.
  • the Jth video data is video data obtained by the Jth terminal standing at the target position and photographing at the target shooting angle.
  • the server acquires the video shot by the shooting terminal, and visually recognizes the video according to the pre-established environment model, so as to know the shooting angle of each video.
  • the server sends the position points and shooting angles to the viewing terminals, so that the user of a viewing terminal can select not only the position point but also the shooting angle; this further refines the granularity of the video types the user can select, provides a more personalized range of choices, and further enhances the richness of video sharing.
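  • for ease of understanding, the following sketch shows how a server might key videos by (position point, shooting angle) so that a viewing terminal's selection in steps 706 to 708 resolves to one video; the index layout and names are illustrative assumptions:

```python
# Hypothetical server-side index: one entry per (position point,
# shooting angle), mapping to the id of the video shot there.
video_index = {
    ("A", "A1"): "video_from_terminal_J",
    ("A", "A2"): "video_from_other_terminal",
}

def handle_view_request(target_point, target_angle):
    """Return the video matching the viewer's selected position point
    and shooting angle, as in steps 706-708."""
    key = (target_point, target_angle)
    if key not in video_index:
        raise KeyError(f"no video shot at {target_point} from {target_angle}")
    return video_index[key]

# e.g. the Qth terminal selected position point A and angle A1:
print(handle_view_request("A", "A1"))  # -> video_from_terminal_J
```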
  • N shooting terminals shoot videos at different locations in the exhibition hall, and some locations, such as certain popular exhibition rooms, may attract many shooting terminals at the same time.
  • even when shooting at the same location, the shooting angle of each shooting terminal differs.
  • the videos of a popular location can therefore be spliced to generate a multi-angle panoramic video, so that users of the viewing terminals can watch a multi-angle panoramic video at one time.
  • the fifth embodiment of the video sharing method provided by the embodiment of the present application includes the following steps.
  • for steps 801 to 805, reference may be made to steps 701 to 705 above, which are not repeated here.
  • the server determines, according to the second preset rule, that the target location point is a popular location point.
  • the second preset rule may be any rule used to confirm popular locations.
  • the implementation of the second preset rule may include the following steps.
  • the server obtains the position points sent by the M viewing terminals.
  • the server counts and sorts the number of times each location point is selected by the M viewing terminals.
  • the server obtains the sorted top n position points as popular position points.
  • a location selected for viewing by more viewing terminals is determined to be a popular location, and n is a positive integer greater than or equal to 1, which is not limited in this application.
  • as an example, when n equals 1, the server takes the location point selected by the most viewers as the popular location point.
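  • the second preset rule described above reduces to a count-sort-take-top-n computation; a minimal sketch, assuming selections are reported as a flat list of position points:

```python
from collections import Counter

def popular_points(selected_points, n=1):
    """Second preset rule, sketched: count how often each location
    point was selected by the M viewing terminals, sort the counts,
    and return the top n as popular location points."""
    counts = Counter(selected_points)
    return [point for point, _ in counts.most_common(n)]

# selections reported by the viewing terminals:
history = ["A", "B", "A", "C", "A", "B"]
print(popular_points(history, n=1))  # -> ['A']
```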
  • the server acquires P pieces of video data shot by the R pieces of shooting terminals at the target position point.
  • the R shooting terminals are all shooting terminals that shoot videos at the target position point, P is a positive integer greater than 1, and P may or may not be equal to R.
  • the server stitches the P pieces of video data into one panoramic video data according to the shooting angles of the P pieces of video data.
  • the panoramic video data records the shooting pictures in the P pieces of video data.
  • for example, with P = 2, the server has acquired video data from two different angles at the target location, namely a first video and a second video.
  • the scene 8001 has two shooting angles, a first angle 8001a and a second angle 8001b.
  • the first video 8002 captures the scene from the first angle 8001a.
  • the second video 8003 captures the scene from the second angle 8001b.
  • the first video 8002 and the second video 8003 capture different parts of the scene.
  • the server fuses the first video 8002 and the second video 8003 according to their shooting angles to obtain one piece of panoramic video data: a third video 8004.
  • the third video 8004 records the pictures of both the first video 8002 and the second video 8003, covering the content of the two shooting angles at once.
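  • as one possible illustration of this splicing step, the sketch below stitches two videos of the same scene frame by frame with OpenCV's high-level stitcher; a real deployment would align using the known shooting angles, so this is only a sketch under the assumption that paired frames overlap enough to stitch:

```python
import cv2

def stitch_panoramic(path_a, path_b, out_path="panorama.avi"):
    """Stitch two same-length videos of one scene, shot from two
    angles, into a single panoramic video, frame by frame."""
    cap_a, cap_b = cv2.VideoCapture(path_a), cv2.VideoCapture(path_b)
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    writer, w, h = None, 0, 0
    while True:
        ok_a, frame_a = cap_a.read()
        ok_b, frame_b = cap_b.read()
        if not (ok_a and ok_b):
            break
        status, pano = stitcher.stitch([frame_a, frame_b])
        if status != cv2.Stitcher_OK:
            continue  # frames with too little overlap are skipped
        if writer is None:
            h, w = pano.shape[:2]
            writer = cv2.VideoWriter(out_path,
                                     cv2.VideoWriter_fourcc(*"XVID"),
                                     25.0, (w, h))
        # keep a constant frame size for the video writer
        writer.write(cv2.resize(pano, (w, h)))
    cap_a.release(), cap_b.release()
    if writer:
        writer.release()
```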
  • when the server acquires the target position point from the Qth terminal, the server sends the panoramic video data to the Qth terminal.
  • when the Qth terminal sends the target position point to the server, the server knows that the Qth terminal has selected a popular position point; the server then sends the panoramic video data corresponding to the target position point to the Qth terminal, so that the Qth terminal can see the panoramic video of the popular spot.
  • once the server has obtained the shooting angles of the videos shot by the shooting terminals, it further determines according to the second preset rule which position point is popular, and then splices the multiple videos of the popular position point according to their shooting angles into one panoramic video, so that viewing users can see the multi-angle content of a popular location at once through the panoramic video data, which improves the efficiency of video sharing and the richness of the video content.
  • the method provided by the embodiments of the present application can be used in a live-broadcast scenario or a recorded-broadcast scenario. In the recorded-broadcast scenario, the server stores the video data uploaded by the shooting terminals at different points in time, so the same position point may correspond to video data captured in different time periods; when the user selects such a position, the viewing terminal must confirm with its user which time point's video data to watch. For ease of understanding, this working mode is described in detail below with reference to the accompanying drawings.
  • the sixth embodiment of the video sharing method provided by the embodiment of the present application includes the following steps.
  • the N shooting terminals acquire respective video data, location points, and time points.
  • in addition to the video data and location points, the N shooting terminals also record the time point at which each piece of video data is captured.
  • the server obtains video data, a location point, and a time point from the N shooting terminals, respectively.
  • the N shooting terminals send the acquired video data, location point and time point to the server.
  • the server sends the position points and time points acquired by the N shooting terminals to the M viewing terminals.
  • M is a positive integer greater than 1; the server sends the acquired location points and time points to multiple viewing terminals, so that these viewing terminals can know at which location points video data can currently be seen, and which time points are available for selection at each location point.
  • the Qth terminal acquires the target location point and target time point selected by the user.
  • the Qth terminal is one of the M viewing terminals
  • the interactive interface of the Qth terminal displays the interface shown in Figure 9b; when the user clicks a position point, for example position point A, the Qth terminal takes position point A as the target position point.
  • if position point A has videos from multiple time periods, the interface further displays an option menu 9001 listing different time points, so that the user can select video data from a particular time point.
  • the server acquires the target time point and the target location point from the Qth terminal.
  • the Qth terminal sends the acquired target time point and target position point to the server, so that the server knows the target time point and the target position point selected by the Qth terminal.
  • the server sends the Jth video data to the Qth terminal according to the target location point and the target time point.
  • the Jth video data is video data acquired by the Jth terminal at the target time point of the target location point.
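  • for ease of understanding, a minimal sketch of this recorded-broadcast lookup, with a toy in-memory store (keys and clip names are illustrative):

```python
# Hypothetical recorded-broadcast store: videos keyed by location
# point, each entry holding the time point it was captured at.
recordings = {
    "A": {"10:00": "clip_A_morning", "15:00": "clip_A_afternoon"},
    "B": {"10:00": "clip_B_morning"},
}

def options_for(point):
    """Time points the server advertises for one location point."""
    return sorted(recordings.get(point, {}))

def fetch(point, time_point):
    """Video returned for the viewer's selected point and time."""
    return recordings[point][time_point]

print(options_for("A"))     # -> ['10:00', '15:00']
print(fetch("A", "15:00"))  # -> clip_A_afternoon
```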
  • in the recorded-broadcast scenario, one location may have video data from different time periods, and a viewing user can select only one time point to watch at a time.
  • to view the video data of several time points, the user has to switch back and forth, which is cumbersome.
  • therefore, the video data of the same location can be fused so that one video reflects the features of the videos shot in each time period, allowing users to see the content of all time periods in a single video.
  • Embodiment 7 of the video sharing method provided by the embodiments of the present application includes the following steps.
  • the N shooting terminals acquire respective video data, location points, and time points.
  • in addition to the video data and location points, the N shooting terminals also record the time point at which each piece of video data is captured.
  • the server obtains video data, a location point, and a time point from the N shooting terminals, respectively.
  • the N shooting terminals send the acquired video data, location point and time point to the server.
  • the server acquires S pieces of video data from the video data sent by the N shooting terminals.
  • the S pieces of video data are all the video data captured at the target position point within the target time period; the target time period is preset and can be set by those skilled in the art according to actual needs, which is not limited here.
  • the server splices together the features recorded in the S pieces of video data to obtain fused video data.
  • the fusion video data records all the features photographed at the target position within the target time period.
  • for example, as shown in Figure 10b, a first video 10041 shot at one moment records a passerby A 10041a, and a second video 10042 shot at another moment records a passerby B 10041b; the server can extract passerby A 10041a and passerby B 10041b from the two videos using a feature recognition algorithm.
  • feature recognition and extraction algorithms are prior art; those skilled in the art can select an appropriate algorithm as needed, which is not limited in this embodiment of the present application.
  • after the server extracts the two features, passerby A 10041a and passerby B 10041b, it splices them together to obtain fused video data 10043, in which passerby A 10041a and passerby B 10041b are recorded simultaneously in the same scene, so that the feature information recorded in videos of different time periods can be seen in one video.
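  • a minimal sketch of this feature-fusion idea, assuming an upstream (unspecified) feature-recognition step has already produced a mask and a pixel buffer for each extracted feature:

```python
import numpy as np

def fuse_features(background, features):
    """Paste foreground features (e.g. passerby A and passerby B)
    extracted from videos of different time periods onto one shared
    background frame.

    `background` is an HxWx3 array; each feature is a (mask, pixels)
    pair from a hypothetical upstream feature-extraction step."""
    fused = background.copy()
    for mask, pixels in features:
        fused[mask] = pixels[mask]  # overwrite the masked region only
    return fused

# toy 4x4 'frames': two features extracted at different moments
bg = np.zeros((4, 4, 3), dtype=np.uint8)
m1 = np.zeros((4, 4), bool); m1[0, 0] = True
m2 = np.zeros((4, 4), bool); m2[3, 3] = True
px = np.full((4, 4, 3), 255, dtype=np.uint8)
out = fuse_features(bg, [(m1, px), (m2, px)])
assert out[0, 0, 0] == 255 and out[3, 3, 0] == 255
```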
  • the server sends the location point to the M viewing terminals.
  • the Qth terminal acquires the target location point selected by the user.
  • the specific implementation manner of obtaining the target position point selected by the user by the Qth terminal may refer to the above step 404, which will not be repeated here.
  • the following steps are performed when the target position point corresponds to video data of multiple time periods.
  • the server acquires the target location point from the Qth terminal.
  • if the target position point corresponds to fused video data, the following step 1008 is performed.
  • the server sends the fused video data to the Qth terminal.
  • the fused video data is the data obtained in step 1004, so that the user of the Qth terminal can view, through a single video, the features recorded in all time periods at once.
  • when a viewing user selects a location point, that location may correspond to multiple videos from different time periods; the server splices the videos obtained at all time points within a certain period, so that the user can see in one video the features recorded at all those time points. This eliminates the need for the user to select a time point, improves the efficiency of video transmission, and reduces the amount of data transmitted between the server and the viewing terminal.
  • the solution of the seventh embodiment can be combined with that of the fifth embodiment: in the splicing process, video data of different time points as well as of different shooting angles is spliced, yielding for a single position point
  • one piece of panoramic full-time video data, which records both the video data of every shooting angle at that position and the video features recorded in different time periods of each angle, thereby further improving the richness of the video content. A viewing user can see the content of all viewing angles and all time periods through a single video, which on the one hand reduces the number of interactions between the server and the viewing terminal and improves the efficiency of video sharing, and on the other hand provides the viewing user a better experience.
  • when a shooting terminal is shooting video data, it needs to acquire its current position point in real time for this solution to work; further, in a live-broadcast scenario, the shooting terminal may upload video data while moving, i.e., its position point changes in real time. The shooting terminal therefore needs to obtain the current position point accurately to ensure the smooth operation of the solution of the present application.
  • the following describes in detail the positioning method of the shooting terminal in the video sharing method provided by the embodiment of the present application.
  • the shooting scene 1101 includes a plurality of shooting terminals. These shooting terminals are connected to the cloud server 1103 through the network translation layer 1102 and upload information such as location points and video data to the cloud server 1103; the access network of a shooting terminal may be a 4G network, a 5G network, or a WIFI network, and those skilled in the art can select the required access mode according to actual needs, which is not limited in the embodiments of the present application.
  • the shooting terminal obtains its position in two different positioning environments, indoor and outdoor, and may switch between them.
  • as the positioning environment changes, the terminal can switch between the different positioning modes synchronously.
  • the positioning methods of the shooting terminal may include visual positioning, optical positioning, WIFI fingerprint positioning, global positioning system (GPS) positioning, and the like.
  • the shooting terminal identifies through visual positioning whether its current position is indoors or outdoors; for example, the shooting terminal sends the currently acquired video data to the server in real time, and the server determines, according to the pre-built environment model, whether the picture captured in the video data is indoors or outdoors. For the manner in which the server obtains the environment model, refer to the description of step 701 above, which is not repeated here.
  • when the server determines that the shooting terminal is currently indoors, it instructs the shooting terminal to use optical positioning or WIFI fingerprint positioning; both methods work efficiently indoors and can position the shooting terminal relatively accurately.
  • optionally, other indoor positioning methods can also be used, which are not limited in this embodiment of the present application.
  • when the server determines that the shooting terminal is currently outdoors, it instructs the shooting terminal to use GPS positioning.
  • other outdoor positioning methods may also be adopted, which are not limited in this embodiment of the present application.
  • the server first determines whether the shooting terminal is currently indoors or outdoors; when the terminal is indoors, the server instructs it to use an indoor positioning method, and when outdoors, an outdoor positioning method, so that the shooting terminal can accurately obtain the position information of its position point in any scene.
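  • the indoor/outdoor mode switch can be sketched as follows; the locator methods are hypothetical stand-ins for optical/WIFI fingerprint positioning and GPS:

```python
class ShootingTerminal:
    """Toy terminal with two positioning backends (hypothetical)."""
    def locate_indoor(self):
        # stand-in for optical or WIFI fingerprint positioning
        return ("wifi_fingerprint_fix", 12.3, 4.5)
    def locate_gps(self):
        # stand-in for an outdoor GPS fix
        return ("gps_fix", 22.54321, 114.05896)

def update_position(terminal, server_says_indoor):
    """Apply the server's instruction after its visual indoor/outdoor
    recognition against the environment model."""
    return (terminal.locate_indoor() if server_says_indoor
            else terminal.locate_gps())

t = ShootingTerminal()
print(update_position(t, server_says_indoor=True))   # indoor fix
print(update_position(t, server_says_indoor=False))  # GPS fix
```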
  • the video sharing and video acquisition methods provided by the embodiments of the present application can be implemented according to the architecture shown in FIG. 11b.
  • the system includes a user side, a server side 1106, and an administrator terminal 1107, where the user side includes shooting terminals 1104 and viewing terminals 1105.
  • the shooting terminal 1104 is used to shoot a video and upload it to the server 1106.
  • the server 1106 sends the video shot by the shooting terminal 1104 to the viewing terminal 1105.
  • the administrator terminal 1107 schedules and manages the operation of the system.
  • the number of the viewing terminals 1105 and the shooting terminals 1104 may be multiple.
  • the shooting terminal 1104 uploads the captured video data through the app 11041 and, at the same time, uploads the positioning data recorded when shooting the video.
  • the video data and the position data are uploaded to the server side 1106 through a wireless module (Router/Proxy) 11042.
  • the server 1106 is provided with a video streaming service unit (Streaming Server) 11061 for receiving and processing the video uploaded by the shooting terminal 1104; the video streaming service unit 11061 sends the uploaded video data as video streams to a localization unit (Localization Server) 11062 and a video processing unit (Video Processing Server) 11063.
  • the localization unit 11062 sends the video stream to the administrator terminal 1107, so that the administrator terminal 1107 can manage the video streams obtained by the server side 1106.
  • the video processing unit 11063 performs splicing processing and visual positioning on the video streams.
  • the video processing unit 11063 also sends the processed spliced video to the administrator terminal 1107, so that the administrator terminal 1107 can view and manage the spliced video.
  • the server side 1106 further includes a map unit 11064, which sends the map information of the target area to the viewing terminal 1105; optionally, the target area is the working area of the shooting terminals 1104, such as an exhibition hall, a campus, or a library, and the map information may be a two-dimensional plane map or a panoramic map, which is not limited in this embodiment of the present application.
  • after the viewing terminal 1105 obtains the map information sent by the server 1106, it displays the map interface through the app 11051, together with the location points on the map where videos can be viewed.
  • optionally, the app 11051 used by the viewing terminal 1105 and the app 11041 used by the shooting terminal 1104 may be the same app or different apps, which is not limited in this embodiment of the present application.
  • the viewing terminal 1105 is provided with a video distribution unit 11052. After the user of the viewing terminal 1105 selects a location point on the map interface, the viewing terminal 1105 sends the location point to the server 1106 through the video distribution unit (Video Distribute Module) 11052.
  • the server side 1106 makes a judgment according to the location point information: if only one shooting terminal 1104 has uploaded video data at that location point, the server side 1106 sends the corresponding video data directly through the video streaming service unit 11061 to the video distribution unit 11052 of the viewing terminal 1105.
  • the video distribution unit 11052 displays that piece of video data on the display interface of the viewing terminal 1105 through the app, so that the user can watch the video data corresponding to the selected location point.
  • optionally, if the selected location point corresponds to multiple pieces of video data, the user may be asked which one to watch, or the video processing unit 11063 may directly send spliced video data; the spliced video data is formed by splicing all the video data at that location point, and the specific splicing method can refer to the above embodiments, not repeated here.
  • the above video data may be transmitted in a live-broadcast mode, or the uploaded video may be stored in the server 1106 to work in a recorded-broadcast mode, or both live broadcast and recorded broadcast may be used, which is not limited in this embodiment of the present application.
  • since the numbers of shooting terminals 1104 and viewing terminals 1105 can both be multiple, the video sharing and video acquisition methods provided by the embodiments of the present application enable a sharing process from multiple video sources to multiple users.
  • a video sharing method includes: the server obtains video data and a position point from each of N shooting terminals, where the video data records the video shot by the shooting terminal, the position point records the position where the shooting terminal obtained the video data, and N is a positive integer greater than 1; the server sends the position points obtained by the N shooting terminals to M viewing terminals, where M is a positive integer greater than 1; the server obtains a target position point from the Qth terminal, where the Qth terminal is one of the M viewing terminals, 1≤Q≤M, and the target position point is one of the position points obtained by the N shooting terminals; the server sends the Jth video data to the Qth terminal according to the target position point, where the Jth video data is the video data shot by the Jth terminal at the target position point, and the Jth terminal is the Jth of the N shooting terminals, 1≤J≤N.
  • through this method, multiple shooting terminals can share video pictures with multiple viewing terminals, and users of the viewing terminals can independently select the video pictures shared by shooting terminals located at different positions, thereby realizing a many-to-many video sharing mode that can be applied to various scenarios and improves the richness and convenience of video sharing.
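  • for ease of understanding, the core many-to-many flow summarized above can be sketched as a toy server; for brevity it keeps one video per position point, which is an illustrative simplification:

```python
class SharingServer:
    """Minimal sketch of the core flow: N shooting terminals upload
    (position point, video), M viewing terminals pick a position
    point, and the server returns the video shot there. All names
    and structures are illustrative."""
    def __init__(self):
        self.by_point = {}  # position point -> video data

    def upload(self, point, video):       # from a shooting terminal
        self.by_point[point] = video

    def advertised_points(self):          # sent to viewing terminals
        return list(self.by_point)

    def request(self, target_point):      # from the Qth terminal
        return self.by_point[target_point]

server = SharingServer()
server.upload("A", "video_J")  # Jth terminal at position point A
server.upload("B", "video_K")
print(server.advertised_points())  # -> ['A', 'B']
print(server.request("A"))         # -> video_J
```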
  • the above method may be implemented by one entity device, or jointly implemented by multiple entity devices, or may be a logic function module in one entity device, which is not specifically limited in this embodiment of the present application.
  • FIG. 12 is a schematic diagram of the hardware structure of a network device according to an embodiment of the present application; the network device may be the server in the embodiments of the present application or a terminal device.
  • the network device includes at least one processor 1201 , communication line 1202 , memory 1203 and at least one communication interface 1204 .
  • the processor 1201 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits used to control the execution of the programs of the present application.
  • Communication line 1202 may include a path to communicate information between the aforementioned components.
  • Communication interface 1204 uses any transceiver-like device to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
  • Memory 1203 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, without limitation.
  • the memory may exist independently and be connected to the processor through communication line 1202 .
  • the memory can also be integrated with the processor.
  • the memory 1203 is used for storing computer-executed instructions for executing the solution of the present application, and the execution is controlled by the processor 1201 .
  • the processor 1201 is configured to execute the computer-executed instructions stored in the memory 1203, thereby implementing the video sharing method provided by the embodiments of the present application.
  • the computer-executed instructions in the embodiment of the present application may also be referred to as application code, which is not specifically limited in the embodiment of the present application.
  • the processor 1201 may include one or more CPUs, such as CPU0 and CPU1 in FIG. 12 .
  • the network device may include multiple processors, such as the processor 1201 and the processor 1205 in FIG. 12 .
  • each of these processors can be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • a processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (eg, computer program instructions).
  • the network device may further include an output device 1205 and an input device 1206 .
  • the output device 1205 is in communication with the processor 1201 and can display information in a variety of ways.
  • the output device 1205 may be a liquid crystal display (LCD), a light-emitting diode (LED) display device, a cathode-ray tube (CRT) display device, a projector, or the like.
  • Input device 1206 is in communication with processor 1201 and can receive user input in a variety of ways.
  • the input device 1206 may be a mouse, a keyboard, a touch screen device, a sensor device, or the like.
  • the above-mentioned network device may be a general-purpose device or a dedicated device.
  • the network device may be a server, a wireless terminal device, an embedded device, or a device with a similar structure in FIG. 12 .
  • the embodiments of the present application do not limit the types of network devices.
  • the network device may be divided into functional units according to the foregoing method examples.
  • each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units. It should be noted that the division of units in the embodiments of the present application is illustrative, and is only a logical function division, and other division methods may be used in actual implementation.
  • FIG. 13 shows a schematic structural diagram of a video sharing apparatus.
  • the video sharing apparatus provided by the embodiment of the present application includes:
  • the obtaining unit 1301 is used to obtain video data and a position point from each of N shooting terminals; the video data records the video shot by the shooting terminal, and the position point records the position where the shooting terminal obtained the video data,
  • where N is a positive integer greater than 1;
  • the sending unit 1302 is configured to send the position points obtained by the N shooting terminals obtained by the obtaining unit 1301 to the M viewing terminals, where M is a positive integer greater than 1;
  • the obtaining unit 1301 is further configured to obtain a target position point from the Qth terminal, where the Qth terminal is one of the M viewing terminals, 1 ⁇ Q ⁇ M, and the target position point is obtained by each of the N shooting terminals one of the location points;
  • the sending unit 1302 is further configured to send the Jth video data to the Qth terminal according to the target position point obtained by the obtaining unit 1301, where the Jth video data is the video data shot by the Jth terminal at the target position point, and the Jth terminal is the Jth terminal among the N shooting terminals, 1≤J≤N.
  • the sending unit 1302 is further configured to: send identification information of R shooting terminals to the Qth terminal, where the R shooting terminals are the terminals that have photographed video data at the target position among the N shooting terminals, R ⁇ N, the identification information is used to mark the corresponding shooting terminal;
  • the obtaining unit 1301 is further configured to: obtain identification information of a target terminal from the Qth terminal, where the target terminal is one of the R shooting terminals;
  • the sending unit 1302 is further configured to: send the target video data shot by the target terminal at the target position to the Qth terminal.
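  • as an illustrative (and simplified) reading of this unit decomposition, the sketch below models the obtaining unit 1301 and sending unit 1302 as methods keyed by (terminal, position point); the transport stub and all names are assumptions, not the claimed apparatus:

```python
class VideoSharingApparatus:
    """Toy decomposition mirroring FIG. 13: obtaining unit 1301 and
    sending unit 1302 as methods; storage/transport are stubbed."""
    def __init__(self, transport):
        self.transport = transport   # stub for the network layer
        self.videos = {}             # (terminal id, point) -> video

    def obtaining_unit(self, terminal_id, point, video):
        self.videos[(terminal_id, point)] = video

    def sending_unit(self, viewer_id, terminal_id, point):
        # resolves a viewer's (target terminal, target point) choice
        self.transport(viewer_id, self.videos[(terminal_id, point)])

apparatus = VideoSharingApparatus(transport=lambda v, data: print(v, data))
apparatus.obtaining_unit("J", "A", "video_J_at_A")
apparatus.sending_unit("Q", "J", "A")  # -> Q video_J_at_A
```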
  • the device further includes a first splicing unit 1303; when the server determines according to the first preset rule that the Jth terminal is a popular terminal at the target location point and the Jth terminal is not the target terminal, the first splicing unit 1303 is used for:
  • splicing the Jth video data and the target video data into recommended video data;
  • the sending unit 1302 is further configured to: send the recommended video data to the Qth terminal.
  • the apparatus further includes a modeling unit 1304 and a comparison unit 1305, and the modeling unit 1304 is used to: acquire an environment model, and the environment model is used to record the shooting environments of the N shooting terminals;
  • the comparing unit 1305 is configured to: compare the video data respectively shot by the N shooting terminals with the environment model obtained by the modeling unit 1304, and determine the shooting angles of the N shooting terminals;
  • the sending unit 1302 is further configured to: send the shooting angles of view of the N shooting terminals to the M viewing terminals;
  • the obtaining unit 1301 is further configured to: obtain a target shooting angle of view from the Qth terminal, where the target shooting angle of view is one of the shooting angles of the N shooting terminals;
  • the sending unit 1302 is further configured to: send the Jth video data to the Qth terminal according to the target shooting angle of view, where the Jth video data is video data captured by the Jth terminal at the target shooting angle of view.
  • the device also includes a second splicing unit 1306, when the server determines that the target location point is a popular location point according to the second preset rule, the second splicing unit 1306 is used for:
  • the P video data are spliced into a panoramic video data according to the shooting angle of view of the P video data, and the shooting pictures in the P video data are recorded in the panoramic video data;
  • the sending unit 1302 is also used for:
  • when the server obtains the target position point from the Qth terminal, the server sends the panoramic video data to the Qth terminal.
  • the obtaining unit 1301 is further configured to: obtain time points from the N shooting terminals respectively, where the time points are used to record the time when the shooting terminal obtains the video data;
  • the sending unit 1302 is further configured to: send the time points obtained by the N shooting terminals to the M viewing terminals;
  • the obtaining unit 1301 is further configured to: obtain a target time point from the Qth terminal, where the target time point is one of the time points obtained by the N shooting terminals;
  • the sending unit 1302 is further configured to: send the Jth video data to the Qth terminal according to the target time point, where the Jth video data is video data acquired by the Jth terminal at the target time point.
  • the device further includes a third splicing unit 1307, and the obtaining unit 1301 is further configured to: obtain S pieces of video data from the video data sent by the N shooting terminals, and the S pieces of video data are targets within a target time period All video data captured at the location point;
  • the third splicing unit 1307 is used for: splicing together the features of the S video data records to obtain fused video data, and the fused video data records all the features photographed on the target location in the target time period;
  • the sending unit 1302 is further configured to: send the merged video data to the Qth terminal.
  • the video acquisition device provided by the embodiment of the present application includes:
  • the first acquisition unit 1401 is used to acquire at least one location point from the server, where the location points are uploaded to the server after N shooting terminals shoot video data in the target area, and N is a positive integer greater than 1;
  • the display unit 1402 is used to display a map interface on the display interface, the map interface is a map interface of the target area, and the map interface includes the at least one location point acquired by the first acquisition unit 1401;
  • the second acquiring unit 1403 is configured to acquire a target location point selected by the user from the map interface, where the target location point is one of the at least one location point;
  • a sending unit, which is used to send the target location point obtained by the second acquiring unit 1403 to the server;
  • the first obtaining unit 1401 is further configured to obtain the Jth video data from the server, where the Jth video data is the video data shot by the Jth terminal at the target position in the target area, and the Jth terminal is the N One of the shooting terminals, 1 ⁇ J ⁇ N.
  • the first obtaining unit 1401 is further configured to: obtain the identification information of R shooting terminals from the server, where the R shooting terminals are the shooting terminals that have shot video data at the target position point, 1≤R≤N;
  • the display unit 1402 is further configured to display the identification information of the R shooting terminals on the display interface;
  • the second obtaining unit 1403 is further configured to obtain the identification information of the Jth terminal selected by the user, where the Jth terminal is one of the R shooting terminals;
  • the sending unit is further configured to send, by the viewing terminal, the identification information of the Jth terminal to the server.
  • the first obtaining unit 1401 is further configured to: obtain the shooting angles of view of the N shooting terminals from the server;
  • the display unit 1402 is further configured to display the shooting angles of the R shooting terminals on the display interface by the viewing terminal;
  • the second obtaining unit 1403 is further configured to obtain a target shooting angle of view selected by the user, where the target shooting angle of view is one of the shooting angles of the R shooting terminals;
  • the sending unit is further configured to send the target shooting angle of view to the server, where the target shooting angle of view is used to request the server to send video data shot by the shooting terminal at the target shooting angle of view.
  • the first obtaining unit 1401 is further configured to: obtain at least one time point from the server, where the at least one time point is a time point when the N shooting terminals shoot video data;
  • the display unit 1402 is further configured to display the at least one time point on the display interface
  • the second obtaining unit 1403 is further configured to obtain a target time point selected by the user, where the target time point is one of the at least one time point;
  • the sending unit is further configured to send the target time point acquired by the second acquiring unit 1403 to the server, where the target time point is used to request the server to send the video shot by the shooting terminal at the target time point.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave).
  • the computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media.
  • the usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state disks (SSD)), and the like.
  • in the several embodiments provided in this application, it should be understood that the disclosed methods, devices, and computer storage media may be implemented in other ways.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation, there may be other division methods.
  • for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium.
  • the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The embodiments of this application disclose a video sharing method, including: a server obtains video data and position points from each of N shooting terminals; the server sends the position points obtained by the N shooting terminals to M viewing terminals; the server obtains a target position point from a Qth terminal, the Qth terminal being one of the M viewing terminals; the server sends Jth video data to the Qth terminal according to the target position point, the Jth video data being video data shot by a Jth terminal at the target position point, the Jth terminal being the Jth of the N shooting terminals. This application further provides a video acquisition method, apparatus, device, and medium. Multiple shooting terminals can share video pictures with multiple viewing terminals, and users of the viewing terminals can independently select the video pictures shared by shooting terminals at different positions, realizing a many-to-many video sharing mode that can be applied to various scenarios and improves the richness and convenience of video sharing.

Description

Video sharing and acquisition methods, server, terminal device, and medium
This application claims priority to Chinese Patent Application No. 202010675008.3, entitled "Video sharing and acquisition methods, server, terminal device, and medium", filed with the Chinese Patent Office on July 14, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of electronic technology, and in particular to video sharing and acquisition methods, a server, a terminal device, and a medium.
Background
With the popularization of the Internet and smart devices, a large number of video transmission and sharing technologies have emerged. The prior art mainly includes the following types.
Live-streaming websites: on such websites, a single broadcaster shares video in real time through a live platform. Websites of this type usually center on an individual and realize one-to-many video sharing, with only a single live viewpoint, lacking a clear theme and practical value.
Video surveillance networks: this scenario is generally used in the security field, where multiple fixed cameras form a surveillance network and one terminal watches in real time the pictures shot by each camera. In this working mode, when multiple video sources are shared with the same user, the user being shared with must record the addresses, user names, and passwords of all video sources, which is very inconvenient and unfavorable to video sharing; moreover, the cameras of a surveillance network are fixed, so the captured content cannot change flexibly.
In summary, the video transmission and sharing methods in the prior art still need to be improved.
Summary
The embodiments of this application provide video sharing and acquisition methods, a server, a terminal device, and a medium, for solving the problem of insufficient flexibility in video sharing.
In view of this, this application provides a video sharing method, including:
a server obtains video data and a position point from each of N shooting terminals, where the video data records the video shot by the shooting terminal, the position point records the position where the shooting terminal obtained the video data, and N is a positive integer greater than 1;
the server sends the position points obtained by the N shooting terminals to M viewing terminals, where M is a positive integer greater than 1;
the server obtains a target position point from a Qth terminal, where the Qth terminal is one of the M viewing terminals, 1≤Q≤M, and the target position point is one of the position points obtained by the N shooting terminals;
the server sends Jth video data to the Qth terminal according to the target position point, where the Jth video data is video data shot by a Jth terminal at the target position point, and the Jth terminal is one of the N shooting terminals, 1≤J≤N.
Optionally, an embodiment of this application further provides a video acquisition method, including: a viewing terminal obtains at least one position point from a server, the position points having been uploaded to the server after N shooting terminals shot video data in a target area, N being a positive integer greater than 1; the viewing terminal displays a map interface of the target area on its display interface, the map interface including the at least one position point; the viewing terminal obtains a target position point selected by the user from the map interface, the target position point being one of the at least one position point; the viewing terminal sends the target position point to the server; the viewing terminal obtains Jth video data from the server, the Jth video data being video data shot by a Jth terminal at the target position point in the target area, the Jth terminal being one of the N shooting terminals, 1≤J≤N.
Optionally, an embodiment of this application further provides a video sharing apparatus, including:
an obtaining unit, configured to obtain video data and a position point from each of N shooting terminals, where the video data records the video shot by the shooting terminal, the position point records the position where the shooting terminal obtained the video data, and N is a positive integer greater than 1;
a sending unit, configured to send the position points obtained by the N shooting terminals, as acquired by the obtaining unit, to M viewing terminals, where M is a positive integer greater than 1;
the obtaining unit is further configured to obtain a target position point from a Qth terminal, where the Qth terminal is one of the M viewing terminals, 1≤Q≤M, and the target position point is one of the position points obtained by the N shooting terminals;
the sending unit is further configured to send Jth video data to the Qth terminal according to the target position point acquired by the obtaining unit, where the Jth video data is video data shot by a Jth terminal at the target position point, and the Jth terminal is the Jth of the N shooting terminals, 1≤J≤N.
Optionally, the sending unit is further configured to send identification information of R shooting terminals to the Qth terminal, where the R shooting terminals are the terminals among the N shooting terminals that have shot video data at the target position point, R≤N, and the identification information marks the corresponding shooting terminal;
the obtaining unit is further configured to obtain identification information of a target terminal from the Qth terminal, the target terminal being one of the R shooting terminals;
the sending unit is further configured to send, to the Qth terminal, target video data shot by the target terminal at the target position point.
Optionally, the apparatus further includes a first splicing unit; when the server determines according to a first preset rule that the Jth terminal is a popular terminal at the target position point and the Jth terminal is not the target terminal, the first splicing unit is configured to:
splice the Jth video data and the target video data into recommended video data;
the sending unit is further configured to send the recommended video data to the Qth terminal.
Optionally, the apparatus further includes a modeling unit and a comparison unit; the modeling unit is configured to obtain an environment model, which records the shooting environments of the N shooting terminals;
the comparison unit is configured to compare the video data shot by the N shooting terminals with the environment model obtained by the modeling unit and determine the shooting angles of the N shooting terminals; optionally, the shooting angle is the angle at which a shooting terminal shoots;
the sending unit is further configured to send the shooting angles of the N shooting terminals to the M viewing terminals;
the obtaining unit is further configured to obtain a target shooting angle from the Qth terminal, the target shooting angle being one of the shooting angles of the N shooting terminals;
the sending unit is further configured to send the Jth video data to the Qth terminal according to the target shooting angle, the Jth video data being video data shot by the Jth terminal at the target shooting angle.
Optionally, the apparatus further includes a second splicing unit; when the server determines according to a second preset rule that the target position point is a popular position point, the second splicing unit is configured to:
obtain P pieces of video data shot by the R shooting terminals at the target position point, P being a positive integer greater than 1;
splice the P pieces of video data into one piece of panoramic video data according to their shooting angles, the panoramic video data recording the pictures of the P pieces of video data;
the sending unit is further configured to:
when the server obtains the target position point from the Qth terminal, send the panoramic video data to the Qth terminal.
Optionally, the obtaining unit is further configured to obtain a time point from each of the N shooting terminals, the time point recording the time at which the shooting terminal obtained the video data;
the sending unit is further configured to send the time points obtained by the N shooting terminals to the M viewing terminals;
the obtaining unit is further configured to obtain a target time point from the Qth terminal, the target time point being one of the time points obtained by the N shooting terminals;
the sending unit is further configured to send the Jth video data to the Qth terminal according to the target time point, the Jth video data being video data obtained by the Jth terminal at the target time point.
Optionally, the apparatus further includes a third splicing unit; the obtaining unit is further configured to obtain S pieces of video data from the video data sent by the N shooting terminals, the S pieces being all the video data shot at the target position point within a target time period;
the third splicing unit is configured to splice together the features recorded in the S pieces of video data to obtain fused video data, the fused video data recording all the features shot at the target position point within the target time period;
the sending unit is further configured to send the fused video data to the Qth terminal.
Optionally, an embodiment of this application further provides a video acquisition apparatus, including:
a first obtaining unit, configured to obtain at least one position point from a server, the position points having been uploaded to the server after N shooting terminals shot video data in a target area, N being a positive integer greater than 1;
a display unit, configured to display a map interface of the target area on the display interface, the map interface including the at least one position point obtained by the first obtaining unit;
a second obtaining unit, configured to obtain a target position point selected by the user from the map interface, the target position point being one of the at least one position point;
a sending unit, configured to send the target position point obtained by the second obtaining unit to the server;
the first obtaining unit is further configured to obtain Jth video data from the server, the Jth video data being video data shot by a Jth terminal at the target position point in the target area, the Jth terminal being one of the N shooting terminals, 1≤J≤N.
Optionally, when the target position point corresponds to video data shot by multiple shooting terminals, the first obtaining unit is further configured to obtain identification information of R shooting terminals from the server, the R shooting terminals all being terminals that shot video data at the target position point, 1≤R≤N;
the display unit is further configured to display the identification information of the R shooting terminals on the display interface;
the second obtaining unit is further configured to obtain identification information of the Jth terminal selected by the user, the Jth terminal being one of the R shooting terminals;
the sending unit is further configured to send the identification information of the Jth terminal to the server.
Optionally, the first obtaining unit is further configured to obtain the shooting angles of the N shooting terminals from the server;
the display unit is further configured to display the shooting angles of the R shooting terminals on the display interface;
the second obtaining unit is further configured to obtain a target shooting angle selected by the user, the target shooting angle being one of the shooting angles of the R shooting terminals;
the sending unit is further configured to send the target shooting angle to the server, the target shooting angle being used to request the server to send video data shot by a shooting terminal at the target shooting angle.
Optionally, the first obtaining unit is further configured to obtain at least one time point from the server, the at least one time point being the time points at which the N shooting terminals shot video data;
the display unit is further configured to display the at least one time point on the display interface;
the second obtaining unit is further configured to obtain a target time point selected by the user, the target time point being one of the at least one time point;
the sending unit is further configured to send the target time point to the server, the target time point being used to request the server to send the video shot by a shooting terminal at the target time point.
Optionally, an embodiment of this application further provides a server, including an interaction device, an input/output (I/O) interface, a processor, and a memory storing program instructions; the interaction device is used to obtain operation instructions input by a user, and the processor is used to execute the program instructions stored in the memory to perform any of the video sharing methods described above.
Optionally, an embodiment of this application further provides a server, including an interaction device, an input/output (I/O) interface, a processor, and a memory storing program instructions; the interaction device is used to obtain operation instructions input by a user, and the processor is used to execute the program instructions stored in the memory to perform any of the video acquisition methods described above.
Optionally, an embodiment of this application further provides a computer-readable storage medium; when the instructions therein run on a computer device, the computer device is caused to perform any of the video sharing methods described above.
Optionally, an embodiment of this application further provides a computer-readable storage medium; when the instructions therein run on a computer device, the computer device is caused to perform any of the video acquisition methods described above.
As can be seen from the above technical solutions, the embodiments of this application have the following advantages:
An embodiment of this application provides a video sharing method, including: a server obtains video data and a position point from each of N shooting terminals, where the video data records the video shot by the shooting terminal, the position point records the position where the shooting terminal obtained the video data, and N is a positive integer greater than 1; the server sends the position points obtained by the N shooting terminals to M viewing terminals, where M is a positive integer greater than 1; the server obtains a target position point from a Qth terminal, where the Qth terminal is one of the M viewing terminals, 1≤Q≤M, and the target position point is one of the position points obtained by the N shooting terminals; the server sends Jth video data to the Qth terminal according to the target position point, where the Jth video data is video data shot by a Jth terminal at the target position point, and the Jth terminal is the Jth of the N shooting terminals, 1≤J≤N. Through this method, multiple shooting terminals can share video pictures with multiple viewing terminals, and the users of the viewing terminals can independently select the video pictures shared by shooting terminals at different positions, thereby realizing a many-to-many video sharing mode that can be applied to various scenarios and improves the richness and convenience of video sharing.
Brief Description of the Drawings
FIG. 1 is a system architecture diagram of a live-streaming website;
FIG. 2 is a system architecture diagram of a video surveillance network;
FIG. 3 is a system architecture diagram of the video sharing method provided by the embodiments of this application;
FIG. 4a is a schematic diagram of an embodiment of the video sharing method provided by the embodiments of this application;
FIG. 4b is a schematic diagram of an embodiment of the video sharing method provided by the embodiments of this application;
FIG. 5a is a schematic diagram of another embodiment of the video sharing method provided by the embodiments of this application;
FIG. 5b is a schematic diagram of another embodiment of the video sharing method provided by the embodiments of this application;
FIG. 6 is a schematic diagram of another embodiment of the video sharing method provided by the embodiments of this application;
FIG. 7a is a schematic diagram of another embodiment of the video sharing method provided by the embodiments of this application;
FIG. 7b is a schematic diagram of another embodiment of the video sharing method provided by the embodiments of this application;
FIG. 8a is a schematic diagram of another embodiment of the video sharing method provided by the embodiments of this application;
FIG. 8b is a schematic diagram of another embodiment of the video sharing method provided by the embodiments of this application;
FIG. 9a is a schematic diagram of another embodiment of the video sharing method provided by the embodiments of this application;
FIG. 9b is a schematic diagram of another embodiment of the video sharing method provided by the embodiments of this application;
FIG. 10a is a schematic diagram of another embodiment of the video sharing method provided by the embodiments of this application;
FIG. 10b is a schematic diagram of another embodiment of the video sharing method provided by the embodiments of this application;
FIG. 11a is a schematic diagram of the network architecture of the video sharing method provided by the embodiments of this application;
FIG. 11b is a schematic diagram of the system architecture of the video sharing method provided by the embodiments of this application;
FIG. 12 is a schematic diagram of a server provided by an embodiment of this application;
FIG. 13 is a schematic diagram of another video sharing apparatus provided by an embodiment of this application;
FIG. 14 is a schematic diagram of another video acquisition apparatus provided by an embodiment of this application.
Detailed Description
The embodiments of the present invention provide video sharing and acquisition methods, a server, a terminal device, and a medium.
To enable those skilled in the art to better understand the solutions of this application, the technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.
The terms "first", "second", "third", "fourth", etc. (if any) in the specification, claims, and accompanying drawings of this application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so termed are interchangeable where appropriate, so that the embodiments described here can be implemented in an order other than that illustrated or described. Moreover, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units not clearly listed or inherent to the process, method, product, or device.
With the popularization of the Internet and smart devices, a large number of video transmission and sharing technologies have emerged; at present, video sharing and transmission technologies mainly include the following types.
Live-streaming websites: on such websites, a single broadcaster shares video in real time through a live platform; websites of this type usually center on an individual and realize one-to-many video sharing. As shown in FIG. 1, the architecture of a live-streaming website includes one broadcasting terminal 101 and multiple viewing terminals 102; the broadcasting terminal 101 acquires and generates the shot pictures and shares them with the multiple viewing terminals 102, so that the viewing terminals 102 can see the real-time pictures shot by the broadcasting terminal 101. This architecture has only a single live viewpoint and lacks a theme and practical value.
Video surveillance networks: this scenario is generally used in the security field, where multiple fixed cameras 201 form a surveillance network and one terminal 203 watches in real time the pictures shot by each camera 201. As shown in FIG. 2, the architecture includes multiple cameras 201 fixed at different locations and a server 202 connected to them; the server 202 aggregates the videos and sends them to the terminal 203, which watches the pictures in a unified way, a many-to-one sharing mode. Video sharing is also possible in this mode, but a user who wants to share videos needs permission to access the server 202, and the user being shared with must record the addresses, user names, and passwords of all video sources and pass strict authentication steps, which is very inconvenient and unfavorable to video sharing; moreover, the cameras 201 of the surveillance network are fixed, so the captured content cannot change flexibly and mobility is poor.
In view of this, an embodiment of this application provides a video sharing method in which multiple shooting terminals collect video data and then share it with multiple viewing terminals, realizing a many-to-many video sharing mode.
It should be noted that the method provided by the embodiments of this application can be used for live video sharing as well as recorded video sharing, which is not limited by the embodiments of this application.
For ease of understanding, the video sharing method provided by the embodiments of this application is described in detail below with reference to the accompanying drawings.
First, the system architecture of the method provided by the embodiments of this application is described.
Referring to FIG. 3, the working system architecture of the method provided by the embodiments of this application includes the following.
Shooting terminals 301: as shown in FIG. 3, there may be multiple shooting terminals 301. In practice, a shooting terminal 301 may be a smartphone, a tablet, a smart watch, smart glasses, a camera with a communication function, a communication-equipped shooting vehicle, or any other smart terminal able to obtain video data, which is not limited by the embodiments of this application. The users of these shooting terminals 301 are users who want to share video; they can move freely with the terminals and shoot, in different scenes, the content they want to share.
Server 302: the server 302 obtains the video data uploaded by all shooting terminals 301 and schedules the data in a unified manner.
Viewing terminals 303: as shown in FIG. 3, there may be multiple viewing terminals 303. In practice, a viewing terminal 303 may be a smartphone, a tablet, a desktop computer, a smart watch, smart glasses, or a virtual reality (VR) device, which is not limited by the embodiments of this application. The viewing terminals 303 are used to watch the video shared by the shooting terminals 301; a viewing terminal 303 sends a request to the server 302 to obtain the video data sent by the server 302, the video data having been shot by the shooting terminals 301.
Based on the system architecture shown in FIG. 3, referring to FIG. 4a, Embodiment 1 of the video sharing method provided by the embodiments of this application includes the following steps.
401. N shooting terminals each acquire their own video data and position points.
In this embodiment, as shown in FIG. 4b, FIG. 4b is a plane map of a venue; position points A, B, C, etc. on the map are the shooting locations of shooting terminals in the venue, and N shooting terminals shoot video in the venue, N being a positive integer greater than 1. For example, a Jth terminal, one of the N shooting terminals, shoots video data at point A on the map; the Jth terminal thus obtains the Jth video data and position point A.
402. The server obtains the video data and position points from the N shooting terminals.
In this embodiment, the N shooting terminals send their acquired video data and position points to the server; for example, the Jth terminal sends the Jth video data and position point A to the server, and the other terminals among the N shooting terminals do the same.
Meanwhile, the server records the correspondence between video data and position points, so it knows at which position point each piece of video data was shot.
403. The server sends the position points obtained by the N shooting terminals to M viewing terminals.
In this embodiment, M is a positive integer greater than 1; the server sends the acquired position points to multiple viewing terminals, so that these viewing terminals can know at which position points video data shot by shooting terminals can currently be seen.
Optionally, the interface of a viewing terminal may be as shown in FIG. 4b, a map of an exhibition hall marked with position points sent by the server, so that through this interface the user can intuitively see at which positions in the exhibition hall video data shot by shooting terminals can be viewed.
404. The Qth terminal obtains the target position point selected by the user.
In this embodiment, the Qth terminal is one of the M viewing terminals. As described above, the interactive interface of the Qth terminal shows the user the interface of FIG. 4b; when the user clicks a position point, for example position point A, the Qth terminal takes position point A as the target position point and sends it to the server.
405. The server obtains the target position point from the Qth terminal.
In this embodiment, the Qth terminal sends the target position point to the server, so that the server knows which video data the Qth terminal expects to obtain.
406. The server sends the Jth video data to the Qth terminal according to the target position point.
In this embodiment, the target position point tells the server which shooting terminal has shot video data there; for example, if the target position point is position point A and the Jth terminal shot the Jth video data at position point A, the server sends the Jth video data to the Qth terminal according to the target position point.
In this embodiment, the server obtains video data and a position point from each of N shooting terminals, where the video data records the video shot by the shooting terminal, the position point records the position where the shooting terminal obtained the video data, and N is a positive integer greater than 1; the server sends the position points obtained by the N shooting terminals to M viewing terminals, where M is a positive integer greater than 1; the server obtains a target position point from the Qth terminal, where the Qth terminal is one of the M viewing terminals, 1≤Q≤M, and the target position point is one of the position points obtained by the N shooting terminals; the server sends the Jth video data to the Qth terminal according to the target position point, where the Jth video data is the video data shot by the Jth terminal at the target position point, and the Jth terminal is the Jth of the N shooting terminals, 1≤J≤N. Through this method, multiple shooting terminals can share video pictures with multiple viewing terminals, and the users of the viewing terminals can independently select the video pictures shared by shooting terminals at different positions, realizing a many-to-many video sharing mode that can be applied to various scenarios and improves the richness and convenience of video sharing.
It should be noted that since the N shooting terminals move freely at the will of their users, each shooting terminal may shoot video data at different position points, and the same position point may have data shot by multiple shooting terminals. For example, when a viewing user selects a target position point, video data shot by multiple shooting terminals may be available there for watching; in that case the viewing user must be asked which shooting terminal's pictures to watch. For ease of understanding, the specific working mode for this case is described in detail below with reference to the accompanying drawings.
Referring to FIG. 5a, Embodiment 2 of the video sharing method provided by the embodiments of this application includes the following steps.
501. N shooting terminals each acquire their own video data and position points.
In this embodiment, refer to step 401 above; details are not repeated here.
502. The server obtains the video data, position points, and identification information from the N shooting terminals.
In this embodiment, for how the server obtains the video data and position points, refer to step 402 above, not repeated here; further, the N shooting terminals also send their identification information to the server, the identification information uniquely identifying each shooting terminal.
503. The server sends the position points and identification information obtained by the N shooting terminals to the M viewing terminals.
In this embodiment, for how the server sends the position points to the M viewing terminals, refer to step 403 above, not repeated here. Further, the server also sends the acquired identification information to the M viewing terminals.
Further, when sending the identification information of the N shooting terminals to the M viewing terminals, the server may also send descriptive information of the N shooting terminals, which the shooting terminals uploaded when registering with the server, such as nicknames, text introductions, or avatars; when sending the identification information, the server also obtains the descriptive information of the shooting terminal identified by it and sends the descriptive information to the M viewing terminals.
504. The Qth terminal obtains the target position point and target terminal selected by the user.
In this embodiment, the Qth terminal is one of the M viewing terminals. Its interactive interface shows the user the interface of FIG. 5b; when the user clicks a position point, for example position point A, the Qth terminal takes position point A as the target position point. If video data shot by R shooting terminals is available near position point A, the Qth terminal displays an option menu 5001 on the interactive interface, showing the identification information of the R selectable shooting terminals at position point A, R being a positive integer less than or equal to N.
Optionally, the option menu 5001 may show the descriptive information of the R shooting terminals, such as nicknames, text introductions, or avatars, so that the viewing user can decide and choose a target terminal to watch.
For example, if the viewing user clicks the Jth terminal among the R shooting terminals in the option menu 5001, the Qth terminal learns that the target position point selected by the user is position point A and the target terminal is the Jth terminal.
505. The server obtains the target position point and the identification information of the target terminal from the Qth terminal.
In this embodiment, after the Qth terminal learns in step 504 that the target position point is position point A and the target terminal is the Jth terminal, the Qth terminal sends position point A and the Jth terminal's identification information to the server, so the server knows that the Qth terminal expects the video data shot by the Jth terminal at position point A.
506. The server sends the Qth terminal the target video data shot by the target terminal at the target position point.
In this embodiment, the target position point selected by the Qth terminal is position point A and the target terminal is the Jth terminal, so the server sends the Qth terminal the video data shot by the Jth terminal at position point A, completing the Qth terminal's video acquisition.
In this embodiment, when the target terminal is the aforementioned Jth terminal, the Jth video data shot by the Jth terminal is sent.
In this embodiment, for the many-to-many sharing scenario in which the same position point may have video data shot by multiple shooting terminals, identification information is offered for the viewing user to choose from; the identification information may further carry descriptive information, giving video sharing a social attribute: the viewing user can choose a shooting terminal of interest according to the descriptive information and watch its video.
It should be noted that in the working mode of Embodiment 2, when multiple shooting terminals shoot at the same position point, one of them may be a popular terminal whose video has been selected by many viewing terminals; the video shot by the popular terminal can therefore be recommended to viewing terminals with higher priority. For ease of understanding, this case is described in detail below with reference to the accompanying drawings.
Referring to FIG. 6, Embodiment 3 of the video sharing method provided by the embodiments of this application includes the following steps.
For steps 601 to 602, refer to steps 401 to 402 above, not repeated here.
603. The server sends the position points obtained by the N shooting terminals to the M viewing terminals.
In this embodiment, for the specific implementation of this step refer to step 403 above. It should be noted that the position points sent by the server to the M viewing terminals can be regarded as subscribable options, informing the M viewing terminals that selecting a position point yields the video data shot by a shooting terminal there.
It should be noted that in step 603, after the server sends the position points obtained by the N shooting terminals to the M viewing terminals, the M viewing terminals each send target position points to the server; these target position points can be regarded as subscription information used to subscribe, from the server side, to the video data shot by shooting terminals at those points. During this process, the server can use a first preset rule to collect statistics on how popular each shooting terminal is.
Optionally, the server determining according to the first preset rule that the Jth terminal is a popular terminal at the target position point may include the following steps.
1. The server obtains the number of times each of the R shooting terminals at the target position point has been selected by viewing users.
2. The server sorts the R shooting terminals by the number of times they have been selected.
3. The server takes the top n shooting terminals in the ranking as popular terminals; optionally, n is a positive integer greater than or equal to 1 and can be defined by developers, which is not limited by this application.
In this embodiment, when n equals 1, if the Jth terminal is the terminal selected most by viewing terminals among the R shooting terminals corresponding to the target position point, the Jth terminal is determined to be the popular terminal of the target position point.
604. The Qth terminal obtains the target position point and target terminal selected by the user.
In this embodiment, for how the Qth terminal obtains the target position point selected by the user, refer to step 504 or 404 above, not repeated here. If the target position point selected by the user corresponds to video data from multiple shooting terminals, the interactive interface of the Qth terminal shows the user the interface of FIG. 5b; when the user clicks a position point, for example position point A, the Qth terminal takes position point A as the target position point. If video data shot by R shooting terminals is available at position point A, the Qth terminal displays an option menu on the interactive interface, showing the identification information of the R selectable shooting terminals at position point A, R being a positive integer less than or equal to N.
Optionally, the option menu may show the descriptive information of the R shooting terminals, such as nicknames, text introductions, or avatars, so that the viewing user can decide and choose a target terminal to watch.
Optionally, the descriptive information in the option menu may also be used to remind the user that a certain shooting terminal in the menu is a popular terminal; for example, when the Jth terminal in the option menu is a popular terminal, an asterisk, or any preset popularity mark, may be placed after that terminal, giving the user a recommendation when making a choice.
605. The server obtains the target position point and the identification information of the target terminal from the Qth terminal.
In this embodiment, according to the target position point and the identification information of the target terminal, the server sends the Qth terminal the video data shot by the target terminal at the target position point. At this time, if the Jth terminal among the R shooting terminals is a popular terminal but the user did not select the Jth terminal as the target terminal, the following step 606 is performed.
606. The server splices the Jth video data and the target video data into recommended video data.
In this embodiment, the Jth video data is the video data shot by the popular Jth terminal at the target position point, and the target video data is the video data shot by the user-selected target terminal there. Depending on the usage scenario, splicing the two videos may follow the following two schemes.
1. Recorded-broadcast scenario: here the video data is not real-time, and the server sends the viewing terminal video data stored locally on the server. The server therefore edits the video, specifically by splicing the Jth video data and the target video data together; for example, if the target video data is 20 seconds long in total, the first ten seconds of the recommended video data use the content of the target video data and the last ten seconds use the content of the Jth video data.
2. Live-broadcast scenario: here the video data is transmitted in real time and the server sends the viewing device a real-time live video stream directly. Splicing then takes the concrete form of switching between live streams; for example, after the user has watched the target terminal's stream for a preset duration, the stream is switched to the Jth terminal's stream. This preset switching between the two streams constitutes the recommended video stream.
607. The server sends the recommended video data to the Qth terminal.
In this embodiment, the server sends the recommended video data to the Qth terminal, so that when watching, the user of the Qth terminal sees not only the video shot by the chosen target terminal but also the video shot by the popular terminal; since the popular terminal's video was chosen by most users, the Qth terminal's user may like it too, which ensures video transmission efficiency and improves user experience.
In this embodiment, the popularity of the multiple shooting terminals at the target position point is judged by the first preset rule, and the video shot by the popular terminal is recommended to viewing users by way of spliced video, which improves the efficiency of video sharing, ensures that viewing users see more popular videos, and improves user experience.
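For ease of understanding, the two splicing schemes of step 606 can be sketched as follows; clips and streams are modeled as frame sequences, and the switch point and all names are illustrative assumptions rather than the claimed implementation:

```python
def splice_recommended(target_clip, popular_clip):
    """Recorded-broadcast case: the first half of the recommended
    video uses the user-selected target video and the second half
    uses the popular terminal's video (clips as frame lists)."""
    half = len(target_clip) // 2
    return target_clip[:half] + popular_clip[half:]

def recommended_stream(target_stream, popular_stream, switch_after):
    """Live case: relay the target stream for `switch_after` frames,
    then switch over to the popular terminal's stream."""
    for i, (a, b) in enumerate(zip(target_stream, popular_stream)):
        yield a if i < switch_after else b

# recorded example: 20-second clips at 1 frame/second
target = [f"target_{s}" for s in range(20)]
popular = [f"popular_{s}" for s in range(20)]
print(splice_recommended(target, popular)[9:11])  # switch at 10 s
```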
It should be noted that at the same position point, different shooting terminals may adopt different shooting angles. For example, suppose position point A in an exhibition hall is an exhibition room: standing at position point A, a terminal shooting to the left sees a first scene, and one shooting to the right sees a second scene. Although shot at the same position point, the videos show completely different content because of the different shooting angles. Therefore, as a preferred implementation, besides the position point, the viewing user should also be allowed to select the shooting angle at the position point, giving the viewing user richer content to choose from. This case is described in detail below with reference to the accompanying drawings.
Referring to FIG. 7a, Embodiment 4 of the video sharing method provided by the embodiments of this application includes the following steps.
701. The server obtains an environment model.
In this embodiment, the environment model records the shooting environments of the N shooting terminals. For example, for an exhibition, the environment of the exhibition hall must be modeled to obtain the environment model; depth data and environment data of each position of the hall can be obtained, for example, by a depth camera combined with an infrared camera and a visual camera. Environment modeling methods are prior art, and those skilled in the art can choose any modeling method as required; details are not repeated here.
702. N shooting terminals each acquire their own video data and position points.
In this embodiment, the N shooting terminals shoot their respective video data from different shooting angles at various position points in the shooting environment, the shooting environment being an environment that has been modeled, for example the interior of an exhibition hall.
703. The server obtains the video data and position points from the N shooting terminals.
In this embodiment, refer to step 402 above; not repeated here.
704. The server compares the video data shot by the N shooting terminals with the environment model and determines the shooting angles of the N shooting terminals.
In this embodiment, the video data shot by the N shooting terminals records scene information of the shooting environment; by matching this information against the environment model, the server knows which scene of the shooting environment the current video data captures, and, combined with the position point information of the video data, it knows the shooting position and shooting angle of every piece of video data.
Specifically, the server can match video data against the environment model through visual positioning technology; optionally, as a preferred embodiment, visual positioning can be implemented as follows.
1. Coarse positioning by GPS.
In this embodiment, a terminal uploads its GPS address along with the video data. Positioning accuracy at the meter level is required; the shooting terminals are smartphones and cameras, and the application scenarios are large venues such as stadiums, museums, and schools, with many occlusions, heavy foot traffic, and unchanging surroundings. Considering the characteristics of the source terminals and the surroundings, GPS is initially chosen for coarse positioning. Optionally, other positioning methods in the prior art can also be used, which is not limited by the embodiments of this application.
2. Discretize the target region.
In this embodiment, the target region is the region shot by the shooting terminals. The region is divided into multiple sub-regions and the longitude and latitude range of each sub-region is recorded; within each sub-region, data is collected at intervals of m meters. Further, the 360° direction is likewise discretized into k angles; image information and pose information are collected at the k angles, and the camera pose corresponding to each image is recorded.
Further, the camera pose is defined as (n, xi, yi, zi, αi, βi, v), where the x-axis points due north, the y-axis due east, and the z-axis vertically upward from the ground; n is the sub-region; xi, yi, zi are the offsets of the position point on each axis relative to the center of the sub-region; αi is the angle with due north, clockwise positive; βi is the angle with the z-axis, clockwise positive; and v is the moving speed.
3. Determine the target sub-region of the shooting terminal from the coarse GPS positioning information.
In this embodiment, from the coarse GPS positioning information it can first be judged in which of the sub-regions divided in step 2 the shooting terminal lies; the target sub-region is one of the sub-regions obtained in step 2.
4. Obtain the image information of the target sub-region from the server.
In this embodiment, the image information of each sub-region is preset in the server; for example, if the target region is an exhibition hall, the staff first acquire a panoramic image of the hall as the image information of the whole exhibition hall, and after the discretization of step 2 the image information of each sub-region is obtained.
5. At preset intervals, take one frame of the video data and match it against the image information of the target sub-region.
In this embodiment, a matching algorithm extracts the overall scene features of the image in the video frame and compares them with the features of the data set for similarity; the image with the highest similarity score is found, denoted image I, and taken as the positioning photo of this terminal.
6. Obtain from the server the pose information corresponding to image I as the key-frame pose information.
In this embodiment, the pose information of the preset images in each sub-region is measured when the images are preset and includes position information, shooting-angle information, and so on; in this way, the pose information of the video uploaded by the current shooting terminal can be matched.
7. Attach the key-frame pose information to the identifier of the video data.
In this embodiment, attaching the key-frame pose information to the identifier of the video data gives the video data uploaded by the shooting terminal pose information, from which the server and viewing terminals can learn the pose of the video data.
705. The server sends the position points and shooting angles of the N shooting terminals to the M viewing terminals.
In this embodiment, the server sends the acquired position points and shooting angles to multiple viewing terminals, so that these viewing terminals can know at which position points video data can currently be seen and which shooting angles are included.
706. The Qth terminal obtains the target position point and target shooting angle selected by the user.
In this embodiment, the interface displayed by the Qth terminal may be as shown in FIG. 7b, a map of an exhibition hall marked with position points sent by the server, through which the user can intuitively see at which positions in the exhibition hall video data shot by shooting terminals can be viewed. Further, these position points also carry information indicating the shooting angles. For example, in FIG. 7b, position point A is a selectable position point; at this point the Jth terminal shoots toward viewing angle A1 and another shooting terminal shoots toward viewing angle A2, so the first arrow 7001 extending from position point A represents viewing angle A1 and the second arrow 7002 represents viewing angle A2. After selecting position point A, the user selects the first arrow 7001; the Qth terminal then knows that the target position point is position point A and the target shooting angle is viewing angle A1, i.e., the video shot by the Jth terminal at position point A is selected.
707. The server obtains the target position point and target shooting angle from the Qth terminal.
In this embodiment, the Qth terminal sends the target position point and target shooting angle to the server, so that the server knows the information of the video data the Qth terminal expects to obtain.
708. The server sends the Jth video data to the Qth terminal according to the target shooting angle.
In this embodiment, the Jth video data is the video data obtained by the Jth terminal standing at the target position point and shooting from the target shooting angle.
In this embodiment, the server obtains the videos shot by the shooting terminals and visually recognizes them according to the pre-built environment model, thereby learning the shooting angle of each video. The server sends the position points and shooting angles to the viewing terminals, so that the viewing user can select not only the position point but also the shooting angle; this refines the granularity of the selectable video types, offers users a more personalized range of choices, and further enhances the richness of video sharing.
It should be noted that in some scenarios, for example in an exhibition hall, the N shooting terminals shoot video at different locations, some of which may be popular; for example, in some popular exhibition rooms many shooting terminals shoot at the same time, and although they shoot at the same position point, each terminal's shooting angle differs. For such a popular location there may be very many shooting terminals to choose from. To let viewing terminals see more shooting angles of a popular location at once, the videos of the popular position point can be spliced to generate a multi-angle panoramic video, so that viewing users can watch a multi-angle panoramic video at one time.
Referring to FIG. 8a, Embodiment 5 of the video sharing method provided by the embodiments of this application includes the following steps.
For steps 801 to 805, refer to steps 701 to 705 above; not repeated here.
806. The server determines, according to the second preset rule, that the target position point is a popular position point.
In this embodiment, the second preset rule may be any rule used to identify popular position points; as a preferred embodiment, its implementation may include the following steps.
1. The server obtains the position points sent by the M viewing terminals.
2. The server counts how many times each position point has been selected by the M viewing terminals and sorts them.
3. The server takes the top n position points in the sorted order as popular position points.
In this embodiment, a position point selected for viewing by more viewing terminals is judged to be a popular position point; n is a positive integer greater than or equal to 1, which is not limited by this application. As an example, when n equals 1, the server takes the position point selected by the most users as the popular position point.
807. The server obtains the P pieces of video data shot by the R shooting terminals at the target position point.
In this embodiment, the R shooting terminals are all the shooting terminals shooting video at the target position point; P is a positive integer greater than 1 and may or may not be equal to R.
808. The server splices the P pieces of video data into one piece of panoramic video data according to their shooting angles.
In this embodiment, the panoramic video data records the pictures of the P pieces of video data. For example, with P = 2, the server has obtained video data from two different angles at the target position point, a first video and a second video. Referring to FIG. 8b, the scene 8001 has two shooting angles, a first angle 8001a and a second angle 8001b; the first video 8002 shoots the scene from the first angle 8001a and the second video 8003 from the second angle 8001b, each capturing a different part of the scene. The server then fuses the first video 8002 and the second video 8003 according to their shooting angles to obtain one piece of panoramic video data, a third video 8004, which records the pictures of both the first video 8002 and the second video 8003 and thus the content of the two shooting angles at once.
809. When the server obtains the target position point from the Qth terminal, the server sends the panoramic video data to the Qth terminal.
In this embodiment, when the Qth terminal sends the target position point to the server, the server knows that the Qth terminal has selected a popular position point; the server then sends the panoramic video data corresponding to the target position point to the Qth terminal, so that the Qth terminal can see the panoramic video of the popular spot.
In this embodiment, once the server has obtained the shooting angles of the videos shot by the shooting terminals, it further determines by the second preset rule which position point is popular, and then splices the multiple popular videos according to the shooting angles of the multiple videos at the popular position point, obtaining one panoramic video, so that viewing users can see the multi-angle content of a popular position point at once through the panoramic video data, which improves the efficiency of video sharing and the richness of the video content.
It should be noted that, as stated above, the method provided by the embodiments of this application can be used in a live-broadcast scenario as well as a recorded-broadcast scenario. In the recorded-broadcast scenario, the server stores the video data uploaded by shooting terminals at different time points, so at the same position point there may be video data shot by shooting terminals in different periods; when the user selects that position point, the viewing user must be asked which time point's video data to watch. For ease of understanding, this working mode is described in detail below with reference to the accompanying drawings.
Referring to FIG. 9a, Embodiment 6 of the video sharing method provided by the embodiments of this application includes the following steps.
901. N shooting terminals each acquire their own video data, position points, and time points.
In this embodiment, for how the N shooting terminals acquire their video data and position points refer to step 401 above; in addition, when shooting video data, the terminals also record the time point at which the video data was acquired.
902. The server obtains the video data, position points, and time points from the N shooting terminals.
In this embodiment, the N shooting terminals send the acquired video data, position points, and time points to the server.
903. The server sends the position points and time points obtained by the N shooting terminals to the M viewing terminals.
In this embodiment, M is a positive integer greater than 1; the server sends the acquired position points and time points to multiple viewing terminals, so that these terminals can know at which position points video data shot by shooting terminals can currently be seen and which time points' video data are available for selection at each position point.
904. The Qth terminal obtains the target position point and target time point selected by the user.
In this embodiment, the Qth terminal is one of the M viewing terminals; its interactive interface shows the user the interface of FIG. 9b. When the user clicks a position point, for example position point A, the Qth terminal takes position point A as the target position point; if position point A has videos of multiple periods available, the interface further displays an option menu 9001 showing different time points for the user to select video data of different time points.
905. The server obtains the target time point and target position point from the Qth terminal.
In this embodiment, the Qth terminal sends the acquired target time point and target position point to the server, so the server knows the target time point and target position point selected by the Qth terminal.
906. The server sends the Jth video data to the Qth terminal according to the target position point and target time point.
In this embodiment, the Jth video data is the video data acquired by the Jth terminal at the target time point at the target position point.
In this embodiment, for the case where video data was shot at one position point in different periods, when a viewing user selects a position point the user is further asked which period's content to watch, refining the selectable granularity in the time dimension: the viewing user can select not only the video data of different position points but also, for the same position point, the video data of different time points, making the shared content richer.
请参阅图10a,如图10a所示,本申请实施例所提供的视频分享方法的实施例七,包括以下步骤。
1001、N个拍摄终端分别获取各自的视频数据、位置点和时间点。
本实施例中,N个拍摄终端分别获取各自的视频数据和位置点的工作方案可参阅上述步骤401,在此基础上,N个拍摄终端在拍摄视频数据时一并获取获取到该视频数据的时间点。
1002、服务器从N个拍摄终端分别获取视频数据、位置点和时间点。
本实施例中,N个拍摄终端将获取到的视频数据、位置点和时间点发送给服务器。
1003、服务器从N个拍摄终端发送的视频数据中获取S个视频数据。
本实施例中,S个视频数据为目标时间段内目标位置点上拍摄的所有视频数据,目标时间段为预设的时间段,本领域技术人员可以根据实际需要进行设置,对此本申请并不进行限定。
1004、服务器将S个视频数据记录的特征拼接在一起得到融合视频数据。
本实施例中,融合视频数据记录有目标时间段内目标位置点上拍摄的所有特征,例如图10b所示,对于同一目标位置点的同一拍摄视角,在目标时间段的第一时刻,一个拍摄终端拍摄了第一视频10041,第一视频10041中记录的特征为场景中的一名路人甲10041a;在目标时间段的第二时刻,另一个拍摄终端拍摄了第二视频10042,第二视频10042中记录的特征为场景中的一名路人乙10041b。可选地,服务器可以根据特征识别算法从第一视频10041和第二视频10042中分别提取出路人甲10041a和路人乙10041b,特征识别和提取算法为本领域现有技术,本领域技术人员可根据需要选择合适的特征识别和提取算法,对此本申请实施例并不进行限定。进一步地,当服务器提取到路人甲10041a和路人乙10041b两个特征后,将两个特征拼接在一起,得到一个融合视频数据10043,该融合视频数据10043中在同一场景下同时记录有路人甲10041a和路人乙10041b,从而在一个视频中就能看到不同时段的视频所记录的特征信息。
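As one illustration of the kind of feature extraction and compositing described, the Python sketch below uses OpenCV background subtraction to lift the moving feature out of a frame and paste it onto a shared background. MOG2 standing in for the unspecified feature recognition algorithm, and single-frame compositing standing in for full video fusion, are both assumptions of the sketch.

```python
import cv2
import numpy as np

def extract_foreground(frame, subtractor):
    """Mask out the moving feature (e.g. a passer-by) in one frame."""
    mask = subtractor.apply(frame)
    # Remove speckle noise so only the coherent feature region remains.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

def fuse_features(background, frames):
    """Composite the foreground features of several frames, shot at the
    same position point in different periods, onto one background."""
    fused = background.copy()
    for frame in frames:
        subtractor = cv2.createBackgroundSubtractorMOG2()
        subtractor.apply(background)       # learn the shared background first
        mask = extract_foreground(frame, subtractor)
        fused[mask > 0] = frame[mask > 0]  # paste the feature pixels
    return fused
```

Run per keyframe over the S recordings, this yields frames in which passers-by from different periods appear together, which is exactly the property the fused video data 10043 is described as having.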
1005. The server sends the position points to the M viewing terminals.
In this embodiment, this step may refer to step 403 above and is not repeated here.
1006. The Q-th terminal acquires the target position point selected by the user.
In this embodiment, for the specific way in which the Q-th terminal acquires the target position point selected by the user, refer to step 404 above, which is not repeated here. When the target position point corresponds to video data from multiple periods, the following steps are performed.
1007. The server acquires the target position point from the Q-th terminal.
In this embodiment, if the target position point corresponds to fused video data, the following step 1008 is performed.
1008. The server sends the fused video data to the Q-th terminal.
In this embodiment, the fused video data is the data obtained in step 1004, so that from this one video the user of the Q-th terminal can see the features recorded in all the periods at once.
In this embodiment, for the case where video data has been shot at one position point during different periods, when the user of a viewing terminal selects a position point that corresponds to several videos from different time periods, the server stitches the videos acquired at all the time points within the given period. The user of the viewing terminal can thus see, within a single video, the features recorded at all the time points of that period, which spares the user the step of choosing a time point, improves the efficiency of video transmission, and reduces the amount of data transferred between the server and the viewing terminal.
Further, the solution of Embodiment 7 of this application can be combined with the solution of Embodiment 5: during video stitching, not only the video data of different time points but also the video data of different shooting angles is stitched, yielding, for a single position point, a panoramic all-period video. This video records not only the video data of every shooting angle at that position point but also the video features recorded at each shooting angle during different periods, further enriching the video content. The user of a viewing terminal can see the content of all angles and all periods through just one video, which on the one hand reduces the number of interactions between the server and the viewing terminal and improves the efficiency of video sharing, and on the other hand gives the user of the viewing terminal a better experience.
The embodiments of the video sharing method provided by the embodiments of this application have been introduced above. It should be noted that in each of them, a shooting terminal must acquire its current position point in real time while shooting video data for the solution to work. Further, in a live-streaming scenario a shooting terminal may be moving while uploading video data, i.e. its position point changes in real time. The shooting terminal therefore has to acquire its current position point accurately to guarantee the smooth operation of the solution of this application. The positioning method of the shooting terminal in the video sharing method provided by the embodiments of this application is described in detail below.
Referring to Fig. 11a, in the system architecture shown in Fig. 11a, shooting scene 1101 includes multiple shooting terminals, which connect to cloud server 1103 through network translation layer 1102 and upload position points, video data and other information to cloud server 1103. The access network of the shooting terminals may be a 4G network, a 5G network or a WIFI network; those skilled in the art can choose the required access method according to actual needs, which the embodiments of this application do not limit.
Further, in the solution of this application there are two different positioning environments in which a shooting terminal acquires its position point, indoor and outdoor, and a shooting terminal may switch between the two. When the positioning environment changes, the shooting terminal should switch between positioning methods in step with it.
The positioning methods for a shooting terminal may include visual positioning, optical positioning, WIFI fingerprint positioning and global positioning system (GPS) positioning, among others.
In concrete operation, the shooting terminal identifies through visual positioning whether its current position is indoors or outdoors. For example, the shooting terminal sends the currently acquired video data to the server in real time, and the server determines from the pre-built environment model whether the picture in the video data is shot indoors or outdoors; for how the server obtains the environment model, refer to step 701 above, which is not repeated here. When the server judges that the shooting terminal is currently indoors, it instructs the shooting terminal to use optical positioning or WIFI fingerprint positioning, both of which work efficiently indoors and position the shooting terminal fairly precisely; optionally, other indoor positioning methods may be adopted, which the embodiments of this application do not limit. When the server judges that the shooting terminal is currently outdoors, it instructs the shooting terminal to use GPS positioning; optionally, other outdoor positioning methods may be adopted, which the embodiments of this application do not limit.
In this embodiment, the server first judges whether the shooting terminal is currently indoors or outdoors. When the shooting terminal is indoors, the server instructs it to use an indoor positioning method; when it is outdoors, the server instructs it to use an outdoor positioning method, so that the shooting terminal can acquire accurate position information for its position point in any scenario.
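The switching rule can be pictured with the following minimal Python sketch. The environment label is taken to be the output of the visual check against the environment model, and the enumeration of methods is an assumption; a deployment may support other indoor or outdoor positioning methods.

```python
from enum import Enum

class Positioning(Enum):
    OPTICAL = "optical"
    WIFI_FINGERPRINT = "wifi_fingerprint"
    GPS = "gps"

def choose_positioning(environment: str, wifi_available: bool = True) -> Positioning:
    """Server-side switching rule: the environment label ('indoor' or
    'outdoor') comes from visual positioning against the environment model."""
    if environment == "indoor":
        # Indoors, optical or WIFI fingerprint positioning works efficiently.
        return Positioning.WIFI_FINGERPRINT if wifi_available else Positioning.OPTICAL
    # Outdoors the terminal is instructed to use GPS.
    return Positioning.GPS

assert choose_positioning("outdoor") is Positioning.GPS
```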
Further, referring to Fig. 11b, the video sharing and video acquisition methods provided by the embodiments of this application can be implemented with the architecture shown in Fig. 11b. As shown in Fig. 11b, the system includes a user side, a server side 1106 and an administrator side 1107, where the user side includes shooting terminals 1104 and viewing terminals 1105. A shooting terminal 1104 shoots video and uploads it to the server side 1106, and the server side 1106 sends the video shot by the shooting terminal 1104 to the viewing terminals 1105; during this process the administrator side 1107 schedules and manages the operation of the system. There may be multiple viewing terminals 1105 and multiple shooting terminals 1104.
As shown in Fig. 11b, the shooting terminal 1104 uploads the video data it shoots through app 11041, together with the positioning data at the time of shooting; the video data and position data are uploaded to the server side 1106 through the wireless module (Router/Proxy) 11042. The server side 1106 contains a streaming unit (Streaming Server) 11061 that receives and processes the videos uploaded by the shooting terminals 1104. The streaming unit 11061 sends the uploaded video data as video streams to the localization unit (Localization Server) 11062 and the video processing unit (Video Processing Server) 11063. The localization unit 11062 forwards the video streams to the administrator side 1107, so that the administrator side 1107 can manage the video streams obtained by the server side 1106; the video processing unit 11063 performs stitching and visual positioning on the video streams, for whose specific implementation refer to the above embodiments, not repeated here. The video processing unit 11063 also sends the stitched videos it produces to the administrator side 1107, so that the administrator side 1107 can view and manage them.
Further, the server side 1106 also includes a map unit 11064, which sends the map information of the target region to the viewing terminals 1105. Optionally, the target region is the working area of the shooting terminals 1104, for example an exhibition hall, a campus or a library; the map information may be a two-dimensional plan map or a panoramic map, which the embodiments of this application do not limit.
After obtaining the map information sent by the server side 1106, the viewing terminal 1105 shows a map interface on its display through app 11051, along with the position points on the map where videos are available for viewing. Optionally, the app 11051 used by the viewing terminal 1105 and the app 11041 used by the shooting terminal 1104 may be the same app or different apps, which the embodiments of this application do not limit. The viewing terminal 1105 contains a video distribution unit (Video Distribute Module) 11052; when the user of the viewing terminal 1105 selects a position point on the map interface, the viewing terminal 1105 sends that position point to the server side 1106 through the video distribution unit 11052. The server side 1106 judges according to the position-point information: if only one shooting terminal 1104 has uploaded video data at that position point, the server side 1106 sends the video data corresponding to that position point directly through the streaming unit 11061 to the video distribution unit 11052 of the viewing terminal 1105, and the video distribution unit 11052 displays that video data on the viewing terminal's screen through the app, so that the user can watch the video data corresponding to the selected position point. Optionally, if the position point the user selected corresponds to several pieces of video data, the user may be asked to confirm which of them to choose, or the stitched video data may be sent to the user directly through the video processing unit 11063, where the stitched video data is formed by stitching all the video data of that position point; for the specific stitching method refer to the above embodiments, not repeated here.
Optionally, the above video data may be transmitted live, or the uploaded videos may be stored in the server side 1106 and served as recordings, or both live streaming and recorded playback may be supported; the embodiments of this application do not limit this.
With the architecture shown in Fig. 11b, there may be multiple shooting terminals 1104 and multiple viewing terminals 1105, so that, combined with the video sharing and video acquisition methods provided by the embodiments of this application, a sharing process from multiple video sources to multiple users is achieved.
A video sharing method provided by an embodiment of this application includes: a server acquires video data and position points from N shooting terminals respectively, where the video data records the video shot by the shooting terminal, the position point records the position of the shooting terminal when it acquired the video data, and N is a positive integer greater than 1; the server sends the position points acquired by the N shooting terminals to M viewing terminals, where M is a positive integer greater than 1; the server acquires a target position point from a Q-th terminal, where the Q-th terminal is one of the M viewing terminals, 1≤Q≤M, and the target position point is one of the position points acquired by the N shooting terminals; the server sends J-th video data to the Q-th terminal according to the target position point, where the J-th video data is the video data shot by a J-th terminal at the target position point, the J-th terminal being the J-th of the N shooting terminals, 1≤J≤N. With this method, multiple shooting terminals can share video pictures with multiple viewing terminals, and the users of the viewing terminals can freely choose the video pictures shared by shooting terminals at different positions, realizing a many-to-many video sharing mode that can be applied in various scenarios and improves the richness and convenience of video sharing.
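Reduced to its data flow, the method can be pictured as the following server-side Python skeleton. The in-memory store, one current upload per position point, and the method names are all simplifying assumptions made for illustration.

```python
class VideoSharingServer:
    """Skeleton of the many-to-many sharing flow described above."""

    def __init__(self):
        self._videos = {}  # position point -> video data shot there

    def receive_upload(self, position: str, video_data: bytes) -> None:
        """Step 1: acquire video data and position point from a shooting terminal."""
        self._videos[position] = video_data

    def advertise_positions(self) -> list[str]:
        """Step 2: the position-point list sent to the M viewing terminals."""
        return list(self._videos)

    def handle_selection(self, target_position: str) -> bytes:
        """Steps 3-4: resolve the Q-th terminal's target position point
        to the video data shot there."""
        return self._videos[target_position]
```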
Described in terms of hardware structure, the above method may be implemented by one physical device, jointly by several physical devices, or by a logical function module within one physical device; the embodiments of this application do not specifically limit this.
For example, the above method can be implemented by the network device in Fig. 12. Fig. 12 is a schematic diagram of the hardware structure of a network device provided by an embodiment of this application; the network device may be the network device in the embodiments of the present invention, or a terminal device. The network device includes at least one processor 1201, a communication line 1202, a memory 1203 and at least one communication interface 1204.
The processor 1201 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the program of this application's solution.
The communication line 1202 may include a path that conveys information between the above components.
The communication interface 1204, using any transceiver-like apparatus, is used to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), wireless local area networks (WLAN), and so on.
The memory 1203 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM) or another type of dynamic storage device that can store information and instructions; it may also be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compressed discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto. The memory may exist independently and be connected to the processor through the communication line 1202, or the memory may be integrated with the processor.
The memory 1203 is used to store computer-executable instructions for executing the solution of this application, with execution controlled by the processor 1201. The processor 1201 executes the computer-executable instructions stored in the memory 1203, thereby implementing the methods provided by the embodiments of this application.
Optionally, the computer-executable instructions in the embodiments of this application may also be called application program code, which the embodiments of this application do not specifically limit.
In concrete implementation, as an embodiment, the processor 1201 may include one or more CPUs, such as CPU0 and CPU1 in Fig. 12.
In concrete implementation, as an embodiment, the network device may include multiple processors, such as the processor 1201 and the processor 1205 in Fig. 12. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor here may refer to one or more devices, circuits, and/or processing cores for processing data (e.g. computer program instructions).
In concrete implementation, as an embodiment, the network device may also include an output device 1205 and an input device 1206. The output device 1205 communicates with the processor 1201 and can display information in a variety of ways. For example, the output device 1205 may be a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, or a projector. The input device 1206 communicates with the processor 1201 and can receive user input in a variety of ways. For example, the input device 1206 may be a mouse, a keyboard, a touchscreen device or a sensing device.
The above network device may be a general-purpose device or a dedicated device. In concrete implementation, the network device may be a server, a wireless terminal device, an embedded device or a device with a structure similar to that in Fig. 12. The embodiments of this application do not limit the type of the network device.
The embodiments of this application may divide the network device into functional units according to the above method examples; for example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of this application is schematic and merely a logical function division; there may be other division methods in actual implementation.
For example, in the case where the functional units are divided in an integrated manner, Fig. 13 shows a schematic structural diagram of a video sharing apparatus.
As shown in Fig. 13, the video sharing apparatus provided by the embodiments of this application includes:
an acquiring unit 1301, configured to acquire video data and position points from N shooting terminals respectively, where the video data records the video shot by the shooting terminal, the position point records the position of the shooting terminal when it acquired the video data, and N is a positive integer greater than 1;
a sending unit 1302, configured to send the position points, acquired by the acquiring unit 1301 from each of the N shooting terminals, to M viewing terminals, where M is a positive integer greater than 1;
the acquiring unit 1301 is further configured to acquire a target position point from a Q-th terminal, where the Q-th terminal is one of the M viewing terminals, 1≤Q≤M, and the target position point is one of the position points acquired by the N shooting terminals;
the sending unit 1302 is further configured to send J-th video data to the Q-th terminal according to the target position point acquired by the acquiring unit 1301, where the J-th video data is the video data shot by a J-th terminal at the target position point, the J-th terminal being the J-th of the N shooting terminals, 1≤J≤N.
Optionally, the sending unit 1302 is further configured to send identification information of R shooting terminals to the Q-th terminal, where the R shooting terminals are the terminals among the N shooting terminals that shot video data at the target position point, R≤N, and the identification information marks the corresponding shooting terminal;
the acquiring unit 1301 is further configured to acquire identification information of a target terminal from the Q-th terminal, the target terminal being one of the R shooting terminals;
the sending unit 1302 is further configured to send to the Q-th terminal the target video data shot by the target terminal at the target position point.
Optionally, the apparatus further includes a first stitching unit 1303. When the server judges, according to a first preset rule, that the J-th terminal is a popular terminal at the target position point and the J-th terminal is not the target terminal, the first stitching unit 1303 is configured to:
stitch the J-th video data and the target video data into recommended video data;
the sending unit 1302 is further configured to send the recommended video data to the Q-th terminal.
Optionally, the apparatus further includes a modeling unit 1304 and a comparison unit 1305. The modeling unit 1304 is configured to acquire an environment model, the environment model recording the shooting environment of the N shooting terminals;
the comparison unit 1305 is configured to compare the video data shot by each of the N shooting terminals with the environment model acquired by the modeling unit 1304 to determine the shooting angles of the N shooting terminals;
the sending unit 1302 is further configured to send the shooting angles of the N shooting terminals to the M viewing terminals;
the acquiring unit 1301 is further configured to acquire a target shooting angle from the Q-th terminal, the target shooting angle being one of the shooting angles of the N shooting terminals;
the sending unit 1302 is further configured to send the J-th video data to the Q-th terminal according to the target shooting angle, the J-th video data being the video data shot by the J-th terminal at the target shooting angle.
Optionally, the apparatus further includes a second stitching unit 1306. When the server determines, according to a second preset rule, that the target position point is a popular position point, the second stitching unit 1306 is configured to:
acquire P pieces of video data shot by the R shooting terminals at the target position point, P being a positive integer greater than 1;
stitch the P pieces of video data into one piece of panoramic video data according to the shooting angles of the P pieces of video data, the panoramic video data recording the pictures shot in the P pieces of video data;
the sending unit 1302 is further configured to:
send the panoramic video data to the Q-th terminal when the server acquires the target position point from the Q-th terminal.
Optionally, the acquiring unit 1301 is further configured to acquire time points from the N shooting terminals respectively, the time point recording the time at which the shooting terminal acquired the video data;
the sending unit 1302 is further configured to send the time points acquired by each of the N shooting terminals to the M viewing terminals;
the acquiring unit 1301 is further configured to acquire a target time point from the Q-th terminal, the target time point being one of the time points acquired by the N shooting terminals;
the sending unit 1302 is further configured to send the J-th video data to the Q-th terminal according to the target time point, the J-th video data being the video data acquired by the J-th terminal at the target time point.
Optionally, the apparatus further includes a third stitching unit 1307. The acquiring unit 1301 is further configured to acquire S pieces of video data from the video data sent by the N shooting terminals, the S pieces of video data being all the video data shot at the target position point within a target time period;
the third stitching unit 1307 is configured to stitch together the features recorded by the S pieces of video data to obtain fused video data, the fused video data recording all the features shot at the target position point within the target time period;
the sending unit 1302 is further configured to send the fused video data to the Q-th terminal.
As shown in Fig. 14, the video acquisition apparatus provided by the embodiments of this application includes:
a first acquiring unit 1401, configured to acquire at least one position point from a server, the position point being uploaded to the server by N shooting terminals after shooting video data within a target region, N being a positive integer greater than 1;
a display unit 1402, configured to show a map interface on a display interface, the map interface being the map interface of the target region and including the at least one position point acquired by the first acquiring unit 1401;
a second acquiring unit 1403, configured to acquire a target position point selected by a user from the map interface, the target position point being one of the at least one position point;
a sending unit, configured to send to the server the target position point acquired by the second acquiring unit 1403;
the first acquiring unit 1401 is further configured to acquire J-th video data from the server, the J-th video data being the video data shot by a J-th terminal at the target position point within the target region, the J-th terminal being one of the N shooting terminals, 1≤J≤N.
Optionally, when the target position point corresponds to video data shot by multiple shooting terminals, the first acquiring unit 1401 is further configured to acquire from the server identification information of R shooting terminals, the R shooting terminals all being shooting terminals that shot video data at the target position point, 1≤R≤N;
the display unit 1402 is further configured to show the identification information of the R shooting terminals on the display interface;
the second acquiring unit 1403 is further configured to acquire the identification information of the J-th terminal selected by the user, the J-th terminal being one of the R shooting terminals;
the sending unit is further configured to send the identification information of the J-th terminal to the server.
Optionally, the first acquiring unit 1401 is further configured to acquire the shooting angles of the N shooting terminals from the server;
the display unit 1402 is further configured to show the shooting angles of the R shooting terminals on the display interface;
the second acquiring unit 1403 is further configured to acquire a target shooting angle selected by the user, the target shooting angle being one of the shooting angles of the R shooting terminals;
the sending unit is further configured to send the target shooting angle to the server, the target shooting angle being used to request that the server send the video data shot by a shooting terminal at the target shooting angle.
Optionally, the first acquiring unit 1401 is further configured to acquire at least one time point from the server, the at least one time point being the time points at which the N shooting terminals shot video data;
the display unit 1402 is further configured to show the at least one time point on the display interface;
the second acquiring unit 1403 is further configured to acquire a target time point selected by the user, the target time point being one of the at least one time point;
the sending unit is further configured to send the target time point to the server, the target time point being used to request that the server send the video shot by a shooting terminal at the target time point.
In the above embodiments, implementation may be wholly or partly by software, hardware, firmware or any combination thereof. When implemented by software, implementation may be wholly or partly in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wire (e.g. coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g. infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that a computer can store, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g. floppy disk, hard disk, magnetic tape), an optical medium (e.g. DVD), or a semiconductor medium (e.g. a solid state disk (SSD)), etc.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed methods, devices and computer storage media may be implemented in other ways. For example, the apparatus embodiments described above are merely schematic; for example, the division of the units is merely a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses or units, and may be electrical, mechanical or other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of this application may be integrated in one processing unit, or each unit may exist physically alone, or two or more units may be integrated in one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If implemented in the form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above embodiments are merely intended to illustrate the technical solution of this application, not to limit it. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements for some of their technical features, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (17)

  1. A video sharing method, characterized by comprising:
    a server acquiring video data and position points from N shooting terminals respectively, wherein the video data records the video shot by the shooting terminal, the position point records the position of the shooting terminal when it acquired the video data, and N is a positive integer greater than 1;
    the server sending the position points acquired by each of the N shooting terminals to M viewing terminals, wherein M is a positive integer greater than 1;
    the server acquiring a target position point from a Q-th terminal, wherein the Q-th terminal is one of the M viewing terminals, 1≤Q≤M, and the target position point is one of the position points acquired by the N shooting terminals;
    the server sending J-th video data to the Q-th terminal according to the target position point, wherein the J-th video data is video data shot by a J-th terminal at the target position point, the J-th terminal is one of the N shooting terminals, and 1≤J≤N.
  2. The method according to claim 1, characterized in that, when the target position point corresponds to video data shot by multiple shooting terminals, after the server acquires the target position point from the Q-th terminal, the method further comprises:
    the server sending identification information of R shooting terminals to the Q-th terminal, wherein the R shooting terminals are the terminals among the N shooting terminals that shot video data at the target position point, 1≤R≤N, and the identification information marks the corresponding shooting terminal;
    the server acquiring the identification information of the J-th terminal from the Q-th terminal, the J-th terminal being one of the R shooting terminals;
    the server sending to the Q-th terminal the J-th video data shot by the J-th terminal at the target position point.
  3. The method according to claim 2, characterized in that the method comprises:
    the server judging, according to a first preset rule, that at least one of the R shooting terminals is a popular terminal, the popular terminal and the J-th terminal being different terminals;
    after the server acquires the target position point from the Q-th terminal, the method further comprises:
    the server stitching the J-th video data and popular video data into recommended video data, the popular video data being video data shot by the popular terminal;
    the server sending the J-th video data to the Q-th terminal according to the target position point comprising:
    the server sending the recommended video data to the Q-th terminal.
  4. The method according to claim 3, characterized in that, before the server acquires the video data and position points from the N shooting terminals respectively, the method further comprises:
    the server acquiring an environment model, the environment model recording the shooting environment of the N shooting terminals;
    after the server acquires the video data and position points from the N shooting terminals respectively, the method further comprises:
    the server comparing the video data shot by each of the N shooting terminals with the environment model to determine the shooting angles of the N shooting terminals;
    the server sending the shooting angles of the N shooting terminals to the M viewing terminals;
    the server acquiring a target shooting angle from the Q-th terminal, the target shooting angle being one of the shooting angles of the N shooting terminals;
    the server sending the J-th video data to the Q-th terminal according to the target shooting angle, the J-th video data being video data shot by the J-th terminal at the target shooting angle.
  5. The method according to claim 4, characterized in that the method further comprises: when the server determines, according to a second preset rule, that the target position point is a popular position point, performing the following steps:
    the server acquiring P pieces of video data shot by the R shooting terminals at the target position point, P being a positive integer greater than 1;
    the server stitching the P pieces of video data into one piece of panoramic video data according to the shooting angles of the P pieces of video data, the panoramic video data recording the pictures shot in the P pieces of video data;
    when the server acquires the target position point from the Q-th terminal, the server sending the panoramic video data to the Q-th terminal.
  6. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
    the server acquiring time points from the N shooting terminals respectively, the time point recording the time at which the shooting terminal acquired the video data;
    the server sending the time points acquired by each of the N shooting terminals to the M viewing terminals;
    the server acquiring a target time point from the Q-th terminal, the target time point being one of the time points acquired by the N shooting terminals;
    the server sending the J-th video data to the Q-th terminal according to the target time point, the J-th video data being video data acquired by the J-th terminal at the target time point.
  7. The method according to claim 6, characterized in that the method further comprises:
    the server acquiring S pieces of video data from the video data sent by the N shooting terminals, the S pieces of video data being all the video data shot at the target position point within a target time period;
    the server stitching together the features recorded by the S pieces of video data to obtain fused video data, the fused video data recording all the features shot at the target position point within the target time period;
    after the server acquires the target position point from the Q-th terminal, the method further comprises:
    the server sending the fused video data to the Q-th terminal.
  8. A video acquisition method, characterized by comprising:
    a viewing terminal acquiring at least one position point from a server, the position point being uploaded to the server by N shooting terminals after shooting video data within a target region, N being a positive integer greater than 1;
    the viewing terminal showing a map interface on a display interface, the map interface being the map interface of the target region and including the at least one position point;
    the viewing terminal acquiring a target position point selected by a user from the map interface, the target position point being one of the at least one position point;
    the viewing terminal sending the target position point to the server;
    the viewing terminal acquiring J-th video data from the server, the J-th video data being video data shot by a J-th terminal at the target position point within the target region, the J-th terminal being one of the N shooting terminals, 1≤J≤N.
  9. The method according to claim 8, characterized in that, when the target position point corresponds to video data shot by multiple shooting terminals, the method further comprises: the viewing terminal acquiring identification information of R shooting terminals from the server, the R shooting terminals all being shooting terminals that shot video data at the target position point, 1≤R≤N;
    after the viewing terminal acquires the target position point selected by the user from the map interface, the method further comprises:
    the viewing terminal showing the identification information of the R shooting terminals on the display interface;
    the viewing terminal acquiring the identification information of the J-th terminal selected by the user, the J-th terminal being one of the R shooting terminals;
    the viewing terminal sending the identification information of the J-th terminal to the server.
  10. The method according to claim 9, characterized in that, after the viewing terminal acquires the at least one position point from the server, the method further comprises:
    the viewing terminal acquiring the shooting angles of the N shooting terminals from the server;
    after the viewing terminal acquires the target position point selected by the user from the map interface, the method further comprises:
    the viewing terminal showing the shooting angles of the R shooting terminals on the display interface;
    the viewing terminal acquiring a target shooting angle selected by the user, the target shooting angle being one of the shooting angles of the R shooting terminals;
    the viewing terminal sending the target shooting angle to the server, the target shooting angle being used to request that the server send the video data shot by a shooting terminal at the target shooting angle.
  11. The method according to any one of claims 8 to 10, characterized in that, after the viewing terminal acquires the at least one position point from the server, the method further comprises:
    the viewing terminal acquiring at least one time point from the server, the at least one time point being the time points at which the N shooting terminals shot video data;
    after the viewing terminal acquires the target position point selected by the user from the map interface, the method further comprises:
    the viewing terminal showing the at least one time point on the display interface;
    the viewing terminal acquiring a target time point selected by the user, the target time point being one of the at least one time point;
    the viewing terminal sending the target time point to the server, the target time point being used to request that the server send the video shot by a shooting terminal at the target time point.
  12. A server, characterized by comprising:
    an acquiring unit, configured to acquire video data and position points from N shooting terminals respectively, wherein the video data records the video shot by the shooting terminal, the position point records the position of the shooting terminal when it acquired the video data, and N is a positive integer greater than 1;
    a sending unit, configured to send the position points, acquired by the acquiring unit from each of the N shooting terminals, to M viewing terminals, wherein M is a positive integer greater than 1;
    the acquiring unit being further configured to acquire a target position point from a Q-th terminal, wherein the Q-th terminal is one of the M viewing terminals, 1≤Q≤M, and the target position point is one of the position points acquired by the N shooting terminals;
    the sending unit being further configured to send J-th video data to the Q-th terminal according to the target position point acquired by the acquiring unit, wherein the J-th video data is video data shot by a J-th terminal at the target position point, the J-th terminal is the J-th of the N shooting terminals, and 1≤J≤N.
  13. A viewing terminal, characterized by comprising:
    a first acquiring unit, configured to acquire at least one position point from a server, the position point being uploaded to the server by N shooting terminals after shooting video data within a target region, N being a positive integer greater than 1;
    a display unit, configured to show a map interface on a display interface, the map interface being the map interface of the target region and including the at least one position point acquired by the first acquiring unit;
    a second acquiring unit, configured to acquire a target position point selected by a user from the map interface, the target position point being one of the at least one position point shown by the display unit;
    a sending unit, configured to send to the server the target position point acquired by the second acquiring unit;
    the first acquiring unit being further configured to acquire J-th video data from the server, the J-th video data being video data shot by a J-th terminal at the target position point within the target region, the J-th terminal being one of the N shooting terminals, 1≤J≤N.
  14. A server, characterized in that the server comprises: an interaction apparatus, an input/output (I/O) interface, a processor and a memory, the memory storing program instructions;
    the interaction apparatus being configured to acquire operation instructions input by a user;
    the processor being configured to execute the program instructions stored in the memory to perform the method according to any one of claims 1-7.
  15. A terminal device, characterized in that the terminal device comprises: an interaction apparatus, an input/output (I/O) interface, a processor and a memory, the memory storing program instructions;
    the interaction apparatus being configured to acquire operation instructions input by a user;
    the processor being configured to execute the program instructions stored in the memory to perform the method according to any one of claims 8-11.
  16. A computer-readable storage medium comprising instructions, characterized in that, when the instructions are run on a computer device, the computer device is caused to perform the method according to any one of claims 1-7.
  17. A computer-readable storage medium comprising instructions, characterized in that, when the instructions are run on a computer device, the computer device is caused to perform the method according to any one of claims 8-11.