CN111931071A - Video data pushing method and device - Google Patents

Video data pushing method and device

Info

Publication number
CN111931071A
Authority
CN
China
Prior art keywords
video data
request
target
acquisition
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011068602.2A
Other languages
Chinese (zh)
Inventor
倪绪能
徐庆
陈兴文
洪志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yankan Intelligent Technology Co.,Ltd.
Original Assignee
Beijing Overlooking Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Overlooking Technology Co ltd
Priority to CN202011068602.2A
Publication of CN111931071A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/735 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/957 Browsing optimisation, e.g. caching or content distillation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Computational Linguistics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a video data pushing method and device. The method includes: receiving a browsing request from a user side, wherein the browsing request includes request target information and request azimuth information; determining a corresponding acquisition device group according to the request target information; acquiring at least one piece of local video data captured by the acquisition device group; determining global video data from the at least one piece of local video data; selecting a corresponding target range from the global video data according to the request azimuth information; determining target video data according to the target range; and pushing the target video data to the user side. The invention enables the user side to autonomously select the content it browses, breaks the constraints imposed by the traditional video browsing mode, and enhances the user side's sense of immersion when browsing videos.

Description

Video data pushing method and device
Technical Field
The invention relates to the technical field of computers, in particular to a video data pushing method and device.
Background
With the development of mobile internet technology in recent years, streaming-media technologies such as live video and short video have matured and become popular forms of public entertainment. Taking live video as an example: during a live broadcast, the acquisition end captures video data in real time and pushes it over a network, while the user side obtains the video data over the network and plays it, so that the live broadcast can be watched.
In most cases, the content played and watched at the user side is essentially the same as the video data pushed by the acquisition end. Because the content pushed by the acquisition end is single and fixed, this live broadcast mode places certain restrictions on the user's viewing experience.
Disclosure of Invention
The invention provides a video data pushing method and a video data pushing device, which are used for at least solving the technical problems in the prior art.
In a first aspect, the present invention provides a video data pushing method, including:
receiving a browsing request of a user side, wherein the browsing request comprises request target information and request azimuth information;
determining a corresponding acquisition equipment group according to the request target information; acquiring at least one local video data acquired by the acquisition equipment group;
determining global video data according to the at least one local video data;
selecting a corresponding target range from the global video data according to the request azimuth information; determining the target video data according to the target range;
and pushing the target video data to the user side.
Preferably, the determining, according to the request target information, a corresponding acquisition device group includes:
determining a corresponding browsing target according to the request target information;
and determining the acquisition equipment group for performing video acquisition on the browsing target as a corresponding acquisition equipment group.
Preferably, the acquisition device group comprises at least one acquisition device, and each acquisition device has corresponding acquisition azimuth information; the acquiring at least one local video data acquired by the acquisition device group includes:
and acquiring at least one piece of local video data acquired by each acquisition device, wherein each piece of local video data comprises corresponding acquisition azimuth information.
Preferably, the acquiring at least one local video data acquired by each of the acquiring devices includes:
acquiring synchronous local video data captured by the acquisition devices in the same time period;
or acquiring asynchronous local video data captured by the acquisition devices in different time periods.
Preferably, the determining global video data according to the at least one local video data comprises:
and splicing the local video data according to the acquisition azimuth information corresponding to the local video data to obtain the global video data.
Preferably, the acquiring the local video data acquired by the acquisition device group includes:
responding to the browsing request to start the acquisition equipment group and acquire the obtained local video data;
or responding to the browsing request to acquire the local video data acquired in advance.
Preferably, the method further comprises the following steps:
determining acquisition azimuth information matched with the request azimuth information;
determining the local video data corresponding to the matched acquisition azimuth information as fixed-point video data;
and pushing the fixed point video data to the user side.
In a second aspect, the present invention provides a video data pushing apparatus, including:
the request receiving module is used for receiving a browsing request of a user side, wherein the browsing request comprises request target information and request azimuth information;
the local video data acquisition module is used for determining a corresponding acquisition equipment group according to the request target information; acquiring at least one local video data acquired by the acquisition equipment group;
the global video data determining module is used for determining global video data according to the at least one local video data;
the target video data determining module is used for selecting a corresponding target range from the global video data according to the request azimuth information; determining the target video data according to the target range;
and the pushing module is used for pushing the target video data to the user side.
In a third aspect, the present invention provides a computer-readable storage medium storing a computer program for executing the video data pushing method according to the present invention.
In a fourth aspect, the present invention provides an electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is used for reading the executable instruction from the memory and executing the instruction to realize the video data pushing method.
Compared with the prior art, in the video data pushing method and device provided by the invention, a plurality of pieces of local video data are obtained from the acquisition device group, target video data are determined from the local video data according to the browsing request, and the target video data are pushed to the user side for viewing, so that the user side can actively and autonomously select the content it browses. Because the acquisition device group is formed by a plurality of acquisition devices, the device, the shooting place and the shooting angle can be switched dynamically according to the user's needs, truly providing a user-centered experience. The acquisition devices can also be moved dynamically, so that video data can be captured at any place and from any viewing angle, allowing the user side to browse richer content. Video images can be captured simultaneously by multiple acquisition devices as required, global video data are stitched and synthesized based on a cloud algorithm, and the required target video data are selected from the global video data, bringing a genuinely immersive feeling. The constraints imposed by the traditional video browsing mode are thus broken, and the user side's sense of immersion when browsing videos is enhanced.
Drawings
Fig. 1 is a flowchart illustrating a video data pushing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating a relationship between a collection device group and a browsing target in a video data pushing method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of global video data in another video data pushing method according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating another video data pushing method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a video data pushing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As described in the background above, during a live video broadcast the acquisition end captures video data in real time and pushes it over a network, while the user side obtains the video data over the network and plays it, so that the live broadcast can be watched. That is, between the acquisition end and the user side there is usually a "one-to-many" live mode. In most cases, the content played and watched at the user side is essentially the same as the video data pushed by the acquisition end; in other words, whatever the "anchor" shoots in the live broadcast is exactly what the "user" watching it sees.
In some scenarios, a user may wish to obtain a "virtual travel" experience by watching a live broadcast; current VR devices in particular make such "virtual travel" even more feasible. In the traditional live broadcast process, although the user side plays the live picture and the user can follow the anchor's shooting to browse the scenery of a particular environment, the "user" in this mode has essentially no initiative: the "user" can only see whatever content the "anchor" chooses to shoot and cannot autonomously select the content he or she wants to browse.
Therefore, in the prior art, the content of the video data pushed by the acquisition end is fixed and single, so this live broadcast mode places certain restrictions on the user's viewing experience, and the so-called "virtual travel" cannot achieve an immersive effect comparable to "real travel".
Therefore, embodiments of the present invention provide a video data pushing method to solve at least the above technical problems in the prior art. As shown in fig. 1, the method comprises the following steps:
step 101, receiving a browsing request of a user side, wherein the browsing request comprises request target information and request direction information.
When the user side needs to browse specific content, it can send a corresponding browsing request for the server side to receive. The content the user needs to browse may be live video content or video content recorded or edited in advance, which is not limited in this embodiment.
For example, in the "virtual travel" scenario above, the content the user needs to browse may be a specific browsing target, such as scenic spot A. The browsing request should therefore include the request target information, which represents the browsing target and thus determines the content the user needs to browse.
In this embodiment, since the user side is intended to have a certain degree of active choice during browsing, the browsing request should also include the request azimuth information. It can be understood that a browsing target usually covers a certain range, and a user can browse it from different positions and angles, thereby experiencing different specific content and obtaining an immersive experience. The request azimuth information represents the position and angle from which the user chooses to browse the browsing target, i.e. which part of the browsing target the user wishes to browse.
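For illustration only, a minimal sketch of how such a browsing request might be represented on the server side is given below (in Python); it is not part of the claimed method, and the field names user_id, target_id, azimuth_deg and fov_deg are assumptions introduced for the example.

```python
from dataclasses import dataclass

@dataclass
class BrowseRequest:
    """Browsing request from the user side (field names are illustrative only)."""
    user_id: str           # identifies the requesting user side
    target_id: str         # request target information, e.g. "scenic_spot_A"
    azimuth_deg: float     # request azimuth information: desired viewing direction, degrees in [0, 360)
    fov_deg: float = 90.0  # assumed angular width of the range the user wishes to browse

# Example: the user side asks to browse scenic spot A from the 45-degree direction.
request = BrowseRequest(user_id="u-001", target_id="scenic_spot_A", azimuth_deg=45.0)
```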
Step 102, determining a corresponding acquisition equipment group according to the request target information; and acquiring at least one local video data acquired by the acquisition equipment group.
According to the request target information in the browsing request, the corresponding browsing target, namely scenic spot A in this embodiment, can be determined. The acquisition device group performing video acquisition on that browsing target can then be determined as the corresponding acquisition device group. That is, the request target information identifies the set of acquisition devices that have been deployed, or happen to be present, within the range of scenic spot A and that are capturing video of it.
It should be noted that the acquisition device group includes at least one acquisition device. The acquisition device may specifically be a mobile terminal with a camera function (such as a mobile phone), or a camera, video camera or other device that is connected to the network and capable of pushing a video stream; this is not limited in this embodiment. Generally, within the range of a browsing target, video can be captured by a plurality of acquisition devices, which together form the acquisition device group.
Each acquisition device has corresponding acquisition azimuth information. The acquisition azimuth information is equivalent to the pose of the acquisition device, namely the position and angle from which it captures video. In other words, each acquisition device captures the browsing target from a different position and angle, producing corresponding local video data. The local video data presents the content of a particular local part of the browsing target, and each piece of local video data also carries the acquisition azimuth information of the corresponding acquisition device.
In this embodiment, a plurality of acquisition devices form the acquisition device group, so that devices, shooting places and shooting angles can be switched dynamically according to the user's needs, truly providing a user-centered experience.
In terms of timing, the server side can acquire synchronous local video data captured by the acquisition devices in the same time period, or asynchronous local video data captured by the acquisition devices in different time periods. Acquiring synchronous local video data means that the pieces of local video data captured by the acquisition devices at the same moment are acquired at one time node; acquiring asynchronous local video data means that the pieces of local video data can be captured at different times and pushed to the user side at any time for playing and browsing.
In this embodiment, the acquisition device group may be switched off before the local video data are captured. When the server responds to the browsing request, it can start the acquisition device group, so that the group captures the local video data under the trigger of the browsing request. Alternatively, the acquisition device group may already be running and continuously capturing local video data, in which case the server, when responding to the browsing request, can immediately obtain the local video data captured in advance. Both modes can be combined within the overall technical scheme of this embodiment and selected as required.
It should further be noted that, while an acquisition device is capturing video, its acquisition azimuth information may remain fixed (fixed shooting) or may change (moving shooting). Different video acquisition modes can be combined within the overall technical scheme of this embodiment. That is, in some cases the acquisition device can be moved dynamically, so that video data can be captured at any place and from any viewing angle, allowing the user side to browse richer content.
Fig. 2 shows the positional relationship between the acquisition devices in the acquisition device group and the browsing target. As can be seen in Fig. 2, the browsing target (i.e. scenic spot A) can be regarded as a square object. In this embodiment, four acquisition devices capture video from four directions respectively, and the included angle formed by the dashed lines in the figure represents each device's capture range. Each piece of local video data thus presents the content of one face of the browsing target, and integrating the local video data presents the complete content of the browsing target.
In this embodiment, the local video data captured by each acquisition device are obtained in response to the user side's browsing request, and each piece of local video data includes the corresponding acquisition azimuth information.
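As a hedged illustration of step 102 (one possible arrangement, not the claimed implementation), the sketch below looks up a hypothetical device registry and gathers the local clips together with their acquisition azimuth information; the DEVICE_GROUPS registry, the LocalClip structure and the fetch_clip helper are all assumptions introduced for the example.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class LocalClip:
    """One piece of local video data plus the acquisition azimuth of its device."""
    device_id: str
    azimuth_deg: float   # acquisition azimuth information: direction the device shoots from
    frames: bytes        # encoded video payload (placeholder)

# Hypothetical registry mapping a browsing target to the devices that cover it.
DEVICE_GROUPS: Dict[str, List[str]] = {
    "scenic_spot_A": ["cam-north", "cam-east", "cam-south", "cam-west"],
}

# Hypothetical acquisition azimuths of the four devices around the square target of Fig. 2.
DEVICE_AZIMUTHS: Dict[str, float] = {
    "cam-north": 0.0, "cam-east": 90.0, "cam-south": 180.0, "cam-west": 270.0,
}

def fetch_clip(device_id: str) -> LocalClip:
    """Placeholder: in practice this would pull a live stream or a pre-recorded clip."""
    return LocalClip(device_id=device_id, azimuth_deg=DEVICE_AZIMUTHS[device_id], frames=b"")

def collect_local_clips(target_id: str) -> List[LocalClip]:
    """Step 102 (sketch): determine the acquisition device group and gather its local video data."""
    return [fetch_clip(device_id) for device_id in DEVICE_GROUPS.get(target_id, [])]
```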
Step 103, determining global video data according to at least one local video data.
In the situation shown in Fig. 2, each piece of local video data presents the content of one face of the browsing target, and integrating the local video data presents the complete content of the browsing target. Therefore, in this embodiment, the pieces of local video data are stitched according to the positional relationships given by their corresponding acquisition azimuth information, yielding the global video data, which integrates the content presented by all the local video data. The stitching can be based on existing video processing and image processing technologies, which are not limited in this embodiment; any technical means achieving the same or a similar effect can be combined into the overall technical solution of this embodiment.
Fig. 3 spatially illustrates the relationship between the browsing target and the global video data. The global video data can appear as a ring-shaped picture, presenting the complete content of the browsing target in a multi-dimensional manner.
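The embodiment does not prescribe a particular stitching algorithm. Purely as a sketch continuing the illustrative LocalClip example above, the global video data can be approximated by ordering the local clips around the ring by acquisition azimuth; a real system would replace this with image-registration-based stitching and blending.

```python
from typing import List

def stitch_global(clips: List[LocalClip]) -> List[LocalClip]:
    """Step 103 (sketch): arrange the local clips around the ring by acquisition azimuth.

    Here the "global video data" is simply the azimuth-ordered sequence of clips,
    each treated as covering an equal slice of the 360-degree ring.
    """
    return sorted(clips, key=lambda clip: clip.azimuth_deg % 360.0)
```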
Step 104, selecting a corresponding target range from the global video data according to the request azimuth information, and determining target video data according to the target range.
As mentioned above, the browsing request also carries request azimuth information, indicating which part of the browsing target the user side wishes to browse. Therefore, in this step, the portion of video data required by the user side is determined from the global video data according to the request azimuth information.
After the global video data are determined, a corresponding target range can be selected from them according to the request azimuth information. That is, it is calculated which portion of the global video data corresponds to the content the user side needs to browse, and that portion is taken as the target range. The picture corresponding to the target range is then cropped out of the global video data and used as the target video data.
In Fig. 3, the two solid straight lines form an angular range, which represents the range corresponding to the request azimuth information. Projecting this range onto the ring-shaped picture of the global video data yields the target range, and cropping that range out of the global video data gives the target video data.
In this embodiment, video images can be captured simultaneously by multiple acquisition devices as required, global video data are stitched and synthesized based on a cloud algorithm, and the required target video data are selected from the global video data, bringing a genuinely immersive feeling.
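Continuing the same illustrative sketch, selecting the target range could be approximated by an angular-overlap test against the request azimuth; this is a simplified, assumed stand-in for the projection onto the ring-shaped picture described for Fig. 3.

```python
from typing import List

def select_target(clips: List[LocalClip], azimuth_deg: float, fov_deg: float = 90.0) -> List[LocalClip]:
    """Step 104 (sketch): keep the slices of the ring panorama covered by the request azimuth."""
    ordered = stitch_global(clips)
    slice_width = 360.0 / max(len(ordered), 1)  # angular width treated as covered by each clip
    half_fov = fov_deg / 2.0

    def overlaps(clip_azimuth: float) -> bool:
        # smallest angular distance between the clip centre and the requested direction
        diff = abs((clip_azimuth - azimuth_deg + 180.0) % 360.0 - 180.0)
        return diff <= half_fov + slice_width / 2.0

    return [clip for clip in ordered if overlaps(clip.azimuth_deg)]

# Example: a 90-degree-wide request at azimuth 45 keeps the north- and east-facing slices.
target_video = select_target(collect_local_clips("scenic_spot_A"), azimuth_deg=45.0)
```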
And 105, pushing the target video data to a user side.
After the target video data are obtained, the server side can push them to the user side over the network, so that the user side can play and browse the corresponding video content. This content corresponds to the specific part of the browsing target selected by the user side. This embodiment therefore pushes user-selected video content to the user side, giving the user side an active choice while browsing videos.
According to the above technical solution, the beneficial effects of this embodiment are as follows: a plurality of pieces of local video data are obtained from the acquisition device group, target video data are determined from the local video data according to the browsing request, and the target video data are pushed to the user side for viewing, so that the user side can actively and autonomously select the content it browses; because the acquisition device group is formed by a plurality of acquisition devices, the device, the shooting place and the shooting angle can be switched dynamically according to the user's needs, truly providing a user-centered experience; the acquisition devices can be moved dynamically, so that video data can be captured at any place and from any viewing angle, allowing the user side to browse richer content; video images can be captured simultaneously by multiple acquisition devices as required, global video data are stitched and synthesized based on a cloud algorithm, and the required target video data are selected from the global video data, bringing a genuinely immersive feeling; the constraints imposed by the traditional video browsing mode are thus broken, and the user side's sense of immersion when browsing videos is enhanced.
In addition, on the basis of the embodiment shown in fig. 1, the present invention may also include another video data pushing method, as shown in fig. 4. The method in this embodiment may be combined with the embodiment shown in fig. 1, and the two may be implemented in parallel. The method in this embodiment specifically includes the following steps:
step 401, receiving a browsing request of a user side, where the browsing request includes request target information and request direction information.
Step 402, determining a corresponding acquisition equipment group according to the request target information; and acquiring at least one local video data acquired by the acquisition equipment group.
The above steps 401 to 402 are consistent with the foregoing embodiments, and are not repeated here.
And step 403, determining the acquisition azimuth information matched with the request azimuth information.
And step 404, determining the local video data corresponding to the matched acquisition orientation information as fixed point video data.
In this embodiment, one piece of local video data is selected directly as the fixed-point video data. Specifically, the request azimuth information can be compared with each piece of acquisition azimuth information, and the acquisition azimuth information closest to the request azimuth information is regarded as the match. That is, the position and angle at which the matching device captures video can be considered to correspond to the position and angle from which the user side wishes to browse. The local video data corresponding to the matching acquisition azimuth information can therefore be determined directly as the fixed-point video data.
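A minimal sketch of this nearest-azimuth match, reusing the illustrative LocalClip structure assumed earlier, might look as follows; it is an example of the comparison described above, not the claimed implementation.

```python
from typing import List

def match_fixed_point(clips: List[LocalClip], azimuth_deg: float) -> LocalClip:
    """Steps 403-404 (sketch): pick the clip whose acquisition azimuth is closest to the request azimuth."""
    def angular_distance(clip: LocalClip) -> float:
        return abs((clip.azimuth_deg - azimuth_deg + 180.0) % 360.0 - 180.0)
    return min(clips, key=angular_distance)

# Example: a request at azimuth 100 degrees matches the east-facing device (azimuth 90 degrees).
fixed_point_video = match_fixed_point(collect_local_clips("scenic_spot_A"), azimuth_deg=100.0)
```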
In some cases, if the server supports remote adjustment of an acquisition device's pose, it is preferable to adjust the acquisition azimuth information according to the request azimuth information, that is, to adjust the position and angle from which the acquisition device captures video, so that the adjusted acquisition azimuth information better matches the request azimuth information, further improving the user side's browsing experience.
Step 405, pushing the fixed point video data to the user side.
In this embodiment, after the fixed-point video data are obtained, they can similarly be pushed to the user side for the user side to play and browse.
Compared with the embodiment shown in Fig. 4, the embodiment shown in Fig. 1 further processes the local video data into global video data, which offers the user side more flexible and varied choices but is more demanding in terms of data processing. The two implementations can be selected and switched between as required in practical applications.
Fig. 5 shows an embodiment of a video data pushing apparatus according to the present invention. The apparatus of this embodiment is a physical apparatus for performing the methods described in Figs. 1 to 4. Its technical solution is essentially the same as that of the above embodiments, and the corresponding descriptions above are also applicable to this embodiment. The apparatus in this embodiment comprises:
the request receiving module 501 is configured to receive a browsing request from a user side, where the browsing request includes request target information and request direction information.
A local video data obtaining module 502, configured to determine a corresponding acquisition device group according to the request target information; and acquiring at least one local video data acquired by the acquisition equipment group.
A global video data determining module 503, configured to determine global video data according to the at least one local video data.
A target video data determination module 504, configured to select a corresponding target range from the global video data according to the request direction information; and determines target video data according to the target range.
The pushing module 505 is configured to push the target video data to the user side.
In addition, on the basis of the embodiment shown in fig. 5, it is preferable that:
the collection device group includes at least one collection device, each collection device corresponds to the collection position information respectively, and the local video data acquisition module 502 includes:
and the browsing target determining unit is used for determining a corresponding browsing target according to the request target information.
And the acquisition equipment group determining unit is used for determining the acquisition equipment group for performing video acquisition on the browsing target as a corresponding acquisition equipment group.
The local video data acquisition unit is used for acquiring at least one local video data acquired by each acquisition device, and each local video data comprises corresponding acquisition azimuth information.
The local video data acquisition unit can acquire synchronous local video data captured by the acquisition devices in the same time period, or asynchronous local video data captured by the acquisition devices in different time periods.
The global video data determination module 503 includes:
and the splicing unit is used for splicing the local video data according to the acquisition azimuth information corresponding to the local video data.
And the global video data determining unit is used for determining the spliced local video data as global video data.
The device further comprises:
the fixed point video data determining module is used for determining the acquisition azimuth information matched with the request azimuth information; and determining the local video data corresponding to the matched acquisition azimuth information as fixed-point video data.
The pushing module 505 is further configured to push the fixed-point video data to the user side.
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the methods according to the various embodiments of the present application described in the "exemplary methods" section of this specification, above.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform steps in a method according to various embodiments of the present application described in the "exemplary methods" section above of this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses and systems may be connected, arranged and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The words "or" and "and" as used herein mean, and are used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. A video data push method, comprising:
receiving a browsing request of a user side, wherein the browsing request comprises request target information and request azimuth information;
determining a corresponding acquisition equipment group according to the request target information; acquiring at least one local video data acquired by the acquisition equipment group;
determining global video data according to the at least one local video data;
selecting a corresponding target range from the global video data according to the request azimuth information; determining the target video data according to the target range;
and pushing the target video data to the user side.
2. The method of claim 1, wherein determining the corresponding set of acquisition devices based on the requested target information comprises:
determining a corresponding browsing target according to the request target information;
and determining the acquisition equipment group for performing video acquisition on the browsing target as a corresponding acquisition equipment group.
3. The method according to claim 1, wherein the acquisition device group comprises at least one acquisition device, and each acquisition device has corresponding acquisition azimuth information; the acquiring at least one local video data acquired by the acquisition device group includes:
and acquiring at least one piece of local video data acquired by each acquisition device, wherein each piece of local video data comprises corresponding acquisition azimuth information.
4. The method of claim 3, wherein the obtaining at least one local video data collected by each of the collection devices comprises:
acquiring synchronous local video data captured by the acquisition devices in the same time period;
or acquiring asynchronous local video data captured by the acquisition devices in different time periods.
5. The method of claim 3, wherein the determining global video data according to the at least one local video data comprises:
and splicing the local video data according to the acquisition azimuth information corresponding to the local video data to obtain the global video data.
6. The method according to any one of claims 1 to 5, wherein the acquiring the local video data acquired by the acquisition equipment group comprises:
responding to the browsing request to start the acquisition equipment group and acquire the obtained local video data;
or responding to the browsing request to acquire the local video data acquired in advance.
7. The method according to any one of claims 3 to 5, further comprising:
determining acquisition azimuth information matched with the request azimuth information;
determining the local video data corresponding to the matched acquisition azimuth information as fixed-point video data;
and pushing the fixed point video data to the user side.
8. A video data pushing apparatus, comprising:
the request receiving module is used for receiving a browsing request of a user side, wherein the browsing request comprises request target information and request azimuth information;
the local video data acquisition module is used for determining a corresponding acquisition equipment group according to the request target information; acquiring at least one local video data acquired by the acquisition equipment group;
the global video data determining module is used for determining global video data according to the at least one local video data;
the target video data determining module is used for selecting a corresponding target range from the global video data according to the request azimuth information; determining the target video data according to the target range;
and the pushing module is used for pushing the target video data to the user side.
9. A computer-readable storage medium storing a computer program for executing the video data push method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the video data pushing method according to any one of claims 1 to 7.
CN202011068602.2A 2020-10-09 2020-10-09 Video data pushing method and device Pending CN111931071A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011068602.2A CN111931071A (en) 2020-10-09 2020-10-09 Video data pushing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011068602.2A CN111931071A (en) 2020-10-09 2020-10-09 Video data pushing method and device

Publications (1)

Publication Number Publication Date
CN111931071A true 2020-11-13

Family

ID=73333693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011068602.2A Pending CN111931071A (en) 2020-10-09 2020-10-09 Video data pushing method and device

Country Status (1)

Country Link
CN (1) CN111931071A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1856096A (en) * 2001-03-15 2006-11-01 康斯坦丁迪斯·阿波斯托洛斯 System for multiple viewpoint video signal recording and reproduction
CN101291428A (en) * 2008-05-30 2008-10-22 上海天卫通信科技有限公司 Panoramic video monitoring system and method with perspective automatically configured
CN107623658A (en) * 2016-07-14 2018-01-23 幸福在线(北京)网络技术有限公司 A kind of method, apparatus and system for realizing that driving virtual reality is live
CN107197318A (en) * 2017-06-19 2017-09-22 深圳市望尘科技有限公司 A kind of real-time, freedom viewpoint live broadcasting method shot based on multi-cam light field
US20190222748A1 (en) * 2018-01-18 2019-07-18 Google Llc Multi-camera navigation interface

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210927

Address after: 100000 1706, floor 7, Section A, No. 203, zone 2, Lize Zhongyuan, Wangjing, Chaoyang District, Beijing

Applicant after: Beijing Yankan Intelligent Technology Co.,Ltd.

Address before: 1813, 8th floor, Section A, No.203, zone 2, Lize Zhongyuan, Wangjing, Chaoyang District, Beijing

Applicant before: Beijing overlooking Technology Co.,Ltd.

TA01 Transfer of patent application right
RJ01 Rejection of invention patent application after publication

Application publication date: 20201113

RJ01 Rejection of invention patent application after publication