CN109309787B - Operation method and system of panoramic video data - Google Patents


Publication number
CN109309787B
CN109309787B (application CN201811046362.9A)
Authority
CN
China
Prior art keywords
video data
view
panoramic video
instruction
visual field
Prior art date
Legal status
Active
Application number
CN201811046362.9A
Other languages
Chinese (zh)
Other versions
CN109309787A (en)
Inventor
庾少华
亓娜
王艳辉
刘坤
Current Assignee
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd filed Critical Visionvera Information Technology Co Ltd
Priority to CN201811046362.9A
Publication of CN109309787A
Application granted
Publication of CN109309787B
Legal status: Active

Classifications

    • H04N 23/698: Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N 23/80: Camera processing pipelines; components thereof
    • H04N 7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Abstract

The embodiment of the invention provides an operation method and system for panoramic video data. In the method, a graphics workstation receives multi-channel source video data collected by a plurality of monitoring cameras and sent on by a streaming media server, synthesizes the multi-channel source video data into panoramic video data, and publishes the panoramic video data at a video networking terminal. A scheduling server detects, on a page, a visual field adjusting operation aimed at the panoramic video data, generates a visual field adjusting instruction for adjusting the presenting visual field of the panoramic video data according to that operation, and sends the instruction to the graphics workstation; the graphics workstation then adjusts the presenting visual field of the panoramic video data published at the video networking terminal accordingly. Because the user adjusts the presenting visual field of the required panoramic video data simply by executing the visual field adjusting operation, the interaction matches the user's habits of thought and lets the viewing angle be selected quickly.

Description

Operation method and system of panoramic video data
Technical Field
The invention relates to the technical field of video networking, in particular to an operation method and system of panoramic video data.
Background
The video networking is a real-time network that can achieve real-time transmission of high-definition video across the whole network, which the present Internet cannot, pushing numerous Internet applications toward high-definition video and putting users face to face in high definition.
A plurality of monitoring cameras can be connected to the video network to collect multi-channel video data and provide monitoring service for users.
At present, multi-channel video data is displayed to the user in multiple windows. In situations such as tracking a suspicious person or browsing a site, the user needs to watch the video data from different angles; the user must then work out which channel corresponds to which angle and select the channel at the required angle for viewing, which makes the operation cumbersome.
Disclosure of Invention
In view of the above, embodiments of the present invention are proposed to provide an operation method and system for panoramic video data that overcome, or at least partially solve, the problems described above.
According to an aspect of the present invention, there is provided an operating method of panoramic video data, in which a graphics workstation, a streaming server and a video network terminal are deployed in a video network, the streaming server is connected to a plurality of monitoring cameras installed in a same site, and a scheduling server is deployed in an IP network, the method including:
the graphics workstation receives multi-channel source video data which are sent by the streaming media server and collected by the plurality of monitoring cameras;
the graphics workstation synthesizes the multi-path source video data into panoramic video data;
the graphics workstation issues the panoramic video data at the video networking terminal;
the scheduling server detects a visual field adjusting operation aiming at the panoramic video data on a page;
the scheduling server generates a visual field adjusting instruction for adjusting the presenting visual field of the panoramic video data according to the visual field adjusting operation and sends the visual field adjusting instruction to the graphics workstation;
and the graphics workstation adjusts the presenting visual field of the panoramic video data issued at the video networking terminal according to the visual field adjusting instruction.
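The instruction passed from the scheduling server to the graphics workstation can be pictured as a small message. The following sketch is illustrative only; the JSON encoding and the field names (`type`, `kind`, `params`) are assumptions, not part of the patent:

```python
import json

def make_view_instruction(kind, **params):
    """Build a view adjustment instruction as the scheduling server might.

    kind is one of "move", "rotate", "zoom"; params carry the mapped
    values (direction/distance, direction/angle, or scale).
    """
    if kind not in ("move", "rotate", "zoom"):
        raise ValueError("unknown view adjustment kind: %s" % kind)
    return json.dumps({"type": "view_adjust", "kind": kind, "params": params})

def parse_view_instruction(raw):
    """Decode an instruction on the graphics workstation side."""
    msg = json.loads(raw)
    return msg["kind"], msg["params"]
```

For example, a leftward move of 40 units would round-trip as `make_view_instruction("move", direction="left", distance=40)`.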
Optionally, the page has a popup window, and the view adjustment operation includes a mouse operation;
the scheduling server detects a field of view adjustment operation for the panoramic video data on a page, including:
detecting mouse operations on the panoramic video data on a popup of the page.
Optionally, the generating, by the scheduling server, a view adjustment instruction for adjusting a presentation view of the panoramic video data according to the view adjustment operation includes:
identifying the operation type of the mouse operation;
and generating a visual field adjusting instruction for adjusting the presenting visual field of the panoramic video data according to the operation type.
Optionally, the view adjusting instruction comprises at least one of a view moving instruction, a view rotating instruction, and a view zooming instruction;
the generating of the view adjusting instruction for adjusting the presentation view of the panoramic video data according to the operation type includes:
if the operation type is a first mouse operation, generating a view moving instruction for moving a presenting view of the panoramic video data according to the first mouse operation;
if the operation type is a second mouse operation, generating a view rotating instruction for rotating the presenting view of the panoramic video data according to the second mouse operation;
and if the operation type is a third mouse operation, generating a view field zooming instruction for zooming the presentation view field of the panoramic video data according to the third mouse operation.
Optionally, the first mouse operation includes pressing a left mouse button and dragging a mouse;
generating a view moving instruction for moving a presentation view of the panoramic video data according to the first mouse operation, including:
calculating a first dragging direction and a first dragging distance dragged by the mouse;
mapping the first dragging direction to a moving direction;
mapping the first dragging distance to a moving distance;
generating a view moving instruction for moving a presentation view of the panoramic video data according to the moving direction and the moving distance;
the second mouse operation comprises pressing a right mouse button and dragging a mouse;
the generating of the view rotation instruction for rotating the presentation view of the panoramic video data according to the second mouse operation includes:
calculating a second dragging direction and a second dragging distance dragged by the mouse;
mapping the second dragging direction to a rotating direction;
mapping the second dragging distance into a rotation angle;
generating a view rotating instruction for rotating the presenting view of the panoramic video data according to the rotating direction and the rotating angle;
the third mouse operation comprises mouse wheel sliding;
the generating of the view field zooming instruction for zooming the presentation view field of the panoramic video data according to the third mouse operation includes:
calculating the sliding direction and the sliding angle of the mouse roller;
mapping the sliding direction and the sliding angle to a reduction scale or an enlargement scale;
generating a field of view scaling instruction to scale a rendered field of view of the panoramic video data at the reduced scale or the enlarged scale.
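The three mappings above (left-drag to movement, right-drag to rotation, wheel slide to a scale factor) can be sketched as follows. This is a minimal illustration; the linear constants `PIXELS_PER_DEGREE` and `ZOOM_PER_NOTCH` are assumptions for the example, not values specified by the patent:

```python
import math

def drag_vector(start, end):
    """Direction (unit vector) and distance, in pixels, of a mouse drag."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0), 0.0
    return (dx / dist, dy / dist), dist

# Illustrative linear mappings (assumed, not from the patent).
PIXELS_PER_DEGREE = 8.0   # right-button drag: 8 px of drag = 1 degree
ZOOM_PER_NOTCH = 1.1      # wheel: one notch scales the view by 10%

def map_left_drag(start, end):
    """First mouse operation: left-button drag -> view moving instruction."""
    direction, dist = drag_vector(start, end)
    return {"kind": "move", "direction": direction, "distance": dist}

def map_right_drag(start, end):
    """Second mouse operation: right-button drag -> view rotating instruction."""
    direction, dist = drag_vector(start, end)
    return {"kind": "rotate", "direction": direction,
            "angle": dist / PIXELS_PER_DEGREE}

def map_wheel(notches):
    """Third mouse operation: wheel slide -> view zooming instruction."""
    scale = ZOOM_PER_NOTCH ** notches   # negative notches give a reduction scale
    return {"kind": "zoom", "scale": scale}
```

A wheel slide toward the user (negative notches) thus maps to a reduction scale below 1, and away from the user to an enlargement scale above 1.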
Optionally, the adjusting, by the graphics workstation, the presentation visual field of the panoramic video data issued at the video networking terminal according to the visual field adjusting instruction includes:
if the view adjusting instruction is a view moving instruction, moving the presenting view of the panoramic video data issued at the video networking terminal;
if the view adjusting instruction is a view rotating instruction, rotating the presenting view of the panoramic video data issued at the video networking terminal;
and if the view field adjusting instruction is a view field zooming instruction, zooming the presentation view field of the panoramic video data issued at the video network terminal.
Optionally, if the view field adjustment instruction is a view field movement instruction, moving a presentation view field of the panoramic video data issued by the video networking terminal includes:
reading a moving direction and a moving distance from the visual field moving instruction;
moving the presenting visual field of the panoramic video data published on the video network terminal according to the moving direction and the moving distance;
if the view adjusting instruction is a view rotating instruction, rotating the presenting view of the panoramic video data issued by the video networking terminal, including:
reading a rotation direction and a rotation angle from the visual field rotation instruction;
rotating the presenting visual field of the panoramic video data issued at the video network terminal according to the rotating direction and the rotating angle;
if the view field adjusting instruction is a view field zooming instruction, zooming the presentation view field of the panoramic video data issued at the video networking terminal, including:
reading a zoom-out scale or a zoom-in scale from the view scaling instruction;
and reducing or amplifying the presenting visual field of the panoramic video data published on the video network terminal according to the reduction scale or the amplification scale.
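On the workstation side, the steps above amount to reading the parameters out of each instruction and applying them to the presented view. A minimal sketch follows; the `view` state dict is a stand-in for the real panorama renderer and is purely illustrative:

```python
def apply_view_instruction(view, instr):
    """Apply a view adjustment instruction to a toy presented-view state.

    `view` is a dict {"x", "y", "yaw", "zoom"}; a real graphics
    workstation would drive its panorama renderer instead.
    """
    kind = instr["kind"]
    if kind == "move":
        # Read the moving direction and distance, then move the view.
        dx, dy = instr["direction"]
        view["x"] += dx * instr["distance"]
        view["y"] += dy * instr["distance"]
    elif kind == "rotate":
        # Read the rotation direction and angle, then rotate the view.
        sign = 1 if instr["direction"][0] >= 0 else -1
        view["yaw"] = (view["yaw"] + sign * instr["angle"]) % 360
    elif kind == "zoom":
        # Read the reduction or enlargement scale, then scale the view.
        view["zoom"] *= instr["scale"]
    else:
        raise ValueError("unknown instruction kind: %s" % kind)
    return view
```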
According to another aspect of the present invention, there is provided an operating system for panoramic video data, in which a graphics workstation, a streaming media server and a video network terminal are deployed in a video network, the streaming media server is connected to a plurality of monitoring cameras installed in the same location, and a scheduling server is deployed in an IP network;
the graphic workstation comprises a source video data receiving module, a panoramic video data synthesizing module, a panoramic video data publishing module and a presentation visual field adjusting module; the scheduling server comprises a visual field adjusting operation detection module and a visual field adjusting instruction generation module;
the source video data receiving module is used for receiving multi-channel source video data which are sent by the streaming media server and collected by the plurality of monitoring cameras;
the panoramic video data synthesis module is used for synthesizing the multi-channel source video data into panoramic video data;
the panoramic video data publishing module is used for publishing the panoramic video data at the video networking terminal;
the visual field adjusting operation detection module is used for detecting visual field adjusting operation aiming at the panoramic video data on a page;
the visual field adjusting instruction generating module is used for generating a visual field adjusting instruction for adjusting the presenting visual field of the panoramic video data according to the visual field adjusting operation and sending the visual field adjusting instruction to the graphic workstation;
and the presentation visual field adjusting module is used for adjusting the presentation visual field of the panoramic video data issued at the video network terminal according to the visual field adjusting instruction.
Optionally, the page has a popup window, and the view adjustment operation includes a mouse operation;
the visual field adjustment operation detection module includes:
and the pop window detection submodule is used for detecting mouse operation aiming at the panoramic video data on the pop window of the page.
Optionally, the visual field adjustment instruction generation module includes:
the operation type identification submodule is used for identifying the operation type of the mouse operation;
and the type generation submodule is used for generating a visual field adjusting instruction for adjusting the presenting visual field of the panoramic video data according to the operation type.
Optionally, the view adjusting instruction comprises at least one of a view moving instruction, a view rotating instruction, and a view zooming instruction;
the type generation submodule comprises:
a field of view movement instruction generating unit, configured to generate a field of view movement instruction for moving a presentation field of view of the panoramic video data according to the first mouse operation if the operation type is the first mouse operation;
a field rotation instruction generating unit, configured to generate a field rotation instruction for rotating a presentation field of the panoramic video data according to the second mouse operation if the operation type is a second mouse operation;
and the visual field zooming instruction generating unit is used for generating a visual field zooming instruction for zooming the presenting visual field of the panoramic video data according to the third mouse operation if the operation type is the third mouse operation.
Optionally, the first mouse operation includes pressing a left mouse button and dragging a mouse;
the visual field movement instruction generation unit includes:
the first dragging parameter calculating subunit is used for calculating a first dragging direction and a first dragging distance dragged by the mouse;
a moving direction mapping subunit, configured to map the first dragging direction as a moving direction;
a moving distance mapping subunit, configured to map the first dragging distance as a moving distance;
a first generation subunit configured to generate a view field movement instruction to move a presentation view field of the panoramic video data in the movement direction and the movement distance;
the second mouse operation comprises pressing a right mouse button and dragging a mouse;
the visual field rotation instruction generation unit includes:
the second dragging parameter calculation subunit is used for calculating a second dragging direction and a second dragging distance dragged by the mouse;
a rotation direction mapping subunit, configured to map the second dragging direction as a rotation direction;
a rotation distance mapping subunit, configured to map the second dragging distance into a rotation angle;
a second generation subunit, configured to generate a view rotation instruction for rotating a presentation view of the panoramic video data according to the rotation direction and the rotation angle;
the third mouse operation comprises mouse wheel sliding;
the field of view scaling instruction generation unit includes:
the sliding parameter calculating subunit is used for calculating the sliding direction and the sliding angle of the mouse roller;
a scaling parameter mapping subunit, configured to map the sliding direction and the sliding angle into a reduction scale or an enlargement scale;
a third generating subunit configured to generate a view field scaling instruction to scale a presentation view field of the panoramic video data in accordance with the reduction scale or the enlargement scale.
Optionally, the presentation horizon adjusting module comprises:
the view-presenting moving submodule is used for moving the view presenting of the panoramic video data issued by the video networking terminal if the view adjusting instruction is a view moving instruction;
the presentation view rotating submodule is used for rotating the presentation view of the panoramic video data issued by the video networking terminal if the view adjusting instruction is a view rotating instruction;
and the presentation view field scaling submodule is used for scaling the presentation view field of the panoramic video data issued at the video network terminal if the view field adjusting instruction is a view field scaling instruction.
Optionally, the presentation field of view moving submodule includes:
the movement parameter reading unit is used for reading the movement direction and the movement distance from the visual field movement instruction;
the parameter control mobile unit is used for moving the presenting visual field of the panoramic video data published by the video network terminal according to the moving direction and the moving distance;
the presentation view rotation sub-module includes:
a rotation parameter reading unit for reading a rotation direction and a rotation angle from the view rotation instruction;
the parameter control rotating unit is used for rotating the presenting visual field of the panoramic video data issued at the video network terminal according to the rotating direction and the rotating angle;
the rendering view scaling submodule includes:
a zoom parameter reading unit for reading a zoom-out scale or a zoom-in scale from the view field zoom instruction;
and the parameter control zooming unit is used for reducing or enlarging the presentation visual field of the panoramic video data published at the video networking terminal according to the reduction scale or the enlargement scale.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, a graphics workstation receives multi-channel source video data collected by a plurality of monitoring cameras and sent by a streaming media server, synthesizes it into panoramic video data, and publishes the panoramic video data at a video networking terminal. A scheduling server detects a visual field adjusting operation aimed at the panoramic video data on a page, generates a visual field adjusting instruction for adjusting the presenting visual field of the panoramic video data according to that operation, and sends the instruction to the graphics workstation. The graphics workstation then adjusts the presenting visual field of the panoramic video data published at the video networking terminal according to the instruction, so the user adjusts the presenting visual field of the required panoramic video data simply by executing the visual field adjusting operation.
Drawings
FIG. 1 is a networking diagram of a video network, according to one embodiment of the invention;
FIG. 2 is a diagram illustrating a hardware architecture of a node server according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a hardware structure of an access switch according to an embodiment of the present invention;
fig. 4 is a schematic hardware structure diagram of an ethernet protocol conversion gateway according to an embodiment of the present invention;
FIG. 5 is a flow chart of the steps of a method of operation of panoramic video data in accordance with one embodiment of the present invention;
FIG. 6 is a diagram illustrating a view adjustment operation on a page according to one embodiment of the present invention;
fig. 7 is a block diagram of an operating system of panoramic video data according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The video networking is an important milestone in network development. It is a real-time network that can achieve real-time transmission of high-definition video, pushing numerous Internet applications toward high-definition video and putting users face to face in high definition.
The video networking adopts real-time high-definition video switching technology and can integrate dozens of required services, such as video, voice, pictures, text, communication and data, on a single network platform: high-definition video conferencing, video monitoring, intelligent monitoring analysis, emergency command, digital broadcast television, delayed television, network teaching, live broadcast, VOD on demand, television mail, Personal Video Recorder (PVR), intranet (self-office) channels, intelligent video broadcast control, information distribution and the like, achieving high-definition-quality video broadcast through a television or a computer.
To better understand the embodiments of the present invention, the video networking is described below:
some of the technologies applied in the video networking are as follows:
network Technology (Network Technology)
Network technology innovation in video networking improves on traditional Ethernet to handle the potentially enormous video traffic on the network. Unlike pure network Packet Switching or network Circuit Switching, video networking technology adopts packet switching to meet streaming requirements. Video networking technology combines the flexibility, simplicity and low cost of packet switching with the quality and security guarantees of circuit switching, achieving seamless whole-network switched virtual circuits and a unified data format.
Switching Technology (Switching Technology)
The video network retains the two advantages of Ethernet, asynchronism and packet switching, and eliminates Ethernet's defects while remaining fully compatible. It provides end-to-end seamless connection across the whole network, communicates directly with user terminals, and directly carries IP data packets. User data requires no format conversion anywhere in the network. Video networking is a higher-level form of Ethernet: a real-time exchange platform that can achieve the whole-network, large-scale, real-time transmission of high-definition video that the existing Internet cannot, pushing numerous network video applications toward high definition and unification.
Server Technology (Server Technology)
The server technology on the video networking and unified video platform differs from the traditional server: its streaming media transmission is built on a connection-oriented basis, its data processing capacity is independent of traffic and communication time, and a single network layer can carry both signaling and data transmission. For voice and video services, streaming media processing on the video networking and unified video platform is much simpler than general data processing, and efficiency is improved more than a hundredfold over a traditional server.
Storage Technology (Storage Technology)
To handle media content of very large capacity and very large traffic, the ultra-high-speed storage technology of the unified video platform adopts an advanced real-time operating system. The program information in a server instruction is mapped to specific hard disk space; the media content no longer passes through the server but is sent directly and instantly to the user terminal, with a typical user waiting time of less than 0.2 second. Optimized sector distribution greatly reduces the mechanical seek motion of the hard disk head: resource consumption is only 20% of that of an IP Internet system of the same grade, yet concurrent throughput is 3 times that of a traditional hard disk array, improving overall efficiency by more than 10 times.
Network Security Technology (Network Security Technology)
The structural design of the video network eliminates, at the structural level, the network security problems that trouble the Internet, through mechanisms such as independent permission control for each service and complete isolation of equipment and user data. It generally needs no antivirus programs or firewalls, avoids attacks by hackers and viruses, and provides users with a structurally worry-free secure network.
Service Innovation Technology (Service Innovation Technology)
The unified video platform integrates services with transmission: whether for a single user, a private network user or a network aggregate, a connection is established automatically just once. The user terminal, set-top box or PC connects directly to the unified video platform to obtain multimedia video services in a variety of forms. The unified video platform adopts a menu-style configuration table instead of traditional complex application programming, so complex applications can be realized with very little code, enabling virtually unlimited new service innovation.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in fig. 1, the video network is divided into an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server here is the same node server as in the access network part; that is, the node server belongs to both the access network part and the metropolitan area network part.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
Video networking device classification
1.1 devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: servers, switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 wherein the devices of the access network part can be mainly classified into 3 types: node servers, access switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 2, the system mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204;
the network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202; the switching engine module 202 performs an operation of looking up the address table 205 on the incoming packet, thereby obtaining the direction information of the packet; and stores the packet in a queue of the corresponding packet buffer 206 based on the packet's steering information; if the queue of the packet buffer 206 is nearly full, it is discarded; the switching engine module 202 polls all packet buffer queues for forwarding if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control over the hard disk, including initialization, read-write, and other operations on the hard disk; the CPU module 203 is mainly responsible for protocol processing with an access switch and a terminal (not shown in the figure), configuring an address table 205 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and configuring the disk array module 204.
The access switch:
as shown in fig. 3, the network interface module mainly includes a network interface module (a downlink network interface module 301 and an uplink network interface module 302), a switching engine module 303 and a CPU module 304;
A packet (uplink data) arriving from the downlink network interface module 301 enters the packet detection module 305. The packet detection module 305 checks whether the Destination Address (DA), Source Address (SA), packet type and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id) and passes the packet to the switching engine module 303, otherwise the packet is discarded. A packet (downlink data) arriving from the uplink network interface module 302 enters the switching engine module 303 directly, as does a data packet from the CPU module 304. The switching engine module 303 looks up the address table 306 for each incoming packet to obtain its direction information. If a packet entering the switching engine module 303 is going from the downlink network interface to the uplink network interface, it is stored in the queue of the corresponding packet buffer 307 in association with its stream-id; if that queue is nearly full, the packet is discarded. If a packet entering the switching engine module 303 is not going from the downlink network interface to the uplink network interface, it is stored in the queue of the corresponding packet buffer 307 according to its direction information; if that queue is nearly full, the packet is discarded.
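The admission check performed by the packet detection module can be sketched as a simple validator. The field names, the set of valid packet types, and the length limit below are illustrative assumptions, not values from the patent:

```python
import itertools

_stream_ids = itertools.count(1)   # monotonically increasing stream-ids

VALID_TYPES = {"protocol", "data"}   # assumed set of admissible packet types
MAX_LEN = 2048                       # assumed packet length limit in bytes

def detect_packet(pkt):
    """Check DA, SA, packet type and length; allocate a stream-id if the
    packet is admissible, otherwise return None (packet is discarded)."""
    if not pkt.get("da") or not pkt.get("sa"):
        return None
    if pkt.get("type") not in VALID_TYPES:
        return None
    if not (0 < pkt.get("length", 0) <= MAX_LEN):
        return None
    pkt["stream_id"] = next(_stream_ids)
    return pkt
```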
The switching engine module 303 polls all packet buffer queues; in this embodiment of the present invention, polling is divided into two cases:
if the queue is from a downlink network interface to an uplink network interface, it is forwarded when the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero; 3) a token generated by the code rate control module has been obtained;
if the queue is not from a downlink network interface to an uplink network interface, it is forwarded when the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304 and, at programmable intervals, generates tokens for all packet buffer queues from downlink network interfaces to uplink network interfaces, so as to control the code rate of uplink forwarding.
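The two sets of forwarding conditions above can be sketched as follows; the class and function names are illustrative only, not part of the patented implementation:

```python
class PacketQueue:
    """A packet buffer queue polled by the switching engine."""
    def __init__(self, to_uplink):
        self.to_uplink = to_uplink  # True for downlink-to-uplink queues
        self.packet_count = 0       # the queue packet counter
        self.tokens = 0             # tokens granted by the rate control module

def grant_token(queue):
    """Rate control module: issue one token per programmable interval."""
    queue.tokens += 1

def may_forward(queue, send_buffer_full):
    """Return True if the switching engine may forward from this queue."""
    # Condition 1: the port send buffer is not full.
    if send_buffer_full:
        return False
    # Condition 2: the queue packet counter is greater than zero.
    if queue.packet_count == 0:
        return False
    # Condition 3 (downlink-to-uplink queues only): a rate-control token
    # must have been obtained, throttling the uplink code rate.
    if queue.to_uplink and queue.tokens == 0:
        return False
    return True
```

Only downlink-to-uplink traffic is gated by tokens, matching the two polling cases described above.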
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the code rate control module 308.
The Ethernet protocol conversion gateway:
As shown in fig. 4, the apparatus mainly includes a network interface module (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
Wherein packets coming from the downlink network interface module 401 enter the packet detection module 405. The packet detection module 405 detects whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video network destination address (DA), video network source address (SA), video network packet type, and packet length of the packet meet the requirements; if so, it allocates a corresponding stream identifier (stream-id), the MAC deleting module 410 strips the MAC DA, MAC SA, and length or frame type (2 bytes), and the packet enters the corresponding receiving buffer; otherwise, the packet is discarded;
the downlink network interface module 401 checks the send buffer of the port; if a packet is present, it obtains the Ethernet MAC DA of the corresponding terminal according to the destination address (DA) of the packet, prepends the terminal's Ethernet MAC DA, the Ethernet MAC SA of the Ethernet protocol conversion gateway, and the Ethernet length or frame type, and sends the packet.
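The work of the MAC adding and MAC deleting modules amounts to stripping and prepending a 14-byte Ethernet header. A minimal sketch follows; the EtherType value and function names are assumptions for illustration:

```python
# Ethernet header: MAC DA (6 bytes) + MAC SA (6 bytes) + length/frame type (2 bytes)
ETH_HEADER_LEN = 6 + 6 + 2

def strip_mac(frame: bytes) -> bytes:
    """MAC deleting module: remove MAC DA, MAC SA, and length/frame type
    before the packet enters the video network."""
    return frame[ETH_HEADER_LEN:]

def add_mac(packet: bytes, terminal_mac: bytes, gateway_mac: bytes,
            eth_type: bytes = b"\x08\x00") -> bytes:
    """MAC adding module: prepend the terminal's Ethernet MAC DA, the
    gateway's Ethernet MAC SA, and the Ethernet length/frame type before
    sending toward the terminal. The default eth_type is an assumed value."""
    assert len(terminal_mac) == 6 and len(gateway_mac) == 6
    return terminal_mac + gateway_mac + eth_type + packet
```

Stripping then re-adding the header leaves the inner video network packet unchanged.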
The other modules of the Ethernet protocol conversion gateway function similarly to those of the access switch.
A terminal:
a terminal mainly comprises a network interface module, a service processing module, and a CPU module. For example, a set-top box mainly comprises a network interface module, a video/audio codec engine module, and a CPU module; an encoding board mainly comprises a network interface module, a video/audio encoding engine module, and a CPU module; a storage device mainly comprises a network interface module, a CPU module, and a disk array module.
1.3 The devices of the metropolitan area network part can be mainly classified into 3 types: node server, node switch, and metropolitan area server. The node switch mainly comprises a network interface module, a switching engine module, and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module, and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly comprises the following parts: destination Address (DA), Source Address (SA), reserved bytes, payload (pdu), CRC.
As shown in the following table, the data packet of the access network mainly includes the following parts:
| DA | SA | Reserved | Payload | CRC |
wherein:
the Destination Address (DA) consists of 8 bytes: the first byte represents the packet type (such as the various protocol packets, multicast data packets, unicast data packets, etc., allowing at most 256 types); the second through sixth bytes form the metropolitan area network address; and the seventh and eighth bytes form the access network address;
the Source Address (SA) also consists of 8 bytes and is defined in the same way as the Destination Address (DA);
the reserved byte consists of 2 bytes;
the payload (PDU) has different lengths according to the packet type: 64 bytes for the various protocol packets, and 32 + 1024 = 1056 bytes for unicast data packets; of course, the length is not limited to these two cases;
the CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
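Under the field sizes above, an access-network packet can be sketched as follows. Python's `zlib.crc32` uses the same CRC-32 polynomial as standard Ethernet; the byte order in which the checksum is stored is an assumption here:

```python
import struct
import zlib

def build_access_packet(da: bytes, sa: bytes, payload: bytes) -> bytes:
    """Assemble DA(8) + SA(8) + Reserved(2) + Payload + CRC(4)."""
    assert len(da) == 8 and len(sa) == 8
    body = da + sa + b"\x00\x00" + payload
    crc = zlib.crc32(body) & 0xFFFFFFFF  # standard Ethernet CRC-32 polynomial
    return body + struct.pack(">I", crc)

def check_access_packet(packet: bytes) -> bool:
    """Verify the trailing 4-byte CRC over the rest of the packet."""
    body, crc = packet[:-4], struct.unpack(">I", packet[-4:])[0]
    return (zlib.crc32(body) & 0xFFFFFFFF) == crc
```

For a 64-byte protocol packet payload, the total length is 8 + 8 + 2 + 64 + 4 = 86 bytes.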
2.2 metropolitan area network packet definition
The topology of a metropolitan area network is a graph, and there may be 2 or even more connections between two devices, i.e., more than 2 connections may exist between a node switch and a node server, between two node switches, and so on. However, the metropolitan area network address of each device is unique; therefore, in order to accurately describe the connection relationship between metropolitan area network devices, the embodiment of the present invention introduces a parameter, the label, to uniquely describe a connection of a metropolitan area network device.
In this specification, the label is defined similarly to the label of MPLS (Multi-Protocol Label Switching): assuming there are two connections between device A and device B, a packet from device A to device B has 2 possible labels, as does a packet from device B to device A. Labels are classified into incoming labels and outgoing labels; for example, if the label of a packet entering device A (the incoming label) is 0x0000, the label of the packet leaving device A (the outgoing label) may become 0x0001. The network access process of the metropolitan area network is under centralized control: both address allocation and label allocation are directed by the metropolitan area server, and the node switches and node servers execute them passively. This differs from MPLS, in which label allocation is the result of mutual negotiation between switch and server.
As shown in the following table, the data packet of the metro network mainly includes the following parts:
| DA | SA | Reserved | Label | Payload | CRC |
Namely: Destination Address (DA), Source Address (SA), Reserved bytes, Label, Payload (PDU), and CRC. The format of the label may be defined as follows: the label is 32 bits, with the upper 16 bits reserved and only the lower 16 bits used; it is positioned between the reserved bytes and the payload of the packet.
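The label placement and the incoming-label/outgoing-label swap described above can be sketched as follows. Field offsets follow section 2.2; the label table itself is assumed to have been configured by the metropolitan area server under centralized control:

```python
import struct

# Offsets: DA(8) + SA(8) + Reserved(2) = 18; the 4-byte label follows.
LABEL_OFFSET = 8 + 8 + 2

def add_label(da: bytes, sa: bytes, payload: bytes, label: int) -> bytes:
    """Build a metro packet: DA(8) SA(8) Reserved(2) Label(4) Payload.
    The label is 32 bits; the upper 16 bits are reserved (zero) and only
    the lower 16 bits carry the value."""
    assert 0 <= label <= 0xFFFF
    return da + sa + b"\x00\x00" + struct.pack(">I", label) + payload

def swap_label(packet: bytes, label_table: dict) -> bytes:
    """At each device the incoming label is replaced by the outgoing label,
    e.g. in-label 0x0000 may leave as out-label 0x0001."""
    in_label = struct.unpack(">I", packet[LABEL_OFFSET:LABEL_OFFSET + 4])[0] & 0xFFFF
    out_label = label_table[in_label]
    return (packet[:LABEL_OFFSET]
            + struct.pack(">I", out_label)
            + packet[LABEL_OFFSET + 4:])
```

The payload and addresses pass through unchanged; only the label field is rewritten hop by hop.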
Referring to fig. 5, a flowchart illustrating the steps of an operation method of panoramic video data according to an embodiment of the present invention is shown. A graphics workstation, a streaming media server, and video network terminals are deployed in the video network; the streaming media server is connected to a plurality of monitoring cameras installed in the same place; and a scheduling server is deployed in an IP (Internet Protocol) network. The method specifically includes the following steps:
step 501, the graphics workstation receives the multiple channels of source video data collected by the plurality of monitoring cameras and sent by the streaming media server.
In the embodiment of the present invention, a plurality of monitoring cameras are installed in the same place, such as a street, a factory, or a community. The monitoring ranges of adjacently distributed cameras partially overlap, and together the cameras cover the entire range of the place, thereby monitoring the whole site.
Each monitoring camera collects one path of source video data in real time and sends the data to the streaming media server.
The streaming media server then transmits the multiple channels of source video data, collected by the monitoring cameras in the same place, to the graphics workstation.
Further, monitoring cameras generally use TCP (Transmission Control Protocol), so the source video data collected by a monitoring camera is converted to the video networking protocol when entering the video network.
For example, packets of source video data may be encapsulated for transmission in the video network according to the 2000 specification of the video networking protocol (see the packet format tables in the accompanying figures).
step 502, the graphics workstation synthesizes the multiple paths of source video data into panoramic video data.
Because the monitoring ranges of adjacently distributed cameras partially overlap, the source video data they collect also partially overlap; the overlapping parts of the multiple channels of source video data are removed, and the remainder is spliced into three-dimensional panoramic video data.
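The "remove the overlap, then splice" step can be illustrated with a deliberately naive sketch that treats each frame as a 2-D grid of pixel values and assumes a known overlap width; a real graphics workstation would additionally align and blend the frames:

```python
def stitch_row(frames, overlap):
    """Naive horizontal stitch: for every frame after the first, drop its
    `overlap` leftmost columns (the part duplicated by the previous camera)
    and append the rest row by row. Illustrative only; real panorama
    synthesis registers and blends the overlap region."""
    panorama = [row[:] for row in frames[0]]  # copy the first frame
    for frame in frames[1:]:
        for r, row in enumerate(frame):
            panorama[r].extend(row[overlap:])
    return panorama
```

With two 2x3 frames whose last/first columns coincide, the result is a 2x5 panorama.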
It should be noted that, because the monitoring cameras collect source video data continuously in real time, the graphics workstation can also synthesize the panoramic video data continuously in real time.
Step 503, the graphics workstation issues the panoramic video data at the video networking terminal.
The graphics workstation selects a certain video network terminal and distributes the real-time panoramic video data at that terminal in multicast mode; other video network terminals can then watch the panoramic video data from that terminal.
It should be noted that the synthesized panoramic video data is a three-dimensional image whose area is generally larger than the display area of a screen, so it cannot be played in full at once. The graphics workstation therefore selects a certain angle and distributes only the part of the panoramic video data visible at that angle to the user; this part may be referred to as the presentation field of view.
Step 504, the scheduling server detects a view adjustment operation for the panoramic video data on a page.
In an embodiment of the present invention, a user (e.g., a super administrator) may log into the scheduling server through an application such as a browser, and the scheduling server may provide a page for controlling the graphics workstation.
When a user (such as a super administrator) watches the panoramic video data synthesized for a certain place at the terminal of the video network, the user can trigger the view field adjusting operation on the page according to the requirement, so as to adjust the presenting view field of the panoramic video data.
In one embodiment, as shown in fig. 6, the page provided by the scheduling server supports various operations; to avoid erroneous operations, a popup window may be generated in the page for the field-of-view adjustment.
In this embodiment, the view adjustment operation includes a mouse operation, i.e., a user can trigger adjustment of the rendering view of the panoramic video data by operating the mouse.
Accordingly, the scheduling server may detect a mouse operation for the panoramic video data on a popup of the page.
And 505, generating a visual field adjusting instruction for adjusting the presenting visual field of the panoramic video data by the scheduling server according to the visual field adjusting operation, and sending the visual field adjusting instruction to a graphic workstation.
After detecting the field-of-view adjustment operation, the scheduling server responds by generating a corresponding field-of-view adjustment instruction for adjusting the presentation field of view of the panoramic video data, and sends the instruction to the graphics workstation over a link of the IP network.
It should be noted that the graphics workstation may be equipped with both a video network card and an IP network card, so that it can communicate in the video network and in the IP network at the same time.
In the IP network, the scheduling server can establish a long connection with the graphics workstation through a protocol such as WebSocket, and the two can communicate based on this long connection.
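On top of such a long connection, a field-of-view adjustment instruction could be serialized as a small message before being sent to the graphics workstation. The field names below are assumptions for illustration, not the patent's wire format:

```python
import json

def make_view_instruction(kind: str, **params) -> str:
    """Serialize a field-of-view adjustment instruction as JSON, as the
    scheduling server might send it over the long connection. `kind` is one
    of the three instruction types; `params` carries the mapped values
    (direction, distance, angle, scale, ...)."""
    assert kind in ("move", "rotate", "zoom")
    return json.dumps({"type": "view_adjust", "action": kind,
                       "params": params})
```

The graphics workstation would parse the same JSON on receipt and dispatch on the `action` field.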
In one embodiment of the present invention, step 505 may comprise the sub-steps of:
and a substep S11 of identifying an operation type of the mouse operation.
And a substep S12 of generating a view adjustment instruction for adjusting a presentation view of the panoramic video data according to the operation type.
In the embodiment of the present invention, mouse operations can be divided into several operation types in advance, and each operation type can be associated with a field-of-view adjustment instruction.
For the mouse operation detected on the page currently, the operation type can be identified, so that a corresponding visual field adjusting instruction is generated.
In a particular implementation, the view adjustment instruction includes at least one of a view movement instruction, a view rotation instruction, and a view scaling instruction.
In one embodiment, if the operation type is a first mouse operation, a field-of-view movement instruction to move a presentation field of view of the panoramic video data is generated in accordance with the first mouse operation.
In one example, the first mouse operation includes a left mouse button press and a mouse drag.
Then in this example, a first drag direction and a first drag distance of the mouse drag are calculated.
The first drag direction is mapped to a movement direction, for example, leftward drag is mapped to leftward movement, rightward drag is mapped to rightward movement, upward drag is mapped to upward movement, and downward drag is mapped to downward movement.
The first drag distance is mapped to a movement distance according to a preset distance mapping ratio (generally positively correlated).
Thereafter, a view movement instruction for moving the presentation view of the panoramic video data in the movement direction and the movement distance is generated.
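The drag-to-movement mapping just described might be sketched as follows, assuming screen coordinates with y increasing downward and an assumed distance mapping ratio:

```python
def drag_to_move(dx: float, dy: float, distance_scale: float = 0.5) -> dict:
    """Map a left-button drag (dx, dy in pixels) to a view movement
    instruction: the dominant drag axis becomes the movement direction, and
    the drag distance is scaled by a preset, positively correlated mapping
    ratio (distance_scale is an assumed value)."""
    if abs(dx) >= abs(dy):
        direction = "left" if dx < 0 else "right"
    else:
        direction = "up" if dy < 0 else "down"  # screen y grows downward
    distance = (dx * dx + dy * dy) ** 0.5 * distance_scale
    return {"instruction": "move", "direction": direction,
            "distance": distance}
```

A drag of 100 pixels to the left thus yields a leftward movement of 50 units under this ratio.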
In another embodiment, if the operation type is the second mouse operation, a view rotation instruction to rotate the presentation field of view of the panoramic video data is generated in accordance with the second mouse operation.
In one example, the second mouse operation includes a right mouse button press and drag of the mouse.
Then, in this example, a second drag direction and a second drag distance of the mouse drag are calculated.
The second drag direction is mapped to a rotation direction, for example, a leftward drag is mapped to a leftward rotation, a rightward drag is mapped to a rightward rotation, an upward drag is mapped to an upward rotation, and a downward drag is mapped to a downward rotation.
The second drag distance is mapped to a rotation angle according to a preset angle mapping ratio (generally positively correlated).
Thereafter, a view rotation command for rotating the presentation view of the panoramic video data in the rotation direction and the rotation angle is generated.
In yet another embodiment, if the operation type is the third mouse operation, a view scaling instruction to scale the presentation field of view of the panoramic video data is generated in accordance with the third mouse operation.
In one example, the third mouse operation includes a mouse wheel swipe.
Then in this example, the direction and angle of the mouse wheel slide are calculated.
The sliding direction determines whether to enlarge or reduce (for example, sliding upward enlarges and sliding downward reduces), and the sliding angle is mapped to an enlargement scale or a reduction scale according to a preset sliding mapping ratio (generally positively correlated).
Thereafter, a field of view scaling instruction to scale the rendered field of view of the panoramic video data at a reduced or enlarged scale is generated.
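The wheel-to-scale mapping described above can be sketched as follows; the per-degree ratio is an assumed value for illustration:

```python
def wheel_to_zoom(delta_angle: float, scale_per_degree: float = 0.01) -> dict:
    """Map a mouse wheel movement to a view scaling instruction: a positive
    angle (upward slide) enlarges, a negative angle (downward slide)
    reduces, and the magnitude maps to the scale via a preset, positively
    correlated ratio (scale_per_degree is an assumed value)."""
    factor = 1.0 + abs(delta_angle) * scale_per_degree
    if delta_angle > 0:
        return {"instruction": "zoom", "mode": "enlarge", "scale": factor}
    return {"instruction": "zoom", "mode": "reduce", "scale": 1.0 / factor}
```

A 30-degree upward slide enlarges by a factor of 1.3; the same slide downward reduces by its reciprocal.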
Of course, the above manner of generating the field-of-view adjustment instruction is only an example; when implementing the embodiment of the present invention, a person skilled in the art may adopt other manners of generating the instruction according to actual needs, and the embodiment of the present invention is not limited in this respect.
Step 506, the graphic workstation adjusts the presentation visual field of the panoramic video data issued at the terminal of the video network according to the visual field adjusting instruction.
And after receiving the visual field adjusting instruction, the graphic workstation adjusts the presentation visual field of the panoramic video data issued at the terminal of the video network according to the visual field adjusting instruction.
Thereafter, other video network terminals may view, from that video network terminal, the panoramic video data whose presentation field of view has been adjusted.
It should be noted that, since the panoramic video data is synthesized continuously in real time, adjusting its presentation field of view may be a gradual process: the presentation field of view changes gradually across the multiple frames of image data until it reaches the angle the user requires.
In a particular implementation, the view adjustment instruction includes at least one of a view movement instruction, a view rotation instruction, and a view scaling instruction.
In one embodiment, if the field-of-view adjustment instruction is a view movement instruction, the presentation field of view of the panoramic video data published at the video network terminal is moved. Specifically, the movement direction and the movement distance are read from the view movement instruction, and the presentation field of view is moved accordingly.
In another embodiment, if the instruction is a view rotation instruction, the presentation field of view of the panoramic video data published at the video network terminal is rotated. Specifically, the rotation direction and the rotation angle are read from the view rotation instruction, and the presentation field of view is rotated accordingly.
In yet another embodiment, if the instruction is a view scaling instruction, the presentation field of view of the panoramic video data published at the video network terminal is scaled. Specifically, a reduction scale or an enlargement scale is read from the view scaling instruction, and the presentation field of view is reduced or enlarged accordingly.
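Putting the three instruction types together, the graphics workstation's adjustment step might be sketched as follows, with an assumed view-state representation (x/y offsets, a rotation angle in degrees, and a zoom factor):

```python
def apply_view_instruction(view: dict, instr: dict) -> dict:
    """Apply a received field-of-view adjustment instruction to the
    presentation field of view. The `view` state dict and instruction
    field names are assumptions for illustration."""
    new = dict(view)
    if instr["instruction"] == "move":
        step = {"left": (-1, 0), "right": (1, 0),
                "up": (0, -1), "down": (0, 1)}[instr["direction"]]
        new["x"] += step[0] * instr["distance"]
        new["y"] += step[1] * instr["distance"]
    elif instr["instruction"] == "rotate":
        sign = -1 if instr["direction"] == "left" else 1
        new["angle"] = (new["angle"] + sign * instr["angle"]) % 360
    elif instr["instruction"] == "zoom":
        new["zoom"] *= instr["scale"]
    return new
```

Because the panorama is synthesized continuously, the workstation could apply such a step per frame to realize the gradual transition noted above.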
In the embodiment of the present invention, the graphics workstation receives the multiple channels of source video data collected by the plurality of monitoring cameras and sent by the streaming media server, synthesizes them into panoramic video data, and publishes the panoramic video data at a video network terminal. The scheduling server detects a field-of-view adjustment operation for the panoramic video data on a page, generates a field-of-view adjustment instruction accordingly, and sends it to the graphics workstation, which adjusts the presentation field of view of the panoramic video data published at the video network terminal. In this way, a user adjusts the presentation field of view of the panoramic video data as required simply by performing the field-of-view adjustment operation.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 7, there is shown a block diagram of an operating system for panoramic video data according to an embodiment of the present invention, in which a graphics workstation 710, a streaming server and a video network terminal are deployed in a video network, the streaming server is connected to a plurality of monitoring cameras installed in the same site, and a scheduling server 720 is deployed in an IP network;
the graphics workstation 710 comprises a source video data receiving module 711, a panoramic video data synthesizing module 712, a panoramic video data publishing module 713 and a presentation view adjusting module 714; the scheduling server 720 includes a visual field adjusting operation detecting module 721 and a visual field adjusting instruction generating module 722;
a source video data receiving module 711, configured to receive multiple paths of source video data sent by the streaming media server and collected by the multiple monitoring cameras;
a panoramic video data synthesis module 712, configured to synthesize the multiple paths of source video data into panoramic video data;
a panoramic video data publishing module 713, configured to publish the panoramic video data at the terminal of the video networking;
a view adjustment operation detection module 721 configured to detect a view adjustment operation for the panoramic video data on a page;
the visual field adjusting instruction generating module 722 is configured to generate a visual field adjusting instruction for adjusting a presentation visual field of the panoramic video data according to the visual field adjusting operation, and send the visual field adjusting instruction to a graphics workstation;
and a presentation view adjusting module 714, configured to adjust a presentation view of the panoramic video data issued at the terminal of the video networking according to the view adjusting instruction.
In one embodiment of the invention, the page has a popup window, and the view adjustment operation comprises a mouse operation;
the visual field adjustment operation detection module 721 includes:
and the pop window detection submodule is used for detecting mouse operation aiming at the panoramic video data on the pop window of the page.
In an embodiment of the present invention, the visual field adjustment instruction generating module 722 includes:
the operation type identification submodule is used for identifying the operation type of the mouse operation;
and the type generation submodule is used for generating a visual field adjusting instruction for adjusting the presenting visual field of the panoramic video data according to the operation type.
In one embodiment of the invention, the view adjusting instruction comprises at least one of a view moving instruction, a view rotating instruction, and a view zooming instruction;
the type generation submodule comprises:
a field of view movement instruction generating unit, configured to generate a field of view movement instruction for moving a presentation field of view of the panoramic video data according to the first mouse operation if the operation type is the first mouse operation;
a view rotation instruction generating unit, configured to generate a view rotation instruction for rotating the presentation field of view of the panoramic video data according to the second mouse operation if the operation type is the second mouse operation;
and a view scaling instruction generating unit, configured to generate a view scaling instruction for scaling the presentation field of view of the panoramic video data according to the third mouse operation if the operation type is the third mouse operation.
In one example of the embodiment of the present invention, the first mouse operation includes a left mouse button pressing and dragging a mouse;
the visual field movement instruction generation unit includes:
the first dragging parameter calculating subunit is used for calculating a first dragging direction and a first dragging distance dragged by the mouse;
a moving direction mapping subunit, configured to map the first dragging direction as a moving direction;
a moving distance mapping subunit, configured to map the first dragging distance as a moving distance;
a first generation subunit configured to generate a view field movement instruction to move a presentation view field of the panoramic video data in the movement direction and the movement distance;
the second mouse operation comprises pressing a right mouse button and dragging a mouse;
the visual field rotation instruction generation unit includes:
the second dragging parameter calculation subunit is used for calculating a second dragging direction and a second dragging distance dragged by the mouse;
a rotation direction mapping subunit, configured to map the second dragging direction as a rotation direction;
a rotation distance mapping subunit, configured to map the second dragging distance into a rotation angle;
a second generation subunit, configured to generate a view rotation instruction for rotating a presentation view of the panoramic video data according to the rotation direction and the rotation angle;
the third mouse operation comprises mouse wheel sliding;
the field of view scaling instruction generation unit includes:
the sliding parameter calculating subunit is used for calculating the sliding direction and the sliding angle of the mouse roller;
a scaling parameter mapping subunit, configured to map the sliding direction and the sliding angle into a reduction scale or an enlargement scale;
a third generating subunit configured to generate a view field scaling instruction to scale a presentation view field of the panoramic video data in accordance with the reduction scale or the enlargement scale.
In one embodiment of the present invention, the rendering field of view adjustment module 714 includes:
the view-presenting moving submodule is used for moving the view presenting of the panoramic video data issued by the video networking terminal if the view adjusting instruction is a view moving instruction;
the presentation view rotating submodule is used for rotating the presentation view of the panoramic video data issued by the video networking terminal if the view adjusting instruction is a view rotating instruction;
and the presentation view field scaling submodule is used for scaling the presentation view field of the panoramic video data issued at the video network terminal if the view field adjusting instruction is a view field scaling instruction.
In one embodiment of the present invention, the presentation field of view moving submodule includes:
the movement parameter reading unit is used for reading the movement direction and the movement distance from the visual field movement instruction;
the parameter control mobile unit is used for moving the presenting visual field of the panoramic video data published by the video network terminal according to the moving direction and the moving distance;
the presentation view rotation sub-module includes:
a rotation parameter reading unit for reading a rotation direction and a rotation angle from the view rotation instruction;
the parameter control rotating unit is used for rotating the presenting visual field of the panoramic video data issued at the video network terminal according to the rotating direction and the rotating angle;
the rendering view scaling submodule includes:
a zoom parameter reading unit for reading a zoom-out scale or a zoom-in scale from the view field zoom instruction;
and the parameter control zooming unit is used for zooming or amplifying the presentation visual field of the panoramic video data distributed at the video network terminal according to the zooming-out proportion or the zooming-in proportion.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
In the embodiment of the present invention, the graphics workstation receives the multiple channels of source video data collected by the plurality of monitoring cameras and sent by the streaming media server, synthesizes them into panoramic video data, and publishes the panoramic video data at a video network terminal. The scheduling server detects a field-of-view adjustment operation for the panoramic video data on a page, generates a field-of-view adjustment instruction accordingly, and sends it to the graphics workstation, which adjusts the presentation field of view of the panoramic video data published at the video network terminal. In this way, a user adjusts the presentation field of view of the panoramic video data as required simply by performing the field-of-view adjustment operation.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or terminal that comprises the element.
The foregoing provides a detailed description of the operation method and operation system of panoramic video data. Specific examples are used herein to illustrate the principle and implementation of the present invention, and the description of the above embodiments is intended only to help in understanding the method of the present invention and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. An operation method of panoramic video data is characterized in that a graphic workstation, a streaming media server and a video network terminal are deployed in a video network, the streaming media server is connected with a plurality of monitoring cameras installed in the same place, and a scheduling server is deployed in an IP network, the method comprises the following steps:
the graphics workstation receives multi-channel source video data which are sent by the streaming media server and collected by the plurality of monitoring cameras;
the graphics workstation synthesizes the multi-path source video data into panoramic video data;
the graphics workstation issues the panoramic video data at the video networking terminal; the panoramic video data is published in the video networking terminal in a multicast mode;
the scheduling server detects a visual field adjusting operation aiming at the panoramic video data on a page; the scheduling server establishes long connection with the graphic workstation in an IP network through a WebSocket protocol;
the scheduling server generates a visual field adjusting instruction for adjusting the presenting visual field of the panoramic video data according to the visual field adjusting operation and sends the visual field adjusting instruction to a graphic workstation;
and the graphic workstation adjusts the presentation visual field of the panoramic video data issued at the video network terminal according to the visual field adjusting instruction.
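The control path of claim 1 — the scheduling server detects a page operation, generates a view adjustment instruction, and pushes it to the graphics workstation over the long-lived WebSocket connection — can be sketched in Python. This is an illustrative sketch only; `ViewInstruction`, `make_instruction`, and the operation-type names are assumptions, not identifiers from the patent.

```python
# Hypothetical sketch of the claim-1 control path: a detected page
# operation is mapped to a view-adjustment instruction, which would then
# be serialized and pushed over the persistent WebSocket connection.
from dataclasses import dataclass

@dataclass
class ViewInstruction:
    kind: str     # "move" | "rotate" | "zoom"
    params: dict  # direction/distance, direction/angle, or scale

def make_instruction(operation: dict) -> ViewInstruction:
    """Map a detected page operation to a view-adjustment instruction."""
    op_type = operation["type"]
    if op_type == "left_drag":    # first mouse operation -> move the view
        return ViewInstruction("move", {"direction": operation["direction"],
                                        "distance": operation["distance"]})
    if op_type == "right_drag":   # second mouse operation -> rotate the view
        return ViewInstruction("rotate", {"direction": operation["direction"],
                                          "angle": operation["angle"]})
    if op_type == "wheel":        # third mouse operation -> zoom the view
        return ViewInstruction("zoom", {"scale": operation["scale"]})
    raise ValueError(f"unknown operation type: {op_type}")

# The instruction would then travel over the WebSocket long connection,
# e.g. await websocket.send(json.dumps(instr.__dict__))
instr = make_instruction({"type": "wheel", "scale": 1.25})
print(instr.kind)  # prints "zoom"
```

The separation mirrors the claim: the scheduling server only classifies operations and emits instructions, while the graphics workstation alone touches the published video.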
2. The method of claim 1, wherein the page has a pop-up window, and wherein the view adjustment operation comprises a mouse operation;
the scheduling server detects a field of view adjustment operation for the panoramic video data on a page, including:
detecting a mouse operation for the panoramic video data on a pop-up window of the page.
3. The method of claim 2, wherein the scheduling server generates a view adjustment instruction for adjusting the rendering view of the panoramic video data according to the view adjustment operation, and comprises:
identifying the operation type of the mouse operation;
and generating a visual field adjusting instruction for adjusting the presenting visual field of the panoramic video data according to the operation type.
4. The method of claim 3, wherein the view adjustment instruction comprises at least one of a view movement instruction, a view rotation instruction, a view scaling instruction;
the generating of the view adjusting instruction for adjusting the presentation view of the panoramic video data according to the operation type includes:
if the operation type is a first mouse operation, generating a view moving instruction for moving a presenting view of the panoramic video data according to the first mouse operation;
if the operation type is a second mouse operation, generating a view rotating instruction for rotating the presenting view of the panoramic video data according to the second mouse operation;
and if the operation type is a third mouse operation, generating a view field zooming instruction for zooming the presentation view field of the panoramic video data according to the third mouse operation.
5. The method of claim 4,
the first mouse operation comprises pressing a left mouse button and dragging a mouse;
generating a view moving instruction for moving a presentation view of the panoramic video data according to the first mouse operation, including:
calculating a first dragging direction and a first dragging distance dragged by the mouse;
mapping the first dragging direction to a moving direction;
mapping the first dragging distance to a moving distance;
generating a view moving instruction for moving a presentation view of the panoramic video data according to the moving direction and the moving distance;
the second mouse operation comprises pressing a right mouse button and dragging a mouse;
the generating of the view rotation instruction for rotating the presentation view of the panoramic video data according to the second mouse operation includes:
calculating a second dragging direction and a second dragging distance dragged by the mouse;
mapping the second dragging direction to a rotating direction;
mapping the second dragging distance into a rotation angle;
generating a view rotating instruction for rotating the presenting view of the panoramic video data according to the rotating direction and the rotating angle;
the third mouse operation comprises mouse wheel sliding;
the generating of the view field zooming instruction for zooming the presentation view field of the panoramic video data according to the third mouse operation includes:
calculating the sliding direction and the sliding angle of the mouse roller;
mapping the sliding direction and the sliding angle to a reduction scale or an enlargement scale;
generating a field of view scaling instruction to scale a rendered field of view of the panoramic video data at the reduced scale or the enlarged scale.
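The drag-to-direction/distance and wheel-to-scale mappings of claim 5 can be illustrated with a minimal Python sketch. The calibration constants here (pixels per unit of movement, 15° per wheel notch, 10% zoom step per notch) are assumptions for illustration, not values taken from the patent.

```python
# Illustrative mappings from raw mouse input to view-adjustment parameters,
# in the spirit of claim 5. All constants are assumed calibrations.
import math

def map_drag(dx: float, dy: float, px_per_unit: float = 100.0):
    """Map a mouse-drag vector (pixels) to a (direction, magnitude) pair.
    Direction is in degrees, counter-clockwise from +x; screen y grows
    downward, hence the sign flip on dy."""
    direction = math.degrees(math.atan2(-dy, dx)) % 360.0
    magnitude = math.hypot(dx, dy) / px_per_unit
    return direction, magnitude

def map_wheel(delta_degrees: float, step: float = 0.1):
    """Map wheel rotation to a zoom scale: forward (positive) enlarges,
    backward reduces; clamped so the scale stays positive."""
    notches = delta_degrees / 15.0  # assume 15 degrees per wheel notch
    return max(0.1, 1.0 + step * notches)

# A 100-px drag to the right maps to direction 0 deg, one unit of movement.
print(map_drag(100.0, 0.0))  # prints (0.0, 1.0)
```

The same `map_drag` serves both the move mapping (claim 5's first drag) and the rotate mapping (the second drag); only the interpretation of the magnitude — distance versus angle — differs downstream.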
6. The method according to any one of claims 1 to 5, wherein the graphics workstation adjusts the view of the panoramic video data published at the video networking terminal according to the view adjustment instruction, and the method comprises:
if the view adjusting instruction is a view moving instruction, moving the presenting view of the panoramic video data issued at the video networking terminal;
if the view adjusting instruction is a view rotating instruction, rotating the presenting view of the panoramic video data issued at the video networking terminal;
and if the view field adjusting instruction is a view field zooming instruction, zooming the presentation view field of the panoramic video data issued at the video network terminal.
7. The method of claim 6,
if the view adjusting instruction is a view moving instruction, moving the presenting view of the panoramic video data issued by the video networking terminal, including:
reading a moving direction and a moving distance from the visual field moving instruction;
moving the presenting visual field of the panoramic video data published on the video network terminal according to the moving direction and the moving distance;
if the view adjusting instruction is a view rotating instruction, rotating the presenting view of the panoramic video data issued by the video networking terminal, including:
reading a rotation direction and a rotation angle from the visual field rotation instruction;
rotating the presenting visual field of the panoramic video data issued at the video network terminal according to the rotating direction and the rotating angle;
if the view field adjusting instruction is a view field zooming instruction, zooming the presentation view field of the panoramic video data issued at the video networking terminal, including:
reading a zoom-out scale or a zoom-in scale from the view scaling instruction;
and reducing or amplifying the presenting visual field of the panoramic video data published on the video network terminal according to the reduction scale or the amplification scale.
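How the graphics workstation might apply the three instruction types of claim 7 to the published presentation view can be sketched as follows. The yaw/pitch/zoom state representation, field names, and clamping limits are illustrative assumptions, not details from the patent.

```python
# Hypothetical presentation-view state on the graphics workstation, updated
# from the move / rotate / zoom instructions described in claim 7.
from dataclasses import dataclass

@dataclass
class RenderedView:
    yaw: float = 0.0    # horizontal look angle, degrees, wraps at 360
    pitch: float = 0.0  # vertical look angle, degrees, clamped to +/-90
    zoom: float = 1.0   # scale factor, clamped to an assumed safe range

    def apply(self, instr: dict) -> None:
        kind = instr["kind"]
        if kind == "move":      # read direction/distance, move the view
            self.yaw = (self.yaw + instr["dx"]) % 360.0
            self.pitch = max(-90.0, min(90.0, self.pitch + instr["dy"]))
        elif kind == "rotate":  # read direction/angle, rotate the view
            self.yaw = (self.yaw + instr["angle"]) % 360.0
        elif kind == "zoom":    # read the scale, clamp and apply it
            self.zoom = max(0.1, min(10.0, self.zoom * instr["scale"]))
        else:
            raise ValueError(f"unknown instruction kind: {kind}")

view = RenderedView()
view.apply({"kind": "rotate", "angle": 30.0})
print(view.yaw)  # prints 30.0
```

Clamping pitch and zoom keeps the multicast stream valid no matter what sequence of instructions arrives over the WebSocket connection.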
8. An operating system of panoramic video data is characterized in that a graphic workstation, a streaming media server and a video network terminal are deployed in a video network, the streaming media server is connected with a plurality of monitoring cameras installed in the same place, and a scheduling server is deployed in an IP network;
the graphic workstation comprises a source video data receiving module, a panoramic video data synthesizing module, a panoramic video data publishing module and a presentation visual field adjusting module; the scheduling server comprises a visual field adjusting operation detection module and a visual field adjusting instruction generation module;
the source video data receiving module is used for receiving multi-channel source video data which are sent by the streaming media server and collected by the plurality of monitoring cameras;
the panoramic video data synthesis module is used for synthesizing the multi-channel source video data into panoramic video data;
the panoramic video data publishing module is used for publishing the panoramic video data at the video networking terminal; the panoramic video data is published in the video networking terminal in a multicast mode;
the visual field adjusting operation detection module is used for detecting visual field adjusting operation aiming at the panoramic video data on a page; the scheduling server establishes long connection with the graphic workstation in an IP network through a WebSocket protocol;
the visual field adjusting instruction generating module is used for generating a visual field adjusting instruction for adjusting the presenting visual field of the panoramic video data according to the visual field adjusting operation and sending the visual field adjusting instruction to the graphic workstation;
and the presentation visual field adjusting module is used for adjusting the presentation visual field of the panoramic video data issued at the video network terminal according to the visual field adjusting instruction.
9. The system of claim 8, wherein the page has a pop-up window, and wherein the view adjustment operation comprises a mouse operation;
the visual field adjustment operation detection module includes:
the pop-up window detection submodule is used for detecting a mouse operation for the panoramic video data on the pop-up window of the page.
10. The system of claim 9, wherein the view adjustment instruction generation module comprises:
the operation type identification submodule is used for identifying the operation type of the mouse operation;
and the type generation submodule is used for generating a visual field adjusting instruction for adjusting the presenting visual field of the panoramic video data according to the operation type.
CN201811046362.9A 2018-09-07 2018-09-07 Operation method and system of panoramic video data Active CN109309787B (en)

Publications (2)

Publication Number Publication Date
CN109309787A CN109309787A (en) 2019-02-05
CN109309787B true CN109309787B (en) 2020-10-30



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant