CN110047035B - Panoramic video hot spot interaction system and interaction equipment - Google Patents

Panoramic video hot spot interaction system and interaction equipment

Info

Publication number
CN110047035B
Authority
CN
China
Prior art keywords
panoramic video
interaction
information
user
hotspot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910297395.9A
Other languages
Chinese (zh)
Other versions
CN110047035A (en)
Inventor
修文群
彭信
齐文光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Research Center Of Digital City Engineering
Shenzhen Technology Institute of Urban Public Safety Co Ltd
Original Assignee
Shenzhen Research Center Of Digital City Engineering
Shenzhen Technology Institute of Urban Public Safety Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Research Center Of Digital City Engineering, Shenzhen Technology Institute of Urban Public Safety Co Ltd filed Critical Shenzhen Research Center Of Digital City Engineering
Priority to CN201910297395.9A
Publication of CN110047035A
Application granted
Publication of CN110047035B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06T 3/08
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images

Abstract

The invention relates to the technical field of panoramic video interaction, and in particular to a panoramic video hot spot interaction system and interaction equipment. The interactive system comprises a panoramic roaming building module, an interactive model building module, an interactive exhibition module, a panoramic video display module and a background management module. The interactive device includes a processor, a memory, and a display. In the hotspot interaction system for panoramic video, hotspots are set, according to spherical coordinate information, for the objects of urban public safety supervision that appear in the panoramic video; the attribute information of each target object serves as the hotspot-associated information and is also associated with the projection points of the hotspot on the perspective views of different lines of sight. Interaction between the video content and the attribute information is thereby achieved through the hotspots, which facilitates user management.

Description

Panoramic video hot spot interaction system and interaction equipment
Technical Field
The invention relates to the technical field of panoramic video interaction, in particular to a panoramic video hot spot interaction system and interaction equipment.
Background
With the gradual advancement of video application technology and urban public safety supervision systems, the number of video supervision points in each large city has reached a certain order of magnitude.
Panoramic video technology and panoramic cameras have also come into use in urban management. Unlike traditional video, panoramic video breaks through the limitations of a fixed viewing angle and allows the viewer to be fully immersed in the displayed environment. Autonomous interactivity is its most distinctive feature: a user can freely change the viewing angle and zoom at will. However, the existing panoramic video interaction process relies mainly on a mouse or a keyboard.
In view of this, overcoming the above drawbacks of the prior art and providing a new hotspot interaction system suitable for panoramic video in the field of urban public safety is a technical problem to be solved in the art.
Disclosure of Invention
The aim of the invention can be achieved by the following technical measures:
the invention provides a panoramic video hot spot interaction system, which comprises:
the panoramic roaming building module is used for acquiring original video images, splicing the original video images to generate spherical panoramic video, reconstructing the panoramic video in different sight line directions with the spherical center of the panoramic video as a view point by utilizing a spherical re-projection algorithm, and generating perspective views in the different sight line directions to form panoramic video data;
the interaction model construction module is used for acquiring the panoramic video data, receiving a click operation on a target object in the panoramic video, configuring a pixel position corresponding to the click operation as a hot spot, acquiring spherical coordinate information of the hot spot, associating attribute information of the target object with the spherical coordinate information of the hot spot, acquiring image coordinate information of a projection point corresponding to the hot spot on the perspective view, and establishing a mapping relation between the image coordinate information of the projection point and the spherical coordinate information of the hot spot to form panoramic video interaction data;
the interactive exhibition module is used for receiving a request instruction carrying query information input by a user and acquiring panoramic video interaction data corresponding to the query information; and/or is used for receiving a request instruction carrying interaction information input by a user and managing the labeling content and/or attribute information of the corresponding hotspot icon according to that request instruction; and
and the panoramic video display module is used for loading panoramic video interaction data corresponding to the query information and/or the interaction information, and displaying part or all of attribute information of the target object as annotation content in the panoramic video and the perspective view respectively according to the spherical coordinate information of the hot spot and the image coordinate information of the projection point corresponding to the hot spot.
Preferably, the interactive exhibition module is used for receiving a request instruction carrying spherical coordinate information input by a user, identifying a target object according to the spherical coordinate information, and acquiring a panoramic video image frame and a perspective view corresponding to the target object.
Preferably, the interactive exhibition module is configured to receive a request instruction carrying first attribute information input by a user, identify a target object according to the first attribute information, and obtain a panoramic video image frame and a perspective view corresponding to the target object.
Preferably, the interactive exhibition module is further configured to receive viewing angle information input by a user, and obtain a perspective view of a corresponding line of sight according to the viewing angle information.
Preferably, the interactive exhibition module is further configured to receive a first click operation of a user on a hotspot icon displayed with the labeling content, and switch the labeling content between a part of attribute information and all attribute information.
Preferably, the interactive exhibition module is further configured to receive a second click operation of the user on the hotspot icon displayed with the labeling content, switch the hotspot icon to an editable mode, and update the editing content of the user to attribute information of the corresponding target object.
Preferably, the interactive system further comprises a background management module, which is used for storing and managing attribute information of the target object and panoramic video interactive data.
Preferably, the background management module is configured to receive user-defined authority information, set different user authorities for each attribute information according to the user-defined authority information, and map the user authorities of the attribute information to hot spots of corresponding targets, corresponding projection points of the hot spots on a perspective view, and labeling contents corresponding to the hot spots or projection points.
Preferably, the background management module is used for loading the original video image to the panoramic roaming building module, loading the panoramic video data to the interaction model building module, and/or loading the panoramic video interaction data to the panoramic video display module.
The invention also provides panoramic video hotspot interaction equipment, which comprises:
the processor is used for acquiring panoramic video interaction data corresponding to the user permission according to a request instruction of the user;
the memory is used for storing panoramic video interaction data;
the display is used for displaying scenes of the panoramic video according to the loaded panoramic video interaction data;
the processor is also used for acquiring interaction information of a user in a scene of the panoramic video, re-acquiring panoramic video interaction data corresponding to the interaction information or modifying current panoramic video interaction data according to the interaction information.
According to the hotspot interaction system of the panoramic video, hotspots are set for the objects of urban public safety supervision in the panoramic video according to the spherical coordinate information, the attribute information of the objects is used as hotspot association information, the attribute information of the objects is associated to projection points of the hotspots on different line-of-sight perspective views, interaction between video content and the attribute information is achieved through the hotspots, and user management is facilitated.
Drawings
Fig. 1 is a block diagram of a hotspot interaction system of a panoramic video according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a reprojection coordinate system of a spherical panorama in an interactive system according to an embodiment of the present invention.
FIG. 3 is a schematic illustration of hotspots in an interactive system according to an embodiment of the invention.
Fig. 4 is a block diagram of a hotspot interaction apparatus for panoramic video according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In order that the present disclosure may be more fully described and fully understood, the following description is provided by way of illustration of embodiments and specific examples of the present invention; this is not the only form of practicing or implementing the invention as embodied. The description covers the features of the embodiments and the method steps and sequences for constructing and operating the embodiments. However, other embodiments may be utilized to achieve the same or equivalent functions and sequences of steps.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Furthermore, depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
An embodiment of the present invention provides a panoramic video hotspot interaction system, referring to fig. 1, the interaction system includes: the system comprises a panoramic roaming building module 10, an interactive model building module 20, an interactive exhibition module 30, a panoramic video display module 40 and a background management module 50.
The panorama roaming building module 10 is configured to obtain original video images, splice them to generate a spherical panoramic video, reconstruct the panoramic video in different lines of sight with the sphere center of the panoramic video as the viewpoint by using a spherical re-projection algorithm, and generate perspective views in the different lines of sight to form panoramic video data, where the panoramic video data includes the spherical panoramic video and each perspective view corresponding to it. For example, a user shoots a group of original video images at position A, which are spliced into panoramic video A; panoramic video A is then reconstructed by the spherical re-projection algorithm, and panoramic video data A comprises the spliced panoramic video A and the reconstructed perspective views corresponding to panoramic video A. Likewise, a user shoots a group of original video images at position B, which are spliced into panoramic video B; panoramic video B is then reconstructed by the spherical re-projection algorithm, and panoramic video data B comprises the spliced panoramic video B and the reconstructed perspective views corresponding to panoramic video B. The plurality of panoramic video data formed by the panorama roaming building module 10 may be stored in a database or server.
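By way of illustration, the panoramic video data produced for each shooting position could be organized as in the following minimal Python sketch; the class and field names are illustrative assumptions and are not prescribed by the embodiment:

```python
from dataclasses import dataclass, field
from typing import List

import numpy as np


@dataclass
class PerspectiveView:
    """A reprojected planar view of the spherical panorama for one line of sight."""
    yaw_deg: float            # horizontal viewing direction
    pitch_deg: float          # vertical viewing direction
    image: np.ndarray         # H x W x 3 planar image


@dataclass
class PanoramicVideoData:
    """Stitched spherical panorama plus its reconstructed perspective views."""
    shooting_position: str                      # e.g. "position A"
    panorama_frames: List[np.ndarray]           # equirectangular video frames
    perspective_views: List[PerspectiveView] = field(default_factory=list)
```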
Specifically, when panoramic video stitching is performed, image frames of at least two original videos are obtained and stitched, so that the spliced panoramic video is generated.
First, feature points of the video image frames are extracted. Next, the feature points are matched across image frames, and mismatched points are eliminated using the least squares method or the random sample consensus (RANSAC) algorithm. A registration model of the image frames is then established from the feature points that remain after the mismatched points are removed. Finally, the image frames are registered onto the three-dimensional sphere of the sphere model according to the registration model to generate the spherical panoramic video.
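As an illustration of this step, the following Python sketch uses OpenCV's ORB features, brute-force matching, and RANSAC-based homography estimation; the particular detector, match count, and threshold are assumptions, since the embodiment only requires feature matching with least-squares or RANSAC outlier rejection:

```python
import cv2
import numpy as np


def register_pair(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Estimate a registration matrix H that maps frame_b onto frame_a."""
    orb = cv2.ORB_create(4000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)

    # Match descriptors and keep the better correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:500]

    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC discards mismatched points while fitting the registration model.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 4.0)
    return H
```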
The RANSAC algorithm is run under a given confidence probability P (P is generally set to 0.99) that at least one of the N sampled groups consists entirely of inliers. N is calculated as follows:
N = \frac{\log(1 - P)}{\log\left(1 - (1 - \mu)^{m}\right)}
where μ is the proportion of outliers and m is the minimum number of data points needed to estimate the model parameters.
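Concretely, with P = 0.99, an outlier ratio μ and minimum sample size m, the number of iterations N can be computed as in this small sketch:

```python
import math


def ransac_iterations(p: float = 0.99, mu: float = 0.3, m: int = 4) -> int:
    """Number of samples N so that, with confidence P, at least one sample is all inliers."""
    # 1 - (1 - (1 - mu)^m)^N >= P  =>  N >= log(1 - P) / log(1 - (1 - mu)^m)
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - (1.0 - mu) ** m))


print(ransac_iterations())  # m = 4 point pairs for a homography, 30% outliers -> N = 17
```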
The RANSAC algorithm comprises the following specific steps:
Step 1: estimate the current parameter model so that it fits the hypothetical inliers; all unknown parameters of the model can be computed from the input sample, and the parameters are initialized. Step 2: compute the symmetric transfer error of the hypothetical corresponding points obtained by feature-point matching, and count the number of inliers whose error is within the threshold. Step 3: if enough points are classified as hypothetical inliers, the estimated model is considered reasonable. The number of loop iterations N is calculated with the formula above, and steps 1 to 3 are executed cyclically.
When the loop ends, the parameter model is re-estimated using the largest inlier (consensus) set, and the resulting transformation matrix H is the optimal model matrix.
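The loop described above can be sketched generically as follows; `fit_model` and `point_error` stand in for the homography estimation and the symmetric transfer error, and the initial iteration cap is an assumption for illustration:

```python
import random

import numpy as np


def ransac(data, fit_model, point_error, m, threshold, p=0.99):
    """Generic RANSAC loop: sample m points, fit, count inliers, refit on the best set."""
    best_inliers, n_iter, i = [], 200, 0
    while i < n_iter:
        sample = random.sample(range(len(data)), m)
        model = fit_model([data[j] for j in sample])           # step 1: fit hypothetical inliers
        inliers = [d for d in data if point_error(model, d) < threshold]  # step 2: score
        if len(inliers) > len(best_inliers):                   # step 3: keep the best consensus set
            best_inliers = inliers
            mu = 1.0 - len(inliers) / len(data)                # updated outlier-ratio estimate
            denom = np.log(max(1e-12, 1.0 - (1.0 - mu) ** m))
            n_iter = min(n_iter, int(np.ceil(np.log(1.0 - p) / denom)))
        i += 1
    if not best_inliers:
        return None, []
    # The final model H is re-estimated from the largest inlier (consensus) set.
    return fit_model(best_inliers), best_inliers
```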
The panoramic video allows looking around in any viewing direction over a full 360 degrees. When the panoramic video is browsed, the spherical panoramic video must be re-projected according to the current viewing direction and field of view to generate a planar perspective view that conforms to the visual habits of the human eye, where the field of view is related to the parameters of the original video, i.e., the field of view of the camera that shot the original video. The re-projection algorithm of the spherical panorama simulates the rotational motion of a camera, and changing the camera's field of view simulates its zoom motion, so that the viewpoint of an observer is simulated and the corresponding scene is displayed.
The principle of the re-projection algorithm of the spherical panorama is as follows: a world coordinate system XYZ is established with the sphere center of the panoramic video as the origin; the camera coordinate system xyz is obtained by rotating the world coordinate system XYZ about the X axis by α degrees, so that the two coordinate systems are related by this rotation. Referring to fig. 2, O is the origin of the two-dimensional coordinate system of the perspective view K and O' is the origin of the camera coordinate system xyz. For any point P' in the panoramic video, the corresponding projection point on the perspective view (the two-dimensional image plane) is P(x, y); the coordinates of P' on the spherical surface are P'(φ, λ), where λ is the horizontal rotation angle between the plane of the perspective view and the camera coordinate system xyz and φ is the pitch angle; H is the pixel height of the image and W is the pixel width.
In three-dimensional space the virtual camera has three rotational degrees of freedom: rotation about the X axis, with rotation angle pitch; rotation about the Y axis, with rotation angle yaw; and rotation about the Z axis, with rotation angle roll.
The rotation matrix R_x of the camera about the X axis is:
R_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\mathrm{pitch}) & -\sin(\mathrm{pitch}) \\ 0 & \sin(\mathrm{pitch}) & \cos(\mathrm{pitch}) \end{bmatrix}
The rotation matrix R_y of the camera about the Y axis is:
R_y = \begin{bmatrix} \cos(\mathrm{yaw}) & 0 & \sin(\mathrm{yaw}) \\ 0 & 1 & 0 \\ -\sin(\mathrm{yaw}) & 0 & \cos(\mathrm{yaw}) \end{bmatrix}
When the camera rotates about the X axis and the Y axis simultaneously, the composite rotation matrix is R = R_x · R_y.
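For reference, the two rotation matrices and their composition can be written directly in code; this numpy sketch simply transcribes the formulas above, with angles in radians:

```python
import numpy as np


def rotation_x(pitch: float) -> np.ndarray:
    """Rotation of the camera about the X axis by `pitch` radians."""
    c, s = np.cos(pitch), np.sin(pitch)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s,  c]])


def rotation_y(yaw: float) -> np.ndarray:
    """Rotation of the camera about the Y axis by `yaw` radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[ c, 0, s],
                     [ 0, 1, 0],
                     [-s, 0, c]])


def composite_rotation(pitch: float, yaw: float) -> np.ndarray:
    """Composite rotation R = R_x · R_y used to orient the virtual camera."""
    return rotation_x(pitch) @ rotation_y(yaw)
```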
The transformation between the world coordinate system XYZ and the camera coordinate system xyz is given by the composite rotation matrix R and its inverse R⁻¹.
The coordinates of the point P(x, y) in the camera coordinate system xyz are (x − W/2, y − H/2, −r), and its coordinates (u, v, w) in the world coordinate system XYZ are obtained by:
\begin{bmatrix} u \\ v \\ w \end{bmatrix} = R \begin{bmatrix} x - W/2 \\ y - H/2 \\ -r \end{bmatrix}
where r is the distance from the observation point (the origin of the camera coordinate system xyz) to the two-dimensional plane in which the perspective view lies, i.e., the focal length of the camera.
By establishing this coordinate conversion relationship between any point P' in the panoramic video and its corresponding projection point P on the perspective view, the two-dimensional image coordinates (x, y) of the projection point P are calculated from the coordinates (u, v, w) of the point in the world coordinate system XYZ, and perspective views in different sight directions are generated.
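Putting the pieces together, a perspective view for a given line of sight can be generated by mapping every pixel of the output image back onto the spherical panorama. The sketch below assumes an equirectangular panorama frame, reuses the `composite_rotation` helper sketched above, and uses axis and sampling conventions that are illustrative assumptions rather than conventions fixed by the embodiment:

```python
import cv2
import numpy as np


def render_perspective(pano: np.ndarray, pitch: float, yaw: float,
                       fov_deg: float = 90.0, out_w: int = 800, out_h: int = 600) -> np.ndarray:
    """Reproject an equirectangular panorama frame into a planar perspective view."""
    pano_h, pano_w = pano.shape[:2]
    r = (out_w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)   # viewing distance / focal length

    # Pixel grid of the perspective view, expressed in the camera coordinate system xyz.
    x, y = np.meshgrid(np.arange(out_w), np.arange(out_h))
    cam = np.stack([x - out_w / 2.0,
                    y - out_h / 2.0,
                    np.full_like(x, -r, dtype=float)], axis=-1)

    # World coordinates (u, v, w) = R · camera coordinates, applied to every pixel ray.
    R = composite_rotation(pitch, yaw)
    world = cam @ R.T
    u, v, w = world[..., 0], world[..., 1], world[..., 2]

    # Spherical angles (longitude lam, latitude phi) of each viewing ray.
    norm = np.sqrt(u * u + v * v + w * w)
    lam = np.arctan2(u, -w)
    phi = np.arcsin(v / norm)

    # Sample the equirectangular panorama at (lam, phi), wrapping horizontally.
    map_x = (((lam / (2 * np.pi) + 0.5) % 1.0) * pano_w).astype(np.float32)
    map_y = ((phi / np.pi + 0.5) * pano_h).astype(np.float32)
    return cv2.remap(pano, map_x, map_y, cv2.INTER_LINEAR)
```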
The interaction model construction module 20 is configured to obtain the panoramic video data, receive a click operation on a target object in the panoramic video, configure a pixel position corresponding to the click operation as a hot spot, obtain spherical coordinate information of the hot spot, correlate attribute information of the target object with the spherical coordinate information of the hot spot, obtain image coordinate information of a projection point corresponding to the hot spot on the perspective view, and establish a mapping relationship between the image coordinate information of the projection point and the spherical coordinate information of the hot spot to form panoramic video interaction data.
Specifically, in order to implement interaction with the target object in the panoramic video, the interaction model construction module 20 sets an interaction hotspot for the target object on the image frames of the panoramic video and on each reconstructed perspective view, and adds the hotspot interaction data to the panoramic video data formed by the panorama roaming building module 10 to generate panoramic video interaction data. For example, panoramic video data A is loaded into the interaction model construction module 20 and hotspot data is added to it to obtain panoramic video interaction data A; panoramic video data B is loaded into the module and hotspot data is added to it to obtain panoramic video interaction data B. The plurality of panoramic video interaction data formed by the interaction model construction module 20 may be stored in a database or a server.
When the target object is a building, the hot spot of the target object is a static hot spot and is relatively static in the panoramic video; when the target object is a vehicle or a pedestrian, the hot spot of the target object is a dynamic hot spot, and the hot spot is continuously changed along with the position movement of the vehicle or the pedestrian in the panoramic video. Building attribute information may include, but is not limited to, name, address, security level, etc.
When the target object is static, a click operation is performed on the position of the target object in an image frame of the panoramic video, the clicked pixel position is converted into a spherical coordinate point, and that spherical coordinate point is configured as a hot spot. A hotspot information storage table is established with database technology to store the attribute information of the target object, and the attribute information is associated with the spherical coordinate information of the hot spot. A mapping relationship is then established between the hot spot and its corresponding projection point on each perspective view, and a hotspot icon is displayed at the hot spot and at each corresponding projection point, i.e., on the picture of the panoramic video image frame and on the perspective views. The hotspot icon carries labeling content, which is attribute information of the target object and may be part of or all of the attribute information.
Further, the projected points of the perspective view may also be configured as hot spots, and the user may interact directly on the perspective view.
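A minimal sketch of such a hotspot record (the clicked pixel converted to spherical coordinates on an equirectangular frame, then stored together with the target object's attribute information) could look like the following; the SQLite table layout, the conversion convention, and the example attribute values are assumptions for illustration:

```python
import sqlite3

import numpy as np


def pixel_to_sphere(px: int, py: int, pano_w: int, pano_h: int):
    """Convert a clicked pixel on an equirectangular panorama frame to (phi, lam) in radians."""
    lam = (px / pano_w - 0.5) * 2 * np.pi     # longitude
    phi = (py / pano_h - 0.5) * np.pi         # latitude
    return phi, lam


conn = sqlite3.connect("hotspots.db")
conn.execute("""CREATE TABLE IF NOT EXISTS hotspot (
                    id INTEGER PRIMARY KEY,
                    phi REAL, lam REAL,            -- spherical coordinates of the hotspot
                    name TEXT, address TEXT,       -- attribute information of the target object
                    safety_level TEXT)""")

phi, lam = pixel_to_sphere(1520, 430, pano_w=4096, pano_h=2048)
conn.execute("INSERT INTO hotspot (phi, lam, name, address, safety_level) VALUES (?, ?, ?, ?, ?)",
             (phi, lam, "Building 3", "Example Road 12", "Level II"))
conn.commit()
```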
When the target object is dynamic, the image frame in which the target object first appears in the panoramic video is obtained, and a click operation on the target object in that image frame is received. The pixel position corresponding to the click operation is converted into a spherical coordinate point, which is configured as the initial hot spot, and the spherical coordinate information of the initial hot spot is obtained. A hotspot information storage table is established with database technology to store the attribute information of the target object, and the attribute information is associated with the spherical coordinate information of the initial hot spot. The region of the target object in the image frame is then enlarged by X pixels in the up, down, left and right directions, and features are extracted from the enlarged region to obtain the feature information of the target object, where X is a natural number greater than 20 and less than 50. As the panoramic video continues to play, the target is identified in the image frames of the panoramic video according to this feature information, all image frames in which the target appears are found, and the identified region on each of these image frames is converted into a spherical coordinate point that is configured as a subsequent hot spot. Specifically, the image of the enlarged region is segmented and its features are extracted; after each time interval T the features are searched for in the panoramic video image frames and taken as the next hot spot, and this process is repeated to calculate all the hot spots. The attribute information of the target object is associated with the spherical coordinate information of the subsequent hot spots. The image coordinate information of the projection points corresponding to the initial hot spot and the subsequent hot spots on the perspective views is obtained, and a mapping relationship is established between the image coordinate information of each projection point and the spherical coordinate information of the corresponding initial or subsequent hot spot. Hotspot icons are displayed at all hot spots and at their corresponding projection points, i.e., on the picture of the panoramic video image frame and on the perspective views; each hotspot icon carries labeling content, which is attribute information of the target object and may be part of or all of the attribute information.
Further, the projected points of the perspective view may also be configured as hot spots, and the user may interact directly on the perspective view.
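The dynamic-hotspot case can be sketched with simple template matching: the region around the initial click is enlarged by X pixels, a template is extracted, and later frames (sampled every T-second interval) are searched for the same target. The use of OpenCV template matching and the specific box size and score threshold are assumptions; the embodiment only requires feature extraction and re-identification of the target in subsequent frames:

```python
import cv2
import numpy as np


def track_dynamic_hotspot(frames, first_idx, click_x, click_y, box=30, x_margin=30):
    """Follow a clicked moving target through later panorama frames; return its pixel positions."""
    first = frames[first_idx]
    # Enlarge the clicked region by x_margin pixels in every direction and cut out the template.
    x0 = max(0, click_x - box - x_margin)
    y0 = max(0, click_y - box - x_margin)
    x1 = min(first.shape[1], click_x + box + x_margin)
    y1 = min(first.shape[0], click_y + box + x_margin)
    template = first[y0:y1, x0:x1]

    positions = [(first_idx, click_x, click_y)]
    for idx in range(first_idx + 1, len(frames)):            # in practice, every T-second frame
        result = cv2.matchTemplate(frames[idx], template, cv2.TM_CCOEFF_NORMED)
        _, score, _, top_left = cv2.minMaxLoc(result)
        if score < 0.6:                                      # target no longer recognized
            break
        cx = top_left[0] + template.shape[1] // 2
        cy = top_left[1] + template.shape[0] // 2
        positions.append((idx, cx, cy))                      # becomes the next (subsequent) hotspot
    return positions
```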
The user may select, through the interactive exhibition module 30, the panoramic video interaction data to be displayed, which is then loaded by the panoramic video display module 40. The interactive exhibition module 30 is configured to receive a request instruction carrying query information input by a user and to obtain the panoramic video interaction data corresponding to the query information. The panoramic video display module 40 is configured to load the panoramic video interaction data corresponding to the query information and to display part or all of the attribute information of the target object as labeling content, according to the spherical coordinate information of the hot spot and the image coordinate information of the projection point corresponding to the hot spot, in the panoramic video and the perspective view respectively. As shown in fig. 3, this is applicable at least to the following scenes: (a) static hot spots on buildings, (b) dynamic hot spots on vehicles, and (c) dynamic hot spots on pedestrians.
In a preferred embodiment, the query information may be a capture location of the original video, through which one of the plurality of panoramic video interaction data is selected for display by the panoramic video display module 40. Further, when the same shooting position corresponds to a plurality of panoramic videos with different views, the query information may further include shooting position and view information of the original video.
In a preferred embodiment, for one panoramic video interaction data, the query information may be spherical coordinate information, and the target object is matched on the spherical surface of the panoramic video through the spherical coordinate information, so that panoramic video image frames and perspective views where the target object appears are selected from the panoramic video interaction data, and the panoramic video image frames and perspective views where the target object appears are the panoramic video interaction data corresponding to the query information. Namely, the interactive exhibition module 30 is configured to receive a request instruction carrying spherical coordinate information input by a user, identify a target object according to the spherical coordinate information, and obtain a panoramic video image frame and a perspective view corresponding to the target object.
In a preferred embodiment, for one panoramic video interaction data, the query information may be first attribute information, the first attribute information is matched with attribute information of a plurality of objects, attribute information matched with the first attribute information is searched, the object corresponding to the attribute information is a query target, a panoramic video image frame and a perspective view in which the object appears are selected from the panoramic video interaction data, and the panoramic video image frame and the perspective view in which the object appears are panoramic video interaction data corresponding to the query information. Namely, the interactive exhibition module 30 is configured to receive a request instruction carrying first attribute information input by a user, identify a target object according to the first attribute information, and obtain a panoramic video image frame and a perspective view corresponding to the target object.
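Against the hypothetical hotspot table sketched earlier (reusing its sqlite3 connection `conn`), such an attribute query can be expressed as a simple lookup; the matching rule, a substring match on the name field, is an illustrative assumption:

```python
def find_hotspots_by_attribute(conn, first_attribute: str):
    """Return hotspots whose attribute information matches the queried attribute."""
    rows = conn.execute(
        "SELECT id, phi, lam, name, address, safety_level FROM hotspot WHERE name LIKE ?",
        (f"%{first_attribute}%",)).fetchall()
    # The (phi, lam) of each match identifies the panorama frames and perspective
    # views in which the target object appears, which are then loaded for display.
    return rows
```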
In a preferred embodiment, for a panoramic video interactive data, the query information may be viewing angle information, and after receiving the viewing angle information input by the user, the interactive display module 30 obtains a perspective view of the corresponding line of sight according to the viewing angle information.
The user may also interact with the content of the panoramic video (target hotspots and labels in the hotspot icons) through the interactive display module 30. The interactive exhibition module 30 is further configured to receive a request instruction carrying interaction information input by a user, and manage labeling content and/or attribute information of a corresponding hotspot icon according to the request instruction carrying interaction information.
In a preferred embodiment, the interactive exhibition module 30 is further configured to receive a first click operation of a user on a hotspot icon displayed with labeling content, and to switch the labeling content between part of the attribute information and all of the attribute information. The labeling content may include only the basic information in the attribute information, or may also include the detailed information. When the basic information is currently displayed and the user performs a first click operation on the hotspot icon, the labeling content is switched from the basic information to the detailed information; when the user performs the first click operation on the hotspot icon again, the labeling content is switched back from the detailed information to the basic information.
In a preferred embodiment, the interactive exhibition module 30 is further configured to receive a second click operation of the user on the hotspot icon displayed with labeling content, switch the hotspot icon to an editable mode, and update the editing content of the user to the attribute information of the corresponding target object. That is, the user enters the editing mode through the second click operation and edits the labeling content in the hotspot icon; at the same time, the attribute information of the target object corresponding to the hotspot icon is updated accordingly.
Each time a user inputs a request instruction (whether carrying query information or interaction information), the interactive exhibition module 30 executes one of the following procedures according to that request instruction: (i) recall panoramic video interaction data, or (ii) modify panoramic video interaction data. In the process of browsing the panoramic video, the user invokes different panoramic video interaction data or modifies the current panoramic video interaction data through the interactive exhibition module 30, and the recalled or modified panoramic video interaction data is loaded to the panoramic video display module 40 so as to synchronize the interaction actions of the interactive exhibition module 30 in real time.
Further, the interactive system according to the embodiment of the present invention further includes a background management module 50, where the background management module 50 is connected to the panoramic roaming building module 10, the interactive model building module 20, and the interactive exhibition module 30, and is configured to store and manage attribute information of a target object, panoramic video data, and panoramic video interactive data.
Specifically, the background management module 50 is configured to receive custom authority information, set different user authorities for each item of attribute information according to the custom authority information, and map the user authorities of the attribute information to the hot spot of the corresponding target object, the corresponding projection points of that hot spot on the perspective views, and the labeling content corresponding to the hot spot or projection points. That is, during playing of the panoramic video, or during loading and display of the panoramic video, a user can only see the hot spots and hotspot icons of the target objects corresponding to his or her own authority, and can only interact with the hot spots of those target objects.
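A minimal sketch of this per-attribute permission mapping follows; the permission levels, field names, and filtering rule are assumptions chosen for illustration:

```python
from typing import Dict, List

# Required permission level for each attribute field of a target object.
ATTRIBUTE_PERMISSIONS: Dict[str, int] = {
    "name": 1,            # visible to ordinary users
    "address": 1,
    "safety_level": 2,    # visible to supervisors only
}


def visible_annotation(attributes: Dict[str, str], user_level: int) -> Dict[str, str]:
    """Keep only the attribute fields the current user is allowed to see.

    The same filtered set is shown at the hotspot, at its projection points on
    the perspective views, and in the hotspot icon's labeling content.
    """
    return {k: v for k, v in attributes.items()
            if ATTRIBUTE_PERMISSIONS.get(k, 99) <= user_level}


def visible_hotspots(hotspots: List[dict], user_level: int) -> List[dict]:
    """Hide hotspots whose every attribute field is above the user's permission level."""
    out = []
    for h in hotspots:
        attrs = visible_annotation(h["attributes"], user_level)
        if attrs:
            out.append({**h, "attributes": attrs})
    return out
```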
Further, the background management module 50 is responsible for unified configuration and linked invocation of the panorama roaming building module 10, the interaction model construction module 20 and the interactive exhibition module 30. The background management module 50 loads the original video images to the panorama roaming building module 10; after the panorama roaming building module 10 completes panoramic video stitching and perspective view generation, the panoramic video data is loaded to the interaction model construction module 20; and after the interaction model construction module 20 completes hotspot configuration and attribute information association, the panoramic video interaction data to be loaded is selected through the interactive exhibition module 30 and loaded to the panoramic video display module 40. Moreover, in the process of browsing the panoramic video, the user invokes different panoramic video interaction data or modifies the current panoramic video interaction data through the interactive exhibition module 30, and the invoked or modified panoramic video interaction data is loaded to the panoramic video display module 40 so as to synchronize the interaction actions of the interactive exhibition module 30 in real time.
Correspondingly, an embodiment of the invention also provides panoramic video hotspot interaction equipment. Referring to fig. 4, the interaction equipment comprises: a processor 100, configured to obtain panoramic video interaction data corresponding to the user's rights according to a request instruction of the user; the processor 100 may obtain the panoramic video interaction data corresponding to the user's rights from a server or a database and store it in the memory 200; a memory 200, used for storing the panoramic video interaction data acquired by the processor 100, the memory not being required to store all panoramic video interaction data; and a display 300, used for displaying scenes of the panoramic video according to the loaded panoramic video interaction data. The processor 100 is further configured to acquire interaction information of a user in a scene of the panoramic video, and to re-acquire panoramic video interaction data corresponding to the interaction information or modify the current panoramic video interaction data according to the interaction information.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (10)

1. A panoramic video hotspot interaction system, the interaction system comprising:
the panoramic roaming building module is used for acquiring original video images, splicing the original video images to generate spherical panoramic video, reconstructing the panoramic video in different sight line directions with the spherical center of the panoramic video as a viewpoint by utilizing a spherical re-projection algorithm, and generating perspective views in the different sight line directions to form panoramic video data, wherein the panoramic video data comprises the spherical panoramic video and the perspective views corresponding to the spherical panoramic video;
the interaction model construction module is used for acquiring the panoramic video data, receiving a click operation on a target object in the panoramic video, configuring a pixel position corresponding to the click operation as a hot spot, acquiring spherical coordinate information of the hot spot, associating attribute information of the target object with the spherical coordinate information of the hot spot, acquiring image coordinate information of a projection point corresponding to the hot spot on the perspective view, and establishing a mapping relation between the image coordinate information of the projection point and the spherical coordinate information of the hot spot to form panoramic video interaction data;
the interactive exhibition module is used for receiving a request instruction carrying query information input by a user and acquiring panoramic video interaction data corresponding to the query information; and/or is used for receiving a request instruction carrying interaction information input by a user and managing the labeling content and/or attribute information of the corresponding hotspot icon according to that request instruction; and
and the panoramic video display module is used for loading panoramic video interaction data corresponding to the query information and/or the interaction information, and displaying part or all of attribute information of the target object as annotation content in the panoramic video and the perspective view respectively according to the spherical coordinate information of the hot spot and the image coordinate information of the projection point corresponding to the hot spot.
2. The panoramic video hotspot interaction system according to claim 1, wherein the interaction exhibition module is configured to receive a request instruction carrying spherical coordinate information input by a user, identify a target object according to the spherical coordinate information, and obtain a panoramic video image frame and a perspective view corresponding to the target object.
3. The panoramic video hotspot interaction system according to claim 1, wherein the interactive exhibition module is configured to receive a request instruction carrying first attribute information input by a user, identify a target object according to the first attribute information, and obtain a panoramic video image frame and a perspective view corresponding to the target object.
4. The panoramic video hotspot interaction system of claim 1, wherein the interactive display module is further configured to receive viewing angle information input by a user, and obtain a perspective view of a corresponding line of sight according to the viewing angle information.
5. The panoramic video hotspot interaction system of claim 1, wherein the interactive display module is further configured to receive a first click operation of a user on a hotspot icon displayed with annotation content, and switch the annotation content between a portion of the attribute information and all of the attribute information.
6. The panoramic video hotspot interaction system of claim 1, wherein the interactive exhibition module is further configured to receive a second click operation of a user on a hotspot icon displayed with labeling content, switch the hotspot icon to an editable mode, and update editing content of the user to attribute information of a corresponding target object.
7. The panoramic video hotspot interaction system of claim 1, further comprising a background management module for storing and managing attribute information of objects and panoramic video interaction data.
8. The panoramic video hotspot interaction system of claim 7, wherein the background management module is configured to receive custom authority information, set different user authorities for each attribute information according to the custom authority information, and map the user authorities of the attribute information to hotspots of corresponding targets, corresponding projection points of the hotspots in a perspective view, and labeling contents corresponding to the hotspots or projection points.
9. The panoramic video hotspot interaction system of claim 7, wherein the background management module is configured to load an original video image to the panorama roaming building module, and/or to load panoramic video data to the interaction model construction module, and/or to load panoramic video interaction data to the panoramic video display module.
10. A panoramic video hotspot interaction device, the interaction device comprising:
the processor is used for acquiring panoramic video interaction data corresponding to the user permission according to a request instruction of the user;
the memory is used for storing panoramic video interaction data;
the display is used for displaying scenes of the panoramic video according to the loaded panoramic video interaction data;
the processor is also used for acquiring interaction information of a user in a scene of the panoramic video, re-acquiring panoramic video interaction data corresponding to the interaction information or modifying current panoramic video interaction data according to the interaction information.
CN201910297395.9A 2019-04-15 2019-04-15 Panoramic video hot spot interaction system and interaction equipment Active CN110047035B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910297395.9A CN110047035B (en) 2019-04-15 2019-04-15 Panoramic video hot spot interaction system and interaction equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910297395.9A CN110047035B (en) 2019-04-15 2019-04-15 Panoramic video hot spot interaction system and interaction equipment

Publications (2)

Publication Number Publication Date
CN110047035A CN110047035A (en) 2019-07-23
CN110047035B true CN110047035B (en) 2023-04-28

Family

ID=67277171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910297395.9A Active CN110047035B (en) 2019-04-15 2019-04-15 Panoramic video hot spot interaction system and interaction equipment

Country Status (1)

Country Link
CN (1) CN110047035B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111858799B (en) * 2020-06-28 2022-10-21 江苏核电有限公司 Dynamic marking and positioning method, system and equipment for panoramic image for nuclear power plant
CN111866488A (en) * 2020-07-23 2020-10-30 深圳市福莱斯科数据开发有限公司 Editing system and editing method based on panoramic image

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010052558A2 (en) * 2008-11-05 2010-05-14 Easywalk Capital S.A. System and method for the precise integration of virtual objects to interactive panoramic walk-through applications
CA2794928A1 (en) * 2010-03-30 2011-10-06 Social Animal, Inc. System and method for capturing and displaying cinema quality panoramic images
US9865069B1 (en) * 2014-11-25 2018-01-09 Augmented Reality Concepts, Inc. Method and system for generating a 360-degree presentation of an object
US10368047B2 (en) * 2017-02-15 2019-07-30 Adone Inc. Six-degree of freedom video playback of a single monoscopic 360-degree video
CN108280873A (en) * 2018-01-05 2018-07-13 上海户美信息科技有限公司 Model space position capture and hot spot automatically generate processing system
CN109063123B (en) * 2018-08-01 2021-01-05 深圳市城市公共安全技术研究院有限公司 Method and system for adding annotations to panoramic video

Also Published As

Publication number Publication date
CN110047035A (en) 2019-07-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant