CN114779981B - Draggable hot spot interaction method, system and storage medium in panoramic video - Google Patents


Info

Publication number
CN114779981B
Authority
CN
China
Prior art keywords: hot spot, dimensional, coordinates, position coordinates, dimensional space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210260252.2A
Other languages
Chinese (zh)
Other versions
CN114779981A (en)
Inventor
张海涛
马进东
曾泷
王宁宁
马华东
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN202210260252.2A
Publication of CN114779981A
Application granted
Publication of CN114779981B
Legal status: Active
Anticipated expiration

Classifications

    • G06F3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element
    • G06F3/0486: Drag-and-drop
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention provides a draggable hot spot interaction method, system and storage medium for panoramic video, comprising the following steps: acquiring the longitude and latitude coordinates of the hot-spot center, the hot-spot size and the radius of the projection sphere, and calculating the position coordinates of the hot-spot center and of each boundary vertex in three-dimensional space; creating the hot spot in the panoramic video; acquiring the initial position coordinate of the input-device cursor on the two-dimensional display interface, converting it into a three-dimensional initial position coordinate in the three-dimensional space, establishing a first ray to select the hot spot, and calculating the position difference between the hot-spot center and the initial position coordinate of the input device; and acquiring the current position coordinate of the input-device cursor on the two-dimensional display interface, converting it into a three-dimensional current position coordinate in the three-dimensional space, establishing a second ray whose direction vector is the line connecting the three-dimensional current position coordinate of the input device and the position coordinate of the viewpoint camera of the three-dimensional scene, determining the expected center position coordinate of the hot spot based on the second ray and the position difference, and updating the position of the hot spot.

Description

Draggable hot spot interaction method, system and storage medium in panoramic video
Technical Field
The invention relates to the technical field of panoramic videos, in particular to a draggable hot spot interaction method, a draggable hot spot interaction system and a storage medium in a panoramic video.
Background
Video generally refers to the various techniques for capturing, recording, processing, storing, transmitting, and reproducing a series of still images as electrical signals. When successive images change at a rate above 24 frames per second, the human eye, according to the persistence-of-vision principle, can no longer distinguish individual static pictures; the sequence appears as a smooth, continuous visual effect, and such a sequence of pictures is called a video.
A panoramic video is a video shot omnidirectionally, over 360 degrees, by a 3D camera. When watching it, the user can freely adjust the viewing direction up, down, left and right, gaining a genuine sense of being present in the scene without being limited by time, space or region.
Related work on AR hot spots in panoramic video mainly focuses on dynamic identification and assisted selection of hot spots, and there are currently two main methods for a user to determine the position of an AR hot spot in a panoramic video: 1. specifying the location of the hotspot in the panoramic video directly from the data plane by modifying a configuration file; 2. specifying the hot spot location in the panoramic video by clicking somewhere on the screen with the mouse. The first method is not intuitive, and the user needs to modify the data many times to determine the position of the hot spot. In the second method, after the position of the hot spot is determined for the first time, the hot spot cannot be moved in the panoramic video by mouse dragging; that is, when a user needs to change the position of the hot spot, the interaction process is complex, and modifying the hot spot position is neither intuitive nor free. Therefore, how to make changing the hot spot position simpler and more intuitive is a technical problem to be solved.
Disclosure of Invention
In view of the above, the present invention provides a method, a system and a storage medium for draggable hotspot interaction in panoramic video, so as to solve one or more problems in the prior art.
According to one aspect of the invention, the invention discloses a draggable hotspot interaction method in panoramic video, which comprises the following steps: acquiring the center longitude and latitude coordinates, the size and the radius of the projection sphere of the hot spot, and calculating the center of the hot spot and the position coordinates of each boundary vertex in a three-dimensional space based on the acquired center longitude and latitude coordinates, the size and the radius of the projection sphere of the hot spot;
acquiring style information of the hot spot, and creating the hot spot in the panoramic video based on the calculated center of the hot spot, the position coordinates of each boundary vertex in the three-dimensional space and the acquired style information of the hot spot; wherein the style information includes at least one of transparency, scale, title, color;
selecting the hot spot in a two-dimensional display interface through an input device, acquiring an initial position coordinate of a cursor of the input device on the two-dimensional display interface, converting the initial position coordinate of the input device into a three-dimensional initial position coordinate in a three-dimensional space, establishing a first ray taking a connecting line of the three-dimensional initial position coordinate of the input device and a position coordinate of a viewpoint camera of a three-dimensional space scene as a direction vector, performing collision detection on the first ray and the hot spot to select the hot spot, and calculating a position difference between the central position of the hot spot and the initial position coordinate of the input device;
acquiring the current position coordinate of a cursor of the input device on a two-dimensional display interface, converting the current position coordinate of the input device into a three-dimensional current position coordinate in a three-dimensional space, establishing a second ray taking a connecting line of the three-dimensional current position coordinate of the input device and the position coordinate of a viewpoint camera of a three-dimensional space scene as a direction vector, determining the expected central position coordinate of the hot spot based on the second ray and the calculated position difference, and updating the position of the hot spot.
In some embodiments of the present invention, a calculation formula of a position coordinate of the center of the hot spot in the three-dimensional space is:
$$x = r\cos(lat)\cos(lon),\quad y = r\sin(lat),\quad z = r\cos(lat)\sin(lon)$$
wherein x, y and z are the position coordinates of the center of the hot spot in the three-dimensional space, lat and lon are the longitude and latitude coordinates of the center of the hot spot, and r is the radius of the projection sphere.
In some embodiments of the present invention, it is determined whether a vector between the hotspot center position coordinates and the viewpoint camera position coordinates is co-directional with an x-axis of a three-dimensional coordinate system;
when the vectors are co-directional, the position coordinates of each boundary vertex of the hot spot in the three-dimensional space are calculated as:

$$Z_1 = x_1 + \begin{pmatrix} 0 \\ h/2 \\ w/2 \end{pmatrix},\quad Z_2 = x_1 + \begin{pmatrix} 0 \\ h/2 \\ -w/2 \end{pmatrix},\quad Z_3 = x_1 + \begin{pmatrix} 0 \\ -h/2 \\ -w/2 \end{pmatrix},\quad Z_4 = x_1 + \begin{pmatrix} 0 \\ -h/2 \\ w/2 \end{pmatrix}$$

wherein $Z_1$, $Z_2$, $Z_3$ and $Z_4$ are the four vertex coordinates of the hot spot, $x_1$ is the center position coordinate of the hot spot, $h$ is the height of the hot spot plane, and $w$ is the width of the hot spot plane;
when the vectors are not co-directional, the position coordinates of each boundary vertex of the hot spot in the three-dimensional space are calculated as:

$$Z_i = x_1 + A v_i,\quad i = 1, 2, 3, 4$$

wherein $Z_1$, $Z_2$, $Z_3$ and $Z_4$ are the four vertices of the hot spot, the offsets $v_i$ are those of the co-directional case:

$$v_1 = \begin{pmatrix} 0 \\ h/2 \\ w/2 \end{pmatrix},\quad v_2 = \begin{pmatrix} 0 \\ h/2 \\ -w/2 \end{pmatrix},\quad v_3 = \begin{pmatrix} 0 \\ -h/2 \\ -w/2 \end{pmatrix},\quad v_4 = \begin{pmatrix} 0 \\ -h/2 \\ w/2 \end{pmatrix},$$

and $A$ is the rotation matrix that maps the x-axis unit vector $e_x = (1,0,0)^T$ onto the unit vector $u = (p,q,r)^T / \lVert (p,q,r)^T \rVert$, e.g. by Rodrigues' rotation formula

$$A = I + \sin\theta\,[n]_\times + (1 - \cos\theta)\,[n]_\times^2,\quad n = \frac{e_x \times u}{\lVert e_x \times u \rVert},\quad \theta = \arccos(e_x \cdot u),$$

where $[n]_\times$ is the skew-symmetric cross-product matrix of $n$; $(p,q,r)^T = x_2 - x_1$, $x_1$ is the center position coordinate of the hot spot, and $x_2$ is the position coordinate of the observer.
In some embodiments of the present invention, the conversion formula for converting the initial position coordinates of the input device into three-dimensional initial position coordinates in the three-dimensional space is:
mouse.x = (clientX / window.innerWidth) * 2 - 1;
mouse.y = -(clientY / window.innerHeight) * 2 + 1;
wherein mouse.x and mouse.y are the coordinates of the input device in the X-axis and Y-axis directions in the three-dimensional space, window.innerWidth and window.innerHeight are respectively the width and height of the two-dimensional display interface, and clientX and clientY are respectively the initial position coordinates of the input device in the X-axis and Y-axis directions of the two-dimensional display interface.
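The conversion above matches the normalized-device-coordinate convention used by WebGL frameworks such as three.js: the cursor position is mapped into [-1, 1] on both axes, with the y-axis flipped. A minimal runnable sketch (the function name is illustrative, not from the patent):

```javascript
// Convert a cursor position on the 2D display interface (clientX, clientY)
// into normalized device coordinates in [-1, 1], per the formula above.
// Screen y grows downward while NDC y grows upward, hence the negation.
function toNormalizedDeviceCoords(clientX, clientY, innerWidth, innerHeight) {
  return {
    x: (clientX / innerWidth) * 2 - 1,
    y: -(clientY / innerHeight) * 2 + 1,
  };
}
```

For a 1920 x 1080 window, the screen center (960, 540) maps to (0, 0) and the top-left corner maps to (-1, 1).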
In some embodiments of the present invention, creating a hotspot in a panoramic video based on the location coordinates of the center of the hotspot and each boundary vertex in three-dimensional space and the acquired style information of the hotspot, includes:
creating a hot spot plane based on the position coordinates of the center of the hot spot and each boundary vertex in the three-dimensional space;
and drawing the hotspot picture corresponding to the hotspot on the hotspot plane.
In some embodiments of the invention, the method further comprises: and acquiring the title of the hot spot, and establishing an index between the hot spot plane and the title of the hot spot.
In some embodiments of the invention, the method further comprises: and acquiring a response event corresponding to the hot spot, and adding the response event for the hot spot.
In some embodiments of the invention, the input device is a mouse or a keyboard.
According to another aspect of the present invention, there is also disclosed a draggable hotspot interaction system in panoramic video, the system comprising a processor and a memory, the memory having stored therein computer instructions for executing the computer instructions stored in the memory, the system implementing the steps of the method as described in any of the embodiments above when the computer instructions are executed by the processor.
According to yet another aspect of the present invention, a computer-readable storage medium is also disclosed, on which a computer program is stored, which program, when being executed by a processor, carries out the steps of the method according to any of the embodiments above.
According to the draggable hot spot interaction method and system in panoramic video of the invention, the position of a hot spot in the panoramic video can be changed by moving the mouse; that is, a hot spot can be accurately dragged with the mouse, and the dragging process is more intuitive and free. The method and system therefore make modifying the position of a hot spot in panoramic video simpler, more intuitive and more flexible.
In addition, the draggable hotspot interaction method provides a very high degree of freedom in editing the related attributes of AR hotspots, which greatly improves their editability. The method and system allow the style of a hotspot to be set selectively when it is created, and the rich styles make hotspots more attractive and distinctive. Interactive events can also be bound to hotspot labels, enriching their functions, improving the interactivity of the panoramic video, allowing the panoramic video to carry more information, and widening its application scenes.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
It will be appreciated by those skilled in the art that the objects and advantages that can be achieved with the present invention are not limited to the above-described specific ones, and that the above and other objects that can be achieved with the present invention will be more clearly understood from the following detailed description.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate and together with the description serve to explain the invention. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Corresponding parts in the drawings may be exaggerated, i.e. made larger relative to other parts in an exemplary device actually manufactured according to the present invention, for convenience in showing and describing some parts of the present invention. In the drawings:
fig. 1 is a flow chart illustrating a draggable hot spot interaction method in a panoramic video according to an embodiment of the invention.
FIG. 2 is a schematic diagram of a hot spot creation process according to an embodiment of the present invention.
FIG. 3 is a schematic diagram illustrating the location of a hot spot plane in three-dimensional space according to an embodiment of the present invention.
FIG. 4 is a schematic diagram illustrating a process of selecting a hot spot when dragging the hot spot according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a correspondence relationship between a two-dimensional display interface and three-dimensional space coordinates according to an embodiment of the invention.
Fig. 6 is a schematic diagram of an object selected by rays.
Fig. 7 is a perspective projection camera imaging schematic.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings. The exemplary embodiments of the present invention and their descriptions herein are for the purpose of explaining the present invention, but are not to be construed as limiting the invention.
It should be noted that, in order to avoid obscuring the present invention due to unnecessary details, only structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, while other details not greatly related to the present invention are omitted.
It should be emphasized that the term "comprises/comprising" when used herein is taken to specify the presence of stated features, elements, steps or components, but does not preclude the presence or addition of one or more other features, elements, steps or components.
Related work on AR hot spots in panoramic video mainly focuses on dynamic identification and assisted selection of hot spots. It includes simple hot-spot interaction functions but provides no way to freely modify the attributes of an AR hot spot, and it is suboptimal in both the richness of AR hot-spot forms and user-interaction friendliness. In addition, although existing methods for creating AR hotspots can specify the icon and text content of a hotspot, the hotspot size, font style and display mode lack custom-editing capability, and the function of triggering related interaction events after clicking a hotspot with the mouse is also missing. At present, panoramic video is applied in many fields, including digital tourism and live event broadcasting, and the single display mode and limited function of current hot spots restrict its application scenes. The invention therefore discloses a method, system and storage medium capable of dragging an AR hot spot to any position in a panoramic video by applying the theory of computer graphics.
The invention is mainly applied to the field of panoramic video, and provides a draggable augmented reality (Augmented Reality, AR) hotspot creation method in panoramic video. The method takes related knowledge of computer graphics as its theoretical basis and uses computer technologies such as WebGL and JavaScript, so that a user can create interactive AR hotspots in the panoramic video, drag the hotspots to any position in the panoramic video as needed, and finally edit various functions of the AR hotspots. The invention mainly solves the following problems:
(a) The function of freely dragging the AR hot spot in the panoramic video is realized;
(b) The AR hot spot size modification and continuous scaling functions are realized;
(c) The multiple attributes of the AR hotspots can be edited in a self-defined mode, and application scenes of the hotspots are enriched.
It should be appreciated that panoramic video is one of the most common forms of VR applications that take an omnidirectional 360 degree picture with a panoramic camera, and a user can adjust any angle to view the video. Technically, panoramic technology is a weak interactive VR technology, where interactions affect only viewing angles, do not affect physical states in the virtual environment, and are 2D in nature as planar video. The panoramic video is formed by splicing and linking a plurality of planar videos, the carrier of the planar picture is rectangular, the carrier of the panoramic picture is a sphere, and the panoramic video has cube projection, sphere projection and other modes. Since panoramic video records three-dimensional live-action pictures, in order to adopt the existing two-dimensional video coding and storage technology, the panoramic video needs to be converted into a two-dimensional form so as to be convenient for coding, storage and transmission. This technique of mapping a panoramic three-dimensional picture to a two-dimensional plane and being able to restore to a three-dimensional space again is a projection technique of panoramic video. In consideration of coding efficiency, image quality loss, display effect and the like, equirectangular Projection (ERP) is selected herein, namely a longitude and latitude image projection mode, so that a panoramic video rendering function is realized. In ERP projection, spherical images are uniformly unfolded into a plane image according to longitude and latitude so as to facilitate the encoding, storage and transmission of panoramic video. And the reverse projection operation of ERP is needed during panoramic playing, namely, the rectangular plane video is projected onto the sphere according to the equal longitude and latitude. The panoramic playing process is to take the center of the sphere as an observation point and watch different positions of the sphere along different viewing angles. 
The switching of the viewing angle may be controlled by sliding the screen or using device sensor data.
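The ERP (equirectangular) mapping described above can be sketched as a pair of linear conversions between longitude/latitude and normalized image coordinates. The sign and origin conventions below are one common choice, not necessarily the patent's:

```javascript
// Equirectangular projection sketch: a spherical direction (lat in [-90, 90],
// lon in [-180, 180], in degrees) maps linearly to normalized image
// coordinates (u, v) in [0, 1]; erpUnproject is the inverse used at playback
// time, when the plane video is projected back onto the sphere.
function erpProject(latDeg, lonDeg) {
  return { u: (lonDeg + 180) / 360, v: (90 - latDeg) / 180 };
}
function erpUnproject(u, v) {
  return { latDeg: 90 - v * 180, lonDeg: u * 360 - 180 };
}
```

With these conventions, the view direction lat = 0, lon = 0 lands exactly at the center of the equirectangular image.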
In particular, hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In the drawings, the same reference numerals represent the same or similar components, or the same or similar steps.
Fig. 1 is a flow chart of a draggable hot spot interaction method in a panoramic video according to an embodiment of the invention, as shown in fig. 1, the method includes steps S10 to S40.
Step S10: and acquiring the center longitude and latitude coordinates, the size and the radius of the projection sphere of the hot spot, and calculating the center of the hot spot and the position coordinates of each boundary vertex in the three-dimensional space based on the acquired center longitude and latitude coordinates, the size and the radius of the projection sphere of the hot spot.
The central longitude and latitude coordinates of the hot spot are used for determining the specific position of the hot spot in the panoramic view, and the size of the hot spot is used for determining the position coordinates of each vertex of the hot spot in the panoramic view. Because the ERP projection mode is selected in the invention, the hot spot can be projected onto the sphere according to the equally divided longitude and latitude, and therefore, the position coordinates of the hot spot in the panoramic view are closely related to the size of the projected sphere.
In this step, the position of the hot-spot center in three-dimensional space is first calculated from the given longitude and latitude of the hot spot and the sphere radius of the panoramic video background. Let the longitude and latitude of the center of a given hot spot be lat and lon respectively, let the sphere radius of the panoramic video background be r, and let the sphere center be the origin (0, 0, 0); then the position of the hot spot in the three-dimensional space is:
$$x = r\cos(lat)\cos(lon),\quad y = r\sin(lat),\quad z = r\cos(lat)\sin(lon)$$
in the above calculation formula, x, y, and z are coordinates of the center of the hot spot on x, y, and z axes of the three-dimensional space, respectively.
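A minimal JavaScript sketch of this computation, assuming the common spherical convention that the x-axis points toward lat = 0, lon = 0 and the y-axis points up (the function name is illustrative):

```javascript
// Step S10, part 1: place the hot-spot center on the projection sphere of
// radius r given its latitude/longitude (both in radians).
function hotspotCenterPosition(lat, lon, r) {
  return {
    x: r * Math.cos(lat) * Math.cos(lon),
    y: r * Math.sin(lat),
    z: r * Math.cos(lat) * Math.sin(lon),
  };
}
```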
After the position coordinates of the center of the hot spot in the three-dimensional space are calculated by the above method, the specific position coordinates of each boundary vertex of the hot spot in the three-dimensional space need to be calculated. Since the hot spot is planar and always faces the observer, the hot-spot plane also rotates to face the observer whenever the observer position changes; therefore the observer coordinates must be acquired when calculating the vertex coordinates, and the vertex coordinates must be updated in real time when the observer coordinates are updated. It is then determined whether x2-x1 is co-directional with the x-axis, i.e., whether the vector from the hot-spot center position coordinate x1 to the observer position coordinate x2 is co-directional with the x-axis of the three-dimensional coordinate system.
When x2-x1 is co-directional with the x-axis (state shown in fig. 3), the four vertex coordinates of the hot spot are respectively:
$$Z_1 = x_1 + \begin{pmatrix} 0 \\ h/2 \\ w/2 \end{pmatrix},\quad Z_2 = x_1 + \begin{pmatrix} 0 \\ h/2 \\ -w/2 \end{pmatrix},\quad Z_3 = x_1 + \begin{pmatrix} 0 \\ -h/2 \\ -w/2 \end{pmatrix},\quad Z_4 = x_1 + \begin{pmatrix} 0 \\ -h/2 \\ w/2 \end{pmatrix}$$

wherein $Z_1$, $Z_2$, $Z_3$ and $Z_4$ are the four vertex coordinates of the hot spot, $x_1$ is the center position coordinate of the hot spot, $h$ is the height of the hot spot plane, and $w$ is the width of the hot spot plane.
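The co-directional case can be sketched directly: the plane is parallel to the y-z plane, so the four vertices are the center offset by half the plane height along y and half the width along z. The ordering of the four offsets is an assumption for illustration:

```javascript
// Vertices of a hot-spot plane whose center-to-observer vector lies along
// the x-axis: offset the center by +/- h/2 along y and +/- w/2 along z.
function hotspotVerticesFacingX(center, w, h) {
  const offsets = [
    [0, h / 2, w / 2],
    [0, h / 2, -w / 2],
    [0, -h / 2, -w / 2],
    [0, -h / 2, w / 2],
  ];
  return offsets.map(([dx, dy, dz]) => ({
    x: center.x + dx,
    y: center.y + dy,
    z: center.z + dz,
  }));
}
```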
If x2-x1 is not co-directional with the x-axis, the result can be obtained by rotating the co-directional configuration until it faces the observer; that is, the position coordinates of each boundary vertex of the hot spot in the three-dimensional space are:

$$Z_i = x_1 + A v_i,\quad i = 1, 2, 3, 4$$

wherein $Z_1$, $Z_2$, $Z_3$ and $Z_4$ are the four vertex coordinates of the hot spot, and the offsets $v_i$ are those of the co-directional case:

$$v_1 = \begin{pmatrix} 0 \\ h/2 \\ w/2 \end{pmatrix},\quad v_2 = \begin{pmatrix} 0 \\ h/2 \\ -w/2 \end{pmatrix},\quad v_3 = \begin{pmatrix} 0 \\ -h/2 \\ -w/2 \end{pmatrix},\quad v_4 = \begin{pmatrix} 0 \\ -h/2 \\ w/2 \end{pmatrix}.$$

Let $x_2 - x_1 = (p,q,r)^T$ and $u = (p,q,r)^T / \lVert (p,q,r)^T \rVert$. $A$ is the rotation matrix that maps the x-axis unit vector $e_x = (1,0,0)^T$ onto $u$, e.g. via Rodrigues' rotation formula

$$A = I + \sin\theta\,[n]_\times + (1 - \cos\theta)\,[n]_\times^2,\quad n = \frac{e_x \times u}{\lVert e_x \times u \rVert},\quad \theta = \arccos(e_x \cdot u),$$

where $[n]_\times$ is the skew-symmetric cross-product matrix of $n$. The coordinates of the four vertices of the rotated hot spot may then be expressed as $x_1 + A v_1$, $x_1 + A v_2$, $x_1 + A v_3$, $x_1 + A v_4$.
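A runnable sketch of this rotation, using Rodrigues' formula to build a matrix A that maps the x-axis unit vector onto the direction of x2-x1. This is one standard construction; the patent's original (garbled) formulas may differ in detail:

```javascript
// Build the rotation matrix A that maps e_x = (1,0,0) onto the unit
// direction of (p, q, r) = x2 - x1, via Rodrigues' rotation formula.
// The four hot-spot vertices are then x1 + A * v_i.
function rotationFromXAxis([p, q, r]) {
  const len = Math.hypot(p, q, r);
  const [ux, uy, uz] = [p / len, q / len, r / len];
  // rotation axis n = e_x x u (length sin(theta)); cos(theta) = e_x . u = ux
  const [nx, ny, nz] = [0, -uz, uy];
  const s = Math.hypot(nx, ny, nz); // sin(theta)
  if (s < 1e-12) {
    // u already (anti-)parallel to the x-axis: identity, or 180 deg about z
    const d = ux > 0 ? 1 : -1;
    return [[d, 0, 0], [0, d, 0], [0, 0, 1]];
  }
  const c = ux; // cos(theta)
  const [kx, ky, kz] = [nx / s, ny / s, nz / s]; // unit rotation axis
  const K = [[0, -kz, ky], [kz, 0, -kx], [-ky, kx, 0]]; // skew matrix [n]x
  const K2 = matMul(K, K);
  // Rodrigues: A = I + sin(theta) K + (1 - cos(theta)) K^2
  return [[1, 0, 0], [0, 1, 0], [0, 0, 1]].map((row, i) =>
    row.map((v, j) => v + s * K[i][j] + (1 - c) * K2[i][j]));
}

function matMul(A, B) {
  return A.map((row) => B[0].map((_, j) =>
    row.reduce((sum, a, k) => sum + a * B[k][j], 0)));
}

function applyMat(A, [x, y, z]) {
  return [
    A[0][0] * x + A[0][1] * y + A[0][2] * z,
    A[1][0] * x + A[1][1] * y + A[1][2] * z,
    A[2][0] * x + A[2][1] * y + A[2][2] * z,
  ];
}
```

For example, when the observer direction is the +z axis, A rotates the x-axis-facing plane by 90 degrees so that e_x lands on e_z.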
Step S20: acquiring style information of the hot spot, and creating the hot spot in the panoramic video based on the calculated center of the hot spot, the position coordinates of each boundary vertex in the three-dimensional space and the acquired style information of the hot spot; wherein the style information includes at least one of transparency, scale, title, and color.
After calculating the position coordinates of each position of the hot spot in the three-dimensional space in step S10, vertex coordinates and texture coordinates may be further set, so as to achieve the effect of creating a plane. Specifically, a hot spot plane is firstly created based on the position coordinates of the center of the hot spot and each boundary vertex in the three-dimensional space; and drawing the hotspot picture corresponding to the hotspot on the hotspot plane.
Illustratively, when drawing the texture of an AR hot-spot plane, an HTML <canvas> element is first created as the canvas, the hot-spot picture is loaded through an <img> element and drawn onto the canvas, the hot-spot title is then drawn under the picture, and finally the fully drawn canvas is applied as a texture to the previously created hot-spot plane. In this embodiment, the style information of the hot spot may include, in addition to transparency, scale, title and color, whether it flashes, the hot-spot picture, the image height, the image width, etc.; that is, the location information of the hotspot is required for creating the hotspot, while the style information is optional, and the user may change the style of the hotspot through it.
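The canvas drawing itself requires browser APIs, but the layout arithmetic it implies (picture above, title below) can be sketched as a pure function. The square canvas and the 70/30 split between picture and title are illustrative assumptions, not values from the patent:

```javascript
// Compute where the hot-spot picture and the title beneath it would be
// placed on a square texture canvas of side canvasSize.
function hotspotTextureLayout(canvasSize, imageHeightFrac = 0.7) {
  const imageRect = { x: 0, y: 0, width: canvasSize, height: canvasSize * imageHeightFrac };
  const titleRect = {
    x: 0,
    y: imageRect.height, // title starts directly below the picture
    width: canvasSize,
    height: canvasSize - imageRect.height,
  };
  return { imageRect, titleRect };
}
```

In a browser, `ctx.drawImage(img, ...)` and `ctx.fillText(title, ...)` would then consume these rectangles before the canvas is uploaded as a texture.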
In the process of creating the hot spot, the size of the hot spot can be modified in real time by dragging a slider; when the size of the hot spot is changed, the system re-creates the hot spot with the new size parameter by the above method and replaces the previous hot spot having the same content.
In an embodiment, the title of the hotspot can also be obtained, and an index can be established between the hotspot plane and the title of the hotspot, which facilitates later management.
The draggable hot spot in the present invention mainly comprises a hot-spot picture located at the upper part and a hot-spot name located at the lower part, and in an embodiment the font and color of the hot-spot name can be custom-edited. Specifically, after the system obtains the font color and font style through a color picker, the browser engine renders the new color and font style to complete the modification of the hot-spot style. Besides color, the hot-spot style may further include hot-spot transparency, scaling, title, and the like.
Step S30: selecting the hot spot in a two-dimensional display interface through an input device, acquiring an initial position coordinate of a cursor of the input device on the two-dimensional display interface, converting the initial position coordinate of the input device into a three-dimensional initial position coordinate in a three-dimensional space, establishing a first ray taking a connecting line of the three-dimensional initial position coordinate of the input device and a position coordinate of a viewpoint camera of a three-dimensional space scene as a direction vector, performing collision detection on the first ray and the hot spot to select the hot spot, and calculating a position difference between the central position of the hot spot and the initial position coordinate of the input device.
Step S40: acquiring current position coordinates of a cursor of the input device on a two-dimensional display interface, converting the current position coordinates of the input device into three-dimensional current position coordinates in a three-dimensional space, establishing a second ray which takes a connecting line of the three-dimensional current position coordinates of the input device and position coordinates of a viewpoint camera of a three-dimensional space scene as a direction vector, determining expected center position coordinates of the hot spot based on the second ray and the calculated position difference, and updating the position of the hot spot based on the expected center position coordinates of the hot spot.
Steps S30 and S40 drag the hot spot using an input device, which may be, for example, a mouse or a keyboard. The present invention takes a mouse as an example; it is easy to understand that changing the hot spot location with a keyboard is similar. Specifically, dragging the hot spot position means that a user can press the mouse to select a hot spot already added to the panoramic video and move it to another position in the panoramic video by dragging the mouse. The function mainly comprises three steps: when the mouse is pressed, the hot spot to be dragged is selected; when the mouse is moved, the hot spot moves in the panoramic video; and when the mouse is released, the system updates the hot spot position.
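The three-step press/move/release flow can be sketched as a small controller object. The following is a minimal plain-JavaScript illustration; the callback names are hypothetical and not taken from the patent's implementation:

```javascript
// Minimal drag controller mirroring the three steps: mousedown selects the
// hot spot, mousemove updates its position, mouseup commits the new position.
function createDragController(pickHotspot, updatePosition, savePosition) {
  let dragged = null; // the hot spot currently being dragged, if any

  return {
    onMouseDown(x, y) {
      dragged = pickHotspot(x, y); // collision test at the press point
      return dragged !== null;
    },
    onMouseMove(x, y) {
      if (dragged) updatePosition(dragged, x, y); // live update while dragging
    },
    onMouseUp() {
      if (dragged) savePosition(dragged); // persist the final position
      dragged = null;
    },
  };
}
```

In a browser the three methods would be wired to the `mousedown`, `mousemove`, and `mouseup` DOM events on the canvas.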
Specifically, after a hot spot is created in the panoramic video in step S20, it may be selected by clicking the mouse. Since the browser is a two-dimensional interface while the panoramic video is a three-dimensional scene, selecting objects in the three-dimensional scene by clicking on the two-dimensional screen is a problem that must be solved. The principle is to convert the screen coordinates of the mouse into world coordinates in the three-dimensional scene, combine the world coordinates with the position of the viewpoint camera to determine a direction vector, and then emit a ray from the viewpoint camera along that direction vector so that the ray passes through the selected element. The specific steps are shown in fig. 4. The difference between the center position of the hot spot and the initial position coordinates of the input device is calculated in step S30 because, when the hot spot is selected by the collision detection algorithm, the mouse cursor does not coincide with the center of the hot spot; the position difference calculated in this step therefore represents the offset between the mouse cursor and the hot spot center.
In practice, the position coordinates of the mouse lie in a two-dimensional display interface, but in many cases objects of the three-dimensional world must be displayed on the two-dimensional screen. To display and convert objects at the correct positions, different coordinate systems must be constructed, and calculations such as coordinate conversion and projection are performed to build the link between the two-dimensional and three-dimensional worlds. Referring to fig. 5, the screen coordinate system consists of coordinates on the two-dimensional display interface of a mobile phone or computer screen and is defined in pixels: the upper left corner of the screen is the origin (0, 0), rightward is the positive x-axis direction, downward is the positive y-axis direction, and the lower right corner is (screen width, screen height). It should be understood that the screen coordinate system here is also the two-dimensional display interface coordinate system.
As for the world coordinate system (the three-dimensional space coordinate system), in WebGL the world coordinate system is fixed with the screen center as the origin (0, 0, 0). When a person faces the screen, rightward is the positive x-axis, upward is the positive y-axis, and the direction from the screen toward the person is the positive z-axis. Meanwhile, the upper left corner of the screen corresponds to (-1, 1) and the lower right corner to (1, -1) in the world coordinate system. Based on this relation, the formulas for converting the mouse screen coordinates into world coordinates are as follows:
mouse.x=(clientX/window.innerWidth)*2–1;
mouse.y=-(clientY/window.innerHeight)*2+1;
In the above formulas, mouse.x and mouse.y are the coordinates of the mouse in the X-axis and Y-axis directions in three-dimensional space, window.innerWidth and window.innerHeight are the width and height of the two-dimensional display interface, respectively, and clientX and clientY are the position coordinates of the input device in the X-axis and Y-axis directions of the two-dimensional display interface, respectively. It will be understood that when the mouse is dragged it moves from the initial position to the current position, so both the initial and the current position coordinates of the mouse on the two-dimensional display interface need to be converted into the corresponding three-dimensional coordinates in the three-dimensional scene.
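The two formulas above can be wrapped in a small helper; a minimal sketch (the function name is illustrative):

```javascript
// Convert browser screen (pixel) coordinates to WebGL normalized device
// coordinates in [-1, 1], following the two formulas above.
function screenToNDC(clientX, clientY, innerWidth, innerHeight) {
  return {
    x: (clientX / innerWidth) * 2 - 1,   // left edge -> -1, right edge -> +1
    y: -(clientY / innerHeight) * 2 + 1, // top edge -> +1, bottom edge -> -1
  };
}
```

Note the sign flip on y: the screen y-axis grows downward while the world y-axis grows upward.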
To obtain the hot spot expected to be selected, a ray passing through the hot spot must be determined, and determining a ray requires two variables: the ray's origin and its direction. In this embodiment, the line determined by the camera and the mouse can be used as the direction vector to emit a ray, which necessarily passes through the object selected by the user. The principle is shown in fig. 6.
For each three-dimensional scene, a virtual camera is added to observe objects in the scene, and the observed objects are projected onto the two-dimensional screen for users to view. Depending on the projection from three-dimensional space to two-dimensional space, the common camera types are the orthographic projection camera and the perspective projection camera. This embodiment adopts a perspective projection camera, whose projection effect is similar to that of the human eye, so the apparent size of an observed object is related to its distance. The imaging principle of the perspective projection camera is shown in fig. 7: only objects whose distance from the camera is greater than the near-plane distance and less than the far-plane distance, and which lie within the camera's viewing angle, are projected by the camera. The mouse clicks a point on the near plane, while the selected object is expected to lie between the near plane and the far plane. If a ray is drawn from the virtual camera position along the direction vector given by the line from the virtual camera to the mouse click position, then by the principle of perspective projection the ray must pass through the object (hot spot) to be selected.
Therefore, after the first ray, whose direction vector is the line between the three-dimensional initial position coordinates of the input device and the position coordinates of the viewpoint camera of the three-dimensional scene, is established, the first ray intersects one or more three-dimensional objects, and it must be determined which objects in the scene the ray intersects. The intersection of a ray with geometry is essentially a ray-geometry collision detection problem; through a collision detection algorithm, the hot spot expected to be selected can be obtained. When the mouse is moved, a second ray is established in the same way, and the hot spot lies on that second ray.
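In a WebGL scene a library raycaster would typically perform this test; the underlying mathematics for a rectangular hot spot plane is a ray/rectangle intersection, sketched below in plain JavaScript. The data layout is an assumption made for illustration: `u` and `v` are unit axes spanning the plane.

```javascript
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }

// Ray/rectangle collision test: does the ray (origin, dir) hit the
// hot-spot plane? Returns the 3-D hit point, or null on a miss.
function intersectHotspot(origin, dir, plane) {
  const denom = dot(dir, plane.normal);
  if (Math.abs(denom) < 1e-9) return null; // ray parallel to the plane
  const t = dot(sub(plane.center, origin), plane.normal) / denom;
  if (t <= 0) return null;                 // plane is behind the ray origin
  const hit = [origin[0] + t * dir[0], origin[1] + t * dir[1], origin[2] + t * dir[2]];
  const local = sub(hit, plane.center);
  // Inside the rectangle only if the hit point is within both half extents.
  if (Math.abs(dot(local, plane.u)) > plane.halfW) return null;
  if (Math.abs(dot(local, plane.v)) > plane.halfH) return null;
  return hit;
}
```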
Specifically, when the user moves the mouse and the system records that a hot spot is being dragged, the position of the dragged hot spot must be updated in real time, and its orientation is dynamically changed to keep it facing the observer; to change the orientation, the position coordinates of the hot spot center and of each boundary vertex in three-dimensional space are calculated as in step S10. First, the current position of the mouse on the screen is obtained, and the intersection point of the second ray with the panoramic video background is computed. Before this, the hot spot was selected with the collision detection algorithm using the first ray established from the initial mouse position and the position coordinates of the viewpoint camera, and the position difference between the three-dimensional initial position coordinates of the input device and the center position coordinates of the hot spot was calculated. To obtain the new center position of the hot spot dragged by the mouse, the new center is determined from the established second ray by adding the calculated position difference to the current position coordinates of the mouse. The hot spot plane is then redrawn, so that the hot spot moves synchronously with the mouse in the panoramic video.
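The offset bookkeeping described above amounts to two small vector operations; a minimal sketch with illustrative helper names, representing 3-D points as arrays:

```javascript
// Record at mousedown: how far the hot-spot center is from the grab point.
function grabOffset(center, pickPoint) {
  return [center[0] - pickPoint[0], center[1] - pickPoint[1], center[2] - pickPoint[2]];
}

// On every mousemove: the new center is the current ray/background
// intersection plus the offset recorded at mousedown.
function draggedCenter(intersection, offset) {
  return [intersection[0] + offset[0], intersection[1] + offset[1], intersection[2] + offset[2]];
}
```

Without the offset, the hot spot would jump so that its center snaps to the cursor on the first mousemove.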
In the above embodiment, after the hot spot is dragged to its target location in the panoramic video, releasing the mouse updates the hot spot position. While the hot spot is being moved, its position is manipulated in a Cartesian coordinate system, but the system stores hot spot position values in a spherical coordinate system, so the hot spot position data must also be converted from Cartesian to spherical coordinates. Assuming the Cartesian coordinates of an object are x, y and z, the spherical parameters r, lat and lon (the angles in angular units) can be obtained by the following formulas:

r = sqrt(x² + y² + z²);

lat = arcsin(y / r);

lon = atan2(z, x);
Based on the above conversion formulas, the two parameters representing the hot spot location in the spherical coordinate system are obtained. Then the system calls the location-storage function to save the latest hot spot location into the JavaScript class that encapsulates the hot spot. Finally, the system restores free control of the camera view angle in the panoramic video, and the user can continue to watch the panoramic video from any angle.
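Assuming the common WebGL panorama convention x = r·cos(lat)·cos(lon), y = r·sin(lat), z = r·cos(lat)·sin(lon) (the exact axis convention may differ in the actual implementation), the Cartesian-to-spherical conversion can be sketched as:

```javascript
// Convert Cartesian hot-spot coordinates back to the spherical parameters
// (r, lat, lon) used for storage; angles are returned in radians.
function toSpherical(x, y, z) {
  const r = Math.sqrt(x * x + y * y + z * z);
  return {
    r,
    lat: Math.asin(y / r), // latitude: angle above the horizontal plane
    lon: Math.atan2(z, x), // longitude: angle around the vertical axis
  };
}
```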
In another embodiment, the method for interaction of draggable hot spots in panoramic video further comprises the steps of obtaining response events corresponding to the hot spots and adding the response events to the hot spots.
In this embodiment, a response event may be added to the AR hot spot, enriching its presentation content. Response events include web-page popup display, image-and-text popup display, video popup display, scene switching, and the like. When the image-and-text popup is triggered, it appears above the panoramic picture; the image can be configured with a hyperlink, i.e., clicking the image opens a new web page in a new tab. A close button is placed at the upper right of the popup, and the user clicks it to hide the popup. The web-page popup and the video popup are displayed similarly: after the user clicks a hot spot label configured with a web-page or video popup, the popup page or video is displayed above the panoramic picture, and a popup video plays automatically. The scene-switching function enables seamless switching from the panoramic live scene of one scenic spot to another.
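Since the response-event type and its bound function are stored together with the hot spot, triggering one is essentially a table lookup; a minimal sketch (handler names are illustrative, not from the patent):

```javascript
// Look up and run the handler bound to the hot spot's response-event type.
// Returns the handler's result, or null if no event is bound.
function triggerResponse(hotspot, handlers) {
  const handler = handlers[hotspot.eventType];
  if (!handler) return null; // hot spot has no bound response event
  return handler(hotspot.payload);
}
```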
The way a user adds an interaction event to a hot spot is to package the type of the response event and the function called when the event is triggered as attributes into the JavaScript class and store them in a database file. The steps for a user to trigger a response event are: 1. click the target hot spot with the mouse; 2. the system runs the corresponding bound event; 3. the interaction event is ended via the close button. The principle of selecting a hot spot with the mouse is the same as that of dragging a hot spot with the mouse. After the target hot spot is selected, the system calls the function encapsulated in the hot spot; the system may provide a container for displaying pictures or video on a web page. Meanwhile, the system sets the camera view angle in the panoramic video to a fixed state, preventing the user from changing the view angle while interacting with the container; after the user clicks the close button, the system destroys the container and sets the camera view back to a variable state.
In another embodiment, the hot spot label may also blink. Blinking is a continuous process, implemented as a variation of the hot spot scale. First the maximum and minimum scale values during blinking are set (e.g., 1.1 and 0.9), together with the duration of one change from minimum to maximum or vice versa (e.g., 100 ms), and then the blinking process is started. During blinking, the current hot spot scale is calculated from the difference between the current time and the start time, and the hot spot is then modified according to that scale.
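The time-to-scale calculation can be written as a triangle wave between the two bounds; a minimal sketch under the example values above (0.9, 1.1, 100 ms):

```javascript
// Scale of a blinking hot spot at a given elapsed time: oscillates linearly
// between minScale and maxScale, spending `halfCycleMs` per direction.
function blinkScale(elapsedMs, minScale, maxScale, halfCycleMs) {
  const phase = (elapsedMs / halfCycleMs) % 2; // 0..2 covers one full cycle
  const frac = phase <= 1 ? phase : 2 - phase; // triangle wave: 0 -> 1 -> 0
  return minScale + (maxScale - minScale) * frac;
}
```

An animation loop would call this each frame with `Date.now() - startTime` and apply the result to the hot spot's scale.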
After the attributes of the hot spots are configured, the system exports the configured attributes in JSON format, combines all the attributes into a complete configuration file, and stores the configuration file in a cloud NoSQL database through an HTTP request. When the user needs to retrieve the created hot spots, they can be fetched from the database through an HTTP request.
According to the above embodiments, the draggable hot spot interaction method in the panoramic video enables an AR hot spot to be dragged freely in the panoramic video through an input device, making changes to the hot spot position simpler and more intuitive. In addition, a panoramic video hot spot can blink, carry interaction events, and the like, which enriches the functions of the hot spot label, improves the interactivity of the panoramic video, allows the panoramic video to carry more information, and widens its application scenarios. The method offers a high degree of freedom in editing hot spot attributes, greatly improving the editability of hot spots, while the rich hot spot styles make them more attractive and distinctive.
Correspondingly, the invention also discloses a draggable hot spot interaction system in the panoramic video, which comprises a processor and a memory, and is characterized in that the memory stores computer instructions, the processor is used for executing the computer instructions stored in the memory, and the system realizes the steps of the method in any embodiment when the computer instructions are executed by the processor.
In addition, the invention also discloses a computer readable storage medium, on which a computer program is stored, which program, when being executed by a processor, implements the steps of the method according to any of the embodiments above.
Those of ordinary skill in the art will appreciate that the various illustrative components, systems, and methods described in connection with the embodiments disclosed herein can be implemented as hardware, software, or a combination of both. Whether a particular implementation uses hardware or software depends on the specific application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. When implemented in hardware, it may be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
In this disclosure, features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, and various modifications and variations can be made to the embodiments of the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. The draggable hot spot interaction method in the panoramic video is characterized by comprising the following steps:
acquiring the center longitude and latitude coordinates, the size and the radius of the projection sphere of the hot spot, and calculating the center of the hot spot and the position coordinates of each boundary vertex in a three-dimensional space based on the acquired center longitude and latitude coordinates, the size and the radius of the projection sphere of the hot spot;
acquiring style information of the hot spot, and creating the hot spot in the panoramic video based on the calculated center of the hot spot, the position coordinates of each boundary vertex in the three-dimensional space and the acquired style information of the hot spot; wherein the style information includes at least one of transparency, scale, title, color;
selecting the hot spot in a two-dimensional display interface through an input device, acquiring an initial position coordinate of a cursor of the input device on the two-dimensional display interface, converting the initial position coordinate of the input device into a three-dimensional initial position coordinate in a three-dimensional space, establishing a first ray taking a connecting line of the three-dimensional initial position coordinate of the input device and a position coordinate of a viewpoint camera of a three-dimensional space scene as a direction vector, performing collision detection on the first ray and the hot spot to select the hot spot, and calculating a position difference between the central position of the hot spot and the initial position coordinate of the input device;
acquiring current position coordinates of a cursor of the input device on a two-dimensional display interface, converting the current position coordinates of the input device into three-dimensional current position coordinates in a three-dimensional space, establishing a second ray which takes a connecting line of the three-dimensional current position coordinates of the input device and position coordinates of a viewpoint camera of a three-dimensional space scene as a direction vector, determining expected center position coordinates of the hot spot based on the second ray and the calculated position difference, and updating the position of the hot spot based on the expected center position coordinates of the hot spot.
2. The method for interaction of draggable hot spots in panoramic video according to claim 1, wherein the calculation formula of the position coordinates of the center of the hot spot in the three-dimensional space is:
x = r·cos(lat)·cos(lon);
y = r·sin(lat);
z = r·cos(lat)·sin(lon);
wherein x, y and z are the position coordinates of the center of the hot spot in the three-dimensional space, lat and lon are the latitude and longitude coordinates of the center of the hot spot, and r is the radius of the projection sphere.
3. The method of draggable hotspot interaction in panoramic video of claim 1, further comprising: judging whether vectors between the hot spot center position coordinates and the observer position coordinates are co-oriented with the x axis of a three-dimensional coordinate system or not;
under the condition of co-orientation, the calculation formula of the position coordinates of each boundary vertex of the hot spot in the three-dimensional space is as follows:
Figure FDA0003550480160000021
wherein Z1, Z2, Z3, Z4 are the four vertex coordinates of the hot spot, x1 is the center position coordinate of the hot spot, h is the height of the hot spot plane, and w is the width of the hot spot plane;
under the condition of not sharing, the calculation formula of the position coordinates of each boundary vertex of the hot spot in the three-dimensional space is as follows:
Figure FDA0003550480160000022
wherein Z1, Z2, Z3, Z4 are the four vertex coordinates of the hot spot,

Figure FDA0003550480160000023

Figure FDA0003550480160000024

(p, q, r)^T = x2 - x1, x1 is the center position coordinate of the hot spot, and x2 is the position coordinate of the observer.
4. The method for interaction of draggable hot spots in panoramic video according to claim 1, wherein the conversion formula for converting the initial position coordinates of the input device into three-dimensional initial position coordinates in the three-dimensional space is as follows:
mouse.x=(clientX/window.innerWidth)*2–1;
mouse.y=-(clientY/window.innerHeight)*2+1;
wherein mouse.x and mouse.y are the coordinates of the input device in the X-axis and Y-axis directions in the three-dimensional space, window.innerWidth and window.innerHeight are the width and height of the two-dimensional display interface, respectively, and clientX and clientY are the initial position coordinates of the input device in the X-axis and Y-axis directions of the two-dimensional display interface, respectively.
5. The method for interaction between draggable hot spots in a panoramic video according to claim 1, wherein creating the hot spot in the panoramic video based on the coordinates of the center of the hot spot and the positions of the boundary vertices in the three-dimensional space and the acquired style information of the hot spot comprises:
creating a hot spot plane based on the position coordinates of the center of the hot spot and each boundary vertex in the three-dimensional space;
and drawing the hotspot picture corresponding to the hotspot on the hotspot plane.
6. The method of draggable hotspot interaction in panoramic video of claim 5, further comprising:
and acquiring the title of the hot spot, and establishing an index between the hot spot plane and the title of the hot spot.
7. The method of draggable hotspot interaction in panoramic video according to any of claims 1 to 6, wherein the method further comprises:
and acquiring a response event corresponding to the hot spot, and adding the response event for the hot spot.
8. The method of any one of claims 1 to 6, wherein the input device is a mouse or a keyboard.
9. A draggable hotspot interaction system in a panoramic video, the system comprising a processor and a memory, wherein the memory has stored therein computer instructions for executing the computer instructions stored in the memory, the system implementing the steps of the method according to any of claims 1 to 8 when the computer instructions are executed by the processor.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202210260252.2A 2022-03-16 2022-03-16 Draggable hot spot interaction method, system and storage medium in panoramic video Active CN114779981B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210260252.2A CN114779981B (en) 2022-03-16 2022-03-16 Draggable hot spot interaction method, system and storage medium in panoramic video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210260252.2A CN114779981B (en) 2022-03-16 2022-03-16 Draggable hot spot interaction method, system and storage medium in panoramic video

Publications (2)

Publication Number Publication Date
CN114779981A CN114779981A (en) 2022-07-22
CN114779981B true CN114779981B (en) 2023-06-20

Family

ID=82426213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210260252.2A Active CN114779981B (en) 2022-03-16 2022-03-16 Draggable hot spot interaction method, system and storage medium in panoramic video

Country Status (1)

Country Link
CN (1) CN114779981B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101877139A (en) * 2009-04-30 2010-11-03 爱国者全景(北京)网络科技发展有限公司 Method and system for realizing spacial hot spots in three-dimensional video panorama

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170084084A1 (en) * 2015-09-22 2017-03-23 Thrillbox, Inc Mapping of user interaction within a virtual reality environment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101877139A (en) * 2009-04-30 2010-11-03 爱国者全景(北京)网络科技发展有限公司 Method and system for realizing spacial hot spots in three-dimensional video panorama

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on hot-spot non-deformation theory based on the PV3D engine; Zeng Xuesong; Zhang Yali; Journal of Shangqiu Vocational and Technical College (Issue 05); full text *

Also Published As

Publication number Publication date
CN114779981A (en) 2022-07-22

Similar Documents

Publication Publication Date Title
US10750161B2 (en) Multi-view interactive digital media representation lock screen
WO2018188499A1 (en) Image processing method and device, video processing method and device, virtual reality device and storage medium
CN107590771B (en) 2D video with options for projection viewing in modeled 3D space
CN109478344B (en) Method and apparatus for synthesizing image
CN110650368A (en) Video processing method and device and electronic equipment
CN108292489A (en) Information processing unit and image generating method
US20100153847A1 (en) User deformation of movie character images
CN108227916A (en) For determining the method and apparatus of the point of interest in immersion content
CN105872353A (en) System and method for implementing playback of panoramic video on mobile device
US20190130648A1 (en) Systems and methods for enabling display of virtual information during mixed reality experiences
CN114115525B (en) Information display method, device, equipment and storage medium
CN114327700A (en) Virtual reality equipment and screenshot picture playing method
CN110710203B (en) Methods, systems, and media for generating and rendering immersive video content
CN114175630A (en) Methods, systems, and media for rendering immersive video content using a point of gaze grid
CN110390712B (en) Image rendering method and device, and three-dimensional image construction method and device
CN114926612A (en) Aerial panoramic image processing and immersive display system
KR101423915B1 (en) Method and apparatus for generating 3D On screen display
US20210349308A1 (en) System and method for video processing using a virtual reality device
EP3236423A1 (en) Method and device for compositing an image
KR102176805B1 (en) System and method for providing virtual reality contents indicated view direction
CN114779981B (en) Draggable hot spot interaction method, system and storage medium in panoramic video
CN113906731A (en) Video processing method and device
CN116091292B (en) Data processing method and related device
CN113066189B (en) Augmented reality equipment and virtual and real object shielding display method
CN108737907B (en) Method and device for generating subtitles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant