CN109885369B - Image linkage method and device - Google Patents

Image linkage method and device Download PDF

Info

Publication number
CN109885369B
CN109885369B CN201910133989.6A CN201910133989A CN109885369B CN 109885369 B CN109885369 B CN 109885369B CN 201910133989 A CN201910133989 A CN 201910133989A CN 109885369 B CN109885369 B CN 109885369B
Authority
CN
China
Prior art keywords
dragging
event
dimensional
thumbnail
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910133989.6A
Other languages
Chinese (zh)
Other versions
CN109885369A (en
Inventor
张海锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Knownsec Information Technology Co Ltd
Original Assignee
Beijing Knownsec Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Knownsec Information Technology Co Ltd filed Critical Beijing Knownsec Information Technology Co Ltd
Priority to CN201910133989.6A priority Critical patent/CN109885369B/en
Publication of CN109885369A publication Critical patent/CN109885369A/en
Application granted granted Critical
Publication of CN109885369B publication Critical patent/CN109885369B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The embodiment of the application provides an image linkage method and device, which are applied to data processing equipment for storing three-dimensional images and two-dimensional thumbnails generated based on any two dimensions of the three-dimensional images. When monitoring a position updating event of one of a three-dimensional image and a two-dimensional thumbnail, the data processing equipment determines the current position of a camera of the three-dimensional image and the current coordinate information of the center of a picture shot when the camera is at the current position; and updating the stored value of the global variable to the current coordinate information, so that the other one of the three-dimensional image and the two-dimensional thumbnail updates the displayed picture according to the current value of the global variable, thereby realizing the linkage of the three-dimensional image and the two-dimensional thumbnail.

Description

Image linkage method and device
Technical Field
The application relates to the technical field of image data processing, in particular to an image linkage method and device.
Background
In most existing scenes using three-dimensional images, a user cannot quickly and conveniently preview the whole scene, and when the user controls to focus and display a certain position in the three-dimensional image, namely, the position is located in the center of a picture of the three-dimensional image, the position information of the position in the whole three-dimensional scene cannot be acquired, so that the linkage display of a local image (namely, the three-dimensional image) and a global image (namely, a two-dimensional thumbnail) cannot be realized.
Disclosure of Invention
In view of the above, the present application provides an image linkage method and apparatus to at least partially improve the above problems.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
in a first aspect, an embodiment of the present application provides an image linkage method, which is applied to a data processing device that stores a three-dimensional image and a two-dimensional thumbnail generated based on any two dimensions of the three-dimensional image, wherein the data processing device further stores a global variable; the method comprises the following steps:
when a position updating event of one of the three-dimensional image and the two-dimensional thumbnail is monitored, determining the current position of a camera of the three-dimensional image and the current coordinate information of the center of a picture shot when the camera is at the current position;
and updating the value of the global variable to the current coordinate information, so that the other one of the three-dimensional image and the two-dimensional thumbnail updates the displayed picture according to the current value of the global variable, and the linkage of the three-dimensional image and the two-dimensional thumbnail is realized.
In a second aspect, an embodiment of the present application further provides an image linkage device, which is applied to a data processing device that stores a three-dimensional image and a two-dimensional thumbnail generated based on any two dimensions of the three-dimensional image, where the data processing device further stores a global variable; the device comprises:
the monitoring module is used for determining the current position of an observation camera of the three-dimensional image and the current coordinate information of the center of a picture shot when the camera is at the current position when monitoring a position updating event of one of the three-dimensional image and the two-dimensional thumbnail;
and the linkage module is used for updating the value of the global variable to the current coordinate information, so that the other one of the three-dimensional image and the two-dimensional thumbnail updates the displayed picture according to the current value of the global variable, and the linkage of the three-dimensional image and the two-dimensional thumbnail is realized.
Compared with the prior art, the embodiment of the application has the following beneficial effects:
the image linkage method and device provided by the embodiment of the application are applied to data processing equipment which stores three-dimensional images and two-dimensional thumbnails generated based on any two dimensions of the three-dimensional images. When monitoring a position updating event of one of a three-dimensional image and a two-dimensional thumbnail, the data processing equipment determines the current position of a camera of the three-dimensional image and the current coordinate information of the center of a picture shot when the camera is at the current position; and updating the stored value of the global variable to the current coordinate information, so that the other one of the three-dimensional image and the two-dimensional thumbnail updates the displayed picture according to the current value of the global variable, thereby realizing the linkage of the three-dimensional image and the two-dimensional thumbnail.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a block diagram of a data processing apparatus according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a three-dimensional image and a two-dimensional image provided by an embodiment of the present application;
fig. 3 is a schematic flowchart of an image linkage method according to an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart illustrating an image linkage method according to an embodiment of the present disclosure;
fig. 5 is a functional block diagram of an image linkage device according to an embodiment of the present disclosure.
Icon: 100-a data processing device; 110-image linkage; 111-a listening module; 112-a linkage module; 113-a first generation module; 114-a second generation module; 115-a first prediction module; 116-a second prediction module; 117-picture update module; 120-a machine-readable storage medium; 130-a processor; 140-a communication unit; 150-display unit.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Referring to fig. 1, a block diagram of a data processing apparatus 100 according to an embodiment of the present disclosure is shown. The data processing apparatus 100 may be any electronic apparatus having a data processing function, such as a Personal Computer (PC), a server, a tablet Computer, and a smart phone. The data processing apparatus 100 includes an image linkage 110, a machine-readable storage medium 120, a processor 130, a communication unit 140, and a display unit 150.
The elements of the machine-readable storage medium 120, the processor 130, the communication unit 140, and the display unit 150 are electrically connected to each other, directly or indirectly, to enable transmission or interaction of data. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The machine-readable storage medium 120 stores machine-executable instructions, and the processor 130 may perform the image linkage method described below by reading and executing the machine-executable instructions of the machine-readable storage medium 120 corresponding to the image linkage logic.
The machine-readable storage medium 120 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium 120 may be: a RAM (random Access Memory), a volatile Memory, a non-volatile Memory, a flash Memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., a compact disk, a DVD, etc.), or similar storage medium, or a combination thereof.
The machine-readable storage medium 120 has stored therein a three-dimensional image and a two-dimensional thumbnail generated based on any two dimensions of the three-dimensional map. Wherein the two-dimensional thumbnail can be generated by the following process: and generating a two-dimensional image based on any two dimensions of the three-dimensional map, and reducing the two-dimensional image to a specified size to obtain the two-dimensional thumbnail.
In practical applications, the coordinates of the three-dimensional image are usually determined by three dimensions, such as the X-axis, Y-axis and Z-axis shown in fig. 2(a), and then a two-dimensional image can be constructed by any two of the X-axis, Y-axis and Z-axis, such as the two-dimensional image constructed based on the X-axis and Y-axis shown in fig. 2(a) shown in fig. 2 (b). And then the two-dimensional image is reduced to a specific size, and a two-dimensional thumbnail can be obtained.
In this embodiment, the two-dimensional thumbnail may be disposed at any position on a display interface where the three-dimensional image is located, for example, may be disposed at an upper right corner, a small left corner, and the like of the three-dimensional image, which is not limited in this embodiment.
Alternatively, the three-dimensional image and the two-dimensional thumbnail may be run on a specific Application (APP) of the data processing apparatus 100, for example, on a browser client. Alternatively, the three-dimensional image may be a three-dimensional map, and correspondingly, the two-dimensional thumbnail may be a thumbnail of a two-dimensional map.
The communication unit 140 is used for establishing a communication connection between the data processing apparatus 100 and other apparatuses to realize data interaction. The display unit 150 is used for displaying information to be displayed (such as the three-dimensional image and the two-dimensional thumbnail, etc., described above) or for enabling interaction with a user.
It should be understood that the configuration shown in fig. 1 is merely illustrative, and that data processing apparatus 100 may also include more or fewer components than shown in fig. 1, or have a completely different configuration than shown in fig. 1. The components shown in fig. 1 may be implemented by software, hardware, or a combination thereof, which is not limited in this embodiment.
Referring to fig. 3, fig. 3 is a flowchart of an image linkage method applied to the data processing apparatus 100 shown in fig. 1, and the method including various steps will be described in detail below.
Step S31, when a position update event of one of the three-dimensional image and the two-dimensional thumbnail is monitored, determining a current position where a camera of the three-dimensional image is located and current coordinate information of a center of a picture captured when the camera is located at the current position.
The camera refers to a camera object arranged in a three-dimensional image, and an observed picture of the camera object is a picture presented to a current user. When the position of the camera in the three-dimensional scene changes, the picture observed by the camera changes, and correspondingly, the picture centers of the three-dimensional image and the two-dimensional thumbnail change correspondingly.
In this embodiment, the user may perform an operation on any one of the three-dimensional image and the two-dimensional thumbnail, for example, a click operation, a drag operation, and the like, and the location update event may be an event triggered by these operations. In this embodiment, a position update event generated by the three-dimensional image and the two-dimensional thumbnail may be monitored through a JavaScript script, and when the position update event is monitored, the current position of the camera in the three-dimensional scene may be determined according to the position update event, so as to determine the position of the center of the picture, which is presented to the user when the camera is located at the current position, that is, the current coordinate information.
And step S32, updating the value of the global variable to the current coordinate information, so that the other one of the three-dimensional image and the two-dimensional thumbnail updates the displayed picture according to the current value of the global variable, and the linkage of the three-dimensional image and the two-dimensional thumbnail is realized.
In this embodiment, the data processing apparatus 100 stores a global variable, for example, the three-dimensional image and the two-dimensional thumbnail are run on a browser client, and the global variable may be a window object used to store coordinate information of the center of the three-dimensional image and the two-dimensional thumbnail. Target, the coordinate information (e.g., X-axis coordinates and Y-axis coordinates) of the three-dimensional image and the screen center of the two-dimensional thumbnail may be acquired as follows: target.x; window. As such, the coordinates of the screen centers of the three-dimensional image and the two-dimensional thumbnail will change as the value of the global variable changes.
Through the global variable, on one hand, the synchronization of the three-dimensional image and the picture center of the two-dimensional thumbnail can be realized. On the other hand, the number of recorded coordinate variables can be reduced, and time consumption caused by data transmission of actual parameters and form parameters can be reduced.
In implementation, after the current coordinate information is determined, the value of the global variable may be updated to the current coordinate information. If a location update event is generated by the three-dimensional image, the two-dimensional thumbnail will be positionally adjusted in accordance with the current value of the global variable such that the picture center is located at the position indicated by the current value of the global variable. If a location update event is generated by the two-dimensional thumbnail, the three-dimensional image will be positionally adjusted in accordance with the current value of the global variable such that the center of the picture is at the location indicated by the current value of the global variable.
In this embodiment, the location update event may be an event generated according to an event triggered by a partial operation of a user. The reason is that: some operations by the user may not cause the change of the center of the screen, and based on this, events triggered by the operations by the user can be distinguished, and the location update event is generated only for the event triggered by a specific operation.
In a specific implementation manner, the image linkage method may include the steps of:
generating the location update event according to the displacement event when the displacement event of the camera is detected.
Wherein the displacement event refers to an event triggered when the coordinates of the camera in the three-dimensional scene change. In this embodiment, since the two-dimensional thumbnail has no direction difference, it is not necessary to link the three-dimensional image and the two-dimensional thumbnail when the viewing direction of the camera is changed.
In another specific implementation manner, the image linkage method may include the steps of:
and when a target click event of the two-dimensional thumbnail is detected, generating the position updating event according to the target click event.
In this embodiment, when the user performs a dragging operation on the two-dimensional thumbnail, a plurality of consecutive operations may be performed, and in order to avoid frequently modifying the global variable, the image linkage method may further include the steps of:
and when detecting that any one of the dragging speed and the dragging direction of two adjacent dragging events of the two-dimensional thumbnail is different, generating a corresponding position updating event according to the last dragging event of the two adjacent dragging events.
For two adjacent drag events with the same drag speed and drag direction, the two drag events can be merged into one drag event.
In the implementation process, for at least two combined dragging events, in the process, since the linkage operation is not performed through the global variable, the pictures presented by the three-dimensional image and the two-dimensional thumbnail may be different, and a click phenomenon may occur.
In response to this problem, the image linkage method provided in this embodiment may further include the steps shown in fig. 4.
Step S41, when it is detected that the dragging speeds and the dragging directions of two adjacent dragging events of the two-dimensional thumbnail are the same, determining a first spacing distance of actual positions of the two adjacent dragging events in a first dimension of the two-dimensional thumbnail and a second spacing distance of the actual positions of the two adjacent dragging events in a second dimension of the two-dimensional thumbnail, and determining a first duration between occurrence times of the two adjacent dragging events.
Wherein, the position of the drag event may refer to a position where the drag event ends the drag.
Step S41 is further explained below by way of an example, assuming: the coordinate of the actual position of the nth drag event in the two-dimensional scene is P1(x1, y1), and the occurrence time thereof is t 1; the coordinate of the actual position of the N-1 th drag event in the two-dimensional scene is P2(x2, y2), and the occurrence time is t 0. In implementation, the moving direction vector of the nth drag event relative to the N-1 st drag event may be calculated as: d1(Δ x, Δ y) ═ x1-x0, y1-y 0.
Where Δ x — x1-x0 is a first separation distance of actual positions of the nth and N-1 th drag events in one dimension (i.e., a first dimension) of the two-dimensional scene. Y1-y0 is a second separation distance of the actual positions of the nth and N-1 th drag events in another dimension (i.e., a second dimension) of the two-dimensional scene.
In addition, the difference between the occurrence time of the nth drag event and the occurrence time of the N-1 th drag event can be calculated, that is, the first time length is: and t is t1-t 0.
Step S42, when a new drag event is detected, if the drag speed and drag direction of the new drag event are not changed with respect to the drag speed and drag direction of the subsequent drag event, predicting to obtain a first drag speed of the new drag event in the first dimension according to the first interval distance and the first time length, and predicting to obtain a second drag speed of the new drag event in the second dimension according to the second interval distance and the first time length.
Step S43, determining a predicted position of the new drag event according to the first drag speed, the second drag, and a second duration between the occurrence time of the new drag event and the next drag event.
Still referring to the above example, assuming that the N +1 th drag event is detected, if the actual position of the N +1 th drag event is P2(x2, y2), the occurrence time is t 2. Then, the predicted position P2' (x2', y2') of the N +1 th drag event may be calculated according to the movement direction vector and the first duration calculated above. In detail, the calculation can be performed by the following calculation formula:
where Δ x/Δ t represents an actual first drag speed in the first dimension from the nth drag event, and in implementation, the actual first drag speed may be used as the first drag speed predicted in the first dimension by the N +1 th drag event. Correspondingly, Δ y/Δ t may be taken as the predicted second drag speed of the N +1 th drag event in the second dimension.
The dragging duration corresponding to the (N + 1) th dragging event is as follows: t2-t1, i.e., the second duration. In this case, the product of the predicted first dragging speed and the second duration is the dragging distance of the (N + 1) th dragging event in the first dimension, the product of the predicted second dragging speed and the second duration is the dragging distance of the (N + 1) th dragging event in the second dimension, and the predicted position of the (N + 1) th dragging event can be determined according to the dragging distances of the (N + 1) th dragging event in the first dimension and the second dimension.
And step S44, when the distance between the predicted position and the actual position of the new drag event reaches a preset value, updating the picture of the three-dimensional image to the picture shot when the camera is at the predicted position.
In practice, it may be determined whether the distance between the predicted positions P2' and P2 reaches the preset value, and if not, it indicates that the N +1 th drag event has been predicted, and it may not be necessary to update the picture of the three-dimensional image. If so, the picture of the three-dimensional image can be updated to present the picture shot when the camera is positioned at the predicted position. Specifically, the determination may be made by the following inequality:
wherein t represents the preset value, and the specific size of the preset value may be determined according to a statistical and testing manner, which is not limited in this embodiment.
Through the design, for multiple combined dragging events, the three-dimensional images and the two-dimensional thumbnails can be basically displayed in the same way under the condition that linkage is not carried out through global variables, and therefore the problem that a user feels stuck is avoided.
Referring to fig. 5, the embodiment further provides an image linkage device 110, and the image linkage device 110 includes at least one functional module that can be stored in a machine-readable storage medium 120 in a software form. Functionally divided, the image linkage 110 may include a listening module 111 and a linkage module 112.
The monitoring module 111 is configured to determine, when a location update event of one of the three-dimensional image and the two-dimensional thumbnail is monitored, a current location where an observation camera of the three-dimensional image is located, and current coordinate information of a center of a picture captured when the camera is located at the current location.
In this embodiment, the listening module 111 may be configured to execute step S31, and the description of the listening module 111 may specifically refer to the description of step S31.
The linkage module 112 is configured to update the value of the global variable to the current coordinate information, so that the other one of the three-dimensional image and the two-dimensional thumbnail updates the displayed picture according to the current value of the global variable, thereby realizing linkage between the three-dimensional image and the two-dimensional thumbnail.
In this embodiment, the linkage module 112 may be configured to execute step S32, and the description of the linkage module 112 may specifically refer to the description of step S32.
Optionally, in this embodiment, the image linkage 110 may further include a first generating module 113.
The first generating module 113 is configured to generate the location update event according to the displacement event when the displacement event of the camera is detected.
Optionally, the image linkage 110 may further include a second generation module 114.
The second generating module 114 is configured to generate the location update event according to a target click event of the two-dimensional thumbnail when the target click event is detected.
Optionally, the second generating module 114 may be further configured to generate a corresponding location update event according to a subsequent dragging event of the two adjacent dragging events when detecting that any one of the dragging speed and the dragging direction of the two adjacent dragging events of the two-dimensional thumbnail is different.
Optionally, the monitoring module 111 may be further configured to determine, when it is detected that the dragging speeds and the dragging directions of two adjacent dragging events of the two-dimensional thumbnail are the same, a first interval distance of actual positions of the two adjacent dragging events in a first dimension of the two-dimensional thumbnail and a second interval distance of the actual positions of the two adjacent dragging events in a second dimension of the two-dimensional thumbnail, and determine a first duration between occurrence times of the two adjacent dragging events.
In this case, the image linkage 110 may further include a first prediction module 115, a second prediction module 116, and a picture update module 117.
The first prediction module 115 is configured to, when a new dragging event is detected, if a dragging speed and a dragging direction of the new dragging event are not changed with respect to a dragging speed and a dragging direction of the subsequent dragging event, obtain a first dragging speed of the new dragging event in the first dimension according to the first interval distance and the first time length prediction, and obtain a second dragging speed of the new dragging event in the second dimension according to the second interval distance and the first time length prediction.
The second prediction module 116 is configured to determine a predicted position of the new drag event according to the first drag speed, the second drag speed, and a second duration between the occurrence time of the new drag event and the next drag event.
The picture updating module 117 is configured to update the picture of the three-dimensional image to a picture taken by the camera at the predicted position when the distance between the predicted position and the actual position of the new drag event reaches a preset value.
For the description of the above modules, specific reference may be made to the detailed description of the relevant steps above.
In summary, the embodiments of the present application provide an image linkage method and apparatus, which are applied to a data processing device storing a three-dimensional image and a two-dimensional thumbnail generated based on any two dimensions of the three-dimensional image. When monitoring a position updating event of one of a three-dimensional image and a two-dimensional thumbnail, the data processing equipment determines the current position of a camera of the three-dimensional image and the current coordinate information of the center of a picture shot when the camera is at the current position; and updating the stored value of the global variable to the current coordinate information, so that the other one of the three-dimensional image and the two-dimensional thumbnail updates the displayed picture according to the current value of the global variable, thereby realizing the linkage of the three-dimensional image and the two-dimensional thumbnail. Thus, the local image and the global image can be linked.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portion thereof contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution that a person skilled in the art could readily conceive within the technical scope disclosed in the present application shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (6)

1. An image linkage method, characterized in that the method is applied to a data processing device which stores a three-dimensional image and a two-dimensional thumbnail generated based on any two dimensions of the three-dimensional image, and the data processing device further stores a global variable; the method comprises:
when it is detected that two adjacent dragging events of the two-dimensional thumbnail differ in either dragging speed or dragging direction, generating a corresponding position update event according to the latter of the two adjacent dragging events;
when it is detected that two adjacent dragging events of the two-dimensional thumbnail have the same dragging speed and dragging direction, determining a first interval distance between the actual positions of the two adjacent dragging events in a first dimension of the two-dimensional thumbnail and a second interval distance between those actual positions in a second dimension of the two-dimensional thumbnail, and determining a first duration between the occurrence moments of the two adjacent dragging events;
when a new dragging event is detected, if the dragging speed and the dragging direction of the new dragging event are unchanged relative to those of the latter of the two adjacent dragging events, predicting a first dragging speed of the new dragging event in the first dimension according to the first interval distance and the first duration, and predicting a second dragging speed of the new dragging event in the second dimension according to the second interval distance and the first duration;
determining the predicted position of the new dragging event according to the first dragging speed, the second dragging speed, and a second duration between the occurrence moments of the new dragging event and the latter dragging event;
when the distance between the predicted position and the actual position of the new dragging event reaches a preset value, updating the picture of the three-dimensional image to the picture shot by the camera at the predicted position;
when a position update event of one of the three-dimensional image and the two-dimensional thumbnail is monitored, determining the current position of a camera of the three-dimensional image and current coordinate information of the center of the picture shot when the camera is at the current position; and
updating the value of the global variable to the current coordinate information, so that the other of the three-dimensional image and the two-dimensional thumbnail updates its displayed picture according to the current value of the global variable, thereby realizing linkage of the three-dimensional image and the two-dimensional thumbnail.
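The speed-prediction steps recited above can be illustrated with a short sketch. This is a hedged example only: the function names, the coordinate units, and the 10-unit threshold are assumptions for illustration, not values taken from the patent.

```python
def predict_position(prev_pos, last_pos, prev_t, last_t, new_t):
    """Predict where a new drag event should land, assuming it keeps the
    speed and direction of the two preceding adjacent drag events."""
    first_duration = last_t - prev_t                    # first duration
    vx = (last_pos[0] - prev_pos[0]) / first_duration   # first dragging speed
    vy = (last_pos[1] - prev_pos[1]) / first_duration   # second dragging speed
    second_duration = new_t - last_t                    # second duration
    return (last_pos[0] + vx * second_duration,
            last_pos[1] + vy * second_duration)

def should_refresh(predicted, actual, threshold=10.0):
    """Refresh the 3D picture once the predicted and actual positions
    drift apart by the preset value (the threshold here is an assumption)."""
    dist = ((predicted[0] - actual[0]) ** 2
            + (predicted[1] - actual[1]) ** 2) ** 0.5
    return dist >= threshold

# Two adjacent drags at t=0 and t=1 moved (+4, +3); a third drag at t=2
# with unchanged speed and direction is predicted at (8.0, 6.0).
pred = predict_position((0, 0), (4, 3), prev_t=0, last_t=1, new_t=2)
print(pred)                           # (8.0, 6.0)
print(should_refresh(pred, (4, 3)))   # False: drift of 5.0 is below the threshold
print(should_refresh(pred, (20, 6)))  # True: drift of 12.0 reaches it
```

Predicting ahead in this way lets the 3D picture be re-rendered only when the drag actually deviates from its extrapolated path, rather than on every thumbnail event.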
2. The image linkage method according to claim 1, further comprising:
generating the position update event according to a displacement event of the camera when the displacement event is detected.
3. The image linkage method according to claim 1 or 2, characterized in that the method further comprises:
when a target click event of the two-dimensional thumbnail is detected, generating the position update event according to the target click event.
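Claims 2 and 3 route two further input events into the same position update event. The sketch below shows that funneling; the event representation and field names are assumptions made for illustration.

```python
def to_position_update(event):
    """Map a camera displacement event or a thumbnail click event to a
    position update event carrying the new picture-center coordinates."""
    if event["type"] in ("camera_displacement", "thumbnail_click"):
        return {"type": "position_update", "center": event["center"]}
    return None  # other events do not drive the linkage

# A click on the thumbnail yields the same kind of event as a camera move,
# so a single listener can handle both sources.
evt = to_position_update({"type": "thumbnail_click", "center": (3, 4)})
print(evt)  # {'type': 'position_update', 'center': (3, 4)}
```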
4. An image linkage apparatus, characterized in that the apparatus is applied to a data processing device which stores a three-dimensional image and a two-dimensional thumbnail generated based on any two dimensions of the three-dimensional image, and the data processing device further stores a global variable; the apparatus comprises:
a second generation module, configured to generate a corresponding position update event according to the latter of two adjacent dragging events when it is detected that the two adjacent dragging events of the two-dimensional thumbnail differ in either dragging speed or dragging direction;
a monitoring module, configured to, when it is detected that two adjacent dragging events of the two-dimensional thumbnail have the same dragging speed and dragging direction, determine a first interval distance between the actual positions of the two adjacent dragging events in a first dimension of the two-dimensional thumbnail and a second interval distance between those actual positions in a second dimension of the two-dimensional thumbnail, and determine a first duration between the occurrence moments of the two adjacent dragging events;
a first prediction module, configured to, when a new dragging event is detected and the dragging speed and the dragging direction of the new dragging event are unchanged relative to those of the latter of the two adjacent dragging events, predict a first dragging speed of the new dragging event in the first dimension according to the first interval distance and the first duration, and predict a second dragging speed of the new dragging event in the second dimension according to the second interval distance and the first duration;
a second prediction module, configured to determine the predicted position of the new dragging event according to the first dragging speed, the second dragging speed, and a second duration between the occurrence moments of the new dragging event and the latter dragging event;
a picture updating module, configured to update the picture of the three-dimensional image to the picture shot by the camera at the predicted position when the distance between the predicted position and the actual position of the new dragging event reaches a preset value;
wherein the monitoring module is further configured to, when a position update event of one of the three-dimensional image and the two-dimensional thumbnail is monitored, determine the current position of the camera of the three-dimensional image and current coordinate information of the center of the picture shot when the camera is at the current position; and
a linkage module, configured to update the value of the global variable to the current coordinate information, so that the other of the three-dimensional image and the two-dimensional thumbnail updates its displayed picture according to the current value of the global variable, thereby realizing linkage of the three-dimensional image and the two-dimensional thumbnail.
5. The image linkage apparatus according to claim 4, wherein the apparatus further comprises:
a first generation module, configured to generate the position update event according to a displacement event of the camera when the displacement event is detected.
6. The image linkage apparatus according to claim 4 or 5, wherein
the second generation module is further configured to generate the position update event according to a target click event of the two-dimensional thumbnail when the target click event is detected.
CN201910133989.6A 2019-02-22 2019-02-22 Image linkage method and device Active CN109885369B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910133989.6A CN109885369B (en) 2019-02-22 2019-02-22 Image linkage method and device

Publications (2)

Publication Number Publication Date
CN109885369A (en) 2019-06-14
CN109885369B (en) 2022-04-29

Family

ID=66928877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910133989.6A Active CN109885369B (en) 2019-02-22 2019-02-22 Image linkage method and device

Country Status (1)

Country Link
CN (1) CN109885369B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120244942A1 (en) * 2011-02-25 2012-09-27 3D Sports Technology, Inc. 3D Sports Playbook
US20140007201A1 (en) * 2011-08-18 2014-01-02 Utherverse Digital, Inc. Systems and methods of assessing permissions in virtual worlds
CN103679727A (en) * 2013-12-16 2014-03-26 中国科学院地理科学与资源研究所 Multi-dimensional space-time dynamic linkage analysis method and device
CN103995644A (en) * 2014-05-23 2014-08-20 中国电建集团成都勘测设计研究院有限公司 Method for achieving linkage fusion of three-dimensional geographic information system and three-dimensional graphic system
US8902219B1 (en) * 2010-09-22 2014-12-02 Trimble Navigation Limited Maintaining connection to embedded content using graphical elements
CN105354875A (en) * 2015-09-25 2016-02-24 厦门大学 Construction method and system for two-dimensional and three-dimensional joint model of indoor environment
CN107247591A (en) * 2017-06-09 2017-10-13 成都知道创宇信息技术有限公司 A kind of map-based big data display interface interaction method
CN107369205A (en) * 2017-07-04 2017-11-21 东南大学 A kind of two- and three-dimensional city linkage display method for mobile terminals
CN107480174A (en) * 2017-06-30 2017-12-15 百度在线网络技术(北京)有限公司 Linkage method and device for a three-dimensional city model and a two-dimensional map, and computer-readable storage medium
CN107808009A (en) * 2017-11-17 2018-03-16 湖南优图信息技术有限公司 A kind of 2D/3D map linkage method based on the Stamp platform
CN108269305A (en) * 2017-12-27 2018-07-10 武汉网信安全技术股份有限公司 A kind of 2D and 3D data linkage display method and system
CN109068103A (en) * 2018-09-17 2018-12-21 北京智汇云舟科技有限公司 Dynamic video space-time virtual reality fusion method and system based on three-dimensional geographic information
CN109144393A (en) * 2018-08-28 2019-01-04 维沃移动通信有限公司 A kind of image display method and mobile terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Research on the Linkage Technology of 2D GIS and 3D GIS" (二维GIS与三维GIS联动技术研究); Cheng Haiyang (程海洋); Zhejiang Hydrotechnics (浙江水利科技); 2010-05-25 (No. 3); pp. 31-32 *

Also Published As

Publication number Publication date
CN109885369A (en) 2019-06-14

Similar Documents

Publication Publication Date Title
JP6170972B2 (en) Method and computer-readable recording medium for gallery application for content display
US9703446B2 (en) Zooming user interface frames embedded image frame sequence
JP5053404B2 (en) Capture and display digital images based on associated metadata
US20140146038A1 (en) Augmented display of internal system components
US9142044B2 (en) Apparatus, systems and methods for layout of scene graphs using node bounding areas
US20150149960A1 (en) Method of generating panorama image, computer-readable storage medium having recorded thereon the method, and panorama image generating device
US20090089705A1 (en) Virtual object navigation
US20150262019A1 (en) Information processing system, information processing method, and program
KR20150132527A (en) Segmentation of content delivery
US20110199517A1 (en) Method of showing video on a touch-sensitive display
JP5955491B2 (en) Information superimposed image display device and information superimposed image display program
US9405446B1 (en) Efficient and interactive presentation of item images
JP6505242B2 (en) Cluster-based photo navigation
KR102317013B1 (en) Object management and visualization using computing devices
JP2011039801A (en) Apparatus and method for processing image
CN109885369B (en) Image linkage method and device
EP3151243B1 (en) Accessing a video segment
US10782868B2 (en) Image navigation
EP2921944B1 (en) User interface
JP4914659B2 (en) VIDEO PROCESSING DEVICE, METHOD THEREOF, PROGRAM THEREOF, AND RECORDING MEDIUM CONTAINING THE PROGRAM
CN113703653A (en) Image processing method, device, equipment and computer readable storage medium
JP2012053309A (en) Image display device and program
JP2005309527A (en) Content information amount display device, method, and program, and storage medium storing content information amount display program
JP2011158956A (en) Information processor and information processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant