WO2016155377A1 - Picture display method and apparatus - Google Patents
- Publication number: WO2016155377A1 (PCT/CN2015/099164)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- displayed
- picture
- pictures
- scene
- image
- Prior art date
Classifications
- G06T15/005 — General purpose rendering architectures (3D image rendering)
- G06T11/60 — Editing figures and text; combining figures or text (2D image generation)
- G06F16/9537 — Spatial or temporal dependent retrieval, e.g. spatiotemporal queries (retrieval from the web)
- G06F16/54 — Browsing; visualisation of still image data
- G06F16/583 — Retrieval of still image data using metadata automatically derived from the content
- G06F16/5866 — Retrieval of still image data using manually generated information, e.g. tags, keywords, location and time information
- G06F16/587 — Retrieval of still image data using geographical or spatial information, e.g. location
- G06F16/951 — Indexing; web crawling techniques
- G06T15/205 — Image-based rendering (perspective computation)
- G06T19/003 — Navigation within 3D models or images
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/194 — Segmentation; edge detection involving foreground-background segmentation
- G06T7/97 — Determining parameters from multiple pictures (image analysis)
Definitions
- The present application relates to the field of computer technologies, specifically to image processing, and more particularly to a method and apparatus for displaying pictures.
- Displaying pictures of the same scene shared by a plurality of users (for example, photos taken by users at a tourist attraction) can enhance the user experience by strengthening the association between the shared pictures.
- In existing approaches, however, pictures containing the same scene are presented in no particular order, so each displayed picture shows the scene from only a single viewpoint and the pictures as a whole cannot reflect the overall characteristics of the scene; the scene content expressed by the pictures is not rich enough, which degrades the experience of browsing pictures.
- The present application provides a picture display method and apparatus to solve the technical problems described in the background above.
- The present application provides a picture display method, including: acquiring an original picture set containing the same scene; selecting, from the original picture set, a reconstructed picture set for reconstructing a three-dimensional structure of the scene, and reconstructing the three-dimensional structure of the scene, which includes three-dimensional feature points, from the pictures in the reconstructed picture set; selecting a set of pictures to be displayed from the reconstructed picture set, where the number of three-dimensional feature points contained in the set of pictures to be displayed is greater than a preset threshold; determining a display order of the pictures to be displayed and generating a picture display sequence based on that order; and continuously displaying the pictures in the picture display sequence.
- The present application further provides a picture display apparatus, including: an acquisition module configured to acquire an original picture set containing the same scene; a reconstruction module configured to select, from the original picture set, a reconstructed picture set for reconstructing a three-dimensional structure of the scene and to reconstruct that three-dimensional structure, which includes three-dimensional feature points, from the reconstructed picture set; a selection module configured to select a set of pictures to be displayed from the reconstructed picture set, where the number of three-dimensional feature points contained in the set is greater than a preset threshold; a determining module configured to determine a display order of the pictures to be displayed and to generate a picture display sequence based on that order; and a display module configured to continuously display the pictures in the picture display sequence.
- The picture display method and apparatus provided by the present application acquire pictures containing the same scene, determine the display order of the pictures according to the associations between them, and continuously display the pictures in that order, thereby enhancing the spatial continuity of the scene across the displayed pictures and enriching the scene content they express. Further, by inserting transition pictures between the pictures, a smooth transition is achieved while the scene is continuously displayed.
- FIG. 1 shows a flow chart of one embodiment of a picture display method of the present application
- FIG. 2 shows a flow chart of another embodiment of a picture display method of the present application
- FIG. 3 shows an exemplary schematic diagram of an insertion transition picture in the picture display method of the present application
- FIG. 4 is a schematic structural view showing an embodiment of a picture display device of the present application.
- Figure 5 shows a schematic block diagram of a computer system suitable for use in the present application.
- FIG. 1 illustrates a flow 100 of one embodiment of a picture presentation method in accordance with the present application.
- the method includes the following steps:
- Step 101: Acquire an original picture set that contains the same scene.
- The original pictures containing the same scene may be UGC (User Generated Content) pictures shared by users.
- Original pictures containing the same scene can also be obtained by actively crawling pictures from different websites.
- For example, original pictures whose picture names contain the name of the same tourist attraction can be crawled from different websites, obtaining original pictures of that attraction.
- In some implementations, acquiring the original picture set containing the same scene includes: receiving a scene selection instruction, where the scene selection instruction carries geographic location information corresponding to the scene; and acquiring, according to that geographic location information, the original picture set corresponding to the location.
- Pictures of a plurality of different scenes (e.g., tourist attractions) may be presented, and the scene the user desires to browse may be determined based on the user's selection operation (e.g., a click) on a picture.
- The address information corresponding to the scene is then determined; the correspondence between a scene and its address information may be established in advance.
- The shooting position information recorded in the original pictures is matched against the address information of the scene, and the pictures whose recorded shooting position is consistent with that address are selected, thereby obtaining the original pictures containing the same scene.
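The patent does not fix a concrete matching rule for "consistent with the address information". A hedged sketch is to treat consistency as the recorded GPS position lying within a radius of the scene's coordinates; the helper names, the dictionary fields, and the 500 m radius are illustrative assumptions:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS coordinates."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    a = math.sin((p2 - p1) / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def select_same_scene(pictures, scene_pos, radius_m=500.0):
    """Keep pictures whose recorded shooting position lies near the scene."""
    lat0, lon0 = scene_pos
    return [p for p in pictures
            if haversine_m(p["lat"], p["lon"], lat0, lon0) <= radius_m]
```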
- In some implementations, the method further includes performing a pre-processing operation on the original picture set, where the pre-processing includes at least one of the following: filtering out original pictures whose image quality is below a preset value; and correcting original pictures to the normal shooting posture.
- For example, the original pictures can be filtered based on their EXIF (Exchangeable Image File) information to discard pictures with poor image quality.
- An image correction operation can also be performed on the original pictures, i.e., rotating a picture so that it is adjusted to the normal shooting posture.
- In addition, pictures with a low degree of relevance to the three-dimensional reconstruction of the scene may be removed during picture selection.
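The pre-processing step can be sketched as follows. The quality rule (a minimum pixel count) is an illustrative assumption; the patent only says EXIF information is used to filter low-quality pictures. The orientation mapping follows the standard EXIF Orientation tag (274):

```python
# EXIF Orientation (tag 274) values -> counter-clockwise rotation, in degrees,
# needed to bring the picture back to the normal shooting posture.
ORIENTATION_TO_ROTATION = {1: 0, 3: 180, 6: 270, 8: 90}

def preprocess(originals, min_pixels=640 * 480):
    """Filter out low-quality pictures and record the correcting rotation.

    originals: [{"width": .., "height": .., "orientation": ..}, ...]
    """
    kept = []
    for pic in originals:
        if pic["width"] * pic["height"] < min_pixels:
            continue  # filter: below the (assumed) quality threshold
        rotation = ORIENTATION_TO_ROTATION.get(pic.get("orientation", 1), 0)
        kept.append({**pic, "rotation": rotation})
    return kept
```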
- Step 102: Select a reconstructed picture set for reconstructing a three-dimensional structure of the scene from the original picture set, and reconstruct the three-dimensional structure of the scene using the pictures in the reconstructed picture set.
- the three-dimensional structure of the scene may be reconstructed by using the original picture containing the same scene.
- a certain number of feature points can be selected from each original picture containing the same scene for reconstructing the three-dimensional structure of the scene.
- the feature point may be a point used to represent the outline of the scene in the original picture.
- For example, for a Buddha statue in the Longmen Grottoes, feature points may be selected from the outlines of multiple parts of the statue, such as the face, eyes, and hands.
- The three-dimensional structure of the scene can be displayed on the interface where the user browses UGC pictures, and the shooting positions of the pictures used to synthesize the three-dimensional structure can be labeled at the corresponding positions of that structure, further enhancing the association between the pictures and the scene.
- In some implementations, selecting the reconstructed picture set for reconstructing the three-dimensional structure of the scene from the original picture set includes: extracting scale-invariant feature points of the original pictures in the original picture set; and selecting the reconstructed picture set based on the matching relationships between those scale-invariant feature points.
- A reconstructed picture for reconstructing the three-dimensional structure of the scene may be selected from the original pictures based on their SIFT (Scale-Invariant Feature Transform) feature points. Whether an association exists between two original pictures can first be determined from the matching relationship of their SIFT feature points: the SIFT feature points of one picture are matched against those of another (for example, by computing the Euclidean distance between two feature descriptors), and the number of matched SIFT feature points between the pictures is counted.
- If the number of matched SIFT feature points is greater than a preset threshold, it may be determined that an association exists between the two original pictures. In this way, it can be determined, for each of the original pictures, whether it is associated with each of the other original pictures.
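The association test described above can be sketched with NumPy. The distance cutoff and match threshold are illustrative assumptions, and real SIFT descriptors (128-dimensional) would come from a feature extractor such as OpenCV's; here the descriptors are plain arrays:

```python
import numpy as np

def match_count(desc_a, desc_b, max_dist=0.5):
    """Count matches between two sets of SIFT-like descriptors.

    Each descriptor in desc_a is matched to its nearest neighbour in desc_b
    by Euclidean distance; the match is kept if the distance is small enough.
    """
    # Pairwise Euclidean distances, shape (len(desc_a), len(desc_b)).
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return int((nearest <= max_dist).sum())

def associated(desc_a, desc_b, min_matches=16, max_dist=0.5):
    """Two pictures are associated when enough feature points match."""
    return match_count(desc_a, desc_b, max_dist) > min_matches
```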
- A KD-tree data structure representing the association relationships between the original pictures may first be established, in which each original picture is represented by a node; the KD-tree structure can then be used to derive the relationships between nodes, and hence the associations between the corresponding original pictures.
- A picture for reconstructing the three-dimensional structure of the scene (also referred to as a reconstructed picture) may then be selected from the original pictures that have association relationships. For example, a graph data structure may be adopted, in which each associated original picture is represented by a node and the associations between pictures are represented by edges.
- The subgraph with the largest number of nodes can then be selected from the connected subgraphs of this graph.
- The original pictures corresponding to the nodes of that subgraph are used as the reconstructed pictures for reconstructing the three-dimensional structure of the scene.
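Selecting the subgraph with the most nodes amounts to finding the largest connected component of the association graph. A minimal sketch (function and field names are illustrative):

```python
from collections import defaultdict, deque

def largest_component(num_pictures, association_pairs):
    """Return the largest set of mutually connected pictures.

    association_pairs lists (i, j) index pairs of pictures whose matched
    feature-point count exceeded the threshold.
    """
    adj = defaultdict(set)
    for i, j in association_pairs:
        adj[i].add(j)
        adj[j].add(i)
    seen, best = set(), set()
    for start in range(num_pictures):
        if start in seen or start not in adj:
            continue
        comp, queue = set(), deque([start])
        while queue:  # breadth-first traversal of one component
            n = queue.popleft()
            if n in comp:
                continue
            comp.add(n)
            queue.extend(adj[n] - comp)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best
```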
- In some implementations, reconstructing the three-dimensional structure of the scene, which includes three-dimensional feature points, from the reconstructed picture set includes: recovering camera parameters from the scale-invariant feature points of the pictures in the reconstructed picture set, where the camera parameters include intrinsic parameters (such as the focal length and the principal point offset) and extrinsic parameters; and reconstructing the three-dimensional structure of the scene based on the scale-invariant feature points of the reconstructed pictures and the camera parameters.
- The principal point offset may be the distance from the intersection of the camera lens's principal axis with the plane of the camera sensor array to the center point of the picture.
- The extrinsic camera parameters characterize the rotation and translation of the camera coordinate system relative to the world coordinate system, and the extrinsic parameters can be determined based on the intrinsic parameters.
- The recovered camera parameters may further be optimized according to the scale-invariant feature points of the pictures in the reconstructed picture set, with the extrinsic parameters optimized preferentially.
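The roles of the intrinsic parameters (focal length, principal point) and extrinsic parameters (rotation, translation) can be illustrated with a standard pinhole projection; this is a textbook sketch, not the patent's reconstruction algorithm:

```python
import numpy as np

def project(point_w, R, t, f, cx, cy):
    """Project a 3-D world point into pixel coordinates with a pinhole model.

    R, t   : extrinsic parameters (rotation matrix, translation vector)
    f      : focal length in pixels
    cx, cy : principal point offset in pixels
    """
    # World -> camera coordinates via the extrinsic parameters.
    p_cam = R @ np.asarray(point_w, float) + np.asarray(t, float)
    x, y, z = p_cam
    # Perspective division, then intrinsic parameters map to pixels.
    return np.array([f * x / z + cx, f * y / z + cy])
```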
- Step 103: Select a set of pictures to be displayed from the reconstructed picture set, where the number of three-dimensional feature points contained in the set of pictures to be displayed is greater than a preset threshold.
- Some pictures to be displayed to the user may be selected from the pictures used to reconstruct the three-dimensional structure of the scene (i.e., from the reconstructed pictures).
- A plurality of pictures to be displayed may be selected from the reconstructed pictures such that the following condition is satisfied: the distinct feature points contained across the selected pictures cover the three-dimensional structure of the reconstructed scene.
- To this end, the correspondence between the feature points of the pictures to be displayed and the three-dimensional feature points of the scene's three-dimensional structure may be determined.
- In this way, the pictures to be displayed can express the scene more richly, i.e., they can show the scene from multiple angles.
- In some implementations, step 103 may include: selecting a reference picture to be displayed from the reconstructed pictures, where the number of three-dimensional feature points contained in the reference picture is greater than that of any other reconstructed picture; and, starting from the reference picture, successively selecting from the reconstructed pictures a plurality of subsequent pictures to be displayed that satisfy a first preset condition. The first preset condition is: the number of three-dimensional feature points contained in the subsequent picture but different from those of the previously selected pictures is greater than the corresponding number for any reconstructed picture not yet selected.
- Here, a three-dimensional feature point "contained" in a picture to be displayed means a SIFT feature point of that picture corresponding to a three-dimensional feature point that participates in reconstructing the three-dimensional structure of the scene, i.e., a SIFT feature point matched between reconstructed pictures.
- Since the three-dimensional reconstruction of the scene is performed using the matched SIFT feature points between the reconstructed pictures, there is a correspondence between those SIFT feature points and the three-dimensional feature points of the scene's three-dimensional structure.
- The process of selecting pictures to be displayed is illustrated below, taking one of the objects in the reconstructed pictures (for example, the Longmen Grottoes) as an example.
- The picture containing the most three-dimensional feature points may be selected from the reconstructed pictures as the first picture to be displayed; the second picture is then the reconstructed picture that, compared with all other reconstructed pictures, contains the most three-dimensional feature points different from those of the first.
- Pictures to be displayed are selected in this way, one after another, until the number of distinct three-dimensional feature points covered by the selected set is greater than the preset threshold.
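The selection above is a greedy maximum-coverage procedure; a minimal sketch (identifiers and the data layout are illustrative):

```python
def select_to_display(point_sets, coverage_threshold):
    """Greedily pick pictures until enough distinct 3-D feature points are covered.

    point_sets: {picture_id: set of 3-D feature-point ids seen in that picture}
    """
    remaining = dict(point_sets)
    covered, order = set(), []
    while remaining and len(covered) <= coverage_threshold:
        # First pick = the picture with the most points (the reference picture);
        # each later pick adds the most not-yet-covered points.
        pic = max(remaining, key=lambda p: len(remaining[p] - covered))
        order.append(pic)
        covered |= remaining.pop(pic)
    return order, covered
```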
- Step 104: Determine a display order of the pictures in the set of pictures to be displayed, and generate a picture display sequence based on that display order.
- A display order of the pictures to be displayed may be determined before the pictures are presented to the user.
- The display order may be determined according to associated information of the pictures to be displayed, for example their shooting positions and shooting angles.
- In some implementations, determining the display order of the pictures in the set of pictures to be displayed and generating the picture display sequence includes: determining the display order according to the association between attribute parameters of the pictures to be displayed, where the attribute parameters include a shooting position parameter and a shooting angle parameter; and generating the picture display sequence based on that display order.
- The following describes the process of determining the display order using the shooting position and shooting angle parameters of the pictures to be displayed.
- The display order may be determined as follows: the pictures to be displayed are sorted by the distance from their shooting position to the scene (for example, the Longmen Grottoes), from largest to smallest, i.e., displayed from far to near. In this way, the displayed pictures show the scene from far to near, giving the scene in the pictures spatial continuity and further enriching the content they express.
- The display order may also be determined according to the shooting angle. For example, based on the angle by which the shooting direction deviates from the axis of the scene (for example, the Longmen Grottoes), the pictures to be displayed are ordered by that offset angle, so that they show the scene to the user from a progression of viewing angles, again giving the scene in the pictures spatial continuity.
- The above criteria may also be combined to determine a smooth display order: for example, the display order may be determined from both the shooting position and the shooting angle of the pictures to be displayed, with different weight values assigned to attribute parameters such as shooting position and shooting angle to determine the final display order.
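A hedged sketch of the combined, weighted ordering. The score function and the specific weights are illustrative assumptions; the patent only states that weighted position and angle parameters jointly determine the final order:

```python
import math

def display_order(pictures, scene_pos, w_dist=0.7, w_angle=0.3):
    """Order pictures roughly far-to-near, trading distance against angle.

    pictures: [{"id": .., "pos": (x, y, z), "angle": degrees off the scene axis}]
    """
    def score(p):
        dist = math.dist(p["pos"], scene_pos)
        # Larger score is shown earlier: prefer far shots, penalize large offsets.
        return w_dist * dist - w_angle * p["angle"]

    return [p["id"] for p in sorted(pictures, key=score, reverse=True)]
```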
- Step 105: Continuously display the pictures in the picture display sequence.
- the pictures in the picture display sequence may be successively displayed.
- Each picture can also be rotated during display to show the scene from additional angles and further enrich the scene content expressed by the picture.
- The picture display method provided by the above embodiment of the present application acquires pictures containing the same scene, determines their display order according to the associations between them, and continuously displays them in that order, thereby enhancing the spatial continuity of the scene across the displayed pictures and enriching the scene content they express. Further, by inserting transition pictures between the pictures, the scene transitions smoothly during continuous display.
- FIG. 2 illustrates a flow 200 of another embodiment of a picture presentation method in accordance with the present application.
- the method includes the following steps:
- Step 201: Acquire an original picture set that contains the same scene.
- Step 202: Select a reconstructed picture set for reconstructing a three-dimensional structure of the scene from the original picture set, and reconstruct the three-dimensional structure of the scene using the pictures in the reconstructed picture set.
- Step 203: Select a set of pictures to be displayed from the reconstructed picture set, where the number of three-dimensional feature points contained in the set is greater than a preset threshold.
- Step 204: Determine a display order of the pictures to be displayed based on the rendering cost between them, and insert a transition picture between every two adjacent pictures according to that display order to generate a picture display sequence.
- The display order of the pictures to be displayed is determined based on the rendering cost between them, and transition pictures (also known as virtual pictures) may be inserted when the pictures are continuously displayed, producing a smooth display process and hence a smooth transition.
- FIG. 3 shows an example of inserting transition pictures in the picture display method of the present application.
- FIG. 3 shows a scene object 301, pictures to be displayed 302, and transition pictures 303 inserted between the pictures 302.
- Generating the transition pictures 303 inserted between the pictures to be displayed 302 may be referred to as a rendering process: each inserted transition picture 303 is generated from the pictures to be displayed, and inserting a transition picture 303 between two different pictures 302 corresponds to a different rendering cost.
- Accordingly, the display order may be determined based on the rendering cost between the pictures to be displayed, so that transition pictures are inserted where the rendering cost is small, producing a smooth transition effect.
- In some implementations, before determining the display order based on the rendering cost between the pictures to be displayed, the method further includes: separately calculating, for each rendering-associated parameter of the pictures to be displayed, a corresponding sub-rendering cost, where the rendering-associated parameters include at least one of: a distortion amount parameter, a shooting position parameter, a shooting angle parameter, a resolution parameter, and an optical flow parameter; and determining the rendering cost between the pictures to be displayed from the sub-rendering costs corresponding to the parameters.
- The distortion amount parameter characterizes the deformation produced when, upon inserting a transition picture 303 between two pictures to be displayed 302, the three-dimensional feature points contained in the pictures 302 are mapped onto the virtual camera's imaging surface. The sub-rendering cost corresponding to the distortion amount between adjacent pictures 302 may be determined from the angular change as the mapped region deforms from a rectangle into a non-rectangular polygon.
- The sub-rendering cost corresponding to the shooting position parameter may be determined from the coordinates of the shooting positions of the pictures 302 to be displayed. For example, if the shooting positions of two adjacent pictures 302 have coordinates (X1, Y1, Z1) and (X2, Y2, Z2), the sub-rendering cost may be calculated by a formula over these coordinates.
- The sub-rendering cost corresponding to the shooting angle can be determined by computing the absolute value of the difference between the shooting angles of adjacent pictures 302.
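The source elides the position-cost formula; a natural assumption is the Euclidean distance between the two shooting positions, while the angle cost is stated explicitly as an absolute difference. A sketch under those assumptions:

```python
import math

def position_sub_cost(p1, p2):
    """Assumed position sub-cost: Euclidean distance between the shooting
    positions (X1, Y1, Z1) and (X2, Y2, Z2)."""
    return math.dist(p1, p2)

def angle_sub_cost(a1, a2):
    """Angle sub-cost: absolute difference of the shooting angles."""
    return abs(a1 - a2)
```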
- The sub-rendering cost corresponding to the optical flow may be determined from the pixel positions, on the two photographs, of the matched SIFT feature points of adjacent pictures to be displayed.
- The matched SIFT feature points of two adjacent pictures to be displayed may form matched feature-point pairs, whose pixel coordinates are denoted (Xi1, Yi1) and (Xi2, Yi2) respectively, where i indicates the position of a pair among the matched SIFT feature-point pairs. The sub-rendering cost corresponding to the optical flow can then be calculated with a formula over these coordinates.
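Since the optical-flow formula itself is not reproduced in this text, the sketch below assumes the sub-cost is the mean Euclidean pixel displacement of the matched SIFT pairs (Xi1, Yi1), (Xi2, Yi2); the function name and this choice of formula are assumptions:

```python
import math

def optical_flow_cost(matches):
    """Sub-rendering cost for the optical flow parameter.

    `matches` is a list of matched SIFT feature-point pairs
    [((Xi1, Yi1), (Xi2, Yi2)), ...]. This sketch assumes the cost is
    the mean Euclidean pixel displacement over all matched pairs."""
    if not matches:
        return 0.0
    total = sum(math.hypot(x2 - x1, y2 - y1)
                for (x1, y1), (x2, y2) in matches)
    return total / len(matches)

pairs = [((10.0, 10.0), (13.0, 14.0)), ((20.0, 5.0), (20.0, 5.0))]
print(optical_flow_cost(pairs))  # 2.5
```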
- The sub-rendering cost corresponding to the resolution can be determined based on the imaging range of the three-dimensional points on the camera plane.
- The sub-rendering cost corresponding to each parameter may be normalized to the range 0 to 1.
- Different weight values may be set for the parameters; for example, the weight of the distortion amount parameter may be set largest, the weights of the shooting position and shooting angle parameters next, and the weights of the optical flow and resolution parameters smallest. With weights set in this way, excessive image change and abrupt zooming can be avoided. The rendering cost between the pictures to be rendered can then be determined from the parameter weights and the normalized sub-rendering costs. After the rendering cost between each picture to be displayed and every other picture to be displayed has been determined, a graph structure can be adopted: each picture to be displayed is represented by a node in the graph, and the rendering cost between two nodes is characterized by the weight on the link between those nodes.
- A shortest-path algorithm, such as Dijkstra's algorithm, can then be used to compute, from the weights on the links between the nodes, the path through the nodes whose sum of link weights is minimal. This path represents the pictures to be displayed, and the order of the nodes on the path is the order in which the pictures to be displayed are displayed.
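The weighting and shortest-path steps above can be sketched as follows. The specific weight values, the toy graph, and the function names are illustrative assumptions; only the ordering of the weights (distortion largest, position and angle next, optical flow and resolution smallest) and the use of Dijkstra's algorithm come from the text:

```python
import heapq

# Illustrative weights: distortion largest, position/angle next,
# optical flow and resolution smallest (the numeric values are assumptions).
WEIGHTS = {"distortion": 0.4, "position": 0.2, "angle": 0.2,
           "flow": 0.1, "resolution": 0.1}

def rendering_cost(sub_costs):
    """Combine normalized (0-1) sub-rendering costs into one rendering cost."""
    return sum(WEIGHTS[k] * v for k, v in sub_costs.items())

def dijkstra_order(graph, start, goal):
    """Dijkstra's shortest path over picture nodes; edge weights are
    rendering costs. Returns the node order with minimal summed cost."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Walk back from goal to start to recover the display order.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

graph = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 1.5)], "C": []}
print(dijkstra_order(graph, "A", "C"))  # ['A', 'B', 'C']
```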
- Inserting a transition picture between every two adjacent pictures to be displayed according to the display order of the pictures to be displayed to generate a picture display sequence includes: based on the display order of the pictures to be displayed, inserting a virtual camera at at least one position between the shooting positions of every two adjacent pictures to be displayed, and interpolating the camera intrinsic parameters to obtain the intrinsic parameters of the virtual camera, the virtual camera including a virtual camera imaging plane; and, based on the intrinsic parameters of the virtual camera, mapping the three-dimensional feature points contained in the pictures to be displayed onto the virtual camera imaging plane, respectively, to insert a transition picture and generate the picture display sequence.
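The interpolation of camera intrinsics described above can be sketched as follows. The text names the focal length and principal point offset as intrinsic parameters; linear interpolation, the dict layout, and the function name are assumptions of this sketch:

```python
def interpolate_intrinsics(cam_a, cam_b, t):
    """Intrinsic parameters of a virtual camera placed between two real
    cameras.

    cam_a/cam_b hold a focal length 'f' and principal point offset
    ('cx', 'cy'); t in [0, 1] is the virtual camera's relative position
    between the two shooting positions. Linear interpolation is an
    assumption for this sketch."""
    return {k: (1.0 - t) * cam_a[k] + t * cam_b[k] for k in ("f", "cx", "cy")}

a = {"f": 1000.0, "cx": 320.0, "cy": 240.0}
b = {"f": 1200.0, "cx": 330.0, "cy": 250.0}
print(interpolate_intrinsics(a, b, 0.5))  # {'f': 1100.0, 'cx': 325.0, 'cy': 245.0}
```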
- Step 205: the pictures in the picture display sequence are displayed in succession.
- Inserting a transition picture between the pictures to be displayed in the display sequence means inserting a virtual camera between the pictures to be displayed and, according to the relationship between the camera parameters of the virtual camera and those of the real cameras that captured the pictures to be displayed, mapping the three-dimensional feature points contained in the pictures to be displayed onto the virtual imaging plane of the virtual camera.
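Mapping a three-dimensional feature point onto the virtual camera's imaging plane can be sketched with a standard pinhole projection. The pinhole model and the function name are assumptions of this sketch, since the text does not spell out the mapping:

```python
def project_point(point_3d, f, cx, cy):
    """Map a 3D feature point (camera coordinates, Z > 0) onto the
    virtual camera imaging plane using the standard pinhole model
    (an assumption of this sketch): u = f*X/Z + cx, v = f*Y/Z + cy."""
    x, y, z = point_3d
    return (f * x / z + cx, f * y / z + cy)

# A point half a metre right, a quarter metre up, two metres ahead:
print(project_point((0.5, -0.25, 2.0), 1000.0, 320.0, 240.0))  # (570.0, 115.0)
```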
- FIG. 4 is a schematic structural diagram of an embodiment of a picture display device of the present application.
- The picture display apparatus 400 includes: an acquisition module 401 configured to acquire an original picture set containing the same scene; a reconstruction module 402 configured to select, from the original picture set, a reconstructed picture set for reconstructing the three-dimensional structure of the scene, and to reconstruct the three-dimensional structure of the scene using the reconstructed pictures in the reconstructed picture set, the three-dimensional structure of the scene including three-dimensional feature points; a selection module 403 configured to select a set of pictures to be displayed from the reconstructed picture set, the number of three-dimensional feature points contained in the set of pictures to be displayed being greater than a preset threshold; a determination module 404 configured to determine the display order of the pictures to be displayed in the set of pictures to be displayed, and to generate a picture display sequence based on that display order; and a display module 405 configured to display the pictures in the picture display sequence in succession.
- The reconstruction module 402 further includes a reconstructed picture selection sub-module, which is configured to extract the scale-invariant feature points of the original pictures in the original picture set, and further configured to select, from the original picture set, the reconstructed picture set for reconstructing the three-dimensional structure of the scene according to the matching relationship of the scale-invariant feature points between the original pictures.
- The reconstruction module 402 further includes a scene reconstruction sub-module configured to recover camera parameters according to the scale-invariant feature points of the reconstructed pictures in the reconstructed picture set.
- The camera parameters include camera intrinsic parameters and camera extrinsic parameters, the intrinsic parameters including a focal length and a principal point offset; the scene reconstruction sub-module is further configured to reconstruct the three-dimensional structure of the scene based on the scale-invariant feature points of the reconstructed pictures and the camera parameters.
- The selection module 403 is configured to select a reference picture to be displayed from the reconstructed picture set, the number of three-dimensional feature points contained in the reference picture to be displayed being greater than the number contained in the other reconstructed pictures in the reconstructed picture set.
- The selection module 403 is further configured to successively select, starting from the reference picture to be displayed, a plurality of subsequent pictures to be displayed that satisfy a first preset condition from the reconstructed picture set. The first preset condition includes: the number of distinct three-dimensional feature points contained between a subsequent picture to be displayed and the previously selected subsequent picture to be displayed is greater than the number of distinct three-dimensional feature points contained between any reconstructed picture not selected as a subsequent picture to be displayed and the previously selected subsequent picture to be displayed.
- The determination module 404 is configured to determine the display order of the pictures to be displayed according to the association relationship between attribute parameters of the pictures to be displayed, the attribute parameters including a shooting position parameter and a shooting angle parameter, and to generate a picture display sequence based on the display order of the pictures to be displayed.
- The determination module 404 is further configured to determine the display order of the pictures to be displayed based on a rendering cost between the pictures to be displayed, the rendering cost indicating the cost of inserting a transition picture between the pictures to be displayed.
- the determining module 404 is further configured to insert a transition picture between each two adjacent pictures to be displayed based on the display order of the pictures to be displayed, to generate a picture display sequence.
- The determination module 404 includes a rendering cost determination sub-module configured to separately calculate, according to the rendering-related parameters of the pictures to be displayed, the sub-rendering cost corresponding to each rendering-related parameter, the rendering-related parameters including at least one of the following: a distortion amount parameter, a shooting position parameter, a shooting angle parameter, a resolution parameter, and an optical flow parameter. The rendering cost determination sub-module is further configured to determine the rendering cost between the pictures to be displayed according to the sub-rendering cost corresponding to each rendering-related parameter.
- The determination module 404 includes a transition picture insertion sub-module configured to insert, based on the display order of the pictures to be displayed, a virtual camera at at least one position between the shooting positions of every two adjacent pictures to be displayed, and to interpolate the camera intrinsic parameters to obtain the intrinsic parameters of the virtual camera, the virtual camera including a virtual camera imaging plane. The transition picture insertion sub-module is further configured to map, based on the intrinsic parameters of the virtual camera, the three-dimensional feature points contained in the pictures to be displayed onto the virtual camera imaging plane to insert a transition picture and generate a picture display sequence.
- The acquisition module 401 is further configured to receive a scene selection instruction, the scene selection instruction including geographic location information corresponding to the scene, and to acquire, based on the geographic location information, the original picture set corresponding to the geographic location information.
- The apparatus 400 further includes a pre-processing module configured to perform pre-processing operations on the original picture set. The pre-processing module includes at least one of an image filtering sub-module and an image correction sub-module; the image filtering sub-module is configured to filter out original pictures in the original picture set whose image quality is lower than a preset value, and the image correction sub-module is configured to correct the shooting positions of the original pictures in the original picture set.
- FIG. 5 is a schematic structural diagram of a computer system according to an embodiment of the present application.
- Referring to FIG. 5, there is shown a block diagram of a computer system 500 suitable for implementing the apparatus of the embodiments of the present application.
- The computer system 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage portion 508 into a random access memory (RAM) 503.
- the CPU 501 executes the method described in the present application.
- The RAM 503 also stores various programs and data required for the operation of the system 500.
- the CPU 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504.
- An input/output (I/O) interface 505 is also coupled to bus 504.
- The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including, for example, a cathode ray tube (CRT) or a liquid crystal display (LCD); a storage portion 508 including a hard disk or the like; and a communication portion 509 including a network interface card such as a LAN card or a modem. The communication portion 509 performs communication processing via a network such as the Internet.
- A drive 510 is also coupled to the I/O interface 505 as needed.
- A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as needed so that a computer program read from it can be installed into the storage portion 508.
- embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine readable medium, the computer program comprising program code for executing the method illustrated in the flowchart.
- the computer program can be downloaded and installed from the network via the communication portion 509, and/or installed from the removable medium 511.
- Each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
- The present application further provides a computer readable storage medium, which may be the computer readable storage medium included in the apparatus of the foregoing embodiments, or may exist separately without being assembled into the apparatus.
- a computer readable storage medium stores one or more programs that are used by one or more processors to perform the methods described herein.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Library & Information Science (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Geometry (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Hardware Design (AREA)
- Processing Or Creating Images (AREA)
- Studio Devices (AREA)
- Image Generation (AREA)
Abstract
Description
Claims (22)
- A picture presentation method, characterized in that the method comprises: acquiring an original picture set containing the same scene; selecting, from the original picture set, a reconstructed picture set for reconstructing a three-dimensional structure of the scene, and reconstructing the three-dimensional structure of the scene using the reconstructed pictures in the reconstructed picture set, the three-dimensional structure of the scene comprising three-dimensional feature points; selecting a set of pictures to be displayed from the reconstructed picture set, the number of three-dimensional feature points contained in the set of pictures to be displayed being greater than a preset threshold; determining a display order of the pictures to be displayed in the set of pictures to be displayed, and generating a picture display sequence based on the display order of the pictures to be displayed; and displaying the pictures in the picture display sequence in succession.
- The method according to claim 1, characterized in that selecting, from the original picture set, the reconstructed picture set for reconstructing the three-dimensional structure of the scene comprises: extracting scale-invariant feature points of the original pictures in the original picture set; and selecting, from the original picture set, the reconstructed picture set for reconstructing the three-dimensional structure of the scene according to the matching relationship of the scale-invariant feature points between the original pictures.
- The method according to claim 2, characterized in that reconstructing the three-dimensional structure of the scene using the reconstructed pictures in the reconstructed picture set, the three-dimensional structure of the scene comprising three-dimensional feature points, comprises: recovering camera parameters according to the scale-invariant feature points of the reconstructed pictures in the reconstructed picture set, the camera parameters comprising camera intrinsic parameters and camera extrinsic parameters, the camera intrinsic parameters comprising a focal length and a principal point offset; and reconstructing the three-dimensional structure of the scene based on the scale-invariant feature points of the reconstructed pictures and the camera parameters.
- The method according to any one of claims 1 to 3, characterized in that selecting the set of pictures to be displayed from the reconstructed picture set, the number of three-dimensional feature points contained in the set of pictures to be displayed being greater than a preset threshold, comprises: selecting a reference picture to be displayed from the reconstructed picture set, the number of three-dimensional feature points contained in the reference picture to be displayed being greater than the number of three-dimensional feature points contained in the other reconstructed pictures in the reconstructed picture set; and successively selecting, starting from the reference picture to be displayed, a plurality of subsequent pictures to be displayed satisfying a first preset condition from the reconstructed picture set, the first preset condition comprising: the number of distinct three-dimensional feature points contained between a subsequent picture to be displayed and the previously selected subsequent picture to be displayed is greater than the number of distinct three-dimensional feature points contained between any reconstructed picture not selected as a subsequent picture to be displayed and the previously selected subsequent picture to be displayed.
- The method according to claim 4, characterized in that determining the display order of the pictures to be displayed in the set of pictures to be displayed, and generating the picture display sequence based on the display order of the pictures to be displayed, comprises: determining the display order of the pictures to be displayed according to the association relationship between attribute parameters of the pictures to be displayed, the attribute parameters of the pictures to be displayed comprising a shooting position parameter and a shooting angle parameter; and generating the picture display sequence based on the display order of the pictures to be displayed.
- The method according to claim 4, characterized in that determining the display order of the pictures to be displayed in the set of pictures to be displayed, and generating the picture display sequence based on the display order of the pictures to be displayed, comprises: determining the display order of the pictures to be displayed based on a rendering cost between the pictures to be displayed, the rendering cost indicating the cost of inserting a transition picture between the pictures to be displayed; and inserting, based on the display order of the pictures to be displayed, a transition picture between every two adjacent pictures to be displayed to generate the picture display sequence.
- The method according to claim 6, characterized in that, before determining the display order of the pictures to be displayed based on the rendering cost between the pictures to be displayed, the method further comprises: separately calculating, according to rendering-related parameters of the pictures to be displayed, the sub-rendering cost corresponding to each of the rendering-related parameters, the rendering-related parameters comprising at least one of the following: a distortion amount parameter, a shooting position parameter, a shooting angle parameter, a resolution parameter, and an optical flow parameter; and determining the rendering cost between the pictures to be displayed according to the sub-rendering cost corresponding to each rendering-related parameter.
- The method according to claim 7, characterized in that inserting, based on the display order of the pictures to be displayed, a transition picture between every two adjacent pictures to be displayed to generate the picture display sequence comprises: inserting, based on the display order of the pictures to be displayed, a virtual camera at at least one position between the shooting positions of every two adjacent pictures to be displayed, and interpolating the intrinsic parameters of the cameras that captured each two pictures to be displayed to obtain intrinsic parameters of the virtual camera, the virtual camera comprising a virtual camera imaging plane; and mapping, based on the intrinsic parameters of the virtual camera, the three-dimensional feature points contained in every two adjacent pictures to be displayed onto the virtual camera imaging plane, respectively, to insert the transition picture, and generating the picture display sequence.
- The method according to claim 1, characterized in that acquiring the original picture set containing the same scene comprises: receiving a scene selection instruction, the scene selection instruction comprising geographic location information corresponding to the scene; and acquiring, based on the geographic location information, the original picture set corresponding to the geographic location information.
- The method according to claim 9, characterized in that, after acquiring the original picture set containing the same scene, the method further comprises performing a pre-processing operation on the original picture set, the pre-processing operation comprising at least one of the following: filtering out original pictures in the original picture set whose image quality is lower than a preset value; and correcting the shooting positions of the original pictures in the original picture set to a normal shooting pose.
- A picture presentation apparatus, characterized in that the apparatus comprises: an acquisition module configured to acquire an original picture set containing the same scene; a reconstruction module configured to select, from the original picture set, a reconstructed picture set for reconstructing a three-dimensional structure of the scene, and to reconstruct the three-dimensional structure of the scene using the reconstructed pictures in the reconstructed picture set, the three-dimensional structure of the scene comprising three-dimensional feature points; a selection module configured to select a set of pictures to be displayed from the reconstructed picture set, the number of three-dimensional feature points contained in the set of pictures to be displayed being greater than a preset threshold; a determination module configured to determine a display order of the pictures to be displayed in the set of pictures to be displayed, and to generate a picture display sequence based on the display order of the pictures to be displayed; and a display module configured to display the pictures in the picture display sequence in succession.
- The apparatus according to claim 11, characterized in that the reconstruction module further comprises a reconstructed picture selection sub-module, the reconstructed picture selection sub-module being configured to extract scale-invariant feature points of the original pictures in the original picture set, and further configured to select, from the original picture set, the reconstructed picture set for reconstructing the three-dimensional structure of the scene according to the matching relationship of the scale-invariant feature points between the original pictures.
- The apparatus according to claim 12, characterized in that the reconstruction module further comprises a scene reconstruction sub-module, the scene reconstruction sub-module being configured to recover camera parameters according to the scale-invariant feature points of the reconstructed pictures in the reconstructed picture set, the camera parameters comprising camera intrinsic parameters and camera extrinsic parameters, the camera intrinsic parameters comprising a focal length and a principal point offset; the scene reconstruction sub-module being further configured to reconstruct the three-dimensional structure of the scene based on the scale-invariant feature points of the reconstructed pictures and the camera parameters.
- The apparatus according to any one of claims 11 to 13, characterized in that the selection module is configured to select a reference picture to be displayed from the reconstructed picture set, the number of three-dimensional feature points contained in the reference picture to be displayed being greater than the number of three-dimensional feature points contained in the other reconstructed pictures in the reconstructed picture set; the selection module being further configured to successively select, starting from the reference picture to be displayed, a plurality of subsequent pictures to be displayed satisfying a first preset condition from the reconstructed picture set, the first preset condition comprising: the number of distinct three-dimensional feature points contained between a subsequent picture to be displayed and the previously selected subsequent picture to be displayed is greater than the number of distinct three-dimensional feature points contained between any reconstructed picture not selected as a subsequent picture to be displayed and the previously selected subsequent picture to be displayed.
- The apparatus according to claim 14, characterized in that the determination module is configured to determine the display order of the pictures to be displayed according to the association relationship between attribute parameters of the pictures to be displayed, the attribute parameters of the pictures to be displayed comprising a shooting position parameter and a shooting angle parameter, and to generate the picture display sequence based on the display order of the pictures to be displayed.
- The apparatus according to claim 14, characterized in that the determination module is further configured to determine the display order of the pictures to be displayed based on a rendering cost between the pictures to be displayed, the rendering cost indicating the cost of inserting a transition picture between the pictures to be displayed; the determination module being further configured to insert, based on the display order of the pictures to be displayed, a transition picture between every two adjacent pictures to be displayed to generate the picture display sequence.
- The apparatus according to claim 16, characterized in that the determination module comprises a rendering cost determination sub-module configured to separately calculate, according to rendering-related parameters of the pictures to be displayed, the sub-rendering cost corresponding to each of the rendering-related parameters, the rendering-related parameters comprising at least one of the following: a distortion amount parameter, a shooting position parameter, a shooting angle parameter, a resolution parameter, and an optical flow parameter; the rendering cost determination sub-module being further configured to determine the rendering cost between the pictures to be displayed according to the sub-rendering cost corresponding to each rendering-related parameter.
- The apparatus according to claim 17, characterized in that the determination module comprises a transition picture insertion sub-module configured to insert, based on the display order of the pictures to be displayed, a virtual camera at at least one position between the shooting positions of every two adjacent pictures to be displayed, and to interpolate the camera intrinsic parameters to obtain intrinsic parameters of the virtual camera, the virtual camera comprising a virtual camera imaging plane; the transition picture insertion sub-module being further configured to map, based on the intrinsic parameters of the virtual camera, the three-dimensional feature points contained in the pictures to be displayed onto the virtual camera imaging plane, respectively, to insert the transition picture and generate the picture display sequence.
- The apparatus according to claim 11, characterized in that the acquisition module is further configured to: receive a scene selection instruction, the scene selection instruction comprising geographic location information corresponding to the scene, and acquire, based on the geographic location information, the original picture set corresponding to the geographic location information.
- The apparatus according to claim 19, characterized in that the apparatus further comprises a pre-processing module configured to perform pre-processing operations on the original picture set, the pre-processing module comprising at least one of an image filtering sub-module and an image correction sub-module; the image filtering sub-module being configured to filter out original pictures in the original picture set whose image quality is lower than a preset value; the image correction sub-module being configured to correct the shooting positions of the original pictures in the original picture set.
- A device, comprising: a processor; and a memory storing computer-readable instructions executable by the processor, wherein, when the computer-readable instructions are executed, the processor performs the method according to any one of claims 1 to 10.
- A non-volatile computer storage medium storing computer-readable instructions executable by a processor, wherein, when the computer-readable instructions are executed by the processor, the processor performs the method according to any one of claims 1 to 10.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/301,356 US10410397B2 (en) | 2015-03-31 | 2015-12-28 | Picture presentation method and apparatus |
JP2016560524A JP6298540B2 (ja) | 2015-03-31 | 2015-12-28 | 画像表示方法及び装置 |
KR1020167027050A KR101820349B1 (ko) | 2015-03-31 | 2015-12-28 | 화상 표시 방법 및 장치 |
EP15886720.0A EP3279803B1 (en) | 2015-03-31 | 2015-12-28 | Picture display method and device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510150163.2 | 2015-03-31 | ||
CN201510150163.2A CN104699842B (zh) | 2015-03-31 | 2015-03-31 | 图片展示方法和装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016155377A1 true WO2016155377A1 (zh) | 2016-10-06 |
Family
ID=53346962
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/099164 WO2016155377A1 (zh) | 2015-03-31 | 2015-12-28 | 图片展示方法和装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US10410397B2 (zh) |
EP (1) | EP3279803B1 (zh) |
JP (1) | JP6298540B2 (zh) |
KR (1) | KR101820349B1 (zh) |
CN (1) | CN104699842B (zh) |
WO (1) | WO2016155377A1 (zh) |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104699842B (zh) | 2015-03-31 | 2019-03-26 | 百度在线网络技术(北京)有限公司 | 图片展示方法和装置 |
CN105139445B (zh) | 2015-08-03 | 2018-02-13 | 百度在线网络技术(北京)有限公司 | 场景重建方法及装置 |
CN105335473B (zh) * | 2015-09-30 | 2019-02-12 | 小米科技有限责任公司 | 图片播放方法和装置 |
US10111273B2 (en) * | 2016-05-24 | 2018-10-23 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Communication paths hierarchy for managed computing device |
US10637736B2 (en) | 2016-06-06 | 2020-04-28 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd | Acquisition of information from managed computing device not communicatively connected to management computing device |
WO2018014849A1 (zh) * | 2016-07-20 | 2018-01-25 | 腾讯科技(深圳)有限公司 | 一种媒体信息展示方法、装置和计算机存储介质 |
CN108510433B (zh) * | 2017-02-28 | 2020-03-24 | 贝壳找房(北京)科技有限公司 | 空间展示方法、装置及终端 |
CN107153708A (zh) * | 2017-05-16 | 2017-09-12 | 珠海市魅族科技有限公司 | 一种图片查看方法及装置、计算机装置、计算机可读存储介质 |
CN108897757B (zh) * | 2018-05-14 | 2023-08-22 | 平安科技(深圳)有限公司 | 一种照片存储方法、存储介质和服务器 |
KR20200016443A (ko) * | 2018-08-07 | 2020-02-17 | 주식회사 마크애니 | 콘텐츠의 영상 데이터 복원방법 및 장치 |
CN109741245B (zh) * | 2018-12-28 | 2023-03-17 | 杭州睿琪软件有限公司 | 平面信息的插入方法及装置 |
CN110044353B (zh) * | 2019-03-14 | 2022-12-20 | 深圳先进技术研究院 | 一种飞行机构室内定位方法及定位系统 |
CN110400369A (zh) * | 2019-06-21 | 2019-11-01 | 苏州狗尾草智能科技有限公司 | 一种人脸重建的方法、系统平台及存储介质 |
CN112257731A (zh) * | 2019-07-05 | 2021-01-22 | 杭州海康威视数字技术股份有限公司 | 一种虚拟数据集的生成方法及装置 |
CN111078345B (zh) * | 2019-12-18 | 2023-09-19 | 北京金山安全软件有限公司 | 一种图片展示效果确定方法、装置、电子设备及存储介质 |
CN111882590A (zh) * | 2020-06-24 | 2020-11-03 | 广州万维创新科技有限公司 | 一种基于单张图片定位的ar场景应用方法 |
CN112015936B (zh) * | 2020-08-27 | 2021-10-26 | 北京字节跳动网络技术有限公司 | 用于生成物品展示图的方法、装置、电子设备和介质 |
CN114529690B (zh) * | 2020-10-30 | 2024-02-27 | 北京字跳网络技术有限公司 | 增强现实场景呈现方法、装置、终端设备和存储介质 |
CN112650422B (zh) * | 2020-12-17 | 2022-07-29 | 咪咕文化科技有限公司 | 设备的ar交互方法、装置、电子设备及存储介质 |
CN113221043A (zh) * | 2021-05-31 | 2021-08-06 | 口碑(上海)信息技术有限公司 | 图片生成方法、装置、计算机设备及计算机可读存储介质 |
CN113704527B (zh) * | 2021-09-02 | 2022-08-12 | 北京城市网邻信息技术有限公司 | 三维展示方法及三维展示装置、存储介质 |
CN114900679B (zh) * | 2022-05-25 | 2023-11-21 | 安天科技集团股份有限公司 | 一种三维模型展示方法、装置、电子设备及可读存储介质 |
CN115222896B (zh) * | 2022-09-20 | 2023-05-23 | 荣耀终端有限公司 | 三维重建方法、装置、电子设备及计算机可读存储介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6094198A (en) * | 1994-01-10 | 2000-07-25 | Cognitens, Ltd. | System and method for reconstructing surface elements of solid objects in a three-dimensional scene from a plurality of two dimensional images of the scene |
US20030198377A1 (en) * | 2002-04-18 | 2003-10-23 | Stmicroelectronics, Inc. | Method and system for 3D reconstruction of multiple views with altering search path and occlusion modeling |
CN101877143A (zh) * | 2009-12-09 | 2010-11-03 | 中国科学院自动化研究所 | 一种二维图像组的三维场景重建方法 |
CN103759670A (zh) * | 2014-01-06 | 2014-04-30 | 四川虹微技术有限公司 | 一种基于数字近景摄影的物体三维信息获取方法 |
CN104200523A (zh) * | 2014-09-11 | 2014-12-10 | 中国科学院自动化研究所 | 一种融合附加信息的大场景三维重建方法 |
CN104699842A (zh) * | 2015-03-31 | 2015-06-10 | 百度在线网络技术(北京)有限公司 | 图片展示方法和装置 |
Family Cites Families (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5913018A (en) * | 1996-07-24 | 1999-06-15 | Adobe Systems Incorporated | Print band rendering system |
JP3913410B2 (ja) * | 1999-07-23 | 2007-05-09 | 富士フイルム株式会社 | 画像検索方法 |
US6813395B1 (en) | 1999-07-14 | 2004-11-02 | Fuji Photo Film Co., Ltd. | Image searching method and image processing method |
US7292257B2 (en) | 2004-06-28 | 2007-11-06 | Microsoft Corporation | Interactive viewpoint video system and process |
KR100716977B1 (ko) | 2004-07-23 | 2007-05-10 | 삼성전자주식회사 | 디지털 영상 기기 |
JP4631731B2 (ja) | 2005-02-04 | 2011-02-16 | セイコーエプソン株式会社 | 動画に基づいて行う印刷 |
KR100855657B1 (ko) | 2006-09-28 | 2008-09-08 | 부천산업진흥재단 | 단안 줌 카메라를 이용한 이동로봇의 자기위치 추정 시스템및 방법 |
JP4939968B2 (ja) * | 2007-02-15 | 2012-05-30 | 株式会社日立製作所 | 監視画像処理方法、監視システム及び監視画像処理プログラム |
AU2008271910A1 (en) * | 2007-06-29 | 2009-01-08 | Three Pixels Wide Pty Ltd | Method and system for generating a 3D model from images |
JP5012615B2 (ja) * | 2008-03-27 | 2012-08-29 | ソニー株式会社 | 情報処理装置、および画像処理方法、並びにコンピュータ・プログラム |
KR100983007B1 (ko) | 2008-03-28 | 2010-09-17 | 나한길 | 인물 사진 품질 검사 시스템 및 방법 |
CN101339646B (zh) * | 2008-07-08 | 2010-06-02 | 庞涛 | 一种适用于互联网的三维图像制作及其互动展示方法 |
CN101763632B (zh) | 2008-12-26 | 2012-08-08 | 华为技术有限公司 | 摄像机标定的方法和装置 |
US20140016117A1 (en) * | 2009-09-23 | 2014-01-16 | Syracuse University | Noninvasive, continuous in vitro simultaneous measurement of turbidity and concentration |
US8121618B2 (en) * | 2009-10-28 | 2012-02-21 | Digimarc Corporation | Intuitive computing methods and systems |
CN101916456B (zh) * | 2010-08-11 | 2012-01-04 | 无锡幻影科技有限公司 | 一种个性化三维动漫的制作方法 |
JP5289412B2 (ja) * | 2010-11-05 | 2013-09-11 | 株式会社デンソーアイティーラボラトリ | 局所特徴量算出装置及び方法、並びに対応点探索装置及び方法 |
JP5740210B2 (ja) * | 2011-06-06 | 2015-06-24 | 株式会社東芝 | 顔画像検索システム、及び顔画像検索方法 |
WO2012172548A1 (en) * | 2011-06-14 | 2012-12-20 | Youval Nehmadi | Method for translating a movement and an orientation of a predefined object into a computer generated data |
US9229613B2 (en) * | 2012-02-01 | 2016-01-05 | Facebook, Inc. | Transitions among hierarchical user interface components |
US9519973B2 (en) * | 2013-09-08 | 2016-12-13 | Intel Corporation | Enabling use of three-dimensional locations of features images |
CN103971399B (zh) * | 2013-01-30 | 2018-07-24 | 深圳市腾讯计算机系统有限公司 | 街景图像过渡方法和装置 |
US9311756B2 (en) * | 2013-02-01 | 2016-04-12 | Apple Inc. | Image group processing and visualization |
US10115033B2 (en) * | 2013-07-30 | 2018-10-30 | Kodak Alaris Inc. | System and method for creating navigable views |
CN103747058B (zh) * | 2013-12-23 | 2018-02-09 | 乐视致新电子科技(天津)有限公司 | 一种展示图片的方法和装置 |
-
2015
- 2015-03-31 CN CN201510150163.2A patent/CN104699842B/zh active Active
- 2015-12-28 EP EP15886720.0A patent/EP3279803B1/en active Active
- 2015-12-28 KR KR1020167027050A patent/KR101820349B1/ko active IP Right Grant
- 2015-12-28 US US15/301,356 patent/US10410397B2/en active Active
- 2015-12-28 WO PCT/CN2015/099164 patent/WO2016155377A1/zh active Application Filing
- 2015-12-28 JP JP2016560524A patent/JP6298540B2/ja active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6094198A (en) * | 1994-01-10 | 2000-07-25 | Cognitens, Ltd. | System and method for reconstructing surface elements of solid objects in a three-dimensional scene from a plurality of two dimensional images of the scene |
US20030198377A1 (en) * | 2002-04-18 | 2003-10-23 | Stmicroelectronics, Inc. | Method and system for 3D reconstruction of multiple views with altering search path and occlusion modeling |
CN101877143A (zh) * | 2009-12-09 | 2010-11-03 | 中国科学院自动化研究所 | 一种二维图像组的三维场景重建方法 |
CN103759670A (zh) * | 2014-01-06 | 2014-04-30 | 四川虹微技术有限公司 | 一种基于数字近景摄影的物体三维信息获取方法 |
CN104200523A (zh) * | 2014-09-11 | 2014-12-10 | 中国科学院自动化研究所 | 一种融合附加信息的大场景三维重建方法 |
CN104699842A (zh) * | 2015-03-31 | 2015-06-10 | 百度在线网络技术(北京)有限公司 | 图片展示方法和装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3279803A4 * |
Also Published As
Publication number | Publication date |
---|---|
US20170186212A1 (en) | 2017-06-29 |
EP3279803A1 (en) | 2018-02-07 |
EP3279803B1 (en) | 2020-08-12 |
JP2017518556A (ja) | 2017-07-06 |
CN104699842A (zh) | 2015-06-10 |
US10410397B2 (en) | 2019-09-10 |
KR101820349B1 (ko) | 2018-01-19 |
JP6298540B2 (ja) | 2018-03-20 |
CN104699842B (zh) | 2019-03-26 |
KR20160130793A (ko) | 2016-11-14 |
EP3279803A4 (en) | 2019-03-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016155377A1 (zh) | 图片展示方法和装置 | |
US10540806B2 (en) | Systems and methods for depth-assisted perspective distortion correction | |
CN109615703B (zh) | 增强现实的图像展示方法、装置及设备 | |
US10080006B2 (en) | Stereoscopic (3D) panorama creation on handheld device | |
US11196939B2 (en) | Generating light painting images from a sequence of short exposure images | |
US9047706B1 (en) | Aligning digital 3D models using synthetic images | |
WO2015180659A1 (zh) | 图像处理方法和图像处理装置 | |
JP7116142B2 (ja) | 任意ビューの生成 | |
JPWO2018047687A1 (ja) | 三次元モデル生成装置及び三次元モデル生成方法 | |
US11620730B2 (en) | Method for merging multiple images and post-processing of panorama | |
JP6452360B2 (ja) | 画像処理装置、撮像装置、画像処理方法およびプログラム | |
JP6272071B2 (ja) | 画像処理装置、画像処理方法及びプログラム | |
CN114981845A (zh) | 图像扫描方法及装置、设备、存储介质 | |
JP2022518402A (ja) | 三次元再構成の方法及び装置 | |
KR101836238B1 (ko) | 다운 스케일링과 비용함수를 이용한 이미지 연결선 추정방법 | |
JP2022516298A (ja) | 対象物を3d再構築するための方法 | |
JP2016071496A (ja) | 情報端末装置、方法及びプログラム | |
WO2021190655A1 (en) | Method for merging multiple images and post-processing of panorama | |
CN109348132B (zh) | 全景拍摄方法及装置 | |
JP7322235B2 (ja) | 画像処理装置、画像処理方法、およびプログラム | |
CN116029931A (zh) | 基于智能云的图像增强系统、生成增强图像的方法 | |
JP5535840B2 (ja) | 3次元画像生成装置 | |
JPWO2018117099A1 (ja) | 画像処理装置及びプログラム | |
CN108876890A (zh) | 一种用于分析和操作图像和视频的方法 | |
WO2019071386A1 (zh) | 一种图像数据处理方法及装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 20167027050 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2016560524 Country of ref document: JP Kind code of ref document: A |
|
REEP | Request for entry into the european phase |
Ref document number: 2015886720 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15301356 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15886720 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2015886720 Country of ref document: EP |