CN113160308A - Image processing method and device, electronic equipment and storage medium - Google Patents
Image processing method and device, electronic equipment and storage medium
- Publication number
- CN113160308A CN113160308A CN202110378457.6A CN202110378457A CN113160308A CN 113160308 A CN113160308 A CN 113160308A CN 202110378457 A CN202110378457 A CN 202110378457A CN 113160308 A CN113160308 A CN 113160308A
- Authority
- CN
- China
- Prior art keywords
- depth information
- panoramic picture
- embedded model
- pixel
- embedded
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention provides an image processing method and device, an electronic device, and a storage medium, wherein the method comprises the following steps: acquiring depth information of a panoramic picture and depth information of an embedded model, wherein the depth information of the panoramic picture comprises the depth information of each pixel of the panoramic picture, and the depth information of the embedded model comprises the depth information of each pixel of the embedded model that is finally drawn on the screen; comparing the depth information of the panoramic picture with the depth information of the embedded model, and confirming the occlusion relationship between the embedded model and the panoramic picture in the scene; and setting pixel values of the panoramic picture and the embedded model in the scene according to the occlusion relationship, so as to fuse the panoramic picture and the embedded model. The method enables the panoramic picture and the embedded model to form a correct occlusion relationship, thereby improving the comprehensive expressive power of the panoramic picture.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
At present, due to performance and bandwidth limitations of web-client hardware, a high-precision scene model cannot be displayed directly. The high-precision model is therefore usually rendered into a panoramic picture that is displayed on the web page. Based on the high-precision model, interactive pictures or videos can be embedded in the panoramic picture, which increases interactivity while providing stronger information expression.
However, since the panoramic picture is a two-dimensional picture, the relative positions of objects in the scene model are lost in the rendering process, which results in an incorrect occlusion relationship between the embedded picture or video and the panoramic picture.
Disclosure of Invention
The invention provides an image processing method, an image processing device, and an electronic device, which are used to overcome the incorrect occlusion between a panoramic picture and an embedded model in the prior art and to achieve a correct fusion of the panoramic picture and the embedded model.
The invention provides an image processing method, which comprises the following steps:
acquiring depth information of the panoramic picture and depth information of the embedded model; wherein the depth information of the panoramic picture comprises depth information of each pixel of the panoramic picture, and the depth information of the embedded model comprises depth information of each pixel of the embedded model which is finally drawn on a screen;
comparing the depth information of the panoramic picture with the depth information of the embedded model, and confirming the occlusion relationship between the embedded model and the panoramic picture in the scene;
and setting pixel values of the panoramic picture and the embedded model in the scene according to the occlusion relationship, so as to fuse the panoramic picture and the embedded model.
According to an image processing method provided by the present invention, the acquiring depth information of the panoramic picture and depth information of the embedded model includes:
rendering a panoramic picture, calculating depth information corresponding to each pixel, and storing the depth information of the panoramic picture;
mapping the panoramic picture into the scene, and determining the position and the orientation of the camera;
acquiring an embedded model to which a picture and/or a video needs to be attached, and determining the size and the pose of the embedded model;
and calculating the depth information of the embedded model under the current camera.
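The last step above, computing the embedded model's depth under the current camera, can be sketched as follows; the sample points, the camera position, and the use of Euclidean point-to-camera distance as the depth value (a real renderer would typically use camera-space z from the projection) are illustrative assumptions, not taken from the patent:

```python
import math

def model_pixel_depth(points, cam_pos):
    """For each 3D point of the embedded model that will be drawn on
    screen, compute its depth, taken here as the Euclidean distance
    from the current camera position."""
    return [(p, math.dist(p, cam_pos)) for p in points]

# Illustrative model corner points and a camera at the origin
corners = [(0.0, 0.0, 5.0), (3.0, 4.0, 0.0)]
print(model_pixel_depth(corners, (0.0, 0.0, 0.0)))  # both depths are 5.0
```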
According to an image processing method provided by the present invention, comparing the depth information of the panoramic picture with the depth information of the embedded model, and confirming the occlusion relationship between the embedded model and the panoramic picture in the scene comprises:
comparing the depth information of the embedded model pixels with the depth information of the panoramic picture pixels;
if the depth information of the embedded model pixel is larger than the depth information of the panoramic picture pixel, determining that the panoramic picture pixel blocks the embedded model pixel;
and if the depth information of the panoramic picture pixel is greater than the depth information of the embedded model pixel, determining that the embedded model pixel blocks the panoramic picture pixel.
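The two comparison rules above can be sketched as a single per-pixel pass; the flat lists of depth values and the string labels naming the occluded layer are illustrative choices, not part of the claimed method:

```python
def occluded_layer(model_depth, pano_depth):
    """Per-pixel depth comparison: the pixel with the greater depth is
    the occluded one. Returns 'model', 'panorama', or None (equal
    depths) for each pixel."""
    out = []
    for dm, dp in zip(model_depth, pano_depth):
        if dm > dp:
            out.append('model')      # panorama pixel occludes the model pixel
        elif dp > dm:
            out.append('panorama')   # model pixel occludes the panorama pixel
        else:
            out.append(None)
    return out

print(occluded_layer([2.0, 5.0, 1.0], [3.0, 1.0, 1.0]))
# → ['panorama', 'model', None]
```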
According to an image processing method provided by the present invention, setting pixel values of the panoramic picture and the embedded model in a scene according to the occlusion relationship to fuse the panoramic picture and the embedded model comprises:
and according to the occlusion relationship, setting the transparency of the occluded panoramic picture pixels or embedded model pixels to 0, so as to fuse the panoramic picture and the embedded model.
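A minimal sketch of this fusion rule for a single pixel, assuming grayscale pixels stored as (value, alpha) pairs and a simple source-over composite; both of these representations are illustrative assumptions:

```python
def fuse_pixel(pano, model, occluded):
    """Fuse one panorama pixel and one embedded-model pixel, each given
    as (gray_value, alpha): set the alpha of the occluded pixel to 0,
    then composite the model over the panorama."""
    (pc, pa), (mc, ma) = pano, model
    if occluded == 'panorama':
        pa = 0.0   # hide the occluded panorama pixel
    elif occluded == 'model':
        ma = 0.0   # hide the occluded model pixel
    return mc * ma + pc * pa * (1.0 - ma)

print(fuse_pixel((0.2, 1.0), (0.9, 1.0), 'model'))     # → 0.2 (panorama wins)
print(fuse_pixel((0.2, 1.0), (0.9, 1.0), 'panorama'))  # → 0.9 (model wins)
```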
The present invention also provides an image processing apparatus comprising:
the depth information acquisition module is used for acquiring depth information of the panoramic picture and depth information of the embedded model; wherein the depth information of the panoramic picture comprises depth information of each pixel of the panoramic picture, and the depth information of the embedded model comprises depth information of each pixel of the embedded model which is finally drawn on a screen;
the fusion module is used for comparing the depth information of the panoramic picture with the depth information of the embedded model and confirming the occlusion relationship between the embedded model and the panoramic picture in the scene; and for setting pixel values of the panoramic picture and the embedded model in the scene according to the occlusion relationship, so as to fuse the panoramic picture and the embedded model.
According to an image processing apparatus provided by the present invention, the depth information acquisition module is further configured to:
rendering a panoramic picture, calculating depth information corresponding to each pixel, and storing the depth information of the panoramic picture;
mapping the panoramic picture into the scene, and determining the position and the orientation of the camera;
acquiring an embedded model to which a picture and/or a video needs to be attached, and determining the size and the pose of the embedded model;
and calculating the depth information of the embedded model under the current camera.
According to an image processing apparatus provided by the present invention, the fusion module is further configured to:
comparing the depth information of the embedded model pixels with the depth information of the panoramic picture pixels;
if the depth information of the embedded model pixel is larger than the depth information of the panoramic picture pixel, determining that the panoramic picture pixel blocks the embedded model pixel;
and if the depth information of the panoramic picture pixel is greater than the depth information of the embedded model pixel, determining that the embedded model pixel blocks the panoramic picture pixel.
According to an image processing apparatus provided by the present invention, the fusion module is further configured to:
and according to the occlusion relationship, setting the transparency of the occluded panoramic picture pixels or embedded model pixels to 0, so as to fuse the panoramic picture and the embedded model.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the image processing method as described in any one of the above when executing the program.
The invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the image processing method as any one of the above.
According to the image processing method and device provided by the invention, the depth information of the panoramic picture and the depth information of the embedded model are obtained during the rendering of the panoramic picture, so that the relative positions of objects in the scene model are preserved. The pixel values of the embedded model and the panoramic picture in the scene are then set according to this depth information, and the embedded model and the panoramic picture are fused, so that the panoramic picture and the embedded model form a correct occlusion relationship and the comprehensive expressive power of the panoramic picture is improved.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flow chart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a panoramic picture with an embedded picture or video provided by an embodiment of the present invention;
fig. 3 is a schematic flowchart of a process of obtaining depth information of a panoramic picture and depth information of an embedded model according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of a process of comparing depth information of a panoramic picture with depth information of an embedded model to confirm an occlusion relationship between the embedded model and the panoramic picture in a scene according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention, and as shown in fig. 1, the method includes:
and step 110, acquiring the depth information of the panoramic picture and the depth information of the embedded model.
Wherein the depth information of the panoramic picture includes depth information of each pixel of the panoramic picture, and the depth information of the embedded model includes depth information of each pixel of the embedded model that is finally drawn on a screen.
A panoramic picture is formed by stitching one or more groups of pictures captured by a camera over a full 360 degrees; it comprehensively displays all scenery within a 360-degree spherical range and preserves the realism of the scene to the greatest extent, with the flat pictures processed by software to produce a three-dimensional sense of space. The embedded model is a model embedded into the scene so that a picture or video can be attached to it. Depth information represents the distance from the image capture device to each point in the scene. The depth information of the panoramic picture and of the embedded model is usually obtained by performing a projection transformation with the parameters of the rendering camera, yielding the depth corresponding to each pixel.
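As one possible illustration of how a per-pixel depth could be derived for a panoramic picture, the sketch below maps an equirectangular pixel to a unit viewing ray; intersecting that ray with the scene model would yield the depth stored for the pixel. The equirectangular projection is an assumption for illustration only; the patent does not fix a particular projection:

```python
import math

def pixel_to_ray(u, v, width, height):
    """Map an equirectangular panorama pixel (u, v) to a unit viewing
    ray direction in camera space. u runs left-to-right across the
    panorama, v top-to-bottom."""
    lon = (u / width) * 2.0 * math.pi - math.pi    # longitude in [-pi, pi]
    lat = math.pi / 2.0 - (v / height) * math.pi   # latitude in [-pi/2, pi/2]
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The center pixel looks straight ahead along +z
print(pixel_to_ray(512.0, 256.0, 1024, 512))
```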
And 120, comparing the depth information of the panoramic picture with the depth information of the embedded model, and confirming the shielding relation between the embedded model and the panoramic picture in the scene.
Specifically, the deeper pixel regions are far from the image collector, and are blocked by the less deep pixel regions, and cannot be displayed in the current field.
And step 130, setting pixel values of the panoramic picture and the embedded model in the scene according to the occlusion relationship, so as to fuse the panoramic picture and the embedded model.
To display the correct occlusion relationship between the panoramic picture and the embedded model, the occluded region needs to be made transparent, which is usually done by changing the value of the alpha channel of the occluded pixels.
Specifically, fig. 2 is a panoramic picture with an embedded picture or video. As shown in fig. 2, there are several possible embedding manners, such as embedding the model above the panoramic picture or inserting it at an inclined angle; different embedding positions may cause the occlusion relationship between the embedded model and the panoramic picture to be incorrect, which degrades the viewing effect. The method first obtains the depth information of the panoramic picture serving as the background and of the embedded model to which the picture or video is to be attached, compares the two, and thereby confirms the occlusion relationship, so that the attached picture or video is displayed in whole or in part and the user experiences a three-dimensional space.
According to the method provided by the embodiment of the invention, the depth information of the panoramic picture and the depth information of the embedded model are obtained during the rendering of the panoramic picture, so that the relative positions of objects in the scene model are preserved. The pixel values of the embedded model and the panoramic picture in the scene are then set according to this depth information, and the embedded model and the panoramic picture are fused, so that the panoramic picture and the embedded model form a correct occlusion relationship and the comprehensive expressive power of the panoramic picture is improved.
Fig. 3 is a schematic flowchart of the process of obtaining the depth information of the panoramic picture and the depth information of the embedded model in step 110 according to an embodiment of the present invention, as shown in fig. 3, step 110 includes:
and 310, rendering the panoramic picture, calculating depth information corresponding to each pixel, and storing the depth information of the panoramic picture.
Specifically, in the process of rendering the panoramic picture, a projection transformation is performed using the parameters of the rendering camera, and the depth information corresponding to each pixel of the panoramic picture is calculated and stored either in the alpha channel of the panoramic picture or separately as an independent depth image.
Here, the alpha channel refers to a special, "achromatic" channel. Its value ranges from 0 to 1 and stores the contribution of the current pixel to the picture, where 0 means transparent and 1 means opaque. The alpha channel is therefore mainly used to save and edit selections. It does not have to be generated when an image file is created; it is usually created during image processing, and selection information is read from it, so it is typically discarded on output and has no effect on the final image. Sometimes, however, as when three-dimensional software renders its final output, an alpha channel is generated for later compositing in image processing software. In the embodiment of the invention, depth information may be saved in the alpha channel.
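If depth is to be saved in the alpha channel as described, it must first be normalized into the channel's 0-to-1 range. A minimal sketch, where the near/far range values are illustrative assumptions, not values given by the patent:

```python
def encode_depth_in_alpha(depth, near, far):
    """Normalize a depth value into [0, 1] so it can be stored in the
    alpha channel of the rendered panorama; values outside the
    near/far range are clamped."""
    d = (depth - near) / (far - near)
    return min(1.0, max(0.0, d))

def decode_depth_from_alpha(alpha, near, far):
    """Recover the depth value from a stored alpha."""
    return near + alpha * (far - near)

print(encode_depth_in_alpha(5.5, 0.5, 10.5))   # → 0.5
print(decode_depth_from_alpha(0.5, 0.5, 10.5)) # → 5.5
```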
A depth image is an image whose pixel values are the depths from the image capture device to points in the scene; it directly reflects the geometric shape of the visible surfaces of objects in the scene.
And step 320, mapping the panoramic picture into the scene, and determining the position and the orientation of the camera.
And 330, acquiring an embedded model to which the picture and/or the video needs to be attached, and determining the size and the pose of the embedded model.
Here, the pose represents the spatial position and orientation of an object; it is the combination of the position (x, y, z) and the attitude (RX, RY, RZ).
And step 340, calculating the depth information of the embedded model under the current camera.
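The pose determined in step 330 can be sketched as a small data structure; the field names and the use of degrees for the attitude angles are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # Position (x, y, z) plus attitude (rx, ry, rz), as described above.
    # The angle unit (degrees here) is an illustrative assumption.
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    rx: float = 0.0
    ry: float = 0.0
    rz: float = 0.0

# An embedded model 1 m to the right, 2.5 m forward, turned 90 degrees
model_pose = Pose(x=1.0, z=2.5, ry=90.0)
```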
Fig. 4 is a schematic flowchart of step 120, comparing the depth information of the panoramic picture with the depth information of the embedded model to confirm the occlusion relationship between the embedded model and the panoramic picture in the scene. As shown in fig. 4, step 120 includes:
And step 410, comparing the depth information of the embedded model pixels with the depth information of the panoramic picture pixels.
And step 420, if the depth information of the embedded model pixel is greater than the depth information of the panoramic picture pixel, determining that the panoramic picture pixel occludes the embedded model pixel.
And step 430, if the depth information of the panoramic picture pixel is greater than the depth information of the embedded model pixel, determining that the embedded model pixel occludes the panoramic picture pixel.
Specifically, pixel regions with greater depth are farther from the image capture device; they are occluded by pixel regions with smaller depth and cannot be displayed in the current field of view.
According to an embodiment of the present invention, setting the pixel values of the panoramic picture and the embedded model in the scene according to the occlusion relationship to fuse the panoramic picture and the embedded model comprises: according to the occlusion relationship, setting the transparency of the occluded panoramic picture pixels or embedded model pixels to 0, so as to fuse the panoramic picture and the embedded model.
It should be noted that, when the image of an object in three-dimensional space is converted into a two-dimensional image in the image capture device, objects closer to the device occupy a larger area of the field of view, and objects farther away may be occluded. In the generated two-dimensional image, the geometry of an occluded object is not displayed, so the occluded region needs to be made transparent, usually by changing the value of the alpha channel of the occluded pixels.
The following describes the image processing apparatus provided by the present invention, and the image processing apparatus described below and the image processing method described above may be referred to in correspondence with each other.
Fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. As shown in fig. 5, the apparatus includes a depth information acquisition module 510 and a fusion module 520.
A depth information acquisition module 510, configured to obtain the depth information of the panoramic picture and the depth information of the embedded model;
wherein the depth information of the panoramic picture comprises depth information of each pixel of the panoramic picture, and the depth information of the embedded model comprises depth information of each pixel of the embedded model which is finally drawn on a screen;
a fusion module 520, configured to compare the depth information of the panoramic picture with the depth information of the embedded model, and to confirm the occlusion relationship between the embedded model and the panoramic picture in the scene; and to set pixel values of the panoramic picture and the embedded model in the scene according to the occlusion relationship, so as to fuse the panoramic picture and the embedded model.
According to an embodiment of the present invention, the depth information acquisition module 510 is further configured to:
rendering a panoramic picture, calculating depth information corresponding to each pixel, and storing the depth information of the panoramic picture;
mapping the panoramic picture into the scene, and determining the position and the orientation of the camera;
acquiring an embedded model to which a picture and/or a video needs to be attached, and determining the size and the pose of the embedded model;
and calculating the depth information of the embedded model under the current camera.
According to an embodiment of the invention, the fusion module 520 is further configured to:
comparing the depth information of the embedded model pixels with the depth information of the panoramic picture pixels;
if the depth information of the embedded model pixel is larger than the depth information of the panoramic picture pixel, determining that the panoramic picture pixel blocks the embedded model pixel;
and if the depth information of the panoramic picture pixel is greater than the depth information of the embedded model pixel, determining that the embedded model pixel blocks the panoramic picture pixel.
According to an embodiment of the invention, the fusion module 520 is further configured to:
and according to the occlusion relationship, setting the transparency of the occluded panoramic picture pixels or embedded model pixels to 0, so as to fuse the panoramic picture and the embedded model.
According to the image processing method and device provided by the invention, the depth information of the panoramic picture and the depth information of the embedded model are obtained during the rendering of the panoramic picture, so that the relative positions of objects in the scene model are preserved. The pixel values of the embedded model and the panoramic picture in the scene are then set according to this depth information, and the embedded model and the panoramic picture are fused, so that the panoramic picture and the embedded model form a correct occlusion relationship and the comprehensive expressive power of the panoramic picture is improved.
Fig. 6 illustrates a schematic diagram of the physical structure of an electronic device, which, as shown in fig. 6, may include: a processor 610, a communications interface 620, a memory 630 and a communication bus 640, wherein the processor 610, the communications interface 620 and the memory 630 communicate with each other via the communication bus 640. The processor 610 may invoke logic instructions in the memory 630 to perform an image processing method comprising: acquiring depth information of the panoramic picture and depth information of the embedded model, wherein the depth information of the panoramic picture comprises depth information of each pixel of the panoramic picture, and the depth information of the embedded model comprises depth information of each pixel of the embedded model which is finally drawn on a screen; comparing the depth information of the panoramic picture with the depth information of the embedded model, and confirming the occlusion relationship between the embedded model and the panoramic picture in the scene; and setting pixel values of the panoramic picture and the embedded model in the scene according to the occlusion relationship, so as to fuse the panoramic picture and the embedded model.
In addition, the logic instructions in the memory 630 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the image processing method provided by the above methods, the method comprising: acquiring depth information of the panoramic picture and depth information of the embedded model, wherein the depth information of the panoramic picture comprises depth information of each pixel of the panoramic picture, and the depth information of the embedded model comprises depth information of each pixel of the embedded model which is finally drawn on a screen; comparing the depth information of the panoramic picture with the depth information of the embedded model, and confirming the occlusion relationship between the embedded model and the panoramic picture in the scene; and setting pixel values of the panoramic picture and the embedded model in the scene according to the occlusion relationship, so as to fuse the panoramic picture and the embedded model.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the image processing method provided above, the method comprising: acquiring depth information of the panoramic picture and depth information of the embedded model, wherein the depth information of the panoramic picture comprises depth information of each pixel of the panoramic picture, and the depth information of the embedded model comprises depth information of each pixel of the embedded model; comparing the depth information of the panoramic picture with the depth information of the embedded model, and confirming the occlusion relationship between the embedded model and the panoramic picture in the scene; and setting the pixel value of the overlapping area of the panoramic picture and the embedded model in the scene according to the occlusion relationship, so as to fuse the panoramic picture and the embedded model.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. An image processing method, comprising:
acquiring depth information of a panoramic picture and depth information of an embedded model; wherein the depth information of the panoramic picture comprises the depth information of each pixel of the panoramic picture, and the depth information of the embedded model comprises the depth information of each pixel of the embedded model as finally drawn on a screen;
comparing the depth information of the panoramic picture with the depth information of the embedded model, and determining the occlusion relationship between the embedded model and the panoramic picture in a scene;
and setting pixel values of the panoramic picture and the embedded model in the scene according to the occlusion relationship, so as to fuse the panoramic picture and the embedded model.
2. The image processing method according to claim 1, wherein acquiring the depth information of the panoramic picture and the depth information of the embedded model comprises:
rendering the panoramic picture, calculating the depth information corresponding to each pixel, and storing the depth information of the panoramic picture;
pasting the panoramic picture into the scene, and determining the position and orientation of a camera;
acquiring the embedded model to which a picture and/or a video is to be attached, and determining the size and pose of the embedded model;
and calculating the depth information of the embedded model under the current camera.
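The steps of claim 2 can be sketched in code. This is an illustrative reading only, not the patent's implementation: it assumes the camera pose is given as a 4×4 world-to-camera matrix, that the embedded model is represented by surface points, and that depth maps are numpy arrays; all function and variable names are hypothetical.

```python
import numpy as np

def embedded_model_depth(points_world, world_to_camera):
    """Per-point depth of the embedded model under the current camera.

    points_world: (N, 3) model surface points in world coordinates.
    world_to_camera: 4x4 rigid transform encoding the camera's position
    and orientation. The z coordinate of each point in the camera frame
    is the depth the finally drawn pixel would carry.
    """
    n = points_world.shape[0]
    homogeneous = np.hstack([points_world, np.ones((n, 1))])   # (N, 4)
    points_cam = (world_to_camera @ homogeneous.T).T           # (N, 4)
    return points_cam[:, 2]

# The panorama's depth information is stored per pixel, e.g. as a 2D map:
pano_depth = np.full((4, 8), 5.0)   # every panorama pixel 5 units away
```

With an identity camera pose, a point at world z = 5 has depth 5; moving the camera 2 units toward the scene reduces that depth to 3.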
3. The image processing method according to claim 1, wherein comparing the depth information of the panoramic picture with the depth information of the embedded model and determining the occlusion relationship between the embedded model and the panoramic picture in the scene comprises:
comparing the depth information of each embedded model pixel with the depth information of the corresponding panoramic picture pixel;
if the depth information of the embedded model pixel is greater than the depth information of the panoramic picture pixel, determining that the panoramic picture pixel occludes the embedded model pixel;
and if the depth information of the panoramic picture pixel is greater than the depth information of the embedded model pixel, determining that the embedded model pixel occludes the panoramic picture pixel.
4. The image processing method according to claim 3, wherein setting pixel values of the panoramic picture and the embedded model in a scene according to the occlusion relationship to fuse the panoramic picture and the embedded model comprises:
according to the occlusion relationship, setting the transparency of the occluded panoramic picture pixel or embedded model pixel to 0, so as to fuse the panoramic picture and the embedded model.
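The comparison in claim 3 and the fusion in claim 4 amount to a per-pixel depth test followed by zeroing the alpha of occluded pixels. A minimal numpy sketch under assumed data layouts (an RGB panorama, an RGBA model render, and per-pixel depth maps where pixels not covered by the model hold infinity); the function name is illustrative, not from the patent:

```python
import numpy as np

def fuse_by_depth(pano_rgb, pano_depth, model_rgba, model_depth):
    """Fuse the panoramic picture with the rendered embedded model.

    Where the model pixel's depth exceeds the panorama pixel's depth,
    the panorama occludes the model, so that model pixel's transparency
    is set to 0; everywhere else the model pixel covers the panorama.
    """
    alpha = model_rgba[..., 3].astype(np.float32)
    occluded = model_depth > pano_depth    # panorama is in front here
    alpha[occluded] = 0.0                  # hide occluded model pixels
    a = alpha[..., None]                   # broadcast over RGB channels
    return a * model_rgba[..., :3] + (1.0 - a) * pano_rgb
```

For a pixel where the model lies behind the panorama surface the panorama color survives; where the model lies in front, its color replaces the panorama's, which is exactly the occlusion behavior the claims describe.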
5. An image processing apparatus characterized by comprising:
the depth information acquisition module is configured to acquire depth information of a panoramic picture and depth information of an embedded model; wherein the depth information of the panoramic picture comprises the depth information of each pixel of the panoramic picture, and the depth information of the embedded model comprises the depth information of each pixel of the embedded model as finally drawn on a screen;
the fusion module is configured to compare the depth information of the panoramic picture with the depth information of the embedded model, and determine the occlusion relationship between the embedded model and the panoramic picture in a scene; and to set pixel values of the panoramic picture and the embedded model in the scene according to the occlusion relationship, so as to fuse the panoramic picture and the embedded model.
6. The image processing apparatus of claim 5, wherein the depth information acquisition module is further configured to:
render the panoramic picture, calculate the depth information corresponding to each pixel, and store the depth information of the panoramic picture;
paste the panoramic picture into the scene, and determine the position and orientation of a camera;
acquire the embedded model to which a picture and/or a video is to be attached, and determine the size and pose of the embedded model;
and calculate the depth information of the embedded model under the current camera.
7. The image processing apparatus of claim 5, wherein the fusion module is further configured to:
compare the depth information of each embedded model pixel with the depth information of the corresponding panoramic picture pixel;
if the depth information of the embedded model pixel is greater than the depth information of the panoramic picture pixel, determine that the panoramic picture pixel occludes the embedded model pixel;
and if the depth information of the panoramic picture pixel is greater than the depth information of the embedded model pixel, determine that the embedded model pixel occludes the panoramic picture pixel.
8. The image processing apparatus of claim 7, wherein the fusion module is further configured to:
according to the occlusion relationship, set the transparency of the occluded panoramic picture pixel or embedded model pixel to 0, so as to fuse the panoramic picture and the embedded model.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the image processing method according to any of claims 1 to 4 are implemented when the processor executes the program.
10. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110378457.6A CN113160308A (en) | 2021-04-08 | 2021-04-08 | Image processing method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113160308A true CN113160308A (en) | 2021-07-23 |
Family
ID=76889147
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110378457.6A Pending CN113160308A (en) | 2021-04-08 | 2021-04-08 | Image processing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113160308A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020015048A1 (en) * | 2000-06-28 | 2002-02-07 | David Nister | System and method for median fusion of depth maps |
CN105513112A (en) * | 2014-10-16 | 2016-04-20 | 北京畅游天下网络技术有限公司 | Image processing method and device |
CN108182730A (en) * | 2018-01-12 | 2018-06-19 | 北京小米移动软件有限公司 | Actual situation object synthetic method and device |
CN110827376A (en) * | 2018-08-09 | 2020-02-21 | 北京微播视界科技有限公司 | Augmented reality multi-plane model animation interaction method, device, equipment and storage medium |
CN111223192A (en) * | 2020-01-09 | 2020-06-02 | 北京华捷艾米科技有限公司 | Image processing method and application method, device and equipment thereof |
2021-04-08: Application CN202110378457.6A filed; published as CN113160308A, status Pending.
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||