CN114266854A - Image processing method and device, electronic equipment and readable storage medium

Publication number: CN114266854A
Application number: CN202111618393.9A
Authority: CN (China)
Prior art keywords: model, color, map, point, dimensional space
Other languages: Chinese (zh)
Inventor: Not disclosed
Current/Original Assignee: Beijing Chengshi Wanglin Information Technology Co Ltd
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classification: Processing Or Creating Images (AREA)

Abstract

The disclosure provides an image processing method, an image processing apparatus, an electronic device, and a readable storage medium. The method comprises the following steps: acquiring data describing a three-dimensional space and a target object in the three-dimensional space, and generating a three-dimensional space model and a target object model based on the data, wherein the target object model is located at a preset position in the three-dimensional space model, the model of a first object has a default map, and the first object comprises the three-dimensional space and/or the target object; acquiring one or more target images and a depth map captured at a first location in the three-dimensional space; for a point on the model of the first object, determining whether to use a color obtained from the default map or a color obtained from the target image as the color of the point, according to the distance from the point to the first location and the depth corresponding to the point determined from the depth map; and generating a map based on the determined colors of the points. The generated model and its map can provide a more stereoscopic display effect and give the user an intuitive visual experience.

Description

Image processing method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of information processing, and more particularly, to a method and an apparatus for image processing, an electronic device, and a readable storage medium.
Background
With the continuous development of technology, many applications can present a three-dimensional (3D) space digitally, so that a user can conveniently and quickly learn about a three-dimensional space of interest without visiting it; for example, a user may wish to browse a digitized three-dimensional space of a house over a network to obtain house listing information.
In the prior art, digitally presenting a three-dimensional space includes presenting it in the form of panoramic navigation, two-dimensional models, three-dimensional models, and the like. In schemes that represent a three-dimensional space as a three-dimensional model, a panorama of the space is typically acquired, cut, and pasted onto simple planar patches of the three-dimensional space model. However, such approaches do not take into account objects (e.g., furniture) in the three-dimensional space, so the display effect is often not sufficiently stereoscopic to give the user an intuitive visual experience.
Therefore, an image processing method having a better stereoscopic display effect is desired.
Disclosure of Invention
In view of the above problems, the present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a readable storage medium, where the model and the map of the model obtained by the method can provide a better stereoscopic display effect, thereby giving the user an intuitive visual experience.
According to an aspect of the present disclosure, there is provided an image processing method including: acquiring data describing a three-dimensional space and a target object in the three-dimensional space, and generating a three-dimensional space model and a target object model based on the data, wherein the target object model is located at a preset position in the three-dimensional space model, a model of a first object has a default map, and the first object comprises the three-dimensional space and/or the target object; acquiring one or more target images and depth maps relating to the first object captured at a first location in the three-dimensional space; determining, for one or more points on the model of the first object, whether to use a color obtained from the default map or a color obtained from the target image as the color of the point, according to the distance of the point to the first location and the depth corresponding to the point determined from the depth map; and generating a map of the model of the first object based on the determined colors of the points.
According to some embodiments of the disclosure, wherein determining to use a color obtained from the default map or a color obtained from the target image as the color of the point according to the distance of the point to the first location and the depth determined from the depth map corresponding to the point comprises: determining whether the distance of the point to the first location is greater than the depth corresponding to the point; when the distance is greater than the corresponding depth, using a color obtained from the default map as the color of the point; and when the distance is equal to or less than the corresponding depth, using a color acquired from the target image as the color of the point.
According to some embodiments of the disclosure, wherein using the color obtained from the target image as the color of the point comprises: sampling the target image to obtain a sampling color according to the direction of the point relative to the first position, and taking the obtained sampling color as the color of the point.
According to some embodiments of the disclosure, wherein the default map contains a low-modulus map, using a color obtained from the default map as the color of the point comprises: sampling the position corresponding to the point in the low-modulus map to obtain a sampling color, and taking the obtained sampling color as the color of the point.
According to some embodiments of the disclosure, the corresponding depth is determined based on a direction of the point relative to the first position.
According to some embodiments of the present disclosure, the point on the three-dimensional space model is a vertex of the three-dimensional space model and/or the point on the target object model is a vertex of the target object model.
According to some embodiments of the disclosure, wherein the target object model comprises a plurality of target object models, the method further comprising: for multiple target object models, two or more of the target object models are merged to generate a single target object model.
According to some embodiments of the disclosure, the method further comprises: the generated map of the model of the first object is subjected to a compression process to generate a compressed map.
According to some embodiments of the disclosure, the method further comprises: performing a predetermined process on the three-dimensional space model, the target object model, and the compressed map to generate a resource having a predetermined format; and storing the resource in a non-volatile computer-readable storage medium.
According to some embodiments of the present disclosure, the target image comprises a panoramic image, and/or the depth map comprises a panoramic depth map.
According to another aspect of the present disclosure, there is also provided an image processing method including: acquiring data describing a three-dimensional space and generating a three-dimensional space model based on the data, wherein the three-dimensional space model has a default map; acquiring one or more target images and depth maps relating to the three-dimensional space captured at a first location in the three-dimensional space; for one or more points on the three-dimensional space model, determining to use a color obtained from the default map or a color obtained from the target image as the color of the point according to the distance of the point to the first position and the depth determined from the depth map corresponding to the point; and generating a map of the three-dimensional space model based on the determined color of the point.
According to another aspect of the present disclosure, there is also provided an image processing apparatus including: a data acquisition and model generation unit configured to acquire data describing a three-dimensional space and a target object in the three-dimensional space, and generate a three-dimensional space model and a target object model based on the data, wherein the target object model is located at a preset position in the three-dimensional space model, and a model of a first object has a default map, the first object includes the three-dimensional space and/or the target object; a target image and depth map acquisition unit configured to acquire one or more target images and depth maps relating to the first object captured at a first location in the three-dimensional space; a color determination unit configured to determine, for one or more points on a model of the first object, to use, as a color of the point, a color obtained from the default map or a color obtained from the target image, according to a distance of the point to the first position and a depth corresponding to the point determined from the depth map; and a map generation unit configured to generate a map of the model of the first object based on the determined color of the point.
According to some embodiments of the disclosure, wherein the color determination unit is further configured to: determining whether the distance of the point to the first location is greater than the depth corresponding to the point; when the distance is greater than the corresponding depth, using a color obtained from the default map as the color of the point; when the distance is equal to or less than the corresponding depth, a color acquired from the target image is used as the color of the point.
According to some embodiments of the present disclosure, wherein the color determination unit further comprises a target image sampling module configured to sample the target image to obtain a sampled color according to an orientation of the point relative to the first position; the color determination unit is further configured to take the acquired sampling color as the color of the point.
According to some embodiments of the present disclosure, wherein the default map comprises a low-modulus map, the color determination unit further comprises a map sampling module configured to sample a location in the low-modulus map corresponding to the point to obtain a sampled color; the color determination unit is further configured to take the acquired sampling color as the color of the point.
According to some embodiments of the disclosure, the corresponding depth is determined based on a direction of the point relative to the first position.
According to some embodiments of the present disclosure, the point on the model of the first object is a vertex of the three-dimensional space model and/or the target object model.
According to some embodiments of the disclosure, wherein the target object model comprises a plurality of target object models, the apparatus further comprises a model merging unit configured to: for multiple target object models, two or more of the target object models are merged to generate a single target object model.
According to some embodiments of the disclosure, the apparatus further comprises a map compression unit configured to: the generated map of the model of the first object is subjected to a compression process to generate a compressed map.
According to some embodiments of the disclosure, the apparatus further comprises a processing unit configured to: performing a predetermined process on the three-dimensional space model, the target object model, and the compressed map to generate a resource having a predetermined format; and storing the resource in a non-volatile computer-readable storage medium.
According to some embodiments of the present disclosure, the target image comprises a panoramic image, and/or the depth map comprises a panoramic depth map.
According to another aspect of the present disclosure, there is also provided an electronic device including: a processor; and a memory, wherein the memory has stored therein computer readable code which, when executed by the processor, implements the image processing method of any of the above methods.
According to another aspect of the present disclosure, there is also provided a non-transitory computer readable storage medium storing computer readable instructions, wherein the computer readable instructions, when executed by a processor, implement the image processing method of any one of the above methods.
Therefore, according to the method and apparatus of the embodiments of the present disclosure, a model with a default map is generated for a three-dimensional space and/or a target object in the three-dimensional space, and a target image and a depth map captured at a position in the three-dimensional space are acquired; further, for each point on the model, whether to generate the map of the model using the default map or the color information of the target image is determined based on the distance from the point to that position and the corresponding depth determined from the depth map. Therefore, the generated model and its map can provide a more stereoscopic display effect and give the user an intuitive visual experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly introduced below. It is apparent that the drawings in the following description are only exemplary embodiments of the disclosure, and that other drawings may be derived from those drawings by a person of ordinary skill in the art without inventive effort.
FIG. 1 is a diagram illustrating the generation of a three-dimensional model and a map of the three-dimensional model in the prior art;
fig. 2 shows a flow chart of an image processing method according to a first embodiment of the present disclosure;
FIG. 3 is a schematic diagram showing the positional relationship of points in a model according to a first embodiment of the present disclosure;
fig. 4 shows a flowchart of determining a sampling color of an image processing method according to a first embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of a three-dimensional space model and a target object model generated in accordance with a first embodiment of the present disclosure and a map of the model;
fig. 6 shows a flow chart of an image processing method according to a second embodiment of the present disclosure;
fig. 7 shows a flow chart of an image processing method according to a third embodiment of the present disclosure;
fig. 8 shows a block diagram of an image processing apparatus according to a fourth embodiment of the present disclosure;
fig. 9 shows a block diagram of a color determination unit in an image processing apparatus according to a fourth embodiment of the present disclosure;
fig. 10 shows a block diagram of an image processing apparatus according to a fifth embodiment of the present disclosure;
FIG. 11 illustrates a block diagram of an electronic device in accordance with some embodiments of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described below clearly and completely with reference to the accompanying drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and similar terms in this disclosure does not indicate any order, quantity, or importance, but is used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the elements or items preceding the word encompass the elements or items listed after the word and their equivalents, without excluding other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships; when the absolute position of the object being described changes, the relative positional relationships may change accordingly. To keep the following description of the embodiments of the present disclosure clear and concise, detailed descriptions of some known functions and components have been omitted.
Flow charts are used in this disclosure to illustrate the steps of methods according to embodiments of the disclosure. It should be understood that the steps are not necessarily performed precisely in the order shown; rather, various steps may be processed in reverse order or simultaneously. Meanwhile, other operations may be added to these processes, or one or more steps may be removed from them.
In the specification and drawings of the present disclosure, elements are described in singular or plural forms according to embodiments. However, the singular and plural forms are appropriately selected for the proposed cases only for convenience of explanation and are not intended to limit the present disclosure thereto. Thus, the singular may include the plural and the plural may also include the singular, unless the context clearly dictates otherwise.
Further, in this disclosure, when the term "map" is used as a noun, it denotes a picture (or texture, etc.) that has a particular correspondence (e.g., a mapping) to a surface of a three-dimensional model and that can be rendered together with the three-dimensional model for presentation to a user. When used as a verb, "map" is sometimes referred to as texture mapping, meaning that graphics processing techniques are used to apply a picture (or texture, etc.) to a surface of a three-dimensional model, forming a particular correspondence for subsequent graphics rendering or display.
FIG. 1 is a diagram illustrating a three-dimensional model and a map of the three-dimensional model generated in the prior art.
As shown in fig. 1, a 3D house layout, which is an example of a three-dimensional space, is obtained by cutting a panorama and then pasting it onto simple planar patches. The pieces of furniture, such as the sofa, end tables, and kitchen fixtures, are flat sheets simply laid on the floor. Because the objects in the three-dimensional space are not taken into account, the display effect of the resulting 3D house layout is not sufficiently stereoscopic, and it is difficult to give the user an intuitive visual experience.
In view of such a situation, the present disclosure may provide a better stereoscopic display effect to a user by selectively using color information of a target image or a default map to obtain a map of a model for each point on a model of a three-dimensional space and an object, thereby giving the user an intuitive visual experience.
< first embodiment >
Fig. 2 shows a flowchart of an image processing method according to a first embodiment of the present disclosure. As shown in fig. 2, the method comprises the steps of:
step S210, acquiring data describing a three-dimensional space and a target object in the three-dimensional space, and generating a three-dimensional space model and a target object model based on the data, wherein the target object model is located at a preset position in the three-dimensional space model, the model of a first object is provided with a default map, and the first object comprises the three-dimensional space and/or the target object;
step S220 of acquiring one or more target images and a depth map relating to a first object captured at a first location in three-dimensional space;
step S230, aiming at one or more points on the model of the first object, determining to use the color obtained from the default map or the color obtained from the target image as the color of the point according to the distance from the point to the first position and the depth determined from the depth map and corresponding to the point; and
in step S240, a map of the model of the first object is generated based on the determined color of the point.
Specifically, first, in step S210, data describing a three-dimensional space and a target object in the three-dimensional space are acquired, and a three-dimensional space model and a target object model are generated based on the data, wherein the target object model is located at a preset position in the three-dimensional space model, a model of a first object has a default map, and the first object includes the three-dimensional space and/or the target object.
In one example, the three-dimensional space may be an interior space and/or an exterior space of the physical scene. The physical scene may be any realistic physical scene capable of capturing image information using the device, and the target object may be any object present in the physical scene. For example, the internal space of the physical scene may be an internal space of a house, an office, or the like in a building, or an internal space of a vehicle, or the like, and the target object may be furniture, or the like in a house, an office, or a steering wheel, a seat, or the like in a vehicle. Additionally or alternatively, the exterior space of the physical scene may be an outdoor garden, street, natural landscape, etc., and the target object may be a sculpture, street lamp, flower bed, etc., therein.
In one example, the data describing the three-dimensional space and the target object in the three-dimensional space may be data in a format that facilitates generation of a three-dimensional space model and/or a target object model, such as the json format, or may be data that only describes the coordinates of the three-dimensional space and of the target object in the three-dimensional space. In addition, the data may also describe other properties, such as the materials of the three-dimensional space and of the target object in the three-dimensional space. In some examples, the data describing the three-dimensional space and the data describing the model in the three-dimensional space may each have a different data format. Further, these data may be acquired together or separately.
In one example, the preset position of the target object model in the three-dimensional space model may be determined according to the position of the target object in the actual three-dimensional space, so that the positional relationship of the generated three-dimensional space model and the target object model corresponds to the positional relationship of the actual three-dimensional space and the target object therein, thereby conforming the generated model to the actual scene. It should be noted that the shapes of the generated three-dimensional space model and the target object model correspond to the shapes of the actual three-dimensional space and the target object therein.
In one example, the default map may be a map that was previously obtained or made by a technician. The default map may be used to present a stereoscopic display effect to the user along with the model. For example, the model and a default map of the model may be rendered to present the digitized three-dimensional space to the user. In one example, the data describing the three-dimensional space and the target object in the three-dimensional space and the default map may be obtained from a local storage device, or may be obtained from a server, a database, or the like via a network.
According to an embodiment of the present disclosure, the target object model may include a plurality of target object models, and two or more of them may be merged to generate a single target object model. For example, target object models having similar features, including shared points, may be merged. Because such models contain the same points, merging them reduces the data that needs to be stored and processed, so the merged target object model can be handled more quickly in subsequent steps. For example, when two target object models share a plurality of points and are stored and processed as two separate models, the data for those points must be stored and processed once per model; after merging, the data for those points needs to be stored and processed only once, reducing the resources required for storage and processing.
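For illustration only, the following is a minimal sketch of such a merge in Python, assuming a simple indexed triangle-mesh representation (the disclosure does not prescribe any particular data structure or merging algorithm):

```python
def merge_models(vertices_a, faces_a, vertices_b, faces_b):
    """Merge two indexed triangle meshes, storing each shared vertex only once.

    vertices_*: lists of (x, y, z) tuples; faces_*: lists of vertex-index triples.
    """
    merged_vertices = list(vertices_a)
    index_of = {v: i for i, v in enumerate(merged_vertices)}

    remap = {}  # vertex index in mesh B -> index in the merged mesh
    for i, v in enumerate(vertices_b):
        if v not in index_of:  # identical points are deduplicated here
            index_of[v] = len(merged_vertices)
            merged_vertices.append(v)
        remap[i] = index_of[v]

    merged_faces = list(faces_a) + [tuple(remap[i] for i in face) for face in faces_b]
    return merged_vertices, merged_faces
```

Under this sketch, vertices shared by the two models are stored and later processed once rather than twice, which is precisely the saving the merging step aims at.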
Similarly, in some cases, one or more target object models may also be incorporated into the three-dimensional spatial model for subsequent processing to reduce the resources required for storage and processing.
Next, in step S220, one or more target images and depth maps relating to the first object captured at a first location in three-dimensional space are acquired.
According to one embodiment of the present disclosure, the target image may comprise a panoramic (VR) image, and/or the depth map may comprise a panoramic depth map. The panoramic image or panoramic depth map is a target image or depth map acquired in a panoramic format, such as a hexahedral (cube-map) format, a sphere format, or a cylindrical format. In general, a panoramic image acquired in a panoramic manner can contain the color information of all portions of a three-dimensional space that can be captured at the first position, so only one panoramic image may need to be captured and used for that space. Similarly, for one three-dimensional space, a panoramic depth map acquired in a panoramic manner can contain the depth information of all portions that can be captured at the first location, so only one panoramic depth map may need to be captured and used.
In one example, images in different panoramic formats may be converted into one another, and other formats are generally converted into the hexahedral format so that, in subsequent processing, the points in the three-dimensional model corresponding to points in the panoramic image and/or the panoramic depth map can be determined; the specific use of this conversion is explained in detail below.
In one example, when all color information about a portion where a map is to be generated is contained in one target image, only the one target image may be acquired and used. Likewise, when all depth information about a portion where a map is to be generated is contained in one depth map, only the one depth map may be acquired and used.
Alternatively, when all color information about a portion where a map is to be generated is contained in a plurality of target images, these target images may be acquired and used; also, when all depth information about a portion where a map is to be generated is contained in a plurality of depth maps, these depth maps can be acquired and used. In some cases, the panoramic image may also be generated from a plurality of captured target images, and/or the panoramic depth map may be generated from a plurality of captured depth maps.
In one example, the first location has a corresponding location in the three-dimensional space model. To determine that corresponding location, the first location may be predetermined, or may be obtained from location information contained in the captured target image and depth map. The location information may represent the actual location of the capture device, which may be obtained, for example, by elements (e.g., chips, sensors, etc.) that can utilize the Global Positioning System (GPS), the BeiDou system, and other positioning systems.
In one example, the captured target image and/or depth map may also include angular information. The angle information may be current angular velocity information of the device, or information indicating a current direction of the device or a lens of the device calculated through an angular velocity, where the angular velocity information may be obtained by a gyro sensor. Preferably, the position information may also be calculated using information obtained by one or more of a GPS chip, a gyro sensor, an acceleration sensor, and other sensors built in and/or externally connected to the device, so that the position information is more accurate. Preferably, the angle information may also be calculated using information obtained from one or more of a gyro sensor, an acceleration sensor, and other sensors built in and/or externally connected to the apparatus, so that the angle information is more accurate. The specific function of the angle information will be described in detail below with reference to examples.
In one example, the device for capturing one or more target images and depth maps may be a camera or any type of portable device with image capture functionality, such as a smartphone, tablet, laptop, etc.
In one example, the target image and/or depth map captured by the device may be acquired directly via a network. In another example, the target image and/or depth map may be obtained from a server or database storing images, which may be transmitted by the device via a network or otherwise to the server or database. In one example, the network may be a wired network and/or a wireless network. For example, the wired network may perform data transmission by using twisted pair, coaxial cable, or optical fiber transmission, and the wireless network may perform data transmission by using 3G/4G/5G mobile communication network, bluetooth, Zigbee, or WiFi.
Then, in step S230, for one or more points on the model of the first object (i.e., the three-dimensional space model and/or the target object model), it is determined whether to use the color obtained from the default map or the color obtained from the target image as the color of the point, according to the distance from the point to the first location and the depth corresponding to the point determined from the depth map.
The points on the model may be any type of points having color information. In one example, the points on the model may be pixel points having a particular size. In another example, the points on the model may also be vertices in graphics processing techniques, which are described in detail below.
Typically, the model and the model's default map may be used to present a stereoscopic display effect to the user. However, using the default map is generally not preferred, for reasons such as that the default map is artificially created and does not truly represent the actual appearance of the three-dimensional space, or that the default map has a lower resolution than the target image. Therefore, it is preferable to generate the map of the model of the first object using color information acquired from the target image.
However, in some cases, the information contained in a target image captured at a certain location (e.g., the first location) in the three-dimensional space may cover only part of the model of the first object. For example, when there are one or more target objects in the three-dimensional space, because the target image used to generate the map is captured at a single position, some portions of the first object may be occluded (for example, target objects may occlude one another, or a portion of a target object may face away from that position and therefore cannot be captured). In that case the captured target image contains only partial information about the first object and no information about the occluded portions. In this case, the default map may be used to determine the map of the occluded portions.
To determine whether to apply the target image or the default map, a depth map may be acquired at the same location in the three-dimensional space at which the target image is acquired. Whether to use the acquired target image to generate the map is then decided by comparing the distance from a point on the model to that location with the corresponding depth.
In this way, when a portion of the first object is not occluded, for example, the map of the model of the first object can be generated using the acquired target image, giving the map a better display effect. Meanwhile, portions that cannot be mapped using the target image can be mapped using the default picture set in advance for them, or its color information can be acquired to generate the map.
In one example, the default map may be a low-modulus (i.e., low-resolution) map or a high-modulus (i.e., high-resolution, approximating the resolution of the target image) map.
It should be noted that "high" and "low" here indicate only a relative relationship. That is, a low-modulus map may give the generated map a relatively poor display effect compared with the captured target image(s), while a high-modulus map may give the generated map a relatively better display effect, e.g., similar to a map generated using the target image.
Therefore, the model of the first object can have default maps of different resolutions according to actual requirements. In some cases, in order to make the display effect of portions generated from the default map relatively consistent with portions generated using the target image, the default map may include only high-modulus maps. However, since producing a high-modulus map generally takes more time and cost than producing a low-modulus map, in some cases, to improve efficiency and save cost, the portions users care most about may be given high-modulus maps while relatively unimportant portions are given low-modulus maps; in other cases, the default map may contain only low-modulus maps for further efficiency and cost savings.
According to an embodiment of the disclosure, the corresponding depth is determined based on a direction of the point relative to the first position.
In one example, as previously described, a corresponding location in the three-dimensional space model may be determined from the first location, and thus the distance from a point on the model to the first location (i.e., the corresponding location) and the direction of the point relative to the first location can be determined. Furthermore, as previously described, the depth map and/or the target image may include angle information, so the depth corresponding to a point on the model can be determined, and/or the corresponding color can be obtained from the target image as the color of the point, by means of that angle information. That is, based on the direction of a point on the model relative to the first location and the angle information of the depth map and/or the target image, the depth information and/or color information corresponding to that direction can be located in the depth map and/or the target image. Alternatively, when the depth map is a panoramic depth map and/or the target image is a panoramic image, the characteristics of the panoramic format mean that the depth corresponding to the point can be determined, and/or the corresponding color obtained from the target image, through the correspondence between the panoramic depth map and/or panoramic image and the three-dimensional space model, without additional angle information. As mentioned above, when the panoramic image and/or panoramic depth map is in the hexahedral format, this correspondence makes it relatively easy to determine the depth corresponding to the point and/or to obtain the corresponding color from the target image as the color of the point.
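As a concrete illustration of this correspondence, the following sketch looks up the depth along a point's direction in a sphere-format (equirectangular) panoramic depth map; the equirectangular layout and the array representation are assumptions made for the example, since the embodiment leaves the panorama format open:

```python
import math

def sample_panoramic_depth(depth_map, point, first_position):
    """Return the depth stored in an equirectangular panoramic depth map
    along the direction from first_position to point.

    depth_map: 2D list of depth values indexed as depth_map[row][col];
    point, first_position: (x, y, z) coordinates in the model space.
    """
    dx = point[0] - first_position[0]
    dy = point[1] - first_position[1]
    dz = point[2] - first_position[2]
    r = math.sqrt(dx * dx + dy * dy + dz * dz)

    theta = math.atan2(dy, dx)  # azimuth angle in (-pi, pi]
    phi = math.acos(dz / r)     # polar angle in [0, pi]

    # Map the spherical angles onto the pixel grid of the panorama.
    height, width = len(depth_map), len(depth_map[0])
    col = int((theta + math.pi) / (2 * math.pi) * (width - 1))
    row = int(phi / math.pi * (height - 1))
    return depth_map[row][col]
```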
Specifically, fig. 3 shows a schematic diagram of the positional relationship of points in a model according to the first embodiment of the present disclosure. As shown in fig. 3, in the three-dimensional space model, the coordinates of the first position are (0, 0, 1); the coordinates of a point A on the front surface of a certain target object model (for example, a point on the front surface of a rectangular-parallelepiped piece of furniture) are (0, -3, 1); the coordinates of a point B in the same direction on the back surface of that target object model are (0, -4, 1); and the coordinates of a point C in the same direction on the three-dimensional space model (for example, a point on a wall) are (0, -6, 1). Therefore, from the above positional relationship of points A, B, and C relative to the first position, the points in the target image and the depth map corresponding to points A, B, and C on the model can be determined from the angle information and the like; alternatively, in a panoramic image and panoramic depth map, the depth information and color information corresponding to points A, B, and C can be determined through the correspondence relationship.
According to embodiments of the present disclosure, it may be determined whether the distance of the point to the first location is greater than the depth corresponding to the point; when the distance is greater than the corresponding depth, the color obtained from the default map may be used as the color of the point; when the distance is equal to or less than the corresponding depth, a color obtained from the target image may be used as the color of the point. As described above, when a point on the model is a pixel point, the color obtained from the target image or the default map may be directly used as the color of the pixel point.
Continuing with the example of fig. 3, it may be determined from the positional relationships shown in the figure that the three points A, B, and C all lie in the same direction relative to the first position. Thus, when an image is captured at the first position, points B and C in that direction are both occluded by point A, so the captured target image has color information only for point A and none for points B and C. Further, due to the presence of point A (with coordinates (0, -3, 1)), the absolute value of the depth of the captured depth map in this direction is 3.
That is, when point A is processed, it may be determined that the distance from point A to the first position is equal to the corresponding depth, and thus a color obtained from the target image may be used as the color of point A; when points B and C are processed, it may be determined that their distances to the first position are both greater than the corresponding depth, and thus colors obtained from the default map may be used as the colors of points B and C.
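The comparison for the three example points can be written out directly; this minimal sketch in Python assumes only the coordinates from fig. 3 and the depth value of 3 noted above:

```python
import math

first_position = (0, 0, 1)
depth_in_direction = 3  # |depth| recorded in the depth map for this direction

points = {"A": (0, -3, 1), "B": (0, -4, 1), "C": (0, -6, 1)}
for name, p in points.items():
    distance = math.dist(p, first_position)  # A: 3, B: 4, C: 6
    source = "target image" if distance <= depth_in_direction else "default map"
    print(f"point {name}: distance {distance:.0f} -> color from {source}")
```

Running this selects the target image for point A and the default map for points B and C, matching the discussion above.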
It should also be noted here that, as described earlier, since the target object model is located at a preset position in the three-dimensional space model, the relative positional relationship between the three-dimensional space model and the target object model is consistent with the positional relationship between the actual three-dimensional space and the target object. This ensures, for example, that points A, B, and C, which lie in the same direction relative to the first position in the three-dimensional space model, also lie in that direction relative to the first position in the actual three-dimensional space, without deviation.
In one example, the methods described in this disclosure may be implemented using a Graphics Processing Unit (GPU). In such an implementation, according to an embodiment of the present disclosure, a point on the model of the first object (i.e., the three-dimensional space model and/or the target object model) may be a vertex on the three-dimensional space model and/or the target object model.
During processing, the GPU typically divides the surface of the model into a mesh, so that each portion of the mesh can be rendered separately. For example, the surface of the model may be divided into primitives of different sizes and shapes composed of vertices (a primitive may be a single vertex, a line segment of two vertices, a triangle of three vertices, or a polygon of multiple vertices). The vertices of these primitives each correspond to a point on the model (i.e., a vertex on the model), and a model vertex may include information such as its position, normal vector, texture coordinates, and vertex color. Thus, the vertices of a primitive carry the color information of the model vertices corresponding to them. In one example, the color information of the model vertices may be obtained from the target image and/or the default map, and the model may be further processed by the GPU, e.g., rasterized to convert the primitives into fragments, so that the map of the model can be generated; the model and the map may then be rendered to present the digitized three-dimensional space to the user.
Specifically, when the method of the present disclosure is implemented using a GPU, step S230 may be embodied as follows. Using GPU rendering, the model vertices are passed into a Vertex Shader (VS). In the VS, the tiled model texture coordinates are converted into the coordinate range supported by the current GPU as the position output of the VS, and the model vertex positions passed into the VS are converted into the world coordinate system; the world-space vertex positions and the texture coordinates of the low-modulus map corresponding to the model vertices are then passed to the Fragment Shader (FS). Between the VS and the FS, the GPU may perform operations such as interpolation. In the FS, for each fragment, the vector and distance from the interpolated world-space vertex position to the capture device (e.g., camera) position are calculated, and a depth value can be obtained from the depth map. If the distance is less than or equal to the depth value, the current model vertex is visible in the target image, so color information can be obtained from the target image; if the distance is greater than the depth value, the current model vertex is not visible in the target image, so color information is obtained from the low-modulus map of the model. The color information of each fragment is taken as the output of the FS and stored into a picture to obtain the final model map.
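For readability, the per-fragment decision described above can be restated outside of shader code; the following Python sketch takes the three samplers as caller-supplied functions, which is an assumption of the example rather than a shader interface prescribed by the disclosure:

```python
def shade_fragment(vertex_world_pos, camera_pos, uv,
                   depth_lookup, image_lookup, default_lookup):
    """Mirror of the FS logic: pick the fragment color from the target image
    when the vertex is visible from the capture position, else from the
    low-modulus (default) map.

    depth_lookup(direction) and image_lookup(direction) sample the panoramic
    depth map and target image; default_lookup(uv) samples the default map.
    """
    direction = tuple(v - c for v, c in zip(vertex_world_pos, camera_pos))
    distance = sum(d * d for d in direction) ** 0.5

    if distance <= depth_lookup(direction):
        return image_lookup(direction)  # vertex is visible in the target image
    return default_lookup(uv)           # vertex is occluded: use the default map
```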
Finally, in step S240, a map of the model of the first object is generated based on the determined color of the point. A map applied to the model of the first object may be generated from the colors of one or more points on the model of the first object.
In addition, as shown in fig. 4, step S230 may further include the following steps:
step S410, sampling the target image to acquire a sampling color according to the direction of the point relative to the first position, and taking the acquired sampling color as the color of the point;
step S420, sampling the position corresponding to the point in the low-modulus map to obtain a sampling color, and taking the obtained sampling color as the color of the point.
In particular, for one or more points on the model of the first object, it may be determined whether the distance from the point to the first location is greater than the corresponding depth. When the distance is not greater than (i.e., equal to or less than) the corresponding depth, according to an embodiment of the present disclosure, at S410 the target image may be sampled according to the direction of the point relative to the first position to obtain a sampling color, and the obtained sampling color is taken as the color of the point. In one example, the target image is sampled rather than read directly because, for instance, the pixel size of the point for which the map is to be generated may not match, or may not exactly overlap, the pixel size in the same direction in the captured target image; sampling ensures that the map generated from the sampled colors conforms to the real three-dimensional space. In one example, the sampling may include interpolation (i.e., down-sampling), extrapolation (i.e., up-sampling), and the like.
As previously described, the default map may contain a low-modulus map. When the distance from the point to the first position is greater than the corresponding depth, according to an embodiment of the present disclosure, at S420 the position corresponding to the point in the low-modulus map may be sampled to obtain a sampling color, and the obtained sampling color is taken as the color of the point. The sampling method for the default map may be the same as that used for the target image, or may be different.
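As one common choice for such a sampling scheme (bilinear interpolation is an assumption here; the embodiment does not fix the interpolation method), sampling a map at normalized coordinates could be sketched as:

```python
def bilinear_sample(image, u, v):
    """Bilinearly interpolate image (a 2D list of RGB tuples) at normalized
    texture coordinates (u, v) in [0, 1]."""
    height, width = len(image), len(image[0])
    x, y = u * (width - 1), v * (height - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, width - 1), min(y0 + 1, height - 1)
    fx, fy = x - x0, y - y0

    def lerp(c0, c1, t):  # channel-wise linear interpolation of two colors
        return tuple(a * (1 - t) + b * t for a, b in zip(c0, c1))

    top = lerp(image[y0][x0], image[y0][x1], fx)
    bottom = lerp(image[y1][x0], image[y1][x1], fx)
    return lerp(top, bottom, fy)
```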
FIG. 5 shows a schematic diagram of the three-dimensional space model and target object model generated according to the first embodiment of the present disclosure and the map of the model. As shown in fig. 5, furniture such as the coffee table and sofa has a stereoscopic display effect, providing a more realistic and intuitive visual experience compared with fig. 1.
Therefore, according to the image processing method of the first embodiment of the present disclosure, a three-dimensional space model and a target object model having default maps are generated for a three-dimensional space and a target object in the three-dimensional space, and a target image and a depth map captured at a position in the three-dimensional space are acquired; further, for each point of the model, whether to generate the map of the model using the default map or the color information of the target image is determined based on the distance from the point to that position and the corresponding depth determined from the depth map. Therefore, the generated model and its map can provide a more stereoscopic display effect and give the user an intuitive visual experience.
< second embodiment >
Fig. 6 illustrates an image processing method according to a second embodiment of the present disclosure. As shown in fig. 6, the method comprises the steps of:
step S210, acquiring data describing a three-dimensional space and a target object in the three-dimensional space, and generating a three-dimensional space model and a target object model based on the data, wherein the target object model is located at a preset position in the three-dimensional space model, the model of a first object is provided with a default map, and the first object comprises the three-dimensional space and/or the target object;
step S220 of acquiring one or more target images and a depth map relating to a first object captured at a first location in three-dimensional space;
step S230, aiming at one or more points on the model of the first object, determining to use the color obtained from the default map or the color obtained from the target image as the color of the point according to the distance from the point to the first position and the depth determined from the depth map and corresponding to the point;
step S240 of generating a map of the model of the first object based on the determined color of the point;
step S610, compressing the generated map of the model of the first object to generate a compressed map; and
step S620, performing predetermined processing on the three-dimensional space model, the target object model, and the compressed map to generate a resource having a predetermined format; and storing the resource in a non-volatile computer-readable storage medium.
Since steps S210 to S240 are substantially the same as the image processing method according to the first embodiment of the present disclosure, a detailed description thereof is omitted herein.
Further, according to the second embodiment of the present disclosure, after the map of the model of the first object is generated, in step S610 the generated map may be subjected to compression processing to generate a compressed map. The compressed map uses less storage space and less transmission network bandwidth than the uncompressed map.
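As a minimal sketch of such a compression step, assuming the generated map is stored as an ordinary image file and using the Pillow library (the file names are hypothetical, and the embodiment does not mandate a particular codec):

```python
from PIL import Image

# Re-save the generated model map with lossy JPEG compression, trading some
# fidelity for a smaller file and lower transmission bandwidth.
generated_map = Image.open("model_map.png")  # hypothetical file name
generated_map.convert("RGB").save("model_map_compressed.jpg", quality=80)
```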
Then, in step S620, predetermined processing may be performed on the three-dimensional space model, the target object model, and the compressed map to generate a resource having a predetermined format, and the resource is stored in a non-volatile computer-readable storage medium. Resources having a predetermined format may, on the one hand, use less storage space and network transmission bandwidth and, on the other hand, enable the processing device to render the file content (i.e., the models and maps) faster, thereby quickly presenting to the user the three-dimensional space and/or target objects to be browsed. Furthermore, compared with unprocessed models and maps, the processed resources having the predetermined format reduce the performance requirements on the processor (e.g., GPU), so the three-dimensional space and target objects can also be browsed on relatively low-performance devices.
In some examples, the three-dimensional space model and the target object model may be processed to generate a file in FBX format and converted, together with the map, into a glTF-format file. In some examples, the three-dimensional space model, the target object model, and the compressed map may also be processed into other file formats that can be quickly rendered and/or conveniently stored.
It should be noted that in some cases, the method described in the present disclosure may implement step S610 or step S620 separately, or may implement both step S610 and step S620.
Therefore, according to the image processing method of the second embodiment of the present disclosure, on one hand, by compressing and generating a smaller resource packet, the storage space and the transmission bandwidth can be reduced to improve the display efficiency of the front-end device; on the other hand, by generating resources with a predetermined format, the performance requirements on the processor can be reduced, so that low-end devices can also be used for browsing three-dimensional space and target objects.
< third embodiment >
Fig. 7 is a flowchart of an image processing method according to a third embodiment of the present disclosure. In the example of the method shown in fig. 7, as previously described, since one or more target object models may be incorporated into the three-dimensional space model, there may be only one three-dimensional space model to be processed. Furthermore, some three-dimensional space models have irregular structures. For example, if the top view of the three-dimensional space has a "convex" shape, then when the first location at which the image and depth map are captured lies at a corner, the acquired information about the three-dimensional space may be incomplete. Therefore, the method shown in fig. 7 can be applied to the case where only the three-dimensional space model exists.
According to the embodiment of the present disclosure, in step S710, data describing a three-dimensional space is acquired, and a three-dimensional space model is generated based on the data, wherein the three-dimensional space model has a default map.
In step S720, one or more target images and depth maps relating to the three-dimensional space captured at a first location in the three-dimensional space are acquired.
In step S730, for one or more points on the three-dimensional space model, a color obtained from a default map or a color obtained from the target image is determined to be used as the color of the point according to the distance from the point to the first position and the depth determined from the depth map corresponding to the point.
In step S740, a map of the three-dimensional space model is generated based on the determined color of the point.
The image processing method of the third embodiment shown in fig. 7 is based on the same inventive concept as the image processing method of the first embodiment; therefore, for some specific details of the third embodiment, reference may be made to the first embodiment, and the same contents are not repeated.
Therefore, according to the image processing method of the third embodiment of the present disclosure, a three-dimensional space model having a default map is generated for a three-dimensional space, and a target image and a depth map captured at a position in the three-dimensional space are acquired; further, for each point of the model, whether to generate the map of the model using the default map or the color information of the target image is determined based on the distance from the point to that position and the corresponding depth determined from the depth map. Therefore, the generated model and its map can provide a more stereoscopic display effect and give the user an intuitive visual experience.
In some cases, the image processing method according to the third embodiment of the present disclosure may further adopt some of the methods of the second embodiment, as shown in fig. 6, for example: compressing the generated map of the three-dimensional space model to generate a compressed map; performing predetermined processing on the three-dimensional space model and the compressed map to generate a resource having a predetermined format; and storing the resource in a non-volatile computer-readable storage medium.
Therefore, by combining parts of the second embodiment, the image processing method of the third embodiment can likewise reduce the storage space and transmission bandwidth required for the three-dimensional space by compressing the map into a smaller resource package, improving display efficiency on the front-end device; and by generating resources in a predetermined format, it lowers the performance requirements on the processor, so that low-end devices can also browse the three-dimensional space.
< fourth embodiment >
The present disclosure provides an image processing apparatus in addition to the above-described image processing method, which will be described in detail with reference to fig. 8.
Fig. 8 shows a block diagram of an image processing apparatus 800 according to a fourth embodiment of the present disclosure. As shown in fig. 8, the image processing apparatus 800 according to the present disclosure may include: a data acquisition and model generation unit 810, a target image and depth map acquisition unit 820, a color determination unit 830, and a map generation unit 840.
According to an embodiment of the present disclosure, the data acquisition and model generation unit 810 may be configured to acquire data describing a three-dimensional space and a target object in the three-dimensional space, and to generate a three-dimensional space model and a target object model based on the data, wherein the target object model is located at a preset position in the three-dimensional space model, the model of the first object has a default map, and the first object includes the three-dimensional space and/or the target object.
In one example, the target object model may include a plurality of target object models, and the data acquisition and model generation unit 810 may further include a model merging module configured to merge two or more of the plurality of target object models to generate a single target object model.
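As a hedged illustration of the merging module, the sketch below assumes each target object model is a simple indexed triangle mesh held in NumPy arrays; the Mesh container and its field names are hypothetical, and a complete implementation would also have to merge the models' default maps into a shared texture atlas:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Mesh:
    vertices: np.ndarray  # (N, 3) float32 positions in the space model's frame
    uvs: np.ndarray       # (N, 2) float32 texture coordinates
    faces: np.ndarray     # (M, 3) int32 indices into `vertices`

def merge_meshes(meshes):
    """Concatenate several meshes into one, shifting each mesh's face
    indices past the vertices appended before it."""
    verts, uvs, faces = [], [], []
    offset = 0
    for m in meshes:
        verts.append(m.vertices)
        uvs.append(m.uvs)
        faces.append(m.faces + offset)
        offset += len(m.vertices)
    return Mesh(np.concatenate(verts), np.concatenate(uvs),
                np.concatenate(faces))
```

Merging in this way generally reduces the number of draw calls on the front end, since the renderer can treat many furnishings as a single model.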
According to an embodiment of the present disclosure, the target image and depth map acquisition unit 820 may be configured to acquire one or more target images and depth maps related to a first object captured at a first location in three-dimensional space.
According to an embodiment of the present disclosure, the color determination unit 830 may be configured to determine, for one or more points on the model of the first object, to use a color obtained from the default map or a color obtained from the target image as the color of the point, according to the distance of the point to the first location and the depth determined from the depth map corresponding to the point.
In one example, the color determination unit 830 may be further configured to: determine whether the distance from the point to the first location is greater than the depth corresponding to the point; use the color obtained from the default map as the color of the point when the distance is greater than the corresponding depth; and use the color obtained from the target image as the color of the point when the distance is equal to or less than the corresponding depth.
In one example, the corresponding depth may be determined based on a direction of the point relative to the first location.
According to an embodiment of the present disclosure, the map generation unit 840 may be configured to generate a map of the model of the first object based on the determined color of the point.
In one example, the points on the model of the first object may be vertices of the three-dimensional space model and/or the target object model.
Fig. 9 shows a block diagram of the color determination unit 830 in the image processing apparatus 800 according to the fourth embodiment of the present disclosure.
As shown in fig. 9, according to an embodiment of the present disclosure, the color determination unit 830 may further include a target image sampling module 910, and the target image sampling module 910 may be configured to sample the target image according to the direction of the point relative to the first position to obtain a sampling color; the color determination unit 830 may be further configured to take the acquired sample color as the color of the point.
According to an embodiment of the present disclosure, the default map may include a low-poly map (a map for a low-polygon version of the model), and the color determination unit 830 may further include a map sampling module 920 configured to sample the position corresponding to the point in the low-poly map to obtain a sampling color; the color determination unit 830 may be further configured to take the acquired sampling color as the color of the point.
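Both sampling modules reduce to reading a texture at normalized coordinates: the target image sampling module derives (u, v) from the point's direction relative to the first position (as in the dir_to_uv sketch above), while the map sampling module uses the UV coordinates the model already assigns to the point. A minimal bilinear sampler that either module could share, under the same assumed array conventions:

```python
import numpy as np

def sample_bilinear(img, u, v):
    """Bilinearly interpolate an H x W (x C) image at normalized (u, v).
    Borders are clamped; a production panorama sampler would additionally
    wrap u across the longitude seam."""
    h, w = img.shape[:2]
    x = float(np.clip(u * w - 0.5, 0, w - 1))
    y = float(np.clip(v * h - 0.5, 0, h - 1))
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bottom = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bottom * fy  # float result; cast as needed
```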
According to the image processing apparatus 800 of the fourth embodiment of the present disclosure, a three-dimensional space model having a default map and a target object model are generated for a three-dimensional space and a target object in that space, and a target image and a depth map captured at a position in the three-dimensional space are acquired. For each point of the model, whether to generate the model's map from the default map or from the color information of the target image is determined based on the distance from the point to the capture position and the corresponding depth determined from the depth map. The generated model and its map can therefore provide the user with a better three-dimensional display effect and visual experience.
< fifth embodiment >
Fig. 10 shows a block diagram of an image processing apparatus 1000 according to a fifth embodiment of the present disclosure. As shown in fig. 10, the image processing apparatus 1000 according to the present disclosure may include: a data acquisition and model generation unit 810, a target image and depth map acquisition unit 820, a color determination unit 830, a map generation unit 840, a map compression unit 1010, and a processing unit 1020, wherein the color determination unit 830 may further include a target image sampling module 910 and a map sampling module 920. Since some components in fig. 10 are identical to those in fig. 8 and 9, they are labeled with the same reference numerals and are not described again for fig. 10.
As shown in fig. 10, according to an embodiment of the present disclosure, the map compression unit 1010 may be configured to perform a compression process on the generated map of the model of the first object to generate a compressed map.
Further, the processing unit 1020 may be configured to: performing a predetermined process on the three-dimensional space model, the target object model, and the compressed map to generate a resource having a predetermined format; and storing the resource in a non-volatile computer-readable storage medium.
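The disclosure leaves the compression scheme and the "predetermined format" unspecified, so the packaging sketch below is purely an assumption: it zlib-compresses the map bytes and bundles them with the model files and a small manifest into a zip container. All file names and the manifest layout are hypothetical:

```python
import json
import zipfile
import zlib
from pathlib import Path

def pack_resource(model_paths, map_path, out_path):
    """Bundle model files and a compressed copy of their map into a single
    resource archive (an assumed layout, not the disclosure's actual format)."""
    compressed_map = zlib.compress(Path(map_path).read_bytes(), level=9)
    manifest = {
        "models": [Path(p).name for p in model_paths],
        "map": Path(map_path).name + ".z",
        "map_codec": "zlib",  # assumed compression scheme
    }
    # ZIP_STORED: the map payload is already compressed, so skip recompression.
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_STORED) as zf:
        zf.writestr("manifest.json", json.dumps(manifest))
        for p in model_paths:
            zf.write(p, Path(p).name)
        zf.writestr(manifest["map"], compressed_map)
```

A front-end viewer would read the manifest, inflate the map, and load the models, which is how a single small resource package can serve even low-end devices.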
Some details of the image processing apparatuses shown in fig. 8-10 may also be found in the descriptions of the aforementioned image processing methods.
According to the image processing apparatus of the fifth embodiment of the present disclosure, on the one hand, compressing the map into a smaller resource package reduces storage space and transmission bandwidth, improving display efficiency on the front-end device; on the other hand, generating resources in a predetermined format lowers the performance requirements on the processor, so that even low-end devices can browse the three-dimensional space and target objects.
FIG. 11 illustrates a block diagram of an electronic device in accordance with some embodiments of the present disclosure.
Referring to fig. 11, an electronic device 1100 may include a processor 1110 and a memory 1120, which may be coupled to each other by a bus 1130. The electronic device 1100 may be any type of portable device (e.g., a smart camera, a smart phone, a tablet computer, etc.) or any type of stationary device (e.g., a desktop computer, a server, etc.).
The processor 1110 may perform various actions and processes according to programs stored in the memory 1120. In particular, the processor 1110 may be an integrated circuit chip with signal processing capability. The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, and may implement or perform the various methods, steps, and logical blocks disclosed in the embodiments of the present disclosure. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor, for example one of the x86 or ARM architecture.
The memory 1120 stores computer-executable instructions that, when executed by the processor 1110, implement the image processing methods in the various embodiments described above. The memory 1120 may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM). It should be noted that the memory of the methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
Further, the image processing method according to the present disclosure may be recorded in a computer-readable storage medium. In particular, according to the present disclosure, a computer-readable storage medium may be provided having stored thereon computer-executable instructions that, when executed by a processor, may cause the processor to perform the image processing method as described above.
It is to be noted that the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises at least one executable instruction for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In general, the various example embodiments of this disclosure may be implemented in hardware or special purpose circuits, software, firmware, logic or any combination thereof. Certain aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While aspects of embodiments of the disclosure have been illustrated or described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that the blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present disclosure and is not to be construed as limiting thereof. Although a few exemplary embodiments of this disclosure have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this disclosure. Accordingly, all such modifications are intended to be included within the scope of this disclosure as defined in the claims. It is to be understood that the foregoing is illustrative of the present disclosure and is not to be construed as limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The present disclosure is defined by the claims and their equivalents.

Claims (23)

1. A method of image processing, the method comprising:
acquiring data describing a three-dimensional space and a target object in the three-dimensional space, and generating a three-dimensional space model and a target object model based on the data, wherein the target object model is located at a preset position in the three-dimensional space model, and a model of a first object has a default map, and the first object comprises the three-dimensional space and/or the target object;
acquiring one or more target images and depth maps relating to the first object captured at a first location in the three-dimensional space;
determining, for one or more points on the model of the first object, whether to use a color obtained from the default map or a color obtained from the target image as the color of the point, according to a distance from the point to the first location and a depth corresponding to the point determined from the depth map; and
generating a map of a model of the first object based on the determined color of the point.
2. The method of claim 1, wherein determining whether to use a color obtained from the default map or a color obtained from the target image as the color of the point according to the distance from the point to the first location and the depth corresponding to the point determined from the depth map comprises:
determining whether the distance of the point to the first location is greater than the depth corresponding to the point;
when the distance is greater than the corresponding depth, using a color obtained from the default map as the color of the point; and
when the distance is equal to or less than the corresponding depth, using a color acquired from the target image as the color of the point.
3. The method of claim 2, wherein using a color obtained from the target image as the color of the point comprises:
sampling the target image to obtain a sampling color according to the direction of the point relative to the first position, and taking the obtained sampling color as the color of the point.
4. The method of claim 2, wherein the default map comprises a low-poly map, and using a color obtained from the default map as the color of the point comprises:
sampling the position corresponding to the point in the low-poly map to obtain a sampling color, and taking the obtained sampling color as the color of the point.
5. The method of any of claims 1-4, wherein the corresponding depth is determined based on a direction of the point relative to the first location.
6. The method according to any of claims 1-4, wherein a point on the model of the first object is a vertex of the three-dimensional space model and/or the target object model.
7. The method according to any one of claims 1-4, wherein the target object model comprises a plurality of target object models,
the method further comprising:
merging two or more of the plurality of target object models to generate a single target object model.
8. The method of any of claims 1-4, further comprising:
the generated map of the model of the first object is subjected to a compression process to generate a compressed map.
9. The method of claim 8, further comprising:
performing a predetermined process on the three-dimensional space model, the target object model, and the compressed map to generate a resource having a predetermined format; and
storing the resource in a non-volatile computer-readable storage medium.
10. The method of any of claims 1-4, wherein the target image comprises a panoramic image and/or the depth map comprises a panoramic depth map.
11. A method of image processing, the method comprising:
acquiring data describing a three-dimensional space and generating a three-dimensional space model based on the data, wherein the three-dimensional space model has a default map;
acquiring one or more target images and depth maps relating to the three-dimensional space captured at a first location in the three-dimensional space;
for one or more points on the three-dimensional space model, determining to use a color obtained from the default map or a color obtained from the target image as the color of the point according to the distance of the point to the first position and the depth determined from the depth map corresponding to the point; and
generating a map of the three-dimensional space model based on the determined color of the point.
12. An image processing apparatus, the apparatus comprising:
a data acquisition and model generation unit configured to acquire data describing a three-dimensional space and a target object in the three-dimensional space, and generate a three-dimensional space model and a target object model based on the data, wherein the target object model is located at a preset position in the three-dimensional space model, and a model of a first object has a default map, the first object includes the three-dimensional space and/or the target object;
a target image and depth map acquisition unit configured to acquire one or more target images and depth maps relating to the first object captured at a first location in the three-dimensional space;
a color determination unit configured to determine, for one or more points on a model of the first object, to use, as a color of the point, a color obtained from the default map or a color obtained from the target image, according to a distance of the point to the first position and a depth corresponding to the point determined from the depth map; and
a map generation unit configured to generate a map of a model of the first object based on the determined color of the point.
13. The apparatus of claim 12, wherein the color determination unit is further configured to:
determine whether the distance from the point to the first location is greater than the depth corresponding to the point;
use a color obtained from the default map as the color of the point when the distance is greater than the corresponding depth; and
use a color acquired from the target image as the color of the point when the distance is equal to or less than the corresponding depth.
14. The apparatus of claim 13, wherein the color determination unit further comprises a target image sampling module,
the target image sampling module is configured to sample the target image to obtain a sampling color according to the orientation of the point relative to the first position;
the color determination unit is further configured to take the acquired sampling color as the color of the point.
15. The apparatus of claim 13, wherein the default map comprises a low-poly map, and the color determination unit further comprises a map sampling module,
the map sampling module is configured to sample a position in the low-poly map corresponding to the point to obtain a sampling color;
the color determination unit is further configured to take the acquired sampling color as the color of the point.
16. The apparatus of any of claims 12-15, wherein the corresponding depth is determined based on a direction of the point relative to the first location.
17. The apparatus according to any of claims 12-15, wherein the point on the model of the first object is a vertex of the three-dimensional space model and/or the target object model.
18. The apparatus according to any of claims 12-15, wherein the target object model comprises a plurality of target object models,
the apparatus further comprises a model merging unit configured to:
merge two or more of the plurality of target object models to generate a single target object model.
19. The apparatus of any one of claims 12-15, further comprising a map compression unit configured to:
the generated map of the model of the first object is subjected to a compression process to generate a compressed map.
20. The apparatus of claim 19, further comprising a processing unit configured to:
performing a predetermined process on the three-dimensional space model, the target object model, and the compressed map to generate a resource having a predetermined format; and
storing the resource in a non-volatile computer-readable storage medium.
21. The apparatus of any of claims 12-15, wherein the target image comprises a panoramic image and/or the depth map comprises a panoramic depth map.
22. An electronic device, comprising:
a processor; and
memory, wherein the memory has stored therein computer readable code, which when executed by the processor, implements the image processing method of any of claims 1-11.
23. A non-transitory computer readable storage medium storing computer readable instructions, wherein the computer readable instructions, when executed by a processor, implement the image processing method of any one of claims 1-11.
CN202111618393.9A 2021-12-27 2021-12-27 Image processing method and device, electronic equipment and readable storage medium Pending CN114266854A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111618393.9A CN114266854A (en) 2021-12-27 2021-12-27 Image processing method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN114266854A true CN114266854A (en) 2022-04-01

Family

ID=80830872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111618393.9A Pending CN114266854A (en) 2021-12-27 2021-12-27 Image processing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114266854A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160012646A1 (en) * 2014-07-10 2016-01-14 Perfetch, Llc Systems and methods for constructing a three dimensional (3d) color representation of an object
CN108564646A (en) * 2018-03-28 2018-09-21 腾讯科技(深圳)有限公司 Rendering intent and device, storage medium, the electronic device of object
CN110570510A (en) * 2019-09-10 2019-12-13 珠海天燕科技有限公司 Method and device for generating material map
CN113112581A (en) * 2021-05-13 2021-07-13 广东三维家信息科技有限公司 Texture map generation method, device and equipment for three-dimensional model and storage medium
CN113239442A (en) * 2021-06-03 2021-08-10 中移智行网络科技有限公司 Three-dimensional model construction method, device, equipment and computer readable storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115272544A (en) * 2022-06-27 2022-11-01 北京五八信息技术有限公司 Mapping method and device, electronic equipment and storage medium
CN115272544B (en) * 2022-06-27 2023-09-01 北京五八信息技术有限公司 Mapping method, mapping device, electronic equipment and storage medium
CN115830162A (en) * 2022-11-21 2023-03-21 北京城市网邻信息技术有限公司 Home map display method and device, electronic equipment and storage medium
CN115830162B (en) * 2022-11-21 2023-11-14 北京城市网邻信息技术有限公司 House type diagram display method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US11024092B2 (en) System and method for augmented reality content delivery in pre-captured environments
CN109327699B (en) Image processing method, terminal and server
CN111402374B (en) Multi-path video and three-dimensional model fusion method, device, equipment and storage medium thereof
CN114266854A (en) Image processing method and device, electronic equipment and readable storage medium
US10733786B2 (en) Rendering 360 depth content
WO2020017134A1 (en) File generation device and device for generating image based on file
EP3655928B1 (en) Soft-occlusion for computer graphics rendering
US9165397B2 (en) Texture blending between view-dependent texture and base texture in a geographic information system
US10719920B2 (en) Environment map generation and hole filling
KR101359011B1 (en) 3-dimensional visualization system for displaying earth environment image
CN111754381B (en) Graphics rendering method, apparatus, and computer-readable storage medium
WO2023207963A1 (en) Image processing method and apparatus, electronic device, and storage medium
US20160049001A1 (en) Curvature-Driven Normal Interpolation for Shading Applications
CN110910504A (en) Method and device for determining three-dimensional model of region
WO2017113729A1 (en) 360-degree image loading method and loading module, and mobile terminal
WO2016033275A1 (en) System and method for remote shadow rendering in a 3d virtual environment
US20160035127A1 (en) Three-dimensional image display system, server for three-dimensional image display system, and three-dimensional image display method
US20200288083A1 (en) Image capturing apparatus, image processing system, image processing method, and recording medium
JP2016009374A (en) Information processing device, method, and program
US10652514B2 (en) Rendering 360 depth content
KR101227155B1 (en) Graphic image processing apparatus and method for realtime transforming low resolution image into high resolution image
CN114419267A (en) Three-dimensional model construction method and device and storage medium
US11206427B2 (en) System architecture and method of processing data therein
CN110390717B (en) 3D model reconstruction method and device and electronic equipment
CN114359456B (en) Picture pasting method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220401