WO2023173727A1 - Image processing method, apparatus, and electronic device (图像处理方法、装置及电子设备) - Google Patents
- Publication number
- WO2023173727A1 (PCT/CN2022/123543)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- area
- sampling
- texture
- vertex
- image
- Prior art date
Classifications (all under G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
- G06T15/00—3D [Three Dimensional] image rendering › G06T15/04—Texture mapping
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T2215/00—Indexing scheme for image rendering › G06T2215/12—Shadow map, environment map
Definitions
- the present disclosure relates to the field of image processing technology, and in particular, to an image processing method, apparatus, electronic device, computer-readable storage medium, computer program product, and computer program.
- Terminal devices can produce virtual reality (VR) videos through cube mapping. For example, a terminal device pastes the six texture images corresponding to a two-dimensional image onto the six faces of a cube to obtain a VR image corresponding to the two-dimensional image, and thereby obtains a virtual reality video.
- the vertex texture coordinates of each face of the model can be preset.
- the model can obtain multiple areas in the texture images according to the preset vertex texture coordinates and map them onto the model to obtain a VR image.
- for example, when a cube model receives 6 texture images, it can obtain the target areas in the 6 texture images based on the preset vertex texture coordinates of each face, and then map the 6 target areas onto the 6 faces to obtain a VR image.
- however, the areas obtained by the model in the texture images based on the preset vertex texture coordinates are fixed, which leads to poor flexibility in obtaining VR images.
- the present disclosure provides an image processing method, apparatus, electronic device, computer-readable storage medium, computer program product, and computer program, to solve the technical problem of poor flexibility in acquiring VR images in the prior art.
- the present disclosure provides an image processing method, which method includes:
- determining N texture images corresponding to a first image, and the correspondence between the N texture images and N model surfaces of a three-dimensional model, where N is an integer greater than 1;
- determining, according to the correspondence and an offset coefficient, the mapping area corresponding to each model face in the N texture images, the mapping area being at least a partial area of the texture image;
- mapping the mapping areas in the N texture images onto the three-dimensional model to obtain a three-dimensional image corresponding to the first image.
- the present disclosure provides an image processing device, which includes a first determination module, a second determination module and a mapping module, wherein:
- the first determination module is used to determine the N texture images corresponding to the first image and the corresponding relationship between the N texture images and the N model surfaces of the three-dimensional model, where the N is an integer greater than 1;
- the second determination module is configured to determine, according to the correspondence and the offset coefficient, the mapping area corresponding to each model face in the N texture images, the mapping area being at least a partial area of the texture image;
- the mapping module is configured to map the mapping areas in the N texture images to the three-dimensional model to obtain a three-dimensional image corresponding to the first image.
- embodiments of the present disclosure provide an electronic device, including: a processor and a memory;
- the memory stores computer-executable instructions;
- the processor executes the computer-executable instructions stored in the memory, causing the at least one processor to execute the image processing method described in the first aspect and its various possible designs.
- embodiments of the present disclosure provide a computer-readable storage medium.
- Computer-executable instructions are stored in the computer-readable storage medium; when a processor executes the computer-executable instructions, the image processing method described in the first aspect and its various possible designs is implemented.
- embodiments of the present disclosure provide a computer program product, including a computer program that, when executed by a processor, implements the image processing method described in the first aspect and various possible designs of the first aspect.
- embodiments of the present disclosure provide a computer program that, when executed by a processor, implements the image processing method described in the first aspect and various possible designs of the first aspect.
- the present disclosure provides an image processing method, apparatus, electronic device, computer-readable storage medium, computer program product, and computer program. The method determines N texture images corresponding to a first image and the correspondence between the N texture images and N model surfaces of a three-dimensional model, where N is an integer greater than 1; determines, according to the correspondence and an offset coefficient, the mapping area corresponding to each model face in the N texture images, the mapping area being at least a partial area of the texture image; and maps the mapping areas in the N texture images onto the three-dimensional model to obtain a three-dimensional image corresponding to the first image.
- the terminal device can flexibly obtain the mapping area in the texture image through the corresponding relationship and offset coefficient, thereby improving the flexibility of obtaining VR images and improving the display effect of VR images.
- Figure 1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure
- FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure
- Figure 3 is a schematic diagram of a process for determining correspondence relationships provided by an embodiment of the present disclosure
- Figure 4 is a schematic diagram of a process of obtaining a sampling area provided by an embodiment of the present disclosure
- Figure 5 is a schematic diagram of a process for determining a mapping area provided by an embodiment of the present disclosure
- Figure 6 is a schematic flowchart of a method for determining an offset coefficient provided by an embodiment of the present disclosure
- Figure 7 is a schematic process diagram of an image processing method provided by an embodiment of the present disclosure.
- Figure 8 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure.
- Figure 9 is a schematic structural diagram of another image processing device provided by an embodiment of the present disclosure.
- FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
- the terminal device can paste multiple texture images corresponding to the two-dimensional image on the model surface to obtain a VR image of the two-dimensional image.
- the vertex texture coordinates of each face of the model can be preset, and the model can obtain multiple areas in the texture image according to the preset vertex texture coordinates and map them to the model face.
- the cube model can obtain a target area in each texture image based on the preset texture coordinates of each model face, and then map the obtained 6 target areas onto the corresponding model faces.
- however, the model can only obtain fixed areas in the texture images based on the preset vertex texture coordinates, which results in poor flexibility in obtaining VR images.
- embodiments of the present disclosure provide an image processing method that determines N texture images corresponding to a first image and the correspondence between the N texture images and N model surfaces of a three-dimensional model; combines the N texture images into an area of L rows and K columns according to the correspondence to obtain a sampling area; determines, according to the sampling area and an offset coefficient, the mapping area corresponding to each model face, where the mapping area is at least a partial area of the texture image; and maps the mapping areas in the N texture images onto the three-dimensional model to obtain a three-dimensional image corresponding to the first image.
- in this way, the terminal device can flexibly obtain the mapping areas in the texture images through the correspondence and the offset coefficient, which can not only improve the display effect of the VR image, but also improve the flexibility of obtaining the VR image.
- Figure 1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure. Referring to Figure 1, the scenario includes a first image and a three-dimensional model, where the three-dimensional model is a cube model.
- the texture image corresponding to the first image is obtained through a preset algorithm, where the texture image includes texture image A, texture image B, texture image C, texture image D, texture image E, and texture image F.
- after determining the correspondence between each texture image and a model face of the cube model, 6 mapping areas are determined in the texture images through the correspondence and an offset coefficient, and each mapping area is mapped onto the corresponding model face of the cube model to obtain a three-dimensional image corresponding to the first image.
- in this way, when obtaining the mapping area of a model face, the obtained mapping area can be flexibly adjusted through the correspondence and the offset coefficient. This can avoid cracks between the model faces of the mapped three-dimensional image, improve the display effect of the three-dimensional image, and improve the flexibility of acquiring three-dimensional images.
- FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure. Referring to Figure 2, the method may include:
- S201: Determine N texture images corresponding to the first image, and the correspondence between the N texture images and N model surfaces of the three-dimensional model.
- the execution subject of the embodiment of the present disclosure may be a terminal device, or may be an image processing device provided in the terminal device.
- the image processing device can be implemented by software, or can be implemented by a combination of software and hardware.
- the first image may be an image in the video.
- for example, when converting a video into a VR video, the first image may be each frame of the video; by converting each frame into a VR image, a VR video corresponding to the video is obtained.
- the first image may also be a depth image.
- for example, the first image can be a landscape image, a scene image, or any other spatial image with image depth; by processing the first image, it can be converted into a VR image.
- the texture image may be an image including the texture of the first image.
- the texture is used to indicate changes in the color and grayscale of the first image. For example, an image may appear irregular locally while exhibiting regularity as a whole; this locally irregular but macroscopically regular characteristic is called texture.
- the N texture images corresponding to the first image can be determined according to the following feasible implementation: process the first image through a preset algorithm to obtain N texture images corresponding to the first image.
- N is an integer greater than 1.
- for example, a model whose engine uses the rendering command buffer technique (CommandBuffer) can obtain the texture images corresponding to a video frame; when N is 6, 6 texture images corresponding to the first image can be obtained through the CommandBuffer technique.
- the three-dimensional model can be a cube model.
- the structure of the three-dimensional model may be a cube.
- the three-dimensional model may include six model surfaces.
- correspondence is used to indicate the relationship between the texture image and each model face.
- the three-dimensional model includes six model surfaces, the number of texture images corresponding to the first image is six, and each texture image has a corresponding model surface.
- when the preset algorithm obtains the N texture images corresponding to the first image, it can also output the correspondence between the N texture images and the N model surfaces of the three-dimensional model.
- for example, the first texture image output by the preset algorithm corresponds to the left face of the cube model, the second texture image to the front face, the third texture image to the right face, the fourth texture image to the bottom face, the fifth texture image to the back face, and the sixth texture image to the top face.
- FIG. 3 is a schematic diagram of a process of determining the correspondence provided by an embodiment of the present disclosure. Referring to Figure 3, it includes the first image and the cube model, where the cube model is unfolded into 6 faces. The first image is processed through the preset algorithm to obtain six texture images corresponding to the first image, where texture image A is the first texture image, texture image B the second, texture image C the third, texture image D the fourth, texture image E the fifth, and texture image F the sixth.
- Texture image A corresponds to the left side of the cube model
- texture image B corresponds to the front side of the cube model
- texture image C corresponds to the right side of the cube model
- texture image D corresponds to the bottom side of the cube model
- texture image E corresponds to the back face of the cube model
- the texture image F corresponds to the top surface of the cube model.
- S202: According to the correspondence and the offset coefficient, determine the mapping area corresponding to each model face in the N texture images.
- the mapping area is at least a partial area of the texture image.
- the mapping area can be the upper area, lower area, middle area, etc. of the texture image.
- the three-dimensional model can flexibly obtain the mapping area in the texture image, thereby improving the flexibility of image mapping.
- optionally, the at least partial area may be the middle area of the texture image.
- for example, the mapping area of a model face of the three-dimensional model can be the middle area of the texture image (for example, the area remaining after 0.5 pixels are cropped from each edge of the texture image). In this way, the mapping area of the model face can be obtained accurately, preventing cracks from appearing between the mapped model faces and improving the display effect of the three-dimensional model.
- the offset coefficient is used to adjust the size of the mapping area.
- for example, the terminal device can use the offset coefficient to crop 0.5 pixels from around the texture image to obtain the mapping area.
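- For instance (an illustrative reading, not a value given in the disclosure): for a 512×512 texture image, cropping 0.5 pixels from each side leaves a centred region of 511×511 pixels as the mapping area, i.e. 511/512 ≈ 99.8% of the original side length.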
- optionally, the mapping area corresponding to each model face can be determined in the N texture images according to the following feasible implementation: according to the correspondence, combine the N texture images into an area of L rows and K columns to obtain a sampling area, and determine the mapping area corresponding to the model face based on the sampling area and the offset coefficient.
- the product of L and K is N, and L and K are positive integers. For example, if the number of texture images corresponding to the first image is 6, the terminal device can combine the 6 texture images into a sampling area of 2 rows and 3 columns.
- FIG. 4 is a schematic diagram of a process of obtaining a sampling area according to an embodiment of the present disclosure. See Figure 4, including the first image.
- the first image is processed through a preset algorithm to obtain texture image A, texture image B, texture image C, texture image D, texture image E and texture image F.
- texture image A is the left side of the cube model
- texture image B is the front side of the cube model
- texture image C is the right side of the cube model
- texture image D is the bottom side of the cube model
- texture image E is the back side of the cube model
- the texture image F is the top surface of the cube model. Therefore, 6 texture images can be combined into a sampling area of 2 rows and 3 columns.
- the first row of the sampling area includes texture image A, texture image B and texture image C
- the second row of the sampling area includes texture image D, texture image E and texture image F.
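- As an illustration of this combination step, here is a minimal numpy sketch (names such as build_sampling_area are ours, not from the disclosure) that composes six equally sized texture images into the 2-row, 3-column sampling area of Figure 4 and records each texture's first position:

```python
# Sketch only: assumes six equally sized H x W x 3 textures in the order
# A..F and the row-major layout of Figure 4 (A, B, C on row 0; D, E, F on row 1).
import numpy as np

def build_sampling_area(textures, rows=2, cols=3):
    h, w = textures[0].shape[:2]
    atlas = np.zeros((rows * h, cols * w, 3), dtype=textures[0].dtype)
    positions = {}  # destIndex -> (targetRow, targetCol): the "first position"
    for idx, tex in enumerate(textures):
        r, c = divmod(idx, cols)  # row-major placement, an assumed convention
        atlas[r * h:(r + 1) * h, c * w:(c + 1) * w] = tex
        positions[idx] = (r, c)
    return atlas, positions

textures = [np.full((256, 256, 3), i * 40, np.uint8) for i in range(6)]
atlas, positions = build_sampling_area(textures)
print(atlas.shape)   # (512, 768, 3): 2 rows x 3 columns of 256-px tiles
print(positions[3])  # (1, 0): texture D sits at row 2, column 1 (1-based)
```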
- optionally, determining the mapping area corresponding to the model face according to the sampling area and the offset coefficient may specifically be: for any model face, determining the first position, in the sampling area, of the texture image corresponding to the model face.
- for example, assuming the sampling area corresponding to the cube model is the one shown in Figure 4, if the model face is the bottom face, the first position of the texture image corresponding to the bottom face is row 2, column 1 of the sampling area.
- the vertex texture coordinates include the vertex texture abscissa coordinate and the vertex texture ordinate coordinate.
- a cube model includes 4 vertex texture coordinates per model face.
- the minimum value of the vertex texture abscissa coordinate and the vertex texture ordinate coordinate is 0, and the maximum value is 1.
- the model surface of the cube model includes four vertex texture coordinates, where the four vertex texture coordinates are (0,0), (0,1), (1,0) and (1,1) respectively.
- Vertex sampling coordinates are determined in the sampling area according to the vertex texture coordinates, the offset coefficient and the first position.
- the vertex sampling coordinates are the vertex coordinates of the mapping area.
- each vertex texture coordinate of the model face has corresponding vertex sampling coordinates in the mapping area, and the mapping area can be obtained in the sampling area through the vertex sampling coordinates.
- the vertex sampling coordinates can be determined according to the following feasible implementation methods: determine the abscissa of the vertex sampling coordinates based on the vertex texture coordinates, the first position, the number of columns in the sampling area, and the offset coefficient.
- the first position can be represented by the rows and columns of the sampling area.
- for example, the first position may be the image position at row 1, column 1 of the sampling area, or the image position at row 2, column 3 of the sampling area.
- the abscissa of the vertex sampling coordinates can be determined according to the following formula:
- x′ = x/columnNum + targetCol/columnNum × scalePercent.x
- where targetCol indicates the column of the vertex sampling coordinates in the sampling area and can be derived from destIndex, the sequence number of the texture image in the sampling area (for example, when N is 6, the value range of destIndex is 0-5); columnNum is the number of columns in the sampling area; scalePercent.x is the offset coefficient in the x direction; x is the abscissa of the vertex texture coordinates; and x′ is the abscissa of the vertex sampling coordinates.
- it should be noted that the above formula is only used as an example to illustrate the method of obtaining the abscissa of the vertex sampling coordinates.
- the ordinate of the vertex sampling coordinate is determined according to the vertex texture coordinate, the first position, the number of columns of the sampling area, the number of rows of the sampling area, and the offset coefficient.
- the ordinate of the vertex sampling coordinates can be determined according to the following formula:
- y′ = y/rowNum + targetRow/rowNum × scalePercent.y
- where targetRow indicates the row of the vertex sampling coordinates in the sampling area and can be derived from destIndex, the sequence number of the texture image in the sampling area (for example, when N is 6, the value range of destIndex is 0-5); columnNum is the number of columns in the sampling area; rowNum is the number of rows in the sampling area; scalePercent.y is the offset coefficient in the y direction; y is the ordinate of the vertex texture coordinates; and y′ is the ordinate of the vertex sampling coordinates.
- it should be noted that the above formula is only used as an example to illustrate the method of obtaining the ordinate of the vertex sampling coordinates.
- other formulas or methods can also be used to determine the row of the vertex sampling coordinates in the sampling area (for example, when the value range of destIndex is different, targetRow can be calculated through other formulas, or the ordinate of the sampling vertex can be obtained through a shader), which is not limited in the embodiments of the present disclosure.
- in this way, the vertex sampling coordinates are obtained.
- it should be noted that the pixel spacing (pixel difference) between texture images can be accurately determined through the sampling area.
- the terminal device can adjust the vertex sampling coordinates of each model face according to the offset coefficient, and then flexibly obtain the mapping area of each model face in the sampling area according to the vertex sampling coordinates, which improves the flexibility and accuracy of obtaining the mapping area and thereby the display effect of the three-dimensional model.
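- Read literally (standard operator precedence, and targetRow/targetCol derived from destIndex in row-major order; both are our assumptions, since the disclosure leaves them open), the two formulas above can be transcribed as the following Python sketch:

```python
# Sketch of the sampling-coordinate formulas as printed above; not an official
# implementation. scale_x/scale_y play the role of scalePercent.x/scalePercent.y.
def vertex_sampling_coords(x, y, dest_index, row_num, col_num, scale_x, scale_y):
    # Assumed derivation of the tile position from the texture's sequence number.
    target_row, target_col = divmod(dest_index, col_num)
    # x' = x/columnNum + targetCol/columnNum * scalePercent.x
    xs = x / col_num + target_col / col_num * scale_x
    # y' = y/rowNum + targetRow/rowNum * scalePercent.y
    ys = y / row_num + target_row / row_num * scale_y
    return xs, ys

# e.g. the (1, 1) vertex texture coordinate of the bottom face (destIndex 3)
# in a 2x3 sampling area, with no offset applied (scale factors of 1.0):
print(vertex_sampling_coords(1.0, 1.0, 3, row_num=2, col_num=3,
                             scale_x=1.0, scale_y=1.0))  # (0.333..., 1.0)
```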
- next, the mapping area corresponding to the model face is determined according to the vertex sampling coordinates and the first position.
- optionally, when the number of rows in the sampling area is 2, the mapping area corresponding to the model face can be determined based on the vertex sampling coordinates and the first position in the following two cases:
- Case 1: the first position is located in row 1 of the sampling area.
- in this case, the mapping area corresponding to the model face is determined based on the vertex sampling coordinates. For example, when the sampling area includes 2 rows and 3 columns, it contains 6 texture images; if the texture image is located in row 1, the mapping area corresponding to the model face is determined based on the vertex sampling coordinates. Optionally, if the texture image is located in the first row of the sampling area, a first area corresponding to the vertex sampling coordinates is determined in the sampling area, and the first area is determined as the mapping area corresponding to the model face.
- in this way, an area can be obtained in the sampling area through the vertex sampling coordinates and determined as the mapping area of the model face, so that when the vertex sampling coordinates are located in the first row, the terminal device can quickly and accurately determine the mapping area of the model face.
- FIG. 5 is a schematic diagram of a process for determining a mapping area provided by an embodiment of the present disclosure. Referring to Figure 5, it includes a texture image and the cube model.
- the front face of the cube model includes texture vertex A, texture vertex B, texture vertex C and texture vertex D.
- the texture image is the image corresponding to the front face of the cube model.
- the texture image includes sampling vertex E, sampling vertex F, sampling vertex G and sampling vertex H.
- texture vertex A corresponds to sampling vertex E
- texture vertex B corresponds to sampling vertex F
- texture vertex C corresponds to sampling vertex G
- texture vertex D corresponds to sampling vertex H.
- the area surrounded by the sampling vertex E, the sampling vertex F, the sampling vertex G, and the sampling vertex H is the mapping area of the front of the cube model.
- the size of the mapping area can be flexibly adjusted to avoid cracks between the model surfaces, improve the three-dimensional image display effect, and improve the flexibility of three-dimensional image acquisition.
- Case 2: the first position is located in row 2 of the sampling area.
- in this case, the vertex sampling coordinates are flipped by a preset angle to obtain target vertex sampling coordinates, and the mapping area corresponding to the model face is determined based on the target vertex sampling coordinates. For example, in practice, if the texture image is located in row 2 of the sampling area, the texture image is stored in the sampling area rotated 90 degrees to the right. Therefore, when the texture image is located in row 2, the obtained vertex sampling coordinates can be rotated 90 degrees to the left to obtain the target vertex sampling coordinates, and the forward-displayed mapping area is then obtained through the target vertex sampling coordinates.
- the vertex sampling coordinates can be flipped by the preset angle according to the following formulas:
- xyscale = vec2(1.0/columnNum, 1.0/rowNum)
- xyBegin = vec2(targetCol/columnNum, targetRow/rowNum)
- xyEnd = vec2(xyscale.x + targetCol/columnNum, xyscale.y + targetRow/rowNum)
- x′ = (y′ − xyBegin.y) × xyscale.x/xyscale.y + xyBegin.x
- y′ = (xyEnd.x − x′) × xyscale.y/xyscale.x + xyBegin.y
- where targetRow indicates the row of the vertex sampling coordinates in the sampling area; targetCol indicates the column of the vertex sampling coordinates in the sampling area; columnNum is the number of columns in the sampling area; rowNum is the number of rows in the sampling area; xyscale is the two-dimensional xy scale vector (for example, if rowNum is 2, xyscale.y is 0.5); xyBegin is the two-dimensional vector of the xy starting position; xyEnd is the two-dimensional vector of the xy ending position; x′ is the abscissa of the vertex sampling coordinates; and y′ is the ordinate of the vertex sampling coordinates.
- in this way, the vertex sampling coordinates can be flipped by a preset angle (for example, a 90-degree flip) through the above formulas.
- it should be noted that the above formulas are only examples and do not limit the flipping method; other methods can also be used to flip the coordinates (for example, flipping the sampling vertex coordinates through a shader), which is not limited in the embodiments of the present disclosure.
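- As a hedged transcription of the flip formulas (reading the two assignments sequentially, so that the y′ line uses the already-updated x′, and using 0-based row/column indices; both are our assumptions):

```python
# Sketch of the row-2 remap; xs/ys are the vertex sampling coordinates before
# the flip, and the returned pair is the target vertex sampling coordinates.
def flip_vertex(xs, ys, target_row, target_col, row_num, col_num):
    xyscale = (1.0 / col_num, 1.0 / row_num)
    xy_begin = (target_col / col_num, target_row / row_num)
    xy_end = (xy_begin[0] + xyscale[0], xy_begin[1] + xyscale[1])
    # x' = (y' - xyBegin.y) * xyscale.x / xyscale.y + xyBegin.x
    new_x = (ys - xy_begin[1]) * xyscale[0] / xyscale[1] + xy_begin[0]
    # y' = (xyEnd.x - x') * xyscale.y / xyscale.x + xyBegin.y
    # (read sequentially, i.e. using the already-updated x' -- an assumption)
    new_y = (xy_end[0] - new_x) * xyscale[1] / xyscale[0] + xy_begin[1]
    return new_x, new_y

# e.g. a vertex of the tile at row 2, column 1 of a 2x3 sampling area
# (0-based target_row=1, target_col=0):
print(flip_vertex(0.0, 0.5, target_row=1, target_col=0, row_num=2, col_num=3))
```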
- optionally, determining the mapping area corresponding to the model face according to the target vertex sampling coordinates may specifically be: determining a second area corresponding to the target vertex sampling coordinates in the sampling area, and determining the second area as the mapping area corresponding to the model face.
- an area can be obtained in the sampling area, and then the area can be determined as the mapping area of the model surface.
- the mapping area for forward display can be obtained and the display effect of the three-dimensional model can be improved.
- S203: Map the mapping areas in the N texture images onto the three-dimensional model to obtain the three-dimensional image corresponding to the first image.
- optionally, the mapping area of each texture image can be mapped onto the model face of the three-dimensional model corresponding to that texture image to obtain a three-dimensional image corresponding to the first image. For example, if the mapping area of texture image A is area A and the model face corresponding to texture image A is model face A, area A is mapped onto model face A; if the mapping area of texture image B is area B and the model face corresponding to texture image B is model face B, area B is mapped onto model face B.
- each mapping area can be mapped to the corresponding model surface to obtain a three-dimensional image corresponding to the first image.
- the terminal device can process each frame of the video to obtain a three-dimensional image corresponding to the video.
- for example, the terminal device can obtain the 6 texture images corresponding to each frame of the video in real time, determine the 6 mapping areas corresponding to each frame through the correspondence and the offset coefficient, and map each mapping area onto the corresponding model face of the cube model to obtain the VR video corresponding to the video.
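- A minimal end-to-end sketch of this per-frame pipeline is given below; extract_faces and paste_on_cube are placeholders standing in for the preset algorithm and the rendering engine (neither is specified by the disclosure), and the whole-pixel crop is a stand-in for the sub-pixel 0.5-pixel crop discussed above:

```python
import numpy as np

def extract_faces(frame):
    # Placeholder for the preset algorithm that yields the 6 texture images.
    return [np.zeros((256, 256, 3), np.uint8) for _ in range(6)]

def crop_mapping_area(tex, crop_px=1):
    # Offset-coefficient crop approximated as a whole-pixel crop per side.
    h, w = tex.shape[:2]
    return tex[crop_px:h - crop_px, crop_px:w - crop_px]

def paste_on_cube(areas):
    # Placeholder for mapping the 6 areas onto the cube model's faces.
    return areas

frames = [np.zeros((512, 512, 3), np.uint8)]  # stand-in for decoded video frames
vr_frames = [paste_on_cube([crop_mapping_area(t) for t in extract_faces(f)])
             for f in frames]
print(vr_frames[0][0].shape)  # (254, 254, 3): 1 pixel cropped from each side
```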
- Embodiments of the present disclosure provide an image processing method that determines N texture images corresponding to a first image and the correspondence between the N texture images and N model surfaces of a three-dimensional model; combines the N texture images into an area of L rows and K columns to obtain a sampling area; determines, according to the sampling area and an offset coefficient, the mapping area corresponding to each model face, where the mapping area is at least a partial area of the texture image; and maps the mapping areas in the N texture images onto the three-dimensional model to obtain a three-dimensional image corresponding to the first image.
- in this way, the terminal device can flexibly obtain the mapping areas in the texture images through the correspondence and the offset coefficient, which can not only improve the display effect of the VR image, but also improve the flexibility of obtaining the VR image.
- the above image processing method also includes a method of determining an offset coefficient.
- the method of determining the offset coefficient will be described with reference to FIG. 6 .
- FIG. 6 is a schematic flowchart of a method for determining an offset coefficient according to an embodiment of the present disclosure. Referring to Figure 6, the method flow includes:
- obtain the preset area size of the edges of the N texture images.
- the preset area can be the black border area of the texture image.
- for example, when the quality of the first image is poor, black border areas may also appear at the edges of the extracted texture images.
- when a sampling area is composed of the N texture images, narrow black borders (caused by pixel differences) may also appear at the junctions between the edges of the texture images.
- optionally, the preset area size can be obtained through an image detection algorithm. For example, the size of the black border area at the edge of a texture image can be determined through the image detection algorithm, and the size of the black border areas in the sampling area can also be detected in the same way.
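- The disclosure does not specify the detection algorithm; as one hedged illustration, the following numpy sketch measures how many complete outer pixel rings of a texture image are black:

```python
# Rough sketch: "black" means every channel is below a small threshold.
import numpy as np

def black_border_width(tex, threshold=8):
    dark = (tex < threshold).all(axis=-1)  # per-pixel "is black" mask
    width = 0
    while width < min(dark.shape) // 2:
        ring = (dark[width, :].all() and dark[-1 - width, :].all()
                and dark[:, width].all() and dark[:, -1 - width].all())
        if not ring:
            break
        width += 1  # the outermost remaining ring is fully black: grow
    return width

tex = np.full((64, 64, 3), 200, np.uint8)
tex[:2, :] = 0   # paint a 2-pixel black frame around the texture
tex[-2:, :] = 0
tex[:, :2] = 0
tex[:, -2:] = 0
print(black_border_width(tex))  # 2
```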
- the offset coefficient can be determined according to the following feasible implementation methods: obtain the first preset relationship.
- the first preset relationship includes at least one area size and a coefficient corresponding to each area size.
- the first preset relationship can be as shown in Table 1:
- Table 1 only illustrates the first preset relationship in the form of an example and does not limit the first preset relationship.
- the offset coefficient is determined according to the preset area size and the first preset relationship. For example, if the preset area size is area size 1, then the offset coefficient corresponding to the preset area size is coefficient 1; if the preset area size is area size 2, then the offset coefficient corresponding to the preset area size is coefficient 2; If the preset area size is area size 3, then the offset coefficient corresponding to the preset area size is coefficient 3.
- the texture image can be scaled using an offset coefficient to obtain the mapping area.
- when the offset coefficient is greater than 1, a mapping area larger than the texture image size can be obtained in the sampling area, and when the offset coefficient is less than 1, a mapping area smaller than the texture image size can be obtained in the sampling area.
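- Since the contents of Table 1 are not reproduced here, the following sketch only illustrates the lookup itself; the sizes and coefficients are placeholder values, not values from the disclosure:

```python
# Hypothetical "first preset relationship": border width (px) -> offset coefficient.
PRESET_RELATIONSHIP = {
    0: 1.0,    # no black border: keep the full texture
    1: 0.996,  # placeholder values for illustration only
    2: 0.992,
}

def offset_coefficient(border_width_px):
    # Clamp to the largest size in the table if the measured width exceeds it.
    key = min(border_width_px, max(PRESET_RELATIONSHIP))
    return PRESET_RELATIONSHIP[key]

print(offset_coefficient(1))  # 0.996
```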
- the embodiments of the present disclosure provide a method for determining the offset coefficient: obtain the preset area size of the edges of the N texture images, obtain the first preset relationship, and determine the offset coefficient based on the preset area size and the first preset relationship.
- in this way, the terminal device can flexibly adjust the offset coefficient according to the preset area size, and then flexibly and accurately obtain a mapping area that does not include black borders in the texture image. Since the mapping area does not include black borders, the display effect of VR images can be improved, thereby improving the flexibility of obtaining VR images.
- FIG. 7 is a schematic process diagram of an image processing method provided by an embodiment of the present disclosure. Referring to Figure 7, it includes the first image and the cube model.
- the first image is processed through a preset algorithm to obtain a texture image corresponding to the first image, where the texture image includes texture image A, texture image B, texture image C, texture image D, texture image E, and texture image F.
- the texture images are combined to obtain a sampling area of 2 rows and 3 columns.
- texture image A corresponds to the left side
- texture image B corresponds to the front side
- texture image C corresponds to the right side
- texture image D corresponds to the bottom side
- texture image E corresponds to the back side
- texture image F corresponds to the top side.
- according to the correspondence and the offset coefficient, the mapping area corresponding to each model face is determined in the sampling area.
- for example, the mapping area A corresponding to the left face is obtained at row 1, column 1 of the sampling area.
- in the same way, the mapping area corresponding to each face of the cube model can be obtained in the sampling area (for example, the mapping area corresponding to the top face is mapping area F). Each mapping area is then mapped onto the corresponding face of the cube model to obtain a three-dimensional image corresponding to the first image.
- mapping area A is mapped to the left side
- mapping area B is mapped to the front surface
- mapping area C is mapped to the right side
- mapping area D is mapped to the bottom surface
- mapping area E is mapped to the back surface
- mapping area F is mapped to the top surface.
- in this way, the range of the mapping area corresponding to each model face can be flexibly adjusted according to the correspondence and the offset coefficient, and the size difference between the mapping area and the texture image can be kept within 1 pixel (for example, 0.5 pixels can be cropped from each edge of the texture image, leaving the mapping area), thereby avoiding cracks between the faces of the three-dimensional model, improving the display effect of the three-dimensional image, and improving the flexibility of obtaining the three-dimensional image.
- FIG. 8 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure.
- the image processing device 10 includes a first determination module 11, a second determination module 12 and a mapping module 13, wherein:
- the first determination module 11 is used to determine the N texture images corresponding to the first image and the correspondence between the N texture images and the N model surfaces of the three-dimensional model, where the N is an integer greater than 1;
- the second determination module 12 is configured to determine, according to the correspondence and the offset coefficient, the mapping area corresponding to each model face in the N texture images, the mapping area being at least a partial area of the texture image;
- the mapping module 13 is configured to map the mapping areas in the N texture images to the three-dimensional model to obtain a three-dimensional image corresponding to the first image.
- the second determination module 12 is specifically used to:
- combine the N texture images into an area of L rows and K columns according to the correspondence to obtain a sampling area, where the product of the L and the K is the N, and the L and the K are positive integers;
- determine the mapping area corresponding to the model face according to the sampling area and the offset coefficient.
- the second determination module 12 is specifically used to:
- for any model face, determine the first position, in the sampling area, of the texture image corresponding to the model face; determine vertex sampling coordinates in the sampling area according to the vertex texture coordinates, the offset coefficient and the first position; and determine the mapping area corresponding to the model face according to the vertex sampling coordinates and the first position.
- the second determination module 12 is specifically used to:
- determine the abscissa of the vertex sampling coordinates according to the vertex texture coordinates, the first position, the number of columns of the sampling area and the offset coefficient; determine the ordinate of the vertex sampling coordinates according to the vertex texture coordinates, the first position, the number of columns of the sampling area, the number of rows of the sampling area and the offset coefficient; and obtain the vertex sampling coordinates according to the abscissa and the ordinate of the vertex sampling coordinates.
- the second determination module 12 is specifically used to:
- if the first position is located in row 1 of the sampling area, determine the mapping area corresponding to the model face according to the vertex sampling coordinates;
- if the first position is located in row 2 of the sampling area, flip the vertex sampling coordinates by a preset angle to obtain target vertex sampling coordinates, and determine the mapping area corresponding to the model face based on the target vertex sampling coordinates.
- the second determination module 12 is specifically used to:
- determine a first area corresponding to the vertex sampling coordinates in the sampling area, and determine the first area as the mapping area corresponding to the model face.
- the second determination module 12 is specifically used to:
- determine a second area corresponding to the target vertex sampling coordinates in the sampling area, and determine the second area as the mapping area corresponding to the model face.
- the image processing device provided in this embodiment can be used to execute the technical solutions of the above method embodiments. Its implementation principles and technical effects are similar, and will not be described again in this embodiment.
- FIG. 9 is a schematic structural diagram of another image processing device provided by an embodiment of the present disclosure. Based on the embodiment shown in Figure 8, please refer to Figure 9.
- the image processing device 10 also includes an acquisition module 14, which is used to:
- obtain the preset area size of the edges of the N texture images, and determine the offset coefficient according to the preset area size.
- the acquisition module 14 is specifically used to:
- obtain a first preset relationship, which includes at least one area size and the coefficient corresponding to each area size;
- the offset coefficient is determined according to the preset area size and the first preset relationship.
- the image processing device provided in this embodiment can be used to execute the technical solutions of the above method embodiments. Its implementation principles and technical effects are similar, and will not be described again in this embodiment.
- FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
- the electronic device 900 may be a terminal device or a server.
- terminal devices may include, but are not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs) and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
- the electronic device shown in FIG. 10 is only an example and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
- the electronic device 900 may include a processing device (such as a central processing unit or a graphics processor) 901, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage device 908 into a random access memory (RAM) 903. The RAM 903 also stores various programs and data required for the operation of the electronic device 900.
- the processing device 901, ROM 902 and RAM 903 are connected to each other via a bus 904.
- An input/output (I/O) interface 905 is also connected to bus 904.
- the following devices can be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer and gyroscope; output devices 907 including, for example, a liquid crystal display (LCD), a speaker and a vibrator; storage devices 908 including, for example, a magnetic tape and a hard disk; and a communication device 909.
- the communication device 909 may allow the electronic device 900 to communicate wirelessly or wiredly with other devices to exchange data.
- although FIG. 10 illustrates the electronic device 900 with various means, it should be understood that implementing or providing all of the illustrated means is not required; more or fewer means may alternatively be implemented or provided.
- embodiments of the present disclosure include a computer program product including a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
- the computer program may be downloaded and installed from the network via communication device 909, or from storage device 908, or from ROM 902.
- when the computer program is executed by the processing device 901, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
- the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
- the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof.
- more specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein.
- Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
- a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
- program code contained on a computer-readable medium can be transmitted using any appropriate medium, including but not limited to: wires, optical cables, radio frequency (RF), etc., or any suitable combination of the above.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
- the computer-readable medium carries one or more programs; when the one or more programs are executed by the electronic device, the electronic device performs the methods shown in the above embodiments.
- computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as "C" or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- in the case involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, via the Internet using an Internet service provider).
- each block in the flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved.
- each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
- the units involved in the embodiments of the present disclosure can be implemented in software or hardware.
- the name of the unit does not constitute a limitation on the unit itself under certain circumstances.
- the first acquisition unit can also be described as "the unit that acquires at least two Internet Protocol addresses.”
- exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), application specific standard products (ASSP), systems on a chip (SOC), complex programmable logic devices (CPLD), and so on.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
- more specific examples of machine-readable storage media would include: an electrical connection based on one or more wires, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- one or more embodiments of the present disclosure provide an image processing method, which method includes:
- determining N texture images corresponding to a first image, and the correspondence between the N texture images and N model surfaces of a three-dimensional model, where N is an integer greater than 1;
- determining, according to the correspondence and an offset coefficient, the mapping area corresponding to each model face in the N texture images, the mapping area being at least a partial area of the texture image;
- mapping the mapping areas in the N texture images onto the three-dimensional model to obtain a three-dimensional image corresponding to the first image.
- according to one or more embodiments of the present disclosure, the at least partial area is the middle area of the texture image; determining, according to the correspondence and the offset coefficient, the mapping area corresponding to each model face in the N texture images includes:
- combining the N texture images into an area of L rows and K columns according to the correspondence to obtain a sampling area, where the product of the L and the K is the N, and the L and the K are positive integers;
- determining the mapping area corresponding to the model face according to the sampling area and the offset coefficient.
- determining the mapping area corresponding to the model face according to the sampling area and the offset coefficient includes:
- for any model face, determining the first position, in the sampling area, of the texture image corresponding to the model face; determining vertex sampling coordinates in the sampling area according to the vertex texture coordinates, the offset coefficient and the first position; and determining the mapping area corresponding to the model face according to the vertex sampling coordinates and the first position.
- the vertex texture coordinates include a vertex texture abscissa coordinate and a vertex texture ordinate; according to the vertex texture coordinates, the offset coefficient and the first position, in the sampling area Determine the vertex sampling coordinates, including:
- determining the abscissa of the vertex sampling coordinates according to the vertex texture coordinates, the first position, the number of columns of the sampling area and the offset coefficient; determining the ordinate of the vertex sampling coordinates according to the vertex texture coordinates, the first position, the number of columns of the sampling area, the number of rows of the sampling area and the offset coefficient; and obtaining the vertex sampling coordinates according to the abscissa and the ordinate of the vertex sampling coordinates.
- the number of rows in the sampling area is 2; determining the mapping area corresponding to the model plane according to the vertex sampling coordinates and the first position includes:
- if the first position is located in row 1 of the sampling area, determining the mapping area corresponding to the model face according to the vertex sampling coordinates;
- if the first position is located in row 2 of the sampling area, flipping the vertex sampling coordinates by a preset angle to obtain target vertex sampling coordinates, and determining the mapping area corresponding to the model face based on the target vertex sampling coordinates.
- determining the mapping area corresponding to the model surface according to the vertex sampling coordinates includes:
- determining a first area corresponding to the vertex sampling coordinates in the sampling area, and determining the first area as the mapping area corresponding to the model face.
- determining the mapping area corresponding to the model plane according to the target vertex sampling coordinates includes:
- determining a second area corresponding to the target vertex sampling coordinates in the sampling area, and determining the second area as the mapping area corresponding to the model face.
- the method further includes:
- obtaining the preset area size of the edges of the N texture images, and determining the offset coefficient according to the preset area size.
- determining the offset coefficient according to the preset area size includes:
- obtaining a first preset relationship, which includes at least one area size and the coefficient corresponding to each area size;
- the offset coefficient is determined according to the preset area size and the first preset relationship.
- one or more embodiments of the present disclosure provide an image processing apparatus, including a first determination module, a second determination module and a mapping module, wherein:
- the first determination module is used to determine the N texture images corresponding to the first image and the corresponding relationship between the N texture images and the N model surfaces of the three-dimensional model, where the N is an integer greater than 1;
- the second determination module is configured to determine, according to the correspondence and the offset coefficient, the mapping area corresponding to each model face in the N texture images, the mapping area being at least a partial area of the texture image;
- the mapping module is configured to map the mapping areas in the N texture images to the three-dimensional model to obtain a three-dimensional image corresponding to the first image.
- the second determination module is specifically used to:
- combine the N texture images into an area of L rows and K columns according to the correspondence to obtain a sampling area, where the product of the L and the K is the N, and the L and the K are positive integers;
- determine the mapping area corresponding to the model face according to the sampling area and the offset coefficient.
- the second determination module is specifically used to:
- for any model face, determine the first position, in the sampling area, of the texture image corresponding to the model face; determine vertex sampling coordinates in the sampling area according to the vertex texture coordinates, the offset coefficient and the first position; and determine the mapping area corresponding to the model face according to the vertex sampling coordinates and the first position.
- the second determination module is specifically used to:
- determine the abscissa of the vertex sampling coordinates according to the vertex texture coordinates, the first position, the number of columns of the sampling area and the offset coefficient; determine the ordinate of the vertex sampling coordinates according to the vertex texture coordinates, the first position, the number of columns of the sampling area, the number of rows of the sampling area and the offset coefficient; and obtain the vertex sampling coordinates according to the abscissa and the ordinate.
- the second determination module is specifically used to:
- if the first position is located in row 1 of the sampling area, determine the mapping area corresponding to the model face according to the vertex sampling coordinates;
- if the first position is located in row 2 of the sampling area, flip the vertex sampling coordinates by a preset angle to obtain target vertex sampling coordinates, and determine the mapping area corresponding to the model face based on the target vertex sampling coordinates.
- the second determination module is specifically used to:
- determine a first area corresponding to the vertex sampling coordinates in the sampling area, and determine the first area as the mapping area corresponding to the model face.
- the second determination module is specifically used to:
- determine a second area corresponding to the target vertex sampling coordinates in the sampling area, and determine the second area as the mapping area corresponding to the model face.
- the image processing device further includes an acquisition module, where the acquisition module is used to:
- obtain the preset area size of the edges of the N texture images, and determine the offset coefficient according to the preset area size.
- the acquisition module is specifically used to:
- obtain a first preset relationship, which includes at least one area size and the coefficient corresponding to each area size;
- the offset coefficient is determined according to the preset area size and the first preset relationship.
- embodiments of the present disclosure provide an electronic device, including: a processor and a memory;
- the memory stores computer-executable instructions;
- the processor executes the computer-executable instructions stored in the memory, causing the at least one processor to execute the image processing method described in the first aspect and its various possible designs.
- embodiments of the present disclosure provide a computer-readable storage medium.
- Computer-executable instructions are stored in the computer-readable storage medium; when a processor executes the computer-executable instructions, the image processing method described in the first aspect and its various possible designs is implemented.
- embodiments of the present disclosure provide a computer program product, including a computer program that, when executed by a processor, implements the image processing method described in the first aspect and various possible designs of the first aspect.
- embodiments of the present disclosure provide a computer program that, when executed by a processor, implements the image processing method described in the first aspect and various possible designs of the first aspect.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Geometry (AREA)
- Image Generation (AREA)
Abstract
An image processing method, an apparatus (10), and an electronic device (900). The method includes: determining N texture images corresponding to a first image, and the correspondence between the N texture images and N model surfaces of a three-dimensional model, N being an integer greater than 1 (S201); determining, according to the correspondence and an offset coefficient, the mapping area corresponding to each model face in the N texture images, the mapping area being at least a partial area of the texture image (S202); and mapping the mapping areas in the N texture images onto the three-dimensional model to obtain a three-dimensional image corresponding to the first image (S203). This improves the display effect of the three-dimensional image and the flexibility of acquiring the three-dimensional image.
Description
Cross-reference to related applications
The present disclosure claims priority to the Chinese patent application filed with the China Patent Office on March 16, 2022, with application number 202210262157.6 and titled "Image processing method, apparatus, and electronic device", the entire contents of which are incorporated into the present disclosure by reference.
The present disclosure relates to the field of image processing technology, and in particular to an image processing method, apparatus, electronic device, computer-readable storage medium, computer program product, and computer program.
A terminal device can produce virtual reality (VR) videos through cube mapping. For example, the terminal device pastes the 6 texture images corresponding to a two-dimensional image onto the six faces of a cube to obtain a VR image corresponding to the two-dimensional image, and thereby obtains a virtual reality video.
At present, the vertex texture coordinates of each face of a model can be preset, and the model can obtain multiple areas in the texture images according to the preset vertex texture coordinates and map them onto the model to obtain a VR image. For example, when a cube model receives 6 texture images, it can obtain a target area in each of the 6 texture images according to the preset vertex texture coordinates of each face, and then map the 6 target areas onto the 6 faces to obtain a VR image. However, with this method, the areas the model obtains in the texture images according to the preset vertex texture coordinates are fixed, which results in poor flexibility in obtaining VR images.
Summary
The present disclosure provides an image processing method, apparatus, electronic device, computer-readable storage medium, computer program product, and computer program, to solve the technical problem in the prior art of poor flexibility in obtaining VR images.
In a first aspect, the present disclosure provides an image processing method, which includes:
determining N texture images corresponding to a first image, and the correspondence between the N texture images and N model surfaces of a three-dimensional model, where N is an integer greater than 1;
determining, according to the correspondence and an offset coefficient, the mapping area corresponding to each model face in the N texture images, the mapping area being at least a partial area of the texture image;
mapping the mapping areas in the N texture images onto the three-dimensional model to obtain a three-dimensional image corresponding to the first image.
In a second aspect, the present disclosure provides an image processing apparatus, which includes a first determination module, a second determination module, and a mapping module, wherein:
the first determination module is configured to determine N texture images corresponding to a first image, and the correspondence between the N texture images and N model surfaces of a three-dimensional model, where N is an integer greater than 1;
the second determination module is configured to determine, according to the correspondence and an offset coefficient, the mapping area corresponding to each model face in the N texture images, the mapping area being at least a partial area of the texture image;
the mapping module is configured to map the mapping areas in the N texture images onto the three-dimensional model to obtain a three-dimensional image corresponding to the first image.
In a third aspect, embodiments of the present disclosure provide an electronic device, including a processor and a memory;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, causing the at least one processor to execute the image processing method described in the first aspect and its various possible designs.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the image processing method described in the first aspect and its various possible designs is implemented.
In a fifth aspect, embodiments of the present disclosure provide a computer program product, including a computer program which, when executed by a processor, implements the image processing method described in the first aspect and its various possible designs.
In a sixth aspect, embodiments of the present disclosure provide a computer program which, when executed by a processor, implements the image processing method described in the first aspect and its various possible designs.
The present disclosure provides an image processing method, apparatus, electronic device, computer-readable storage medium, computer program product, and computer program, which determine N texture images corresponding to a first image and the correspondence between the N texture images and N model surfaces of a three-dimensional model, where N is an integer greater than 1; determine, according to the correspondence and an offset coefficient, the mapping area corresponding to each model face in the N texture images, the mapping area being at least a partial area of the texture image; and map the mapping areas in the N texture images onto the three-dimensional model to obtain a three-dimensional image corresponding to the first image. In this method, the terminal device can flexibly obtain the mapping areas in the texture images through the correspondence and the offset coefficient, thereby improving the flexibility of obtaining VR images and the display effect of VR images.
FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a process of determining a correspondence provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a process of obtaining a sampling area provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a process of determining a mapping area provided by an embodiment of the present disclosure;
FIG. 6 is a schematic flowchart of a method for determining an offset coefficient provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a process of an image processing method provided by an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present disclosure;
FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
It should be noted that, in this document, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes the element.
In the related art, a terminal device can paste multiple texture images corresponding to a two-dimensional image onto model faces to obtain a VR image of the two-dimensional image. At present, the vertex texture coordinates of each face of a model can be preset; the model can obtain multiple areas in the texture images according to the preset vertex texture coordinates and map them onto the model faces. For example, when a cube model receives six texture images, it can obtain a target area in each texture image according to the preset texture coordinates of each model face, and then map the six resulting target areas onto the corresponding model faces. However, when the quality of the texture images is poor, the model can only obtain areas in the texture images according to the fixed preset vertex texture coordinates, which makes obtaining the VR image inflexible.
To solve the technical problem of poor flexibility in obtaining VR images in the related art, an embodiment of the present disclosure provides an image processing method: determining N texture images corresponding to a first image and a correspondence between the N texture images and N model faces of a three-dimensional model; combining, according to the correspondence, the N texture images into an area of L rows and K columns to obtain a sampling area; determining, according to the sampling area and an offset coefficient, the mapping area corresponding to each model face, where the mapping area is at least a partial area of the texture image; and mapping the mapping areas in the N texture images onto the three-dimensional model to obtain a three-dimensional image corresponding to the first image. In this way, even when the quality of the texture images is poor, the terminal device can flexibly obtain the mapping areas in the texture images through the correspondence and the offset coefficient, which not only improves the display effect of the VR image but also improves the flexibility of obtaining it.
The application scenario of the present disclosure will be described below with reference to FIG. 1.
FIG. 1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure. Referring to FIG. 1, it includes a first image and a three-dimensional model, where the three-dimensional model is a cube model. Texture images corresponding to the first image are obtained through a preset algorithm, where the texture images include texture image A, texture image B, texture image C, texture image D, texture image E, and texture image F. The correspondence between each texture image and a model face of the cube model is determined; six mapping areas are determined in the texture images through the correspondence and an offset coefficient, and the mapping areas are mapped onto the corresponding model faces of the cube model to obtain a three-dimensional image corresponding to the first image. In this way, when obtaining the mapping area of a model face, the obtained mapping area can be flexibly adjusted through the correspondence and the offset coefficient, which avoids seams between the model faces of the mapped three-dimensional image, improves the display effect of the three-dimensional image, and improves the flexibility of obtaining it.
The technical solution of the present disclosure and how it solves the above technical problem will be described in detail below with specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present disclosure will be described below with reference to the accompanying drawings.
FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure. Referring to FIG. 2, the method may include:
S201: Determine N texture images corresponding to a first image, and a correspondence between the N texture images and N model faces of a three-dimensional model.
The execution subject of the embodiments of the present disclosure may be a terminal device, or an image processing apparatus provided in the terminal device. The image processing apparatus may be implemented by software, or by a combination of software and hardware.
Optionally, the first image may be an image in a video. For example, when converting a video into a VR video, the first image may be each frame of the video; by converting each frame into a VR image, the VR video corresponding to the video is obtained.
Optionally, the first image may also be a depth image. For example, the first image may be any spatial image with image depth, such as a landscape image or a scene image; by processing the first image, it can be converted into a VR image.
Optionally, the texture image may be an image that includes the texture of the first image, where the texture indicates the variations of color and grayscale in the first image. For example, an image may appear irregular locally while exhibiting a regular pattern overall; this characteristic of local irregularity with macroscopic regularity is called texture.
Optionally, the N texture images corresponding to the first image may be determined according to the following feasible implementation: processing the first image through a preset algorithm to obtain the N texture images corresponding to the first image, where N is an integer greater than 1. For example, the texture images corresponding to a video frame can be obtained from the video frame through a model whose engine uses the rendering command buffer (CommandBuffer) technique. For example, when N is 6, six texture images corresponding to the first image can be obtained through the CommandBuffer technique.
Optionally, the three-dimensional model may be a cube model. For example, the structure of the three-dimensional model may be a cube; when the three-dimensional model is a cube model, it may include six model faces. Optionally, the correspondence indicates the relation between the texture images and each model face. For example, when the three-dimensional model is a cube model, it includes six model faces, the number of texture images corresponding to the first image is 6, and each texture image has a corresponding model face.
Optionally, when the preset algorithm obtains the N texture images corresponding to the first image, it can output the correspondence between the N texture images and the N model faces of the three-dimensional model. For example, the first texture image output by the preset algorithm corresponds to the left face of the cube model, the second to the front face, the third to the right face, the fourth to the bottom face, the fifth to the back face, and the sixth to the top face.
The process of determining the correspondence will be described below with reference to FIG. 3.
FIG. 3 is a schematic diagram of a process of determining a correspondence provided by an embodiment of the present disclosure. Referring to FIG. 3, it includes a first image and a cube model, where the cube model is unfolded into six faces. The first image is processed through a preset algorithm to obtain six corresponding texture images, where texture image A is the first texture image, texture image B is the second, texture image C is the third, texture image D is the fourth, texture image E is the fifth, and texture image F is the sixth.
Referring to FIG. 3, texture image A corresponds to the left face of the cube model, texture image B to the front face, texture image C to the right face, texture image D to the bottom face, texture image E to the back face, and texture image F to the top face.
S202: Determine, according to the correspondence and an offset coefficient, a mapping area corresponding to each model face in the N texture images.
Optionally, the mapping area is at least a partial area of the texture image. For example, the mapping area may be the upper, lower, or central area of the texture image; the three-dimensional model can flexibly obtain the mapping area in the texture image, thereby improving the flexibility of image mapping.
Optionally, the at least partial area may be the central area of the texture image. For example, the mapping area of a model face may be the central area of the texture image (e.g., the area remaining after 0.5 pixel is trimmed from each edge of the texture image), so that the mapping area of the model face can be obtained accurately, avoiding seams between model faces and improving the display effect of the three-dimensional model.
Optionally, the offset coefficient is used to adjust the size of the mapping area. For example, through the offset coefficient, the terminal device can trim 0.5 pixel from each side of the texture image to obtain the mapping area. Optionally, the mapping area corresponding to each model face may be determined in the N texture images according to the following feasible implementation: combining, according to the correspondence, the N texture images into an area of L rows and K columns to obtain a sampling area, and determining the mapping area corresponding to the model face according to the sampling area and the offset coefficient, where the product of L and K is N, and L and K are positive integers. For example, if the number of texture images corresponding to the first image is 6, the terminal device can combine the six texture images into a sampling area of 2 rows and 3 columns.
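For illustration only, the following NumPy sketch shows one way to tile six equally sized texture images into a sampling area of 2 rows and 3 columns (the array shapes, the row-major placement order A, B, C / D, E, F, and the function name are assumptions of this sketch, not details prescribed by the disclosure):

```python
import numpy as np

def build_sampling_area(textures, rows=2, cols=3):
    """Tile equally sized HxWx3 texture images into a rows x cols atlas."""
    h, w = textures[0].shape[:2]
    atlas = np.zeros((rows * h, cols * w, 3), dtype=textures[0].dtype)
    for idx, tex in enumerate(textures):
        r, c = divmod(idx, cols)  # row-major: A B C on row 1, D E F on row 2
        atlas[r * h:(r + 1) * h, c * w:(c + 1) * w] = tex
    return atlas

# Usage: six 256x256 faces in the order left, front, right, bottom, back, top
faces = [np.full((256, 256, 3), i * 40, dtype=np.uint8) for i in range(6)]
sampling_area = build_sampling_area(faces)  # shape (512, 768, 3)
```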
The process of obtaining the sampling area will be described below with reference to FIG. 4.
FIG. 4 is a schematic diagram of a process of obtaining a sampling area provided by an embodiment of the present disclosure. Referring to FIG. 4, it includes a first image. The first image is processed through a preset algorithm to obtain texture image A, texture image B, texture image C, texture image D, texture image E, and texture image F. Since texture image A corresponds to the left face of the cube model, texture image B to the front face, texture image C to the right face, texture image D to the bottom face, texture image E to the back face, and texture image F to the top face, the six texture images can be combined into a sampling area of 2 rows and 3 columns, where the first row of the sampling area includes texture images A, B, and C, and the second row includes texture images D, E, and F.
Optionally, determining the mapping area corresponding to the model face according to the sampling area and the offset coefficient is specifically: for any model face, determining a first position of the texture image corresponding to the model face in the sampling area. For example, assuming the sampling area corresponding to the cube model is the one shown in FIG. 4, if the model face is the bottom face, the first position of the corresponding texture image in the sampling area is the image in row 2, column 1.
The vertex texture coordinates of the model face are obtained, where the vertex texture coordinates include a vertex texture abscissa and a vertex texture ordinate. For example, each model face of the cube model includes four vertex texture coordinates. Optionally, the minimum value of the vertex texture abscissa and ordinate is 0 and the maximum value is 1. For example, a model face of the cube model includes four vertex texture coordinates: (0, 0), (0, 1), (1, 0), and (1, 1).
Vertex sampling coordinates are then determined in the sampling area according to the vertex texture coordinates, the offset coefficient, and the first position, where the vertex sampling coordinates are the vertex coordinates of the mapping area. For example, each vertex texture coordinate of the model face has a corresponding vertex sampling coordinate in the mapping area, and the mapping area can be obtained in the sampling area through the vertex sampling coordinates. Optionally, the vertex sampling coordinates may be determined according to the following feasible implementation: determining the abscissa of the vertex sampling coordinate according to the vertex texture abscissa, the first position, the number of columns of the sampling area, and the offset coefficient, where the first position can be expressed by the row and column of the sampling area. For example, the first position may be the image position of row 1, column 1 of the sampling area, or the image position of row 2, column 3.
Optionally, the abscissa of the vertex sampling coordinate may be determined according to the following formulas:
targetCol = mod(destIndex, columnNum)
x′ = x/columnNum + targetCol/columnNum × scalePercent.x
where targetCol indicates the column of the vertex sampling coordinate in the sampling area; destIndex is the index of the texture image (e.g., the first texture image of the sampling area; optionally, when N is 6, destIndex ranges from 0 to 5); columnNum is the number of columns of the sampling area; scalePercent.x is the offset coefficient in the x direction; x is the abscissa of the vertex texture coordinate; and x′ is the abscissa of the vertex sampling coordinate.
Optionally, the above formulas merely illustrate, by way of example, a method for obtaining the abscissa of the vertex sampling coordinate; the column of the vertex sampling coordinate in the sampling area may also be determined through other formulas or methods (e.g., when destIndex has a different value range, targetCol can be computed by other formulas, and the sampling vertex abscissa can also be obtained through a shader), which is not limited here.
The ordinate of the vertex sampling coordinate is determined according to the vertex texture ordinate, the first position, the number of columns of the sampling area, the number of rows of the sampling area, and the offset coefficient.
Optionally, the ordinate of the vertex sampling coordinate may be determined according to the following formulas:
targetRow = floor(destIndex/columnNum)
y′ = y/rowNum + targetRow/rowNum × scalePercent.y
where targetRow indicates the row of the vertex sampling coordinate in the sampling area; destIndex is the index of the texture image (e.g., the first texture image of the sampling area; optionally, when N is 6, destIndex ranges from 0 to 5); columnNum is the number of columns of the sampling area; rowNum is the number of rows of the sampling area; scalePercent.y is the offset coefficient in the y direction; y is the ordinate of the vertex texture coordinate; and y′ is the ordinate of the vertex sampling coordinate.
Optionally, the above formulas merely illustrate, by way of example, a method for obtaining the ordinate of the vertex sampling coordinate; the row of the vertex sampling coordinate in the sampling area may also be determined through other formulas or methods (e.g., when destIndex has a different value range, targetRow can also be computed by other formulas, and the sampling vertex ordinate can also be obtained through a shader), which is not limited here.
The vertex sampling coordinate is obtained from its abscissa and ordinate. Through the sampling area, the pixel spacing (pixel difference) between texture images can be determined accurately; when determining the mapping area of a model face, the terminal device can adjust the vertex sampling coordinates of each model face according to the offset coefficient, and then flexibly obtain the mapping area of each model face in the sampling area according to the vertex sampling coordinates, improving the flexibility and accuracy of obtaining the mapping area and thus the display effect of the three-dimensional model.
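As an illustration of the two formulas above, here is a small Python sketch for a sampling area of 2 rows and 3 columns (the function name and the default scalePercent values of 1.0 are assumptions of this sketch; the disclosure only fixes the formulas themselves):

```python
import math

def vertex_sampling_coord(x, y, dest_index, column_num=3, row_num=2,
                          scale_x=1.0, scale_y=1.0):
    """Map a vertex texture coordinate (x, y) in [0, 1] of the
    dest_index-th texture to sampling coordinates (x', y') in the atlas."""
    target_col = dest_index % column_num              # mod(destIndex, columnNum)
    target_row = math.floor(dest_index / column_num)  # floor(destIndex / columnNum)
    x_prime = x / column_num + target_col / column_num * scale_x
    y_prime = y / row_num + target_row / row_num * scale_y
    return x_prime, y_prime

# Usage: vertex (1, 1) of the fourth texture (destIndex = 3, i.e. row 2, column 1)
print(vertex_sampling_coord(1.0, 1.0, dest_index=3))  # (0.333..., 1.0)
```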
The mapping area corresponding to the model face is determined according to the vertex sampling coordinates and the first position. Optionally, when the number of rows of the sampling area is 2, determining the mapping area corresponding to the model face according to the vertex sampling coordinates and the first position falls into the following two cases:
Case 1: the first position is located in row 1 of the sampling area.
If the first position is located in row 1 of the sampling area, the mapping area corresponding to the model face is determined according to the vertex sampling coordinates. For example, when the sampling area includes 2 rows and 3 columns, it includes six texture images; if the texture image is located in row 1, the mapping area corresponding to the model face is determined according to the vertex sampling coordinates. Optionally, if the texture image is located in row 1 of the sampling area, a first area corresponding to the vertex sampling coordinates in the sampling area is determined, and the first area is determined as the mapping area corresponding to the model face. For example, through four vertex sampling coordinates, or two vertex sampling coordinates (e.g., those of diagonal vertices), an area can be obtained in the sampling area and then determined as the mapping area of the model face; in this way, when the vertex sampling coordinates are located in row 1, the terminal device can quickly and accurately determine the mapping area of the model face according to the vertex sampling coordinates.
The process of determining the mapping area will be described below with reference to FIG. 5.
FIG. 5 is a schematic diagram of a process of determining a mapping area provided by an embodiment of the present disclosure. Referring to FIG. 5, it includes a texture image and a cube model, where the front face of the cube model includes texture vertex A, texture vertex B, texture vertex C, and texture vertex D. The texture image is the image corresponding to the front face of the cube model. The texture image includes sampling vertex E, sampling vertex F, sampling vertex G, and sampling vertex H, where texture vertex A corresponds to sampling vertex E, texture vertex B to sampling vertex F, texture vertex C to sampling vertex G, and texture vertex D to sampling vertex H. The area enclosed by sampling vertices E, F, G, and H is the mapping area of the front face of the cube model. In this way, when obtaining the mapping area of a model face, the cube model can flexibly adjust the size of the mapping area to avoid seams between model faces, improving the display effect of the three-dimensional image and the flexibility of obtaining it.
Case 2: the first position is located in row 2 of the sampling area.
If the first position is located in row 2 of the sampling area, the vertex sampling coordinates are flipped by a preset angle to obtain target vertex sampling coordinates, and the mapping area corresponding to the model face is determined according to the target vertex sampling coordinates. For example, in practical applications, if the texture image is located in row 2 of the sampling area, it appears in the sampling area rotated 90 degrees to the right; therefore, when the texture image is located in row 2, the obtained vertex sampling coordinates can be rotated 90 degrees to the left to obtain the target vertex sampling coordinates, and the mapping area displayed in the upright orientation can then be obtained through the target vertex sampling coordinates.
Optionally, the vertex sampling coordinates may be flipped by the preset angle according to the following formulas:
xyscale = vec2(1.0/columnNum, 1.0/rowNum)
xyBegin = vec2(targetCol/columnNum, targetRow/rowNum)
xyEnd = vec2(xyscale.x + targetCol/columnNum, xyscale.y + targetRow/rowNum)
x′ = (y′ - xyBegin.y) × xyscale.x/xyscale.y + xyBegin.x
y′ = (xyEnd.x - x′) × xyscale.y/xyscale.x + xyBegin.y
where targetRow indicates the row of the vertex sampling coordinate in the sampling area; targetCol indicates its column in the sampling area; columnNum is the number of columns of the sampling area; rowNum is the number of rows of the sampling area; xyscale is a two-dimensional vector of the x and y scales (e.g., if rowNum is 2, xyscale.y is 0.5); xyBegin is the two-dimensional vector of the xy start position; xyEnd is the two-dimensional vector of the xy end position; x′ is the abscissa of the vertex sampling coordinate; and y′ is the ordinate of the vertex sampling coordinate.
Optionally, the vertex sampling coordinates can be flipped by the preset angle (e.g., a 90-degree flip) through the above formulas. The above formulas are merely examples and do not limit the flipping manner; other methods can also be used to flip the coordinates (e.g., flipping the sampling vertex coordinates through a shader), which is not limited by the embodiments of the present disclosure.
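A minimal Python sketch of this flip, for illustration only (applying the two formulas sequentially, i.e. computing the new x′ from the incoming y′ before y′ is overwritten, is an assumption about the intended evaluation order):

```python
def flip_second_row(x_prime, y_prime, target_col, target_row,
                    column_num=3, row_num=2):
    """Rotate a vertex sampling coordinate within its atlas cell,
    used for textures located in row 2 of the sampling area."""
    xy_scale = (1.0 / column_num, 1.0 / row_num)
    xy_begin = (target_col / column_num, target_row / row_num)
    xy_end = (xy_scale[0] + xy_begin[0], xy_scale[1] + xy_begin[1])
    new_x = (y_prime - xy_begin[1]) * xy_scale[0] / xy_scale[1] + xy_begin[0]
    new_y = (xy_end[0] - new_x) * xy_scale[1] / xy_scale[0] + xy_begin[1]
    return new_x, new_y
```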
Optionally, determining the mapping area corresponding to the model face according to the target vertex sampling coordinates is specifically: determining a second area corresponding to the target vertex sampling coordinates in the sampling area, and determining the second area as the mapping area corresponding to the model face. For example, through four target vertex sampling coordinates, or two target vertex sampling coordinates (e.g., those of diagonal vertices), an area can be obtained in the sampling area and then determined as the mapping area of the model face. In this way, by flipping the vertex sampling coordinates of row 2, the mapping area displayed in the upright orientation can be obtained, improving the display effect of the three-dimensional model.
S203: Map the mapping areas in the N texture images onto the three-dimensional model to obtain a three-dimensional image corresponding to the first image.
Optionally, the mapping area of each texture image may be mapped onto the model face of the three-dimensional model corresponding to that texture image, obtaining the three-dimensional image corresponding to the first image. For example, if the mapping area of texture image A is area A and its corresponding model face is model face A, area A is mapped onto model face A; if the mapping area of texture image B is area B and its corresponding model face is model face B, area B is mapped onto model face B. Through this method, each mapping area can be mapped onto its corresponding model face to obtain the three-dimensional image corresponding to the first image.
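As a toy illustration of extracting a mapping area from the sampling area before pasting it onto a model face (the diagonal-vertex crop, the top-left image origin, and the rounding by int() are assumptions of this sketch):

```python
def crop_mapping_area(atlas, top_left, bottom_right):
    """Cut a mapping area out of the atlas using the normalized sampling
    coordinates (x', y') of two diagonal vertices."""
    h, w = atlas.shape[:2]
    x0, y0 = top_left
    x1, y1 = bottom_right
    return atlas[int(y0 * h):int(y1 * h), int(x0 * w):int(x1 * w)]
```

The cropped array can then be handed to the rendering engine as the texture of the corresponding model face.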
Optionally, the terminal device can process each frame of a video to obtain the three-dimensional image corresponding to the video. For example, the terminal device can obtain, in real time, the six texture images corresponding to each frame of the video, determine the six mapping areas corresponding to each frame through the correspondence and the offset coefficient, and map the six mapping areas of each frame onto the model faces of the cube model to obtain the VR video corresponding to the video.
An embodiment of the present disclosure provides an image processing method: determining N texture images corresponding to a first image and a correspondence between the N texture images and N model faces of a three-dimensional model; combining, according to the correspondence, the N texture images into an area of L rows and K columns to obtain a sampling area; determining, according to the sampling area and an offset coefficient, the mapping area corresponding to each model face, where the mapping area is at least a partial area of the texture image; and mapping the mapping areas in the N texture images onto the three-dimensional model to obtain a three-dimensional image corresponding to the first image. In this way, even when the quality of the texture images is poor, the terminal device can flexibly obtain the mapping areas in the texture images through the correspondence and the offset coefficient, which not only improves the display effect of the VR image but also improves the flexibility of obtaining it.
On the basis of the embodiment shown in FIG. 2, the above image processing method further includes a method for determining the offset coefficient, which will be described below with reference to FIG. 6.
FIG. 6 is a schematic flowchart of a method for determining an offset coefficient provided by an embodiment of the present disclosure. Referring to FIG. 6, the method includes:
S601: Obtain a preset area size at the edges of the N texture images.
Optionally, the preset area may be a black-border area of the texture image. For example, when the N texture images corresponding to the first image are obtained through the preset algorithm, if the image quality of the first image is poor (e.g., the resolution is low, or the image has black borders), the edges of the extracted texture images also have black-border areas. For example, when the N texture images are combined into the sampling area, small black-border areas (black borders caused by pixel differences) also appear at the junctions of the texture image edges.
Optionally, the preset area size can be obtained through an image detection algorithm. For example, when the texture images corresponding to the first image are obtained, the size of the black-border area at the edges of the texture images can be determined through an image detection algorithm; when the sampling area is obtained, the black-border size in the sampling area can also be detected through an image detection algorithm.
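The disclosure does not fix a particular detection algorithm; as one possible illustration (the luminance threshold and the top-edge row scan are assumptions of this sketch), a black-border width could be estimated like this:

```python
import numpy as np

def black_border_width(image, threshold=10):
    """Count leading rows at the top edge whose mean luminance falls
    below an assumed threshold (a simple black-border heuristic)."""
    gray = image.mean(axis=2)  # HxWx3 -> HxW luminance approximation
    width = 0
    for row in gray:
        if row.mean() < threshold:
            width += 1
        else:
            break
    return width
```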
S602: Determine the offset coefficient according to the preset area size.
Optionally, the offset coefficient may be determined according to the following feasible implementation: obtaining a first preset relation, where the first preset relation includes at least one area size and a coefficient corresponding to each area size. For example, the first preset relation may be as shown in Table 1:
Table 1
Area size | Coefficient |
---|---|
Area size 1 | Coefficient 1 |
Area size 2 | Coefficient 2 |
Area size 3 | Coefficient 3 |
... | ... |
It should be noted that Table 1 merely illustrates the first preset relation by way of example and does not limit it.
The offset coefficient is determined according to the preset area size and the first preset relation. For example, if the preset area size is area size 1, the corresponding offset coefficient is coefficient 1; if it is area size 2, the corresponding offset coefficient is coefficient 2; and if it is area size 3, the corresponding offset coefficient is coefficient 3.
Optionally, the texture image can be scaled through the offset coefficient to obtain the mapping area. For example, when the offset coefficient is greater than 1, a mapping area larger than the texture image size can be obtained in the sampling area; when the offset coefficient is less than 1, a mapping area smaller than the texture image size can be obtained in the sampling area.
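A minimal sketch of such a lookup (the concrete sizes and coefficients below are invented placeholders; the disclosure specifies only that each area size maps to a coefficient):

```python
# Hypothetical first preset relation: border size in pixels -> offset coefficient
PRESET_RELATION = {0: 1.0, 1: 0.996, 2: 0.992, 3: 0.988}

def offset_coefficient(preset_area_size):
    """Look up the offset coefficient for a detected preset area size."""
    return PRESET_RELATION.get(preset_area_size, min(PRESET_RELATION.values()))
```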
An embodiment of the present disclosure provides a method for determining the offset coefficient: obtaining a preset area size at the edges of the N texture images, obtaining a first preset relation, and determining the offset coefficient according to the preset area size and the first preset relation. In this way, the terminal device can flexibly adjust the offset coefficient according to the preset area size, and then, according to the offset coefficient, flexibly and accurately obtain mapping areas in the texture images that do not include black borders; since the mapping areas do not include black-border areas, the display effect of the VR image can be improved, and the flexibility of obtaining the VR image can be improved.
On the basis of any of the above embodiments, the process of the above image processing method will be described below with reference to FIG. 7.
FIG. 7 is a schematic diagram of a process of an image processing method provided by an embodiment of the present disclosure. Referring to FIG. 7, it includes a first image and a cube model. The first image is processed through a preset algorithm to obtain the corresponding texture images, namely texture image A, texture image B, texture image C, texture image D, texture image E, and texture image F. According to the correspondence between the texture images and each model face, the texture images are combined into a sampling area of 2 rows and 3 columns, where texture image A corresponds to the left face, texture image B to the front face, texture image C to the right face, texture image D to the bottom face, texture image E to the back face, and texture image F to the top face.
Referring to FIG. 7, the mapping area corresponding to each model face is determined according to the correspondence and the offset coefficient. When determining the mapping area of the left face of the cube model, mapping area A is obtained at the position of row 1, column 1 of the sampling area. Through the same method, the mapping area corresponding to each face of the cube model in the sampling area can be obtained (e.g., the mapping area corresponding to the top face is mapping area F). The mapping areas are mapped onto the corresponding faces of the cube model to obtain the three-dimensional image corresponding to the first image, where mapping area A is mapped onto the left face, mapping area B onto the front face, mapping area C onto the right face, mapping area D onto the bottom face, mapping area E onto the back face, and mapping area F onto the top face. In this way, when the cube model obtains the mapping areas, the extent of the mapping area corresponding to each model face can be flexibly adjusted according to the correspondence and the offset coefficient, and the size difference between the mapping area and the texture image can be kept within one pixel (e.g., 0.5 pixel can be trimmed from the edges of the texture image, leaving the mapping area), thereby avoiding seams between the faces of the three-dimensional model, improving the display effect of the three-dimensional image, and improving the flexibility of obtaining it.
FIG. 8 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure. Referring to FIG. 8, the image processing apparatus 10 includes a first determination module 11, a second determination module 12, and a mapping module 13, where:
the first determination module 11 is configured to determine N texture images corresponding to a first image, and a correspondence between the N texture images and N model faces of a three-dimensional model, where N is an integer greater than 1;
the second determination module 12 is configured to determine, according to the correspondence and an offset coefficient, a mapping area corresponding to each model face in the N texture images, where the mapping area is at least a partial area of the texture image; and
the mapping module 13 is configured to map the mapping areas in the N texture images onto the three-dimensional model to obtain a three-dimensional image corresponding to the first image.
In one or more embodiments of the present disclosure, the second determination module 12 is specifically configured to:
combine, according to the correspondence, the N texture images into an area of L rows and K columns to obtain a sampling area, where the product of L and K is N, and L and K are positive integers; and
determine the mapping area corresponding to the model face according to the sampling area and the offset coefficient.
In one or more embodiments of the present disclosure, the second determination module 12 is specifically configured to:
determine a first position of the texture image corresponding to the model face in the sampling area;
obtain the vertex texture coordinates of the model face;
determine vertex sampling coordinates in the sampling area according to the vertex texture coordinates, the offset coefficient, and the first position; and
determine the mapping area corresponding to the model face according to the vertex sampling coordinates and the first position.
In one or more embodiments of the present disclosure, the second determination module 12 is specifically configured to:
determine the abscissa of the vertex sampling coordinate according to the vertex texture abscissa, the first position, the number of columns of the sampling area, and the offset coefficient;
determine the ordinate of the vertex sampling coordinate according to the vertex texture ordinate, the first position, the number of columns of the sampling area, the number of rows of the sampling area, and the offset coefficient; and
obtain the vertex sampling coordinate from the abscissa of the vertex sampling coordinate and the ordinate of the vertex sampling coordinate.
In one or more embodiments of the present disclosure, the second determination module 12 is specifically configured to:
if the first position is located in row 1 of the sampling area, determine the mapping area corresponding to the model face according to the vertex sampling coordinates; and
if the first position is located in row 2 of the sampling area, flip the vertex sampling coordinates by a preset angle to obtain target vertex sampling coordinates, and determine the mapping area corresponding to the model face according to the target vertex sampling coordinates.
In one or more embodiments of the present disclosure, the second determination module 12 is specifically configured to:
determine a first area corresponding to the vertex sampling coordinates in the sampling area; and
determine the first area as the mapping area corresponding to the model face.
In one or more embodiments of the present disclosure, the second determination module 12 is specifically configured to:
determine a second area corresponding to the target vertex sampling coordinates in the sampling area; and
determine the second area as the mapping area corresponding to the model face.
The image processing apparatus provided in this embodiment can be used to execute the technical solutions of the above method embodiments; its implementation principles and technical effects are similar and will not be repeated here.
FIG. 9 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present disclosure. On the basis of the embodiment shown in FIG. 8, referring to FIG. 9, the image processing apparatus 10 further includes an acquisition module 14, where the acquisition module 14 is configured to:
obtain a preset area size at the edges of the N texture images; and
determine the offset coefficient according to the preset area size.
In one or more embodiments of the present disclosure, the acquisition module 14 is specifically configured to:
obtain a first preset relation, where the first preset relation includes at least one area size and a coefficient corresponding to each area size; and
determine the offset coefficient according to the preset area size and the first preset relation.
The image processing apparatus provided in this embodiment can be used to execute the technical solutions of the above method embodiments; its implementation principles and technical effects are similar and will not be repeated here.
FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. Referring to FIG. 10, it shows a schematic structural diagram of an electronic device 900 suitable for implementing the embodiments of the present disclosure. The electronic device 900 may be a terminal device or a server, where the terminal device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (Portable Android Devices, PADs), portable multimedia players (PMPs), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 10 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 10, the electronic device 900 may include a processing apparatus (e.g., a central processing unit, a graphics processing unit, etc.) 901, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage apparatus 908 into a random access memory (RAM) 903. The RAM 903 also stores various programs and data required for the operation of the electronic device 900. The processing apparatus 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
Generally, the following apparatuses can be connected to the I/O interface 905: an input apparatus 906 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output apparatus 907 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; a storage apparatus 908 including, for example, a magnetic tape and a hard disk; and a communication apparatus 909. The communication apparatus 909 allows the electronic device 900 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 10 shows an electronic device 900 with various apparatuses, it should be understood that it is not required to implement or have all the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart can be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication apparatus 909, or installed from the storage apparatus 908, or installed from the ROM 902. When the computer program is executed by the processing apparatus 901, the above functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the above computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: a wire, an optical cable, radio frequency (RF), etc., or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or may exist alone without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to execute the methods shown in the above embodiments.
The computer program code for executing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagram may represent a module, a program segment, or a part of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself; for example, a first acquisition unit may also be described as "a unit for acquiring at least two Internet Protocol addresses".
The functions described above herein may be executed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the above. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
In a first aspect, one or more embodiments of the present disclosure provide an image processing method, the method including:
determining N texture images corresponding to a first image, and a correspondence between the N texture images and N model faces of a three-dimensional model, where N is an integer greater than 1;
determining, according to the correspondence and an offset coefficient, a mapping area corresponding to each model face in the N texture images, where the mapping area is at least a partial area of the texture image; and
mapping the mapping areas in the N texture images onto the three-dimensional model to obtain a three-dimensional image corresponding to the first image.
In one or more embodiments of the present disclosure, the at least partial area is a central area of the texture image; determining, according to the correspondence and the offset coefficient, the mapping area corresponding to each model face in the N texture images includes:
combining, according to the correspondence, the N texture images into an area of L rows and K columns to obtain a sampling area, where the product of L and K is N, and L and K are positive integers; and
determining the mapping area corresponding to the model face according to the sampling area and the offset coefficient.
In one or more embodiments of the present disclosure, for any model face, determining the mapping area corresponding to the model face according to the sampling area and the offset coefficient includes:
determining a first position of the texture image corresponding to the model face in the sampling area;
obtaining the vertex texture coordinates of the model face;
determining vertex sampling coordinates in the sampling area according to the vertex texture coordinates, the offset coefficient, and the first position; and
determining the mapping area corresponding to the model face according to the vertex sampling coordinates and the first position.
In one or more embodiments of the present disclosure, the vertex texture coordinates include a vertex texture abscissa and a vertex texture ordinate; determining the vertex sampling coordinates in the sampling area according to the vertex texture coordinates, the offset coefficient, and the first position includes:
determining the abscissa of the vertex sampling coordinate according to the vertex texture abscissa, the first position, the number of columns of the sampling area, and the offset coefficient;
determining the ordinate of the vertex sampling coordinate according to the vertex texture ordinate, the first position, the number of columns of the sampling area, the number of rows of the sampling area, and the offset coefficient; and
obtaining the vertex sampling coordinate from the abscissa of the vertex sampling coordinate and the ordinate of the vertex sampling coordinate.
In one or more embodiments of the present disclosure, the number of rows of the sampling area is 2; determining the mapping area corresponding to the model face according to the vertex sampling coordinates and the first position includes:
if the first position is located in row 1 of the sampling area, determining the mapping area corresponding to the model face according to the vertex sampling coordinates; and
if the first position is located in row 2 of the sampling area, flipping the vertex sampling coordinates by a preset angle to obtain target vertex sampling coordinates, and determining the mapping area corresponding to the model face according to the target vertex sampling coordinates.
In one or more embodiments of the present disclosure, determining the mapping area corresponding to the model face according to the vertex sampling coordinates includes:
determining a first area corresponding to the vertex sampling coordinates in the sampling area; and
determining the first area as the mapping area corresponding to the model face.
In one or more embodiments of the present disclosure, determining the mapping area corresponding to the model face according to the target vertex sampling coordinates includes:
determining a second area corresponding to the target vertex sampling coordinates in the sampling area; and
determining the second area as the mapping area corresponding to the model face.
In one or more embodiments of the present disclosure, before determining, according to the correspondence and the offset coefficient, the mapping area corresponding to each model face in the N texture images, the method further includes:
obtaining a preset area size at the edges of the N texture images; and
determining the offset coefficient according to the preset area size.
In one or more embodiments of the present disclosure, determining the offset coefficient according to the preset area size includes:
obtaining a first preset relation, where the first preset relation includes at least one area size and a coefficient corresponding to each area size; and
determining the offset coefficient according to the preset area size and the first preset relation.
In a second aspect, one or more embodiments of the present disclosure provide an image processing apparatus, including a first determination module, a second determination module, and a mapping module, where:
the first determination module is configured to determine N texture images corresponding to a first image, and a correspondence between the N texture images and N model faces of a three-dimensional model, where N is an integer greater than 1;
the second determination module is configured to determine, according to the correspondence and an offset coefficient, a mapping area corresponding to each model face in the N texture images, where the mapping area is at least a partial area of the texture image; and
the mapping module is configured to map the mapping areas in the N texture images onto the three-dimensional model to obtain a three-dimensional image corresponding to the first image.
In one or more embodiments of the present disclosure, the second determination module is specifically configured to:
combine, according to the correspondence, the N texture images into an area of L rows and K columns to obtain a sampling area, where the product of L and K is N, and L and K are positive integers; and
determine the mapping area corresponding to the model face according to the sampling area and the offset coefficient.
In one or more embodiments of the present disclosure, the second determination module is specifically configured to:
determine a first position of the texture image corresponding to the model face in the sampling area;
obtain the vertex texture coordinates of the model face;
determine vertex sampling coordinates in the sampling area according to the vertex texture coordinates, the offset coefficient, and the first position; and
determine the mapping area corresponding to the model face according to the vertex sampling coordinates and the first position.
In one or more embodiments of the present disclosure, the second determination module is specifically configured to:
determine the abscissa of the vertex sampling coordinate according to the vertex texture abscissa, the first position, the number of columns of the sampling area, and the offset coefficient;
determine the ordinate of the vertex sampling coordinate according to the vertex texture ordinate, the first position, the number of columns of the sampling area, the number of rows of the sampling area, and the offset coefficient; and
obtain the vertex sampling coordinate from the abscissa of the vertex sampling coordinate and the ordinate of the vertex sampling coordinate.
In one or more embodiments of the present disclosure, the second determination module is specifically configured to:
if the first position is located in row 1 of the sampling area, determine the mapping area corresponding to the model face according to the vertex sampling coordinates; and
if the first position is located in row 2 of the sampling area, flip the vertex sampling coordinates by a preset angle to obtain target vertex sampling coordinates, and determine the mapping area corresponding to the model face according to the target vertex sampling coordinates.
In one or more embodiments of the present disclosure, the second determination module is specifically configured to:
determine a first area corresponding to the vertex sampling coordinates in the sampling area; and
determine the first area as the mapping area corresponding to the model face.
In one or more embodiments of the present disclosure, the second determination module is specifically configured to:
determine a second area corresponding to the target vertex sampling coordinates in the sampling area; and
determine the second area as the mapping area corresponding to the model face.
In one or more embodiments of the present disclosure, the image processing apparatus further includes an acquisition module, where the acquisition module is configured to:
obtain a preset area size at the edges of the N texture images; and
determine the offset coefficient according to the preset area size.
In one or more embodiments of the present disclosure, the acquisition module is specifically configured to:
obtain a first preset relation, where the first preset relation includes at least one area size and a coefficient corresponding to each area size; and
determine the offset coefficient according to the preset area size and the first preset relation.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including a processor and a memory;
the memory stores computer-executable instructions; and
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the image processing method described in the first aspect and the various possible designs of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the image processing method described in the first aspect and the various possible designs of the first aspect.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product, including a computer program which, when executed by a processor, implements the image processing method described in the first aspect and the various possible designs of the first aspect.
In a sixth aspect, an embodiment of the present disclosure provides a computer program which, when executed by a processor, implements the image processing method described in the first aspect and the various possible designs of the first aspect.
The above description is merely a description of the preferred embodiments of the present disclosure and of the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
In addition, although the operations are depicted in a specific order, this should not be understood as requiring that these operations be executed in the specific order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be interpreted as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment; conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments individually or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. Rather, the specific features and actions described above are merely example forms of implementing the claims.
Claims (14)
- An image processing method, comprising: determining N texture images corresponding to a first image, and a correspondence between the N texture images and N model faces of a three-dimensional model, wherein N is an integer greater than 1; determining, according to the correspondence and an offset coefficient, a mapping area corresponding to each model face in the N texture images, wherein the mapping area is at least a partial area of the texture image; and mapping the mapping areas in the N texture images onto the three-dimensional model to obtain a three-dimensional image corresponding to the first image.
- The method according to claim 1, wherein the at least partial area is a central area of the texture image; and determining, according to the correspondence and the offset coefficient, the mapping area corresponding to each model face in the N texture images comprises: combining, according to the correspondence, the N texture images into an area of L rows and K columns to obtain a sampling area, wherein the product of L and K is N, and L and K are positive integers; and determining the mapping area corresponding to the model face according to the sampling area and the offset coefficient.
- The method according to claim 2, wherein, for any model face, determining the mapping area corresponding to the model face according to the sampling area and the offset coefficient comprises: determining a first position of the texture image corresponding to the model face in the sampling area; obtaining vertex texture coordinates of the model face; determining vertex sampling coordinates in the sampling area according to the vertex texture coordinates, the offset coefficient, and the first position; and determining the mapping area corresponding to the model face according to the vertex sampling coordinates and the first position.
- The method according to claim 3, wherein the vertex texture coordinates comprise a vertex texture abscissa and a vertex texture ordinate; and determining the vertex sampling coordinates in the sampling area according to the vertex texture coordinates, the offset coefficient, and the first position comprises: determining an abscissa of the vertex sampling coordinate according to the vertex texture abscissa, the first position, the number of columns of the sampling area, and the offset coefficient; determining an ordinate of the vertex sampling coordinate according to the vertex texture ordinate, the first position, the number of columns of the sampling area, the number of rows of the sampling area, and the offset coefficient; and obtaining the vertex sampling coordinate from the abscissa of the vertex sampling coordinate and the ordinate of the vertex sampling coordinate.
- The method according to claim 3 or 4, wherein the number of rows of the sampling area is 2; and determining the mapping area corresponding to the model face according to the vertex sampling coordinates and the first position comprises: if the first position is located in row 1 of the sampling area, determining the mapping area corresponding to the model face according to the vertex sampling coordinates; and if the first position is located in row 2 of the sampling area, flipping the vertex sampling coordinates by a preset angle to obtain target vertex sampling coordinates, and determining the mapping area corresponding to the model face according to the target vertex sampling coordinates.
- The method according to claim 5, wherein determining the mapping area corresponding to the model face according to the vertex sampling coordinates comprises: determining a first area corresponding to the vertex sampling coordinates in the sampling area; and determining the first area as the mapping area corresponding to the model face.
- The method according to claim 5 or 6, wherein determining the mapping area corresponding to the model face according to the target vertex sampling coordinates comprises: determining a second area corresponding to the target vertex sampling coordinates in the sampling area; and determining the second area as the mapping area corresponding to the model face.
- The method according to any one of claims 1 to 7, wherein, before determining, according to the correspondence and the offset coefficient, the mapping area corresponding to each model face in the N texture images, the method further comprises: obtaining a preset area size at edges of the N texture images; and determining the offset coefficient according to the preset area size.
- The method according to claim 8, wherein determining the offset coefficient according to the preset area size comprises: obtaining a first preset relation, wherein the first preset relation includes at least one area size and a coefficient corresponding to each area size; and determining the offset coefficient according to the preset area size and the first preset relation.
- An image processing apparatus, comprising a first determination module, a second determination module, and a mapping module, wherein: the first determination module is configured to determine N texture images corresponding to a first image, and a correspondence between the N texture images and N model faces of a three-dimensional model, wherein N is an integer greater than 1; the second determination module is configured to determine, according to the correspondence and an offset coefficient, a mapping area corresponding to each model face in the N texture images, wherein the mapping area is at least a partial area of the texture image; and the mapping module is configured to map the mapping areas in the N texture images onto the three-dimensional model to obtain a three-dimensional image corresponding to the first image.
- An electronic device, comprising a processor and a memory, wherein the memory stores computer-executable instructions, and the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the image processing method according to any one of claims 1 to 9.
- A computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions which, when executed by a processor, implement the image processing method according to any one of claims 1 to 9.
- A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the image processing method according to any one of claims 1 to 9.
- A computer program, wherein the computer program, when executed by a processor, implements the image processing method according to any one of claims 1 to 9.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22808577.5A EP4270322A4 (en) | 2022-03-16 | 2022-09-30 | IMAGE PROCESSING METHOD AND APPARATUS AND ELECTRONIC DEVICE |
US18/000,244 US20240212256A1 (en) | 2022-03-16 | 2022-09-30 | Image processing method, apparatus and electronic device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210262157.6 | 2022-03-16 | ||
CN202210262157.6A | 2022-03-16 | 2022-03-16 | Image processing method and apparatus, and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023173727A1 true WO2023173727A1 (zh) | 2023-09-21 |
Family
ID=81818793
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/123543 | Image processing method and apparatus, and electronic device | 2022-03-16 | 2022-09-30 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20240212256A1 (zh) |
EP (1) | EP4270322A4 (zh) |
CN (1) | CN114596399A (zh) |
WO (1) | WO2023173727A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117556781A (zh) * | 2024-01-12 | 2024-02-13 | 杭州行芯科技有限公司 | Target graphic determination method and apparatus, electronic device, and storage medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114596399A (zh) * | 2022-03-16 | 2022-06-07 | 北京字跳网络技术有限公司 | Image processing method and apparatus, and electronic device |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009003708A (ja) * | 2007-06-21 | 2009-01-08 | Sony Computer Entertainment Inc | Image display device and image display method |
CN102682477A (zh) * | 2012-05-16 | 2012-09-19 | 南京邮电大学 | Regular-scene three-dimensional information extraction method based on structural priors |
US20150187126A1 (en) * | 2013-12-31 | 2015-07-02 | Nvidia Corporation | Using indirection maps for rendering texture space effects |
US20180174352A1 (en) * | 2016-12-20 | 2018-06-21 | Samsung Electronics Co., Ltd. | Graphics processing employing cube map texturing |
JP2018200504A (ja) * | 2017-05-25 | 2018-12-20 | 日本電信電話株式会社 | Geometric fitting device, method, and program |
CN109427087A (zh) * | 2017-08-22 | 2019-03-05 | 优酷网络技术(北京)有限公司 | Image processing method and apparatus |
JP2019192299A (ja) * | 2019-07-26 | 2019-10-31 | 日本電信電話株式会社 | Camera information correction device, camera information correction method, and camera information correction program |
CN111369659A (zh) * | 2018-12-26 | 2020-07-03 | 杭州海康威视数字技术股份有限公司 | Texture mapping method, apparatus and device based on a three-dimensional model |
CN114596399A (zh) * | 2022-03-16 | 2022-06-07 | 北京字跳网络技术有限公司 | Image processing method and apparatus, and electronic device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019037558A1 (zh) * | 2017-08-22 | 2019-02-28 | 优酷网络技术(北京)有限公司 | Image processing method and apparatus |
-
2022
- 2022-03-16 CN CN202210262157.6A patent/CN114596399A/zh active Pending
- 2022-09-30 WO PCT/CN2022/123543 patent/WO2023173727A1/zh active Application Filing
- 2022-09-30 EP EP22808577.5A patent/EP4270322A4/en active Pending
- 2022-09-30 US US18/000,244 patent/US20240212256A1/en active Pending
Non-Patent Citations (1)
Title |
---|
See also references of EP4270322A4 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117556781A (zh) * | 2024-01-12 | 2024-02-13 | 杭州行芯科技有限公司 | Target graphic determination method and apparatus, electronic device, and storage medium |
CN117556781B (zh) * | 2024-01-12 | 2024-05-24 | 杭州行芯科技有限公司 | Target graphic determination method and apparatus, electronic device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
US20240212256A1 (en) | 2024-06-27 |
CN114596399A (zh) | 2022-06-07 |
EP4270322A4 (en) | 2024-01-24 |
EP4270322A1 (en) | 2023-11-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023173727A1 (zh) | Image processing method and apparatus, and electronic device | |
WO2021139408A1 (zh) | Method and apparatus for displaying special effects, storage medium, and electronic device | |
TWI578266B (zh) | Varying effective resolution by screen location in graphics processing by approximating projection of vertices onto curved viewport | |
TWI637355B (zh) | Method for compressing texture maps, related image data processing system, and method for generating 360-degree panoramic video | |
US6885378B1 (en) | Method and apparatus for the implementation of full-scene anti-aliasing supersampling | |
CN110300292A (zh) | Projection distortion correction method, apparatus and system, and storage medium | |
US11127126B2 (en) | Image processing method, image processing device, image processing system and medium | |
WO2024198855A1 (zh) | Scene rendering method and apparatus, device, computer-readable storage medium, and product | |
WO2024104248A1 (zh) | Virtual panorama rendering method, apparatus, device, and storage medium | |
WO2023207001A1 (zh) | Image rendering method and apparatus, electronic device, and storage medium | |
WO2023193639A1 (zh) | Image rendering method and apparatus, readable medium, and electronic device | |
WO2023207522A1 (zh) | Video synthesis method, apparatus, device, medium, and product | |
WO2022247630A1 (zh) | Image processing method and apparatus, electronic device, and storage medium | |
CN111127603B (zh) | Animation generation method and apparatus, electronic device, and computer-readable storage medium | |
CN114049403A (zh) | Multi-angle three-dimensional face reconstruction method, apparatus, and storage medium | |
WO2024051756A1 (zh) | Special-effect image drawing method, apparatus, device, and medium | |
WO2023179510A1 (zh) | Image compression and transmission method, apparatus, electronic device, and storage medium | |
WO2023169287A1 (zh) | Makeup special-effect generation method, apparatus, device, storage medium, and program product | |
WO2023193613A1 (zh) | Highlight rendering method, apparatus, medium, and electronic device | |
WO2023179341A1 (zh) | Method for placing virtual object in video, and related device | |
WO2023109564A1 (zh) | Video image processing method and apparatus, electronic device, and storage medium | |
CN111862342A (zh) | Texture processing method and apparatus for augmented reality, electronic device, and storage medium | |
US20240233172A1 (en) | Video processing method and device, and electronic device | |
KR102534449B1 (ko) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
US20240291937A1 (en) | Image processing method and apparatus, and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 18000244 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2022808577 Country of ref document: EP Effective date: 20221129 |