WO2024040789A1 - 图像处理方法及装置、存储介质 (Image processing method and device, and storage medium) - Google Patents

图像处理方法及装置、存储介质 (Image processing method and device, and storage medium) Download PDF

Info

Publication number: WO2024040789A1
Authority: WO (WIPO, PCT)
Prior art keywords: contour line, dimensional contour, blocked, dimensional, room
Application number: PCT/CN2022/136281
Other languages: English (en), French (fr)
Inventors: 李伟, 胡洋
Original Assignee: 如你所视(北京)科技有限公司
Priority date: 2022-08-23
Filing date: 2022-12-02
Publication date:
Application filed by 如你所视(北京)科技有限公司
Publication of WO2024040789A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image

Definitions

  • The present invention relates to the technical field of image processing, and in particular to an image processing method and device, and a storage medium.
  • At present, to let users clearly perceive the structure of a room from an image, the three-dimensional contour line of the room can be obtained and displayed in the image of the house.
  • In the prior art, the three-dimensional contour line of the room in which the image was captured is rendered directly onto the image.
  • However, the three-dimensional contour line is sometimes occluded by objects or walls in the image, especially in rooms containing many objects; this creates a visually cluttered impression and degrades the rendering effect of the three-dimensional contour line.
  • The present invention provides an image processing method, device, and storage medium to address the poor rendering of three-dimensional contour lines in room images in the prior art and to improve the rendering effect of the three-dimensional contour lines.
  • The invention provides an image processing method, including:
  • performing occlusion detection on the three-dimensional contour line of a room at the shooting point of an image of the room, based on the three-dimensional contour line and the image of the room; and
  • rendering at least part of the three-dimensional contour line onto the image of the room based on the result of the occlusion detection and a preset rendering strategy, the preset rendering strategy including that the visual information of the unoccluded portion of the three-dimensional contour line differs from the visual information of the occluded portion.
  • In one embodiment, performing occlusion detection on the three-dimensional contour line at the shooting point of the image of the room includes:
  • detecting, based on the three-dimensional contour line, whether the three-dimensional contour line is occluded by walls at the shooting point, and detecting, based on the image of the room, whether the three-dimensional contour line is occluded by objects at the shooting point.
  • Detecting whether the three-dimensional contour line is occluded by walls at the shooting point includes:
  • obtaining the two-dimensional contour line projected from the three-dimensional contour line onto the horizontal plane; detecting, based on the shooting point and the two-dimensional contour line, whether each contour point of the two-dimensional contour line is occluded by walls; and, for each vertex on the two-dimensional contour line corresponding to a vertical contour line of the three-dimensional contour line, detecting, based on the shooting point and the two-dimensional contour line, whether the vertical contour line corresponding to that vertex is occluded by walls.
  • Detecting whether each contour point of the two-dimensional contour line is occluded by walls includes: obtaining, for each contour point, the line segment formed by that contour point and the shooting point; if the segment intersects the two-dimensional contour line at a point other than the contour point itself, determining that the contour point belongs to the wall-occluded part; otherwise, determining that it belongs to the part not occluded by walls.
  • Detecting whether the vertical contour line corresponding to each vertex is occluded by walls includes: obtaining, for each vertex, the line segment formed by that vertex and the shooting point; if the segment intersects the two-dimensional contour line at a point other than the vertex itself, determining that the vertical contour line corresponding to the vertex belongs to the wall-occluded part;
  • otherwise, determining that the vertical contour line corresponding to the vertex belongs to the part not occluded by walls.
  • Detecting whether the three-dimensional contour line is occluded by objects at the shooting point includes: determining, based on the semantic segmentation image corresponding to the image of the room, the semantics of the pixels in the image of the room that correspond to the contour points of the at least part of the three-dimensional contour line;
  • if the semantics of the pixel in the image of the room corresponding to a contour point of the three-dimensional contour line is "wall", determining that the contour point belongs to the part not occluded by objects; otherwise, determining that it belongs to the object-occluded part.
  • In another embodiment, detecting whether the three-dimensional contour line is occluded by objects at the shooting point includes: detecting, based on the image of the room, whether the portion of the three-dimensional contour line that is not occluded by walls at the shooting point is occluded by objects.
  • Within the visual information of the occluded portion, the visual information of the part occluded by walls differs from the visual information of the part occluded by objects.
  • The invention also provides an image processing device, including:
  • an occlusion detection module configured to perform occlusion detection on the three-dimensional contour line at the shooting point of the image of the room, based on the three-dimensional contour line of the room and the image of the room; and
  • a contour rendering module configured to render at least part of the three-dimensional contour line onto the image of the room based on the result of the occlusion detection and a preset rendering strategy, the preset rendering strategy including that the visual information of the unoccluded portion of the three-dimensional contour line differs from the visual information of the occluded portion.
  • In some embodiments, the occlusion detection module is specifically configured to: detect, based on the three-dimensional contour line, whether the three-dimensional contour line is occluded by walls at the shooting point, and detect, based on the image of the room, whether it is occluded by objects at the shooting point.
  • In some embodiments, the occlusion detection module is specifically configured to: obtain the two-dimensional contour line projected from the three-dimensional contour line onto the horizontal plane; detect, based on the shooting point and the two-dimensional contour line, whether each contour point of the two-dimensional contour line is occluded by walls; and, for each vertex on the two-dimensional contour line corresponding to a vertical contour line of the three-dimensional contour line, detect whether the vertical contour line corresponding to that vertex is occluded by walls.
  • In some embodiments, the occlusion detection module is specifically configured to: for each contour point of the two-dimensional contour line, obtain the line segment formed by the contour point and the shooting point, and determine that the contour point belongs to the wall-occluded part if the segment intersects the two-dimensional contour line at a point other than the contour point, and to the part not occluded by walls otherwise.
  • In some embodiments, the occlusion detection module is specifically configured to: for each vertex, obtain the line segment formed by the vertex and the shooting point, and determine that the vertical contour line corresponding to the vertex belongs to the wall-occluded part if the segment intersects the two-dimensional contour line at a point other than the vertex, and to the part not occluded by walls otherwise.
  • In some embodiments, the occlusion detection module is specifically configured to: determine, based on the semantic segmentation image corresponding to the image of the room, the semantics of the pixels corresponding to the contour points of the at least part of the three-dimensional contour line; determine that a contour point belongs to the part not occluded by objects if the corresponding pixel semantics is "wall", and to the object-occluded part otherwise.
  • In some embodiments, the occlusion detection module is specifically configured to: detect, based on the image of the room, whether the portion of the three-dimensional contour line not occluded by walls at the shooting point is occluded by objects.
  • Within the visual information of the occluded part, the visual information of the part occluded by walls differs from the visual information of the part occluded by objects.
  • The present invention also provides an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, any of the above image processing methods is implemented.
  • The present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, any of the above image processing methods is implemented.
  • The present invention also provides a computer program product including a computer program; when the computer program is executed by a processor, any of the above image processing methods is implemented.
  • With the image processing method provided by the present invention, occlusion detection can be performed on the three-dimensional contour line at the shooting point of the room image, based on the three-dimensional contour line of the room and the image of the room. At least part of the three-dimensional contour line can therefore be rendered onto the image of the room based on the result of the occlusion detection and the preset rendering strategy, and the rendering strategy distinguishes the visual information of the occluded part from that of the unoccluded part. The user can thus intuitively and clearly perceive the difference between the occluded and unoccluded parts, the rendered three-dimensional contour line appears more visually organized, and the rendering effect of the three-dimensional contour line is improved.
  • Figure 1 is the first schematic flowchart of the image processing method provided by the present invention.
  • Figure 2 is a schematic diagram of a three-dimensional contour line provided by the present invention.
  • Figure 3 is the second schematic flowchart of the image processing method provided by the present invention.
  • Figure 4 is the third schematic flowchart of the image processing method provided by the present invention.
  • Figure 5 is the first schematic diagram of the two-dimensional contour line provided by the present invention.
  • Figure 6 is the second schematic diagram of the two-dimensional contour line provided by the present invention.
  • Figure 7 is the fourth schematic flowchart of the image processing method provided by the present invention.
  • Figure 8 is the fifth schematic flowchart of the image processing method provided by the present invention.
  • Figure 9 is the sixth schematic flowchart of the image processing method provided by the present invention.
  • Figure 10 is a schematic diagram of the rendering effect of the three-dimensional contour line provided by the present invention.
  • Figure 11 is a schematic structural diagram of the image processing device provided by the present invention.
  • Figure 12 is a schematic structural diagram of the electronic device provided by the present invention.
  • This embodiment provides an image processing method, which can be executed by software and/or hardware in a terminal or server. As shown in Figure 1, the image processing method at least includes the following steps:
  • Step 101: Based on the three-dimensional contour line of the room and the image of the room, perform occlusion detection on the three-dimensional contour line at the shooting point of the image of the room.
  • The image of the room can be a panorama or an ordinary image.
  • To let users clearly perceive the structure of the room from the image, the three-dimensional contour line of the room can be extracted in advance and rendered into the image of the room based on the position and orientation of the shooting point.
  • The specific extraction method of the three-dimensional contour line can follow related techniques and is not elaborated here. Refer to Figure 2, which illustrates the three-dimensional contour line of a room. However, the three-dimensional contour line is sometimes occluded by objects or walls in the image, which degrades its rendering effect; to improve the rendering effect, this embodiment handles such occlusion.
  • When the image of the room is captured from different shooting points, different images of the room are obtained; accordingly, the portion of the three-dimensional contour line rendered into the image differs, and the occlusion of the three-dimensional contour line seen at different shooting points also differs. Therefore, the three-dimensional contour line and the image of the room can be combined to perform occlusion detection on the three-dimensional contour line at the shooting point of the room image.
  • The result of the occlusion detection may include whether the three-dimensional contour line is occluded; when the three-dimensional contour line is occluded, it may also include position information of the occluded part and position information of the unoccluded part. It can be understood that when the three-dimensional contour line is not occluded, the entire three-dimensional contour line is unoccluded.
  • Step 102: Render at least part of the three-dimensional contour line onto the image of the room based on the result of the occlusion detection and the preset rendering strategy, where the preset rendering strategy includes that the visual information of the unoccluded part of the three-dimensional contour line differs from the visual information of the occluded part.
  • In practice, the captured image of the room may show only part of the room; therefore, when the three-dimensional contour line is rendered into the image of the room, at least part of the three-dimensional contour line is rendered into the image.
  • To improve the rendering effect of the three-dimensional contour line, the rendering strategy can be preset according to actual needs so that the visual information of the occluded part is distinguished from the visual information of the unoccluded part, letting the user visually and intuitively perceive the difference between the occluded part and the unoccluded part and thus distinguish them.
  • In this embodiment, because occlusion detection can be performed on the three-dimensional contour line at the shooting point of the room image, at least part of the three-dimensional contour line can be rendered onto the image of the room based on the result of the occlusion detection and the preset rendering strategy, with the rendering strategy distinguishing the visual information of the occluded part from that of the unoccluded part. The user can thus intuitively and clearly perceive the difference between the occluded and unoccluded parts, the rendered three-dimensional contour line appears more visually organized, and the rendering effect of the three-dimensional contour line is improved.
  • In an exemplary embodiment, performing occlusion detection on the three-dimensional contour line at the shooting point of the image of the room, based on the three-dimensional contour line of the room and the image of the room, as shown in Figure 3, may specifically include:
  • Step 301: Based on the three-dimensional contour line, detect whether the three-dimensional contour line is occluded by walls at the shooting point.
  • Step 302: Based on the image of the room, detect whether the three-dimensional contour line is occluded by objects at the shooting point.
  • In practice, for the contour line of a single edge of the three-dimensional contour line, at the shooting point the contour line may be occluded by objects or by other walls. For example, the contour line may be in one of the following states:
  • 1. completely occluded by other walls and therefore invisible; 2. partly occluded by other walls, with the unoccluded portion occluded by objects; 3. partly occluded by other walls, with the unoccluded portion not occluded by objects; 4. not occluded by other walls but occluded by objects; 5. not occluded at all and fully visible.
  • In this embodiment, occlusion by walls and occlusion by objects can be detected separately. Because the three-dimensional contour line reflects the layout of the room's walls, wall occlusion of the three-dimensional contour line at the shooting point can be detected accurately based on the three-dimensional contour line itself; and because the image of the room reflects the objects in the room, object occlusion of the three-dimensional contour line at the shooting point can be detected accurately based on the image of the room.
  • Further, in an exemplary embodiment, the detection in step 302 of whether the three-dimensional contour line is occluded by objects at the shooting point, based on the image of the room, may be implemented as: detecting, based on the image of the room, whether the portion of the three-dimensional contour line that is not occluded by walls at the shooting point is occluded by objects.
  • Because the portion of the three-dimensional contour line occluded by walls cannot additionally be occluded by objects, while the portion not occluded by walls may still be occluded by objects, object occlusion detection can be performed directly on the portion not occluded by walls at the shooting point, which improves detection efficiency.
  • In an exemplary embodiment, detecting whether the three-dimensional contour line is occluded by walls at the shooting point, based on the three-dimensional contour line, as shown in Figure 4, may specifically include:
  • Step 401: Obtain the two-dimensional contour line projected from the three-dimensional contour line onto the horizontal plane.
  • Referring to Figure 5, which illustrates the two-dimensional contour lines of the rooms of one house (seven rooms in the illustration), the two-dimensional contour line of a room can also serve as a two-dimensional floor plan.
  • Here, the three-dimensional contour line is obtained by extruding the two-dimensional floor plan along the direction of gravity, so the projections of the ceiling contour and the floor contour onto the horizontal plane coincide.
  • Step 402: Based on the shooting point and the two-dimensional contour line, detect whether each contour point of the two-dimensional contour line is occluded by walls.
  • In practice, the two-dimensional contour line can be discretized into multiple contour points; see Figure 6, which uses contour points A, B, C, D, and E as an illustration. Wall occlusion detection is performed for each contour point of the two-dimensional contour line, so that the wall occlusion of the entire two-dimensional contour line is obtained.
  • Step 403: For each vertex on the two-dimensional contour line corresponding to a vertical contour line of the three-dimensional contour line, detect, based on the shooting point and the two-dimensional contour line, whether the vertical contour line corresponding to that vertex is occluded by walls.
  • Because the two-dimensional contour line is obtained by projecting the three-dimensional contour line onto the horizontal plane, each vertical contour line of the three-dimensional contour line corresponds to a vertex of the two-dimensional contour line.
  • The key property is that if a vertex is not occluded by walls, the vertical contour line corresponding to that vertex is not occluded by walls, and if the vertex is occluded by walls, the corresponding vertical contour line is occluded by walls; a vertical contour line cannot be partly occluded and partly unoccluded by walls. Therefore, wall occlusion of each vertical contour line can be determined by detecting wall occlusion of its corresponding vertex.
  • In this embodiment, wall occlusion detection is performed for each discrete contour point of the two-dimensional contour line and for the vertical contour line corresponding to each vertex of the two-dimensional contour line, making the detection more accurate and refined and thereby improving the accuracy of the wall-occlusion result for the entire three-dimensional contour line.
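  • As an illustration of the projection in step 401 and of the vertex-to-vertical-edge correspondence just described, the sketch below projects a three-dimensional contour onto the horizontal plane and, conversely, extrudes a floor plan along the gravity axis. The coordinate convention (z as the gravity axis) and the 2.8 m placeholder ceiling height are assumptions made for the example, not values given by the patent.

```python
def project_to_floor_plan(floor_vertices_3d):
    """Project the 3D room contour onto the horizontal plane.

    floor_vertices_3d: ordered list of (x, y, z) vertices of the floor loop.
    Returns the ordered list of (x, y) vertices of the 2D contour (floor plan).
    """
    return [(x, y) for x, y, _z in floor_vertices_3d]

def extrude_floor_plan(plan_vertices_2d, floor_z=0.0, ceiling_z=2.8):
    """Extrude a 2D floor plan along the direction of gravity.

    Returns the floor loop, the ceiling loop (both project onto the same 2D
    contour), and one vertical contour line per 2D vertex.
    """
    floor = [(x, y, floor_z) for x, y in plan_vertices_2d]
    ceiling = [(x, y, ceiling_z) for x, y in plan_vertices_2d]
    verticals = list(zip(floor, ceiling))   # one vertical edge per vertex
    return floor, ceiling, verticals
```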
  • In an exemplary embodiment, detecting whether each contour point of the two-dimensional contour line is occluded by walls, based on the shooting point and the two-dimensional contour line, as shown in Figure 7, may specifically include:
  • Step 701: For each contour point of the two-dimensional contour line, obtain the line segment formed by that contour point and the shooting point.
  • Referring to Figure 6, again taking contour points A, B, C, D, and E as an example, the line segments formed by the shooting point O and the contour points A, B, C, D, and E are OA, OB, OC, OD, and OE, respectively.
  • Step 702: If the line segment on which the contour point of the two-dimensional contour line lies intersects the two-dimensional contour line at a point other than that contour point, determine that the contour point belongs to the wall-occluded part. In Figure 6, segment OE intersects the two-dimensional contour line at a point P other than E, so contour point E belongs to the wall-occluded part.
  • Step 703: If the line segment on which the contour point of the two-dimensional contour line lies has no intersection with the two-dimensional contour line other than that contour point, determine that the contour point belongs to the part not occluded by walls. In Figure 6, segments OA, OB, OC, and OD have no such intersections, so contour points A, B, C, and D do not belong to the wall-occluded part.
  • The two-dimensional contour line is a polygon formed by multiple contour segments.
  • In implementation, the position of the shooting point and the positions of the contour points of the two-dimensional contour line can be obtained; from these, an expression for the line segment formed by each contour point and the shooting point can be derived, and an expression for each contour segment forming the two-dimensional contour line can also be obtained. Using the expression of the segment formed by a contour point and the shooting point, together with the expressions of the contour segments, it can be determined whether that segment intersects each contour segment of the two-dimensional contour line.
  • Consecutive adjacent contour points of the two-dimensional contour line that are not occluded by walls form a segment of contour line not occluded by walls, and consecutive adjacent contour points that are occluded by walls form a segment of contour line occluded by walls.
  • In this embodiment, determining whether a contour point of the two-dimensional contour line belongs to the wall-occluded part or to the part not occluded by walls, by analyzing the geometric relationship between the two-dimensional contour line and the line segment formed by the contour point and the shooting point, is not only simple and fast but also yields more accurate detection results.
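  • Steps 701 to 703 amount to a 2D visibility test: a contour point P is wall-occluded exactly when the segment from the shooting point O to P crosses some wall edge of the two-dimensional contour polygon at a point other than P itself. The following is a minimal sketch under that reading; the helper names are assumptions, and a production version would need extra care for degenerate cases such as a sight line grazing a polygon vertex.

```python
def _cross(o, a, b):
    """2D cross product of vectors OA and OB; its sign gives the orientation."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _segments_properly_intersect(p1, p2, q1, q2):
    """True if segments p1-p2 and q1-q2 cross at an interior point of both."""
    d1 = _cross(q1, q2, p1)
    d2 = _cross(q1, q2, p2)
    d3 = _cross(p1, p2, q1)
    d4 = _cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def wall_occluded(contour_point, shooting_point, polygon):
    """Return True if `contour_point` on the 2D contour is hidden behind a wall
    as seen from `shooting_point`.

    polygon: ordered list of (x, y) vertices of the room's 2D contour; its edges
    are the walls. Touching the contour exactly at `contour_point` does not
    count as an extra intersection, which matches steps 702 and 703.
    """
    n = len(polygon)
    for i in range(n):
        a, b = polygon[i], polygon[(i + 1) % n]
        if _segments_properly_intersect(shooting_point, contour_point, a, b):
            return True
    return False
```

  • The same test, applied to the vertices of the two-dimensional contour line, would decide whether the corresponding vertical contour lines of the three-dimensional contour line are occluded by walls, as in steps 801 to 803 below.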
  • In an exemplary embodiment, detecting whether the vertical contour line corresponding to each vertex is occluded by walls, based on the shooting point and the two-dimensional contour line, as shown in Figure 8, may specifically include:
  • Step 801: For each vertex, obtain the line segment formed by the vertex and the shooting point.
  • Step 802: If the line segment on which the vertex lies intersects the two-dimensional contour line at a point other than the vertex, determine that the vertical contour line corresponding to the vertex belongs to the wall-occluded part.
  • Step 803: If the line segment on which the vertex lies has no intersection with the two-dimensional contour line other than the vertex, determine that the vertical contour line corresponding to the vertex belongs to the part not occluded by walls.
  • In this embodiment, the method of detecting wall occlusion of the vertical contour line corresponding to each vertex is similar to the method of detecting wall occlusion of each contour point of the two-dimensional contour line in steps 701 to 703, and achieves a similar effect.
  • Considering that a vertex of the two-dimensional contour line may also be one of the contour points processed in steps 701 to 703, in one implementation the detection results already obtained for the contour points that are vertices can be reused directly.
  • Referring again to Figure 6, contour points A, B, C, and D are all vertices of the two-dimensional contour line, and the detection results of these contour points can be obtained directly; the detection therefore does not need to be repeated, which improves detection efficiency.
  • In an exemplary embodiment, the detection in step 302 of whether the three-dimensional contour line is occluded by objects at the shooting point, based on the image of the room, as shown in Figure 9, may specifically include:
  • Step 901: Based on the semantic segmentation image corresponding to the image of the room, determine the semantics of the pixels in the image of the room that correspond to the contour points of the at least part of the three-dimensional contour line.
  • In practice, semantic segmentation can be performed on the room image to obtain the semantic segmentation image corresponding to the room image.
  • Semantic segmentation performs pixel-level segmentation of the objects in an image: through the semantic segmentation image, each pixel of the room image can be labeled with the category of object it belongs to, for example by representing different categories with different colors, so that each pixel is marked as belonging to a wall or to some other object.
  • When object occlusion detection is performed only on the portion of the three-dimensional contour line not occluded by walls at the shooting point, determining the semantics of the pixels corresponding to the contour points of the at least part of the three-dimensional contour line may specifically mean determining the semantics of the pixels corresponding to those contour points that are not occluded by walls; in this way, detection efficiency can be further improved.
  • Step 902: If the semantics of the pixel in the room image corresponding to a contour point of the three-dimensional contour line is "wall", determine that the contour point belongs to the part not occluded by objects.
  • Step 903: If the semantics of the pixel in the room image corresponding to a contour point of the three-dimensional contour line is not "wall", determine that the contour point belongs to the object-occluded part.
  • Consecutive adjacent contour points belonging to the part not occluded by objects form a segment of contour line not occluded by objects, and consecutive adjacent contour points belonging to the object-occluded part form a segment of contour line occluded by objects.
  • In this embodiment, the semantics of the pixels in the room image can be obtained from the semantic segmentation image. For the at least part of the three-dimensional contour line rendered into the room image, if the semantics of the pixel corresponding to a contour point is "wall", the contour point is not occluded by an object; otherwise, it is occluded by an object. The object occlusion can thus be determined accurately.
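  • Steps 901 to 903 reduce to a per-point lookup in the segmentation mask: project each contour point into the image and test whether the pixel's class is "wall". The sketch below assumes that the contour points have already been projected to pixel coordinates and that the mask stores one integer class id per pixel with a known id for the "wall" class; these assumptions and the NumPy representation are illustrative, not specified by the patent.

```python
import numpy as np

WALL_CLASS_ID = 1  # assumed id of the "wall" class in the segmentation mask

def object_occlusion(contour_pixels, segmentation_mask, wall_id=WALL_CLASS_ID):
    """Classify contour points as object-occluded or not.

    contour_pixels: (N, 2) integer array of (row, col) pixel coordinates of
        contour points already projected into the room image.
    segmentation_mask: (H, W) integer array, one semantic class id per pixel.
    Returns a boolean array that is True where the point is occluded by an
    object (pixel class is not "wall") and False where it is not occluded
    (pixel class is "wall"). In practice, floor or ceiling class ids may need
    to be treated like "wall" as well.
    """
    h, w = segmentation_mask.shape
    rows = np.clip(contour_pixels[:, 0], 0, h - 1)
    cols = np.clip(contour_pixels[:, 1], 0, w - 1)
    labels = segmentation_mask[rows, cols]
    return labels != wall_id
```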
  • The visual information may include whether the contour line is visible or invisible and, when it is visible, may further include at least one of the color, width, and style of the contour line.
  • The width of the contour line is its thickness, and the style of the contour line may include, for example, solid lines and dashed lines.
  • For example, the contour line of the occluded part may be a dashed line and the contour line of the unoccluded part a solid line.
  • Using solid and dashed lines to distinguish the unoccluded and occluded parts matches common visual habits and allows users to quickly and accurately perceive the structure of the room.
  • For example, when at least part of the three-dimensional contour line is rendered into the image of the room, if the contour line of one edge of the three-dimensional contour line is completely occluded by walls, it is invisible; if the contour line of an edge has both an unoccluded part and an occluded part (including a part occluded by walls and/or a part occluded by objects), it is visible, with the unoccluded part drawn as a solid line and the occluded part as a dashed line. See Figure 10, which illustrates the three-dimensional contour line rendered in an image of a room.
  • In an exemplary embodiment, within the visual information of the occluded part, the visual information of the part occluded by walls differs from the visual information of the part occluded by objects.
  • To further improve the rendering effect of the three-dimensional contour line, the wall-occluded part and the object-occluded part can be further distinguished, so that the difference in their visual information lets the user clearly perceive which portions are hidden by walls and which by objects.
  • For example, the contour line of the object-occluded part and the contour line of the wall-occluded part may have different colors; the difference in color makes the visual distinction more obvious.
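  • One way to realize such a rendering strategy is to draw unoccluded runs as solid polylines and occluded runs as dashed polylines, with a different color for wall-occluded and object-occluded runs. The sketch below uses OpenCV's cv2.line for drawing; the concrete colors, dash length, and state names are illustrative choices rather than values given by the patent.

```python
import cv2
import numpy as np

# Illustrative style table: state name -> (BGR color, dashed?)
STYLE = {
    "visible":         ((0, 255, 0), False),   # solid green
    "wall_occluded":   ((0, 165, 255), True),  # dashed orange
    "object_occluded": ((255, 0, 0), True),    # dashed blue
}

def draw_polyline(image, points, color, dashed, thickness=2, dash_len=12):
    """Draw a polyline through `points` ((x, y) pixel tuples), solid or dashed."""
    for p, q in zip(points[:-1], points[1:]):
        if not dashed:
            cv2.line(image, p, q, color, thickness)
            continue
        p, q = np.asarray(p, float), np.asarray(q, float)
        n_dashes = max(int(np.linalg.norm(q - p) // dash_len), 1)
        for k in range(n_dashes):
            t0 = k / n_dashes
            t1 = min(t0 + 0.5 / n_dashes, 1.0)   # draw half of each dash period
            a = tuple(int(v) for v in np.round(p + t0 * (q - p)))
            b = tuple(int(v) for v in np.round(p + t1 * (q - p)))
            cv2.line(image, a, b, color, thickness)

def render_contour(image, runs):
    """runs: list of (state_name, [(x, y), ...]) from the occlusion detection.
    Edges that are completely wall-occluded can simply be omitted from `runs`."""
    for state, points in runs:
        color, dashed = STYLE[state]
        if len(points) >= 2:
            draw_polyline(image, points, color, dashed)
    return image
```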
  • In addition, when at least part of the three-dimensional contour line is rendered into the image of the room, the length of each contour line can also be rendered, helping the user clearly perceive the dimensions of the room.
  • In an exemplary embodiment, before the occlusion detection is performed, the three-dimensional contour line of each room of a house, the image of each room, and multiple shooting points can also be obtained.
  • The three-dimensional contour line of each room is projected onto the horizontal plane to obtain the two-dimensional contour line of each room; at each shooting point, the two-dimensional contour lines of the rooms are traversed to determine whether the shooting point lies within a room's two-dimensional contour line, so as to obtain the correspondence between a room's three-dimensional contour line, the room's image, and the shooting point.
  • In this way, for each room, occlusion detection of the three-dimensional contour line can be performed at the shooting point of the image of that room, based on the room's three-dimensional contour line and the room's image.
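  • Deciding which room a shooting point belongs to is a standard point-in-polygon test against each room's two-dimensional contour line. Below is a minimal even-odd (ray casting) sketch, assuming the rooms are simple polygons given in the same coordinate system as the shooting points; the function and variable names are illustrative.

```python
def point_in_polygon(point, polygon):
    """Even-odd ray-casting test: is `point` (x, y) inside `polygon`
    (an ordered list of (x, y) vertices)?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges that straddle the horizontal ray going right from the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def assign_shooting_points(shooting_points, room_plans):
    """room_plans: {room_id: [(x, y), ...]}. Returns {room_id: [shooting points]}."""
    assignment = {room_id: [] for room_id in room_plans}
    for pt in shooting_points:
        for room_id, polygon in room_plans.items():
            if point_in_polygon(pt, polygon):
                assignment[room_id].append(pt)
                break   # a shooting point lies in at most one room
    return assignment
```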
  • The image processing device provided by the present invention is described below; the image processing device described below and the image processing method described above may be referred to in correspondence with each other.
  • This embodiment provides an image processing device, as shown in Figure 11, including:
  • the occlusion detection module 1101 is configured to perform occlusion detection on the three-dimensional contour line at the shooting point of the room image, based on the three-dimensional contour line of the room and the image of the room;
  • the contour rendering module 1102 is configured to render at least part of the three-dimensional contour line onto the image of the room based on the result of the occlusion detection and the preset rendering strategy, the preset rendering strategy including that the visual information of the unoccluded part of the three-dimensional contour line differs from the visual information of the occluded part.
  • In an exemplary embodiment, the occlusion detection module 1101 is specifically configured to: detect, based on the three-dimensional contour line, whether the three-dimensional contour line is occluded by walls at the shooting point, and detect, based on the image of the room, whether it is occluded by objects at the shooting point.
  • In an exemplary embodiment, the occlusion detection module 1101 is specifically configured to: obtain the two-dimensional contour line projected from the three-dimensional contour line onto the horizontal plane; detect, based on the shooting point and the two-dimensional contour line, whether each contour point of the two-dimensional contour line is occluded by walls; and, for each vertex on the two-dimensional contour line corresponding to a vertical contour line of the three-dimensional contour line, detect whether the vertical contour line corresponding to that vertex is occluded by walls.
  • In an exemplary embodiment, the occlusion detection module 1101 is specifically configured to: for each contour point of the two-dimensional contour line, obtain the line segment formed by the contour point and the shooting point, and determine that the contour point belongs to the wall-occluded part if the segment intersects the two-dimensional contour line at a point other than the contour point, and to the part not occluded by walls otherwise.
  • In an exemplary embodiment, the occlusion detection module 1101 is specifically configured to: for each vertex, obtain the line segment formed by the vertex and the shooting point, and determine that the vertical contour line corresponding to the vertex belongs to the wall-occluded part if the segment intersects the two-dimensional contour line at a point other than the vertex, and to the part not occluded by walls otherwise.
  • In an exemplary embodiment, the occlusion detection module 1101 is specifically configured to: determine, based on the semantic segmentation image corresponding to the image of the room, the semantics of the pixels corresponding to the contour points of the at least part of the three-dimensional contour line; determine that a contour point belongs to the part not occluded by objects if the corresponding pixel semantics is "wall", and to the object-occluded part otherwise.
  • In an exemplary embodiment, the occlusion detection module 1101 is specifically configured to: detect, based on the image of the room, whether the portion of the three-dimensional contour line not occluded by walls at the shooting point is occluded by objects.
  • In an exemplary embodiment, within the visual information of the occluded part, the visual information of the part occluded by walls differs from the visual information of the part occluded by objects.
  • Figure 12 illustrates a schematic diagram of the physical structure of an electronic device.
  • The electronic device may include: a processor 1210, a communications interface 1220, a memory 1230, and a communication bus 1240, where the processor 1210, the communication interface 1220, and the memory 1230 communicate with each other through the communication bus 1240.
  • The processor 1210 can call logical instructions in the memory 1230 to perform an image processing method, the method including:
  • performing occlusion detection on the three-dimensional contour line at the shooting point of the image of the room, based on the three-dimensional contour line of the room and the image of the room; and
  • rendering at least part of the three-dimensional contour line onto the image of the room based on the result of the occlusion detection and a preset rendering strategy, the preset rendering strategy including that the visual information of the unoccluded part of the three-dimensional contour line differs from the visual information of the occluded part.
  • In addition, the above-mentioned logical instructions in the memory 1230 can be implemented in the form of software functional units and, when sold or used as an independent product, can be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes beyond the prior art, or a part of the technical solution, can be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present invention.
  • The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • In another aspect, the present invention also provides a computer program product. The computer program product includes a computer program, which can be stored on a non-transitory computer-readable storage medium.
  • When the computer program is executed by a processor, the computer can perform the image processing method provided by each of the above embodiments, the method including:
  • performing occlusion detection on the three-dimensional contour line at the shooting point of the image of the room, based on the three-dimensional contour line of the room and the image of the room; and
  • rendering at least part of the three-dimensional contour line onto the image of the room based on the result of the occlusion detection and a preset rendering strategy, the preset rendering strategy including that the visual information of the unoccluded part of the three-dimensional contour line differs from the visual information of the occluded part.
  • In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, it implements the image processing method provided by each of the above embodiments, the method including:
  • performing occlusion detection on the three-dimensional contour line at the shooting point of the image of the room, based on the three-dimensional contour line of the room and the image of the room; and
  • rendering at least part of the three-dimensional contour line onto the image of the room based on the result of the occlusion detection and a preset rendering strategy, the preset rendering strategy including that the visual information of the unoccluded part of the three-dimensional contour line differs from the visual information of the occluded part.
  • The device embodiments described above are only illustrative. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment, and persons of ordinary skill in the art can understand and implement it without creative effort.
  • Through the description of the above embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware.
  • Based on this understanding, the above technical solution, in essence, or the part that contributes beyond the prior art, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the various embodiments or in certain parts of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an image processing method and device, and a storage medium. The method includes: performing occlusion detection on the three-dimensional contour line of a room at the shooting point of an image of the room, based on the three-dimensional contour line and the image of the room; and rendering at least part of the three-dimensional contour line onto the image of the room based on the result of the occlusion detection and a preset rendering strategy, the preset rendering strategy including that the visual information of the unoccluded portion of the three-dimensional contour line differs from the visual information of the occluded portion. In this way, the user can intuitively and clearly perceive the difference between the occluded portion and the unoccluded portion, so that the two can be distinguished, the rendered three-dimensional contour line appears more visually organized, and the rendering effect of the three-dimensional contour line is improved.

Description

图像处理方法及装置、存储介质
相关申请的交叉引用
本申请要求2022年08月23日提交的中国专利申请202211014358.0的权益,该申请的内容通过引用被合并于本文。
技术领域
本发明涉及图像处理技术领域,尤其涉及一种图像处理方法及装置、存储介质。
背景技术
目前,为了让用户可以通过图像清晰地感受房间的结构,可以得到房间的三维轮廓线,并将该三维轮廓线在房屋的图像中展示出来。现有技术中是直接将图像拍摄的房间的三维轮廓线渲染到图像上,但是,三维轮廓线有时会与图像中的物品或者墙体等存在遮挡,尤其是物品较多的房间,视觉上给人以杂乱的感觉,降低了三维轮廓线的渲染效果。
发明内容
本发明提供一种图像处理方法及装置、存储介质,用以解决现有技术中房间的图像中三维轮廓线的渲染效果较差的缺陷,实现了三维轮廓线的渲染效果的提升。
本发明提供一种图像处理方法,包括:
基于房间的三维轮廓线以及所述房间的图像,在所述房间的图像的拍摄点位下对所述三维轮廓线进行遮挡检测;
基于所述遮挡检测的结果以及预设的渲染策略,将至少部分的所述三维轮廓线渲染到所述房间的图像,所述预设的渲染策略包括所述三维轮廓线中未被遮挡部分的视觉信息和被遮挡部分的所述视觉信息不同。
根据本发明提供的一种图像处理方法,所述基于房间的三维轮廓线以及所述房间的图像,在所述房间的图像的拍摄点位下对所述三维轮廓线进行遮挡检测,包括:
基于所述三维轮廓线,在所述拍摄点位下对所述三维轮廓线进行被墙体遮挡的检测;
基于所述房间的图像,在所述拍摄点位下对所述三维轮廓线进行被物体遮挡的检测。
根据本发明提供的一种图像处理方法,所述基于所述三维轮廓线,在所述拍摄点位 下对所述三维轮廓线进行被墙体遮挡的检测,包括:
获取所述三维轮廓线投影至水平面的二维轮廓线;
基于所述拍摄点位以及所述二维轮廓线,对所述二维轮廓线的每个轮廓点进行被墙体遮挡的检测;
对所述二维轮廓线上与所述三维轮廓线的每个竖直轮廓线对应的每个顶点,基于所述拍摄点位以及所述二维轮廓线,对每个所述顶点对应的所述竖直轮廓线进行被墙体遮挡的检测。
根据本发明提供的一种图像处理方法,所述基于所述拍摄点位以及所述二维轮廓线,对所述二维轮廓线的每个轮廓点进行被墙体遮挡的检测,包括:
针对所述二维轮廓线的每个轮廓点,获取所述二维轮廓线的轮廓点与所述拍摄点位形成的线段;
若所述二维轮廓线的轮廓点所在的线段与所述二维轮廓线存在所述二维轮廓线的轮廓点以外的交点,确定所述二维轮廓线的轮廓点属于被墙体遮挡部分;
若所述二维轮廓线的轮廓点所在的线段与所述二维轮廓线不存在所述二维轮廓线的轮廓点以外的交点,确定所述二维轮廓线的轮廓点属于未被墙体遮挡部分。
根据本发明提供的一种图像处理方法,所述基于所述拍摄点位以及所述二维轮廓线,对每个所述顶点对应的所述竖直轮廓线进行被墙体遮挡的检测,包括:
针对每个所述顶点,获取所述顶点与所述拍摄点位形成的线段;
若所述顶点所在的线段与所述二维轮廓线存在所述顶点以外的交点,确定所述顶点对应的所述竖直轮廓线属于被墙体遮挡部分;
若所述顶点所在的线段与所述二维轮廓线不存在所述顶点以外的交点,确定所述顶点对应的所述竖直轮廓线属于未被墙体遮挡部分。
根据本发明提供的一种图像处理方法,所述基于所述房间的图像,在所述拍摄点位下对所述三维轮廓线进行被物体遮挡的检测,包括:
基于所述房间的图像对应的语义分割图像,确定所述至少部分的所述三维轮廓线的轮廓点对应的所述房间的图像中的像素点的语义;
若所述三维轮廓线的轮廓点对应的所述房间的图像中的像素点的语义为墙体,确定所述三维轮廓线的轮廓点属于未被物体遮挡部分;
若所述三维轮廓线的轮廓点对应的所述房间的图像中的像素点的语义不为墙体,确定所述三维轮廓线的轮廓点属于被物体遮挡部分。
根据本发明提供的一种图像处理方法,所述基于所述房间的图像,在所述拍摄点位下对所述三维轮廓线进行被物体遮挡的检测,包括:
基于所述房间的图像,在所述拍摄点位下对所述三维轮廓线未被墙体遮挡的部分进行被物体遮挡的检测。
根据本发明提供的一种图像处理方法,所述被遮挡部分的所述视觉信息中,被墙体遮挡部分的所述视觉信息和被物体遮挡部分的所述视觉信息不同。
本发明还提供一种图像处理装置,包括:
遮挡检测模块,用于基于房间的三维轮廓线以及所述房间的图像,在所述房间的图像的拍摄点位下对所述三维轮廓线进行遮挡检测;
轮廓渲染模块,用于基于所述遮挡检测的结果以及预设的渲染策略,将至少部分的所述三维轮廓线渲染到所述房间的图像,所述预设的渲染策略包括所述三维轮廓线中未被遮挡部分的视觉信息和被遮挡部分的所述视觉信息不同。
根据本发明提供的一种图像处理装置,遮挡检测模块,具体用于:
基于所述三维轮廓线,在所述拍摄点位下对所述三维轮廓线进行被墙体遮挡的检测;
基于所述房间的图像,在所述拍摄点位下对所述三维轮廓线进行被物体遮挡的检测。
根据本发明提供的一种图像处理装置,遮挡检测模块,具体用于:
获取所述三维轮廓线投影至水平面的二维轮廓线;
基于所述拍摄点位以及所述二维轮廓线,对所述二维轮廓线的每个轮廓点进行被墙体遮挡的检测;
对所述二维轮廓线上与所述三维轮廓线的每个竖直轮廓线对应的每个顶点,基于所述拍摄点位以及所述二维轮廓线,对每个所述顶点对应的所述竖直轮廓线进行被墙体遮挡的检测。
根据本发明提供的一种图像处理装置,遮挡检测模块,具体用于:
针对所述二维轮廓线的每个轮廓点,获取所述二维轮廓线的轮廓点与所述拍摄点位形成的线段;
若所述二维轮廓线的轮廓点所在的线段与所述二维轮廓线存在所述二维轮廓线的轮廓点以外的交点,确定所述二维轮廓线的轮廓点属于被墙体遮挡部分;
若所述二维轮廓线的轮廓点所在的线段与所述二维轮廓线不存在所述二维轮廓线的轮廓点以外的交点,确定所述二维轮廓线的轮廓点属于未被墙体遮挡部分。
根据本发明提供的一种图像处理装置,遮挡检测模块,具体用于:
针对每个所述顶点,获取所述顶点与所述拍摄点位形成的线段;
若所述顶点所在的线段与所述二维轮廓线存在所述顶点以外的交点,确定所述顶点对应的所述竖直轮廓线属于被墙体遮挡部分;
若所述顶点所在的线段与所述二维轮廓线不存在所述顶点以外的交点,确定所述顶点对应的所述竖直轮廓线属于未被墙体遮挡部分。
根据本发明提供的一种图像处理装置,遮挡检测模块,具体用于:
基于所述房间的图像对应的语义分割图像,确定所述至少部分的所述三维轮廓线的轮廓点对应的所述房间的图像中的像素点的语义;
若所述三维轮廓线的轮廓点对应的所述房间的图像中的像素点的语义为墙体,确定所述三维轮廓线的轮廓点属于未被物体遮挡部分;
若所述三维轮廓线的轮廓点对应的所述房间的图像中的像素点的语义不为墙体,确定所述三维轮廓线的轮廓点属于被物体遮挡部分。
根据本发明提供的一种图像处理装置,遮挡检测模块,具体用于:
基于所述房间的图像,在所述拍摄点位下对所述三维轮廓线未被墙体遮挡的部分进行被物体遮挡的检测。
根据本发明提供的一种图像处理装置,所述被遮挡部分的所述视觉信息中,被墙体遮挡部分的所述视觉信息和被物体遮挡部分的所述视觉信息不同。
本发明还提供一种电子设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时实现如上述任一种所述图像处理方法。
本发明还提供一种非暂态计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现如上述任一种所述图像处理方法。
本发明还提供一种计算机程序产品,包括计算机程序,所述计算机程序被处理器执行时实现如上述任一种所述图像处理方法。
本发明提供的图像处理方法,由于可以基于房间的三维轮廓线以及房间的图像,在房间的图像的拍摄点位下对三维轮廓线进行遮挡检测,因此,可以基于遮挡检测的结果以及预设的渲染策略,将至少部分的三维轮廓线渲染到房间的图像,通过渲染策略针对被遮挡部分的视觉信息和未被遮挡部分的视觉信息进行区分,让用户可以从视觉上直观清晰地感受到被遮挡部分和未被遮挡部分的不同,从而能够对被遮挡部分和未被遮挡部分进行区分,使得渲染的三维轮廓线视觉上更有条理,提升了三维轮廓线的渲染效果。
附图说明
为了更清楚地说明本发明或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作一简单地介绍,显而易见地,下面描述中的附图是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明提供的图像处理方法的流程示意图之一;
图2是本发明提供的三维轮廓线的示意图;
图3是本发明提供的图像处理方法的流程示意图之二;
图4是本发明提供的图像处理方法的流程示意图之三;
图5是本发明提供的二维轮廓线的示意图之一;
图6是本发明提供的二维轮廓线的示意图之二;
图7是本发明提供的图像处理方法的流程示意图之四;
图8是本发明提供的图像处理方法的流程示意图之五;
图9是本发明提供的图像处理方法的流程示意图之五;
图10是本发明提供的三维轮廓线的渲染效果示意图;
图11是本发明提供的图像处理装置的结构示意图;
图12是本发明提供的电子设备的结构示意图。
具体实施方式
为使本发明的目的、技术方案和优点更加清楚,下面将结合本发明中的附图,对本发明中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
下面结合图1-图10描述本发明的图像处理方法。
本实施例提供一种图像处理方法,可以由终端或者服务器中的软件和/或硬件执行,如图1所示,该图像处理方法至少包括如下步骤:
步骤101、基于房间的三维轮廓线以及房间的图像,在房间的图像的拍摄点位下对三维轮廓线进行遮挡检测。
房间的图像可以是全景图,也可以是普通图像。为了通过图像清晰地感受房间的结构,可以预先提取房间的三维轮廓线并基于拍摄点位的位置和朝向渲染到房间的图像中, 三维轮廓线的具体提取方式可以参考相关技术实施,此处不做赘述。参见图2,示意了一个房间的三维轮廓线。但是,三维轮廓线有时会与图像中的物品或者墙体等存在遮挡,三维轮廓线的渲染效果不佳,为了提升三维轮廓线的渲染效果,本实施例中进行了关于遮挡情况的处理。由于在拍摄房间的图像时,房间的图像的拍摄点位不同,得到的房间的图像不同,相应的,三维轮廓线渲染到图像中的部分不同,在不同的拍摄点位处看到的三维轮廓线的遮挡情况也不同,因此,可以结合三维轮廓线以及房间的图像,在房间的图像的拍摄点位下对三维轮廓线进行遮挡检测。遮挡检测的结果可以包括三维轮廓线是否被遮挡,三维轮廓线被遮挡时,还可以包括被遮挡部分的位置信息,还可以包括未被遮挡部分的位置信息。可以理解的是,三维轮廓线未被遮挡时,三维轮廓线全部未被遮挡。
步骤102、基于遮挡检测的结果以及预设的渲染策略,将至少部分的三维轮廓线渲染到房间的图像,预设的渲染策略包括三维轮廓线中未被遮挡部分的视觉信息和被遮挡部分的视觉信息不同。
实际应用中,由于拍摄的房间的图像可能是房间的一部分,因此,将三维轮廓线渲染到房间的图像时,将至少部分三维轮廓线渲染到房间的图像中。
为了提升三维轮廓线的渲染效果,可以根据实际需求,预先设置渲染策略,针对被遮挡部分的视觉信息和未被遮挡部分的视觉信息进行区分,让用户可以从视觉上直观地感受到被遮挡部分和未被遮挡部分的不同,从而能够对被遮挡部分和未被遮挡部分进行区分。
本实施例中,由于可以基于房间的三维轮廓线以及房间的图像,在房间的图像的拍摄点位下对三维轮廓线进行遮挡检测,因此,可以基于遮挡检测的结果以及预设的渲染策略,将至少部分的三维轮廓线渲染到房间的图像,通过渲染策略针对被遮挡部分的视觉信息和未被遮挡部分的视觉信息进行区分,让用户可以从视觉上直观清晰地感受到被遮挡部分和未被遮挡部分的不同,从而能够对被遮挡部分和未被遮挡部分进行区分,使得渲染的三维轮廓线视觉上更有条理,提升了三维轮廓线的渲染效果。
在示例性实施例中,基于房间的三维轮廓线以及房间的图像,在房间的图像的拍摄点位下对三维轮廓线进行遮挡检测,如图3所示,具体可以包括:
步骤301、基于三维轮廓线,在拍摄点位下对三维轮廓线进行被墙体遮挡的检测。
步骤302、基于房间的图像,在拍摄点位下对三维轮廓线进行被物体遮挡的检测。
实际应用中,对三维轮廓线中的一条边的轮廓线来说,在拍摄点位处,轮廓线可能 被物体遮挡,也可能被其它墙体遮挡,示例性的,可以有以下几种状态:
一、完全被其它墙体遮挡而不可见。
二、部分被其它墙体遮挡,未遮挡部分被物体遮挡。
三、部分被其它墙体遮挡,未遮挡部分未被物体遮挡。
四、没有被其它墙体遮挡,但是被物体遮挡。
五、没有任何遮挡,完整可见。
本实施例中,可以分别对被墙体遮挡和被物体遮挡的情况进行检测,由于三维轮廓线可以反映房间的墙体的情况,因此,可以基于三维轮廓线,在拍摄点位下对三维轮廓线进行被墙体遮挡的检测,从而准确地检测出被墙体遮挡的情况,而房间的图像则可以反映房间的物体的情况,因此,可以基于房间的图像,在拍摄点位下对三维轮廓线进行被物体遮挡的检测,从而准确地检测出被物体遮挡的情况。
进一步的,在示例性实施例中,步骤302中,基于房间的图像,在拍摄点位下对三维轮廓线进行被物体遮挡的检测,其具体实现方式可以包括:基于房间的图像,在拍摄点位下对三维轮廓线未被墙体遮挡的部分进行被物体遮挡的检测。
由于三维轮廓线中被墙体遮挡的部分,不再有物体遮挡,而三维轮廓线中未被墙体遮挡的部分,还可能会被物体遮挡,因此,可以直接在拍摄点位下对三维轮廓线未被墙体遮挡的部分进行被物体遮挡的检测,可以提高检测效率。
在示例性实施例中,基于三维轮廓线,在拍摄点位下对三维轮廓线进行被墙体遮挡的检测,如图4所示,其具体实现方式可以包括:
步骤401、获取三维轮廓线投影至水平面的二维轮廓线。
参见图5,以一套房屋的每个房间的二维轮廓线进行示意,图5中以7个房间的二维轮廓线进行示意,房间的二维轮廓线也可以作为二维户型图。这里,三维轮廓线是由二维户型图沿重力方向拉伸得到的,房顶和地面的轮廓线在水平面的投影是一致的。
步骤402、基于拍摄点位以及二维轮廓线,对二维轮廓线的每个轮廓点进行被墙体遮挡的检测。
实际应用中,可以将第二维轮廓线离散成多个轮廓点,参见图6,以轮廓点A、B、C、D和E进行示意,对二维轮廓线的每个轮廓点进行被墙体遮挡的检测,从而可以得到整个二维轮廓线被墙体遮挡的情况。
步骤403、对二维轮廓线上与三维轮廓线的每个竖直轮廓线对应的每个顶点,基于拍摄点位以及二维轮廓线,对每个顶点对应的竖直轮廓线进行被墙体遮挡的检测。
由于二维轮廓线是由三维轮廓线投影至水平面得到的,三维轮廓线的每个竖直轮廓线对应二维轮廓线的每个顶点,其特点是,顶点未被墙体遮挡,则顶点对应的竖直轮廓线未被墙体遮挡,顶点被墙体遮挡,则顶点对应的竖直轮廓线被墙体遮挡,不会存在竖直轮廓线部分被遮挡部分未被遮挡的情况,因此,可以通过对每个顶点对应的竖直轮廓线进行被墙体遮挡的检测。
本实施例中,通过对二维轮廓线离散的每个轮廓点进行被墙体遮挡的检测,并且通过对二维轮廓线的每个顶点对应的竖直轮廓线进行被墙体遮挡的检测,检测更加准确、精细,从而提升了整个三维轮廓线被墙体遮挡的情况的准确性。
在示例性实施例中,基于拍摄点位以及二维轮廓线,对二维轮廓线的每个轮廓点进行被墙体遮挡的检测,如图7所示,具体实现方式可以包括:
步骤701、针对二维轮廓线的每个轮廓点,获取二维轮廓线的轮廓点与拍摄点位形成的线段。
参见图6,仍以二维轮廓线的轮廓点A、B、C、D和E进行举例,拍摄点位O分别与二维轮廓线的轮廓点A、B、C、D和E形成的线段为OA、OB、OC、OD和OE。
步骤702、若二维轮廓线的轮廓点所在的线段与二维轮廓线存在二维轮廓线的轮廓点以外的交点,确定二维轮廓线的轮廓点属于被墙体遮挡部分。
参见图6,从图6中的线段OE可以看出,若二维轮廓线的轮廓点所在的线段OE与二维轮廓线存在二维轮廓线的轮廓点以外的交点P,说明二维轮廓线的轮廓点E属于被墙体遮挡部分。
步骤703、若二维轮廓线的轮廓点所在的线段与二维轮廓线不存在二维轮廓线的轮廓点以外的交点,确定二维轮廓线的轮廓点属于未被墙体遮挡部分。
参见图6,从图6中的线段OA、OB、OC和OD可以看出,若二维轮廓线的轮廓点所在的线段与二维轮廓线不存在二维轮廓线的轮廓点以外的交点,说明二维轮廓线的轮廓点不属于被墙体遮挡部分。
二维轮廓线是多条轮廓线形成的一个多边形。实施中,可以获取拍摄点位的位置和二维轮廓线的轮廓点的位置,基于拍摄点位的位置和二维轮廓线的轮廓点的位置,可以得到二维轮廓线的轮廓点与拍摄点位形成的线段的表达式。获取形成二维轮廓线的每条轮廓线的表达式。通过二维轮廓线的轮廓点与拍摄点位形成的线段的表达式,以及形成二维轮廓线的每条轮廓线的表达式,确定二维轮廓线的轮廓点所在的线段与二维轮廓线的每条轮廓线是否相交。
属于未被墙体遮挡部分的连续相邻的二维轮廓线的轮廓点形成一段未被墙体遮挡的轮廓线。属于被墙体遮挡部分的连续相邻的二维轮廓线的轮廓点形成一段被墙体遮挡的轮廓线。
本实施例中,通过分析二维轮廓线的轮廓点与拍摄点位形成的线段与二维轮廓线的几何关系,来确定二维轮廓线的轮廓点属于未被墙体遮挡部分还是属于被墙体遮挡部分,不仅简单快速,而且检测结果更加准确。
在示例性实施例中,基于拍摄点位以及二维轮廓线,对每个顶点对应的竖直轮廓线进行被墙体遮挡的检测,如图8所示,其具体实现方式可以包括:
步骤801、针对每个顶点,获取顶点与拍摄点位形成的线段。
步骤802、若顶点所在的线段与二维轮廓线存在顶点以外的交点,确定顶点对应的竖直轮廓线属于被墙体遮挡部分。
步骤803、若顶点所在的线段与二维轮廓线不存在顶点以外的交点,确定顶点对应的竖直轮廓线属于未被墙体遮挡部分。
本实施例中,对每个顶点对应的竖直轮廓线进行被墙体遮挡的检测的方式,与步骤701~步骤703中对二维轮廓线的每个轮廓点进行被墙体遮挡的检测的方式相类似,可以达到相类似的效果。考虑到二维轮廓线的顶点也可以是步骤701~步骤703中二维轮廓线的轮廓点,在一种实现方式中,还可以直接获取二维轮廓线中作为顶点的轮廓点的检测结果。仍参见图6,二维轮廓线的轮廓点A、B、C和D都是二维轮廓线的顶点,可以直接获取这些作为二维轮廓线的顶点的轮廓点的检测结果。如此,无需再重复检测,从而提高了检测效率。
在示例性实施例中,步骤302中,基于房间的图像,在拍摄点位下对三维轮廓线进行被物体遮挡的检测,如图9所示,其具体实现方式可以包括:
步骤901、基于房间的图像对应的语义分割图像,确定至少部分的三维轮廓线的轮廓点对应的房间的图像中的像素点的语义。
实际应用中,可以对房间的图像进行语义分割,得到房间的图像对应的语义分割图像。语义分割可以对图像中物体进行像素级的分割,通过语义分割图像,可以对房间的图像中的每个像素点都标明属于哪类物体,例如可以通过不同的颜色表示不同类别的物体,例如每个像素点属于墙体还是其它的物体。
基于房间的图像,在拍摄点位下对三维轮廓线未被墙体遮挡的部分进行被物体遮挡的检测的情况下,确定至少部分的三维轮廓线的轮廓点对应的房间的图像中的像素点的 语义,具体可以是确定至少部分的三维轮廓线的未被墙体遮挡的轮廓点,对应的房间的图像中的像素点的语义。如此,可以进一步提升检测效率。
步骤902、若三维轮廓线的轮廓点对应的房间的图像中的像素点的语义为墙体,确定三维轮廓线的轮廓点属于未被物体遮挡部分。
步骤903、若三维轮廓线的轮廓点对应的房间的图像中的像素点的语义不为墙体,确定三维轮廓线的轮廓点属于被物体遮挡部分。
其中,属于未被物体遮挡部分的连续相邻的轮廓点形成一段未被物体遮挡的轮廓线。属于被物体遮挡部分的连续相邻的轮廓点形成一段被物体遮挡的轮廓线。
本实施例中,通过语义分割图像可以得到房间的图像中的像素点的语义,对于渲染到房间的图像中的至少部分的三维轮廓线来说,该至少部分的三维轮廓线的轮廓点对应的房间的图像中的像素点的语义为墙体,说明未被物体遮挡,否则,说明被物体遮挡了,从而可以准确地确定出物体遮挡情况。
视觉信息可以包括可见和不可见,可见的情况下,还可以包括轮廓线的颜色、宽度和样式中的至少一种。轮廓线的宽度也即轮廓线的粗细。轮廓线的样式可以包括实线和虚线等等。示例性的,被遮挡部分的轮廓线可以为虚线,未被遮挡部分的轮廓线可以为实线。通过实线和虚线来区分被遮挡部分和未被遮挡部分,更符合人们通常的视觉习惯,方便用户快速准确地感受到房间的结构。
示例性的,将至少部分的三维轮廓线渲染到房间的图像时,若房间的图像中渲染的至少部分的三维轮廓线中,若三维轮廓线的一条边的轮廓线完全被墙体遮挡,则不可见,若三维轮廓线的一条边的轮廓线存在未被遮挡部分以及被遮挡部分(包括被墙体遮挡部分和/或被物体遮挡部分),则可见,其中,未被遮挡部分的轮廓线为实线,被遮挡部分的轮廓线为虚线。参见图10,示意了一个房间的图像中渲染的三维轮廓线。
在示例性实施例中,被遮挡部分的视觉信息中,被墙体遮挡部分的视觉信息和被物体遮挡部分的视觉信息不同。为了进一步提高三维轮廓线的渲染效果,还可以进一步对被墙体遮挡部分和被物体遮挡部分进行区分,使得用户通过被墙体遮挡部分的视觉信息和被物体遮挡部分的视觉信息的不同,进一步清晰感受到被物体遮挡的情况和被墙体遮挡的情况。示例性的,被物体部分的轮廓线和被墙体遮挡部分的轮廓线的颜色不同。颜色的不同在视觉上区分效果更加明显。
另外,将至少部分的三维轮廓线渲染到房间的图像时,还可以渲染每条轮廓线的长度信息,帮助用户清晰地感受房间的尺寸。
在示例性实施例中,基于房间的三维轮廓线以及房间的图像,在房间的图像的拍摄点位下对三维轮廓线进行遮挡检测之前,还可以获取一套房屋的每个房间的三维轮廓线、每个房间的图像以及多个拍摄点位,将每个房间的三维轮廓线投影至水平面得到每个房间的二维轮廓线,在每个拍摄点位下,遍历每个房间的二维轮廓线,确定拍摄点位是否位于该房间的二维轮廓线内,以得到房间的三维轮廓线、房间的图像及拍摄点位的对应关系。如此,则可以针对每个房间,基于房间的三维轮廓线以及房间的图像,在房间的图像的拍摄点位下对三维轮廓线进行遮挡检测。
下面对本发明提供的图像处理装置进行描述,下文描述的图像处理装置与上文描述的图像处理方法可相互对应参照。
本实施例提供一种图像处理装置,如图11所示,包括:
遮挡检测模块1101,用于基于房间的三维轮廓线以及房间的图像,在房间的图像的拍摄点位下对三维轮廓线进行遮挡检测;
轮廓渲染模块1102,用于基于遮挡检测的结果以及预设的渲染策略,将至少部分的三维轮廓线渲染到房间的图像,预设的渲染策略包括三维轮廓线中未被遮挡部分的视觉信息和被遮挡部分的视觉信息不同。
在示例性实施例中,遮挡检测模块1101,具体用于:
基于三维轮廓线,在拍摄点位下对三维轮廓线进行被墙体遮挡的检测;
基于房间的图像,在拍摄点位下对三维轮廓线进行被物体遮挡的检测。
在示例性实施例中,遮挡检测模块1101,具体用于:
获取三维轮廓线投影至水平面的二维轮廓线;
基于拍摄点位以及二维轮廓线,对二维轮廓线的每个轮廓点进行被墙体遮挡的检测;
对二维轮廓线上与三维轮廓线的每个竖直轮廓线对应的每个顶点,基于拍摄点位以及二维轮廓线,对每个顶点对应的竖直轮廓线进行被墙体遮挡的检测。
在示例性实施例中,遮挡检测模块1101,具体用于:
针对二维轮廓线的每个轮廓点,获取二维轮廓线的轮廓点与拍摄点位形成的线段;
若二维轮廓线的轮廓点所在的线段与二维轮廓线存在二维轮廓线的轮廓点以外的交点,确定二维轮廓线的轮廓点属于被墙体遮挡部分;
若二维轮廓线的轮廓点所在的线段与二维轮廓线不存在二维轮廓线的轮廓点以外的交点,确定二维轮廓线的轮廓点属于未被墙体遮挡部分。
在示例性实施例中,遮挡检测模块1101,具体用于:
针对每个顶点,获取顶点与拍摄点位形成的线段;
若顶点所在的线段与二维轮廓线存在顶点以外的交点,确定顶点对应的竖直轮廓线属于被墙体遮挡部分;
若顶点所在的线段与二维轮廓线不存在顶点以外的交点,确定顶点对应的竖直轮廓线属于未被墙体遮挡部分。
在示例性实施例中,遮挡检测模块1101,具体用于:
基于房间的图像对应的语义分割图像,确定至少部分的三维轮廓线的轮廓点对应的房间的图像中的像素点的语义;
若三维轮廓线的轮廓点对应的房间的图像中的像素点的语义为墙体,确定三维轮廓线的轮廓点属于未被物体遮挡部分;
若三维轮廓线的轮廓点对应的房间的图像中的像素点的语义不为墙体,确定三维轮廓线的轮廓点属于被物体遮挡部分。
在示例性实施例中,遮挡检测模块1101,具体用于:
基于房间的图像,在拍摄点位下对三维轮廓线未被墙体遮挡的部分进行被物体遮挡的检测。
在示例性实施例中,被遮挡部分的视觉信息中,被墙体遮挡部分的视觉信息和被物体遮挡部分的视觉信息不同。
图12示例了一种电子设备的实体结构示意图,如图12所示,该电子设备可以包括:处理器(processor)1210、通信接口(Communications Interface)1220、存储器(memory)1230和通信总线1240,其中,处理器1210,通信接口1220,存储器1230通过通信总线1240完成相互间的通信。处理器1210可以调用存储器1230中的逻辑指令,以执行图像处理方法,该方法包括:
基于房间的三维轮廓线以及房间的图像,在房间的图像的拍摄点位下对三维轮廓线进行遮挡检测;
基于遮挡检测的结果以及预设的渲染策略,将至少部分的三维轮廓线渲染到房间的图像,预设的渲染策略包括三维轮廓线中未被遮挡部分的视觉信息和被遮挡部分的视觉信息不同。
此外,上述的存储器1230中的逻辑指令可以通过软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以 以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
另一方面,本发明还提供一种计算机程序产品,所述计算机程序产品包括计算机程序,计算机程序可存储在非暂态计算机可读存储介质上,所述计算机程序被处理器执行时,计算机能够执行上述各方法所提供的图像处理方法,该方法包括:
基于房间的三维轮廓线以及房间的图像,在房间的图像的拍摄点位下对三维轮廓线进行遮挡检测;
基于遮挡检测的结果以及预设的渲染策略,将至少部分的三维轮廓线渲染到房间的图像,预设的渲染策略包括三维轮廓线中未被遮挡部分的视觉信息和被遮挡部分的视觉信息不同。
又一方面,本发明还提供一种非暂态计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现以执行上述各方法提供的图像处理方法,该方法包括:
基于房间的三维轮廓线以及房间的图像,在房间的图像的拍摄点位下对三维轮廓线进行遮挡检测;
基于遮挡检测的结果以及预设的渲染策略,将至少部分的三维轮廓线渲染到房间的图像,预设的渲染策略包括三维轮廓线中未被遮挡部分的视觉信息和被遮挡部分的视觉信息不同。
以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性的劳动的情况下,即可以理解并实施。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到各实施方式可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件。基于这样的理解,上述技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可以存储在计算机可读存储介质中,如ROM/RAM、磁碟、光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等) 执行各个实施例或者实施例的某些部分所述的方法。
最后应说明的是:以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围。

Claims (17)

  1. 一种图像处理方法,其特征在于,包括:
    基于房间的三维轮廓线以及所述房间的图像,在所述房间的图像的拍摄点位下对所述三维轮廓线进行遮挡检测;
    基于所述遮挡检测的结果以及预设的渲染策略,将至少部分的所述三维轮廓线渲染到所述房间的图像,所述预设的渲染策略包括所述三维轮廓线中未被遮挡部分的视觉信息和被遮挡部分的所述视觉信息不同。
  2. 根据权利要求1所述的图像处理方法,其特征在于,所述基于房间的三维轮廓线以及所述房间的图像,在所述房间的图像的拍摄点位下对所述三维轮廓线进行遮挡检测,包括:
    基于所述三维轮廓线,在所述拍摄点位下对所述三维轮廓线进行被墙体遮挡的检测;
    基于所述房间的图像,在所述拍摄点位下对所述三维轮廓线进行被物体遮挡的检测。
  3. 根据权利要求2所述的图像处理方法,其特征在于,所述基于所述三维轮廓线,在所述拍摄点位下对所述三维轮廓线进行被墙体遮挡的检测,包括:
    获取所述三维轮廓线投影至水平面的二维轮廓线;
    基于所述拍摄点位以及所述二维轮廓线,对所述二维轮廓线的每个轮廓点进行被墙体遮挡的检测;
    对所述二维轮廓线上与所述三维轮廓线的每个竖直轮廓线对应的每个顶点,基于所述拍摄点位以及所述二维轮廓线,对每个所述顶点对应的所述竖直轮廓线进行被墙体遮挡的检测。
  4. 根据权利要求3所述的图像处理方法,其特征在于,所述基于所述拍摄点位以及所述二维轮廓线,对所述二维轮廓线的每个轮廓点进行被墙体遮挡的检测,包括:
    针对所述二维轮廓线的每个轮廓点,获取所述二维轮廓线的轮廓点与所述拍摄点位形成的线段;
    若所述二维轮廓线的轮廓点所在的线段与所述二维轮廓线存在所述二维轮廓线的轮廓点以外的交点,确定所述二维轮廓线的轮廓点属于被墙体遮挡部分;
    若所述二维轮廓线的轮廓点所在的线段与所述二维轮廓线不存在所述二维轮廓线 的轮廓点以外的交点,确定所述二维轮廓线的轮廓点属于未被墙体遮挡部分。
  5. 根据权利要求3所述的图像处理方法,其特征在于,所述基于所述拍摄点位以及所述二维轮廓线,对每个所述顶点对应的所述竖直轮廓线进行被墙体遮挡的检测,包括:
    针对每个所述顶点,获取所述顶点与所述拍摄点位形成的线段;
    若所述顶点所在的线段与所述二维轮廓线存在所述顶点以外的交点,确定所述顶点对应的所述竖直轮廓线属于被墙体遮挡部分;
    若所述顶点所在的线段与所述二维轮廓线不存在所述顶点以外的交点,确定所述顶点对应的所述竖直轮廓线属于未被墙体遮挡部分。
  6. 根据权利要求2所述的图像处理方法,其特征在于,所述基于所述房间的图像,在所述拍摄点位下对所述三维轮廓线进行被物体遮挡的检测,包括:
    基于所述房间的图像对应的语义分割图像,确定所述至少部分的所述三维轮廓线的轮廓点对应的所述房间的图像中的像素点的语义;
    若所述三维轮廓线的轮廓点对应的所述房间的图像中的像素点的语义为墙体,确定所述三维轮廓线的轮廓点属于未被物体遮挡部分;
    若所述三维轮廓线的轮廓点对应的所述房间的图像中的像素点的语义不为墙体,确定所述三维轮廓线的轮廓点属于被物体遮挡部分。
  7. 根据权利要求2至6任一项所述的图像处理方法,其特征在于,所述基于所述房间的图像,在所述拍摄点位下对所述三维轮廓线进行被物体遮挡的检测,包括:
    基于所述房间的图像,在所述拍摄点位下对所述三维轮廓线未被墙体遮挡的部分进行被物体遮挡的检测。
  8. 根据权利要求2至6任一项所述的图像处理方法,其特征在于,所述被遮挡部分的所述视觉信息中,被墙体遮挡部分的所述视觉信息和被物体遮挡部分的所述视觉信息不同。
  9. 一种图像处理装置,其特征在于,包括:
    遮挡检测模块,用于基于房间的三维轮廓线以及所述房间的图像,在所述房间的图 像的拍摄点位下对所述三维轮廓线进行遮挡检测;
    轮廓渲染模块,用于基于所述遮挡检测的结果以及预设的渲染策略,将至少部分的所述三维轮廓线渲染到所述房间的图像,所述预设的渲染策略包括所述三维轮廓线中未被遮挡部分的视觉信息和被遮挡部分的所述视觉信息不同。
  10. 根据权利要求9所述的图像处理装置,其特征在于,所述遮挡检测模块具体用于:
    基于所述三维轮廓线,在所述拍摄点位下对所述三维轮廓线进行被墙体遮挡的检测;
    基于所述房间的图像,在所述拍摄点位下对所述三维轮廓线进行被物体遮挡的检测。
  11. 根据权利要求10所述的图像处理装置,其特征在于,所述遮挡检测模块具体用于:
    获取所述三维轮廓线投影至水平面的二维轮廓线;
    基于所述拍摄点位以及所述二维轮廓线,对所述二维轮廓线的每个轮廓点进行被墙体遮挡的检测;
    对所述二维轮廓线上与所述三维轮廓线的每个竖直轮廓线对应的每个顶点,基于所述拍摄点位以及所述二维轮廓线,对每个所述顶点对应的所述竖直轮廓线进行被墙体遮挡的检测。
  12. 根据权利要求11所述的图像处理装置,其特征在于,所述遮挡检测模块具体用于:
    针对所述二维轮廓线的每个轮廓点,获取所述二维轮廓线的轮廓点与所述拍摄点位形成的线段;
    若所述二维轮廓线的轮廓点所在的线段与所述二维轮廓线存在所述二维轮廓线的轮廓点以外的交点,确定所述二维轮廓线的轮廓点属于被墙体遮挡部分;
    若所述二维轮廓线的轮廓点所在的线段与所述二维轮廓线不存在所述二维轮廓线的轮廓点以外的交点,确定所述二维轮廓线的轮廓点属于未被墙体遮挡部分。
  13. 根据权利要求11所述的图像处理装置,其特征在于,所述遮挡检测模块具体用于:
    针对每个所述顶点,获取所述顶点与所述拍摄点位形成的线段;
    若所述顶点所在的线段与所述二维轮廓线存在所述顶点以外的交点,确定所述顶点对应的所述竖直轮廓线属于被墙体遮挡部分;
    若所述顶点所在的线段与所述二维轮廓线不存在所述顶点以外的交点,确定所述顶点对应的所述竖直轮廓线属于未被墙体遮挡部分。
  14. 根据权利要求10所述的图像处理装置,其特征在于,所述遮挡检测模块具体用于:
    基于所述房间的图像对应的语义分割图像,确定所述至少部分的所述三维轮廓线的轮廓点对应的所述房间的图像中的像素点的语义;
    若所述三维轮廓线的轮廓点对应的所述房间的图像中的像素点的语义为墙体,确定所述三维轮廓线的轮廓点属于未被物体遮挡部分;
    若所述三维轮廓线的轮廓点对应的所述房间的图像中的像素点的语义不为墙体,确定所述三维轮廓线的轮廓点属于被物体遮挡部分。
  15. 根据权利要求10至14任一项所述的图像处理装置,其特征在于,所述遮挡检测模块具体用于:
    基于所述房间的图像,在所述拍摄点位下对所述三维轮廓线未被墙体遮挡的部分进行被物体遮挡的检测。
  16. 根据权利要求10至14任一项所述的图像处理装置,其特征在于,所述被遮挡部分的所述视觉信息中,被墙体遮挡部分的所述视觉信息和被物体遮挡部分的所述视觉信息不同。
  17. 一种非暂态计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如权利要求1至8任一项所述图像处理方法。
PCT/CN2022/136281 2022-08-23 2022-12-02 图像处理方法及装置、存储介质 WO2024040789A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211014358.0A CN115496711A (zh) 2022-08-23 2022-08-23 图像处理方法及装置、存储介质
CN202211014358.0 2022-08-23

Publications (1)

Publication Number Publication Date
WO2024040789A1 (zh)

Family

ID=84466456

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/136281 WO2024040789A1 (zh) 2022-08-23 2022-12-02 图像处理方法及装置、存储介质

Country Status (2)

Country Link
CN (1) CN115496711A (zh)
WO (1) WO2024040789A1 (zh)

Citations (4)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107393003A (zh) * 2017-08-07 2017-11-24 苍穹数码技术股份有限公司 一种基于云计算的三维房屋自动建模的方法与实现
CN111275801A (zh) * 2018-12-05 2020-06-12 中国移动通信集团广西有限公司 一种三维画面渲染方法及装置
CN111738191A (zh) * 2020-06-29 2020-10-02 广州小鹏车联网科技有限公司 一种车位显示的处理方法和车辆
CN111932666A (zh) * 2020-07-17 2020-11-13 北京字节跳动网络技术有限公司 房屋三维虚拟图像的重建方法、装置和电子设备

Also Published As

Publication number Publication date
CN115496711A (zh) 2022-12-20

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22956323

Country of ref document: EP

Kind code of ref document: A1