WO2021051249A1 - Method, device and storage medium for automatically cutting a model - Google Patents

Method, device and storage medium for automatically cutting a model

Info

Publication number
WO2021051249A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
sensor
projection
area
effective area
Application number
PCT/CN2019/106028
Other languages
English (en)
French (fr)
Inventor
Huang Sheng
Liang Jiabin
Tian Yi
Li Sijin
Li Xinchao
Original Assignee
SZ DJI Technology Co., Ltd.
Application filed by SZ DJI Technology Co., Ltd.
Priority to CN201980034070.XA (publication CN112204624A)
Priority to PCT/CN2019/106028
Publication of WO2021051249A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Definitions

  • This application relates to the technical field of model reconstruction, and in particular to a method, device and storage medium for automatically cutting a model.
  • This application provides a method for automatically cutting a model, a device for automatically cutting a model, and a storage medium.
  • This application provides a method for automatically cutting a model, including: acquiring sensor information during model reconstruction using data collected by a sensor; determining, according to the sensor information, an effective area in the model and/or an ineffective area outside the effective area; and
  • cutting the model to remove the ineffective area in the model.
  • This application provides a device for automatically cutting a model, the device including a memory and a processor;
  • the memory is used to store a computer program;
  • the processor is used to execute the computer program and, when executing the computer program, implement the following steps:
  • cut the model to remove the ineffective area in the model.
  • This application provides a computer-readable storage medium that stores a computer program; when the computer program is executed by a processor, the processor implements the method for automatically cutting a model described above.
  • The embodiments of this application provide a method for automatically cutting a model, a device for automatically cutting a model, and a storage medium.
  • The effective area and/or ineffective area in the model is determined according to the acquired sensor information, and the model is cut to remove the ineffective area.
  • Because determining the effective area and/or ineffective area and cutting the model are completed during the model reconstruction process, without interrupting that process, the method not only does not reduce the reconstruction speed but also cuts out the ineffective areas in the model, which reduces the volume of the model, reduces the three-dimensional space it occupies, and avoids noise interference.
  • FIG. 1 is a schematic flowchart of an embodiment of a method for automatically cutting a model according to the present application
  • FIG. 2 is a schematic flowchart of another embodiment of the method for automatically cutting a model according to the present application
  • FIG. 3 is a schematic diagram of the projection points obtained by projecting the positions of the sensors, in a practical application of the method for automatically cutting a model according to the present application;
  • FIG. 4 is a schematic flowchart of another embodiment of the method for automatically cutting a model according to the present application.
  • FIG. 5 is a schematic diagram of a specific method of forming a rectangular bounding box by projection points in a practical application of the method for automatically cutting a model according to the present application;
  • FIG. 6 is a schematic diagram of a specific method in which the projection point expands outward or shrinks inward in the method for automatically cutting the model according to the present application;
  • FIG. 7 is a schematic flowchart of another embodiment of the method for automatically cutting a model according to the present application.
  • FIG. 8 is a schematic diagram, in a practical application of the method for automatically cutting a model according to the present application, of the intersection points obtained by drawing a ray from each sensor along its inclination direction to the current surface of the model;
  • FIG. 9 is a schematic flowchart of another embodiment of the method for automatically cutting a model according to the present application.
  • FIG. 10 is a schematic flowchart of another embodiment of the method for automatically cutting a model according to the present application.
  • FIG. 11 is a schematic flowchart of another embodiment of the method for automatically cutting a model according to the present application.
  • FIG. 12 is a schematic flowchart of another embodiment of the method for automatically cutting a model according to the present application.
  • FIG. 13 is a schematic flowchart of another embodiment of the method for automatically cutting a model according to the present application.
  • Figure 14 is a schematic diagram of an uncut model obtained by three-dimensional modeling using a series of pictures taken around a stone sculpture;
  • Fig. 15 is a schematic diagram of the model of Fig. 14 after cutting with the automatic model cutting method of the present application;
  • FIG. 16 is a schematic flowchart of another embodiment of the method for automatically cutting a model according to the present application.
  • Fig. 17 is a schematic structural diagram of an embodiment of an automatic model cutting device of the present application.
  • Ineffective areas, such as areas that do not need attention, occupy a large volume of three-dimensional space, resulting in an excessively large model and a decrease in the quality of the model reconstruction process.
  • Manually cutting the model interrupts the model reconstruction process, which easily reduces the reconstruction speed; it also relies on user experience, which easily leads to a decrease in model quality or an invalid model.
  • The effective area in the model and/or the ineffective area outside the effective area is determined according to the acquired sensor information, and the model is cut to remove the ineffective area.
  • Since determining the effective area and/or ineffective area and cutting the model are completed during the model reconstruction process, without interrupting that process, the method not only does not reduce the reconstruction speed but also cuts out the ineffective areas, which reduces the volume of the model, reduces the three-dimensional space it occupies, and avoids noise interference. On the contrary, it can speed up model reconstruction and rendering, and lets users focus only on the area of interest, which can maintain or even increase the quality of model reconstruction. Because determining the effective area and/or ineffective area and cutting the model are completed automatically during model reconstruction, without user participation, the method does not rely on user experience; it ensures that the cut model retains the effective area, and the model quality is stable and repeatable.
  • Fig. 1 is a schematic flowchart of an embodiment of a method for automatically cutting a model according to the present application, and the method includes:
  • Step S101: In the process of model reconstruction using the data collected by the sensor, sensor information is acquired.
  • Model reconstruction using data collected by the sensor is the mathematical process, and the computer technology, of recovering the three-dimensional information (for example, shape) of the collected scene from the data collected by the sensor, including steps such as data acquisition, preprocessing, point cloud splicing, and feature analysis.
  • Sensors include, but are not limited to: camera devices (for example, cameras, video cameras, etc.), scanners (for example, laser scanners, three-dimensional scanners, etc.), radars, and so on.
  • Sensor information refers to sensor-related information, such as: setting information of the sensor in the collected scene (for example, the position of the sensor, the setting height of the sensor, the inclination angle of the sensor, etc.) and inherent parameter information of the sensor (for example, the range of the sensor, the sensitivity of the sensor, the accuracy of the sensor, the viewing cone of a camera, etc.).
  • The method of acquiring sensor information depends on the specific sensor information. Some items need to be calculated, such as the position of the sensor and the inclination angle of the sensor; others can be entered and stored in advance and acquired when needed, for example, the sensor's inherent parameter information.
  • Step S102: Determine the effective area and/or the ineffective area in the model according to the sensor information, where the ineffective area is the area outside the effective area in the model.
  • The sensor is usually set around the region of interest (ROI), and a certain range near the sensor is the place that really needs attention in practice, or the area that the user really needs. Due to the inclination angle of the sensor, the range of the sensor, and so on, the data collected in practice includes too much data from non-focused areas.
  • The effective area in the model covers the areas the user really needs, pays attention to, and is interested in; the ineffective area is the area outside the effective area in the model.
  • There are many specific ways to use the sensor information to determine the effective area in the model, the ineffective area in the model, or both. For example, if multiple cameras are set at the same height with their inclination angles all pointing downward, the area above that height can be considered an ineffective area; if their inclination angles all point upward, the area below that height can be considered an ineffective area.
  • For another example, if the range of the sensor is 10 meters, the area within 10 meters along the sensor's acquisition direction is the effective area, and the area beyond 10 meters is the ineffective area.
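By way of illustration only (a sketch, not part of the claimed method), such a range-based division can be expressed in a few lines, assuming model vertices are given as 3-D points and the sensor's acquisition direction as a vector; all names here are illustrative:

```python
import numpy as np

def split_by_range(vertices, sensor_pos, acquisition_dir, sensor_range=10.0):
    """Mark vertices within `sensor_range` along the sensor's acquisition
    direction as effective, the rest as ineffective (illustrative helper)."""
    d = np.asarray(acquisition_dir, dtype=float)
    d /= np.linalg.norm(d)
    # Signed distance of each vertex along the acquisition direction.
    proj = (np.asarray(vertices, dtype=float) - np.asarray(sensor_pos, dtype=float)) @ d
    return (proj >= 0.0) & (proj <= sensor_range)

verts = [[0, 0, 5], [0, 0, 12], [0, 0, -1]]
mask = split_by_range(verts, [0, 0, 0], [0, 0, 1], 10.0)
# only the vertex 5 m in front of the sensor is effective
```

A real implementation would operate on mesh faces rather than bare vertices, but the same signed-distance test applies.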
  • Step S103: Cut the model to remove the ineffective area in the model.
  • After the effective area and/or ineffective area is determined, the ineffective area in the model can be cut out.
  • The effective area in the model and/or the ineffective area outside the effective area is determined according to the acquired sensor information, and the model is cut to remove the ineffective area.
  • Since determining the effective area and/or ineffective area and cutting the model are completed during the model reconstruction process, without interrupting that process, the method not only does not reduce the reconstruction speed but also cuts out the ineffective areas, which reduces the volume of the model, reduces the three-dimensional space it occupies, and avoids noise interference. On the contrary, it can speed up model reconstruction and rendering.
  • the sensor information includes one or more of the position of the sensor, the inclination angle of the sensor, and the range of the sensor.
  • the position of the sensor and the inclination angle of the sensor are the position and the inclination angle of the sensor relative to the collected scene.
  • The position of the sensor and the inclination angle of the sensor are set by the user according to real needs, such as the location of the area of attention or the area of interest; the range of the sensor refers to the measurement range of the sensor.
  • The user's choice of sensor is usually also made according to real needs and the location of the area of attention or interest; therefore, the effective area and/or ineffective area in the model can be determined according to one or more of the position of the sensor, the inclination angle of the sensor, and the range of the sensor.
  • Step S102 may be: determining the effective area and/or ineffective area in the model according to the position of the sensor and/or the inclination angle of the sensor.
  • Step S102 may also be: determining the effective area and/or ineffective area in the model according to the position of the sensor and/or the inclination angle of the sensor in the three-dimensional coordinate system of the model.
  • the three-dimensional coordinate system includes a global coordinate system or a local coordinate system.
  • the global coordinate system is the coordinate system where the three-dimensional space objects are located, and the vertex coordinates of the model are expressed based on this coordinate system.
  • the local coordinate system is an imaginary coordinate system.
  • the local coordinate system takes the center of the object as the coordinate origin.
  • The rotation or translation of an object is performed around its local coordinate system; when the object model performs operations such as rotation or translation, the local coordinate system performs the corresponding operation, so the relative position of the local coordinate system and the object remains unchanged from beginning to end.
  • The purpose of imagining this local coordinate system is mainly to make it easier to understand the translation and rotation operations performed on objects in a three-dimensional scene.
  • All transformation operations directly affect the local coordinate system. Since the relative position of the local coordinate system and the object is fixed, when the local coordinate system is translated, rotated, or scaled, the position and shape of the object in the scene change accordingly.
  • The following takes the position of the sensor; the inclination angle of the sensor; the position of the sensor together with the inclination angle of the sensor; and the position of the sensor together with the range of the sensor as examples for detailed illustration.
  • step S102 may include: sub-step S102A1 and sub-step S102A2.
  • Sub-step S102A1 project the position of the sensor in the three-dimensional coordinate system onto at least one projection surface of the three-dimensional coordinate system to obtain a projection point.
  • In FIG. 3, each triangle represents a sensor, and the vertex of the triangle represents the position of the sensor; xoy, yoz, and xoz are three projection surfaces, and the five sensors in the figure correspond to five projection points E, F, G, H, and I on the projection surface xoy.
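The projection in sub-step S102A1 amounts to dropping one coordinate. As an illustrative sketch (names are not from the patent):

```python
def project_to_planes(position):
    """Project a sensor position onto the three coordinate planes
    xoy, yoz, xoz of the model's coordinate system."""
    x, y, z = position
    return {
        "xoy": (x, y, 0.0),  # drop z
        "yoz": (0.0, y, z),  # drop x
        "xoz": (x, 0.0, z),  # drop y
    }

sensor_positions = [(1, 2, 3), (4, 5, 6)]
# projection points on xoy, analogous to E, F, ... in FIG. 3
points_xoy = [project_to_planes(p)["xoy"] for p in sensor_positions]
```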
  • Sub-step S102A2 Determine the effective area and/or ineffective area in the model according to the projection points on the projection surface.
  • The range formed by the projection points of the sensor positions on a projection surface of the three-dimensional coordinate system can, with high probability, delimit the area of interest to the user. Therefore, the effective area and/or ineffective area in the model can be determined according to the projection points on the projection surface.
  • the specific instructions are as follows:
  • A first straight line intersecting the projection surface is drawn from each projection point, a plurality of first intersecting surfaces intersecting the projection surface are obtained, and the effective area and/or ineffective area in the model is determined by the first intersecting surfaces.
  • the sub-step S102A2 may include: the sub-step S102A2a1 and the sub-step S102A2a2, as shown in FIG. 4.
  • Sub-step S102A2a1: Draw a first straight line perpendicular or oblique to the projection surface from each projection point, and then obtain a plurality of first intersecting surfaces that intersect the projection surface through the first straight lines.
  • The first intersecting surface can be planar or non-planar: when two adjacent first straight lines are coplanar, the first intersecting surface may be a plane; when they are not coplanar, the first intersecting surface may be non-planar.
  • the first intersecting plane can be obtained by intersecting a plane formed by two adjacent first straight lines with the projection plane, or can be obtained by intersecting a plane formed by two non-adjacent first straight lines with the projection plane.
  • the specific method for obtaining the first intersecting surface can be determined according to actual applications, and is not limited here.
  • Sub-step S102A2a2 Determine the effective area and/or ineffective area in the model through the first intersection plane.
  • If the first intersecting surface divides the model into left and right sides, only the left side of the first intersecting surface may be set as the ineffective area, while the model on the right side continues to have its effective area and/or ineffective area determined in other ways; alternatively, the left side of the first intersecting surface may be set directly as the effective area; or the first intersecting surface may be combined with other methods to divide the model into multiple parts and then directly determine the effective area and ineffective area in the model, and so on.
  • In this way, a first straight line intersecting the projection surface is drawn from each projection point, a plurality of first intersecting surfaces are obtained, and the effective area and/or ineffective area in the model is determined by the first intersecting surfaces.
  • The absolute position of the sensor is thus used to divide the cutting range, and the effective area and/or ineffective area in the model can be determined automatically, simply and conveniently, providing support for the subsequent automatic cutting of the model.
  • step S103 may include: cutting along the first intersecting surface and the projection surface to remove the ineffective area in the model.
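When the first intersecting surface is a plane, the cut itself reduces to a half-space test. A minimal sketch (vertex filtering only; a real cutter would also re-triangulate faces crossing the plane, and all names are illustrative):

```python
import numpy as np

def clip_by_plane(vertices, plane_point, plane_normal):
    """Keep vertices on the positive side of a cutting plane given by a
    point on the plane and its normal; the other side is removed."""
    v = np.asarray(vertices, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    keep = (v - np.asarray(plane_point, dtype=float)) @ n >= 0.0
    return v[keep]

verts = [[1.0, 0.0, 0.0], [-2.0, 0.0, 0.0]]
kept = clip_by_plane(verts, [0, 0, 0], [1, 0, 0])  # cut along the plane x = 0
```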
  • Alternatively, the effective area and/or ineffective area in the model is determined by the three-dimensional space formed by the positions of the sensors and the projection points in the three-dimensional coordinate system; that is, sub-step S102A2 may further include sub-step S102A2b1.
  • Sub-step S102A2b1 Determine the effective area and/or ineffective area in the model through the three-dimensional space formed by the position of the sensor and the projection point in the three-dimensional coordinate system of the model.
  • The three-dimensional space in this embodiment may be a polyhedron with parallel upper and lower bottom surfaces, such as a prism; it may also be a polyhedron with non-parallel upper and lower bottom surfaces, such as a pyramid.
  • the shape of the three-dimensional space in this embodiment is specifically determined according to actual applications, and is not limited here.
  • The model outside the three-dimensional space can be determined as the ineffective area; the space enclosed by the three-dimensional space can be determined as the effective area; or the three-dimensional space can be combined with other methods to determine the effective area and the ineffective area in the model, and so on.
  • In this way, the effective area and/or ineffective area in the model is determined by the three-dimensional space composed of the sensor positions and the projection points in the three-dimensional coordinate system. The absolute position of the sensor is used to perform another division of the cutting range, which can simply and conveniently automatically determine the effective area and/or ineffective area in the model, providing support for the subsequent automatic cutting of the model.
  • step S103 may include: cutting along several surfaces of the three-dimensional space to remove ineffective areas in the model.
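For the common case of a vertical prism erected over the projection points, deciding whether a model point lies inside the three-dimensional space reduces to a 2-D point-in-convex-polygon test on its projection. An illustrative sketch, assuming the projection points are given in counter-clockwise order (names are not from the patent):

```python
def inside_convex_polygon(pt, polygon):
    """True if the 2-D point pt lies inside (or on) the convex polygon,
    whose vertices are listed counter-clockwise. A model point is inside
    a vertical prism over the projection points exactly when its
    projection passes this test."""
    n = len(polygon)
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        # Cross product: pt must lie left of every edge of a CCW polygon.
        if (bx - ax) * (pt[1] - ay) - (by - ay) * (pt[0] - ax) < 0:
            return False
    return True

square = [(0, 0), (2, 0), (2, 2), (0, 2)]  # CCW projection points
inside_convex_polygon((1, 1), square)  # inside  -> effective area
inside_convex_polygon((3, 1), square)  # outside -> ineffective area
```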
  • If determining the effective area and/or ineffective area directly from the projection points on the projection surface cannot yield the effective area and/or ineffective area the user requires, the projection points need not be used directly: the projection points can be processed, the original unprocessed projection points replaced with the processed ones, and the effective area and/or ineffective area in the model then determined from the processed projection points on the projection surface, so that the finally determined areas meet the user's needs.
  • The effective area and/or ineffective area in the model is determined according to the processed projection points on the projection surface in a manner similar to the method of directly using the projection points described above; for details, refer to the above description, which is not repeated here. The following specifically describes processing of the projection points:
  • Processing the projection points includes: expanding the projection points outward, shrinking the projection points inward, forming a bounding box (for example, a rectangular bounding box) through the projection points, and forming a three-dimensional space from the sensor positions and the projection points.
  • Processing the projection points refers to processing those projection points that are related to determining the effective area and/or ineffective area in the model, in order to meet the user's needs.
  • Outward expansion of a projection point refers to moving the projection point outward, away from the area enclosed by all the projection points on the projection surface.
  • Inward shrinkage of a projection point means moving the projection point inward, toward the area enclosed by all the projection points on the projection surface.
  • Forming a bounding box through the projection points includes forming an axis-aligned bounding box (AABB), a bounding sphere, an oriented bounding box (OBB), or a fixed-direction convex hull (FDH).
  • the type of bounding box is related to the shape of the model.
  • Besides forming a bounding box from the projection points on the projection surface, it is also possible to extend all the projection points into several straight lines in the plane of the projection surface and form a bounding box around the shape of the area enclosed by those lines. As shown in Figure 5, the shape of the area enclosed by the four projection points is a rectangle, and the enclosed area is extended to form a three-dimensional space, such as a rectangular parallelepiped or a pyramidal bounding box.
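An AABB of the projection points, for instance, needs only coordinate-wise extremes; extruding the resulting rectangle along the plane normal yields the rectangular-parallelepiped cutting volume. A sketch with illustrative names:

```python
def aabb_2d(points):
    """Axis-aligned bounding box of projection points on the projection
    surface, returned as ((min_x, min_y), (max_x, max_y))."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

projection_points = [(1, 4), (3, 1), (5, 2), (2, 6)]
box = aabb_2d(projection_points)  # ((1, 1), (5, 6))
```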
  • As shown in Figure 6, the first projection side TA and the second projection side TB are the two projection sides that intersect at the projection point T. The projection point T can be expanded outward in several ways: T expands outward along the first projection side TA in the direction TC away from TA, that is, along the direction TC opposite to the first projection side TA; or T expands outward along the second projection side TB in the direction TD away from TB, that is, along the direction TD opposite to the second projection side TB; or T expands outward within the angle range CTD opposite to the angle range ATB between the first projection side TA and the second projection side TB; or T expands outward along the center line TP of the angle CTD; and so on.
  • The inward shrinkage of the projection point T may be: T shrinks inward along the first projection side TA in the direction away from T, that is, T moves along TA away from its original position; or T shrinks inward along the second projection side TB away from T, that is, T moves along TB away from its original position; or T shrinks inward within the angle range ATB between the first projection side TA and the second projection side TB, that is, T shrinks along the range enclosed by the angle ATB; and so on.
  • The inward shrinkage of multiple projection points can combine the above methods.
  • In this way, the projection points can be adjusted according to user needs, thereby expanding or reducing the volume of the above-mentioned three-dimensional space, and the effective area and/or ineffective area in the model can be determined automatically, simply and conveniently, providing support for the subsequent automatic cutting of the model.
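A simple stand-in for such expansion or shrinkage (cruder than the per-side directions TC, TD, and TP of Fig. 6) is to scale every projection point about the centroid of all projection points; this sketch uses illustrative names only:

```python
def scale_about_centroid(points, factor):
    """Move each projection point away from (factor > 1, outward
    expansion) or toward (factor < 1, inward shrinkage) the centroid
    of all projection points."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return [(cx + factor * (px - cx), cy + factor * (py - cy))
            for px, py in points]

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
expanded = scale_about_centroid(square, 1.5)  # corners move outward
shrunk = scale_about_centroid(square, 0.5)    # corners move inward
```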
  • Step S102 may include: sub-step S102B1 and sub-step S102B2.
  • Sub-step S102B1: Taking the position of the sensor in the three-dimensional coordinate system as a starting point, draw a ray along the inclination direction of the sensor to obtain the intersection point of the ray and the current surface of the model.
  • The current surface is parallel to a coordinate plane of the three-dimensional coordinate system, and its orientation is related to the inclination angle of the sensor.
  • In FIG. 8, each triangle represents a sensor, the vertex of the triangle represents the position of the sensor, and the orientation of the triangle represents the inclination angle of the sensor. The five sensors each draw a ray along their respective inclination directions, obtaining five rays. The five rays all intersect the xoy plane, from which the current surface parallel to the xoy plane can be determined in turn; the intersection points of the five rays with the current surface of the model are J, K, L, M, and N respectively.
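Finding such an intersection point is a standard ray-plane computation. A sketch for a current surface parallel to xoy, i.e. the plane z = plane_z (illustrative names, not from the patent):

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_z):
    """Intersect a ray from the sensor position `origin` along its
    inclination `direction` with the horizontal plane z = plane_z.
    Returns the intersection point, or None if the ray misses."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    if d[2] == 0.0:
        return None  # ray parallel to the plane
    t = (plane_z - o[2]) / d[2]
    if t < 0.0:
        return None  # plane lies behind the sensor
    return o + t * d

# Sensor 10 m up, tilted 45 degrees toward +x: hits the ground at (10, 0, 0).
hit = ray_plane_intersection([0, 0, 10], [1, 0, -1], 0.0)
```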
  • the extended surface can be parallel to any coordinate plane in the three-dimensional coordinate system, or it can be related to any coordinate in the three-dimensional coordinate system.
  • In another example, a quadrangular pyramid represents a camera: the vertex of the quadrangular pyramid represents the position of the camera, the orientation of the quadrangular pyramid represents the inclination of the camera in space, and the straight line connecting the vertex of the quadrangular pyramid with the center of its bottom surface represents, in this implementation, the inclination direction along which the ray is drawn.
  • Sub-step S102B2 Determine the effective area and/or ineffective area in the model through the intersection.
  • the inclination angle of the sensor can be used to automatically determine the effective area and/or the ineffective area in the model simply and conveniently.
  • Sub-step S102B2 can be implemented in a variety of ways. Specifically: by drawing a second straight line intersecting the current surface from each intersection point, multiple second intersecting surfaces that intersect the current surface can be obtained; or multiple second intersecting surfaces can be obtained through the intersection points and the rays corresponding to the intersection points; or through the intersection points and the midpoint of the line connecting the positions of any two sensors; or through the intersection points and the midpoint of the common perpendicular segment of the rays corresponding to the intersection points. The effective area and/or ineffective area in the model is then determined through the second intersecting surfaces.
  • Sub-step S102B2 may include sub-steps S102B2a1 and S102B2a2, as shown in FIG. 9; or sub-steps S102B2a3 and S102B2a2, as shown in FIG. 10; or sub-steps S102B2a4 and S102B2a2, as shown in FIG. 11; or sub-steps S102B2a5 and S102B2a2.
  • Sub-step S102B2a1 draw a second straight line perpendicular to or oblique to the current surface from the intersection point, and then obtain multiple second intersecting surfaces that intersect the current surface through the second straight line.
  • the second intersecting surface may be flat or non-planar.
  • the second intersecting plane can be obtained by intersecting a plane formed by two adjacent second straight lines with the current surface, or can be obtained by intersecting a plane formed by two non-adjacent second straight lines with the current surface.
  • the specific method for obtaining the second intersecting surface can be determined according to actual applications, and is not limited here.
  • sub-step S102B2a3 Obtain multiple second intersection planes through the intersection point and the rays corresponding to the intersection point.
  • the second intersecting surface may be flat or non-planar.
  • the second intersecting surface obtained through two intersection points and two rays corresponding to the two intersection points may be a plane or a curved surface; the second intersecting surface obtained through an intersection point and a ray corresponding to the intersection point is a plane.
  • For example, the ray corresponding to the intersection point J in FIG. 8 (that is, the ray forming the intersection point J) and any intersection point can be used to construct a second intersecting surface.
  • Sub-step S102B2a4: Obtain multiple second intersecting surfaces through the intersection points and the midpoint of the line connecting the positions of any two sensors.
  • The midpoint of the line connecting the positions of any two sensors may be the midpoint of the line connecting the two sensors corresponding to the intersection points; it may also be the midpoint of the line connecting two sensors that are unrelated, or only partly related, to the intersection points.
  • For example, the second intersecting surface can be constructed through the intersection points J and K in FIG. 8 and the midpoint of the line connecting the positions of the sensors corresponding to J and K (that is, the sensors whose inclination-direction rays form the intersection points J and K).
  • the second intersection surface can also be constructed by using the midpoint of the line connecting the intersection points J and K and the positions of the sensors corresponding to the intersection points N and M.
  • the second intersection surface can also be constructed by using the midpoint of the line connecting the intersection points J and K and the positions of the sensors corresponding to the intersection points J and M.
  • Sub-step S102B2a5: obtain multiple second intersecting surfaces through the intersection points and the midpoint of the common perpendicular segment of the rays corresponding to those intersection points.
  • For any two rays, their common perpendicular can be obtained.
  • The part of the common perpendicular sandwiched between the two rays is the common perpendicular segment.
  • For example, the rays corresponding to intersection points J and K in FIG. 8 (that is, the rays that form J and K) can be used to obtain the common perpendicular of the two rays; the part of the common perpendicular sandwiched between the two rays is the common perpendicular segment, and a second intersecting surface is constructed through intersection points J, K and the midpoint of that segment.
  • The common perpendicular segment in the above example and intersection points J and M can also be used to construct a second intersecting surface.
  • The common perpendicular segment in the above example and intersection points N and M can also be used to construct a second intersecting surface.
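The midpoint of the common perpendicular segment described above can be computed with the standard closest-point formulas for two skew lines. A minimal sketch in Python (the function name and ray parameterization are illustrative assumptions, not taken from the patent):

```python
def perpendicular_midpoint(a, u, b, v):
    """Midpoint of the common perpendicular segment between rays a + t*u and b + s*v."""
    dot = lambda p, q: sum(x * y for x, y in zip(p, q))
    w0 = [x - y for x, y in zip(a, b)]
    A, B, C = dot(u, u), dot(u, v), dot(v, v)
    D, E = dot(u, w0), dot(v, w0)
    denom = A * C - B * B  # zero only when the rays are parallel
    t = (B * E - C * D) / denom
    s = (A * E - B * D) / denom
    p1 = [x + t * d for x, d in zip(a, u)]  # closest point on the first ray
    p2 = [x + s * d for x, d in zip(b, v)]  # closest point on the second ray
    return [(x + y) / 2 for x, y in zip(p1, p2)]
```

For example, for a ray along the x-axis through the origin and a ray along the y-axis through (0, 0, 2), the common perpendicular segment runs from (0, 0, 0) to (0, 0, 2) and its midpoint is (0, 0, 1).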
  • Sub-step S102B2a2: determine the effective area and/or ineffective area in the model through the second intersecting surfaces.
  • The above are several implementations of determining the effective area and/or ineffective area in the model through the intersection points.
  • In these implementations, the inclination angle of the sensor is used to divide the cutting range, so the effective area and/or ineffective area in the model can be determined automatically, simply and conveniently, providing support for the subsequent automatic cutting of the model.
  • In this case, step S103 may specifically include: cutting along the second intersecting surfaces and the current surface of the model to remove the ineffective area in the model.
  • In a third practical application, when the position of the sensor and the inclination angle of the sensor are used to determine the effective area and/or ineffective area in the model, step S102 may include sub-step S102C1 and sub-step S102C2, as shown in FIG. 12.
  • Sub-step S102C1: project the position of the sensor in the three-dimensional coordinate system onto at least one projection plane of the three-dimensional coordinate system to obtain projection points; taking the position of the sensor in the three-dimensional coordinate system as the starting point, draw a ray along the inclination direction of the sensor to obtain the intersection point of the ray with the current surface of the model.
  • Sub-step S102C2: determine the effective area and/or ineffective area in the model according to the projection points and the intersection points.
  • the position of the sensor and the inclination angle of the sensor can be used to automatically determine the effective area and/or the ineffective area in the model simply and conveniently.
  • Specifically, the intersection or union of a first cutting range determined according to the projection points and a second cutting range determined according to the intersection points may be used as the effective area. That is, sub-step S102C2, determining the effective area and/or ineffective area in the model according to the projection points and the intersection points, may include sub-step S102C2a1 and sub-step S102C2a2, as shown in FIG. 13.
  • Sub-step S102C2a1: determine a first cutting range of the model according to the projection points, and determine a second cutting range of the model according to the intersection points.
  • Sub-step S102C2a2: use the intersection or union of the first cutting range and the second cutting range as the effective area.
  • In this embodiment, the intersection or union of the first cutting range determined by the projection points and the second cutting range determined by the intersection points is used as the effective area, so the effective area and/or ineffective area in the model can be determined automatically, simply and conveniently, providing support for the subsequent automatic cutting of the model.
  • When sub-step S102C2a2 takes the intersection of the first and second cutting ranges as the effective area, irrelevant content outside the user's area of interest can be further filtered out, reducing modeling time; when sub-step S102C2a2 takes the union of the first and second cutting ranges as the effective area, the content of the user's area of interest is guaranteed to be completely included in the model, reducing the error a single operation might introduce and avoiding accidental loss of effective information.
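The intersection/union logic of sub-step S102C2a2 can be sketched by treating each cutting range as a point-membership test. The axis-aligned box representation below is an illustrative assumption, not a representation mandated by the patent:

```python
def in_box(box, p):
    """True if point p lies inside the axis-aligned box ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    lo, hi = box
    return all(l <= c <= h for l, c, h in zip(lo, p, hi))

def effective_intersection(r1, r2, p):
    # intersection: keep only points covered by both cutting ranges
    return in_box(r1, p) and in_box(r2, p)

def effective_union(r1, r2, p):
    # union: keep points covered by either cutting range
    return in_box(r1, p) or in_box(r2, p)
```

With the intersection variant, a model vertex survives cutting only if both the projection-based and the intersection-point-based ranges cover it; with the union variant, coverage by either range is enough.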
  • FIGS. 14 and 15 show a model obtained by 3D modeling from a series of pictures taken around a stone sculpture.
  • FIG. 14 is the uncut model.
  • FIG. 15 is the model after automatic range calculation and cutting.
  • In FIG. 14 there are clearly many edge scenes, and the stone sculpture accounts for only a small part of the entire model.
  • FIG. 15 has no such multi-edge scenes, and the stone sculpture occupies the main part of the entire model.
  • the volume of the model can be reduced, the three-dimensional space occupied by the model can be reduced, noise interference can be avoided, the model reconstruction speed can be accelerated, the rendering speed can be accelerated, and the user can only focus on the area of interest, which can guarantee or even increase the model reconstruction quality.
  • In a fourth practical application, when the position of the sensor and the range of the sensor are used to determine the effective area and/or ineffective area in the model, step S102 may include:
  • Step S102D: determine the effective area and/or ineffective area in the model according to the position of the sensor and the range of the sensor.
  • Specifically, in one embodiment, step S102D may include: taking the region between the position of the sensor and the limit of the sensor's range as the effective area.
  • In another embodiment, step S102D may include: determining the effective area and/or ineffective area in the model according to the position of the sensor and a set proportion of the sensor's range.
  • Further, step S102D may also include: taking the region between the position of the sensor and the set proportion of the sensor's range as the effective area.
  • the position of the sensor and the range of the sensor can be used to automatically determine the effective area and/or the ineffective area in the model simply and conveniently.
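As a minimal illustration of step S102D, a point can be treated as effective when it lies within a set proportion of some sensor's range. The spherical-range model and all names below are assumptions made for illustration only:

```python
import math

def is_effective(point, sensor_positions, range_m, ratio=1.0):
    """True if the point lies within ratio * range_m of at least one sensor position."""
    limit = ratio * range_m  # effective radius: set proportion of the sensor's range
    return any(math.dist(point, s) <= limit for s in sensor_positions)
```

For example, with a 10 m range and a set proportion of 0.8, points farther than 8 m from every sensor would fall in the ineffective area.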
  • If the sensor includes a camera device, the view frustum of the camera device can also be used to determine the effective area and/or ineffective area in the model. In a fifth practical application, step S102 may include:
  • Step S102E: determine the effective area and/or ineffective area in the model according to the position of the camera device and the view frustum of the camera device.
  • Step S102E, determining the effective area and/or ineffective area in the model according to the position of the camera device and the view frustum of the camera device, may specifically include sub-step S102E1 and sub-step S102E2, as shown in FIG. 16.
  • Sub-step S102E1: taking the position of each camera device as a vertex, determine the solid space corresponding to the view frustum of each camera device.
  • Sub-step S102E2: use the union of the solid spaces corresponding to the view frustums of all camera devices as the effective area.
  • In this embodiment, the position of the camera device and the view frustum of the camera device can be used to automatically determine the effective area and/or ineffective area in the model simply and conveniently.
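The union of frustum spaces in sub-step S102E2 can be sketched as a membership test over all cameras. For simplicity, this sketch approximates each frustum as a cone (apex at the camera, axis along the viewing direction) clipped to a near/far depth range; the cone approximation and all parameter names are assumptions, not the patent's exact geometry:

```python
import math

def in_frustum(point, pos, forward, half_angle_deg, near, far):
    """Approximate frustum test: depth along the forward axis within [near, far]
    and angular offset from the forward axis within half_angle_deg (cone model)."""
    v = [p - q for p, q in zip(point, pos)]
    depth = sum(a * b for a, b in zip(v, forward))  # forward must be a unit vector
    if not (near <= depth <= far):
        return False
    norm = math.sqrt(sum(a * a for a in v))
    if norm == 0:
        return True
    return depth / norm >= math.cos(math.radians(half_angle_deg))

def in_effective_area(point, cameras):
    # sub-step S102E2: union of the solid spaces of all camera devices
    return any(in_frustum(point, *cam) for cam in cameras)
```

A model point is kept as effective if at least one camera's frustum contains it; points seen by no camera belong to the ineffective area.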
  • FIG. 17 is a schematic structural diagram of an embodiment of the automatic model cutting device of the present application. It should be noted that the automatic model cutting device of this embodiment can perform the steps of the above method for automatically cutting a model; for a detailed description, please refer to the method section above, and details are not repeated here.
  • the device 10 includes: a memory 11 and a processor 12; the memory 11 and the processor 12 are connected by a bus 13.
  • the processor 12 may be a micro control unit, a central processing unit, or a digital signal processor, and so on.
  • the memory 11 may be a Flash chip, a read-only memory, a magnetic disk, an optical disk, a U disk, or a mobile hard disk, etc.
  • the memory 11 is used to store computer programs
  • the processor 12 is used to execute a computer program and, when executing the computer program, implement the following steps:
  • In the process of model reconstruction using data collected by a sensor, sensor information is obtained; according to the sensor information, the effective area and/or ineffective area in the model is determined, the ineffective area being the area outside the effective area in the model; the model is cut to remove the ineffective area in the model.
  • In the embodiments of this application, during model reconstruction using data collected by a sensor, the effective area in the model and/or the ineffective area outside the effective area is determined according to the acquired sensor information, and the model is cut to remove the ineffective area.
  • Since determining the effective and/or ineffective areas and cutting the model are completed during the model reconstruction process without interrupting it, the reconstruction speed is not reduced; moreover, cutting away the ineffective areas reduces the volume of the model and the three-dimensional space it occupies, avoids noise interference, and in fact speeds up model reconstruction and rendering.
  • the sensor information includes one or more of the position of the sensor, the inclination angle of the sensor, and the range of the sensor.
  • the position of the sensor and the inclination angle of the sensor are the position and the inclination angle of the sensor relative to the collected scene.
  • When the processor executes the computer program, the following steps are implemented: determining the effective area and/or ineffective area in the model according to the position of the sensor and/or the inclination angle of the sensor.
  • When the processor executes the computer program, the following steps are implemented: determining the effective area and/or ineffective area in the model according to the position of the sensor and/or the inclination angle of the sensor in the three-dimensional coordinate system of the model.
  • When the processor executes the computer program, the following steps are implemented: projecting the position of the sensor in the three-dimensional coordinate system onto at least one projection plane of the three-dimensional coordinate system to obtain projection points; determining the effective area and/or ineffective area in the model according to the projection points on the projection plane.
  • When the processor executes the computer program, the following steps are implemented: drawing, from the projection points, first straight lines perpendicular or oblique to the projection plane, and obtaining through the first straight lines multiple first intersecting surfaces that intersect the projection plane; determining the effective area and/or ineffective area in the model through the first intersecting surfaces.
  • When the processor executes the computer program, the following steps are implemented: cutting along the first intersecting surfaces and the projection plane to remove the ineffective area in the model.
  • When the processor executes the computer program, the following steps are implemented: determining the effective area and/or ineffective area in the model through the solid space formed by the position of the sensor and the projection points in the three-dimensional coordinate system of the model.
  • When the processor executes the computer program, the following steps are implemented: cutting along several faces of the solid space to remove the ineffective area in the model.
  • When the processor executes the computer program, the following steps are implemented: processing the projection points, where the processing includes one or more of requiring the projection points to expand outward, requiring the projection points to contract inward, forming a bounding box through the projection points, and requiring the size of the solid space formed by the position of the sensor and the projection points to be greater than or equal to a spatial threshold.
  • Requiring a projection point to expand outward includes one or more of: expanding outward along a first projection edge in the direction away from the first projection edge, expanding outward along a second projection edge in the direction away from the second projection edge, or expanding outward within the range of the angle between the first projection edge and the second projection edge; the first projection edge and the second projection edge are the two projection edges that intersect at the projection point. Requiring a projection point to contract inward includes one or more of: contracting inward along the first projection edge in the direction away from the projection point, contracting inward along the second projection edge in the direction away from the projection point, or contracting inward within the range of the angle between the first projection edge and the second projection edge.
  • When the processor executes the computer program, the following steps are implemented: taking the position of the sensor in the three-dimensional coordinate system as the starting point, drawing a ray along the inclination direction of the sensor to obtain the intersection point of the ray with the current surface of the model; determining the effective area and/or ineffective area in the model through the intersection points.
  • When the processor executes the computer program, the following steps are implemented: drawing, from the intersection points, second straight lines perpendicular or oblique to the current surface, and obtaining through the second straight lines multiple second intersecting surfaces that intersect the current surface; or obtaining multiple second intersecting surfaces through the intersection points and the rays corresponding to the intersection points; or obtaining multiple second intersecting surfaces through the intersection points and the midpoint of the line connecting the positions of any two sensors; or obtaining multiple second intersecting surfaces through the intersection points and the midpoint of the common perpendicular segment of the rays corresponding to the intersection points; determining the effective area and/or ineffective area in the model through the second intersecting surfaces.
  • When the processor executes the computer program, the following steps are implemented: cutting along the second intersecting surfaces and the current surface of the model to remove the ineffective area in the model.
  • When the processor executes the computer program, the following steps are implemented: projecting the position of the sensor in the three-dimensional coordinate system onto at least one projection plane of the three-dimensional coordinate system to obtain projection points; taking the position of the sensor in the three-dimensional coordinate system as the starting point, drawing a ray along the inclination direction of the sensor to obtain the intersection point of the ray with the current surface of the model; determining the effective area and/or ineffective area in the model according to the projection points and the intersection points.
  • When the processor executes the computer program, the following steps are implemented: determining a first cutting range of the model according to the projection points, and a second cutting range of the model according to the intersection points; using the intersection or union of the first cutting range and the second cutting range as the effective area.
  • the three-dimensional coordinate system includes a global coordinate system or a local coordinate system.
  • When the processor executes the computer program, the following steps are implemented: determining the effective area and/or ineffective area in the model according to the position of the sensor and the range of the sensor.
  • When the processor executes the computer program, the following steps are implemented: taking the region between the position of the sensor and the limit of the sensor's range as the effective area.
  • When the processor executes the computer program, the following steps are implemented: determining the effective area and/or ineffective area in the model according to the position of the sensor and a set proportion of the sensor's range.
  • When the processor executes the computer program, the following steps are implemented: taking the region between the position of the sensor and the set proportion of the sensor's range as the effective area.
  • The sensor includes a camera device, and the sensor information also includes the view frustum of the camera device.
  • When the processor executes the computer program, the following steps are implemented: determining the effective area and/or ineffective area in the model according to the position of the camera device and the view frustum of the camera device.
  • When the processor executes the computer program, the following steps are implemented: taking the position of each camera device as a vertex, determining the solid space corresponding to the view frustum of each camera device; using the union of the solid spaces corresponding to the view frustums of all camera devices as the effective area.
  • This application also provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, it causes the processor to implement the method for automatically cutting a model as described above.
  • For a detailed description of the relevant content, please refer to the method section above; it will not be repeated here.
  • the computer-readable storage medium may be an internal storage unit of any image processing device described above, for example, a hard disk or a memory of the image processing device.
  • the computer-readable storage medium may also be an external storage device of the image processing device, such as a plug-in hard disk, a smart memory card, a secure digital card, a flash memory card, etc., equipped on the image processing device.
  • In the embodiments of this application, during model reconstruction using data collected by a sensor, the effective area in the model and/or the ineffective area outside the effective area is determined according to the acquired sensor information, and the model is cut to remove the ineffective area.
  • Since determining the effective and/or ineffective areas and cutting the model are completed during the model reconstruction process without interrupting it, the reconstruction speed is not reduced; moreover, cutting away the ineffective areas reduces the volume of the model and the three-dimensional space it occupies, avoids noise interference, and in fact speeds up model reconstruction and rendering.


Abstract

A method for automatically cutting a model, a device for automatically cutting a model, and a storage medium. The method includes: in the process of model reconstruction using data collected by a sensor, obtaining sensor information (S101); determining, according to the sensor information, an effective area and/or an ineffective area in the model (S102); and cutting the model to remove the ineffective area in the model (S103).

Description

Method, device and storage medium for automatically cutting a model. Technical field
This application relates to the technical field of model reconstruction, and in particular to a method, device and storage medium for automatically cutting a model.
Background
Three-dimensional reconstruction based on stereo vision or laser scanning is maturing. In practice, many regions captured in photos do not need attention, and laser scanning likewise scans many regions that do not need attention. These unimportant regions often occupy a large solid space, making the model excessively large. In addition, including these non-attention regions also degrades quality during model reconstruction.
At present, regions that do not need attention are cut manually. However, reconstruction must be interrupted during manual cutting, and the fine model can only continue to be reconstructed afterwards. This easily reduces reconstruction speed, and the effect of manual cutting depends on user experience; a lack of experience easily leads to degraded model quality or an invalid model.
Summary of the invention
Based on this, this application provides a method for automatically cutting a model, a device for automatically cutting a model, and a storage medium.
In a first aspect, this application provides a method for automatically cutting a model, including:
in the process of model reconstruction using data collected by a sensor, obtaining sensor information;
determining, according to the sensor information, an effective area and/or an ineffective area in the model, the ineffective area being the area outside the effective area in the model;
cutting the model to remove the ineffective area in the model.
In a second aspect, this application provides an automatic model cutting device, the device including a memory and a processor;
the memory is used to store a computer program;
the processor is used to execute the computer program and, when executing the computer program, implement the following steps:
in the process of model reconstruction using data collected by a sensor, obtaining sensor information;
determining, according to the sensor information, an effective area and/or an ineffective area in the model, the ineffective area being the area outside the effective area in the model;
cutting the model to remove the ineffective area in the model.
In a third aspect, this application provides a computer-readable storage medium storing a computer program that, when executed by a processor, causes the processor to implement the method for automatically cutting a model as described above.
The embodiments of this application provide a method for automatically cutting a model, an automatic model cutting device, and a storage medium. During model reconstruction using data collected by a sensor, the effective area in the model and/or the ineffective area outside the effective area is determined according to the acquired sensor information, and the model is cut to remove the ineffective area. Since determining the effective and/or ineffective areas and cutting the model are completed during the reconstruction process without interrupting it, the reconstruction speed is not reduced; moreover, cutting away the ineffective areas reduces the volume of the model and the solid space it occupies, avoids noise interference, and in fact speeds up reconstruction and rendering, lets the user focus only on the area of interest, and guarantees or even improves reconstruction quality. Since determining the effective and/or ineffective areas and cutting the model are completed automatically during reconstruction without user involvement, the result does not depend on user experience; the cut model is guaranteed to retain the effective area, and the model quality is stable and repeatable.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit this application.
Brief description of the drawings
In order to explain the technical solutions of the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of an embodiment of the method for automatically cutting a model of this application;
FIG. 2 is a schematic flowchart of another embodiment of the method for automatically cutting a model of this application;
FIG. 3 is a schematic diagram of projecting sensor positions to obtain projection points in a practical application of the method;
FIG. 4 is a schematic flowchart of yet another embodiment of the method;
FIG. 5 is a schematic diagram of a specific way in which projection points form a rectangular bounding box in a practical application of the method;
FIG. 6 is a schematic diagram of specific ways in which projection points expand outward or contract inward in a practical application of the method;
FIG. 7 is a schematic flowchart of yet another embodiment of the method;
FIG. 8 is a schematic diagram of drawing rays along sensor inclination directions to obtain intersection points of the rays with the current surface of the model in a practical application of the method;
FIG. 9 is a schematic flowchart of yet another embodiment of the method;
FIG. 10 is a schematic flowchart of yet another embodiment of the method;
FIG. 11 is a schematic flowchart of yet another embodiment of the method;
FIG. 12 is a schematic flowchart of yet another embodiment of the method;
FIG. 13 is a schematic flowchart of yet another embodiment of the method;
FIG. 14 is a schematic diagram of an uncut model obtained by 3D modeling from a series of pictures taken around a stone sculpture;
FIG. 15 is a schematic diagram of the model of FIG. 14 after cutting with the method for automatically cutting a model of this application;
FIG. 16 is a schematic flowchart of yet another embodiment of the method;
FIG. 17 is a schematic structural diagram of an embodiment of the automatic model cutting device of this application.
Detailed description
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are some rather than all of the embodiments of this application. Based on the embodiments of this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application.
The flowcharts shown in the drawings are only illustrative; they need not include all content and operations/steps, nor be executed in the order described. For example, some operations/steps can be decomposed, combined, or partially merged, so the actual execution order may change according to the actual situation.
In model reconstruction, ineffective areas such as regions not needing attention occupy a large solid space, making the model too large and also degrading quality during reconstruction. Manual cutting requires interrupting the reconstruction process, easily reduces reconstruction speed, depends on user experience, and easily leads to degraded model quality or an invalid model. In the embodiments of this application, during model reconstruction using data collected by a sensor, the effective area in the model and/or the ineffective area outside the effective area is determined according to the acquired sensor information, and the model is cut to remove the ineffective area. Since this is completed during the reconstruction process without interrupting it, the reconstruction speed is not reduced; moreover, cutting away the ineffective areas reduces the volume of the model and the solid space it occupies, avoids noise interference, and in fact speeds up reconstruction and rendering, lets the user focus only on the area of interest, and guarantees or even improves reconstruction quality. Since everything is completed automatically during reconstruction without user involvement, the result does not depend on user experience; the cut model is guaranteed to retain the effective area, and the model quality is stable and repeatable.
Some embodiments of this application are described in detail below with reference to the drawings. The following embodiments, and the features within them, can be combined with one another where no conflict arises.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an embodiment of the method for automatically cutting a model of this application. The method includes:
Step S101: in the process of model reconstruction using data collected by a sensor, obtain sensor information.
In this embodiment, model reconstruction is performed using data collected by a sensor, that is, the mathematical process and computer technology of recovering three-dimensional information (for example, shape) of the captured scene from the data collected by the sensor, including steps such as data acquisition, preprocessing, point cloud registration, and feature analysis.
The sensor includes, but is not limited to, camera devices (for example, cameras, video cameras, webcams, and so on), scanners (for example, laser scanners, 3D scanners, and so on), radar, and so on. Sensor information refers to information related to the sensor, for example: the setup information of the sensor in the captured scene (for example, the position of the sensor, the height at which the sensor is set, the inclination angle of the sensor, and so on), and the sensor's own inherent parameter information (for example, the sensor's range, sensitivity, precision, the view frustum of a camera device, and so on).
The way sensor information is obtained depends on the specific information: some items need to be computed, for example the position of the sensor or the inclination angle of the sensor; some can be input and stored in advance and retrieved when needed, for example the sensor's inherent parameter information.
Step S102: according to the sensor information, determine the effective area and/or ineffective area in the model, the ineffective area being the area outside the effective area in the model.
In practical applications, sensors are usually placed around a region of interest (ROI); only a certain range near the sensors is the place that actually needs attention, or the region the user actually needs. Due to issues such as the sensor's inclination angle and range, the data actually collected includes too much data from non-attention regions. In this embodiment, the effective area in the model includes the region the user actually needs, pays attention to, or is interested in, and the ineffective area is the area outside the effective area in the model.
There are many specific ways to determine the effective area in the model, the ineffective area in the model, or both, according to the sensor information. For example: if multiple cameras are set at the same height with their inclination angles all pointing downward, the region above that height can be considered ineffective; if their inclination angles all point upward, the region below that height can be considered ineffective. As another example: if a sensor's range is 10 meters, then the region within 10 meters along the sensor's collection direction is effective, and the region beyond 10 meters is ineffective.
Step S103: cut the model to remove the ineffective area in the model.
After the effective area and/or ineffective area in the model is determined, the ineffective area in the model can be cut away.
In the embodiments of this application, during model reconstruction using data collected by a sensor, the effective area in the model and/or the ineffective area outside the effective area is determined according to the acquired sensor information, and the model is cut to remove the ineffective area. Since determining the effective and/or ineffective areas and cutting the model are completed during the reconstruction process without interrupting it, the reconstruction speed is not reduced; moreover, cutting away the ineffective areas reduces the volume of the model and the solid space it occupies, avoids noise interference, and in fact speeds up reconstruction and rendering, lets the user focus only on the area of interest, and guarantees or even improves reconstruction quality. Since everything is completed automatically during reconstruction without user involvement, the result does not depend on user experience; the cut model is guaranteed to retain the effective area, and the model quality is stable and repeatable.
The specific implementation of steps S102 and S103 is described below, taking the most widely used kinds of sensor information as examples.
In a wide range of practical applications, the sensor information includes one or more of the position of the sensor, the inclination angle of the sensor, and the range of the sensor. The position and inclination angle of the sensor are the position and inclination angle of the sensor relative to the captured scene.
In practice, the position and inclination angle of the sensor are set by the user according to actual needs, the location of the region of interest, the location of the attention region, and so on; the range of the sensor refers to its measuring range, which the user usually also chooses according to such needs. Therefore, the effective area and/or ineffective area in the model can be determined from one or more of the sensor's position, inclination angle, and range.
For example, in one application, step S102 may be: determine the effective area and/or ineffective area in the model according to the position of the sensor and/or the inclination angle of the sensor.
To conveniently determine the relationship between the model and the sensor's position and/or inclination angle, a three-dimensional coordinate system can be established in the model. In this case, step S102 may also be: determine the effective area and/or ineffective area in the model according to the position and/or inclination angle of the sensor in the three-dimensional coordinate system of the model.
Further, the three-dimensional coordinate system includes a global coordinate system or a local coordinate system.
The global coordinate system is the coordinate system in which objects in three-dimensional space reside; the vertex coordinates of the model are expressed in this coordinate system. The local coordinate system is an imagined coordinate system that takes the object's center as its origin; operations such as rotation and translation of the object are performed around the local coordinate system, and when the object model is rotated or translated, the local coordinate system performs the corresponding rotation or translation as well. The relative position of the local coordinate system and the object never changes; this local coordinate system is imagined mainly to understand, in a forward sense, the translation and rotation operations performed on objects in the three-dimensional scene. When model transformations are understood using the local coordinate system, all transformation operations act directly on the local coordinate system, and since the relative position of the local coordinate system and the object is fixed, when the local coordinate system is translated, rotated or scaled, the position and shape of the object in the scene change accordingly.
Note that the global coordinate system and the local coordinate system are two means of understanding model transformations. In practice, a suitable three-dimensional coordinate system can be chosen according to the specific application.
The following uses the sensor's position, the sensor's inclination angle, the combination of position and inclination angle, and the combination of position and range as specific examples.
In a first practical application, when the position of the sensor is used to determine the effective area and/or ineffective area in the model, referring to FIG. 2, step S102 may include sub-step S102A1 and sub-step S102A2.
Sub-step S102A1: project the position of the sensor in the three-dimensional coordinate system onto at least one projection plane of the three-dimensional coordinate system to obtain projection points.
For example, as shown in FIG. 3, in the o-xyz coordinate system, the triangles represent sensors, the vertex of each triangle represents the sensor's position, and xoy, yoz and xoz are three projection planes; the five sensors in the figure correspond to five projection points E, F, G, H, I on the projection plane xoy.
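The projection of sub-step S102A1 simply drops one coordinate. A minimal sketch (the sensor positions are illustrative values, not the ones in FIG. 3):

```python
def project(position, plane="xoy"):
    """Project a 3D sensor position onto a coordinate plane by zeroing one coordinate."""
    x, y, z = position
    return {"xoy": (x, y, 0), "yoz": (0, y, z), "xoz": (x, 0, z)}[plane]

# five illustrative sensor positions, projected onto the xoy plane
sensors = [(1, 2, 5), (3, 1, 4), (4, 4, 6), (2, 5, 5), (0, 3, 4)]
projections = [project(p) for p in sensors]
```

Each sensor yields one projection point per chosen projection plane, analogous to points E through I on the xoy plane.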
Sub-step S102A2: determine the effective area and/or ineffective area in the model according to the projection points on the projection plane.
In this embodiment, the region formed by the projection points of the sensor positions on the projection plane can, with high probability, determine the region the user is interested in. Therefore, the effective area and/or ineffective area in the model can be determined from the projection points on the projection plane, as described below:
In one embodiment, first straight lines intersecting the projection plane are drawn from the projection points, yielding multiple first intersecting surfaces that intersect the projection plane, and the effective area and/or ineffective area in the model is determined through the first intersecting surfaces. That is, sub-step S102A2 may include sub-step S102A2a1 and sub-step S102A2a2, as shown in FIG. 4.
Sub-step S102A2a1: from the projection points, draw first straight lines perpendicular or oblique to the projection plane, and then obtain, through the first straight lines, multiple first intersecting surfaces that intersect the projection plane.
A first intersecting surface may be planar or non-planar. When the two adjacent first straight lines corresponding to two adjacent projection points are coplanar with the projection line between those two projection points, the first intersecting surface can be a plane; when they are not coplanar, the first intersecting surface can be non-planar.
A first intersecting surface can be obtained by intersecting the projection plane with a plane formed by two adjacent first straight lines, or with a plane formed by two non-adjacent first straight lines. The specific way of obtaining the first intersecting surfaces can be determined according to the actual application and is not limited here.
Sub-step S102A2a2: determine the effective area and/or ineffective area in the model through the first intersecting surfaces.
For example: if a first intersecting surface divides the model into a left side and a right side, the left side alone may be set as the ineffective area, on which basis the model on the right side can continue to be divided into effective and/or ineffective areas in other ways; or the left side of the first intersecting surface may be set directly as the effective area; or the first intersecting surfaces combined with other means may divide the model into multiple parts, after which the effective and ineffective areas in the model are determined directly; and so on.
In this embodiment, first straight lines intersecting the projection plane are drawn from the projection points to obtain multiple first intersecting surfaces, and the effective area and/or ineffective area in the model is then determined through the first intersecting surfaces. In this way, the absolute positions of the sensors are used to divide the cutting range, so the effective and/or ineffective areas in the model can be determined automatically, simply and conveniently, providing support for the subsequent automatic cutting of the model.
In this case, step S103 may include: cutting along the first intersecting surfaces and the projection plane to remove the ineffective area in the model.
In another embodiment, the effective area and/or ineffective area in the model is determined through the solid space formed by the sensor positions and the projection points in the three-dimensional coordinate system. That is, sub-step S102A2 may also include sub-step S102A2b1.
Sub-step S102A2b1: determine the effective area and/or ineffective area in the model through the solid space formed by the sensor positions and the projection points in the three-dimensional coordinate system of the model.
The solid space of this embodiment may be a parallel polyhedron whose top and bottom faces are parallel, for example a prismatoid (truncated pyramid); it may also be a polyhedron whose top face is not parallel to its bottom face. The shape of the solid space is determined according to the actual application and is not limited here.
In practice, the model outside the solid space can be determined to be the ineffective area, or the space enclosed by the solid space can be determined to be the effective area, or the solid space combined with other means can determine the effective and ineffective areas in the model, and so on.
In this embodiment, the effective area and/or ineffective area in the model is determined through the solid space formed by the sensor positions and projection points in the three-dimensional coordinate system. In this way, the absolute positions of the sensors are used for another kind of division of the cutting range, so the effective and/or ineffective areas in the model can be determined automatically, simply and conveniently, providing support for the subsequent automatic cutting of the model.
In this case, step S103 may include: cutting along several faces of the solid space to remove the ineffective area in the model.
In yet another embodiment, if the positions of the projection points are unsuitable, determining the effective and/or ineffective areas directly from the projection points on the projection plane will not yield the areas the user needs. In that case, instead of using the projection points directly, the projection points can be processed, the processed projection points can replace the original unprocessed ones, and the effective and/or ineffective areas in the model can then be determined from the processed projection points on the projection plane, so that the finally determined areas meet the user's needs. The method of determining the areas from the processed projection points is similar to the method using the projection points directly; see the description above, which is not repeated here. The processing of projection points is described below:
Before sub-step S102A2, the method may further include: processing the projection points, where the processing includes one or more of requiring the projection points to expand outward, requiring the projection points to contract inward, forming a rectangular bounding box through the projection points, and requiring the size of the solid space formed by the sensor positions and the projection points to be greater than or equal to a spatial threshold.
Here, processing the projection points refers to processing related to determining the effective and/or ineffective areas in the model, performed to meet user needs. A projection point expanding outward means the point expands toward the region outside the region enclosed by all projection points on the projection plane. A projection point contracting inward means the point contracts toward the region enclosed by all projection points on the projection plane, or toward a region inside it. Forming a bounding box through the projection points includes forming an AABB bounding box, a bounding sphere, an oriented bounding box (OBB), a fixed-direction convex hull (FDH), and so on. The type of bounding box is related to the shape of the model. Specifically, all projection points on the projection plane can also be extended in the plane to form several straight lines, and a bounding box can be formed around the region enclosed by these lines; as shown in FIG. 5, the region enclosed by the four projection points is a rectangle, and the enclosed region is then extended to form a solid space, for example a cuboid or prismatoid bounding box.
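The simplest of the bounding boxes mentioned above, the AABB, can be sketched as follows, together with the extension of the enclosed rectangle into a cuboid solid space. The function names and the fixed base height are illustrative assumptions:

```python
def aabb(points):
    """Axis-aligned bounding rectangle of 2D projection points: (min corner, max corner)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

def extrude(rect, height):
    """Extend the bounding rectangle upward into a cuboid bounding box (base at z = 0)."""
    (x0, y0), (x1, y1) = rect
    return (x0, y0, 0.0), (x1, y1, height)
```

The resulting cuboid is one possible solid space from which the effective area can be taken; expanding or contracting the projection points before calling `aabb` would enlarge or shrink this space accordingly.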
Specifically, referring to FIG. 6, the first projection edge TA and the second projection edge TB are two projection edges intersecting at projection point T. Projection point T expanding outward may be: T expands outward along the first projection edge TA in the direction TC away from TA, that is, T expands outward along the direction TC opposite to TA; or T expands outward along the second projection edge TB in the direction TD away from TB, that is, along the direction TD opposite to TB; or T expands outward, within the angle range ATB between the first projection edge TA and the second projection edge TB, into the range CTD, that is, T expands into the range enclosed by CTD, opposite to the angle range ATB; or T expands outward along the center line TP of CTD; and so on. In practice, the outward expansion of multiple projection points can combine several of the above approaches.
Projection point T contracting inward may be: T contracts inward along the first projection edge TA in the direction away from T, that is, T moves along TA away from its original position; or T contracts inward along the second projection edge TB in the direction away from T, that is, T moves along TB away from its original position; or T contracts inward within the angle range ATB between the first and second projection edges, that is, T contracts into the range enclosed by ATB; and so on. In practice, the inward contraction of multiple projection points can combine several of the above approaches.
In this way, while the absolute positions of the sensors are used to divide the cutting range, the projection points can be adjusted according to user needs, thereby enlarging or shrinking the volume of the above solid space, and the effective and/or ineffective areas in the model can be determined automatically, simply and conveniently, providing support for the subsequent automatic cutting of the model.
In a second practical application, when the inclination angle of the sensor is used to determine the effective area and/or ineffective area in the model, referring to FIG. 7, step S102 may include sub-step S102B1 and sub-step S102B2.
Sub-step S102B1: taking the position of the sensor in the three-dimensional coordinate system as the starting point, draw a ray along the inclination direction of the sensor to obtain the intersection point of the ray with the current surface of the model.
The current surface is parallel to a coordinate plane of the three-dimensional coordinate system, and its orientation is related to the sensor's inclination angle. For example, as shown in FIG. 8, in the o-xyz coordinate system, the triangles represent sensors, the vertex of each triangle represents the sensor's position, and the orientation of the triangle represents the sensor's inclination angle. The five sensors each draw a ray along their own inclination direction, yielding five rays; all five rays intersect the xoy plane, so a current surface parallel to the xoy plane can be constructed. The intersection points of the five rays with the current surface of the model are J, K, L, M, N. Alternatively, the first surface of the model currently intersected by the rays is taken as the current surface. Alternatively, a surface extended from the horizontally lowest point of the scene in the model at modeling time is taken as the current surface; this extended surface may be parallel to any coordinate plane of the three-dimensional coordinate system or may intersect one, and preferably the extended surface intersects the scene only at the scene's horizontally lowest point, without cutting through other parts of the scene.
Specifically, in one model reconstruction, a quadrangular pyramid represents a camera: the apex of the pyramid represents the camera's position, the orientation of the pyramid represents the camera's inclination in space, and the line connecting the apex and the center of the base is the ray of this embodiment.
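The intersection of such a ray with a current surface parallel to the xoy plane can be computed in closed form. A minimal sketch, assuming the current surface is the plane z = 0 (the function name and parameterization are illustrative, not from the patent):

```python
def ray_plane_intersection(origin, direction, plane_z=0.0):
    """Intersection of the ray origin + t*direction (t >= 0) with the plane z = plane_z."""
    dz = direction[2]
    if dz == 0:
        return None  # ray parallel to the plane: no intersection point
    t = (plane_z - origin[2]) / dz
    if t < 0:
        return None  # plane lies behind the sensor
    return tuple(o + t * d for o, d in zip(origin, direction))
```

Applied to each sensor's position and inclination direction, this yields intersection points analogous to J, K, L, M, N in FIG. 8.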
Sub-step S102B2: determine the effective area and/or ineffective area in the model through the intersection points.
This embodiment uses the sensor's inclination angle to determine the effective and/or ineffective areas in the model automatically, simply and conveniently.
Sub-step S102B2 can be implemented in several ways. Specifically, second straight lines intersecting the current surface can be drawn from the intersection points, yielding multiple second intersecting surfaces that intersect the current surface; or multiple second intersecting surfaces are obtained through the intersection points and the rays corresponding to them; or through the intersection points and the midpoint of the line connecting the positions of any two sensors; or through the intersection points and the midpoint of the common perpendicular segment of the rays corresponding to the intersection points; the effective and/or ineffective areas in the model are then determined through the second intersecting surfaces. That is, sub-step S102B2 may include sub-steps S102B2a1 and S102B2a2, as shown in FIG. 9; or sub-steps S102B2a3 and S102B2a2, as shown in FIG. 10; or sub-steps S102B2a4 and S102B2a2, as shown in FIG. 11; or sub-steps S102B2a5 and S102B2a2.
Sub-step S102B2a1: from the intersection points, draw second straight lines perpendicular or oblique to the current surface, and then obtain, through the second straight lines, multiple second intersecting surfaces that intersect the current surface.
In this implementation, a second intersecting surface may be planar or non-planar. It can be obtained by intersecting the current surface with a plane formed by two adjacent second straight lines, or with a plane formed by two non-adjacent second straight lines. The specific way of obtaining the second intersecting surfaces can be determined according to the actual application and is not limited here.
Or, sub-step S102B2a3: obtain multiple second intersecting surfaces through the intersection points and the rays corresponding to the intersection points.
In this implementation, a second intersecting surface may be planar or non-planar. For example: a second intersecting surface obtained through two intersection points and their two corresponding rays may be a plane or a curved surface; one obtained through a single intersection point and its corresponding ray is a plane. Illustratively, the ray corresponding to intersection point J in FIG. 8 (that is, the ray forming J) and any other intersection point can be used to construct a second intersecting surface.
Or, sub-step S102B2a4: obtain multiple second intersecting surfaces through the intersection points and the midpoint of the line connecting the positions of any two sensors.
Specifically, the midpoint of the line connecting the positions of any two sensors may be the midpoint of the line connecting the two sensors corresponding to the intersection points, or the midpoint of the line connecting two sensors unrelated to the intersection points or related to only some of them. Illustratively, a second intersecting surface can be constructed from intersection points J, K in FIG. 8 and the midpoint of the line connecting the positions of the sensors corresponding to J and K (that is, the sensors whose inclination angles correspond to the rays forming J and K). In another example, a second intersecting surface can also be constructed from intersection points J, K and the midpoint of the line connecting the positions of the sensors corresponding to intersection points N and M. In yet another example, a second intersecting surface can be constructed from intersection points J, K and the midpoint of the line connecting the positions of the sensors corresponding to intersection points J and M.
Or, sub-step S102B2a5: obtain multiple second intersecting surfaces through the intersection points and the midpoint of the common perpendicular segment of the rays corresponding to the intersection points.
Specifically, for any two rays, their common perpendicular can be obtained; the part of the common perpendicular sandwiched between the two rays is the common perpendicular segment. Illustratively, the rays corresponding to intersection points J and K in FIG. 8 (that is, the rays forming J and K) can be used to obtain the common perpendicular of the two rays, the part of the common perpendicular sandwiched between the two rays is taken as the common perpendicular segment, and a second intersecting surface is constructed through J, K and the midpoint of that segment. In another example, the above common perpendicular segment and intersection points J, M can be used to construct a second intersecting surface; in yet another, the segment and intersection points N, M can be used. Sub-step S102B2a2: determine the effective area and/or ineffective area in the model through the second intersecting surfaces.
This embodiment gives multiple implementations of determining the effective and/or ineffective areas in the model through the intersection points. In this way, the sensor's inclination angle is used to divide the cutting range, so the effective and/or ineffective areas in the model can be determined automatically, simply and conveniently, providing support for the subsequent automatic cutting of the model.
In this case, step S103 may specifically include: cutting along the second intersecting surfaces and the current surface of the model to remove the ineffective area in the model.
In a third practical application, when the position of the sensor and the inclination angle of the sensor are used to determine the effective area and/or ineffective area in the model, step S102 may include sub-step S102C1 and sub-step S102C2, as shown in FIG. 12.
Sub-step S102C1: project the position of the sensor in the three-dimensional coordinate system onto at least one projection plane of the three-dimensional coordinate system to obtain projection points; taking the position of the sensor in the three-dimensional coordinate system as the starting point, draw a ray along the inclination direction of the sensor to obtain the intersection point of the ray with the current surface of the model.
Sub-step S102C2: determine the effective area and/or ineffective area in the model according to the projection points and the intersection points.
This embodiment uses the position and inclination angle of the sensor to determine the effective and/or ineffective areas in the model automatically, simply and conveniently.
Specifically, the intersection or union of a first cutting range determined by the projection points and a second cutting range determined by the intersection points is used as the effective area. That is, sub-step S102C2, determining the effective and/or ineffective areas in the model according to the projection points and intersection points, may include sub-steps S102C2a1 and S102C2a2, as shown in FIG. 13.
Sub-step S102C2a1: determine a first cutting range of the model according to the projection points, and determine a second cutting range of the model according to the intersection points.
Sub-step S102C2a2: use the intersection or union of the first cutting range and the second cutting range as the effective area.
In this embodiment, the intersection or union of the first cutting range determined by the projection points and the second cutting range determined by the intersection points is used as the effective area; in this way, the effective and/or ineffective areas in the model can be determined automatically, simply and conveniently, providing support for the subsequent automatic cutting of the model. When sub-step S102C2a2 uses the intersection of the two cutting ranges as the effective area, irrelevant content outside the user's region of interest can be further filtered out, reducing modeling time; when it uses the union, the content of the user's region of interest is guaranteed to be completely included in the model, reducing the error a single operation might bring and avoiding accidental loss of effective information.
Referring to FIGS. 14 and 15, they show models obtained by 3D modeling from a series of pictures taken around a stone sculpture. FIG. 14 is the uncut model, and FIG. 15 is the model after automatic range calculation and cutting. It can be seen that FIG. 14 clearly contains many edge scenes and the stone sculpture accounts for only a small part of the entire model, while FIG. 15 has no such multi-edge scenes and the stone sculpture occupies the main part of the entire model. In this way, the volume of the model and the solid space it occupies can be reduced, noise interference can be avoided, model reconstruction and rendering can be sped up, the user can focus only on the region of interest, and reconstruction quality can be guaranteed or even improved.
In a fourth practical application, when the position of the sensor and the range of the sensor are used to determine the effective area and/or ineffective area in the model, step S102 may include:
Step S102D: determine the effective area and/or ineffective area in the model according to the position of the sensor and the range of the sensor.
Specifically, in one embodiment, step S102D may include: taking the region between the position of the sensor and the limit of the sensor's range as the effective area.
In another embodiment, step S102D may include: determining the effective area and/or ineffective area in the model according to the position of the sensor and a set proportion of the sensor's range.
Further, step S102D may also include: taking the region between the position of the sensor and the set proportion of the sensor's range as the effective area.
This embodiment uses the position and range of the sensor to determine the effective and/or ineffective areas in the model automatically, simply and conveniently.
If the sensor includes a camera device, the view frustum of the camera device can also be used to determine the effective area and/or ineffective area in the model. In a fifth practical application, when the view frustum of the camera device is used to determine the effective and/or ineffective areas, step S102 may include:
Step S102E: determine the effective area and/or ineffective area in the model according to the position of the camera device and the view frustum of the camera device.
Step S102E, determining the effective area and/or ineffective area in the model according to the position of the camera device and the view frustum of the camera device, may specifically include sub-step S102E1 and sub-step S102E2, as shown in FIG. 16.
Sub-step S102E1: taking the position of each camera device as a vertex, determine the solid space corresponding to the view frustum of each camera device.
Sub-step S102E2: use the union of the solid spaces corresponding to the view frustums of all camera devices as the effective area.
This embodiment uses the position of the camera device and the view frustum of the camera device to determine the effective and/or ineffective areas in the model automatically, simply and conveniently.
Referring to FIG. 17, FIG. 17 is a schematic structural diagram of an embodiment of the automatic model cutting device of this application. It should be noted that the device of this embodiment can perform the steps of the above method for automatically cutting a model; for a detailed description of the relevant content, refer to the method section above, which is not repeated here.
The device 10 includes a memory 11 and a processor 12; the memory 11 and the processor 12 are connected by a bus 13.
The processor 12 may be a micro control unit, a central processing unit, a digital signal processor, and so on.
The memory 11 may be a Flash chip, a read-only memory, a magnetic disk, an optical disc, a USB flash drive, a removable hard disk, and so on.
The memory 11 is used to store a computer program;
the processor 12 is used to execute the computer program and, when executing the computer program, implement the following steps:
in the process of model reconstruction using data collected by a sensor, obtain sensor information; according to the sensor information, determine the effective area and/or ineffective area in the model, the ineffective area being the area outside the effective area in the model; cut the model to remove the ineffective area in the model.
In the embodiments of this application, during model reconstruction using data collected by a sensor, the effective area in the model and/or the ineffective area outside the effective area is determined according to the acquired sensor information, and the model is cut to remove the ineffective area. Since determining the effective and/or ineffective areas and cutting the model are completed during the reconstruction process without interrupting it, the reconstruction speed is not reduced; moreover, cutting away the ineffective areas reduces the volume of the model and the solid space it occupies, avoids noise interference, and in fact speeds up reconstruction and rendering, lets the user focus only on the area of interest, and guarantees or even improves reconstruction quality. Since everything is completed automatically during reconstruction without user involvement, the result does not depend on user experience; the cut model is guaranteed to retain the effective area, and the model quality is stable and repeatable.
The sensor information includes one or more of the position of the sensor, the inclination angle of the sensor, and the range of the sensor; the position and inclination angle of the sensor are the position and inclination angle of the sensor relative to the captured scene.
The processor, when executing the computer program, implements the following steps: determining the effective area and/or non-effective area in the model according to the position of the sensor and/or the tilt angle of the sensor.
The processor, when executing the computer program, implements the following steps: determining the effective area and/or non-effective area in the model according to the position of the sensor and/or the tilt angle of the sensor in the three-dimensional coordinate system of the model.
The processor, when executing the computer program, implements the following steps: projecting the position of the sensor in the three-dimensional coordinate system onto at least one projection plane of the three-dimensional coordinate system to obtain projection points; and determining the effective area and/or non-effective area in the model according to the projection points on the projection plane.
The processor, when executing the computer program, implements the following steps: drawing, from the projection points, first straight lines perpendicular or oblique to the projection plane, and thereby obtaining, through the first straight lines, a plurality of first intersecting planes that intersect the projection plane; and determining the effective area and/or non-effective area in the model through the first intersecting planes.
The processor, when executing the computer program, implements the following steps: cutting along the first intersecting planes and the projection plane to remove the non-effective area from the model.
The processor, when executing the computer program, implements the following steps: determining the effective area and/or non-effective area in the model through the three-dimensional space formed by the position of the sensor in the three-dimensional coordinate system of the model and the projection points.
The processor, when executing the computer program, implements the following steps: cutting along several faces of the three-dimensional space to remove the non-effective area from the model.
The processor, when executing the computer program, implements the following steps: processing the projection points, the processing including one or more of: requiring the projection points to expand outward, requiring the projection points to shrink inward, forming a bounding box from the projection points, and requiring the size of the three-dimensional space formed by the position of the sensor and the projection points to be greater than or equal to a space threshold.
Requiring a projection point to expand outward includes one or more of: expanding outward along a first projection edge in a direction away from the first projection edge, expanding outward along a second projection edge in a direction away from the second projection edge, or expanding outward within the included angle of the first projection edge and the second projection edge, the first projection edge and the second projection edge being two projection edges intersecting at the projection point. Requiring a projection point to shrink inward includes one or more of: shrinking inward along the first projection edge in a direction away from the projection point, shrinking inward along the second projection edge in a direction away from the projection point, or shrinking inward within the included angle of the first projection edge and the second projection edge.
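The bounding-box formation and the outward expansion / inward shrinking of the projection points might be sketched as follows for the axis-aligned case only (an illustrative simplification of the edge-wise expansion described above; function names and values are hypothetical):

```python
def bounding_box_2d(points):
    """Axis-aligned bounding box of the projection points on the projection plane."""
    xs, ys = zip(*points)
    return (min(xs), min(ys)), (max(xs), max(ys))

def expand(box, margin):
    """Expand the cutting range outward by `margin`; a negative margin
    shrinks it inward instead."""
    (x0, y0), (x1, y1) = box
    return (x0 - margin, y0 - margin), (x1 + margin, y1 + margin)

pts = [(1, 2), (4, 1), (3, 5)]   # sensor positions projected onto the plane
box = bounding_box_2d(pts)       # ((1, 1), (4, 5))
grown = expand(box, 0.5)         # ((0.5, 0.5), (4.5, 5.5))
```

Expanding outward keeps scene content near the outermost sensor positions inside the effective area; shrinking inward trims the range more aggressively.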
The processor, when executing the computer program, implements the following steps: taking the position of the sensor in the three-dimensional coordinate system as a starting point, drawing a ray along the tilt-angle direction of the sensor to obtain the intersection point of the ray and the current surface of the model; and determining the effective area and/or non-effective area in the model through the intersection points.
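When the model's current surface is planar, the ray construction in the step above reduces to a ray-plane intersection. A sketch under that assumption (function names and the example geometry are illustrative, not part of the disclosure):

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal, eps=1e-9):
    """Intersection of the ray origin + t * direction (t >= 0) with a plane."""
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    n = np.asarray(plane_normal, float)
    denom = float(d @ n)
    if abs(denom) < eps:       # ray parallel to the surface: no intersection
        return None
    t = float((np.asarray(plane_point, float) - o) @ n) / denom
    if t < 0:                  # surface lies behind the sensor
        return None
    return o + t * d

# hypothetical sensor 30 m up, tilted 45 degrees toward +x; surface is z = 0
hit = ray_plane_intersection((0, 0, 30), (1, 0, -1), (0, 0, 0), (0, 0, 1))
# hit is the point (30, 0, 0) on the model's current surface
```

For a general mesh surface, the same idea applies per triangle (e.g. with a ray-triangle test) rather than per plane.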
The processor, when executing the computer program, implements the following steps: drawing, from the intersection points, second straight lines perpendicular or oblique to the current surface, and thereby obtaining, through the second straight lines, a plurality of second intersecting planes that intersect the current surface; or obtaining a plurality of second intersecting planes through the intersection points and the rays corresponding to the intersection points; or obtaining a plurality of second intersecting planes through the intersection points and the midpoint of the line connecting the positions of any two of the sensors; or obtaining a plurality of second intersecting planes through the midpoints of the common perpendicular segments between the intersection points and their corresponding rays; and determining the effective area and/or non-effective area in the model through the second intersecting planes.
The processor, when executing the computer program, implements the following steps: cutting along the second intersecting planes and the current surface of the model to remove the non-effective area from the model.
The processor, when executing the computer program, implements the following steps: projecting the position of the sensor in the three-dimensional coordinate system onto at least one projection plane of the three-dimensional coordinate system to obtain projection points, and, taking the position of the sensor in the three-dimensional coordinate system as a starting point, drawing a ray along the tilt-angle direction of the sensor to obtain the intersection point of the ray and the current surface of the model; and determining the effective area and/or non-effective area in the model according to the projection points and the intersection points.
The processor, when executing the computer program, implements the following steps: determining a first cutting range of the model according to the projection points, and determining a second cutting range of the model according to the intersection points; and taking the intersection or union of the first cutting range and the second cutting range as the effective area.
The three-dimensional coordinate system includes a global coordinate system or a local coordinate system.
The processor, when executing the computer program, implements the following steps: determining the effective area and/or non-effective area in the model according to the position of the sensor and the measuring range of the sensor.
The processor, when executing the computer program, implements the following steps: taking the range between the position of the sensor and the measuring range of the sensor as the effective area.
The processor, when executing the computer program, implements the following steps: determining the effective area and/or non-effective area in the model according to the position of the sensor and a set proportion of the measuring range of the sensor.
The processor, when executing the computer program, implements the following steps: taking the range between the position of the sensor and the set proportion of the measuring range of the sensor as the effective area.
The sensor includes a camera device, and the sensor information further includes the view frustum of the camera device.
The processor, when executing the computer program, implements the following steps: determining the effective area and/or non-effective area in the model according to the position of the camera device and the view frustum of the camera device.
The processor, when executing the computer program, implements the following steps: taking the position of each camera device as an apex, determining the three-dimensional space corresponding to the view frustum of each camera device; and taking the union of the three-dimensional spaces corresponding to the view frustums of all camera devices as the effective area.
This application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the automatic model cutting method of any of the above. For a detailed description of the related content, please refer to the automatic model cutting method section above, which is not repeated here.
The computer-readable storage medium may be an internal storage unit of any of the devices described above, for example, a hard disk or memory of the device. The computer-readable storage medium may also be an external storage device of the device, for example, a plug-in hard disk, a smart media card, a secure digital card, a flash memory card, or the like, equipped on the device.
In the embodiments of this application, during model reconstruction using data collected by a sensor, the effective area in the model and/or the non-effective area outside the effective area is determined according to the acquired sensor information, and the model is cut to remove the non-effective area. Because determining the effective and/or non-effective area and cutting the model are completed during the reconstruction process without interrupting it, the reconstruction speed is not reduced; on the contrary, cutting away the non-effective area reduces the volume of the model and the three-dimensional space it occupies, avoids noise interference, speeds up model reconstruction and rendering, lets the user focus only on the region of interest, and maintains or even improves reconstruction quality. Because both the determination and the cutting are completed automatically during reconstruction without user involvement, they do not depend on user experience; the cut model is guaranteed to retain the effective area, and the model quality is stable and repeatable.
It should be understood that the terms used in the specification of this application are for the purpose of describing particular embodiments only and are not intended to limit this application.
It should also be understood that the term "and/or" used in the specification of this application and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
The above are only specific embodiments of this application, but the protection scope of this application is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed in this application, and these modifications or replacements shall all be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (49)

  1. A method for automatically cutting a model, comprising:
    acquiring sensor information during model reconstruction using data collected by a sensor;
    determining, according to the sensor information, an effective area and/or a non-effective area in the model, the non-effective area being the area outside the effective area in the model; and
    cutting the model to remove the non-effective area from the model.
  2. The method according to claim 1, wherein the sensor information comprises one or more of a position of the sensor, a tilt angle of the sensor, and a measuring range of the sensor, the position of the sensor and the tilt angle of the sensor being the position and tilt angle of the sensor relative to a collected scene.
  3. The method according to claim 2, wherein the determining, according to the sensor information, the effective area and/or non-effective area in the model comprises:
    determining the effective area and/or non-effective area in the model according to the position of the sensor and/or the tilt angle of the sensor.
  4. The method according to claim 3, wherein the determining the effective area and/or non-effective area in the model according to the position of the sensor and/or the tilt angle of the sensor comprises:
    determining the effective area and/or non-effective area in the model according to the position of the sensor and/or the tilt angle of the sensor in a three-dimensional coordinate system of the model.
  5. The method according to claim 4, wherein the determining the effective area and/or non-effective area in the model according to the position of the sensor and/or the tilt angle of the sensor in the three-dimensional coordinate system of the model comprises:
    projecting the position of the sensor in the three-dimensional coordinate system onto at least one projection plane of the three-dimensional coordinate system to obtain projection points; and
    determining the effective area and/or non-effective area in the model according to the projection points on the projection plane.
  6. The method according to claim 5, wherein the determining the effective area and/or non-effective area in the model according to the projection points on the projection plane comprises:
    drawing, from the projection points, first straight lines perpendicular or oblique to the projection plane, and thereby obtaining, through the first straight lines, a plurality of first intersecting planes that intersect the projection plane; and
    determining the effective area and/or non-effective area in the model through the first intersecting planes.
  7. The method according to claim 6, wherein the cutting the model to remove the non-effective area from the model comprises:
    cutting along the first intersecting planes and the projection plane to remove the non-effective area from the model.
  8. The method according to claim 5, wherein the determining the effective area and/or non-effective area in the model according to the projection points on the projection plane comprises:
    determining the effective area and/or non-effective area in the model through a three-dimensional space formed by the position of the sensor in the three-dimensional coordinate system of the model and the projection points.
  9. The method according to claim 8, wherein the cutting the model to remove the non-effective area from the model comprises:
    cutting along several faces of the three-dimensional space to remove the non-effective area from the model.
  10. The method according to any one of claims 6-9, wherein before the determining the effective area and/or non-effective area in the model according to the projection points on the projection plane, the method further comprises:
    processing the projection points, the processing comprising one or more of: requiring the projection points to expand outward, requiring the projection points to shrink inward, forming a bounding box from the projection points, and requiring the size of the three-dimensional space formed by the position of the sensor and the projection points to be greater than or equal to a space threshold.
  11. The method according to claim 10, wherein the requiring the projection points to expand outward comprises one or more of: requiring a projection point to expand outward along a first projection edge in a direction away from the first projection edge, to expand outward along a second projection edge in a direction away from the second projection edge, or to expand outward within the included angle of the first projection edge and the second projection edge, the first projection edge and the second projection edge being two projection edges intersecting at the projection point; and
    the requiring the projection points to shrink inward comprises one or more of: requiring a projection point to shrink inward along the first projection edge in a direction away from the projection point, to shrink inward along the second projection edge in a direction away from the projection point, or to shrink inward within the included angle of the first projection edge and the second projection edge.
  12. The method according to claim 4, wherein the determining the effective area and/or non-effective area in the model according to the position of the sensor and/or the tilt angle of the sensor in the three-dimensional coordinate system of the model comprises:
    taking the position of the sensor in the three-dimensional coordinate system as a starting point, drawing a ray along the tilt-angle direction of the sensor to obtain an intersection point of the ray and a current surface of the model; and
    determining the effective area and/or non-effective area in the model through the intersection point.
  13. The method according to claim 12, wherein the determining the effective area and/or non-effective area in the model through the intersection point comprises:
    drawing, from the intersection points, second straight lines perpendicular or oblique to the current surface, and thereby obtaining, through the second straight lines, a plurality of second intersecting planes that intersect the current surface;
    or obtaining a plurality of second intersecting planes through the intersection points and the rays corresponding to the intersection points;
    or obtaining a plurality of second intersecting planes through the intersection points and the midpoint of a line connecting the positions of any two of the sensors;
    or obtaining a plurality of second intersecting planes through the midpoints of the common perpendicular segments between the intersection points and their corresponding rays; and
    determining the effective area and/or non-effective area in the model through the second intersecting planes.
  14. The method according to claim 13, wherein the cutting the model to remove the non-effective area from the model comprises:
    cutting along the second intersecting planes and the current surface of the model to remove the non-effective area from the model.
  15. The method according to claim 4, wherein the determining the effective area and/or non-effective area in the model according to the position of the sensor and/or the tilt angle of the sensor in the three-dimensional coordinate system of the model comprises:
    projecting the position of the sensor in the three-dimensional coordinate system onto at least one projection plane of the three-dimensional coordinate system to obtain projection points, and, taking the position of the sensor in the three-dimensional coordinate system as a starting point, drawing a ray along the tilt-angle direction of the sensor to obtain an intersection point of the ray and a current surface of the model; and
    determining the effective area and/or non-effective area in the model according to the projection points and the intersection points.
  16. The method according to claim 15, wherein the determining the effective area and/or non-effective area in the model according to the projection points and the intersection points comprises:
    determining a first cutting range of the model according to the projection points, and determining a second cutting range of the model according to the intersection points; and
    taking the intersection or union of the first cutting range and the second cutting range as the effective area.
  17. The method according to any one of claims 4-16, wherein the three-dimensional coordinate system comprises a global coordinate system or a local coordinate system.
  18. The method according to claim 2, wherein the determining, according to the sensor information, the effective area and/or non-effective area in the model comprises:
    determining the effective area and/or non-effective area in the model according to the position of the sensor and the measuring range of the sensor.
  19. The method according to claim 18, wherein the determining the effective area and/or non-effective area in the model according to the position of the sensor and the measuring range of the sensor comprises:
    taking the range between the position of the sensor and the measuring range of the sensor as the effective area.
  20. The method according to claim 18, wherein the determining the effective area and/or non-effective area in the model according to the position of the sensor and the measuring range of the sensor comprises:
    determining the effective area and/or non-effective area in the model according to the position of the sensor and a set proportion of the measuring range of the sensor.
  21. The method according to claim 20, wherein the determining the effective area and/or non-effective area in the model according to the position of the sensor and the set proportion of the measuring range of the sensor comprises:
    taking the range between the position of the sensor and the set proportion of the measuring range of the sensor as the effective area.
  22. The method according to claim 2, wherein the sensor comprises a camera device, and the sensor information further comprises a view frustum of the camera device.
  23. The method according to claim 22, wherein the determining, according to the sensor information, the effective area and/or non-effective area in the model comprises:
    determining the effective area and/or non-effective area in the model according to the position of the camera device and the view frustum of the camera device.
  24. The method according to claim 23, wherein the determining the effective area and/or non-effective area in the model according to the position of the camera device and the view frustum of the camera device comprises:
    taking the position of each camera device as an apex, determining the three-dimensional space corresponding to the view frustum of each camera device; and
    taking the union of the three-dimensional spaces corresponding to the view frustums of all camera devices as the effective area.
  25. A device for automatically cutting a model, the device comprising: a memory and a processor;
    the memory being configured to store a computer program;
    the processor being configured to execute the computer program and, when executing the computer program, implement the following steps:
    acquiring sensor information during model reconstruction using data collected by a sensor;
    determining, according to the sensor information, an effective area and/or a non-effective area in the model, the non-effective area being the area outside the effective area in the model; and
    cutting the model to remove the non-effective area from the model.
  26. The device according to claim 25, wherein the sensor information comprises one or more of a position of the sensor, a tilt angle of the sensor, and a measuring range of the sensor, the position of the sensor and the tilt angle of the sensor being the position and tilt angle of the sensor relative to a collected scene.
  27. The device according to claim 26, wherein the processor, when executing the computer program, implements the following steps:
    determining the effective area and/or non-effective area in the model according to the position of the sensor and/or the tilt angle of the sensor.
  28. The device according to claim 27, wherein the processor, when executing the computer program, implements the following steps:
    determining the effective area and/or non-effective area in the model according to the position of the sensor and/or the tilt angle of the sensor in a three-dimensional coordinate system of the model.
  29. The device according to claim 28, wherein the processor, when executing the computer program, implements the following steps:
    projecting the position of the sensor in the three-dimensional coordinate system onto at least one projection plane of the three-dimensional coordinate system to obtain projection points; and
    determining the effective area and/or non-effective area in the model according to the projection points on the projection plane.
  30. The device according to claim 29, wherein the processor, when executing the computer program, implements the following steps:
    drawing, from the projection points, first straight lines perpendicular or oblique to the projection plane, and thereby obtaining, through the first straight lines, a plurality of first intersecting planes that intersect the projection plane; and
    determining the effective area and/or non-effective area in the model through the first intersecting planes.
  31. The device according to claim 30, wherein the processor, when executing the computer program, implements the following steps:
    cutting along the first intersecting planes and the projection plane to remove the non-effective area from the model.
  32. The device according to claim 29, wherein the processor, when executing the computer program, implements the following steps:
    determining the effective area and/or non-effective area in the model through a three-dimensional space formed by the position of the sensor in the three-dimensional coordinate system of the model and the projection points.
  33. The device according to claim 32, wherein the processor, when executing the computer program, implements the following steps:
    cutting along several faces of the three-dimensional space to remove the non-effective area from the model.
  34. The device according to any one of claims 30-33, wherein the processor, when executing the computer program, implements the following steps:
    processing the projection points, the processing comprising one or more of: requiring the projection points to expand outward, requiring the projection points to shrink inward, forming a rectangular bounding box from the projection points, and requiring the size of the three-dimensional space formed by the position of the sensor and the projection points to be greater than or equal to a space threshold.
  35. The device according to claim 34, wherein the requiring the projection points to expand outward comprises one or more of: requiring a projection point to expand outward along a first projection edge in a direction away from the first projection edge, to expand outward along a second projection edge in a direction away from the second projection edge, or to expand outward within the included angle of the first projection edge and the second projection edge, the first projection edge and the second projection edge being two projection edges intersecting at the projection point; and
    the requiring the projection points to shrink inward comprises one or more of: requiring a projection point to shrink inward along the first projection edge in a direction away from the projection point, to shrink inward along the second projection edge in a direction away from the projection point, or to shrink inward within the included angle of the first projection edge and the second projection edge.
  36. The device according to claim 28, wherein the processor, when executing the computer program, implements the following steps:
    taking the position of the sensor in the three-dimensional coordinate system as a starting point, drawing a ray along the tilt-angle direction of the sensor to obtain an intersection point of the ray and a current surface of the model; and
    determining the effective area and/or non-effective area in the model through the intersection point.
  37. The device according to claim 36, wherein the processor, when executing the computer program, implements the following steps:
    drawing, from the intersection points, second straight lines perpendicular or oblique to the current surface, and thereby obtaining, through the second straight lines, a plurality of second intersecting planes that intersect the current surface;
    or obtaining a plurality of second intersecting planes through the intersection points and the rays corresponding to the intersection points;
    or obtaining a plurality of second intersecting planes through the intersection points and the midpoint of a line connecting the positions of any two of the sensors;
    or obtaining a plurality of second intersecting planes through the midpoints of the common perpendicular segments between the intersection points and their corresponding rays; and
    determining the effective area and/or non-effective area in the model through the second intersecting planes.
  38. The device according to claim 37, wherein the processor, when executing the computer program, implements the following steps:
    cutting along the second intersecting planes and the current surface of the model to remove the non-effective area from the model.
  39. The device according to claim 28, wherein the processor, when executing the computer program, implements the following steps:
    projecting the position of the sensor in the three-dimensional coordinate system onto at least one projection plane of the three-dimensional coordinate system to obtain projection points, and, taking the position of the sensor in the three-dimensional coordinate system as a starting point, drawing a ray along the tilt-angle direction of the sensor to obtain an intersection point of the ray and a current surface of the model; and
    determining the effective area and/or non-effective area in the model according to the projection points and the intersection points.
  40. The device according to claim 39, wherein the processor, when executing the computer program, implements the following steps:
    determining a first cutting range of the model according to the projection points, and determining a second cutting range of the model according to the intersection points; and
    taking the intersection or union of the first cutting range and the second cutting range as the effective area.
  41. The device according to any one of claims 28-40, wherein the three-dimensional coordinate system comprises a global coordinate system or a local coordinate system.
  42. The device according to claim 26, wherein the processor, when executing the computer program, implements the following steps:
    determining the effective area and/or non-effective area in the model according to the position of the sensor and the measuring range of the sensor.
  43. The device according to claim 42, wherein the processor, when executing the computer program, implements the following steps:
    taking the range between the position of the sensor and the measuring range of the sensor as the effective area.
  44. The device according to claim 42, wherein the processor, when executing the computer program, implements the following steps:
    determining the effective area and/or non-effective area in the model according to the position of the sensor and a set proportion of the measuring range of the sensor.
  45. The device according to claim 44, wherein the processor, when executing the computer program, implements the following steps:
    taking the range between the position of the sensor and the set proportion of the measuring range of the sensor as the effective area.
  46. The device according to claim 26, wherein the sensor comprises a camera device, and the sensor information further comprises a view frustum of the camera device.
  47. The device according to claim 46, wherein the processor, when executing the computer program, implements the following steps:
    determining the effective area and/or non-effective area in the model according to the position of the camera device and the view frustum of the camera device.
  48. The device according to claim 47, wherein the processor, when executing the computer program, implements the following steps:
    taking the position of each camera device as an apex, determining the three-dimensional space corresponding to the view frustum of each camera device; and
    taking the union of the three-dimensional spaces corresponding to the view frustums of all camera devices as the effective area.
  49. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the method for automatically cutting a model according to any one of claims 1 to 24.
PCT/CN2019/106028 2019-09-16 2019-09-16 Method, device and storage medium for automatically cutting a model WO2021051249A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980034070.XA CN112204624A (zh) 2019-09-16 2019-09-16 Method, device and storage medium for automatically cutting a model
PCT/CN2019/106028 WO2021051249A1 (zh) 2019-09-16 2019-09-16 Method, device and storage medium for automatically cutting a model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/106028 WO2021051249A1 (zh) 2019-09-16 2019-09-16 Method, device and storage medium for automatically cutting a model

Publications (1)

Publication Number Publication Date
WO2021051249A1 true WO2021051249A1 (zh) 2021-03-25

Family

ID=74004608

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/106028 WO2021051249A1 (zh) 2019-09-16 2019-09-16 Method, device and storage medium for automatically cutting a model

Country Status (2)

Country Link
CN (1) CN112204624A (zh)
WO (1) WO2021051249A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113075659B (zh) * 2021-03-30 2022-10-04 北京环境特性研究所 一种平坦场景网格模型自适应分区预处理方法和系统

Citations (5)

Publication number Priority date Publication date Assignee Title
US20100214288A1 (en) * 2009-02-25 2010-08-26 Jing Xiao Combining Subcomponent Models for Object Image Modeling
CN103871097A (zh) * 2014-02-26 2014-06-18 南京航空航天大学 基于牙齿预备体的数据柔性融合方法
CN105139443A (zh) * 2015-07-30 2015-12-09 芜湖卫健康物联网医疗科技有限公司 诊断结果的三维成像系统及方法
CN106133795A (zh) * 2014-01-17 2016-11-16 诺基亚技术有限公司 用于对3d渲染应用中地理定位的媒体内容进行视觉化的方法和装置
CN108597023A (zh) * 2018-05-09 2018-09-28 中国石油大学(华东) 一种基于单反相机的地质露头三维模型构建方法

Non-Patent Citations (1)

Title
PU YANGGUANG: "Multi-View Three-dimensional Reconstruction Feature Point Detection Matching and Point Cloud Area Clipping of Improvement", CHINESE MASTER'S THESES FULL-TEXT DATABASE, SHAANXI NORMAL UNIVERSITY, CN, 15 January 2019 (2019-01-15), CN, XP055792372, ISSN: 1674-0246 *

Also Published As

Publication number Publication date
CN112204624A (zh) 2021-01-08

Similar Documents

Publication Publication Date Title
US11354840B2 (en) Three dimensional acquisition and rendering
CN109658365B (zh) 图像处理方法、装置、系统和存储介质
WO2021120846A1 (zh) 三维重建方法、设备以及计算机可读介质
KR102096730B1 (ko) 이미지 디스플레이 방법, 곡면을 가지는 불규칙 스크린을 제조하기 위한 방법 및 헤드-장착 디스플레이 장치
WO2019024935A1 (zh) 一种全景图像生成方法及装置
TW201531871A (zh) 點雲模型貼圖系統及方法
CA2910649A1 (en) Automated texturing mapping and animation from images
Kersten et al. Potential of automatic 3D object reconstruction from multiple images for applications in architecture, cultural heritage and archaeology
Pagani et al. Dense 3D Point Cloud Generation from Multiple High-resolution Spherical Images.
CN111896032B (zh) 一种单目散斑投射器位置的标定系统及方法
Yemez et al. A volumetric fusion technique for surface reconstruction from silhouettes and range data
CN114549772A (zh) 基于工程独立坐标系的多源三维模型融合处理方法及系统
CN109064533A (zh) 一种3d漫游方法及系统
WO2021051249A1 (zh) 模型自动剪切的方法、装置及存储介质
Yu et al. Multiperspective modeling, rendering, and imaging
CN115311308A (zh) 电力井室的墙面切割方法、装置、计算设备及存储介质
JP2018032938A (ja) 画像処理装置、画像処理の方法およびプログラム
CN108629840A (zh) 一种建立logo三维轮廓的方法、装置和设备
Zhou et al. MR video fusion: interactive 3D modeling and stitching on wide-baseline videos
WO2023056879A1 (zh) 一种模型处理方法、装置、设备及介质
KR100490885B1 (ko) 직각 교차 실린더를 이용한 영상기반 렌더링 방법
CN116524109A (zh) 一种基于WebGL的三维桥梁可视化方法及相关设备
JP7195785B2 (ja) 3次元形状データを生成する装置、方法、及びプログラム
CN111833428B (zh) 一种可视域确定方法、装置及设备
Wang et al. Identifying and filling occlusion holes on planar surfaces for 3-D scene editing

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 19946166; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 19946166; Country of ref document: EP; Kind code of ref document: A1)