CN112204624A - Method and device for automatic model clipping, and storage medium

Method and device for automatic model clipping, and storage medium

Info

Publication number
CN112204624A
Authority
CN
China
Prior art keywords
model
sensor
projection
area
determining
Prior art date
Legal status
Pending
Application number
CN201980034070.XA
Other languages
Chinese (zh)
Inventor
黄胜
梁家斌
田艺
李思晋
李鑫超
Current Assignee
SZ DJI Technology Co Ltd
SZ DJI Innovations Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN112204624A publication Critical patent/CN112204624A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method for automatic model clipping, a device for automatic model clipping, and a storage medium. The method comprises: acquiring sensor information during model reconstruction performed with data collected by a sensor (S101); determining an active area and/or an inactive area in the model according to the sensor information (S102); and clipping the model to remove the inactive area from the model (S103).

Description

Method and device for automatic model clipping, and storage medium
Technical Field
The present application relates to the field of model reconstruction technologies, and in particular, to a method and an apparatus for automatically clipping a model, and a storage medium.
Background
Three-dimensional reconstruction techniques based on stereoscopic vision or laser scanning are becoming mature. In practice, however, captured photographs contain many areas that are not of concern, and laser scanning likewise covers many areas that do not need attention. These unimportant areas tend to occupy a large amount of three-dimensional space, resulting in an oversized model. In addition, including these non-regions of interest may also degrade quality during model reconstruction.
At present, the areas that do not need attention are cropped manually. However, reconstruction must be interrupted during manual cropping, and the refined model can only continue to be reconstructed after the manual cropping is finished. This tends to slow down reconstruction, and the result of manual cropping depends on the user's experience, which easily leads to reduced model quality or even an invalid model.
Disclosure of Invention
In view of this, the present application provides a method for automatic model clipping, a device for automatic model clipping, and a storage medium.
In a first aspect, the present application provides a method for automatically clipping a model, including:
acquiring sensor information in a process of performing model reconstruction by using data acquired by a sensor;
determining an active area and/or an inactive area in the model according to the sensor information, wherein the inactive area is an area outside the active area in the model;
and clipping the model to remove the inactive area from the model.
In a second aspect, the present application provides an apparatus for automatic model clipping, the apparatus comprising: a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the steps of:
acquiring sensor information in a process of performing model reconstruction by using data acquired by a sensor;
determining an active area and/or an inactive area in the model according to the sensor information, wherein the inactive area is an area outside the active area in the model;
and clipping the model to remove the inactive area from the model.
In a third aspect, the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the method of model auto-clipping as described above.
Embodiments of the present application provide a method for automatic model clipping, a device for automatic model clipping, and a storage medium. During model reconstruction using data collected by a sensor, an effective area in the model and/or a non-effective area outside the effective area are determined according to the acquired sensor information, and the model is clipped to remove the non-effective area. Because the effective area and/or the non-effective area are determined from the acquired sensor information and the model is clipped during the reconstruction process, the reconstruction does not need to be interrupted and the reconstruction speed is not reduced. Cutting off the non-effective area reduces the volume of the model and the three-dimensional space it occupies, avoids noise interference, speeds up model reconstruction and rendering, lets the user focus only on the region of interest, and ensures or even improves the reconstruction quality. Moreover, because the determination and clipping are completed automatically during reconstruction without user involvement, the result does not depend on user experience; the clipped model is guaranteed to retain the effective area, and the model quality is stable and repeatable.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic flow chart diagram of an embodiment of a method for auto-cropping a model of the present application;
FIG. 2 is a schematic flow chart diagram of another embodiment of a method for auto-cropping a model of the present application;
FIG. 3 is a schematic diagram of a projected point obtained by projecting the position of a sensor in a practical application of the model auto-cropping method;
FIG. 4 is a schematic flow chart diagram of another embodiment of a method for auto-cropping a model of the present application;
FIG. 5 is a schematic diagram of a specific way of forming a rectangular bounding box from projection points in a practical application of the model auto-cropping method of the present application;
FIG. 6 is a diagram illustrating a specific manner of expanding or contracting projection points in a practical application of the model auto-cropping method of the present application;
FIG. 7 is a schematic flow chart diagram of a further embodiment of a method for auto-cropping a model of the present application;
FIG. 8 is a schematic diagram of the intersection points of rays with the current surface of the model, obtained by casting a ray along the tilt direction of each sensor, in a practical application of the model auto-cropping method of the present application;
FIG. 9 is a schematic flow chart diagram of a further embodiment of a method for auto-cropping a model of the present application;
FIG. 10 is a schematic flow chart diagram of a further embodiment of a method for auto-cropping a model of the present application;
FIG. 11 is a schematic flow chart diagram illustrating a method for auto-cropping a model of the present application;
FIG. 12 is a schematic flow chart diagram illustrating a method for auto-cropping a model of the present application;
FIG. 13 is a schematic flow chart diagram illustrating a method for auto-cropping a model of the present application;
FIG. 14 is a schematic illustration of an uncut model obtained by three-dimensional modeling using a series of pictures taken around a stone sculpture;
FIG. 15 is a schematic view of the model of FIG. 14 after being cut using the model auto-cropping method of the present application;
FIG. 16 is a schematic flow chart diagram of a further embodiment of a method for auto-cropping a model of the present application;
FIG. 17 is a schematic structural diagram of an embodiment of the apparatus for automatic model clipping of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In model reconstruction, non-effective regions such as areas that do not need attention occupy a large amount of three-dimensional space, which makes the model excessively large and also reduces quality during the reconstruction process. Moreover, manually cutting the model requires interrupting the reconstruction process, which easily slows down reconstruction, and because the result depends on the user's experience it easily leads to reduced model quality or an invalid model. In the embodiments of the present application, during model reconstruction using data collected by a sensor, an effective area in the model and/or a non-effective area outside the effective area are determined according to the acquired sensor information, and the model is clipped to remove the non-effective area. Because this is done during the reconstruction process, the reconstruction does not need to be interrupted and the reconstruction speed is not reduced. Cutting off the non-effective area reduces the volume of the model and the three-dimensional space it occupies, avoids noise interference, speeds up model reconstruction and rendering, lets the user focus only on the region of interest, and ensures or even improves the reconstruction quality. Moreover, because the determination and clipping are completed automatically without user involvement, the result does not depend on user experience; the clipped model is guaranteed to retain the effective area, and the model quality is stable and repeatable.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to FIG. 1, FIG. 1 is a schematic flow chart of an embodiment of the model auto-cropping method of the present application, and the method includes:
Step S101: acquiring sensor information during model reconstruction performed with data collected by a sensor.
In this embodiment, model reconstruction using data collected by a sensor refers to the mathematical processes and computer techniques for recovering the three-dimensional information (for example, the shape) of the captured scene from the data collected by the sensor, and includes steps such as data acquisition, preprocessing, point cloud stitching, and feature analysis.
Among these, sensors include, but are not limited to: cameras (e.g., cameras, video cameras, etc.), scanners (e.g., laser scanners, three-dimensional scanners, etc.), radars, and so forth. Sensor information refers to information related to a sensor, such as: setting information of the sensor in the collected scene (such as the position of the sensor, the height of the sensor, the inclination angle of the sensor and the like), and parameter information inherent to the sensor (such as the measuring range of the sensor, the sensitivity of the sensor, the precision of the sensor, the viewing cone of the camera device and the like).
The manner of acquiring the sensor information depends on the specific information. Some information needs to be obtained through calculation, for example the position of the sensor and the tilt angle of the sensor; some can be entered and stored in advance and retrieved when needed, for example the parameter information inherent to the sensor itself.
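For illustration only, the following is a minimal Python sketch of how such sensor information might be gathered into one record; the class and field names are assumptions for this sketch and are not prescribed by the present application.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SensorInfo:
    """Illustrative container for the sensor information discussed above."""
    position: Tuple[float, float, float]        # sensor position in the model's 3D coordinate system (computed)
    tilt_direction: Tuple[float, float, float]  # unit vector along the sensor's tilt direction (computed)
    measuring_range: float                      # intrinsic measuring range of the sensor (pre-entered and stored)
    precision: Optional[float] = None           # other intrinsic parameters can be stored and retrieved when needed
```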
Step S102: from the sensor information, an active area and/or an inactive area in the model is determined, the inactive area being an area outside the active area in the model.
In practical applications, the sensor is usually arranged around the Region Of Interest (ROI), and what actually needs attention, that is, the region the user really needs, lies within a certain range near the sensor. Because of the sensor's tilt angle, measuring range, and so on, the data actually collected include too much data from regions that are not of interest. In this embodiment, the effective area of the model comprises the region that the user really needs, pays attention to, or is interested in, and the non-effective area is the area of the model outside the effective area.
There are many specific ways of determining, according to the sensor information, an effective area in the model, a non-effective area in the model, or both. For example, if a plurality of camera devices are arranged at the same height and are all tilted downward, the area above that height can be considered a non-effective area; if they are all tilted upward, the area below that height can be considered a non-effective area. As another example, if the measuring range of the sensor is 10 meters, the region within 10 meters along the acquisition direction of the sensor is an effective area and the region beyond 10 meters is a non-effective area.
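As a concrete illustration of the first example above (sensors at one shared height, all tilted the same way), a minimal Python sketch; the function name and the use of the z axis as the height axis are assumptions of this sketch.

```python
import numpy as np

def mask_by_sensor_height(model_points, sensor_height, tilted_down=True):
    """Mark model points as effective (True) or non-effective (False) using only
    the shared sensor height and the common tilt direction described above."""
    pts = np.asarray(model_points, dtype=float)        # (N, 3) vertices of the model
    if tilted_down:
        return pts[:, 2] <= sensor_height              # area above the sensors is non-effective
    return pts[:, 2] >= sensor_height                  # area below the sensors is non-effective
```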
Step S103: and shearing the model to remove the non-effective area in the model.
And after the effective area and/or the non-effective area in the model are determined, the non-effective area in the model can be cut off.
In the embodiments of the present application, during model reconstruction using data collected by a sensor, an effective area in the model and/or a non-effective area outside the effective area are determined according to the acquired sensor information, and the model is clipped to remove the non-effective area. Because this is done during the reconstruction process, the reconstruction does not need to be interrupted and the reconstruction speed is not reduced. Cutting off the non-effective area reduces the volume of the model and the three-dimensional space it occupies, avoids noise interference, speeds up model reconstruction and rendering, lets the user focus only on the region of interest, and ensures or even improves the reconstruction quality. Moreover, because the determination and clipping are completed automatically without user involvement, the result does not depend on user experience; the clipped model is guaranteed to retain the effective area, and the model quality is stable and repeatable.
The following describes specific implementation processes of step S102 and step S103 by taking sensor information that is widely used in practice as an example.
In a wide range of practical applications, the sensor information includes one or more of the position of the sensor, the tilt angle of the sensor, and the measuring range of the sensor. The position of the sensor and the tilt angle of the sensor are the position and tilt angle relative to the captured scene.
In practical applications, the position of the sensor and the tilt angle of the sensor are set by the user according to the actual requirements, the position of the region of interest, the position of the region of concern, and so on. The measuring range of the sensor refers to its measurement range, which the user likewise usually chooses according to these considerations when selecting a sensor. Therefore, the effective area and/or the non-effective area in the model can be determined based on one or more of the position of the sensor, the tilt angle of the sensor, and the measuring range of the sensor.
For example: in an application, step S102 may be: an active area and/or a non-active area in the model is determined based on the position of the sensor and/or the tilt angle of the sensor.
In order to facilitate the determination of the relationship between the model and the position of the sensor and/or the inclination angle of the sensor, a three-dimensional coordinate system may be established in the model, and in this case, step S102 may further be: and determining an effective area and/or a non-effective area in the model according to the position of the sensor and/or the inclination angle of the sensor in the three-dimensional coordinate system of the model.
Further, the three-dimensional coordinate system includes a global coordinate system or a local coordinate system.
The global coordinate system is the coordinate system in which the three-dimensional scene is located, and the vertex coordinates of the model are expressed in this coordinate system. The local coordinate system is an assumed coordinate system whose origin is at the center of the object; rotations and translations of the object are performed with respect to the local coordinate system, and when the object model rotates or translates, the local coordinate system rotates or translates with it, so the relative position of the local coordinate system and the object remains constant throughout. The purpose of assuming a local coordinate system is mainly to facilitate understanding of the translation and rotation operations performed on an object in a three-dimensional scene. When model transformations are understood in terms of the local coordinate system, all transformation operations act directly on the local coordinate system, and because the relative position of the local coordinate system and the object does not change, translating, rotating, or scaling the local coordinate system changes the position and shape of the object in the scene accordingly.
It should be noted that the global coordinate system and the local coordinate system are two means of understanding the model transformation. In practical applications, a suitable three-dimensional coordinate system may be selected according to specific application requirements.
The following takes as examples the position of the sensor, the tilt angle of the sensor, the position together with the tilt angle of the sensor, and the position together with the measuring range of the sensor.
In the first practical application, when the position of the sensor is used to determine the effective region and/or the non-effective region in the model, referring to fig. 2, step S102 may include: substep S102a1 and substep S102a 2.
Sub-step S102a 1: and projecting the position of the sensor in the three-dimensional coordinate system onto at least one projection surface of the three-dimensional coordinate system to obtain a projection point.
For example, as shown in fig. 3, on the o-xyz coordinate system, the triangle represents the sensor, the vertex of the triangle represents the position of the sensor, xoy, yoz, xoz are three projection planes, and five sensors in the figure correspond to five projection points E, F, G, H, I on the projection plane xoy.
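A minimal Python sketch of sub-step S102A1 under the assumption that the projection plane is xoy and the projection is orthogonal (dropping the z component); the function name is illustrative.

```python
import numpy as np

def project_sensors_to_xoy(sensor_positions):
    """Project sensor positions onto the xoy plane of the three-dimensional coordinate
    system, yielding one projection point per sensor (E, F, G, H, I in FIG. 3)."""
    positions = np.asarray(sensor_positions, dtype=float)   # (M, 3) sensor positions
    projections = positions.copy()
    projections[:, 2] = 0.0                                  # orthogonal projection: set z to 0
    return projections
```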
Sub-step S102a 2: and determining an effective area and/or an ineffective area in the model according to the projection points on the projection surface.
In this embodiment, the range formed by the projection points of the positions of the sensors on the projection plane in the three-dimensional coordinate system can substantially determine the region of interest of the user. Therefore, the effective region and/or the non-effective region in the model can be determined according to the projection point on the projection plane, which is specifically described as follows:
in an embodiment, a plurality of first intersecting surfaces intersecting the projection plane can be obtained by making a first straight line intersecting the projection plane from the projection point, and the effective area and/or the ineffective area in the model are/is determined through the first intersecting surfaces. That is, the sub-step S102a2 may include: substeps S102A2a1 and substep S102A2A2, as shown in fig. 4.
Sub-step S102A2a1: drawing, from the projection points, first straight lines perpendicular or oblique to the projection plane, and obtaining, through these first straight lines, a plurality of first intersecting surfaces that intersect the projection plane.
The first intersecting surface may be planar or non-planar. When two adjacent first straight lines corresponding to two adjacent projection points are coplanar with a projection straight line between the two adjacent projection points, the first intersecting surface may be a plane; when not coplanar, the first intersecting surface may be non-planar.
The first intersecting surface can be obtained by intersecting a plane formed by two adjacent first straight lines with the projection surface, or can be obtained by intersecting a plane formed by two non-adjacent first straight lines with the projection surface. The specific obtaining manner of the first intersecting surface may be determined according to practical applications, and is not limited herein.
Sub-step S102A2a 2: the active area and/or the inactive area in the model is determined by the first intersecting surface.
For example: if the model is divided into the left side and the right side by the first intersecting surface, only the left side of the first intersecting surface can be set as a non-effective area, and on the basis, the model on the right side of the first intersecting surface can continuously determine an effective area and/or a non-effective area by adopting other modes; or, the left side of the first intersecting surface can be directly set as an effective area; alternatively, the first intersecting surface may divide the model into a plurality of parts in other ways, and then directly determine the effective region and the ineffective region in the model, and so on.
In the embodiment, a first straight line intersecting the projection plane is made from the projection point, so that a plurality of first intersecting planes intersecting the projection plane can be obtained, and then the effective area and/or the ineffective area in the model are determined through the first intersecting planes.
At this time, step S103 may include: and cutting along the first intersecting surface and the projection surface to remove the non-effective area in the model.
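A Python sketch of the side test such a cut relies on, assuming a first intersecting surface that passes through two adjacent projection points and is perpendicular to the xoy projection plane; this is one illustrative construction among those the description allows.

```python
import numpy as np

def side_of_first_intersecting_plane(model_points, proj_a, proj_b):
    """Signed side test against the vertical plane through projection points A and B.
    Positive and negative values correspond to the two parts the plane divides the model into."""
    pts = np.asarray(model_points, dtype=float)
    a = np.array([proj_a[0], proj_a[1], 0.0])
    b = np.array([proj_b[0], proj_b[1], 0.0])
    edge = b - a
    normal = np.array([-edge[1], edge[0], 0.0])   # horizontal normal, so the plane is vertical
    return (pts - a) @ normal
```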
In another embodiment, the active and/or inactive regions in the model are determined by a volume space composed of the positions of the sensors and the projected points in the three-dimensional coordinate system. That is, the sub-step S102a2 may further include: sub-step S102A2b 1.
Sub-step S102A2b 1: and determining an effective area and/or a non-effective area in the model through a stereo space formed by the positions of the sensors and the projection points in the three-dimensional coordinate system of the model.
The three-dimensional space of the present embodiment may be a parallel polyhedron having upper and lower parallel bottom surfaces, for example: the parallel polyhedron comprises a frustum of a pyramid; the three-dimensional space may be a polyhedron in which the upper and lower bottom surfaces are not parallel. The shape of the three-dimensional space in this embodiment is determined according to practical applications, and is not limited herein.
In practical applications, a model outside a three-dimensional space may be determined as a non-effective region, a space surrounded by the three-dimensional space may be determined as an effective region, or the three-dimensional space may be combined with other methods to determine an effective region and a non-effective region in the model, and so on.
In the embodiment, the effective area and/or the non-effective area in the model are determined through a stereo space formed by the positions of the sensors and the projection points in the three-dimensional coordinate system, and in this way, the absolute positions of the sensors are utilized to perform another division of the cutting range, so that the effective area and/or the non-effective area in the model can be automatically determined simply and conveniently, and support is provided for automatically cutting the model subsequently.
At this time, step S103 may include: cuts are made along several faces of the volume to remove inactive areas in the model.
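A sketch of the containment test implied by this three-dimensional space, under the simplifying assumption that the space is a prism whose footprint is the convex hull of the projection points on xoy and whose height runs from the projection plane up to the highest sensor; the prism shape and the scipy dependency are assumptions, and the description also allows frustum-like or non-parallel volumes.

```python
import numpy as np
from scipy.spatial import Delaunay

def inside_sensor_volume(model_points, sensor_positions):
    """True for vertices inside the assumed prism (kept as the effective area),
    False for vertices outside it (removed as the non-effective area)."""
    pts = np.asarray(model_points, dtype=float)
    sensors = np.asarray(sensor_positions, dtype=float)     # needs at least 3 non-collinear sensors

    footprint = Delaunay(sensors[:, :2])                     # convex hull of the projection points on xoy
    in_footprint = footprint.find_simplex(pts[:, :2]) >= 0
    in_height = (pts[:, 2] >= 0.0) & (pts[:, 2] <= sensors[:, 2].max())
    return in_footprint & in_height
```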
In another embodiment, if the positions of the projection points are not suitable, so that determining the effective area and/or the non-effective area directly from the projection points on the projection plane would not yield the areas the user requires, the projection points are not used directly. Instead, the projection points are processed, the processed projection points replace the original unprocessed ones, and the effective area and/or the non-effective area in the model are then determined according to the processed projection points on the projection plane, so that the finally determined effective area and/or non-effective area meet the user's requirements. The way the effective area and/or the non-effective area are determined from the processed projection points on the projection plane is similar to the way the unprocessed projection points are used directly; see the description above, which is not repeated here. The processing of the projection points is described in detail below:
Before sub-step S102A2, the method may further include: processing the projection points, where the processing includes one or more of expanding the projection points outward, contracting the projection points inward, forming a rectangular bounding box from the projection points, and requiring the volume of the three-dimensional space formed by the positions of the sensors and the projection points to be greater than or equal to a space threshold.
Processing the projection points refers to processing, in order to meet the user's requirements, the projection points involved in determining the effective area and/or the non-effective area in the model. Expanding a projection point outward means moving it toward the area outside the region enclosed by all projection points on the projection plane. Contracting a projection point inward means moving it toward the interior of the region enclosed by all projection points on the projection plane. Forming a bounding box from the projection points includes forming an axis-aligned bounding box (AABB), a bounding sphere, an oriented bounding box (OBB), a fixed-direction convex hull (FDH), and the like; the type of bounding box is related to the shape of the model. Specifically, all projection points on the projection plane can be extended into a number of straight lines on the projection plane, and a bounding box can be formed around the region these lines enclose. As shown in FIG. 5, the region enclosed by four projection points is a rectangle, and this region is then extended into a three-dimensional space, such as a rectangular parallelepiped or a frustum-shaped bounding box.
Specifically, referring to FIG. 6, the first projection edge TA and the second projection edge TB are the two projection edges that intersect at the projection point T. Expanding the projection point T outward may be: the projection point T expands outward along the first projection edge TA in the direction TC away from the first projection edge TA, that is, along the direction TC opposite to TA; or the projection point T expands outward along the second projection edge TB in the direction TD away from the second projection edge TB, that is, along the direction TD opposite to TB; or the projection point T expands outward within the angle CTD, that is, within the range opposite to the included angle ATB; or the projection point T expands outward along the bisector TP of the angle CTD; and so on. In practical applications, the outward expansion of a plurality of projection points may combine the above manners.
Contracting the projection point T inward may be: the projection point T contracts inward along the first projection edge TA, that is, it moves along TA away from its original position; or the projection point T contracts inward along the second projection edge TB, that is, it moves along TB away from its original position; or the projection point T contracts within the included angle ATB between the first projection edge TA and the second projection edge TB, that is, into the region enclosed by the angle ATB; and so on. In practical applications, the inward contraction of a plurality of projection points may combine the above manners.
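A sketch of two of the processing options just described: the rectangular (AABB) bounding box of FIG. 5 and an outward expansion of the projection points. The uniform expansion away from the centroid is a simplification of the edge-wise expansion of FIG. 6 and, like the names, is an assumption of this sketch.

```python
import numpy as np

def process_projection_points(proj_points, margin):
    """Return an enlarged AABB of the projection points and the points expanded
    outward by `margin`, so the enclosed cutting region grows accordingly."""
    pts = np.asarray(proj_points, dtype=float)       # (M, 2) projection points on the projection plane

    # rectangular (AABB) bounding box, enlarged by the margin on every side
    lower, upper = pts.min(axis=0) - margin, pts.max(axis=0) + margin

    # expand each projection point outward, away from the centroid of all points
    centroid = pts.mean(axis=0)
    directions = pts - centroid
    norms = np.linalg.norm(directions, axis=1, keepdims=True)
    norms[norms == 0.0] = 1.0                        # a point already at the centroid stays in place
    expanded = pts + margin * directions / norms
    return (lower, upper), expanded
```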
In this way, the cutting range is divided using the absolute positions of the sensors, and the projection points can at the same time be adjusted according to the user's requirements so as to enlarge or reduce the volume of the three-dimensional space. The effective area and/or the non-effective area in the model can thus be determined simply and automatically, providing support for subsequently clipping the model automatically.
In a second practical application, when determining the effective area and/or the non-effective area in the model by using the tilt angle of the sensor, referring to fig. 7, step S102 may include: substep S102B1 and substep S102B 2.
Sub-step S102B 1: and taking the position of the sensor in the three-dimensional coordinate system as a starting point, and making a ray along the inclination angle direction of the sensor to obtain the intersection point of the ray and the current surface of the model.
The current surface is parallel to one of the coordinate planes of the three-dimensional coordinate system, and its orientation is related to the tilt angle of the sensor. For example, as shown in FIG. 8, in the o-xyz coordinate system each triangle represents a sensor, the apex of the triangle represents the position of the sensor, and the orientation of the triangle represents the tilt angle of the sensor. Each of the five sensors in the figure casts one ray along its tilt direction, giving five rays; the five rays intersect the xoy plane, and a current surface parallel to the xoy plane can be determined accordingly. The intersection points of the five rays with the current surface of the model are J, K, L, M, and N, respectively. Alternatively, the first face of the model that a ray currently intersects may be taken as the current surface. Alternatively, a surface extending from the lowest horizontal point of the scene in the model during modeling may be taken as the current surface; this extended surface may be parallel to a coordinate plane of the three-dimensional coordinate system or may intersect one, and preferably the extended surface passes through the lowest horizontal point of the scene without intersecting other parts of the scene.
Specifically, in model reconstruction a camera device is often represented by a rectangular pyramid: the apex of the pyramid represents the position of the camera, the orientation of the pyramid represents the tilt angle of the camera in space, and the straight line connecting the apex with the center of the base is the ray of this embodiment.
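A minimal Python sketch of sub-step S102B1 under the assumption that the current surface is the plane z = plane_z (parallel to xoy); the ray starts at the sensor position and follows its tilt direction.

```python
import numpy as np

def ray_hits_current_surface(sensor_position, tilt_direction, plane_z=0.0):
    """Return the intersection of the sensor's ray with the plane z = plane_z
    (one of the points J, K, L, M, N in FIG. 8), or None if the ray never reaches it."""
    origin = np.asarray(sensor_position, dtype=float)
    direction = np.asarray(tilt_direction, dtype=float)
    if abs(direction[2]) < 1e-12:          # ray parallel to the current surface
        return None
    t = (plane_z - origin[2]) / direction[2]
    if t <= 0.0:                           # surface lies behind the sensor
        return None
    return origin + t * direction
```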
Sub-step S102B 2: and determining the effective area and/or the ineffective area in the model through the intersection points.
The embodiment utilizes the inclination angle of the sensor, and can automatically determine the effective area and/or the non-effective area in the model simply and conveniently.
The sub-step S102B2 may be implemented in a variety of ways. Specifically, a second straight line intersecting the current surface is made from the intersection point, and a plurality of second intersecting surfaces intersecting the current surface can be obtained; or, a plurality of second intersecting surfaces are obtained through the intersection points and the rays corresponding to the intersection points; or a plurality of second intersecting surfaces are obtained through the intersection points and the midpoints of the position connecting lines of any two sensors; or, a plurality of second intersecting surfaces are obtained through the midpoints of the common vertical line segments of the rays corresponding to the intersection points and the intersection points; the active and/or inactive regions in the model are then determined by the second intersecting surface. That is, the sub-step S102B2 may include: substeps S102B2a1 and substeps 102B2a2, as shown in fig. 9; alternatively, the sub-step S102B2 may include: substeps S102B2a3 and substeps 102B2a2, as shown in fig. 10; alternatively, the sub-step S102B2 may include: substeps S102B2a4 and substeps 102B2a2, as shown in fig. 11; alternatively, the sub-step S102B2 may include: substeps S102B2a5 and substep S102B2a 2.
Sub-step S102B2a 1: and making a second straight line which is vertical or inclined to the current surface from the intersection point, and further obtaining a plurality of second intersection surfaces which are intersected with the current surface through the second straight line.
In the present embodiment, the second intersecting surface may be a plane or a non-plane. The second intersecting surface may be obtained by intersecting a plane formed by two adjacent second straight lines with the current surface, or may be obtained by intersecting a plane formed by two non-adjacent second straight lines with the current surface. The specific obtaining manner of the second intersecting surface may be determined according to practical applications, and is not limited herein.
Or, the sub-step S102B2a 3: and obtaining a plurality of second intersecting surfaces through the intersection points and the rays corresponding to the intersection points.
In the present embodiment, the second intersecting surface may be a plane or a non-plane. For example: the second intersecting surface obtained by the two intersecting points and the two rays corresponding to the two intersecting points can be a plane or a curved surface; the second intersecting surface obtained by one intersection point and one ray corresponding to the intersection point is a plane. Illustratively, the second intersecting surface may be constructed using the ray corresponding to the intersection point J in fig. 8 (i.e., the ray forming the intersection point J) and any of the intersection points.
Or, the sub-step S102B2a 4: and obtaining a plurality of second intersecting surfaces through the intersection points and the midpoints of the position connecting lines of any two sensors.
Specifically, the midpoint of the position connecting line of any two sensors may be the midpoint of the position connecting line of the two sensors corresponding to the intersection point. Or may be the midpoint of a line connecting the positions of the two sensors that is independent of the intersection or associated with some of the intersections. Illustratively, the second intersecting surface may be constructed using the midpoint of the line connecting the positions of the sensors corresponding to the intersection point J, K and the intersection point J, K (i.e., the sensor whose inclination angle corresponds to the ray forming the intersection point J, K) in fig. 8. As another example, the second intersecting surface may be formed by a midpoint of a line connecting positions of the sensors corresponding to the intersection point J, K and the intersection point N, M. As another example, the second intersecting surface may be formed by a midpoint of a line connecting positions of the sensors corresponding to the intersection point J, K and the intersection point J, M.
Or, the sub-step S102B2a 5: and obtaining a plurality of second intersecting surfaces through the midpoints of the intersection points and the common vertical line segments of the rays corresponding to the intersection points.
Specifically, a common perpendicular can be obtained for any two rays. The part of the common perpendicular lying between the two rays is the common perpendicular segment. For example, the rays corresponding to the intersection points J and K in FIG. 8 (i.e., the rays that form J and K) give a common perpendicular; the part of that perpendicular lying between the two rays is taken as the common perpendicular segment, and a second intersecting surface can be constructed from the intersection points J, K and the midpoint of that segment. In another example, the second intersecting surface may also be constructed from the common perpendicular segment of the above example and the intersection points J, M. In another example, it may be constructed from that segment and the intersection points N, M.
Sub-step S102B2a2: determining the effective area and/or the non-effective area in the model through the second intersecting surfaces.
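As one concrete instance of these constructions, a Python sketch of sub-step S102B2a4 under the assumption that the second intersecting surface is the plane through two intersection points and the midpoint of the line joining their sensors' positions; the names are illustrative.

```python
import numpy as np

def second_intersecting_plane(intersection_1, intersection_2, sensor_1, sensor_2):
    """Plane through two intersection points and the midpoint of the line joining the
    corresponding sensor positions; returned as (point on plane, unit normal)."""
    p1 = np.asarray(intersection_1, dtype=float)
    p2 = np.asarray(intersection_2, dtype=float)
    mid = (np.asarray(sensor_1, dtype=float) + np.asarray(sensor_2, dtype=float)) / 2.0

    normal = np.cross(p2 - p1, mid - p1)
    length = np.linalg.norm(normal)
    if length < 1e-12:
        raise ValueError("the three points are collinear and do not define a plane")
    return p1, normal / length
```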
In the embodiment, multiple implementation modes of the effective region and/or the non-effective region in the model are determined through the intersection points, and by the mode, the cutting range is divided by utilizing the inclination angle of the sensor, the effective region and/or the non-effective region in the model can be simply and conveniently determined automatically, so that support is provided for automatically shearing the model subsequently.
In this case, step S103 may specifically include: and cutting along the second intersecting surface and the current surface of the model to remove the non-effective area in the model.
In a third practical application, when determining the active area and/or the inactive area in the model by using the position of the sensor and the inclination angle of the sensor, step S102 may include: substeps S102C1 and substeps 102C2, as shown in fig. 12.
Sub-step S102C 1: and projecting the position of the sensor in the three-dimensional coordinate system onto at least one projection surface of the three-dimensional coordinate system to obtain a projection point, and taking the position of the sensor in the three-dimensional coordinate system as a starting point, making a ray along the inclination direction of the sensor to obtain an intersection point of the ray and the current surface of the model.
Sub-step S102C 2: and determining an effective area and/or a non-effective area in the model according to the projection points and the intersection points.
The embodiment can automatically determine the effective area and/or the ineffective area in the model simply and conveniently by using the position of the sensor and the inclination angle of the sensor.
Specifically, the intersection or the union of a first cutting range determined by the projection points and a second cutting range determined by the intersection points may be taken as the effective area. That is, step S102C2 may include: sub-steps S102C2a1 and S102C2a2, as shown in FIG. 13.
Sub-step S102C2a 1: and determining a first cutting range of the model according to the projection points, and determining a second cutting range of the model according to the intersection points.
Sub-step S102C2a 2: and taking the intersection or union of the first cutting range and the second cutting range as an effective area.
In this way, the intersection or the union of the first cutting range determined by the projection points and the second cutting range determined by the intersection points is taken as the effective area, so the effective area and/or the non-effective area in the model can be determined simply and automatically, providing support for subsequently clipping the model automatically. When sub-step S102C2a2 takes the intersection of the first cutting range and the second cutting range as the effective area, irrelevant content outside the user's region of interest can be further filtered out and the modeling time is reduced; when sub-step S102C2a2 takes the union of the two ranges as the effective area, the content of the user's region of interest is guaranteed to be fully included in the model, reducing the error a single criterion might introduce and avoiding accidental loss of effective information.
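A minimal sketch of sub-step S102C2a2, assuming the two cutting ranges have already been evaluated as per-vertex boolean masks over the model.

```python
import numpy as np

def combine_cutting_ranges(in_first_range, in_second_range, mode="intersection"):
    """Effective-area mask from the first cutting range (projection points) and the
    second cutting range (intersection points): stricter intersection or safer union."""
    first = np.asarray(in_first_range, dtype=bool)
    second = np.asarray(in_second_range, dtype=bool)
    return first & second if mode == "intersection" else first | second
```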
Referring to FIG. 14 and FIG. 15, both show a model obtained by three-dimensional modeling using a series of pictures taken around a stone sculpture. FIG. 14 is the uncut model, and FIG. 15 is the model after cutting with the automatically calculated range. As can be seen, FIG. 14 clearly contains many edge scenes and the stone sculpture occupies only a small part of the whole model, whereas FIG. 15 contains few edge scenes and the stone sculpture occupies the main part of the model. In this way, the volume of the model and the three-dimensional space it occupies can be reduced, noise interference can be avoided, model reconstruction and rendering can be accelerated, the user can focus only on the region of interest, and the model reconstruction quality can be ensured or even improved.
In a fourth practical application, when determining the active area and/or the inactive area in the model by using the position of the sensor and the range of the sensor, step S102 may include:
step S102D: and determining an effective area and/or a non-effective area in the model according to the position of the sensor and the measuring range of the sensor.
Specifically, in one embodiment, step S102D may include: taking the region within the measuring range of the sensor from the position of the sensor as the effective area.
In another embodiment, step S102D may include: and determining an effective area and/or a non-effective area in the model according to the set proportion of the positions of the sensors and the measuring ranges of the sensors.
Further, step S102D may also include: taking the region within the set proportion of the measuring range of the sensor from the position of the sensor as the effective area.
The embodiment can automatically determine the effective area and/or the ineffective area in the model simply and conveniently by using the position of the sensor and the measuring range of the sensor.
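A minimal Python sketch of step S102D, assuming the effective area is the set of model vertices within a set proportion of the measuring range from at least one sensor position (a proportion of 1.0 gives the plain measuring-range variant).

```python
import numpy as np

def effective_by_measuring_range(model_points, sensor_positions, measuring_range, proportion=1.0):
    """True for vertices within proportion * measuring_range of at least one sensor."""
    pts = np.asarray(model_points, dtype=float)              # (N, 3) model vertices
    sensors = np.asarray(sensor_positions, dtype=float)      # (M, 3) sensor positions
    dists = np.linalg.norm(pts[:, None, :] - sensors[None, :, :], axis=2)
    return (dists <= proportion * measuring_range).any(axis=1)
```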
If the sensor includes a camera device, the viewing cone of the camera device can also be used to determine the effective area and/or the non-effective area in the model. In a fifth practical application, when the viewing cone of the camera device is used to determine the effective area and/or the non-effective area in the model, step S102 may include:
step S102E: and determining an effective area and/or a non-effective area in the model according to the position of the camera device and the view cone of the camera device.
In step S102E, determining an effective region and/or a non-effective region in the model according to the position of the camera and the view cone of the camera, which may specifically include: substeps S102E1 and substep S102E2, as shown in fig. 16.
Sub-step S102E 1: and determining a three-dimensional space corresponding to the viewing cone of each camera by taking the position of each camera as a vertex.
Sub-step S102E 2: and taking the union of the three-dimensional spaces corresponding to the viewing cones of all the camera devices as an effective area.
The embodiment can automatically determine the effective area and/or the ineffective area in the model simply and conveniently by utilizing the position of the camera device and the viewing cone of the camera device.
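A sketch of sub-steps S102E1 and S102E2 under a strong simplification: each camera's viewing cone is approximated by a right circular cone given as (position, unit view direction, half field-of-view angle, far distance). A real view frustum would use separate horizontal and vertical angles and near/far planes; the cone shape and the names are assumptions for brevity.

```python
import numpy as np

def inside_any_viewing_cone(model_points, cameras):
    """Union of the cameras' approximated viewing cones: True for vertices that fall
    inside at least one cone, i.e. the effective area of sub-step S102E2."""
    pts = np.asarray(model_points, dtype=float)
    keep = np.zeros(len(pts), dtype=bool)
    for position, direction, half_fov, far in cameras:
        offset = pts - np.asarray(position, dtype=float)
        dist = np.linalg.norm(offset, axis=1)
        along = offset @ np.asarray(direction, dtype=float)   # signed distance along the view direction
        cos_angle = np.divide(along, dist, out=np.ones_like(dist), where=dist > 0)
        keep |= (along > 0.0) & (dist <= far) & (cos_angle >= np.cos(half_fov))
    return keep
```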
Referring to FIG. 17, FIG. 17 is a schematic structural diagram of an embodiment of the apparatus for automatic model clipping of the present application. It should be noted that this apparatus can perform the steps of the method for automatic model clipping described above; for a detailed description of the related content, refer to the method above, which is not repeated here.
The apparatus 10 comprises: a memory 11 and a processor 12; the memory 11 and the processor 12 are connected by a bus 13.
The processor 12 may be a micro-control unit, a central processing unit, a digital signal processor, or the like.
The memory 11 may be a Flash chip, a read-only memory, a magnetic disk, an optical disk, a usb disk, or a removable hard disk, among others.
The memory 11 is used for storing a computer program;
the processor 12 is arranged to execute the computer program and when executing the computer program, to carry out the steps of:
acquiring sensor information in a process of performing model reconstruction by using data acquired by a sensor; determining an effective area and/or an ineffective area in the model according to the sensor information, wherein the ineffective area is an area outside the effective area in the model; and shearing the model to remove the non-effective area in the model.
In the embodiments of the present application, during model reconstruction using data collected by a sensor, an effective area in the model and/or a non-effective area outside the effective area are determined according to the acquired sensor information, and the model is clipped to remove the non-effective area. Because this is done during the reconstruction process, the reconstruction does not need to be interrupted and the reconstruction speed is not reduced. Cutting off the non-effective area reduces the volume of the model and the three-dimensional space it occupies, avoids noise interference, speeds up model reconstruction and rendering, lets the user focus only on the region of interest, and ensures or even improves the reconstruction quality. Moreover, because the determination and clipping are completed automatically without user involvement, the result does not depend on user experience; the clipped model is guaranteed to retain the effective area, and the model quality is stable and repeatable.
The sensor information comprises one or more of the position of the sensor, the inclination angle of the sensor and the measuring range of the sensor, and the position of the sensor and the inclination angle of the sensor are the position and the inclination angle of the sensor relative to the acquired scene.
Wherein, when the processor executes the computer program, the following steps are realized: an active area and/or a non-active area in the model is determined based on the position of the sensor and/or the tilt angle of the sensor.
Wherein, when the processor executes the computer program, the following steps are realized: and determining an effective area and/or a non-effective area in the model according to the position of the sensor and/or the inclination angle of the sensor in the three-dimensional coordinate system of the model.
Wherein, when the processor executes the computer program, the following steps are realized: projecting the position of the sensor in the three-dimensional coordinate system onto at least one projection surface of the three-dimensional coordinate system to obtain a projection point; and determining an effective area and/or an ineffective area in the model according to the projection points on the projection surface.
Wherein, when the processor executes the computer program, the following steps are realized: making a first straight line which is vertical to or inclined to the projection plane from the projection point, and further obtaining a plurality of first intersecting surfaces which are intersected with the projection plane through the first straight line; the active area and/or the inactive area in the model is determined by the first intersecting surface.
Wherein, when the processor executes the computer program, the following steps are realized: and cutting along the first intersecting surface and the projection surface to remove the non-effective area in the model.
Wherein, when the processor executes the computer program, the following steps are realized: and determining an effective area and/or a non-effective area in the model through a stereo space formed by the positions of the sensors and the projection points in the three-dimensional coordinate system of the model.
Wherein, when the processor executes the computer program, the following steps are realized: cuts are made along several faces of the volume to remove inactive areas in the model.
Wherein, when the processor executes the computer program, the following steps are realized: and processing the projection points, wherein the processing comprises one or more of requiring the projection points to expand outwards, requiring the projection points to contract inwards, forming a bounding box through the projection points, and requiring the position of the sensor and the size of a stereoscopic space formed by the projection points to be larger than or equal to a space threshold value.
The requirement for outward expansion of the projection point comprises one or more of the requirement for outward expansion of the projection point along a first projection edge in a direction far away from the first projection edge, the requirement for outward expansion along a second projection edge in a direction far away from the second projection edge, or the requirement for outward expansion in an included angle range of the first projection edge and the second projection edge, wherein the first projection edge and the second projection edge are two projection edges which intersect at the projection point; the requirement for the projection point to shrink inwards comprises one or more of the requirement for the projection point to shrink inwards along the first projection edge in a direction away from the projection point, the requirement for the projection point to shrink inwards along the second projection edge in a direction away from the projection point, or the requirement for the projection point to shrink inwards within the range of the included angle between the first projection edge and the second projection edge.
Wherein, when the processor executes the computer program, the following steps are realized: taking the position of the sensor in the three-dimensional coordinate system as a starting point, and making a ray along the inclination angle direction of the sensor to obtain an intersection point of the ray and the current surface of the model; and determining the effective area and/or the ineffective area in the model through the intersection points.
Wherein, when the processor executes the computer program, the following steps are realized: making a second straight line perpendicular to or inclined to the current surface from the intersection point, and further obtaining a plurality of second intersecting surfaces intersecting the current surface through the second straight line; or obtaining a plurality of second intersecting surfaces through the intersection points and the rays corresponding to the intersection points; or obtaining a plurality of second intersecting surfaces through the intersection points and the midpoints of the lines connecting the positions of any two sensors; or obtaining a plurality of second intersecting surfaces through the intersection points and the midpoints of the common perpendicular segments of the rays corresponding to the intersection points; and determining the effective area and/or the ineffective area in the model through the second intersecting surfaces.
Wherein, when the processor executes the computer program, the following steps are realized: cutting along the second intersecting surfaces and the current surface of the model to remove the non-effective area in the model.
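Purely as an illustration under simplifying assumptions (a vertical cutting plane whose normal is taken from the horizontal component of the corresponding ray; the side kept as effective is chosen arbitrarily for the example), a second intersecting surface and the cut along it could look like this:

import numpy as np

def cutting_plane_through(intersection, ray_dir):
    # Build a vertical plane (point, normal) through the intersection point;
    # assumes the ray is not vertical, so its horizontal component is non-zero.
    n = np.array([ray_dir[0], ray_dir[1], 0.0], dtype=float)
    n /= np.linalg.norm(n)
    return np.asarray(intersection, dtype=float), n

def keep_effective_side(model_points, plane_point, plane_normal):
    # Keep model points on one side of the cutting plane; points on the other
    # side are treated as the non-effective area and removed.
    p = np.asarray(model_points, dtype=float)
    return p[(p - plane_point) @ plane_normal <= 0.0]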
Wherein, when the processor executes the computer program, the following steps are realized: projecting the position of the sensor in the three-dimensional coordinate system onto at least one projection surface of the three-dimensional coordinate system to obtain a projection point, and taking the position of the sensor in the three-dimensional coordinate system as a starting point, making a ray along the inclination angle direction of the sensor to obtain an intersection point of the ray and the current surface of the model; and determining an effective area and/or a non-effective area in the model according to the projection points and the intersection points.
Wherein, when the processor executes the computer program, the following steps are realized: determining a first cutting range of the model according to the projection points, and determining a second cutting range of the model according to the intersection points; and taking the intersection or union of the first cutting range and the second cutting range as the effective area.
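A minimal sketch of combining the two cutting ranges, assuming for illustration that each range is represented as an axis-aligned box (min_xyz, max_xyz):

import numpy as np

def combine_cutting_ranges(range_a, range_b, mode="intersection"):
    # Combine the first and second cutting ranges; the result is used as the
    # effective area of the model. With "intersection" the ranges may be
    # disjoint, in which case the returned box is empty (lower bound > upper bound).
    (a_min, a_max), (b_min, b_max) = range_a, range_b
    if mode == "intersection":
        return np.maximum(a_min, b_min), np.minimum(a_max, b_max)
    return np.minimum(a_min, b_min), np.maximum(a_max, b_max)  # union (enclosing box)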
Wherein the three-dimensional coordinate system comprises a global coordinate system or a local coordinate system.
Wherein, when the processor executes the computer program, the following steps are realized: determining an effective area and/or a non-effective area in the model according to the position of the sensor and the measuring range of the sensor.
Wherein, when the processor executes the computer program, the following steps are realized: taking, as the effective area, the range between the position of the sensor and the measuring range of the sensor.
Wherein, when the processor executes the computer program, the following steps are realized: determining an effective area and/or a non-effective area in the model according to the position of the sensor and a set proportion of the measuring range of the sensor.
Wherein, when the processor executes the computer program, the following steps are realized: taking, as the effective area, the range between the position of the sensor and the set proportion of the measuring range of the sensor.
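For illustration only, a sketch of range-based cropping under the assumption that the effective area is the union of spheres of radius ratio * sensor_range around the sensor positions (ratio = 1.0 reproduces the full measuring range):

import numpy as np

def crop_by_measuring_range(model_points, sensor_positions, sensor_range, ratio=1.0):
    # Keep model points within ratio * sensor_range of at least one sensor
    # position; everything farther away is treated as non-effective and removed.
    p = np.asarray(model_points, dtype=float)
    s = np.asarray(sensor_positions, dtype=float)
    d = np.linalg.norm(p[:, None, :] - s[None, :, :], axis=2)   # (n_points, n_sensors)
    return p[(d <= ratio * sensor_range).any(axis=1)]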
Wherein the sensor comprises a camera device, and the sensor information further comprises a viewing cone of the camera device.
Wherein, when the processor executes the computer program, the following steps are realized: determining an effective area and/or a non-effective area in the model according to the position of the camera device and the viewing cone of the camera device.
Wherein, when the processor executes the computer program, the following steps are realized: determining a three-dimensional space corresponding to the viewing cone of each camera device by taking the position of each camera device as a vertex; and taking the union of the three-dimensional spaces corresponding to the viewing cones of all the camera devices as the effective area.
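A simplified sketch, for illustration only, of testing whether a point of the model falls inside at least one camera device's viewing cone; the cone is idealised here as a circular cone with near/far depth limits and a hypothetical half field-of-view angle, rather than a true rectangular frustum:

import numpy as np

def in_any_viewing_cone(point, cameras, near=0.1, far=100.0, half_fov_deg=35.0):
    # cameras is a list of (position, unit_forward_vector) pairs, one per device;
    # a point kept by this test belongs to the union of the viewing cones, i.e.
    # to the effective area.
    p = np.asarray(point, dtype=float)
    cos_half = np.cos(np.radians(half_fov_deg))
    for pos, fwd in cameras:
        fwd = np.asarray(fwd, dtype=float)
        v = p - np.asarray(pos, dtype=float)
        depth = float(v @ fwd)                       # distance along the viewing axis
        if not (near <= depth <= far):
            continue
        if depth >= cos_half * np.linalg.norm(v):    # within the cone's half-angle
            return True
    return False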
The present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement any of the methods of automatic model cutting described above. For a detailed description of the relevant contents, refer to the description of the method above, which is not repeated here.
The computer-readable storage medium may be an internal storage unit of any of the above apparatuses for automatically cutting a model, for example a hard disk or a memory of the apparatus. The computer-readable storage medium may also be an external storage device of the apparatus, such as a plug-in hard disk, a smart memory card, a secure digital card, or a flash memory card provided on the apparatus.
In the embodiments of the present application, during model reconstruction using data collected by a sensor, an effective area in the model and/or a non-effective area outside the effective area are determined according to the acquired sensor information, and the model is cut to remove the non-effective area. Because the effective area and/or the non-effective area are determined from the acquired sensor information, the cutting is completed within the model reconstruction process without interrupting it, so the reconstruction speed is not reduced. Removing the non-effective area reduces the volume of the model and the three-dimensional space it occupies, avoids noise interference, speeds up model reconstruction and rendering, allows the user to focus only on the region of interest, and ensures or even improves the quality of model reconstruction. Moreover, because the effective area and/or the non-effective area are determined automatically from the sensor information during reconstruction, no user participation is involved; the result therefore does not depend on user experience, the cut model is guaranteed to retain the effective area, and the quality of the model is stable and repeatable.
It is to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
The above description is only for the specific embodiment of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (49)

1. A method for automatically cutting a model is characterized by comprising the following steps:
acquiring sensor information in a process of performing model reconstruction by using data acquired by a sensor;
determining an active area and/or an inactive area in the model according to the sensor information, wherein the inactive area is an area outside the active area in the model;
and shearing the model to remove the non-effective area in the model.
2. The method of claim 1, wherein the sensor information comprises one or more of a position of the sensor, an inclination angle of the sensor, and a measuring range of the sensor, the position of the sensor and the inclination angle of the sensor being the position and inclination angle of the sensor relative to the acquired scene.
3. The method of claim 2, wherein determining active and/or inactive regions in the model from the sensor information comprises:
and determining an effective area and/or a non-effective area in the model according to the position of the sensor and/or the inclination angle of the sensor.
4. The method of claim 3, wherein determining the active area and/or the inactive area in the model based on the position of the sensor and/or the tilt angle of the sensor comprises:
and determining an effective area and/or a non-effective area in the model according to the position of the sensor and/or the inclination angle of the sensor in the three-dimensional coordinate system of the model.
5. The method of claim 4, wherein determining the active area and/or the inactive area in the model based on the position of the sensor and/or the inclination of the sensor in the three-dimensional coordinate system of the model comprises:
projecting the position of the sensor in the three-dimensional coordinate system onto at least one projection surface of the three-dimensional coordinate system to obtain a projection point;
and determining an effective area and/or a non-effective area in the model according to the projection point on the projection plane.
6. The method of claim 5, wherein determining the active area and/or the inactive area in the model from the projection points on the projection surface comprises:
making a first straight line which is perpendicular to or inclined to the projection plane from the projection point, and further obtaining a plurality of first intersecting surfaces which are intersected with the projection plane through the first straight line;
and determining an effective area and/or an ineffective area in the model through the first intersecting surface.
7. The method of claim 6, wherein said cropping the model to remove non-active areas in the model comprises:
and cutting along the first intersecting surface and the projection surface to remove the non-effective area in the model.
8. The method of claim 5, wherein determining the active area and/or the inactive area in the model from the projection points on the projection surface comprises:
and determining an effective area and/or an ineffective area in the model through a stereo space formed by the positions of the sensors and the projection points in the three-dimensional coordinate system of the model.
9. The method of claim 8, wherein said cropping the model to remove non-active areas in the model comprises:
cuts are made along several faces of the three-dimensional space to remove inactive areas in the model.
10. The method according to any one of claims 6-9, wherein before determining the active region and/or the inactive region in the model from the projection points on the projection plane, further comprising:
processing the projection points, the processing including one or more of requiring the projection points to expand outward, requiring the projection points to contract inward, forming a bounding box through the projection points, and requiring the size of the stereo space formed by the position of the sensor and the projection points to be greater than or equal to a space threshold.
11. The method of claim 10, wherein the requiring the projection point to expand outward comprises one or more of: requiring the projection point to expand outward along a first projection edge in a direction away from the first projection edge, to expand outward along a second projection edge in a direction away from the second projection edge, or to expand outward within the range of the included angle between the first projection edge and the second projection edge, the first projection edge and the second projection edge being two projection edges that intersect at the projection point;
the requiring the projection point to contract inward comprises one or more of: requiring the projection point to contract inward along the first projection edge in a direction away from the projection point, to contract inward along the second projection edge in a direction away from the projection point, or to contract inward within the range of the included angle between the first projection edge and the second projection edge.
12. The method of claim 4, wherein determining the active area and/or the inactive area in the model based on the position of the sensor and/or the inclination of the sensor in the three-dimensional coordinate system of the model comprises:
taking the position of the sensor in the three-dimensional coordinate system as a starting point, and making a ray along the inclination angle direction of the sensor to obtain an intersection point of the ray and the current surface of the model;
and determining an effective area and/or an ineffective area in the model through the intersection points.
13. The method of claim 12, wherein said determining the effective area and/or the ineffective area in the model through the intersection points comprises:
making a second straight line which is perpendicular to or inclined to the current surface from the intersection point, and further obtaining a plurality of second intersecting surfaces which intersect the current surface through the second straight line;
or obtaining a plurality of second intersecting surfaces through the intersection points and the rays corresponding to the intersection points;
or obtaining a plurality of second intersecting surfaces through the intersection points and the midpoints of the lines connecting the positions of any two sensors;
or obtaining a plurality of second intersecting surfaces through the intersection points and the midpoints of the common perpendicular segments of the rays corresponding to the intersection points;
and determining an effective area and/or an ineffective area in the model through the second intersecting surface.
14. The method of claim 13, wherein said cropping the model to remove non-active areas in the model comprises:
and cutting along the second intersecting surface and the current surface of the model to remove the non-effective area in the model.
15. The method of claim 4, wherein determining the active area and/or the inactive area in the model based on the position of the sensor and/or the inclination of the sensor in the three-dimensional coordinate system of the model comprises:
projecting the position of the sensor in the three-dimensional coordinate system onto at least one projection surface of the three-dimensional coordinate system to obtain a projection point, and taking the position of the sensor in the three-dimensional coordinate system as a starting point, making a ray along the inclination direction of the sensor to obtain an intersection point of the ray and the current surface of the model;
and determining an effective region and/or a non-effective region in the model according to the projection points and the intersection points.
16. The method of claim 15, wherein determining the active region and/or the inactive region in the model from the projection points and the intersection points comprises:
determining a first cutting range of the model according to the projection point, and determining a second cutting range of the model according to the intersection point;
and taking the intersection or union of the first cutting range and the second cutting range as an effective area.
17. The method according to any one of claims 4-16, wherein the three-dimensional coordinate system comprises a global coordinate system or a local coordinate system.
18. The method of claim 2, wherein determining active and/or inactive regions in the model from the sensor information comprises:
and determining an effective area and/or a non-effective area in the model according to the position of the sensor and the measuring range of the sensor.
19. The method of claim 18, wherein determining the active area and/or the inactive area in the model based on the position of the sensor and the range of the sensor comprises:
the range between the position of the sensor and the measuring range of the sensor is taken as an effective area.
20. The method of claim 18, wherein determining the active area and/or the inactive area in the model based on the position of the sensor and the range of the sensor comprises:
and determining an effective area and/or an ineffective area in the model according to the set proportion of the positions of the sensors and the measuring ranges of the sensors.
21. The method of claim 20, wherein determining the active area and/or the inactive area in the model based on the position of the sensor and the set ratio of the span of the sensor comprises:
the range between the position of the sensor and the set proportion of the range of the sensor is taken as an effective area.
22. The method of claim 2, wherein the sensor comprises a camera and the sensor information further comprises a viewing cone of the camera.
23. The method of claim 22, wherein determining active and/or inactive regions in the model from the sensor information comprises:
and determining an effective area and/or an ineffective area in the model according to the position of the camera device and the viewing cone of the camera device.
24. The method of claim 23, wherein determining the active area and/or the inactive area in the model based on the position of the camera and the viewing cone of the camera comprises:
determining a three-dimensional space corresponding to a viewing cone of each camera by taking the position of each camera as a vertex;
and taking the union of the three-dimensional spaces corresponding to the viewing cones of all the camera devices as an effective area.
25. An apparatus for automatically shearing a model, said apparatus comprising: a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the steps of:
acquiring sensor information in a process of performing model reconstruction by using data acquired by a sensor;
determining an active area and/or an inactive area in the model according to the sensor information, wherein the inactive area is an area outside the active area in the model;
and shearing the model to remove the non-effective area in the model.
26. The apparatus of claim 25, wherein the sensor information comprises one or more of a position of the sensor, an inclination angle of the sensor, and a measuring range of the sensor, the position of the sensor and the inclination angle of the sensor being the position and inclination angle of the sensor relative to the acquired scene.
27. The apparatus of claim 26, wherein the processor, when executing the computer program, performs the steps of:
and determining an effective area and/or a non-effective area in the model according to the position of the sensor and/or the inclination angle of the sensor.
28. The apparatus of claim 27, wherein the processor, when executing the computer program, performs the steps of:
and determining an effective area and/or a non-effective area in the model according to the position of the sensor and/or the inclination angle of the sensor in the three-dimensional coordinate system of the model.
29. The apparatus of claim 28, wherein the processor, when executing the computer program, performs the steps of:
projecting the position of the sensor in the three-dimensional coordinate system onto at least one projection surface of the three-dimensional coordinate system to obtain a projection point;
and determining an effective area and/or a non-effective area in the model according to the projection point on the projection plane.
30. The apparatus of claim 29, wherein the processor, when executing the computer program, performs the steps of:
making a first straight line which is perpendicular to or inclined to the projection plane from the projection point, and further obtaining a plurality of first intersecting surfaces which are intersected with the projection plane through the first straight line;
and determining an effective area and/or an ineffective area in the model through the first intersecting surface.
31. The apparatus of claim 30, wherein the processor, when executing the computer program, performs the steps of:
and cutting along the first intersecting surface and the projection surface to remove the non-effective area in the model.
32. The apparatus of claim 29, wherein the processor, when executing the computer program, performs the steps of:
and determining an effective area and/or an ineffective area in the model through a stereo space formed by the positions of the sensors and the projection points in the three-dimensional coordinate system of the model.
33. The apparatus of claim 32, wherein the processor, when executing the computer program, performs the steps of:
cuts are made along several faces of the three-dimensional space to remove inactive areas in the model.
34. The apparatus according to any of the claims 30-33, wherein the processor, when executing the computer program, performs the steps of:
processing the projection points, the processing including one or more of requiring the projection points to expand outward, requiring the projection points to contract inward, forming a rectangular bounding box through the projection points, and requiring the size of the stereo space formed by the position of the sensor and the projection points to be greater than or equal to a space threshold.
35. The apparatus of claim 34, wherein the requirement for the projection point to expand outward comprises one or more of: requiring the projection point to expand outward along a first projection edge in a direction away from the first projection edge, to expand outward along a second projection edge in a direction away from the second projection edge, or to expand outward within the range of the included angle between the first projection edge and the second projection edge, the first projection edge and the second projection edge being two projection edges that intersect at the projection point;
the requirement for the projection point to contract inward comprises one or more of: requiring the projection point to contract inward along the first projection edge in a direction away from the projection point, to contract inward along the second projection edge in a direction away from the projection point, or to contract inward within the range of the included angle between the first projection edge and the second projection edge.
36. The apparatus of claim 28, wherein the processor, when executing the computer program, performs the steps of:
taking the position of the sensor in the three-dimensional coordinate system as a starting point, and making a ray along the inclination angle direction of the sensor to obtain an intersection point of the ray and the current surface of the model;
and determining an effective area and/or an ineffective area in the model through the intersection points.
37. The apparatus of claim 36, wherein the processor, when executing the computer program, performs the steps of:
making a second straight line which is perpendicular to or inclined to the current surface from the intersection point, and further obtaining a plurality of second intersecting surfaces which intersect the current surface through the second straight line;
or obtaining a plurality of second intersecting surfaces through the intersection points and the rays corresponding to the intersection points;
or obtaining a plurality of second intersecting surfaces through the intersection points and the midpoints of the lines connecting the positions of any two sensors;
or obtaining a plurality of second intersecting surfaces through the intersection points and the midpoints of the common perpendicular segments of the rays corresponding to the intersection points;
and determining an effective area and/or an ineffective area in the model through the second intersecting surface.
38. The apparatus of claim 37, wherein the processor, when executing the computer program, performs the steps of:
and cutting along the second intersecting surface and the current surface of the model to remove the non-effective area in the model.
39. The apparatus of claim 28, wherein the processor, when executing the computer program, performs the steps of:
projecting the position of the sensor in the three-dimensional coordinate system onto at least one projection surface of the three-dimensional coordinate system to obtain a projection point, and taking the position of the sensor in the three-dimensional coordinate system as a starting point, making a ray along the inclination direction of the sensor to obtain an intersection point of the ray and the current surface of the model;
and determining an effective region and/or a non-effective region in the model according to the projection points and the intersection points.
40. The apparatus according to claim 39, wherein the processor, when executing the computer program, performs the steps of:
determining a first cutting range of the model according to the projection point, and determining a second cutting range of the model according to the intersection point;
and taking the intersection or union of the first cutting range and the second cutting range as an effective area.
41. The apparatus of any one of claims 28-40, wherein the three-dimensional coordinate system comprises a global coordinate system or a local coordinate system.
42. The apparatus of claim 26, wherein the processor, when executing the computer program, performs the steps of:
and determining an effective area and/or a non-effective area in the model according to the position of the sensor and the measuring range of the sensor.
43. The apparatus according to claim 42, wherein the processor, when executing the computer program, performs the steps of:
the range between the position of the sensor and the measuring range of the sensor is taken as an effective area.
44. The apparatus according to claim 42, wherein the processor, when executing the computer program, performs the steps of:
and determining an effective area and/or an ineffective area in the model according to the set proportion of the positions of the sensors and the measuring ranges of the sensors.
45. The apparatus according to claim 44, wherein the processor, when executing the computer program, performs the steps of:
the range between the position of the sensor and the set proportion of the range of the sensor is taken as an effective area.
46. The apparatus of claim 26, wherein the sensor comprises a camera, and wherein the sensor information further comprises a viewing cone of the camera.
47. The apparatus according to claim 46, wherein the processor, when executing the computer program, performs the steps of:
and determining an effective area and/or an ineffective area in the model according to the position of the camera device and the viewing cone of the camera device.
48. The apparatus according to claim 47, wherein the processor, when executing the computer program, performs the steps of:
determining a three-dimensional space corresponding to a viewing cone of each camera by taking the position of each camera as a vertex;
and taking the union of the three-dimensional spaces corresponding to the viewing cones of all the camera devices as an effective area.
49. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement a method of model auto-clipping as claimed in any one of claims 1 to 24.


