WO2019100216A1 - 3D modeling method, electronic device, storage medium and program product - Google Patents

3D modeling method, electronic device, storage medium and program product Download PDF

Info

Publication number
WO2019100216A1
WO2019100216A1 (application PCT/CN2017/112194, CN2017112194W)
Authority
WO
WIPO (PCT)
Prior art keywords
feature point
image
coordinate
axis
feature
Prior art date
Application number
PCT/CN2017/112194
Other languages
English (en)
French (fr)
Inventor
邓伍华
刘兴慧
Original Assignee
深圳市柔宇科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市柔宇科技有限公司
Priority to CN201780092159.2A priority Critical patent/CN110785792A/zh
Priority to PCT/CN2017/112194 priority patent/WO2019100216A1/zh
Publication of WO2019100216A1 publication Critical patent/WO2019100216A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering

Definitions

  • the present invention relates to the field of video processing, and in particular, to a 3D modeling method, an electronic device, a storage medium, and a program product.
  • In the related art, a dual camera is mostly used to photograph a spatial object from multiple angles, and the range of each feature point on the object is then located according to the principle of triangulation, where the range of a feature point is the distance between the feature point and the plane in which the dual camera lies.
  • Specifically, assuming the two cameras lie in the same plane, the distance between a feature point and the camera plane can be calculated from the distance between the two cameras, the distance between the focal plane of the dual camera and the camera plane, and the difference between the positions of the same feature point in the two captured images; the ranges of the feature points are then used to build a 3D model of the spatial object.
  • However, a dual camera is costly, and because of its focusing problem it carries a certain focus error, which affects the accuracy of the ranging and in turn the accuracy of the 3D model.
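As an illustration of the dual-camera ranging principle criticized above, the depth of a feature point in a rectified two-camera setup follows directly from the baseline, the focal distance and the disparity. The sketch below is a minimal illustration of that triangulation relation under simple pinhole assumptions, not part of the claimed method; the function and argument names are ours.

```python
def stereo_depth(baseline, focal_length, disparity):
    """Triangulation with two coplanar cameras: the distance from a feature point
    to the camera plane is baseline * focal_length / disparity, where the disparity
    is the difference between the positions of the same feature point in the two
    captured images (focal length and disparity in the same pixel units)."""
    return baseline * focal_length / disparity

# e.g. a 0.10 m baseline, an 800-pixel focal length and a 20-pixel disparity give 4.0 m
print(stereo_depth(0.10, 800, 20))
```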
  • the present invention aims to solve at least one of the technical problems in the related art to some extent.
  • An object of the present invention is to provide a 3D modeling method that can realize 3D modeling of a spatial object with a single camera device, so as to solve the high cost and the low accuracy of the constructed 3D model in existing dual-camera 3D modeling.
  • Another object of the present invention is to propose a 3D modeling apparatus.
  • Another object of the present invention is to provide an electronic device.
  • Another object of the present invention is to provide a non-transitory computer readable storage medium.
  • Another object of the present invention is to provide a computer program product.
  • The 3D modeling method proposed by the first aspect of the present invention includes: turning on the camera device to perform omnidirectional shooting of the target object to be modeled; identifying the feature points of the target object one by one during the omnidirectional shooting; acquiring the motion trajectory of the camera device during the recognition of each feature point; determining the spatial coordinates of each feature point according to the motion trajectory corresponding to that feature point; and performing 3D modeling of the target object based on the spatial coordinates of every feature point.
  • In the 3D modeling method of the embodiment of the present invention, the camera device is turned on to photograph the target object to be modeled in all directions, the feature points of the target object are identified one by one during the omnidirectional shooting, the motion trajectory of the camera device during the recognition of each feature point is acquired, the spatial coordinates of each feature point are determined from its corresponding motion trajectory, and the target object is 3D-modeled based on the spatial coordinates of every feature point.
  • In the related art, a dual camera is used for 3D modeling; because the dual camera needs focusing and a focus error is introduced during focusing, the feature point ranging of the object is inaccurate. Using only one camera device avoids the focus error introduced by the focusing process, thereby improving the accuracy of the 3D modeling.
  • Moreover, 3D modeling of the target object is achieved with a single imaging device, so only one imaging device is required, which reduces cost compared with the related art in which a dual camera is used for 3D modeling.
  • the second aspect of the present invention provides a 3D modeling apparatus, including:
  • a shooting module, configured to turn on the camera device to perform omnidirectional shooting of the target object to be modeled;
  • a recognition module, configured to identify the feature points of the target object one by one during the omnidirectional shooting;
  • an acquiring module, configured to acquire the motion trajectory of the camera device during the recognition of each feature point;
  • a determining module, configured to determine the spatial coordinates of each feature point according to the motion trajectory corresponding to that feature point;
  • a modeling module, configured to perform 3D modeling of the target object based on the spatial coordinates of every feature point.
  • In the 3D modeling apparatus of the embodiment of the present invention, the camera device is turned on to photograph the target object to be modeled in all directions, the feature points of the target object are identified one by one during the omnidirectional shooting, the motion trajectory of the camera device during the recognition of each feature point is acquired, the spatial coordinates of each feature point are determined from its corresponding motion trajectory, and the target object is 3D-modeled based on the spatial coordinates of every feature point.
  • In the related art, a dual camera is used for 3D modeling; because the dual camera needs focusing and a focus error is introduced during focusing, the feature point ranging of the object is inaccurate. Using only one camera device avoids the focus error introduced by the focusing process, thereby improving the accuracy of the 3D modeling.
  • Moreover, 3D modeling of the target object is achieved with a single imaging device, so only one imaging device is required, which reduces cost compared with the related art in which a dual camera is used for 3D modeling.
  • An electronic device according to the third aspect of the present invention includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, it implements:
  • turning on the camera device to perform omnidirectional shooting of the target object to be modeled;
  • identifying the feature points of the target object one by one during the omnidirectional shooting; acquiring the motion trajectory of the camera device during the recognition of each feature point; determining the spatial coordinates of each feature point according to the corresponding motion trajectory;
  • performing 3D modeling of the target object based on the spatial coordinates of each feature point.
  • A non-transitory computer readable storage medium according to an embodiment of the present invention stores a computer program thereon; when the program is executed by a processor, the 3D modeling method according to the first aspect of the present invention is implemented.
  • A computer program product according to an embodiment of the present invention, when the instructions in the computer program product are executed by a processor, implements the 3D modeling method according to the first aspect of the present invention.
  • FIG. 1 is a schematic flowchart of a 3D modeling method according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of another 3D modeling method according to an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of a position of a camera device and a feature point in a space coordinate system according to an embodiment of the present invention
  • FIG. 4 is a schematic diagram showing positions of an imaging device and a projection feature point in a virtual coordinate system when the imaging device according to the embodiment of the present invention is in a C1 position;
  • FIG. 5 is a schematic diagram 1 showing a position change of a projection feature point in a virtual coordinate system when the image pickup apparatus is moved along the Z axis according to an embodiment of the present invention
  • FIG. 6 is a second schematic diagram showing the position change of a projection feature point in a virtual coordinate system when the camera apparatus is moved along the Z axis according to an embodiment of the present invention
  • FIG. 7 is a schematic diagram showing changes in position of a projected feature point in a virtual coordinate system when the image pickup apparatus is moved in an X-Y plane according to an embodiment of the present invention
  • FIG. 8 is a schematic diagram showing changes in position of a feature point in a spherical coordinate system when the image pickup apparatus is rotated in place according to an embodiment of the present invention
  • FIG. 9 is a schematic structural diagram of a 3D modeling apparatus according to an embodiment of the present invention.
  • However, a dual camera is costly, and because of its focusing problem it carries a certain focus error, which affects the accuracy of the ranging and in turn the accuracy of the 3D model.
  • FIG. 1 is a schematic flowchart of a 3D modeling method according to an embodiment of the present invention. As shown in FIG. 1 , the 3D modeling method includes the following steps:
  • Step 101 Turn on the camera to perform all-round shooting on the target object to be modeled.
  • When an object needs to be 3D-modeled, a single camera device can be placed in the vicinity of the target object, the camera device is turned on, and the target object is captured in all directions by moving the camera device.
  • It can be understood that, as the camera device moves around the target object, the target object can be photographed from every orientation. In this embodiment the movement mode of the camera device is not limited; the camera device can be moved freely in space so that the target object is captured in all directions during the movement.
  • In this embodiment, any movement of the camera device in space can be vector-decomposed into rotating in place, moving in the vertical direction, moving back and forth, and the like. Moving the camera device in the vertical direction ensures complete coverage in the vertical direction; when the camera device moves back and forth, the size of the object in the image changes, so the distance between the camera device and the target object can be adjusted during shooting to capture images in which the object occupies different proportions of the frame.
  • Step 102 Identify feature points of the target object one by one during the omnidirectional shooting process.
  • A feature point is the smallest combination of pixels that can be used to distinguish it from other feature points. Taking a human face as an example, the nose, eyes, mouth and so on are facial feature points. It can be understood that the feature information of a feature point, such as its color and brightness, differs clearly from that of other feature points.
  • In the embodiment of the present invention, 3D modeling is performed according to feature points. Since a single camera device shooting from one position cannot recognize all the feature points of the target object from the captured image, the feature points of the target object are identified one by one during the omnidirectional shooting. When identifying a feature point of the object, the identification can be based on feature information such as color and brightness.
  • Step 103 Acquire a motion trajectory of the camera device during the recognition process of each feature point.
  • the motion trajectory can be understood as the manner in which the camera device moves in the process of recognizing each feature point.
  • Step 104 Determine a spatial coordinate of the feature point according to the motion trajectory corresponding to each feature point.
  • the spatial coordinates of the feature points are determined according to the motion trajectory corresponding to each feature point, such as the moving manner of the camera device.
  • Step 105 Perform 3D modeling on the target object based on the spatial coordinates of each feature point.
  • After the spatial coordinates of all feature points of the target object have been calculated, the target object can be 3D-modeled according to the spatial coordinates of each feature point. For example, when 3D-modeling a human face, a 3D model of the face can be established from the spatial coordinates of facial feature points such as the nose, eyes, eyebrows, mouth, and ears.
  • FIG. 2 is a schematic flow chart of another 3D modeling method proposed by the present invention.
  • the 3D modeling method includes the following steps:
  • Step 201 Turn on the camera to perform all-round shooting on the target object to be modeled.
  • When an object needs to be 3D-modeled, a single camera device can be placed in the vicinity of the target object, the camera device is turned on, and the target object is captured in all directions by moving the camera device.
  • Step 202 Identify feature points according to feature information of pixel points in the image during omnidirectional shooting.
  • Specifically, the feature information of each pixel, such as the pixel value, the color of the pixel, and the brightness of the pixel, is extracted from the current frame image captured by the camera device. The feature information of the pixels is compared, and pixels with similar feature information are taken as one candidate feature point. For example, in a captured face image the pixel values, colors and brightness of the pixels corresponding to the nose are relatively similar, so these pixels with similar feature information can be taken as one candidate feature point.
  • When the variation of the feature information of a candidate feature point over a preset number of consecutive frames stays within a preset range, it indicates that the candidate feature point is a feature point of the target object that is clearly distinct from other parts, and the candidate feature point can be identified as one feature point of the target object.
  • To avoid counting feature points repeatedly later, in this embodiment each time a feature point is recognized it can be marked, and the marked feature point is added to a preset feature point set. The mark here can be a number assigned to the feature point.
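The paragraphs above group pixels with similar feature information into a candidate feature point and accept it only if its feature information stays within a preset range over a preset number of consecutive frames. The Python sketch below illustrates that confirmation rule under simplifying assumptions (greyscale frames as 2-D numpy arrays, a boolean mask marking the candidate's pixels); the names and thresholds are illustrative, not taken from the patent.

```python
import numpy as np

def confirm_candidate(frames, candidate_mask, preset_frames=5, preset_range=5.0):
    """Accept a candidate feature point when the mean brightness of its pixels
    varies by no more than `preset_range` over `preset_frames` consecutive frames."""
    if len(frames) < preset_frames:
        return False
    means = [float(frame[candidate_mask].mean()) for frame in frames[:preset_frames]]
    return max(means) - min(means) <= preset_range
```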
  • Step 203 Acquire a motion trajectory of the camera device during the recognition process of each feature point.
  • As one possible implementation, starting from the initial frame image, the state information of the camera device can be tracked and acquired in real time using an external camera device or a motion sensor inside the camera device. The state information includes the coordinate information and angle information of the camera device: the coordinate information can be the coordinates of the camera device in the three-dimensional space coordinate system while it captures the target object, and the angle information can be the angles between the camera device and the coordinate axes of the space coordinate system.
  • Each time at least one feature point is recognized, the frame image currently captured by the camera device is taken as a boundary image. Starting from the first frame image after the boundary image corresponding to the previously identified feature point, and ending at the boundary image at which the at least one feature point is recognized, the state information corresponding to each image between the two boundary images is used to form the motion trajectory corresponding to the at least one feature point.
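A small bookkeeping structure makes the boundary-image scheme above concrete: camera state (coordinates and angles) is logged for every frame, and whenever at least one feature point is recognized, the states recorded since the previous boundary image become the motion trajectory for that feature point. This is only an illustrative sketch; the class and field names are ours, and the actual tracking source (external camera or built-in motion sensor) is left abstract.

```python
from dataclasses import dataclass, field

@dataclass
class CameraState:
    frame: int            # frame index
    position: tuple       # (x, y, z) in the space coordinate system
    angles: tuple         # angles between the camera device and the coordinate axes

@dataclass
class TrajectoryTracker:
    states: list = field(default_factory=list)
    last_boundary: int = 0

    def log(self, state):
        # Called once per captured frame with the tracked camera state.
        self.states.append(state)

    def close_segment(self):
        # Called when at least one feature point is recognized: the current frame
        # becomes a boundary image, and the states logged since the previous
        # boundary form the motion trajectory for the new feature point(s).
        segment = self.states[self.last_boundary:]
        self.last_boundary = len(self.states)
        return segment
```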
  • Step 204 Starting from the first frame image for the feature point, each time a frame image is captured, vector-decompose the current motion trajectory and acquire the change information between the previous frame image and the current frame image captured by the camera device in each set orientation.
  • In this embodiment, a spatial rectangular coordinate system is established in advance. The origin can be chosen arbitrarily; to facilitate calculation, the initial position of the camera device can be used as the origin of the space coordinate system.
  • While the camera device performs the omnidirectional shooting, starting from the first frame image for the feature point, that is, the first frame after the boundary image at which the previous feature point was recognized, each time a frame image is captured the current motion trajectory is vector-decomposed; in other words, the motion trajectory corresponding to the feature point is decomposed into motion along the set orientations. The set orientations may be, for example, along the Z-axis direction or within the X-Y plane.
  • After the motion trajectory has been vector-decomposed, the change information between the previous frame image and the current frame image in each set orientation can be obtained from the state information of the camera device when the two frames were captured. The change information includes position change information, angle change information, and so on. For example, if the set orientation is the Z-axis direction, the distance the camera device moved along the Z axis can be calculated from its coordinates when capturing the previous frame image and its coordinates when capturing the current frame image.
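As a rough illustration of the per-frame vector decomposition just described, the change between two consecutive camera states can be split into the set orientations: displacement along the Z axis, displacement within the X-Y plane, and a change of orientation angles. The helper below reuses the CameraState sketch above and is an assumption-laden simplification, not the patent's exact procedure.

```python
import math

def decompose_motion(prev_state, cur_state):
    """Split the change between two camera states into the set orientations."""
    (x0, y0, z0) = prev_state.position
    (x1, y1, z1) = cur_state.position
    dz = z1 - z0                                    # movement along the Z axis
    dxy = math.hypot(x1 - x0, y1 - y0)              # movement within the X-Y plane
    dangles = tuple(a1 - a0 for a0, a1 in zip(prev_state.angles, cur_state.angles))
    return dz, dxy, dangles                         # change information per orientation
```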
  • Step 205 For each set orientation, continuously update the first spatial coordinate of the feature point according to the change information, the first image coordinate of the feature point in the previous frame image, and its second image coordinate in the current frame image, until the update reaches the frame image at which the feature point is recognized, so as to obtain the final first spatial coordinate of the feature point.
  • In this embodiment, to facilitate calculating the spatial coordinates of the feature point, a virtual coordinate system is established for each frame image with the center point of that frame image as the origin. The X' axis, Y' axis, and Z' axis of the virtual coordinate system are parallel to the X axis, Y axis, and Z axis of the spatial rectangular coordinate system, respectively, and point in the same directions.
  • For each set orientation, the first spatial coordinate of the feature point is continuously updated according to the change information between the previous frame image and the current frame image captured by the camera device in that set orientation, together with the first image coordinate of the feature point in the previous frame image and its second image coordinate in the current frame image, until the update reaches the frame image at which the feature point is recognized, that is, the boundary image for that feature point, so as to obtain the final first spatial coordinate of the feature point.
  • Here, the first image coordinate is the coordinate of the feature point in the virtual coordinate system established for the previous frame image, and the second image coordinate is its coordinate in the virtual coordinate system established for the current frame image.
  • Step 206 Perform vector synthesis on the final first spatial coordinates of each set orientation to obtain spatial coordinates of the feature points.
  • Since the motion trajectory was vector-decomposed in step 204, after the final first spatial coordinate of the feature point in each set orientation has been calculated, the final first spatial coordinates of the set orientations are vector-combined to obtain the spatial coordinates of the feature point.
  • In a specific implementation, some feature points may disappear from the frame images and appear again later. If, according to the feature information of the feature point, such as color and brightness, and the feature information of the surrounding feature points, it is determined to be the same feature point, the spatial coordinates calculated on the two occasions can be compared; if their difference is within a preset range, the average of the two calculated spatial coordinates can be taken as the spatial coordinates of the feature point.
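The rule above for a feature point that disappears and is later re-identified can be expressed in a few lines: if the two independently computed spatial coordinates agree within a preset range, their average is used. The function below is a hypothetical illustration; the tolerance value and the choice of the maximum per-axis difference as the comparison are our assumptions.

```python
def merge_redetected(coord_a, coord_b, preset_range=1.0):
    """Average two coordinate estimates of the same feature point if they agree."""
    diff = max(abs(a - b) for a, b in zip(coord_a, coord_b))
    if diff <= preset_range:
        return tuple((a + b) / 2.0 for a, b in zip(coord_a, coord_b))
    return None  # too far apart: leave the observations unresolved
```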
  • Step 207 Obtain spatial coordinates of each feature point from the feature point set.
  • the spatial coordinates of each feature point are obtained from the feature point set.
  • Step 208 Perform 3D construction according to the mark and space coordinates of each feature point to form a 3D model of the target object.
  • 3D modeling is performed to obtain a 3D model of the target object.
  • In the 3D modeling method of the embodiment of the present invention, the spatial coordinates of the feature points are calculated by vector-decomposing the motion trajectory corresponding to each feature point, based on the change information of the camera device itself, such as position change information and angle change information, and on the image coordinates of the object's feature points, thereby realizing 3D modeling of the target object.
  • In the above embodiment, the spatial coordinates of the feature points are calculated during the process of identifying the feature points. Alternatively, while the camera device photographs the target object in all directions, the spatial coordinates of at least one feature point may be determined, after that at least one feature point has been identified, according to the motion trajectory corresponding to it.
  • It can be understood that, after the camera device has finished photographing the target object in all directions, the spatial coordinates of each feature point can also be calculated in the manner described above from each frame image captured by the camera device and the state information of the camera device when each frame image was captured.
  • Further, regarding step 205, for each set orientation the first spatial coordinate of the feature point is continuously updated according to the change information, the first image coordinate of the feature point in the previous frame image, and its second image coordinate in the current frame image. In this embodiment the set orientations include the Z-axis direction of the preset space coordinate system, the horizontal plane formed by the X axis and the Y axis of the space coordinate system, and rotation in place. When the space coordinate system is established, any point in space can be selected as the origin; in this embodiment, to facilitate calculation, the space coordinate system is established with the starting position of the camera device as the origin. The calculation of the first spatial coordinate of the feature point is explained below for each of the three set orientations.
  • (1) The set orientation is the Z-axis direction.
  • When the camera device moves along the Z-axis direction, the vertical displacement between the previous frame image and the current frame image captured by the camera device is acquired. A first angle, between the line connecting the feature point and the camera device and the Z axis when the previous frame image was captured, and a second angle, between the line connecting the feature point and the camera device and the Z axis when the current frame image is captured, are also acquired; here the feature point is a feature point not lying on the Z axis.
  • The X-axis coordinate and the Y-axis coordinate of the first spatial coordinate of the feature point are calculated according to the vertical displacement, the first angle, and the second angle; the Z-axis coordinate of the first spatial coordinate is calculated according to the first image coordinate, the second image coordinate, the first angle, and the second angle.
  • As shown in FIG. 3, after the space coordinate system is established, the positions of the camera device and the feature point P in the space coordinate system are C1 and P1 respectively, and the space coordinate system takes C1 as its origin. Correspondingly, as shown in FIG. 4, a virtual coordinate system is established with the center point O'1 of the frame image as its origin; the position of the camera device in the virtual coordinate system is C'1, the position of the feature point in the virtual coordinate system is P'1, and θ is the shooting FOV of the camera device. The coordinate axes of the space coordinate system and of the virtual coordinate system are parallel to each other and point in the same directions.
  • As shown in FIGS. 5 and 6, when the camera device moves from position C1 to position C2, its position in the virtual coordinate system moves from C'1 to C'2 and the position of the feature point in the virtual coordinate system becomes P'2. The angle α1 is the offset of the feature point from the direction straight ahead of the lens when the camera device is at C1; it can be computed from the FOV of the camera device and equals the first angle. Likewise, α2 is the offset when the camera device is at C2, can also be computed from the FOV, and equals the second angle. The distance O'1P'1 from the imaging point P'1 of the feature point at position C1 to the center point O'1 of the frame image can also be calculated, so α1, α2 and O'1P'1 are known quantities.
  • From these known quantities, C'1_O'1 = O'1_P'1*cot α1 and C'2_O'1 = O'1_P'1*cot α2, so the displacement of the camera device along the Z' axis in the virtual coordinate system is C'1_C'2 = C'1_O'1 - C'2_O'1 = O'1_P'1*(cot α1 - cot α2). Since the vertical displacement C1C2 of the camera device in the space coordinate system is also known, the conversion ratio ε between the virtual coordinates and the spatial coordinates for this movement is ε = C'1_C'2/C1C2 = O'1_P'1*(cot α1 - cot α2)/C1C2.
  • From this, the Z-axis coordinate of the feature point P relative to the displacement start point C1 is Z_P1 = C1O1 = C'1_O'1/ε = O'1_P'1*cot α1/ε, and similarly O1P1 = O'1_P'1/ε = C1C2/(cot α1 - cot α2).
  • In the virtual coordinate system, the angle between O'1P'1 and the X' axis equals the angle between O1P1 and the X axis in the space coordinate system, and the angle between O'1P'1 and the Y' axis equals the angle between O1P1 and the Y axis. The angle β between P'1 and the X' axis of the previous frame image is a known value, so X_P1 = O1P1*cos β = C1C2*cos β/(cot α1 - cot α2) and Y_P1 = O1P1*sin β = C1C2*sin β/(cot α1 - cot α2). Thus X_P1, Y_P1 and Z_P1 form the first spatial coordinate of the feature point P in the space coordinate system whose origin is the C1 position.
  • When the camera device continues to move along the Z axis from C2 to C3, the coordinates of the feature point are calculated in the same way as when the camera device moved from C1 to C2, until the camera device has moved far enough that the feature point is fully recognized, which completes the calculation of the first spatial coordinate of the feature point.
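The Z-axis relations above translate into a short computation: from the image-plane offset O'1P'1, the two off-axis angles α1 and α2, the in-image angle β and the measured vertical displacement C1C2, the conversion ratio ε and then X_P1, Y_P1 and Z_P1 follow. The sketch below merely restates those relations in code under the assumptions already made in the description (the feature point is off the Z axis and α1 differs from α2); the function and argument names are ours.

```python
import math

def first_coordinate_from_z_motion(o1p1_img, alpha1, alpha2, beta, c1c2):
    """o1p1_img: image distance O'1P'1 from the frame centre to the imaging point
    alpha1, alpha2: first and second angles to the Z axis at C1 and C2 (radians)
    beta: angle between O'1P'1 and the X' axis of the previous frame (radians)
    c1c2: vertical displacement of the camera device in the space coordinate system"""
    # Virtual displacement C'1C'2 and the virtual-to-space conversion ratio epsilon
    virtual_shift = o1p1_img * (1.0 / math.tan(alpha1) - 1.0 / math.tan(alpha2))
    eps = virtual_shift / c1c2
    o1p1_space = o1p1_img / eps                      # O1P1 = C1C2 / (cot a1 - cot a2)
    z = o1p1_img * (1.0 / math.tan(alpha1)) / eps    # Z_P1 = C1O1
    return o1p1_space * math.cos(beta), o1p1_space * math.sin(beta), z
```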
  • (2) The camera device moves in the horizontal X-Y plane formed by the X axis and the Y axis of the space coordinate system.
  • When the camera device moves within the horizontal plane, the horizontal displacement between the previous frame image and the current frame image captured by the camera device is acquired. Then a third angle, between the projected feature point and the moved X' axis, and a fourth angle, between the line connecting the reference point in the previous frame image with the reference point in the current frame image and the moved X' axis, are acquired. Here, the projected feature point is the imaging point of the feature point in the initial frame image captured by the camera device, and the moved X' axis is the horizontal coordinate axis formed with the reference point in the current frame image as the origin.
  • Next, the first displacement between the reference point in the previous frame image and the reference point in the current frame image is obtained according to the first image coordinate, the second image coordinate, the third angle, and the fourth angle. The perpendicular distance from the feature point to the Z axis is then calculated according to the horizontal displacement, the first displacement, and the first image coordinate. Finally, the X-axis and Y-axis coordinates of the first spatial coordinate of the feature point are calculated from the perpendicular distance of the feature point to the Z axis and a fifth angle between the line connecting the reference point in the previous frame image with the projected feature point and the X' axis, where the X' axis is the horizontal coordinate axis formed with the reference point in the previous frame image as the origin.
  • For example, the camera device translates in the X-Y plane from position C1 to position C2; as shown in FIG. 7, in the virtual coordinate system the camera device translates from C'1 to C'2.
  • When the camera device is at position C1, the center point of the frame image is O'1, that is, the reference point O'1 of the previous frame image; the virtual coordinate system is established with axes X', Y' and Z', the projected feature point of the feature point P is P', and the fifth angle between P' and the X' axis is Φ3. In this embodiment, the C1 position is taken as the origin of the space coordinate system, and the first spatial coordinate of each feature point is calculated relative to this origin.
  • When the camera device has moved to position C2, the center point of the frame image is O'2, that is, the reference point of the current frame image is O'2. A virtual coordinate system is established with O'2 as the origin, and its axes are called the moved X' axis, the moved Y' axis, and the moved Z' axis. The third angle between the projected feature point P' and the moved X' axis is Φ1, and the fourth angle between O'1_O'2 and the moved X' axis is Φ2.
  • Here O'1_P', O'2_P', Φ1 and Φ2 are known quantities, and the angle between O'1_O'2 and O'2_P' is Φ2 - Φ1. Because the camera device translates within the X-Y plane, C'1_C'2 = O'1_O'2, and the horizontal displacement C1C2 of the camera device in the space coordinate system can be measured precisely by a motion sensor inside the camera device or by an external camera device, so C1C2 is also known. In the triangle O'1O'2P', the law of cosines gives (O'1_P')² = (O'1_O'2)² + (O'2_P')² - 2*O'1_O'2*O'2_P'*cos(Φ2 - Φ1); since O'1_P', O'2_P', Φ1 and Φ2 are known, the first displacement O'1_O'2 can be obtained.
  • With C1C2 known, the ratio between the first displacement in the virtual coordinate system and the horizontal displacement in the space coordinate system is δ = O'1_O'2/C1C2, so when the camera device is at C1 the perpendicular distance from the feature point P to the Z axis is O1_P = O'1_P'/δ = O'1_P'*C1C2/O'1_O'2.
  • The X-axis coordinate of the feature point P is X_P = O1_P*cos Φ3 = O'1_P'*cos Φ3*C1C2/O'1_O'2, and its Y-axis coordinate is Y_P = O1_P*sin Φ3 = O'1_P'*sin Φ3*C1C2/O'1_O'2. It can be understood that when the camera device translates in the X-Y plane, the Z-axis coordinate of the feature point P does not change.
  • When the camera device continues to move in the X-Y plane from C2 to C3, the coordinates of the feature point are calculated in the same way as when the camera device moved from C1 to C2, until the camera device has moved far enough that the feature point is fully recognized, which completes the calculation of the first spatial coordinate of the feature point.
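The X-Y translation case above can likewise be followed step by step: the law of cosines gives the first displacement O'1O'2 in the virtual coordinate system, the ratio δ to the measured horizontal displacement C1C2 converts image distances to space distances, and the fifth angle Φ3 splits the result into X and Y. The sketch below is a straightforward transcription under the assumption that the geometry admits a valid positive-root solution; the function and argument names are ours.

```python
import math

def first_coordinate_from_xy_motion(o1p_img, o2p_img, phi1, phi2, phi3, c1c2):
    """o1p_img, o2p_img: image distances O'1P' and O'2P' in the previous/current frame
    phi1, phi2: third and fourth angles to the moved X' axis (radians)
    phi3: fifth angle between O'1P' and the X' axis of the previous frame (radians)
    c1c2: horizontal displacement of the camera device in the space coordinate system"""
    # Solve (O'1P')^2 = x^2 + (O'2P')^2 - 2*x*(O'2P')*cos(phi2 - phi1) for x = O'1O'2
    b = -2.0 * o2p_img * math.cos(phi2 - phi1)
    c = o2p_img ** 2 - o1p_img ** 2
    o1o2 = (-b + math.sqrt(b * b - 4.0 * c)) / 2.0   # keep the positive root
    delta = o1o2 / c1c2                              # virtual-to-space ratio
    o1p_space = o1p_img / delta                      # perpendicular distance to the Z axis
    return o1p_space * math.cos(phi3), o1p_space * math.sin(phi3)  # (X_P, Y_P); Z unchanged
```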
  • (3) The camera device rotates in place.
  • When the camera device rotates in place, the angular offset of the camera device is obtained according to the first angle and the second angle; the second spherical coordinate of the feature point when the current frame image is captured is obtained from the first spherical coordinate of the feature point when the previous frame image was captured and the angular offset; and the first spatial coordinate of the feature point is then calculated from the second spherical coordinate.
  • In this case, a spherical coordinate system is established with the camera device as the coordinate origin. As shown in FIG. 8, the first spherical coordinate of the feature point P when the previous frame image was captured is (r, θ1, φ1), where r is the distance from the feature point P to the camera device, which can be obtained by the two movement methods described above; θ1 is the angle between the line connecting P and the origin and the Z axis, that is, the first angle, which is a known quantity; and φ1 is the angle between the projection of the line connecting P and the origin onto the X-Y plane and the X axis, which is also known.
  • As shown in FIG. 8, when the camera device rotates about the origin, the second spherical coordinate of the feature point P when the current frame image is captured is (r, θ2, φ2). The angular offset of the camera device can be measured precisely by a built-in sensor of the camera device or by monitoring with an external camera device. Suppose the measured angular offset of the line connecting P and the origin with respect to the Z axis is Δθ, and the measured angular offset of the projection of that line onto the X-Y plane with respect to the X axis is Δφ; then θ2 = θ1 + Δθ and φ2 = φ1 + Δφ.
  • According to the spherical-coordinate conversion formulas, the X-axis, Y-axis and Z-axis coordinates of the feature point P in the space coordinate system are X_p = r*sin θ2*cos φ2, Y_p = r*sin θ2*sin φ2, and Z_p = r*cos θ2 = r*cos(θ1 + Δθ), so X_p, Y_p and Z_p form the first spatial coordinate of the feature point when the current frame image is captured.
  • When the camera device continues to rotate in place and captures the next frame image, its angular offset relative to the previous frame image can be measured, so the spherical coordinate of the feature point for that frame can be obtained from the spherical coordinate of the feature point for the previous frame and the angular offset; the first spatial coordinate of the feature point for that frame can then be calculated from the spherical coordinate, until the camera device has rotated far enough that the feature point is fully recognized, which completes the calculation of the first spatial coordinate of the feature point.
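The in-place rotation case reduces to updating the feature point's spherical coordinate by the measured angular offsets and converting back to Cartesian coordinates, exactly as in the formulas above. The sketch below restates that conversion, with r assumed to have already been obtained from one of the two translation cases; names are ours.

```python
import math

def first_coordinate_after_rotation(r, theta1, phi1, d_theta, d_phi):
    """Spherical update (r, theta1, phi1) -> (r, theta1 + d_theta, phi1 + d_phi),
    then conversion to the space coordinate system with the camera at the origin."""
    theta2 = theta1 + d_theta
    phi2 = phi1 + d_phi
    x = r * math.sin(theta2) * math.cos(phi2)
    y = r * math.sin(theta2) * math.sin(phi2)
    z = r * math.cos(theta2)
    return x, y, z
```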
  • Thus, while the camera device photographs the target object in all directions, its motion trajectory can be vector-decomposed into the Z-axis direction, the X-Y plane, and the in-place rotation in the space coordinate system; the first spatial coordinate in each set orientation is obtained and continuously updated in the manner described above to obtain the final first spatial coordinate, and the final first spatial coordinates of the set orientations are vector-combined to obtain the spatial coordinates of the feature point in the space coordinate system. Performing 3D modeling based on the spatial coordinates of all the feature points then yields the 3D model of the target object.
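Once every marked feature point in the feature point set has a spatial coordinate, building the model amounts to handing the labelled points to whatever 3D construction step is used. As a toy illustration only (the patent does not prescribe a file format or reconstruction algorithm), the labelled points could be dumped as an OBJ point cloud:

```python
def export_point_cloud(feature_points, path="model.obj"):
    """feature_points: {mark: (x, y, z)} mapping feature-point marks to spatial coordinates."""
    with open(path, "w") as f:
        for mark in sorted(feature_points):
            x, y, z = feature_points[mark]
            f.write(f"v {x} {y} {z}\n")   # one OBJ vertex per feature point
```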
  • In the 3D modeling method of the embodiment of the present invention, the camera device is turned on to photograph the target object to be modeled in all directions, the feature points of the target object are identified one by one during the omnidirectional shooting, the motion trajectory of the camera device during the recognition of each feature point is acquired, the spatial coordinates of each feature point are determined from its corresponding motion trajectory, and the target object is 3D-modeled based on the spatial coordinates of every feature point.
  • In the related art, a dual camera is used for 3D modeling; because the dual camera needs focusing and a focus error is introduced during focusing, the feature point ranging of the object is inaccurate. Using only one camera device avoids the focus error introduced by the focusing process, thereby improving the accuracy of the 3D modeling.
  • Moreover, 3D modeling of the target object is achieved with a single imaging device, so only one imaging device is required, which reduces cost compared with the related art in which a dual camera is used for 3D modeling.
  • FIG. 9 is a schematic structural diagram of a 3D modeling apparatus according to an embodiment of the present invention.
  • the device includes: a shooting module 910 , an identification module 920 , an obtaining module 930 , a determining module 940 , and a modeling module 950 .
  • The photographing module 910 is configured to turn on the camera device to perform omnidirectional shooting of the target object to be modeled.
  • the identification module 920 is configured to identify feature points of the target object one by one during the omnidirectional shooting process.
  • the obtaining module 930 is configured to acquire a motion track of the camera device during the identification process of each feature point.
  • the determining module 940 is configured to determine a spatial coordinate of the feature point according to the motion trajectory corresponding to each feature point.
  • the modeling module 950 is configured to perform 3D modeling on the target object based on the spatial coordinates of each feature point.
  • the obtaining module 930 is further configured to:
  • starting from the initial frame image, track and acquire in real time the state information of the camera device up to the current frame image;
  • the state information includes coordinate information and angle information of the camera device;
  • the frame image currently captured by the camera device is taken as a boundary image
  • the identification module 920 is further configured to:
  • Pixel points with similar feature information are used as one candidate feature point
  • the candidate feature points are identified as one feature point.
  • the apparatus may further include:
  • the marking module is configured to mark the feature points each time a feature point is recognized; and add the marked feature points to the preset feature point set.
  • the modeling module 950 is further configured to:
  • the 3D construction is performed according to the mark and space coordinates of each feature point to form a 3D model of the target object.
  • the determining module 940 is further configured to:
  • starting from the first frame image for the feature point, vector-decompose the current motion trajectory each time a frame image is captured, and obtain the change information between the previous frame image and the current frame image captured by the camera device in each set orientation;
  • for each set orientation, continuously update the first spatial coordinate of the feature point according to the change information, the first image coordinate of the feature point in the previous frame image, and its second image coordinate in the current frame image, until the update reaches the frame image at which the feature point is recognized, so as to obtain the final first spatial coordinate of the feature point;
  • the final first spatial coordinates of each set orientation are vector-combined to obtain the spatial coordinates of the feature points.
  • the set orientation includes a Z-axis direction in a preset space coordinate system, a horizontal plane composed of an X-axis and a Y-axis in the space coordinate system, and an in-situ rotation;
  • the spatial coordinate system is a coordinate system formed by taking the starting position of the camera as a coordinate origin; the determining module 940 is further configured to:
  • the Z-axis coordinate in the first spatial coordinate of the feature point is calculated according to the first image coordinate, the second image coordinate, the first angle, and the second angle.
  • the determining module 940 is further configured to:
  • the projection feature point is an imaging point of the feature point in the initial frame image captured by the camera device; after the movement, the X′ axis is a horizontal coordinate axis formed by using a reference point in the current frame image as an origin;
  • the X-axis is the horizontal coordinate axis formed by the reference point in the previous frame image as the origin.
  • the determining module 940 is further configured to:
  • the second spherical coordinate of the feature point when the current frame image is captured is obtained;
  • the first spatial coordinate of the feature point is calculated according to the second spherical coordinate.
  • In the 3D modeling device of the embodiment of the present invention, the camera device is turned on to photograph the target object to be modeled in all directions, the feature points of the target object are identified one by one during the omnidirectional shooting, the motion trajectory of the camera device during the recognition of each feature point is acquired, the spatial coordinates of each feature point are determined from its corresponding motion trajectory, and the target object is 3D-modeled based on the spatial coordinates of every feature point.
  • In the related art, a dual camera is used for 3D modeling; because the dual camera needs focusing and a focus error is introduced during focusing, the feature point ranging of the object is inaccurate. Using only one camera device avoids the focus error introduced by the focusing process, thereby improving the accuracy of the 3D modeling.
  • Moreover, 3D modeling of the target object is achieved with a single imaging device, so only one imaging device is required, which reduces cost compared with the related art in which a dual camera is used for 3D modeling.
  • To implement the above embodiments, the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, it implements:
  • the target object is 3D modeled based on the spatial coordinates of each feature point.
  • starting from the initial frame image, tracking and acquiring in real time the state information of the camera device up to the current frame image;
  • the state information includes coordinate information and angle information of the camera device;
  • the frame image currently captured by the camera device is taken as a boundary image
  • Pixel points with similar feature information are used as one candidate feature point
  • the candidate feature points are identified as one of the feature points.
  • the feature points are marked; and the marked feature points are added to the preset feature point sets.
  • the 3D construction is performed according to the mark and space coordinates of each feature point to form a 3D model of the target object.
  • starting from the first frame image for the feature point, vector-decomposing the current motion trajectory each time a frame image is captured, and obtaining the change information between the previous frame image and the current frame image captured by the camera device in each set orientation;
  • for each set orientation, continuously updating the first spatial coordinate of the feature point according to the change information, the first image coordinate of the feature point in the previous frame image, and its second image coordinate in the current frame image, until the update reaches the frame image at which the feature point is recognized, so as to obtain the final first spatial coordinate of the feature point;
  • the final first spatial coordinates of each set orientation are vector-combined to obtain the spatial coordinates of the feature points.
  • the set orientation includes a Z-axis direction in a preset space coordinate system, a horizontal plane composed of an X-axis and a Y-axis in the space coordinate system, and an in-situ rotation;
  • the spatial coordinate system is a coordinate system formed by using a starting position of the imaging device as a coordinate origin;
  • the Z-axis coordinate in the first spatial coordinate of the feature point is calculated according to the first image coordinate, the second image coordinate, the first angle, and the second angle.
  • the projection feature point is an imaging point of the feature point in the initial frame image captured by the camera device; after the movement, the X′ axis is a horizontal coordinate axis formed by using a reference point in the current frame image as an origin;
  • the X-axis is the horizontal coordinate axis formed by the reference point in the previous frame image as the origin.
  • the second spherical coordinate of the feature point when the current frame image is captured is obtained;
  • the first spatial coordinate of the feature point is calculated according to the second spherical coordinate.
  • the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the 3D modeling method as described in any of the preceding embodiments.
  • the program implements the following 3D modeling method when executed by the processor:
  • the target object is 3D modeled based on the spatial coordinates of each feature point.
  • the present invention also provides a computer program product that, when executed by a processor, executes a 3D modeling method as described in any of the preceding embodiments.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • the above mentioned storage medium may be a read only memory, a magnetic disk or an optical disk or the like.

Abstract

A 3D modeling method, an electronic device, a storage medium and a program product. The method includes: turning on a camera device to perform omnidirectional shooting of a target object to be modeled (101); identifying the feature points of the target object one by one during the omnidirectional shooting (102); acquiring the motion trajectory of the camera device during the recognition of each feature point (103); determining the spatial coordinates of each feature point according to the motion trajectory corresponding to that feature point (104); and performing 3D modeling of the target object based on the spatial coordinates of each feature point (105). The method achieves 3D modeling of the target object with a single camera device; using only one camera device avoids the focus error introduced by the focusing process and improves the accuracy of the 3D modeling, and since only one camera device is needed, the cost is reduced.

Description

3D建模方法、电子设备、存储介质及程序产品 技术领域
本发明涉及视频处理领域,尤其涉及一种3D建模方法、电子设备、存储介质及程序产品。
背景技术
相关技术中,大部分采用双摄像头对空间物体进行多角度拍摄,然后根据三角测距原理定位出空间物体上的每个特征点的测距,其中,特征点的测距为特征点与双摄像头所在平面之间的距离,然后根据每个特征点的测距实现对空间物体的3D建模。
具体而言,假设双摄像头在同一平面内,根据双摄像头之间的距离、双摄像头焦平面与双摄像头所在平面之间的距离,以及同一特征点在不同拍摄图像中的位置之间的距离差,可以计算出特征点与双摄像头所在平面之间的距离,即获得特征点的测距,进而根据特征点的测距对空间物体进行3D建模。
但是,采用双摄像头成本高,而且双摄像头因为聚焦问题,会存在一定的对焦误差,从而影响测距的精度,进而影响3D模型准确度。
发明内容
本发明旨在至少在一定程度上解决相关技术中的技术问题之一。
为此,本发明的一个目的在于提出一种3D建模方法,该方法利用单个摄像装置即可实现对空间的物体进行3D建模的目的,以解决现有的利用双摄像头进行3D建模时存在的成本高以及构建的3D模型准确度低的问题。
本发明的另一个目的在于提出一种3D建模装置。
本发明的另一个目的在于提出一种电子设备。
本发明的另一个目的在于提出一种非临时性计算机可读存储介质。
本发明的另一个目的在于提出一种计算机程序产品。
为达到上述目的,本发明第一方面实施例提出的3D建模方法,包括:开启摄像装置对待建模的目标物体进行全方位拍摄;在全方位拍摄过程中逐个识别所述目标物体的特征点;获取所述摄像装置在每个特征点的识别过程中的运动轨迹;根据每个特征点对应的所述运动轨迹,确定所述特征点的空间坐标;基于每个特征点的空间坐标,对所述目标物体 进行3D建模。
本发明实施例的3D建模方法,通过开启摄像装置对待建模的目标物体进行全方位拍摄,在全方位拍摄过程中逐个识别目标物体的特征点,获取摄像装置在每个特征点的识别过程中的运动轨迹,根据每个特征点对应的运动轨迹,确定特征点的空间坐标,基于每个特征点的空间坐标,对目标物体进行3D建模。相关技术中利用双摄像头进行3D建模,由于双摄像头需要聚焦,而且在聚焦过程中会引入对焦误差,使得物体的特征点测距不准确。而本实施例中,仅用一个摄像装置可以避免聚焦过程引入的对焦失误,从而可以提高3D建模的准确度。进一步地,本实施例中,仅通过单个摄像装置即可实现对目标物体的3D建模的目的,与相关技术中利用双摄像头进行3D建模相比,仅需要一个摄像装置可以降低成本。
为达到上述目的,本发明第二方面实施例提出一种3D建模装置,包括:
拍摄模块,用于开启摄像装置对待建模的目标物体进行全方位拍摄;
识别模块,用于在全方位拍摄过程中逐个识别所述目标物体的特征点;
获取模块,用于获取所述摄像装置在每个特征点的识别过程中的运动轨迹;
确定模块,用于根据每个特征点对应的所述运动轨迹,确定所述特征点的空间坐标;
建模模块,用于基于每个特征点的空间坐标,对所述目标物体进行3D建模。
本发明实施例的3D建模装置,通过开启摄像装置对待建模的目标物体进行全方位拍摄,在全方位拍摄过程中逐个识别目标物体的特征点,获取摄像装置在每个特征点的识别过程中的运动轨迹,根据每个特征点对应的运动轨迹,确定特征点的空间坐标,基于每个特征点的空间坐标,对目标物体进行3D建模。相关技术中利用双摄像头进行3D建模,由于双摄像头需要聚焦,而且在聚焦过程中会引入对焦误差,使得物体的特征点测距不准确。而本实施例中,仅用一个摄像装置可以避免聚焦过程引入的对焦失误,从而可以提高3D建模的准确度。进一步地,本实施例中,仅通过单个摄像装置即可实现对目标物体的3D建模的目的,与相关技术中利用双摄像头进行3D建模相比,仅需要一个摄像装置可以降低成本。
为达到上述目的,本发明第三方面实施例提出的电子设备,包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述程序时,以用于实现:
启摄像装置对待建模的目标物体进行全方位拍摄;
在全方位拍摄过程中逐个识别所述目标物体的特征点;
获取所述摄像装置在每个特征点的识别过程中的运动轨迹;
根据每个特征点对应的所述运动轨迹,确定所述特征点的空间坐标;
基于每个特征点的空间坐标,对所述目标物体进行3D建模。
为达到上述目的,本发明第四方面实施例提出的非临时性计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现如本发明第一方面实施例所述的3D建模方法。
为达到上述目的,本发明第五方面实施例提出的计算机程序产品,当所述计算机程序产品中的指令由处理器执行时,实现如本发明第一方面实施例所述的3D建模方法。
本发明附加的方面和优点将在下面的描述中部分给出,部分将从下面的描述中变得明显,或通过本发明的实践了解到。
附图说明
本发明上述的和/或附加的方面和优点从下面结合附图对实施例的描述中将变得明显和容易理解,其中:
图1为本发明实施例提供的一种3D建模方法的流程示意图;
图2为本发明实施例提供的另一种3D建模方法的流程示意图;
图3为本发明实施例提供的摄像装置和特征点在空间坐标系中的位置示意图;
图4为本发明实施例提供的摄像装置在C1位置时,摄像装置和投影特征点在虚拟坐标系中的位置示意图;
图5为本发明实施例提供的摄像装置沿Z轴移动时,投影特征点在虚拟坐标系中的位置变化示意图一;
图6为本发明实施例提供的摄像装置沿Z轴移动时,投影特征点在虚拟坐标系中的位置变化示意图二;
图7为本发明实施例提供的摄像装置在X-Y平面内移动时,投影特征点在虚拟坐标系中的位置变化示意图;
图8为本发明实施例提供的摄像装置原地旋转时,特征点在球坐标系中的位置变化示意图;
图9为本发明实施例提供的一种3D建模装置的结构示意图。
具体实施方式
下面详细描述本发明的实施例,所述实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的模块或具有相同或类似功能的模块。下面通过参考附图描述的实施例是示例性的,仅用于解释本发明,而不能理解为对本发明的限制。相反,本发明的实施例包括落入所附加权利要求书的精神和内涵范围内的所有变化、修改和等同物。
下面参考附图描述本发明实施例的3D建模方法、电子设备、存储介质及程序产品。
相关技术中,大部分采用双摄像头对3D对象进行多角度拍摄,根据三角测距原理定位对象的特征点的测距,实现对对象的3D建模。
但是,采用双摄像头成本高,而且双摄像头因为聚焦问题,会存在一定的对焦误差,从而影响测距的精度,进而影响3D模型准确度。
图1为本发明实施例提供的一种3D建模方法的流程示意图,如图1所示,该3D建模方法包括以下步骤:
步骤101,开启摄像装置对待建模的目标物体进行全方位拍摄。
当需要对一个物体进行3D建模时,可将单个摄像装置放置在目标物体的附近,开启摄像装置,通过移动摄像装置对目标物体进行全方位拍摄。
可以理解的是,摄像装置围绕目标物体移动,可以对目标物体进行各个方位的拍摄,本实施例中,并不对摄像装置的移动方式进行限定,摄像装置可以在在空间随意地移动,以实现在移动过程中对目标物体进行全方位拍摄。本实施例中,摄像装置在空间的任意一个移动,可以矢量分解成:原地旋转、沿垂直方向移动、前后移动等。其中,摄像装置沿垂直方向移动可以保证在垂直方向上进行完整拍摄;摄像装置前后移动时,物体在图像中的大小不同,从而在拍摄过程中,可以通过调整摄像装置与目标物体之间的距离,以保证拍摄出物体在的图像中所占比例不同的图像。
步骤102,在全方位拍摄过程中逐个识别目标物体的特征点。
其中,特征点是指可以用作区分其他特征点的最小像素组合。以人脸为例,鼻子、眼睛、嘴巴等为脸部特征点。可以理解的是,特征点的特征信息如色彩、亮度等明显异于其他特征点。
本发明实施例中,根据特征点进行3D建模。由于单个摄像装置在某一位置进行拍摄时,根据拍摄的图像不能完全识别出目标物体的所有特征点,因此需要在全方位拍摄过程中逐个识别目标物体的特征点。在识别物体特征点时,可以根据特征信息如色彩、亮度等进行识别。
步骤103,获取摄像装置在每个特征点的识别过程中的运动轨迹。
其中,运动轨迹可以理解为在识别每个特征点的过程中,摄像装置的移动方式。
步骤104,根据每个特征点对应的运动轨迹,确定特征点的空间坐标。
本发明实施例中,根据每个特征点对应的运动轨迹,如摄像装置的移动方式,来确定特征点的空间坐标。
步骤105,基于每个特征点的空间坐标,对目标物体进行3D建模。
在计算出目标物体所有特征点的空间坐标后,可以根据每个特征点的空间坐标,对目 标物体进行3D建模。例如,对人脸进行3D建模时,可根据鼻子、眼睛、眉毛、嘴巴、耳朵等脸部特征点的空间坐标,建立人脸3D模型。
为了更清楚的说明上述实施例,下面通过另一个实施例,解释本发明提出的3D建模方法。图2为本发明提出的另一种3D建模方法的流程示意图。
如图2所示,该3D建模方法包括以下步骤:
步骤201,开启摄像装置对待建模的目标物体进行全方位拍摄。
当需要对一个物体进行3D建模时,可将单个摄像装置放置在目标物体的附近,开启摄像装置,通过移动摄像装置对目标物体进行全方位拍摄。
步骤202,在全方位拍摄过程中根据图像中像素点的特征信息识别特征点。
具体而言,从摄像装置拍摄的当前帧图像中提取每个像素点的特征信息,如像素点值、像素点的色彩、像素点的亮度等。比较像素点的特征信息,将特征信息相近的像素点作为一个候选特征点。例如,拍摄的人脸图像中,鼻子所对应的像素点的像素值、色彩、亮度等特征信息比较相近,因此可以将特征信息相近的像素点作为一个候选特征点。
当连续预设帧数中候选特征点的特征信息变化差异在预设的范围内,说明该候选特征点应该是目标物体中明显异于其他部位的特征点,可以将候选特征点识别为目标物体的一个特征点。
为了避免后续重复统计特征点,本实施例中,每当识别出一个特征点后,可对特征点进行标记,并将标记后的特征点加入到预设的特征点集合中。这里的标记,可以是对特征点进行编号。
步骤203,获取摄像装置在每个特征点的识别过程中的运动轨迹。
作为一种可能的实现方式,从初始帧图像开始,可利用外部的摄像装置,或者摄像装置内部的运动传感器实时跟踪获取摄像装置的状态信息。其中,状态信息包括摄像装置的坐标信息和角度信息,坐标信息可以是摄像装置在拍摄目标物体的过程中,在三维空间坐标系中的坐标信息,角度信息可以是摄像装置与空间坐标系中坐标轴的夹角。
每当识别出至少一个特征点时,将摄像装置当前所拍摄的帧图像作为分界图像。从识别出的前一个特征点对应的分界图像之后的第一帧图像开始,直到识别出至少一个特征点时的分界图像,利用位于两个分界图像之间的每个图像对应的状态信息,形成至少一个特征点对应的运动轨迹。
步骤204,从特征点的第一帧图像开始,每当拍摄一帧图像后对当前的运动轨迹进行矢量分解,获取每个设定方位上的摄装置拍摄前一帧图像与拍摄当前帧图像之间的变化信息。
本实施例中,预先建立空间直角坐标系,其中,原点可以任意选择,为了便于计算, 可以摄像装置的初始位置为原点,建立空间坐标系。
在摄像装置进行全方位拍摄的过程中,从特征点的第一帧图像开始,也就是从识别出特征点时的分界图像后的第一帧图像开始,每当拍摄一帧图像后对当前的运动轨迹进行矢量分解,也就是对特征点所对应的运动轨迹进行矢量分解。为了便于计算,在进行矢量分解时,可将运动轨迹分解为设定方位上的运动轨迹。其中,设定方位可以是沿Z轴方向、沿X-Y平面等。
在将运动轨迹矢量分解后,可根据摄像装置拍摄前一帧图像与拍摄当前帧图像时的状态信息,获取每个设定方位上摄像装置拍摄前一帧图像与拍摄当前帧图像之间的变化信息。其中,变化信息包括位置变化信息、角度变化信息等。如对于设定方位为沿Z轴方向,根据拍摄装置在拍摄前一帧图像时的坐标和拍摄当前帧图像时的坐标,可以计算出拍摄装置沿Z轴的移动距离。
步骤205,针对每个设定方位,根据变化信息以及特征点在前一帧图像中的第一图像坐标和当前帧图像中的第二图像坐标,持续更新特征点的第一空间坐标,直到更新到识别出特征点时对应的帧图像为止,以获取到特征点的最终的第一空间坐标。
本实施例中,为了便于计算特征点的空间坐标,针对每个帧图像,可以帧图像的中心点为原点,建立虚拟坐标系。其中,虚拟坐标系的X`轴、Y`轴、Z`轴与空间直角坐标系的X轴、Y轴、Z轴相互平行,且方向相同。
针对每个设定方位,可根据在设定方位上的拍摄装置拍摄前一帧图像与拍摄当前帧图像之间的变化信息,以及特征点在前一帧图像中的第一图像坐标和当前帧图像中的第二图像坐标,持续更新特征点的第一空间坐标,直到更新到识别出特征点对应的帧图像为止,也即更新到识别出特征点的分界图像为止,以获取到特征点的最终的第一空间坐标。
其中,第一图像坐标是特征点在前一帧图像中建立的虚拟坐标系中的坐标,第二图像坐标是特征点在当前帧图像中建立的虚拟坐标系中的坐标。
步骤206,将每个设定方位的最终的第一空间坐标进行矢量合成,得到特征点的空间坐标。
由于步骤204中对运动轨迹进行了矢量分解,因此在计算出特征点在每个设定方位的最终第一空间坐标后,将每个设定方位的最终第一空间坐标进行矢量合成,得到特征点的空间坐标。
在具体实现时,某些特征点可能会在帧图像中消失后再出现,若根据特征点的特征信息如色彩、亮度等,以及周围特征点的特征信息,确定为同一特征点,可比较前后两次计算的空间坐标,若两次计算的空间坐标的差值在预设范围内,可将两次计算的空间坐标的平均值作为特征点的空间坐标。
步骤207,从特征点集合中,获取每个特征点的空间坐标。
在计算完所有识别出的特征点的空间坐标后,从特征点集合中,获取每个特征点的空间坐标。
步骤208,根据每个特征点的标记和空间坐标进行3D构建,形成目标物体的3D模型。
根据每个特征点的标记和空间坐标,进行3D建模,从而得到目标物体的3D模型。
本发明实施例的3D建模方法,根据摄像装置本身的变化信息如位置变化信息、角度变化信息等,以及物体特征点的图像坐标,通过对特征点对应的运动轨迹进行矢量分解,计算特征点的空间坐标,从而实现对目标物体的3D建模。
上述实施例中,是在识别特征点的过程中,计算特征点的空间坐标。可选的,在摄像装置全方位拍摄目标物体的过程中,还可在识别出当前至少一个特征点后,根据当前至少一个特征点对应的运动轨迹,确定至少一个特征点的空间坐标。
可以理解的是,还可在摄像装置全方位对目标物体拍摄完成后,根据拍摄装置拍摄的每帧图像,以及拍摄每帧图像时拍摄装置的状态信息,利用上述方式计算每个特征点的空间坐标。
进一步地,对于步骤205针对每个设定方位,根据变化信息以及特征点在前一帧图像中的第一图像坐标和当前帧图像中的第二图像坐标,持续更新特征点的第一空间坐标。本实施例中,设定方位包括预设的空间坐标系中的Z轴方向、由空间坐标系中的X轴与Y轴组成的水平面以及原地旋转。在建立空间坐标时,可以选择空间中的任一点作为原点,本实施例中,为了便于计算以摄像装置的起始位置为原点,建立空间坐标系。
下面针对上述三种设定方位,解释每个设定方位上计算特征点的第一空间坐标的方法。
(1)设定方向为Z轴方向。
针对摄像装置沿Z轴方向移动,获取摄像装置拍摄前一帧图像与当前帧图像的垂直位移量。
获取摄像装置拍摄前一帧图像时特征点与摄像装置的连线与Z轴之间的第一夹角和拍摄当前帧图像时特征点与摄像装置的连线与Z轴之间的第二夹角;其中,特征点为非Z轴上的特征点。
根据垂直位移量和第一夹角和第二夹角,计算特征点的第一空间坐标中的X轴坐标和Y轴坐标。
根据第一图像坐标、第二图像坐标、第一夹角和第二夹角,计算特征点的第一空间坐标中的Z轴坐标。
如图3所示,在建立空间坐标系后,摄像装置、特征点P在空间坐标系中的位置分别为C1、P1,且空间坐标系以C1为原点。对应的,如图4所示,以帧图像的中心点O`1为 原点建立虚拟坐标系,摄像装置在虚拟坐标系中的位置为C`1,特征点在虚拟坐标系中的位置为P`1,θ为摄像装置的拍摄FOV。其中,空间坐标系的坐标轴与虚拟坐标系的坐标轴相互平行,且方向相同。
如图5所示,当摄像装置从C1位置移动到C2位置时,摄像装置在虚拟坐标中的位置由C`1移动到C`2,特征点在虚拟坐标系中位置变为P`2,当前帧图像的中心点为O`2。
将P`1,P`2分别在虚拟坐标系中的变化关系用图6表示。∠O`1C`1P`1=α1为摄像装置在C1位置时,特征点在虚拟坐标中的位置P`1,相对镜头正前方的偏移的角度,此角度可以通过摄像装置的FOV计算出来,且与第一夹角相等。∠O`1C`2P`1=α2为摄像装置在C2位置时,特征点在虚拟坐标系中的位置P`2,相对镜头正前方的偏移角度,此角度同样可以通过摄像装置的FOV计算出来,且与第二夹角相等。
特征点在摄像装置在C1位置时的成像点P`1到帧图像中心点O`1的位置,也可计算出来,即α1、α2、O`1P`1为已知量。
从上面的已知量可以得到:C`1_O`1=O`1_P`1*cotα1,C`2_O`1=O`1_P`1*cotα2。
于是,摄像装置在虚拟坐标系中沿Z`轴方向移动的位移:
C`1_C`2=C`1_O`1-C`2_O`1
=O`1_P`1*cotα1-O`1_P`1*cotα2
=O`1_P`1*(cotα1-cotα2)
由上面分析,可以得出摄像装置从C1位置位移到C2位置,摄像装置在空间坐标系中的垂直位移量C1C2,同时也可得出摄像装置在虚拟坐标系中的位移C`1_C`2。假设,此过程虚拟坐标与空间坐标的换算比例为ε:
ε=C`1_C`2/C1C2=O`1_P`1*(cotα1-cotα2)/C1C2
从而,可以得到C`1_O`1=ε*C1O1。
也即,C1O1=C`1_O`1/ε=O`1_P`1*cotα1/ε=O`1_P`1*cotα1*C1C2/(cotα1-cotα2)。
由此,可以得到特征点P相对摄像装置位移起点C1的,沿Z轴方向的坐标:
Z_P1=C1O1=O`1_P`1*cotα1*C1C2/(cotα1-cotα2)
同理,O1P1=O`1_P`1/ε=C1C2/(cotα1-cotα2)
在虚拟坐标系中,O`1P`1与X`轴的夹角与空间坐标系中O1P1与X轴的夹角相等,O`1P`1与和Y`轴的夹角与空间坐标系中O1P1与Y轴的夹角相等。
且夹角β为P`1与拍摄前一帧图像时的X`轴的夹角,为已知值。
即可得到:X_P1=O1P1*cosβ=O`1_P`1*cosβ/ε=C1C2*cosβ/(cotα1-cotα2);
Y_P1=O1P1*sinβ=O`1_P`1*sinβ/ε=C1C2*sinβ/(cotα1-cotα2);
由此,X_P1、Y_P1、Z_P1组成了特征点P在以C1位置为原点时的空间坐标系中的第一空间坐标。
当摄像装置在Z轴上继续移动从C2移动到C3位置时,可以按照上述摄像装置从C1移动C2时计算特征点的坐标方法进行计算,直到摄像装置移动到识别完该特征点,完成特征点第一空间坐标的计算过程。
(2)摄像装置在由空间坐标系中的X轴与Y轴组成的水平面X-Y平面上移动。
针对摄像装置在水平面内移动时,获取摄像装置拍摄前一帧图像与当前帧图像的水平位移量。然后,获取投影特征点与移动后X`轴之间的第三夹角和前一帧图像中的参考点与当前帧图像中参考点的连线与移动后X`轴之间的第四夹角;其中,投影特征点为特征点在摄像装置拍摄到的初始帧图像中的成像点;移动后X`轴是以当前帧图像中的参考点为原点形成的水平坐标轴。
之后,根据第一图像坐标、第二图像坐标、第三夹角和第四夹角,获取前一帧图像中参考点与当前帧图像中参考点之间的第一位移量。
根据水平位移量、第一位移量以及第一图像坐标,计算特征点到Z轴的垂直距离。
根据特征点到Z轴的垂直距离和前一帧图像中的参考点与投影特征点的连线与X`轴之间的第五夹角,计算特征点的第一空间坐标的X轴坐标和Y轴坐标;其中,X`轴是以前一帧图像中的参考点为原点形成的水平坐标轴。
举例来说,摄像装置沿X-Y平面平移,从C1位置平移到C2位置。如图7所示,在虚拟坐标系中,摄像装置从C`1平移到C`2。
当摄像装置在C1位置时,以帧图像的中心点为O`1,即前一帧图像中的参考点O`1,建立虚拟坐标系且坐标轴为X`轴、Y`轴、Z`轴,特征点P的投影特征点为P`,P`与X`轴之间的第五夹角为Φ3。本实施例中,以C1位置为空间坐标系的原点,计算各特征点相对原点的第一空间坐标。
当摄像装置位移到C2位置时,帧图像的中心点为O`2,即当前帧图像的参考点为O`2。以O`2为原点建立虚拟坐标系,坐标轴称为移动后X`轴、移动后Y`轴、移动后Z`轴。投影特征点P`与移动后X`轴之间的第三夹角为Φ1,O`1_O`2与移动后X`轴之间的第四夹角为Φ2。
其中,O`1_P`,O`2_P`,Φ1,Φ2为已知量,O`1_O`2与O`2_P`的夹角为Φ2-Φ1。
由于摄像装置是在X-Y平面内平移,所以C`1_C`2=O`1_O`2。当摄像装置从C1位置移动到C2位置时,可由摄像装置内部运动传感器,或外部摄装置精确测量移动距离,即摄像装置拍摄前一帧图像与当前帧图像的水平位移量C1C2也是已知的。
在三角形O`1O`2P`中,由余弦定理可以得到,
(O`1_P`)2=(O`1_O`2)2+(O`2_P`)2-2*O`1_O`2*O`2_P`*cos(Φ2-Φ1)
由于O`1_P`,O`2_P`,Φ1,Φ2为已知量,因此可以求出第一位移量O`1_O`2。
由于C1C2为已知量,那么可以得到摄像装置在空间坐标系中的水平位移量C1C2和在虚拟坐标系中的第一位移量O`1_O`2的比例关系δ=O`1_O`2/C1C2,也即δ为已知量。
由此可以得出,当摄像装置在C1位置时,特征点P到Z轴的垂直距离O1_P=O`1_P`/δ=O`1_P`*C1C2/O`1_O`2。
由图7可知,在以O`1为原点的虚拟坐标系中,从P`点向X`轴和Y`轴做垂线,分别得到交点X1_P`和Y1_P`。
那么O`1与X1_P`之间的距离为O`1_P`*cosΦ3,O`1与Y1_P`之间的距离为O`1_P`*sinΦ3。
特征点P的X轴坐标为:
X_P=O1_P*cosΦ3=O`1_P`*cosΦ3/δ=O`1_P`*cosΦ3*C1C2/O`1_O`2
特征点P的Y轴坐标为:
Y_P=O1_P*sinΦ3=O`1_P`*sinΦ3/δ=O`1_P`*sinΦ3*C1C2/O`1_O`2
可以理解的是,摄像装置沿X-Y平面平移时,特征点P的Z轴坐标不变。
当摄像装置在X-Y平面上继续移动从C2移动到C3位置时,可以按照上述摄像装置从C1移动C2时计算特征点的坐标方法进行计算,直到摄像装置移动到识别完该特征点,完成特征点第一空间坐标的计算过程。
(3)摄像装置在原地旋转。
针对摄像装置在原地旋转时,根据第一夹角和第二夹角,获取摄像装置的角度偏移量,根据特征点在拍摄前一帧图像时的第一球面坐标和角度偏移量,得到特征点在拍摄当前帧图像时的第二球面坐标,进而根据第二球面坐标,计算特征点的第一空间坐标。
在这种情况下，以摄像装置为坐标原点建立球坐标系。如图8所示，特征点P在拍摄前一帧图像时的第一球面坐标为(r, θ1, φ1)。r为特征点P到摄像装置的距离，可以通过前面两种移动方式得到；θ1为P和原点之间的连线与Z轴的夹角即第一夹角，为已知量；φ1为P和原点之间的连线在X-Y平面的投影与X轴之间的夹角，为已知量。
如图8所示，当摄像装置在原点做角度旋转时，特征点P在拍摄当前帧图像时的第二球面坐标为(r, θ2, φ2)。通过摄像装置的内置传感器，或者通过外部摄像装置监测，可以精准测量摄像装置的角度偏移量。假设测量得到P和原点之间的连线与Z轴的角度偏移量△θ，P和原点之间的连线在X-Y平面的投影与X轴之间的角度偏移量为△φ。那么可以得到：
θ2=θ1+△θ；
φ2=φ1+△φ。
根据球坐标计算公式，可以得到特征点P在空间坐标系中X轴、Y轴、Z轴的坐标：
X_p=rsinθ2cosφ2；
Y_p=rsinθ2sinφ2；
Z_p=rcosθ2=rcos(θ1+△θ)。
从而,X_p、Y_p、Z_p形成了特征点在拍摄当前帧图像时的第一空间坐标。
当摄像装置继续原地旋转,拍摄下一帧图像时,可以测量出摄像装置相对拍摄上一帧图像时的角度偏移量,从而可以根据拍摄上一帧图像时特征点的球面坐标和角度偏移量,得到摄像装置拍摄该帧图像时特征点的球面坐标,进而根据球面坐标可以计算出摄像装置在拍摄该帧图像时,特征点的第一空间坐标,直到摄像装置转动到识别完该特征点,完成特征点第一空间坐标的计算过程。
由此,当摄像装置在全方位拍摄目标物体的过程中,可将摄像装置的运动轨迹在空间坐标系中的Z轴方向、X-Y平面、原点旋转设定方向上进行矢量分解,并通过上述方式获取每个设定方向上的第一空间坐标,并持续更新特征点的第一空间坐标得到最终的第一空间坐标,将每个设定方向上的最终第一空间坐标进行矢量合成,得到特征点在空间坐标系中的空间坐标,从而根据所有特征点的空间坐标,进行3D建模,可以得到目标物体的3D模型。
本发明实施例的3D建模方法,通过开启摄像装置对待建模的目标物体进行全方位拍摄,在全方位拍摄过程中逐个识别目标物体的特征点,获取摄像装置在每个特征点的识别过程中的运动轨迹,根据每个特征点对应的运动轨迹,确定特征点的空间坐标,基于每个特征点的空间坐标,对目标物体进行3D建模。相关技术中利用双摄像头进行3D建模,由于双摄像头需要聚焦,而且在聚焦过程中会引入对焦误差,使得物体的特征点测距不准确。而本实施例中,仅用一个摄像装置可以避免聚焦过程引入的对焦失误,从而可以提高3D建模的准确度。进一步地,本实施例中,仅通过单个摄像装置即可实现对目标物体的3D建模的目的,与相关技术中利用双摄像头进行3D建模相比,仅需要一个摄像装置可以降低成本。
为了实现上述实施例,本发明还提出一种3D建模装置。图9为本发明实施例提供的一种3D建模装置的结构示意图。
如图9所示,该装置包括:拍摄模块910、识别模块920、获取模块930、确定模块940、建模模块950。
其中,拍摄模块910,用于开启摄像装置对待建模的目标物体进行全方位拍摄。
识别模块920,用于在全方位拍摄过程中逐个识别目标物体的特征点。
获取模块930,用于获取摄像装置在每个特征点的识别过程中的运动轨迹。
确定模块940,用于根据每个特征点对应的运动轨迹,确定特征点的空间坐标。
建模模块950,用于基于每个特征点的空间坐标,对目标物体进行3D建模。
在本发明的一个实施例中,获取模块930还用于:
从初始帧图像开始,实时跟踪获取摄像装置扫描到当前帧图像的状态信息;状态信息中包括摄像装置的坐标信息和角度信息;
每当识别出至少一个特征点时,将摄像装置当前所拍摄的帧图像作为分界图像;
从识别出的前一个特征点对应的分界图像之后的第一帧图像开始,直到识别出至少一个特征点时的所述分界图像,利用位于两个分界图像之间的每个图像对应的状态信息,形成至少一个特征点对应的运动轨迹。
在本发明的一个实施例中,识别模块920还用于:
从当前帧图像中提取每个像素点的特征信息;
将特征信息相近的像素点作为一个候选特征点;
当连续预设帧数中候选特征点的特征信息变化差异在预设的范围内,则将候选特征点识别为一个特征点。
在本发明的一个实施例中,该装置还可包括:
标记模块,用于每当识别出一个特征点,则对特征点进行标记;将标记后的特征点加入到预设的特征点集合中。
在本发明的一个实施例中,建模模块950还用于:
从特征点集合中,获取每个特征点的空间坐标;
根据每个特征点的标记和空间坐标进行3D构建,形成目标物体的3D模型。
在本发明的一个实施例中,确定模块940还用于:
从特征点的第一帧图像开始,每当拍摄一帧图像后对当前的运动轨迹进行矢量分解,获取每个设定方位上的摄装置拍摄前一帧图像与拍摄当前帧图像之间的变化信息;
针对每个设定方位,根据变化信息以及特征点在前一帧图像中的第一图像坐标和当前帧图像中的第二图像坐标,持续更新特征点的第一空间坐标,直到更新到识别出特征点时对应的帧图像为止,以获取到特征点的最终的第一空间坐标;
将每个设定方位的最终的第一空间坐标进行矢量合成,得到特征点的空间坐标。
在本发明的一个实施例中,设定方位包括预设的空间坐标系中的Z轴方向、由空间坐标系中的X轴与Y轴组成的水平面以及原地旋转;其中,空间坐标系为以摄像装置起始位置为坐标原点形成的坐标系;确定模块940还用于:
针对摄像装置沿Z轴方向移动,获取摄像装置拍摄前一帧图像与当前帧图像的垂直位移量;
获取摄像装置拍摄前一帧图像时特征点与摄像装置的连线与Z轴之间的第一夹角和拍摄当前帧图像时特征点与摄像装置的连线与Z轴之间的第二夹角;其中,特征点为非Z轴上的特征点;
根据垂直位移量和第一夹角和第二夹角,计算特征点的第一空间坐标中的X轴坐标和Y轴坐标;
根据第一图像坐标、第二图像坐标、第一夹角和所述第二夹角,计算特征点的第一空间坐标中的Z轴坐标。
在本发明的一个实施例中,确定模块940还用于:
针对摄像装置在所述水平面内移动时,获取摄像装置拍摄前一帧图像与当前帧图像的水平位移量;
获取投影特征点与移动后X`轴之间的第三夹角和前一帧图像中的参考点与当前帧图像中参考点的连线与移动后X`轴之间的第四夹角;其中,投影特征点为特征点在摄像装置拍摄到的初始帧图像中的成像点;移动后X`轴是以当前帧图像中的参考点为原点形成的水平坐标轴;
根据第一图像坐标、第二图像坐标、第三夹角和第四夹角,获取前一帧图像中参考点与当前帧图像中参考点之间的第一位移量;
根据水平位移量第一位移量以及第一图像坐标,计算特征点到Z轴的垂直距离;
根据特征点到Z轴的垂直距离和前一帧图像中的参考点与投影特征点的连线与X′轴之间的第五夹角,计算特征点的第一空间坐标的X轴坐标和Y轴坐标;X`轴是以前一帧图像中的参考点为原点形成的水平坐标轴。
在本发明的一个实施例中,确定模块940还用于:
针对摄像装置在原地旋转时,根据第一夹角和第二夹角,获取摄像装置的角度偏移量;
根据特征点在拍摄前一帧图像时的第一球面坐标和角度偏移量,得到特征点在拍摄当前帧图像时的第二球面坐标;
根据第二球坐标,计算特征点的第一空间坐标。
本发明实施例的3D建模装置,通过开启摄像装置对待建模的目标物体进行全方位拍摄,在全方位拍摄过程中逐个识别目标物体的特征点,获取摄像装置在每个特征点的识别过程中的运动轨迹,根据每个特征点对应的运动轨迹,确定特征点的空间坐标,基于每个特征点的空间坐标,对目标物体进行3D建模。相关技术中利用双摄像头进行3D建模,由于双摄像头需要聚焦,而且在聚焦过程中会引入对焦误差,使得物体的特征点测距不准确。 而本实施例中,仅用一个摄像装置可以避免聚焦过程引入的对焦失误,从而可以提高3D建模的准确度。进一步地,本实施例中,仅通过单个摄像装置即可实现对目标物体的3D建模的目的,与相关技术中利用双摄像头进行3D建模相比,仅需要一个摄像装置可以降低成本。
为了实现上述实施例,本发明还提出一种电子设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,处理器执行程序时,以用于实现:
处理器执行程序时可实现以下3D建模方法:
开启摄像装置对待建模的目标物体进行全方位拍摄;
在全方位拍摄过程中逐个识别目标物体的特征点;
获取摄像装置在每个特征点的识别过程中的运动轨迹;
根据每个特征点对应的运动轨迹,确定特征点的空间坐标;
基于每个特征点的空间坐标,对目标物体进行3D建模。
在本发明的一个实施例中,处理器执行程序时,以具体实现:
从初始帧图像开始,实时跟踪获取摄像装置扫描到当前帧图像的状态信息;状态信息中包括摄像装置的坐标信息和角度信息;
每当识别出至少一个特征点时,将摄像装置当前所拍摄的帧图像作为分界图像;
从识别出的前一个特征点对应的分界图像之后的第一帧图像开始,直到识别出至少一个特征点时的分界图像,利用位于两个分界图像之间的每个图像对应的状态信息,形成至少一个特征点对应的运动轨迹。
在本发明的一个实施例中,处理器执行程序时,以具体实现:
从当前帧图像中提取每个像素点的特征信息;
将特征信息相近的像素点作为一个候选特征点;
当连续预设帧数中候选特征点的特征信息变化差异在预设的范围内,则将候选特征点识别为一个所述特征点。
在本发明的一个实施例中,处理器执行程序时,还以用于实现:
在将候选特征点识别为一个特征点之后,每当识别出一个所述特征点,则对特征点进行标记;将标记后的特征点加入到预设的特征点集合中。
在本发明的一个实施例中,处理器执行程序时,以具体实现:
从特征点集合中,获取每个特征点的空间坐标;
根据每个特征点的标记和空间坐标进行3D构建,形成目标物体的3D模型。
在本发明的一个实施例中,处理器执行程序时,以具体实现:
从特征点的第一帧图像开始,每当拍摄一帧图像后对当前的运动轨迹进行矢量分解, 获取每个设定方位上的摄装置拍摄前一帧图像与拍摄当前帧图像之间的变化信息;
针对每个设定方位,根据变化信息以及特征点在前一帧图像中的第一图像坐标和当前帧图像中的第二图像坐标,持续更新特征点的第一空间坐标,直到更新到识别出特征点时对应的帧图像为止,以获取到特征点的最终的第一空间坐标;
将每个设定方位的最终的第一空间坐标进行矢量合成,得到特征点的空间坐标。
在本发明的一个实施例中,设定方位包括预设的空间坐标系中的Z轴方向、由空间坐标系中的X轴与Y轴组成的水平面以及原地旋转;其中,空间坐标系为以摄像装置起始位置为坐标原点形成的坐标系;
处理器执行程序时,还以用于实现:
针对摄像装置沿Z轴方向移动,获取摄像装置拍摄前一帧图像与当前帧图像的垂直位移量;
获取摄像装置拍摄前一帧图像时特征点与所述摄像装置的连线与Z轴之间的第一夹角和拍摄当前帧图像时特征点与所述摄像装置的连线与Z轴之间的第二夹角;其中,特征点为非Z轴上的特征点;
根据垂直位移量和第一夹角和所述第二夹角,计算特征点的第一空间坐标中的X轴坐标和Y轴坐标;
根据第一图像坐标、第二图像坐标、第一夹角和第二夹角,计算特征点的第一空间坐标中的Z轴坐标。
在本发明的一个实施例中,处理器执行程序时,还以用于实现:
针对摄像装置在水平面内移动时,获取摄像装置拍摄前一帧图像与当前帧图像的水平位移量;
获取投影特征点与移动后X`轴之间的第三夹角和前一帧图像中的参考点与当前帧图像中参考点的连线与移动后X`轴之间的第四夹角;其中,投影特征点为特征点在摄像装置拍摄到的初始帧图像中的成像点;移动后X`轴是以当前帧图像中的参考点为原点形成的水平坐标轴;
根据第一图像坐标、第二图像坐标、第三夹角和第四夹角,获取前一帧图像中参考点与当前帧图像中参考点之间的第一位移量;
根据水平位移量、第一位移量以及第一图像坐标,计算特征点到Z轴的垂直距离;
根据特征点到Z轴的垂直距离和前一帧图像中的参考点与投影特征点的连线与X`轴之间的第五夹角,计算特征点的第一空间坐标的X轴坐标和Y轴坐标;X`轴是以前一帧图像中的参考点为原点形成的水平坐标轴。
In an embodiment of the present invention, when executing the program, the processor further implements:
when the imaging device rotates in place, obtaining the angular offset of the imaging device according to the first included angle and the second included angle;
obtaining the second spherical coordinates of the feature point when the current frame image is captured, according to the first spherical coordinates of the feature point when the previous frame image is captured and the angular offset;
calculating the first spatial coordinates of the feature point according to the second spherical coordinates.
To implement the above embodiments, the present invention further proposes a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the 3D modeling method according to any one of the foregoing embodiments.
For example, when executed by a processor, the program implements the following 3D modeling method:
turning on the imaging device to perform omnidirectional shooting of the target object to be modeled;
identifying the feature points of the target object one by one during the omnidirectional shooting;
obtaining the motion trajectory of the imaging device during the identification of each feature point;
determining the spatial coordinates of each feature point according to the motion trajectory corresponding to the feature point;
performing 3D modeling of the target object based on the spatial coordinates of each feature point.
To implement the above embodiments, the present invention further proposes a computer program product; when instructions in the computer program product are executed by a processor, the 3D modeling method according to any one of the foregoing embodiments is performed.
It should be noted that, in the description of the present invention, the terms "first", "second" and the like are used for descriptive purposes only and are not to be understood as indicating or implying relative importance. In addition, in the description of the present invention, unless otherwise specified, "a plurality of" means two or more.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process; and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
It should be understood that the parts of the present invention may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one of the following technologies known in the art, or a combination thereof: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those of ordinary skill in the art can understand that all or part of the steps carried in the methods of the above embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing module, or each unit may exist physically on its own, or two or more units may be integrated in one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", "some examples" and the like means that the specific features, structures, materials or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present invention. In this specification, schematic expressions of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although the embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the present invention.

Claims (20)

  1. A 3D modeling method, characterized by comprising:
    turning on an imaging device to perform omnidirectional shooting of a target object to be modeled;
    identifying feature points of the target object one by one during the omnidirectional shooting;
    obtaining a motion trajectory of the imaging device during the identification of each feature point;
    determining spatial coordinates of the feature point according to the motion trajectory corresponding to each feature point;
    performing 3D modeling of the target object based on the spatial coordinates of each feature point.
  2. The method according to claim 1, characterized in that obtaining the motion trajectory of the imaging device during the identification of each feature point comprises:
    starting from an initial frame image, tracking in real time and obtaining state information of the imaging device up to a current frame image, the state information including coordinate information and angle information of the imaging device;
    whenever at least one feature point is identified, taking the frame image currently captured by the imaging device as a boundary image;
    starting from the first frame image after the boundary image corresponding to the previously identified feature point, and ending with the boundary image at which the at least one feature point is identified, forming the motion trajectory corresponding to the at least one feature point from the state information corresponding to each image located between the two boundary images.
  3. The method according to claim 1, characterized in that identifying the feature points of the target object one by one during the omnidirectional shooting comprises:
    extracting feature information of each pixel from the current frame image;
    taking pixels whose feature information is similar as one candidate feature point;
    when the variation of the feature information of the candidate feature point over a preset number of consecutive frames is within a preset range, identifying the candidate feature point as one of the feature points.
  4. The method according to claim 3, characterized by further comprising, after identifying the candidate feature point as one of the feature points:
    marking the feature point each time one of the feature points is identified;
    adding the marked feature point to a preset feature point set.
  5. The method according to claim 4, characterized in that performing 3D modeling of the target object based on the spatial coordinates of each feature point comprises:
    obtaining the spatial coordinates of each feature point from the feature point set;
    performing 3D construction according to the mark and the spatial coordinates of each feature point, to form a 3D model of the target object.
  6. The method according to any one of claims 1 to 5, characterized in that determining the spatial coordinates of the feature point according to the motion trajectory corresponding to each feature point comprises:
    starting from the first frame image of the feature point, performing vector decomposition on the current motion trajectory each time a frame image is captured, to obtain, for each set orientation, change information of the imaging device between capturing the previous frame image and capturing the current frame image;
    for each set orientation, continuously updating first spatial coordinates of the feature point according to the change information and first image coordinates of the feature point in the previous frame image and second image coordinates of the feature point in the current frame image, until the frame image at which the feature point is identified is reached, so as to obtain final first spatial coordinates of the feature point;
    performing vector synthesis on the final first spatial coordinates of each set orientation to obtain the spatial coordinates of the feature point.
  7. The method according to claim 6, characterized in that the set orientations include the Z-axis direction of a preset spatial coordinate system, a horizontal plane formed by the X-axis and the Y-axis of the spatial coordinate system, and in-place rotation, the spatial coordinate system being a coordinate system whose origin is the starting position of the imaging device; the method further comprising:
    for movement of the imaging device along the Z-axis direction, obtaining a vertical displacement of the imaging device between capturing the previous frame image and the current frame image;
    obtaining a first included angle between the Z-axis and the line connecting the feature point and the imaging device when the previous frame image is captured, and a second included angle between the Z-axis and the line connecting the feature point and the imaging device when the current frame image is captured, the feature point being a feature point not located on the Z-axis;
    calculating the X-axis coordinate and the Y-axis coordinate of the first spatial coordinates of the feature point according to the vertical displacement, the first included angle and the second included angle;
    calculating the Z-axis coordinate of the first spatial coordinates of the feature point according to the first image coordinates, the second image coordinates, the first included angle and the second included angle.
  8. The method according to claim 7, characterized by further comprising:
    when the imaging device moves within the horizontal plane, obtaining a horizontal displacement of the imaging device between capturing the previous frame image and the current frame image;
    obtaining a third included angle between the projected feature point and the moved X'-axis, and a fourth included angle between the moved X'-axis and the line connecting the reference point in the previous frame image and the reference point in the current frame image, the projected feature point being the imaging point of the feature point in the initial frame image captured by the imaging device, and the moved X'-axis being a horizontal coordinate axis whose origin is the reference point in the current frame image;
    obtaining a first displacement between the reference point in the previous frame image and the reference point in the current frame image according to the first image coordinates, the second image coordinates, the third included angle and the fourth included angle;
    calculating a perpendicular distance from the feature point to the Z-axis according to the horizontal displacement, the first displacement and the first image coordinates;
    calculating the X-axis coordinate and the Y-axis coordinate of the first spatial coordinates of the feature point according to the perpendicular distance from the feature point to the Z-axis and a fifth included angle between the X'-axis and the line connecting the reference point in the previous frame image and the projected feature point, the X'-axis being a horizontal coordinate axis whose origin is the reference point in the previous frame image.
  9. The method according to claim 8, characterized by further comprising:
    when the imaging device rotates in place, obtaining an angular offset of the imaging device according to the first included angle and the second included angle;
    obtaining second spherical coordinates of the feature point when the current frame image is captured, according to first spherical coordinates of the feature point when the previous frame image is captured and the angular offset;
    calculating the first spatial coordinates of the feature point according to the second spherical coordinates.
  10. An electronic device, characterized by comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein, when executing the program, the processor implements:
    turning on an imaging device to perform omnidirectional shooting of a target object to be modeled;
    identifying feature points of the target object one by one during the omnidirectional shooting;
    obtaining a motion trajectory of the imaging device during the identification of each feature point;
    determining spatial coordinates of the feature point according to the motion trajectory corresponding to each feature point;
    performing 3D modeling of the target object based on the spatial coordinates of each feature point.
  11. The electronic device according to claim 10, characterized in that, when executing the program, the processor specifically implements:
    starting from an initial frame image, tracking in real time and obtaining state information of the imaging device up to a current frame image, the state information including coordinate information and angle information of the imaging device;
    whenever at least one feature point is identified, taking the frame image currently captured by the imaging device as a boundary image;
    starting from the first frame image after the boundary image corresponding to the previously identified feature point, and ending with the boundary image at which the at least one feature point is identified, forming the motion trajectory corresponding to the at least one feature point from the state information corresponding to each image located between the two boundary images.
  12. The electronic device according to claim 10, characterized in that, when executing the program, the processor specifically implements:
    extracting feature information of each pixel from the current frame image;
    taking pixels whose feature information is similar as one candidate feature point;
    when the variation of the feature information of the candidate feature point over a preset number of consecutive frames is within a preset range, identifying the candidate feature point as one of the feature points.
  13. The electronic device according to claim 12, characterized in that, when executing the program, the processor further implements:
    after the candidate feature point is identified as one of the feature points, marking the feature point each time one of the feature points is identified, and adding the marked feature point to a preset feature point set.
  14. The electronic device according to claim 13, characterized in that, when executing the program, the processor specifically implements:
    obtaining the spatial coordinates of each feature point from the feature point set;
    performing 3D construction according to the mark and the spatial coordinates of each feature point, to form a 3D model of the target object.
  15. The electronic device according to any one of claims 10 to 14, characterized in that, when executing the program, the processor specifically implements:
    starting from the first frame image of the feature point, performing vector decomposition on the current motion trajectory each time a frame image is captured, to obtain, for each set orientation, change information of the imaging device between capturing the previous frame image and capturing the current frame image;
    for each set orientation, continuously updating first spatial coordinates of the feature point according to the change information and first image coordinates of the feature point in the previous frame image and second image coordinates of the feature point in the current frame image, until the frame image at which the feature point is identified is reached, so as to obtain final first spatial coordinates of the feature point;
    performing vector synthesis on the final first spatial coordinates of each set orientation to obtain the spatial coordinates of the feature point.
  16. The electronic device according to claim 15, characterized in that the set orientations include the Z-axis direction of a preset spatial coordinate system, a horizontal plane formed by the X-axis and the Y-axis of the spatial coordinate system, and in-place rotation, the spatial coordinate system being a coordinate system whose origin is the starting position of the imaging device;
    when executing the program, the processor further implements:
    for movement of the imaging device along the Z-axis direction, obtaining a vertical displacement of the imaging device between capturing the previous frame image and the current frame image;
    obtaining a first included angle between the Z-axis and the line connecting the feature point and the imaging device when the previous frame image is captured, and a second included angle between the Z-axis and the line connecting the feature point and the imaging device when the current frame image is captured, the feature point being a feature point not located on the Z-axis;
    calculating the X-axis coordinate and the Y-axis coordinate of the first spatial coordinates of the feature point according to the vertical displacement, the first included angle and the second included angle;
    calculating the Z-axis coordinate of the first spatial coordinates of the feature point according to the first image coordinates, the second image coordinates, the first included angle and the second included angle.
  17. The electronic device according to claim 16, characterized in that, when executing the program, the processor further implements:
    when the imaging device moves within the horizontal plane, obtaining a horizontal displacement of the imaging device between capturing the previous frame image and the current frame image;
    obtaining a third included angle between the projected feature point and the moved X'-axis, and a fourth included angle between the moved X'-axis and the line connecting the reference point in the previous frame image and the reference point in the current frame image, the projected feature point being the imaging point of the feature point in the initial frame image captured by the imaging device, and the moved X'-axis being a horizontal coordinate axis whose origin is the reference point in the current frame image;
    obtaining a first displacement between the reference point in the previous frame image and the reference point in the current frame image according to the first image coordinates, the second image coordinates, the third included angle and the fourth included angle;
    calculating a perpendicular distance from the feature point to the Z-axis according to the horizontal displacement, the first displacement and the first image coordinates;
    calculating the X-axis coordinate and the Y-axis coordinate of the first spatial coordinates of the feature point according to the perpendicular distance from the feature point to the Z-axis and a fifth included angle between the X'-axis and the line connecting the reference point in the previous frame image and the projected feature point, the X'-axis being a horizontal coordinate axis whose origin is the reference point in the previous frame image.
  18. The electronic device according to claim 17, characterized in that, when executing the program, the processor further implements:
    when the imaging device rotates in place, obtaining an angular offset of the imaging device according to the first included angle and the second included angle;
    obtaining second spherical coordinates of the feature point when the current frame image is captured, according to first spherical coordinates of the feature point when the previous frame image is captured and the angular offset;
    calculating the first spatial coordinates of the feature point according to the second spherical coordinates.
  19. A non-transitory computer-readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the program implements the 3D modeling method according to any one of claims 1 to 9.
  20. A computer program product, characterized in that, when instructions in the computer program product are executed by a processor, the 3D modeling method according to any one of claims 1 to 9 is performed.
PCT/CN2017/112194 2017-11-21 2017-11-21 3d建模方法、电子设备、存储介质及程序产品 WO2019100216A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780092159.2A CN110785792A (zh) 2017-11-21 2017-11-21 3d建模方法、电子设备、存储介质及程序产品
PCT/CN2017/112194 WO2019100216A1 (zh) 2017-11-21 2017-11-21 3d建模方法、电子设备、存储介质及程序产品

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/112194 WO2019100216A1 (zh) 2017-11-21 2017-11-21 3d建模方法、电子设备、存储介质及程序产品

Publications (1)

Publication Number Publication Date
WO2019100216A1 true WO2019100216A1 (zh) 2019-05-31

Family

ID=66631309

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/112194 WO2019100216A1 (zh) 2017-11-21 2017-11-21 3d建模方法、电子设备、存储介质及程序产品

Country Status (2)

Country Link
CN (1) CN110785792A (zh)
WO (1) WO2019100216A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114252015B (zh) * 2021-12-27 2022-08-12 同济大学 回转运动物体位移的非接触式测量方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318604A (zh) * 2014-10-21 2015-01-28 四川华雁信息产业股份有限公司 一种3d图像拼接方法及装置
CN106384380A (zh) * 2016-08-31 2017-02-08 重庆七腾软件有限公司 3d人体扫描建模量测方法及其系统
CN106469465A (zh) * 2016-08-31 2017-03-01 深圳市唯特视科技有限公司 一种基于灰度和深度信息的三维人脸重建方法
US20170316598A1 (en) * 2015-05-22 2017-11-02 Tencent Technology (Shenzhen) Company Limited 3d human face reconstruction method, apparatus and server

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101195942B1 (ko) * 2006-03-20 2012-10-29 삼성전자주식회사 카메라 보정 방법 및 이를 이용한 3차원 물체 재구성 방법
EP2966867A1 (en) * 2014-07-09 2016-01-13 Thomson Licensing Methods and devices for encoding and decoding a sequence of frames representing a 3D scene, and corresponding computer program products and computer-readable medium
CN106296797A (zh) * 2015-06-10 2017-01-04 西安蒜泥电子科技有限责任公司 一种三维扫描仪特征点建模数据处理方法
CN105844696B (zh) * 2015-12-31 2019-02-05 清华大学 基于射线模型三维重构的图像定位方法以及装置


Also Published As

Publication number Publication date
CN110785792A (zh) 2020-02-11

Similar Documents

Publication Publication Date Title
Forster et al. SVO: Semidirect visual odometry for monocular and multicamera systems
TWI555379B (zh) 一種全景魚眼相機影像校正、合成與景深重建方法與其系統
EP3028252B1 (en) Rolling sequential bundle adjustment
JP4825980B2 (ja) 魚眼カメラの校正方法。
Herrera et al. Dt-slam: Deferred triangulation for robust slam
US20190012804A1 (en) Methods and apparatuses for panoramic image processing
JP2018151696A (ja) 自由視点移動表示装置
US20150187140A1 (en) System and method for image composition thereof
JP6615545B2 (ja) 画像処理装置、画像処理方法および画像処理用プログラム
US10825249B2 (en) Method and device for blurring a virtual object in a video
CN108830906B (zh) 一种基于虚拟双目视觉原理的摄像机参数自动标定方法
CN111750820A (zh) 影像定位方法及其系统
JP2007024647A (ja) 距離算出装置、距離算出方法、構造解析装置及び構造解析方法。
WO2023060964A1 (zh) 标定方法及相关装置、设备、存储介质和计算机程序产品
JP4132068B2 (ja) 画像処理装置及び三次元計測装置並びに画像処理装置用プログラム
WO2018209592A1 (zh) 一种机器人的运动控制方法、机器人及控制器
TW202217755A (zh) 視覺定位方法、設備和電腦可讀儲存介質
CN110544278A (zh) 刚体运动捕捉方法及装置、agv位姿捕捉系统
JP2016148956A (ja) 位置合わせ装置、位置合わせ方法及び位置合わせ用コンピュータプログラム
WO2022052409A1 (zh) 用于多机位摄像的自动控制方法和系统
WO2019100216A1 (zh) 3d建模方法、电子设备、存储介质及程序产品
WO2022036512A1 (zh) 数据处理方法、装置、终端和存储介质
JP2005031044A (ja) 三次元誤差測定装置
WO2017057426A1 (ja) 投影装置、コンテンツ決定装置、投影方法、および、プログラム
WO2018150086A2 (en) Methods and apparatuses for determining positions of multi-directional image capture apparatuses

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17933049

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17933049

Country of ref document: EP

Kind code of ref document: A1