WO2021238070A1 - Method and device for generating three-dimensional images, and computer equipment - Google Patents

Method and device for generating three-dimensional images, and computer equipment

Info

Publication number
WO2021238070A1
WO2021238070A1 (PCT/CN2020/127202)
Authority
WO
WIPO (PCT)
Prior art keywords
target
imaging
dimensional image
camera
distance
Prior art date
Application number
PCT/CN2020/127202
Other languages
English (en)
French (fr)
Inventor
郑勇
许仕哲
刘毓森
潘濛濛
李政
戴志涛
Original Assignee
深圳市沃特沃德股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市沃特沃德股份有限公司
Publication of WO2021238070A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/271Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures

Definitions

  • This application relates to the field of camera technology, and in particular to a method, device and computer equipment for generating a three-dimensional image.
  • In the prior art, a camera can usually only capture images in a two-dimensional plane.
  • TOF technology is a distance-measurement technology based on an infrared light source.
  • A TOF camera can only measure the infrared contour of an object, and can only present the object's three-dimensional contour as a topographic map in which different colors represent different distances, a presentation that is not convenient for human viewing.
  • The main purpose of this application is to provide a method, device and computer equipment for generating a three-dimensional image, aiming to solve the technical problem in the prior art that three-dimensional images generated by a TOF camera are inconvenient to view.
  • An embodiment of the present application proposes a method for generating a three-dimensional image, which is applied to a smart device.
  • The smart device includes a TOF camera and a visible light camera.
  • The above-mentioned method for generating a three-dimensional image includes: acquiring the overlapping imaging in which the TOF camera and the visible light camera overlap within the imaging range; performing target recognition in the overlapping imaging;
  • performing distance measurement on each identified target through the TOF camera to obtain depth information of each target, the depth information being the distance information from the target to the TOF camera; and
  • generating a three-dimensional image of the overlapping imaging according to the depth information, the three-dimensional image including all the targets in the overlapping imaging.
  • The step of acquiring the overlapping imaging of the TOF camera and the visible light camera within the imaging range further includes: acquiring the visible imaging within the imaging range of the visible light camera; and acquiring the overlapping imaging within the visible imaging according to the maximum object distance detectable by the TOF camera and the distance between the lens optical centers of the TOF camera and the visible light camera.
  • The step of acquiring the overlapping imaging within the visible imaging includes:
  • obtaining the TOF imaging range of the TOF camera according to the focal length of the TOF camera and the maximum object distance detectable by the TOF camera;
  • obtaining the imaging region of the visible light camera within the TOF imaging range according to the distance between the lens optical centers of the two cameras and the focal length of the visible light camera; and
  • calculating the area that the imaging region occupies in the visible imaging when the visible light camera forms an image, to obtain the overlapping imaging.
  • The step of performing target recognition in the overlapping imaging includes:
  • recognizing the image corresponding to the overlapping imaging through a preset recognition model to obtain each target in the overlapping imaging.
  • The step of performing distance measurement on each identified target through the TOF camera to obtain the depth information of each target includes: labeling each target in the overlapping imaging to obtain contour information of each target, the contour information including pixel information of the target's contour; and
  • performing distance measurement on the target according to the pixel information to obtain the distance information from the physical point corresponding to each pixel point within the target's contour to the TOF camera.
  • The step of generating a three-dimensional image corresponding to each target according to the depth information includes:
  • calculating target data of each target according to the depth information, the target data including the actual distances between the targets and the contour information of each target; and
  • constructing, in a preset actual coordinate system, a three-dimensional electronic map corresponding to each target according to the distances between the targets and the contours of the targets, so as to obtain the three-dimensional image.
  • After the step of generating the three-dimensional image, the method includes:
  • acquiring three-dimensional information of each target in the three-dimensional image, the three-dimensional information including the distance information between the targets; and
  • displaying the three-dimensional image and the three-dimensional information on a display screen.
  • An embodiment of the present application also proposes a device for generating a three-dimensional image, including:
  • an acquiring imaging unit, used to acquire the overlapping imaging of the TOF camera and the visible light camera within the imaging range;
  • a target recognition unit, used to perform target recognition in the overlapping imaging;
  • a target ranging unit, used to measure the distance of each identified target through the TOF camera to obtain depth information of each target, the depth information being the distance information from the target to the TOF camera; and
  • an image generating unit, used to generate a three-dimensional image of the overlapping imaging according to the depth information, the three-dimensional image including all the targets in the overlapping imaging.
  • The acquiring imaging unit includes:
  • an acquiring imaging subunit, used to acquire the visible imaging within the imaging range of the visible light camera; and
  • an obtaining overlap subunit, used to acquire the overlapping imaging within the visible imaging according to the maximum object distance detectable by the TOF camera and the distance between the lens optical centers of the TOF camera and the visible light camera.
  • The obtaining overlap subunit includes:
  • an obtaining range module, used to obtain the TOF imaging range of the TOF camera according to the focal length of the TOF camera and the maximum object distance detectable by the TOF camera;
  • an obtaining area module, used to obtain the imaging region of the visible light camera within the TOF imaging range according to the distance between the lens optical centers of the two cameras and the focal length of the visible light camera; and
  • a calculating area module, used to calculate the area that the imaging region occupies in the visible imaging when the visible light camera forms an image, to obtain the overlapping imaging.
  • The target recognition unit includes:
  • a model recognition subunit, used to recognize the image corresponding to the overlapping imaging through a preset recognition model to obtain each target in the overlapping imaging.
  • The target ranging unit includes:
  • a labeling target subunit, used to label each target in the overlapping imaging to obtain contour information of each target, the contour information including pixel information of the target's contour; and
  • a target ranging subunit, used to measure the distance of the target according to the pixel information to obtain the distance information from the physical point corresponding to each pixel point within the target's contour to the TOF camera.
  • The image generating unit includes:
  • a calculating data subunit, used to calculate target data of each target according to the depth information, the target data including the actual distances between the targets and the contour information of each target; and
  • a constructing image subunit, used to construct, in a preset actual coordinate system, a three-dimensional electronic map corresponding to each target according to the distances between the targets and the contours of the targets, so as to obtain the three-dimensional image.
  • The device for generating a three-dimensional image further includes:
  • an acquiring distance unit, used to acquire three-dimensional information of each target in the three-dimensional image, the three-dimensional information including the distance information between the targets; and
  • a displaying image unit, used to display the three-dimensional image and the three-dimensional information on a display screen.
  • An embodiment of the present application also proposes a storage medium, which is a computer-readable storage medium on which a computer program is stored; when the computer program is executed, the method for generating a three-dimensional image described above is realized.
  • An embodiment of the present application also proposes a computer device, which includes a processor, a memory, and a computer program stored on the memory and capable of running on the processor; when the computer program is executed, the method for generating a three-dimensional image described above is realized.
  • This application proposes a method, device and computer equipment for generating a three-dimensional image.
  • In the method, the overlapping imaging region of the TOF camera and the visible light camera within the imaging range is obtained, the targets in that region are identified, and each target is then ranged to obtain depth information, from which the corresponding three-dimensional image is constructed. Since the visible light camera is used for imaging and the TOF camera is used to obtain the depth information, a three-dimensional image consistent in contour and color with the original target can be constructed. In this way, a lifelike three-dimensional image is obtained through the combined processing of the two cameras, with a small amount of calculation, which effectively saves resources.
  • FIG. 1 is a schematic flowchart of a method for generating a three-dimensional image according to an embodiment of the present application;
  • FIG. 2 is a plane imaging diagram of a coordinate system established with the lens optical center of the TOF camera as the origin according to an embodiment of the present application;
  • FIG. 3 is a schematic structural block diagram of a device for generating a three-dimensional image according to an embodiment of the present application;
  • FIG. 4 is a schematic structural block diagram of a target recognition unit according to an embodiment of the present application;
  • FIG. 5 is a schematic structural block diagram of a target ranging unit according to an embodiment of the present application;
  • FIG. 6 is a schematic structural block diagram of an image generating unit according to an embodiment of the present application;
  • FIG. 7 is a schematic structural block diagram of an acquiring imaging unit according to an embodiment of the present application;
  • FIG. 8 is a schematic structural block diagram of an obtaining overlap subunit according to an embodiment of the present application;
  • FIG. 9 is a schematic structural block diagram of a device for generating a three-dimensional image according to another embodiment of the present application;
  • FIG. 10 is a schematic structural block diagram of an embodiment of the storage medium of the present application;
  • FIG. 11 is a schematic structural block diagram of an embodiment of the computer device of the present application.
  • A method for generating a three-dimensional image is provided.
  • The method is applied to a smart device.
  • The smart device includes a TOF camera and a visible light camera. Specifically, the above method includes:
  • Step S1: Acquire the overlapping imaging in which the TOF camera and the visible light camera overlap within the imaging range;
  • Step S2: Perform target recognition in the overlapping imaging;
  • Step S3: Perform distance measurement on each identified target through the TOF camera to obtain depth information of each target, where the depth information is the distance information from the target to the TOF camera;
  • Step S4: Generate the three-dimensional image of the overlapping imaging according to the depth information, the three-dimensional image including all the targets in the overlapping imaging.
  • The aforementioned TOF camera is based on existing TOF (Time of Flight) technology: the sensor emits modulated near-infrared light, which is reflected when it encounters an object, and the sensor calculates the time difference or phase difference between emission and reflection to convert it into the distance between the camera and the photographed scene, thereby obtaining depth information.
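  • For illustration, the ranging principle just described can be written as a minimal sketch; the 20 ns round-trip time below is a hypothetical value, not one taken from the patent.

```python
# Time-of-flight ranging: distance from the round trip of the modulated
# near-infrared light (emit -> reflect -> return), halved for the one-way path.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(time_difference_s: float) -> float:
    """Camera-to-object distance implied by the emission/reflection time difference."""
    return C * time_difference_s / 2.0

print(tof_distance(20e-9))  # a 20 ns round trip is roughly 3 m
```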
  • As described in step S1 above, the smart device includes multiple modes, such as a mode in which the TOF camera or the visible light camera shoots alone, yielding a three-dimensional contour image of the object or an ordinary two-dimensional image respectively. It also includes a mode in which a three-dimensional image of the scene is generated from the TOF camera and the visible light camera. When the smart device enters this mode, 3D modeling is turned on, and the overlapping imaging of the TOF camera and the visible light camera within the imaging range is first obtained. It should be noted that when a camera shoots an object there is a corresponding imaging surface, and the scene within that surface is determined by the optical center and focal length of the lens and the size of the imaging plane.
  • The TOF camera has a farthest measurable distance, denoted L for ease of description.
  • The imaging range of the TOF camera lies within this farthest distance.
  • The visible light camera does not limit the distance of the photographed object, so the region where the two overlap within the imaging range lies within the farthest measurable distance.
  • The above-mentioned overlapping imaging is the imaging of this overlapping region and is part of the visible imaging of the visible light camera, so the appearance of the photographed object can be displayed in the overlapping imaging while the depth information of the objects within it is obtained through the TOF camera.
  • As described in step S2 above, target recognition is performed on the overlapping imaging, for example by recognizing scenes through a model or by determining the scenes in the overlapping imaging through comparison with pictures in a preset database; the scenes here are the identified targets.
  • In one embodiment, the targets in the overlapping imaging are recognized through a preset recognition model, in which case the above step S2 includes:
  • Step S21: Recognize the image corresponding to the overlapping imaging by using a preset recognition model to obtain each target in the overlapping imaging.
  • In this embodiment, the image corresponding to the overlapping imaging is recognized through a preset recognition model; for example, an AI recognition algorithm performs object recognition on the pixels of the overlapping imaging to identify multiple targets in the overlapping imaging region, such as tables, chairs and boxes.
  • The above-mentioned preset recognition model can be implemented with the SSD algorithm or the DSST algorithm.
  • For example, the recognition model is an SSD model that adopts the VGG16 network structure, which includes convolutional layers, fully connected layers and pooling layers; target features are first extracted through the CNN network, and the classification of each target is then computed through the VGG16 network.
  • The classes include common household items such as tables, sofas, chairs and refrigerators.
  • When training the recognition model, samples are first collected to form a data set, including samples of the above categories such as tables, sofas, chairs and refrigerators; these are fed into a preset initial model for training, and methods such as loss compensation and data augmentation are used to improve the performance of the SSD algorithm, for example by calculating the loss value through the loss function, calculating parameter gradients through network backpropagation, and updating the model parameters until the model converges, thereby obtaining the above-mentioned recognition model.
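  • As a hedged sketch of this recognition step, an off-the-shelf SSD detector with a VGG16 backbone can be run on the overlapping image; torchvision's ssd300_vgg16 is used here only as a stand-in, since the patent does not name a specific library, and `overlap_image` is a placeholder for the cropped overlapping region.

```python
import torch
from torchvision.models.detection import ssd300_vgg16

# Pretrained SSD detector with a VGG16 backbone, one possible realization of
# the "SSD algorithm model adopting the VGG16 network structure".
model = ssd300_vgg16(weights="DEFAULT")
model.eval()

overlap_image = torch.rand(3, 300, 300)  # placeholder RGB crop, values in [0, 1]

with torch.no_grad():
    detections = model([overlap_image])[0]

# Each detection is a box (rough target region), a class label and a score.
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.5:
        print(label.item(), box.tolist())
```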
  • As described in step S3 above, the TOF camera is used to measure the distance of each identified target to obtain the corresponding depth information; that is, each identified target is ranged through TOF technology. If, in reality, the plane of a target facing the TOF camera is taken as its relative plane, the distance from each point on that relative plane to the TOF camera is the depth information.
  • In one embodiment, the above step S3 includes:
  • Step S31: Label each target in the overlapping imaging to obtain contour information of each target, where the contour information includes pixel information of the contour of the target;
  • Step S32: Perform distance measurement on the target according to the pixel information, and obtain the distance information from the physical point corresponding to each pixel point within the contour of the target to the TOF camera.
  • In this embodiment, in order to obtain more detailed depth information, each target in the overlapping imaging is first labeled; while distinguishing the targets, the contour information corresponding to each target can be obtained from the labeling. The contour information includes the pixel information of the target's contour, and the pixels of the contour represent the entire displayed outline of the target; the camera pixel count of a visible light camera, for example, is a fixed value. When labeling, a target can be framed with a rectangular box and its contour then delineated according to its edge pixels. The target is ranged according to the pixel size corresponding to each target, yielding the distance information of every pixel point within each target's contour, that is, the above-mentioned depth information.
  • This distance information is the distance from each physical point within the target's contour to the TOF camera, with the physical points corresponding one-to-one to the pixel points. It should be understood that every surface of a target is composed of points; the physical points here are the points on the side of the actual target that faces the TOF camera. A pixel is the basic unit of image display, so by obtaining the depth information of the target's pixels, the depth information of each displayed target contour is obtained, which provides the basis for subsequently building a three-dimensional figure of each target.
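  • A minimal sketch of this per-pixel step, assuming a registered TOF depth frame: `depth_map` stands for the camera's H x W array of distances and `box` for the rectangular label around one target; both names are illustrative, not from the patent.

```python
import numpy as np

def target_depth(depth_map: np.ndarray, box: tuple) -> np.ndarray:
    """Distances (m) from the physical points inside a labeled target box to the TOF camera."""
    x0, y0, x1, y1 = box
    return depth_map[y0:y1, x0:x1]

depth_map = np.random.uniform(0.5, 5.0, size=(480, 640))  # placeholder TOF frame
patch = target_depth(depth_map, (100, 120, 220, 300))
print(patch.shape, patch.mean())  # per-pixel depths and the mean target distance
```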
  • As described in step S4 above, after the depth information of each target has been obtained, 3D modeling is used to construct a three-dimensional image corresponding to each target based on that information. Because the depth information includes the real-world distance from the physical point corresponding to each pixel of the imaged target to the TOF camera, the three-dimensional shape of each target can be derived from it, and a three-dimensional map of the target is then constructed.
  • Since the targets are targets in the visible imaging of the visible light camera, while the three-dimensional contour of each target is constructed, the RGB primary colors actually captured for each target, that is, the colors displayed by superimposing the red (R), green (G) and blue (B) color channels in reality, are obtained from the visible light camera's imaging at the same time, so that the constructed three-dimensional image is consistent with the actual original target.
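  • One way to picture this fusion is a colored point cloud built by back-projecting each depth pixel and attaching the visible camera's RGB value; the pinhole back-projection below and the intrinsics fx, fy, cx, cy are assumptions for illustration, not parameters given in the patent.

```python
import numpy as np

def colored_point_cloud(depth: np.ndarray, rgb: np.ndarray,
                        fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Return an N x 6 array of (X, Y, Z, R, G, B) points from registered depth and RGB."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx                        # pinhole back-projection
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    return np.hstack([points, colors])
```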
  • Specifically, the above step S4 includes:
  • Step S41: Calculate target data of each target according to the depth information, the target data including the actual distance between the targets and contour information of each target;
  • Step S42: Construct, in a preset actual coordinate system, a three-dimensional electronic map corresponding to each target according to the distance between the targets and the contour of each target, so as to obtain the three-dimensional image.
  • In this embodiment, the target data of each target is obtained from the depth information.
  • The target data comprises the contour information of the target, such as its length, width, height, geometric center, centroid and depth, as well as the actual distances between the targets calculated from this contour information.
  • A preset actual coordinate system is then established, in which the TOF camera serves as the coordinate origin or the ground serves as the reference XY plane, and the three-dimensional coordinates of the targets are constructed; the distance between targets is then obtained by plane-triangle side-length calculation, or by the coordinate distance formula between two points in space geometry within the three-dimensional coordinates.
  • Having obtained the distances between the targets, a three-dimensional electronic map of the targets is constructed, yielding the above-mentioned three-dimensional image.
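  • The space-geometry distance mentioned above is the ordinary Euclidean formula; a small sketch follows, with the two target center points being made-up example coordinates.

```python
import math

def distance_3d(p: tuple, q: tuple) -> float:
    """Distance between two points in the actual coordinate system: sqrt(dx^2 + dy^2 + dz^2)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

table_center = (1.2, 0.8, 0.4)  # hypothetical target centers, in meters
sofa_center = (3.0, 2.1, 0.5)
print(distance_3d(table_center, sofa_center))  # inter-target distance
```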
  • In one embodiment, the above step S1 includes:
  • Step S11: Acquire the visible imaging within the imaging range of the visible light camera;
  • Step S12: Acquire the overlapping imaging within the visible imaging according to the maximum object distance detectable by the TOF camera and the distance between the lens optical centers of the TOF camera and the visible light camera.
  • In this embodiment, the visible imaging within the imaging range of the visible light camera is first obtained, and the region where the imaging ranges of the TOF camera and the visible light camera overlap is then located within that visible imaging. Specifically, this can be determined from the maximum object distance detectable by the TOF camera and the distance between the lens optical centers of the two cameras.
  • The maximum object distance detectable by the TOF camera is the maximum ranging distance of the TOF camera mentioned above. Since the TOF camera has a maximum ranging distance, its shooting range and the corresponding TOF imaging can be obtained from the pinhole imaging principle, and the shooting range and imaging surface of the visible light camera can likewise be obtained.
  • In the smart device, the two lenses lie in the same plane, so the closer the lens optical centers, the larger the overlap, and the farther apart the optical centers, the smaller the overlap between the two.
  • This embodiment does not limit the distance between the two lenses, which can be set according to actual needs.
  • Preferably, the above step S12 includes:
  • Step S121: Obtain the TOF imaging range of the TOF camera according to the focal length of the TOF camera and the maximum object distance detectable by the TOF camera;
  • Step S122: Obtain the imaging region of the visible light camera within the TOF imaging range according to the distance between the lens optical centers of the TOF camera and the visible light camera, and the focal length of the visible light camera;
  • Step S123: Calculate the area that the imaging region occupies in the visible imaging when the visible light camera forms an image, to obtain the overlapping imaging.
  • In this embodiment, the TOF imaging range of the TOF camera is first obtained from the focal length of the TOF camera and its maximum detectable object distance; since both the focal length and the maximum object distance of the TOF camera are fixed, the size of the corresponding imaging plane can also be determined, and the TOF imaging range then follows from the pinhole imaging principle.
  • Correspondingly, the visible imaging range of the visible light camera can be determined from the focal length of the visible light camera. With the distance between the two lens optical centers fixed, the overlapping shooting region of the two cameras is obtained, that is, the imaging region of the visible light camera within the TOF imaging range; the area this region occupies when the visible light camera forms an image is then calculated, giving the above-mentioned overlapping imaging. Specifically, this can be computed with a preset formula, for example one set up from the principle of similar triangles, or by establishing a coordinate system and computing the coordinates of the overlapping imaging.
  • For example, referring to FIG. 2, a coordinate system is established with the lens optical center of the TOF camera as the origin O, the straight line through the lens optical centers of the TOF camera and the visible light camera as the Y axis, and the straight line through the origin perpendicular to that connecting line as the X axis. Since the TOF camera has an effective ranging distance, let the effective ranging distance of the TOF camera be L, and let the distance from the lens optical center of the visible light camera to that of the TOF camera be K.
  • By the principle of similar triangles, the coordinate values of the overlapping imaging boundary can be obtained, and the pixels corresponding to those coordinates are then obtained from the size of the imaging pixels.
  • Take one of the planes through the two optical centers as an example: the effective ranging distance of the TOF camera is L, the focal length of the TOF camera is F1, the focal length of the visible light camera is F2, and the optical center distance between the TOF camera and the visible light camera is K. From the imaging principle, the overlapping shooting range of the TOF camera and the visible light camera is BG, and the imaging region of this range in the imaging plane of the visible light camera is DE. The upper edge coordinate of the TOF camera's imaging plane is A(X1, Y1), where X1 is the focal length F1 and Y1 is the known longitudinal dimension of the TOF camera's imaging plane. By similar triangles, the lower edge B of the TOF camera at its maximum object distance has ordinate Y2 and abscissa X2 = -L, with Y2 = Y1 * L / F1. The optical center coordinate of the visible light camera is C(0, K); the corresponding upper edge point of the overlapping imaging is D(X4, Y4), whose ordinate is Y4 = |Y2 - K| * F2 / L + K and whose abscissa X4 is the focal length F2; and the lower edge point E of the visible light camera's imaging plane has coordinates (X5, Y5), where X5 is the focal length F2 and Y5 is the known longitudinal dimension of the visible light camera's imaging plane.
  • In this way the upper and lower edge positions of the overlapping imaging region in one plane are obtained; the same method yields the upper and lower edge positions of the overlapping imaging region in all planes, that is, all edge coordinates of the overlapping imaging region in three-dimensional space.
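  • The boundary formulas above translate directly into a short sketch; the numeric arguments in the example call are placeholders chosen only to exercise the function, not device parameters from the patent.

```python
def overlap_edges(L: float, K: float, F1: float, F2: float,
                  Y1: float, Y5: float) -> dict:
    """Edge points of the overlapping imaging region in one plane through the optical centers."""
    Y2 = Y1 * L / F1                # ordinate of B, the TOF view's lower edge at distance L
    Y4 = abs(Y2 - K) * F2 / L + K   # ordinate of D, the overlap's upper edge in the visible image
    return {"B": (-L, Y2), "D": (F2, Y4), "E": (F2, Y5)}

# Placeholder values: 5 m range, 2 cm baseline, 4 mm focal lengths, 3 mm sensor half-heights.
print(overlap_edges(L=5.0, K=0.02, F1=0.004, F2=0.004, Y1=0.003, Y5=0.003))
```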
  • In one embodiment, after the above step S4, the method further includes:
  • Step S5: Acquire three-dimensional information of each of the targets in the three-dimensional image, where the three-dimensional information includes distance information between the targets;
  • Step S6: Display the three-dimensional image and the three-dimensional information on the display screen.
  • In this embodiment, after the above three-dimensional image has been constructed, it can be shown on the display screen; at the same time, the specific information of each target can also be displayed so that the user can better understand each target.
  • First, the three-dimensional information of each target is acquired, including the distance information between the targets and the shape information of each target, such as length, width and height; the three-dimensional image is then shown on the display screen with the above three-dimensional information marked on the targets in the image.
  • This application also proposes a device for generating a three-dimensional image, used to execute the above-mentioned method for generating a three-dimensional image.
  • The device for generating a three-dimensional image can be implemented in the form of software or hardware.
  • Referring to FIG. 3, the above-mentioned device for generating a three-dimensional image includes:
  • the acquiring imaging unit 100 is configured to acquire overlapping imaging in which the TOF camera and the visible light camera overlap within the imaging range;
  • the target recognition unit 200 is configured to perform target recognition in the overlapping imaging
  • the target ranging unit 300 is configured to perform ranging of each identified target through the TOF camera to obtain depth information of each target, where the depth information is the distance information from the target to the TOF camera;
  • the image generating unit 400 is configured to generate the three-dimensional image of the overlapping imaging according to the depth information, and the three-dimensional image includes all the targets in the overlapping imaging.
  • The aforementioned TOF camera is based on existing TOF (Time of Flight) technology: the sensor emits modulated near-infrared light, which is reflected when it encounters an object, and the sensor calculates the time difference or phase difference between emission and reflection to convert it into the distance between the camera and the photographed scene, thereby obtaining depth information.
  • As described for the acquiring imaging unit 100, the above-mentioned smart device includes a variety of modes, for example a mode in which the TOF camera or the visible light camera shoots alone, yielding a three-dimensional contour image of the object or an ordinary two-dimensional image respectively. It also includes a mode in which a three-dimensional image of the scene is generated from the TOF camera and the visible light camera. When the smart device enters this mode, 3D modeling is turned on, and the overlapping imaging of the TOF camera and the visible light camera within the imaging range is first obtained. It should be noted that every object shot by a camera has a corresponding imaging surface, and the scene within that surface is determined by the optical center and focal length of the lens and the size of the imaging plane.
  • The TOF camera has a farthest measurable distance, denoted L for ease of description.
  • The imaging range of the TOF camera lies within this farthest distance.
  • The visible light camera does not limit the distance of the photographed object, so the region where the two overlap within the imaging range lies within the farthest measurable distance.
  • The above-mentioned overlapping imaging is the imaging of this overlapping region, and is part of the visible imaging of the visible light camera, so the appearance of the photographed object can be displayed in the overlapping imaging while the depth information of the objects within it is obtained through the TOF camera.
  • As described for the target recognition unit 200, target recognition is performed on the overlapping imaging, for example by recognizing scenes through a model or by determining the scenes in the overlapping imaging through comparison with pictures in a preset database.
  • The scenes here are the identified targets.
  • Referring to FIG. 4, in one embodiment, the targets in the overlapping imaging are recognized through a preset recognition model, and the foregoing target recognition unit 200 includes:
  • the model recognition subunit 201 is configured to recognize the image corresponding to the overlapping imaging by using a preset recognition model to obtain each target in the overlapping imaging.
  • In this embodiment, the image corresponding to the overlapping imaging is recognized through a preset recognition model; for example, an AI recognition algorithm performs object recognition on the pixels of the overlapping imaging to identify multiple targets in the overlapping imaging region, such as tables, chairs and boxes.
  • The above-mentioned preset recognition model can be implemented with the SSD algorithm or the DSST algorithm.
  • For example, the recognition model is an SSD model that adopts the VGG16 network structure, which includes convolutional layers, fully connected layers and pooling layers; target features are first extracted through the CNN network, and the classification of each target is then computed through the VGG16 network.
  • The classes include common household items such as tables, sofas, chairs and refrigerators.
  • When training the recognition model, samples are first collected to form a data set, including samples of the above categories such as tables, sofas, chairs and refrigerators; these are fed into a preset initial model for training, and methods such as loss compensation and data augmentation are used to improve the performance of the SSD algorithm, for example by calculating the loss value through the loss function, calculating parameter gradients through network backpropagation, and updating the model parameters until the model converges, thereby obtaining the above-mentioned recognition model.
  • As described for the target ranging unit 300, the TOF camera is used to measure the distance of each identified target to obtain the corresponding depth information; that is, each identified target is ranged through TOF technology. If, in reality, the plane of a target facing the TOF camera is taken as its relative plane, the distance from each point on that relative plane to the TOF camera is the depth information.
  • Referring to FIG. 5, in one embodiment, the above-mentioned target ranging unit 300 includes:
  • the marking target subunit 301 is configured to mark each target in the overlapping imaging to obtain contour information of each target, where the contour information includes pixel information of the contour of the target;
  • the target ranging subunit 302 is configured to perform ranging of the target according to the pixel information, and obtain the distance information from the physical point corresponding to each pixel point in the contour of the target to the TOF camera.
  • In this embodiment, in order to obtain more detailed depth information, each target in the overlapping imaging is first labeled; while distinguishing the targets, the contour information corresponding to each target can be obtained from the labeling. The contour information includes the pixel information of the target's contour, and the pixels of the contour represent the entire displayed outline of the target; the camera pixel count of a visible light camera, for example, is a fixed value. When labeling, a target can be framed with a rectangular box and its contour then delineated according to its edge pixels. The target is ranged according to the pixel size corresponding to each target, yielding the distance information of every pixel point within each target's contour, that is, the above-mentioned depth information.
  • This distance information is the distance from each physical point within the target's contour to the TOF camera, with the physical points corresponding one-to-one to the pixel points. It should be understood that every surface of a target is composed of points; the physical points here are the points on the side of the actual target that faces the TOF camera. A pixel is the basic unit of image display, so by obtaining the depth information of the target's pixels, the depth information of each displayed target contour is obtained, which provides the basis for subsequently building a three-dimensional figure of each target.
  • As described for the image generating unit 400, after the depth information of each target has been obtained, 3D modeling is used to construct a three-dimensional image corresponding to each target based on that information. Because the depth information includes the real-world distance from the physical point corresponding to each pixel of the imaged target to the TOF camera, the three-dimensional shape of each target can be derived from it, and the three-dimensional contour map of the target is then constructed.
  • Since the above-mentioned targets are targets in the visible imaging of the visible light camera, while the three-dimensional contour map of each target is constructed, the RGB primary colors actually captured for each target, that is, the colors displayed by superimposing the red (R), green (G) and blue (B) color channels in reality, are obtained from the visible light camera's imaging at the same time, so that the constructed three-dimensional image is consistent with the actual original target.
  • Specifically, referring to FIG. 6, the above-mentioned image generating unit 400 includes:
  • the calculation data subunit 401 is configured to calculate target data of each of the targets according to the depth information, the target data including the actual distance between each of the targets and contour information of each of the targets;
  • the image building subunit 402 is configured to construct a three-dimensional electronic map corresponding to each target in a preset actual coordinate system according to the distance of each target and the outline of each target to obtain the three-dimensional image.
  • In this embodiment, the target data of each target is obtained from the depth information.
  • The target data comprises the contour information of the target, such as its length, width, height, geometric center, centroid and depth, as well as the actual distances between the targets calculated from this contour information.
  • A preset actual coordinate system is then established, in which the TOF camera serves as the coordinate origin or the ground serves as the reference XY plane, and the three-dimensional coordinates of the targets are constructed; the distance between targets is then obtained by plane-triangle side-length calculation, or by the coordinate distance formula between two points in space geometry within the three-dimensional coordinates.
  • Having obtained the distances between the targets, a three-dimensional electronic map of the targets is constructed, yielding the above-mentioned three-dimensional image.
  • Referring to FIG. 7, in one embodiment, the above-mentioned acquiring imaging unit 100 includes:
  • the acquiring imaging subunit 101 is configured to acquire visible imaging within the imaging range of the visible light camera;
  • the obtaining overlap subunit 102 is configured to acquire the overlapping imaging within the visible imaging according to the maximum object distance detectable by the TOF camera and the distance between the lens optical centers of the TOF camera and the visible light camera.
  • In this embodiment, the visible imaging within the imaging range of the visible light camera is first obtained, and the region where the imaging ranges of the TOF camera and the visible light camera overlap is then located within that visible imaging. Specifically, this can be determined from the maximum object distance detectable by the TOF camera and the distance between the lens optical centers of the two cameras.
  • The maximum object distance detectable by the TOF camera is the maximum ranging distance of the TOF camera mentioned above. Since the TOF camera has a maximum ranging distance, its shooting range and the corresponding TOF imaging can be obtained from the pinhole imaging principle, and the shooting range and imaging surface of the visible light camera can likewise be obtained.
  • In the smart device, the two lenses lie in the same plane, so the closer the lens optical centers, the larger the overlap, and the farther apart the optical centers, the smaller the overlap between the two.
  • This embodiment does not limit the distance between the two lenses, which can be set according to actual needs.
  • Preferably, referring to FIG. 8, the above-mentioned obtaining overlap subunit 102 includes:
  • an obtaining range module 1021, used to obtain the TOF imaging range of the TOF camera according to the focal length of the TOF camera and the maximum object distance detectable by the TOF camera;
  • an obtaining area module 1022, configured to obtain the imaging region of the visible light camera within the TOF imaging range according to the distance between the lens optical centers of the TOF camera and the visible light camera, and the focal length of the visible light camera; and
  • a calculating area module 1023, configured to calculate the area that the imaging region occupies in the visible imaging when the visible light camera forms an image, to obtain the overlapping imaging.
  • In this embodiment, the TOF imaging range of the TOF camera is first obtained from the focal length of the TOF camera and its maximum detectable object distance; since both the focal length and the maximum object distance of the TOF camera are fixed, the size of the corresponding imaging plane can also be determined, and the TOF imaging range then follows from the pinhole imaging principle.
  • Correspondingly, the visible imaging range of the visible light camera can be determined from the focal length of the visible light camera. With the distance between the two lens optical centers fixed, the overlapping shooting region of the two cameras is obtained, that is, the imaging region of the visible light camera within the TOF imaging range; the area this region occupies when the visible light camera forms an image is then calculated, giving the above-mentioned overlapping imaging. Specifically, this can be computed with a preset formula, for example one set up from the principle of similar triangles, or by establishing a coordinate system and computing the coordinates of the overlapping imaging.
  • For example, referring to FIG. 2, a coordinate system is established with the lens optical center of the TOF camera as the origin O, the straight line through the lens optical centers of the TOF camera and the visible light camera as the Y axis, and the straight line through the origin perpendicular to that connecting line as the X axis. Since the TOF camera has an effective ranging distance, let the effective ranging distance of the TOF camera be L, and let the distance from the lens optical center of the visible light camera to that of the TOF camera be K.
  • By the principle of similar triangles, the coordinate values of the overlapping imaging boundary can be obtained, and the pixels corresponding to those coordinates are then obtained from the size of the imaging pixels.
  • Take one of the planes through the two optical centers as an example: the effective ranging distance of the TOF camera is L, the focal length of the TOF camera is F1, the focal length of the visible light camera is F2, and the optical center distance between the TOF camera and the visible light camera is K. From the imaging principle, the overlapping shooting range of the TOF camera and the visible light camera is BG, and the imaging region of this range in the imaging plane of the visible light camera is DE. The upper edge coordinate of the TOF camera's imaging plane is A(X1, Y1), where X1 is the focal length F1 and Y1 is the known longitudinal dimension of the TOF camera's imaging plane. By similar triangles, the lower edge B of the TOF camera at its maximum object distance has ordinate Y2 and abscissa X2 = -L, with Y2 = Y1 * L / F1. The optical center coordinate of the visible light camera is C(0, K); the corresponding upper edge point of the overlapping imaging is D(X4, Y4), whose ordinate is Y4 = |Y2 - K| * F2 / L + K and whose abscissa X4 is the focal length F2; and the lower edge point E of the visible light camera's imaging plane has coordinates (X5, Y5), where X5 is the focal length F2 and Y5 is the known longitudinal dimension of the visible light camera's imaging plane.
  • In this way the upper and lower edge positions of the overlapping imaging region in one plane are obtained; the same method yields the upper and lower edge positions of the overlapping imaging region in all planes, that is, all edge coordinates of the overlapping imaging region in three-dimensional space.
  • Referring to FIG. 9, in one embodiment, the above-mentioned device for generating a three-dimensional image further includes:
  • the obtaining distance unit 500 is configured to obtain three-dimensional information of each of the targets in the three-dimensional image, where the three-dimensional information includes distance information between each of the targets;
  • the image display unit 600 is used to display the three-dimensional image and three-dimensional information on a display screen.
  • In this embodiment, after the above three-dimensional image has been constructed, it can be shown on the display screen; at the same time, the specific information of each target can also be displayed so that the user can better understand each target.
  • First, the three-dimensional information of each target is acquired, including the distance information between the targets and the shape information of each target, such as length, width and height; the three-dimensional image is then shown on the display screen with the above three-dimensional information marked on the targets in the image.
  • Referring to FIG. 10, the present application also provides a computer-readable storage medium 21; the storage medium 21 stores a computer program 22 which, when run on a computer, causes the computer to execute the method for generating a three-dimensional image described in the above embodiments.
  • Referring to FIG. 11, the present application also provides a computer device 34 containing instructions.
  • The computer device includes a memory 31 and a processor 33.
  • The memory 31 stores a computer program 22, and when the processor 33 executes the computer program 22, the method for generating a three-dimensional image described in the above embodiments is implemented.
  • In the above embodiments, the implementation may be realized wholly or partly by software, hardware, firmware or any combination thereof; when software is used, it may be realized wholly or partly in the form of a computer program product.
  • The computer program product includes one or more computer instructions; when the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part.
  • The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
  • The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • For example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired means (such as coaxial cable, optical fiber or digital subscriber line (DSL)) or wireless means (such as infrared, radio or microwave).
  • The computer-readable storage medium may be any available medium that a computer can store, or a data storage device such as a server or data center integrating one or more available media.
  • The available medium may be a magnetic medium (for example, a floppy disk, hard disk or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a Solid State Disk (SSD)), etc.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

This application discloses a method, device and computer equipment for generating three-dimensional images. Since what is acquired is the overlapping imaging region in which the TOF camera and the visible light camera overlap within the imaging range, and the imaging is performed by the visible light camera while the depth information is acquired by the TOF camera, a three-dimensional image consistent in contour and color with the original target is constructed; the image is not only lifelike but also requires relatively little computation.

Description

Method and device for generating three-dimensional images, and computer equipment
This application claims priority to the Chinese patent application with application number CN202010479011.8, filed with the Chinese Patent Office on May 29, 2020 and entitled "Method, device, storage medium and computer equipment for generating three-dimensional images", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of cameras, and in particular to a method, device and computer equipment for generating three-dimensional images.
Background Art
In the prior art, a camera can usually only capture images in a two-dimensional plane. With the development of technology, however, people are no longer satisfied with this way of shooting and have begun to pursue more lifelike three-dimensional images. At present, three-dimensional images are usually shot with a TOF camera, and TOF technology is a ranging technology based on an infrared light source; a TOF camera can only measure the infrared contour of an object and can only present the object's three-dimensional contour as a topographic map in which different colors represent different distances, which is not convenient for human viewing.
Technical Problem
The main purpose of this application is to provide a method, device and computer equipment for generating three-dimensional images, aiming to solve the technical problem in the prior art that three-dimensional images generated by a TOF camera are inconvenient to view.
Technical Solution
Based on the above object of the invention, an embodiment of this application proposes a method for generating a three-dimensional image, applied to a smart device that includes a TOF camera and a visible light camera, the method including:
acquiring the overlapping imaging in which the TOF camera and the visible light camera overlap within the imaging range;
performing target recognition in the overlapping imaging;
performing distance measurement on each identified target through the TOF camera to obtain depth information of each target, the depth information being the distance information from the target to the TOF camera;
generating a three-dimensional image of the overlapping imaging according to the depth information, the three-dimensional image including all the targets in the overlapping imaging.
Further, the step of acquiring the overlapping imaging in which the TOF camera and the visible light camera overlap within the imaging range also includes:
acquiring the visible imaging within the imaging range of the visible light camera;
acquiring the overlapping imaging within the visible imaging according to the maximum object distance detectable by the TOF camera and the distance between the lens optical centers of the TOF camera and the visible light camera.
Further, the step of acquiring the overlapping imaging within the visible imaging according to the maximum object distance detectable by the TOF camera and the distance between the lens optical centers of the TOF camera and the visible light camera includes:
obtaining the TOF imaging range of the TOF camera according to the focal length of the TOF camera and the maximum object distance detectable by the TOF camera;
obtaining the imaging region of the visible light camera within the TOF imaging range according to the distance between the lens optical centers of the TOF camera and the visible light camera, and the focal length of the visible light camera;
calculating the area that the imaging region occupies in the visible imaging when the visible light camera forms an image, to obtain the overlapping imaging.
Further, the step of performing target recognition in the overlapping imaging includes:
recognizing the image corresponding to the overlapping imaging through a preset recognition model to obtain each target in the overlapping imaging.
Further, the step of performing distance measurement on each identified target through the TOF camera to obtain the depth information of each target includes:
labeling each target in the overlapping imaging to obtain contour information of each target, the contour information including pixel information of the contour of the target;
performing distance measurement on the target according to the pixel information to obtain the distance information from the physical point corresponding to each pixel point within the contour of the target to the TOF camera.
Further, the step of generating a three-dimensional image corresponding to each target according to the depth information includes:
calculating target data of each target according to the depth information, the target data including the actual distance between the targets and the contour information of each target;
constructing, in a preset actual coordinate system, a three-dimensional electronic map corresponding to each target according to the distance between the targets and the contour of each target, so as to obtain the three-dimensional image.
Further, after the step of generating the three-dimensional image of the overlapping imaging according to the depth information, the method includes:
acquiring three-dimensional information of each target in the three-dimensional image, the three-dimensional information including the distance information between the targets;
displaying the three-dimensional image and the three-dimensional information on a display screen.
An embodiment of this application also proposes a device for generating a three-dimensional image, including:
an acquiring imaging unit, used to acquire the overlapping imaging in which the TOF camera and the visible light camera overlap within the imaging range;
a target recognition unit, used to perform target recognition in the overlapping imaging;
a target ranging unit, used to perform distance measurement on each identified target through the TOF camera to obtain depth information of each target, the depth information being the distance information from the target to the TOF camera;
an image generating unit, used to generate a three-dimensional image of the overlapping imaging according to the depth information, the three-dimensional image including all the targets in the overlapping imaging.
Further, the acquiring imaging unit includes:
an acquiring imaging subunit, used to acquire the visible imaging within the imaging range of the visible light camera;
an obtaining overlap subunit, used to acquire the overlapping imaging within the visible imaging according to the maximum object distance detectable by the TOF camera and the distance between the lens optical centers of the TOF camera and the visible light camera.
Further, the obtaining overlap subunit includes:
an obtaining range module, used to obtain the TOF imaging range of the TOF camera according to the focal length of the TOF camera and the maximum object distance detectable by the TOF camera;
an obtaining area module, used to obtain the imaging region of the visible light camera within the TOF imaging range according to the distance between the lens optical centers of the TOF camera and the visible light camera, and the focal length of the visible light camera;
a calculating area module, used to calculate the area that the imaging region occupies in the visible imaging when the visible light camera forms an image, to obtain the overlapping imaging.
Further, the target recognition unit includes:
a model recognition subunit, used to recognize the image corresponding to the overlapping imaging through a preset recognition model to obtain each target in the overlapping imaging.
Further, the target ranging unit includes:
a labeling target subunit, used to label each target in the overlapping imaging to obtain contour information of each target, the contour information including pixel information of the contour of the target;
a target ranging subunit, used to perform distance measurement on the target according to the pixel information to obtain the distance information from the physical point corresponding to each pixel point within the contour of the target to the TOF camera.
Further, the image generating unit includes:
a calculating data subunit, used to calculate target data of each target according to the depth information, the target data including the actual distance between the targets and the contour information of each target;
a constructing image subunit, used to construct, in a preset actual coordinate system, a three-dimensional electronic map corresponding to each target according to the distance between the targets and the contour of each target, so as to obtain the three-dimensional image.
Further, the device for generating a three-dimensional image also includes:
an acquiring distance unit, used to acquire three-dimensional information of each target in the three-dimensional image, the three-dimensional information including the distance information between the targets;
a displaying image unit, used to display the three-dimensional image and the three-dimensional information on a display screen.
An embodiment of this application also proposes a storage medium, which is a computer-readable storage medium on which a computer program is stored; when the computer program is executed, the above method for generating a three-dimensional image is realized.
An embodiment of this application also proposes a computer device, which includes a processor, a memory, and a computer program stored on the memory and capable of running on the processor; when the computer program is executed, the above method for generating a three-dimensional image is realized.
Beneficial Effects
This application proposes a method, device and computer equipment for generating three-dimensional images. In the method, the overlapping imaging in which the TOF camera and the visible light camera overlap within the imaging range is acquired, the targets within that region are identified, and each target is then ranged to obtain depth information, from which the corresponding three-dimensional image is constructed. Since the imaging is performed by the visible light camera and the depth information is acquired by the TOF camera, a three-dimensional image consistent in contour and color with the original target can be constructed; in this way a lifelike three-dimensional image is obtained through the combined processing of the two cameras, with a small amount of calculation, which effectively saves resources.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a method for generating a three-dimensional image according to an embodiment of the present application;
FIG. 2 is a plane imaging diagram of a coordinate system established with the lens optical center of the TOF camera as the origin according to an embodiment of the present application;
FIG. 3 is a schematic structural block diagram of a device for generating a three-dimensional image according to an embodiment of the present application;
FIG. 4 is a schematic structural block diagram of a target recognition unit according to an embodiment of the present application;
FIG. 5 is a schematic structural block diagram of a target ranging unit according to an embodiment of the present application;
FIG. 6 is a schematic structural block diagram of an image generating unit according to an embodiment of the present application;
FIG. 7 is a schematic structural block diagram of an acquiring imaging unit according to an embodiment of the present application;
FIG. 8 is a schematic structural block diagram of an obtaining overlap subunit according to an embodiment of the present application;
FIG. 9 is a schematic structural block diagram of a device for generating a three-dimensional image according to another embodiment of the present application;
FIG. 10 is a schematic structural block diagram of an embodiment of the storage medium of the present application;
FIG. 11 is a schematic structural block diagram of an embodiment of the computer device of the present application.
Best Mode for Carrying Out the Invention
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some rather than all of the embodiments of this application. Based on the embodiments of this application, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the scope of protection of this application.
Referring to FIG. 1, a schematic flowchart of the method for generating a three-dimensional image provided by this application: the method may be executed by a device for generating a three-dimensional image, and that device may be implemented in the form of software or hardware. An embodiment of this application provides a method for generating a three-dimensional image, applied to a smart device that includes a TOF camera and a visible light camera. Specifically, the method includes:
Step S1: acquiring the overlapping imaging in which the TOF camera and the visible light camera overlap within the imaging range;
Step S2: performing target recognition in the overlapping imaging;
Step S3: performing distance measurement on each identified target through the TOF camera to obtain depth information of each target, the depth information being the distance information from the target to the TOF camera;
Step S4: generating a three-dimensional image of the overlapping imaging according to the depth information, the three-dimensional image including all the targets in the overlapping imaging.
In this embodiment, the above TOF camera is based on existing TOF (Time of Flight) technology: the sensor emits modulated near-infrared light, which is reflected when it encounters an object, and the sensor calculates the time difference or phase difference between emission and reflection to convert it into the distance between the camera and the photographed scene, thereby obtaining depth information.
As described in the above step S1, the smart device includes multiple modes, such as a mode in which the TOF camera or the visible light camera shoots alone, yielding a three-dimensional contour image of the object or an ordinary two-dimensional image respectively; it also includes a mode in which a three-dimensional image of the scene is generated from the TOF camera and the visible light camera. When the smart device enters this mode, 3D modeling is turned on, and the overlapping imaging of the TOF camera and the visible light camera within the imaging range is first acquired. It should be noted that when a camera shoots an object there is a corresponding imaging surface, and the scene within that surface is determined by the optical center and focal length of the lens and the size of the imaging plane. The TOF camera has a farthest measurable distance, denoted L for ease of description, and the imaging range of the TOF camera lies within this farthest distance. The visible light camera does not limit the distance of the photographed object, so the region where the two overlap within the imaging range lies within the farthest measurable distance. The above overlapping imaging is the imaging of this overlapping region and is part of the visible imaging of the visible light camera, so the appearance of the photographed object can be displayed in the overlapping imaging while the depth information of the objects within it is also obtained through the TOF camera.
As described in the above step S2, target recognition is performed on the overlapping imaging, for example by recognizing scenes through a model or by determining the scenes in the overlapping imaging through comparison with pictures in a preset database; the scenes here are the identified targets. In one embodiment, the targets in the overlapping imaging are recognized through a preset recognition model, and the above step S2 then includes:
Step S21: recognizing the image corresponding to the overlapping imaging through a preset recognition model to obtain each target in the overlapping imaging.
In this embodiment, the image corresponding to the overlapping imaging is recognized through a preset recognition model; for example, an AI recognition algorithm performs object recognition on the pixels of the overlapping imaging to identify multiple targets in the overlapping imaging region, such as tables, chairs and boxes. The above preset recognition model can be implemented with the SSD algorithm or the DSST algorithm. For example, the recognition model is an SSD model adopting the VGG16 network structure, which includes convolutional layers, fully connected layers and pooling layers; target features are first extracted through the CNN network, and the classification of each target is then computed through the VGG16 network, the classes including common household items such as tables, sofas, chairs and refrigerators. When training the recognition model, samples are first collected to form a data set, including samples of the above categories such as tables, sofas, chairs and refrigerators; these are fed into a preset initial model for training, and methods such as loss compensation and data augmentation are used to improve the performance of the SSD algorithm, for example by calculating the loss value through the loss function, calculating parameter gradients through network backpropagation, and updating the model parameters until the model converges, thereby obtaining the above recognition model.
As described in the above step S3, the TOF camera is used to measure the distance of each identified target to obtain the corresponding depth information; that is, each identified target is ranged through TOF technology. If, in reality, the plane of a target facing the TOF camera is taken as its relative plane, the distance from each point on that relative plane to the TOF camera is the depth information. In one embodiment, the above step S3 includes:
Step S31: labeling each target in the overlapping imaging to obtain contour information of each target, the contour information including pixel information of the contour of the target;
Step S32: performing distance measurement on the target according to the pixel information to obtain the distance information from the physical point corresponding to each pixel point within the contour of the target to the TOF camera.
In this embodiment, in order to obtain more detailed depth information, each target in the overlapping imaging is first labeled; while distinguishing the targets, the contour information corresponding to each target can be obtained from the labeling. The contour information includes the pixel information of the target's contour, and the pixels of the contour represent the entire displayed outline of the target; the camera pixel count of a visible light camera, for example, is a fixed value. When labeling, a target can be framed with a rectangular box and its contour then delineated according to its edge pixels. The target is ranged according to the pixel size corresponding to each target, yielding the distance information of every pixel point within each target's contour, that is, the above depth information. This distance information is the distance from each physical point within the target's contour to the TOF camera, with the physical points corresponding one-to-one to the pixel points; it should be understood that every surface of a target is composed of points, and the physical points here are the points on the side of the actual target facing the TOF camera. A pixel is the basic unit of image display, so by obtaining the depth information of the target's pixels, the depth information of each displayed target contour is obtained, which provides the basis for subsequently building a three-dimensional figure of each target.
As described in the above step S4, after the depth information of each target has been obtained, 3D modeling is used to construct a three-dimensional image corresponding to each target based on that information. Because the depth information includes the real-world distance from the physical point corresponding to each pixel of the imaged target to the TOF camera, the three-dimensional shape of each target can be derived from it and a three-dimensional map of the target constructed. Since the above targets are targets in the visible imaging of the visible light camera, while the three-dimensional contour map of each target is constructed, the RGB primary colors actually captured for each target, that is, the colors displayed by superimposing the red (R), green (G) and blue (B) color channels in reality, are obtained from the visible light camera's imaging at the same time, so that the constructed three-dimensional image is consistent with the actual original target.
Specifically, the above step S4 includes:
Step S41: calculating target data of each target according to the depth information, the target data including the actual distance between the targets and the contour information of each target;
Step S42: constructing, in a preset actual coordinate system, a three-dimensional electronic map corresponding to each target according to the distance between the targets and the contour of each target, so as to obtain the three-dimensional image.
In this embodiment, the target data of each target is obtained from the depth information. The target data is the contour information of the target, such as its length, width, height, geometric center, centroid and depth, together with the actual distances between the targets calculated from this contour information. First, the center point of each target is calculated: if the target has a regular shape, the geometric center calculated from its contour is used as the center point; if the shape is irregular, the centroid of the target is used as the center point. A preset actual coordinate system is then established, in which the TOF camera serves as the coordinate origin or the ground serves as the reference XY plane, and the three-dimensional coordinates of the targets are constructed; the distances between the targets are then obtained by plane-triangle side-length calculation, or by the coordinate distance formula between two points in space geometry within the three-dimensional coordinates. For example, the camera can first observe the ground plane, take the ground plane as the XY reference plane of the coordinate system and the vertical line perpendicular to the XY plane at their intersection as the Z axis; the distance between the center points of two targets in space can then be found with the point-to-point coordinate distance formula. Having obtained the distances between the targets in this way, a three-dimensional electronic map of the targets is constructed, thereby obtaining the above three-dimensional image.
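The center-point rule just described (geometric center for regular shapes, centroid otherwise) can be sketched minimally as below; the contour points are made-up example data, not values from the patent.

```python
import numpy as np

def contour_centroid(points: np.ndarray) -> np.ndarray:
    """Centroid of a target's contour points, used as the center of an irregular target."""
    return points.mean(axis=0)

# Hypothetical (x, y, z) contour samples of an irregular target, in meters.
contour = np.array([[0.0, 0.0, 0.0], [1.0, 0.1, 0.0], [1.2, 0.9, 0.0], [0.1, 1.0, 0.0]])
print(contour_centroid(contour))  # center point fed into the inter-target distance step
```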
In one embodiment, the above step S1 includes:
Step S11: acquiring the visible imaging within the imaging range of the visible light camera;
Step S12: acquiring the overlapping imaging within the visible imaging according to the maximum object distance detectable by the TOF camera and the distance between the lens optical centers of the TOF camera and the visible light camera.
In this embodiment, the visible imaging within the imaging range of the visible light camera is first acquired, and the region where the imaging ranges of the TOF camera and the visible light camera overlap is then located within that visible imaging. Specifically, this can be determined from the maximum object distance detectable by the TOF camera and the distance between the lens optical centers of the TOF camera and the visible light camera; the maximum detectable object distance here is the maximum ranging distance of the above TOF camera. Since the TOF camera has a maximum ranging distance, its shooting range and the corresponding TOF imaging can be obtained from the pinhole imaging principle, and the shooting range and imaging surface of the visible light camera can likewise be obtained. In the smart device, the two lenses lie in the same plane, so the closer the lens optical centers, the larger the overlap, and the farther apart the optical centers, the smaller the overlap between the two. This embodiment does not limit the distance between the two lenses, which can be set according to actual needs.
Preferably, the above step S12 includes:
Step S121: obtaining the TOF imaging range of the TOF camera according to the focal length of the TOF camera and the maximum object distance detectable by the TOF camera;
Step S122: obtaining the imaging region of the visible light camera within the TOF imaging range according to the distance between the lens optical centers of the TOF camera and the visible light camera, and the focal length of the visible light camera;
Step S123: calculating the area that the imaging region occupies in the visible imaging when the visible light camera forms an image, to obtain the overlapping imaging.
In this embodiment, the TOF imaging range of the TOF camera is first obtained from the focal length of the TOF camera and its maximum detectable object distance; since both the focal length and the maximum object distance of the TOF camera are fixed, the size of the corresponding imaging plane can also be determined, and the TOF imaging range then follows from the pinhole imaging principle. Correspondingly, the visible imaging range of the visible light camera can be determined from the focal length of the visible light camera. With the distance between the two lens optical centers fixed, the overlapping shooting region of the two cameras is obtained, that is, the imaging region of the visible light camera within the TOF imaging range; the area this region occupies when the visible light camera forms an image is then calculated, giving the above overlapping imaging. Specifically, this can be computed with a preset formula, for example one set up from the principle of similar triangles, or by establishing a coordinate system and computing the coordinates of the overlapping imaging.
For example, referring to FIG. 2, a coordinate system is established with the lens optical center of the TOF camera as the origin O, the straight line through the lens optical centers of the TOF camera and the visible light camera as the Y axis, and the straight line through the origin perpendicular to that connecting line as the X axis. Since the TOF camera has an effective ranging distance, let the effective ranging distance of the TOF camera be L, and let the distance from the lens optical center of the visible light camera to that of the TOF camera be K. By the principle of similar triangles, the coordinate values of the overlapping imaging boundary can be obtained, and the pixels corresponding to those coordinates are then obtained from the size of the imaging pixels.
Take one of the planes through the two optical centers as an example: the effective ranging distance of the TOF camera is L, the focal length of the TOF camera is F1, the focal length of the visible light camera is F2, and the optical center distance between the TOF camera and the visible light camera is K. From the imaging principle, the overlapping shooting range of the TOF camera and the visible light camera is BG, and the imaging region of this range in the imaging plane of the visible light camera is DE. The upper edge coordinate of the TOF camera's imaging plane is A(X1, Y1), where X1 is the focal length F1 and Y1 is the known longitudinal dimension of the TOF camera's imaging plane. By the principle of similar triangles, the lower edge B of the TOF camera at its maximum object distance has ordinate Y2 and abscissa X2 = -L, with Y2 = Y1 * L / F1. The optical center coordinate of the visible light camera is C(0, K); the corresponding upper edge point of the overlapping imaging is D(X4, Y4), whose ordinate is Y4 = |Y2 - K| * F2 / L + K and whose abscissa X4 is the focal length F2; and the lower edge point E of the visible light camera's imaging plane has coordinates (X5, Y5), where X5 is the focal length F2 of the visible light camera and Y5 is the known longitudinal dimension of the visible light camera's imaging plane.
In this way the upper and lower edge positions of the overlapping imaging region in one plane are obtained; by the same method the upper and lower edge positions of the overlapping imaging region in all planes can be obtained, that is, all edge coordinates of the overlapping imaging region in three-dimensional space.
In one embodiment, after the above step S4, the method further includes:
Step S5: acquiring three-dimensional information of each target in the three-dimensional image, the three-dimensional information including the distance information between the targets;
Step S6: displaying the three-dimensional image and the three-dimensional information on a display screen.
In this embodiment, after the above three-dimensional image has been constructed, it can be shown on the display screen; at the same time, the specific information of each target can also be displayed so that the user can better understand each target. First, the three-dimensional information of each target is acquired, including the distance information between the targets and the shape information of each target, such as length, width and height; the three-dimensional image is then shown on the display screen with the above three-dimensional information marked on the targets in the image.
This application also proposes a device for generating a three-dimensional image, used to execute the above method for generating a three-dimensional image; the device may be implemented in the form of software or hardware. Referring to FIG. 3, the above device for generating a three-dimensional image includes:
an acquiring imaging unit 100, used to acquire the overlapping imaging in which the TOF camera and the visible light camera overlap within the imaging range;
a target recognition unit 200, used to perform target recognition in the overlapping imaging;
a target ranging unit 300, used to perform distance measurement on each identified target through the TOF camera to obtain depth information of each target, the depth information being the distance information from the target to the TOF camera;
an image generating unit 400, used to generate a three-dimensional image of the overlapping imaging according to the depth information, the three-dimensional image including all the targets in the overlapping imaging.
In this embodiment, the above TOF camera is based on existing TOF (Time of Flight) technology: the sensor emits modulated near-infrared light, which is reflected when it encounters an object, and the sensor calculates the time difference or phase difference between emission and reflection to convert it into the distance between the camera and the photographed scene, thereby obtaining depth information.
As described for the above acquiring imaging unit 100, the smart device includes multiple modes, such as a mode in which the TOF camera or the visible light camera shoots alone, yielding a three-dimensional contour image of the object or an ordinary two-dimensional image respectively; it also includes a mode in which a three-dimensional image of the scene is generated from the TOF camera and the visible light camera. When the smart device enters this mode, 3D modeling is turned on, and the overlapping imaging of the TOF camera and the visible light camera within the imaging range is first acquired. It should be noted that every object shot by a camera has a corresponding imaging surface, and the scene within that surface is determined by the optical center and focal length of the lens and the size of the imaging plane. The TOF camera has a farthest measurable distance, denoted L for ease of description, and the imaging range of the TOF camera lies within this farthest distance. The visible light camera does not limit the distance of the photographed object, so the region where the two overlap within the imaging range lies within the farthest measurable distance. The above overlapping imaging is the imaging of this overlapping region and is part of the visible imaging of the visible light camera, so the appearance of the photographed object can be displayed in the overlapping imaging while the depth information of the objects within it is also obtained through the TOF camera.
As described for the above target recognition unit 200, target recognition is performed on the overlapping imaging, for example by recognizing scenes through a model or by determining the scenes in the overlapping imaging through comparison with pictures in a preset database; the scenes here are the identified targets. Referring to FIG. 4, in one embodiment, the targets in the overlapping imaging are recognized through a preset recognition model, and the above target recognition unit 200 then includes:
a model recognition subunit 201, used to recognize the image corresponding to the overlapping imaging through a preset recognition model to obtain each target in the overlapping imaging.
In this embodiment, the image corresponding to the overlapping imaging is recognized through a preset recognition model; for example, an AI recognition algorithm performs object recognition on the pixels of the overlapping imaging to identify multiple targets in the overlapping imaging region, such as tables, chairs and boxes. The above preset recognition model can be implemented with the SSD algorithm or the DSST algorithm. For example, the recognition model is an SSD model adopting the VGG16 network structure, which includes convolutional layers, fully connected layers and pooling layers; target features are first extracted through the CNN network, and the classification of each target is then computed through the VGG16 network, the classes including common household items such as tables, sofas, chairs and refrigerators. When training the recognition model, samples are first collected to form a data set, including samples of the above categories such as tables, sofas, chairs and refrigerators; these are fed into a preset initial model for training, and methods such as loss compensation and data augmentation are used to improve the performance of the SSD algorithm, for example by calculating the loss value through the loss function, calculating parameter gradients through network backpropagation, and updating the model parameters until the model converges, thereby obtaining the above recognition model.
As described for the above target ranging unit 300, the TOF camera is used to measure the distance of each identified target to obtain the corresponding depth information; that is, each identified target is ranged through TOF technology. If, in reality, the plane of a target facing the TOF camera is taken as its relative plane, the distance from each point on that relative plane to the TOF camera is the depth information. Referring to FIG. 5, in one embodiment, the above target ranging unit 300 includes:
a labeling target subunit 301, used to label each target in the overlapping imaging to obtain contour information of each target, the contour information including pixel information of the contour of the target;
a target ranging subunit 302, used to perform distance measurement on the target according to the pixel information to obtain the distance information from the physical point corresponding to each pixel point within the contour of the target to the TOF camera.
In this embodiment, in order to obtain more detailed depth information, each target in the overlapping imaging is first labeled; while distinguishing the targets, the contour information corresponding to each target can be obtained from the labeling. The contour information includes the pixel information of the target's contour, and the pixels of the contour represent the entire displayed outline of the target; the camera pixel count of a visible light camera, for example, is a fixed value. When labeling, a target can be framed with a rectangular box and its contour then delineated according to its edge pixels. The target is ranged according to the pixel size corresponding to each target, yielding the distance information of every pixel point within each target's contour, that is, the above depth information. This distance information is the distance from each physical point within the target's contour to the TOF camera, with the physical points corresponding one-to-one to the pixel points; it should be understood that every surface of a target is composed of points, and the physical points here are the points on the side of the actual target facing the TOF camera. A pixel is the basic unit of image display, so by obtaining the depth information of the target's pixels, the depth information of each displayed target contour is obtained, which provides the basis for subsequently building a three-dimensional figure of each target.
As described for the above image generating unit 400, after the depth information of each target has been obtained, 3D modeling is used to construct a three-dimensional image corresponding to each target based on that information. Because the depth information includes the real-world distance from the physical point corresponding to each pixel of the imaged target to the TOF camera, the three-dimensional shape of each target can be derived from it and the three-dimensional contour map of the target constructed. Since the above targets are targets in the visible imaging of the visible light camera, while the three-dimensional contour map of each target is constructed, the RGB primary colors actually captured for each target, that is, the colors displayed by superimposing the red (R), green (G) and blue (B) color channels in reality, are obtained from the visible light camera's imaging at the same time, so that the constructed three-dimensional image is consistent with the actual original target.
具体而言,参照图6,上述生成图像单元400,包括:
计算数据子单元401,用于依据所述深度信息计算出各所述目标的目标数据,所述目标数据包括各所述目标之间的实际距离以及各所述目标的轮廓信息;
构建图像子单元402,用于在预设的实际坐标系中依据各所述目标的距离以及各所述目标的轮廓构建对应各所述目标的三维电子地图,以得到所述三维图像。
本实施例中,通过深度信息得到各个目标的目标数据,这些目标数据为目标的轮廓信息,如包括目标的长、宽、高、几何中心、质心,深度等,以及通过这些轮廓信息计算出的各个目标之间的实际距离,其中,首选计算各个目标的中心点,若目标的形状为规则形状,则可通过目标的轮廓计算出几何中心作为中心点,若目标的形状为不规则形状,则以目标的质心作为中心点,然后建立一个预设的实际坐标系,该实际坐标系中,以TOF摄像头做坐标原点或者将地面作为XY面的参照面,构建多个目标的三维坐标,再利用平面三角形边长计算,或者三维坐标内,空间几何两点坐标距离计算公式进行计算,得到各个目标之间的距离,例如,通过摄像头可以首先观测地平面,以地平面作为上述坐标系的XY参照面,以垂直于XY面的交汇处的垂线作为Z轴,则空间中两个目标中心点之间的距离,可通过空间中点到点的坐标距离公式求得到两者的距离,这样求得各个目标之间的距离,以此,构建目标的三维电子地图,从而得到上述三维图像。
参照图7,在一个实施例中,上述获取成像单元100,包括:
获取成像子单元101,用于获取所述可见光摄像头摄像范围内的可见成像;
获取重叠子单元102,用于依据所述TOF摄像头检测最大的物距,以及所述TOF摄像头与所述可见光摄像头的镜头光心的距离,在所述可见成像中获取所述重叠成像。
本实施例中,首先获取可见光摄像头摄像范围内的可见成像,再在该可见成像中找出由上述获取TOF摄像头与可见光摄像头在摄像范围内重叠的成像区域,具体可通过TOF摄像头检测最大的物距,以及TOF摄像头与可见光摄像头的镜头光心的距离来确定,此处的TOF摄像头检测最大的物距即上述TOF摄像头的最大测距,由于TOF摄像头具有最大测距,通过小孔成像原理可得TOF摄像头的拍摄范围以及对应的TOF成像,同理,也可以得到可见光摄像头的可摄范围以及成像面,在智能设备中,两个镜头的在同一平面中,故而可知两者镜头光心的距离越近,重叠部分越多,镜头光心距离越远,两者重叠部分越少,本实施例不限制两者镜头的距离,可根据实际需要而设置。
Preferably, referring to FIG. 8, the overlap acquisition subunit 102 includes:
a range derivation module 1021, configured to obtain the TOF imaging range of the TOF camera according to the focal length of the TOF camera and the maximum object distance detectable by the TOF camera;
an area derivation module 1022, configured to obtain the imaging area of the visible light camera within the TOF imaging range according to the distance between the lens optical centers of the TOF camera and the visible light camera and the focal length of the visible light camera;
an area calculation module 1023, configured to calculate the region that the imaging area occupies in the visible imaging when the visible light camera is imaging, so as to obtain the overlapping imaging.
In this embodiment, the TOF imaging range of the TOF camera is first obtained from the focal length of the TOF camera and the maximum object distance it can detect: since both the focal length and the maximum object distance of the TOF camera are fixed, the size of the corresponding imaging plane can also be determined, and the TOF imaging range is then determined through the pinhole imaging principle. Correspondingly, the visible imaging range of the visible light camera can be determined from its focal length. With the distance between the two lens optical centers fixed, the overlapping imaging region of the two cameras is obtained, i.e. the imaging area of the visible light camera within the TOF imaging range. The region that this imaging area occupies when the visible light camera is imaging is then calculated, yielding the above overlapping imaging. Specifically, it can be calculated through a preset formula, for example one set up from the similar-triangle principle, or by establishing a coordinate system and calculating the coordinates of the overlapping imaging.
For example, referring to FIG. 2, a coordinate system is established with the lens optical center of the TOF camera as origin O, the line through the optical centers of the TOF camera and the visible light camera as the Y axis, and the line through the origin perpendicular to that line as the X axis. Since the TOF camera has an effective ranging distance, let the effective ranging distance of the TOF camera be L, and let the distance from the lens optical center of the visible light camera to that of the TOF camera be K. By the similar-triangle principle, the coordinate values of the boundary of the overlapping imaging can be found, and the pixels at the corresponding coordinates are then obtained from the size of the imaging pixels.
Taking one of the planes through the two optical centers as an example: the effective ranging distance of the TOF camera is L, the focal length of the TOF camera is F1, the focal length of the visible light camera is F2, and the distance between the optical centers of the TOF camera and the visible light camera is K. By the imaging principle, the overlapping shooting range of the TOF camera and the visible light camera is BG, and the imaging region of this range within the imaging plane of the visible light camera is DE. The upper-edge coordinate of the TOF camera's imaging plane is A(X1, Y1), where X1 is the focal length F1 and Y1 is the known vertical size of the TOF camera's imaging plane. By the similar-triangle principle, the vertical coordinate Y2 of the lower edge B at the TOF camera's maximum object distance, whose horizontal coordinate X2 is -L, is Y2 = Y1 * L / F1. The optical center coordinate of the visible light camera is C(0, K), and the corresponding upper-edge point of the overlapping imaging is D(X4, Y4), whose vertical coordinate is Y4 = |Y2 - K| * F2 / L + K and whose horizontal coordinate X4 is the focal length F2. The coordinate of the lower-edge point E of the visible light camera's imaging plane is (X5, Y5), where X5 is the focal length F2 of the visible light camera and Y5 is the known vertical size of the visible light camera's imaging plane.
In this way the upper and lower edge positions of the overlapping imaging region within one plane are obtained, and by the same method the upper and lower edge positions of the overlapping imaging region within every plane can be obtained, i.e. all the edge coordinates of the overlapping imaging region in three-dimensional space.
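The two boundary formulas of the worked example translate directly into code; the numeric values below (range, baseline, focal lengths, sensor half-height) are illustrative assumptions, not parameters taken from this application:

    def overlap_upper_edge(L, K, F1, F2, Y1):
        # Y2 = Y1 * L / F1: lower edge B at the TOF camera's maximum object distance.
        # Y4 = |Y2 - K| * F2 / L + K: upper edge D of the overlap on the visible image plane.
        Y2 = Y1 * L / F1
        Y4 = abs(Y2 - K) * F2 / L + K
        return Y2, Y4

    Y2, Y4 = overlap_upper_edge(L=5.0, K=0.02, F1=0.004, F2=0.0035, Y1=0.003)
    print(Y2, Y4)   # Y2 = 3.75 m at the object plane; Y4 = ~0.0226 m on the image plane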
In one embodiment, referring to FIG. 9, the three-dimensional image generation apparatus further includes:
a distance acquisition unit 500, configured to acquire three-dimensional information of each target in the three-dimensional image, the three-dimensional information including the distance information between the targets;
an image display unit 600, configured to display the three-dimensional image and the three-dimensional information through a display screen.
In this embodiment, after the above three-dimensional image is constructed, it can be shown on a display screen, and at the same time the specific information of each target can also be displayed so that the user can better understand each target. First, the three-dimensional information of each target is acquired; this three-dimensional information includes the distance information between the targets and the shape information of each target, such as its length, width, and height. The three-dimensional image is then shown on the display screen, with the above three-dimensional information marked on the targets in the image.
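A minimal sketch of marking such information on a displayed view, using OpenCV drawing calls; the file name, box, and label text are hypothetical:

    import cv2

    def annotate_target(view, box, label):
        # Draw a target's box and its three-dimensional info on the rendered view.
        x1, y1, x2, y2 = box
        cv2.rectangle(view, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(view, label, (x1, y1 - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

    view = cv2.imread("render.png")   # hypothetical rendered three-dimensional view
    annotate_target(view, (100, 120, 220, 260), "table 1.2x0.8x0.7 m, d=2.5 m")
    cv2.imshow("3D image", view)
    cv2.waitKey(0)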
Referring to FIG. 10, this application further provides a computer-readable storage medium 21 storing a computer program 22 which, when run on a computer, causes the computer to execute the three-dimensional image generation method described in the above embodiments.
Referring to FIG. 11, this application further provides a computer device 34 containing instructions; the computer device includes a memory 31 and a processor 33, the memory 31 stores a computer program 22, and the processor 33 implements the three-dimensional image generation method described in the above embodiments when executing the computer program 22.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any usable medium that a computer can store, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), etc.

Claims (15)

  1. A three-dimensional image generation method, applied to a smart device, characterized in that the smart device comprises a TOF camera and a visible light camera, and the three-dimensional image generation method comprises:
    acquiring the overlapping imaging in which the TOF camera and the visible light camera overlap within their imaging ranges;
    performing target recognition in the overlapping imaging;
    measuring the distance to each recognized target through the TOF camera to obtain depth information of each target, the depth information being the distance information from the target to the TOF camera;
    generating a three-dimensional image of the overlapping imaging according to the depth information, the three-dimensional image comprising all the targets in the overlapping imaging.
  2. The three-dimensional image generation method according to claim 1, characterized in that the step of acquiring the overlapping imaging in which the TOF camera and the visible light camera overlap within their imaging ranges further comprises:
    acquiring the visible imaging within the imaging range of the visible light camera;
    acquiring the overlapping imaging from the visible imaging according to the maximum object distance detectable by the TOF camera and the distance between the lens optical centers of the TOF camera and the visible light camera.
  3. The three-dimensional image generation method according to claim 2, characterized in that the step of acquiring the overlapping imaging from the visible imaging according to the maximum object distance detectable by the TOF camera and the distance between the lens optical centers of the TOF camera and the visible light camera comprises:
    obtaining the TOF imaging range of the TOF camera according to the focal length of the TOF camera and the maximum object distance detectable by the TOF camera;
    obtaining the imaging area of the visible light camera within the TOF imaging range according to the distance between the lens optical centers of the TOF camera and the visible light camera and the focal length of the visible light camera;
    calculating the region that the imaging area occupies in the visible imaging when the visible light camera is imaging, so as to obtain the overlapping imaging.
  4. The three-dimensional image generation method according to claim 1, characterized in that the step of performing target recognition in the overlapping imaging comprises:
    recognizing the image corresponding to the overlapping imaging through a preset recognition model to obtain each target in the overlapping imaging.
  5. The three-dimensional image generation method according to claim 1, characterized in that the step of measuring the distance to each recognized target through the TOF camera to obtain the depth information of each target comprises:
    labeling each target in the overlapping imaging to acquire contour information of each target, the contour information including pixel information of the contour of the target;
    ranging the target according to the pixel information to obtain the distance information from the physical points corresponding to the pixels within the contour of the target to the TOF camera.
  6. The three-dimensional image generation method according to claim 1, characterized in that the step of generating the three-dimensional image corresponding to each target according to the depth information comprises:
    calculating target data of each target according to the depth information, the target data including the actual distances between the targets and the contour information of each target;
    constructing, in a preset real-world coordinate system, a three-dimensional electronic map corresponding to each target according to the distances between the targets and the contours of the targets, so as to obtain the three-dimensional image.
  7. The three-dimensional image generation method according to claim 1, characterized in that after the step of generating the three-dimensional image of the overlapping imaging according to the depth information, the method comprises:
    acquiring three-dimensional information of each target in the three-dimensional image, the three-dimensional information including the distance information between the targets;
    displaying the three-dimensional image and the three-dimensional information through a display screen.
  8. A three-dimensional image generation apparatus, characterized by comprising:
    an imaging acquisition unit, configured to acquire the overlapping imaging in which the TOF camera and the visible light camera overlap within their imaging ranges;
    a target recognition unit, configured to perform target recognition in the overlapping imaging;
    a target ranging unit, configured to measure the distance to each recognized target through the TOF camera to obtain depth information of each target, the depth information being the distance information from the target to the TOF camera;
    an image generation unit, configured to generate a three-dimensional image of the overlapping imaging according to the depth information, the three-dimensional image comprising all the targets in the overlapping imaging.
  9. The three-dimensional image generation apparatus according to claim 8, characterized in that the imaging acquisition unit comprises:
    an imaging acquisition subunit, configured to acquire the visible imaging within the imaging range of the visible light camera;
    an overlap acquisition subunit, configured to acquire the overlapping imaging from the visible imaging according to the maximum object distance detectable by the TOF camera and the distance between the lens optical centers of the TOF camera and the visible light camera.
  10. The three-dimensional image generation apparatus according to claim 9, characterized in that the overlap acquisition subunit comprises:
    a range derivation module, configured to obtain the TOF imaging range of the TOF camera according to the focal length of the TOF camera and the maximum object distance detectable by the TOF camera;
    an area derivation module, configured to obtain the imaging area of the visible light camera within the TOF imaging range according to the distance between the lens optical centers of the TOF camera and the visible light camera and the focal length of the visible light camera;
    an area calculation module, configured to calculate the region that the imaging area occupies in the visible imaging when the visible light camera is imaging, so as to obtain the overlapping imaging.
  11. The three-dimensional image generation apparatus according to claim 8, characterized in that the target recognition unit comprises:
    a model recognition subunit, configured to recognize the image corresponding to the overlapping imaging through a preset recognition model to obtain each target in the overlapping imaging.
  12. The three-dimensional image generation apparatus according to claim 8, characterized in that the target ranging unit comprises:
    a target labeling subunit, configured to label each target in the overlapping imaging to acquire contour information of each target, the contour information including pixel information of the contour of the target;
    a target ranging subunit, configured to range the target according to the pixel information to obtain the distance information from the physical points corresponding to the pixels within the contour of the target to the TOF camera.
  13. The three-dimensional image generation apparatus according to claim 8, characterized in that the image generation unit comprises:
    a data calculation subunit, configured to calculate target data of each target according to the depth information, the target data including the actual distances between the targets and the contour information of each target;
    an image construction subunit, configured to construct, in a preset real-world coordinate system, a three-dimensional electronic map corresponding to each target according to the distances between the targets and the contours of the targets, so as to obtain the three-dimensional image.
  14. The three-dimensional image generation apparatus according to claim 8, characterized by further comprising:
    a distance acquisition unit, configured to acquire three-dimensional information of each target in the three-dimensional image, the three-dimensional information including the distance information between the targets;
    an image display unit, configured to display the three-dimensional image and the three-dimensional information through a display screen.
  15. A computer device, characterized in that it comprises a processor, a memory, and a computer program stored on the memory and runnable on the processor, the computer program, when executed, implementing the three-dimensional image generation method according to any one of claims 1 to 7.
PCT/CN2020/127202 2020-05-29 2020-11-06 Three-dimensional image generation method and apparatus, and computer device WO2021238070A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010479011.8A CN111787303B (zh) 2020-05-29 2020-05-29 Three-dimensional image generation method and apparatus, storage medium, and computer device
CN202010479011.8 2020-05-29

Publications (1)

Publication Number Publication Date
WO2021238070A1

Family

ID=72754493

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/127202 WO2021238070A1 (zh) 2020-05-29 2020-11-06 Three-dimensional image generation method and apparatus, and computer device

Country Status (2)

Country Link
CN (1) CN111787303B (zh)
WO (1) WO2021238070A1 (zh)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111787303B (zh) 2020-05-29 2022-04-15 深圳市沃特沃德软件技术有限公司 Three-dimensional image generation method and apparatus, storage medium, and computer device
CN112672076A (zh) 2020-12-11 2021-04-16 展讯半导体(成都)有限公司 Image display method and electronic device


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7192772B2 (ja) * 2017-08-22 2022-12-20 Sony Group Corporation Image processing apparatus and image processing method
CN110827408B (zh) * 2019-10-31 2023-03-28 Shanghai Normal University Real-time three-dimensional reconstruction method based on a depth sensor

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104599314A (zh) * 2014-06-12 2015-05-06 深圳奥比中光科技有限公司 Three-dimensional model reconstruction method and system
US20170064235A1 (en) * 2015-08-27 2017-03-02 Samsung Electronics Co., Ltd. Epipolar plane single-pulse indirect tof imaging for automotives
US20180205963A1 (en) * 2017-01-17 2018-07-19 Seiko Epson Corporation Encoding Free View Point Data in Movie Data Container
CN109814127A (zh) * 2017-11-22 2019-05-28 浙江舜宇智能光学技术有限公司 High-resolution TOF imaging system and method
CN108681726A (zh) * 2018-06-26 2018-10-19 深圳阜时科技有限公司 3D chip module, identity recognition apparatus, and electronic device
CN111787303A (zh) * 2020-05-29 2020-10-16 深圳市沃特沃德股份有限公司 Three-dimensional image generation method and apparatus, storage medium, and computer device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117253195A (zh) * 2023-11-13 2023-12-19 广东申立信息工程股份有限公司 IPC security monitoring method, monitoring system, computer device, and readable storage medium
CN117253195B (zh) * 2023-11-13 2024-02-27 广东申立信息工程股份有限公司 IPC security monitoring method, monitoring system, computer device, and readable storage medium

Also Published As

Publication number Publication date
CN111787303A (zh) 2020-10-16
CN111787303B (zh) 2022-04-15

Similar Documents

Publication Publication Date Title
CN110568447B (zh) Visual positioning method and apparatus, and computer-readable medium
US10977818B2 (en) Machine learning based model localization system
CN108961395B (zh) Method for reconstructing a three-dimensional spatial scene based on photographing
WO2021238070A1 (zh) Three-dimensional image generation method and apparatus, and computer device
TWI574223B (zh) Navigation system using augmented reality technology
CN102938844B (zh) Generating free-viewpoint video using stereo imaging
JP7059355B6 (ja) Apparatus and method for generating a scene representation
US9129435B2 (en) Method for creating 3-D models by stitching multiple partial 3-D models
KR100953931B1 (ko) Mixed reality implementation system and method
TW201915944A (zh) Image processing method, apparatus, system, and storage medium
JP2020535536A5 (zh)
WO2020024684A1 (zh) Three-dimensional scene modeling method and apparatus, electronic apparatus, readable storage medium, and computer device
CN104715479A (zh) Scene reproduction detection method based on augmented virtuality
US10560683B2 (en) System, method and software for producing three-dimensional images that appear to project forward of or vertically above a display medium using a virtual 3D model made from the simultaneous localization and depth-mapping of the physical features of real objects
AU2020332683A1 (en) Systems and methods for real-time multiple modality image alignment
CN110567441B (zh) Particle-filter-based positioning method and apparatus, and mapping and positioning method
TW202011353A (zh) Operating method of a depth data processing system
US20220067974A1 (en) Cloud-Based Camera Calibration
CN110827392A (zh) Monocular image three-dimensional reconstruction method, system, and apparatus with good scene usability
WO2021104308A1 (zh) Panoramic depth measurement method, four-lens fisheye camera, and binocular fisheye camera
TWI599987B (zh) Point cloud stitching system and method
CN110880161A (zh) Depth image stitching and fusion method and system for multiple hosts and multiple depth cameras
CN111914790B (zh) Real-time human rotation angle recognition method in different scenes based on dual cameras
WO2023088127A1 (zh) Indoor navigation method, server, apparatus, and terminal
CN111932446B (zh) Method and apparatus for constructing a three-dimensional panoramic map

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 20937404; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 20937404; Country of ref document: EP; Kind code of ref document: A1)