CN107864372B - Stereo photographing method and device and terminal - Google Patents


Info

Publication number
CN107864372B
CN107864372B (application CN201710863476.1A)
Authority
CN
China
Prior art keywords
images
group
pixel point
target object
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710863476.1A
Other languages
Chinese (zh)
Other versions
CN107864372A (en)
Inventor
李英翠
王根在
黄帆
石文平
邹方绍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiekai Communications Shenzhen Co Ltd
Original Assignee
Jiekai Communications Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiekai Communications Shenzhen Co Ltd
Priority to CN201710863476.1A
Publication of CN107864372A
Application granted
Publication of CN107864372B
Legal status: Active
Anticipated expiration

Classifications

    • G (PHYSICS)
    • G06 (COMPUTING; CALCULATING OR COUNTING)
    • G06T (IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
    • G06T 15/00 (3D [Three Dimensional] image rendering)
    • G06T 15/005 (General purpose rendering architectures)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a stereoscopic photographing method, a stereoscopic photographing device, and a terminal. In the method, a target object is photographed from multiple angles by two pairs of cameras to obtain two groups of images together with spatial information for each pixel on the images; the two groups of images are then matched to obtain the image information of each pixel of the first group of images and its corresponding spatial information, from which a three-dimensional model of the target object is constructed, thereby improving the fidelity of the stereoscopic image.

Description

Stereo photographing method and device and terminal
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method, an apparatus, and a terminal for stereoscopic photographing.
Background
People perceive scenery in three dimensions because each eye views the world independently and the left and right eyes are separated by a certain distance, so the two views differ slightly (binocular parallax). The brain fuses these two differing images, forming a stereoscopic visual impression with a sense of depth.
Terminals in the prior art generally have two cameras. When the two cameras photograph an object at the same time, two images with parallax are produced, and image processing is used to synthesize them into a stereoscopic image. Although the synthesized stereoscopic image is more vivid than a conventional 2D image, it can show only one side of the photographed object, so its fidelity falls short of the real object.
Disclosure of Invention
The technical problem mainly addressed by the application is to provide a stereoscopic photographing method, device, and terminal that can improve the fidelity of stereoscopic images.
In order to solve the technical problem, a first technical solution adopted by the application is a stereoscopic photographing method, including: photographing a target object from multiple angles with a first dual camera to obtain a first group of images; photographing the target object from multiple angles with a second dual camera to obtain a second group of images and spatial information for each pixel on the second group of images; and matching the first group of images with the second group of images to obtain the image information of each pixel of the first group of images and its corresponding spatial information, so as to construct a three-dimensional model of the target object.
In order to solve the above technical problem, a second technical solution adopted by the application is a mobile terminal comprising a body, a first dual camera and a second dual camera arranged on the body, and a processor connected to both dual cameras. The first dual camera photographs a target object from multiple angles to obtain a first group of images; the second dual camera photographs the target object from multiple angles to obtain a second group of images and spatial information for each pixel on the second group of images; the processor matches the first group of images with the second group of images to obtain the image information of each pixel of the first group of images and its corresponding spatial information, so as to construct a three-dimensional model of the target object.
In order to solve the above technical problem, a third technical solution adopted by the application is a device having a storage function, the device storing program data executable to implement the stereoscopic photographing method of the first solution.
The beneficial effect of the application is as follows. Unlike the prior art, the application provides a stereoscopic photographing method in which a target object is photographed from multiple angles by two pairs of cameras to obtain two groups of images and spatial information for each pixel on the images; the two groups of images are then matched to obtain the image information of each pixel of the first group of images and its corresponding spatial information, from which a three-dimensional model of the target object is constructed, thereby improving the fidelity of the stereoscopic image.
Drawings
Fig. 1 is a schematic flow chart of a first embodiment of a stereo photography method of the present application;
fig. 2 is a schematic diagram of an embodiment expanding on S101 in the stereo photography method of the present application;
FIG. 3 is a schematic flow chart of a second embodiment of the stereo photography method of the present application;
FIG. 4 is a schematic flowchart of a third embodiment of a stereo photography method according to the present application;
FIG. 5 is a schematic flow chart of a fourth embodiment of the stereo photography method of the present application;
fig. 6 is a schematic structural diagram of a stereo photographing terminal according to the present application;
fig. 7 is a schematic structural diagram of a device having a storage function.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of a first embodiment of the stereo photographing method. The executing entity in this embodiment may be a terminal device (such as a notebook, a computer, a mobile phone, or a wearable device), implemented in hardware or software. It should be noted that, provided the result is substantially the same, the method of this embodiment is not limited to the flow sequence shown in fig. 1. As shown, the method includes the following steps:
in S101, a first group of images is obtained by multi-angle shooting of a target object by a first dual camera.
Specifically, the multiple angles include the front, left side, back, and right side of the target object. Taking the body center line of the target object as an axis, the user is guided to move around the target object and capture images of at least its front, left side, back, and right side, yielding the first group of two-dimensional images. Reference coordinates for each pixel on the first group of images are then obtained: a reference object distance is calculated by an algorithm from the reference height and reference width of each pixel, and the reference coordinates are formed from the reference height, the reference width, and the reference object distance, as shown in fig. 2.
Optionally, as shown in fig. 2, a reference object distance may also be calculated from the first group of two-dimensional images, where the object distance is the horizontal distance from the target object to the camera plane. The calculation proceeds as follows: obtain the focal length f of the first dual camera, the center distance T between its two lenses, and the physical distances x_l and x_r from the projection of the target point on each image plane to the leftmost edge of that image plane. The left and right image planes of the first dual camera are rectangular, lie in the same imaging plane, and the optical-center projections of the left and right cameras are located at the centers of their respective image planes. The disparity d is then: d = x_l - x_r.
The measured object distance is then calculated using the principle of similar triangles: Z = f·T / d, where f is the focal length of the first dual camera, d is the disparity value of the pixel of the target object, and T is the center distance of the first dual camera.
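The similar-triangles relation above can be sketched in a few lines. The function name and units are illustrative, not from the patent; f must be expressed in pixels, and T in whatever unit the depth should come out in:

```python
def object_distance(f_px, baseline, x_left_px, x_right_px):
    """Depth from binocular disparity: Z = f * T / d (similar triangles).

    f_px: focal length in pixels; baseline: center distance T between lenses;
    x_left_px / x_right_px: horizontal positions of the same point's
    projection measured from the left edge of each image plane.
    """
    d = x_left_px - x_right_px  # disparity d = x_l - x_r
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f_px * baseline / d
```

For example, with f = 700 px, T = 60 mm, and a 20-pixel disparity, the point lies 2100 mm from the camera plane.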
In S102, multi-angle shooting is performed on the target object by the second dual-camera to obtain a second group of images and spatial information of each pixel point on the second group of images.
Specifically, a second group of images is obtained through the second dual camera, and the reference coordinates of each pixel on the second group can be obtained as in step S101. In addition, owing to the characteristics of the second dual camera, the object distance corresponding to each pixel on the second group of images can be measured, and the spatial information of each pixel, including its three-dimensional coordinates, can be derived from the object distance data by an algorithm.
In S103, the first group of images and the second group of images are matched to obtain image information of each pixel point on the first group of images and corresponding spatial information thereof, so as to construct a three-dimensional model of the target object.
Specifically, using an image data processing technique, the image information of each pixel of the first group of images is matched with that of each pixel of the second group of images. The image information of each pixel of the first group can be determined by matching, or otherwise processing, the reference coordinates of each pixel on the first group against the reference coordinates of each pixel on the second group; combining this with the spatial information of each pixel of the second group yields the image information of each pixel of the first group and its corresponding spatial information. In this way the three-dimensional coordinates of every point on the target object are determined, and the three-dimensional model of the target object is constructed.
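A minimal sketch of the matching step in S103, assuming (as the patent implies but does not spell out) that pixels in the two groups correspond when their reference coordinates agree exactly; all names are illustrative:

```python
def match_groups(first_ref_coords, second_ref_coords, second_3d):
    """Match each pixel of the first group to the pixel of the second group
    with the same reference coordinates, so the first-group pixel inherits
    the second group's spatial information (its 3D coordinates)."""
    # index the second group's spatial info by reference coordinate
    index = {coord: xyz for coord, xyz in zip(second_ref_coords, second_3d)}
    # keep only first-group pixels that found a counterpart
    return {coord: index[coord] for coord in first_ref_coords if coord in index}
```

In practice matching would tolerate small coordinate differences rather than require exact equality; the dictionary lookup here only illustrates the pairing idea.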
Optionally, the appearance and dressing style of the target object are determined based on the three-dimensional model, and a hairstyle and garments suited to the target object are matched from a hairstyle and dress database; the matched hairstyle and garments are fused with the three-dimensional model and the result is displayed to the user.
Alternatively, a user design interface is provided, that is, an interface through which the user performs hairstyle and garment design operations on the three-dimensional model; the hairstyle and garments the user designs for the three-dimensional model are fused with it and displayed to the user.
Optionally, the three-dimensional characteristics of the target object are determined based on the three-dimensional model, and a furniture arrangement and decoration style suited to the target object is matched, using static three-dimensional images of real objects together with a home-furnishing database, and displayed to the user.
As can be seen from the above, in this embodiment the target object is photographed from multiple angles by two pairs of cameras to obtain two groups of images and spatial information for each pixel on the images; the two groups of images are then matched to obtain the image information of each pixel of the first group of images and its corresponding spatial information, from which a three-dimensional model of the target object is constructed, thereby improving the fidelity of the stereoscopic image.
Referring to fig. 3, a second embodiment of the stereo photographing method builds on the first embodiment by further extending step S102. This embodiment includes:
in S301, infrared rays are emitted outwards through the second dual cameras to obtain an object distance, where the object distance is a horizontal distance from the target object to a camera plane.
Optionally, emitting infrared rays outwards through the second dual camera to obtain the object distance includes: emitting infrared rays outwards; and receiving the infrared rays reflected back from the target object, from which the object distance is calculated. The second dual camera is an infrared dual camera.
Specifically, the distance measurement exploits the low-diffusion propagation of infrared light: because infrared rays refract only slightly when passing through other media, they are suitable for long-distance measurement. Infrared propagation takes time: a ray emitted from the ranging device strikes a reflecting surface and is reflected back to the receiver. The distance from a pixel of the target object to the camera plane, i.e. the object distance (the real object distance Z0 referred to below), is then calculated according to the binocular ranging principle. In addition, while the infrared rays are being emitted, the second dual camera is simultaneously capturing the second group of images, so each pixel on the second group corresponds to a real object distance; the object distance of a given pixel is denoted Z0 in what follows.
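The description combines infrared round-trip timing with binocular ranging. A pure time-of-flight reading of the kind alluded to, assuming the round-trip time t is available from the sensor, reduces to d = c·t/2 (this formula is a general principle, not taken verbatim from the patent):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s, propagation speed of the infrared ray

def tof_distance(round_trip_seconds):
    """Object distance from an infrared round-trip time: d = c * t / 2.
    The ray travels to the reflector and back, hence the division by two."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

A 20 ns round trip, for instance, corresponds to a target roughly 3 m from the sensor.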
In S302, the second group of images is subjected to image processing, and the width and height of each pixel point on the second group of images are obtained.
Specifically, based on the two-dimensional coordinates of the second group of images, a number of target pixels are selected from the second group, and their two-dimensional coordinates (X0, Y0), i.e. the width and height of each pixel on the second group of images, are determined.
In S303, based on the object distance, the width and the height of each pixel point on the second group of images, the three-dimensional coordinates of each pixel point on the second group of images are determined.
Specifically, the object distance Z0 measured above, together with the two-dimensional coordinates (X0, Y0) of a pixel on the second group of images, determines that pixel's three-dimensional coordinates (X0, Y0, Z0); proceeding in the same way determines the three-dimensional coordinates of every pixel on the second group of images.
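Step S303 is then just the pairing of each (X0, Y0) with its measured depth Z0, applied over the whole image. A trivial sketch (names illustrative):

```python
def image_to_point_cloud(coords_2d, depths):
    """Attach the measured object distance Z0 to every (X0, Y0) pixel
    coordinate, producing the (X0, Y0, Z0) triples of S303."""
    return [(x, y, z) for (x, y), z in zip(coords_2d, depths)]
```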
In this embodiment, since the object distance is a real distance measured by infrared rays, it is more accurate than a value calculated by a formula, and the width and height data corresponding to a pixel's two-dimensional coordinates can be corrected according to the object distance, so that the resulting three-dimensional coordinates are closer to the real values. A scenario illustrates this correction:
The second group of images captured by the second dual camera is processed to obtain the width and height of each pixel. For example, a particular pixel is selected from the second group of images; from its initial width and height on the image, a reference width and reference height close to the target object can be calculated according to a preset ratio, denoted (X1, Y1). From (X1, Y1) and the binocular ranging principle, a reference object distance Z1 corresponding to the pixel can be calculated, giving candidate coordinates (X1, Y1, Z1). However, the preset ratio is a value simulated from a database or supplied by other means and carries some error relative to the actual value, so a correction is needed to obtain more accurate data. An error rate is calculated from the measured object distance Z0 and the calculated reference object distance Z1; applying this error rate to the reference width and height (X1, Y1) yields the corrected real width and height (X0, Y0), finally determining the three-dimensional coordinates (X0, Y0, Z0) of the pixel on the second group of images. Repeating these steps determines the three-dimensional coordinates of every pixel on the second group of images, i.e. the three-dimensional coordinates of every pixel of the target object.
Referring to fig. 4, a third embodiment of the stereo photographing method builds on the first embodiment by further extending step S103. This embodiment includes the following steps:
in S401, the first group of images and the second group of images are subjected to image processing, so as to obtain position information of the three-dimensional coordinates of each pixel point on the first group of images on the target object.
Specifically, using an image processing technique, the reference coordinates of each pixel on the first group of images and the reference coordinates of each pixel on the second group of images can be processed to determine the position on the target object to which each pixel of the first group corresponds, yielding the position information.
In S402, based on the position information, the three-dimensional coordinates of each pixel point of the target object are determined.
Specifically, the three-dimensional coordinates of each pixel on the target object are determined by combining the position information with the real three-dimensional coordinates of each pixel on the second group of images, which were calculated in step S303 from the real object distance corresponding to each pixel on the second group.
In S403, a three-dimensional model of the target object is constructed based on the three-dimensional coordinates of each pixel point of the target object.
Specifically, a real three-dimensional model of the target object is constructed by splicing based on the three-dimensional coordinates of each pixel point of the target object.
In this embodiment, two groups of images are processed based on an image processing technique, and a position of each pixel point in the images corresponding to the target object is determined, so that the three-dimensional coordinates of each pixel point are determined more accurately, and the three-dimensional coordinates of a plurality of pixel points are collected to form a three-dimensional model of the target object.
Referring to fig. 5, fig. 5 is a flowchart of a fourth embodiment of the stereo photographing method. The executing entity in this embodiment may be a terminal device (such as a notebook, a computer, a mobile phone, or a wearable device), implemented in hardware or software. It should be noted that, provided the result is substantially the same, the method of this embodiment is not limited to the flow sequence shown in fig. 5. As shown, the method includes the following steps:
in S501, it is determined whether the ambient brightness is lower than a preset brightness.
Specifically, the ambient brightness may be the brightness of the natural environment around the terminal or some other specific ambient brightness, set according to the real-time situation; the preset brightness may be set according to the ambient brightness and the specific requirement.
In S502, if the ambient brightness is lower than the preset brightness, only the second dual cameras are started to perform multi-angle shooting, so as to obtain a second group of images and three-dimensional coordinates of each pixel point on the second group of images.
Specifically, when the ambient brightness is lower than the preset brightness, for example at night, the second dual camera is started. The second dual camera may be an infrared camera or another camera capable of capturing a normal image under low ambient brightness.
In S503, the second group of images and the three-dimensional coordinates of each pixel point on the second group of images are subjected to image data processing to construct a three-dimensional model of the target object.
Specifically, the second group of images are subjected to image processing to obtain the width and the height of each pixel point on the images, then three-dimensional coordinates of each pixel point on the images are obtained according to the object distance which is measured by the second double cameras through infrared ray emission or other modes, and a three-dimensional model of the target object is constructed according to the three-dimensional coordinates.
Optionally, if the ambient brightness is higher than the preset brightness, the first dual cameras and the second dual cameras are started to perform multi-angle shooting.
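The brightness gate of S501/S502 and the optional branch above amount to a simple selection rule. The threshold value and camera identifiers below are illustrative stand-ins, not from the patent:

```python
PRESET_BRIGHTNESS = 50.0  # illustrative threshold; the patent leaves the value open

def select_cameras(ambient_brightness):
    """Below the preset brightness, start only the (infrared-capable) second
    dual camera; otherwise start both dual cameras for multi-angle shooting."""
    if ambient_brightness < PRESET_BRIGHTNESS:
        return ["second_dual_camera"]
    return ["first_dual_camera", "second_dual_camera"]
```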
Optionally, when the second dual-camera shoots, the two cameras are synchronously moved with the target object as a center, and the two cameras are triggered to synchronously capture the image of the target object in the same plane along the same shooting direction in real time, wherein the focal lengths of the two cameras are kept consistent in the process of synchronously capturing the image of the target object.
For each pair consisting of a first image and a second image synchronously acquired by the two cameras, the following processing is performed: from the two synchronously acquired images, obtain the three-dimensional information and color information of each pixel that appears in both cameras, where the three-dimensional information of a pixel consists of its object distance and its width and height in the first image; then, according to the three-dimensional information and color information of all pixels acquired in this capture, stitch the newly acquired pixels onto the current stereo model and replace the current stereo model with the resulting model.
And when the two cameras are stopped to be triggered to synchronously acquire the images of the target object, outputting a stereo image containing the current stereo model.
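The capture-and-stitch cycle above can be sketched as a loop; `capture_pair`, `stop_requested`, and `stitch` are illustrative placeholders for the synchronized acquisition, the stop trigger, and the model-merging step:

```python
def capture_loop(capture_pair, stop_requested, stitch):
    """Incremental model building: each synchronized capture yields per-pixel
    3D + color information, which is stitched into the current stereo model;
    when acquisition stops, the current model is the output."""
    model = []
    while not stop_requested():
        points = capture_pair()   # [(x, y, z, color), ...] from this capture
        model = stitch(model, points)  # merge new points, replace current model
    return model
```

A real implementation would deduplicate overlapping points during stitching rather than simply accumulating them.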
In this embodiment, when the ambient brightness is less than the preset brightness, the target object is photographed by a specific group of cameras, and the coordinates of each pixel point on the image are obtained according to image data processing, so as to construct a three-dimensional model of the target object.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a stereo camera terminal according to the present application, which includes a body 60, a first dual camera and a second dual camera disposed on the body 60, and a processor 605 connected to the first dual camera and the second dual camera.
The first dual camera comprises a first camera 601 and a second camera 602; the second dual camera is an infrared dual camera comprising a first infrared camera 603 and a second infrared camera 604. The first and second dual cameras are arranged at the upper end of the body 60 at a first preset interval; the first infrared camera 603 and the second infrared camera 604 are arranged between the first camera 601 and the second camera 602 at a second preset interval.
The first dual cameras 601 and 602 are used for shooting a target object in multiple angles to obtain a first group of images; the second dual cameras 603 and 604 are configured to perform multi-angle shooting on the target object to obtain a second group of images and spatial information of each pixel point on the second group of images; the processor 605 is configured to match the first group of images with the second group of images to obtain image information of each pixel point on the first group of images and spatial information corresponding to the image information, so as to construct a three-dimensional model of the target object.
Specifically, the multiple angles include the front, left side, back, and right side of the target object. Taking the body center line of the target object as an axis, the user is guided to move around the target object and capture images of at least its front, left side, back, and right side with the dual cameras, yielding a group of two-dimensional images. A second group of images is obtained through the second dual camera; in addition, owing to the characteristics of the second dual camera, spatial information for each pixel on the second group of images, including its three-dimensional coordinates, is obtained besides the group of two-dimensional images. Using an image data processing technique, the image information of each pixel of the first group of images is matched with that of the second group to obtain the image information of each pixel of the first group and its corresponding spatial information, thereby determining the three-dimensional coordinates of each point on the target object and constructing the three-dimensional model of the target object.
Optionally, the processor 605 is further configured to determine whether the ambient brightness is lower than a preset brightness; if the ambient brightness is lower than the preset brightness, only starting the second double cameras to carry out multi-angle shooting to obtain a second group of images and three-dimensional coordinates of all pixel points on the second group of images; and carrying out image data processing on the second group of images and the three-dimensional coordinates of each pixel point on the second group of images to construct a three-dimensional model of the target object.
Specifically, the ambient brightness may be the brightness of the natural environment around the terminal or some other specific ambient brightness, set according to the real-time situation, and the preset brightness may be set according to the ambient brightness and the specific requirement. When the ambient brightness is lower than the preset brightness, for example at night, the second dual camera is started; it may be an infrared camera or another camera capable of capturing a normal image under low ambient brightness. The second group of images is processed to obtain the width and height of each pixel, the three-dimensional coordinates of each pixel are then obtained from the object distance measured by the second dual camera by emitting infrared rays or by other means, and the three-dimensional model of the target object is constructed from these three-dimensional coordinates.
It should be noted that, for the working principle and detailed scheme of each element of the terminal, please refer to the description of the stereo photographing method above; they are not repeated here.
As can be seen from the above, in the embodiment, the two groups of cameras are arranged on the terminal to perform multi-angle shooting on the target object, so that two groups of images and spatial information of each pixel point on the images are obtained; and matching the two groups of images to obtain image information of each pixel point of the first group of images and corresponding spatial information thereof so as to construct a three-dimensional model of the target object, thereby improving the fidelity of the three-dimensional image.
Referring to fig. 7, a device 70 with a storage function stores program data 701, and the program data 701 can be executed to implement the stereo photographing method described above.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (7)

1. A method of stereoscopic photography, the method comprising:
shooting a target object from multiple angles with a first dual camera to obtain a first group of images, wherein the first group of images are two-dimensional images;
shooting the target object from multiple angles with a second dual camera to obtain a second group of images and spatial information of each pixel point on the second group of images;
wherein the first dual camera comprises a first camera and a second camera, and the second dual camera comprises a first infrared camera and a second infrared camera, the first infrared camera and the second infrared camera being located between the first camera and the second camera;
matching the first group of images with the second group of images to obtain image information of each pixel point on the first group of images and its corresponding spatial information, so as to construct a three-dimensional model of the target object;
wherein matching the first group of images with the second group of images to obtain the image information of each pixel point on the first group of images and its corresponding spatial information, so as to construct the three-dimensional model of the target object, comprises:
performing image processing on the first group of images and the second group of images to obtain position information of the three-dimensional coordinates, on the target object, of each pixel point on the first group of images;
determining the three-dimensional coordinates of each pixel point of the target object based on the position information; and
constructing the three-dimensional model of the target object based on the three-dimensional coordinates of each pixel point of the target object;
wherein shooting the target object from multiple angles with the second dual camera to obtain the second group of images and the spatial information of each pixel point on the second group of images comprises:
obtaining an object distance through the second dual camera, and performing image processing on the second group of images to obtain the width and height of each pixel point on the second group of images; and correcting the obtained width and height according to the object distance, wherein the specific correction process is as follows: in the second group of images, for each pixel point, calculating, according to the width and height of the pixel point on the image and a preset ratio, a reference width and a reference height (X1, Y1) close to the target object, and calculating, according to the reference width and the reference height, the corresponding reference object distance Z1, so as to determine a reference three-dimensional coordinate (X1, Y1, Z1) of the pixel point; and calculating an error rate based on the object distance obtained through the second dual camera and the calculated reference object distance Z1, calculating a corrected real width and a corrected real height from the reference width, the reference height and the error rate, and finally determining the three-dimensional coordinate of the pixel point on the second group of images.
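The correction process recited in claim 1 does not state its formulas, so the arithmetic below is only an illustrative sketch under assumed relations (reference size obtained by a preset ratio, reference distance proportional to the reference size, multiplicative error-rate correction); none of it is part of the claims:

```python
def corrected_coordinate(width: float, height: float, object_distance: float,
                         preset_ratio: float, ref_distance_scale: float):
    """Sketch of the width/height correction described in claim 1.

    width, height:   size of the pixel point measured on the second group of images
    object_distance: distance measured by the second (infrared) dual camera
    The two scale parameters are hypothetical stand-ins for the patent's
    unspecified 'preset ratio' and width/height-to-distance relation.
    """
    # Reference width and height close to the target object: (X1, Y1)
    x1 = width * preset_ratio
    y1 = height * preset_ratio
    # Reference object distance Z1 corresponding to (X1, Y1); assumed here to be
    # proportional to the mean reference size
    z1 = ref_distance_scale * (x1 + y1) / 2.0
    # Error rate between the measured object distance and Z1
    error_rate = (object_distance - z1) / z1
    # Corrected real width and height
    real_w = x1 * (1.0 + error_rate)
    real_h = y1 * (1.0 + error_rate)
    # Final three-dimensional coordinate of the pixel point
    return (real_w, real_h, object_distance)
```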
2. The method of claim 1, wherein
shooting the target object from multiple angles with the second dual camera comprises:
emitting infrared rays outwards through the second dual camera to obtain an object distance, wherein the object distance is the horizontal distance from the target object to the camera plane;
performing image processing on the second group of images to obtain the width and height of each pixel point on the second group of images; and
determining the three-dimensional coordinates of each pixel point on the second group of images based on the object distance and the width and height of each pixel point on the second group of images.
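Claim 2 determines each pixel's three-dimensional coordinate from the measured object distance together with the pixel's position on the image. The claim does not name a model; pinhole-camera back-projection is one standard way to realize this and is assumed in the sketch below (the intrinsic parameters fx, fy, cx, cy are hypothetical, obtained in practice from camera calibration):

```python
def pixel_to_3d(u: float, v: float, object_distance: float,
                fx: float, fy: float, cx: float, cy: float):
    """Back-project an image pixel (u, v) to a 3-D point using the object
    distance as depth, under an assumed pinhole-camera model.

    fx, fy: focal lengths in pixels; cx, cy: principal point.
    """
    x = (u - cx) * object_distance / fx
    y = (v - cy) * object_distance / fy
    return (x, y, object_distance)
```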
3. The method of claim 2, wherein
emitting infrared rays outwards through the second dual camera to obtain the object distance comprises:
emitting infrared rays outwards; and
receiving the infrared rays reflected back from the target object, and calculating the object distance therefrom;
wherein the second dual camera is an infrared dual camera.
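Claim 3 only states that the object distance is "calculated" from the reflected infrared rays. A time-of-flight computation is one common way to do this and is assumed in the sketch below:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_object_distance(round_trip_seconds: float) -> float:
    """Time-of-flight distance: the infrared pulse travels to the target
    object and back, so the object distance is half the round-trip path."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

For example, a round trip of 20 nanoseconds corresponds to an object roughly 3 metres from the camera plane.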
4. The method of claim 1, further comprising:
judging whether the ambient brightness is lower than a preset brightness;
if the ambient brightness is lower than the preset brightness, starting only the second dual camera to shoot from multiple angles, obtaining a second group of images and the three-dimensional coordinates of each pixel point on the second group of images; and
processing the image data of the second group of images and the three-dimensional coordinates of each pixel point on the second group of images to construct a three-dimensional model of the target object.
5. The method of claim 4, further comprising:
if the ambient brightness is higher than the preset brightness, starting both the first dual camera and the second dual camera to shoot from multiple angles.
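The brightness gating of claims 4 and 5 can be summarized as a small selection function; the string return values are a hypothetical representation for illustration, not part of the claims:

```python
def select_cameras(ambient_brightness: float, preset_brightness: float):
    """Claims 4-5: below the preset brightness, only the second (infrared)
    dual camera is started; otherwise both dual cameras are started."""
    if ambient_brightness < preset_brightness:
        return ("second",)           # low light: infrared dual camera only
    return ("first", "second")       # enough light: both dual cameras
```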
6. A stereo camera terminal, characterized in that the terminal comprises:
the camera comprises a body, a first double camera and a second double camera which are arranged on the body, and a processor connected with the first double camera and the second double camera; the first double cameras comprise a first camera and a second camera; the second double cameras are infrared cameras and comprise a first infrared camera and a second infrared camera; the first double cameras and the second double cameras are arranged at the upper end part of the body according to a first preset interval; the first infrared camera and the second infrared camera are arranged between the first camera and the second camera at intervals according to a second preset distance;
the first double cameras are used for shooting a target object in multiple angles to obtain a first group of images, and the first group of images are two-dimensional images;
the second double cameras are used for shooting the target object at multiple angles to obtain a second group of images and space information of each pixel point on the second group of images;
the processor is used for matching the first group of images with the second group of images to obtain image information of each pixel point of the first group of images and corresponding spatial information thereof so as to construct a three-dimensional model of the target object;
matching the first group of images with the second group of images to obtain image information of each pixel point on the first group of images and corresponding spatial information thereof, so as to construct a three-dimensional model of the target object, wherein the step of matching the first group of images with the second group of images comprises the following steps:
performing image processing on the first group of images and the second group of images to obtain position information of three-dimensional coordinates of each pixel point on the first group of images on the target object;
determining the three-dimensional coordinates of each pixel point of the target object based on the position information;
constructing a three-dimensional model of the target object based on the three-dimensional coordinates of each pixel point of the target object;
the second dual cameras are used for shooting the target object at multiple angles, and obtaining a second group of images and space information of each pixel point on the second group of images comprises:
obtaining object distance through the second double cameras, and carrying out image processing on the second group of images to obtain the width and height of each pixel point on the second group of images; and correcting the obtained width and height according to the object distance, wherein the specific correction process is as follows: in the second group of images, for each pixel point, according to the width and the height of the pixel point on the image, according to a preset proportion, calculating a reference width and a reference height (X1, Y1) close to a target object, and according to the reference width and the reference height, calculating a reference object distance Z1 corresponding to the reference width and the reference height so as to determine a reference three-dimensional coordinate (X1, Y1, Z1) of the pixel point; and calculating to obtain an error rate based on the object distance obtained by the second double cameras and the calculated reference object distance Z1, calculating to obtain a corrected real width and a corrected real height by using the reference width, the reference height and the error rate, and finally determining the three-dimensional coordinate of a certain pixel point on the second group of images.
7. The terminal of claim 6, wherein the processor is further configured to:
judge whether the ambient brightness is lower than a preset brightness;
if the ambient brightness is lower than the preset brightness, start only the second dual camera to shoot from multiple angles, obtaining a second group of images and the three-dimensional coordinates of each pixel point on the second group of images; and
process the image data of the second group of images and the three-dimensional coordinates of each pixel point on the second group of images to construct a three-dimensional model of the target object.
CN201710863476.1A 2017-09-22 2017-09-22 Stereo photographing method and device and terminal Active CN107864372B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710863476.1A CN107864372B (en) 2017-09-22 2017-09-22 Stereo photographing method and device and terminal

Publications (2)

Publication Number Publication Date
CN107864372A CN107864372A (en) 2018-03-30
CN107864372B true CN107864372B (en) 2021-02-26

Family

ID=61699540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710863476.1A Active CN107864372B (en) 2017-09-22 2017-09-22 Stereo photographing method and device and terminal

Country Status (1)

Country Link
CN (1) CN107864372B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062369A (en) * 2020-01-05 2020-04-24 异起(上海)智能科技有限公司 Method and device for sensing dynamic object
CN111277811B (en) * 2020-01-22 2021-11-09 上海爱德赞医疗科技有限公司 Three-dimensional space camera and photographing method thereof
CN112235559B (en) * 2020-10-14 2021-09-14 贝壳找房(北京)科技有限公司 Method, device and system for generating video
CN114029243B (en) * 2021-11-11 2023-05-26 江苏昱博自动化设备有限公司 Soft object grabbing and identifying method for sorting robot

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009155688A1 (en) * 2008-06-23 2009-12-30 Craig Summers Method for seeing ordinary video in 3d on handheld media players without 3d glasses or lenticular optics
CN104333747A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Stereoscopic photographing method and stereoscopic photographing equipment
CN104539934A (en) * 2015-01-05 2015-04-22 京东方科技集团股份有限公司 Image collecting device and image processing method and system
CN104883502A (en) * 2015-05-19 2015-09-02 广东欧珀移动通信有限公司 Focusing method and apparatus for mobile terminal
CN106157360A (en) * 2015-04-28 2016-11-23 宇龙计算机通信科技(深圳)有限公司 A kind of three-dimensional modeling method based on dual camera and device
CN106898022A (en) * 2017-01-17 2017-06-27 徐渊 A kind of hand-held quick three-dimensional scanning system and method


Similar Documents

Publication Publication Date Title
US11928838B2 (en) Calibration system and method to align a 3D virtual scene and a 3D real world for a stereoscopic head-mounted display
CN107864372B (en) Stereo photographing method and device and terminal
US20210152802A1 (en) Apparatus and method for generating a representation of a scene
CN103795998B (en) Image processing method and image processing equipment
US10560683B2 (en) System, method and software for producing three-dimensional images that appear to project forward of or vertically above a display medium using a virtual 3D model made from the simultaneous localization and depth-mapping of the physical features of real objects
EP2779091B1 (en) Automatic stereoscopic camera calibration
US20160295194A1 (en) Stereoscopic vision system generatng stereoscopic images with a monoscopic endoscope and an external adapter lens and method using the same to generate stereoscopic images
US9443338B2 (en) Techniques for producing baseline stereo parameters for stereoscopic computer animation
CN109510977A (en) Three-dimensional light field panorama is generated using concentric observation circle
CN101247530A (en) Three-dimensional image display apparatus and method for enhancing stereoscopic effect of image
CN109285189B (en) Method for quickly calculating straight-line track without binocular synchronization
CN108885342A (en) Wide Baseline Stereo for low latency rendering
CN104599317A (en) Mobile terminal and method for achieving 3D (three-dimensional) scanning modeling function
WO2018032841A1 (en) Method, device and system for drawing three-dimensional image
CN110335307A (en) Scaling method, device, computer storage medium and terminal device
JP2024019662A (en) Method and device for angle detection
KR20120102202A (en) Stereo camera appratus and vergence control method thereof
CN113870213A (en) Image display method, image display device, storage medium, and electronic apparatus
CN108012139B (en) The image generating method and device shown applied to the nearly eye of the sense of reality
CN110784728B (en) Image data processing method and device and computer readable storage medium
CN113485547A (en) Interaction method and device applied to holographic sand table
CN107222689B (en) Real scene switching method and device based on VR (virtual reality) lens
JP2011205385A (en) Three-dimensional video control device, and three-dimensional video control method
TWI559731B (en) Production method for a three dimensional image
CN110880187A (en) Camera position information determining method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant