CN1945625A - Realizing method for forming three dimension image and terminal device


Info

Publication number
CN1945625A
CNA2006101402267A · CN200610140226A · CN1945625A
Authority
CN
China
Prior art keywords
view
image
dimensional
terminal device
dimensional image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2006101402267A
Other languages
Chinese (zh)
Other versions
CN100454335C (en)
Inventor
Shi Bin (石彬)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CNB2006101402267A priority Critical patent/CN100454335C/en
Publication of CN1945625A publication Critical patent/CN1945625A/en
Priority to PCT/CN2007/070922 priority patent/WO2008049370A1/en
Application granted granted Critical
Publication of CN100454335C publication Critical patent/CN100454335C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention discloses a method for generating a three-dimensional (3D) image, comprising the steps of: selecting input information from two-dimensional (2D) images, and processing the 2D images with optical algorithms according to that input information to form a 3D image. The invention also discloses a terminal device for generating 3D images, comprising an input module, a selection module, a processing module, and an output module. The invention converts 2D images captured by an image acquisition device into 3D images, so that users can produce the 3D images they need on demand.

Description

Implementation method and terminal device for generating a three-dimensional image
Technical field
The present invention relates to the field of image processing, and in particular to an implementation method and a terminal device for generating a three-dimensional image from two-dimensional images captured by an image acquisition device.
Background technology
With the rapid development of three-dimensional (3D) technology, many new techniques and methods for data acquisition, modeling, and related tasks have emerged in recent years, and 3D technology is used for display and simulation in a growing number of fields. 3D models, 3D images, and 3D animation have gradually become common multimedia elements that are more intuitive and expressive, and the promotion and application of 3D technology will continue to grow.
However, producers of existing 3D images and animations are limited to specialists who have mastered 3D authoring techniques; ordinary users cannot create the 3D images they need on their own, so the advantages of 3D technology are not widely realized.
Summary of the invention
The problem to be solved by the present invention is to provide a terminal device and an implementation method that supply three-dimensional image information for video equipment, overcoming the defect in the prior art that users cannot create 3D images according to their own needs.
To achieve the above object, one embodiment of the invention provides an implementation method for generating a 3D image, comprising the following steps:
selecting input information from two-dimensional images;
performing image processing on the two-dimensional images according to the input information, obtaining three-dimensional spatial information, and forming a 3D image.
Another embodiment of the invention provides a terminal device for generating a 3D image, comprising an input unit, a selection unit, a processing unit, and an output unit, wherein:
the input unit obtains the two-dimensional images required for image processing;
the selection unit selects, from the input two-dimensional images, the input information required for image processing;
the processing unit calculates the spatial coordinates of all points according to the input information;
the output unit generates and outputs the three-dimensional image after the calculation.
Compared with the prior art, the present invention has the following advantages:
By selecting input information from two-dimensional images and processing that input information with optical algorithms, the invention realizes the conversion of two-dimensional images into a three-dimensional image.
With this method, users can produce the 3D images they need from two-dimensional photographs they take themselves, greatly expanding the range of applications of 3D technology. The terminal device can also be integrated into a high-end mobile phone, so that 3D image processing is completed directly on the phone.
Description of drawings
Fig. 1 is a schematic diagram of the technical principle of the implementation method by which the invention provides three-dimensional image information for video equipment, for the case of the dual cameras of the image acquisition device;
Fig. 2A and Fig. 2B are schematic diagrams of the technical principle of the implementation method for the case of the single camera of the image acquisition device;
Fig. 3 is a flow chart of an embodiment of the implementation method for generating a three-dimensional image according to the invention;
Fig. 4 is a schematic diagram of an embodiment of selecting the input information for generating a three-dimensional image according to the invention;
Fig. 5 is a structural diagram of an embodiment of the terminal device for generating a three-dimensional image according to the invention.
Embodiment
The technical principle by which the invention generates a three-dimensional image is shown in Fig. 1, Fig. 2A and Fig. 2B.
Fig. 1 is a schematic diagram of the principle of synthesizing a 3D image when dual cameras photograph the same object simultaneously. The distance between the centers of the two CCDs is fixed and their relative position is known, so by the optical imaging formulas the position of an object and the position of its image on each CCD can be computed from one another. Suppose a line A images on both CCDs, and a point a on line A images at a1 and a2 respectively; another point c in space (whose position relative to line A is unknown) images at c1 and c2. To judge the position of point c relative to line A, first compute the distance between a1 and c1: if c lay on line A, this distance would determine c's position on the line, and from that position the theoretical image position c2 of point c on the other CCD could also be determined. If the theoretical distance between c2 and a2 equals the measured distance, then point c, like point a, lies on line A.
Suppose instead a point b not on line A images at b1 and b2 on the two CCDs. First compute the distance between a1 and b1; assuming b lies on line A, the position of its theoretical image point b3 on the other CCD can be computed in the same way. If b3 does not coincide with b2, point b is not on line A. The assumed distance between point b and line A is then adjusted, and when the computed point b3 coincides with the actual image point b2, the distance from point b to line A is determined.
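In modern terms, the dual-CCD geometry described above is rectified stereo triangulation: with a known baseline between the two CCD centers and a known focal length, the depth of any matched point follows from the disparity between its two image positions by similar triangles. The following Python sketch illustrates that calculation; the function and parameter names (`f`, `baseline`, and so on) are illustrative and do not come from the patent.

```python
def triangulate(x1, y1, x2, f, baseline):
    """Recover the spatial position of a point from its image coordinates
    on two parallel, horizontally separated CCDs.

    x1, y1   -- image coordinates on the first CCD (same units as f)
    x2       -- x coordinate on the second CCD (y is shared after rectification)
    f        -- focal length of both lenses
    baseline -- fixed distance between the two CCD centers
    """
    disparity = x1 - x2
    if disparity <= 0:
        raise ValueError("invalid correspondence or point at infinity")
    z = f * baseline / disparity   # depth by similar triangles
    x = x1 * z / f                 # back-project the image point
    y = y1 * z / f
    return (x, y, z)
```

Applying this to the clicked pairs (a1, a2) and (c1, c2) gives the spatial positions of points a and c directly, which is equivalent to the distance-comparison test described above: c lies on line A exactly when its triangulated position does.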
Fig. 2A and Fig. 2B are schematic diagrams of the principle of synthesizing a 3D image when a single camera photographs the same object twice. Fig. 2A shows the imaging of the single lens in the first shot: point c1 and line a1b1 in the first image correspond to point c and line ab on the real object, and the relative position of point c and line ab is known. Fig. 2B shows the imaging in the second shot, taken after the lens has been rotated by some angle (the two images should share at least 80% of the same part of the object): point c2 and line a2b2 in the second image likewise correspond to point c and line ab on the object. In Fig. 2A and Fig. 2B, l2 and l4 are the known distances between the image and the CCD in the two shots, from which the object-to-CCD distances l1 and l3 in the two shots can be computed. From the relative position of c1 and line a1b1 in the first image, the relative position of c2 and line a2b2 in the second image, and the optical formulas, the rotation angles of the CCD about the y axis and the z axis of the CCD coordinate system between the two shots can be determined. The position of the CCD in the second shot relative to the first is thereby known, and from the two image positions and the two CCD positions, the spatial position of every point of the real object appearing in both images can be computed.
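The optical formulas for the single-camera case are left implicit in the description. As a hedged illustration only, the sketch below assumes the second camera pose (its displacement and its rotation about the y axis) has already been recovered as described, and shows the remaining step: intersecting the two viewing rays of one point to obtain its spatial position. All names and conventions here are assumptions, not the patent's.

```python
import math

def rotate_y(v, theta):
    """Rotate vector v about the y axis by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    x, y, z = v
    return (c * x + s * z, y, -s * x + c * z)

def triangulate_two_views(p1, p2, f, cam2_pos, cam2_yaw):
    """Intersect the viewing rays of one point seen in two shots of a
    single camera.  The first shot is taken at the origin looking along +z;
    the second at cam2_pos, rotated by cam2_yaw about the y axis.
    p1, p2 are (x, y) image coordinates; returns the midpoint of the
    closest approach of the two rays."""
    d1 = (p1[0], p1[1], f)                     # ray direction, shot 1
    d2 = rotate_y((p2[0], p2[1], f), cam2_yaw)  # ray direction, shot 2 (world frame)
    o2 = cam2_pos
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    # Solve for s, t minimizing |s*d1 - (o2 + t*d2)| (closest approach).
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e1, e2 = dot(d1, o2), dot(d2, o2)
    denom = a * c - b * b                      # nonzero unless rays are parallel
    s = (c * e1 - b * e2) / denom
    t = (b * e1 - a * e2) / denom
    q1 = tuple(s * di for di in d1)
    q2 = tuple(oi + t * di for oi, di in zip(o2, d2))
    return tuple((u + v) / 2 for u, v in zip(q1, q2))
```

With exact correspondences the two rays meet and the midpoint is the point itself; with noisy clicks the midpoint is the usual compromise.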
According to this principle, whenever dual cameras photograph the same object simultaneously, or a single camera photographs the same object at least twice, the coordinates of all spatial points can be obtained by scanning the images on the CCDs point by point, and the three-dimensional shape of the object can then be restored line by line.
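The point-by-point scan over the CCD images can be sketched as a loop over a dense per-pixel disparity map. The `disparity` array stands in for the per-point correspondences obtained from the two CCDs; it and the camera parameters are hypothetical placeholders, not values given in the patent.

```python
def scan_to_point_cloud(disparity, f, baseline):
    """Convert a per-pixel disparity map (rows x cols of floats) into a
    list of (x, y, z) spatial points, skipping invalid (<= 0) disparities."""
    points = []
    for row, line in enumerate(disparity):   # scan the image line by line
        for col, d in enumerate(line):       # and each line point by point
            if d <= 0:
                continue                     # no valid correspondence here
            z = f * baseline / d             # depth by similar triangles
            points.append((col * z / f, row * z / f, z))
    return points
```

Fitting horizontal and vertical curves through the returned points then restores the object's three-dimensional shape line by line, as described above.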
The invention provides an implementation method for generating a 3D image, in which the image acquisition device may be a mobile phone with a camera, a digital camera, a video camera, a webcam, or the like. When the image acquisition device is a camera phone, the 3D image can be generated directly on the phone without downloading the images to a PC.
Fig. 3 shows an embodiment of generating a 3D image from two images of the same object taken simultaneously by the dual cameras of a mobile phone with a 3D image generation function, comprising the steps:
Step s301: select input information in the two-dimensional images. In the selection embodiment shown in Fig. 4, the user clicks the image points k1 and k2 of the same real point k in the two images of the same object taken simultaneously by the phone's dual cameras.
Step s302: perform optical-algorithm processing on the two-dimensional images according to the input information to form a 3D image. From the known points k1 and k2 selected by the user, the terminal device determines the actual position and spatial coordinates of point k by optical-algorithm processing, and repeats the same step for every other point the user clicks. For points the user does not click, two groups of values in the CCD registers are used as gradient fields to obtain two groups of potential functions, and the spatial coordinates of the unclicked points are obtained by comparing them. The whole space is then fitted line by line with horizontal and vertical curves, and the 3D image is obtained from the fitting result.
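The patent states the gradient-field and potential-function treatment of unclicked points only abstractly. One plausible reading, sketched here purely as an illustration (the formulas below are an assumption, not the patent's method), is to estimate the depth of an unclicked pixel from the depths already recovered at the user-clicked points:

```python
def interpolate_depth(clicked, query):
    """Estimate depth at pixel `query` by inverse-squared-distance weighting
    of the user-clicked points.

    clicked -- dict mapping (col, row) pixels to known depths
    query   -- (col, row) pixel without a user click
    """
    num = den = 0.0
    for (u, v), z in clicked.items():
        d2 = (u - query[0]) ** 2 + (v - query[1]) ** 2
        if d2 == 0:
            return z            # query is itself a clicked point
        w = 1.0 / d2            # closer clicks weigh more
        num += w * z
        den += w
    return num / den
```

Whatever interpolation is used, the result is a depth value for every pixel, which the line-by-line curve fitting of step s302 then smooths into the final 3D image.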
When the image acquisition device lacks the function of generating a 3D image directly from two-dimensional images, the two-dimensional images must first be downloaded to a PC via a USB data cable, infrared, Bluetooth, or similar means, and the 3D image is then generated according to the steps of Fig. 3; this flow is not repeated here.
When the images used are two different images of the same object taken successively by the single camera of a mobile phone with a 3D image generation function, the method comprises the steps:
Step s501: select input information in the two-dimensional images. In the two images of the same object taken successively by the single camera of the image acquisition device, the user clicks, with an operation similar to that shown in Fig. 4, the image points of the real points l, m and n, denoted l1, m1, n1 and l2, m2, n2 respectively.
Step s502: perform optical-algorithm processing on the two-dimensional images according to the input information to form a 3D image. From the positional relations among the known points l1, m1, n1 and among l2, m2, n2 selected by the user, the terminal device first determines the relative position of the two CCD poses by optical-algorithm processing. From the relative position of the two CCDs and the positions of the two images, the actual positions and spatial coordinates of points l, m and n are determined. The other points clicked by the user are computed in the same way; for points the user does not click, with the relative position of the two CCDs now known, the computation follows the same method as in step s302. The spatial coordinates of all points on the real object are thus obtained, and the 3D image is obtained by fitting.
As in the dual-camera case, if the image acquisition device lacks the function of generating a 3D image directly from two-dimensional images, the two-dimensional images must first be downloaded to a PC and processed there.
Another embodiment of the invention provides a terminal device for generating a 3D image which, as shown in Fig. 5, comprises an input unit 10, a selection unit 20, an algorithm unit 30 and an output unit 40. The input unit 10 obtains the two-dimensional images required for processing; the selection unit 20 allows the user to select, from the input images, the information required by the algorithm; the algorithm unit 30 calculates the spatial coordinates of all points from the user's input information; the output unit 40 generates and outputs the 3D image after the calculation. The input unit is the dual cameras or the single camera of the image acquisition device, used to photograph the same part of the object. When the input unit is a dual camera, it comprises either one fixed lens plus one rotatable lens, or two rotatable lenses; when the input unit is a single camera, it photographs the same part of the same object at least twice from different angles. The ways in which the algorithm unit obtains three-dimensional spatial information include optical-formula calculation, gradient fields, potential functions, and horizontal and vertical curve fitting. The terminal device can be integrated directly into a high-end mobile phone, so that 3D image processing is completed on the phone without downloading to a PC.
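The four units of Fig. 5 can be sketched as a minimal pipeline. The class, its methods, and the example processor below are invented for illustration; the patent specifies only the units' responsibilities, not an API.

```python
class ThreeDImageTerminal:
    """Minimal sketch of the input -> select -> process -> output pipeline."""

    def __init__(self, processor):
        self.images = []            # input unit: captured 2D images
        self.selected = []          # selection unit: user-clicked point pairs
        self.processor = processor  # algorithm unit: maps a pair to (x, y, z)

    def capture(self, image):
        self.images.append(image)

    def select(self, point_pair):
        self.selected.append(point_pair)

    def output(self):
        # processing unit computes spatial coordinates for each selection;
        # output unit returns them for fitting and display
        return [self.processor(pair) for pair in self.selected]
```

A dual-camera phone would pass a processor that triangulates each clicked pair into spatial coordinates; the output unit then hands the resulting points to the curve-fitting stage that forms the 3D image.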
The above discloses only several specific embodiments of the invention, but the invention is not limited thereto; any variation that a person skilled in the art can conceive shall fall within the protection scope of the invention.

Claims (10)

1. An implementation method for generating a three-dimensional image, characterized by comprising the steps of:
selecting input information from two-dimensional images;
performing image processing on the two-dimensional images according to the input information, obtaining three-dimensional spatial information, and forming a three-dimensional image.
2. The implementation method for generating a three-dimensional image according to claim 1, characterized in that, before selecting the input information from the two-dimensional images, the method further comprises the step of:
downloading the two-dimensional images in the image acquisition device to a PC via a USB data cable, infrared, or Bluetooth.
3. The implementation method for generating a three-dimensional image according to claim 1, characterized in that the two-dimensional images are at least two two-dimensional images containing the same part of the same object.
4. The implementation method for generating a three-dimensional image according to claim 1, characterized in that the input information is the same part of the photographed object, selected in at least two two-dimensional images.
5. The implementation method for generating a three-dimensional image according to claim 1, characterized in that the image processing comprises:
determining the actual positions and spatial coordinates of the known identical points according to the input information;
using the values of the unknown points in the two groups of CCD registers as gradient fields to obtain two groups of potential functions, and obtaining the spatial coordinates of the unknown points by comparison;
fitting the whole space line by line with horizontal and vertical curves;
obtaining the three-dimensional image from the fitting result.
6. A terminal device for generating a three-dimensional image, characterized by comprising an input unit, a selection unit, a processing unit and an output unit, wherein:
the input unit obtains the two-dimensional images required for image processing;
the selection unit selects, from the input two-dimensional images, the input information required for image processing;
the processing unit calculates the spatial coordinates of all points according to the input information;
the output unit generates and outputs the three-dimensional image after the calculation.
7. The terminal device for generating a three-dimensional image according to claim 6, characterized in that the input unit is the dual cameras or the single camera of an image acquisition device.
8. The terminal device for generating a three-dimensional image according to claim 7, characterized in that, when the input unit is a dual camera, it comprises at least one rotatable lens for photographing the same part of the object.
9. The terminal device for generating a three-dimensional image according to claim 7, characterized in that, when the input unit is a single camera, it photographs the same part of the same object at least twice from different angles.
10. The terminal device for generating a three-dimensional image according to claim 6, characterized in that the ways in which the algorithm unit obtains three-dimensional spatial information include optical-formula calculation, gradient fields, potential functions, and horizontal and vertical curve fitting.
CNB2006101402267A 2006-10-23 2006-10-23 Realizing method for forming three dimension image and terminal device Active CN100454335C (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CNB2006101402267A CN100454335C (en) 2006-10-23 2006-10-23 Realizing method for forming three dimension image and terminal device
PCT/CN2007/070922 WO2008049370A1 (en) 2006-10-23 2007-10-18 Realizing method of generating three-dimensional image and terminal device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006101402267A CN100454335C (en) 2006-10-23 2006-10-23 Realizing method for forming three dimension image and terminal device

Publications (2)

Publication Number Publication Date
CN1945625A true CN1945625A (en) 2007-04-11
CN100454335C CN100454335C (en) 2009-01-21

Family

ID=38045020

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006101402267A Active CN100454335C (en) 2006-10-23 2006-10-23 Realizing method for forming three dimension image and terminal device

Country Status (2)

Country Link
CN (1) CN100454335C (en)
WO (1) WO2008049370A1 (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08194737A (en) * 1995-01-17 1996-07-30 Dainippon Printing Co Ltd Measurement device for difference in levels of minute pattern
CN1115546C (en) * 1999-12-29 2003-07-23 宝山钢铁股份有限公司 Surface three-dimensional appearance testing method and equipment
CN1123321C (en) * 2000-10-09 2003-10-08 清华大学 Human hand movement image 3D real time testing method
CN1567384A (en) * 2003-06-27 2005-01-19 史中超 Method of image acquisition, digitized measure and reconstruction of three-dimensional object
US20050088515A1 (en) * 2003-10-23 2005-04-28 Geng Z. J. Camera ring for three-dimensional (3D) surface imaging

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN106296574A (en) * 2016-08-02 2017-01-04 乐视控股(北京)有限公司 3-d photographs generates method and apparatus
CN106934777A (en) * 2017-03-10 2017-07-07 北京小米移动软件有限公司 Scan image acquisition methods and device
CN106934777B (en) * 2017-03-10 2020-07-14 北京小米移动软件有限公司 Scanning image acquisition method and device

Also Published As

Publication number Publication date
CN100454335C (en) 2009-01-21
WO2008049370A1 (en) 2008-05-02

Similar Documents

Publication Publication Date Title
Newcombe et al. Live dense reconstruction with a single moving camera
Gledhill et al. Panoramic imaging—a review
JP6883608B2 (en) Depth data processing system that can optimize depth data by aligning images with respect to depth maps
CN1878318A (en) Three-dimensional small-sized scene rebuilding method based on dual-camera and its device
CN109147029B (en) Monocular polarization three-dimensional reconstruction method
Larsson et al. Revisiting radial distortion absolute pose
CN1799041A (en) Hand-held device having three-dimensional viewing function with tilt sensor and display system using the same
CN106157246A (en) A kind of full automatic quick cylinder panoramic image joining method
CN115239871A (en) Multi-view stereo network three-dimensional reconstruction method
CN113793387A (en) Calibration method, device and terminal of monocular speckle structured light system
CN108460797B (en) Method and device for calculating relative pose of depth camera and height of scene plane
Ye et al. Accurate and dense point cloud generation for industrial Measurement via target-free photogrammetry
CN100454335C (en) Realizing method for forming three dimension image and terminal device
CN114820563A (en) Industrial component size estimation method and system based on multi-view stereo vision
CN109785429A (en) A kind of method and apparatus of three-dimensional reconstruction
CN107203984A (en) Correction system is merged in projection for third party software
EP2856424A1 (en) Method of three-dimensional measurements by stereo-correlation using a parametric representation of the measured object
JP5086120B2 (en) Depth information acquisition method, depth information acquisition device, program, and recording medium
JP3924576B2 (en) Three-dimensional measurement method and apparatus by photogrammetry
Wang et al. Automatic measurement based on stereo vision system using a single PTZ camera
CN1555097A (en) Quick algorithm in planar charge coupling device array super resolution imaging technology
Melendez et al. Relightable Buildings from Images.
TWI834493B (en) Three-dimensional reconstruction system and method based on multiple coding patterns
Thomas et al. Portable Mini Turntable for Close-Range Photogrammetry: A Preliminary Study
Wang et al. Characteristic line of planar homography matrix and its applications in camera calibration

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant