CN108876840A - Method of generating vertically or forward-projecting three-dimensional images using a virtual 3D model - Google Patents

Method of generating vertically or forward-projecting three-dimensional images using a virtual 3D model Download PDF

Info

Publication number
CN108876840A
CN108876840A, CN201810823619.0A, CN201810823619A, CN 108876840 A
Authority
CN
China
Prior art keywords
virtual
model
image
camera
depth map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810823619.0A
Other languages
Chinese (zh)
Inventor
周冬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangyin Jia Heng Software Technology Co Ltd
Original Assignee
Jiangyin Jia Heng Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangyin Jia Heng Software Technology Co Ltd filed Critical Jiangyin Jia Heng Software Technology Co Ltd
Priority to CN201810823619.0A priority Critical patent/CN108876840A/en
Publication of CN108876840A publication Critical patent/CN108876840A/en
Withdrawn legal-status Critical Current

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 - Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images

Abstract

The invention discloses a method of generating vertically or forward-projecting three-dimensional images using a virtual 3D model, in which the 3D model is created by simultaneously localizing and depth-mapping the physical features of a real object. A camera captures a first image from a first viewing angle and a subsequent image from a subsequent viewing angle, and its autofocus system provides a first set of depth map data and a subsequent set of depth map data. The two sets of depth map data are used to generate a parallax map, from which the virtual 3D model is created. Imaging the virtual 3D model yields an image that can be viewed as three-dimensional. An enhanced 3D effect is added to the virtual 3D model, causing aspects of the image to appear to extend above or in front of the display medium. The present invention is a method of generating vertically or forward-projecting three-dimensional images using a virtual 3D model that embodies this enhanced 3D effect.

Description

Method of generating vertically or forward-projecting three-dimensional images using a virtual 3D model
Technical field
The present invention relates to methods of creating virtual three-dimensional and/or autostereoscopic images using a virtual 3D model, and in particular to a method of generating vertically or forward-projecting three-dimensional images using a virtual 3D model.
Background technique
Cameras of decades ago had an objective lens that had to be aimed at the object being imaged and then manually focused on that object. As technology advanced, cameras gained the ability to focus automatically, that is, to focus on a target object in front of the camera without manual adjustment. Automatic focusing is commonly accomplished with a time-of-flight system. In a time-of-flight system, an emitter such as an infrared source emits infrared light in the direction the camera lens is pointing; the light travels to the object and is reflected back toward the camera. The camera includes an infrared sensor that captures reflected infrared light in the frequency range used by the emitter. By measuring the flight time between emission and reception of the emitted energy, the distance to the object can be calculated, and that information is then used to focus the camera's lens automatically.
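The time-of-flight calculation described above reduces to a single relation: the pulse travels to the object and back, so the one-way distance is half the round trip multiplied by the propagation speed. A minimal sketch (function and variable names are illustrative, not from the patent):

```python
# Time-of-flight ranging: distance from the round-trip time of an
# infrared pulse. The pulse travels out and back, so the one-way
# distance is half the round trip times the speed of light.

SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance in metres to the reflecting object."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m.
d = tof_distance_m(10e-9)
```

An autofocus system would feed a distance like `d` to the lens drive; the sketch only shows the ranging step.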
Many cameras in use today are not dedicated devices; rather, they are integrated into handheld smartphones and tablet computers, so anyone carrying a smartphone also carries a camera. The cameras used in smartphones and tablets have small objective lenses. Moreover, these lenses cannot be focused manually, so a smartphone camera must rely on an autofocus system to capture a sharp image. Although a time-of-flight system can still be used to determine the distance from the camera to the object being focused, more useful information is usually obtained with a depth map. In a smartphone, a basic depth map is typically produced with a structured-light system: a pattern of infrared light, such as a grid, is projected onto the object in front of the camera. Because the grid is emitted in the infrared, it is invisible to the eye, and the projected grid is distorted by the shape of the object it strikes. Using the processing power available in a smartphone or tablet, the distortions in the grid can be converted into a depth map representing the shape of the target object. A depth map is a map containing depth-related information, in which each unit of pixel data corresponds to the physical shape of the object being mapped. A depth value is therefore assigned to each unit of pixel data, and this data can then be used to create an accurate three-dimensional model of the mapped object.
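The last step above, turning per-pixel depth values into 3D model points, can be sketched with a standard pinhole back-projection. This is a generic illustration, not the patent's algorithm, and the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) are placeholder values:

```python
# Back-project a per-pixel depth map into 3D points in camera
# coordinates using the pinhole model: X = (u - cx) * Z / fx, etc.
# A depth of 0 marks an invalid pixel (no structured-light return).

def depth_map_to_points(depth, fx, fy, cx, cy):
    """depth: rows of depth values in metres, indexed depth[v][u].
    Returns a list of (X, Y, Z) points in camera coordinates."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # invalid / missing depth
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Tiny 2x2 depth map; one pixel has no depth reading.
depth = [[0.0, 2.0],
         [2.0, 2.0]]
pts = depth_map_to_points(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

A real pipeline would mesh these points; the sketch stops at the point cloud that a 3D model would be built from.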
Some smartphones have two cameras configured as a stereoscopic pair. By comparing the left and right images and calculating the distance between each point on the object and the camera, a basic depth map can readily be produced with such a stereoscopic camera; a system of this kind was used in the "Project Tango" three-dimensional depth-mapping system. However, most smartphones have only one camera on the screen side, the camera used for taking selfies. A smartphone may include one or more infrared emitters for its autofocus system or for a dedicated depth-mapping system, such as the depth-mapping system in the Apple iPhone X. Obtaining an accurate depth map from a single camera viewpoint is nonetheless complicated. With a single camera, an infrared time-of-flight system and/or a structured-light system is used to obtain depth data for a single image, and a disparity map is then generated by comparing normalized shift values in consecutive images. In many modern smartphones, the disparity map is generated with simultaneous localization and mapping (SLAM) software, which tracks a set of target pixel points through successive camera frames and uses these tracks to triangulate their position coordinates in real space. At the same time, the estimated locations in real space are used to calculate the camera positions from which they could be observed. As the smartphone camera moves, it obtains two different images from which distances can be calculated; the data becomes available as soon as two images have been captured in succession. In addition, further data can be obtained from the smartphone's accelerometer sensor and used to estimate the change in camera position between the first image and a subsequent image. The optical differences between subsequent images thus become known, along with the corresponding differences in the position of the smartphone's camera relative to the target. Besides triangulating each target feature in the subsequent images, the SLAM system also compares the differences between each feature and its relationships to the other features in the image. The result is that the smartphone has different views of the target object, knows the approximate angles of focus, knows the distances between the positions the camera used, and tracks the known features and the relationships between them. The smartphone can therefore closely approximate how each feature is positioned relative to the other features in real space, essentially generating a three-dimensional mapping of the target points in the observed space. Once the three-dimensional mapping is complete, a two-dimensional image can be wrapped onto it by matching corresponding image points, resulting in a virtual three-dimensional model. Many systems exist for creating stereoscopic and autostereoscopic images from a virtual three-dimensional model. However, most prior-art systems produce three-dimensional images that appear to exist behind or below the plane of the electronic screen on which the image is viewed. Creating 3D effects above or in front of the screen on which the image is viewed is more difficult: to create a virtual image that appears above or in front of the display, complex adjustments must be introduced into the creation of the image. In the prior art, such 3D effects are achieved by imaging the virtual 3D model from stereoscopic viewpoints, changing the parallax between the stereoscopic viewpoints, or shifting the viewpoints to add the 3D effect. Before imaging, only slight adjustments are made to the virtual 3D model itself.
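The core of the disparity-to-distance step described above is the classic two-view relation z = f * B / d: the depth of a tracked feature follows from its pixel shift (disparity) between two camera positions separated by a known baseline. A minimal sketch with illustrative values, not the patent's SLAM implementation:

```python
# Depth from the disparity of a feature tracked between two images
# taken from horizontally shifted camera positions.
#   z = focal_length_px * baseline_m / disparity_px
# A larger pixel shift means a nearer point.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in metres of a feature from its pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# A feature shifting 20 px between frames, f = 1000 px, 4 cm baseline:
z = depth_from_disparity(20.0, 1000.0, 0.04)  # -> 2.0 m
```

In the single-camera case the baseline is not fixed hardware geometry; it would be estimated per frame pair, e.g. from the accelerometer data the passage mentions.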
It has been found that 3D renderings of a virtual 3D model can be created more realistically and more clearly by creatively altering the 3D model itself, in addition to controlling the viewpoints of the imaging cameras. This improved technique represents the advance in the art described and claimed below.
Technical solution
The technical problem mainly solved by the present invention is to provide a method of generating vertically or forward-projecting three-dimensional images using a virtual 3D model, wherein the virtual image contains aspects that appear to be three-dimensional when viewed on a display medium, characterized in that the method comprises the following steps: providing a camera at a physical scene, wherein the camera embodies an autofocus system; capturing a first image of the physical scene from a first viewing angle, wherein the autofocus system provides a first set of depth map data corresponding to the first image; capturing, with the camera, a subsequent image of the physical scene from a subsequent viewing angle, wherein the autofocus system provides a subsequent depth map data set corresponding to the subsequent image; generating a parallax map using the first set of depth map data and the subsequent depth map data set; creating a virtual 3D model of the physical scene from the parallax map; imaging the virtual 3D model from stereoscopic viewpoints to obtain a stereoscopic image of the virtual 3D model; and displaying the stereoscopic image on the display medium.
Optionally, the method further comprises altering the virtual 3D model to add effects to it, so that when the virtual 3D model is viewed on the display medium, aspects of its stereoscopic image appear to extend out of the display medium.
Optionally, altering the virtual 3D model comprises tilting at least part of the virtual 3D model.
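The claimed steps can be sketched as a data-flow skeleton. All function bodies are placeholders; only the order of operations (two images with depth map data, then parallax map, then virtual model, then stereoscopic pair) mirrors the claim, and every name is illustrative:

```python
# Skeleton of the claimed pipeline:
#   (image A, depth A) + (image B, depth B)
#   -> parallax map -> virtual 3D model -> left/right stereo views.

def build_parallax(depth_a, depth_b):
    """Placeholder: normalised shift between the two depth data sets."""
    return [[abs(a - b) for a, b in zip(ra, rb)]
            for ra, rb in zip(depth_a, depth_b)]

def capture_pipeline(image_a, depth_a, image_b, depth_b):
    parallax = build_parallax(depth_a, depth_b)
    # Stand-in for the virtual 3D model: geometry + wrapped texture.
    model = {"parallax": parallax, "texture": image_a}
    left = {"view": "left", "model": model}
    right = {"view": "right", "model": model}
    return left, right  # stereoscopic pair to show on the display

left, right = capture_pipeline("imgA", [[1.0, 2.0]], "imgB", [[2.0, 2.0]])
```

The actual parallax-map and model-creation mathematics (SLAM triangulation, texture wrapping) are described in the embodiment section; this skeleton only fixes the ordering of the claimed steps.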
The beneficial effects of the invention are as follows:
The present invention is a method of generating vertically or forward-projecting three-dimensional images using a virtual 3D model, and it embodies an enhanced 3D effect.
Detailed description of the invention
Fig. 1 shows the system hardware required to create and use the present invention.
Fig. 2 is the perspective view of the exemplary embodiment of virtual scene.
Fig. 3 is the side view of the virtual scene of Fig. 2.
Embodiment
The preferred embodiments of the present invention are described in detail below so that the advantages and features of the invention can be more easily understood by those skilled in the art, and so that the protection scope of the present invention can be more clearly defined.
Referring to Fig. 1, it will be understood that the present invention is used to generate a produced image 10 with an enhanced 3D effect. The produced image 10 is viewed on a display medium, such as a printed page or, in the illustrated example, the display 12 of an electronic device 14. The produced image 10 can be a still image or a video. Either way, when viewed on the display 12, the produced image 10 appears to have three-dimensional aspects. Moreover, at least some produced images 10 embody an enhanced 3D effect, whereby aspects of the produced image 10 appear above the surface plane of the display 12 or in front of it. If the electronic device 14 has a traditional LED or LCD display, the produced image 10 must be viewed with 3D glasses in order to observe its three-dimensional effects. Likewise, if the produced image 10 is printed on a paper medium, it must be viewed with 3D glasses. Alternatively, if the electronic device 14 has an autostereoscopic display, the enhanced 3D effect in the produced image 10 can be observed with the naked eye.
The produced image 10 containing the enhanced 3D effect begins as a physical scene 15 captured by a camera 17. The camera 17 can be a monoscopic or stereoscopic camera and has an autofocus system 18 that emits signals and receives their reflections to determine the distance to objects in front of the camera 17. The camera 17 is preferably embodied in a handheld electronic device 14 in the form of a smartphone or tablet computer. The handheld electronic device 14 has its own processor 20 and runs the autofocus system 18, which focuses the camera 17 using a time-of-flight and/or structured-light subsystem. The time-of-flight and/or structured-light autofocus system emits and detects signals, such as infrared light or ultrasonic signals, from the handheld electronic device 14. The camera 17 is used to take more than one two-dimensional image 22 of the physical scene 15. This can be achieved with a stereoscopic camera, in which two images are obtained simultaneously. With a monoscopic camera, the position of the camera 17 and/or of the imaged object is moved slightly between images 22, producing an initial image 22a and at least one subsequent image 22n. The physical scene 15 captured by the camera 17 typically contains a primary object 24. In the illustrated example, the primary object 24 is a toy dinosaur 26. It will be understood, however, that any primary object or set of primary objects can be imaged. During imaging, the autofocus system 18 of the camera 17 creates depth map data for each 2D image 22. A disparity image 21 is then generated by comparing normalized shift values in the consecutive images 22a-22n. In many modern smartphones, the disparity image 21 is generated using simultaneous localization and mapping (SLAM) software 30 run by the device's own processor 20. The SLAM software 30 tracks a set of target pixel points through the successive camera images 22a-22n and uses these tracks to triangulate their position coordinates in real space; at the same time, the estimated locations in real space are used to calculate the camera positions from which they could be observed. The optical differences between the subsequent images 22a-22n thus become known, along with the corresponding differences in the position of the camera 17 on the handheld electronic device 14 relative to the target. Besides triangulating each target feature in the subsequent images 22a-22n, the SLAM software 30 also compares each feature with the differences in its relationships to the other features in the images 22. As a result, the handheld electronic device 14 can generate a good approximation of how the primary object 24 is positioned in real space relative to the other aspects of the imaged scene, which enables the SLAM software 30 to generate a three-dimensional disparity image 21 of the target points in the observed space. Once the three-dimensional disparity image 21 is complete, commercially available image-wrapping software 28 can be used to wrap one or more of the images 22 around the three-dimensional disparity image 21. This is accomplished by matching points on the 2D images 22 to corresponding image points on the three-dimensional disparity image 21 generated by the SLAM software 30. The result is a virtual dinosaur model 34 representing the original toy dinosaur 26.
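The image-wrapping step, matching 2D image points to the triangulated geometry, amounts to texture lookup: each 3D point keeps the pixel coordinate it was triangulated from, so its colour is read directly from the source image. A minimal sketch with an illustrative data layout, not the actual image-wrapping software 28:

```python
# "Wrap" a 2D image around triangulated geometry by colouring each 3D
# point with the pixel it was triangulated from.

def texture_points(points_with_px, image):
    """points_with_px: list of ((X, Y, Z), (u, v)) pairs, where (u, v)
    is the source pixel of each triangulated point.
    image: 2D list of RGB tuples indexed image[v][u].
    Returns (xyz, rgb) vertices for the virtual model."""
    return [(xyz, image[v][u]) for xyz, (u, v) in points_with_px]

# One-row image: red pixel at u=0, green pixel at u=1.
image = [[(255, 0, 0), (0, 255, 0)]]
verts = texture_points([((0.0, 0.0, 2.0), (1, 0))], image)  # green vertex
```

A full implementation would interpolate colours across mesh triangles; the per-vertex lookup above is the idea the matching step relies on.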
Referring to Figs. 2 and 3 in combination with Fig. 1, an exemplary virtual scene 31 is shown. The virtual scene 31 contains all the elements originally imaged in the physical scene 15 and typically includes the primary object 24, which is usually the element of the virtual scene 31 for which the virtual 3D model 32 is created. In the illustrated example, the primary object 24 corresponds to the dinosaur model 34 of the imaged toy dinosaur 26. A reference plane 36 is defined in the virtual scene 31. The reference plane 36 can be any plane above and/or below which aspects of the dinosaur model 34 appear. In the illustrated embodiment, the reference plane 36 is oriented with the surface on which the dinosaur model 34 stands. When shown on the electronic display 12, the reference plane 36 of the virtual scene 31 is oriented along the plane of the electronic display 12. Consequently, when the virtual scene 31 is converted into the produced image 10, any aspects of the dinosaur model 34 imaged above the reference plane 36 will project forward and appear in front of the display 12 or above it, depending on the orientation of the display 12. Conversely, any aspects of the dinosaur model 34 imaged below the reference plane 36 will project rearward and appear below or behind the plane of the display 12 when the produced image 10 is viewed. If the produced image 10 is to be printed, the reference plane 36 is selected to correspond to the plane of the paper on which the produced image 10 is printed. Once the elements of the virtual scene 31 to be modeled are selected, the virtual 3D model 32 is generated. Stereoscopic views of the virtual 3D model 32 are taken from virtual camera viewpoints: a virtual left camera viewpoint 37 and a virtual right camera viewpoint 38. The distance D1 between the virtual camera viewpoints 37, 38 and the elevation angle A1 of the virtual camera viewpoints 37, 38 depend on the extent of the virtual 3D model 32. The virtual 3D model 32 is created to be shown on an electronic display 12. Most electronic displays are rectangular, with a width between 50% and 80% of their length. Accordingly, the virtual 3D model 32 is created within boundaries such that it fits the size and proportions of a typical electronic display 12. The boundaries include a front boundary 39, a rear boundary 40, and two side boundaries 41, 42. Any produced image 10 created from the virtual 3D model 32 must lie within the boundaries 39, 40, 41, 42 in order to be seen.
A rear image boundary 40 is set for the produced image 10 so that all visible aspects of the virtual scene appear in front of the rear image boundary 40. The dinosaur model 34 has a height H1, and the virtual camera viewpoints 37, 38 are set at a second height H2, which is a function of the object height H1 and the rear image boundary 40. The second height H2 of the virtual camera viewpoints 37, 38 is high enough that the top of the dinosaur model 34, as viewed from the virtual camera viewpoints 37, 38, does not extend above the rear image boundary 40. The elevation angle of the virtual camera viewpoints 37, 38 and the convergence angle of the camera viewpoints 37, 38 have a direct Pythagorean relationship, which depends on the scene boundaries 39, 40, 41, 42 and the height H1 of the dinosaur model 34 as the primary object 24. The virtual camera viewpoints 37, 38 can be adjusted to a parallax angle so that their lines of sight intersect at the reference plane 36; that is, the two virtual camera viewpoints 37, 38 achieve zero parallax at the reference plane 36. The convergence point P is preferably selected to correspond to a point near the bottom and tail of the dinosaur model 34, as the primary object 24, where the primary object 24 rests on the reference plane 36. For example, in the illustrated embodiment, the reference plane 36 corresponds to the ground on which the dinosaur model 34 stands, and the virtual camera viewpoints 37, 38 are aimed at the ground just behind the rear of the dinosaur model 34. The angles of the virtual camera viewpoints 37, 38 are adjusted frame by frame as the dinosaur model 34, as the primary object 24, moves relative to the reference plane 36.
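The rig geometry described above, viewpoints at height H2 toed in so their lines of sight cross at the convergence point P on the reference plane, can be sketched with two angle computations. The simple tangent relations below are an illustrative simplification of the "Pythagorean" elevation/convergence relationship in the text, with placeholder dimensions:

```python
# Virtual stereo rig geometry: two viewpoints at height H2, toed in so
# both lines of sight cross at convergence point P on the reference
# plane, which gives zero parallax at that plane.

import math

def elevation_angle_deg(cam_height, horiz_dist_to_p):
    """Angle below horizontal from a viewpoint down to P."""
    return math.degrees(math.atan2(cam_height, horiz_dist_to_p))

def toe_in_angle_deg(half_baseline, dist_to_p):
    """Inward rotation of each camera so both aim at P."""
    return math.degrees(math.atan2(half_baseline, dist_to_p))

# Example rig: viewpoints 3 units up and 4 units back from P
# (a 3-4-5 triangle, so the line of sight to P is 5 units long),
# with the two viewpoints 0.5 units apart.
elev = elevation_angle_deg(3.0, 4.0)   # ~36.87 degrees
toe = toe_in_angle_deg(0.25, 5.0)      # ~2.86 degrees per camera
```

Re-running these two computations per frame corresponds to the frame-by-frame viewpoint adjustment the embodiment describes as the primary object moves relative to the reference plane.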
The ordering of the above embodiments is for ease of description only and does not imply that any embodiment is superior or inferior to another.
Finally, it should be noted that the above embodiments are merely illustrative of the technical solutions of the present invention and do not limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced with equivalents, and that such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (3)

1. A method of generating vertically or forward-projecting three-dimensional images using a virtual 3D model, wherein the virtual image contains aspects that appear to be three-dimensional when viewed on a display medium, characterized in that the method comprises the following steps: providing a camera at a physical scene, wherein the camera embodies an autofocus system; capturing a first image of the physical scene from a first viewing angle, wherein the autofocus system provides a first set of depth map data corresponding to the first image; capturing, with the camera, a subsequent image of the physical scene from a subsequent viewing angle, wherein the autofocus system provides a subsequent depth map data set corresponding to the subsequent image; generating a parallax map using the first set of depth map data and the subsequent depth map data set; creating a virtual 3D model of the physical scene from the parallax map; imaging the virtual 3D model from stereoscopic viewpoints to obtain a stereoscopic image of the virtual 3D model; and displaying the stereoscopic image on the display medium.
2. The method of generating vertically or forward-projecting three-dimensional images using a virtual 3D model according to claim 1, characterized in that it further comprises altering the virtual 3D model to add effects to it, so that when the virtual 3D model is viewed on the display medium, aspects of its stereoscopic image appear to extend out of the display medium.
3. The method of generating vertically or forward-projecting three-dimensional images using a virtual 3D model according to claim 1, characterized in that altering the virtual 3D model comprises tilting at least part of the virtual 3D model.
CN201810823619.0A 2018-07-25 2018-07-25 Method of generating vertically or forward-projecting three-dimensional images using a virtual 3D model Withdrawn CN108876840A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810823619.0A CN108876840A (en) 2018-07-25 2018-07-25 Method of generating vertically or forward-projecting three-dimensional images using a virtual 3D model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810823619.0A CN108876840A (en) 2018-07-25 2018-07-25 Method of generating vertically or forward-projecting three-dimensional images using a virtual 3D model

Publications (1)

Publication Number Publication Date
CN108876840A true CN108876840A (en) 2018-11-23

Family

ID=64305114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810823619.0A Withdrawn CN108876840A (en) Method of generating vertically or forward-projecting three-dimensional images using a virtual 3D model

Country Status (1)

Country Link
CN (1) CN108876840A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657630A (en) * 2018-12-25 2019-04-19 上海天马微电子有限公司 Display panel, touch recognition method for display panel, and display device
CN113538318A (en) * 2021-08-24 2021-10-22 北京奇艺世纪科技有限公司 Image processing method, image processing device, terminal device and readable storage medium
CN113538318B (en) * 2021-08-24 2023-12-15 北京奇艺世纪科技有限公司 Image processing method, device, terminal equipment and readable storage medium
CN113724309A (en) * 2021-08-27 2021-11-30 杭州海康威视数字技术股份有限公司 Image generation method, device, equipment and storage medium
WO2023035960A1 (en) * 2021-09-07 2023-03-16 北京字跳网络技术有限公司 Photographing guiding method and apparatus, and electronic device and storage medium

Similar Documents

Publication Publication Date Title
US10560683B2 (en) System, method and software for producing three-dimensional images that appear to project forward of or vertically above a display medium using a virtual 3D model made from the simultaneous localization and depth-mapping of the physical features of real objects
US10116922B2 (en) Method and system for automatic 3-D image creation
CN108876840A (en) Method of generating vertically or forward-projecting three-dimensional images using a virtual 3D model
CN103974055B (en) 3D photo generation system and method
KR20160140452A (en) Method and apparatus for displaying a light field based image on a user's device, and corresponding computer program product
TWI702568B (en) Image processing device, encoding device, decoding device, image processing method, program, encoding method, and decoding method
CN109087382A (en) Three-dimensional reconstruction method and 3D imaging system
US9338426B2 (en) Three-dimensional image processing apparatus, three-dimensional imaging apparatus, and three-dimensional image processing method
TWI496452B (en) Stereoscopic image system, stereoscopic image generating method, stereoscopic image adjusting apparatus and method thereof
EP2239538A1 (en) Apparatus for detecting three-dimensional distance
Zhang et al. Stereoscopic video synthesis from a monocular video
KR20120110297A (en) Image synthesis and multiview image generation using control of layered depth image
CN114359406A (en) Calibration of auto-focusing binocular camera, 3D vision and depth point cloud calculation method
KR101348929B1 (en) A multiview image generation method using control of layer-based depth image
WO2018187743A1 (en) Producing three-dimensional images using a virtual 3d model
CN106713893B (en) Mobile phone 3D solid picture-taking methods
Knorr et al. From 2D-to stereo-to multi-view video
JP6595878B2 (en) Element image group generation apparatus and program thereof
JP2011205385A (en) Three-dimensional video control device, and three-dimensional video control method
Louis et al. Rendering stereoscopic augmented reality scenes with occlusions using depth from stereo and texture mapping
Kanamaru et al. Acquisition of 3D image representation in multimedia ambiance communication using 3D laser scanner and digital camera
JP2001241947A (en) Image depth computing method
CN113382225B (en) Binocular holographic display method and device based on holographic sand table
Alam et al. 3D visualization of 2D/360° image and navigation in virtual reality through motion processing via smartphone sensors
Jin et al. Intermediate view synthesis for multi-view 3D displays using belief propagation-based stereo matching

Legal Events

Date Code Title Description
PB01 Publication
WW01 Invention patent application withdrawn after publication
Application publication date: 20181123