CN101414383A - Image processing apparatus and image processing method - Google Patents

Image processing apparatus and image processing method

Info

Publication number
CN101414383A
CN101414383A CNA2008101715595A CN200810171559A
Authority
CN
China
Prior art keywords
image
picture
viewpoint
virtual space
image processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008101715595A
Other languages
Chinese (zh)
Other versions
CN101414383B (en)
Inventor
富手要
藤木真和
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc
Publication of CN101414383A
Application granted
Publication of CN101414383B
Expired - Fee Related
Anticipated expiration


Abstract

The invention relates to an image processing apparatus and an image processing method. A CPU (201) updates scene data (206) by changing the management order of the data of virtual objects in the scene data (206), based on the result of a search of the scene data (206) executed when generating an image of the virtual space viewed from a first viewpoint. The CPU (201) sets the updated scene data (206) as the scene data (206) to be used to generate an image viewed from a second viewpoint different from the first viewpoint.

Description

Image processing apparatus and image processing method
Technical field
The present invention relates to an image processing apparatus and an image processing method.
Background Art
With the recent improvement in computer processing power, research on virtual reality techniques for providing realistic experiences to users has been advancing (see Non-Patent Literature 1). This technology is realized by rendering a virtual space with computer graphics and presenting it on an HMD (Head Mounted Display) or a wall-type display.
In this field, an essential factor for providing a high-quality experience to the user is the speed of image generation. The processing speed generally assumed to be required for generating virtual space images that follow the movement of the user's viewpoint is 10 to 15 frames per second. To meet this requirement, techniques have been developed for generating images at high speed while maintaining higher expressive power than before.
In recent years, mature computer parallelization and virtual space processing techniques have made it possible to realize real-time ray tracing, which was previously impossible (see Non-Patent Literature 2). In particular, the ray tracing method disclosed in Non-Patent Literature 2 is called real-time ray tracing and has been widely studied. This technique allows the expression of reflection and refraction, high-speed shadow generation, and global illumination, all of which are difficult for conventional rasterization methods. High-quality images can therefore be generated.
As the expressive power of image generation processing improves, the computational load required to obtain high-quality images keeps increasing. The amount of data to be processed is also growing in order to satisfy the requirement of displaying objects in the virtual space in real time. For these reasons, even when real-time ray tracing is realized, techniques for reducing the computational load are indispensable in order to output images at a high frame rate while maintaining high expressive power.
Patent Literature 1 discloses a method of improving the efficiency of animation generation by ray tracing that uses time-series coherence. An animation expresses motion by updating gradually changing pictures (frames). The gradually changing pictures contain time-series coherence (continuity) in, for example, the positional relationship of objects in the line of sight.
In general, in image generation processing using ray tracing, the search in ray tracing takes the longest time. In Patent Literature 1, the continuity between time-series images is used: for portions that do not change between the preceding frame and the current frame, the result of the preceding frame is reused, thereby shortening the search time of the ray tracing method.
[Non-Patent Literature 1] "VR World Construction Techniques" (in Japanese), Baifukan Co., Ltd., 2000
[Non-Patent Literature 2] Ingo Wald, Carsten Benthin, Markus Wagner, and Philipp Slusallek, "Interactive Rendering with Coherent Ray-Tracing", Computer Graphics Forum (Proceedings of EUROGRAPHICS 2001), 20(3), pp. 153-164, Manchester, United Kingdom, September 3-7, 2001
[Patent Literature 1] Japanese Patent No. 2532055
The above method improves the time-series efficiency and speed of image generation processing by ray tracing. However, it fundamentally does not consider processing that generates a plurality of images of different viewpoints at a time. For example, to let a user wearing an HMD experience a virtual space, two images for the right and left eyes must be generated simultaneously and presented to the user as a stereoscopic image. In stereoscopic image generation, the right-eye image and the left-eye image are generated based on different viewpoint positions and orientations, so the reflection directions of light are different. For this reason, time-series coherence cannot be used. To generate a stereoscopic image to be presented to the user, image generation processing must be performed for each of the right and left eyes.
Summary of the invention
The present invention has been made in consideration of the above-described problems, and has as its object to provide a technique for efficiently generating a plurality of virtual space images.
According to a first aspect of the present invention, an image processing apparatus for drawing a common object from a plurality of viewpoints comprises:
a first unit which draws a first image; and
a second unit which draws a second image,
wherein each of the first unit and the second unit draws an undrawn region by referring to information obtained by the drawing processing of the other unit.
According to a second aspect of the present invention, an image processing method executed by an image processing apparatus for drawing a common object from a plurality of viewpoints comprises:
a first step of drawing a first image; and
a second step of drawing a second image,
wherein in each of the first step and the second step, an undrawn region is drawn by referring to information obtained by the drawing processing of the other step.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the accompanying drawings.
Description of drawings
Fig. 1 is a view for explaining the overview of image processing according to the first embodiment of the present invention;
Fig. 2 is a block diagram showing the hardware configuration of a computer applicable to the image processing apparatus according to the first embodiment of the present invention;
Fig. 3 is a flowchart of processing executed by the computer having the configuration shown in Fig. 2 to generate virtual space images for the right and left eyes to be presented to the user;
Fig. 4 is a view for explaining the ray tracing method and virtual space division;
Fig. 5 is a view showing an example of the structure of the tree represented by scene data 206;
Fig. 6 is a view showing a scene tree search;
Fig. 7 is a view for explaining the overview of image processing according to the second embodiment of the present invention;
Fig. 8 is a view for explaining the overview of image processing according to the second embodiment of the present invention;
Fig. 9 is a flowchart of processing executed by the computer having the configuration shown in Fig. 2 to generate virtual space images for the right and left eyes to be presented to the user according to the second embodiment of the present invention;
Figs. 10A and 10B are views for explaining adaptive screen division processing;
Fig. 11 is a view for explaining image generation processing according to the third embodiment of the present invention;
Fig. 12 is a flowchart of processing executed by the computer having the configuration shown in Fig. 2 to generate virtual space images for the right and left eyes to be presented to the user according to the fourth embodiment of the present invention;
Fig. 13 is a view for explaining the sequence of image generation processing using general rasterization;
Fig. 14 is a view for explaining a three-dimensional coordinate estimation method using stereoscopic vision;
Fig. 15 is a view for explaining the overview of image processing according to the fifth embodiment of the present invention; and
Fig. 16 is a flowchart of processing executed by the computer having the configuration shown in Fig. 2 to generate virtual space images for the right and left eyes to be presented to the user according to the fifth embodiment of the present invention.
Embodiment
Preferred embodiments of the present invention will now be described in detail with reference to the drawings. These embodiments are described as examples of preferred configurations of the invention set forth in the claims, but the present invention is not limited to the embodiments described below.
First embodiment
This embodiment assumes that the ray tracing method is used to generate virtual space images to be presented to the right and left eyes of a user (viewer). More specifically, in this embodiment, a virtual space image to be presented to one eye is generated, and the calculation results obtained by the generation processing are stored. Then, those calculation results are used to generate the virtual space image to be presented to the other eye. This improves the efficiency and speed of the processing for generating the virtual space images to be presented to the two eyes.
Fig. 1 is a view for explaining the overview of image processing according to this embodiment. This embodiment assumes that when the user experiences the virtual space, virtual space images at different viewpoint positions and orientations are generated for the left eye and the right eye using the ray tracing method. Virtual space images based on the different viewpoint positions and orientations are therefore displayed on a screen 101 and a screen 102, respectively, where the screen 101 displays the image to be presented to the left eye of the user, and the screen 102 displays the image to be presented to the right eye of the user.
The viewpoint positions and orientations of the images displayed on the screens 101 and 102 are assumed not to differ greatly, as is the case for the distance between the two eyes of a person. For this reason, this embodiment assumes that a virtual object displayed on the screen 101 is also seen on the screen 102. This assumption makes it possible to use the calculation results obtained by generating the virtual space image for the left eye in the processing for generating the virtual space image for the right eye.
An image processing apparatus 103 generates the virtual space images displayed on the screens 101 and 102 using the ray tracing method.
Regions 104 of the screens 101 and 102 do not overlap in the fields of view of the right and left eyes. For the regions 104, the calculation results obtained when generating the virtual space image displayed on one screen cannot be reused. Calculation must therefore be performed in each image generation process.
In the calculation of image processing using the ray tracing method, the processing for searching for a virtual object that intersects a ray requires the longest processing time.
The difficulty of the search depends on the number of virtual objects, that is, the complexity of the scene. Assume that a scene contains 10,000 virtual objects. If the 10,000th object in the scene tree is the object to be rendered, 10,000 search processes must be executed to generate one frame of the virtual space image, where the scene tree manages the elements included in the virtual space using a tree structure (the virtual object search processing will be described in detail later).
In this embodiment, when the scene data is searched for virtual objects seen by one eye in order to generate the virtual space image to be presented to that eye, the search results are applied to the processing for generating the virtual space image to be presented to the other eye. This reduces the total calculation load and speeds up the image generation processing.
Fig. 2 is a block diagram showing the hardware configuration of a computer applicable to the image processing apparatus according to this embodiment. The hardware of the computer is not limited to the configuration shown in Fig. 2. A computer with any other configuration is also usable as long as it mainly includes an execution unit that executes processing and a storage unit that stores programs and data.
Referring to Fig. 2, a CPU 201 controls the entire computer using computer programs and data stored in a RAM 202 and a ROM 203, and executes each process to be described later as processing performed by the computer.
The RAM 202 has an area for temporarily storing a processing program 205 (computer program) and scene data 206 loaded from an external storage device 204, and a work area used by the CPU 201 to execute various kinds of processing. That is, the RAM 202 can provide various kinds of areas as needed.
The ROM 203 stores the boot program and setting data of the computer.
The external storage device 204 is a large-capacity information storage device typified by a hard disk drive. The external storage device 204 stores an OS (operating system), the processing program 205, and the scene data 206. The external storage device 204 also stores known information to be described later and information used in practice by those skilled in the art in the following description. Computer programs and data stored in the external storage device 204 are loaded into the RAM 202 as needed under the control of the CPU 201. The CPU 201 executes various kinds of processing using the loaded computer programs and data.
The processing program 205 stored in the external storage device 204 is a computer program which causes the CPU 201 to execute the various kinds of processing to be described later as processing performed by the computer.
The scene data 206 is data that manages the elements included in the virtual space in a tree form (tree structure). For example, when a virtual object is formed from well-known polygons, the scene data 206 includes the color data and normal vector data of each polygon and the coordinate value data of each vertex of the polygon (these data will be referred to as geometric information hereinafter). If the virtual object has a texture (texture mapping), the scene data 206 also includes texture mapping data. The scene data 206 also includes information on the type and brightness of the virtual light source that illuminates the virtual space.
When the ray tracing method is used to render the virtual space image, the scene data 206 also includes space division information for facilitating the intersection determination between rays and virtual objects. In this embodiment, as described above, the virtual space images are generated using the ray tracing method.
An input device 207 inputs the position and orientation information of each of the right and left eyes of the user who observes the virtual space images. Various devices are usable as the input device 207.
For example, a keyboard or mouse may be used as the input device 207. In this case, the user manually inputs the position and orientation information of each eye using the input device 207.
A position and orientation sensor may also be used as the input device 207. In this case, the position and orientation sensor is worn on the head of the user. The position and orientation sensor inputs its measurement results to the RAM 202 as data. The CPU 201 obtains the positions and orientations of the right and left eyes from these measurement result data and the positional relationships between the position and orientation sensor and the right eye and between the position and orientation sensor and the left eye.
As described above, various methods can be used to obtain the position and orientation information of the right and left eyes of the user, and the present invention is not limited to one method. The device to be used as the input device 207 is determined according to the method employed.
A display device 208 displays the right-eye virtual space image and the left-eye virtual space image generated by the CPU 201, and is formed from, for example, a CRT or a liquid crystal panel. The display device 208 can of course also display any other information. More specifically, the display device 208 can display the processing results of the CPU 201 using images or text.
A bus 209 connects the above-described units.
Processing executed by the computer having the configuration shown in Fig. 2 to generate the virtual space images to be presented to the right and left eyes of the user (the right-eye virtual space image and the left-eye virtual space image) will be described next with reference to Fig. 3, which is a flowchart illustrating this processing.
Note that the computer programs (including the processing program 205) and data (including the scene data 206) which cause the CPU 201 to execute the processing according to the flowchart shown in Fig. 3 are stored in the external storage device 204. As described above, the computer programs and data are loaded into the RAM 202 as needed under the control of the CPU 201, and the CPU 201 executes the processing using the loaded computer programs and data. The computer thus executes the processing according to the flowchart shown in Fig. 3.
In step S300, initialization processing for the subsequent processing is executed. The initialization processing includes processing of reading out the processing program 205 from the external storage device 204 and loading it into the RAM 202. The initialization processing also includes processing of allocating, in the RAM 202, areas to be used in the subsequent processing.
In step S301, the scene data 206 is read out from the external storage device 204 and sequentially expanded on the RAM 202. The data expanded on the RAM 202 at this time include the scene tree describing the tree structure of the whole scene, and node information. The node information includes the geometric information and material information of the virtual objects serving as elements of the scene tree, and virtual light source information.
In step S302, the position and orientation information of each viewpoint (right eye and left eye) in the virtual space is acquired in the RAM 202. As described above, this acquisition can be done by various methods. In this embodiment, the user manually inputs the information using the input device 207. Predetermined fixed values may, however, be used as the viewpoint position and orientation information of the right and left eyes.
In step S303, the virtual space image (first screen) to be presented to one eye (first viewpoint) is generated using the data group acquired in the RAM 202 in step S301 and the position and orientation information of the right and left eyes acquired in the RAM 202 in step S302. The generated first screen is displayed on the display screen of the display device 208. The processing of this step will be described in detail later.
In step S304, the virtual space image (second screen) to be presented to the other eye (second viewpoint) is generated using the scene data 206 updated in the processing of step S303. The generated second screen is displayed on the display screen of the display device 208. The processing of this step will be described in detail later.
In step S305, it is determined whether to end the processing. To end the processing, for example, the user inputs an end instruction using the input device 207. Alternatively, a processing end condition may be set in advance. If it is determined to end the processing, the processing ends. If the processing is not to be ended, the process returns to step S302 to generate the right-eye virtual space image and left-eye virtual space image of the next frame.
The ray tracing method will be described next with reference to Fig. 4, which is a view for explaining the ray tracing method and virtual space division.
In the ray tracing method, processing of projecting a ray from a viewpoint 401 placed in the virtual space to each pixel of a virtual screen 402 is executed. An intersection is determined between each ray of the virtual screen 402 and a virtual object in a virtual space 403 divided by, for example, an octree. When a ray intersects a virtual object, the information of that virtual object is searched for in the scene data 206. The reflection, by the virtual object represented by the found information, of light from a virtual light source 405 is calculated, thereby determining the pixel value on the virtual screen 402. This processing is executed for all pixels of the virtual screen 402.
The virtual space 403 is divided by an octree to facilitate the intersection determination between rays and virtual objects. For the ray tracing method, many techniques for facilitating intersection determination by dividing the space have been proposed. Examples are kd-tree division and bounding volume hierarchies (BVH). This embodiment does not depend on the space division algorithm, and any space division method is usable.
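As a rough illustration of the per-pixel procedure described above, the following sketch (not taken from the patent; the Scene, Light, and screen helper objects are assumptions) casts one ray per virtual-screen pixel, searches the scene for the nearest intersection, and shades the hit point with a simple diffuse term from a single virtual light source.
```python
# Minimal ray-casting sketch; Scene.find_intersection(), Light and the screen
# object are assumed helpers, not structures defined by the patent.
import numpy as np

def trace_pixel(viewpoint, pixel_pos, scene, light):
    direction = pixel_pos - viewpoint
    direction = direction / np.linalg.norm(direction)     # ray through this pixel
    hit = scene.find_intersection(viewpoint, direction)   # scene tree search
    if hit is None:
        return scene.background_color
    to_light = light.position - hit.point
    to_light = to_light / np.linalg.norm(to_light)
    diffuse = max(0.0, float(np.dot(hit.normal, to_light)))   # Lambertian term
    return hit.color * light.brightness * diffuse          # pixel value on the virtual screen

def render(viewpoint, screen, scene, light):
    image = np.zeros((screen.height, screen.width, 3))
    for y in range(screen.height):                          # all pixels of the virtual screen
        for x in range(screen.width):
            image[y, x] = trace_pixel(viewpoint, screen.pixel_center(x, y), scene, light)
    return image
```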
The information of the virtual object that intersects a ray is obtained by searching the scene tree in the scene data 206.
The scene data 206 will be described next. Fig. 5 is a view showing an example of the structure of the tree represented by the scene data 206.
"World" 501 is a node corresponding to the base (root) node of the scene (virtual space). This node defines the absolute coordinates of the scene.
"Camera" 502 is a node that stores the position, orientation, and angle of view of the viewpoint.
"Object" 503 is a node that holds various kinds of information of virtual objects. Since a scene usually includes a plurality of virtual objects, a "Sub-object" 505 that groups the virtual objects in the scene is arranged under "Object" 503.
"Transform" 506 is a parameter that defines the position and orientation of "Object" 503 with respect to the absolute coordinates of "World" 501.
"Sub-object" 505 is a node that groups "object1" 507, "object2", ..., each of which is the smallest unit representing a virtual object. Under "Sub-object" 505, as many object nodes as the virtual objects appearing in the scene are associated.
"object1" 507 has the information of "shape" 508, "material" 509, and "transform" 510.
"shape" 508 has geometric information such as the coordinate value data and normal vector data of each vertex of the polygons of "object1" 507.
"material" 509 stores, as attribute data, the texture information of "object1" 507 and the diffuse reflection information and specular reflection information of light from the light source.
"transform" 510 represents the position and orientation information of "object1" 507.
"Light" 504 has the information of the virtual light source that illuminates the virtual space scene, and stores data on the position of the virtual light source (geometric information), its type (for example, a direct light source, a point light source, or a spotlight), and its brightness information (including color information).
In the above configuration, to obtain the information of the virtual objects that intersect rays, the processing of searching the scene tree shown in Fig. 5 (a search of the scene data) must be executed as many times as the number of ray intersections. That is, when a ray intersects a virtual object located at a deep position in the hierarchy, or in a scene including many virtual objects, the amount of search processing becomes very large.
The scene tree search in the scene data 206 will be described next. Fig. 6 is a view showing the scene tree search.
A scene tree 601 is the scene tree in its initial state. Each child node 602 in the scene tree 601 represents a virtual object seen from a given viewpoint (the viewpoint of interest). A search path 603 represents the path followed to search for the node of a virtual object when a ray intersects that virtual object (child node 602). This path is set in advance.
Normally, in the processing for generating the virtual space image seen from the viewpoint of interest, the child nodes 602 are searched for along the search path 603 as many times as the number of pixels, on the display screen, of the virtual objects corresponding to the child nodes 602. In the scene tree 601, since the child nodes 602 are located at the end of the search path 603, each search is very time-consuming. In this embodiment, to shorten the time, the positions of the child nodes 602 are moved to the head of the search path 603, thereby updating the scene tree 601.
More specifically, when a child node 602 is found, creation of a new scene tree is started in the RAM 202. First, a copy of the scene tree 601 is created in the RAM 202. The position of each child node 602 in the copied scene tree is moved to the head of the search path 603, thereby updating the copied scene tree. That is, the scene data is updated by changing the management order of the data of the virtual objects in the scene data. In Fig. 6, reference numeral 604 denotes the copied scene tree updated by moving the positions of the child nodes 602 to the head position of the search path 603. As indicated by reference numeral 605, the path used to search for the child nodes 602 is shorter than the search path 603. The arrangement order (management order) of the nodes within the child nodes 602 is not particularly limited.
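A simplified sketch of this update, treating the search path as an ordered list of child nodes (the names and the list representation are assumptions, not the patent's data structure):
```python
import copy

def build_updated_scene_tree(search_path_nodes, found_nodes):
    """search_path_nodes: child nodes in their original search order (scene tree 601);
       found_nodes: nodes hit while generating the image for the viewpoint of interest."""
    updated = copy.copy(search_path_nodes)       # copy of the original scene tree
    for node in found_nodes:
        updated.remove(node)                     # move each found child node ...
        updated.insert(0, node)                  # ... to the head of the search path
    return updated                               # unsearched nodes keep their places at the tail
```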
When the processing for generating the virtual space image seen from the viewpoint of interest ends, the scene tree 601 is updated to the copied scene tree. The updated scene tree 601 is set as the scene tree to be used to generate the virtual space image seen from a viewpoint other than the viewpoint of interest.
As described above, the virtual space image of the next viewpoint is generated using the scene tree updated in the processing for generating the virtual space image of a given viewpoint. Therefore, when generating the virtual space images of a plurality of viewpoints, the search distance becomes shorter for a viewpoint that is later in the generation order.
Nodes that are not searched in the processing for generating the virtual space image are arranged at the rearmost positions of the new scene tree (search path 603). Even nodes that are not searched are copied to the new scene tree. For this reason, no problem arises even when the first screen and the second screen have different virtual objects in their fields of view. That is, a virtual object that exists only in the second screen is not an object rearranged in the new scene tree. However, since the new scene tree stores the information of this virtual object, the virtual object can be displayed without any problem.
As described above, when the virtual space image of one viewpoint is generated, the position of the node of each virtual object to be searched for in the scene tree is rearranged at the head position of the search path, thereby realizing efficient search processing.
In the node rearrangement operation for creating the new scene tree, the frequency of search may be counted, and the scene tree may be rearranged not based on the search order but in the order of search frequency. More specifically, the number of times of search of each node is counted. When the generation processing of one virtual space image ends, the nodes are arranged from the head of the search path in descending order of the count values.
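For this variation, a sketch of the count-based rearrangement might look as follows (again with assumed names):
```python
from collections import Counter

def reorder_by_search_count(search_path_nodes, search_log):
    """search_log: one entry per search, naming the node that was reached."""
    counts = Counter(search_log)
    # most frequently searched nodes are placed at the head of the search path
    return sorted(search_path_nodes, key=lambda node: counts[node], reverse=True)
```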
In addition, if no virtual object exists in the field of view in the image generation processing of the first screen, the scene tree search processing need not be executed for the second screen. Therefore, when information indicating that no virtual object exists in the field of view is added to the new scene tree during the image processing of the first screen, the image generation processing of the second screen can be executed at high speed.
In the image generation processing of the second screen when no virtual object exists in the field of view of the first screen, a background image is prepared in advance as a texture, and the image generation processing is performed by texture rendering without using ray tracing.
In some cases, a virtual object is included only in the second screen. In this case, the information on the presence/absence of virtual objects cannot be used. The phenomenon in which only one eye can see a virtual object occurs when the parallax is large because of an extremely short distance between the virtual object and the viewpoint position. To use the presence/absence information of virtual objects, a restriction is imposed so that virtual objects exist at a predetermined depth value or farther from the viewpoint, thereby preventing the parallax from becoming large. This restriction makes it possible to use the presence/absence information of virtual objects.
If the viewpoint position and orientation at the time of first-screen image generation are the same as or almost the same as those at the time of second-screen image generation, the search results of the first screen are equivalent to those of the second screen. This makes it possible to reuse the search results of the first screen in the image generation processing of the second screen.
The problem of scene search is always encountered in various types of image generation processing (for example, the rasterization method, the volume rendering method, and the particle rendering method). Therefore, even when the image generation processing is changed, the method of improving the efficiency of image generation by scene reconstruction is effective. This embodiment is thus generally applicable to various types of image generation processing.
In this embodiment, the display device 208 of the computer displays the first screen and the second screen. However, any other display device may display the first screen and the second screen. For example, when an HMD is connected to the computer, the right-eye display screen of the HMD may display the right-eye virtual space image, and the left-eye display screen may display the left-eye virtual space image.
Second embodiment
In the first embodiment, the first screen and the second screen are generated by sequential processing. In the second embodiment, however, the first screen and the second screen are divided and generated in parallel.
Fig. 7 is a view for explaining the overview of image processing according to this embodiment.
This embodiment assumes that each of the first screen and the second screen is divided into upper and lower regions, and each region is processed by one CPU.
Referring to Fig. 7, reference numeral 701 denotes the upper region (partial region) of the first screen, and reference numeral 703 denotes the lower region of the first screen. The upper region 701 and the lower region 703 are obtained by dividing the first screen into upper and lower halves. The upper region and the lower region do not overlap.
Referring to Fig. 7, reference numeral 702 denotes the upper region of the second screen, and reference numeral 704 denotes the lower region of the second screen. The upper region 702 and the lower region 704 are obtained by dividing the second screen into upper and lower halves. The upper region and the lower region do not overlap.
In this embodiment, the upper region of one screen and the lower region of the other screen are generated in parallel. In the generation processing of the upper region of the one screen, the original scene data is copied, and, as in the first embodiment, the copied scene tree is updated by moving the nodes of the virtual objects appearing in the upper region of the one screen to the head of the search path. In the generation processing of the lower region of the other screen, the original scene data is copied, and, as in the first embodiment, the copied scene tree is updated by moving the nodes of the virtual objects appearing in the lower region of the other screen to the head of the search path. That is, in this embodiment, scene data for the one screen (first scene data) and scene data for the other screen (second scene data) are generated.
Next, the lower region of the one screen and the upper region of the other screen are generated in parallel. In the generation processing of the lower region of the one screen, the scene data updated in the processing for generating the lower region of the other screen is used. In the generation processing of the upper region of the other screen, the scene data updated in the processing for generating the upper region of the one screen is used.
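The two-phase schedule of this embodiment can be sketched as follows. The render_region() callback, which returns both the pixels of one half-region and the scene data it updated, is an assumption used only for illustration.
```python
from concurrent.futures import ThreadPoolExecutor

def render_stereo_in_halves(scene, view_a, view_b, render_region):
    with ThreadPoolExecutor(max_workers=2) as pool:
        # first half: disjoint regions, each process copies and updates its own scene data
        f1 = pool.submit(render_region, scene, view_a, "upper")
        f2 = pool.submit(render_region, scene, view_b, "lower")
        a_upper, scene_from_a_upper = f1.result()
        b_lower, scene_from_b_lower = f2.result()
        # second half: each process reuses the scene data updated by the other one
        f3 = pool.submit(render_region, scene_from_b_lower, view_a, "lower")
        f4 = pool.submit(render_region, scene_from_a_upper, view_b, "upper")
        a_lower, _ = f3.result()
        b_upper, _ = f4.result()
    return (a_upper, a_lower), (b_upper, b_lower)
```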
As described above, the two processes are executed in parallel. Therefore, when image generation is completed up to half of each screen, efficient scene tree search processing can be started.
Fig. 8 is a view for explaining the overview of image processing according to this embodiment. In Fig. 8, the same reference numerals as in Fig. 1 denote the same parts, and a description thereof will not be repeated. In this embodiment, a screen region division control unit 801 is added to generate the first screen 101 and the second screen 102 in parallel. This makes it possible to divide each screen into regions and execute the image generation processing of the first half and the second half while generating the images of the first screen 101 and the second screen 102 in parallel.
In this embodiment, information is exchanged in the second half of image generation. Therefore, unlike the first embodiment, the information obtained in the image generation processing of the second screen 102 is also output to the image processing apparatus 103, and this information is used for the image generation of the first screen 101.
Processing according to this embodiment, which is executed by the computer having the configuration shown in Fig. 2 to generate the virtual space images for the right and left eyes to be presented to the user, will be described next with reference to Fig. 9, which is a flowchart illustrating this processing.
Note that the computer programs (including the processing program 205) and data (including the scene data 206) which cause the CPU 201 to execute the processing according to the flowchart shown in Fig. 9 are stored in the external storage device 204. As described above, the computer programs and data are loaded into the RAM 202 as needed under the control of the CPU 201, and the CPU 201 executes the processing using the loaded computer programs and data. The computer thus executes the processing according to the flowchart shown in Fig. 9.
In step S900, initialization processing for the subsequent processing is executed. This initialization processing includes processing of reading out the processing program 205 from the external storage device 204 and loading it into the RAM 202. This initialization processing also includes processing of allocating, in the RAM 202, areas to be used in the subsequent processing.
In step S901, the scene data 206 is read out from the external storage device 204 and sequentially expanded on the RAM 202. The data expanded on the RAM 202 at this time include the scene tree describing the tree structure of the whole scene, and node information. The node information includes the geometric information and material information of the virtual objects serving as elements of the scene tree, and virtual light source information.
In step S902, each of the right-eye screen and the left-eye screen is divided into upper and lower regions. Region information representing each divided region is generated and stored in the RAM 202. The region information includes, for example, information representing which screen (right-eye screen or left-eye screen) the region belongs to, and the coordinate positions of the upper left corner and lower right corner of the region.
In step S903, the position and orientation information of each viewpoint (right and left eyes) in the virtual space is acquired in the RAM 202. As described above, the position and orientation information can be acquired by various methods. In this embodiment, the user manually inputs the information using the input device 207. Predetermined fixed values may, however, be used as the position and orientation information of the right eye and the left eye.
In step S904, the upper region of the virtual space image (first screen) to be presented to one eye is generated using the data group acquired in the RAM 202 in steps S901 and S902 and the position and orientation information of the right and left eyes acquired in the RAM 202 in step S903. The generated upper region of the first screen is displayed on the display screen of the display device 208.
The processing of step S905 is executed in parallel with step S904. In step S905, the lower region of the virtual space image (second screen) to be presented to the other eye is generated using the data group acquired in the RAM 202 in steps S901 and S902 and the position and orientation information of the right and left eyes acquired in the RAM 202 in step S903. The generated lower region of the second screen is displayed on the display screen of the display device 208.
In step S907, the lower region of the first screen is generated using the scene data updated in step S905. The generated lower region of the first screen is displayed on the display screen of the display device 208.
In step S908, which is executed in parallel with step S907, the upper region of the second screen is generated using the scene data updated in step S904. The generated upper region of the second screen is displayed on the display screen of the display device 208.
In step S909, it is determined whether to end the processing. To end the processing, for example, the user inputs an end instruction using the input device 207. Alternatively, a processing end condition may be set in advance. If it is determined to end the processing, the processing ends. If the processing is not to be ended, the process returns to step S903 to generate the right-eye virtual space image and left-eye virtual space image of the next frame, and the subsequent processing is executed for the next frame.
This embodiment assumes that the degree of parallelism is set to 2, and the image generation of the first screen and that of the second screen are performed simultaneously in two processes. However, the degree of parallelism need not always be 2. This embodiment can also handle a case in which the degree of parallelism is 3 or more. In this case, the screen regions are divided according to the degree of parallelism, and a scene tree for improving the search efficiency is generated in each process.
This embodiment assumes that image generation processing is performed by dividing each screen into upper and lower regions. However, the division is not limited to upper and lower regions; each screen may be divided into left and right regions. When the degree of parallelism increases, the screen region division method can be changed accordingly. A preferable screen division method can be selected according to the system to be built or the scene to be experienced. In any case, in this embodiment, image generation is performed a plurality of times to generate the virtual space image of one viewpoint and the virtual space image of the other viewpoint in parallel.
Variation
An example of adaptive screen division will be described here.
Figs. 10A and 10B are views for explaining adaptive screen division processing.
Referring to Fig. 10A, the image generation processing of the first screen 101 starts from a position 1001 at the upper left corner. The image generation processing of the second screen 102 starts from a position 1004 at the lower right corner. As shown in Fig. 10B, when the pixel positions currently being processed have reached the same position in the two screens, the remaining region of the first screen is processed using the scene data updated when generating the second screen. For the second screen, the scene data updated when generating the first screen is used. In Fig. 10B, reference numeral 1002 denotes the regions processed by the time the pixel positions currently being processed have reached the same position in the two screens.
As described above, according to this embodiment, the screens are divided according to the parallel processing capability of the image generation processing, and the image generation processing is executed separately for each region. This makes it possible to execute the image generation processing at high speed and with high efficiency.
Third embodiment
In the first and second embodiments, image generation is performed sequentially for each pixel. A major difference of the third embodiment from these two embodiments is that, in the first half of the image generation processing, image generation is performed only for partial regions to update the scene data, and then more detailed image generation processing is executed.
Fig. 11 is a view for explaining image generation processing according to this embodiment. A virtual space image (screen) 1104 containing the field of view from a viewpoint corresponds to the first or second screen. Specific regions 1101 are set in the image 1104. In Fig. 11, the specific regions 1101 are arranged discretely at equal intervals. In this embodiment, each specific region 1101 corresponds to one pixel of the image 1104. However, the size of the specific regions 1101 is not particularly limited. In addition, the specific regions need not necessarily have the illustrated layout.
In this embodiment, one virtual space image is generated in two steps. In the first-step generation (first generation), the image in the specific regions is generated. In the second-step generation (second generation), the image in the remaining region (the region other than the specific regions 1101) is generated by the same method as in the first embodiment, using the scene data updated in the first generation.
In this embodiment, as described above, the regions to which rays are projected are set discretely. This makes it possible to reconstruct the scene data of the whole scene at high speed without calculating all pixels on the screen.
For the region other than the specific regions 1101, image generation can be performed efficiently by executing detailed image generation processing again after the scene reconstruction. The method of this embodiment is particularly effective for image generation processing, such as ray tracing, in which calculation is performed for each pixel when determining the pixel values on the screen.
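A sketch of this two-pass scheme, with an assumed trace() callback that both shades a pixel and, in the first pass, reorders the scene data:
```python
def render_in_two_passes(screen, scene, viewpoint, trace, spacing=8):
    # first generation: sparse "specific region" pixels only; rebuilds the scene data
    for y in range(0, screen.height, spacing):
        for x in range(0, screen.width, spacing):
            screen.set_pixel(x, y, trace(viewpoint, x, y, scene, update_scene=True))
    # second generation: every remaining pixel, using the rebuilt scene data
    for y in range(screen.height):
        for x in range(screen.width):
            if x % spacing or y % spacing:       # skip pixels done in the first pass
                screen.set_pixel(x, y, trace(viewpoint, x, y, scene, update_scene=False))
    return screen
```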
The scene reconstruction in the specific regions may be applied to both the first screen 101 and the second screen 102. Alternatively, the result obtained by applying the scene reconstruction to one of the two screens may be applied to the image generation processing of both the first screen and the second screen.
Fourth embodiment
In the first, second, and third embodiments, processing for efficiently generating images by exchanging the scene data output in each image generation process has been described. The fourth embodiment is different from these embodiments in that, in image generation processing using rasterization, viewpoint coordinate transformation is applied to the depth values output in the respective image generation processes of the right and left eyes, and the transformed depth values are used.
The overview of image processing according to this embodiment has many points in common with the overview described in the second embodiment, and only the differences will be described.
In this embodiment, first, the brightness values of the non-overlapping partial regions, namely the upper region 701 and the lower region 704 obtained by the region division shown in Fig. 7, are calculated by an ordinary rendering method. The depth values (values in the Z-buffer) obtained in the processing of calculating the brightness values at this time and the material information of the target objects are stored in the RAM 202.
The depth values stored in the RAM 202 are transformed into depth values in the coordinate system of the other viewpoint. If the depth values obtained by this transformation and the material information of the target objects are available, the brightness values can be calculated.
Processing according to this embodiment, which is executed by the computer having the configuration shown in Fig. 2 to generate the virtual space images for the right and left eyes to be presented to the user, will be described next with reference to Fig. 12, which is a flowchart illustrating this processing. In Fig. 12, the same step numbers as in Fig. 9 denote steps in which the same processing is performed.
In steps S904 and S905, in addition to the processing described in the second embodiment, processing of obtaining the brightness value of each pixel in each divided region is performed.
In step S1201, the depth values (values in the Z-buffer) obtained in the processing of steps S904 and S905, the normal information of each vertex of the target objects, and the material information of the target objects are stored in the RAM 202.
In step S1202, viewpoint coordinate transformation is applied to the depth values stored in the RAM 202 in step S1201. This coordinate transformation is processing of transforming the depth values of the left eye into depth values of the right eye and transforming the depth values of the right eye into depth values of the left eye.
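One way to picture this transformation, assuming the parallel stereo geometry of Fig. 14 (focal length f, baseline d, principal point at the image center; these camera-model details and all names are assumptions, not the patent's implementation):
```python
import numpy as np

def transform_depth_left_to_right(depth_l, f, d):
    """Reproject a left-eye Z-buffer into the right-eye image to obtain right-eye depth values."""
    h, w = depth_l.shape
    cx, cy = w / 2.0, h / 2.0
    depth_r = np.full((h, w), np.inf)
    for v in range(h):
        for u in range(w):
            z = depth_l[v, u]
            if not np.isfinite(z):
                continue
            X = (u - cx) * z / f              # unproject to left-eye camera coordinates
            Y = (v - cy) * z / f
            Xr = X - d                        # right eye lies a baseline d to the right
            ur = int(round(f * Xr / z + cx))  # reproject onto the right-eye virtual screen
            vr = int(round(f * Y / z + cy))
            if 0 <= ur < w and 0 <= vr < h:
                depth_r[vr, ur] = min(depth_r[vr, ur], z)   # keep the nearest surface
    return depth_r
```
The right-to-left direction is obtained symmetrically by shifting in the opposite direction (X + d).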
Finally, in steps S907 and S908, in addition to the processing described in the second embodiment, the following processing is performed on the depth values transformed in step S1202: the brightness value of each pixel in each divided region is calculated based on the material information stored in step S1201.
Next, a brightness value determination method in the rendering processing of this embodiment will be described.
Fig. 13 is a view for explaining the sequence of image generation processing using ordinary rasterization.
In model transformation 1301, the information (three-dimensional coordinates) of the scene data stored in the external storage device 204 is loaded into the RAM 202 and transformed into world coordinates. That is, in the model transformation 1301, virtual objects are rotated and deformed in the three-dimensional space. This transformation also includes basic coordinate transformations in local space, such as enlargement/reduction and rotation. The data obtained when the model transformation 1301 is processed does not depend on the viewpoint position and orientation, and can therefore be shared by the right and left eyes.
In viewpoint transformation 1302, based on the position and orientation of the virtual camera, the positions and orientations of the virtual objects defined in world coordinates are transformed into positions and orientations in a local coordinate system based on the virtual camera. More specifically, a viewpoint transformation matrix is obtained in advance, and viewpoint transformation is performed on each vertex of the virtual objects using matrix operations. As a result, the original three-dimensional scene is transformed into a scene in the coordinate system observed from the virtual camera.
The data obtained when the viewpoint transformation 1302 is processed depends on each viewpoint, and therefore cannot be shared by the right and left eyes.
In projective transformation 1303, transformation from the three-dimensional coordinate system defined using the virtual camera into a two-dimensional coordinate system is performed. By the projective transformation 1303, the virtual space is mapped as two-dimensional information onto the plane (virtual screen) observed from the virtual camera.
In rasterization 1304, after clipping processing and hidden surface removal are executed, the brightness value of each pixel of the two-dimensional image of the scene projected onto the virtual screen is calculated.
In the clipping processing, the polygons of virtual objects outside the field of view are discarded, and only the polygons within the field of view are clipped. In the hidden surface removal, polygons that do not face the viewpoint, that is, polygons that are theoretically invisible from the viewpoint, are discarded. At this time, the polygons visible from the viewpoint are written into the Z-buffer in descending order of distance from the viewpoint. By sequentially overwriting these values, the depth value corresponding to each pixel is calculated, and only the polygons visible from the viewpoint are selected.
In the rasterization 1304, after the hidden surface removal, the normal information of each vertex and the material information of the virtual objects are extracted from the scene data for shading processing. Texture information is also extracted as needed. If the right and left eyes see the same object, the material information of the object can be shared. Reflected light is calculated based on the extracted data and the position and orientation information of the virtual viewpoint, and the brightness value of each pixel on the virtual screen is then calculated. However, the result of the shading processing calculated from the material information of an object changes according to the position and orientation of the viewpoint, and therefore cannot be shared.
In display 1305, the pixels whose coloring has been completed are displayed on a monitor or another display device.
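The per-vertex portion of this pipeline can be summarized in the following sketch (4x4 homogeneous matrices assumed; this is illustrative, not the patent's implementation):
```python
import numpy as np

def transform_vertex(v_local, model_m, view_m, proj_m):
    v = np.append(np.asarray(v_local, dtype=float), 1.0)   # homogeneous coordinates
    v_world = model_m @ v         # model transformation 1301 (shareable by both eyes)
    v_eye   = view_m  @ v_world   # viewpoint transformation 1302 (per eye)
    v_clip  = proj_m  @ v_eye     # projective transformation 1303
    return v_clip[:3] / v_clip[3]

def z_test(zbuffer, x, y, z):
    """Hidden surface removal: keep a fragment only if it is nearer than the stored depth."""
    if z < zbuffer[y, x]:
        zbuffer[y, x] = z
        return True
    return False
```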
When a common virtual object is observed from different viewpoints in general rasterization processing, the data obtained by the model transformation 1301 and the material information of the object can be shared without any transformation processing. In addition, the depth values obtained in the processing of the rasterization 1304 can also be used through viewpoint coordinate transformation. However, depending on the positional relationship between the viewpoints and the object, blocking may occur and the depth values may not be calculated accurately. In this case, each brightness value is determined by referring to the corresponding pixel value of the preceding frame.
Next, a method of calculating the brightness value of each pixel from the depth values obtained using the viewpoint coordinate transformation will be described.
Fig. 14 is a view for explaining a three-dimensional coordinate estimation method using stereoscopic vision.
As shown in Fig. 14, an xyz absolute coordinate system 1401 is defined in the three-dimensional space. The left and right camera lenses are arranged such that the absolute coordinates of their centers are set to O_L = (0, 0, 0) and O_R = (d, 0, 0), separated by a distance d. The focal length of the lenses is assumed to be f, that is, the distance from each optical center to the corresponding left or right image plane is f. A virtual object 1402 is observed from the virtual viewpoints set in this way. The right-eye screen and the left-eye screen onto which the observed images are projected are defined as virtual screens 1403R and 1403L, respectively.
When the right eye observes a point P on the virtual object 1402, the point P is projected to a point P_R(x_R, y_R) on the virtual screen 1403R. When the left eye observes the point P on the virtual object 1402, the point P is projected to a point P_L(x_L, y_L) on the virtual screen 1403L. The coordinates of the points P_L and P_R are relative coordinates based on origins set at the centers of the virtual screens 1403L and 1403R, respectively.
At this time, by triangulation using the triangle formed by the measurement point and the centers of the two cameras, the point P(x_p, y_p, z_p) on the surface of the target object can be obtained.
If the points P_L and P_R and the various parameters are known, the three-dimensional coordinates of the object can be calculated. This is a general depth estimation method using stereoscopic vision in computer vision.
In this embodiment, when a pixel value on the screen of one of the right and left eyes and the three-dimensional coordinates of the object corresponding to that pixel are known, the pixel value of the corresponding pixel on the screen of the other eye is determined using the depth estimation method based on stereoscopic vision. For example, when the point P_R(x_R, y_R) and the point P(x_p, y_p, z_p) on the surface of the target object are given as inputs, the point P_L(x_L, y_L) is calculated.
Therefore, if the viewpoint positions and orientations are known and the point P on the surface of the target object can be calculated using the depth values obtained by the viewpoint coordinate transformation, the corresponding point on the virtual screen can be calculated.
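Under the parallel-camera geometry of Fig. 14, the projection relations take roughly the following form (sign conventions are assumed, since the patent does not state them explicitly):
```latex
% Parallel-camera projection relations implied by Fig. 14 (O_L at the origin,
% O_R at (d, 0, 0), focal length f); sign conventions are assumed.
x_L = \frac{f\,x_p}{z_p}, \qquad y_L = \frac{f\,y_p}{z_p}, \qquad
x_R = \frac{f\,(x_p - d)}{z_p}, \qquad y_R = \frac{f\,y_p}{z_p}

z_p = \frac{f\,d}{x_L - x_R} \quad \text{(depth from the disparity)}, \qquad
x_L = x_R + \frac{f\,d}{z_p}, \quad y_L = y_R \quad \text{(corresponding point on the other screen)}
```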
In steps S907 and S908, in addition to the processing described in the second embodiment, the following processing is also performed: the material information of the object stored in the RAM 202 is loaded in correspondence with the point on the virtual screen obtained by the calculation. Shading processing and texture processing are executed by the processing of the rasterization 1304, thereby calculating each brightness value. This processing is repeated until all pixels have been calculated. The image generation method using rasterization is a known technique that can be implemented using hardware for executing general graphics processing.
In the above processing, the information obtained by the processing for calculating the brightness values of one divided region on one screen is shared, thereby calculating the brightness values of the corresponding region on the other screen.
Fifth embodiment
In first~the 4th embodiment, illustrated that being used for handling the information that is obtained by the image generation of exchange when generating stereo-picture efficiently generates treatment of picture.The 5th embodiment suppose use two or many camera arrangement carry out image and generate, it is with the different of the foregoing description: picture area is divided into two or more zones, and each area applications image is generated processing.
Figure 15 is the figure that is used to explain according to the overview of the Flame Image Process of present embodiment.
The drawing results of three viewpoints are displayed on a first screen 1501, a second screen 1502, and a third screen 1503, respectively. In the present embodiment, each screen is divided into three regions, and drawing is performed in each region.
Regions 1504 to 1506 are the drawing regions processed first in the respective screens. In the present embodiment, regions 1504 to 1506 are set so as not to overlap one another. When the drawing of regions 1504 to 1506 is complete, calculation of the uncalculated regions 1507 to 1509 starts by referring to the calculation results of regions 1504 to 1506. When the calculation of regions 1507 to 1509 is complete, the remaining regions are calculated.
Next, processing according to the present embodiment, which is executed by a computer having the arrangement shown in Fig. 2 to generate virtual space images to be presented to three viewpoints, will be described with reference to Fig. 16, which is a flowchart illustrating this processing. In Fig. 16, the same step numbers as in Fig. 9 denote steps of executing the same processes.
In step S902, each of the three screens (the first screen, the second screen, and the third screen) is divided into three segments (upper, middle, and lower), corresponding to the number of cameras. The form of division is not particularly limited. Each screen may be divided vertically into equal parts, or the divided regions may be changed in accordance with the complexity of the scene.
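As a sketch only (the names are illustrative and not part of the original disclosure), an equal vertical split of a screen of `height` pixel rows into the three segments used below might look as follows; the boundaries could instead be chosen from the complexity of the scene.

```python
def split_into_segments(height):
    """Return the row ranges of the upper, middle, and lower segments."""
    third = height // 3
    return {
        "upper":  range(0, third),
        "middle": range(third, 2 * third),
        "lower":  range(2 * third, height),
    }
```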
In step S1601a, drawing processing of the upper segment of the first screen is performed. In step S1601b, drawing processing of the middle segment of the second screen is performed. In step S1601c, drawing processing of the lower segment of the third screen is performed. Each drawing process in steps S1601a, S1601b, and S1601c is executed by the same method as in the other embodiments.
In step S1602, the drawing results of steps S1601a, S1601b, and S1601c are stored in the RAM 202.
In step S1603a, the brightness values in the middle-segment region of the first screen are determined by referring to the drawing result of the upper segment of the first screen stored in the RAM 202, and drawing processing of that middle segment is performed. In step S1603b, the brightness values in the lower-segment region of the second screen are determined by referring to the drawing result of the middle segment of the second screen stored in the RAM 202, and drawing processing of that lower segment is performed. In step S1603c, the brightness values in the upper-segment region of the third screen are determined by referring to the drawing result of the lower segment of the third screen stored in the RAM 202, and drawing processing of that upper segment is performed. Note that the regions drawn in steps S1603a, S1603b, and S1603c may be arbitrary regions, and the region referred to when calculating a given region need not be limited to the one described above.
In step S1604a, the brightness values in the lower-segment region of the first screen are determined by referring to the drawing result of the middle segment of the first screen, and drawing processing of that lower segment is performed. In step S1604b, the brightness values in the upper-segment region of the second screen are determined by referring to the drawing result of the lower segment of the second screen, and drawing processing of that upper segment is performed. In step S1604c, the brightness values in the middle-segment region of the third screen are determined by referring to the drawing result of the upper segment of the third screen, and drawing processing of that middle segment is performed.
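The schedule of steps S1601a to S1601c, S1603a to S1603c, and S1604a to S1604c can be summarized by the following sketch (illustrative only; `draw_segment` is an assumed drawing routine, and, as noted above, the referenced result may be chosen differently).

```python
schedule = [
    # pass 1 (S1601a-c): (screen, segment to draw, referenced result)
    [("screen1", "upper",  None),
     ("screen2", "middle", None),
     ("screen3", "lower",  None)],
    # pass 2 (S1603a-c): draw by referring to an already stored result
    [("screen1", "middle", ("screen1", "upper")),
     ("screen2", "lower",  ("screen2", "middle")),
     ("screen3", "upper",  ("screen3", "lower"))],
    # pass 3 (S1604a-c): draw the remaining segments
    [("screen1", "lower",  ("screen1", "middle")),
     ("screen2", "upper",  ("screen2", "lower")),
     ("screen3", "middle", ("screen3", "upper"))],
]

def run(schedule, draw_segment):
    """Execute the passes in order, handing each step the referenced result (if any)."""
    done = {}
    for pass_steps in schedule:
        for screen, segment, ref in pass_steps:
            done[(screen, segment)] = draw_segment(screen, segment, done.get(ref))
    return done
```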
In the present embodiment, the calculation results of steps S1603a, S1603b, and S1603c are not stored in the RAM 202. However, depending on the constructed system, the pieces of information stored in step S1602 may be overwritten with the pieces of information calculated in steps S1603a, S1603b, and S1603c.
Assume that the viewpoints are arranged in the horizontal direction, and that the screens corresponding to the viewpoints, listed in the order in which the viewpoints are arranged, are the first screen, the second screen, and the third screen. In this case, the image of the first screen can be generated more efficiently by using the drawing result of the second screen, whose viewpoint is nearer, than by using the result of the third screen. The information to be referred to can therefore be selected appropriately in accordance with the constructed system.
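A minimal sketch of such a selection, assuming the viewpoints lie on a horizontal line and each already drawn screen is keyed by the x coordinate of its viewpoint (names are illustrative assumptions), is:

```python
def pick_reference(target_x, drawn_viewpoints):
    """Return the name of the drawn screen whose viewpoint is closest to target_x.
    drawn_viewpoints: dict mapping screen name -> viewpoint x coordinate."""
    return min(drawn_viewpoints, key=lambda s: abs(drawn_viewpoints[s] - target_x))
```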
As described above, even when a common virtual object is observed from two or more virtual viewpoints, drawing can be executed at high speed and with high efficiency by dividing each screen into an arbitrary number of regions and sharing the calculation results of the individual regions.
Sixth Embodiment
In region division, efficiency is maximized when the regions do not overlap between the first screen and the second screen. However, if the regions do not overlap at all, the edge at the boundary portion becomes conspicuous. The edge can be made inconspicuous by providing an overlapping region of several pixels near each boundary edge, and by compositing and smoothing the images obtained by the calculations. The size of the overlapping region used in the region division can be determined by an appropriate method in accordance with the constructed system.
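A minimal sketch of such smoothing, assuming a one-dimensional cross-fade over an overlap band of a few pixels (the names and the linear weighting are illustrative assumptions, not the disclosed method), is:

```python
def blend_band(row_a, row_b, overlap):
    """Blend two adjacently drawn brightness rows that share `overlap` pixels:
    the last `overlap` values of row_a coincide with the first `overlap` of row_b."""
    out = row_a[:-overlap]
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)  # weight grows toward row_b across the band
        out.append((1.0 - w) * row_a[len(row_a) - overlap + i] + w * row_b[i])
    out.extend(row_b[overlap:])
    return out
```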
Other Embodiments
The object of the present invention can also be achieved in the following way. A recording medium (or storage medium) on which software program code for implementing the functions of the above-described embodiments is recorded is supplied to a system or apparatus. This storage medium is, of course, a computer-readable storage medium. A computer (or CPU or MPU) of the system or apparatus reads out and executes the program code stored in the recording medium. In this case, the program code itself read out from the recording medium implements the functions of the above-described embodiments, and the recording medium on which the program code is recorded constitutes the present invention.
The computer executes the read-out program code, and an operating system (OS) running on the computer performs part or all of the actual processing based on the instructions of the program code, thereby implementing the functions of the above-described embodiments.
Assume that the program code read out from the recording medium is written into a memory of a function expansion card inserted into the computer or of a function expansion unit connected to the computer. A CPU of the expansion card or function expansion unit performs part or all of the actual processing based on the instructions of the program code, thereby implementing the functions of the above-described embodiments.
When the present invention is applied to the recording medium, the recording medium stores program code corresponding to the above-described flowcharts.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (10)

1. An image processing apparatus for drawing a common object from a plurality of viewpoints, the image processing apparatus comprising:
a first unit which draws a first image; and
a second unit which draws a second image,
wherein each of the first unit and the second unit draws a not-yet-drawn region by referring to information obtained by the drawing processing of the other unit.
2. The image processing apparatus according to claim 1, characterized by further comprising a unit which manages, as scene data, data of each virtual object included in a virtual space,
wherein each of the first unit and the second unit is a drawing unit which searches the scene data, in accordance with a predetermined order, for the data of virtual objects included in a field of view from a set viewpoint, and draws an image of the virtual space viewed from the set viewpoint based on the found data; and
the first unit updates the scene data by changing the management order of the data of the virtual objects in the scene data based on the result of the search executed when generating the image of the virtual space viewed from a first viewpoint, and sets the updated scene data as the scene data to be used by the second unit to generate an image of the virtual space viewed from a second viewpoint.
3. The image processing apparatus according to claim 2, characterized in that the first unit updates the scene data so as to place, at the top of the predetermined order, a virtual object found in the search performed when drawing the image of the virtual space viewed from the first viewpoint.
4. The image processing apparatus according to claim 2, characterized in that the first unit updates the scene data so as to arrange, from the top of the predetermined order in descending order of search rate, the virtual objects found in the search performed when drawing the image of the virtual space viewed from the first viewpoint.
5. The image processing apparatus according to claim 1, characterized by further comprising a unit which manages, as scene data, data of each virtual object included in a virtual space,
wherein the first unit sets a plurality of regions in a field of view from a first viewpoint, searches the scene data, in accordance with a predetermined order, for the data of virtual objects included in each set region, draws an image of the virtual space viewed from the first viewpoint based on the found data, updates the scene data by changing the management order of the scene data based on the result of the search executed during the drawing, and generates, using the updated scene data, an image of the virtual space of a region other than the plurality of regions in the field of view.
6. The image processing apparatus according to claim 1, characterized in that each of the first unit and the second unit performs drawing by transforming a depth value obtained by the drawing processing into a coordinate system viewed from the other viewpoint.
7. The image processing apparatus according to claim 2, characterized in that the first viewpoint is different from the second viewpoint.
8. The image processing apparatus according to claim 2, characterized in that the first viewpoint corresponds to one eye of a viewer, and the second viewpoint corresponds to the other eye of the viewer.
9. The image processing apparatus according to claim 1, characterized in that each of the first unit and the second unit divides the drawing area such that the drawn regions do not overlap.
10. An image processing method executed by an image processing apparatus for drawing a common object from a plurality of viewpoints, the image processing method comprising:
a first step of drawing a first image; and
a second step of drawing a second image,
wherein in each of the first step and the second step, a not-yet-drawn region is drawn by referring to information obtained by the drawing processing in the other step.
CN200810171559.5A 2007-10-19 2008-10-17 Image processing apparatus and image processing method Expired - Fee Related CN101414383B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2007-273088 2007-10-19
JP2007273088 2007-10-19
JP2007273088 2007-10-19
JP2008185295A JP5055214B2 (en) 2007-10-19 2008-07-16 Image processing apparatus and image processing method
JP2008-185295 2008-07-16
JP2008185295 2008-07-16

Publications (2)

Publication Number Publication Date
CN101414383A true CN101414383A (en) 2009-04-22
CN101414383B CN101414383B (en) 2014-06-18

Family

ID=40594908

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200810171559.5A Expired - Fee Related CN101414383B (en) 2007-10-19 2008-10-17 Image processing apparatus and image processing method

Country Status (2)

Country Link
JP (1) JP5055214B2 (en)
CN (1) CN101414383B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102971770A (en) * 2011-03-31 2013-03-13 松下电器产业株式会社 Image rendering device for rendering entire circumferential three-dimensional image, image rendering method, and image rendering program
CN105657408A (en) * 2015-12-31 2016-06-08 北京小鸟看看科技有限公司 Method for implementing virtual reality scene and virtual reality apparatus
CN107576352A (en) * 2017-07-18 2018-01-12 朱跃龙 A kind of detection device, method for establishing model and computer-processing equipment
CN108292490A (en) * 2015-12-02 2018-07-17 索尼互动娱乐股份有限公司 Display control unit and display control method
CN110276260A (en) * 2019-05-22 2019-09-24 杭州电子科技大学 A kind of commodity detection method based on depth camera
CN110663256A (en) * 2017-05-31 2020-01-07 维里逊专利及许可公司 Method and system for rendering frames of a virtual scene from different vantage points based on a virtual entity description frame of the virtual scene
CN113938580A (en) * 2016-05-25 2022-01-14 佳能株式会社 Information processing apparatus, control method thereof, and computer-readable storage medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5565126B2 (en) * 2010-06-17 2014-08-06 大日本印刷株式会社 Three-dimensional printed material production support device, plug-in program, three-dimensional printed material production method, and three-dimensional printed material
EP2461587A1 (en) * 2010-12-01 2012-06-06 Alcatel Lucent Method and devices for transmitting 3D video information from a server to a client
JP6915237B2 (en) * 2016-07-05 2021-08-04 富士通株式会社 Information processing device, simulator result display method, and simulator result display program
WO2022195818A1 (en) * 2021-03-18 2022-09-22 株式会社ソニー・インタラクティブエンタテインメント Image generation system and image generation method
CN115665541B (en) * 2022-10-11 2023-06-23 深圳市田仓文化传播有限公司 Multifunctional digital film studio system based on intelligent control

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070057944A1 (en) * 2003-09-17 2007-03-15 Koninklijke Philips Electronics N.V. System and method for rendering 3-d images on a 3-d image display screen
JP2007264966A (en) * 2006-03-28 2007-10-11 Seiko Epson Corp Stereoscopic image generation apparatus, stereoscopic image generation method, stereoscopic image generation program, recording medium, and stereoscopic image printer

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0528280A (en) * 1991-07-22 1993-02-05 Nippon Telegr & Teleph Corp <Ntt> Light beam tracking method
JPH10232953A (en) * 1997-02-20 1998-09-02 Mitsubishi Electric Corp Stereoscopic image generator
JP4039676B2 (en) * 2004-03-31 2008-01-30 株式会社コナミデジタルエンタテインメント Image processing apparatus, image processing method, and program
EP1643758B1 (en) * 2004-09-30 2013-06-19 Canon Kabushiki Kaisha Image-capturing device, image-processing device, method for controlling image-capturing device, and associated storage medium
JP2006163547A (en) * 2004-12-03 2006-06-22 Canon Inc Program, system and apparatus for solid image generation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070057944A1 (en) * 2003-09-17 2007-03-15 Koninklijke Philips Electronics N.V. System and method for rendering 3-d images on a 3-d image display screen
JP2007264966A (en) * 2006-03-28 2007-10-11 Seiko Epson Corp Stereoscopic image generation apparatus, stereoscopic image generation method, stereoscopic image generation program, recording medium, and stereoscopic image printer

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102971770A (en) * 2011-03-31 2013-03-13 松下电器产业株式会社 Image rendering device for rendering entire circumferential three-dimensional image, image rendering method, and image rendering program
CN108292490A (en) * 2015-12-02 2018-07-17 索尼互动娱乐股份有限公司 Display control unit and display control method
US11768383B2 (en) 2015-12-02 2023-09-26 Sony Interactive Entertainment Inc. Display control apparatus and display control method
CN105657408A (en) * 2015-12-31 2016-06-08 北京小鸟看看科技有限公司 Method for implementing virtual reality scene and virtual reality apparatus
CN105657408B (en) * 2015-12-31 2018-11-30 北京小鸟看看科技有限公司 The implementation method and virtual reality device of virtual reality scenario
US10277882B2 (en) 2015-12-31 2019-04-30 Beijing Pico Technology Co., Ltd. Virtual reality scene implementation method and a virtual reality apparatus
CN113938580A (en) * 2016-05-25 2022-01-14 佳能株式会社 Information processing apparatus, control method thereof, and computer-readable storage medium
CN113938579A (en) * 2016-05-25 2022-01-14 佳能株式会社 Information processing apparatus, control method thereof, and computer-readable storage medium
CN110663256A (en) * 2017-05-31 2020-01-07 维里逊专利及许可公司 Method and system for rendering frames of a virtual scene from different vantage points based on a virtual entity description frame of the virtual scene
CN110663256B (en) * 2017-05-31 2021-12-14 维里逊专利及许可公司 Method and system for rendering frames of a virtual scene from different vantage points based on a virtual entity description frame of the virtual scene
CN107576352A (en) * 2017-07-18 2018-01-12 朱跃龙 A kind of detection device, method for establishing model and computer-processing equipment
CN110276260A (en) * 2019-05-22 2019-09-24 杭州电子科技大学 A kind of commodity detection method based on depth camera

Also Published As

Publication number Publication date
JP2009116856A (en) 2009-05-28
CN101414383B (en) 2014-06-18
JP5055214B2 (en) 2012-10-24

Similar Documents

Publication Publication Date Title
CN101414383B (en) Image processing apparatus and image processing method
US11263820B2 (en) Multi-stage block mesh simplification
EP2051533B1 (en) 3D image rendering apparatus and method
CN111701238B (en) Virtual picture volume display method, device, equipment and storage medium
EP3321889A1 (en) Device and method for generating and displaying 3d map
JP5592011B2 (en) Multi-scale 3D orientation
US11328481B2 (en) Multi-resolution voxel meshing
KR20140082610A (en) Method and apaaratus for augmented exhibition contents in portable terminal
US10325403B2 (en) Image based rendering techniques for virtual reality
Vaaraniemi et al. Temporally coherent real-time labeling of dynamic scenes
Marton et al. Natural exploration of 3D massive models on large-scale light field displays using the FOX proximal navigation technique
CN111161398A (en) Image generation method, device, equipment and storage medium
CN116097316A (en) Object recognition neural network for modeless central prediction
Trapp et al. Strategies for visualising 3D points-of-interest on mobile devices
JP2006163547A (en) Program, system and apparatus for solid image generation
KR100693134B1 (en) Three dimensional image processing
Li et al. A fast fusion method for multi-videos with three-dimensional GIS scenes
KR102388715B1 (en) Apparatus for feeling to remodeling historic cites
Buhr et al. Real-Time Aspects of VR Systems
Xing et al. MR environments constructed for a large indoor physical space
EP3776488B1 (en) Using a low-detail representation of surfaces to influence a high-detail representation of the surfaces
Zheng et al. Rendering and Optimization Algorithm of Digital City’s 3D Artistic Landscape Based on Virtual Reality
JP3559336B2 (en) CG data creation device and CG animation editing device
Hamadouche Augmented reality X-ray vision on optical see-through head mounted displays
JPH0773342A (en) Image generator

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140618

Termination date: 20181017

CF01 Termination of patent right due to non-payment of annual fee