CN105488845A - Method for generating three-dimensional image and electronic device - Google Patents


Info

Publication number
CN105488845A
CN105488845A (application CN201410474170.3A; granted as CN105488845B)
Authority
CN
China
Prior art keywords
image
profile
focal length
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410474170.3A
Other languages
Chinese (zh)
Other versions
CN105488845B (en)
Inventor
丁奎评
杨朝光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Acer Inc
Original Assignee
Acer Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Acer Inc filed Critical Acer Inc
Priority to CN201410474170.3A priority Critical patent/CN105488845B/en
Publication of CN105488845A publication Critical patent/CN105488845A/en
Application granted granted Critical
Publication of CN105488845B publication Critical patent/CN105488845B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a method for generating a three-dimensional image and an electronic device. The method includes the steps of: obtaining a plurality of images corresponding to a plurality of focal lengths among which a plurality of focal length gaps exist; selecting a reference image from the plurality of images, and using the reference image as a three-dimensional reference plane in a three-dimensional space; performing edge detection on the images according to a sharpness reference value, so as to find out at least one profile corresponding to the sharpness reference value in the images; in the three-dimensional space, arranging the images based on the focal length gaps and the three-dimensional reference plane; and executing interpolation operation between the at least one profile of the images to generate a three-dimensional image.

Description

Method for generating a three-dimensional image and electronic device thereof
Technical field
The invention relates to a method and an electronic device for generating images, and more particularly to a method and an electronic device for generating a three-dimensional image.
Background art
In modern life, smart devices with camera functions have become an indispensable part of daily life. To meet consumers' ever-growing demand for photography, many manufacturers are devoted to developing photo-taking and image-processing applications, with features such as skin beautification, special effects, stickers, scene conversion, and conversion of two-dimensional images into three-dimensional images.
In existing 2D-to-3D conversion functions, two photos generally must be taken simultaneously through two lenses mounted on the smart device, and the three-dimensional image is then produced from those two photos. This mechanism, however, cannot be applied to products that have only a single lens.
An existing way to let a single-lens product produce a three-dimensional image is to have the product take multiple photos from different viewing angles by translation, simulate the parallax between the eyes from the horizontal distance differences between the photos, and generate the three-dimensional image accordingly. This mode of operation, however, is inconvenient for the user.
Summary of the invention
In view of this, the invention proposes a method and an electronic device for generating a three-dimensional image based on multiple photos corresponding to different focal lengths, so that a user can easily obtain a three-dimensional image with a product that has only a single lens.
The invention provides a method for generating a three-dimensional image, suitable for an electronic device. The method includes: obtaining a plurality of images corresponding to a plurality of focal lengths, wherein a plurality of focal length gaps exist between the focal lengths; selecting a reference image from the images and using the reference image as a three-dimensional reference plane in a three-dimensional space; performing edge detection on each image according to a sharpness reference value to find at least one contour corresponding to the sharpness reference value in each image; arranging each image in the three-dimensional space based on the focal length gaps and the three-dimensional reference plane; and performing an interpolation operation between the contours of the images to generate the three-dimensional image.
The invention also provides an electronic device for generating a three-dimensional image. The electronic device includes an image capture unit, a storage unit, and a processing unit. The storage unit stores a plurality of modules. The processing unit is connected to the image capture unit and the storage unit, and accesses and executes the modules. The modules include an acquisition module, a selection module, a detection module, an arrangement module, and a generation module. The acquisition module controls the image capture unit to obtain a plurality of images corresponding to a plurality of focal lengths, wherein a plurality of focal length gaps exist between the focal lengths. The selection module selects a reference image from the images and uses the reference image as a three-dimensional reference plane in a three-dimensional space. The detection module performs edge detection on each image according to a sharpness reference value to find at least one contour corresponding to the sharpness reference value in each image. The arrangement module arranges each image in the three-dimensional space based on the focal length gaps and the three-dimensional reference plane. The generation module performs an interpolation operation between the contours of the images to generate the three-dimensional image.
Based on the above, after obtaining multiple images corresponding to different focal lengths, the method and electronic device proposed by embodiments of the invention arrange those images appropriately in a three-dimensional space according to their focal lengths. The electronic device then performs edge detection on each image to find the contours in each image, and performs interpolation operations between the contours to generate a three-dimensional image corresponding to the obtained images.
To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the invention;
Fig. 2 is a flowchart of a method for generating a three-dimensional image according to an embodiment of the invention;
Fig. 3A to Fig. 3F are schematic diagrams of generating a three-dimensional image according to an embodiment of the invention.
Description of reference numerals:
100: electronic device;
110: image capture unit;
120: storage unit;
121: acquisition module;
122: selection module;
123: detection module;
124: arrangement module;
125: generation module;
130: processing unit;
140: gyroscope;
310: reference contour;
320: first contour;
330: second contour;
D1: first focal length gap;
D2: second focal length gap;
DI': specific focal length gap;
I1: first image;
I2: second image;
S210~S250: steps;
RI: reference image.
Embodiments
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the invention. In this embodiment, the electronic device 100 may be a smartphone, a tablet computer, a personal digital assistant, a notebook PC, or a similar device. The electronic device 100 includes an image capture unit 110, a storage unit 120, and a processing unit 130.
The image capture unit 110 may be any camera having a charge-coupled device (CCD) lens, a complementary metal-oxide-semiconductor (CMOS) lens, or an infrared lens, or any image acquisition device capable of obtaining depth information, such as a depth camera or a stereo camera. The storage unit 120 is, for example, a memory, a hard disk, or any other element that can store data, and can record a plurality of modules.
The processing unit 130 is coupled to the image capture unit 110 and the storage unit 120. The processing unit 130 may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a microprocessor, one or more microprocessors combined with a digital signal processor core, a controller, a microcontroller, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), any other kind of integrated circuit, a state machine, an Advanced RISC Machine (ARM) based processor, or the like.
In this embodiment, the processing unit 130 can access the acquisition module 121, the selection module 122, the detection module 123, the arrangement module 124, and the generation module 125 stored in the storage unit 120 to perform the steps of the method for generating a three-dimensional image proposed by the invention.
Fig. 2 is a flowchart of a method for generating a three-dimensional image according to an embodiment of the invention. Fig. 3A to Fig. 3F are schematic diagrams of generating a three-dimensional image according to an embodiment of the invention. The method of this embodiment can be performed by the electronic device 100 of Fig. 1, and the detailed steps of the method are described below with reference to the elements of Fig. 1.
In step S210, the acquisition module 121 controls the image capture unit 110 to obtain multiple images corresponding to multiple focal lengths. Specifically, the image capture unit 110 can capture multiple images of the same scene at different focal lengths. To ensure the real-time performance of the method, the time the image capture unit 110 spends obtaining the images can be adjusted appropriately by the designer, for example, obtaining 5 images within one second. It should be understood that the higher the capture speed of the electronic device 100, the more images the image capture unit 110 can obtain; that is, the number of images is proportional to the capture speed of the electronic device 100, although embodiments of the invention are not limited thereto.
In step S220, the selection module 122 selects a reference image from the images and uses the reference image as a three-dimensional reference plane in a three-dimensional space. The reference image is, for example, the image having the largest focal length among the focal lengths. In other words, the selection module 122 may adopt the sharpest image as the reference image (because its focal length is the largest), although embodiments of the invention are not limited thereto. The three-dimensional space can be characterized by an X-axis, a Y-axis, and a Z-axis, and the selection module 122 may, for example, attach the reference image to the X-Y plane of the three-dimensional space to define the three-dimensional reference plane.
Fig. 3A is, for example, a schematic diagram after the selection module 122 attaches the reference image RI to the X-Y plane. In other embodiments, the designer may also attach the reference image to any plane in the three-dimensional space to define the three-dimensional reference plane.
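Step S220 can be sketched in a few lines. This is an illustrative sketch only, not the patent's implementation: a focal stack is assumed to be a list of `(focal_length, image)` pairs, and the names `focal_stack` and `select_reference` are invented for the example.

```python
# Hypothetical sketch of step S220: pick the image with the largest focal
# length as the reference image; its plane becomes the X-Y reference plane
# (Z = 0) of the three-dimensional space.

def select_reference(focal_stack):
    """focal_stack: list of (focal_length, image) pairs.
    Returns the (focal_length, image) pair with the largest focal length."""
    return max(focal_stack, key=lambda pair: pair[0])

# Toy usage: images are stood in for by plain strings.
focal_stack = [(35, "I2"), (50, "I1"), (85, "RI")]
ref_focal, ref_image = select_reference(focal_stack)
print(ref_focal, ref_image)  # 85 RI
```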
In step S230, the detection module 123 performs edge detection on each image according to a sharpness reference value to find at least one contour corresponding to the sharpness reference value in each image. The sharpness reference value is, for example, a value between 0 and 1 (such as 0.3), which the designer can decide according to demand. After the sharpness reference value is decided, the detection module 123 finds the corresponding contours in each image accordingly.
Suppose the images include a first image, and the first image includes multiple pixels. The pixels include a first pixel and a second pixel adjacent to the first pixel, and the first pixel and the second pixel have a first gray value and a second gray value, respectively. For convenience of explanation, in the following the first image is assumed to be the image with the first focal length, where the first focal length is second only to the largest focal length of the reference image, and a first focal length gap exists between the first focal length and the largest focal length.
When the detection module 123 finds the contour corresponding to the sharpness reference value in the first image, for each pair of adjacent first and second pixels, the detection module 123 calculates the gap between the first gray value and the second gray value. When this gap is greater than a predetermined threshold (for example, 30%), the detection module 123 defines one of the first pixel and the second pixel as a contour pixel of the first image. That is, when the detection module 123 detects a significant change in gray value between adjacent pixels, it can judge that an edge exists there and define one of the two pixels (for example, the pixel with the higher gray value) as a contour pixel. Afterwards, the detection module 123 finds all contour pixels in the first image and accordingly defines one or more first contours in the first image. For example, the detection module 123 may connect adjacent or nearby contour pixels into the contours, although embodiments of the invention are not limited thereto.
For images other than the first image, those skilled in the art should be able to find the contours corresponding to the sharpness reference value in each of the other images according to the above teaching, which is not repeated here. Referring to Fig. 3B, for convenience of explanation, the contour found in the reference image RI is characterized as the reference contour 310.
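The pairwise gray-value comparison described above can be sketched as follows. This is a minimal illustration under stated assumptions: gray values are taken as fractions in [0, 1] so the patent's 30% example becomes a threshold of 0.30, each pixel is compared with its right and lower neighbours, and the function name `contour_pixels` is invented for the example.

```python
# Hedged sketch of step S230: mark a pixel as a contour pixel when the gray
# value gap to an adjacent pixel exceeds a threshold; keep the pixel with
# the higher gray value, as in the patent's example.

def contour_pixels(gray, threshold=0.30):
    """gray: 2-D list of gray values in [0, 1]. Returns a set of (row, col)."""
    rows, cols = len(gray), len(gray[0])
    contour = set()
    for r in range(rows):
        for c in range(cols):
            # Compare with the neighbour to the right and the one below,
            # so every adjacent pair is examined exactly once.
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    a, b = gray[r][c], gray[rr][cc]
                    if abs(a - b) > threshold:
                        # Keep the pixel with the higher gray value.
                        contour.add((r, c) if a >= b else (rr, cc))
    return contour

# Toy 3x3 image: a bright column next to a dark one yields a vertical contour.
img = [[0.9, 0.1, 0.1],
       [0.9, 0.1, 0.1],
       [0.9, 0.1, 0.1]]
print(contour_pixels(img))  # the bright left column: {(0, 0), (1, 0), (2, 0)}
```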
Afterwards, in step S240, the arrangement module 124 arranges each image in the three-dimensional space based on the focal length gaps and the three-dimensional reference plane. Specifically, as shown in Fig. 3C, the arrangement module 124 can place the first image I1 parallel to the reference image RI at a first position separated from the reference image RI by the first focal length gap D1, and the arranged first image I1 is aligned with the reference image RI. It should be understood that the first image I1 also includes the first contour 320 found by the detection module 123.
Suppose the images also include a second image corresponding to a second focal length (smaller than the first focal length), and a second focal length gap exists between the second focal length and the first focal length; the arrangement module 124 can then arrange the second image in the three-dimensional space based on the mechanism described above.
Referring to Fig. 3D, the arrangement module 124 can place the second image I2 parallel to the first image I1 at a second position separated from the first image I1 by the second focal length gap D2, and the arranged second image I2 is aligned with the first image I1. As shown in Fig. 3D, the first image I1 and the second image I2 are located on the same side of the reference image RI, and the specific focal length gap DI' between the second image I2 and the reference image RI is the sum of the first focal length gap D1 and the second focal length gap D2. It should be understood that the second image I2 also includes the second contour 330 found by the detection module 123.
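The arrangement of step S240 reduces to turning focal lengths into Z offsets from the reference plane. The sketch below is an illustration under assumptions, not the patent's code: the focal length gap between two images is taken to be the difference of their focal lengths, so the cumulative gaps D1, D1 + D2, ... collapse to a simple subtraction from the reference focal length. The function name `z_positions` is invented.

```python
# Hypothetical sketch of step S240: each image sits at the cumulative focal
# length gap from the reference image, on the same side of the reference
# plane (the patent's DI' = D1 + D2 for image I2).

def z_positions(focal_lengths):
    """focal_lengths: list sorted descending, reference image first.
    Returns the Z offset of each image from the reference plane."""
    ref = focal_lengths[0]
    return [ref - f for f in focal_lengths]

# RI at focal length 85, I1 at 50, I2 at 35:
# D1 = 35 (RI to I1), D2 = 15 (I1 to I2), DI' = D1 + D2 = 50.
print(z_positions([85, 50, 35]))  # [0, 35, 50]
```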
Referring again to Fig. 2, in step S250, the generation module 125 performs interpolation operations between the contours of the images to generate the three-dimensional image. Referring to Fig. 3E, assuming the reference contour 310, the first contour 320, and the second contour 330 all correspond to the same object in the scene (for example, a mountain), the generation module 125 performs an interpolation operation between the first contour 320 and the reference contour 310 to connect them, and performs an interpolation operation between the second contour 330 and the first contour 320 to connect them.
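One way to realize the interpolation of step S250 is linear interpolation between two contours lying on planes at different Z heights, producing intermediate cross-sections that connect them into a surface. The sketch below is an assumption-laden illustration, not the patent's method: contours are simplified to matched lists of (x, y) points, the point matching strategy (pairing by index) is invented, and so is the name `interpolate_contours`.

```python
# Hypothetical sketch of step S250: linearly blend matched contour points
# across `steps` evenly spaced planes between z_a and z_b.

def interpolate_contours(contour_a, z_a, contour_b, z_b, steps=3):
    """Return a list of (z, points) for `steps` planes between z_a and z_b."""
    layers = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)            # fraction of the way from A to B
        z = z_a + t * (z_b - z_a)
        points = [((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
                  for (ax, ay), (bx, by) in zip(contour_a, contour_b)]
        layers.append((z, points))
    return layers

# A square contour at Z = 0 shrinking toward a point-like contour at Z = 50,
# roughly how a mountain's outline narrows with height.
base = [(0, 0), (4, 0), (4, 4), (0, 4)]
peak = [(2, 2), (2, 2), (2, 2), (2, 2)]
for z, pts in interpolate_contours(base, 0, peak, 50, steps=1):
    print(z, pts)  # the midway cross-section at z = 25.0
```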
In brief, the electronic device 100 converts the focal length corresponding to each image into Z-axis height information in the three-dimensional space (that is, the focal length gaps), and then arranges each image at a suitable position in the three-dimensional space according to the Z-axis height information. The electronic device 100 then performs interpolation operations between the contours of the images to generate a three-dimensional image such as that shown in Fig. 3E.
It should be understood that, since the reference image RI that decides the three-dimensional reference plane is the image with the largest focal length, when the three-dimensional image of Fig. 3E is presented to a user for viewing, the electronic device 100 should use the negative Z-axis direction as the top of the three-dimensional image (as shown in Fig. 3F), rather than the positive Z-axis direction as shown in Fig. 3E, although embodiments of the invention are not limited thereto.
In other embodiments, the electronic device 100 may also include a gyroscope 140 connected to the processing unit 130. The processing unit 130 can then rotate the three-dimensional image according to the sensing signal of the gyroscope 140, so that the user can further experience the visual effect of the three-dimensional image when viewing it.
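The rotation in the gyroscope embodiment amounts to applying a rotation matrix to the generated 3-D points. The sketch below is illustrative only: the mapping from the gyroscope's sensing signal to a rotation angle is device-specific and assumed here, as is rotation about the Z axis and the name `rotate_about_z`.

```python
# Hypothetical sketch of the gyroscope embodiment: rotate (x, y, z) points
# about the Z axis by an angle derived from the gyroscope's sensing signal.
import math

def rotate_about_z(points, angle_rad):
    """Rotate a list of (x, y, z) points about the Z axis by angle_rad."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]

# A quarter turn moves a point on the X axis onto the Y axis; Z is unchanged.
print(rotate_about_z([(1.0, 0.0, 5.0)], math.pi / 2))
```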
In summary, after obtaining multiple images corresponding to different focal lengths, the method and electronic device proposed by embodiments of the invention arrange those images appropriately in a three-dimensional space according to their focal lengths. The electronic device then performs edge detection on each image to find the contours in each image, and performs interpolation operations between the contours to generate a three-dimensional image corresponding to the obtained images. In this way, even an electronic device configured with only a single image capture unit can still generate a three-dimensional image smoothly and conveniently, providing users with an experience different from the past.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the invention.

Claims (10)

1. A method for generating a three-dimensional image, suitable for an electronic device, characterized by comprising:
obtaining a plurality of images corresponding to a plurality of focal lengths, wherein a plurality of focal length gaps exist between the focal lengths;
selecting a reference image from the images, and using the reference image as a three-dimensional reference plane in a three-dimensional space;
performing edge detection on each of the images according to a sharpness reference value, so as to find at least one contour corresponding to the sharpness reference value in each of the images;
arranging each of the images in the three-dimensional space based on the focal length gaps and the three-dimensional reference plane; and
performing an interpolation operation between the at least one contour of each of the images to generate the three-dimensional image.
2. The method according to claim 1, characterized in that the images correspond to a same scene, and the reference image has a largest focal length among the focal lengths.
3. The method according to claim 2, characterized in that the images comprise a first image corresponding to a first focal length, a first focal length gap exists between the first focal length and the largest focal length, the reference image comprises a reference contour corresponding to the sharpness reference value, and the step of arranging each of the images based on the focal length gaps and the three-dimensional reference plane comprises:
placing the first image parallel to the reference image at a first position separated from the reference image by the first focal length gap, wherein the arranged first image is aligned with the reference image.
4. The method according to claim 3, characterized in that the images further comprise a second image corresponding to a second focal length, a second focal length gap exists between the second focal length and the first focal length, and after the step of placing the first image parallel to the reference image at the first position, the method further comprises:
placing the second image parallel to the first image at a second position separated from the first image by the second focal length gap, wherein the arranged second image is aligned with the first image,
wherein the first image and the second image are located on a same side of the reference image, and a specific focal length gap between the second image and the reference image is the sum of the first focal length gap and the second focal length gap.
5. The method according to claim 3, characterized in that the first image comprises a first contour corresponding to the sharpness reference value, the reference image comprises a reference contour corresponding to the sharpness reference value, the first contour and the reference contour correspond to a first object, and the step of performing the interpolation operation between the at least one contour of each of the images to generate the three-dimensional image comprises:
performing the interpolation operation between the first contour and the reference contour to connect the first contour and the reference contour.
6. The method according to claim 5, characterized in that the images further comprise a second image, the second image comprises a second contour corresponding to the sharpness reference value, the second contour corresponds to the first object, and after the step of connecting the first contour and the reference contour, the method further comprises:
performing the interpolation operation between the second contour and the first contour to connect the second contour and the first contour.
7. The method according to claim 1, characterized in that the number of the images is proportional to a capture speed of the electronic device.
8. The method according to claim 1, characterized in that the images comprise a first image, the first image comprises a plurality of pixels, the pixels comprise a first pixel and a second pixel adjacent to the first pixel, the first pixel has a first gray value, the second pixel has a second gray value, and the step of performing the edge detection on each of the images according to the sharpness reference value to find the at least one contour corresponding to the sharpness reference value in each of the images comprises:
calculating a gap between the first gray value and the second gray value;
when the gap is greater than a predetermined threshold, defining one of the first pixel and the second pixel as a contour pixel of the first image; and
finding all contour pixels in the first image, and accordingly defining the at least one contour in the first image.
9. The method according to claim 1, characterized by further comprising, after the step of generating the three-dimensional image:
rotating the three-dimensional image according to a sensing signal of a gyroscope of the electronic device.
10. An electronic device for generating a three-dimensional image, characterized by comprising:
an image capture unit;
a storage unit, storing a plurality of modules; and
a processing unit, connected to the image capture unit and the storage unit, and accessing and executing the modules, the modules comprising:
an acquisition module, controlling the image capture unit to obtain a plurality of images corresponding to a plurality of focal lengths, wherein a plurality of focal length gaps exist between the focal lengths;
a selection module, selecting a reference image from the images, and using the reference image as a three-dimensional reference plane in a three-dimensional space;
a detection module, performing edge detection on each of the images according to a sharpness reference value, so as to find at least one contour corresponding to the sharpness reference value in each of the images;
an arrangement module, arranging each of the images in the three-dimensional space based on the focal length gaps and the three-dimensional reference plane; and
a generation module, performing an interpolation operation between the at least one contour of each of the images to generate the three-dimensional image.
CN201410474170.3A 2014-09-17 2014-09-17 Method for generating three-dimensional image and electronic device thereof Active CN105488845B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410474170.3A CN105488845B (en) 2014-09-17 2014-09-17 Method for generating three-dimensional image and electronic device thereof


Publications (2)

Publication Number Publication Date
CN105488845A true CN105488845A (en) 2016-04-13
CN105488845B CN105488845B (en) 2018-09-25

Family

ID=55675809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410474170.3A Active CN105488845B (en) 2014-09-17 2014-09-17 Method for generating three-dimensional image and electronic device thereof

Country Status (1)

Country Link
CN (1) CN105488845B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106446908A (en) * 2016-08-31 2017-02-22 乐视控股(北京)有限公司 Method and device for detecting object in image
CN107452008A (en) * 2016-06-01 2017-12-08 上海东方传媒技术有限公司 Method for detecting image edge and device
CN111179291A (en) * 2019-12-27 2020-05-19 凌云光技术集团有限责任公司 Edge pixel point extraction method and device based on neighborhood relationship

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101008104A (en) * 2006-12-28 2007-08-01 西安理工大学 Melt liquid level position detecting method for CZ method monocrystalline silicon growth
CN101858741A (en) * 2010-05-26 2010-10-13 沈阳理工大学 Zoom ranging method based on single camera
CN102204263A (en) * 2008-11-03 2011-09-28 微软公司 Converting 2D video into stereo video
CN103379267A (en) * 2012-04-16 2013-10-30 鸿富锦精密工业(深圳)有限公司 Three-dimensional space image acquisition system and method
CN103578133A (en) * 2012-08-03 2014-02-12 浙江大华技术股份有限公司 Method and device for reconstructing two-dimensional image information in three-dimensional mode
CN103782234A (en) * 2011-09-09 2014-05-07 富士胶片株式会社 Stereoscopic image capture device and method
US20140218484A1 (en) * 2013-02-05 2014-08-07 Canon Kabushiki Kaisha Stereoscopic image pickup apparatus



Also Published As

Publication number Publication date
CN105488845B (en) 2018-09-25

Similar Documents

Publication Publication Date Title
US10685446B2 (en) Method and system of recurrent semantic segmentation for image processing
CN107925755B (en) Method and system for planar surface detection for image processing
US10580140B2 (en) Method and system of real-time image segmentation for image processing
CN107079100B (en) Method and system for lens shift correction for camera arrays
US10509954B2 (en) Method and system of image segmentation refinement for image processing
US9973672B2 (en) Photographing for dual-lens device using photographing environment determined using depth estimation
US9852513B2 (en) Tracking regions of interest across video frames with corresponding depth maps
CN111819601A (en) Method and system for point cloud registration for image processing
US20160100148A1 (en) Method and system of lens shading color correction using block matching
US9661298B2 (en) Depth image enhancement for hardware generated depth images
US10735769B2 (en) Local motion compensated temporal noise reduction with sub-frame latency
CN102595146B (en) Panoramic image generation method and device
CN110099220B (en) Panoramic stitching method and device
US9807313B2 (en) Method and system of increasing integer disparity accuracy for camera images with a diagonal layout
CN111757080A (en) Virtual view interpolation between camera views for immersive visual experience
WO2018063606A1 (en) Robust disparity estimation in the presence of significant intensity variations for camera arrays
CN110944164A (en) Immersive viewing using planar arrays of cameras
CN105488845A (en) Method for generating three-dimensional image and electronic device
TWI549478B (en) Method for generating 3d image and electronic apparatus using the same
US20230281916A1 (en) Three dimensional scene inpainting using stereo extraction
KR20210018348A (en) Prediction for light field coding and decoding
US9531943B2 (en) Block-based digital refocusing system and method thereof
CN117710259A (en) Image processing method, device, equipment and storage medium
US20160196636A1 (en) Image processing method and mobile electronic device
JP2013109660A (en) Image processing device and image processing method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant