CN106331680A - Method and system for 2D-to-3D adaptive cloud unloading on handset - Google Patents


Info

Publication number
CN106331680A
CN106331680A (application CN201610657639.6A)
Authority
CN
China
Prior art keywords
view
image
cloud
algorithm
image block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610657639.6A
Other languages
Chinese (zh)
Other versions
CN106331680B (en)
Inventor
金欣
李倩
戴琼海
张新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Weilai Media Technology Research Institute
Shenzhen Graduate School Tsinghua University
Original Assignee
Shenzhen Weilai Media Technology Research Institute
Shenzhen Graduate School Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Weilai Media Technology Research Institute, Shenzhen Graduate School Tsinghua University filed Critical Shenzhen Weilai Media Technology Research Institute
Priority to CN201610657639.6A priority Critical patent/CN106331680B/en
Publication of CN106331680A publication Critical patent/CN106331680A/en
Application granted granted Critical
Publication of CN106331680B publication Critical patent/CN106331680B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/261Image signal generators with monoscopic-to-stereoscopic image conversion

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a method and a system for 2D-to-3D adaptive cloud offloading on a handset. The method comprises: a first step of inputting a frame of 2D monocular image and dividing it into N equal image blocks; a second step of classifying each image block as a distant view, a linear view, or a common view; a third step of calculating the algorithm complexity of depth estimation for each image block according to its view class; a fourth step of substituting the algorithm complexity of each image block into a cloud-offloading dynamic resource allocation model and optimizing it to obtain an allocation result; and a fifth step of performing depth estimation at the handset and in the cloud according to the allocation result, so as to generate a depth map. By establishing the cloud-offloading dynamic resource allocation model, a cloud-computing-based method and system for 2D-to-3D adaptive offloading on a handset are formed. Complex computation is offloaded from the handset to the cloud, which frees the handset's storage resources, increases its processing speed, and lowers its power consumption, yielding a high-quality depth map with reasonable depth estimates at high running speed.

Description

Mobile-phone-side 2D-to-3D adaptive cloud offloading method and system
Technical Field
The invention relates to the fields of computer vision and digital image processing, and in particular to a method and a system for adaptive cloud offloading of 2D (two-dimensional) to 3D (three-dimensional) conversion on the mobile-phone side.
Background
In recent years, with the continuous improvement of mobile-device technology and the popularization of mobile phones, people have entered the era of smartphones, and users are increasingly accustomed to watching pictures and videos on mobile terminals rather than on traditional devices. Driven by multi-core processors, larger memory, and hardware such as GPUs (graphics processing units), mobile phones have become more and more powerful and their applications are continuously upgraded; mobile 3D-TV is one of the main directions. However, 3D video is complex to shoot and has a long post-production cycle, so 3D resources are relatively scarce, which severely restricts the development of 3D video on mobile terminals. Converting the existing large volume of 2D resources into 3D is an effective way to solve this problem.
Two key steps in 2D-to-3D conversion are depth estimation and virtual-viewpoint synthesis. Depth estimation is the extraction of depth information from one or more images; the reconstructed depth map can be used in 3D modeling, virtual-viewpoint rendering, video editing, and so on. A high-quality depth map not only reflects the correct depth of each image point at its corresponding point in space, but also handles image noise, the depth of low-texture regions, and occlusion. As the basis of many applications, the quality of depth estimation plays a crucial role in stereo vision.
Existing depth-estimation algorithms for 2D images adapt to a limited range of scenes, produce a poor stereoscopic effect, and have high complexity. 2D-to-3D conversion on the mobile-phone side is even less mature. First, the conversion requires considerable storage, which is a huge challenge for a handset with very limited storage capacity. Second, the processing speed of a mobile phone is limited and can hardly meet the real-time requirement of 2D-to-3D conversion. Moreover, both video conversion and video playback consume a large amount of power; even though many phones support fast charging, this still inconveniences users.
Disclosure of Invention
The invention mainly aims to overcome the defects of the prior art and provide a method and a system for 2D-to-3D adaptive cloud offloading on the mobile-phone side, which offload the complex computation of the handset to the cloud, thereby freeing the handset's storage resources, increasing its processing speed, and reducing its power consumption.
The invention provides a method for adaptive cloud offloading of 2D-to-3D conversion on the mobile-phone side, characterized by comprising the following steps:
A1. inputting a frame of 2D monocular image and equally dividing it into N image blocks;
A2. classifying the image blocks as a distant view, a linear view, or a common view;
A3. calculating the algorithm complexity of depth estimation for each image block according to its view class;
A4. substituting the algorithm complexity of each image block obtained in A3 into a cloud-offloading dynamic resource allocation model and optimizing it to obtain an allocation result;
A5. performing depth estimation at the mobile-phone side and the cloud side according to the allocation result of A4, so as to generate a depth map.
Preferably, the image-block classification of step A2 comprises the following steps:
A201. converting each image block from RGB space to HSI space, calculating the HSI pixel values, and separating distant views from non-distant views according to a set threshold;
A202. performing vanishing-point detection on each non-distant view: if a vanishing point can be detected, the block is a linear view; otherwise it is a common view.
Further preferably, the classification of distant and non-distant views in step A201 is as follows: calculate the HSI pixel values H(x, y), S(x, y), I(x, y) at each image coordinate (x, y). If 100 < H(x, y) < 180 and 100 < S(x, y) < 255, set Sky(x, y) = 1; if 50 < H(x, y) < 100 and 100 < S(x, y) < 255, set Ground(x, y) = 1. Let Amount be the total count of pixels marked Sky or Ground; if Amount is larger than a set threshold, the block is a distant view, otherwise a non-distant view.
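The sky/ground rule above can be sketched as a short NumPy routine. This is an illustrative sketch, not code from the patent: the function name, the NumPy representation, and the normalized-fraction form of the threshold (the patent leaves the exact threshold value open) are my assumptions; the channel ranges follow the 8-bit scaling implied by the thresholds (H in [0, 180), S in [0, 255]).

```python
import numpy as np

def classify_distant_view(H, S, threshold=0.5):
    """Classify an image block as distant vs. non-distant view.

    H, S: HSI hue/saturation channels as 2D arrays in 8-bit-style ranges
    (H in [0, 180), S in [0, 255]), matching the patent's thresholds.
    threshold: assumed fraction of sky+ground pixels over the block area.
    """
    sky = (H > 100) & (H < 180) & (S > 100) & (S < 255)      # Sky(x, y) = 1
    ground = (H > 50) & (H < 100) & (S > 100) & (S < 255)    # Ground(x, y) = 1
    amount = (sky.sum() + ground.sum()) / H.size             # normalized Amount
    return "distant" if amount > threshold else "non-distant"
```

A block that is mostly sky (H around 150) would be classified as distant, while one with low hue values would not.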
Further preferably, the vanishing-point detection of step A202 is as follows: compute the image edges with the Canny operator, detect edge lines with the Hough transform, and locate the vanishing point from the intersections of those lines.
Still further preferably, the vanishing point is detected by the formula:
min_{x0, y0} Σ_{i=1}^{n} W_i (ρ_i − x0·cos θ_i − y0·sin θ_i)²,
where (x0, y0) is the vanishing point of the image block in the image plane, (ρ_i, θ_i) are the polar coordinates of the detected line corresponding to point (x_i, y_i) of the image block, and W_i is the corresponding weight.
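The weighted least-squares criterion min Σ W_i(ρ_i − x0 cos θ_i − y0 sin θ_i)² is linear in (x0, y0), since each Hough line contributes one linear equation ρ_i = x0 cos θ_i + y0 sin θ_i, so it has a closed-form solution. A minimal sketch (the function name and the NumPy solver are my choices, not the patent's):

```python
import numpy as np

def vanishing_point(rho, theta, w=None):
    """Weighted least-squares vanishing point from Hough lines.

    Each detected edge line satisfies rho_i = x*cos(theta_i) + y*sin(theta_i);
    (x0, y0) minimizes sum_i w_i*(rho_i - x0*cos(theta_i) - y0*sin(theta_i))**2.
    """
    rho = np.asarray(rho, dtype=float)
    theta = np.asarray(theta, dtype=float)
    w = np.ones_like(rho) if w is None else np.asarray(w, dtype=float)
    A = np.column_stack([np.cos(theta), np.sin(theta)])  # design matrix
    sw = np.sqrt(w)                                      # weight rows by sqrt(w_i)
    x0, y0 = np.linalg.lstsq(A * sw[:, None], rho * sw, rcond=None)[0]
    return x0, y0
```

For lines that actually pass through a common point, the routine recovers that point exactly.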
Preferably, the algorithm-complexity calculation of step A3 comprises the following steps:
A301. for a distant view, segment the image block with the k-means algorithm and approximate the algorithm complexity of depth estimation by the execution time of k-means;
A302. for a common view, detect the image block with the graph-cut algorithm, taking detectable regions as foreground and the rest as background; compute the execution time of graph cut on foreground and background respectively, and approximate the algorithm complexity of depth estimation by that execution time;
A303. for a linear view, segment the image block with k-means and also perform vanishing-point detection; compute the execution time of each, and approximate the algorithm complexity of depth estimation by the larger of the two.
The algorithm complexity of depth estimation depends on the complexity of each constituent algorithm, and that complexity is positively correlated with execution time, so the algorithm complexity of depth estimation can be approximated by execution time.
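Approximating complexity by execution time amounts to a simple wall-clock wrapper around whichever algorithm the block's class uses; the helper below and its name are illustrative, not from the patent:

```python
import time

def timed_complexity(algorithm, *args):
    """Approximate the depth-estimation complexity of one image block by
    the wall-clock execution time of its class's algorithm
    (k-means, graph cut, or vanishing-point detection)."""
    t0 = time.perf_counter()
    algorithm(*args)                   # run the stand-in algorithm
    return time.perf_counter() - t0    # elapsed seconds as complexity proxy
```

Any callable can stand in for the per-class algorithm, e.g. `timed_complexity(kmeans_segment, block)` for a distant view.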
Further preferably, the algorithm complexity of a distant-view image block is expressed as:
C1 ≈ (8 + 12·α_i)·WH/S,   formula (2)
that of a common-view image block as:
C3 ≈ β_i·WH/S,   formula (3)
and that of a linear-view image block as:
C2 ≈ max(γ, 8 + 12·α_i)·WH/S,   formula (4)
where C1, C2 and C3 are the algorithm complexities of distant-view, linear-view and common-view image blocks respectively; W and H are the width and height of the current image block; S is the area used to normalize the image block, S = 87296; α_i is the number of clusters in the k-means algorithm; β_i is the number of iterations of the graph-cut algorithm; O_i is the number of closed contours in the image; and γ is the time of vanishing-point detection on an image of size S.
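Formulas (2)-(4) can be evaluated directly. The sketch below encodes them with my own function name and keyword arguments (α_i, β_i, γ rendered as `alpha`, `beta`, `gamma`):

```python
def block_complexity(view, W, H, alpha=None, beta=None, gamma=None, S=87296):
    """Approximate depth-estimation complexity per formulas (2)-(4).

    alpha: k-means cluster count, beta: graph-cut iteration count,
    gamma: vanishing-point detection time on an S-sized image,
    W, H: block width/height, S: normalization area (87296 in the patent).
    """
    area = W * H / S
    if view == "distant":
        return (8 + 12 * alpha) * area            # C1, formula (2)
    if view == "common":
        return beta * area                        # C3, formula (3)
    if view == "linear":
        return max(gamma, 8 + 12 * alpha) * area  # C2, formula (4)
    raise ValueError(f"unknown view class: {view}")
```

For a block whose area equals S, the distant-view complexity reduces to 8 + 12·α_i.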
Preferably, the cloud-offloading dynamic resource allocation model of step A4 is optimized by minimizing the power consumption of the mobile phone to obtain the allocation result.
First, when there is no cloud offloading, the energy consumption of the mobile-phone side may be defined as:
E_local = P_c × C_all / f,
where P_c and P_tr are the power consumption of the handset during computation and during data transfer respectively, C_all is the number of instructions required by the algorithm, f is the processing speed of the handset in instructions per second, D is the size of the data transferred between the cloud and the handset, and B is the bandwidth.
With cloud offloading, the energy consumption of the mobile-phone side is:
E_offload = P_c × C_m / f + P_i × C_c / S + P_tr × D / B,
where P_i is the power consumption of the handset when idle, S is the computing speed of the cloud, and C_c and C_m are the algorithm complexities assigned to the cloud and to the handset respectively.
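The two energy models above can be written as one-liners. The names `energy_local` and `energy_offload` are mine; the terms mirror the definitions above (compute at power P_c, sit idle at P_i while the cloud works, transfer at P_tr):

```python
def energy_local(P_c, C_all, f):
    """Energy without offloading: all C_all instructions run locally at speed f."""
    return P_c * C_all / f

def energy_offload(P_c, P_i, P_tr, C_m, C_c, f, S, D, B):
    """Energy with offloading: local work C_m at speed f, idle wait while the
    cloud runs C_c at speed S, plus transferring D units of data over bandwidth B."""
    return P_c * C_m / f + P_i * C_c / S + P_tr * D / B
```

Offloading pays off exactly when `energy_offload(...) < energy_local(...)`, which is the constraint imposed on the allocation model below.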
To minimize the energy consumption of the mobile-phone side, the computing power of the handset and the cloud and the transmission rate between them are taken into account, and a dynamic resource allocation model for 2D-to-3D conversion on the mobile-phone side based on cloud offloading is proposed according to the algorithm complexity of depth estimation.
Further preferably, the cloud-offloading dynamic resource allocation model is expressed as:
n_c = argmin( P_c × C_m / f + P_i × C_c / S + P_tr × D / B ),   formula (9)
where n_c1, n_c2 and n_c3 are the numbers of distant-view, linear-view and common-view blocks offloaded to the cloud, and n_all1, n_all2 and n_all3 are the total numbers of distant-view, linear-view and common-view blocks respectively. The counts kept on the handset and those offloaded are linked by n_mj = n_allj − n_cj (j = 1, 2, 3). The constraints state that the number of blocks offloaded to the cloud must not exceed the total number of blocks, and that the energy consumption of the handset with cloud offloading must be lower than without it.
The total algorithm complexities at the cloud and at the handset are the sums of the complexities of the image blocks of each class assigned to them:
C_c = (n_c1, n_c2, n_c3) × (C1, C2, C3)^T,   formula (12)
C_m = (n_m1, n_m2, n_m3) × (C1, C2, C3)^T,   formula (13)
where n_m1, n_m2 and n_m3 are the numbers of distant-view, linear-view and common-view blocks kept at the mobile-phone side, and C1, C2 and C3 are the depth-estimation complexities of distant-view, linear-view and common-view image blocks respectively.
By optimizing formula (9), the values of the variables are obtained, namely the numbers of distant-view, linear-view and common-view image blocks offloaded to the cloud.
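With only N = 9 blocks split over three view classes, the minimization in formula (9) can be solved by exhaustive search. The patent does not specify its optimizer, so the brute-force enumeration below is an assumed, illustrative implementation; the offload-must-save-energy constraint is enforced by seeding the search with the all-local energy:

```python
from itertools import product

def allocate(n_all, C, P_c, P_i, P_tr, f, S, D, B):
    """Search block counts (nc1, nc2, nc3) offloaded to the cloud that
    minimize  P_c*C_m/f + P_i*C_c/S + P_tr*D/B  (formula (9)),
    keeping only candidates cheaper than running everything locally."""
    C_all = sum(n * c for n, c in zip(n_all, C))
    best, best_e = (0, 0, 0), P_c * C_all / f        # baseline: no offloading
    for nc in product(*(range(n + 1) for n in n_all)):
        C_c = sum(n * c for n, c in zip(nc, C))      # complexity sent to cloud
        C_m = C_all - C_c                            # complexity kept on handset
        e = P_c * C_m / f + P_i * C_c / S + P_tr * D / B
        if e < best_e:
            best, best_e = nc, e
    return best, best_e
```

When the cloud is much faster than the handset and the transfer cost is small, the search offloads every block; when the bandwidth term dominates, it falls back to all-local processing.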
The invention also provides a system for adaptive cloud offloading of 2D-to-3D conversion on the mobile-phone side, comprising an image-division module, an image-block classification module, a complexity-calculation module, a dynamic-resource-allocation-model module, and a depth-estimation module. The image-division module divides the 2D monocular image; the image-block classification module classifies the image blocks into distant view, linear view and common view; the complexity-calculation module calculates the algorithm complexity of each image block; the dynamic-resource-allocation-model module optimizes the allocation of cloud-offloading dynamic resources; and the depth-estimation module estimates the depth of each image block at the mobile-phone side or in the cloud to generate a depth map.
The invention has the following beneficial effects: by establishing a cloud-offloading dynamic resource allocation model, a cloud-computing-based method and system for 2D-to-3D adaptive offloading on the mobile-phone side are formed, and the complex computation of the handset is offloaded to the cloud, freeing the handset's storage resources, increasing its processing speed, and reducing its power consumption. With this depth-estimation method and system, a high-quality depth map with reasonable depth estimates can be obtained at high running speed.
The embodiments of the invention also have the following beneficial effects: image blocks can be classified more accurately through HSI-space pixel values and vanishing-point detection; applying the k-means algorithm, the graph-cut algorithm and vanishing-point detection to the respective block classes improves algorithmic efficiency; and by accounting for the computing power of the handset and the cloud and the transmission rate between them, the proposed cloud-offloading dynamic resource allocation model minimizes the handset's power consumption.
Drawings
Fig. 1 is a schematic diagram of a method for adaptive cloud offloading from 2D to 3D at a mobile phone end according to an embodiment of the present invention.
Fig. 2 is an input image of an embodiment of the present invention, fig. 2a is a plain, fig. 2b is a mountain, fig. 2c is a highway, fig. 2d is a train, fig. 2e is a beach, and fig. 2f is a butterfly.
Fig. 3 is a schematic diagram of the power consumption saved under different network bandwidths according to an embodiment of the present invention.
Fig. 4 is the depth map generated, fig. 4a is plain, fig. 4b is mountain, fig. 4c is highway, fig. 4d is train, fig. 4e is beach, fig. 4f is butterfly.
Detailed Description
The present invention is described in further detail below with reference to specific embodiments and the attached drawings; it should be emphasized that the following description is only exemplary and is not intended to limit the scope or application of the invention.
A flow chart of the method for adaptive cloud offloading of 2D-to-3D conversion on the mobile-phone side in this embodiment is shown in fig. 1.
A1. Input a frame of 2D monocular image and equally divide it into N image blocks.
A2. Convert each image block from RGB space to HSI space, compute the HSI pixel values, and separate distant from non-distant views by a set threshold. For each non-distant view, perform vanishing-point detection: compute the image edges with the Canny operator, detect edge lines with the Hough transform, and locate the vanishing point from the line intersections. If a vanishing point is found, the block is a linear view; otherwise it is a common view.
A3. According to its view class, calculate the algorithm complexity of depth estimation for each image block: for a distant view, segment the block with k-means and approximate the complexity by the execution time of k-means; for a common view, detect the block with graph cut, taking detectable regions as foreground and the rest as background, compute the execution time of graph cut on each, and approximate the complexity by that time; for a linear view, perform both k-means segmentation and vanishing-point detection, compute the execution time of each, and take the larger as the approximate complexity.
A4. Substitute the algorithm complexity of each block computed in A3 into the cloud-offloading dynamic resource allocation model, and obtain the allocation result by minimizing the handset's power consumption.
A5. According to the allocation result of A4, perform depth estimation at the mobile-phone side or in the cloud to generate a depth map.
The depth maps generated at the handset and in the cloud are then fused, a 3D viewpoint is synthesized, and the 3D view is displayed.
The test platform of the experiment was an HP iPAQ PDA, with parameters P_c = 0.9 W, P_i = 0.3 W, and P_tr = 1.3 W.
The original test images are shown in fig. 2: fig. 2a is a plain, fig. 2b a mountain, fig. 2c a highway, fig. 2d a train, fig. 2e a beach, and fig. 2f a butterfly. After each image is equally divided into 9 blocks, the number of image blocks of each view class is shown in Table 1, where n_all1, n_all2 and n_all3 are the total numbers of distant-view, linear-view and common-view blocks respectively.
TABLE 1 number of image blocks of various views
With test network bandwidths of 0.5 Mbps, 1.5 Mbps, 2.5 Mbps, 3.5 Mbps, and 4.5 Mbps, the cloud-offloading allocation results and the energy saved are shown in Table 2 below; the corresponding graph is shown in fig. 3. The depth maps generated by the experiment are shown in fig. 4: fig. 4a is a plain, fig. 4b a mountain, fig. 4c a highway, fig. 4d a train, fig. 4e a beach, and fig. 4f a butterfly.
TABLE 2 cloud offload dynamic resource allocation results and energy savings
In Table 2, n_c1, n_c2 and n_c3 are the numbers of blocks of each view class offloaded to the cloud at the corresponding transmission bandwidth, and saved energy is the percentage of power consumption saved with cloud offloading relative to no offloading.

Claims (10)

1. A method for adaptive cloud offloading of 2D-to-3D conversion on a mobile-phone side, characterized by comprising the following steps:
A1. inputting a frame of 2D monocular image and equally dividing it into N image blocks;
A2. classifying the image blocks as a distant view, a linear view, or a common view;
A3. calculating the algorithm complexity of depth estimation for each image block according to its view class;
A4. substituting the algorithm complexity of each image block obtained in A3 into a cloud-offloading dynamic resource allocation model and optimizing it to obtain an allocation result;
A5. performing depth estimation at the mobile-phone side and the cloud side according to the allocation result of A4, so as to generate a depth map.
2. The method as claimed in claim 1, characterized in that the image-block classification of step A2 comprises the following steps:
A201. converting each image block from RGB space to HSI space, calculating the HSI pixel values, and separating distant views from non-distant views according to a set threshold;
A202. performing vanishing-point detection on each non-distant view: if a vanishing point can be detected, the block is a linear view; otherwise it is a common view.
3. The method as claimed in claim 2, characterized in that the classification of distant and non-distant views in step A201 is: calculate the HSI pixel values H(x, y), S(x, y), I(x, y) at each image coordinate (x, y); if 100 < H(x, y) < 180 and 100 < S(x, y) < 255, set Sky(x, y) = 1; if 50 < H(x, y) < 100 and 100 < S(x, y) < 255, set Ground(x, y) = 1; let Amount be the total count of pixels marked Sky or Ground; if Amount is larger than a set threshold, the block is a distant view, otherwise a non-distant view.
4. The method of claim 2, characterized in that the vanishing-point detection of step A202 is: compute the image edges with the Canny operator, detect edge lines with the Hough transform, and locate the vanishing point from the intersections of those lines.
5. The method of claim 4, characterized in that the vanishing point is detected by the formula:
min_{x0, y0} Σ_{i=1}^{n} W_i (ρ_i − x0·cos θ_i − y0·sin θ_i)²,
where (x0, y0) is the vanishing point of the image block in the image plane, (ρ_i, θ_i) are the polar coordinates of the detected line corresponding to point (x_i, y_i) of the image block, and W_i is the corresponding weight.
6. The method of claim 1, characterized in that the algorithm-complexity calculation of step A3 comprises the following steps:
A301. for a distant view, segmenting the image block with the k-means algorithm and approximating the algorithm complexity of depth estimation by the execution time of k-means;
A302. for a common view, detecting the image block with the graph-cut algorithm, taking detectable regions as foreground and the rest as background, computing the execution time of graph cut on foreground and background respectively, and approximating the algorithm complexity of depth estimation by that execution time;
A303. for a linear view, segmenting the image block with k-means and performing vanishing-point detection, computing the execution time of each, and approximating the algorithm complexity of depth estimation by the larger of the two.
7. The method as claimed in claim 6, characterized in that the algorithm complexity of a distant-view image block is expressed as C1 ≈ (8 + 12·α_i)·WH/S, that of a common-view image block as C3 ≈ β_i·WH/S, and that of a linear-view image block as C2 ≈ max(γ, 8 + 12·α_i)·WH/S,
where C1, C2 and C3 are the algorithm complexities of distant-view, linear-view and common-view image blocks respectively; W and H are the width and height of the current image block; S is the area used to normalize the image block, S = 87296; α_i is the number of clusters in the k-means algorithm; β_i is the number of iterations of the graph-cut algorithm; O_i is the number of closed contours in the image; and γ is the time of vanishing-point detection on an image of size S.
8. The method of claim 1, characterized in that the cloud-offloading dynamic resource allocation model of step A4 is optimized by minimizing the power consumption of the mobile phone to obtain the allocation result.
9. The method of claim 8, characterized in that the cloud-offloading dynamic resource allocation model is expressed as:
n_c = argmin( P_c × C_m / f + P_i × C_c / S + P_tr × D / B ),
where C_m = (n_m1, n_m2, n_m3) × (C1, C2, C3)^T and C_c = (n_c1, n_c2, n_c3) × (C1, C2, C3)^T;
P_c is the power consumption of the handset during computation, P_i its power consumption when idle, and P_tr its power consumption during data transfer; C_m is the algorithm complexity assigned to the handset and C_c that assigned to the cloud; f is the processing speed of the handset, S the computing speed of the cloud, D the size of the data transferred between the cloud and the handset, and B the bandwidth; n_c1, n_c2 and n_c3 are the numbers of distant-view, linear-view and common-view blocks offloaded to the cloud; and n_all1, n_all2 and n_all3 are the total numbers of distant-view, linear-view and common-view blocks respectively.
10. A system for adaptive cloud offloading of 2D-to-3D conversion on a mobile-phone side, characterized by comprising an image-division module, an image-block classification module, a complexity-calculation module, a dynamic-resource-allocation-model module, and a depth-estimation module; the image-division module divides the 2D monocular image; the image-block classification module classifies the image blocks into distant view, linear view and common view; the complexity-calculation module calculates the algorithm complexity of each image block; the dynamic-resource-allocation-model module optimizes the allocation of cloud-offloading dynamic resources; and the depth-estimation module performs image-block depth estimation at the mobile-phone side or in the cloud.
CN201610657639.6A 2016-08-10 2016-08-10 Mobile-phone-side 2D-to-3D adaptive cloud offloading method and system Active CN106331680B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610657639.6A CN106331680B (en) 2016-08-10 2016-08-10 Mobile-phone-side 2D-to-3D adaptive cloud offloading method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610657639.6A CN106331680B (en) 2016-08-10 2016-08-10 Mobile-phone-side 2D-to-3D adaptive cloud offloading method and system

Publications (2)

Publication Number Publication Date
CN106331680A true CN106331680A (en) 2017-01-11
CN106331680B CN106331680B (en) 2018-05-29

Family

ID=57739204

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610657639.6A Active CN106331680B (en) 2016-08-10 2016-08-10 Mobile-phone-side 2D-to-3D adaptive cloud offloading method and system

Country Status (1)

Country Link
CN (1) CN106331680B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107507206A (en) * 2017-06-09 2017-12-22 合肥工业大学 Depth map extraction method based on saliency detection
CN109328459A (en) * 2017-12-29 2019-02-12 深圳配天智能技术研究院有限公司 Intelligent terminal and its 3D imaging method, 3D imaging system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102724532A (en) * 2012-06-19 2012-10-10 清华大学 Planar video three-dimensional conversion method and system using same
CN105631868A (en) * 2015-12-25 2016-06-01 清华大学深圳研究生院 Depth information extraction method based on image classification
WO2016105362A1 (en) * 2014-12-23 2016-06-30 Hewlett Packard Enterprise Development Lp Resource predictors indicative of predicted resource usage

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102724532A (en) * 2012-06-19 2012-10-10 清华大学 Planar video three-dimensional conversion method and system using same
WO2016105362A1 (en) * 2014-12-23 2016-06-30 Hewlett Packard Enterprise Development Lp Resource predictors indicative of predicted resource usage
CN105631868A (en) * 2015-12-25 2016-06-01 清华大学深圳研究生院 Depth information extraction method based on image classification

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANDREA MATESSI等: "Vanishing point detection in the hough transform space", 《EUROPEAN CONFERENCE ON PARALLEL PROCESSING》 *
KARTHIK KUMAR等: "Cloud Computing for Mobile Users: Can Offloading Computation Save Energy?", 《COMPUTER》 *
YEA-SHUAN HUANG等: "Creating Depth Map from 2D Scene Classification", 《INNOVATIVE COMPUTING INFORMATION AND CONTROL, 2008. ICICIC "08. 3RD INTERNATIONAL CONFERENCE ON》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107507206A (en) * 2017-06-09 2017-12-22 合肥工业大学 Depth map extraction method based on saliency detection
CN107507206B (en) * 2017-06-09 2021-08-20 合肥工业大学 Depth map extraction method based on saliency detection
CN109328459A (en) * 2017-12-29 2019-02-12 深圳配天智能技术研究院有限公司 Intelligent terminal and its 3D imaging method, 3D imaging system
CN109328459B (en) * 2017-12-29 2021-02-26 深圳配天智能技术研究院有限公司 Intelligent terminal, 3D imaging method thereof and 3D imaging system

Also Published As

Publication number Publication date
CN106331680B (en) 2018-05-29

Similar Documents

Publication Publication Date Title
KR102319177B1 (en) Method and apparatus, equipment, and storage medium for determining object pose in an image
CN108846440B (en) Image processing method and device, computer readable medium and electronic equipment
CN108198141B (en) Image processing method and device for realizing face thinning special effect and computing equipment
US8472746B2 (en) Fast depth map generation for 2D to 3D conversion
CN111476710B (en) Video face changing method and system based on mobile platform
US8897542B2 (en) Depth map generation based on soft classification
US10970824B2 (en) Method and apparatus for removing turbid objects in an image
CN108648161A (en) The binocular vision obstacle detection system and method for asymmetric nuclear convolutional neural networks
WO2020062191A1 (en) Image processing method, apparatus and device
CN104954780A (en) DIBR (depth image-based rendering) virtual image restoration method applicable to high-definition 2D/3D (two-dimensional/three-dimensional) conversion
WO2018045789A1 (en) Method and device for adjusting grayscale values of image
CN112233212A (en) Portrait editing and composition
CN111951368A (en) Point cloud, voxel and multi-view fusion deep learning method
CN112966608A (en) Target detection method, system and storage medium based on edge-side cooperation
CN106331680B (en) Mobile-phone-side 2D-to-3D adaptive cloud offloading method and system
CN110211017A (en) Image processing method, device and electronic equipment
US20150161436A1 (en) Multiple layer block matching method and system for image denoising
CN105023246B (en) A kind of image enchancing method based on contrast and structural similarity
US20130336577A1 (en) Two-Dimensional to Stereoscopic Conversion Systems and Methods
CN111951345A (en) GPU-based real-time image video oil painting stylization method
CN113077477B (en) Image vectorization method and device and terminal equipment
CN108961268B (en) Saliency map calculation method and related device
CN103426162A (en) Image processing apparatus, image processing method, and program
US10152818B2 (en) Techniques for stereo three dimensional image mapping
CN110197459B (en) Image stylization generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant