CN108154549A - Three-dimensional image processing method - Google Patents

Three-dimensional image processing method

Info

Publication number
CN108154549A
CN108154549A
Authority
CN
China
Prior art keywords
pixel
formula
web camera
real world
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711421179.8A
Other languages
Chinese (zh)
Inventor
李建亿
伊恩·罗伊·舒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pacific Future Ltd
Original Assignee
Pacific Future Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pacific Future Ltd filed Critical Pacific Future Ltd
Priority to CN201711421179.8A priority Critical patent/CN108154549A/en
Publication of CN108154549A publication Critical patent/CN108154549A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/60Shadow generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/529Depth or shape recovery from texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-dimensional image processing method. The method acquires image information and analyzes three-dimensional information such as the direction of shadows cast by real objects and the scene depth map; when a virtual object is placed into the image, the size and three-dimensional information of the virtual object are adaptively adjusted according to this information.

Description

Three-dimensional image processing method
Technical field
The present invention relates to the field of image processing, and in particular to a three-dimensional image processing method.
Background technology
Virtual reality (VR) technology combines simulation technology with computer graphics, human-machine interface technology, multimedia technology, sensing technology, network technology and other techniques; it is a challenging interdisciplinary frontier of research. VR mainly comprises the simulated environment, perception, natural skills and sensing devices. The simulated environment is a real-time, dynamic, three-dimensional photorealistic image generated by computer.
Virtual-real fusion scene generation based on image content is becoming a technical trend and interdisciplinary research hotspot in virtual reality and augmented reality. To ensure that later image processing can maintain virtual-real shadow consistency, the virtual object added to the image must have a shadow effect that matches the shadows already present in the image, i.e., the same projection direction and scene depth.
Traditional VR-element functions draw VR elements at a fixed position in the image with a fixed presentation. As a result, the added VR elements cannot blend well with the picture elements of the real scene captured by the mobile terminal, which harms the realism of the overall picture.
To perform three-dimensional reconstruction and fusion of virtual objects in a composite image, the three-dimensional image information of the real objects in the image must first be obtained; this information includes the color information and depth information of the objects. Existing three-dimensional imaging techniques mainly acquire the color and depth information of an object through sensors and fuse the two to obtain the three-dimensional image information.
Invention content
The present invention provides a three-dimensional image processing method. The method acquires image information and analyzes three-dimensional information such as the direction of shadows of real objects and the scene depth; when a virtual object is placed into the image, the size and three-dimensional information of the virtual object are adaptively adjusted according to this information.
To achieve these goals, the present invention provides a three-dimensional image processing method comprising the following steps:
S1. A web camera remotely collects image information and sends it to an image processing server;
S2. The image processing server identifies the three-dimensional features of all real objects in the image;
S3. A virtual object is placed into the image;
S4. According to the three-dimensional features of the real objects surrounding the virtual object, the size and three-dimensional information of the virtual object are adaptively adjusted.
Preferably, the three-dimensional features include three-dimensional depth information and projection direction.
Preferably, step S2 includes S21: calculating the similarity of adjacent pixels within each real object in the image, deriving the depth of each pixel therefrom, and obtaining the three-dimensional depth information of the real object.
Preferably, calculating the similarity of adjacent pixels within each real object in the image uses a multi-loop algorithm of adjacent-pixel propagation and random optimization, including the steps of initializing horizontal pixels to random planes and iteratively computing the similarity of adjacent pixels.
Preferably, initializing horizontal pixels to random planes includes the steps:
S211. Initialize each horizontal pixel to a random plane;
S212. Set the initial depth of each pixel to a random value, and set the surface normal vector at each pixel to a random unit vector.
Preferably, initializing horizontal pixels to random planes includes the following process:
The depth coordinate of a horizontal pixel on its plane is represented by formula (1):
z = f1·px + f2·py + f3 (1)
where z is the depth coordinate of the horizontal pixel, px and py are its coordinates on the random plane, and f1, f2 and f3 are given by formulas (2-1), (2-2) and (2-3) respectively:
f1 = -n1/n3 (2-1)
f2 = -n2/n3 (2-2)
f3 = (n1·x0 + n2·y0 + n3·z0)/n3 (2-3)
In formulas (2-1), (2-2) and (2-3), n1, n2 and n3 are scalars, the components of the normal vector of formula (3), which denotes the plane with the minimum aggregation cost among all possible planes; x0 and y0 are the coordinate values of the initialized horizontal pixel, and z0 is the initial depth value of the initialized horizontal pixel:
(n1, n2, n3) = argmin over all candidate planes f of m(x0, y0, f) (3)
where the aggregation cost m is given by formula (4):
m(x0, y0, f) = Σ over q in Wp of w(p, q)·E(q, f) (4)
In formula (4), w implements adaptive weighting and is given by formula (5); E represents the similarity computation factor and is given by formula (6); ∇ represents a gradient value; Wp denotes a square window centered on p:
w(p, q) = exp(-||Ip - Iq||/γ) (5)
In formula (5), ||Ip - Iq|| represents the distance between the two adjacent pixels p and q, where p is the horizontal pixel, q is an adjacent pixel in the same plane as p, and γ is a user-set constant:
E = α·||Ii - Ij|| + (1 - α)·||∇Ii - ∇Ij|| (6)
In formula (6), I is the intensity of a pixel in a real object and the subscripts i and j are the indexes of the real objects; Ii and Ij represent the intensities of corresponding pixels in the i-th and j-th real objects, projected to the same spatial point, their coordinates being computed by formula (7); ||Ii - Ij|| is the Manhattan distance between the colors Ii and Ij in RGB space; ∇Ii and ∇Ij are the gray-value gradients of the pixels; ||∇Ii - ∇Ij|| represents the absolute difference of the gray gradients computed at Ii and Ij; α is a dimensionless weight factor used to balance the influence of the color term and the gradient term;
In formula (7), ui is the local coordinate of the pixel corresponding to the point with coordinates y and z in each real object.
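By way of illustration only, the plane initialization of steps S211-S212 and formulas (1) and (2-1)-(2-3) can be sketched in Python as follows; the function names are hypothetical and not part of the claimed method:

```python
import math
import random

def random_unit_normal(rng: random.Random) -> tuple:
    """Draw a random unit vector to serve as a surface normal (step S212).
    Rejects vectors that are too short or too long before normalising;
    a caller would also reject normals with n3 near zero, since the plane
    coefficients below divide by n3."""
    while True:
        n = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
        length = math.sqrt(sum(c * c for c in n))
        if 1e-6 < length <= 1.0:
            return tuple(c / length for c in n)

def plane_coefficients(x0: float, y0: float, z0: float, normal: tuple) -> tuple:
    """Convert a point (x0, y0, z0) and a normal (n1, n2, n3) into the plane
    coefficients of formulas (2-1)-(2-3), so that z = f1*px + f2*py + f3
    (formula (1))."""
    n1, n2, n3 = normal
    f1 = -n1 / n3
    f2 = -n2 / n3
    f3 = (n1 * x0 + n2 * y0 + n3 * z0) / n3
    return f1, f2, f3

def plane_depth(f1: float, f2: float, f3: float, px: float, py: float) -> float:
    """Depth of pixel (px, py) under the plane, formula (1)."""
    return f1 * px + f2 * py + f3
```

A pixel's depth under its random plane is then read off with `plane_depth`; this is the quantity compared during cost aggregation.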
Preferably, iteratively computing the similarity of adjacent pixels includes the steps:
S213. Initialize a horizontal pixel on a random plane, calculate its depth coordinate and normal vector, and compute its aggregation cost; take this aggregation cost as the reference aggregation cost;
S214. Compute the aggregation cost of any adjacent pixel lying in the same plane as the horizontal pixel of step S213;
S215. Compare the reference aggregation cost of step S213 with the adjacent pixel's aggregation cost of step S214;
S216. Take the pixel whose aggregation cost is smaller in step S215 as the new reference value;
S217. Set the pixel corresponding to the new reference value of step S216 as the upper-left neighbor of the pixel it was compared with;
S218. Check the condition: the depth value corresponding to the new reference value of step S216 lies within the permitted maximum range;
S219. If the condition of step S218 holds, loop through steps S213 to S218;
S220. If the condition of step S218 does not hold, take the pixel of step S217 from the last loop iteration as the leftmost pixel of the image;
S221. Starting from step S220, continue the iteration toward the bottom-right of the image;
S222. Compute the calculation count of each pixel from the number of iterations of step S221.
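A minimal sketch of the neighbour-propagation comparison of steps S213-S216 is given below, assuming a caller-supplied aggregation-cost function; the one-plane-per-pixel data layout and the left/upper neighbour choice for a forward pass are illustrative simplifications:

```python
def propagate(costs, planes, cost_fn, width, height):
    """One forward pass of neighbour propagation: each pixel tests whether
    the plane of its left or upper neighbour yields a lower aggregation
    cost than its current plane (steps S213-S216), and if so adopts it."""
    for y in range(height):
        for x in range(width):
            for nx, ny in ((x - 1, y), (x, y - 1)):  # left and upper neighbours
                if 0 <= nx and 0 <= ny:
                    candidate = cost_fn(x, y, planes[ny][nx])
                    if candidate < costs[y][x]:   # neighbour's plane is better
                        costs[y][x] = candidate
                        planes[y][x] = planes[ny][nx]
    return costs, planes
```

Because the pass sweeps left-to-right and top-to-bottom, a good plane found at one pixel can spread across the whole image in a single pass.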
Preferably, S2 further includes step S22: analyzing the direction of real-object shadows:
S221. Divide the real objects in the image into N texture regions of the same size, where N ≥ 2;
S222. Generate a target view frustum with the eye position as the viewpoint;
S223. Divide the target view frustum from near to far into N sub-frustums, where the extent of each sub-frustum increases in order from near to far;
S224. Generate the rendering parameters of the sub-frustums, the rendering parameters including a view matrix and a projection matrix;
S225. According to the rendering parameters of the sub-frustums, render the real objects of the image corresponding to each sub-frustum in a texture region, and obtain the shadow direction of the real objects with the light source position as the viewpoint.
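The near-to-far split of step S223 can be illustrated as follows; the geometric growth ratio is an assumption, since the patent only requires the sub-frustum extents to increase from near to far:

```python
def split_frustum(near: float, far: float, n: int, ratio: float = 2.0):
    """Split the depth range [near, far] into n sub-frustum slices whose
    extents grow from near to far (step S223). Each slice is `ratio` times
    as deep as the previous one -- one plausible way to satisfy
    'sequentially increased'."""
    total = sum(ratio ** i for i in range(n))   # total weight of all slices
    unit = (far - near) / total
    bounds, start = [], near
    for i in range(n):
        end = start + unit * ratio ** i
        bounds.append((start, end))
        start = end
    bounds[-1] = (bounds[-1][0], far)           # guard against rounding drift
    return bounds
```

Each returned (near, far) pair would then get its own view and projection matrices in step S224.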
Preferably, step S4 further includes the following steps:
S41. Determine the lit face and the shadowed face of the virtual object according to the projection direction analyzed in S22;
S42. Configure a gradient lighting color in the transition region between the lit face and the shadowed face;
S43. Determine the color of each pixel of the lit face according to the lighting color received by that pixel;
S44. Determine the color of each pixel of the shadowed face according to the lighting color received by that pixel.
Preferably, configuring a gradient lighting color in the transition region between the lit face and the shadowed face includes:
selecting a sub-interval [-x1, x2] from the dot-product value range [-1, 1], where x1 and x2 are positive numbers less than 1;
selecting a transition region on the surface of the virtual object such that the dot product corresponding to each pixel in the transition region lies within the sub-interval [-x1, x2];
making the first lighting color received on one side of the transition region grade into the second lighting color received on the other side, where the first lighting color is the fixed lighting color received by the lit face outside the transition region and the second lighting color is the fixed lighting color received by the shadowed face outside the transition region.
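The gradient lighting in the transition region can be sketched as an interpolation over the dot-product interval [-x1, x2]; linear blending is an assumption, as the patent does not specify the gradation curve:

```python
def transition_color(dot, lit_color, shadow_color, x1=0.2, x2=0.2):
    """Blend between the fixed lit-side and shadow-side lighting colors
    across the transition band where the normal-to-light dot product falls
    in [-x1, x2]; outside the band the fixed color of each face is kept."""
    if dot >= x2:
        return lit_color
    if dot <= -x1:
        return shadow_color
    t = (dot + x1) / (x1 + x2)  # 0 at the shadow edge, 1 at the lit edge
    return tuple(s + (l - s) * t for l, s in zip(lit_color, shadow_color))
```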
Preferably, determining the color of each pixel of the lit face according to the lighting color it receives includes:
determining the shadow factor of each lit pixel in the lit face, where the magnitude of the shadow factor is related to the extent to which the lit pixel is occluded by shadow;
determining the color of the lit pixel from its own texture color, the lighting color it receives, the ambient intrinsic color, and the shadow factor.
Preferably, determining the shadow factor of each lit pixel in the lit face includes:
selecting M reference pixels around the lit pixel, where M ≥ 2;
obtaining the scene depth information via step S2, and determining whether the lit pixel and the reference pixels lie in shadow;
if none lies in shadow, the shadow factor of the lit pixel is 0; if m of the pixels lie in shadow, the shadow factor of the lit pixel is determined from the default shading value of each of the m pixels, where 0 < m ≤ M + 1.
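A sketch of the shadow-factor rule, assuming the M reference pixels plus the pixel itself are tested (which is why 0 < m ≤ M + 1) and that the per-pixel default shading values are simply averaged; the patent leaves the exact combination open:

```python
def shadow_factor(in_shadow_flags, default_shading=0.5):
    """Shadow factor for one lit pixel from the shadow test of its M
    reference pixels plus itself. No pixel in shadow gives factor 0;
    otherwise the m default shading values are averaged over all samples,
    a percentage-closer-filtering-style softening (an assumed rule)."""
    m = sum(in_shadow_flags)            # number of samples in shadow
    if m == 0:
        return 0.0
    return m * default_shading / len(in_shadow_flags)
```

The resulting factor rises smoothly with the fraction of occluded samples, which avoids hard shadow edges on the virtual object.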
Preferably, in step S1, the data transmission between the web camera and the image processing server uses instant encrypted communication. Before the instant encrypted communication, the web camera temporarily generates a session key WK; the image processing server obtains the identity public key and the basic key-agreement public key of the web camera and then, forming public/private key pairs with the image processing server, negotiates and computes the parent rolling representative initial key N_CC. The detailed process is as follows:
From T_SKA/T_PKA and NB_SKB/NB_PKB, compute the first part of the web camera key agreement by elliptic-curve scalar multiplication:
Web camera key agreement part one: Part1 = DP_SM2(T_SKA, NB_PKB);
From NB_SKA/NB_PKA and T_SKB/T_PKB, compute the second part of the web camera key agreement by elliptic-curve scalar multiplication:
Web camera key agreement part two: Part2 = DP_SM2(NB_SKA, T_PKB);
From NB_SKA/NB_PKA and NB_SKB/NB_PKB, compute the sender-side third part of the web camera key agreement by elliptic-curve scalar multiplication:
Web camera key agreement part three: Part3 = DP_SM2(NB_SKA, NB_PKB);
Concatenate Part1, Part2 and Part3 into the web camera key material KM:
Web camera key material: KM = Part1 || Part2 || Part3;
Compress the web camera key material KM together with the first character string, using the SM3 compression algorithm, into the 256-bit web camera parent rolling representative initial key N_CC:
Initial key: N_CC = H_SM3(KM || first character string);
By the properties of elliptic-curve scalar multiplication, through the above computation the web camera and the image processing server both compute the same parent rolling representative initial key N_CC.
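The final combination step (KM = Part1 || Part2 || Part3, hashed to the 256-bit N_CC) can be sketched as below. SHA-256 stands in for SM3, which is not in the Python standard library; both produce a 256-bit digest, but a real implementation of this scheme would use an SM3 routine:

```python
import hashlib

def derive_initial_key(part1: bytes, part2: bytes, part3: bytes,
                       first_string: bytes) -> bytes:
    """Combine the three SM2 key-agreement shares into KM and hash the
    result with the first character string to obtain the parent rolling
    representative initial key N_CC (SHA-256 substituted for SM3)."""
    km = part1 + part2 + part3                        # KM = Part1 || Part2 || Part3
    return hashlib.sha256(km + first_string).digest()  # N_CC, 256 bits
```

Since both sides compute identical Part1-Part3 values via elliptic-curve scalar multiplication, feeding the same inputs to this function yields the same N_CC on the camera and the server.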
The present invention has the following advantages and beneficial effects: (1) the invention analyzes the real objects in an image to obtain their three-dimensional features, providing accurate three-dimensional coefficients for adaptively adjusting the fused virtual object; (2) the application proposes a new method for extracting three-dimensional depth from real images, which computes the similarity of pixels within each real object and takes the continuity of surfaces into account, improving resolution and avoiding the excessive algorithmic complexity of current three-dimensional depth extraction; (3) the invention obtains the projection direction of real objects in the image using sub-frustums, which is fast and resource-efficient; (4) the image data communication of the invention exchanges data through an instant secure communication mode, ensuring the security of data transmission and preventing information leakage.
Description of the drawings
Fig. 1 shows a block diagram of the image processing system on which the three-dimensional image processing method of the present invention is based;
Fig. 2 shows a flow chart of the three-dimensional image processing method of the present invention;
Fig. 3 shows a flow chart of analyzing the three-dimensional features of real objects in an image according to the present invention;
Fig. 4 shows a flow chart of analyzing the projection direction of real objects according to the present invention;
Fig. 5 shows a flow chart of generating a virtual object according to the present invention.
Specific embodiment
For a better understanding of the innovations of the present invention, its specific embodiments are explained below with reference to the accompanying drawings.
Fig. 1 shows a block diagram of the image processing system on which the three-dimensional image processing method of the present invention is based. The system includes a web camera 1 for acquiring real-scene image information and transmitting it over the network to an image processing server 2. The image processing server 2 includes: an image storage module 21 for storing the images transmitted by the camera; a three-dimensional depth analysis module 22 for analyzing the depth information of the real images in the image; a projection direction analysis module 23 for analyzing the projection direction in the image; a virtual object generation module 24 for placing a virtual object into the image; an image fusion module 25 for fusing the real objects and the virtual object in the image; and an image output module 26 for outputting the image to a display device 3.
Fig. 2 shows a flow chart of a three-dimensional image processing method according to the present invention. The method specifically comprises the following steps:
S1. A web camera remotely collects image information and sends it to an image processing server;
S2. The image processing server identifies the three-dimensional features of all real objects in the image; preferably, the three-dimensional features include three-dimensional depth information and projection direction.
S3. A virtual object is placed into the image;
S4. According to the three-dimensional features of the real objects surrounding the virtual object, the size and three-dimensional information of the virtual object are adaptively adjusted.
Referring to Fig. 3, step S2 includes S21: calculating the similarity of adjacent pixels within each real object in the image, deriving the depth of each pixel therefrom, and obtaining the three-dimensional depth information of the real object.
Preferably, calculating the similarity of adjacent pixels within each real object in the image uses a multi-loop algorithm of adjacent-pixel propagation and random optimization, including the steps of initializing horizontal pixels to random planes and iteratively computing the similarity of adjacent pixels.
Initializing horizontal pixels to random planes includes the steps:
S211. Initialize each horizontal pixel to a random plane;
S212. Set the initial depth of each pixel to a random value, and set the surface normal vector at each pixel to a random unit vector.
It is further preferred that initializing horizontal pixels to random planes includes the following process:
The depth coordinate of a horizontal pixel on its plane is represented by formula (1):
z = f1·px + f2·py + f3 (1)
where z is the depth coordinate of the horizontal pixel, px and py are its coordinates on the random plane, and f1, f2 and f3 are given by formulas (2-1), (2-2) and (2-3) respectively:
f1 = -n1/n3 (2-1)
f2 = -n2/n3 (2-2)
f3 = (n1·x0 + n2·y0 + n3·z0)/n3 (2-3)
In formulas (2-1), (2-2) and (2-3), n1, n2 and n3 are scalars, the components of the normal vector of formula (3), which denotes the plane with the minimum aggregation cost among all possible planes; x0 and y0 are the coordinate values of the initialized horizontal pixel, and z0 is the initial depth value of the initialized horizontal pixel:
(n1, n2, n3) = argmin over all candidate planes f of m(x0, y0, f) (3)
where the aggregation cost m is given by formula (4):
m(x0, y0, f) = Σ over q in Wp of w(p, q)·E(q, f) (4)
In formula (4), w implements adaptive weighting and is given by formula (5); E represents the similarity computation factor and is given by formula (6); ∇ represents a gradient value; Wp denotes a square window centered on p:
w(p, q) = exp(-||Ip - Iq||/γ) (5)
In formula (5), ||Ip - Iq|| represents the distance between the two adjacent pixels p and q, where p is the horizontal pixel, q is an adjacent pixel in the same plane as p, and γ is a user-set constant:
E = α·||Ii - Ij|| + (1 - α)·||∇Ii - ∇Ij|| (6)
In formula (6), I is the intensity of a pixel in a real object and the subscripts i and j are the indexes of the real objects; Ii and Ij represent the intensities of corresponding pixels in the i-th and j-th real objects, projected to the same spatial point, their coordinates being computed by formula (7); ||Ii - Ij|| is the Manhattan distance between the colors Ii and Ij in RGB space; ∇Ii and ∇Ij are the gray-value gradients of the pixels; ||∇Ii - ∇Ij|| represents the absolute difference of the gray gradients computed at Ii and Ij; α is a dimensionless weight factor used to balance the influence of the color term and the gradient term;
In formula (7), ui is the local coordinate of the pixel corresponding to the point with coordinates y and z in each real object.
Preferably, iteratively computing the similarity of adjacent pixels includes the steps:
S213. Initialize a horizontal pixel on a random plane, calculate its depth coordinate and normal vector, and compute its aggregation cost; take this aggregation cost as the reference aggregation cost;
S214. Compute the aggregation cost of any adjacent pixel lying in the same plane as the horizontal pixel of step S213;
S215. Compare the reference aggregation cost of step S213 with the adjacent pixel's aggregation cost of step S214;
S216. Take the pixel whose aggregation cost is smaller in step S215 as the new reference value;
S217. Set the pixel corresponding to the new reference value of step S216 as the upper-left neighbor of the pixel it was compared with;
S218. Check the condition: the depth value corresponding to the new reference value of step S216 lies within the permitted maximum range;
S219. If the condition of step S218 holds, loop through steps S213 to S218;
S220. If the condition of step S218 does not hold, take the pixel of step S217 from the last loop iteration as the leftmost pixel of the image;
S221. Starting from step S220, continue the iteration toward the bottom-right of the image;
S222. Compute the calculation count of each pixel from the number of iterations of step S221.
Iteratively computing the similarity of adjacent pixels includes a spatial propagation step and a plane refinement step.
In the spatial propagation step, neighboring pixels are assumed to lie in the same plane; the costs of the different cases are first evaluated by formula (4).
In formula (4), p represents the current pixel, fp is the plane vector of its corresponding plane, and q is an adjacent pixel of p; at p = (x0, y0) the cost is computed with fp and fq respectively, to evaluate both cases; the check condition is shown in formula (8):
m(x0, y0, fq) < m(x0, y0, fp); (8)
Both sides of formula (8) are obtained by formula (4);
If the inequality of formula (8) holds, fq is accepted as the new plane vector of p, i.e. fp = fq;
In odd iterations, q ranges over the left and upper boundaries;
In even iterations, q ranges over the right and lower boundaries;
In the plane refinement step, fp is converted to a normal vector np; two parameters Δz and Δn are defined to limit the maximum allowed change of z0 and n respectively; z0' is computed as z0' = z0 + Δz, where Δz lies in [-Δzmax, Δzmax], and n' = u(n + Δn), where u(·) computes the unit vector and Δn lies in [-Δnmax, Δnmax];
Finally, a new plane fp' is obtained from p and n'; if m(x0, y0, fp') < m(x0, y0, fp), then fp = fp';
The plane refinement step starts from Δzmax = maxdisp/2, where maxdisp is the maximum allowed disparity, and Δnmax = 1; after each refinement, the parameters are updated to Δzmax = Δzmax/2 and Δnmax = Δnmax/2, until Δzmax < resolution/2, the minimized resolution; odd iterations start from the left of the image and proceed toward the bottom-right, followed by even iterations;
The similarity of adjacent pixels is obtained after the iterations, and from it the depth of the real object.
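The coarse-to-fine refinement schedule described above (Δzmax starting at maxdisp/2 and halving together with Δnmax until Δzmax falls below half the resolution) can be sketched as:

```python
def refinement_schedule(maxdisp: float, resolution: float):
    """Enumerate the (dz_max, dn_max) bounds used in successive plane
    refinement rounds: start at dz_max = maxdisp/2 and dn_max = 1, halve
    both after each round, stop once dz_max < resolution/2."""
    dz, dn = maxdisp / 2.0, 1.0
    steps = []
    while dz >= resolution / 2.0:
        steps.append((dz, dn))
        dz, dn = dz / 2.0, dn / 2.0
    return steps
```

Each round perturbs the current plane's depth and normal within the listed bounds, so early rounds make large corrections and late rounds only fine-tune.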
Referring to Fig. 4, S2 further includes step S22: analyzing the direction of real-object shadows:
S221. Divide the real objects in the image into N texture regions of the same size, where N ≥ 2;
S222. Generate a target view frustum with the eye position as the viewpoint;
S223. Divide the target view frustum from near to far into N sub-frustums, where the extent of each sub-frustum increases in order from near to far;
S224. Generate the rendering parameters of the sub-frustums, the rendering parameters including a view matrix and a projection matrix;
S225. According to the rendering parameters of the sub-frustums, render the real objects of the image corresponding to each sub-frustum in a texture region, and obtain the shadow direction of the real objects with the light source position as the viewpoint.
Referring to Fig. 5, step S4 further includes the following steps:
S41. Determine the lit face and the shadowed face of the virtual object according to the projection direction analyzed in S22;
S42. Configure a gradient lighting color in the transition region between the lit face and the shadowed face;
S43. Determine the color of each pixel of the lit face according to the lighting color received by that pixel;
S44. Determine the color of each pixel of the shadowed face according to the lighting color received by that pixel.
Preferably, configuring a gradient lighting color in the transition region between the lit face and the shadowed face includes:
selecting a sub-interval [-x1, x2] from the dot-product value range [-1, 1], where x1 and x2 are positive numbers less than 1;
selecting a transition region on the surface of the virtual object such that the dot product corresponding to each pixel in the transition region lies within the sub-interval [-x1, x2];
making the first lighting color received on one side of the transition region grade into the second lighting color received on the other side, where the first lighting color is the fixed lighting color received by the lit face outside the transition region and the second lighting color is the fixed lighting color received by the shadowed face outside the transition region.
Preferably, determining the color of each pixel of the lit face according to the lighting color it receives includes:
determining the shadow factor of each lit pixel in the lit face, where the magnitude of the shadow factor is related to the extent to which the lit pixel is occluded by shadow;
determining the color of the lit pixel from its own texture color, the lighting color it receives, the ambient intrinsic color, and the shadow factor.
Preferably, determining the shadow factor of each lit pixel in the lit face includes:
selecting M reference pixels around the lit pixel, where M ≥ 2;
obtaining the scene depth information via step S2, and determining whether the lit pixel and the reference pixels lie in shadow;
if none lies in shadow, the shadow factor of the lit pixel is 0; if m of the pixels lie in shadow, the shadow factor of the lit pixel is determined from the default shading value of each of the m pixels, where 0 < m ≤ M + 1.
Preferably, in step S1, the data transmission between the web camera and the image processing server uses instant encrypted communication. Before the instant encrypted communication, the web camera temporarily generates a session key WK; the image processing server obtains the identity public key and the basic key-agreement public key of the web camera and then, forming public/private key pairs with the image processing server, negotiates and computes the parent rolling representative initial key N_CC. The detailed process is as follows:
From T_SKA/T_PKA and NB_SKB/NB_PKB, compute the first part of the web camera key agreement by elliptic-curve scalar multiplication:
Web camera key agreement part one: Part1 = DP_SM2(T_SKA, NB_PKB);
From NB_SKA/NB_PKA and T_SKB/T_PKB, compute the second part of the web camera key agreement by elliptic-curve scalar multiplication:
Web camera key agreement part two: Part2 = DP_SM2(NB_SKA, T_PKB);
From NB_SKA/NB_PKA and NB_SKB/NB_PKB, compute the sender-side third part of the web camera key agreement by elliptic-curve scalar multiplication:
Web camera key agreement part three: Part3 = DP_SM2(NB_SKA, NB_PKB);
Concatenate Part1, Part2 and Part3 into the web camera key material KM:
Web camera key material: KM = Part1 || Part2 || Part3;
Compress the web camera key material KM together with the first character string, using the SM3 compression algorithm, into the 256-bit web camera parent rolling representative initial key N_CC:
Initial key: N_CC = H_SM3(KM || first character string);
By the properties of elliptic-curve scalar multiplication, through the above computation the web camera and the image processing server both compute the same parent rolling representative initial key N_CC.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or in software depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.
Although the invention has been described above with reference to the limited embodiments and the drawings, persons with ordinary knowledge in the art can make various modifications and variations based on the above description. For example, appropriate results may also be achieved if the described techniques are performed in a different order, and/or if the components of the described systems, structures, devices, circuits and the like are combined in a different form, or are replaced or substituted by other components or equivalents. For those of ordinary skill in the art to which the present invention belongs, any equivalent substitutions or obvious modifications made without departing from the inventive concept, with identical performance or use, shall all be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A three-dimensional image processing method, comprising the following steps:
S1. a web camera remotely collects image information and sends it to an image processing server;
S2. the image processing server identifies the three-dimensional features of all real objects in the image;
S3. a virtual object is placed in the image;
S4. according to the three-dimensional features of the real objects surrounding the virtual object, the size and three-dimensional information of the virtual object are adaptively adjusted.
2. The method according to claim 1, wherein the three-dimensional information features include three-dimensional depth information and a projection direction.
3. The method according to claim 2, wherein step S2 includes S21: obtaining the three-dimensional depth information of a real object by calculating the similarity of adjacent pixels in each real object in the image and thereby deriving the depth of each pixel.
4. The method according to claim 3, wherein the calculation of the similarity of adjacent pixels in each real object in the image uses a multi-round algorithm of adjacent-pixel propagation and random optimization, including the steps of initializing horizontal pixels as random planes and iteratively calculating the similarity of adjacent pixels.
5. The method according to claim 4, wherein initializing horizontal pixels as random planes includes the steps of:
S211. initializing the horizontal pixels as random planes;
S212. setting the initial depth of each pixel to a random value, and setting the surface normal vector of each pixel to a random unit vector.
6. The method according to claim 5, wherein initializing horizontal pixels as random planes includes the following process:
the plane through the depth coordinate of the horizontal pixel is represented by formula (1),
z = f1·px + f2·py + f3 (1)
wherein z is the depth coordinate of the horizontal pixel, px and py are its coordinates in the random plane, and f1, f2 and f3 are given by formulas (2-1), (2-2) and (2-3) respectively,
f1=-n1/n3 (2-1)
f2=-n2/n3 (2-2)
f3=(n1·x0+n2·y0+n3·z0)/n3 (2-3)
In formulas (2-1), (2-2) and (2-3), n1, n2 and n3 are scalars, being the components of the normal vector shown in formula (3), which represents all possible planes where the minimum aggregation cost may lie; x0 and y0 are the initialized coordinate values of the horizontal pixel, and z0 is the initialized depth value of the horizontal pixel.
In formula (3), m is provided by formula (4).
In formula (4), w is used to implement adaptive weighting and is provided by formula (5); E denotes the similarity calculation factor and is provided by formula (6); ∇ denotes the gradient value; and Wp denotes a square window centred on p.
In formula (5), ||Ip - Iq|| denotes the distance between two adjacent pixels p and q, where p is the horizontal pixel and q is an adjacent pixel in the same plane as p.
In formula (6), I is the pixel intensity in a real object, and the subscripts i, j index real objects; Ii and Ij denote the intensities of the corresponding pixels in the i-th and j-th real objects respectively; Ii and Ij are projected onto the same spatial point, and their coordinates are calculated by formula (7); ||Ii - Ij|| is the Manhattan colour distance between Ii and Ij in RGB space; ∇Ii and ∇Ij are the grey-value gradients of the pixels; |∇Ii - ∇Ij| denotes the absolute difference of the grey gradients computed at Ii and Ij; and α is a dimensionless weight factor used to balance the influence of the colour and gradient terms.
In formula (7), ui is the local coordinate, in each real object, of the pixel corresponding to the point with coordinates y and z.
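The random-plane initialization of claims 5 and 6 can be sketched as follows. This is a minimal illustration, assuming pixel coordinates and a depth range as inputs; the function names and bounds are not from the patent.

```python
import math
import random

def init_random_plane(x0, y0, max_depth):
    """Step S212 / formulas (2-1)-(2-3): give pixel (x0, y0) a random initial
    depth z0 and a random unit normal (n1, n2, n3), then convert them to the
    plane coefficients f1, f2, f3 of formula (1)."""
    z0 = random.uniform(0.0, max_depth)
    while True:  # re-draw until the normal is a usable unit vector (n3 != 0)
        n = [random.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(c * c for c in n))
        if norm > 1e-9 and abs(n[2] / norm) > 1e-3:
            n1, n2, n3 = (c / norm for c in n)
            break
    f1 = -n1 / n3                             # formula (2-1)
    f2 = -n2 / n3                             # formula (2-2)
    f3 = (n1 * x0 + n2 * y0 + n3 * z0) / n3   # formula (2-3)
    return f1, f2, f3

def plane_depth(f1, f2, f3, px, py):
    """Formula (1): z = f1*px + f2*py + f3."""
    return f1 * px + f2 * py + f3

random.seed(42)
f1, f2, f3 = init_random_plane(10, 20, max_depth=100.0)
z_at_pixel = plane_depth(f1, f2, f3, 10, 20)  # recovers the sampled depth z0
```

Evaluating formula (1) at (x0, y0) returns the sampled depth z0, confirming the coefficient conversion is consistent with the plane equation.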
7. The method according to claim 6, wherein iteratively calculating the similarity of adjacent pixels includes the steps of:
S213. initializing a horizontal pixel in a random plane, calculating its depth coordinate and vector value, computing its aggregation cost, and taking this aggregation cost as the reference aggregation cost;
S214. calculating the aggregation cost of any adjacent pixel in the same plane as the horizontal pixel of step S213;
S215. comparing the reference aggregation cost of step S213 with the adjacent pixel's aggregation cost of step S214;
S216. taking the pixel with the smaller aggregation cost in step S215 as the new reference value;
S217. setting the pixel corresponding to the new reference value of step S216 to the upper-left neighbour of the pixel corresponding to the compared reference value;
S218. setting the condition: the depth value corresponding to the new reference value of step S216 lies within the maximum permitted range;
S219. if the condition of step S218 holds, repeating steps S213 to S218 in a loop;
S220. if the condition of step S218 does not hold, taking the pixel of step S217 in the last loop as the leftmost pixel of the image;
S221. on the basis of step S220, iterating downward toward the bottom right of the image;
S222. calculating the number of calculations of each pixel according to the number of iterations of step S221.
8. The method according to any one of claims 2 to 7, wherein step S2 further includes step S22: analyzing to obtain the shadow direction of the real objects:
S221. dividing the real objects in the image into N texture regions of the same size, where N ≥ 2;
S222. generating a target view frustum with the eye position as the viewpoint;
S223. dividing the target view frustum from near to far into N sub-frusta, where the regions of the sub-frusta increase in size from near to far;
S224. generating the rendering parameters of the sub-frusta, the rendering parameters including a view matrix and a projection matrix;
S225. rendering, according to the rendering parameters of each sub-frustum, the image of the real object corresponding to that sub-frustum in a texture region, to obtain the shadow direction of the real object with the light source position as the viewpoint.
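The claim does not fix how the N sub-frusta are sized, only that their depth regions grow from near to far. A logarithmic split, as commonly used for cascaded shadow maps, is one scheme that satisfies this; the sketch below is an assumption, not the patent's prescribed method.

```python
def split_frustum(near, far, n):
    """Divide the depth range [near, far] into n sub-frusta whose depth
    spans grow from near to far (logarithmic split, as in cascaded
    shadow maps)."""
    ratio = (far / near) ** (1.0 / n)
    bounds = [near * ratio ** i for i in range(n + 1)]
    return list(zip(bounds[:-1], bounds[1:]))

subs = split_frustum(1.0, 100.0, 4)
sizes = [b - a for a, b in subs]  # strictly increasing from near to far
```

Each sub-frustum would then receive its own view and projection matrix (step S224) and be rendered into its texture region (step S225).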
9. The method according to claim 8, wherein step S4 further comprises the following steps:
S41. determining the lit face and the shadowed face of the virtual object according to the projection direction analyzed in S22;
S42. configuring a gradient lighting colour in the transition region between the lit face and the shadowed face;
S43. determining the colour of each pixel of the lit face according to the lighting colour received by that pixel;
S44. determining the colour of each pixel of the shadowed face according to the lighting colour received by that pixel;
wherein configuring a gradient lighting colour in the transition region between the lit face and the shadowed face includes:
choosing an interval [-x1, x2] from the dot-product value range [-1, 1], where x1 and x2 are positive numbers less than 1;
selecting a transition region on the surface of the virtual object, wherein the dot product corresponding to each pixel in the transition region lies within the interval [-x1, x2];
making the first lighting colour received on one side of the transition region change gradually into the second lighting colour received on the other side, wherein the first lighting colour is the fixed lighting colour received by the lit face outside the transition region, and the second lighting colour is the fixed lighting colour received by the shadowed face outside the transition region.
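The transition-region scheme amounts to thresholded interpolation on the normal·light dot product: fixed colours outside [-x1, x2], a linear blend inside. A sketch under assumed RGB colours (all names and values here are illustrative):

```python
def shade(dot, lit_color, shadow_color, x1, x2):
    """Pick a pixel colour from the normal-light dot product.
    Outside the interval [-x1, x2] the fixed lit/shadow colour applies;
    inside it, the colour blends linearly from the shadow side (dot = -x1)
    to the lit side (dot = x2)."""
    if dot >= x2:
        return lit_color
    if dot <= -x1:
        return shadow_color
    t = (dot + x1) / (x1 + x2)  # 0 at the shadow edge, 1 at the lit edge
    return tuple(s + t * (l - s) for l, s in zip(lit_color, shadow_color))

LIT, SHADOW = (255.0, 240.0, 220.0), (40.0, 40.0, 60.0)  # assumed colours
mid = shade(0.0, LIT, SHADOW, 0.2, 0.2)  # halfway through the transition
```

Pixels well inside the lit or shadowed face keep their fixed lighting colour, while the narrow band around the terminator gets the smooth gradient of step S42.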
10. The method according to claim 1, wherein in step S1 the data transmission between the web camera and the image processing server uses instant encrypted communication; before the instant encrypted communication, the web camera temporarily generates a session key WK; the image processing server obtains the identity public key of the web camera and the basic public key for key agreement, which, together with the image processing server's own public/private key pairs, are used to negotiate and calculate the father-rolling initial key N_CC; the detailed process is as follows:
using T_SKA/T_PKA and NB_SKB/NB_PKB, the first part Part1 of the web camera key agreement is calculated by the elliptic-curve scalar multiplication algorithm;
web camera key agreement first part: Part1 = DP_SM2(T_SKA, NB_PKB);
using NB_SKA/NB_PKA and T_SKB/T_PKB, the second part Part2 of the web camera key agreement is calculated by the elliptic-curve scalar multiplication algorithm;
web camera key agreement second part: Part2 = DP_SM2(NB_SKA, T_PKB);
using NB_SKA/NB_PKA and NB_SKB/NB_PKB, the web camera calculates the sender's third key agreement part Part3 by the elliptic-curve scalar multiplication algorithm;
web camera key agreement third part: Part3 = DP_SM2(NB_SKA, NB_PKB);
the web camera key agreement parts Part1, Part2 and Part3 are concatenated into the web camera key component KM;
web camera key component: KM = Part1 || Part2 || Part3;
the web camera key component KM and the first character string are compressed with the SM3 algorithm into the 256-bit father-rolling initial key N_CC of the web camera;
initial key: N_CC = H_SM3(KM || first character string);
by the properties of the elliptic-curve scalar multiplication algorithm, through this calculation process the web camera and the image processing server both compute the same father-rolling initial key N_CC.
CN201711421179.8A 2017-12-25 2017-12-25 A kind of three dimensional image processing method Pending CN108154549A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711421179.8A CN108154549A (en) 2017-12-25 2017-12-25 A kind of three dimensional image processing method


Publications (1)

Publication Number Publication Date
CN108154549A true CN108154549A (en) 2018-06-12

Family

ID=62465860

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711421179.8A Pending CN108154549A (en) 2017-12-25 2017-12-25 A kind of three dimensional image processing method

Country Status (1)

Country Link
CN (1) CN108154549A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109856703A (en) * 2019-03-25 2019-06-07 大夏数据服务有限公司 A kind of weather monitoring station data calculating analysis system
WO2023142264A1 (en) * 2022-01-28 2023-08-03 歌尔股份有限公司 Image display method and apparatus, and ar head-mounted device and storage medium
TWI834741B (en) * 2018-10-17 2024-03-11 安地卡及巴布達商區塊鏈控股有限公司 Computer-implemented system and method including public key combination verification

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160019718A1 (en) * 2014-07-16 2016-01-21 Wipro Limited Method and system for providing visual feedback in a virtual reality environment
CN105306492A (en) * 2015-11-25 2016-02-03 成都三零瑞通移动通信有限公司 Asynchronous key negotiation method and device aiming at secure instant messaging
CN106230920A (en) * 2016-07-27 2016-12-14 吴东辉 A kind of method and system of AR
CN107274476A (en) * 2017-08-16 2017-10-20 城市生活(北京)资讯有限公司 The generation method and device of a kind of echo
CN107330930A (en) * 2017-06-27 2017-11-07 晋江市潮波光电科技有限公司 Depth of 3 D picture information extracting method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Baoquan et al., "Virtual-Real Fusion and Human-Machine Intelligence Integration in Mixed Reality", Science China: Information Sciences *


Similar Documents

Publication Publication Date Title
CN100594519C (en) Method for real-time generating reinforced reality surroundings by spherical surface panoramic camera
Nalpantidis et al. Biologically and psychophysically inspired adaptive support weights algorithm for stereo correspondence
CN101610425B (en) Method for evaluating stereo image quality and device
CN102685369B (en) Eliminate the method for right and left eyes image ghost image, ghost canceller and 3D player
CN112233165B (en) Baseline expansion implementation method based on multi-plane image learning visual angle synthesis
CN103325120A (en) Rapid self-adaption binocular vision stereo matching method capable of supporting weight
CN104599284A (en) Three-dimensional facial reconstruction method based on multi-view cellphone selfie pictures
CN104010180B (en) Method and device for filtering three-dimensional video
CN108154549A (en) A kind of three dimensional image processing method
CN104599317A (en) Mobile terminal and method for achieving 3D (three-dimensional) scanning modeling function
CN103458259B Method, apparatus and system for detecting human eye fatigue caused by 3D video
CN101729920A (en) Method for displaying stereoscopic video with free visual angles
Templin et al. Highlight microdisparity for improved gloss depiction
CN107360416A (en) Stereo image quality evaluation method based on local multivariate Gaussian description
CN109218706A (en) A method of 3 D visual image is generated by single image
Mulligan et al. Real time trinocular stereo for tele-immersion
CN111079673A (en) Near-infrared face recognition method based on naked eye three-dimension
CN106991715A (en) Grating prism Three-dimensional Display rendering intent based on optical field acquisition
Finlayson et al. Lookup-table-based gradient field reconstruction
Zhou et al. Single-view view synthesis with self-rectified pseudo-stereo
Vangorp et al. Depth from HDR: depth induction or increased realism?
US12081722B2 (en) Stereo image generation method and electronic apparatus using the same
EP4283566A2 (en) Single image 3d photography with soft-layering and depth-aware inpainting
Seitner et al. Trifocal system for high-quality inter-camera mapping and virtual view synthesis
Cheng et al. 51.3: An Ultra‐Low‐Cost 2‐D/3‐D Video‐Conversion System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180612
