CN109922331A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN109922331A
Authority
CN
China
Prior art keywords
image
depth map
lens group
depth
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910037310.3A
Other languages
Chinese (zh)
Other versions
CN109922331B (en)
Inventor
杨萌
戴付建
赵烈烽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Sunny Optics Co Ltd
Original Assignee
Zhejiang Sunny Optics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Sunny Optics Co Ltd
Priority to CN201910037310.3A
Publication of CN109922331A
Application granted
Publication of CN109922331B
Legal status: Active (granted)

Abstract

The present invention provides an image processing method and device. The method comprises: photographing a scene with a first lens group, a second lens group, and a third lens group to obtain a first image, a second image, and a third image, respectively; determining three depth maps from the offsets of pixels between pairs of the first image, the second image, and the third image; combining one or more of the three depth maps with received image information to generate virtual media information; and combining the virtual media information with one of the first image, the second image, and the third image. The invention thus solves the problem in the related art of how to complete depth information processing with the mobile terminal a user carries, without additional equipment, realizing virtual-reality combination on a mobile terminal without extra hardware and improving the user experience.

Description

Image processing method and device
Technical field
The present invention relates to the field of communications, and in particular to an image processing method and device.
Background art
Augmented reality (AR) is a technology that superimposes computer-generated virtual media information, including video, images, text, and sound, onto visual information from the real world. One important application field of the technology is to help users experience, within the current scene, what cannot be reached in physical distance or in time, increasing or enhancing the user's perception of information in the real-world scene. However, AR technology may require a dedicated system or hardware device, such as a head-mounted display, smart glasses, or a computer with a discrete graphics card; the cost or usage environment these require greatly limits the scenarios in which AR can be used. In particular, the depth information processing in AR is the key to an AR system or device fusing a virtual scene with a real scene, so completing depth information processing with the mobile terminal a user already carries, without additional equipment, is still a major problem to be solved.
For the problem in the related art of how to complete depth information processing with a user's mobile terminal without additional equipment, no solution has yet been proposed.
Summary of the invention
Embodiments of the present invention provide an image processing method and device, to at least solve the problem in the related art of how to complete depth information processing with a user's mobile terminal without additional equipment.
According to one embodiment of the present invention, an image processing method is provided, comprising:
photographing a scene with a first lens group, a second lens group, and a third lens group to obtain a first image, a second image, and a third image, respectively;
determining a first depth map from the offset of pixels between the first image and the second image, determining a second depth map from the offset of pixels between the second image and the third image, and determining a third depth map from the offset of pixels between the first image and the third image;
combining one or more of the first depth map, the second depth map, and the third depth map with received image information to generate virtual media information;
combining the virtual media information with one of the first image, the second image, and the third image.
Optionally, combining one or more of the first depth map, the second depth map, and the third depth map with the received image information to generate the virtual media information includes:
determining, among the first depth map, the second depth map, and the third depth map, the depth map with the smallest error, obtaining depth information of environmental features from that depth map, and combining the depth information with the received image information to generate the virtual media information; or
determining, among the first depth map, the second depth map, and the third depth map, the depth map with the smallest error, supplementing its higher-error portions with the corresponding lower-error portions of the other depth maps, obtaining depth information of environmental features from the supplemented depth map, and combining the depth information with the received image information to generate the virtual media information.
Optionally, combining one or more of the first depth map, the second depth map, and the third depth map with the received image information to generate the virtual media information includes:
determining, among the first depth map, the second depth map, and the third depth map, the depth map with the highest clarity, obtaining depth information of environmental features from that depth map, and combining the depth information with the received image information to generate the virtual media information; or
determining, among the first depth map, the second depth map, and the third depth map, the depth map with the highest clarity, supplementing its higher-error portions with the corresponding lower-error portions of the other depth maps, obtaining depth information of environmental features from the supplemented depth map, and combining the depth information with the received image information to generate the virtual media information.
Optionally, combining one or more of the first depth map, the second depth map, and the third depth map with the received image information to generate the virtual media information includes:
adjusting the size, rotation direction, and moving direction of the received image information according to one or more of the first depth map, the second depth map, and the third depth map;
establishing a three-dimensional scene according to one or more of the first depth map, the second depth map, and the third depth map;
positioning the adjusted image information in the three-dimensional scene to generate the virtual media information.
Optionally, after photographing the scene with the first lens group, the second lens group, and the third lens group to obtain the first image, the second image, and the third image, the method further includes:
adjusting the brightness and contrast of the first image, the second image, and the third image.
Optionally, the offset is the difference between the coordinates of the same pixel in two images; or
the offset is the difference between the coordinates of the same pixel in the projected images of two images, where the projected images are obtained by transforming the first image, the second image, and the third image according to respective pre-saved correction matrices.
Optionally, the first lens group, the second lens group, and the third lens group are located on the same straight line, and the second lens group is located between the first lens group and the third lens group.
Optionally, the distance between the first lens group and the second lens group is smaller than the distance between the second lens group and the third lens group.
Optionally, the first lens group, the second lens group, and the third lens group have the same field of view; and/or
the first lens group, the second lens group, and the third lens group image in the infrared band.
According to another embodiment of the present invention, an image processing apparatus is also provided, comprising:
a shooting module, configured to photograph a scene with a first lens group, a second lens group, and a third lens group to obtain a first image, a second image, and a third image, respectively;
a determining module, configured to determine a first depth map from the offset of pixels between the first image and the second image, determine a second depth map from the offset of pixels between the second image and the third image, and determine a third depth map from the offset of pixels between the first image and the third image;
a generation module, configured to combine one or more of the first depth map, the second depth map, and the third depth map with received image information to generate virtual media information;
a combination module, configured to combine the virtual media information with one of the first image, the second image, and the third image.
Optionally, the generation module includes:
a first generation unit, configured to determine, among the first depth map, the second depth map, and the third depth map, the depth map with the smallest error, obtain depth information of environmental features from that depth map, and combine the depth information with the received image information to generate the virtual media information;
a second generation unit, configured to determine, among the first depth map, the second depth map, and the third depth map, the depth map with the smallest error, supplement its higher-error portions with the corresponding lower-error portions of the other depth maps, obtain depth information of environmental features from the supplemented depth map, and combine the depth information with the received image information to generate the virtual media information.
Optionally, the generation module includes:
a third generation unit, configured to determine, among the first depth map, the second depth map, and the third depth map, the depth map with the highest clarity, obtain depth information of environmental features from that depth map, and combine the depth information with the received image information to generate the virtual media information;
a fourth generation unit, configured to determine, among the first depth map, the second depth map, and the third depth map, the depth map with the highest clarity, supplement its higher-error portions with the corresponding lower-error portions of the other depth maps, obtain depth information of environmental features from the supplemented depth map, and combine the depth information with the received image information to generate the virtual media information.
Optionally, the generation module includes:
an adjustment unit, configured to adjust the size, rotation direction, and moving direction of the received image information according to one or more of the first depth map, the second depth map, and the third depth map;
an establishing unit, configured to establish a three-dimensional scene according to one or more of the first depth map, the second depth map, and the third depth map;
a positioning and generation unit, configured to position the adjusted image information in the three-dimensional scene and generate the virtual media information.
Optionally, the apparatus further includes:
an adjusting module, configured to adjust the brightness and contrast of the first image, the second image, and the third image.
Optionally, the offset is the difference between the coordinates of the same pixel in two images; or
the offset is the difference between the coordinates of the same pixel in the projected images of two images, where the projected images are obtained by transforming the first image, the second image, and the third image according to respective pre-saved correction matrices.
Optionally, the first lens group, the second lens group, and the third lens group are located on the same straight line, and the second lens group is located between the first lens group and the third lens group.
Optionally, the distance between the first lens group and the second lens group is smaller than the distance between the second lens group and the third lens group.
Optionally, the first lens group, the second lens group, and the third lens group have the same field of view; and/or
the first lens group, the second lens group, and the third lens group image in the infrared band.
According to yet another embodiment of the present invention, a storage medium is also provided, in which a computer program is stored, wherein the computer program is configured to perform, when run, the steps in any of the above method embodiments.
According to yet another embodiment of the present invention, an electronic device is also provided, comprising a memory and a processor; a computer program is stored in the memory, and the processor is configured to run the computer program to perform the steps in any of the above method embodiments.
Through the present invention, a scene is photographed with a first lens group, a second lens group, and a third lens group to obtain a first image, a second image, and a third image, respectively; a first depth map is determined from the offset of pixels between the first image and the second image, a second depth map from the offset of pixels between the second image and the third image, and a third depth map from the offset of pixels between the first image and the third image; one or more of the first depth map, the second depth map, and the third depth map are combined with received image information to generate virtual media information; and the virtual media information is combined with one of the first image, the second image, and the third image. This solves the problem in the related art of how to complete depth information processing with a user's mobile terminal without additional equipment, realizing virtual-reality combination on a mobile terminal without extra hardware and improving the user experience.
Brief description of the drawings
The drawings described herein are used to provide a further understanding of the present invention and constitute a part of this application; the illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not unduly limit it. In the drawings:
Fig. 1 is a hardware block diagram of a mobile terminal running an image processing method according to an embodiment of the present invention;
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of measuring the positional difference between images according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of lens groups according to an embodiment of the present invention;
Fig. 5 is a block diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 6 is a block diagram of an image processing apparatus according to a preferred embodiment of the present invention.
Detailed description of the embodiments
Hereinafter, the present invention will be described in detail with reference to the accompanying drawings and in combination with embodiments. It should be noted that, where no conflict arises, the embodiments of this application and the features in the embodiments may be combined with each other.
It should be noted that the terms "first", "second", and the like in the specification, the claims, and the above drawings are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence.
Embodiment 1
The method embodiments provided in Embodiment 1 of this application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, Fig. 1 is a hardware block diagram of a mobile terminal running an image processing method according to an embodiment of the present invention. As shown in Fig. 1, a mobile terminal 10 may include one or more processors 102 (only one is shown in Fig. 1; the processor 102 may include, but is not limited to, a microprocessor (MCU), a programmable logic device (FPGA), or another processing unit) and a memory 104 for storing data. Optionally, the mobile terminal may further include a transmission device 106 for communication functions and an input/output device 108. Those skilled in the art will appreciate that the structure shown in Fig. 1 is merely illustrative and does not limit the structure of the mobile terminal; for example, the mobile terminal 10 may include more or fewer components than shown in Fig. 1, or have a configuration different from that shown in Fig. 1.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the message receiving method in the embodiment of the present invention. By running the computer program stored in the memory 104, the processor 102 executes various functional applications and data processing, thereby implementing the above method. The memory 104 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remote from the processor 102, and such remote memory may be connected to the mobile terminal 10 through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or send data via a network. Specific examples of the network may include a wireless network provided by the communication provider of the mobile terminal 10. In one example, the transmission device 106 includes a network interface controller (NIC), which can connect to other network devices through a base station so as to communicate with the Internet. In another example, the transmission device 106 may be a radio frequency (RF) module, which communicates with the Internet wirelessly.
This embodiment provides an image processing method running on the above mobile terminal or network architecture. Fig. 2 is a flowchart of an image processing method according to an embodiment of the present invention. As shown in Fig. 2, the flow includes the following steps:
Step S202: photograph a scene with a first lens group, a second lens group, and a third lens group to obtain a first image, a second image, and a third image, respectively;
Step S204: determine a first depth map from the offset of pixels between the first image and the second image, determine a second depth map from the offset of pixels between the second image and the third image, and determine a third depth map from the offset of pixels between the first image and the third image;
Step S206: combine one or more of the first depth map, the second depth map, and the third depth map with received image information to generate virtual media information;
Step S208: combine the virtual media information with one of the first image, the second image, and the third image.
Through the above steps S202 to S208, a scene is photographed with the first lens group, the second lens group, and the third lens group to obtain the first image, the second image, and the third image; the three depth maps are determined from the pairwise pixel offsets; one or more of the depth maps are combined with the received image information to generate virtual media information; and the virtual media information is combined with one of the captured images. This solves the problem in the related art of how to complete depth information processing with a user's mobile terminal without additional equipment, realizing virtual-reality combination through a mobile terminal without extra hardware and improving the user experience.
The embodiment of the present invention obtains three depth maps from the pairwise combinations of the three lens groups, judges the depth information of environmental features (for example, scene distance, position, edges, and angles) from all three depth maps, receives image information (for example, objects or persons) from another user terminal in communication, and combines the image information with the depth information to generate virtual objects that can be merged with the real scene.
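For illustration only, the pairwise depth-map step could be approximated with off-the-shelf stereo block matching; the following Python sketch assumes 8-bit grayscale NumPy images and OpenCV, and all function and variable names are hypothetical rather than taken from the patent.

import cv2
import numpy as np

def disparity_map(img_a, img_b):
    """Estimate the per-pixel offset between the two grayscale images of one pair."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16; convert to pixels.
    return matcher.compute(img_a, img_b).astype(np.float32) / 16.0

# Three pairwise offset maps from the three lens-group images:
# d12 = disparity_map(img1, img2)
# d23 = disparity_map(img2, img3)
# d13 = disparity_map(img1, img3)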
In one embodiment, the above step S206 may specifically include:
determining, among the first depth map, the second depth map, and the third depth map, the depth map with the smallest error, obtaining depth information of environmental features from that depth map, and combining the depth information with the received image information to generate the virtual media information; or
determining, among the first depth map, the second depth map, and the third depth map, the depth map with the smallest error, supplementing its higher-error portions with the corresponding lower-error portions of the other depth maps, obtaining depth information of environmental features from the supplemented depth map, and combining the depth information with the received image information to generate the virtual media information.
In another embodiment, the above step S206 may also specifically include:
determining, among the first depth map, the second depth map, and the third depth map, the depth map with the highest clarity, obtaining depth information of environmental features from that depth map, and combining the depth information with the received image information to generate the virtual media information; or
determining, among the first depth map, the second depth map, and the third depth map, the depth map with the highest clarity, supplementing its higher-error portions with the corresponding lower-error portions of the other depth maps, obtaining depth information of environmental features from the supplemented depth map, and combining the depth information with the received image information to generate the virtual media information.
In the embodiment of the present invention, the depth maps may be combined by weighted combination, by averaging, by choosing the one with the smallest error, or by choosing the clearest one, as sketched below.
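A minimal sketch of these combination modes, assuming each depth map is accompanied by a per-pixel error estimate; the specific weighting scheme is an assumption for illustration, not specified by the patent.

import numpy as np

def combine_depth_maps(depths, errors, mode="min_error"):
    """depths, errors: lists of HxW float arrays; returns one fused HxW depth map."""
    depths, errors = np.stack(depths), np.stack(errors)
    if mode == "average":
        return depths.mean(axis=0)
    if mode == "weighted":
        weights = 1.0 / (errors + 1e-6)  # lower error -> higher weight
        return (depths * weights).sum(axis=0) / weights.sum(axis=0)
    # "min_error": per pixel, take the value from the lowest-error map.
    best = errors.argmin(axis=0)
    return np.take_along_axis(depths, best[None], axis=0)[0]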
In yet another embodiment, the above step S206 may also include: adjusting the size, rotation direction, and moving direction of the received image information according to one or more of the first depth map, the second depth map, and the third depth map; establishing a three-dimensional scene according to one or more of the first depth map, the second depth map, and the third depth map; and positioning the adjusted image information in the three-dimensional scene to generate the virtual media information.
A three-dimensional map is mapped from the depth map, and the three-dimensional model of the received image is displayed in the three-dimensional map, for example in an open portion with few occluders, and can be moved according to the coordinates in the three-dimensional map.
In the embodiment of the present invention, for situations such as ambient light that is too strong or too weak, the brightness and contrast of the first image, the second image, and the third image may be adjusted.
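A simple linear brightness/contrast adjustment could look like the following sketch; the gain and bias values are placeholders, not values from the patent.

import cv2

def adjust_brightness_contrast(img, alpha=1.2, beta=10):
    """alpha scales contrast, beta shifts brightness; output is clipped to the valid pixel range."""
    return cv2.convertScaleAbs(img, alpha=alpha, beta=beta)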
In embodiments of the present invention, when no coaxiality or coplanarity error occurs among the lens groups, the offset is the difference between the coordinates of the same pixel in two images; or
when a coaxiality or coplanarity error occurs among the lens groups, a correction matrix is obtained from the pre-stored lens error. Specifically, the offset is the difference between the coordinates of the same pixel in the projected images of two images, where the projected images are obtained by transforming the first image, the second image, and the third image according to respective pre-saved correction matrices.
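The correction step can be read as a pre-calibrated projective transform applied to each image before offsets are measured; a sketch under that assumption, where the 3x3 correction matrix is taken to come from a prior calibration.

import cv2

def project_with_correction(img, correction):
    """Warp an image by its pre-saved 3x3 correction matrix so the three views
    behave as if the lens groups were coaxial and coplanar."""
    h, w = img.shape[:2]
    return cv2.warpPerspective(img, correction, (w, h))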
In the embodiment of the present invention, the first lens group, the second lens group, and the third lens group are located on the same straight line, and the second lens group is located between the first lens group and the third lens group. The computable range of the depth of field can be changed by varying the baseline length, which can be selected among the three lens groups according to their relative distances. Further, the distance between the first lens group and the second lens group is smaller than the distance between the second lens group and the third lens group, so that as many different baseline lengths as possible are available, optimizing the adjustable range of the depth of field; a hypothetical selection rule is sketched below.
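Choosing a pair of lens groups amounts to choosing a baseline, with a short baseline suiting near objects and a long baseline suiting far ones; the distance thresholds in this sketch are invented for illustration only.

def pick_lens_pair(rough_distance_mm, near_limit_mm=500.0, far_limit_mm=2000.0):
    """Return the indices of the two lens groups whose baseline best fits the target distance."""
    if rough_distance_mm < near_limit_mm:
        return (0, 1)  # shortest baseline: first and second lens groups
    if rough_distance_mm < far_limit_mm:
        return (1, 2)  # medium baseline: second and third lens groups
    return (0, 2)      # longest baseline: first and third lens groups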
In the embodiment of the present invention, the first lens group, the second lens group, and the third lens group have the same field of view; in order to better reduce errors, the field of view may specifically be 60 degrees, 80 degrees, 100 degrees, and so on. And/or, the first lens group, the second lens group, and the third lens group image in the infrared band; in order to reduce errors, the infrared band may specifically be 850-1050 nm or a narrower band.
The embodiment of the present invention photographs with two of the lens groups of a multi-camera setup and judges the depth of field of a scene object from the positional gap of the same object in the different pictures. In the ideal case, the same object lies on the same shooting horizontal line in the pictures captured by the two lens groups; in non-ideal cases, image rectification using pre-stored data converts the images into the equivalent of the ideal case in which the lens groups are coaxial and coplanar. A matching target is then searched for along the shooting horizontal line; the search may use various well-known image matching methods, matching on features such as the color, brightness, and energy of pixels, pixel matrices, or windows. After a matching target is found, the positional difference Li - Lj of the target between the two images formed by the i-th lens group and the j-th lens group is calculated, and the depth of field is then determined by the proportional relationship
Z = EFL · [1 + Bi/(Li - Lj)],
where i = 1, 2, 3, ..., j = 1, 2, 3, ..., j ≠ i, and EFL is the focal length. By analogy, a depth-of-field distribution map of the whole image can also be obtained; this map can be used to insert the 3D objects of AR and enable them to match the environment better.
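In code, the proportional relationship above could be applied per pixel to turn an offset map into a depth-of-field map; this is an illustrative sketch under the patent's notation, with EFL and the baseline B as calibrated constants, and NaN used to mark holes (an assumption, not part of the patent).

import numpy as np

def depth_from_offset(offset, efl, baseline):
    """Z = EFL * (1 + B / (Li - Lj)); offset is the per-pixel Li - Lj map."""
    safe = np.where(np.abs(offset) < 1e-6, np.nan, offset)  # avoid divide-by-zero; NaN marks holes
    return efl * (1.0 + baseline / safe)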
The positional difference has its limitations: for example, when it is too small it easily causes errors, and at minimum it cannot resolve below the scale of one pixel, so the detectable depth of field has an upper limit, and errors occur easily near that limit. Conversely, an excessive difference indicates that an unsuitable baseline length is being used or that the object is too close; the image gap itself is then too large, which also causes judgment errors and increases the time spent searching for matching features. Therefore, in a phone with three or more camera modules, the distances between the lens groups can be suitably optimized so that the pair of lens groups used for depth detection can be changed to the most suitable combination for ranging. For example, in this embodiment, the precision, resolution, and edge sharpness of the multiple depth maps carry different errors because of internal camera parameter factors and external parameter factors such as mutual occlusion and illumination; in particular, there may be holes where the depth is not correctly computed and values are left blank. Among the multiple depth maps, the one with the best quality and smallest error can be chosen for use; lower-error portions of other depth maps can also be used to compensate for higher-error portions of a given depth map, and in particular to fill the above holes; combining the depth maps in other ways is also possible.
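Filling the holes of the best-quality depth map from the remaining maps might be sketched as follows, assuming invalid depths are stored as NaN (an illustrative convention, not specified by the patent).

import numpy as np

def fill_holes(best, others):
    """Replace holes (NaN) in the best depth map with values from the other maps, in order of preference."""
    filled = best.copy()
    for candidate in others:
        holes = np.isnan(filled)
        filled[holes] = candidate[holes]
    return filled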
The embodiment of the present invention is described in detail below by way of example.
Fig. 3 is a schematic diagram of measuring the positional difference between images according to an embodiment of the present invention. As shown in Fig. 3, when i = 1 and j = 2, the positional difference L1 - L2 of the target between the two images formed by the first lens group and the second lens group is calculated, and the depth of field is then determined by the proportional relationship
Z = EFL · [1 + B1/(L1 - L2)],
where EFL is the focal length. By analogy, a depth-of-field distribution map of the whole image can also be obtained; this map can be used to insert the 3D objects of AR and enable them to match the environment better.
The above is merely an illustrative description of the embodiment of the present invention and does not limit it.
Since L1 - L2 has its limitations, for example being prone to error when too small and unable to resolve below the scale of one pixel, the detectable depth of field has an upper limit, and errors occur easily near that limit. Conversely, an excessive difference indicates that an unsuitable B1 or B2 is being used or that the object is too close; the image gap itself is then too large, which causes judgment errors and increases the time spent searching for matching features. Therefore, in a triple-camera phone (Fig. 4 is a schematic diagram of lens groups according to an embodiment of the present invention), as shown in Fig. 4, the pairwise distances of the three lens groups can be suitably optimized, so that the two lens groups used for depth detection can be changed at any moment to the most suitable combination for ranging. For example, in this scheme the three lens groups image simultaneously, so three depth maps are obtained; besides z1, they further include:
z2 = EFL · [1 + B2/(L2 - L3)] and
z3 = EFL · [1 + (B1 + B2)/(L1 - L3)].
The precision, resolution, and edge sharpness of the three depth maps carry different errors because of internal camera parameter factors and external parameter factors such as mutual occlusion and illumination; in particular, there may be holes where the depth is not correctly computed and values are left blank. Among the three depth maps, the one with the best quality and smallest error can be chosen for use; lower-error portions of the other depth maps can also be used to compensate for higher-error portions of a given depth map, and in particular to fill the above holes; combining the depth maps in other ways is also possible.
After the depth information of the matching features is obtained, the image information received from another mobile terminal is combined with the above depth information, and the generated virtual media information is combined onto the real image. Preferably, the scenery in the interactive scene can be defined by the user, or a few specific classes of scenery can be searched for according to pre-stored templates. Whether the received image information is a picture or a video, it can be made into a virtual 3D image, and the depth information of the actual scene can be applied to the 3D image. For example, the depth map can also be converted into a 3D map, and the virtual 3D image arranged within that 3D map, whereby the size, position, orientation, and so on of the 3D image are determined. The conversion to the 3D map can be performed by the following formulas:
X = (x - Cx) · Z/fx,  Y = (y - Cy) · Z/fy,
where x and y are the image coordinates, X and Y are the 3D map coordinates, Z is the depth, fx and fy are the focal lengths, and Cx and Cy are the offset parameters; the parameters of this matrix can be calibrated in advance. The 3D map can also be further rotated, scaled, and displaced so as to provide an ideal AR playing environment.
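The back-projection above corresponds to the standard pinhole camera model; a sketch that converts a depth map into 3D map coordinates, with fx, fy, Cx, and Cy taken from a prior calibration.

import numpy as np

def depth_to_3d(depth, fx, fy, cx, cy):
    """Return an HxWx3 array of (X, Y, Z) per pixel: X = (x - Cx) * Z / fx, Y = (y - Cy) * Z / fy."""
    h, w = depth.shape
    x, y = np.meshgrid(np.arange(w), np.arange(h))
    return np.dstack([(x - cx) * depth / fx, (y - cy) * depth / fy, depth])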
The image information received from another terminal either includes a 3D model, or is a 2D image from which this terminal generates a 3D model. The 3D model is then positioned in the 3D map generated in the above steps, so that, in an application scenario such as remote video, the virtual person is better merged with the environment.
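Once the virtual media information has been rendered against the 3D map, it can be composited onto one of the captured images; a minimal alpha-blend sketch, where the rendered layer and its alpha mask are assumed to exist already.

import numpy as np

def composite(real, virtual, alpha):
    """Alpha-blend a rendered virtual layer over a real image; alpha is HxWx1 in [0, 1]."""
    return (alpha * virtual + (1.0 - alpha) * real).astype(real.dtype)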
Embodiment 2
This embodiment also provides an image processing apparatus. The apparatus is used to implement the above embodiment and its preferred implementations; what has already been described will not be repeated. As used below, the term "module" may implement a combination of software and/or hardware with a predetermined function. Although the apparatus described in the following embodiment is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a block diagram of an image processing apparatus according to an embodiment of the present invention. As shown in Fig. 5, the apparatus comprises:
a shooting module 52, configured to photograph a scene with a first lens group, a second lens group, and a third lens group to obtain a first image, a second image, and a third image, respectively;
a determining module 54, configured to determine a first depth map from the offset of pixels between the first image and the second image, determine a second depth map from the offset of pixels between the second image and the third image, and determine a third depth map from the offset of pixels between the first image and the third image;
a generation module 56, configured to combine one or more of the first depth map, the second depth map, and the third depth map with received image information to generate virtual media information;
a combination module 58, configured to combine the virtual media information with one of the first image, the second image, and the third image.
Optionally, the generation module 56 includes:
a first generation unit, configured to determine, among the first depth map, the second depth map, and the third depth map, the depth map with the smallest error, obtain depth information of environmental features from that depth map, and combine the depth information with the received image information to generate the virtual media information;
a second generation unit, configured to determine, among the first depth map, the second depth map, and the third depth map, the depth map with the smallest error, supplement its higher-error portions with the corresponding lower-error portions of the other depth maps, obtain depth information of environmental features from the supplemented depth map, and combine the depth information with the received image information to generate the virtual media information.
Optionally, the generation module 56 includes:
a third generation unit, configured to determine, among the first depth map, the second depth map, and the third depth map, the depth map with the highest clarity, obtain depth information of environmental features from that depth map, and combine the depth information with the received image information to generate the virtual media information;
a fourth generation unit, configured to determine, among the first depth map, the second depth map, and the third depth map, the depth map with the highest clarity, supplement its higher-error portions with the corresponding lower-error portions of the other depth maps, obtain depth information of environmental features from the supplemented depth map, and combine the depth information with the received image information to generate the virtual media information.
Fig. 6 is a block diagram of an image processing apparatus according to a preferred embodiment of the present invention. As shown in Fig. 6, the generation module 56 includes:
an adjustment unit 62, configured to adjust the size, rotation direction, and moving direction of the received image information according to one or more of the first depth map, the second depth map, and the third depth map;
an establishing unit 64, configured to establish a three-dimensional scene according to one or more of the first depth map, the second depth map, and the third depth map;
a positioning and generation unit 66, configured to position the adjusted image information in the three-dimensional scene and generate the virtual media information.
Optionally, the apparatus further includes:
an adjusting module, configured to adjust the brightness and contrast of the first image, the second image, and the third image.
Optionally, the offset is the difference between the coordinates of the same pixel in two images; or
the offset is the difference between the coordinates of the same pixel in the projected images of two images, where the projected images are obtained by transforming the first image, the second image, and the third image according to respective pre-saved correction matrices.
Optionally, the first lens group, the second lens group, and the third lens group are located on the same straight line, and the second lens group is located between the first lens group and the third lens group.
Optionally, the distance between the first lens group and the second lens group is smaller than the distance between the second lens group and the third lens group.
Optionally, the first lens group, the second lens group, and the third lens group have the same field of view; and/or
the first lens group, the second lens group, and the third lens group image in the infrared band.
It should be noted that the above modules may be implemented in software or hardware. For the latter, this may be achieved, but is not limited to, in the following way: the above modules are all located in the same processor; alternatively, the above modules are located in different processors in any combination.
Embodiment 3
An embodiment of the present invention also provides a storage medium in which a computer program is stored, wherein the computer program is configured to perform, when run, the steps in any of the above method embodiments.
Optionally, in this embodiment, the above storage medium may be configured to store a computer program for executing the following steps:
S11: photograph a scene with a first lens group, a second lens group, and a third lens group to obtain a first image, a second image, and a third image, respectively;
S12: determine a first depth map from the offset of pixels between the first image and the second image, determine a second depth map from the offset of pixels between the second image and the third image, and determine a third depth map from the offset of pixels between the first image and the third image;
S13: combine one or more of the first depth map, the second depth map, and the third depth map with received image information to generate virtual media information;
S14: combine the virtual media information with one of the first image, the second image, and the third image.
Optionally, in this embodiment, the above storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and various other media capable of storing a computer program.
Embodiment 4
An embodiment of the present invention also provides an electronic device, comprising a memory and a processor; a computer program is stored in the memory, and the processor is configured to run the computer program to perform the steps in any of the above method embodiments.
Optionally, the above electronic device may further include a transmission device and an input/output device, where the transmission device is connected to the processor and the input/output device is connected to the processor.
Optionally, in this embodiment, the above processor may be configured to execute the following steps through a computer program:
S11: photograph a scene with a first lens group, a second lens group, and a third lens group to obtain a first image, a second image, and a third image, respectively;
S12: determine a first depth map from the offset of pixels between the first image and the second image, determine a second depth map from the offset of pixels between the second image and the third image, and determine a third depth map from the offset of pixels between the first image and the third image;
S13: combine one or more of the first depth map, the second depth map, and the third depth map with received image information to generate virtual media information;
S14: combine the virtual media information with one of the first image, the second image, and the third image.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments and optional implementations; details are not repeated here.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be implemented by program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device. In some cases, the steps shown or described can be performed in an order different from the one here, or they can be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. In this way, the present invention is not limited to any specific combination of hardware and software.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the principle of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. An image processing method, characterized by comprising:
photographing a scene with a first lens group, a second lens group, and a third lens group to obtain a first image, a second image, and a third image, respectively;
determining a first depth map from the offset of pixels between the first image and the second image, determining a second depth map from the offset of pixels between the second image and the third image, and determining a third depth map from the offset of pixels between the first image and the third image;
combining one or more of the first depth map, the second depth map, and the third depth map with received image information to generate virtual media information;
combining the virtual media information with one of the first image, the second image, and the third image.
2. The method according to claim 1, characterized in that combining one or more of the first depth map, the second depth map, and the third depth map with the received image information to generate the virtual media information comprises:
determining, among the first depth map, the second depth map, and the third depth map, the depth map with the smallest error, obtaining depth information of environmental features from that depth map, and combining the depth information with the received image information to generate the virtual media information; or
determining, among the first depth map, the second depth map, and the third depth map, the depth map with the smallest error, supplementing its higher-error portions with the corresponding lower-error portions of the other depth maps, obtaining depth information of environmental features from the supplemented depth map, and combining the depth information with the received image information to generate the virtual media information.
3. The method according to claim 1, characterized in that combining one or more of the first depth map, the second depth map, and the third depth map with the received image information to generate the virtual media information comprises:
adjusting the size, rotation direction, and moving direction of the received image information according to one or more of the first depth map, the second depth map, and the third depth map;
establishing a three-dimensional scene according to one or more of the first depth map, the second depth map, and the third depth map;
positioning the adjusted image information in the three-dimensional scene to generate the virtual media information.
4. The method according to claim 1, characterized in that:
the offset is the difference between the coordinates of the same pixel in two images; or
the offset is the difference between the coordinates of the same pixel in the projected images of two images, where the projected images are obtained by transforming the first image, the second image, and the third image according to respective pre-saved correction matrices.
5. The method according to claim 4, characterized in that the first lens group, the second lens group, and the third lens group are located on the same straight line, and the second lens group is located between the first lens group and the third lens group.
6. The method according to claim 5, characterized in that the distance between the first lens group and the second lens group is smaller than the distance between the second lens group and the third lens group.
7. The method according to any one of claims 1 to 6, characterized in that:
the first lens group, the second lens group, and the third lens group have the same field of view; and/or
the first lens group, the second lens group, and the third lens group image in the infrared band.
8. An image processing apparatus, characterized by comprising:
a shooting module, configured to photograph a scene with a first lens group, a second lens group, and a third lens group to obtain a first image, a second image, and a third image, respectively;
a determining module, configured to determine a first depth map from the offset of pixels between the first image and the second image, determine a second depth map from the offset of pixels between the second image and the third image, and determine a third depth map from the offset of pixels between the first image and the third image;
a generation module, configured to combine one or more of the first depth map, the second depth map, and the third depth map with received image information to generate virtual media information;
a combination module, configured to combine the virtual media information with one of the first image, the second image, and the third image.
9. A storage medium, characterized in that a computer program is stored in the storage medium, wherein the computer program is configured to perform, when run, the method according to any one of claims 1 to 7.
10. An electronic device, comprising a memory and a processor, characterized in that a computer program is stored in the memory, and the processor is configured to run the computer program to perform the method according to any one of claims 1 to 7.
CN201910037310.3A 2019-01-15 2019-01-15 Image processing method and device Active CN109922331B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910037310.3A CN109922331B (en) 2019-01-15 2019-01-15 Image processing method and device

Publications (2)

Publication Number Publication Date
CN109922331A 2019-06-21
CN109922331B CN109922331B (en) 2021-12-07

Family

ID=66960429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910037310.3A Active CN109922331B (en) 2019-01-15 2019-01-15 Image processing method and device

Country Status (1)

Country Link
CN (1) CN109922331B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102204259A (en) * 2007-11-15 2011-09-28 Microsoft International Holdings Private Ltd Dual mode depth imaging
CN103853913A (en) * 2012-12-03 2014-06-11 Samsung Electronics Co Ltd Method for operating augmented reality contents and device and system for supporting the same
CN104392045A (en) * 2014-11-25 2015-03-04 Shenyang Jianzhu University Real-time augmented reality system and method based on an intelligent mobile terminal
WO2017172528A1 (en) * 2016-04-01 2017-10-05 Pcms Holdings, Inc. Apparatus and method for supporting interactive augmented reality functionalities
US20180139431A1 (en) * 2012-02-24 2018-05-17 Matterport, Inc. Capturing and aligning panoramic image and depth data
CN108182730A (en) * 2018-01-12 2018-06-19 Beijing Xiaomi Mobile Software Co Ltd Virtual and real object synthesis method and device
CN108307675A (en) * 2015-04-19 2018-07-20 FotoNation Cayman Ltd Multi-baseline camera array system architectures for depth augmentation in VR/AR applications

Also Published As

Publication number Publication date
CN109922331B (en) 2021-12-07

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant