CN106604015B - Image processing method and device - Google Patents
Image processing method and device
- Publication number: CN106604015B
- Application number: CN201611184529.9A
- Authority: CN (China)
- Prior art keywords: image, bitmap, picture, scenery, stereo
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N13/293 — Generating mixed stereoscopic images; generating mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background (under H04N13/20 — Image signal generators; H04N13/00 — Stereoscopic and multi-view video systems; H04N — Pictorial communication, e.g. television)
- H04N13/239 — Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance (under H04N13/204 — Image signal generators using stereoscopic image cameras; H04N13/20 — Image signal generators)
- H04N13/361 — Reproducing mixed stereoscopic images; reproducing mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background (under H04N13/30 — Image reproducers)
Landscapes
- Engineering & Computer Science
- Multimedia
- Signal Processing
- Processing Or Creating Images
Abstract
This application discloses an image processing method and device in the field of multimedia technology, addressing the problem that augmented reality (AR) images cannot present each piece of scenery to the user at its actual position. The method of this application includes: obtaining the images shot by two rear cameras, one image per camera; generating, from the two acquired images, a stereo image corresponding to either one of the two images, and generating the three-dimensional bitmap corresponding to the stereo image; extracting material pictures from the other image and generating, from the material pictures, the planar bitmap corresponding to that image, the other image being the one of the two images that is not the image corresponding to the stereo image; and superimposing the three-dimensional bitmap and the planar bitmap, and displaying the final bitmap after superposition. The application is suitable for image processing.
Description
Technical field
This application relates to the field of multimedia technology, and in particular to an image processing method and device.
Background technology
Augmented reality (English: Augmented Reality, AR) technology adds virtual information, generated by computer simulation or similar means, to a planar image, so that people can experience the virtual information and the planar image at the same time; the virtual information is displayed in three-dimensional or planar form. For example, an original planar image is captured through a notebook's built-in camera or a desktop computer's external camera, and the notebook then processes the original planar image with AR technology to generate a three-dimensional AR image.
When the scenery being shot lies in multiple planes, all of it nevertheless lies in a single plane in the captured original planar image. When an AR image is generated from this original planar image, a three-dimensional model can be generated for each piece of scenery in the planar image. Since a planar image cannot reflect the spatial position relationships among multiple pieces of scenery, all the three-dimensional models in the generated AR image still lie in roughly the same plane, whereas the actual positions of the scenery may well place them in different planes. As a result, the position of each piece of scenery in the computer-generated AR image may not match its actual position, so a user watching the AR image cannot perceive the relative positions of the scenery as they exist in the real environment: the AR image presents only the positions from the two-dimensional image, not the actual position of each piece of scenery, which degrades the user's visual experience.
Summary of the invention
This application provides an image processing method and device that can solve the problem that AR images cannot present each piece of scenery to the user at its actual position.
To achieve the above objective, this application adopts the following technical solutions:
In a first aspect, this application provides an image processing method applied to a terminal equipped with two rear cameras. The method includes:
obtaining the images shot by the two rear cameras, one image per camera;
generating, from the two acquired images, a stereo image corresponding to either one of the two images, and generating the three-dimensional bitmap corresponding to the stereo image;
extracting material pictures from the other image and generating, from the material pictures, the planar bitmap corresponding to that image, the other image being the one of the two images that is not the image corresponding to the stereo image;
superimposing the three-dimensional bitmap and the planar bitmap, and displaying the final bitmap after superposition.
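Purely as an illustration, the method of the first aspect can be sketched as a small pipeline. Nothing below comes from the patent itself: the function name and the injected callables are hypothetical placeholders for the patent's steps, not a definitive implementation.

```python
def process_dual_camera_frame(image_a, image_b,
                              make_stereo, to_stereo_bitmap,
                              extract_material, to_planar_bitmap,
                              superimpose, display):
    """Hypothetical pipeline for the claimed method; every stage is injected.

    image_a / image_b: the two rear-camera shots.
    """
    stereo = make_stereo(image_a, image_b)       # stereo image for either view
    stereo_bmp = to_stereo_bitmap(stereo)        # vector image -> 3D bitmap
    material = extract_material(image_b)         # material from the other image
    planar_bmp = to_planar_bitmap(material)      # material -> planar bitmap
    final_bmp = superimpose(stereo_bmp, planar_bmp)
    display(final_bmp)                           # show the final bitmap
    return final_bmp
```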
In a second aspect, this application provides an image processing device. The image processing device includes:
an acquisition module, configured to obtain the images shot by the two rear cameras;
a generation module, configured to generate, from the two images acquired by the acquisition module, a stereo image corresponding to either one of the two images, and to generate the three-dimensional bitmap corresponding to the stereo image;
the generation module is further configured to extract material pictures from the other image and to generate, from the material pictures, the planar bitmap corresponding to that image, the other image being the one of the two images that is not the image corresponding to the stereo image;
a superposition module, configured to superimpose the three-dimensional bitmap and the planar bitmap generated by the generation module, and to display, through a display module, the final bitmap superimposed by the superposition module.
With the image processing method and device provided by this application, in contrast to the prior art, where a computer processes a planar image and then generates and displays an AR image whose three-dimensional content lies in roughly one plane, two images are obtained through the two rear cameras configured in the terminal, a stereo image is generated from either one of them, and the AR image is generated in combination with the other image. Although the images acquired by the two rear cameras may contain many identical pieces of scenery, the two rear cameras sit at different positions relative to the scenery being shot, so the two images are acquired from different angles, from which the position relationships among the scenery as observed from the two angles can be obtained. The terminal can then determine the actual positions of the scenery from these position relationships and faithfully restore the real environment in which the planar images were shot. Thus, even a user who is not in the real environment where the planar images were shot can, while watching the AR image, have the same impression as being in that real environment, which improves the user's visual experience.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for the embodiments are briefly introduced below. Evidently, the drawings in the following description are only some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flow chart of an image processing method provided in an embodiment of the present invention;
Fig. 2 is a schematic diagram of an image processing method provided in an embodiment of the present invention;
Fig. 3 is a schematic diagram of an image processing method provided in an embodiment of the present invention;
Fig. 4 is a flow chart of another image processing method provided in an embodiment of the present invention;
Fig. 5 is a schematic diagram of another image processing method provided in an embodiment of the present invention;
Fig. 6 is a flow chart of another image processing method provided in an embodiment of the present invention;
Fig. 7 is a schematic diagram of another image processing method provided in an embodiment of the present invention;
Fig. 8 and Fig. 9 are flow charts of further image processing methods provided in an embodiment of the present invention;
Fig. 10 is a structural schematic diagram of an image processing device provided in an embodiment of the present invention;
Fig. 11 is a structural schematic diagram of a terminal provided in an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below in combination with the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
An embodiment of the present invention provides an image processing method that can be applied to a terminal provided with two rear cameras. In this embodiment of the present invention, the terminal may specifically be a smartphone, a tablet computer, or any other device capable of providing photographing and image processing functions. As shown in Fig. 1, the method flow includes:
Step 101: obtain the images shot by the two rear cameras.
It should be noted that the two rear cameras can capture two images at the same time, and the two images may be completely identical or may differ. In this embodiment of the present invention, most of the scenery in the two images is the same, with small differences near the edges of the images.
In addition, compared with a single-camera terminal, a terminal provided with two rear cameras increases the amount of incoming light and the photosensitive area when shooting an image, which suppresses image noise and makes the overall picture clearer. The two rear cameras can also take on different functions. For example, one rear camera can be responsible for imaging while the other measures data such as the depth of field, which makes it possible to shoot first and focus afterwards. Alternatively, one camera can be responsible for capturing detail contours while the other fills in colour; compared with an image taken by a single-camera terminal, this increases the resolving power of the image and enhances the capture of detail, enables synchronized focusing, and also allows the detail-contour camera to be used alone to shoot black-and-white images.
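The embodiment relies on the two rear cameras seeing the scenery from slightly different angles, and the standard way such a two-view setup yields an actual position is pinhole triangulation: depth equals focal length times baseline divided by disparity. The patent does not give this formula, so the function below is a conventional assumption offered only as a sketch.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (metres) of a point seen by two parallel rear cameras.

    focal_px: focal length in pixels; baseline_m: separation between the
    cameras; disparity_px: horizontal shift of the point between the two
    images. Standard pinhole triangulation: Z = f * B / d.
    """
    if disparity_px <= 0:
        raise ValueError("the point must shift between the two views")
    return focal_px * baseline_m / disparity_px
```

Note the design consequence the text describes: nearer scenery shifts more between the two views, so points in different planes get different depths and can be placed at their actual positions.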
The process of generating an AR image is shown in Fig. 2: a stereo image is generated by rendering a model through the open-source graphics library (English: Open Graphics Library, OpenGL), and the stereo image and the planar image are then superimposed to obtain the final picture to be presented. In addition, the stereo image can also be generated using the open-source computer vision library (English: Open Source Computer Vision Library, OpenCV).
Step 102: according to the two acquired images, generate a stereo image corresponding to either one of the two images, and generate the three-dimensional bitmap corresponding to the stereo image.
It should be noted that the stereo image generated by OpenGL model rendering is a vector image, and the stereo image and the images shot by the rear cameras are located in different layers, so the stereo image must first be converted from a vector image into a bitmap, i.e. the three-dimensional bitmap. In addition, in this embodiment of the present invention, the two images shot by the rear cameras are essentially identical; therefore, the stereo image can be generated from either one of the two images, and which image is used to generate the stereo image is not limited here.
Furthermore it is possible to generate three-dimensional bitmap by the method for converting stereo-picture to pixel data.Stereo-picture is turned
The method code for turning to bitmap is as follows:
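The translation announces conversion code without reproducing it. As a stand-in, the sketch below converts a raw RGBA pixel buffer (the kind read back from a GL framebuffer, e.g. via `glReadPixels`) into rows of packed ARGB integers, a common bitmap pixel layout; the function name and buffer format are assumptions, not the patent's code.

```python
def rgba_buffer_to_argb_bitmap(buf, width, height):
    """Convert a raw RGBA byte buffer into a row-major grid of packed
    ARGB ints, i.e. pixel data a bitmap container can hold."""
    assert len(buf) == width * height * 4, "buffer must hold 4 bytes/pixel"
    bitmap = []
    for y in range(height):
        row = []
        for x in range(width):
            i = (y * width + x) * 4
            r, g, b, a = buf[i], buf[i + 1], buf[i + 2], buf[i + 3]
            row.append((a << 24) | (r << 16) | (g << 8) | b)  # pack ARGB
        bitmap.append(row)
    return bitmap
```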
Step 103: extract material pictures from the other image, and generate the planar bitmap corresponding to the other image from the material pictures.
Here, the other image is the one of the two images that is not the image corresponding to the stereo image.
Since elements that cannot be touched directly, such as light and shadow, differ slightly between the planar image and the stereo image, the material pictures in the planar image can be extracted before the stereo image and the planar image are superimposed. For example, as shown in Fig. 3, the material pictures "hollow box", "square frame", "arrow" and "circle" are extracted from the "shot image" to serve as the planar bitmap; this avoids the situation where the light, shadow and similar elements of the planar image fail to match the superimposed image.
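The patent does not say how material pictures are detected. A deliberately crude stand-in, assuming material is simply anything that differs from a known background colour, might look like this:

```python
def extract_material(image, background, transparent=0):
    """Lift 'material' pixels (anything not matching the background colour)
    out of a planar image into a same-sized bitmap, leaving the rest
    transparent. An illustrative assumption, not the patented extraction."""
    return [[px if px != background else transparent for px in row]
            for row in image]
```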
Step 104: superimpose the three-dimensional bitmap and the planar bitmap, and display the final bitmap after superposition.
It should be noted that the three-dimensional bitmap and the planar bitmap can be superimposed using a Canvas drawing board. The final bitmap contains both the virtual stereo image and the planar image.
This application can obtain two images through the two rear cameras configured in the terminal, then generate a stereo image from either one of them and generate the AR image in combination with the other image. Although the images acquired by the two rear cameras may contain many identical pieces of scenery, the two rear cameras sit at different positions relative to the scenery being shot, so the two images are acquired from different angles, from which the position relationships among the scenery as observed from the two angles can be obtained. The terminal can then determine the actual positions of the scenery from these position relationships and faithfully restore the real environment in which the planar images were shot. Thus, even a user who is not in the real environment where the planar images were shot can, while watching the AR image, have the same impression as being in that real environment, which improves the user's visual experience.
To make the superimposed three-dimensional bitmap and planar bitmap meet the needs of the user, a superposition rule needs to be determined. In one implementation of this embodiment of the present invention, the implementation shown in Fig. 1 can therefore be extended to the implementation shown in Fig. 4, in which step 104's superposition of the three-dimensional bitmap and the planar bitmap is implemented as steps 1041 and 1042:
Step 1041: make a designated position in the three-dimensional bitmap coincide with a designated position in the planar bitmap, and add the three-dimensional bitmap onto the planar bitmap to obtain an initial superimposed bitmap.
In the initial superimposed bitmap, the coinciding part is the stereo image corresponding to the designated position in the three-dimensional bitmap.
Since the stereo image corresponding to the three-dimensional bitmap and the planar image corresponding to the planar bitmap were shot from different angles, the two images have slightly different parts, for example the respective "different parts" of the "three-dimensional bitmap" and the "planar bitmap" shown in Fig. 5. After the three-dimensional bitmap and the planar bitmap are obtained, designated positions are chosen, and the stereo image at the designated position of the three-dimensional bitmap is made to coincide with the planar image at the designated position of the planar bitmap. For example, as shown in Fig. 5, the "space frame" in the three-dimensional bitmap and the "hollow box" in the planar bitmap are chosen as the designated positions; the "space frame" and the "hollow box" are made to coincide, and the three-dimensional bitmap is added onto the planar bitmap to obtain the initial superimposed bitmap.
Step 1042: according to the two positions of a duplicated image in the initial superimposed bitmap, determine the target position of the duplicated image in the initial superimposed bitmap, and move the duplicated image to the target position.
Here, the duplicated image includes at least a planar image and a stereo image corresponding to the same piece of scenery.
It should be noted that when choosing the target position, a position between the positions occupied by the planar image and the stereo image can be taken as the target position, and the position of the planar image or the position of the stereo image can also be taken as the target position. For example, as shown in Fig. 5, in the initial superimposed bitmap the planar image of the square scenery is at the "first position" and its stereo image is at the "second position"; a position between the "first position" and the "second position" is chosen as the target position, and the planar image and stereo image of the square scenery in the "initial superimposed bitmap" are then moved to the target position to obtain the "final bitmap".
When superimposing the three-dimensional bitmap and the planar bitmap, the code could also be implemented in the following way:
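No code follows this sentence in the translation. A hedged sketch of step 1041's superposition, assuming bitmaps are row-major grids of ints, a sentinel transparent value, and (row, column) designated positions, could be:

```python
def superimpose(planar, stereo, planar_anchor, stereo_anchor, transparent=0):
    """Overlay the stereo bitmap on the planar bitmap so that the two
    designated positions coincide. Non-transparent stereo pixels overwrite
    the planar pixels underneath, much as a Canvas draw would."""
    out = [row[:] for row in planar]            # keep the planar bitmap intact
    dy = planar_anchor[0] - stereo_anchor[0]    # offset that aligns anchors
    dx = planar_anchor[1] - stereo_anchor[1]
    for y, row in enumerate(stereo):
        for x, px in enumerate(row):
            ty, tx = y + dy, x + dx
            if px != transparent and 0 <= ty < len(out) and 0 <= tx < len(out[0]):
                out[ty][tx] = px
    return out
```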
It can be seen that setting a definite rule for the superposition of the three-dimensional bitmap and the planar bitmap avoids the situation where, when the initial superimposed bitmap is adjusted, an excessively large adjustment prevents the final bitmap from accurately reflecting the actual positions of the scenery. In this embodiment of the present invention, the two bitmaps are superimposed after the designated positions are chosen, so only slight adjustments to the two positions of each duplicated image in the initial superimposed bitmap are needed; the adjusted bitmap can then essentially match the actual positions of the scenery, which raises the fidelity of the final bitmap to the real environment.
To allow users to obtain more information from the AR image, in one implementation of this embodiment of the present invention the scenery in the AR image can also be identified automatically; the implementation shown in Fig. 1 can therefore be extended to the implementation shown in Fig. 6, in which, after the superimposed bitmap is displayed in step 104, step 105 can also be performed:
Step 105: identify designated scenery in the final bitmap, and display a mark for the designated scenery.
Here, the mark is used to indicate information about the designated scenery.
It should be noted that the designated scenery includes at least physical objects such as text and pictures, and the mark may include the name, category or features of the designated scenery. For example, in the "superimposed bitmap" shown in Fig. 7, the mark of the designated scenery "arrow" is displayed beside it in the form of an indication box. The mark can be displayed in various ways, for example in an indication box or floating above the designated scenery, and it can also be shown above, below, to the left or to the right of the designated scenery; the display position and display manner of the mark are not limited here.
When there is little blank space in the superimposed bitmap, it is difficult to display all the marks. In this case, to give the user a better viewing experience, the marks may be displayed with a certain transparency, so that they do not prevent the user from checking the parts of the AR image they would otherwise occlude. In addition, a button for hiding the marks can be provided on the terminal interface; when the user does not need to check the marks, the marks can be hidden, preventing an excess of unwanted marks from spoiling the user's experience while watching the AR image.
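A translucent mark can be rendered with ordinary per-channel alpha blending; the helper below is an illustrative assumption (24-bit RGB pixels, `alpha` between 0 and 1), not part of the patent:

```python
def blend_label(base_px, label_px, alpha):
    """Blend a label pixel over a base pixel with the given opacity
    (0.0 = label fully hidden, 1.0 = opaque), per 8-bit channel, so the
    mark does not fully occlude the AR image underneath."""
    out = 0
    for shift in (16, 8, 0):  # R, G, B channels
        b = (base_px >> shift) & 0xFF
        l = (label_px >> shift) & 0xFF
        out |= int(round(alpha * l + (1 - alpha) * b)) << shift
    return out
```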
It can be seen that when marks are displayed for designated scenery in the AR image, the user can obtain information about the designated scenery through the marks even without knowing anything about it, and can then use the marks to learn about the designated scenery in more detail. This greatly increases the amount of information the user obtains while watching the AR image and helps the user better understand the real environment corresponding to the AR image.
To let the user learn still more about the designated scenery, in one implementation of this embodiment of the present invention, information related to the designated scenery can be searched for and displayed for the user to check; the implementation shown in Fig. 6 can therefore be extended to the implementation shown in Fig. 8, in which, after the mark of the designated scenery is displayed in step 105, steps 106 and 107 can also be performed:
Step 106: receive an operation instruction.
Here, the operation instruction includes at least selecting a mark, and the manner of selecting a mark includes at least touching, clicking or sliding; the manner of selecting a mark is not limited here.
Step 107: search for information related to the mark, and display the related information.
Here, the related information includes at least content that can be found by searching with the mark as a keyword.
It should be noted that the locations the terminal searches include at least the terminal's local storage and the network.
It can be seen that if the user needs to learn about designated scenery in detail, an operation instruction can be sent to the terminal; after receiving the operation instruction, the terminal can automatically search for information related to the mark and display it for the user to check. This spares the user from opening a search interface and typing in keywords, simplifying the user's operations and improving the user experience.
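The search of step 107 (local storage first, then the network) can be mimicked with a simple keyword lookup; the store layout and the injected `network_lookup` callable are invented purely for illustration:

```python
def search_mark(mark, local_store, network_lookup=None):
    """Look a selected mark up as a keyword: first in the terminal's local
    store, then via an injected network lookup. Returns the related
    information, or None when nothing is found anywhere."""
    hits = [info for key, info in local_store.items() if mark in key]
    if hits:
        return hits                     # local storage satisfied the query
    if network_lookup is not None:
        return network_lookup(mark)     # fall back to the network
    return None
```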
To display the information related to a mark, a page capable of displaying the information needs to be provided. In one implementation of this embodiment of the present invention, the implementation shown in Fig. 8 can therefore be extended to the implementation shown in Fig. 9, in which step 107's display of the related information is specifically performed as step 1071:
Step 1071: display the related information in the current interface, or jump to an interface different from the current interface and display the related information there.
It should be noted that the related information of a mark can be displayed around the mark in the current interface, or a new interface can be jumped to automatically and the related information displayed there.
In this embodiment of the present invention, displaying the information in the current interface makes it convenient for the user to check the designated scenery while checking the information, without affecting the user's view of other scenery; displaying the information in the interface jumped to provides a larger display area, making it easier for the user to take in the information.
An embodiment of the present invention provides an image processing device 20 for performing the method flows shown in Fig. 1, Fig. 4, Fig. 6, Fig. 8 and Fig. 9. As shown in Fig. 10, the image processing device 20 includes:
an acquisition module 21, configured to obtain the images shot by the two rear cameras;
a generation module 22, configured to generate, from the two images acquired by the acquisition module 21, a stereo image corresponding to either one of the two images, and to generate the three-dimensional bitmap corresponding to the stereo image;
the generation module 22 is further configured to extract material pictures from the other image and to generate, from the material pictures, the planar bitmap corresponding to the other image, the other image being the one of the two images that is not the image corresponding to the stereo image;
a superposition module 23, configured to superimpose the three-dimensional bitmap and the planar bitmap generated by the generation module 22, and to display, through a display module 24, the final bitmap after superposition.
In one implementation of this embodiment of the present invention, the superposition module 23 is specifically configured to:
make a designated position in the three-dimensional bitmap coincide with a designated position in the planar bitmap, and add the three-dimensional bitmap generated by the generation module 22 onto the planar bitmap to obtain an initial superimposed bitmap, in which the coinciding part is the stereo image corresponding to the designated position in the three-dimensional bitmap;
according to the two positions of a duplicated image in the initial superimposed bitmap, determine the target position of the duplicated image in the initial superimposed bitmap, and move the duplicated image to the target position, the duplicated image including at least a planar image and a stereo image corresponding to the same piece of scenery.
In one implementation of this embodiment of the present invention, the device further includes:
an identification module 25, configured to identify designated scenery in the final bitmap superimposed by the superposition module 23, and to display, through the display module 24, the mark of the designated scenery, the mark being used to indicate information about the designated scenery.
In one implementation of this embodiment of the present invention, the device further includes:
a receiving module 26, configured to receive an operation instruction, the operation instruction including at least selecting a mark;
a search module 27, configured to search for information related to the mark and to display the related information through the display module 24, the related information including at least content that can be found by searching with the mark as a keyword.
In one implementation of this embodiment of the present invention, the display module 24 is specifically configured to:
display the related information found by the search module 27 in the current interface, or jump to an interface different from the current interface and display the related information there.
With the image processing device provided by this application, in contrast to the prior art, where a computer processes a planar image and then generates and displays an AR image whose three-dimensional content lies in roughly one plane, two images are obtained through the two rear cameras configured in the terminal, a stereo image is generated from either one of them, and the AR image is generated in combination with the other image. Although the images acquired by the two rear cameras may contain many identical pieces of scenery, the two rear cameras sit at different positions relative to the scenery being shot, so the two images are acquired from different angles, from which the position relationships among the scenery as observed from the two angles can be obtained. The terminal can then determine the actual positions of the scenery from these position relationships and faithfully restore the real environment in which the planar images were shot. Thus, even a user who is not in the real environment where the planar images were shot can, while watching the AR image, have the same impression as being in that real environment, which improves the user's visual experience.
An embodiment of the present invention provides a terminal 30. As shown in Fig. 11, the terminal 30 includes at least a transceiver 31 and a processor 32. In this embodiment of the present invention, the terminal 30 can also include a memory 33 and a bus 34. The bus 34 can be used to connect the transceiver 31, the processor 32 and the memory 33, and the memory 33 can be used to store the data generated while the terminal 30 performs the method flows shown in Fig. 1, Fig. 4, Fig. 6, Fig. 8 and Fig. 9.
The processor 32 is configured to obtain the images shot by the two rear cameras.
The processor 32 is further configured to generate, from the two acquired images, a stereo image corresponding to either one of the two images, and to generate the three-dimensional bitmap corresponding to the stereo image.
The processor 32 is further configured to extract material pictures from the other image and to generate, from the material pictures, the planar bitmap corresponding to the other image, the other image being the one of the two images that is not the image corresponding to the stereo image.
The processor 32 is further configured to superimpose the three-dimensional bitmap and the planar bitmap, and to display the final bitmap after superposition.
In one possible implementation of this embodiment of the present invention, the processor 32 is configured to make a designated position in the three-dimensional bitmap coincide with a designated position in the planar bitmap, and to add the three-dimensional bitmap onto the planar bitmap to obtain an initial superimposed bitmap, in which the coinciding part is the stereo image corresponding to the designated position in the three-dimensional bitmap.
The processor 32 is further configured to determine, according to the two positions of a duplicated image in the initial superimposed bitmap, the target position of the duplicated image in the initial superimposed bitmap, and to move the duplicated image to the target position, the duplicated image including at least a planar image and a stereo image corresponding to the same piece of scenery.
In one possible implementation of this embodiment of the present invention, the processor 32 is further configured to identify designated scenery in the final bitmap and to display the mark of the designated scenery, the mark being used to indicate information about the designated scenery.
In one possible implementation of this embodiment of the present invention, the transceiver 31 is configured to receive an operation instruction, the operation instruction including at least selecting a mark.
The processor 32 is further configured to search for information related to the mark and to display the related information, the related information including at least content that can be found by searching with the mark as a keyword.
In one possible implementation of this embodiment of the present invention, the processor 32 is further configured to display the related information in the current interface, or to jump to an interface different from the current interface and display the related information there.
Compared with the prior art, in which a computer processes a planar image and then generates and displays a three-dimensional AR image in roughly the same plane, the terminal provided by the present application obtains two images through the two rear cameras configured in the terminal, generates a stereo image of either one of them, and then combines it with the other image to generate the AR image. Although the images captured by the two rear cameras may contain multiple identical sceneries, the two rear cameras are at different positions relative to the photographed scenery, so the two images are captured at different angles, from which the positional relationship between the sceneries observed at the two angles can be obtained. The terminal can therefore determine the actual position of a scenery from this positional relationship, and then restore the actual environment in which the planar image was captured. As a result, even a user who is not in the environment where the planar image was captured can, while watching the AR image, experience the same feeling as being in that actual environment, which improves the user's visual experience.
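The positional relationship described above is classic two-view triangulation: because the two rear cameras sit a fixed baseline apart, the disparity of the same scenery between the two images encodes its depth. A hedged sketch, assuming a rectified pinhole model with focal length in pixels and baseline in meters (parameter names are illustrative, not from the patent):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Two-camera triangulation: Z = f * B / d.
    focal_px     -- focal length of the cameras, in pixels
    baseline_m   -- distance between the two rear cameras, in meters
    disparity_px -- horizontal shift of the same scenery between images
    Returns the depth of the scenery in meters."""
    if disparity_px <= 0:
        raise ValueError("scenery must appear in both images with positive disparity")
    return focal_px * baseline_m / disparity_px
```

With a 2 cm baseline, a 1000-pixel focal length, and a 10-pixel disparity, the scenery lies about 2 m from the cameras; nearer sceneries produce larger disparities.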
The embodiments in this specification are described in a progressive manner; for identical or similar parts the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, since the device embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
Those of ordinary skill in the art will appreciate that all or part of the flow of the above embodiment methods can be completed by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the flows of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The above is merely a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can readily occur to those familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. An image processing method, characterized in that the method is applied to a terminal having two rear cameras, and the method comprises:
obtaining the images respectively captured by the two rear cameras;
generating, according to the two obtained images, a stereo image corresponding to either one of the two images, and generating a stereo bitmap corresponding to the stereo image;
extracting a material picture from the other image, and generating, according to the material picture, a planar bitmap corresponding to the other image, the other image being the image of the two images other than the image to which the stereo image corresponds; and
superimposing the stereo bitmap and the planar bitmap, and displaying the final bitmap obtained after the superposition.
2. The method according to claim 1, characterized in that superimposing the stereo bitmap and the planar bitmap specifically comprises:
overlapping a designated position in the stereo bitmap with a designated position in the planar bitmap, and adding the stereo bitmap to the planar bitmap to obtain an initial superposition bitmap, in which the coincident part is the stereo image corresponding to the designated position in the stereo bitmap; and
determining, according to the two positions at which a repeated image appears in the initial superposition bitmap, a target position of the repeated image in the initial superposition bitmap, and moving the repeated image to the target position, the repeated image including at least a planar image and a stereo image corresponding to the same scenery.
3. The method according to claim 1, characterized in that, after displaying the final bitmap obtained after the superposition, the method further comprises:
identifying a specified scenery in the final bitmap, and displaying a mark of the specified scenery, the mark being used to indicate information about the specified scenery.
4. The method according to claim 3, characterized in that, after displaying the mark of the specified scenery, the method further comprises:
receiving an operation instruction, the operation instruction including at least selecting the mark; and
searching for information related to the mark, and displaying the related information, the related information including at least content that can be found when the mark is used as a search keyword.
5. The method according to claim 4, characterized in that displaying the related information specifically comprises:
displaying the related information in the current interface, or jumping to an interface different from the current interface and displaying the related information there.
6. An image processing apparatus, characterized in that the image processing apparatus comprises:
an acquisition module, configured to obtain the images respectively captured by two rear cameras;
a generation module, configured to generate, according to the two images obtained by the acquisition module, a stereo image corresponding to either one of the two images, and to generate a stereo bitmap corresponding to the stereo image;
the generation module being further configured to extract a material picture from the other image and to generate, according to the material picture, a planar bitmap corresponding to the other image, the other image being the image of the two images other than the image to which the stereo image corresponds; and
a superposition module, configured to superimpose the stereo bitmap and the planar bitmap generated by the generation module, the final bitmap obtained after the superposition being displayed by a display module.
7. The image processing apparatus according to claim 6, characterized in that the superposition module is specifically configured to:
overlap a designated position in the stereo bitmap with a designated position in the planar bitmap, and add the stereo bitmap generated by the generation module to the planar bitmap to obtain an initial superposition bitmap, in which the coincident part is the stereo image corresponding to the designated position in the stereo bitmap; and
determine, according to the two positions at which a repeated image appears in the initial superposition bitmap, a target position of the repeated image in the initial superposition bitmap, and move the repeated image to the target position, the repeated image including at least a planar image and a stereo image corresponding to the same scenery.
8. The image processing apparatus according to claim 6, characterized in that the image processing apparatus further comprises:
an identification module, configured to identify a specified scenery in the final bitmap superimposed by the superposition module, a mark of the specified scenery being displayed by the display module, the mark being used to indicate information about the specified scenery.
9. The image processing apparatus according to claim 8, characterized in that the image processing apparatus further comprises:
a receiving module, configured to receive an operation instruction, the operation instruction including at least selecting the mark; and
a search module, configured to search for information related to the mark, the related information being displayed by the display module, the related information including at least content that can be found when the mark is used as a search keyword.
10. The image processing apparatus according to claim 9, characterized in that the display module is specifically configured to:
display the related information found by the search module in the current interface, or jump to an interface different from the current interface and display the related information there.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611184529.9A CN106604015B (en) | 2016-12-20 | 2016-12-20 | A kind of image processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106604015A CN106604015A (en) | 2017-04-26 |
CN106604015B true CN106604015B (en) | 2018-09-14 |
Family
ID=58600190
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611184529.9A Active CN106604015B (en) | 2016-12-20 | 2016-12-20 | A kind of image processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106604015B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102427854B1 (en) * | 2017-09-25 | 2022-08-01 | Samsung Electronics Co., Ltd. | Method and apparatus for rendering image |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102461181A (en) * | 2009-06-24 | 2012-05-16 | LG Electronics Inc. | Stereoscopic image reproduction device and method for providing 3d user interface |
CN102696057A (en) * | 2010-03-25 | 2012-09-26 | Bizmodeline Co., Ltd. | Augmented reality systems |
CN103065338A (en) * | 2011-10-19 | 2013-04-24 | Beijing Qianxiang Wangjing Technology Development Co., Ltd. | Method and device providing shadow for foreground image in background image |
CN104102678A (en) * | 2013-04-15 | 2014-10-15 | Tencent Technology (Shenzhen) Co., Ltd. | Method and device for realizing augmented reality |
CN104915915A (en) * | 2014-03-10 | 2015-09-16 | Boya Online Game Development (Shenzhen) Co., Ltd. | Picture displaying method and apparatus |
CN105051789A (en) * | 2013-03-25 | 2015-11-11 | Geo Technical Laboratory Co., Ltd. | Three-dimensional map display system |
CN105184858A (en) * | 2015-09-18 | 2015-12-23 | Shanghai Liying Digital Technology Co., Ltd. | Method for augmented reality mobile terminal |
CN105844714A (en) * | 2016-04-12 | 2016-08-10 | Guangzhou Fantop Digital Creative Technology Co., Ltd. | Augmented reality based scenario display method and system |
2016-12-20: application CN201611184529.9A filed; granted as CN106604015B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN106604015A (en) | 2017-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11632537B2 (en) | Method and apparatus for obtaining binocular panoramic image, and storage medium | |
WO2015180659A1 (en) | Image processing method and image processing device | |
CN105187814B (en) | Image processing method and associated apparatus | |
EP3997662A1 (en) | Depth-aware photo editing | |
JP4938093B2 (en) | System and method for region classification of 2D images for 2D-TO-3D conversion | |
CN104641633B (en) | System and method for combining the data from multiple depth cameras | |
EP2328125A1 (en) | Image splicing method and device | |
US20070291035A1 (en) | Horizontal Perspective Representation | |
CN109891466A (en) | The enhancing of 3D model scans | |
US10726612B2 (en) | Method and apparatus for reconstructing three-dimensional model of object | |
US10453244B2 (en) | Multi-layer UV map based texture rendering for free-running FVV applications | |
CN108876706A (en) | It is generated according to the thumbnail of panoramic picture | |
CN108765537A (en) | A kind of processing method of image, device, electronic equipment and computer-readable medium | |
CN106997617A (en) | The virtual rendering method of mixed reality and device | |
CN108737810B (en) | Image processing method, device and 3-D imaging system | |
CN109191366A (en) | Multi-angle of view human body image synthetic method and device based on human body attitude | |
Kim et al. | Real-time panorama canvas of natural images | |
CN108269288B (en) | Intelligent special-shaped projection non-contact interaction system and method | |
da Silveira et al. | Omnidirectional visual computing: Foundations, challenges, and applications | |
CN106604015B (en) | A kind of image processing method and device | |
US20080111814A1 (en) | Geometric tagging | |
KR102176805B1 (en) | System and method for providing virtual reality contents indicated view direction | |
CN116708862A (en) | Virtual background generation method for live broadcasting room, computer equipment and storage medium | |
CN112511815A (en) | Image or video generation method and device | |
CN108305210B (en) | Data processing method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||