CN107277346A - An image processing method and terminal
- Publication number
- CN107277346A (application number CN201710392669.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- synthesized
- distance value
- target
- terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
Abstract
Embodiments of the invention disclose an image processing method and a terminal. The image processing method includes: obtaining a first distance value of a first image to be synthesized, where a distance value is the distance between a photographed object in an image and the camera lens at the time of shooting; obtaining a second distance value associated with a second image to be synthesized; resizing the first image to be synthesized according to the magnitude relationship between the first distance value and the second distance value, to obtain a target image to be synthesized; and synthesizing the target image to be synthesized into the second image to be synthesized. The user therefore does not need to resize the target image manually during synthesis, which improves both the efficiency of image synthesis and the image quality of the synthesized image.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to an image processing method and a terminal.
Background technology
In daily life, users often need to synthesize images. For example, when someone is missing from a captured image, the user typically cuts out an image of that person from another image and then synthesizes the cut-out image with the captured image to obtain the desired composite. During synthesis, the user usually has to resize the cut-out image manually. This is not only cumbersome and reduces the efficiency of image synthesis, but manual resizing also often leaves the object in the cut-out image out of proportion with the objects in the captured image, degrading the image quality of the composite.
Summary of the invention
Embodiments of the present invention provide an image processing method and a terminal that can improve the efficiency of image synthesis and the image quality of the synthesized image.
In a first aspect, an embodiment of the invention provides an image processing method, including:
obtaining a first distance value of a first image to be synthesized, where a distance value is the distance between a photographed object in an image and the camera lens at the time of shooting;
obtaining a second distance value associated with a second image to be synthesized;
resizing the first image to be synthesized according to the magnitude relationship between the first distance value and the second distance value, to obtain a target image to be synthesized;
synthesizing the target image to be synthesized into the second image to be synthesized.
In a second aspect, an embodiment of the invention provides a terminal, including:
a first acquisition unit, configured to obtain a first distance value of a first image to be synthesized, where a distance value is the distance between a photographed object in an image and the camera lens at the time of shooting;
a second acquisition unit, configured to obtain a second distance value associated with a second image to be synthesized;
a resizing unit, configured to resize the first image to be synthesized according to the magnitude relationship between the first distance value and the second distance value, to obtain a target image to be synthesized;
a synthesis unit, configured to synthesize the target image to be synthesized into the second image to be synthesized.
In a third aspect, an embodiment of the invention provides another terminal, including a processor, an input device, an output device, and a memory that are interconnected, where the memory is configured to store a computer program that supports the terminal in performing the above method, the computer program includes program instructions, and the processor is configured to invoke the program instructions to perform the method of the first aspect.
In a fourth aspect, an embodiment of the invention provides a computer-readable storage medium storing a computer program, where the computer program includes program instructions that, when executed by a processor, cause the processor to perform the method of the first aspect.
In embodiments of the invention, the terminal obtains a first distance value of a first image to be synthesized, where a distance value is the distance between the photographed object in an image and the camera lens at the time of shooting; obtains a second distance value associated with a second image to be synthesized; resizes the first image to be synthesized according to the magnitude relationship between the first and second distance values, to obtain a target image to be synthesized; and synthesizes the target image to be synthesized into the second image to be synthesized. The user therefore does not need to resize the target image manually during synthesis, which improves both the efficiency of image synthesis and the image quality of the synthesized image.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of an image processing method according to another embodiment of the present invention;
Fig. 3a is a schematic diagram of a first target region selected by a user from a first image;
Fig. 3b is a schematic diagram of a third target region selected by a user in a second image to be synthesized;
Fig. 3c is a schematic diagram of a target composite image synthesized into the second image to be synthesized;
Fig. 3d is a schematic diagram of the imaged sizes of the same object at different distances from the camera lens;
Fig. 4 is a schematic block diagram of a terminal according to an embodiment of the present invention;
Fig. 5 is a schematic block diagram of a terminal according to another embodiment of the present invention;
Fig. 6 is a schematic block diagram of a terminal according to yet another embodiment of the present invention.
Detailed description of embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some rather than all of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the protection scope of the invention.
It should be understood that, when used in this specification and the appended claims, the terms "including" and "comprising" indicate the presence of the described features, integers, steps, operations, elements, and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or combinations thereof.
It should also be understood that the terminology used in this description of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the description of the invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the description of the invention and the appended claims refers to and encompasses any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
In specific implementations, the terminal described in the embodiments of the invention includes, but is not limited to, portable devices with a touch-sensitive surface (for example, a touch-screen display and/or a touch pad), such as a mobile phone, a laptop computer, or a tablet computer. It should be further understood that, in some embodiments, the device is not a portable communication device but a desktop computer with a touch-sensitive surface (for example, a touch-screen display and/or a touch pad).
In the following discussion, a terminal including a display and a touch-sensitive surface is described. It should be understood, however, that the terminal may include one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick.
The terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word-processing application, a website-creation application, a disc-burning application, a spreadsheet application, a game application, a telephone application, a video-conferencing application, an e-mail application, an instant-messaging application, an exercise-support application, a photo-management application, a digital camera application, a digital video camera application, a web-browsing application, a digital music player application, and/or a video player application.
The various applications executable on the terminal may use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface, and the corresponding information displayed on the terminal, may be adjusted and/or varied between applications and/or within a given application. In this way, a common physical architecture of the terminal (such as the touch-sensitive surface) can support the various applications with user interfaces that are intuitive and transparent to the user.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention. The method in this embodiment is performed by a terminal. The terminal may be a mobile terminal such as a mobile phone or a tablet computer, but is not limited thereto and may also be another kind of terminal. The image processing method shown in Fig. 1 may include the following steps.
S101: Obtain a first distance value of a first image to be synthesized, where a distance value is the distance between a photographed object in an image and the camera lens at the time of shooting.
During normal operation, when the terminal receives an image synthesis instruction, it obtains the first image to be synthesized and a second image to be synthesized. The image synthesis instruction indicates that the first image to be synthesized is to be synthesized into the second image to be synthesized. The pixel information of each pixel in the first and second images to be synthesized includes the distance value of that pixel. The distance value of a pixel identifies the distance between the photographed point corresponding to the pixel and the plane of the camera lens at the time of shooting. The larger the distance value of a pixel, the farther its corresponding photographed point was from the plane of the camera lens at the time of shooting, and the smaller its imaged size; and vice versa.
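The inverse relation between a pixel's distance value and its imaged size follows from the pinhole camera model. A minimal illustrative sketch (the function name, focal length, and sizes are hypothetical and not taken from the disclosure):

```python
def imaged_size(subject_size, distance, focal_length=50.0):
    # Pinhole model: imaged size = focal_length * subject_size / distance,
    # with all three quantities expressed in the same length unit.
    if distance <= 0:
        raise ValueError("distance must be positive")
    return focal_length * subject_size / distance
```

Doubling the shooting distance halves the imaged size, consistent with the statement above that a larger distance value corresponds to a smaller imaged size.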
The terminal may receive the image synthesis instruction, for example, when it detects that the user taps an image synthesis option in an image synthesis application (such as a photo-editing application installed on the terminal), or when it detects that the user drags one image onto another image in the image synthesis application. If the terminal detects that the user drags one image (the dragged image) onto another image (the background image) in the image synthesis application, it identifies the dragged image as the first image to be synthesized and the background image that was not dragged as the second image to be synthesized.
The first image to be synthesized may be an image that the user has cut out from a first image in advance and saved to the photo library. The first image may be an image captured by the user with the terminal camera and saved to the photo library, or an image downloaded by the user from a network and saved to the photo library, and so on; the pixel information of each pixel in the first image includes the distance value of that pixel. The second image to be synthesized may likewise be an image captured by the user with the terminal camera and saved to the photo library, or an image downloaded by the user from a network and saved to the photo library; no limitation is imposed here.
If the user has cut out the first image to be synthesized from the first image in advance and saved it to the photo library, the terminal obtains the first image to be synthesized selected by the user from the photo library when it detects that the user taps the image synthesis option in the image synthesis application. If no image cut out by the user is stored in the photo library, the terminal obtains the first image selected by the user from the photo library when it detects that the user taps the image synthesis option. If the terminal then detects that the user selects a first target region in the first image, it cuts the image of the first target region out of the first image and identifies the cut-out image of the first target region as the first image to be synthesized.
There may be one first image to be synthesized, or at least two. When there are at least two, the images may come from the same source image or from different source images; no limitation is imposed here.
After obtaining the first image to be synthesized, the terminal obtains its first distance value. The first distance value identifies the distance between the photographed object in the first image to be synthesized and the plane of the camera lens at the time of shooting. The first image may include a foreground and a background. The foreground is the image of objects close to the camera lens at the time of shooting; the background is the image of objects far from the camera lens. For example, if the first image includes a person and a building, and the person stands in front of the building, that is, the person is close to the camera lens and the building is far from it, then the person is the foreground of the first image and the building is the background.
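With per-pixel distance values available, the foreground/background distinction described above becomes computable by thresholding the depth map. A hypothetical sketch (the threshold is illustrative; the disclosure does not prescribe a particular segmentation rule):

```python
import numpy as np

def split_foreground(depth_map, threshold):
    # Pixels nearer to the lens than `threshold` are labelled
    # foreground (True); the rest are background (False).
    depth = np.asarray(depth_map, dtype=float)
    return depth < threshold
```

For the person-and-building example, pixels on the person would carry small distance values and fall on the foreground side of the threshold.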
The first image to be synthesized cut out from the first image may include only the foreground, for example only the person; only the background, for example only the building; or both the foreground and the background, for example both the person and the building, in which case both may serve as target objects to be synthesized.
If the first image to be synthesized includes only the foreground or only the background, the terminal may obtain the distance values of all pixels in the first image to be synthesized and determine the first distance value from the average distance value of all those pixels. If the first image to be synthesized includes both foreground and background, the terminal may perform target detection on the first image to be synthesized and identify the detected target region as a second target region; the terminal then determines the first distance value from the distance values of all pixels in the second target region. For example, if the first image to be synthesized includes both a person and a building, the terminal performs target detection on it. If the detected second target region is the region of the person, the terminal obtains the distance values of all pixels in that region and determines the first distance value from their average. If the detected second target region is the region of the building, the terminal obtains the distance values of all pixels in the building region and determines the first distance value from their average.
S102: Obtain a second distance value associated with the second image to be synthesized.
After receiving the image synthesis instruction, or after obtaining the second image to be synthesized, the terminal also obtains a second distance value associated with the second image to be synthesized. The terminal obtains the second distance value so that it can adjust the size of the first image to be synthesized according to the magnitude relationship between the first and second distance values, making the photographed object in the first image to be synthesized proportionate to the objects of a third target region in the second image to be synthesized. The distance value of the third target region is the second distance value.
The terminal may detect a third target region selected by the user in the second image to be synthesized, obtain the distance value of the third target region, and identify that distance value as the second distance value. The distance value of the third target region identifies the distance between the photographed objects in the third target region and the plane of the camera lens at the time of shooting.
Specifically, if the terminal has a touch screen, it may detect the user's touch operation on the second image to be synthesized and determine the third target region selected by the user from that touch operation. For example, if the terminal detects that the user taps a region in the second image to be synthesized, it identifies the tapped region as the third target region selected by the user. If the terminal does not have a touch screen, it may detect a region bounded by a closed curve (or by straight lines) that the user selects in the second image to be synthesized with an input device (such as a mouse), and identify that bounded region as the third target region selected by the user.
The terminal may determine the second distance value from the distance values of all pixels in the third target region, or from the distance values of only some pixels in that region. For example, if the third target region includes both a person and the background behind the person, the terminal may determine the second distance value from the distance values of all pixels in the region of the person, or, in some cases, from the distance values of all pixels in the background region.
Optionally, the terminal may also provide a distance-value input box into which the user can enter a distance value directly; the terminal then determines the user's chosen reference region from the entered value, so that the photographed object of the first image to be synthesized is proportionate to the objects of the reference region. If the terminal detects that the user enters a distance value in the input box, it identifies the entered value as the second distance value. For example, if the user enters 5 meters in the input box, this indicates that the user takes the regions of the second image to be synthesized whose distance value is 5 meters as the reference region and wants the photographed object of the first image to be synthesized to be proportionate to the objects of that reference region. The terminal identifies 5 meters as the second distance value.
S103: Resize the first image to be synthesized according to the magnitude relationship between the first distance value and the second distance value, to obtain a target image to be synthesized.
After obtaining the first and second distance values, the terminal compares their magnitudes, adjusts the size of the first image to be synthesized according to the comparison result, and uses the resized first image to be synthesized as the target image to be synthesized.
Specifically, if the comparison result is that the first distance value is greater than the second distance value, the photographed object in the first image to be synthesized was far from the camera lens at the time of shooting, while the region associated with the second distance value was close to the camera lens at the time of shooting; that is, the photographed object in the first image to be synthesized is small while the objects of that region are relatively large. In this case, to keep the photographed object in the first image to be synthesized proportionate to the objects of that region, the terminal shrinks the first image to be synthesized to obtain the target image to be synthesized.
If the comparison result is that the first distance value is less than the second distance value, the photographed object in the first image to be synthesized was close to the camera lens at the time of shooting, while the region associated with the second distance value was far from the camera lens; that is, the photographed object in the first image to be synthesized is large while the objects of that region are relatively small. In this case, to keep the photographed object in the first image to be synthesized proportionate to the objects of that region, the terminal enlarges the first image to be synthesized to obtain the target image to be synthesized.
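The comparison rule of S103 fixes only the direction of the adjustment (shrink when the first distance value is greater, enlarge when it is smaller). A sketch that uses the ratio of the two values as the concrete scale factor, which is an assumption consistent with that rule rather than a formula stated in the disclosure:

```python
def resize_factor(first_distance, second_distance):
    # Direction follows S103: first > second -> factor < 1 (shrink),
    # first < second -> factor > 1 (enlarge). The concrete ratio
    # second/first is an illustrative assumption; the disclosure
    # states only the direction of the adjustment.
    if first_distance <= 0 or second_distance <= 0:
        raise ValueError("distance values must be positive")
    return second_distance / first_distance
```

The factor would then be applied to both dimensions of the first image to be synthesized, for example with an image library's resize routine.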
S104: Synthesize the target image to be synthesized into the second image to be synthesized.
After resizing the first image to be synthesized to obtain the target image to be synthesized, the terminal synthesizes the target image to be synthesized into the second image to be synthesized to obtain the composite image.
Specifically, after the terminal resizes the first image to be synthesized to obtain the target image to be synthesized, the user may select a target synthesis region in the second image to be synthesized, and the terminal synthesizes the target image to be synthesized into the target synthesis region selected by the user.
For example, if the terminal detects that the user drags the target image to be synthesized to a fourth target region in the second image to be synthesized, it identifies the fourth target region as the target synthesis region selected by the user. The terminal may cover the image of the target synthesis region with the target image to be synthesized, or may replace the pixel values of all pixels in the target synthesis region with the pixel values of the corresponding pixels of the target image to be synthesized. A pixel value includes a color value (such as an RGB value) and/or a distance value; no limitation is imposed here.
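The pixel-replacement variant of S104 can be sketched with array slicing, assuming for illustration that the target synthesis region is given by its top-left corner and that the images are arrays of pixel values:

```python
import numpy as np

def composite(target_image, second_image, top, left):
    # Replace the pixel values of the target synthesis region in the
    # second image with those of the resized target image (step S104).
    target = np.asarray(target_image, dtype=float)
    out = np.array(second_image, dtype=float)  # copy; leave input intact
    h, w = target.shape[:2]
    out[top:top + h, left:left + w] = target
    return out
```

A per-channel color image would work the same way, with the slice applied across the trailing channel axis.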
In the above solution, the terminal obtains a first distance value of a first image to be synthesized, where a distance value is the distance between the photographed object in an image and the camera lens at the time of shooting; obtains a second distance value associated with a second image to be synthesized; resizes the first image to be synthesized according to the magnitude relationship between the first and second distance values, to obtain a target image to be synthesized; and synthesizes the target image to be synthesized into the second image to be synthesized. The user therefore does not need to resize the target image manually during synthesis, which improves both the efficiency of image synthesis and the image quality of the synthesized image.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of an image processing method according to another embodiment of the present invention. The method in this embodiment is performed by a terminal. The terminal may be a mobile terminal such as a mobile phone or a tablet computer, but is not limited thereto and may also be another kind of terminal. The image processing method shown in Fig. 2 may include the following steps.
S201: Obtain a first distance value of a first image to be synthesized, where a distance value is the distance between a photographed object in an image and the camera lens at the time of shooting.
During normal operation, when the terminal receives an image synthesis instruction, it obtains the first image to be synthesized and a second image to be synthesized. The image synthesis instruction indicates that the first image to be synthesized is to be synthesized into the second image to be synthesized. The pixel information of each pixel in the first and second images to be synthesized includes a distance value.
The terminal may receive the image synthesis instruction, for example, when it detects that the user taps an image synthesis option in an image synthesis application (such as a photo-editing application installed on the terminal), or when it detects that the user drags one image onto another image in the image synthesis application. If the terminal detects that the user drags one image (the dragged image) onto another image (the background image) in the image synthesis application, it identifies the dragged image as the first image to be synthesized and the background image that was not dragged as the second image to be synthesized.
The first image to be synthesized may be an image that the user has cut out from a first image in advance and saved to the photo library. The first image may be an image captured by the user with the terminal camera and saved to the photo library, or an image downloaded by the user from a network and saved to the photo library, and so on; the pixel information of each pixel in the first image includes a distance value. The second image to be synthesized may likewise be an image captured by the user with the terminal camera and saved to the photo library, or an image downloaded by the user from a network and saved to the photo library; no limitation is imposed here.
If the user has cut out the first image to be synthesized from the first image in advance and saved it to the photo library, the terminal obtains the first image to be synthesized selected by the user from the photo library when it detects that the user taps the image synthesis option in the image synthesis application. It can be understood that there may be one first image to be synthesized, or at least two. When there are at least two, the images may come from the same source image or from different source images; no limitation is imposed here.
If no image cut out by the user is stored in the photo library, the terminal may obtain the first image selected by the user from the photo library when it detects that the user taps the image synthesis option in the image synthesis application. The first image may include a foreground and a background. The foreground is the image of objects close to the camera lens at the time of shooting; the background is the image of objects far from the camera lens. For example, if the first image includes a person and a building, and the person stands in front of the building, that is, the person is close to the camera lens and the building is far from it, then the person is the foreground of the first image and the building is the background.
Further, step S201 includes the following steps.
S2011: Detect a first target region selected by the user in the first image.
After obtaining the first image selected by the user from the photo library, the terminal detects the first target region selected by the user in the first image. The first target region may be any region bounded by a closed curve (or by straight lines) that the user selects in the first image. If the terminal detects that the user has selected a region bounded by a closed curve (or by straight lines) in the first image, it identifies the selected bounded region as the first target region.
It can be understood that the first target region may be rectangular or of any other shape; no limitation is imposed here. The first target region may include only the foreground, for example only the person; only the background, for example only the building; or both the foreground and the background, for example both the person and the building, in which case both may serve as target objects to be synthesized.
S2012: Cut the image of the first target region out of the first image, and identify the image of the first target region as the first image to be synthesized.
The terminal cuts the image of the first target region out of the first image and identifies the cut-out image of the first target region as the first image to be synthesized.
As shown in Fig. 3a, Fig. 3a is a schematic diagram of the first target region chosen by the user in the first image. The first target region chosen by the user is the region enclosed by the dashed box in the first image; the terminal takes out the image inside the dashed box and identifies the extracted image as the first image to be synthesized.
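The extraction in step S2012 can be illustrated with a minimal sketch. Assuming the first target region is an axis-aligned rectangle and the image is held as a NumPy array (the function name and all coordinates here are hypothetical, not the patent's implementation):

```python
import numpy as np

def crop_region(image, top, left, height, width):
    """Take out a rectangular first target region from the first image (step S2012)."""
    return image[top:top + height, left:left + width].copy()

# A dummy 100x100 RGB "first image"; the region coordinates are illustrative.
first_image = np.zeros((100, 100, 3), dtype=np.uint8)
first_image[20:60, 30:70] = 255  # pretend these pixels are the person

first_to_synthesize = crop_region(first_image, 20, 30, 40, 40)
print(first_to_synthesize.shape)  # (40, 40, 3)
```

A non-rectangular region drawn with a closed curve would instead be represented by a Boolean mask over the same array.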
S2013: Perform target detection on the first image to be synthesized, and identify the region where the detected target is located as the second target region.
S2014: Obtain the distance value of the second target region, and identify the distance value of the second target region as the first distance value.
The terminal performs target detection on the first image to be synthesized, determines the second target region in the first image to be synthesized, and identifies the distance value of the second target region as the first distance value of the first image to be synthesized.
Specifically, if the first image to be synthesized contains only a foreground or only a background, the second target region detected by the terminal may be the whole region of the first image to be synthesized. For example, if the first image to be synthesized contains only a person, the second target region detected by the terminal is the whole region of the first image to be synthesized.
If the first image to be synthesized contains both a foreground and a background, the second target region detected by the terminal may be the region corresponding to the foreground or the region corresponding to the background, which is not limited here. For example, as shown in Fig. 3a, the first image to be synthesized contains both a person (foreground) and a background, so the second target region detected by the terminal may be the region where the person is located, or the background region.
The terminal can obtain the distance value of each pixel in the second target region and determine the distance value of the second target region from those per-pixel distance values. For example, if the first image to be synthesized contains both a person and a building, the terminal performs target detection on the first image to be synthesized; if the detected second target region is the region where the person is located, the terminal obtains the distance value of every pixel in that region and determines the distance value of the second target region from the average distance value of those pixels. If the detected second target region is the region where the building is located, the terminal likewise obtains the distance value of every pixel in the building region and determines the distance value of the second target region from their average.
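The averaging described above can be sketched as follows, assuming the per-pixel distance values are available as a depth map and the detected region as a Boolean mask (all names and values here are illustrative, not the patent's implementation):

```python
import numpy as np

def region_distance(distance_map, mask):
    """Average the per-pixel distance values inside a detected target region."""
    return float(distance_map[mask].mean())

# Synthetic per-pixel distance map: background at 5.0 m, a "person" region at 2.0 m.
depth = np.full((4, 4), 5.0)
person_mask = np.zeros((4, 4), dtype=bool)
person_mask[1:3, 1:3] = True
depth[person_mask] = 2.0

first_distance = region_distance(depth, person_mask)
print(first_distance)  # 2.0
```

The same helper would serve for the building region by passing its mask instead.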
S202: Obtain the second distance value related to the second image to be synthesized.
After receiving the image synthesis instruction, or after obtaining the second image to be synthesized, the terminal also obtains the second distance value related to the second image to be synthesized. The purpose of obtaining the second distance value is to adjust the size of the first image to be synthesized according to the magnitude relationship between the first distance value and the second distance value, so that the object in the first image to be synthesized and the object in the third target region of the second image to be synthesized are mutually coordinated. The distance value of the third target region is the second distance value.
Further, step S202 includes the following steps:
S2021: Detect the third target region chosen by the user in the second image to be synthesized.
S2022: Obtain the distance value of the third target region, and identify the distance value of the third target region as the second distance value.
The terminal detects the third target region chosen by the user in the second image to be synthesized, obtains the distance value of the third target region, and identifies it as the second distance value.
Specifically, if the terminal is a touch-screen terminal, it can detect the user's touch operation on the second image to be synthesized and determine the third target region chosen by the user from that touch operation. For example, if the terminal detects that the user taps a region in the second image to be synthesized, it identifies the tapped region as the third target region chosen by the user. As shown in Fig. 3b, Fig. 3b is a schematic diagram of the third target region chosen by the user in the second image to be synthesized: the user taps the region where the second portrait is located, so the terminal identifies the region of the second portrait as the third target region chosen by the user.
If the terminal is not a touch-screen terminal, it can detect a region enclosed by a closed curve (or straight lines) that the user draws in the second image to be synthesized with an input device (such as a mouse), and identify that region as the third target region chosen by the user.
The terminal may determine the second distance value from the distance values of all pixels in the third target region, or from the distance values of only some of the pixels in the third target region. For example, if the third target region contains both a person and the background behind the person, the terminal may determine the second distance value from the distance values of the pixels in the region where the person is located, or, in some cases, from the distance values of the pixels in the background region.
S203: According to the magnitude relationship between the first distance value and the second distance value, adjust the size of the first image to be synthesized to obtain the target image to be synthesized.
After obtaining the first distance value and the second distance value, the terminal compares their magnitudes, adjusts the size of the first image to be synthesized according to the comparison result, and takes the size-adjusted first image to be synthesized as the target image to be synthesized.
Further, step S203 may include the following steps:
If the first distance value is greater than the second distance value, perform enlargement processing on the first image to be synthesized to obtain the target image to be synthesized;
If the first distance value is less than the second distance value, perform reduction processing on the first image to be synthesized to obtain the target image to be synthesized.
If the first distance value is greater than the second distance value, the object in the first image to be synthesized was relatively far from the camera lens when photographed, while the object in the third target region was relatively close to the camera lens; accordingly, the object in the first image to be synthesized appears relatively small and the object in the third target region relatively large. To keep the object in the first image to be synthesized coordinated with the object in the third target region, the terminal performs enlargement processing on the first image to be synthesized to obtain the target image to be synthesized. Conversely, if the first distance value is less than the second distance value, the object in the first image to be synthesized was relatively close to the camera lens when photographed and appears relatively large, while the object in the third target region appears relatively small; to keep the two coordinated, the terminal performs reduction processing on the first image to be synthesized to obtain the target image to be synthesized.
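As a rough illustration of this decision rule: under a simple pinhole model, apparent size is inversely proportional to distance, so d1/d2 gives a first-order scale factor that is greater than 1 (enlarge) when d1 > d2 and less than 1 (reduce) when d1 < d2. The patent itself derives the ratio from viewing angles; the pinhole shortcut below is only an assumption for intuition:

```python
def first_order_scale(d1, d2):
    """Step S203 decision sketch: enlarge when d1 > d2, reduce when d1 < d2.
    Under a pinhole model apparent size is proportional to 1 / distance,
    so d1 / d2 is a first-order scale factor (an assumption; the patent
    refines this with viewing angles)."""
    return d1 / d2

print(first_order_scale(4.0, 2.0))  # 2.0 -> enlarge the first image to be synthesized
print(first_order_scale(1.0, 2.0))  # 0.5 -> reduce the first image to be synthesized
```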
During actual photographing, the same object images at different sizes at different distances from the plane of the camera lens (that is, at different distance values). As shown in Fig. 3d, Fig. 3d is a schematic diagram of the imaging size of the same object at different distances from the camera lens. It can be seen from Fig. 3d that the closer the object is to the camera lens (the smaller the distance value), the larger the imaging size and the larger the object's corresponding viewing angle (which includes a vertical viewing angle and a horizontal viewing angle; only the vertical viewing angle is shown in Fig. 3d); the farther the object is from the camera lens (the larger the distance value), the smaller the imaging size and the smaller the viewing angle.
Here, the vertical viewing angle identifies, at the time of shooting, the angle formed by connecting the camera lens to the two endpoints of the longest vertical cross-section of the object; the horizontal viewing angle identifies, at the time of shooting, the angle formed by connecting the camera lens to the two endpoints of the longest horizontal cross-section of the object. As shown in Fig. 3d, angle a1 and angle a2 are the vertical viewing angles of the same object when its distance value is d1 and d2, respectively.
It should be noted that the pixel information of each pixel in the first image to be synthesized and the second image to be synthesized also includes an angle value for that pixel. Suppose the straight line passing through the photographed point corresponding to a pixel and the camera lens is a first straight line; the angle value of the pixel then identifies the angle formed between the first straight line and the axis of the camera lens. The terminal can read the angle value of each pixel directly from the first image to be synthesized or the second image to be synthesized.
After determining, from the magnitude relationship between the first distance value and the second distance value, whether the first image to be synthesized should be enlarged or reduced, the terminal can obtain the vertical viewing angle and the horizontal viewing angle of the first image to be synthesized. It then calculates the vertical enlargement or reduction ratio of the first image to be synthesized from the vertical viewing angle, the first distance value, the second distance value and a preset scaling formula, and calculates the horizontal enlargement or reduction ratio from the horizontal viewing angle, the first distance value, the second distance value and the preset scaling formula.
Specifically, as shown in Fig. 3d, suppose the first distance value of the first image to be synthesized is d1 and the second distance value of the third target region in the second image to be synthesized is d2 (d1 < d2); suppose the angle value of the first pixel Q1 of the first image to be synthesized is q1 and the angle value of the second pixel Q2 is q2 (taking the axis of the camera lens as the zero reference). The vertical viewing angle of the first image to be synthesized is then a1 = q1 + q2. The terminal calculates the target vertical viewing angle a2 using trigonometric functions, and a2/a1 is the vertical reduction ratio of the first image to be synthesized; the terminal reduces the first image to be synthesized by a2/a1 in the vertical direction. The horizontal reduction ratio of the first image to be synthesized is obtained in the same way as the vertical one and is not described again here.
Here, the line between the first pixel Q1 and the second pixel Q2 corresponds to the longest vertical cross-section of the first image to be synthesized.
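One plausible reading of this trigonometric step, under the assumption that the physical endpoints of the cross-section are held fixed while the distance changes from d1 to d2, is the following sketch (not the patent's literal formula):

```python
import math

def vertical_scale_ratio(q1, q2, d1, d2):
    """Compute a2/a1, the vertical scaling ratio, when an object whose
    endpoint pixels Q1 and Q2 have angle values q1 and q2 (radians,
    relative to the lens axis) at distance d1 is re-imaged at distance d2."""
    a1 = q1 + q2                       # vertical viewing angle at distance d1
    h1 = d1 * math.tan(q1)             # height of Q1's point above the axis
    h2 = d1 * math.tan(q2)             # height of Q2's point below the axis
    a2 = math.atan(h1 / d2) + math.atan(h2 / d2)  # target viewing angle at d2
    return a2 / a1

# Doubling the distance roughly halves the viewing angle for small angles.
r = vertical_scale_ratio(math.radians(5), math.radians(5), 1.0, 2.0)
print(round(r, 2))  # 0.5
```

For small angles a2/a1 approaches d1/d2, matching the pinhole intuition that apparent size scales inversely with distance.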
S204: Synthesize the target image to be synthesized into the second image to be synthesized.
After adjusting the size of the first image to be synthesized to obtain the target image to be synthesized, the terminal synthesizes the target image to be synthesized into the second image to be synthesized.
Further, step S204 may also include the following steps:
S2041: Detect the target synthesis region chosen by the user in the second image to be synthesized.
S2042: Synthesize the target image to be synthesized into the target synthesis region.
After the terminal adjusts the size of the first image to be synthesized and obtains the target image to be synthesized, the user can choose a target synthesis region in the second image to be synthesized. The terminal detects the target synthesis region chosen by the user in the second image to be synthesized and synthesizes the target image to be synthesized into it. For example, as shown in Fig. 3c, Fig. 3c is a schematic diagram of synthesizing the target image to be synthesized into the second image to be synthesized: the target synthesis region chosen by the user in the second image to be synthesized is the blank (background) region to the right of the second portrait, so the terminal synthesizes the target image to be synthesized into that blank (background) region.
Specifically, the user can determine the target synthesis region by dragging the target image to be synthesized to a position in the second image to be synthesized. If the terminal detects that the user drags the target image to be synthesized to a fourth target region in the second image to be synthesized, it identifies the fourth target region as the target synthesis region chosen by the user.
The terminal can cover the image of the target synthesis region with the target image to be synthesized, or replace the pixel value of every pixel in the target synthesis region with the pixel value of the corresponding pixel of the target image to be synthesized. A pixel value includes a color value (such as an RGB tristimulus value) or a distance value, which is not limited here.
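The pixel-replacement variant described above can be sketched as follows, assuming rectangular regions and NumPy arrays (all names here are illustrative, not the patent's implementation):

```python
import numpy as np

def paste(second_image, target_image, top, left):
    """Step S2042 as pixel replacement: overwrite the pixel values of the
    target synthesis region with those of the target image to be synthesized."""
    h, w = target_image.shape[:2]
    out = second_image.copy()
    out[top:top + h, left:left + w] = target_image
    return out

second_image = np.zeros((6, 6, 3), dtype=np.uint8)
target_image = np.full((2, 2, 3), 200, dtype=np.uint8)
result = paste(second_image, target_image, 1, 3)
print(result[1, 3])  # [200 200 200]
```

An irregular region would use a Boolean mask instead of a rectangle, and the distance channel could be replaced the same way as the color channels.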
With the above scheme, the terminal obtains the first distance value of the first image to be synthesized, the distance value being the distance between the object in the image and the camera lens at the time of photographing; obtains the second distance value related to the second image to be synthesized; adjusts the size of the first image to be synthesized according to the magnitude relationship between the first distance value and the second distance value to obtain the target image to be synthesized; and synthesizes the target image to be synthesized into the second image to be synthesized. The user therefore does not need to manually adjust the size of the target image during synthesis, which not only improves the efficiency of image synthesis but also improves the picture quality of the synthesized image.
The terminal calculates the scaling ratio of the first image to be synthesized from the first distance value, the second distance value, the viewing angle of the first image to be synthesized and the preset scaling formula, and scales the first image to be synthesized according to that ratio, thereby ensuring that the first image to be synthesized is not deformed.
Referring to Fig. 4, Fig. 4 is a schematic block diagram of a terminal provided by an embodiment of the present invention. The terminal 400 may be a mobile terminal such as a smartphone or a tablet computer. The units included in the terminal 400 of this embodiment are used to perform the steps of the embodiment corresponding to Fig. 1; refer to the related description of Fig. 1 and its embodiment, which is not repeated here. The terminal 400 of this embodiment includes a first acquisition unit 401, a second acquisition unit 402, a size adjustment unit 403 and a synthesis unit 404.
The first acquisition unit 401 is used to obtain the first distance value of the first image to be synthesized, the distance value being the distance between the object in the image and the camera lens at the time of photographing. The first acquisition unit 401 sends the first distance value to the size adjustment unit 403.
The second acquisition unit 402 is used to obtain the second distance value related to the second image to be synthesized. The second acquisition unit 402 sends the second distance value to the size adjustment unit 403.
The size adjustment unit 403 is used to receive the first distance value sent by the first acquisition unit 401 and the second distance value sent by the second acquisition unit 402, and to adjust the size of the first image to be synthesized according to the magnitude relationship between the first distance value and the second distance value to obtain the target image to be synthesized. The size adjustment unit 403 sends the target image to be synthesized to the synthesis unit 404.
The synthesis unit 404 is used to receive the target image to be synthesized sent by the size adjustment unit 403 and to synthesize the target image to be synthesized into the second image to be synthesized.
With the above scheme, the terminal obtains the first distance value of the first image to be synthesized, the distance value being the distance between the object in the image and the camera lens at the time of photographing; obtains the second distance value related to the second image to be synthesized; adjusts the size of the first image to be synthesized according to the magnitude relationship between the first distance value and the second distance value to obtain the target image to be synthesized; and synthesizes the target image to be synthesized into the second image to be synthesized. The user therefore does not need to manually adjust the size of the target image during synthesis, which not only improves the efficiency of image synthesis but also improves the picture quality of the synthesized image.
Referring to Fig. 5, Fig. 5 is a schematic block diagram of a terminal provided by another embodiment of the present invention. The terminal 500 may be a mobile terminal such as a smartphone or a tablet computer, or another kind of terminal, which is not limited here. The units included in the terminal 500 of this embodiment are used to perform the steps of the embodiment corresponding to Fig. 2; refer to the related description of Fig. 2 and its embodiment, which is not repeated here. The terminal 500 of this embodiment includes a first acquisition unit 501, a second acquisition unit 502, a size adjustment unit 503 and a synthesis unit 504.
The first acquisition unit 501 includes a first detection unit 511, an extraction unit 512, a second detection unit 513 and a first determining unit 514; the second acquisition unit 502 includes a third detection unit 521 and a second determining unit 522; the synthesis unit 504 includes a fourth detection unit 541 and an image synthesis unit 542.
The first detection unit 511 in the first acquisition unit 501 is used to detect the first target region chosen by the user in the first image. The first detection unit 511 sends the first target region to the extraction unit 512.
The extraction unit 512 is used to receive the first target region sent by the first detection unit 511, to take the image corresponding to the first target region out of the first image, and to identify the extracted image as the first image to be synthesized. The extraction unit 512 sends the first image to be synthesized to the second detection unit 513.
The second detection unit 513 is used to receive the first image to be synthesized sent by the extraction unit 512, to perform target detection on the first image to be synthesized, and to identify the region where the detected target is located as the second target region. The second detection unit 513 sends the detected second target region to the first determining unit 514.
The first determining unit 514 is used to receive the second target region sent by the second detection unit 513, to obtain the distance value of the second target region, and to identify that distance value as the first distance value. The first determining unit 514 sends the first distance value to the size adjustment unit 503.
The third detection unit 521 in the second acquisition unit 502 is used to detect the third target region chosen by the user in the second image to be synthesized. The third detection unit 521 sends the third target region to the second determining unit 522.
The second determining unit 522 is used to receive the third target region sent by the third detection unit 521, to obtain the distance value of the third target region, and to identify that distance value as the second distance value. The second determining unit 522 sends the second distance value to the size adjustment unit 503.
The size adjustment unit 503 is used to receive the first distance value sent by the first determining unit 514 and the second distance value sent by the second determining unit 522, and to adjust the size of the first image to be synthesized according to the magnitude relationship between the first distance value and the second distance value to obtain the target image to be synthesized. The size adjustment unit 503 sends the target image to be synthesized to the image synthesis unit 542 in the synthesis unit 504.
Further, the size adjustment unit 503 is specifically used to: if the first distance value is greater than the second distance value, perform enlargement processing on the first image to be synthesized to obtain the target image to be synthesized; and if the first distance value is less than the second distance value, perform reduction processing on the first image to be synthesized to obtain the target image to be synthesized.
The fourth detection unit 541 in the synthesis unit 504 is used to detect the target synthesis region chosen by the user in the second image to be synthesized. The fourth detection unit 541 sends the target synthesis region to the image synthesis unit 542.
The image synthesis unit 542 is used to receive the target image to be synthesized sent by the size adjustment unit 503 and the target synthesis region sent by the fourth detection unit 541, and to synthesize the target image to be synthesized into the target synthesis region.
With the above scheme, the terminal obtains the first distance value of the first image to be synthesized, the distance value being the distance between the object in the image and the camera lens at the time of photographing; obtains the second distance value related to the second image to be synthesized; adjusts the size of the first image to be synthesized according to the magnitude relationship between the first distance value and the second distance value to obtain the target image to be synthesized; and synthesizes the target image to be synthesized into the second image to be synthesized. The user therefore does not need to manually adjust the size of the target image during synthesis, which not only improves the efficiency of image synthesis but also improves the picture quality of the synthesized image.
The terminal calculates the scaling ratio of the first image to be synthesized from the first distance value, the second distance value, the viewing angle of the first image to be synthesized and the preset scaling formula, and scales the first image to be synthesized according to that ratio, thereby ensuring that the first image to be synthesized is not deformed.
Referring to Fig. 6, Fig. 6 is a schematic block diagram of a terminal provided by yet another embodiment of the present invention. As shown in Fig. 6, the terminal 600 of this embodiment may include one or more processors 601, one or more input devices 602, one or more output devices 603 and one or more memories 604. The processor 601, input device 602, output device 603 and memory 604 communicate with one another through a communication bus 605. The memory 604 is used to store a computer program, which includes program instructions. The processor 601 is used to execute the program instructions stored in the memory 604, and is configured to call the program instructions to perform the following operations:
The processor 601 is used to obtain the first distance value of the first image to be synthesized, the distance value being the distance between the object in the image and the camera lens at the time of photographing.
The processor 601 is also used to obtain the second distance value related to the second image to be synthesized.
The processor 601 is also used to adjust the size of the first image to be synthesized according to the magnitude relationship between the first distance value and the second distance value to obtain the target image to be synthesized.
The processor 601 is also used to synthesize the target image to be synthesized into the second image to be synthesized.
The processor 601 is specifically used to detect the first target region chosen by the user in the first image.
The processor 601 is specifically used to take the image of the first target region out of the first image and to identify the image of the first target region as the first image to be synthesized.
The processor 601 is specifically used to perform target detection on the first image to be synthesized and to identify the region where the detected target is located as the second target region.
The processor 601 is specifically used to obtain the distance value of the second target region and to identify the distance value of the second target region as the first distance value.
The processor 601 is specifically used to detect the third target region chosen by the user in the second image to be synthesized.
The processor 601 is specifically used to obtain the distance value of the third target region and to identify the distance value of the third target region as the second distance value.
The processor 601 is specifically used to, if the first distance value is greater than the second distance value, perform enlargement processing on the first image to be synthesized to obtain the target image to be synthesized.
The processor 601 is specifically used to, if the first distance value is less than the second distance value, perform reduction processing on the first image to be synthesized to obtain the target image to be synthesized.
The processor 601 is specifically used to detect the target synthesis region chosen by the user in the second image to be synthesized.
The processor 601 is specifically used to synthesize the target image to be synthesized into the target synthesis region.
With the above scheme, the terminal obtains the first distance value of the first image to be synthesized, the distance value being the distance between the object in the image and the camera lens at the time of photographing; obtains the second distance value related to the second image to be synthesized; adjusts the size of the first image to be synthesized according to the magnitude relationship between the first distance value and the second distance value to obtain the target image to be synthesized; and synthesizes the target image to be synthesized into the second image to be synthesized. The user therefore does not need to manually adjust the size of the target image during synthesis, which not only improves the efficiency of image synthesis but also improves the picture quality of the synthesized image.
The terminal calculates the scaling ratio of the first image to be synthesized from the first distance value, the second distance value, the viewing angle of the first image to be synthesized and the preset scaling formula, and scales the first image to be synthesized according to that ratio, thereby ensuring that the first image to be synthesized is not deformed.
It should be understood that in the embodiments of the present invention, the processor 601 may be a central processing unit (CPU); the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. The general-purpose processor may be a microprocessor, or any conventional processor.
The input device 602 may include a trackpad, a fingerprint sensor (used to collect the user's fingerprint information and fingerprint direction information), a microphone, and so on; the output device 603 may include a display (such as an LCD), a loudspeaker, and so on.
The memory 604 may include a read-only memory and a random access memory, and provides instructions and data to the processor 601. Part of the memory 604 may also include a non-volatile random access memory. For example, the memory 604 may also store information about the device type.
In a specific implementation, the processor 601, input device 602 and output device 603 described in the embodiments of the present invention can perform the implementations described in the first and second embodiments of the image processing method provided by the embodiments of the present invention, and can also perform the implementation of the terminal described in the embodiments of the present invention, which is not repeated here.
Another embodiment of the present invention provides a computer-readable storage medium. The computer-readable storage medium stores a computer program that includes program instructions; when the program instructions are executed by a processor, the following is realized:
obtaining the first distance value of the first image to be synthesized, the distance value being the distance between the object in the image and the camera lens at the time of photographing;
obtaining the second distance value related to the second image to be synthesized;
adjusting the size of the first image to be synthesized according to the magnitude relationship between the first distance value and the second distance value to obtain the target image to be synthesized;
synthesizing the target image to be synthesized into the second image to be synthesized.
Further, when the computer program is executed by the processor, the following is also realized:
detecting the first target region chosen by the user in the first image;
taking the image of the first target region out of the first image, and identifying the image of the first target region as the first image to be synthesized;
performing target detection on the first image to be synthesized, and identifying the region where the detected target is located as the second target region;
obtaining the distance value of the second target region, and identifying the distance value of the second target region as the first distance value.
Further, when the computer program is executed by the processor, the following is also realized:
detecting the third target region chosen by the user in the second image to be synthesized;
obtaining the distance value of the third target region, and identifying the distance value of the third target region as the second distance value.
Further, when the computer program is executed by the processor, the following is also implemented:
if the first distance value is greater than the second distance value, enlarging the first image to be synthesized to obtain the target image to be synthesized;
if the first distance value is less than the second distance value, shrinking the first image to be synthesized to obtain the target image to be synthesized.
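A runnable sketch of this branch, using plain nearest-neighbour resampling over lists of pixel rows so the example stays dependency-free; a real terminal would use its imaging pipeline's resampler, and the d1/d2 ratio is an assumed choice of scale:

```python
def resize_by_distance(image, d1, d2):
    """Enlarge the first image to be synthesized when d1 > d2, shrink it
    when d1 < d2, via nearest-neighbour sampling at scale d1/d2."""
    scale = d1 / d2  # > 1 enlarges, < 1 shrinks, 1 leaves size unchanged
    h, w = len(image), len(image[0])
    nh = max(1, round(h * scale))
    nw = max(1, round(w * scale))
    # For each output pixel, pick the nearest source pixel, clamped in range.
    rows = [min(int(r / scale), h - 1) for r in range(nh)]
    cols = [min(int(c / scale), w - 1) for c in range(nw)]
    return [[image[r][c] for c in cols] for r in rows]
```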
Further, when the computer program is executed by the processor, the following is also implemented:
detecting a target synthesis region chosen by the user in the second image to be synthesized;
synthesizing the target image to be synthesized into the target synthesis region.
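The final compositing step can be sketched as pasting the resized target image at the user-chosen synthesis region, clipped to the borders of the second image. A rectangular, fully opaque paste is assumed here; a production implementation would typically blend edges or apply a mask:

```python
def synthesize(target, second, anchor):
    """Copy `target` into a copy of `second` with its top-left corner at
    anchor (x, y), clipping whatever falls outside `second`."""
    out = [row[:] for row in second]  # do not mutate the input image
    x, y = anchor
    h = min(len(target), len(out) - y)
    w = min(len(target[0]), len(out[0]) - x)
    if h <= 0 or w <= 0:
        return out  # anchor lies outside the second image
    for r in range(h):
        out[y + r][x:x + w] = target[r][:w]
    return out
```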
The computer-readable storage medium may be an internal storage unit of the terminal described in any of the foregoing embodiments, such as the terminal's hard disk or memory. The computer-readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) equipped on the terminal. Further, the computer-readable storage medium may include both an internal storage unit of the terminal and an external storage device. The computer-readable storage medium is used to store the computer program and other programs and data needed by the terminal, and may also be used to temporarily store data that has been output or is to be output.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered as going beyond the scope of the present invention.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working processes of the terminal and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed terminal and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division of the units is only a division by logical function, and other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, or may be electrical, mechanical, or other forms of connection.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
The foregoing is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and these modifications or substitutions shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. An image processing method, characterized by comprising:
obtaining a first distance value of a first image to be synthesized, the distance value being the distance between the object in the image and the camera lens at the time the photograph was taken;
obtaining a second distance value related to a second image to be synthesized;
resizing the first image to be synthesized according to the magnitude relationship between the first distance value and the second distance value, to obtain a target image to be synthesized;
synthesizing the target image to be synthesized into the second image to be synthesized.
2. The image processing method according to claim 1, characterized in that obtaining the first distance value of the first image to be synthesized comprises:
detecting a first target region chosen by the user in the first image;
taking the image of the first target region out of the first image, and identifying the image of the first target region as the first image to be synthesized;
performing target detection on the first image to be synthesized, and identifying the detected target region as a second target region;
obtaining the distance value of the second target region, and identifying the distance value of the second target region as the first distance value.
3. The image processing method according to claim 1, characterized in that obtaining the second distance value related to the second image to be synthesized comprises:
detecting a third target region chosen by the user in the second image to be synthesized;
obtaining the distance value of the third target region, and identifying the distance value of the third target region as the second distance value.
4. The image processing method according to claim 1, characterized in that resizing the first image to be synthesized according to the magnitude relationship between the first distance value and the second distance value, to obtain the target image to be synthesized, comprises:
if the first distance value is greater than the second distance value, enlarging the first image to be synthesized to obtain the target image to be synthesized;
if the first distance value is less than the second distance value, shrinking the first image to be synthesized to obtain the target image to be synthesized.
5. The image processing method according to any one of claims 1 to 4, characterized in that synthesizing the target image to be synthesized into the second image to be synthesized comprises:
detecting a target synthesis region chosen by the user in the second image to be synthesized;
synthesizing the target image to be synthesized into the target synthesis region.
6. A terminal, characterized by comprising:
a first acquisition unit, configured to obtain a first distance value of a first image to be synthesized, the distance value being the distance between the object in the image and the camera lens at the time the photograph was taken;
a second acquisition unit, configured to obtain a second distance value related to a second image to be synthesized;
a size adjustment unit, configured to resize the first image to be synthesized according to the magnitude relationship between the first distance value and the second distance value, to obtain a target image to be synthesized;
a synthesis unit, configured to synthesize the target image to be synthesized into the second image to be synthesized.
7. The terminal according to claim 6, characterized in that the first acquisition unit comprises:
a first detection unit, configured to detect a first target region chosen by the user in the first image;
an extraction unit, configured to take the image corresponding to the first target region out of the first image, and identify the image corresponding to the first target region as the first image to be synthesized;
a second detection unit, configured to perform target detection on the first image to be synthesized, and identify the detected target region as a second target region;
a first determination unit, configured to obtain the distance value of the second target region, and identify that distance value as the first distance value.
8. The terminal according to claim 6, characterized in that the second acquisition unit comprises:
a third detection unit, configured to detect a third target region chosen by the user in the second image to be synthesized;
a second determination unit, configured to obtain the distance value of the third target region, and identify the distance value of the third target region as the second distance value.
9. A terminal, characterized by comprising a processor, an input device, an output device, and a memory, the processor, the input device, the output device, and the memory being connected to each other, wherein the memory is configured to store a computer program, the computer program comprises program instructions, and the processor is configured to invoke the program instructions to perform the method according to any one of claims 1-5.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, the computer program comprises program instructions, and the program instructions, when executed by a processor, cause the processor to perform the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710392669.3A CN107277346A (en) | 2017-05-27 | 2017-05-27 | A kind of image processing method and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107277346A true CN107277346A (en) | 2017-10-20 |
Family
ID=60065768
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710392669.3A Withdrawn CN107277346A (en) | 2017-05-27 | 2017-05-27 | A kind of image processing method and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107277346A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20070097183A (en) * | 2006-03-28 | 2007-10-04 | Pantech Co., Ltd. | An image communication method of a mobile communication |
CN102055834A (en) * | 2009-10-30 | 2011-05-11 | TCL Corporation | Double-camera photographing method of mobile terminal |
CN103226806A (en) * | 2013-04-03 | 2013-07-31 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and camera system for enlarging picture partially |
CN103856719A (en) * | 2014-03-26 | 2014-06-11 | Shenzhen Gionee Communication Equipment Co., Ltd. | Photographing method and terminal |
CN104125412A (en) * | 2014-06-16 | 2014-10-29 | Lenovo (Beijing) Co., Ltd. | Information processing method and electronic equipment |
CN105578028A (en) * | 2015-07-28 | 2016-05-11 | Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. | Photographing method and terminal |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107943389A (en) * | 2017-11-14 | 2018-04-20 | Vivo Mobile Communication Co., Ltd. | Image processing method and mobile terminal |
CN108282622A (en) * | 2018-01-29 | 2018-07-13 | Samsung Electronics (China) R&D Center | Photographing method and device |
CN109144369A (en) * | 2018-09-21 | 2019-01-04 | Vivo Mobile Communication Co., Ltd. | Image processing method and terminal device |
CN109144369B (en) * | 2018-09-21 | 2020-10-20 | Vivo Mobile Communication Co., Ltd. | Image processing method and terminal equipment |
CN109361850A (en) * | 2018-09-28 | 2019-02-19 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method, device, terminal device and storage medium |
CN109361850B (en) * | 2018-09-28 | 2021-06-15 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method, image processing device, terminal equipment and storage medium |
CN113911868A (en) * | 2020-07-09 | 2022-01-11 | Toshiba Elevator Co., Ltd. | User detection system of elevator |
CN113911868B (en) * | 2020-07-09 | 2023-05-26 | Toshiba Elevator Co., Ltd. | Elevator user detection system |
CN112862678A (en) * | 2021-01-26 | 2021-05-28 | China Academy of Railway Sciences Group Co., Ltd. | Unmanned aerial vehicle image splicing method and device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107277346A (en) | A kind of image processing method and terminal | |
CN110163806B (en) | Image processing method, device and storage medium | |
EP3547218B1 (en) | File processing device and method, and graphical user interface | |
CN110896451B (en) | Preview picture display method, electronic device and computer readable storage medium | |
US20150186035A1 (en) | Image processing for introducing blurring effects to an image | |
CN106454139A (en) | Shooting method and mobile terminal | |
CN112102164B (en) | Image processing method, device, terminal and storage medium | |
CN107295272A (en) | Image processing method and terminal | |
CN108629727A (en) | Method, terminal and the medium of watermark are generated according to color | |
CN106911892A (en) | Image processing method and terminal | |
CN110119733B (en) | Page identification method and device, terminal equipment and computer readable storage medium | |
CN109086742A (en) | scene recognition method, scene recognition device and mobile terminal | |
CN107426493A (en) | A kind of image pickup method and terminal for blurring background | |
EP3822758A1 (en) | Method and apparatus for setting background of ui control | |
CN107578371A (en) | Image processing method and device, electronic equipment and medium | |
US20210335391A1 (en) | Resource display method, device, apparatus, and storage medium | |
CN107155059A (en) | A kind of image preview method and terminal | |
CN111028276A (en) | Image alignment method and device, storage medium and electronic equipment | |
CN114096994A (en) | Image alignment method and device, electronic equipment and storage medium | |
CN108111747A (en) | A kind of image processing method, terminal device and computer-readable medium | |
CN107608719A (en) | A kind of interface operation method, terminal and computer-readable recording medium | |
CN107426490A (en) | A kind of photographic method and terminal | |
CN106548117B (en) | A kind of face image processing process and device | |
CN110618852B (en) | View processing method, view processing device and terminal equipment | |
CN105635809A (en) | Image processing method and device and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 20171020 |