CN109474786A - Preview image generation method and terminal - Google Patents

Preview image generation method and terminal

Info

Publication number
CN109474786A
Authority
CN
China
Prior art keywords
image
camera
subgraph
object region
fov
Prior art date
Legal status
Granted
Application number
CN201811581777.6A
Other languages
Chinese (zh)
Other versions
CN109474786B (en)
Inventor
王康康
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201811581777.6A
Publication of CN109474786A
Application granted
Publication of CN109474786B
Status: Active
Anticipated expiration


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces
    • H04N23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265: Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present invention provide a preview image generation method and a terminal. The method includes: obtaining a first position of a target object in a first image captured by a first camera; obtaining a second position corresponding to a target composition mode; determining a target image region based on the first position and the second position; obtaining a first sub-image located in the target image region in the first image, and a second sub-image located in the target image region in a second image captured by a second camera; and performing image fusion processing on the first sub-image and the second sub-image to generate a third image, and outputting and displaying the third image as a target preview image. In embodiments of the present invention, intelligent composition is performed when taking a picture: the images captured by two cameras with different fields of view are fused to synthesize a preview image in which the position of the target object makes the overall composition more harmonious and natural. If the user takes a picture with this preview image, a well-composed photo can be obtained.

Description

Preview image generation method and terminal
Technical field
Embodiments of the present invention relate to the field of communication technology, and in particular to a preview image generation method and a terminal.
Background technique
With the development and popularization of intelligent terminals, their functions have grown ever stronger, and the camera has become one of the indispensable functions of an intelligent terminal. When shooting a photo, the quality of the composition has a great influence on the final result: good composition can make a photo outstanding, while bad composition can make a photo feel uncomfortable. The quality of a composition generally depends on the photographer's personal experience; without a good understanding of photography, an ordinary person finds it difficult to achieve a good composition, so the resulting photos are less than ideal.
Summary of the invention
Embodiments of the present invention provide a preview image generation method and a terminal, to solve the prior-art problem that it is difficult for a user to shoot a well-composed picture.
To solve the above technical problem, the present invention adopts the following technical solutions:
In a first aspect, a preview image generation method is provided, applied to a terminal. The terminal includes a first camera and a second camera disposed on the same side of the terminal, where the field of view of the first camera is smaller than the field of view of the second camera.
The method includes:
obtaining a first position of a target object in a first image captured by the first camera;
obtaining a second position corresponding to a target composition mode, where the second position is a desired position of the target object in the first image;
determining a target image region based on the first position and the second position, where the target image region is an image region within the shooting field of view of the first camera, and the position of the target object in the target image region is the same as the second position;
obtaining a first sub-image located in the target image region in the first image, and a second sub-image located in the target image region in a second image captured by the second camera; and
performing image fusion processing on the first sub-image and the second sub-image to generate a third image, and outputting and displaying the third image as a target preview image, where the position of the target object in the target preview image is the same as the second position.
In a second aspect, a terminal is provided. The terminal includes a first camera and a second camera disposed on the same side of the terminal, where the field of view of the first camera is smaller than the field of view of the second camera.
The terminal further includes:
a first obtaining module, configured to obtain a first position of a target object in a first image captured by the first camera;
a second obtaining module, configured to obtain a second position corresponding to a target composition mode, where the second position is a desired position of the target object in the first image;
a first determining module, configured to determine a target image region based on the first position obtained by the first obtaining module and the second position obtained by the second obtaining module, where the target image region is an image region within the shooting field of view of the first camera, and the position of the target object in the target image region is the same as the second position;
a third obtaining module, configured to obtain a first sub-image located, in the first image, in the target image region determined by the first determining module, and a second sub-image located in the target image region in a second image captured by the second camera; and
a processing module, configured to perform image fusion processing on the first sub-image and the second sub-image obtained by the third obtaining module to generate a third image, and to output and display the third image as a target preview image, where the position of the target object in the target preview image is the same as the second position.
In a third aspect, a terminal is provided, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the preview image generation method described above.
In a fourth aspect, a computer-readable storage medium is provided, storing a computer program, where the computer program, when executed by a processor, implements the steps of the preview image generation method described above.
In embodiments of the present invention, to enable a user to shoot well-composed photos, intelligent composition is performed when taking a picture: the images captured by two cameras with different fields of view are fused to synthesize a preview image in which the position of the target object makes the overall composition more harmonious and natural; if the user takes a picture with this preview image, a well-composed photo can be obtained. The technical solution fuses image data from two cameras with different fields of view, and can automatically adjust the preview content so that the user is presented with a preview of the optimal composition mode, without having to move the terminal to match the most suitable composition or to possess rich shooting experience; it thus helps the user shoot excellent photos.
Detailed description of the invention
Fig. 1 is a flowchart of a preview image generation method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of a first image and a target image region provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of a first image, a target image region and a second image in a second coordinate system provided by an embodiment of the present invention;
Fig. 4 is a first block diagram of a terminal provided by an embodiment of the present invention;
Fig. 5 is a second block diagram of a terminal provided by an embodiment of the present invention.
Specific embodiment
Exemplary embodiments of the present invention are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present invention, it should be understood that the present invention may be implemented in various forms and should not be limited by the embodiments set forth herein; rather, these embodiments are provided so that the present invention will be thoroughly understood and its scope fully conveyed to those skilled in the art.
According to one aspect of embodiments of the present invention, a preview image generation method is provided. The method is applied to a terminal, which may be a mobile terminal (such as a mobile phone, a tablet computer, a laptop, a palmtop computer, an in-vehicle terminal or a wearable device), a desktop computer, a smart television, or the like. The terminal includes a first camera and a second camera disposed on the same side of the terminal, where the field of view of the first camera is smaller than the field of view of the second camera. Preferably, the first camera is a standard camera and the second camera is a wide-angle camera; specifically, the wide-angle camera may be an ordinary wide-angle camera or an ultra-wide-angle camera. The first camera and the second camera may both be front cameras or both be rear cameras.
As shown in Fig. 1, the preview image generation method includes:
Step 101: obtain a first position of a target object in a first image captured by the first camera.
Here, the target object is the image of the subject shot by the first camera. The subject may be determined automatically, for example by performing object recognition on the first image and taking a recognized specific object (such as a person or an animal) as the subject. Alternatively, the subject may be specified by the user: an input operation of the user on the first image is received, and the subject is determined according to the input operation; for example, upon receiving a tap operation on the first image, the object at the tapped position is determined as the subject. After the subject is determined, the corresponding image of the subject in the first image (i.e. the target object) and the relative position of the target object in the first image (hereinafter the first position) can be determined.
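For illustration only, a minimal sketch of this subject selection is given below; the detection routine detect_objects and its box format are assumptions for the sketch, not part of this disclosure:

    # Sketch: pick the subject from a user tap (or the first detection),
    # then take its box centre as the first position. detect_objects()
    # is a hypothetical detector returning (x, y, w, h) boxes in the
    # first coordinate system (origin at the lower-left corner).

    def pick_first_position(first_image, tap=None):
        boxes = detect_objects(first_image)  # hypothetical helper
        if tap is not None:
            tx, ty = tap
            boxes = [b for b in boxes
                     if b[0] <= tx <= b[0] + b[2] and b[1] <= ty <= b[1] + b[3]]
        if not boxes:
            return None  # no subject: intelligent composition is not triggered
        x, y, w, h = boxes[0]
        return (x + w / 2, y + h / 2)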
Step 102: obtain a second position corresponding to a target composition mode.
Here, the second position is the desired position of the target object in the first image.
In embodiments of the present invention, after the target object is determined, its second position (i.e. desired or ideal position) in the first image is determined. When the target object is at the second position in the first image, the overall composition of the first image is good. The second position may be determined according to an existing composition mode, such as rule-of-thirds (nine-grid) composition, diagonal composition, triangle composition or spiral composition.
After the second position of the target object in the first image is determined, it may be compared with the first position: if the two are inconsistent, the method proceeds to the subsequent step 103; if they are consistent, the subsequent steps may be skipped.
Step 103: determine a target image region based on the first position and the second position.
Here, the target image region is an image region within the shooting field of view of the first camera and has the same size as the first image. The target image region can be understood as the image range under a shooting angle with good composition, so the following should be satisfied when determining it: the position of the target object in the target image region is the same as the second position of the target object in the first image. In this way, in a photo captured with the target image region, the position of the target object meets the composition requirement, and the overall composition of the photo is good.
For example, as shown in Fig. 2, the solid box denotes the first image 201, the dashed box denotes the target image region 202, the solid circle denotes the first position 203 of the target object in the first image 201, and the dashed circle denotes the second position 204 of the target object in the first image 201. In the target image region 202, the position of the target object is the same as the second position 204 of the target object in the first image 201.
Step 104: obtain a first sub-image located in the target image region in the first image, and a second sub-image located in the target image region in a second image captured by the second camera.
In embodiments of the present invention, after the target image region is determined, the image located in the target image region in the first image (i.e. the first sub-image) and the image located in the target image region in the second image captured by the second camera (i.e. the second sub-image) are obtained.
Step 105: perform image fusion processing on the first sub-image and the second sub-image to generate a third image, and output and display the third image as a target preview image.
Here, the third image lies within the target image region and has the same size as the target image region. The position of the target object in the third image is the same as the second position of the target object in the first image.
As can be seen from Fig. 2, only part of the first image 201 lies within the target image region 202. To obtain the complete image within the target image region, in embodiments of the present invention the second camera is also started to obtain the second image when the first camera is started for shooting. Since the field of view of the second camera is larger than that of the first camera, more picture content can be obtained; therefore, the first image and the second image can be fused to generate a third image that lies within the target image region and has the same size as the target image region.
Since the position of the target object in the third image is the same as the second position of the target object in the first image, the third image is a well-composed image. Therefore, the third image is output and displayed as a preview image, and if the user then takes a picture with this preview image, a well-composed photo can be obtained.
The technical solution in embodiments of the present invention fuses image data from two cameras with different fields of view, and can automatically adjust the preview content so that the user is presented with a preview of the optimal composition mode, without having to move the terminal to match the most suitable composition or to possess rich shooting experience; it thus helps the user shoot excellent photos.
Specifically, obtaining the second position corresponding to the target composition mode in step 102 includes: among at least two preset composition modes, determining the target composition mode whose second position is nearest to the first position; and obtaining the second position of the target object in the first image according to the target composition mode.
In embodiments of the present invention, multiple composition modes are stored in the terminal in advance. When determining the second position of the target object in the first image, one target composition mode may be selected from these composition modes to determine the second position. Preferably, for the selected target composition mode, the second position it determines for the target object in the first image is nearer to the first position than the second positions determined by the other composition modes. In this way, the offset of the target image region relative to the first image is minimized, and the target image region retains as much of the first image as possible.
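For illustration only, a minimal sketch of this selection follows, under the assumption that each preset composition mode reduces to candidate anchor points in the first image; only rule-of-thirds anchors are modelled here:

    # Sketch: among preset composition modes, pick the candidate second
    # position nearest to the first position (x, y). Real modes (diagonal,
    # triangle, spiral) would contribute their own candidate points.

    def nearest_second_position(first_position, w_n, h_n):
        x, y = first_position
        candidates = [(w_n * i / 3, h_n * j / 3) for i in (1, 2) for j in (1, 2)]
        return min(candidates, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)

    # Example: a subject at (100, 80) in a 1200x900 first image.
    print(nearest_second_position((100, 80), 1200, 900))  # -> (400.0, 300.0)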
Further, obtaining the first sub-image located in the target image region in the first image includes: determining, according to the region ranges of the first image and the target image region in a first coordinate system, the image located in the target image region in the first image as the first sub-image.
Here, the first coordinate system is a coordinate system established according to the image captured by the first camera.
In embodiments of the present invention, the first coordinate system is established according to the image captured by the first camera. When obtaining the first sub-image located in the target image region in the first image, the part of the first image lying within the target image region is determined as the first sub-image according to the positions of the first image and the target image region in the first coordinate system. Determining this part according to a coordinate system makes the determination more accurate.
For example, assume the field of view (FOV) of the first camera is fov_n, and the first image is w_n wide and h_n high, where the suffix n stands for "normal". Taking the lower-left corner of the image captured by the first camera as the coordinate origin (0, 0), an image region is expressed as {x-axis coordinate of the lower-left corner, y-axis coordinate of the lower-left corner, width along the x-axis, height along the y-axis}. Under this rule, the image region of the first image is expressed as {0, 0, w_n, h_n}. Assume further that, in the first coordinate system, the first position of the target object in the first image is (x, y) and the second position is (x', y'); then the image region corresponding to the target image region is expressed as {x-x', y-y', w_n, h_n}.
A. If x-x' > 0 and y-y' > 0, the image region of the first image overlapping the target image region (hereinafter the first sub-image) is: {x-x', y-y', w_n-|x-x'|, h_n-|y-y'|}.
B. If x-x' > 0 and y-y' < 0, the first sub-image is: {x-x', 0, w_n-|x-x'|, h_n-|y-y'|}.
C. If x-x' < 0 and y-y' > 0, the first sub-image is: {0, y-y', w_n-|x-x'|, h_n-|y-y'|}.
D. If x-x' < 0 and y-y' < 0, the first sub-image is: {0, 0, w_n-|x-x'|, h_n-|y-y'|}.
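The four cases A to D above are simply the four sign combinations of (x-x', y-y'); for illustration only, the following sketch reproduces them with a single rectangle intersection, using the {x, y, w, h} region format defined above:

    # Sketch: target image region and first sub-image in the first
    # coordinate system. Regions are (x, y, w, h) tuples, origin at the
    # lower-left corner, as in the example above. x2, y2 stand for x', y'.

    def intersect(a, b):
        # Intersection of two (x, y, w, h) rectangles, or None.
        x0, y0 = max(a[0], b[0]), max(a[1], b[1])
        x1 = min(a[0] + a[2], b[0] + b[2])
        y1 = min(a[1] + a[3], b[1] + b[3])
        if x1 <= x0 or y1 <= y0:
            return None
        return (x0, y0, x1 - x0, y1 - y0)

    def first_sub_image_region(x, y, x2, y2, w_n, h_n):
        first_image = (0, 0, w_n, h_n)
        target_region = (x - x2, y - y2, w_n, h_n)  # {x-x', y-y', w_n, h_n}
        return intersect(first_image, target_region)

    # Case A: x-x' = 100 > 0 and y-y' = 100 > 0 gives
    # (x-x', y-y', w_n-|x-x'|, h_n-|y-y'|):
    print(first_sub_image_region(500, 400, 400, 300, 1200, 900))
    # -> (100, 100, 1100, 800)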
Further, obtaining the second sub-image located in the target image region in the second image includes: mapping the first image and the target image region into a second coordinate system; determining, according to the region ranges of the first image, the second image and the target image region in the second coordinate system, a third sub-image located in the target image region in the second image; and determining the part of the third sub-image that does not overlap the first image as the second sub-image.
Here, the second coordinate system is a coordinate system established according to the image captured by the second camera.
In embodiments of the present invention, the second coordinate system is established according to the image captured by the second camera. When obtaining the second sub-image located in the target image region in the second image, the first image and the target image region are first mapped into the second coordinate system, and the second sub-image is then determined according to the region ranges of the first image, the second image and the target image region in the second coordinate system. Determining the part of the second image lying in the target image region according to a coordinate system makes the determination more accurate.
The result of mapping the first image and the target image region into the second coordinate system is shown in Fig. 3. In Fig. 3, the large solid box denotes the second image 205, the small solid box denotes the first image 201 mapped into the second coordinate system, the dashed box denotes the target image region 202 mapped into the second coordinate system, the solid circle denotes the first position 203 of the target object in the first image 201, and the dashed circle denotes the second position 204 of the target object in the first image 201.
After the first image and the target image region are mapped into the second coordinate system, the first image generally coincides with the central area of the second image.
To better understand the above technical solution, an example is given below.
Assume the field of view of the second camera is fov_w, and the second image is w_w wide and h_w high, where the suffix w stands for "wide". Taking the lower-left corner of the image captured by the second camera as the coordinate origin (0, 0), and expressing an image region as {x-axis coordinate of the lower-left corner, y-axis coordinate of the lower-left corner, width along the x-axis, height along the y-axis}, the image region of the second image is expressed as {0, 0, w_w, h_w}. The first image maps into the second coordinate system at the central area of the second image, with image region {(w_w-w_w*fov_n/fov_w)/2, (h_w-h_w*fov_n/fov_w)/2, w_w*fov_n/fov_w, h_w*fov_n/fov_w}. The target image region is expressed as {(w_w-w_w*fov_n/fov_w)/2+(x-x')*(w_w/w_n)*(fov_n/fov_w), (h_w-h_w*fov_n/fov_w)/2+(y-y')*(h_w/h_n)*(fov_n/fov_w), w_w*(fov_n/fov_w), h_w*(fov_n/fov_w)}.
A. If x-x' > 0 and y-y' > 0, the second sub-image is:
{(w_w-w_w*fov_n/fov_w)/2+|x-x'|*(w_w/w_n)*(fov_n/fov_w), (h_w-h_w*fov_n/fov_w)/2+h_w*(fov_n/fov_w), w_w*(fov_n/fov_w), |y-y'|*(h_w/h_n)*(fov_n/fov_w)}
and
{(w_w-w_w*fov_n/fov_w)/2+w_w*(fov_n/fov_w), (h_w-h_w*fov_n/fov_w)/2+|y-y'|*(h_w/h_n)*(fov_n/fov_w), |x-x'|*(w_w/w_n)*(fov_n/fov_w), h_w*(fov_n/fov_w)-|y'-y|*(h_w/h_n)*(fov_n/fov_w)}.
B. If x-x' > 0 and y-y' < 0, the second sub-image is:
{(w_w-w_w*fov_n/fov_w)/2+|x'-x|*(w_w/w_n)*(fov_n/fov_w), (h_w-h_w*fov_n/fov_w)/2-|y'-y|*(h_w/h_n)*(fov_n/fov_w), w_w*(fov_n/fov_w), |y-y'|*(h_w/h_n)*(fov_n/fov_w)}
and
{(w_w-w_w*fov_n/fov_w)/2+w_w*fov_n/fov_w, (h_w-h_w*fov_n/fov_w)/2, |x-x'|*(w_w/w_n)*(fov_n/fov_w), h_w*(fov_n/fov_w)-|y'-y|*(h_w/h_n)*(fov_n/fov_w)}.
C. If x-x' < 0 and y-y' > 0, the second sub-image is:
{(w_w-w_w*fov_n/fov_w)/2-|x'-x|*(w_w/w_n)*(fov_n/fov_w), (h_w-h_w*fov_n/fov_w)/2+h_w*(fov_n/fov_w), w_w*(fov_n/fov_w), |y-y'|*(h_w/h_n)*(fov_n/fov_w)}
and
{(w_w-w_w*fov_n/fov_w)/2-|x'-x|*(w_w/w_n)*(fov_n/fov_w), (h_w-h_w*fov_n/fov_w)/2+|y'-y|*(h_w/h_n)*(fov_n/fov_w), |x-x'|*(w_w/w_n)*(fov_n/fov_w), h_n*(h_w/h_n)*(fov_n/fov_w)-|y'-y|*(h_w/h_n)*(fov_n/fov_w)}.
D. If x-x' < 0 and y-y' < 0, the second sub-image is:
{(w_w-w_w*fov_n/fov_w)/2-|x'-x|*(w_w/w_n)*(fov_n/fov_w), (h_w-h_w*fov_n/fov_w)/2-|y'-y|*(h_w/h_n)*(fov_n/fov_w), w_w*(fov_n/fov_w), |y-y'|*(h_w/h_n)*(fov_n/fov_w)}
and
{(w_w-w_w*fov_n/fov_w)/2-|x'-x|*(w_w/w_n)*(fov_n/fov_w), (h_w-h_w*fov_n/fov_w)/2, |x-x'|*(w_w/w_n)*(fov_n/fov_w), h_n*(h_w/h_n)*(fov_n/fov_w)-|y'-y|*(h_w/h_n)*(fov_n/fov_w)}.
It should be noted that, after the target image region is mapped into the second coordinate system, it may first be judged whether the target image region lies entirely within the image region of the second image. If it does, the subsequent process can continue; if it does not, the image within the target image region cannot be fully obtained, in which case intelligent composition may be skipped and the image captured by the first camera is directly output and displayed as the preview image.
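For illustration only, a sketch of this mapping and containment check follows, with the formulas taken directly from the example above (the function names are ours; x2, y2 stand for x', y'):

    # Sketch: map the first image and the target image region into the
    # second coordinate system, then check whether the target region lies
    # entirely inside the second image; if not, fall back to the plain
    # first-camera preview (no intelligent composition).

    def map_to_second(x, y, x2, y2, w_n, h_n, w_w, h_w, fov_n, fov_w):
        r = fov_n / fov_w
        first_in_second = ((w_w - w_w * r) / 2, (h_w - h_w * r) / 2,
                           w_w * r, h_w * r)
        target_in_second = (first_in_second[0] + (x - x2) * (w_w / w_n) * r,
                            first_in_second[1] + (y - y2) * (h_w / h_n) * r,
                            w_w * r, h_w * r)
        return first_in_second, target_in_second

    def fully_inside(region, w_w, h_w):
        x0, y0, w, h = region
        return x0 >= 0 and y0 >= 0 and x0 + w <= w_w and y0 + h <= h_w

    # fov_n = 78 and fov_w = 120 are assumed example values.
    _, target = map_to_second(500, 400, 400, 300, 1200, 900, 4000, 3000, 78, 120)
    print(fully_inside(target, 4000, 3000))  # True -> continue with fusion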
Specifically, performing fusion processing on the first sub-image and the second sub-image to generate the third image includes:
scaling the second sub-image according to the pixel ratio of the images obtained by the first camera and the second camera; and fusing the first sub-image with the scaled second sub-image to generate the third image.
The first sub-image and the second sub-image are selected by region based on the fields of view of the first camera and the second camera, but for a region spanning the same field of view, the images captured by the two cameras contain different numbers of pixels. Therefore, after the image regions are selected and before they are fused, the second sub-image needs to be scaled based on the pixel ratio of the first camera and the second camera; once the scaling is completed, the first sub-image and the second sub-image are fused, so that the synthesized image looks more natural and coordinated.
Here, scaling the second sub-image according to the correspondence between the fields of view of the first camera and the second camera is specifically:
scaling the second sub-image according to the preset scaling formula, i.e. scaling its width by (w_n*fov_w)/(w_w*fov_n) and its height by (h_n*fov_w)/(h_w*fov_n), so that a region spanning the same field of view contains the same number of pixels as in the first image.
Here, fov_n denotes the field-of-view value of the first camera, w_n denotes the width value of the first image (i.e. the width of the field of view of the first camera), and h_n denotes the height value of the first image (i.e. the height of the field of view of the first camera); fov_w denotes the field-of-view value of the second camera, w_w denotes the width value of the second image (i.e. the width of the field of view of the second camera), and h_w denotes the height value of the second image (i.e. the height of the field of view of the second camera).
The preset scaling formula is determined according to the pixel ratio of the images obtained by the first camera and the second camera; scaling the second sub-image by the preset formula makes the processing more accurate.
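For illustration only, a sketch of the scaling and fusion step follows; it assumes OpenCV (cv2) is available, that the scale factors reconstructed above are correct, and it simply pastes the two sub-images rather than applying any particular blending algorithm:

    import cv2
    import numpy as np

    # Sketch: scale the second sub-image to the first camera's pixel
    # density, then paste both sub-images onto a canvas the size of the
    # target image region. A real implementation would blend the seam.

    def fuse(first_sub, second_sub, w_n, h_n, w_w, h_w, fov_n, fov_w,
             first_offset, second_offset):
        # Reconstructed scaling: after scaling, the same field of view
        # covers the same number of pixels as in the first camera's image.
        sx = (w_n * fov_w) / (w_w * fov_n)
        sy = (h_n * fov_w) / (h_w * fov_n)
        scaled = cv2.resize(second_sub, None, fx=sx, fy=sy)

        third = np.zeros((h_n, w_n, 3), dtype=np.uint8)  # target-region canvas
        for img, (ox, oy) in ((first_sub, first_offset), (scaled, second_offset)):
            h, w = img.shape[:2]
            third[oy:oy + h, ox:ox + w] = img  # simple paste, no blending
        return third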
Further, before obtaining the first position of the target object in the first image captured by the first camera in step 101, the method further includes:
detecting whether a target object exists in the image captured by the first camera; when a target object exists in the first image captured by the first camera, controlling the second camera to be in an active state; and when no target object exists in the first image captured by the first camera, controlling the second camera to be in a dormant state.
In embodiments of the present invention, after receiving a camera start instruction, the terminal can start the first camera to capture images and open the preview as usual, and detect whether the image captured by the first camera contains the image of a subject, i.e. whether the first camera has captured a subject.
When the first camera has not captured a subject, the second camera is controlled to be in a dormant state, i.e. powered on but not capturing images. Since intelligent composition is generally not triggered when the first camera has not captured a subject, and the second camera is only useful once intelligent composition is triggered, controlling the second camera to be dormant in this case both reduces the power consumption of keeping it always on and satisfies the need for quick start at any time. If the second camera is already in the active state when the first camera has not captured a subject, the second camera is switched from the active state to the dormant state.
When the first camera captures a subject, it can be considered that intelligent composition needs to be triggered; the second camera is then controlled to be in the active state and to capture image data.
Preferably, after receiving the camera start instruction, the terminal may first control the second camera to be in the dormant state, and switch it from the dormant state to the active state only when the first camera captures a subject, thereby reducing power consumption.
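For illustration only, this activation policy can be sketched as follows; the camera-control object and its methods are assumptions, since real platforms expose camera power control differently:

    # Sketch: keep the second (wide) camera dormant until the first camera
    # captures a subject. second_camera is a hypothetical object with
    # activate() / sleep() methods and an `active` flag.

    def update_second_camera(second_camera, subject_detected):
        if subject_detected and not second_camera.active:
            second_camera.activate()  # powered on and capturing frames
        elif not subject_detected and second_camera.active:
            second_camera.sleep()     # powered on but not capturing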
In summary, in embodiments of the present invention, to enable a user to shoot well-composed photos, intelligent composition is performed when taking a picture: the images captured by two cameras with different fields of view are fused to synthesize a preview image in which the position of the target object makes the overall composition more harmonious and natural; if the user takes a picture with this preview image, a well-composed photo can be obtained. The technical solution fuses image data from two cameras with different fields of view, and can automatically adjust the preview content so that the user is presented with a preview of the optimal composition mode, without having to move the terminal to match the most suitable composition or to possess rich shooting experience; it thus helps the user shoot excellent photos.
According to another aspect of embodiments of the present invention, a terminal is provided, which can implement the details of the above preview image generation method and achieve the same effect.
The terminal includes a first camera and a second camera disposed on the same side of the terminal, where the field of view of the first camera is smaller than the field of view of the second camera.
As shown in Fig. 4, the terminal 400 further includes:
a first obtaining module 401, configured to obtain a first position of a target object in a first image captured by the first camera;
a second obtaining module 402, configured to obtain a second position corresponding to a target composition mode,
where the second position is a desired position of the target object in the first image;
a first determining module 403, configured to determine a target image region based on the first position obtained by the first obtaining module 401 and the second position obtained by the second obtaining module 402,
where the target image region is an image region within the shooting field of view of the first camera, and the position of the target object in the target image region is the same as the second position;
a third obtaining module 404, configured to obtain a first sub-image located, in the first image, in the target image region determined by the first determining module 403, and a second sub-image located in the target image region in a second image captured by the second camera; and
a processing module 405, configured to perform image fusion processing on the first sub-image and the second sub-image obtained by the third obtaining module 404 to generate a third image, and to output and display the third image as a target preview image,
where the position of the target object in the target preview image is the same as the second position.
Further, the second obtaining module 402 includes:
a first determination unit, configured to determine, among at least two preset composition modes, the target composition mode whose second position is nearest to the first position; and
a first obtaining unit, configured to obtain the second position of the target object in the first image according to the target composition mode determined by the first determination unit.
Further, the third obtaining module 404 includes:
a second determination unit, configured to determine, according to the region ranges of the first image and the target image region in a first coordinate system, the image located in the target image region in the first image as the first sub-image,
where the first coordinate system is a coordinate system established according to the image captured by the first camera.
Further, the third obtaining module 404 includes:
a mapping unit, configured to map the first image and the target image region into a second coordinate system;
a third determination unit, configured to determine, after the mapping unit maps the first image and the target image region into the second coordinate system, a third sub-image located in the target image region in the second image; and
a fourth determination unit, configured to determine the part of the third sub-image determined by the third determination unit that does not overlap the first image as the second sub-image,
where the second coordinate system is a coordinate system established according to the image captured by the second camera.
Further, the processing module 405 includes:
a scaling processing unit, configured to scale the second sub-image according to the pixel ratio of the images captured by the first camera and the second camera; and
a processing unit, configured to fuse the first sub-image with the second sub-image scaled by the scaling processing unit to generate the third image.
Specifically, the scaling processing unit is configured to:
scale the second sub-image according to the preset scaling formula, i.e. scale its width by (w_n*fov_w)/(w_w*fov_n) and its height by (h_n*fov_w)/(h_w*fov_n).
Here, the preset scaling formula is determined according to the pixel ratio of the images obtained by the first camera and the second camera.
Here, fov_n denotes the field-of-view value of the first camera, w_n denotes the width value of the first image, and h_n denotes the height value of the first image; fov_w denotes the field-of-view value of the second camera, w_w denotes the width value of the second image, and h_w denotes the height value of the second image.
Specifically, the terminal further includes:
a detection module, configured to detect whether a target object exists in the image captured by the first camera;
a first control module, configured to control the second camera to be in an active state when the detection module detects that a target object exists in the image captured by the first camera; and
a second control module, configured to control the second camera to be in a dormant state when the detection module detects that no target object exists in the image captured by the first camera.
In embodiments of the present invention, to enable a user to shoot well-composed photos, intelligent composition is performed when taking a picture: the images captured by two cameras with different fields of view are fused to synthesize a preview image in which the position of the target object makes the overall composition more harmonious and natural; if the user takes a picture with this preview image, a well-composed photo can be obtained. The technical solution fuses image data from two cameras with different fields of view, and can automatically adjust the preview content so that the user is presented with a preview of the optimal composition mode, without having to move the terminal to match the most suitable composition or to possess rich shooting experience; it thus helps the user shoot excellent photos.
Fig. 5 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention.
The terminal 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will understand that the terminal structure shown in Fig. 5 does not limit the terminal; the terminal may include more or fewer components than shown, combine certain components, or arrange the components differently. In embodiments of the present invention, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a laptop, a palmtop computer, an in-vehicle terminal, a wearable device, a pedometer, and the like.
The processor 510 is configured to: obtain a first position of a target object in a first image captured by the first camera, and obtain a second position corresponding to a target composition mode; determine a target image region based on the first position and the second position; obtain a first sub-image located in the target image region in the first image and a second sub-image located in the target image region in a second image captured by the second camera; perform image fusion processing on the first sub-image and the second sub-image to generate a third image; and output and display the third image as a target preview image through the display unit 506.
Here, the second position is a desired position of the target object in the first image; the target image region is an image region within the shooting field of view of the first camera; the position of the target object in the target image region is the same as the second position; and the position of the target object in the target preview image is the same as the second position.
In embodiments of the present invention, to enable a user to shoot well-composed photos, intelligent composition is performed when taking a picture: the images captured by two cameras with different fields of view are fused to synthesize a preview image in which the position of the target object makes the overall composition more harmonious and natural; if the user takes a picture with this preview image, a well-composed photo can be obtained. The technical solution fuses image data from two cameras with different fields of view, and can automatically adjust the preview content so that the user is presented with a preview of the optimal composition mode, without having to move the terminal to match the most suitable composition or to possess rich shooting experience; it thus helps the user shoot excellent photos.
It should be understood that, in embodiments of the present invention, the radio frequency unit 501 may be used to receive and send signals during information transmission and reception or during a call; specifically, it receives downlink data from a base station and passes it to the processor 510 for processing, and sends uplink data to the base station. Generally, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with networks and other devices through a wireless communication system.
The terminal provides the user with wireless broadband Internet access through the network module 502, for example helping the user send and receive e-mail, browse web pages, and access streaming media.
The audio output unit 503 can convert audio data received by the radio frequency unit 501 or the network module 502, or stored in the memory 509, into an audio signal and output it as sound. Moreover, the audio output unit 503 can also provide audio output related to a specific function performed by the terminal 500 (for example, a call signal reception sound or a message reception sound). The audio output unit 503 includes a loudspeaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive audio or video signals. The input unit 504 may include a graphics processing unit (GPU) 5041 and a microphone 5042. The graphics processing unit 5041 processes image data of static pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 506, stored in the memory 509 (or another storage medium), or sent via the radio frequency unit 501 or the network module 502. The microphone 5042 can receive sound and process it into audio data; in a telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 501.
The terminal 500 further includes at least one sensor 505, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 5061 according to the ambient light, and the proximity sensor can turn off the display panel 5061 and/or the backlight when the terminal 500 is moved to the ear. As a motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when static; it can be used to identify the terminal posture (such as portrait/landscape switching, related games, magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tapping). The sensor 505 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described here.
The display unit 506 is used to display information input by the user or provided to the user. The display unit 506 may include a display panel 5061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also called a touch screen, collects the user's touch operations on or near it (such as operations performed by the user with a finger, a stylus or any other suitable object or accessory on or near the touch panel 5071). The touch panel 5071 may include a touch detection apparatus and a touch controller: the touch detection apparatus detects the user's touch orientation and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 510, and receives and executes the commands sent by the processor 510. In addition, the touch panel 5071 may be realized in multiple types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 5071, the user input unit 507 may also include other input devices 5072, which may include, but are not limited to, a physical keyboard, function keys (such as a volume control key or a switch key), a trackball, a mouse, and a joystick, which are not described here.
Further, the touch panel 5071 may cover the display panel 5061. After detecting a touch operation on or near it, the touch panel 5071 transmits it to the processor 510 to determine the type of the touch event, and the processor 510 then provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in Fig. 5 the touch panel 5071 and the display panel 5061 are two independent components realizing the input and output functions of the terminal, in some embodiments the touch panel 5071 and the display panel 5061 may be integrated to realize the input and output functions of the terminal, which is not specifically limited here.
The interface unit 508 is an interface through which an external apparatus is connected to the terminal 500. For example, the external apparatus may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting an apparatus having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (for example, data information or electric power) from an external apparatus and transmit the received input to one or more elements in the terminal 500, or to transmit data between the terminal 500 and an external apparatus.
The memory 509 may be used to store software programs and various data. The memory 509 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and an application program required by at least one function (such as a sound playing function or an image playing function), while the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book). In addition, the memory 509 may include a high-speed random access memory, and may also include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
The processor 510 is the control center of the terminal; it connects the various parts of the entire terminal using various interfaces and lines, and executes the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 509 and calling the data stored in the memory 509, thereby monitoring the terminal as a whole. The processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs and so on, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 510.
The terminal 500 may further include a power supply 511 (such as a battery) supplying power to the various components; preferably, the power supply 511 may be logically connected to the processor 510 through a power management system, thereby realizing functions such as charge management, discharge management and power consumption management through the power management system.
In addition, the terminal 500 includes some functional modules not shown, which are not described here.
Preferably, an embodiment of the present invention further provides a terminal, including a processor 510, a memory 509, and a computer program stored on the memory 509 and executable on the processor 510. When executed by the processor 510, the computer program implements each process of the above preview image generation method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above preview image generation method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
It should be noted that, in this document, the terms "include", "comprise" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or apparatus including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or apparatus including that element.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be realized by means of software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device or the like) to execute the methods described in the embodiments of the present invention.
The embodiments of the present invention are described above with reference to the accompanying drawings, but the present invention is not limited to the above specific embodiments. The above specific embodiments are merely illustrative rather than restrictive; inspired by the present invention, those of ordinary skill in the art can also make many forms without departing from the spirit of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.

Claims (16)

1. A preview image generation method, applied to a terminal, the terminal comprising a first camera and a second camera disposed on the same side of the terminal, the field of view of the first camera being smaller than the field of view of the second camera, characterized in that the method comprises:
obtaining a first position of a target object in a first image captured by the first camera;
obtaining a second position corresponding to a target composition mode, wherein the second position is a desired position of the target object in the first image;
determining a target image region based on the first position and the second position, wherein the target image region is an image region within the shooting field of view of the first camera, and the position of the target object in the target image region is the same as the second position;
obtaining a first sub-image located in the target image region in the first image, and a second sub-image located in the target image region in a second image captured by the second camera; and
performing image fusion processing on the first sub-image and the second sub-image to generate a third image, and outputting and displaying the third image as a target preview image, wherein the position of the target object in the target preview image is the same as the second position.
2. The method according to claim 1, characterized in that obtaining the second position corresponding to the target composition mode comprises:
among at least two preset composition modes, determining the target composition mode whose second position is nearest to the first position; and
obtaining the second position of the target object in the first image according to the target composition mode.
3. The method according to claim 1, characterized in that obtaining the first sub-image located in the target image region in the first image comprises:
determining, according to the region ranges of the first image and the target image region in a first coordinate system, the image located in the target image region in the first image as the first sub-image;
wherein the first coordinate system is a coordinate system established according to the image captured by the first camera.
4. The method according to claim 3, characterized in that obtaining the second sub-image located in the target image region in the second image captured by the second camera comprises:
mapping the first image and the target image region into a second coordinate system;
determining, according to the region ranges of the first image, the second image and the target image region in the second coordinate system, a third sub-image located in the target image region in the second image; and
determining the image of the third sub-image that does not overlap the first image as the second sub-image;
wherein the second coordinate system is a coordinate system established according to the image captured by the second camera.
5. The method according to claim 4, characterized in that performing image fusion processing on the first sub-image and the second sub-image to generate the third image comprises:
scaling the second sub-image according to the pixel ratio of the images captured by the first camera and the second camera; and
fusing the first sub-image with the scaled second sub-image to generate the third image.
6. The method according to claim 5, characterized in that scaling the second sub-image according to the pixel ratio of the images captured by the first camera and the second camera comprises:
scaling the second sub-image according to a preset scaling formula, i.e. scaling its width by (w_n*fov_w)/(w_w*fov_n) and its height by (h_n*fov_w)/(h_w*fov_n), wherein the preset scaling formula is determined according to the pixel ratio of the images captured by the first camera and the second camera;
wherein fov_n denotes the field-of-view value of the first camera, w_n denotes the width value of the first image, and h_n denotes the height value of the first image; fov_w denotes the field-of-view value of the second camera, w_w denotes the width value of the second image, and h_w denotes the height value of the second image.
7. The method according to claim 1, characterized in that before obtaining the first position of the target object in the first image captured by the first camera, the method further comprises:
detecting whether a target object exists in the first image captured by the first camera;
when a target object exists in the first image captured by the first camera, controlling the second camera to be in an active state; and
when no target object exists in the first image captured by the first camera, controlling the second camera to be in a dormant state.
8. A terminal, comprising a first camera and a second camera disposed on the same side of the terminal, the field of view of the first camera being smaller than the field of view of the second camera, wherein the terminal further comprises:
a first obtaining module, configured to obtain a first position of a target object in a first image acquired by the first camera;
a second obtaining module, configured to obtain a second position corresponding to a target composition mode; wherein the second position is a desired position of the target object in the first image;
a first determining module, configured to determine a target image region based on the first position obtained by the first obtaining module and the second position obtained by the second obtaining module; wherein the target image region is an image region within the shooting field of view of the first camera, and the position of the target object in the target image region is the same as the second position;
a third obtaining module, configured to obtain a first sub-image located, in the first image, in the target image region determined by the first determining module, and a second sub-image located in the target image region in a second image acquired by the second camera;
a processing module, configured to perform image fusion processing on the first sub-image and the second sub-image obtained by the third obtaining module to generate a third image, and to output and display the third image as a target preview image; wherein the position of the target object in the target preview image is the same as the second position.
9. The terminal according to claim 8, wherein the second obtaining module comprises:
a first determining unit, configured to determine, among at least two preset composition modes, the target composition mode whose corresponding second position is closest to the first position;
a first obtaining unit, configured to obtain, according to the target composition mode determined by the first determining unit, the second position of the target object in the first image.
10. The terminal according to claim 8, wherein the third obtaining module comprises:
a second determining unit, configured to determine, according to the ranges of the first image and the target image region in a first coordinate system, the image within the target image region in the first image as the first sub-image;
wherein the first coordinate system is a coordinate system established according to the images acquired by the first camera.
11. The terminal according to claim 10, wherein the third obtaining module comprises:
a mapping unit, configured to map the first image and the target image region into a second coordinate system;
a third determining unit, configured to determine, after the mapping unit has mapped the first image and the target image region into the second coordinate system, a third sub-image located in the target image region in the second image;
a fourth determining unit, configured to determine, as the second sub-image, the part of the third sub-image determined by the third determining unit that does not overlap the first image;
wherein the second coordinate system is a coordinate system established according to the images acquired by the second camera.
12. The terminal according to claim 11, wherein the processing module comprises:
a scaling processing unit, configured to scale the second sub-image according to the pixel ratio of the images acquired by the first camera and the second camera;
a processing unit, configured to fuse the first sub-image with the second sub-image scaled by the scaling processing unit to generate the third image.
13. The terminal according to claim 12, wherein the scaling processing unit is specifically configured to:
scale the second sub-image according to a preset scaling formula;
wherein the preset scaling formula is determined according to the pixel ratio of the images acquired by the first camera and the second camera;
wherein fov_n denotes the field-of-view value of the first camera, w_n denotes the width value of the first image, and h_n denotes the height value of the first image; fov_w denotes the field-of-view value of the second camera, w_w denotes the width value of the second image, and h_w denotes the height value of the second image.
14. The terminal according to claim 8, further comprising:
a detection module, configured to detect whether a target object is present in the image acquired by the first camera;
a first control module, configured to control the second camera to be in an activated state in the case where the detection module detects that a target object is present in the image acquired by the first camera;
a second control module, configured to control the second camera to be in a dormant state in the case where the detection module detects that no target object is present in the image acquired by the first camera.
15. A terminal, comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the preview image generation method according to any one of claims 1 to 7.
16. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the preview image generation method according to any one of claims 1 to 7.
CN201811581777.6A 2018-12-24 2018-12-24 Preview image generation method and terminal Active CN109474786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811581777.6A CN109474786B (en) 2018-12-24 2018-12-24 Preview image generation method and terminal


Publications (2)

Publication Number Publication Date
CN109474786A (en) 2019-03-15
CN109474786B (en) 2021-07-23

Family

ID=65676497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811581777.6A Active CN109474786B (en) 2018-12-24 2018-12-24 Preview image generation method and terminal

Country Status (1)

Country Link
CN (1) CN109474786B (en)



Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6915432B1 (en) * 1999-01-29 2005-07-05 International Business Machines Corporation Composing a realigned image
CN101843092A (en) * 2008-09-08 2010-09-22 索尼公司 Image pickup apparatus, image pickup method, and program
CN103797787A (en) * 2012-09-10 2014-05-14 华为技术有限公司 Image processing method and image processing device
CN103905713A (en) * 2012-12-26 2014-07-02 联想(北京)有限公司 Control method, control device and electronic equipment
US9369612B2 (en) * 2013-08-20 2016-06-14 Hanwha Techwin Co., Ltd. Image fusion system and method
CN104754228A (en) * 2015-03-27 2015-07-01 广东欧珀移动通信有限公司 Mobile terminal and method for taking photos by using cameras of mobile terminal
CN106888349A (en) * 2017-03-30 2017-06-23 努比亚技术有限公司 A kind of image pickup method and device
CN107395971A (en) * 2017-07-31 2017-11-24 努比亚技术有限公司 A kind of image-pickup method, equipment and computer-readable recording medium
CN107592467A (en) * 2017-10-20 2018-01-16 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN108513070A (en) * 2018-04-04 2018-09-07 维沃移动通信有限公司 A kind of image processing method, mobile terminal and computer readable storage medium
CN108377339A (en) * 2018-05-07 2018-08-07 维沃移动通信有限公司 A kind of photographic method and camera arrangement
CN109068055A (en) * 2018-08-10 2018-12-21 维沃移动通信有限公司 A kind of patterning process, terminal and storage medium
CN111541845A (en) * 2020-04-30 2020-08-14 维沃移动通信(杭州)有限公司 Image processing method and device and electronic equipment

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215048A (en) * 2019-07-12 2021-01-12 中国移动通信有限公司研究院 3D target detection method and device and computer readable storage medium
CN112215048B (en) * 2019-07-12 2024-03-22 中国移动通信有限公司研究院 3D target detection method, device and computer readable storage medium
CN110830715A (en) * 2019-10-31 2020-02-21 维沃移动通信(杭州)有限公司 Photographing method and electronic equipment
CN111277760A (en) * 2020-02-28 2020-06-12 Oppo广东移动通信有限公司 Shooting composition method, terminal and storage medium
CN111277760B (en) * 2020-02-28 2022-02-01 Oppo广东移动通信有限公司 Shooting composition method, terminal and storage medium
EP4106315A4 (en) * 2020-03-20 2023-08-16 Huawei Technologies Co., Ltd. Photographing method and device
CN114065881A (en) * 2020-07-29 2022-02-18 阿里巴巴集团控股有限公司 Relationship establishing method, object identifying device, object identifying prompting method, object identifying device, object identifying equipment and storage medium
CN114065881B (en) * 2020-07-29 2024-04-02 浙江莲荷科技有限公司 Relationship establishment, object identification and prompting methods, devices, equipment and storage medium
CN111770277A (en) * 2020-07-31 2020-10-13 RealMe重庆移动通信有限公司 Auxiliary shooting method, terminal and storage medium
CN112637515A (en) * 2020-12-22 2021-04-09 维沃软件技术有限公司 Shooting method and device and electronic equipment

Also Published As

Publication number Publication date
CN109474786B (en) 2021-07-23

Similar Documents

Publication Publication Date Title
CN111541845B (en) Image processing method and device and electronic equipment
CN109474786A (en) A kind of preview image generation method and terminal
CN110427110B (en) Live broadcast method and device and live broadcast server
WO2019174628A1 (en) Photographing method and mobile terminal
WO2021115479A1 (en) Display control method and electronic device
CN109361865A (en) A kind of image pickup method and terminal
CN109246360A (en) A kind of reminding method and mobile terminal
CN108632413B (en) A kind of photographic method and mobile terminal
CN107483836B (en) A kind of image pickup method and mobile terminal
CN109348020A (en) A kind of photographic method and mobile terminal
CN107864336B (en) A kind of image processing method, mobile terminal
CN108924412A (en) A kind of image pickup method and terminal device
CN108449546A (en) A kind of photographic method and mobile terminal
CN108683850A (en) A kind of shooting reminding method and mobile terminal
CN108881544A (en) A kind of method taken pictures and mobile terminal
CN110839174A (en) Image processing method and device, computer equipment and storage medium
CN108984082A (en) A kind of image display method and mobile terminal
CN110110571A (en) A kind of barcode scanning method and mobile terminal
CN109120800A (en) A kind of application icon method of adjustment and mobile terminal
CN110290331A (en) A kind of screen control method and terminal
CN108174110A (en) A kind of photographic method and flexible screen terminal
CN109104564A (en) A kind of shooting reminding method and terminal device
CN107959755A (en) A kind of photographic method and mobile terminal
CN108317992A (en) A kind of object distance measurement method and terminal device
CN108156386B (en) Panoramic photographing method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant