CN104135609A - A method and a device for assisting in photographing, and a terminal - Google Patents
- Publication number
- CN104135609A CN104135609A CN201410302630.4A CN201410302630A CN104135609A CN 104135609 A CN104135609 A CN 104135609A CN 201410302630 A CN201410302630 A CN 201410302630A CN 104135609 A CN104135609 A CN 104135609A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention provides a method and a device for assisting in photographing, and a terminal. The method includes: acquiring an image frame in real time while shooting a scene; matching, in real time, the currently acquired image frame against a template image corresponding to the scene; and outputting, in real time, the match degree obtained by the matching. According to the technical solution provided by the invention, outputting the match degree in real time helps the user obtain multiple photographs in which the position and posture of the photographed subject are essentially consistent, thereby effectively improving the user experience.
Description
Technical field
The present invention relates to the field of communications, and in particular to a method, a device, and a terminal for assisting in photographing.
Background art
In work and daily life, the following photographing need often arises: the same subject must be photographed once at fixed intervals, and the resulting photographs are assembled into an image sequence for comparison or record-keeping. For example, the same scene may be shot from the same position every day, and the variation of the weather analyzed from the captured scenes.
Because the images are to be compared, or observed together to better effect, the position and posture of the subject must be essentially consistent across the images, and the position and angle of the capturing device must be essentially consistent as well. In the related art, this group of images is obtained by fixing the camera at one position and in one posture. When the interval between shots is short this approach is easy to operate, but when the interval is long it is difficult to keep the camera fixed at one position and in one posture, so the approach may not be practical.
Therefore, how to help the user obtain, during shooting, multiple photographs in which the position and posture of the subject are essentially consistent is a technical problem that urgently needs to be solved.
Summary of the invention
An object of the present invention is to provide a method, a device, and a terminal for assisting in photographing, so as to solve at least one of the problems described above.
According to a first aspect of the invention, a method for assisting in photographing is provided.
The method for assisting in photographing according to the present invention comprises: acquiring an image frame in real time while a scene is being shot; matching, in real time, the currently acquired image frame against a template image corresponding to the scene; and outputting, in real time, the match degree obtained by the matching.
Matching the image frame against the template image in real time comprises: acquiring first feature points of the template image; acquiring second feature points of the image frame; and matching the first feature points against the second feature points in real time to obtain the match degree.
Acquiring the first feature points and the second feature points comprises: extracting contour features from the image frame and the template image respectively; deleting, from the extracted contour features, those whose length is below a first threshold; deleting image corner points whose response is below a second threshold; taking the contour features and corner points that survive the deletion as feature points; and screening the feature points until they are evenly distributed, then determining the edge direction in the neighborhood of each feature point to obtain the final feature points.
Matching the image frame against the template image in real time comprises: dividing, by analyzing image texture, the image frame and the template image each into multiple regions of uniform texture; and matching the divided regions of the image frame and the template image block by block.
Before matching the image frame against the template image in real time, the method further comprises: setting, in response to a user operation, a matching weight for each selected region of the template image. Matching the image frame against the template image in real time then comprises: determining the match degree according to the matching weights.
Before matching the currently acquired image frame against a preset template image in real time, the method further comprises: after the scene is photographed for the first time, taking the resulting photograph as the template image; and, from the second shot of the scene onward, averaging all photographs taken so far after each shot and taking the averaged photograph as the template image.
Before acquiring image frames in real time, the method further comprises: processing the template image to obtain a shooting reference image, where the shooting reference image is either the first feature points extracted from the template image or a semi-transparent copy of the template image; and superimposing the shooting reference image on the image captured by the terminal in real time.
While outputting the match degree obtained by the matching, the method further comprises: obtaining, through feature point matching, a transformation matrix that maps the image frame onto the template image; and transforming the current image frame with the transformation matrix so that it takes on the pose of the template image.
According to a second aspect of the invention, a device for assisting in photographing is provided.
The device for assisting in photographing according to the present invention comprises: a first acquisition module, configured to acquire image frames in real time while a scene is being shot; a matching module, configured to match, in real time, the currently acquired image frame against a template image corresponding to the scene; and an output module, configured to output, in real time, the match degree obtained by the matching.
The matching module comprises: a first acquiring unit, configured to extract the first feature points of the template image; a second acquiring unit, configured to acquire the second feature points of the image frame; and a first matching unit, configured to match the first feature points against the second feature points in real time to obtain the match degree.
The matching module comprises: a texture analysis module, configured to divide, by analyzing image texture, the image frame and the template image each into multiple regions of uniform texture; and a second matching unit, configured to match the divided regions of the image frame and the template image block by block.
The device further comprises: a setting module, configured to set, in response to a user operation, a matching weight for each selected region of the template image. The matching module further comprises: a determining unit, configured to determine the match degree according to the matching weights.
The device further comprises: a first determination module, configured to take the photograph obtained after the scene is photographed for the first time as the template image; and a second determination module, configured to, from the second shot of the scene onward, average all photographs taken so far after each shot and take the averaged photograph as the template image.
The device further comprises: a second acquisition module, configured to process the template image to obtain a shooting reference image, where the shooting reference image comprises the first feature points extracted from the template image, or is a semi-transparent copy of the template image; and a superimposing module, configured to superimpose the shooting reference image on the image captured by the terminal in real time.
The device further comprises: a third acquisition module, configured to obtain, through feature point matching, a transformation matrix that maps the image frame onto the template image; and a processing module, configured to transform the current image frame with the transformation matrix so that it takes on the pose of the template image.
According to a third aspect of the invention, a terminal is provided.
The terminal according to the present invention comprises: one or more processors; a memory; and one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules being configured to: acquire image frames in real time while a scene is being shot; match, in real time, the currently acquired image frame against a template image corresponding to the scene; and output, in real time, the match degree obtained by the matching.
The technical solution provided by the embodiments of the present disclosure may bring the following benefit. In the prior art, when the interval between shots is long it is difficult to keep the camera fixed at one position and in one posture, so that approach is impractical. By contrast, while a scene is being shot, the currently acquired image frame is matched in real time against the template image corresponding to the scene and the resulting match degree is output in real time; this helps the user obtain multiple photographs in which the position and posture of the subject are essentially consistent, thereby effectively improving the user experience.
It should be understood that the general description above and the detailed description below are merely exemplary and do not limit the disclosure.
Brief description of the drawings
Fig. 1 is a flow chart of a method for assisting in photographing according to an embodiment of the present invention;
Fig. 2 is a flow chart of a method for assisting in photographing according to a first embodiment of the present invention;
Fig. 3 is a flow chart of a method for assisting in photographing according to a second embodiment of the present invention;
Fig. 4 is a structural block diagram of a device for assisting in photographing according to an embodiment of the present invention;
Fig. 5 is a structural block diagram of a device for assisting in photographing according to a first embodiment of the present invention;
Fig. 6 is a structural block diagram of a device for assisting in photographing according to a second embodiment of the present invention; and
Fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and, together with the specification, serve to explain the principles of the invention.
Detailed description of the embodiments
The present invention is described in further detail below through specific embodiments and with reference to the accompanying drawings.
Fig. 1 is a flow chart of a method for assisting in photographing according to an embodiment of the present invention. As shown in Fig. 1, the method mainly comprises the following steps:
In step S101, while a scene is being shot, image frames are acquired in real time;
In step S103, the currently acquired image frame is matched, in real time, against a template image corresponding to the scene;
In step S105, the match degree obtained by the matching is output in real time.
In the method shown in Fig. 1, while a scene is being shot, the currently acquired image frame is matched in real time against the template image corresponding to the scene and the match degree obtained by the matching is output in real time. This helps the user obtain multiple photographs in which the position and posture of the subject are essentially consistent, thereby effectively improving the user experience.
The template image mentioned above may be determined as follows: after the scene is photographed for the first time, the resulting photograph is taken as the template image; from the second shot of the scene onward, after each shot, a weighted average of all photographs taken so far is computed and the averaged photograph is taken as the template image.
Through this processing, from the terminal's second shot onward, the template image is updated after each shot to the weighted average of all photographs taken so far, which effectively ensures that the position and posture of the subject remain close across the photographs taken of the scene.
The images may be averaged as follows. Taking as an example a user who has taken 10 photographs in total, each pixel is processed as follows: the pixel values (for example, the RGB values) of the corresponding pixel in every photograph are added together and divided by 10, giving the average pixel value. After every pixel has been processed, the processed pixels together form the average image. The averaging may be weighted or unweighted.
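The averaging procedure above can be sketched as follows. This is a minimal illustration assuming NumPy and equally sized uint8 RGB photographs; the function name is hypothetical and not part of the patent.

```python
import numpy as np

def average_photos(photos, weights=None):
    """Average equally sized RGB photographs pixel by pixel.

    photos  : list of HxWx3 uint8 arrays (the photographs taken so far)
    weights : optional per-photo weights; uniform averaging if omitted
    """
    stack = np.stack([p.astype(np.float64) for p in photos])
    # Add the corresponding pixel values and divide by the photo count
    # (or by the weight sum, for the weighted variant).
    avg = np.average(stack, axis=0, weights=weights)
    return np.clip(avg.round(), 0, 255).astype(np.uint8)
```

With ten photographs and no weights this is exactly the add-and-divide-by-10 procedure described above; passing a weight list gives the weighted variant.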
Of course, in a concrete implementation, the photograph obtained after the first shot may also be kept as the template image throughout; this variant is simpler and easier to implement.
Before acquiring image frames in real time, the following processing may also be performed: the template image is processed to obtain a shooting reference image, where the shooting reference image is either the first feature points extracted from the template image (here, the first feature points are the feature points extracted from the template image, and the second feature points are the feature points extracted from the currently acquired image frame) or a semi-transparent copy of the template image; the shooting reference image is then superimposed on the image captured by the terminal in real time — specifically, the shooting reference image is overlaid on top of the live image and presented to the user.
To better help the user obtain a photograph in which the position and posture of the subject are close to those in the template image, the shooting reference image can be superimposed on the live image to guide the user into the optimal position, yielding a photograph that matches the template image more closely.
The shooting reference image used to guide the user into the optimal position can be obtained in several ways. Mode one: the first feature points are extracted from the template image and overlaid on top of the live image presented to the user, and the user continually adjusts the shooting position according to the first feature points while shooting the scene. Mode two: the template image is made semi-transparent, the semi-transparent template image is overlaid on top of the live image, and the user continually adjusts the shooting position according to it while shooting the scene.
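Mode two's semi-transparent overlay reduces to alpha blending of the template over the viewfinder frame. A minimal sketch assuming NumPy and same-sized frames; the function name and the 0.5 default opacity are illustrative assumptions.

```python
import numpy as np

def overlay_reference(frame, template, alpha=0.5):
    """Blend a semi-transparent template over the live viewfinder frame.

    alpha is the opacity of the template layer
    (0 = template invisible, 1 = template fully opaque).
    """
    blended = ((1.0 - alpha) * frame.astype(np.float64)
               + alpha * template.astype(np.float64))
    return np.clip(blended.round(), 0, 255).astype(np.uint8)
```

In a real viewfinder this blend would be recomputed for every preview frame, so the user sees the template "ghost" move relative to the scene as the camera moves.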
In step S103 above, the image frame may be matched against the template image in real time in several ways. For example:
Mode one: acquire the first feature points of the template image; acquire the second feature points of the image frame; use the first feature points and the second feature points to match the template image against the image frame in real time and obtain the match degree.
Acquiring the first feature points of the template image may further comprise the following processing:
1. Extract contour features from the template image.
2. From the extracted contour features, delete those whose length is below a first threshold, and delete image corner points whose response is below a second threshold.
3. Take the contour features and corner points that survive the deletion as feature points, screen the feature points until they are evenly distributed, and determine the edge direction in the neighborhood of each feature point to obtain the final feature points.
Likewise, acquiring the second feature points of the image frame may further comprise the following processing:
1. Extract contour features from the image frame.
2. From the extracted contour features, delete those whose length is below the first threshold, and delete image corner points whose response is below the second threshold.
3. Take the contour features and corner points that survive the deletion as feature points, screen the feature points until they are evenly distributed, and determine the edge direction in the neighborhood of each feature point to obtain the final feature points.
Contour features may be extracted from the template image or the image frame as follows: the template image or image frame is binarized, i.e. converted into an image containing only the two colors black and white, so that the target region and the background region of the image can be distinguished; contour features are then extracted from the target region of the binarized template image or image frame.
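The binarize-then-trace step can be sketched as follows. This assumes NumPy, a fixed grayscale threshold, and a crude 4-neighborhood contour test (a foreground pixel with at least one background neighbor); a production implementation would use a proper contour-following routine, and all names here are illustrative.

```python
import numpy as np

def binarize(gray, threshold=128):
    """Convert a grayscale image into a black-and-white mask (True = target)."""
    return gray >= threshold

def contour_mask(binary):
    """Mark foreground pixels that touch the background — a crude contour."""
    padded = np.pad(binary, 1, constant_values=False)
    core = padded[1:-1, 1:-1]
    # A pixel is interior only if all four 4-neighbors are also foreground.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return core & ~interior
```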
Using the first feature points and the second feature points to match the template image against the image frame in real time may further comprise the following processing:
Feature points are first extracted from the two images to be matched (the template image and the currently acquired image frame), giving two sets of feature points. Centered on each feature point, the edge direction in the feature point's neighborhood is computed, and the pixel values in the neighborhood serve as the feature point's descriptor. The coordinate position of each feature point in the template image is determined, and the neighborhoods centered on feature points are searched for in the local region of the image frame corresponding to that coordinate position. The sum of squared pixel differences over the overlapping part of two corresponding neighborhoods is computed, and the match degree of the two neighborhoods is determined from this sum. The match degrees of all corresponding neighborhoods in the two images are then combined to determine the match degree of the two images.
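The neighborhood comparison above can be sketched as a sum of squared differences (SSD) plus a mapping onto a bounded score. The patent only says the match degree is determined from the squared sum, so the particular `1/(1 + ssd/scale)` mapping and all names below are assumptions for illustration.

```python
import numpy as np

def neighborhood_ssd(img_a, img_b, center, radius):
    """Sum of squared pixel differences between the two same-sized
    neighborhoods centered on `center` (y, x) in each image."""
    y, x = center
    pa = img_a[y - radius:y + radius + 1, x - radius:x + radius + 1]
    pb = img_b[y - radius:y + radius + 1, x - radius:x + radius + 1]
    return float(((pa.astype(np.float64) - pb.astype(np.float64)) ** 2).sum())

def match_score(ssd, scale=1000.0):
    """Map an SSD value onto (0, 1]; identical neighborhoods score 1.0."""
    return 1.0 / (1.0 + ssd / scale)
```

Scores of all corresponding neighborhoods would then be combined (for example averaged) into the overall match degree of the two images.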
Mode two: divide, by analyzing image texture, the image frame and the template image each into multiple regions of uniform texture; then match the divided regions of the image frame and the template image block by block.
Here, texture analysis refers to the process of extracting texture features with some image processing technique so as to obtain a quantitative or qualitative description of the texture. Specifically, texture analysis is performed on the image, region growing is carried out according to texture similarity, and the image frame and the template image are each divided into multiple regions of uniform texture.
Matching the divided regions of the image frame and the template image block by block may further comprise the following processing: the coordinate position of each texture region in the template image is determined, and the texture region in the local region of the image frame corresponding to that coordinate position is searched for; the disparity values of two corresponding texture regions are computed, giving a dense disparity map of the uniform-texture region; the match degree of the two corresponding texture regions is determined from this computation; and the match degrees of all corresponding texture regions in the two images are combined to determine the match degree of the two images.
Before matching the image frame against the template image in real time, the following processing may also be performed: setting, in response to a user operation, a matching weight for each selected region of the template image. Matching the image frame against the template image in real time then comprises: determining the match degree according to the matching weights.
The user can manually select, in the template image (for example, the first photograph taken), the regions that need emphasized matching, or the regions that do not, thereby constructing a matching weight map in which emphasized regions carry larger weights and unemphasized regions carry smaller weights. For example, when the match degrees of all corresponding texture regions in the two images are combined to determine the match degree of the two images, the weight of each region in the weight map can be taken into account: regions needing emphasized matching carry larger weights, regions not needing it carry smaller weights, and regions of no interest can even be given a weight of zero. Because the weights are taken into account when computing the match degree of the two images, the matching result is further optimized.
While outputting the match degree obtained by the matching, the following processing may also be performed: obtaining, through feature point matching, a transformation matrix that maps the image frame onto the template image; and transforming the current image frame with the transformation matrix so that it takes on the pose of the template image.
In the scheme above, although the current match degree (for example, a match score) is provided in real time while the user shoots, the user still has to adjust manually to find an accurate position. Using a feature point matching technique, the transformation matrix from the current image to the template image is obtained, and the current image is transformed into the pose of the template image. The user then does not need to be very precise when adjusting — rough alignment suffices — which effectively improves the user experience.
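The transformation step can be sketched as a least-squares affine estimate from matched feature point pairs. The patent does not fix the transform family, so choosing an affine model here is an assumption, and the names are illustrative; warping the actual pixels would additionally require image resampling.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine matrix M mapping src points onto dst points.

    src_pts, dst_pts : Nx2 arrays of matched feature coordinates (N >= 3)
    """
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    A = np.hstack([src, np.ones((len(src), 1))])   # N x 3: [x, y, 1] rows
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solves A @ M = dst
    return M.T                                     # 2 x 3

def apply_affine(M, pts):
    """Apply a 2x3 affine matrix to an Nx2 array of points."""
    pts = np.asarray(pts, dtype=np.float64)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M.T
```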
The embodiments above are described further below with reference to Fig. 2 and Fig. 3.
Fig. 2 is a flow chart of a method for assisting in photographing according to the first embodiment of the present invention. As shown in Fig. 2, when the terminal captures an image frame for the first time, the method mainly comprises the following steps:
In step S201, after the scene is photographed for the first time, the image frame is acquired.
In step S203, the acquired image frame is blurred and downsampled.
Because the interval between any two shots in the present invention is long, the photographed content — apart from the unchanged subject — may move within a small range: vegetation in the scenery may vary, colors may change, moving objects may interfere, and so on. The present invention only cares about matching at the large scale of the image and ignores slight changes, so the image can be Gaussian-blurred and then downsampled, which not only removes noise but also reduces the processing complexity.
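The blur-then-downsample step can be sketched as follows. Kernel size, sigma, and the downsampling factor are illustrative assumptions, and the naive convolution loop is only for clarity — a real implementation would use an optimized library routine.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur_and_downsample(gray, factor=2):
    """Gaussian-blur a grayscale image, then keep every `factor`-th pixel."""
    k = gaussian_kernel()
    pad = k.shape[0] // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    h, w = gray.shape
    out = np.empty((h, w))
    for y in range(h):                 # naive convolution, for illustration
        for x in range(w):
            out[y, x] = (padded[y:y + 2 * pad + 1, x:x + 2 * pad + 1] * k).sum()
    return out[::factor, ::factor]
```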
In step S205, the influence of illumination is removed from the acquired image frame.
Because the weather and illumination of two shots can differ greatly and thereby affect the image matching, the image must be processed before matching to remove the influence of illumination. For example, a Retinex image enhancement method can be used to reduce the effect of illumination on the image.
In step S207, feature points are extracted from the acquired image frame.
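The patent names Retinex only generically; single-scale Retinex — subtracting the log of a heavily blurred copy (the illumination estimate) from the log image, leaving mostly reflectance detail — is one common form, and this particular choice is an assumption. The caller is assumed to supply the blurred copy (e.g. from a Gaussian blur).

```python
import numpy as np

def single_scale_retinex(gray, blurred):
    """Illumination-normalized image: log(image) - log(illumination estimate).

    A uniform brightness change scales both inputs by the same factor
    and therefore cancels out in the log domain.
    """
    eps = 1e-6  # avoid log(0)
    return (np.log(gray.astype(np.float64) + eps)
            - np.log(blurred.astype(np.float64) + eps))
```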
Extracting feature points may further comprise the following processing:
1. Extract contour features from the image frame.
2. From the extracted contour features, delete edge contours whose length is below a first threshold, and delete image corner points whose response is below a second threshold. The first and second thresholds can be set dynamically according to the actual conditions.
3. Take the contour features and corner points that survive the deletion as feature points, and screen them until they are evenly distributed — that is, the number of feature points per unit area should not differ too much across the whole image, to prevent the points from piling up in some region of the image and biasing the matching toward densely textured regions.
4. Compute the edge direction in the neighborhood of each feature point.
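The screening in step 3 can be sketched as grid bucketing: cap the number of points kept per grid cell so counts per unit area stay balanced. The grid size, per-cell cap, and keep-the-strongest policy are illustrative assumptions.

```python
def screen_features(points, img_w, img_h, grid=4, per_cell=2):
    """Keep at most `per_cell` feature points in each grid cell so that
    points do not pile up in densely textured regions.

    points : list of (x, y, response) tuples
    """
    cells = {}
    kept = []
    # Visit strongest points first so each cell keeps its best candidates.
    for x, y, resp in sorted(points, key=lambda p: -p[2]):
        cx = min(int(x * grid / img_w), grid - 1)
        cy = min(int(y * grid / img_h), grid - 1)
        n = cells.get((cx, cy), 0)
        if n < per_cell:
            cells[(cx, cy)] = n + 1
            kept.append((x, y, resp))
    return kept
```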
In step S209, the extracted feature points are saved, and the current image frame is set as the template image mentioned above.
After the image frame has been processed as above, noise is effectively removed, the processing complexity is reduced, and the effect of illumination on the image is reduced. In addition, screening the feature points to balance their distribution prevents them from piling up in some region of the image and biasing the matching toward densely textured regions. This processing therefore facilitates more effective image matching later on.
Fig. 3 is a flow chart of a method for assisting in photographing according to the second embodiment of the present invention. As shown in Fig. 3, for every capture other than the terminal's first, the method mainly comprises the following steps:
In step S301, the terminal acquires the image frame after photographing the scene.
In step S303, the acquired image frame is blurred and downsampled.
In step S305, the influence of illumination is removed from the acquired image frame.
In step S307, feature points are extracted from the acquired image frame. For details, see the description of step S207 above, which is not repeated here.
In step S309, the feature points extracted from this image frame are matched against the feature points extracted from the template image, giving a match score.
In step S311, guided by the match score, the user adjusts the terminal to the optimal position (i.e. the position where the match score is largest) and captures the image.
In step S313, the average image of the sequence is computed (by averaging all photographs taken so far) and set as the updated template image; the feature points of the sequence average image are then extracted according to steps S303, S305, and S307, and saved.
Through the processing of Fig. 2 and Fig. 3, after the terminal has shot the scene several times, a sequence of images in which the position and posture of the subject are essentially consistent is obtained. Such a sequence can be compared conveniently; besides laying the images out as tiles, a more suitable presentation is a dynamic one, for example saving the sequence as a GIF or a short video.
Fig. 4 is a structural block diagram of a device for assisting in photographing according to an embodiment of the present invention. As shown in Fig. 4, the device mainly comprises: a first acquisition module 40, configured to acquire image frames in real time while a scene is being shot; a matching module 42, connected to the first acquisition module 40 and configured to match, in real time, the currently acquired image frame against the template image corresponding to the scene; and an output module 44, connected to the matching module 42 and configured to output, in real time, the match degree obtained by the matching.
In the device shown in Fig. 4, while a scene is being shot, the matching module 42 matches, in real time, the currently acquired image frame against the template image corresponding to the scene, and the output module 44 outputs, in real time, the match degree obtained by the matching. This helps the user obtain multiple photographs in which the position and posture of the subject are essentially consistent, thereby effectively improving the user experience.
As shown in Fig. 5, the matching module 42 may further comprise: a first acquiring unit 420, configured to extract the first feature points of the template image; a second acquiring unit 422, configured to acquire the second feature points of the image frame; and a first matching unit 424, connected to the first acquiring unit 420 and the second acquiring unit 422 respectively and configured to match the first feature points against the second feature points in real time to obtain the match degree.
As shown in Fig. 6, the matching module 42 may further comprise: a texture analysis module 426, configured to divide, by analyzing image texture, the image frame and the template image each into multiple regions of uniform texture; and a second matching unit 428, connected to the texture analysis module 426 and configured to match the divided regions of the image frame and the template image block by block.
As shown in Fig. 5 and Fig. 6, the device may further comprise a setting module 46, connected to the matching module 42, configured to set a matching weight for each selected region of the template image in response to a user operation; the matching module 42 may further comprise a determining unit 430, configured to determine the match degree according to the matching weights.
As shown in Fig. 5 and Fig. 6, the device may further comprise: a first determination module 48, connected to the matching module 42, configured to determine the photo obtained from the first shot of the scene as the template image; and a second determination module 50, connected to the matching module 42, configured to, from the second shot of the scene onward, average all photos taken so far after each shot and determine the averaged photo as the template image.
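The template-averaging behavior of the two determination modules can be sketched as a running mean, updated incrementally after each shot so that no photo history needs to be kept (the class and method names are illustrative):

```python
import numpy as np

class TemplateTracker:
    """First photo becomes the template; from the second photo onward the
    template is the mean of all photos taken so far, computed incrementally."""
    def __init__(self):
        self.template = None
        self.count = 0

    def add_photo(self, photo):
        photo = photo.astype(np.float64)
        self.count += 1
        if self.template is None:
            self.template = photo
        else:
            # incremental mean: m_n = m_{n-1} + (x_n - m_{n-1}) / n
            self.template = self.template + (photo - self.template) / self.count
        return self.template
```

Averaging all shots makes the template drift toward the typical framing of the series rather than being anchored to whatever the first photo happened to look like.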
As shown in Fig. 5 and Fig. 6, the device may further comprise: a second acquisition module 52, connected to the output module 44, configured to process the template image to obtain a shooting reference image, wherein the shooting reference image comprises the first feature points obtained from the template image, or is a translucent version of the template image; and a superimposing module, configured to superimpose the shooting reference image on the image being captured by the terminal in real time.
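The translucent superimposition amounts to simple alpha blending of the template over the live preview. A minimal sketch, where the opacity value is an illustrative choice not taken from the patent:

```python
import numpy as np

def overlay_reference(live_frame, template, alpha=0.4):
    """Blend a translucent template over the live frame:
    out = (1 - alpha) * live + alpha * template, clipped to valid pixel range."""
    live = live_frame.astype(np.float64)
    ref = template.astype(np.float64)
    return np.clip((1.0 - alpha) * live + alpha * ref, 0, 255)
```

Overlaying the reference directly in the viewfinder gives the user a visual target to line the scene up against, complementing the numeric match degree.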
As shown in Fig. 5 and Fig. 6, the device may further comprise: a third acquisition module 54, connected to the matching module 42, configured to obtain, by feature point matching, a transformation matrix that converts the image frame into the template image; and a processing module 56, connected to the third acquisition module 54, configured to apply the transformation matrix to the current image frame so as to transform it to the pose of the template image.
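A standard way to obtain such a transformation matrix from matched feature points is to estimate a planar homography. The sketch below is the minimal least-squares direct linear transform (DLT); the patent does not commit to this method, and practical implementations would add outlier rejection (e.g. RANSAC) before warping the frame:

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the 3x3 matrix H mapping frame points to template points,
    H @ [x, y, 1]^T ~ [x', y', 1]^T, from at least four matched pairs."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the flattened H spans the (approximate) null space of the row matrix
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]
```

Once H is known, warping the current frame by H brings it into the pose of the template, which is how the processing module 56 could realize the transformation step.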
Specific implementations in which the modules and units of the above device are combined can be found in the descriptions of Fig. 1 to Fig. 3, and are not repeated here.
Fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention. As shown in Fig. 7, the terminal can be used to implement the method for assisting in photographing provided in the above embodiments. The terminal may be a mobile phone, a digital camera, a tablet computer, a wearable mobile device (such as smart glasses), or the like.
The terminal may comprise components such as a communication unit 710, a memory 720 including one or more computer-readable storage media, an input unit 730, a display unit 740, a sensor 750, an audio circuit 760, a Wireless Fidelity (WiFi) module 770, a processor 780 including one or more processing cores, and a power supply 790. Those skilled in the art will understand that the terminal structure shown in Fig. 7 does not limit the terminal; the terminal may include more or fewer components than illustrated, combine certain components, or arrange the components differently. Wherein:
The communication unit 710 may be used to receive and send signals in the course of messaging or a call, and may be a network communication device such as a radio frequency (RF) circuit, a router, or a modem. In particular, when the communication unit 710 is an RF circuit, it passes downlink information received from a base station to one or more processors 780 for processing, and sends uplink data to the base station. Generally, an RF circuit serving as the communication unit includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the communication unit 710 may also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, and Short Messaging Service (SMS). The memory 720 may be used to store software programs and modules; by running the software programs and modules stored in the memory 720, the processor 780 performs various function applications and data processing. The memory 720 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and the like; the data storage area may store data created according to the use of the terminal (such as audio data and a phone book). In addition, the memory 720 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage component. Accordingly, the memory 720 may also include a memory controller to provide the processor 780 and the input unit 730 with access to the memory 720.
The input unit 730 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal input related to user settings and function control. The input unit 730 may comprise a touch-sensitive surface 731 and other input devices 732. The touch-sensitive surface 731, also called a touch display screen or a touchpad, can collect touch operations by the user on or near it (such as operations performed with a finger, a stylus, or any other suitable object or accessory on or near the touch-sensitive surface 731), and drive the corresponding connection device according to a preset program.
Optionally, the touch-sensitive surface 731 may comprise two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and sends the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 780, and receives and executes commands sent by the processor 780. In addition, the touch-sensitive surface 731 may be implemented as one of various types, such as resistive, capacitive, infrared, or surface acoustic wave. Besides the touch-sensitive surface 731, the input unit 730 may also comprise other input devices 732.
The other input devices 732 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, a joystick, and the like.
The display unit 740 may be used to display information input by the user or provided to the user, as well as the various graphical user interfaces of the terminal, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 740 may comprise a display panel 741, which may optionally be configured in the form of a liquid crystal display, an organic light-emitting diode display, or the like. Further, the touch-sensitive surface 731 may cover the display panel 741; after detecting a touch operation on or near it, the touch-sensitive surface 731 passes the operation to the processor 780 to determine the type of the touch event, and the processor 780 then provides corresponding visual output on the display panel 741 according to the type of the touch event. Although in Fig. 7 the touch-sensitive surface 731 and the display panel 741 implement input and output functions as two independent components, in some embodiments they may be integrated to implement the input and output functions together.
The terminal may also comprise at least one sensor 750, such as an optical sensor, a motion sensor, or another sensor. The optical sensor may comprise an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 741 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 741 and/or the backlight when the terminal is moved near the user's ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the phone's attitude (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection). Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor may also be configured on the terminal, and are not described further here.
The audio circuit 760, a loudspeaker 761, and a microphone 762 can provide an audio interface between the user and the terminal. The audio circuit 760 can convert received audio data into an electrical signal and transmit it to the loudspeaker 761, which converts it into a sound signal for output; on the other hand, the microphone 762 converts a collected sound signal into an electrical signal, which the audio circuit 760 receives and converts into audio data; after the audio data is processed by the processor 780, it is sent through the communication unit 710 to, for example, another terminal device, or output to the memory 720 for further processing. The audio circuit 760 may also comprise an earphone jack to provide communication between a peripheral earphone and the terminal.
To implement wireless communication, a wireless communication unit 770 may be configured on the terminal, and it may be a WiFi module. WiFi is a short-range wireless transmission technology; through the wireless communication unit 770, the terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although Fig. 7 shows the wireless communication unit 770, it is understood that it is not an essential part of the terminal and may be omitted as needed without changing the essence of the invention.
The processor 780 is the control center of the terminal. It connects all parts of the entire mobile phone through various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 720 and calling the data stored in the memory 720, thereby monitoring the mobile phone as a whole. Optionally, the processor 780 may comprise one or more processing cores; preferably, the processor 780 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, user interfaces, applications, and the like, and the modem processor mainly handles wireless communication. It is understood that the modem processor may also not be integrated into the processor 780.
The terminal also comprises the power supply 790 (such as a battery) that supplies power to all the components. Preferably, the power supply may be logically connected to the processor 780 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The power supply 790 may also comprise any components such as one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
Although not shown, the terminal may also comprise a camera, a Bluetooth module, and the like, which are not described further here.
In this embodiment, the display unit of the terminal device is a touch screen display. The terminal device further comprises a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors; the one or more programs contain instructions for performing the following operations:
acquiring an image frame in real time when a scene is being shot;
matching, in real time, the currently acquired image frame against the template image corresponding to the scene; and
outputting, in real time, the match degree obtained by the matching.
Matching the image frame against the template image in real time comprises: obtaining first feature points of the template image; obtaining second feature points of the image frame; and obtaining the match degree by matching the first feature points against the second feature points in real time.
Obtaining the first feature points and the second feature points comprises: extracting contour features from the image frame and the template image respectively; among the extracted features, deleting contour features whose length is less than a first threshold and image corner points whose response is less than a second threshold; determining the contour features and corner points remaining after the deletion as feature points; and screening the feature points until they are evenly distributed, and determining the edge direction in the neighborhood of each feature point to obtain the final feature points.
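One possible reading of the threshold-and-screen step is sketched below: candidates are filtered by contour length and corner response, then capped per grid cell so the survivors are evenly distributed. The field names, the grid-cell balancing scheme, and the per-cell cap are illustrative assumptions, and the edge-direction computation for each neighborhood is omitted.

```python
def screen_feature_points(candidates, min_length, min_response,
                          cell=16, per_cell=2):
    """candidates: list of dicts {'pt': (x, y), 'contour_len': float,
    'response': float}. Returns the screened feature points."""
    # 1) delete short contours and weak corner responses
    kept = [c for c in candidates
            if c['contour_len'] >= min_length and c['response'] >= min_response]
    # 2) keep only the strongest few points per grid cell so the surviving
    #    feature points are evenly distributed over the image
    cells = {}
    for c in kept:
        key = (int(c['pt'][0] // cell), int(c['pt'][1] // cell))
        cells.setdefault(key, []).append(c)
    out = []
    for pts in cells.values():
        pts.sort(key=lambda c: c['response'], reverse=True)
        out.extend(pts[:per_cell])
    return out
```

Balancing the spatial distribution this way keeps the later matching step from being dominated by one densely textured corner of the scene.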
Matching the image frame against the template image in real time comprises: dividing each of the image frame and the template image into multiple regions of uniform texture by analyzing image texture; and matching the divided regions of the image frame and the template image block by block.
Before the image frame is matched against the template image in real time, the operations further comprise: setting a matching weight for each selected region of the template image in response to a user operation. Matching the image frame against the template image in real time then comprises: determining the match degree according to the matching weights.
The instructions may further comprise: after the terminal shoots the scene for the first time, determining the obtained photo as the template image; and from the second shot of the scene onward, after each shot, averaging all photos taken so far and determining the averaged photo as the template image.
Before the image frame is acquired in real time, the operations may further comprise: processing the template image to obtain a shooting reference image, wherein the shooting reference image is the first feature points obtained from the template image, or is a translucent version of the template image; and superimposing the shooting reference image on the image being captured by the terminal in real time.
When the match degree obtained by the matching is output, the operations further comprise: obtaining, by feature point matching, a transformation matrix that converts the image frame into the template image; and applying the transformation matrix to the current image frame so as to transform it to the pose of the template image.
In summary, with the embodiments provided by the present invention, when a terminal shoots the same scene repeatedly at different times, the terminal need not be fixed in place: the match degree against the template image is provided in real time to guide the user in adjusting the terminal to the optimal position, or feature point matching is used to find the transformation matrix from the current image to the template image and transform the current image to the template pose. The user is thus assisted in obtaining multiple photos in which the position and pose of the photographed subject are substantially consistent. In addition, to help the user obtain photos in which the position and pose of the subject are closer to those in the template image, the shooting reference image is superimposed on the image captured in real time and the user is guided to adjust the terminal to the optimal position, further improving the user experience.
Those of ordinary skill in the art will understand that all or part of the processing in the above method embodiments can be completed by hardware under the instruction of a program. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
The foregoing describes only the preferred embodiments of the present invention and is not intended to limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (16)
1. A method for assisting in photographing, characterized by comprising:
acquiring an image frame in real time when a scene is being shot;
matching, in real time, the currently acquired image frame against a template image corresponding to the scene; and
outputting, in real time, a match degree obtained by the matching.
2. The method according to claim 1, characterized in that matching the image frame against the template image in real time comprises:
obtaining first feature points of the template image;
obtaining second feature points of the image frame; and
obtaining the match degree by matching the first feature points against the second feature points in real time.
3. The method according to claim 2, characterized in that obtaining the first feature points and the second feature points comprises:
extracting contour features from the image frame and the template image respectively;
among the extracted contour features, deleting contour features whose length is less than a first threshold;
deleting image corner points whose response is less than a second threshold;
determining the contour features and corner points remaining after the deletion as feature points; and
screening the feature points until they are evenly distributed, and determining the edge direction in the neighborhood of each feature point to obtain final feature points.
4. The method according to claim 1, characterized in that matching the image frame against the template image in real time comprises:
dividing each of the image frame and the template image into multiple regions of uniform texture by analyzing image texture; and
matching the divided regions of the image frame and the template image block by block.
5. The method according to claim 1, characterized in that
before the image frame is matched against the template image in real time, the method further comprises: setting a matching weight for each selected region of the template image in response to a user operation; and
matching the image frame against the template image in real time comprises: determining the match degree according to the matching weights.
6. The method according to claim 1, characterized in that the method further comprises:
after the scene is shot for the first time, determining the obtained photo as the template image; and
from the second shot of the scene onward, after each shot, averaging all photos taken so far, and determining the averaged photo as the template image.
7. The method according to claim 1, characterized in that before the image frame is acquired in real time, the method further comprises:
processing the template image to obtain a shooting reference image, wherein the shooting reference image is the first feature points obtained from the template image, or the shooting reference image is a translucent version of the template image; and
superimposing the shooting reference image on the image being captured by the terminal in real time.
8. The method according to any one of claims 1 to 7, characterized in that when the match degree obtained by the matching is output, the method further comprises:
obtaining, by feature point matching, a transformation matrix that converts the image frame into the template image; and
applying the transformation matrix to the current image frame so as to transform it to the pose of the template image.
9. A device for assisting in photographing, characterized by comprising:
a first acquisition module, configured to acquire an image frame in real time when a scene is being shot;
a matching module, configured to match, in real time, the currently acquired image frame against a template image corresponding to the scene; and
an output module, configured to output, in real time, a match degree obtained by the matching.
10. The device according to claim 9, characterized in that the matching module comprises:
a first acquiring unit, configured to extract first feature points from the template image;
a second acquiring unit, configured to obtain second feature points of the image frame; and
a first matching unit, configured to obtain the match degree by matching the first feature points against the second feature points in real time.
11. The device according to claim 9, characterized in that the matching module comprises:
a texture analysis module, configured to divide each of the image frame and the template image into multiple regions of uniform texture by analyzing image texture; and
a second matching unit, configured to match the divided regions of the image frame and the template image block by block.
12. The device according to claim 9, characterized in that the device further comprises: a setting module, configured to set a matching weight for each selected region of the template image in response to a user operation; and
the matching module further comprises: a determining unit, configured to determine the match degree according to the matching weights.
13. The device according to claim 9, characterized by further comprising:
a first determination module, configured to determine, after the scene is shot for the first time, the obtained photo as the template image; and
a second determination module, configured to, from the second shot of the scene onward, average all photos taken so far after each shot, and determine the averaged photo as the template image.
14. The device according to claim 9, characterized by further comprising:
a second acquisition module, configured to process the template image to obtain a shooting reference image, wherein the shooting reference image comprises the first feature points obtained from the template image, or the shooting reference image is a translucent version of the template image; and
a superimposing module, configured to superimpose the shooting reference image on the image being captured by the terminal in real time.
15. The device according to any one of claims 9 to 14, characterized by further comprising:
a third acquisition module, configured to obtain, by feature point matching, a transformation matrix that converts the image frame into the template image; and
a processing module, configured to apply the transformation matrix to the current image frame so as to transform it to the pose of the template image.
16. A terminal, characterized by comprising: one or more processors; a memory; and one or more modules, wherein the one or more modules are stored in the memory and configured to be executed by the one or more processors, and the one or more modules are used for:
acquiring an image frame in real time when a scene is being shot;
matching, in real time, the currently acquired image frame against a template image corresponding to the scene; and
outputting, in real time, a match degree obtained by the matching.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410302630.4A CN104135609B (en) | 2014-06-27 | 2014-06-27 | Auxiliary photo-taking method, apparatus and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104135609A true CN104135609A (en) | 2014-11-05 |
CN104135609B CN104135609B (en) | 2018-02-23 |
Family
ID=51808123
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410302630.4A Active CN104135609B (en) | 2014-06-27 | 2014-06-27 | Auxiliary photo-taking method, apparatus and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104135609B (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104333696A (en) * | 2014-11-19 | 2015-02-04 | 北京奇虎科技有限公司 | View-finding processing method, view-finding processing device and client |
CN105488756A (en) * | 2015-11-26 | 2016-04-13 | 努比亚技术有限公司 | Picture synthesizing method and device |
CN105827930A (en) * | 2015-05-27 | 2016-08-03 | 广东维沃软件技术有限公司 | Method and device of auxiliary photographing |
CN107018333A (en) * | 2017-05-27 | 2017-08-04 | 北京小米移动软件有限公司 | Shoot template and recommend method, device and capture apparatus |
CN107197153A (en) * | 2017-06-19 | 2017-09-22 | 上海传英信息技术有限公司 | The image pickup method and filming apparatus of a kind of photo |
WO2018000299A1 (en) * | 2016-06-30 | 2018-01-04 | Orange | Method for assisting acquisition of picture by device |
CN107580182A (en) * | 2017-08-28 | 2018-01-12 | 维沃移动通信有限公司 | A kind of grasp shoot method, mobile terminal and computer-readable recording medium |
CN108108268A (en) * | 2017-11-28 | 2018-06-01 | 北京川上科技有限公司 | Reboot process method and apparatus are exited in a kind of video record application |
CN108111752A (en) * | 2017-12-12 | 2018-06-01 | 北京达佳互联信息技术有限公司 | video capture method, device and mobile terminal |
CN109257541A (en) * | 2018-11-20 | 2019-01-22 | 厦门盈趣科技股份有限公司 | Householder method of photographing and device |
CN110113523A (en) * | 2019-03-15 | 2019-08-09 | 深圳壹账通智能科技有限公司 | Intelligent photographing method, device, computer equipment and storage medium |
CN110266958A (en) * | 2019-07-12 | 2019-09-20 | 北京小米移动软件有限公司 | A kind of image pickup method, apparatus and system |
CN110493517A (en) * | 2019-08-14 | 2019-11-22 | 广州三星通信技术研究有限公司 | The auxiliary shooting method and image capture apparatus of image capture apparatus |
CN111131702A (en) * | 2019-12-25 | 2020-05-08 | 航天信息股份有限公司 | Method and device for acquiring image, storage medium and electronic equipment |
WO2020133204A1 (en) * | 2018-12-28 | 2020-07-02 | Qualcomm Incorporated | Apparatus and method to correct the angle and location of the camera |
CN111770276A (en) * | 2020-07-07 | 2020-10-13 | 上海掌门科技有限公司 | Camera AI intelligent auxiliary photographing method, equipment and computer readable medium |
CN112199547A (en) * | 2019-07-08 | 2021-01-08 | Oppo广东移动通信有限公司 | Image processing method and device, storage medium and electronic equipment |
CN113301259A (en) * | 2018-02-15 | 2021-08-24 | 奥多比公司 | Intelligent guidance for capturing digital images aligned with a target image model |
CN113784039A (en) * | 2021-08-03 | 2021-12-10 | 北京达佳互联信息技术有限公司 | Head portrait processing method and device, electronic equipment and computer readable storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090256933A1 (en) * | 2008-03-24 | 2009-10-15 | Sony Corporation | Imaging apparatus, control method thereof, and program |
CN101690164A (en) * | 2007-07-11 | 2010-03-31 | 索尼爱立信移动通讯股份有限公司 | Enhanced image capturing functionality |
CN101996308A (en) * | 2009-08-19 | 2011-03-30 | 北京中星微电子有限公司 | Human face identification method and system and human face model training method and system |
CN102074001A (en) * | 2010-11-25 | 2011-05-25 | 上海合合信息科技发展有限公司 | Method and system for stitching text images |
CN102814006A (en) * | 2011-06-10 | 2012-12-12 | 三菱电机株式会社 | Image contrast device, patient positioning device and image contrast method |
CN103366374A (en) * | 2013-07-12 | 2013-10-23 | 重庆大学 | Fire fighting access obstacle detection method based on image matching |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104333696A (en) * | 2014-11-19 | 2015-02-04 | 北京奇虎科技有限公司 | View-finding processing method, view-finding processing device and client |
CN105827930A (en) * | 2015-05-27 | 2016-08-03 | 广东维沃软件技术有限公司 | Auxiliary photographing method and device |
CN105488756A (en) * | 2015-11-26 | 2016-04-13 | 努比亚技术有限公司 | Picture synthesis method and device |
CN105488756B (en) * | 2015-11-26 | 2019-03-29 | 努比亚技术有限公司 | Picture synthesis method and device |
WO2018000299A1 (en) * | 2016-06-30 | 2018-01-04 | Orange | Method for assisting acquisition of picture by device |
CN107018333A (en) * | 2017-05-27 | 2017-08-04 | 北京小米移动软件有限公司 | Shooting template recommendation method, device and shooting apparatus |
CN107197153A (en) * | 2017-06-19 | 2017-09-22 | 上海传英信息技术有限公司 | Photo shooting method and shooting device |
CN107197153B (en) * | 2017-06-19 | 2024-03-15 | 上海传英信息技术有限公司 | Photo shooting method and shooting device |
CN107580182A (en) * | 2017-08-28 | 2018-01-12 | 维沃移动通信有限公司 | Snapshot capture method, mobile terminal and computer-readable storage medium |
CN108108268A (en) * | 2017-11-28 | 2018-06-01 | 北京川上科技有限公司 | Method and apparatus for handling exit and restart in a video recording application |
CN108111752A (en) * | 2017-12-12 | 2018-06-01 | 北京达佳互联信息技术有限公司 | Video capture method, device and mobile terminal |
CN113301259A (en) * | 2018-02-15 | 2021-08-24 | 奥多比公司 | Intelligent guidance for capturing digital images aligned with a target image model |
CN113301259B (en) * | 2018-02-15 | 2023-05-30 | 奥多比公司 | Computer readable medium, system and method for guiding a user to capture a digital image |
CN109257541A (en) * | 2018-11-20 | 2019-01-22 | 厦门盈趣科技股份有限公司 | Photographing assistance method and device |
WO2020133204A1 (en) * | 2018-12-28 | 2020-07-02 | Qualcomm Incorporated | Apparatus and method to correct the angle and location of the camera |
CN110113523A (en) * | 2019-03-15 | 2019-08-09 | 深圳壹账通智能科技有限公司 | Intelligent photographing method, device, computer equipment and storage medium |
CN112199547A (en) * | 2019-07-08 | 2021-01-08 | Oppo广东移动通信有限公司 | Image processing method and device, storage medium and electronic equipment |
CN110266958A (en) * | 2019-07-12 | 2019-09-20 | 北京小米移动软件有限公司 | Shooting method, apparatus and system |
CN110266958B (en) * | 2019-07-12 | 2022-03-01 | 北京小米移动软件有限公司 | Shooting method, device and system |
CN110493517A (en) * | 2019-08-14 | 2019-11-22 | 广州三星通信技术研究有限公司 | Auxiliary shooting method for an image capture apparatus, and image capture apparatus |
CN111131702A (en) * | 2019-12-25 | 2020-05-08 | 航天信息股份有限公司 | Method and device for acquiring image, storage medium and electronic equipment |
CN111770276A (en) * | 2020-07-07 | 2020-10-13 | 上海掌门科技有限公司 | AI-based intelligent auxiliary photographing method for a camera, device and computer-readable medium |
CN113784039A (en) * | 2021-08-03 | 2021-12-10 | 北京达佳互联信息技术有限公司 | Head portrait processing method and device, electronic equipment and computer readable storage medium |
CN113784039B (en) * | 2021-08-03 | 2023-07-11 | 北京达佳互联信息技术有限公司 | Head portrait processing method, head portrait processing device, electronic equipment and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN104135609B (en) | 2018-02-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104135609A (en) | A method and a device for assisting in photographing, and a terminal | |
US20220224665A1 (en) | Notification Message Preview Method and Electronic Device | |
CN110059685B (en) | Character area detection method, device and storage medium | |
US10003785B2 (en) | Method and apparatus for generating images | |
CN106296617B (en) | Facial image processing method and device | |
CN105096241A (en) | Face image beautifying device and method | |
CN104967790B (en) | Photographing method, device and mobile terminal | |
CN110049244A (en) | Image pickup method, device, storage medium and electronic equipment | |
CN103458190A (en) | Photographing method, photographing device and terminal device | |
CN103871051A (en) | Image processing method, device and electronic equipment | |
CN103414814A (en) | Picture processing method and device and terminal device | |
CN107770451A (en) | Photographing processing method, apparatus, terminal and storage medium | |
CN103533247A (en) | Self-photographing method, device and terminal equipment | |
CN112130714B (en) | Keyword search method capable of learning and electronic equipment | |
CN108921941A (en) | Image processing method, device, storage medium and electronic equipment | |
CN109086680A (en) | Image processing method, device, storage medium and electronic equipment | |
CN113542580B (en) | Method and device for removing light spots of glasses and electronic equipment | |
CN108259758A (en) | Image processing method, device, storage medium and electronic equipment | |
CN105279186A (en) | Image processing method and system | |
CN115129410B (en) | Desktop wallpaper configuration method and device, electronic equipment and readable storage medium | |
CN111325220B (en) | Image generation method, device, equipment and storage medium | |
CN111754386A (en) | Image area shielding method, device, equipment and storage medium | |
CN108427938A (en) | Image processing method, device, storage medium and electronic equipment | |
CN111857793B (en) | Training method, device, equipment and storage medium of network model | |
CN108259771A (en) | Image processing method, device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||