CN104994276A - Photographing method and device - Google Patents

Photographing method and device

Info

Publication number
CN104994276A
CN104994276A (application CN201510362377.6A; granted as CN104994276B)
Authority
CN
China
Prior art keywords
handwriting
person
closed area
processor
finder
Prior art date
Legal status
Granted
Application number
CN201510362377.6A
Other languages
Chinese (zh)
Other versions
CN104994276B (en)
Inventor
薛昉
纪崴
Current Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Electronics China R&D Center
Priority to CN201510362377.6A priority Critical patent/CN104994276B/en
Publication of CN104994276A publication Critical patent/CN104994276A/en
Application granted granted Critical
Publication of CN104994276B publication Critical patent/CN104994276B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Telephone Function (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a photographing method for a mobile device. The mobile device is provided with a processor, a removable stylus, and a touch screen that accepts stylus input and is electrically connected with the processor. The method runs on the processor and comprises the steps of: detecting that the stylus has been pulled out, and controlling the mobile device to enter an image-synthesis mode; acquiring a first image displayed in a first layer of the touch screen; acquiring the handwriting drawn by the stylus on the touch screen and displayed in a second layer of the touch screen, and selecting one of the closed regions formed by the handwriting as a viewfinder; and acquiring a second image within the range of the viewfinder, and then compositing the first image, the handwriting and the second image into a third image. The method allows a user to composite images in real time while photographing, rather than compositing them manually after photographing, which saves the user's operation time. The invention also discloses a photographing device.

Description

Photographing method and device
Technical field
The present invention relates to the technical field of mobile devices, and in particular to a photographing method and device.
Background art
At present, the software and hardware capabilities of mobile devices are increasingly powerful: a mobile device not only provides basic telephony but also serves as a multi-functional terminal. Taking the camera function as an example, with the steady improvement of hardware it has become very practical and can largely replace an ordinary camera. Moreover, most existing mobile devices are equipped with both a front camera and a rear camera for the user to choose from.
However, as users demand more personalization, some shooting scenarios require capturing images both in front of and behind the mobile device; for example, while photographing the scenery in front, the user also wants to add his or her own portrait and generate a single photo. According to the prior art, the user must first photograph the scenery in front of the mobile device, then photograph his or her own portrait, and finally combine the two with image-compositing software to obtain the desired photo, which undoubtedly makes the device cumbersome to use.
Therefore, a mobile device capable of compositing images in real time is urgently needed to meet users' personalized demands.
Summary of the invention
In view of this, the present invention proposes a photographing method and device to solve the above problems in the prior art.
To achieve the above object, the technical solutions of the embodiments of the present invention are as follows:
An embodiment of the present invention provides a photographing method for a mobile device, the mobile device being configured with a processor, a removable stylus, and a touch screen that accepts stylus input and is electrically connected with the processor.
The method is executed by the processor while a camera program is running, and comprises:
the processor detects that the stylus has been pulled out, and controls the mobile device to enter an image-synthesis mode;
the processor acquires a first image displayed in a first layer of the touch screen;
the processor acquires the handwriting drawn by the stylus on the touch screen and displayed in a second layer of the touch screen, and selects one of the closed regions formed by the handwriting as a viewfinder, wherein the second layer lies above the first layer;
the processor then acquires a second image within the range of the viewfinder, and composites the first image, the handwriting and the second image into a third image.
Optionally, the processor selecting one of the closed regions formed by the handwriting as the viewfinder specifically comprises:
the processor, according to the handwriting drawn so far, determines closed regions similar to the handwriting and displays them on the touch screen, the processor storing a plurality of closed regions in advance;
the processor judges whether a command from the user selecting one of the displayed closed regions has been received, and if so, takes the selected closed region as the viewfinder;
if not, the processor automatically selects the viewfinder from the closed regions formed by the drawn handwriting.
Optionally, the processor automatically selecting the viewfinder from the closed regions formed by the drawn handwriting specifically comprises:
after judging that the handwriting is complete, the processor identifies the closed regions formed by the handwriting;
if the handwriting forms more than one closed region, the processor computes the area S_i of each closed region and obtains the area S_max of the largest closed region, where i = 1 ~ N and N is greater than 1;
the processor computes the centroid g_i of each closed region and obtains the centroid g_max of the largest closed region;
the processor computes the straight-line distance T_i between each centroid g_i and the centroid g_max of the largest closed region;
the processor computes a weighted value α_i for each closed region, where α_i = (S_i / S_max) · p − T_i · q, and p and q are weight coefficients, both greater than 0;
the processor selects the maximum value α_max among the weighted values α_i and judges whether α_max is greater than a set weighting threshold; if so, the closed region corresponding to α_max is selected as the viewfinder; if not, the closed region with the largest area S_max is selected as the viewfinder.
Optionally, if the processor determines that the handwriting forms no closed region, the user is notified of the failure and the processor waits for the next handwriting input.
Optionally, while the stylus is drawing handwriting:
the touch screen passes the contact coordinate data of the stylus to the processor;
the processor records the contact coordinate data and, according to the contact coordinate data, displays the handwriting on the touch screen in real time;
if the processor receives no contact coordinate data from the stylus within a set first threshold time, it judges that the handwriting is complete.
An embodiment of the present invention provides a photographing device for a mobile device, the mobile device being configured with a processor, a removable stylus, and a touch screen that accepts stylus input and is electrically connected with the processor.
The device is run by the processor while a camera program is running, and comprises:
a stylus detection module, which detects that the stylus has been pulled out and notifies the mobile device to enter an image-synthesis mode;
a first image acquisition module, which acquires a first image displayed in a first layer of the touch screen;
a handwriting acquisition module, which acquires the handwriting drawn by the stylus on the touch screen and displayed in a second layer of the touch screen, and selects one of the closed regions formed by the handwriting as a viewfinder, wherein the second layer lies above the first layer;
a second image acquisition module, which acquires a second image within the range of the viewfinder; and
an image synthesis module, which composites the first image, the handwriting and the second image into a third image.
Optionally, the handwriting acquisition module selecting one of the closed regions formed by the handwriting as the viewfinder specifically comprises:
the handwriting acquisition module, according to the handwriting drawn so far, determines closed regions similar to the handwriting and displays them on the touch screen, the module storing a plurality of closed regions in advance;
the handwriting acquisition module judges whether a command from the user selecting one of the displayed closed regions has been received, and if so, takes the selected closed region as the viewfinder;
if not, the handwriting acquisition module automatically selects the viewfinder from the closed regions formed by the drawn handwriting.
Optionally, the handwriting acquisition module automatically selecting the viewfinder from the closed regions formed by the drawn handwriting specifically comprises:
after judging that the handwriting is complete, the handwriting acquisition module identifies the closed regions formed by the handwriting;
if the handwriting forms more than one closed region, the handwriting acquisition module computes the area S_i of each closed region and obtains the area S_max of the largest closed region, where i = 1 ~ N and N is greater than 1;
the handwriting acquisition module computes the centroid g_i of each closed region and obtains the centroid g_max of the largest closed region;
the handwriting acquisition module computes the straight-line distance T_i between each centroid g_i and the centroid g_max of the largest closed region;
the handwriting acquisition module computes a weighted value α_i for each closed region, where α_i = (S_i / S_max) · p − T_i · q, and p and q are weight coefficients, both greater than 0;
the handwriting acquisition module selects the maximum value α_max among the weighted values α_i and judges whether α_max is greater than a set weighting threshold; if so, the closed region corresponding to α_max is selected as the viewfinder; if not, the closed region with the largest area S_max is selected as the viewfinder.
Optionally, if the handwriting acquisition module determines that the handwriting forms no closed region, the user is notified of the failure and the module waits for the next handwriting input.
Optionally, while the stylus is drawing handwriting:
the touch screen passes the contact coordinate data of the stylus to the handwriting acquisition module;
the handwriting acquisition module records the contact coordinate data and, according to the contact coordinate data, displays the handwriting on the touch screen in real time;
if the handwriting acquisition module receives no handwriting drawn by the stylus within a set first threshold time, it judges that the handwriting is complete.
In the photographing method of the present invention, the first image is acquired in the first layer of the touch screen, the handwriting is drawn in the second layer, the second image is acquired within the viewfinder formed by the handwriting, and the third image is then composited. The user can therefore composite images in real time while photographing instead of compositing them manually afterwards, which saves the user's operation time.
Brief description of the drawings
Fig. 1 is a structural block diagram of a mobile device according to an embodiment of the present invention;
Fig. 2 is a flowchart of a photographing method according to an embodiment of the present invention;
Fig. 3 is a flowchart of selecting a viewfinder while photographing according to an embodiment of the present invention;
Fig. 4 is a flowchart of coloring the outline of the viewfinder while photographing according to an embodiment of the present invention;
Figs. 5a to 5c show three examples of selecting a viewfinder while photographing according to an embodiment of the present invention;
Figs. 6a to 6c show three examples of compositing images while photographing according to an embodiment of the present invention;
Fig. 7 is a structural block diagram of a photographing device according to an embodiment of the present invention.
Detailed description of the embodiments
To make the object, technical solutions and advantages of the present invention clearer, the present invention is described in detail below through specific embodiments and with reference to the accompanying drawings.
To solve the problem mentioned in the background art that a mobile device cannot composite images in real time, which makes it cumbersome to use, an embodiment of the present invention discloses a photographing method. The method is applied to a mobile device configured with a processor, a removable stylus, and a touch screen that accepts stylus input.
The mobile device may be, but is not limited to, a smartphone, a tablet computer, or the like. Its structural block diagram is shown in Fig. 1: the mobile device is configured with a removable stylus, a touch screen that accepts stylus input, a front camera, a rear camera and a memory, wherein the touch screen, the front camera, the rear camera and the memory are all electrically connected with the processor.
The stylus rests in a slot of the mobile device. While it is in the slot and the processor is powered on, a circuit is closed between the processor and the stylus; when the stylus is pulled out of the slot, the circuit between the processor and the stylus is broken, and the processor thereby learns that the stylus has been removed.
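A minimal sketch of this detection mechanism, assuming the slot circuit state can be polled by software; the class name, callbacks and polling approach are illustrative assumptions rather than part of the disclosed hardware design:

    class StylusSlotMonitor:
        """Detects stylus removal from the powered slot circuit (step 101 below).

        circuit_closed is assumed to report whether current still flows between
        the processor and the stylus; the real signal comes from the slot hardware.
        """

        def __init__(self, circuit_closed, on_pen_removed):
            self._circuit_closed = circuit_closed    # callable returning bool
            self._on_pen_removed = on_pen_removed    # e.g. enter image-synthesis mode
            self._was_closed = circuit_closed()

        def poll(self) -> None:
            closed = self._circuit_closed()
            if self._was_closed and not closed:
                # Power between processor and stylus is cut: the stylus was pulled out.
                self._on_pen_removed()
            self._was_closed = closed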
When the mobile device runs the camera program, referring to Fig. 2, the method is executed by the processor and comprises:
101. Detect that the stylus has been pulled out, control the mobile device to enter the image-synthesis mode, and then go to step 102.
How the processor detects that the stylus has been pulled out was described above and is not repeated here.
In a specific embodiment, after the stylus of the mobile device is pulled out, a dialog box pops up on the touch screen, and the user can tap it to enter the image-synthesis mode.
In the image-synthesis mode, the touch screen can be divided into two layers, so that the content shown on the touch screen can be presented in different layers; the user picks content in the different layers and the device finally composites the images, so that a composite image can be generated in real time. The following steps describe this in detail.
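As an illustration of the two-layer display described above, the following Python sketch models the screen content in image-synthesis mode as a camera-preview layer with a transparent handwriting overlay above it; the class and field names are assumptions made for illustration only.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    Point = Tuple[int, int]

    @dataclass
    class FirstLayer:
        """Bottom layer: the live camera preview (or a photo picked from the album)."""
        image: object = None              # e.g. a platform bitmap or pixel array

    @dataclass
    class SecondLayer:
        """Top layer: transparent overlay holding the stylus handwriting."""
        strokes: List[List[Point]] = field(default_factory=list)

    @dataclass
    class SynthesisScreen:
        """Touch-screen content while the device is in image-synthesis mode."""
        first_layer: FirstLayer = field(default_factory=FirstLayer)
        second_layer: SecondLayer = field(default_factory=SecondLayer)

        def enter_synthesis_mode(self, preview_image) -> None:
            # Step 101: show the camera preview in the bottom layer and start
            # with an empty handwriting overlay suspended above it.
            self.first_layer.image = preview_image
            self.second_layer.strokes.clear()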
102. Acquire the first image displayed in the first layer of the touch screen, then go to step 103.
The first image may be captured by the rear camera: when the camera program runs, the processor controls the rear camera to turn on. It may also be configured to come from the front camera, in which case the processor controls the front camera to turn on when the camera program runs.
Alternatively, the first image may be a photo the user selects from the album. When a photo is selected, the processor receives the command the user triggers on the touch screen, retrieves the photo from the memory, and displays it on the touch screen.
103. Acquire the handwriting drawn by the stylus on the touch screen and displayed in the second layer of the touch screen, then go to step 104.
The second layer lies above the first layer; when displayed, the image in the second layer floats over the image in the first layer.
Specifically, while the processor acquires the handwriting drawn by the stylus:
the touch screen passes the contact coordinate data of the stylus to the processor;
the processor records the contact coordinate data and, according to the contact coordinate data, displays the handwriting on the touch screen in real time, the handwriting drawn between a single touch-down of the stylus and its lift-off from the touch screen being one stroke.
104. Select one of the closed regions formed by the handwriting as the viewfinder.
Step 104 specifically comprises the following steps:
1041. According to the handwriting drawn so far, determine closed regions similar to the handwriting and display them on the touch screen.
It should be noted that a plurality of closed regions are stored in the processor in advance. While the camera program is not running, these closed regions are stored in the memory of the mobile device; after the camera program starts, they are read and called by the processor.
In a concrete application, the closed regions pre-stored in the processor include closed regions predefined by the mobile device and closed regions previously drawn by the user.
When determining the closed regions similar to the handwriting, for example, if the user draws a horizontal line, the processor automatically judges that the similar closed regions include a square, a pentagon, a rectangle, a hexagon, a triangle and so on, and displays them as a list on the touch screen for the user to tap and select.
The processor can also judge the similar closed regions in real time as the user draws and update the display on the touch screen. For example, after drawing a horizontal line the user continues with an oblique line from one endpoint of that line; the processor then judges that the similar closed regions include a triangle, a parallelogram and so on, and refreshes the list shown on the touch screen.
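The patent does not specify how "similar" closed regions are determined; the following Python sketch shows one plausible heuristic, assuming each pre-stored template is encoded by the orientations of its edges and that a partly drawn handwriting matches a template if every drawn segment can be assigned to a template edge of similar orientation. The template set, tolerance and function names are illustrative assumptions.

    import math
    from typing import Dict, List, Sequence, Tuple

    Point = Tuple[float, float]

    # Illustrative template set; the embodiment only says that shapes such as
    # squares, rectangles, triangles, etc. are pre-stored, not how they are encoded.
    TEMPLATES: Dict[str, List[float]] = {
        "square":        [0.0, 90.0, 0.0, 90.0],
        "rectangle":     [0.0, 90.0, 0.0, 90.0],
        "triangle":      [0.0, 60.0, 120.0],
        "parallelogram": [0.0, 60.0, 0.0, 60.0],
    }

    def segment_angles(stroke: Sequence[Point]) -> List[float]:
        """Orientation (degrees, folded to [0, 180)) of each consecutive segment."""
        angles = []
        for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
            angles.append(math.degrees(math.atan2(y1 - y0, x1 - x0)) % 180.0)
        return angles

    def candidate_shapes(strokes: Sequence[Sequence[Point]], tol: float = 15.0) -> List[str]:
        """Templates whose edge orientations can absorb every segment drawn so far."""
        drawn = [a for s in strokes for a in segment_angles(s)]
        matches = []
        for name, edges in TEMPLATES.items():
            remaining = list(edges)
            ok = True
            for a in drawn:
                hit = next((e for e in remaining
                            if min(abs(a - e), 180.0 - abs(a - e)) <= tol), None)
                if hit is None:
                    ok = False
                    break
                remaining.remove(hit)
            if ok:
                matches.append(name)
        return matches

With these templates, a single horizontal stroke matches every template that has a horizontal edge, and a horizontal stroke followed by an oblique one narrows the list to the triangle and the parallelogram, mirroring the example above.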
1042. Judge whether a selection command from the user has been received; if so, go to step 1044; if not, the closed region the user wants is not among those stored in the processor, so the processor performs step 1043 to identify the closed regions formed by the handwriting the user has drawn.
1043. Judge whether the handwriting is complete; if so, go to step 1045; if not, return to step 103.
Specifically, if the processor receives no contact coordinate data from the stylus within a set first threshold time, it judges that the handwriting is complete. The first threshold time can be set as desired, for example 15 seconds or 20 seconds.
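A minimal sketch of the contact recording and the first-threshold timeout described in steps 103 and 1043, assuming the touch-screen driver calls on_contact() for each reported stylus point and on_lift() when the stylus leaves the screen; the callback names and the 15-second default are illustrative.

    import time
    from typing import List, Tuple

    Point = Tuple[int, int]

    FIRST_THRESHOLD_SECONDS = 15.0   # example values from the embodiment: 15 s or 20 s

    class HandwritingRecorder:
        """Records stylus contact coordinates and decides when the handwriting is done."""

        def __init__(self) -> None:
            self.strokes: List[List[Point]] = []
            self._current: List[Point] = []
            self._last_contact = time.monotonic()

        def on_contact(self, x: int, y: int) -> None:
            # Record the contact coordinate; a real device would also redraw the
            # handwriting in the second layer in real time.
            self._current.append((x, y))
            self._last_contact = time.monotonic()

        def on_lift(self) -> None:
            # A single touch-down .. lift-off sequence is one stroke.
            if self._current:
                self.strokes.append(self._current)
                self._current = []

        def handwriting_complete(self) -> bool:
            # Step 1043: no contact data within the first threshold time means
            # the handwriting is judged complete.
            return (time.monotonic() - self._last_contact) >= FIRST_THRESHOLD_SECONDS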
1044. Take the selected closed region as the viewfinder, then go to step 105.
1045. Identify whether the handwriting forms at least one closed region; if so, go to step 1046; if not, go to step 1047.
1046. Choose one of the closed regions as the viewfinder, then go to step 105.
More specifically, when the handwriting forms more than one closed region, the processor chooses one of them as the viewfinder; referring to Fig. 3, this specifically comprises:
a1. Compute the area S_i of each closed region and obtain the area S_max of the largest closed region, then go to step a2; there are N closed regions, i = 1 ~ N, and N is greater than 1.
a2. Compute the centroid g_i of each closed region and obtain the centroid g_max of the largest closed region, then go to step a3.
a3. Compute the straight-line distance T_i between each centroid g_i and the centroid g_max of the largest closed region, then go to step a4.
a4. Compute a weighted value α_i for each closed region, then go to step a5; here α_i = (S_i / S_max) · p − T_i · q, and p and q are weight coefficients, both greater than 0.
The weight coefficients are empirical values that must be determined through investigation, testing and evaluation; they can be set according to the actual situation and are not enumerated in this embodiment.
a5. Select the maximum value α_max among the weighted values α_i and judge whether α_max is greater than a set weighting threshold; if so, go to step a6; if not, go to step a7.
a6. Select the closed region corresponding to α_max as the viewfinder.
a7. Select the closed region with the largest area S_max as the viewfinder.
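The following Python sketch implements steps a1 to a7, assuming each closed region is available as a simple polygon (a list of vertex points); the shoelace formula used for area and centroid, and the example values of p, q and the weighting threshold, are illustrative choices rather than values given by the patent.

    import math
    from typing import List, Sequence, Tuple

    Point = Tuple[float, float]

    def polygon_area_centroid(poly: Sequence[Point]) -> Tuple[float, Point]:
        """Area and centroid of a simple polygon (shoelace formula)."""
        a = cx = cy = 0.0
        n = len(poly)
        for i in range(n):
            x0, y0 = poly[i]
            x1, y1 = poly[(i + 1) % n]
            cross = x0 * y1 - x1 * y0
            a += cross
            cx += (x0 + x1) * cross
            cy += (y0 + y1) * cross
        a *= 0.5
        if a == 0:
            return 0.0, poly[0]
        return abs(a), (cx / (6 * a), cy / (6 * a))

    def select_viewfinder(regions: List[Sequence[Point]],
                          p: float = 1.0, q: float = 0.001,
                          weight_threshold: float = 0.5) -> int:
        """Return the index of the closed region chosen as the viewfinder (steps a1-a7)."""
        # a1, a2: area S_i and centroid g_i of every region; S_max, g_max of the largest.
        stats = [polygon_area_centroid(r) for r in regions]
        areas = [s for s, _ in stats]
        centroids = [g for _, g in stats]
        i_max = max(range(len(regions)), key=lambda i: areas[i])
        s_max, g_max = areas[i_max], centroids[i_max]

        # a3: straight-line distance T_i from each centroid g_i to g_max.
        dists = [math.dist(g, g_max) for g in centroids]

        # a4: weighted value alpha_i = (S_i / S_max) * p - T_i * q.
        alphas = [(areas[i] / s_max) * p - dists[i] * q for i in range(len(regions))]

        # a5-a7: pick the region of alpha_max if it beats the threshold,
        # otherwise fall back to the region with the largest area.
        i_alpha = max(range(len(regions)), key=lambda i: alphas[i])
        return i_alpha if alphas[i_alpha] > weight_threshold else i_max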
If more than one closed region satisfies the condition when selecting the viewfinder, the user is prompted to select one of them as the viewfinder.
Furthermore, it should be noted that after the viewfinder has been chosen, the user can move the whole figure formed by the handwriting to a suitable position within the second layer without cropping the first image in the first layer.
Optionally, after the processor has selected the viewfinder, it may also automatically color the handwriting that forms the viewfinder; referring to Fig. 4, this comprises steps b1 to b3:
b1. The processor computes the average tone s of the first layer outside the viewfinder, then goes to step b2.
b2. The processor obtains from the palette the tone t opposite to the average tone s, then goes to step b3.
b3. The processor colors the handwriting forming the viewfinder with the tone t.
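A minimal sketch of steps b1 to b3, assuming "tone" is interpreted as hue and the opposite tone is the hue rotated by 180 degrees on the color wheel; the function names and the fully saturated output color are illustrative assumptions.

    import colorsys
    import math
    from typing import Iterable, Tuple

    RGB = Tuple[float, float, float]   # channel values in [0, 1]

    def average_hue(pixels: Iterable[RGB]) -> float:
        """Average hue (degrees) of the given pixels, averaged on the hue circle."""
        sx = sy = n = 0.0
        for r, g, b in pixels:
            h, _, _ = colorsys.rgb_to_hsv(r, g, b)
            sx += math.cos(2 * math.pi * h)
            sy += math.sin(2 * math.pi * h)
            n += 1
        return (math.degrees(math.atan2(sy, sx)) % 360.0) if n else 0.0

    def opposite_tone(pixels_outside_viewfinder: Iterable[RGB]) -> RGB:
        """Steps b1-b3: the tone t opposite to the average tone s of the first
        layer outside the viewfinder, returned as a saturated RGB color."""
        s = average_hue(pixels_outside_viewfinder)        # b1: average tone s
        t = (s + 180.0) % 360.0                           # b2: opposite tone t
        return colorsys.hsv_to_rgb(t / 360.0, 1.0, 1.0)   # b3: color for the handwriting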
1047. Notify the user of the failure, wait for the next handwriting input, and then return to step 103.
If the processor receives no handwriting within a subsequent second threshold time, it controls the mobile device to exit the image-synthesis mode. The second threshold time can be set as desired, for example 5 minutes or 10 minutes.
105. Acquire the second image within the range of the viewfinder, then go to step 106.
The second image may be captured by the front camera: after the processor has chosen the viewfinder, it notifies the front camera to turn on. The second image may also be captured by the rear camera.
It should be noted that the first image and the second image can be captured by the rear camera and the front camera respectively, but they cannot be captured by the same camera at the same time. In practice, when the processor enters the camera program it can choose to open one of the cameras (usually the rear camera), and then open the other camera after the viewfinder has been selected, so that the first image and the second image are guaranteed to come from different cameras.
In addition, the second image may alternatively be a picture the user selects from the album of the mobile device: after the viewfinder is selected, a dialog box pops up on the touch screen, the processor receives the user's tap command, and the album is opened for the user to pick a picture.
Figs. 5a to 5c and Figs. 6a to 6c show concrete examples of selecting a closed region.
In Fig. 5a two closed regions are formed, one containing the other.
In this case the computed weighted value α_2 of closed region 402 is greater than the weighted value α_1 of closed region 401, and α_2 is greater than the set weighting threshold, so closed region 402 is selected as the viewfinder, as shown in Fig. 5a. The front camera is then notified to turn on and the second image, for example the user's own portrait, is acquired within the range of closed region 402, as shown in Fig. 6a.
In Fig. 5b three closed regions are formed, adjoining one another.
In this case the computed weighted values α_3 to α_5 of closed regions 403 to 405 are all smaller than the set weighting threshold, so closed region 403, which has the largest area, is selected as the viewfinder, as shown in Fig. 5b.
The front camera is then notified to turn on and the second image, for example the user's own portrait, is acquired within the range of closed region 403, as shown in Fig. 6b.
The situation in Fig. 5c is more complicated: five closed regions are formed, two of which are in a containment relation and are separate from the other three.
Following the same steps, the weighted value α_6 of closed region 406 is the largest and exceeds the set weighting threshold, so closed region 406 is chosen as the viewfinder and the second image is acquired within its range, as shown in Fig. 6c.
106. Composite the first image, the handwriting and the second image into the third image.
In this step, the processor may be arranged to composite the third image automatically after a set time, or to composite the third image upon receiving a command from the user, for example when the user presses a "photograph" button.
After the third image has been composited, the processor saves it to the memory.
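A minimal sketch of the compositing in step 106, assuming the three inputs are available as equally sized RGB arrays and the viewfinder is given as a boolean mask; NumPy is used only for illustration, and the patent does not prescribe an implementation.

    import numpy as np

    def composite_third_image(first_image: np.ndarray,
                              second_image: np.ndarray,
                              handwriting_layer: np.ndarray,
                              viewfinder_mask: np.ndarray) -> np.ndarray:
        """Composite the first image, the handwriting and the second image (step 106).

        first_image, second_image, handwriting_layer: H x W x 3 uint8 arrays of equal
        size, the second image already rendered at the position of the viewfinder;
        handwriting_layer is zero wherever there is no ink.
        viewfinder_mask: H x W boolean array, True inside the viewfinder.
        """
        third = first_image.copy()
        # Inside the viewfinder, the second image replaces the first image.
        third[viewfinder_mask] = second_image[viewfinder_mask]
        # The handwriting (second layer) is drawn on top wherever it has ink.
        ink = handwriting_layer.any(axis=-1)
        third[ink] = handwriting_layer[ink]
        return third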
As can be seen from the above steps, the photographing method of the present invention acquires the first image in the first layer of the touch screen, draws the handwriting in the second layer, acquires the second image within the viewfinder formed by the handwriting, and then composites the third image, so that the user can composite images in real time while photographing instead of compositing them manually afterwards, which saves the user's operation time.
The above photographing method is described in detail below with a concrete example.
Take a Samsung smartphone as an example: the user wants to photograph the landscape in front and add his or her own portrait to composite a single photo.
First, the smartphone is made to enter the camera program, the rear camera is turned on, and the image captured by the rear camera is displayed on the touch screen in real time.
Then the stylus is pulled out and the image-synthesis mode is entered. This mode may require the user to confirm in a pop-up dialog box, or a default option may be set so that the image-synthesis mode is entered automatically as soon as the stylus is pulled out.
The user draws handwriting on the touch screen with the stylus; according to this handwriting, the processor displays the similar closed regions in real time as a list in the upper right corner of the touch screen for the user to select.
If the user makes no selection, the processor identifies, after the handwriting has been drawn, the closed regions formed by the handwriting and selects one of them as the viewfinder.
After the viewfinder has been selected, the processor automatically colors the handwriting forming the viewfinder and then automatically turns on the front camera. The image captured by the front camera is displayed in the viewfinder in real time for the user to frame the shot; during this process the user can move the handwriting as a whole to the target region as needed.
The processor composites the image captured by the rear camera, the handwriting, and the image captured by the front camera inside the viewfinder into one image and stores it. In actual use, the user can tap a virtual "photograph" button to trigger the processor to composite the image.
If the user wants to continue generating composite images, the handwriting is redrawn on the touch screen; if the user exits the camera program, the image-synthesis mode is exited and the flow ends.
The above describes the photographing method of the embodiment of the present invention; the photographing device of the embodiment of the present invention is described below.
An embodiment of the present invention also provides a photographing device which, referring to Fig. 7, resides in the processor of a mobile device.
The mobile device is configured with a removable stylus and a touch screen that accepts stylus input, the touch screen being electrically connected with the processor.
The device comprises a stylus detection module, a first image acquisition module, a handwriting acquisition module, a second image acquisition module and an image synthesis module. The first image acquisition module, the second image acquisition module and the image synthesis module are each connected with the rear camera, the front camera and the memory; the handwriting acquisition module is connected with the touch screen.
When the mobile device runs the camera program:
the stylus detection module detects that the stylus has been pulled out and notifies the mobile device to enter the image-synthesis mode;
the first image acquisition module acquires the first image displayed in the first layer of the touch screen;
the handwriting acquisition module acquires the handwriting drawn by the stylus on the touch screen and displayed in the second layer of the touch screen, and selects one of the closed regions formed by the handwriting as the viewfinder, the second layer lying above the first layer.
While the stylus is drawing handwriting:
the touch screen passes the contact coordinate data of the stylus to the handwriting acquisition module;
the handwriting acquisition module records the contact coordinate data and, according to the contact coordinate data, displays the handwriting on the touch screen in real time;
if the handwriting acquisition module receives no handwriting drawn by the stylus within the set first threshold time, it judges that the handwriting is complete.
More specifically, the handwriting acquisition module selecting one of the closed regions formed by the handwriting as the viewfinder specifically comprises:
the handwriting acquisition module, according to the handwriting drawn so far, determines closed regions similar to the handwriting and displays them on the touch screen, the module storing a plurality of closed regions in advance;
the handwriting acquisition module judges whether a command from the user selecting one of the displayed closed regions has been received, and if so, takes the selected closed region as the viewfinder; if not, the handwriting acquisition module automatically selects the viewfinder from the closed regions formed by the drawn handwriting.
In a concrete example, the handwriting acquisition module automatically selecting the viewfinder from the closed regions formed by the drawn handwriting specifically comprises:
after judging that the handwriting is complete, the handwriting acquisition module identifies the closed regions formed by the handwriting:
if the handwriting forms more than one closed region, the handwriting acquisition module computes the area S_i of each closed region and obtains the area S_max of the largest closed region, where i = 1 ~ N and N is greater than 1;
if the handwriting acquisition module determines that the handwriting forms no closed region, the user is notified of the failure and the module waits for the next handwriting input;
the handwriting acquisition module computes the centroid g_i of each closed region and obtains the centroid g_max of the largest closed region;
the handwriting acquisition module computes the straight-line distance T_i between each centroid g_i and the centroid g_max of the largest closed region;
the handwriting acquisition module computes a weighted value α_i for each closed region, where α_i = (S_i / S_max) · p − T_i · q, and p and q are weight coefficients, both greater than 0;
the handwriting acquisition module selects the maximum value α_max among the weighted values α_i and judges whether α_max exceeds the set weighting threshold; if so, the closed region corresponding to α_max is selected as the viewfinder; if not, the closed region with the largest area S_max is selected as the viewfinder.
Optionally, after the handwriting acquisition module has selected the viewfinder, it may also automatically color the handwriting that forms the viewfinder: the handwriting acquisition module computes the average tone s of the first layer outside the viewfinder, obtains from the palette the tone t opposite to the average tone s, and finally colors the handwriting forming the viewfinder with the tone t.
The second image acquisition module acquires the second image within the range of the viewfinder.
The image synthesis module composites the first image, the handwriting and the second image into the third image.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A photographing method for a mobile device, characterized in that the mobile device is configured with a processor, a removable stylus, and a touch screen that accepts stylus input and is electrically connected with the processor;
the method is executed by the processor while a camera program is running, and comprises:
detecting that the stylus has been pulled out, and controlling the mobile device to enter an image-synthesis mode;
acquiring a first image displayed in a first layer of the touch screen;
acquiring the handwriting drawn by the stylus on the touch screen and displayed in a second layer of the touch screen, and selecting one of the closed regions formed by the handwriting as a viewfinder, wherein the second layer lies above the first layer;
then acquiring a second image within the range of the viewfinder, and compositing the first image, the handwriting and the second image into a third image.
2. The method according to claim 1, characterized in that the processor selecting one of the closed regions formed by the handwriting as the viewfinder specifically comprises:
the processor, according to the handwriting drawn so far, determining closed regions similar to the handwriting and displaying them on the touch screen, the processor storing a plurality of closed regions in advance;
the processor judging whether a command from the user selecting one of the displayed closed regions has been received, and if so, taking the selected closed region as the viewfinder;
if not, the processor automatically selecting the viewfinder from the closed regions formed by the drawn handwriting.
3. The method according to claim 2, characterized in that the processor automatically selecting the viewfinder from the closed regions formed by the drawn handwriting specifically comprises:
after judging that the handwriting is complete, the processor identifying the closed regions formed by the handwriting;
if the handwriting forms more than one closed region, the processor computing the area S_i of each closed region and obtaining the area S_max of the largest closed region, where i = 1 ~ N and N is greater than 1;
the processor computing the centroid g_i of each closed region and obtaining the centroid g_max of the largest closed region;
the processor computing the straight-line distance T_i between each centroid g_i and the centroid g_max of the largest closed region;
the processor computing a weighted value α_i for each closed region, where α_i = (S_i / S_max) · p − T_i · q, and p and q are weight coefficients, both greater than 0;
the processor selecting the maximum value α_max among the weighted values α_i and judging whether α_max is greater than a set weighting threshold; if so, selecting the closed region corresponding to α_max as the viewfinder; if not, selecting the closed region with the largest area S_max as the viewfinder.
4. The method according to claim 3, characterized in that, if the processor determines that the handwriting forms no closed region, the user is notified of the failure and the processor waits for the next handwriting input.
5. The method according to claim 4, characterized in that, while the stylus is drawing handwriting:
the touch screen passes the contact coordinate data of the stylus to the processor;
the processor records the contact coordinate data and, according to the contact coordinate data, displays the handwriting on the touch screen in real time;
if the processor receives no contact coordinate data from the stylus within a set first threshold time, it judges that the handwriting is complete.
6. A photographing device for a mobile device, characterized in that the mobile device is configured with a processor, a removable stylus, and a touch screen that accepts stylus input and is electrically connected with the processor;
the device is run by the processor while a camera program is running, and comprises:
a stylus detection module, which detects that the stylus has been pulled out and notifies the mobile device to enter an image-synthesis mode;
a first image acquisition module, which acquires a first image displayed in a first layer of the touch screen;
a handwriting acquisition module, which acquires the handwriting drawn by the stylus on the touch screen and displayed in a second layer of the touch screen, and selects one of the closed regions formed by the handwriting as a viewfinder, wherein the second layer lies above the first layer;
a second image acquisition module, which acquires a second image within the range of the viewfinder; and
an image synthesis module, which composites the first image, the handwriting and the second image into a third image.
7. The device according to claim 6, characterized in that the handwriting acquisition module selecting one of the closed regions formed by the handwriting as the viewfinder specifically comprises:
the handwriting acquisition module, according to the handwriting drawn so far, determining closed regions similar to the handwriting and displaying them on the touch screen, the module storing a plurality of closed regions in advance;
the handwriting acquisition module judging whether a command from the user selecting one of the displayed closed regions has been received, and if so, taking the selected closed region as the viewfinder;
if not, the handwriting acquisition module automatically selecting the viewfinder from the closed regions formed by the drawn handwriting.
8. The device according to claim 7, characterized in that the handwriting acquisition module automatically selecting the viewfinder from the closed regions formed by the drawn handwriting specifically comprises:
after judging that the handwriting is complete, the handwriting acquisition module identifying the closed regions formed by the handwriting;
if the handwriting forms more than one closed region, the handwriting acquisition module computing the area S_i of each closed region and obtaining the area S_max of the largest closed region, where i = 1 ~ N and N is greater than 1;
the handwriting acquisition module computing the centroid g_i of each closed region and obtaining the centroid g_max of the largest closed region;
the handwriting acquisition module computing the straight-line distance T_i between each centroid g_i and the centroid g_max of the largest closed region;
the handwriting acquisition module computing a weighted value α_i for each closed region, where α_i = (S_i / S_max) · p − T_i · q, and p and q are weight coefficients, both greater than 0;
the handwriting acquisition module selecting the maximum value α_max among the weighted values α_i and judging whether α_max is greater than a set weighting threshold; if so, selecting the closed region corresponding to α_max as the viewfinder; if not, selecting the closed region with the largest area S_max as the viewfinder.
9. The device according to claim 8, characterized in that, if the handwriting acquisition module determines that the handwriting forms no closed region, the user is notified of the failure and the module waits for the next handwriting input.
10. The device according to claim 9, characterized in that, while the stylus is drawing handwriting:
the touch screen passes the contact coordinate data of the stylus to the handwriting acquisition module;
the handwriting acquisition module records the contact coordinate data and, according to the contact coordinate data, displays the handwriting on the touch screen in real time;
if the handwriting acquisition module receives no handwriting drawn by the stylus within the set first threshold time, it judges that the handwriting is complete.
CN201510362377.6A 2015-06-26 2015-06-26 Photographing method and device Active CN104994276B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510362377.6A CN104994276B (en) 2015-06-26 2015-06-26 Photographing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510362377.6A CN104994276B (en) 2015-06-26 2015-06-26 Photographing method and device

Publications (2)

Publication Number Publication Date
CN104994276A true CN104994276A (en) 2015-10-21
CN104994276B CN104994276B (en) 2018-10-16

Family

ID=54306026

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510362377.6A Active CN104994276B (en) Photographing method and device

Country Status (1)

Country Link
CN (1) CN104994276B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105353907A (en) * 2015-10-30 2016-02-24 努比亚技术有限公司 Photographing method and apparatus
WO2018161534A1 (en) * 2017-03-09 2018-09-13 青岛海信移动通信技术股份有限公司 Image display method, dual screen terminal and computer readable non-volatile storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102546900A (en) * 2010-12-07 2012-07-04 比亚迪股份有限公司 Mobile communication terminal and method for processing information by aid of same
CN102857617A (en) * 2011-06-28 2013-01-02 希姆通信息技术(上海)有限公司 Mobile terminal equipment with handwriting function and functions of camera
CN202488611U (en) * 2012-03-31 2012-10-10 北京三星通信技术研究有限公司 Signal processing circuit of mobile equipment and mobile equipment
CN102843514A (en) * 2012-08-13 2012-12-26 东莞宇龙通信科技有限公司 Photo shooting and processing method and mobile terminal
US20150160837A1 (en) * 2013-12-09 2015-06-11 Samsung Electronics Co., Ltd. Method and device for processing and displaying a plurality of images

Also Published As

Publication number Publication date
CN104994276B (en) 2018-10-16

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant