Summary of the invention
In view of this, the technical problem to be solved by the present invention is to provide a camera apparatus and an image capture method that allow the focus point and the metering point to be set independently, addressing shortcomings of the existing framing-and-shooting process, in which the focus point and the metering point cannot be selected separately, making it inconvenient for the user to frame and shoot under varying background lighting conditions. In other words, the invention provides a scheme in which the focus point and the metering point are decoupled so that the focus value and the exposure value can be set separately during framing, together with a corresponding camera apparatus and image capture method.
The technical solution adopted by the present invention to solve the above technical problem is as follows:
A camera apparatus, comprising:
a display unit module, for displaying an image;
a receiving unit module, for receiving instructions that operate on the image shown by the display unit module;
a processing unit module, for selecting a metering position and/or a focus position according to the instructions received by the receiving unit module.
An image capture method, comprising:
displaying an image;
receiving an instruction that operates on the displayed image;
selecting a metering position and/or a focus position according to the received instruction.
According to embodiments of the invention, when the user composes a shot in preview, the focus region and the metering region can be selected by separate operations as required, so that the composition can be adapted to different scenes. The user can drag the focus frame and the metering frame independently during framing to focus and meter separately, which improves flexibility and the user experience.
Embodiment
The present invention is further elaborated below in conjunction with the drawings and embodiments. It should be appreciated that the specific embodiments described herein are intended only to explain the present invention, not to limit it.
Embodiment one:
Referring to Fig. 1, which is a module diagram of a camera apparatus according to the first embodiment of the present invention.
The camera apparatus comprises: a display unit module 11, a receiving unit module 12, and a processing unit module 13.
The display unit module 11 is used to display an image. The image may be an image captured by a camera, an image received by the camera, or an image stored in the camera apparatus. The display unit module 11 is also used to display a metering pattern and/or a focus pattern. The camera and the display unit module 11 are interconnected. The camera may be arranged outside the camera apparatus; it may be a front-facing camera, a rear-facing camera, or an independent camera connected to the display unit module 11 by a data cable. The display unit module 11 may be an LCD screen or an OLED screen.
The receiving unit module 12 is used to receive instructions that operate on the image shown by the display unit module 11. An instruction may be a gesture instruction, a sound-control/voice instruction, or a touch-input instruction. The instructions include: dragging or clicking the displayed metering pattern to select a metering position; and/or dragging or clicking the displayed focus pattern to select a focus position. The receiving unit module 12 may be a mouse, a keyboard, a microphone, a touchpad, a projection device, or any combination of these.
The processing unit module 13 is used to select a metering position and/or a focus position according to the instructions received by the receiving unit module 12. Preferably, the processing unit module 13 comprises a computing unit. The computing unit takes the brightness values of the pixels covered by the metering pattern at at least one selected metering position as input, computes an output value through a preset function, and the camera apparatus takes the picture based on that output value.
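The metering computation performed by the computing unit can be sketched as follows. This is an illustrative sketch, not the patent's actual implementation: the class name, the mid-grey target of 118, and the step size of 20 are assumptions made here for illustration; the patent only specifies that the brightness values of the covered pixels feed a preset function whose output value drives the shot.

```java
// Illustrative sketch of the computing unit in the processing unit module 13.
// All names and constants are assumptions, not the patent's implementation.
class MeteringCalculator {

    /** Average brightness (0-255) of the luma pixels covered by the metering
     *  rectangle [left, right) x [top, bottom). */
    public static double averageLuma(int[][] luma, int left, int top, int right, int bottom) {
        long sum = 0;
        int count = 0;
        for (int y = top; y < bottom; y++) {
            for (int x = left; x < right; x++) {
                sum += luma[y][x];
                count++;
            }
        }
        return count == 0 ? 0 : (double) sum / count;
    }

    /** One possible "preset function": exposure-compensation steps needed to
     *  pull the metered average toward an assumed mid-grey target of 118,
     *  with an assumed 20 luma levels per compensation step. */
    public static int exposureSteps(double meteredLuma) {
        double target = 118.0;
        return (int) Math.round((target - meteredLuma) / 20.0);
    }
}
```

When the user drags the metering pattern onto a dark subject, `averageLuma` drops and `exposureSteps` returns a positive value, brightening the shot, which matches the exposure change the user observes in preview.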
For example, as shown in Fig. 2, a touch-input instruction is a touch made with a finger on the screen of the display unit module 11 that displays the image. The user can select the metering position, for example by dragging the metering pattern 100 with a finger, observing the change in exposure, and choosing the metering point as required; the user can also select the focus position, for example by dragging the focus pattern 200 to the point that needs to be in focus.
Preferably, there are at least two metering positions; and/or at least two focus positions.
Preferably, the metering pattern and the focus pattern differ in color or in shape.
Referring to Fig. 3, which is a flow diagram of an image capture method of the present invention.
The image capture method comprises the following steps:
Step S1: displaying an image.
The image may be an image captured by a camera, an image received by the camera, or an image stored in the camera apparatus. The image may further include a metering pattern and/or a focus pattern.
Step S2: receiving an instruction that operates on the displayed image.
The instruction includes: an instruction to select a metering position and/or an instruction to select a focus position.
Step S3: selecting a metering position and/or a focus position according to the received instruction.
The metering position is selected according to a received instruction to select a metering position; the focus position is selected according to a received instruction to select a focus position.
The step of selecting the metering position and/or the focus position further comprises:
dragging or clicking the displayed metering pattern to select a metering position; and/or
dragging or clicking the displayed focus pattern to select a focus position.
In other embodiments, there are at least two metering positions; and/or at least two focus positions.
In other embodiments, the metering pattern and the focus pattern differ in color or in shape, which makes them easy to distinguish and convenient for the user to operate.
Preferably, the brightness values of the pixels covered by the metering pattern at at least one selected metering position are taken as input, an output value is computed through a preset function, and the picture is taken according to the computed output value.
Preferably, when there are at least two focus positions and the subjects at the different focus positions lie at different distances from the camera apparatus, the focus parameters are adjusted according to the different focus positions when taking the picture, thereby optimizing the sharpness of the captured image.
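One way a focus parameter could be chosen for two regions at different distances is the classic depth-of-field rule of thumb that focuses between the near and far subjects at their harmonic mean, which splits the defocus blur roughly evenly between them. This is a hedged illustration of the general idea only; the patent does not specify this rule, and the class and method names are assumptions.

```java
// Illustrative only: a textbook depth-of-field compromise, not the patent's
// focusing algorithm. Distances are subject distances in meters.
class MultiFocusPlanner {

    /** Harmonic-mean focus distance 2*n*f/(n+f): the plane between a near
     *  subject at n and a far subject at f where their defocus is balanced. */
    public static double compromiseFocusDistance(double nearMeters, double farMeters) {
        return 2.0 * nearMeters * farMeters / (nearMeters + farMeters);
    }
}
```

For subjects at 1 m and 3 m this yields 1.5 m, noticeably closer than the arithmetic midpoint of 2 m, because defocus blur grows faster toward the near subject.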
Embodiment two:
This embodiment is illustrated with a mobile phone running the Android platform; refer to Fig. 4.
This product uses Android's built-in dispatchTouchEvent (the touch-screen event dispatch method provided by the Android system) to dispatch and handle touch-screen events. The coordinates of the touched area are compared with the previous positions of the focus frame and the metering frame to decide whether the current drag or click event is a focus operation or a metering operation. After this decision, the calculateTapArea method (which computes a rectangular region centered on the touch point) performs the coordinate transform that converts the UI's screen coordinates into driver coordinates usable by the lower layers. The Qualcomm interface setMeteringArea (which passes the metering region down to the lower layers) sets the metering region; the parameter data is passed through JNI to the HAL layer and finally received by the bottom layer. The method proposed by the present invention for framing with independent focus and metering points comprises the following three modules:
(1) Acquisition of the touch-screen event and the region decision for the focus and metering regions: 1) WindowManagerService (the service in the Android framework layer that manages the Views in windows) dispatches the touch event to the current top Activity; in its dispatchPointer function (the method in WindowManagerService that sends touch messages), the message is sent through the client proxy object of an IWindow to the corresponding IWindow server side, i.e. an IWindow.Stub subclass; 2) on receiving the message, the dispatchPointer implementation of the IWindow.Stub subclass is called; 3) once the event is delivered to the top-level View, that View's dispatchTouchEvent method is called, which completes the acquisition of the touch-screen event. The acquired touch-screen coordinates are then compared with the previous focus-region and metering-region coordinates to decide whether the current click or drag falls in the focus effective region, the metering effective region, or an inactive region.
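The region decision at the end of step (1) amounts to a hit test of the touch point against the two frames. The following is a minimal illustrative sketch, not the Android framework code: the class and constant names are assumptions, and in the real flow this comparison happens inside dispatchTouchEvent.

```java
// Illustrative sketch of the focus/metering region decision; names are
// assumptions, not part of the Android framework or the patent's code.
class TouchRegionClassifier {
    public static final int FOCUS = 0;
    public static final int METERING = 1;
    public static final int NONE = 2;

    /** Classify the touch point (x, y) against the current focus frame and
     *  metering frame, each given as {left, top, right, bottom} in screen
     *  coordinates. The focus frame is checked first. */
    public static int classify(float x, float y, int[] focusFrame, int[] meteringFrame) {
        if (contains(focusFrame, x, y)) return FOCUS;
        if (contains(meteringFrame, x, y)) return METERING;
        return NONE;
    }

    private static boolean contains(int[] r, float x, float y) {
        return x >= r[0] && x < r[2] && y >= r[1] && y < r[3];
    }
}
```

A touch classified as NONE would be passed on as an ordinary event rather than starting a drag of either frame.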
(2) Calculation of the effective-region coordinates and the coordinate transform from the UI to the driver:
The focus region and the metering region are calculated from the current touch point by calculateTapArea; then mapRect in Matrix, together with prepareMatrix in the Util tool class (a tool class in the Android app layer that converts upper-layer UI coordinates into lower-layer driver coordinates), converts the upper-layer UI coordinates into driver coordinates.
(3) Parameter passing and calling of the physical-layer interface:
After the respective regions have been calculated, setMeteringArea and setFocusArea of the framework layer (interfaces that pass the metering and focus regions down to the lower layers) pass the parameters to JNI (Java Native Interface, through which the upper-layer Java language calls the lower-layer C language). The parameters are then delivered via android_hardware_Camera (the JNI-layer function that handles Java-to-C calls in the camera module) to the HAL layer, and finally applied by native_set_parms.
The above embodiment is illustrated with the Android platform only and is not limited to it; the invention can also be implemented on platforms or operating systems such as Apple's iOS or Microsoft's Windows.
According to the embodiments of the present invention, when the user composes a shot in preview, the focus region and the metering region can be selected by separate operations as required, so that the composition can be adapted to different scenes, improving the user experience.
The preferred embodiments of the present invention have been described above with reference to the accompanying drawings, but this does not limit the scope of the invention. Those skilled in the art can implement the present invention in many variant forms without departing from its scope and spirit; for example, a feature of one embodiment can be used in another embodiment to obtain yet another embodiment. Any modification, equivalent replacement, or improvement made within the technical conception of the present invention shall fall within the scope of the invention.