CN103988227B - Method and apparatus for target locking in image capture - Google Patents


Info

Publication number
CN103988227B
CN103988227B CN201180075524.1A
Authority
CN
China
Prior art keywords
image
different objects
camera unit
main object
different
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201180075524.1A
Other languages
Chinese (zh)
Other versions
CN103988227A (en)
Inventor
O·卡莱沃
R·叙奥默拉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Technologies Oy
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Publication of CN103988227A publication Critical patent/CN103988227A/en
Application granted granted Critical
Publication of CN103988227B publication Critical patent/CN103988227B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Abstract

According to exemplary embodiments of the present invention, a method and a corresponding apparatus and computer program are disclosed for: receiving an image from the image sensor of a camera unit (505); monitoring the positions of different objects in the received image (510); determining which of the different objects, if any, is a main object that the user is or should be interested in (515); and detecting whether the main object becomes occluded by another of the different objects and, in response to the detection of occlusion, triggering a first action (520).

Description

Method and apparatus for target locking in image capture
Technical field
The present invention relates generally to target locking (targeting) in image capture.
Background technology
In digital photography, an image of the field of view is directed onto the camera sensor to form a digital image. To focus the light onto the camera sensor, an objective lens is used, as is known in camera systems. Depth perception in a photograph can be created by limiting the depth of field: objects closer to or farther from the so-called focal plane appear progressively more blurred, which highlights the desired object. When the distance between the camera and the object changes, or when the focal length of the objective is changed by zooming, autofocus enables the camera to keep a selected object of interest in focus.
For autofocus, the camera needs to know which part of the image should be in focus. Therefore, autofocus may use face detection or other algorithms to find likely targets of interest. Most simply, the user identifies the target point by aiming a single focus point at the desired object, for example locking the focus by pressing the trigger button halfway down, and then re-framing the shot in some other way if desired. In some advanced cameras, autofocus is configured to track an object of interest and keep it in focus as the object moves. This tracking autofocus mode, also referred to as artificial-intelligence servo or continuous focus, is very useful when photographing, for instance, a bird in flight.
Summary of the invention
The each side of example of the present invention is illustrated in the claims.
According to a first exemplary aspect of the present invention, there is provided an apparatus comprising:
an input for receiving an image from the image sensor of a camera unit;
a processor configured to:
monitor the positions of different objects in the received image;
determine which of the different objects, if any, is a main object that the user is or should be interested in;
detect whether the main object becomes occluded by another of the different objects and, in response to the detection of occlusion, trigger a first action.
The apparatus may further comprise an output.
The first action may comprise sending an occlusion-detection signal through the output. The processor may further be configured to determine how much the camera should be moved laterally to avoid the occlusion. The first action may further or alternatively comprise sending a movement signal indicating the direction in which the camera unit should be moved to avoid the occlusion. The movement signal may further comprise the determination of how much the camera should be moved. The sending of the movement signal may be conditional on the determination of how much the camera unit should be moved, so that the movement signal is only sent when a movement smaller than a given threshold is determined to be needed.
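The threshold-gated movement signal described above can be sketched as follows. This is a minimal illustration only; the function name, the millimetre units, and the 200 mm threshold are assumptions, not values taken from the patent.

```python
def maybe_send_move_signal(required_shift_mm: float, threshold_mm: float = 200.0):
    """Return a movement hint only when a small lateral shift of the camera
    would clear the occlusion; if the required move exceeds the threshold,
    stay silent rather than suggest an impractically large move."""
    if abs(required_shift_mm) <= threshold_mm:
        direction = "right" if required_shift_mm > 0 else "left"
        return {"direction": direction, "amount_mm": abs(required_shift_mm)}
    return None  # movement signal suppressed: needed move is above threshold
```

The design choice here mirrors the text: the signal carries both the direction and the amount, and it is only emitted when the determined movement falls below a given threshold.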
The first action may start a continuous shooting mode. The processor may further be configured, during the continuous shooting mode started as the first action, to automatically detect and discard some or all of the images in which the occlusion appears.
The first action may delay image capture. The image capture may be delayed by at most a given maximum period. The maximum period may be 0.5 to 3 seconds. The maximum period may be fixed. Alternatively, the maximum period may be defined dynamically based on one or more factors. The factors may include one or more of the following: the relative speed of the occluded main object and the occluding object; whether other images of the main object being occluded have already been taken; the sharpness of earlier images in which the currently occluded main object was visible; the estimated exposure period; and whether a flash unit has been used and, if so, optionally also the possible flash frequency of the flash unit.
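A dynamically defined maximum delay can be sketched as below. The weighting heuristics (the 100 px/s speed cut-off and which factor caps the delay at what value) are illustrative assumptions; only the 0.5–3 s window comes from the text.

```python
def max_capture_delay_s(relative_speed_px_s: float,
                        have_unoccluded_shot: bool,
                        base_s: float = 3.0) -> float:
    """Heuristic upper bound on how long capture may be postponed while
    the main object is occluded, within the 0.5-3 s window from the text."""
    delay = base_s
    if relative_speed_px_s > 100:      # a fast-moving occluder clears quickly,
        delay = min(delay, 1.0)        # so a short wait usually suffices
    if have_unoccluded_shot:           # a good frame already exists, so do not
        delay = min(delay, 0.5)        # make the user wait long for another
    return max(delay, 0.5)             # never below the 0.5 s floor
```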
The input may further be configured to receive object information from an autofocus unit. The object information may comprise depth information. The processor may be configured to use the received object information as a basis for monitoring the different objects in the received image.
Monitoring the positions of the different objects may comprise determining the positions of the different objects in the lateral direction. Monitoring the positions of the different objects may also involve determining the positions of the different objects along the axis extending between the camera unit and the objects, i.e. in depth. The processor may be configured to determine the depth of an object by face detection and by calculating the proportions of at least one facial feature in the image. The at least one facial feature may comprise two or more points formed by the eyes, ears, mouth, eyebrows, and/or the dimensions of the head. Based on typical sizes of the facial features in question, the processor may be configured to estimate the distance between the camera unit and a person appearing as one of the different objects in the image.
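The distance estimate from a facial feature's apparent size reduces, under a pinhole-camera assumption, to similar triangles. The sketch below is not from the patent; the 63 mm interpupillary distance and the pixel focal length are illustrative assumed values.

```python
def face_distance_m(feature_real_m: float, feature_px: float,
                    focal_px: float) -> float:
    """Pinhole-camera estimate of subject distance from the apparent size of
    a facial feature: distance = focal_length * real_size / image_size,
    with focal length expressed in pixels."""
    return focal_px * feature_real_m / feature_px

# e.g. eyes ~63 mm apart appearing 100 px apart with a 2000 px focal length
# gives a distance of about 1.26 m
```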
The processor may be configured to perform time-of-flight based depth mapping. The processor may be configured to calculate the time of flight based on known illumination timing and image sensor timing.
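The core of a time-of-flight depth calculation is that light travels to the object and back, so the one-way distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the patent does not give formulas; this is standard physics):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """One-way distance from a measured illumination-to-sensor round-trip
    time, as derived from known illumination and sensor timings."""
    return C * round_trip_s / 2.0

# a 10 ns round trip corresponds to roughly 1.5 m
```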
The processor may be configured to skip the determination of the main object and the detection of occlusion if only one object is identified in the received image.
The processor may be configured to detect the different objects in the received image. To detect the different objects, the processor may be configured to cause the camera unit to change the focus over all or most of the usable focusing range and to capture images at different focus settings, and to determine objects at different distances based on how different parts of the image come into focus and/or defocus as the focus is changed. Alternatively or additionally, the processor may be configured to receive a depth map from the camera unit and to use the depth map when determining the different objects. The depth map may come from the autofocus unit. The processor may further be configured to receive a user selection of one or more of the objects identified on the viewfinder. The identification may comprise visually highlighting the objects on the display. The highlighting may comprise: drawing a frame around an object; changing the color of an object; changing the brightness and/or contrast of an object; or any combination of the above. Alternatively or additionally, identifying the detected different objects may comprise presenting the detected different objects.
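The focus-sweep detection described above can be sketched as a depth-from-focus assignment: each image region is attributed the focus distance at which it appeared sharpest during the sweep. The data layout and sharpness scores below are illustrative assumptions.

```python
def depth_from_focus(sharpness_by_focus, focus_distances_m):
    """Assign each image region the sweep focus distance at which it was
    sharpest. `sharpness_by_focus[i][r]` is the sharpness of region r at
    sweep step i; `focus_distances_m[i]` is the focus distance of step i."""
    n_regions = len(sharpness_by_focus[0])
    depths = []
    for r in range(n_regions):
        scores = [step[r] for step in sharpness_by_focus]
        best_step = scores.index(max(scores))  # step where region r peaked
        depths.append(focus_distances_m[best_step])
    return depths

# Two regions over a three-step sweep at 1.0 m, 2.5 m, and 5.0 m:
# region 0 peaks at step 0 (near), region 1 peaks at step 2 (far).
```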
The processor may be configured to use color for monitoring the positions of the different objects in the received image.
According to a second exemplary aspect of the present invention, there is provided a method comprising:
receiving an image from the image sensor of a camera unit;
monitoring the positions of different objects in the received image;
determining which of the different objects, if any, is a main object that the user is or should be interested in; and
detecting whether the main object becomes occluded by another of the different objects and, in response to the detection of occlusion, triggering a first action.
Different non-binding exemplary aspects and embodiments of the present invention have been illustrated above. The embodiments above are merely used to explain selected aspects or steps that may be utilized in implementations of the present invention. Some embodiments may be presented only with reference to certain exemplary aspects of the invention. It should be appreciated that corresponding embodiments may also apply to other exemplary aspects.
Brief description of the drawings
For a more complete understanding of exemplary embodiments of the present invention, reference is now made to the following description taken in connection with the accompanying drawings, in which:
Fig. 1 shows an architectural overview of a system of an exemplary embodiment of the present invention;
Fig. 2 shows a block diagram of an apparatus of an exemplary embodiment of the present invention;
Fig. 3 shows a block diagram of a camera unit of an exemplary embodiment of the present invention;
Fig. 4 shows an exemplary viewfinder view of an exemplary embodiment of the present invention; and
Fig. 5 shows a flow chart of a process of an exemplary embodiment of the present invention.
Detailed description of embodiments
Fig. 1 shows an architectural overview of a system 100 of an exemplary embodiment of the present invention. The system comprises an apparatus 200 having a camera unit (260 in Fig. 2), the camera unit having a first field of view 110 and a second field of view 120. The first field of view 110 is the main view and is also presented to the user of the apparatus 200 on the viewfinder (270 in Fig. 2) of the apparatus 200. The second field of view relates to an exemplary embodiment in which the camera unit has a much larger resolution and field of view than those currently used for imaging; that embodiment will be described in more detail near the end of this document. For simplicity, the first field of view 110 is referred to below as the field of view 110.
In various embodiments, the apparatus 200 is or comprises one or more of the following: a mobile device, a hand-held device, a mobile phone, a digital camera, a personal digital assistant, a gaming device, a portable gaming device, a navigation device, and an in-vehicle user device.
In the field of view 110, there are first and second imaged objects 10, 20 drawn as smiley faces. The second imaged object 20 has moved, and its earlier position is shown in dashed lines. Partly within the field of view is a third imaged object 30, which is moving obliquely into the field of view 110. In their current positions, none of the first to third imaged objects is occluded by another object. In other words, each imaged object is fully visible to the camera unit 260 of the apparatus 200, though naturally only from the side facing the apparatus 200. However, in its earlier position, the second imaged object 20 was occluding the first imaged object 10 (assuming these objects occupy a common plane in the direction normal to the drawing). Occlusion of one or more imaged objects is a situation that easily occurs when photographing, for example, a group of people or animals, and becomes more likely the more image objects there are in the image. The term image object refers to the appearance of an object in the image, such as an aircraft, a dog, or a person. It should be understood that, in general, the background of an image may contain various objects, such as mountains or pieces of furniture. In some cases, different parts of one physical object form separate image objects. For example, a person's hand may occlude his or her face, in which case the hand and the face are treated as separate image objects. The different image objects may be image parts recognized as such by the processor 210 or defined by the user.
Fig. 2 shows a block diagram of the apparatus 200 of an exemplary embodiment of the present invention. The apparatus 200 comprises a communication interface 220, a processor 210 coupled to the communication interface module 220, and a memory 240 coupled to the processor 210. The memory 240 comprises a work memory and a non-volatile memory, such as read-only memory, flash memory, or optical or magnetic memory. In the memory 240, typically at least initially in the non-volatile memory, there is stored software 250 operable to be loaded into and executed by the processor 210. The software 250 may comprise one or more software modules and can be in the form of a computer program product, i.e. software stored on a storage medium. The apparatus 200 further comprises a camera unit 260 and a viewfinder 270, each coupled to the processor.
It should be appreciated that in this document, any coupling refers to a functional or operational coupling; there may be intervening components or circuits between the coupled elements.
The communication interface module 220 is configured to provide local communication over one or more local links. The links may be wired and/or wireless links. The communication interface 220 may further or alternatively implement telecommunication links suitable for establishing links with other users or for data transfer (e.g. using the Internet). Such telecommunication links may be links using any of: wireless local area network links, Bluetooth, ultra-wideband, cellular, or satellite communication links. The communication interface 220 may be integrated into the apparatus 200 or into an adapter or card that may be inserted into a suitable slot or port of the apparatus 200. While Fig. 2 shows one communication interface 220, the apparatus may comprise a plurality of communication interfaces 220.
The processor 210 is, for example, a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a graphics processing unit, an application-specific integrated circuit (ASIC), a field-programmable gate array, a microcontroller, or a combination of such elements. Fig. 2 shows one processor 210, but the apparatus 200 may comprise a plurality of processors.
As mentioned in the foregoing, the memory 240 may comprise volatile and non-volatile memory, such as read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), random-access memory (RAM), flash memory, data disks, optical storage, magnetic storage, smart cards, etc. In some exemplary embodiments, only volatile or only non-volatile memory is present in the apparatus 200. Moreover, in some exemplary embodiments, the apparatus comprises a plurality of memories. In some exemplary embodiments, various elements are integrated. For instance, the memory 240 can be constructed as a part of the apparatus 200 or inserted into a slot, port, or the like. Further still, the memory 240 may serve the sole purpose of storing data, or it may be constructed as a part of an apparatus serving other purposes, such as processing data. Similar options are thinkable for various other elements.
A skilled person appreciates that, in addition to the elements shown in Fig. 2, the apparatus 200 may comprise other elements, such as microphones, displays, and additional circuitry such as further input/output (I/O) circuitry, memory chips, application-specific integrated circuits (ASIC), processing circuitry for specific purposes such as source coding/decoding circuitry, channel coding/decoding circuitry, and ciphering/deciphering circuitry, and the like. Additionally, the apparatus 200 may comprise a disposable or rechargeable battery (not shown) for powering the apparatus when an external power supply is not available.
It is also useful to appreciate that the term apparatus is used in this document with a varying scope. In some of the broader claims and examples, the apparatus may refer to only a subset of the features presented in Fig. 2 or may even be implemented without any of the features of Fig. 2. In one exemplary embodiment, the term apparatus refers to the processor 210, an input line of the processor 210 configured to receive information from the camera unit, and an output line of the processor 210 configured to provide information to the viewfinder.
Fig. 3 shows a block diagram of the camera unit 260 of an exemplary embodiment of the present invention. The camera unit 260 comprises an objective 261, an autofocus unit 262 configured to adjust the focusing of the objective 261, an optional mechanical shutter 263, an image sensor 264, and an input and/or output 265. In one exemplary embodiment, the camera unit 260 is configured to output autofocus information from the autofocus unit 262. In one exemplary embodiment, the camera unit is also configured to receive instructions for the autofocus unit 262 through the I/O 265.
Fig. 4 shows an exemplary viewfinder view 400 of an exemplary embodiment of the present invention. On the viewfinder, there are two windows or panes: a main window presenting the live camera image being framed (the first field of view 110 in this example), and an object window presenting the detected image objects. In one exemplary embodiment, the object window is only presented when at least one object has been detected. In the main window, the camera image is presented, here showing various items on a desk. Among the items in the image, five objects (first to fifth objects 410-450) have been detected as potential objects of interest. It should be appreciated that Fig. 4 is merely an illustrative setting and that some items at the periphery of the image are therefore not recognized as potential objects of interest. Corresponding icons or possibly scaled-down image objects 410' to 450' are likewise shown in the object window. In one exemplary embodiment, the viewfinder is presented on a touch screen. The user can toggle objects "on" or "off" for treatment as image objects of interest. In one exemplary embodiment, the user is allowed to select one or more desired image objects by pointing at the corresponding part of the screen, whereupon the apparatus 200 identifies the borders or frame of the selected image object. In another exemplary embodiment, the user is allowed to manually frame a desired image object, for example by drawing a frame around it.
Fig. 4 presents rectangular image objects. In other exemplary embodiments, other shapes are also possible. For instance, the shape of an image object can be dynamically adapted to match the shape of the image object itself. For example, if a cup is selected, a cup-shaped region can be defined for that image object.
Fig. 5 shows a flow chart of a process of an exemplary embodiment of the present invention, illustrating various possible features. Some or all of the steps shown in Fig. 5 are performed by a processor, such as the processor 210 of the apparatus 200. It should be understood that although the arrows in Fig. 5 point from one box to another, the different steps need not be performed in the order in which they appear in Fig. 5.
In step 505, an image is received from the image sensor of a camera unit. The positions of different objects in the received image are monitored at 510. A main object of interest to the user is determined at 515. In step 520, it is detected whether any main object becomes occluded and, if so, a first action is triggered. Step 525 determines how much the camera unit should be moved laterally to avoid the detected occlusion of the main object. The lateral positions of the different objects are determined at 530. At 535, the positions of the different objects in depth are determined based on, for example, one or more of the different methods disclosed in this document. If fewer than two objects are identified in the received image, the determination of the main object at 515 and the detection of occlusion at 520 can be skipped.
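One pass of the monitoring-and-detection loop just described can be sketched as follows. The data layout (per-object bounding box and depth) and the overlap-plus-nearer-depth occlusion test are illustrative assumptions; the patent leaves the detection criterion open.

```python
def overlaps(a, b):
    """Axis-aligned box intersection test; boxes are (x0, y0, x1, y1)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def process_frame(objects, main_id):
    """One pass of the Fig. 5 loop: given tracked objects, decide whether
    the main object has become occluded and, if so, trigger the first
    action. Each object: {"id", "box": (x0, y0, x1, y1), "depth_m"}."""
    if len(objects) < 2:
        return None                      # steps 515/520 skipped (one object)
    main = next(o for o in objects if o["id"] == main_id)
    for other in objects:
        if other["id"] == main_id:
            continue
        # an object nearer the camera whose box overlaps the main object's
        # box is taken to occlude the main object
        if other["depth_m"] < main["depth_m"] and overlaps(other["box"], main["box"]):
            return "first_action"        # step 520: occlusion detected
    return None
```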
A new image capture and reproduction experience system
In one illustrative aspect, a new capture and reproduction experience system has been developed that enables a scene and a moment to be captured by one user with one device as a set of high-quality related pictures and videos. The captured data comprises multiple images and/or multiple videos, recorded simultaneously, and may comprise information on objects that are separate or may overlap each other. Depth information enables the different objects to be simply recognized or detected. The captured/recorded information, possibly with separate information for each image object, is easy to edit, reduce, or re-render, because the information is well organized.
In addition to 2D image and video systems, the 3D experience can also be improved by using information from one camera system in another camera system or systems. Some implementations further illustrating the invention are discussed below.
One example image acquisition process has the following steps:
1. Obtain a depth map using the camera. In one exemplary embodiment, the depth map is produced by the processor based on the time of flight of light from the apparatus to the image objects and back to the apparatus. The time of flight can be determined using the camera sensor. In another exemplary embodiment, the depth map is produced using 3D imaging.
2. Using the depth map, identify the objects of the image either manually from the viewfinder (user selection) or automatically (with an algorithm based on e.g. face recognition).
3. When an object is identified, define a region (rectangular or free-form) around the object that is marked as an object of interest (see Fig. 4).
4. Start monitoring or tracking the objects of interest, and optionally identify the objects of interest on the viewfinder (Fig. 4).
5. Detect occlusion of the identified objects; alternatively, avoid the occlusion detection if no image object is identified or if fewer than two image objects are identified.
6. When the user commands the apparatus, for example by pressing the trigger to take a picture, trigger one or more actions. These actions may involve:
a. generating a single full-screen photograph;
b. taking a full photograph, i.e. a full-resolution image, and separate photographs of the objects of interest (building a photo library from a single large-resolution photographic image), the full-resolution image using all the pixels that the camera sensor provides for the image (possibly excluding edges used for digital image stabilization);
c. taking full-screen photographs and video of the objects of interest at large resolution or downsampled resolution;
d. taking photographs and/or videos of any other combination, such as a large mother photograph with multiple sub-objects;
e. recording earlier-captured images and/or image streams, such as still image sequences or video clips;
g. starting continuous shooting of still images;
h. extracting sub-images of the objects of interest from a captured still image, while discarding the other parts or storing the other parts at a lower resolution than the images of the image objects of interest.
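Step (h) above can be sketched as a split capture: the region of interest is kept at full resolution while the remainder of the frame is stored downsampled. The list-of-rows image representation and the downscale factor of 2 in the example are illustrative assumptions.

```python
def split_capture(image, box, downscale=4):
    """Keep the region of interest at full resolution and the rest of the
    frame as a coarse, downsampled context image. `image` is a list of rows
    of pixel values; `box` = (x0, y0, x1, y1) in pixel coordinates."""
    x0, y0, x1, y1 = box
    roi = [row[x0:x1] for row in image[y0:y1]]               # full-res crop
    rest = [row[::downscale] for row in image[::downscale]]  # coarse context
    return roi, rest
```

A design note: storing the coarse context alongside the crop preserves enough of the scene for later editing or collage effects without the storage cost of a second full-resolution frame.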
Examples of post-processing/editing of captured images include:
1. Allowing the user to review the content. Each image comprises a set of objects, and images taken close to one another in time define a photo set. For each photo set, the user is allowed to view the images/videos based on:
a. person/object;
b. super-image + sub-images;
c. super-image + video added thereon;
d. video inside video (a video of object 1 that contains a video of object 2);
e. and any other combination of these.
2. Enhanced processing, such as using provided associated features for handling and storing multiple images and videos, for example as instant shortcuts. These associated features may include:
a. deleting;
b. tagging; and/or
c. generating collections.
Various types of devices can be used for implementing the different exemplary embodiments. The system requirements and features of some examples comprise one or more of the following:
- an image capture unit, such as an image sensor, with sufficient resolution (for example, 10 to 200 megapixels, typically 20 to 50 megapixels) to enable capturing multiple images and/or videos at high quality;
- a camera architecture that provides selection, scaling or downsampling, and real-time processing for multiple individual image streams;
- support for automatic and/or user selection (e.g. touch, gesture) of the objects to be recorded;
- support for tracking and video capture of the selected objects (note: during tracking, video and image stabilization can be provided based on the relatively larger background field of view, as illustrated by the second field of view 120 in Fig. 1, or based on the visible image objects recorded by the camera sensor);
- support for generating a depth map or measuring the distances of objects, and optionally for displaying the depth map in the viewfinder during recording and/or during playback of images and/or videos;
- the individual photographs of the objects and the whole scene can be recorded on each user-indicated capture (note: likewise, time shifting (carrying images of timings t-n, ..., t, ..., t+m) can be used to enable better capture of the right moment or an enhanced playback experience). In one exemplary embodiment, if occlusion is detected, some or all of the images recorded in anticipation of an expected shooting command from the user are discarded;
- video of the selected objects can be recorded continuously (note: although all detected or identified image objects are being tracked, video recording can optionally be performed only for the image objects of interest);
- notifications can be given, for example of occlusion;
- a new camera position can be suggested that, taking into account the relative motion of the image objects, reduces or avoids an existing occlusion;
- the captures generated from the recorded images can be optimized (photo collage effect);
- the viewfinder can show the selected object or objects together with the whole scene, and vice versa;
- scaled photographs can suitably be used for video recording, the viewfinder, and still images, without always recording full-resolution and high-frequency video or images;
- video inside video: videos can be shown in different photo sequences or photographs, with different presentations or animations (e.g. a viewfinder-like view or a slide show);
- in the case of 3D camera units, the tracking information of the different objects can be transferred from one camera unit to another.
Without in any way limiting the scope, interpretation, or application of the claims appearing below, a technical effect of one or more of the exemplary embodiments disclosed herein is that occlusion of an image object of interest to the user can be automatically detected. Another technical effect of one or more of the exemplary embodiments disclosed herein is that the detection of occlusion can be used to control the operation of the camera so as to mitigate the adverse effects of the occlusion. Another technical effect of one or more of the exemplary embodiments disclosed herein is that the detection of image objects can be used for two or more of the following: autofocus, detection of occlusion, and production of separate images or videos representing individual image objects of interest to the user. Another technical effect of one or more of the exemplary embodiments disclosed herein is the ability to produce a new imaging experience comprising both capture and reproduction: more relevant information can be recorded from one and the same scene and moment; higher-quality images of the objects can be recorded more easily; an inexpensive device can be used without requiring multiple devices and/or multiple users; the system can serve as an automatic party camera or an enhanced surveillance camera; and/or only cropped areas may need to be processed at full resolution and with all corrections.
In the context of this document, a "computer-readable medium" may be any media or means that can contain, store, communicate, propagate, or transport instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, one example of a computer being the apparatus 200 described and depicted in Fig. 2. A computer-readable medium may comprise a computer-readable storage medium, which may be any media or means that can contain or store instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the functions described above may be optional or may be combined.
Although various aspects of the invention are set out in the independent claims, other aspects of the invention comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.
It is also noted herein that while the foregoing describes exemplary embodiments of the invention, these descriptions should not be viewed in a limiting sense. Rather, several variations and modifications can be made without departing from the scope of the present invention as defined in the appended claims.
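As an illustrative sketch of the occlusion detection and lateral-movement determination described in the embodiments above, the following assumes that tracked objects are represented by image-plane bounding boxes with estimated distances; this representation, and all names in the code, are assumptions chosen for illustration and are not prescribed by the disclosure:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    """A tracked image object: bounding box in pixels plus estimated distance."""
    x: float      # left edge (pixels)
    y: float      # top edge (pixels)
    w: float      # width (pixels)
    h: float      # height (pixels)
    depth: float  # estimated distance from the camera (e.g. metres)

def _overlap_x(a: TrackedObject, b: TrackedObject) -> float:
    """Horizontal overlap of the two boxes in pixels (0 if disjoint)."""
    return max(0.0, min(a.x + a.w, b.x + b.w) - max(a.x, b.x))

def _overlap_y(a: TrackedObject, b: TrackedObject) -> float:
    """Vertical overlap of the two boxes in pixels (0 if disjoint)."""
    return max(0.0, min(a.y + a.h, b.y + b.h) - max(a.y, b.y))

def is_occluded(main: TrackedObject, other: TrackedObject) -> bool:
    """The main object counts as occluded when another object's box
    overlaps it and that object is nearer to the camera."""
    return (_overlap_x(main, other) > 0
            and _overlap_y(main, other) > 0
            and other.depth < main.depth)

def lateral_shift_px(main: TrackedObject, other: TrackedObject) -> float:
    """Signed horizontal shift of the main object's box (in pixels) that
    would just clear the overlap; the sign indicates which way the camera
    should be moved. Returns 0.0 when there is no occlusion."""
    if not is_occluded(main, other):
        return 0.0
    # Shift needed to clear the occluder on either side; pick the smaller.
    shift_right = (other.x + other.w) - main.x   # move main box rightwards
    shift_left = (main.x + main.w) - other.x     # move main box leftwards
    return shift_right if shift_right <= shift_left else -shift_left
```

Mapping the pixel shift to a physical camera movement would additionally require the focal length and the depths of the two objects (parallax moves nearer objects more); that conversion is omitted here.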

Claims (15)

1. An apparatus for image capture targeting, comprising:
an input for receiving images from an image sensor of a camera unit;
a processor configured to:
monitor the positions of different objects in the received images;
determine, if there is a main object among the different objects that is or should be of interest to the user, which of the different objects is the main object;
detect whether the main object becomes occluded by another of the different objects, and trigger a first action in response to the detection of the occlusion; and
determine how much the camera unit should be moved laterally to avoid occlusion of the main object.
2. The apparatus according to claim 1, wherein the first action comprises issuing a movement signal indicating the direction in which the camera unit should be moved to avoid the occlusion.
3. The apparatus according to claim 2, wherein the movement signal further comprises a determination of how much the camera unit should be moved.
4. The apparatus according to claim 2 or 3, wherein the issuing of the movement signal is subject to a determination of how much the camera unit should be moved, such that the movement signal is issued only when the movement determined to be needed is less than a given threshold.
5. The apparatus according to claim 1, wherein the first action is starting a continuous shooting mode.
6. The apparatus according to claim 5, wherein the processor is configured to automatically discard some or all of the images in which the occlusion is detected during the continuous shooting mode.
7. The apparatus according to claim 1, wherein the first action is delaying the image capture.
8. The apparatus according to claim 7, wherein the image capture is delayed by at most a given maximum period.
9. The apparatus according to claim 8, wherein the maximum period is dynamically defined according to one or more factors, the factors including one or more of the following: the relative speed of the occluded main object and the occluding object; whether other images in which the occluded main object was captured have already been taken; the definition of one or more earlier images in which the presently occluded main object was visible; the estimated exposure period; and whether a flash unit is being used and, if so, the flash frequency of the flash unit.
10. The apparatus according to any one of claims 1 to 3, wherein the apparatus is further configured to receive object information from an autofocus unit.
11. The apparatus according to claim 10, wherein, in order to detect the different objects, the processor is configured to cause the camera unit to: change the focus over all or most of the available focusing range, capture images at different focus settings, and determine the objects at different distances based on how different parts of the images come into and/or out of focus as the focus is changed.
12. The apparatus according to claim 10, wherein the processor is configured to receive a depth map from the camera unit and to use the depth map when determining the different objects.
13. A method for image capture targeting, comprising:
receiving images from an image sensor of a camera unit;
monitoring the positions of different objects in the received images;
determining, if there is a main object among the different objects that is or should be of interest to the user, which of the different objects is the main object;
detecting whether the main object becomes occluded by another of the different objects, and triggering a first action in response to the detection of the occlusion; and
determining how much the camera unit should be moved laterally to avoid occlusion of the main object.
14. The method according to claim 13, wherein, in order to detect the different objects, the camera unit is caused to: change the focus over all or most of the available focusing range, capture images at different focus settings, and determine the objects at different distances based on how different parts of the images come into and/or out of focus as the focus is changed.
15. The method according to claim 13 or 14, further comprising receiving a depth map from the camera unit and using the depth map when detecting the different objects.
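Claims 11 and 14 describe detecting objects at different distances by sweeping the focus across its range and observing which parts of the image come into focus at which setting. A minimal depth-from-focus sketch of that idea follows, assuming per-region sharpness scores have already been computed for each focus step; the scoring metric, the region partitioning, and the grouping tolerance are illustrative assumptions, not details from the claims:

```python
def depth_from_focus(sharpness_by_focus, focus_distances):
    """For each image region, pick the focus distance at which that region
    was sharpest - its estimated depth.

    sharpness_by_focus: list over focus steps; each entry is a list of
    per-region sharpness scores (same region order at every step).
    focus_distances: the focus distance used at each step.
    """
    n_regions = len(sharpness_by_focus[0])
    best_depth = []
    for r in range(n_regions):
        scores = [step[r] for step in sharpness_by_focus]
        best_step = max(range(len(scores)), key=scores.__getitem__)
        best_depth.append(focus_distances[best_step])
    return best_depth

def group_by_depth(region_depths, tolerance):
    """Group region indices whose estimated depths lie within `tolerance`
    of their neighbour - a crude stand-in for segmenting the regions into
    candidate objects at different distances."""
    order = sorted(range(len(region_depths)), key=lambda r: region_depths[r])
    groups, current = [], [order[0]]
    for r in order[1:]:
        if region_depths[r] - region_depths[current[-1]] <= tolerance:
            current.append(r)
        else:
            groups.append(current)
            current = [r]
    groups.append(current)
    return groups
```

In practice the sharpness score per region might be something like the variance of a high-pass-filtered image patch; regions that peak at similar focus distances are then treated as belonging to one object, and a depth map received from the camera unit (claims 12 and 15) could replace this sweep entirely.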
CN201180075524.1A 2011-12-16 2011-12-16 Method and apparatus for image capture targeting Active CN103988227B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/FI2011/051121 WO2013087974A1 (en) 2011-12-16 2011-12-16 Method and apparatus for image capture targeting

Publications (2)

Publication Number Publication Date
CN103988227A CN103988227A (en) 2014-08-13
CN103988227B true CN103988227B (en) 2017-08-04

Family

ID=48611900

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201180075524.1A Active CN103988227B (en) 2011-12-16 2011-12-16 Method and apparatus for image capture targeting

Country Status (4)

Country Link
US (1) US9813607B2 (en)
EP (1) EP2791899B1 (en)
CN (1) CN103988227B (en)
WO (1) WO2013087974A1 (en)

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9138636B2 (en) 2007-05-16 2015-09-22 Eyecue Vision Technologies Ltd. System and method for calculating values in tile games
US9595108B2 (en) 2009-08-04 2017-03-14 Eyecue Vision Technologies Ltd. System and method for object extraction
EP2462537A1 (en) 2009-08-04 2012-06-13 Eyecue Vision Technologies Ltd. System and method for object extraction
US9906838B2 (en) 2010-07-12 2018-02-27 Time Warner Cable Enterprises Llc Apparatus and methods for content delivery and message exchange across multiple content delivery networks
US9336452B2 (en) 2011-01-16 2016-05-10 Eyecue Vision Technologies Ltd. System and method for identification of printed matter in an image
US20140169697A1 (en) * 2012-12-19 2014-06-19 Lifetouch Inc. Editor for assembled group images
KR20140102443A (en) * 2013-02-14 2014-08-22 삼성전자주식회사 Object tracking method using camera and camera system for object tracking
US10514256B1 (en) * 2013-05-06 2019-12-24 Amazon Technologies, Inc. Single source multi camera vision system
KR20150030082A (en) * 2013-09-11 2015-03-19 엘지전자 주식회사 Mobile terminal and control control method for the mobile terminal
KR102119659B1 (en) * 2013-09-23 2020-06-08 엘지전자 주식회사 Display device and control method thereof
US9533413B2 (en) 2014-03-13 2017-01-03 Brain Corporation Trainable modular robotic apparatus and methods
US9987743B2 (en) 2014-03-13 2018-06-05 Brain Corporation Trainable modular robotic apparatus and methods
CN104038666B (en) * 2014-04-22 2017-10-27 深圳英飞拓科技股份有限公司 A kind of video shelter detection method and device
US20170251169A1 (en) * 2014-06-03 2017-08-31 Gopro, Inc. Apparatus and methods for context based video data compression
US20160037138A1 (en) * 2014-08-04 2016-02-04 Danny UDLER Dynamic System and Method for Detecting Drowning
US9891789B2 (en) * 2014-12-16 2018-02-13 Honeywell International Inc. System and method of interactive image and video based contextual alarm viewing
US10438277B1 (en) 2014-12-23 2019-10-08 Amazon Technologies, Inc. Determining an item involved in an event
US10552750B1 (en) 2014-12-23 2020-02-04 Amazon Technologies, Inc. Disambiguating between multiple users
US10475185B1 (en) 2014-12-23 2019-11-12 Amazon Technologies, Inc. Associating a user with an event
US9686463B2 (en) * 2015-03-10 2017-06-20 Qualcomm Incorporated Systems and methods for continuous auto focus (CAF)
CN104717428A (en) * 2015-03-12 2015-06-17 深圳华博高科光电技术有限公司 Surveillance camera and control method thereof
US9840003B2 (en) 2015-06-24 2017-12-12 Brain Corporation Apparatus and methods for safe navigation of robotic devices
JP2017017624A (en) * 2015-07-03 2017-01-19 ソニー株式会社 Imaging device, image processing method, and electronic apparatus
TWI706181B (en) * 2015-09-11 2020-10-01 新加坡商海特根微光學公司 Imaging devices having autofocus control
US10044927B2 (en) * 2015-10-19 2018-08-07 Stmicroelectronics International N.V. Capturing a stable image using an ambient light sensor-based trigger
JP6333871B2 (en) * 2016-02-25 2018-05-30 ファナック株式会社 Image processing apparatus for displaying an object detected from an input image
US10210660B2 (en) 2016-04-06 2019-02-19 Facebook, Inc. Removing occlusion in camera views
US10594920B2 (en) * 2016-06-15 2020-03-17 Stmicroelectronics, Inc. Glass detection with time of flight sensor
CN105915805A (en) * 2016-06-15 2016-08-31 北京光年无限科技有限公司 Photographing method for intelligent robot
EP3293703B1 (en) * 2016-09-07 2023-03-22 Airbus Operations GmbH Virtual window device and method for operating a virtual window device
US10817735B2 (en) 2017-07-28 2020-10-27 Google Llc Need-sensitive image and location capture system and method
CN107944960A (en) * 2017-11-27 2018-04-20 深圳码隆科技有限公司 A kind of self-service method and apparatus
US10521961B2 (en) 2017-12-10 2019-12-31 International Business Machines Corporation Establishing a region of interest for a graphical user interface for finding and depicting individuals
CN109214316B (en) * 2018-08-21 2020-08-25 北京深瞐科技有限公司 Perimeter protection method and device
WO2020140210A1 (en) * 2019-01-02 2020-07-09 Hangzhou Taro Positioning Technology Co., Ltd. Automated film-making using image-based object tracking

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101166234A (en) * 2006-10-16 2008-04-23 船井电机株式会社 Device with imaging function
EP2056256A2 (en) * 2007-10-30 2009-05-06 Navteq North America, LLC System and method for revealing occluded objects in an image dataset
CN101873882A (en) * 2008-03-14 2010-10-27 科乐美数码娱乐株式会社 Image generation device, image generation method, information recording medium, and program

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3752510B2 (en) * 1996-04-15 2006-03-08 イーストマン コダック カンパニー Automatic subject detection method for images
US6466205B2 (en) * 1998-11-19 2002-10-15 Push Entertainment, Inc. System and method for creating 3D models from 2D sequential image data
JP4333223B2 (en) 2003-06-11 2009-09-16 株式会社ニコン Automatic photographing device
US8933889B2 (en) * 2005-07-29 2015-01-13 Nokia Corporation Method and device for augmented reality message hiding and revealing
JP4572815B2 (en) * 2005-11-18 2010-11-04 富士フイルム株式会社 Imaging apparatus and imaging method
EP1966648A4 (en) * 2005-12-30 2011-06-15 Nokia Corp Method and device for controlling auto focusing of a video camera by tracking a region-of-interest
US7551754B2 (en) 2006-02-24 2009-06-23 Fotonation Vision Limited Method and apparatus for selective rejection of digital images
US9217868B2 (en) * 2007-01-12 2015-12-22 Kopin Corporation Monocular display device
WO2008131823A1 (en) 2007-04-30 2008-11-06 Fotonation Vision Limited Method and apparatus for automatically controlling the decisive moment for an image acquisition device
JP4964807B2 (en) * 2008-03-07 2012-07-04 パナソニック株式会社 Imaging apparatus and imaging method
US8866920B2 (en) * 2008-05-20 2014-10-21 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with heterogeneous imagers
JP5064313B2 (en) * 2008-06-20 2012-10-31 オリンパス株式会社 Portable information terminal
JP2010098354A (en) * 2008-10-14 2010-04-30 Panasonic Corp Imaging apparatus
JP5127692B2 (en) * 2008-12-25 2013-01-23 キヤノン株式会社 Imaging apparatus and tracking method thereof
JP5267149B2 (en) 2009-01-19 2013-08-21 ソニー株式会社 Display control apparatus, display control method, and program
US20100238161A1 (en) * 2009-03-19 2010-09-23 Kenneth Varga Computer-aided system for 360º heads up display of safety/mission critical data
US20100240988A1 (en) * 2009-03-19 2010-09-23 Kenneth Varga Computer-aided system for 360 degree heads up display of safety/mission critical data
JP5522757B2 (en) * 2009-05-12 2014-06-18 コーニンクレッカ フィリップス エヌ ヴェ Camera, system having camera, method of operating camera, and method of deconvolving recorded image
JP2011023898A (en) * 2009-07-14 2011-02-03 Panasonic Corp Display device, display method, and integrated circuit
EP2502115A4 (en) * 2009-11-20 2013-11-06 Pelican Imaging Corp Capturing and processing of images using monolithic camera array with heterogeneous imagers
CN107346061B (en) * 2012-08-21 2020-04-24 快图有限公司 System and method for parallax detection and correction in images captured using an array camera

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101166234A (en) * 2006-10-16 2008-04-23 船井电机株式会社 Device with imaging function
EP2056256A2 (en) * 2007-10-30 2009-05-06 Navteq North America, LLC System and method for revealing occluded objects in an image dataset
CN101873882A (en) * 2008-03-14 2010-10-27 科乐美数码娱乐株式会社 Image generation device, image generation method, information recording medium, and program

Also Published As

Publication number Publication date
EP2791899A1 (en) 2014-10-22
EP2791899A4 (en) 2015-10-28
EP2791899B1 (en) 2016-11-09
WO2013087974A1 (en) 2013-06-20
US20140320668A1 (en) 2014-10-30
US9813607B2 (en) 2017-11-07
CN103988227A (en) 2014-08-13

Similar Documents

Publication Publication Date Title
CN103988227B (en) Method and apparatus for image capture targeting
US11860511B2 (en) Image pickup device and method of tracking subject thereof
US8199221B2 (en) Image recording apparatus, image recording method, image processing apparatus, image processing method, and program
US7512262B2 (en) Stereo-based image processing
US8493493B2 (en) Imaging apparatus, imaging apparatus control method, and computer program
US20130235086A1 (en) Electronic zoom device, electronic zoom method, and program
CN107038362B (en) Image processing apparatus, image processing method, and computer-readable recording medium
CN109791558B (en) Automatic selection of micro-images
WO2019104569A1 (en) Focusing method and device, and readable storage medium
KR102082300B1 (en) Apparatus and method for generating or reproducing three-dimensional image
TWI477887B (en) Image processing device, image processing method and recording medium
JP2014050022A (en) Image processing device, imaging device, and program
JP2019186791A (en) Imaging apparatus, control method of the imaging apparatus, and control program
CN104935807B (en) Photographic device, image capture method and computer-readable recording medium
JP5003666B2 (en) Imaging apparatus, imaging method, image signal reproducing apparatus, and image signal reproducing method
JP2012124767A (en) Imaging apparatus
CN105467741A (en) Panoramic shooting method and terminal
US11659135B2 (en) Slow or fast motion video using depth information
CN114697528A (en) Image processor, electronic device and focusing control method
WO2023240489A1 (en) Photographic method and apparatus, and storage medium
CN111586281B (en) Scene processing method and device
CN109447929B (en) Image synthesis method and device
JP2024033748A (en) Image processing device, imaging device, image processing method, imaging device control method, program
JP2023122427A (en) Image processing apparatus, control method therefor, program and storage medium
JP5659856B2 (en) Imaging apparatus, imaging method, and program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20160217

Address after: Espoo, Finland

Applicant after: Nokia Technologies Oy

Address before: Espoo, Finland

Applicant before: Nokia Oyj

GR01 Patent grant
GR01 Patent grant