GB2529435A - A Method of Generating A Framed Video Stream - Google Patents

A Method of Generating A Framed Video Stream

Info

Publication number
GB2529435A
GB2529435A GB1414743.3A GB201414743A
Authority
GB
United Kingdom
Prior art keywords
video stream
camera
framing
frames
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1414743.3A
Other versions
GB2529435B (en)
GB201414743D0 (en)
Inventor
Michael Tusch
Ilya Romanenko
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apical Ltd
Original Assignee
Apical Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apical Ltd filed Critical Apical Ltd
Priority to GB1414743.3A priority Critical patent/GB2529435B/en
Publication of GB201414743D0 publication Critical patent/GB201414743D0/en
Priority to US14/825,963 priority patent/US9904979B2/en
Publication of GB2529435A publication Critical patent/GB2529435A/en
Application granted granted Critical
Publication of GB2529435B publication Critical patent/GB2529435B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6811Motion detection based on the image signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/683Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory

Abstract

A method for framing a video stream by capturing a video 110 using a camera 105, detecting motion of the camera 120, detecting the presence of an object 115 in the video, determining the position of the object in at least one of the frames, and generating a framed video stream 135 using a framing in dependence on the motion of the camera and the position of the object. The method may scale the object to a predetermined size with respect to the size of the framed video and the object may also be positioned offset from the centre of the frames. The framed video stream may be of a lower resolution than the captured video stream, i.e. cropped. Also claimed is a system comprising a camera and processing unit to perform the method of framing a video stream, where they may form a single apparatus.

Description

A METHOD OF GENERATING A FRAMED VIDEO STREAM
Technical Field
The present invention relates to a method for processing a video stream and producing a framed video stream.
Background
When capturing a video stream of a subject in a scene, framing the scene effectively, for example to produce an aesthetically pleasing composition, may be difficult, especially if the subject is in motion. For example, the camera operator may not move the camera smoothly, and may be unable to accurately track the object's motion. A preview of the video stream may be available to the user while capturing the video stream, but this may be of limited use, for example if bright light is incident on the camera preview screen, or if the camera is held such that the screen is not easily visible.
In addition, objects may not always be present in a video stream. For example, an object may exit an area being filmed, or may be obscured by another object.
A method is required for improving the automatic framing of a video stream.
Summary
According to a first aspect of the present invention, there is provided a method of framing a video stream, the method comprising: capturing a video stream having frames using a camera, an object being present in the video stream during at least part of the frames; detecting a motion of the camera; detecting the presence of the object in the video stream; determining a position of the object within at least one frame in which the object is present; and generating a framed video stream using a framing in dependence on the motion of the camera and the position of the object.
The method improves the framing of a video stream by making the framing dependent on two parameters instead of one. For example, the framing can follow the motion of the object when the object is present in the video stream, and be stabilised according to the motion of the camera when the object is not present. An aesthetically pleasing composition may thus be obtained while the object is present and also while the object is not present.
The invention further relates to a system for framing a video stream, the system comprising a camera and a processing unit; the processing unit including at least one processor; and at least one memory including computer program instructions, the at least one memory and the computer program instructions being configured to, with the at least one processor, cause the apparatus to perform a method of: capturing a video stream having frames using the camera, an object being present in the video stream during at least part of the frames; detecting a motion of the camera; detecting the presence of the object in the video stream; determining a position of the object within at least one frame in which the object is present; and generating a framed video stream using a framing in dependence on the motion of the camera and the position of the object.
Further features and advantages of the invention will become apparent from the following description of preferred embodiments of the invention, given by way of example only, which is made with reference to the accompanying drawings.
Brief Description of the Drawings
Figure 1 shows a method of framing a video stream.
Figure 2 shows a series of frames comprising a video stream, in which an object is present in some frames and not present in other frames.
Figure 3 shows extraction of a crop window in a frame of a video stream, and production of a frame of a framed video stream from the crop window.
Figure 4 shows generation of a framed video stream in which the framing depends on the size of an object in the video stream.
Figure 5 shows generation of a frame of a framed video stream in which the framing depends on the orientation of an object in the video stream.
Figure 6 shows two systems implementing the method of Figure 1.
Detailed Description
Figure 1 shows schematically a method according to one embodiment, in which a video stream may be automatically framed in order to compensate for camera motion and changing position of an object. Framing may be defined as the process of composing frames of a video stream, generally with the aim of producing an aesthetically pleasing result. For example, framing may have the aim of including certain objects in a frame, or positioning certain objects at defined positions in a frame.
A camera 105 captures a video stream 110 comprising frames. An object, which may for example be a person or a part of a person such as a head, is present in the video stream during at least part of the frames, i.e. one frame or a plurality of frames of the video stream. The video stream 110 is analysed in an object tracking step 115 to detect the presence of the object, and to determine the position of the object in at least one frame in which the object is present.
The detection of the object may for example use known algorithms such as extraction of a histogram of oriented gradients. The histogram of oriented gradients may be analysed by a feature classifier such as a support vector machine. Use of a support vector machine in this manner is known, for example from "Support-Vector Networks" (Cortes and Vapnik, Machine Learning 20, 273-297 (1995), Kluwer Academic Publishers), and involves comparison of part of an image with a template produced by training the algorithm on images containing identified objects. The object detection may, in some embodiments, involve formation of a feature vector from features of the image, to which a trained classifier is applied. Object detection algorithms may output a detection score indicating the confidence of detection, with scores over a threshold value being interpreted as detection of the presence of the object in a frame. Once an object has been detected, its position within a frame can be determined. The position of the object may, for example, be expressed as the centre of a box bounding the object.
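By way of illustration only, this detection step could be realised with OpenCV's built-in HOG descriptor and its pre-trained pedestrian SVM. The following is a minimal sketch, not the patented implementation; the score threshold of 0.5 is an assumed value.

```python
import cv2
import numpy as np

# Pre-trained HOG + linear SVM people detector shipped with OpenCV.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_person(frame, score_threshold=0.5):
    """Return (present, centre) for the highest-scoring detection, if any.

    The threshold on the detection score decides whether the object is
    treated as present in the frame, as described above.
    """
    boxes, scores = hog.detectMultiScale(frame, winStride=(8, 8))
    best_score, best_box = float("-inf"), None
    for box, score in zip(boxes, np.ravel(scores)):
        if score >= score_threshold and score > best_score:
            best_score, best_box = score, box
    if best_box is None:
        return False, None
    x, y, w, h = best_box
    # Express the object position as the centre of its bounding box.
    return True, (x + w / 2.0, y + h / 2.0)
```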
Motion of the camera 105 with respect to the scene being imaged is detected (step 120). This may be performed by direct analysis of camera motion 125, for example by use of accelerometers mounted on the camera 105. As another example, motion of the camera 105 may be determined by analysis of the video stream 110 using known techniques such as dividing the frames of the video stream 110 into tiles, and determining a motion vector for each tile. Techniques for stabilisation of video streams to compensate for camera shake are known in the art. Such techniques include use of an optical stabilisation module to move the camera sensor with respect to the lens (for example as described in US 3942862) and digital processing including selection of a sub-frame of the full frame whose position relative to the full frame moves in a manner opposing the motion of the camera (for example as described in US 7956898).
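As an illustration of the tile-based analysis mentioned above, the sketch below estimates a global camera motion vector by phase-correlating corresponding tiles of consecutive grey-scale frames and taking the median. The grid size and the use of cv2.phaseCorrelate are assumptions for this example, not prescribed by the method.

```python
import cv2
import numpy as np

def estimate_camera_motion(prev_grey, curr_grey, grid=(4, 4)):
    """Estimate (dx, dy) camera motion between two grey-scale frames."""
    h, w = prev_grey.shape
    th, tw = h // grid[0], w // grid[1]
    vectors = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            # Per-tile translation estimate via phase correlation.
            a = np.float32(prev_grey[r * th:(r + 1) * th, c * tw:(c + 1) * tw])
            b = np.float32(curr_grey[r * th:(r + 1) * th, c * tw:(c + 1) * tw])
            (dx, dy), _response = cv2.phaseCorrelate(a, b)
            vectors.append((dx, dy))
    vectors = np.asarray(vectors)
    # The median is robust to tiles dominated by independently moving objects.
    return float(np.median(vectors[:, 0])), float(np.median(vectors[:, 1]))
```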
A framing step 130 generates a framed video stream 135, using a framing in dependence on the motion of the camera 105 and the position of the object. The framing may include cropping and/or scaling the video stream, for example selecting a sub-frame comprising a region of a frame of the video stream provided by the camera and discarding the pixels outside that sub-frame. The sub-frame is preferably rectangular, with a specified origin and dimensions. The sequence of sub-frames selected from a sequence of frames from the camera forms the framed video stream 135.
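A minimal sketch of this sub-frame selection, assuming frames held as NumPy arrays; the clipping behaviour at the frame edges is an assumption added here, since the text does not specify it.

```python
import numpy as np

def extract_sub_frame(frame, origin, size):
    """Cut a rectangular crop window out of a full frame."""
    x, y = origin                     # top-left corner of the crop window
    w, h = size                       # crop window dimensions
    fh, fw = frame.shape[:2]
    x = int(np.clip(x, 0, fw - w))    # keep the window inside the full frame
    y = int(np.clip(y, 0, fh - h))
    return frame[y:y + h, x:x + w]    # pixels outside the window are discarded
```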
Figure 2 shows a video stream 110 comprising frames 205, 210, 215, 220, 225, 230. An object 235, in this case a person, is present in some frames 210, 215, 225, but not present in other frames 205, 220, 230. According to some embodiments of the method, the framing may depend only or at least partly on the motion of the camera when the object is not present in the video stream, such as in frames 205, 220, 230, and depend only or at least partly on the position of the object when the object is present, such as in frames 210, 215, 225. There are four possible combinations of dependence: only on the motion when no object is present and only on the position when the object is present; at least partly on the motion when no object is present and only on the position when the object is present; only on the motion when no object is present and at least partly on the position when the object is present; and at least partly on the motion when no object is present and at least partly on the position when the object is present.
For example, when the object is present, the framing may depend only on the position of the object, for example such that the object remains in substantially the same position within each frame, such as the middle of the frame. When the object is not present, the video stream may be framed depending on the motion of the camera to compensate for that motion; that is to say, the video stream may be stabilised.
According to some aspects of the method, when the object is present in the video stream, the framing may simultaneously depend on the motion of the camera and on the position of the object. The relative degree of dependence on the motion of the camera and the position of the object may depend on a relative weighting factor. The relative weighting factor may depend on a detection score assigned to the object by an object detection algorithm. For example, if an object is identified with a high degree of certainty, the framing may depend almost entirely on the motion of the object, whereas if the object is identified with a low degree of certainty, the framing may depend to a greater degree on the motion of the camera.
In some embodiments, multiple objects may be identified in a video stream, or in a single frame. The method may include selection of a single one of these objects for determining the framing, the selection being based on its position within a frame of the video stream, or based on the size of the object, or based on the type of the object (for example "person" or "car"). The selection may alternatively be performed by manual selection of an object by a user.
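One possible selection rule is sketched below, under an assumed detection record format of (type, score, bounding box): prefer objects of a wanted type, then take the one with the largest bounding box.

```python
def select_object(detections, wanted_type="person"):
    """detections: list of (type, score, (x, y, w, h)) tuples (assumed format)."""
    candidates = [d for d in detections if d[0] == wanted_type] or detections
    # Largest bounding-box area wins; returns None if nothing was detected.
    return max(candidates, key=lambda d: d[2][2] * d[2][3], default=None)
```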
In some embodiments, as shown in Figure 3, a frame 305 of the video stream from the camera 105 has a first, higher resolution than is required in the final framed video stream 135. The framing may then be described as selection of a crop window 310. The area of the frame 305 covered by the crop window 310 corresponds to a frame 315 of the framed video stream 135. In this manner, generation of the framed video stream includes cropping a set of frames of the captured video stream, such that the cropped frames have a second, lower resolution than the original frames. The cropped frame may be up-scaled to a higher resolution, such that the resolution of the frames of the framed video stream is substantially the same as or higher than the resolution of frames of the video stream from the camera. If the framing is expressed as a crop window 310, the position of the crop window 310 in a given frame may be expressed as the displacement of the crop window 310 with respect to the position of the crop window in the previous frame. The relative contributions of the camera motion and the object position to the framing may, for example, be combined as follows, where Δx is the horizontal displacement and Δy is the vertical displacement of the crop window 310 with respect to its position in the previous frame:

Δx = αF1(δx) + (1 − α)F2(−σx)
Δy = αF1(δy) + (1 − α)F2(−σy)

where δx and δy are the horizontal and vertical displacements of the object from its position in the previous frame; σx and σy are the amounts of horizontal and vertical motion of the camera relative to the previous frame; F1 and F2 are spatial/temporal filters which may be applied to the motion of the camera and/or to the motion of the object to smooth the motion of the crop window between frames; and α is the relative weighting factor as described above.
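These equations transcribe directly into a sketch, with F1 and F2 supplied as callables; stateful temporal filters would be kept separately per axis. All names here are illustrative.

```python
def crop_displacement(obj_delta, cam_motion, alpha,
                      f1=lambda v: v, f2=lambda v: v):
    """obj_delta = (delta_x, delta_y): object displacement since the previous frame.
    cam_motion = (sigma_x, sigma_y): camera motion since the previous frame.
    f1, f2: the spatial/temporal filters F1 and F2 (identity by default).
    """
    dx = alpha * f1(obj_delta[0]) + (1.0 - alpha) * f2(-cam_motion[0])
    dy = alpha * f1(obj_delta[1]) + (1.0 - alpha) * f2(-cam_motion[1])
    return dx, dy
```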
In some aspects of the invention, the spatial/temporal filters are not applied.
This is equivalent to setting:

F1(δx) = δx, F1(δy) = δy, F2(−σx) = −σx and F2(−σy) = −σy

in the equations above. In aspects in which the spatial/temporal filters F1, F2 are applied, they may be applied to frames in which the object is present, or frames in which the object is not present, or both, in order to smooth the motion of the crop window between frames. An example of such a filter is a linear temporal filter FLT, which may be defined as:

FLT(x(t)) = βx(t) + (1 − β)x(t − 1)

where x(t) is the position of the crop window in frame t, and β is a temporal smoothing parameter. A similar filter may be applied to the motion of the camera. A smaller value of the temporal smoothing parameter causes a smoother motion of the crop window between frames. More complex temporal filters, such as a non-linear filter with a stability region, or a spatio-temporal filter which takes into account the size of the displacement, may alternatively be incorporated into the method.
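The linear temporal filter FLT above can be sketched as a small stateful object, one instance per filtered quantity (e.g. per axis); β = 0.3 is an illustrative value.

```python
class LinearTemporalFilter:
    """FLT(x(t)) = beta * x(t) + (1 - beta) * x(t - 1)."""

    def __init__(self, beta=0.3):
        self.beta = beta
        self.prev = None              # x(t - 1); unknown before the first frame

    def __call__(self, x):
        if self.prev is None:
            self.prev = x             # first frame: no history to smooth with
        out = self.beta * x + (1.0 - self.beta) * self.prev
        self.prev = x                 # remember the raw previous-frame value
        return out
```

For example, passing LinearTemporalFilter() instances as f1 and f2 in the displacement sketch above applies this smoothing to the object and camera terms respectively.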
In some embodiments, the relative weighting factor is typically equal to 1 if an object is detected in a given frame, and equal to 0 otherwise, with the consequence that the framing is dependent entirely on the position of the object when the object is present, and entirely on the motion of the camera when the object is not present. In other embodiments, in frames in which the object is detected, the relative weighting factor may have a value between 0 and 1. This provides a balance between tracking the object position and compensating for camera motion. In such embodiments, a higher value of the relative weighting factor causes a greater degree of dependence on the position of the object, and a lower value causes a greater degree of dependence on the motion of the camera. Spatial and/or temporal filtering may be applied when determining the weighting factor, for example to effect a smooth transition between dependence of the framing on the position of the object and dependence on the motion of the camera.
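A sketch of such a smoothed weighting factor: the target is the binary 1-or-0 factor described above, and a simple first-order smoothing (the rate constant is an assumed value) makes the framing fade between object tracking and camera-motion compensation rather than jump when detection flickers.

```python
def smoothed_alpha(detected, prev_alpha, rate=0.2):
    """Move the weighting factor towards 1 (object present) or 0 (absent)."""
    target = 1.0 if detected else 0.0
    return prev_alpha + rate * (target - prev_alpha)
```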
The framing may include scaling the crop window by a scaling factor, which may be dependent on the size of the object. In some embodiments, this may be implemented as depicted in Figure 4. Frames 405 of the video stream 110 captured by the camera 105 include an object 410, in this case a person, the size of which varies between frames. For example, the person may move closer to the camera. The framing includes, apart from any change in position of crop windows, selecting crop windows 415 dependent on the size of the object, and scaling these such that the size of the object in frames 420 of the framed video stream 135 is substantially equal to a predetermined size with respect to the frames 420. For example, the method may scale the crop windows 415 such that the size of the object in frames 420 of the framed video stream 135 remains constant within a predetermined threshold, such as 10% of the height or width of the crop window.
The scaling factor, here termed Z, may be defined as:

Z = γS

where γ is a scaling parameter and S is a measure of the size of the detected object, for example its height or width. It may be desirable to apply spatial or temporal filtering to the scaling factor in order to ensure smooth transitions of the crop window between frames. In such embodiments, the scaling factor may be defined as:

Z = αF(γS)

where α is equal to 1 if the object is present in a given frame and equal to 0 otherwise, and F is a filter, for example a linear temporal filter as described above.
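A sketch of this scaling rule; holding the previous scale when the object is absent is an assumption added here, since Z = αF(γS) alone would collapse the window to zero size in frames without the object.

```python
def crop_scale(object_size, present, gamma, size_filter, prev_z):
    """Z = alpha * F(gamma * S), with S the measured object size.

    size_filter may be e.g. a LinearTemporalFilter instance (see above),
    which suppresses visible "breathing" of the crop window between frames.
    """
    if not present:                   # alpha = 0: hold the last scale instead
        return prev_z
    return size_filter(gamma * object_size)
```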
According to some aspects of the invention, the framing may include positioning the object in a position offset from the centre of a frame, in which the offset may depend on the orientation of the object. The orientation may be identified by the application of multiple classifiers to the video stream, each classifier being trained to identify an object in a different orientation. For example, classifiers may be used which are trained to identify a human head oriented to the right or to the left with respect to the camera, or facing the camera. These classifiers typically output a detection score, with the classifier with the highest score indicating the best estimate of the orientation of the object.
In an exemplary embodiment, to obtain an aesthetically pleasing composition it may be desirable to position a right-facing person not in the centre but to the left of the centre of a frame of the framed video stream 135, and vice versa. Figure 5 shows a framing according to some such embodiments. A frame 505 of the video stream 110 from the camera 105 contains an object 510, in this case a person facing to the right of the frame. A crop window 515 is positioned such that the person is positioned to the left of the centre 517 of the crop window by an offset 520. The corresponding frame 525 of the framed video stream 135 thus contains the object 510 having the offset 520 to the left of the centre of the frame. The offset may be expressed as a modification of the co-ordinates of the centre of the crop window from (X, Y) to (X + X0, Y + Y0), where X0 and Y0 are the offsets. For example, if the object is a person, it may be desirable to offset the crop window by a third of its width if the person is facing left or right, to offset the crop window by half of its height if the person is facing up or down, and not to offset the crop window if the person is facing the camera. If the width of the crop window is h and the height of the crop window is v, this may be expressed as:

X0 = h/3 if the person is facing left; X0 = −h/3 if the person is facing right; X0 = 0 if the person is facing the camera; Y0 = v/2 if the person is facing up; and Y0 = −v/2 if the person is facing down.
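These offsets transcribe directly into a sketch; the orientation labels are assumed to come from the per-orientation classifiers described above.

```python
def offset_centre(x, y, h, v, facing):
    """Shift the crop window centre (X, Y) by (X0, Y0) based on orientation."""
    x0 = {"left": h / 3.0, "right": -h / 3.0}.get(facing, 0.0)
    y0 = {"up": v / 2.0, "down": -v / 2.0}.get(facing, 0.0)
    return x + x0, y + y0             # facing the camera leaves the window centred
```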
Two exemplary embodiments of a system for carrying out the above described methods are shown in Figure 6. Figure 6a shows a source 605 providing a video stream, which may for example be the image sensor of a digital camera. The source 605 is connected to a processing unit 610. The processing unit is also connected to at least one memory 615 containing computer program instructions, configured to carry out the method as described above and to produce a framed video stream as described above.
The source 605, processing system 610 and memory 615 form a system 620 for framing a video stream; the components of the system may be integrated in a single apparatus 620, e.g. a camera.
Figure 6b shows a camera 625 providing a video stream. The camera is connected to a processing unit 610 and a memory 615 as described above. The processing unit 610 and memory 615 may be included within a computer 630 separate from the camera.
The above embodiments are to be understood as illustrative examples of the invention. Further embodiments of the invention are envisaged. For example, the source may be a memory within a computer, and the source 605, processing system 610 and memory 615 may all be contained within a computer. It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.

Claims (12)

CLAIMS
  1. A method of framing a video stream, the method comprising: capturing a video stream having frames using a camera, an object being present in the video stream during at least part of the frames; detecting a motion of the camera; detecting the presence of the object in the video stream; determining a position of the object within at least one frame in which the object is present; and generating a framed video stream using a framing in dependence on the motion of the camera and the position of the object.
  2. A method according to claim 1, in which the dependence is determined by the presence of the object such that the framing is in dependence on the motion of the camera when the object is not present in the video stream, and the framing is in dependence on the position of the object when the object is present in the video stream.
  3. A method according to claim 2, in which the framing is simultaneously dependent on the motion of the camera and on the position of the object when the object is present in the video stream.
  4. A method according to claim 3, in which the framing is dependent on the motion of the camera and also dependent on the position of the object, the relative degree of dependence on the motion of the camera and dependence on the position of the object being dependent on a relative weighting factor, the relative weighting factor depending on a detection score assigned to the object by the detecting.
  5. A method according to claim 4, including applying spatial and/or temporal filtering to determining the relative weighting factor.
  6. A method according to any preceding claim, including applying spatial and/or temporal filtering to the motion of the camera and/or to the position of the object.
  7. A method according to any preceding claim, in which the framing includes scaling the video stream for obtaining a size of the object in the framed video stream substantially equal to a predetermined size with respect to a size of frames of the framed video stream.
  8. A method according to any preceding claim, in which the generating the framed video stream includes positioning the object in a position offset from a centre of frames of the framed video stream, the offset depending on an orientation of the object.
  9. A method according to any preceding claim, in which a set of frames of the captured video stream have a first resolution, and in which the generation of the framed video stream includes cropping the set of frames of the captured video stream, such that frames of the framed video stream have a second resolution lower than the first resolution.
  10. A method according to any preceding claim, in which the object is a person.
  11. A system for framing a video stream, the system comprising a camera and a processing unit; the processing unit including at least one processor; and at least one memory including computer program instructions, the at least one memory and the computer program instructions being configured to, with the at least one processor, cause the apparatus to perform a method of: capturing a video stream having frames using the camera, an object being present in the video stream during at least part of the frames; detecting a motion of the camera; detecting the presence of the object in the video stream; determining a position of the object within at least one frame in which the object is present; and generating a framed video stream using a framing in dependence on the motion of the camera and the position of the object.
  12. A system according to claim 11, in which the camera and the processing unit are integrated in a single apparatus.
GB1414743.3A 2014-08-19 2014-08-19 A Method of Generating A Framed Video Stream Active GB2529435B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1414743.3A GB2529435B (en) 2014-08-19 2014-08-19 A Method of Generating A Framed Video Stream
US14/825,963 US9904979B2 (en) 2014-08-19 2015-08-13 Method of generating a framed video system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1414743.3A GB2529435B (en) 2014-08-19 2014-08-19 A Method of Generating A Framed Video Stream

Publications (3)

Publication Number Publication Date
GB201414743D0 GB201414743D0 (en) 2014-10-01
GB2529435A true GB2529435A (en) 2016-02-24
GB2529435B GB2529435B (en) 2020-09-02

Family

ID=51662670

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1414743.3A Active GB2529435B (en) 2014-08-19 2014-08-19 A Method of Generating A Framed Video Stream

Country Status (2)

Country Link
US (1) US9904979B2 (en)
GB (1) GB2529435B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110235083B (en) * 2017-01-02 2023-06-30 广州异构智能科技有限公司 Unsupervised learning of object recognition methods and systems
DE102017205093A1 (en) * 2017-03-27 2018-09-27 Conti Temic Microelectronic Gmbh Method and system for predicting sensor signals of a vehicle
CN111612875A (en) * 2020-04-23 2020-09-01 北京达佳互联信息技术有限公司 Dynamic image generation method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5371539A (en) * 1991-10-18 1994-12-06 Sanyo Electric Co., Ltd. Video camera with electronic picture stabilizer
US6784927B1 (en) * 1997-12-22 2004-08-31 Canon Kabushiki Kaisha Image processing apparatus and image processing method, and storage medium
GB2411310A (en) * 2004-02-19 2005-08-24 Bosch Gmbh Robert Image stabilisation using field of view and image analysis.
WO2011065960A1 (en) * 2009-11-30 2011-06-03 Hewlett-Packard Development Company, L.P. Stabilizing a subject of interest in captured video

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10271017B2 (en) * 2012-09-13 2019-04-23 General Electric Company System and method for generating an activity summary of a person
US8494231B2 (en) * 2010-11-01 2013-07-23 Microsoft Corporation Face recognition in video content
US8643746B2 (en) * 2011-05-18 2014-02-04 Intellectual Ventures Fund 83 Llc Video summary including a particular person


Also Published As

Publication number Publication date
GB2529435B (en) 2020-09-02
US9904979B2 (en) 2018-02-27
GB201414743D0 (en) 2014-10-01
US20160057446A1 (en) 2016-02-25

Similar Documents

Publication Publication Date Title
KR101142316B1 (en) Image selection device and method for selecting image
CN107948517B (en) Preview picture blurring processing method, device and equipment
CN108111749B (en) Image processing method and device
US9600898B2 (en) Method and apparatus for separating foreground image, and computer-readable recording medium
TWI483612B (en) Converting the video plane is a perspective view of the video system
US9531938B2 (en) Image-capturing apparatus
CN106604005B (en) A kind of projection TV Atomatic focusing method and system
US9131155B1 (en) Digital video stabilization for multi-view systems
WO2019105297A1 (en) Image blurring method and apparatus, mobile device, and storage medium
US9674441B2 (en) Image processing apparatus, image processing method, and storage medium
WO2011046633A1 (en) Method and apparatus for image stabilization
JP2006259900A (en) Image processing system, image processor and processing method, recording medium, and program
CN106600548B (en) Fisheye camera image processing method and system
WO2019105298A1 (en) Image blurring processing method, device, mobile device and storage medium
KR101178777B1 (en) Image processing apparatus, image processing method and computer readable-medium
CN107872631B (en) Image shooting method and device based on double cameras and mobile terminal
JP2016066177A (en) Area detection device, area detection method, image processing apparatus, image processing method, program and recording medium
CN110245549B (en) Real-time face and object manipulation
JP2010239440A (en) Image compositing apparatus and program
US9904979B2 (en) Method of generating a framed video system
CN104902168B (en) A kind of image combining method, device and capture apparatus
CN113610865B (en) Image processing method, device, electronic equipment and computer readable storage medium
JP6833483B2 (en) Subject tracking device, its control method, control program, and imaging device
JP7005338B2 (en) Information processing equipment and its control method and program
JP2007096480A (en) Object tracking apparatus and object tracking method

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20220929 AND 20221005