WO2015128294A1 - Method and apparatus for determining an orientation of a video - Google Patents

Method and apparatus for determining an orientation of a video Download PDF

Info

Publication number
WO2015128294A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
orientation
rotation
translation
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/EP2015/053746
Other languages
English (en)
French (fr)
Inventor
Claire-Hélène Demarty
Lionel Oisel
Patrick Perez
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thomson Licensing SAS
Original Assignee
Thomson Licensing SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing SAS filed Critical Thomson Licensing SAS
Priority to US15/122,167 priority Critical patent/US10147199B2/en
Priority to CN201580009769.2A priority patent/CN106030658B/zh
Priority to JP2016550835A priority patent/JP2017512398A/ja
Priority to KR1020167022482A priority patent/KR20160126985A/ko
Priority to EP15706783.6A priority patent/EP3111419A1/en
Publication of WO2015128294A1 publication Critical patent/WO2015128294A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2016 Rotation, translation, scaling

Definitions

  • the present disclosure generally relates to image processing.
  • the present disclosure relates to a method and an apparatus for determining an orientation of a video.
  • the disclosure provides a method and an apparatus for determining an orientation of a video with reasonable computation load for each orientation-homogeneous part of the video.
  • some features are extracted based on the estimated motion (dominant or object-based) of the video scene. From these motion-based features, some frame-based orientation information is computed, together with potential changes in orientation. Together with the temporal orientation information, the disclosure also results in an associated segmentation into orientation-homogeneous parts of the video.
  • a method for determining an orientation of a video comprises the steps of: estimating a motion of the video; extracting translation-based parameters from the estimated motion of the video; and computing at least one feature giving the evolution of the horizontal translation over time against the evolution of the vertical translation according to the translation-based parameters, the feature being used for determining the orientation of the video.
  • the method further comprises: extracting rotation-based parameters from the estimated motion of the video; splitting the video into at least one segment separated by the rotations detected according to the rotation-based parameters; and determining the orientation of the video as a function of an integration of said at least one feature over each of said at least one segment.
  • an apparatus for determining an orientation of a video comprises a processor configured to: estimate a motion of the video; extract translation-based parameters from the estimated motion of the video; and compute at least one feature giving the evolution of the horizontal translation over time against the evolution of the vertical translation according to the translation-based parameters, the feature being used for determining the orientation of the video.
  • a computer program product downloadable from a communication network and/or recorded on a medium readable by computer and/or executable by a processor.
  • the computer program product comprises program code instructions for implementing the steps of the method according to one aspect of the disclosure.
  • a non-transitory computer-readable medium comprising a computer program product recorded thereon and capable of being run by a processor.
  • the non-transitory computer-readable medium includes program code instructions for implementing the steps of a method according to one aspect of the disclosure.
  • Figure 1 is a flow chart showing a method for determining an orientation of a video according to an embodiment of the disclosure;
  • Figure 2 is a diagram showing an example of the evolution of the difference between the absolute horizontal and vertical translations over time, after integration;
  • Figure 3 is a diagram showing an example of the evolution of the rotation parameter, after integration;
  • Figure 4 is a flow chart showing a method for determining an orientation of a video according to another embodiment of the disclosure.
  • Figure 5 is an exemplary diagram showing different possible orientations before and after a clockwise camera rotation (Case 2 in Table 1);
  • Figure 6 is an exemplary diagram showing different possible orientations before and after a counterclockwise camera rotation (Case 3 in Table 1);
  • Figure 7 is a block diagram showing a computer device on which the method for determining an orientation of a video according to an embodiment of the disclosure may be implemented.
  • An embodiment of the disclosure provides a method for determining an orientation of a video. Next, the method of the embodiment of the present disclosure will be described in detail.
  • Figure 1 is a flow chart showing a method for determining an orientation of a video according to an embodiment of the disclosure.
  • embodiments of the disclosure will be discussed while distinguishing only between four orientations of 0°, 90°, -90° and 180° for each frame of the video, i.e. between the two landscape (0°, 180°) and the two portrait (90°, -90°) orientations.
  • the present disclosure therefore will only discuss a classification of the frames into these 4 classes and no further precise orientation angle will be extracted.
  • the system will provide a first classification into portrait/landscape without distinguishing between the two possible portrait orientations (90°, -90°) and between the two possible landscape orientations (0°, 180°).
  • the disclosure can also be applied to the cases with more complicated orientation classification.
  • In step S101, a motion of the video is estimated. It should be noted that a dominant motion of the video is preferred for the motion estimation in step S101, but some object-based motion estimation may be relied on when no dominant motion can be estimated in the video.
  • the dominant motion estimation can be carried out by computing, at each instant, a parametric approximation of the dominant motion of the video.
  • Some known solutions can be used for the estimation of dominant motion. For example, the following document discloses a state-of-the-art technology for this purpose:
  • In step S102, translation-based parameters and rotation-based parameters are extracted from the estimated dominant motion of the video.
  • Motion estimators output the estimated motion model from which an estimation of the translations in both horizontal and vertical directions together with an estimation of the rotation can be extracted.
  • a motion model can be accessed from the dominant motion estimation, which may contain parameters depending on the motion to be estimated.
  • an affine model with 6 parameters can be considered, as in the matrix form below, where the displacement of a pixel at position (x, y) is given by:

        ( dx )   ( a0 )   ( a1  a2 ) ( x )
        ( dy ) = ( b0 ) + ( b1  b2 ) ( y )
  • the first two parameters a0 and b0 respectively correspond to Tx and Ty, the translation values along the x axis and the y axis.
  • the four remaining parameters provide information on the rotation and zooming motions.
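  • As an illustration only (not part of the original disclosure), the following minimal Python sketch shows how the translation and rotation terms might be read off such a 6-parameter affine model; treating the curl term (b1 - a2) as a small-angle approximation of the frame-to-frame rotation is an assumption consistent with this kind of parametric model, and the function name is illustrative:

        import numpy as np

        def extract_motion_parameters(params):
            # params = (a0, a1, a2, b0, b1, b2) from the affine motion model
            a0, a1, a2, b0, b1, b2 = params
            tx, ty = a0, b0                 # translations along the x and y axes
            rotation = 0.5 * (b1 - a2)      # curl term: approximate rotation component
            return tx, ty, rotation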
  • the translation-based parameters and rotation-based parameters can be integrated over a certain period of time.
  • the trapezoidal rule, which is a well-known technique for approximating the definite integral, can be applied over a given window.
  • the size of the given window can be empirically fixed to, for example, 20 frames, but this size can be adapted according to the context. From the dominant motion estimation, only small translation and rotation values are available from one frame to the next, and from such small values it is difficult to accurately detect rotations and translations.
  • the advantage of the integration is to provide a broader view of the motion over a longer period of time.
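  • A minimal sketch of this integration step, assuming per-frame parameter values in an array, unit frame spacing for the trapezoidal rule, and the empirical 20-frame window mentioned above (the helper name is illustrative):

        import numpy as np

        def integrate_over_window(values, window=20):
            # Trapezoidal integration of a per-frame parameter over a sliding window.
            values = np.asarray(values, dtype=float)
            out = np.zeros(len(values))
            for t in range(len(values)):
                chunk = values[max(0, t - window + 1):t + 1]
                # trapezoidal rule with unit spacing between frames
                out[t] = np.sum((chunk[1:] + chunk[:-1]) / 2.0)
            return out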
  • In step S103, at least one feature is computed giving the evolution of the horizontal translation over time against the evolution of the vertical translation, according to the translation-based parameters.
  • such a feature can give some clue about whether the video was captured in a portrait or landscape mode. In most cases, when the amplitude of the translational component of the dominant motion is substantially larger in the horizontal direction than in the vertical one, it is likely that the video was captured in landscape mode, as users tend to pan more than they tilt while capturing a scene.
  • Figure 2 shows an example of evolution of such a feature over time from integrated values of Tx and Ty. Positive values tend to indicate a landscape orientation of the video, while negative values tend to indicate a portrait orientation of the video.
  • the above feature may be smoothed over a given sliding time window to improve the accuracy of the analysis.
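  • As a sketch, such a feature could be computed as the difference of the absolute integrated translations, smoothed with a moving average; the exact formula and the smoothing window size are assumptions consistent with the description above, not values specified by the disclosure:

        import numpy as np

        def translation_feature(tx_int, ty_int, smooth_window=20):
            # Positive values suggest landscape, negative values suggest portrait.
            feature = np.abs(tx_int) - np.abs(ty_int)
            kernel = np.ones(smooth_window) / smooth_window
            return np.convolve(feature, kernel, mode="same")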
  • In step S104, in parallel with step S103, the video is split into segments separated by the rotations detected according to the rotation-based parameters, whether integrated or not.
  • the rotation-based parameter will in turn give some information on whether the camera was rotated during the capture.
  • the segmentation can be carried out through a thresholding of the extracted rotation-based parameters.
  • Figure 3 is a diagram showing an example of the evolution of the rotation-based parameters after integration.
  • the thresholding will give three segments: segment 1 before the rotation, segment 3 after the rotation and segment 2 which corresponds to the rotation and for which the system will not give any orientation information in the embodiment.
  • Figure 3 illustrates an example of splitting the video wherein a simple threshold is applied. Regions above the threshold correspond to clockwise rotations, regions below the opposite of the threshold correspond to counterclockwise rotations, and regions therebetween correspond to regions without rotations.
  • the threshold value was fixed to an empirical value of 0.2, but a more adaptive threshold can be derived from the video content, for example one that only keeps values below mean - 2*sigma. Optionally, a refinement can be added in the detection of the rotation boundaries.
  • This refinement starts with a higher threshold that gives markers of potential rotations. It then goes temporally backward in order to find the rotation start, which corresponds to the frame for which the absolute value of the rotation parameter becomes very low (i.e. lower than an epsilon value). It then searches forward for the end of the rotation, which corresponds to the first frame for which the absolute value of the rotation parameter is lower than epsilon.
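  • A sketch of this segmentation with the refinement, assuming the integrated rotation parameter as input; the detection threshold of 0.2 is the empirical value quoted above, while the epsilon value and the function name are placeholders:

        import numpy as np

        def rotation_mask(rot_int, threshold=0.2, eps=1e-3):
            # Mark frames belonging to a rotation: seed with a high threshold,
            # then expand backward/forward until |rotation| falls below eps.
            rot_int = np.asarray(rot_int, dtype=float)
            n = len(rot_int)
            mask = np.zeros(n, dtype=bool)
            for t in np.flatnonzero(np.abs(rot_int) > threshold):
                start = t
                while start > 0 and abs(rot_int[start - 1]) >= eps:
                    start -= 1
                end = t
                while end < n - 1 and abs(rot_int[end + 1]) >= eps:
                    end += 1
                mask[start:end + 1] = True
            return mask  # True marks frames inside a detected rotation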
  • In step S105, the orientation of the video is determined according to an integration of the at least one feature obtained in step S103 over each segment obtained in step S104.
  • the orientation of the video is determined by computing the orientation of each segment before and after rotations, by integrating Featuretrans over all frames of the segment in order to get one representative single value of Featuretrans per segment. In this embodiment, it can simply be the computation of the area below the curve in Figure 2 (see hatched regions) over the segments. In another embodiment, an additional threshold T is used on the translation-based parameter: the number of continuous parts in which Featuretrans > T or Featuretrans < -T is counted, together with the duration of these parts, and a simple normalized sum of these two counters provides a new 'integrated' value of Featuretrans.
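  • A sketch of the simple variant described above (area below the Featuretrans curve over a segment, the sign of which gives the orientation class); the function name and the positive-means-landscape mapping follow the feature convention of Figure 2:

        import numpy as np

        def segment_orientation(feature_segment):
            # Area under the curve (trapezoidal rule, unit frame spacing);
            # positive area suggests landscape, negative suggests portrait.
            f = np.asarray(feature_segment, dtype=float)
            area = np.sum((f[1:] + f[:-1]) / 2.0)
            return "landscape" if area > 0 else "portrait"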
  • an integrated value of Featuretrans is computed over the whole video or over windows of a predefined size. Specifically, if no rotation was detected, there is only one big segment covering the whole video, and the same integration process for Featuretrans is simply applied to the whole video. As a variant, if the video is very long, it can be processed window by window; in this case, the integration is done once again, but over a predefined window size.
  • if the integrated value of Featuretrans is positive, the orientation is landscape; if it is negative, the orientation is portrait.
  • some additional processing, such as for example face detection, can be applied to distinguish further between the two portrait or the two landscape orientations. It can be appreciated by a person skilled in the art that, by detecting faces in pictures, some information on the most likely orientation of a picture can be obtained, as it is very unlikely that people, and therefore faces, will be upside down. It is also much less likely to have people, and therefore faces, lying down in images than standing. Such information can be used to further distinguish the orientations of a video. No further details will be given in this respect.
  • Figure 4 is a flow chart showing a method for determining an orientation of a video according to another embodiment of the disclosure.
  • steps S401-S405 are respectively identical to steps S101-S105 in Figure 1.
  • a further step S406 is added, which further distinguishes the orientation of the video obtained in step S405 according to the angle of rotation (for example, this angle can be extracted from the rotation-based parameters extracted in step S402).
  • the added step S406 can help distinguish between the two portrait and the two landscape orientations determined in step S405.
  • In step S404, information about whether a rotation took place in the video can be obtained (a rotation is detected when the absolute value of the rotation parameter is above a given threshold).
  • information on the direction of the rotation can also be accessed: if rotation_parameter > 0, the scene rotates clockwise, i.e. the camera was turned counterclockwise relative to the center of the frame; if rotation_parameter < 0, the scene rotates counterclockwise, i.e. the camera was turned clockwise relative to the center of the frame.
  • the final rotation value, for the segment of the video that corresponds to the rotation, may either be an integrated value of the rotation parameter over the segment, or simply -/+ max_over_segment(abs(rotation_parameter)). The sign of this quantity depends on the sign of rotation_parameter over the current segment.
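  • As an illustrative sketch of the second option (the signed peak of the rotation parameter over a detected rotation segment); the sign convention follows the description above, and the helper name is hypothetical:

        import numpy as np

        def rotation_value(rot_segment):
            # Signed rotation value: +/- max |rotation_parameter|,
            # with the sign taken from the peak value itself.
            r = np.asarray(rot_segment, dtype=float)
            peak_idx = np.argmax(np.abs(r))
            return r[peak_idx]  # > 0: scene rotates clockwise (camera counterclockwise)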
  • orient_before and orient_after mean respectively the orientation of all frames before the rotation and the orientation of all frames after the rotation.
  • a rotation parameter is studied in each column. Depending on the sign and value of the parameter, some insight can be obtained both on the fact that a rotation takes place and, in that case, on the direction in which the rotation is taken: clockwise or counterclockwise.
  • Figures 5 and 6 illustrate different cases discussed in Table 1. From Figures 5 and 6 and from the rotation direction, there are only a few possibilities for the translation values before and after the rotation.
  • Figure 5 is an exemplary diagram showing different possible orientations before and after a clockwise camera rotation (Case 2 in Table 1). Only the first two cases are considered in the following description.
  • Figure 6 is an exemplary diagram showing different possible orientations before and after a counterclockwise camera rotation (Case 3 in Table 1). Similarly, only the first two cases are considered in the following description.
  • step S406 can help remove this false alarm. Choosing one or the other of these two procedures may depend on the confidence values available for both the rotation parameter and the translation parameter.
  • the motion of some objects in the scene can be estimated and some translation information of these objects, if any, can be used instead of that of the dominant motion.
  • the disclosure advantageously offers the orientation information for all frames of a given video, even when a rotation occurs during the capture. It is based on light motion-based processing applied to the video and hence does not need an offline learning process.
  • the step of motion estimation can be realized in real time and the on-line orientation detection process is applied instantaneously after the motion estimation either over the whole video, or over some predefined time windows.
  • the disclosure can be implemented on a video player which for example provides playback of previously captured videos.
  • the video player includes, but is not limited to, VideoLAN on a PC, online video websites, and smartphones.
  • An embodiment of the disclosure provides a corresponding apparatus for determining an orientation of a video.
  • Figure 7 is a block diagram showing a computer device 700 on which the method for determining an orientation of a video according to an embodiment of the disclosure may be implemented.
  • the computer device 700 can be any kind of suitable computer or device capable of performing calculations, such as a standard Personal Computer (PC).
  • the device 700 comprises at least one processor 710, RAM memory 720 and a user interface 730 for interacting with a user.
  • the skilled person will appreciate that the illustrated computer is very simplified for reasons of clarity and that a real computer in addition would comprise features such as network connections and persistent storage devices.
  • Through the user interface 730, a user can input/select a video for playback.
  • the result of the determined orientation of the video can also be output to the user, if needed, through the user interface 730.
  • the processor 710 comprises a first unit for estimating a motion of the video.
  • the processor 710 further comprises a second unit for extracting translation- based parameters and rotation-based parameters from the estimated motion of the video.
  • the processor 710 further comprises a third unit for computing at least one feature giving the evolution of the horizontal translation over time against the evolution of the vertical translation according to the translation-based parameters.
  • the processor 710 further comprises a fourth unit for splitting the video into segments separated by the rotations detected according to the rotation-based parameters.
  • the processor 710 further comprises a fifth unit for determining the orientation of the video as a function of an integration of said at least one feature over each of said segments.
  • An embodiment of the disclosure provides a computer program product downloadable from a communication network and/or recorded on a medium readable by computer and/or executable by a processor, comprising program code instructions for implementing the steps of the method described above.
  • An embodiment of the disclosure provides a non-transitory computer-readable medium comprising a computer program product recorded thereon and capable of being run by a processor, including program code instructions for implementing the steps of a method described above.
  • the present disclosure may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof.
  • the software is preferably implemented as an application program tangibly embodied on a program storage device.
  • the application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
  • the machine is implemented on a computer platform having hardware such as one or more central processing units (CPU), a random access memory (RAM), and input/output (I/O) interface(s).
  • the computer platform also includes an operating system and microinstruction code.
  • the various processes and functions described herein may either be part of the microinstruction code or part of the application program (or a combination thereof), which is executed via the operating system.
  • various other peripheral devices may be connected to the computer platform such as an additional data storage device and a printing device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
PCT/EP2015/053746 2014-02-27 2015-02-23 Method and apparatus for determining an orientation of a video Ceased WO2015128294A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US15/122,167 US10147199B2 (en) 2014-02-27 2015-02-23 Method and apparatus for determining an orientation of a video
CN201580009769.2A CN106030658B (zh) 2014-02-27 2015-02-23 Method and apparatus for determining an orientation of a video
JP2016550835A JP2017512398A (ja) 2014-02-27 2015-02-23 Method and apparatus for presenting a video
KR1020167022482A KR20160126985A (ko) 2014-02-27 2015-02-23 Method and apparatus for determining an orientation of a video
EP15706783.6A EP3111419A1 (en) 2014-02-27 2015-02-23 Method and apparatus for determining an orientation of a video

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP14305275 2014-02-27
EP14305275.1 2014-02-27

Publications (1)

Publication Number Publication Date
WO2015128294A1 true WO2015128294A1 (en) 2015-09-03

Family

ID=50288014

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/053746 Ceased WO2015128294A1 (en) 2014-02-27 2015-02-23 Method and apparatus for determining an orientation of a video

Country Status (6)

Country Link
US (1) US10147199B2 (en)
EP (1) EP3111419A1 (en)
JP (1) JP2017512398A (ja)
KR (1) KR20160126985A (ko)
CN (1) CN106030658B (zh)
WO (1) WO2015128294A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020110710A1 (ja) * 2018-11-28 2020-06-04 Fujifilm Corp Imaging device, imaging method, and program
US11178531B2 (en) * 2019-03-26 2021-11-16 International Business Machines Corporation Link devices using their relative positions
US11172132B1 (en) * 2020-04-16 2021-11-09 Lenovo (Singapore) Pte. Ltd. Image save orientation determined based on orientation of object within image
CN119234245A (zh) * 2022-05-27 2024-12-31 Samsung Electronics Co Ltd Method and electronic device for tilt correction of video

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4209852A (en) * 1974-11-11 1980-06-24 Hyatt Gilbert P Signal processing and memory arrangement
JPH08191411A (ja) * 1994-11-08 1996-07-23 Matsushita Electric Ind Co Ltd Scene discrimination method and representative image recording/display device
JPH09214973A (ja) * 1996-01-30 1997-08-15 Tsushin Hoso Kiko Moving picture encoding device and moving picture decoding device
US6459822B1 (en) * 1998-08-26 2002-10-01 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Video image stabilization and registration
JP4328000B2 (ja) * 2000-08-02 2009-09-09 Fujitsu Ltd Moving picture encoding device and special-effect scene detection device for moving pictures
AU768455B2 (en) * 2000-12-18 2003-12-11 Canon Kabushiki Kaisha A method for analysing apparent motion in digital video sequences
KR20040071945A (ko) * 2003-02-07 2004-08-16 엘지전자 주식회사 부화면 조정이 가능한 영상표시기기 및 그 방법
WO2004111687A2 (en) 2003-06-12 2004-12-23 Honda Motor Co., Ltd. Target orientation estimation using depth sensing
US7312819B2 (en) * 2003-11-24 2007-12-25 Microsoft Corporation Robust camera motion analysis for home video
US8279283B2 (en) * 2005-11-18 2012-10-02 Utc Fire & Security Americas Corporation, Inc. Methods and systems for operating a video surveillance system
US7889794B2 (en) * 2006-02-03 2011-02-15 Eastman Kodak Company Extracting key frame candidates from video clip
JP4958497B2 (ja) * 2006-08-07 2012-06-20 Canon Inc Position and orientation measurement device, position and orientation measurement method, mixed reality presentation system, computer program, and storage medium
US20110255844A1 (en) * 2007-10-29 2011-10-20 France Telecom System and method for parsing a video sequence
US8238612B2 (en) * 2008-05-06 2012-08-07 Honeywell International Inc. Method and apparatus for vision based motion determination
JP5240044B2 (ja) * 2009-04-23 2013-07-17 Sony Corp Feature section determination device, feature section determination method, and program
US9124804B2 (en) * 2010-03-22 2015-09-01 Microsoft Technology Licensing, Llc Using accelerometer information for determining orientation of pictures and video images
WO2012044218A1 (en) * 2010-10-01 2012-04-05 Saab Ab A method and an apparatus for image-based navigation
US8953847B2 (en) 2010-10-01 2015-02-10 Saab Ab Method and apparatus for solving position and orientation from correlated point features in images
US8879890B2 (en) * 2011-02-21 2014-11-04 Kodak Alaris Inc. Method for media reliving playback
US9082452B2 (en) * 2011-02-21 2015-07-14 Kodak Alaris Inc. Method for media reliving on demand
US8335350B2 (en) 2011-02-24 2012-12-18 Eastman Kodak Company Extracting motion information from digital video sequences
CA2829290C (en) 2011-03-10 2017-10-17 Vidyo, Inc. Render-orientation information in video bitstream
US8744169B2 (en) * 2011-05-31 2014-06-03 Toyota Motor Europe Nv/Sa Voting strategy for visual ego-motion from stereo
JP2013012821A (ja) * 2011-06-28 2013-01-17 Sony Corp Image format discrimination device, image format discrimination method, and electronic apparatus
US8708494B1 (en) * 2012-01-30 2014-04-29 Ditto Technologies, Inc. Displaying glasses with recorded images
US9529426B2 (en) * 2012-02-08 2016-12-27 Microsoft Technology Licensing, Llc Head pose tracking using a depth camera
US9367145B2 (en) * 2013-03-14 2016-06-14 Qualcomm Incorporated Intelligent display image orientation based on relative motion detection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BOUTHEMY P ET AL: "A UNIFIED APPROACH TO SHOT CHANGE DETECTION AND CAMERA MOTION CHARACTERIZATION", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 9, no. 7, 1 October 1999 (1999-10-01), pages 1030 - 1044, XP000853337, ISSN: 1051-8215, DOI: 10.1109/76.795057 *
ODOBEZ J M ET AL: "Robust multiresolution estimation of parametric motion models", JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, ACADEMIC PRESS, INC, US, vol. 6, no. 4, 1 December 1995 (1995-12-01), pages 348 - 365, XP002362839, ISSN: 1047-3203, DOI: 10.1006/JVCI.1995.1029 *

Also Published As

Publication number Publication date
CN106030658B (zh) 2019-07-23
US20160371828A1 (en) 2016-12-22
CN106030658A (zh) 2016-10-12
EP3111419A1 (en) 2017-01-04
JP2017512398A (ja) 2017-05-18
US10147199B2 (en) 2018-12-04
KR20160126985A (ko) 2016-11-02

Similar Documents

Publication Publication Date Title
US10121229B2 (en) Self-portrait enhancement techniques
US9270899B1 (en) Segmentation approaches for object recognition
CN106464802B (zh) Enhanced image capture
EP3576017A1 (en) Method, apparatus, and device for determining pose of object in image, and storage medium
US9008366B1 (en) Bio-inspired method of ground object cueing in airborne motion imagery
CN106165391B (zh) Enhanced image capture
WO2018137623A1 (zh) Image processing method and apparatus, and electronic device
EP3206163B1 (en) Image processing method, mobile device and method for generating a video image database
US8922662B1 (en) Dynamic image selection
CN106464803A (zh) Enhanced image capture
CN106844492A (zh) Face recognition method, client, server and system
CN113239937A (zh) Lens shift detection method and apparatus, electronic device and readable storage medium
US20180068451A1 (en) Systems and methods for creating a cinemagraph
US10147199B2 (en) Method and apparatus for determining an orientation of a video
WO2020000382A1 (en) Motion-based object detection method, object detection apparatus and electronic device
Chen et al. Variational fusion of time-of-flight and stereo data for depth estimation using edge-selective joint filtering
US10198842B2 (en) Method of generating a synthetic image
CN110095111A (zh) Map scene construction method, construction system and related apparatus
WO2024001617A1 (zh) Method and apparatus for recognizing mobile phone use behavior
US11195298B2 (en) Information processing apparatus, system, method for controlling information processing apparatus, and non-transitory computer readable storage medium
CN103327251B (zh) Multimedia shooting processing method, apparatus and terminal device
CN114862909A (zh) Image processing method, electronic device and related products
JPWO2018179119A1 (ja) Video analysis device, video analysis method, and program
US10282633B2 (en) Cross-asset media analysis and processing
Duanmu et al. A multi-view pedestrian tracking framework based on graph matching

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15706783

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016550835

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20167022482

Country of ref document: KR

Kind code of ref document: A

REEP Request for entry into the european phase

Ref document number: 2015706783

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2015706783

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 15122167

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE