WO2022098050A1 - A method and an electronic device for video processing - Google Patents

A method and an electronic device for video processing

Info

Publication number
WO2022098050A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
target object
motion trajectory
motion
time period
Application number
PCT/KR2021/015690
Other languages
French (fr)
Inventor
Suqin XU
Qiugan SHI
Original Assignee
Samsung Electronics Co., Ltd.
Application filed by Samsung Electronics Co., Ltd.
Publication of WO2022098050A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02: Services making use of location information
    • H04W 4/025: Services making use of location information using location based information parameters
    • H04W 64/00: Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10: ... using adaptive coding
    • H04N 19/102: ... characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/132: Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N 19/134: ... characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/136: Incoming video signal characteristics or properties
    • H04N 19/137: Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N 19/139: Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N 19/146: Data rate or code amount at the encoder output
    • H04N 19/169: ... characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17: ... the unit being an image region, e.g. an object
    • H04N 19/50: ... using predictive coding
    • H04N 19/503: ... involving temporal prediction
    • H04N 19/51: Motion estimation or motion compensation
    • H04N 19/513: Processing of motion vectors
    • H04N 19/587: ... involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/611: ... where the recognised objects include parts of the human body
    • H04N 5/00: Details of television systems
    • H04N 5/14: Picture signal circuitry for video frequency region
    • H04N 5/144: Movement detection


Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method and a device for video processing are provided. The method for video processing includes: acquiring a motion trajectory of a target object as a first motion trajectory, and acquiring a motion trajectory of a camera as a second motion trajectory, in a preset time period; calculating a target movement vector of the target object relative to the camera within the preset time period, based on the first motion trajectory and the second motion trajectory; searching for a motion vector of the target object in a video captured by the camera based on the target movement vector; and processing the video captured by the camera using the motion vector of the target object.

Description

A METHOD AND AN ELECTRONIC DEVICE FOR VIDEO PROCESSING
The present disclosure relates to a field of video processing technique, and more particularly, to a method and an electronic device for video processing.
In video processing, it is usually necessary to calculate the motion vectors between two adjacent frames, and searching for these motion vectors is a computationally intensive task.
Ultra-wideband (UWB) positioning technology is widely used in indoor positioning, and is even applied to file sharing.
The embodiments of the disclosure provide a method and an electronic device for processing video using position information of a target object relative to a camera.
The features of the exemplary embodiments of the present disclosure will become more apparent from the following description taken in conjunction with the accompanying drawings that exemplarily illustrate embodiments, in which:
FIG. 1 illustrates a flowchart of a method for video processing, according to an exemplary embodiment of the present disclosure;
FIG. 2 illustrates a diagram of a target object and an electronic apparatus containing a camera;
FIG. 3 illustrates a diagram of the motion of the target object and the camera in a preset time period;
FIG. 4 illustrates a diagram of a frame interpolating according to an exemplary embodiment of the present disclosure;
FIG. 5a illustrates a diagram illustrating a process of determining a motion vector used when generating an interpolation frame according to an embodiment of the present disclosure;
FIG. 5b illustrates a diagram illustrating a process of determining a search area according to an embodiment of the present disclosure;
FIG. 5c illustrates a diagram illustrating a motion vector of the target object according to an embodiment of the present disclosure;
FIG. 6 illustrates a diagram of video interpolating according to an exemplary embodiment of the present disclosure;
FIG. 7 illustrates a diagram of interpolating a video from 30 FPS to 60 FPS;
FIG. 8 illustrates a diagram of adding tags to a video according to an exemplary embodiment of the present disclosure;
FIG. 9 illustrates a block diagram of a device for video processing according to an exemplary embodiment of the present disclosure;
FIG. 10 illustrates a schematic diagram of a computing device according to an exemplary embodiment of the present disclosure;
FIG. 11 illustrates an example single-sided two-way ranging according to embodiments of the present disclosure;
FIG. 12 illustrates an example double-sided two-way ranging with three messages according to embodiments of the present disclosure;
FIG. 13 illustrates a diagram of a 3-dimensional coordinate system for determining a distance and a relative position between an electronic device and a target object, according to an embodiment of the present disclosure;
FIGs. 14 and 15 illustrate diagrams of a method of calculating a coordinate of a target device in a three-dimensional coordinate system according to an embodiment of the present disclosure.
An exemplary embodiment of the present disclosure is to provide a method and an electronic device for video processing, so as to improve the efficiency of acquiring motion vectors by reducing the amount of calculation for acquiring motion vectors, and then improve the efficiency of video processing.
According to an exemplary embodiment of the present disclosure, there is provided a method for video processing. The method includes: acquiring a motion trajectory of a target object as a first motion trajectory, and acquiring a motion trajectory of a camera as a second motion trajectory, in a preset time period; calculating a target movement vector of the target object relative to the camera within the preset time period, based on the first motion trajectory and the second motion trajectory; searching for a motion vector of the target object in a video captured by the camera based on the target movement vector; and processing the video captured by the camera according to the motion vector of the target object.
Alternatively, the acquiring of the motion trajectory of the target object may comprise: acquiring a relative position of the target object relative to the camera at each time within the preset time period by using the ultra-wideband (UWB) positioning technology; determining a coordinate system based on the relative position of the target object relative to the camera, and determining a UWB distance and a UWB angle of the target object relative to the camera at each time in the coordinate system; and determining the motion trajectory of the target object in the preset time period according to the UWB distance and the UWB angle of the target object relative to the camera at each time.
Alternatively, the acquiring of the motion trajectory of the camera may comprise: acquiring an acceleration of the camera in the preset time period; and determining the moving speed of the camera based on the acceleration of the camera, and determining the motion trajectory of the camera in the preset time period based on the moving speed of the camera.
Alternatively, the searching for the motion vector of the target object in the video captured by the camera based on the target movement vector may comprise: determining whether the target movement vector is the motion vector of the target object in the video captured by the camera; and searching for the motion vector of the target object around and based on the target movement vector in the video captured by the camera, when the target movement vector is not the motion vector of the target object in the video captured by the camera.
Alternatively, the video processing may comprise at least one of the video encoding or video compression, video interpolation, and adding tags to the video.
According to an exemplary embodiment of the present disclosure, there is provided an electronic device for video processing. The electronic device includes: a trajectory acquiring unit configured to acquire a motion trajectory of a target object as a first motion trajectory, and acquire a motion trajectory of a camera as a second motion trajectory, in a preset time period; a movement vector calculating unit configured to calculate a target movement vector of the target object relative to the camera within the preset time period, based on the first motion trajectory and the second motion trajectory; a motion vector searching unit configured to search for a motion vector of the target object in a video captured by the camera based on the target movement vector; and a video processing unit configured to process the video captured by the camera according to the motion vector of the target object.
Alternatively, the trajectory acquiring unit may be configured to: acquire a relative position of the target object relative to the camera at each time within the preset time period by using the ultra-wideband (UWB) positioning technology; determine a coordinate system based on the relative position of the target object relative to the camera, and determine a UWB distance and a UWB angle of the target object relative to the camera at each time in the coordinate system; and determine the motion trajectory of the target object in the preset time period according to the UWB distance and the UWB angle of the target object relative to the camera at each time.
Alternatively, the trajectory acquiring unit may be configured to: acquire an acceleration of the camera in the preset time period; and determine the moving speed of the camera based on the acceleration of the camera, and determine the motion trajectory of the camera in the preset time period based on the moving speed of the camera.
Alternatively, the motion vector searching unit may be configured to: determine whether the target movement vector is the motion vector of the target object in the video captured by the camera; and search for the motion vector of the target object around and based on the target movement vector in the video captured by the camera, when the target movement vector is not the motion vector of the target object in the video captured by the camera.
Alternatively, the video processing may comprise at least one of the video encoding or video compression, video interpolation, and adding tags to the video.
According to an exemplary embodiment of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for video processing according to the present disclosure.
According to an exemplary embodiment of the present disclosure, there is provided a computing apparatus including a processor and a storage having stored thereon a computer program which, when executed by the processor, implements the method for video processing according to the present disclosure.
In the method and device for video processing according to the exemplary embodiments of the present disclosure, a motion trajectory of a target object is acquired as a first motion trajectory and a motion trajectory of a camera is acquired as a second motion trajectory in a preset time period; a target movement vector of the target object relative to the camera within the preset time period is calculated based on the first motion trajectory and the second motion trajectory; a motion vector of the target object is searched for in a video captured by the camera based on the target movement vector; and the video captured by the camera is processed according to the motion vector of the target object. By acquiring the motion vector based on the motion trajectory of the target object and the motion trajectory of the camera, the amount of calculation for acquiring the motion vector is reduced, the efficiency of acquiring the motion vector is improved, and thus the efficiency of video processing is improved. When the method for video processing according to an exemplary embodiment of the present disclosure is used for video encoding or video compression, the speed of video encoding or video compression can be increased. When the method is used for video interpolation, the speed and accuracy of video interpolation can be improved. When the method is used for adding tags to the video, the speed of adding tags can be increased, and the semantic accuracy of the added video tags can be improved.
Additional aspects and/or advantages of the present general inventive concept will be set forth in part in the description which follows and, in part, will be apparent from the description or may be learned by practice of the present general inventive concept.
Before undertaking the DETAILED DESCRIPTION below, it may be advantageous to set forth definitions of certain words and phrases used throughout this patent document. The term "couple" and its derivatives refer to any direct or indirect communication between two or more elements, whether or not those elements are in physical contact with one another. The terms "transmit," "receive," and "communicate," as well as derivatives thereof, encompass both direct and indirect communication. The terms "include" and "comprise," as well as derivatives thereof, mean inclusion without limitation. The term "or" is inclusive, meaning and/or. The phrase "associated with," as well as derivatives thereof, means to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, have a relationship to or with, or the like. The term "controller" means any device, system or part thereof that controls at least one operation. Such a controller may be implemented in hardware or a combination of hardware and software and/or firmware. The functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. The phrase "at least one of," when used with a list of items, means that different combinations of one or more of the listed items may be used, and only one item in the list may be needed. For example, "at least one of: A, B, and C" includes any of the following combinations: A, B, C, A and B, A and C, B and C, and A and B and C.
Moreover, various functions described below can be implemented or supported by one or more computer programs, each of which is formed from computer readable program code and embodied in a computer readable medium. The terms "application" and "program" refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in a suitable computer readable program code. The phrase "computer readable program code" includes any type of computer code, including source code, object code, and executable code. The phrase "computer readable medium" includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory. A "non-transitory" computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals. A non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable memory device.
Definitions for other certain words and phrases are provided throughout this patent document. Those of ordinary skill in the art should understand that in many if not most instances, such definitions apply to prior as well as future uses of such defined words and phrases.
Reference will now be made in detail to exemplary embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, wherein the same reference numerals refer to the same parts throughout. The embodiments are described below with reference to the accompanying drawings in order to explain the present disclosure.
FIG. 1 illustrates a flowchart of a method for video processing, according to an exemplary embodiment of the present disclosure. Here, the video processing includes at least one of the video encoding or video compression, video interpolation, and adding tags to the video. The method for video processing in FIG. 1 may be executed by an electronic apparatus with a camera function. The electronic apparatus may be, for example, but not limited to, a mobile phone, a PDA (personal digital assistant), a PAD (tablet computer), a camera, a watch, a learning machine, and the like.
Referring to FIG. 1, in step S101, in a preset time period, a motion trajectory of a target object is acquired as a first motion trajectory, and a motion trajectory of a camera is acquired as a second motion trajectory. Here, the preset time period is a short or extremely short time, such as, but not limited to, 0.001 ms, 0.05 ms, 0.1 ms, 0.2 ms, 0.3 ms, 1 s, 2 s, and the like.
In an exemplary embodiment of the present disclosure, when acquiring the motion trajectory of the target object, the distance from the camera to the target object may be acquired, and a coordinate system with respect to the camera is determined. Then, coordinates of the target object in the coordinate system are determined using the distance at each time in the preset time period. The motion trajectory of the target object is determined based on the difference of the coordinates of the target object over the preset time period.
The distance from the camera to the target object may be acquired by measuring the time of flight between the camera and the target object using ultra-wideband (UWB) ranging and multiplying the time of flight by the wave velocity. The resulting value is taken as the distance.
Assuming that the preset time period is the (T0, T1) time period, the position of the camera at time T0 is A, the position of the target object at time T0 is C, the position of the camera at time T1 is B, and the position of the target object at time T1 is D, the first motion trajectory is determined based on the displacement between the positions C and D of the target object at times T0 and T1, and the second motion trajectory is determined based on the displacement between the positions A and B of the camera at times T0 and T1. The target movement vector of the target object relative to the camera is determined based on a difference between a first vector representing the first motion trajectory and a second vector representing the second motion trajectory.
In an exemplary embodiment of the present disclosure, when acquiring the motion trajectory of the target object, a relative position of the target object relative to the camera at each time within the preset time period may first be acquired by using the ultra-wideband (UWB) positioning technology. A coordinate system is then determined based on the relative position of the target object relative to the camera, a UWB distance and a UWB angle of the target object relative to the camera at each time are determined in the coordinate system, and the motion trajectory of the target object in the preset time period is determined according to the UWB distance and the UWB angle at each time, so that the trajectory of the target object is acquired easily. Specifically, since UWB positioning technology can acquire the relative position between the camera and the target object at any time, the motion trajectory M1 of the target object at any time may be obtained. When acquiring the relative position of the target object with respect to the camera at each time within the preset time period by using the UWB positioning technology, the distance from the camera to the target object may be acquired by means of two-way ranging, as described below with reference to FIGs. 11 and 12.
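By way of illustration only, the following Python sketch converts per-sample UWB distances and angles into positions and a motion trajectory. The planar (2D) simplification, the sample values, and the function name are assumptions made for this sketch, not part of the disclosure.

```python
import numpy as np

def trajectory_from_uwb(distances, angles_rad):
    """Convert per-sample UWB distances (m) and bearing angles (rad) of the
    target relative to the camera into 2D positions and the displacement
    between consecutive samples (the motion trajectory M1)."""
    d = np.asarray(distances, dtype=float)
    a = np.asarray(angles_rad, dtype=float)
    # Position of the target in a camera-centred 2D coordinate system.
    positions = np.stack([d * np.cos(a), d * np.sin(a)], axis=1)
    # Motion trajectory: displacement between consecutive samples.
    displacements = np.diff(positions, axis=0)
    return positions, displacements

# Example: a target about 2 m away drifting slowly across the view.
positions, m1 = trajectory_from_uwb([2.00, 2.05, 2.12], [0.10, 0.13, 0.17])
print(positions)
print(m1)
```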
In an exemplary embodiment of the present disclosure, when acquiring the motion trajectory of the camera, an acceleration of the camera in the preset time period may first be acquired; the moving speed of the camera is then determined based on the acceleration, and the motion trajectory of the camera in the preset time period is determined based on the moving speed of the camera.
FIG. 11 illustrates an example single-sided two-way ranging 1100 according to embodiments of the present disclosure. The embodiment of the single-sided two-way ranging 1100 illustrated in FIG. 11 is for illustration only; FIG. 11 does not limit the scope of the present disclosure to any particular implementation. The single-sided two-way ranging 1100 may be performed between the camera and the target object. In FIG. 11, device A may be the camera or the electronic device including the camera, and the other device B may be the target object. For example, when a person holds a phone that has a UWB chip and shoots a video in which another person is the target object, the phone is determined as device A, and a device carried by the target object in the video may be determined as device B. The UWB technology can obtain the relative position between the camera and the target object at any time; thus, the target's motion trajectory M1 can be determined at sampling intervals.
SS-TWR (Single-Sided Two-Way Ranging) involves a simple measurement of the round-trip delay of a single message from the initiator to the responder and a response sent back to the initiator. The operation of SS-TWR is as shown in FIG. 11, where device A initiates the exchange and device B responds to complete the exchange. Each device precisely timestamps the transmission and reception times of the message frames, and so can calculate times Tround and Treply by simple subtraction. Hence, the resultant time-of-flight, Tprop, can be estimated by the equation:
Tprop = (Tround - Treply) / 2
FIG. 12 illustrates an example double-sided two-way ranging with three messages 1200 according to embodiments of the present disclosure. The embodiment of the double-sided two-way ranging with three messages 1200 illustrated in FIG. 12 is for illustration only; FIG. 12 does not limit the scope of the present disclosure to any particular implementation. The double-sided two-way ranging with three messages 1200 may be performed in the electronic device according to the embodiment of the present disclosure.
DS-TWR (double-sided two-way ranging) with three messages is illustrated in FIG. 12; it reduces the estimation error induced by clock drift from long response delays. Device A is the initiator that initializes the first round-trip measurement, while device B, as the responder, responds to complete the first round-trip measurement and meanwhile initializes the second round-trip measurement. Each device precisely timestamps the transmission and reception times of the messages, and the resultant time-of-flight estimate, Tprop, can be calculated by the expression:
Tprop = (Tround1 × Tround2 - Treply1 × Treply2) / (Tround1 + Tround2 + Treply1 + Treply2)
Once the resultant time-of-flight estimate Tprop is determined, the distance D between device A and device B can be obtained as D = Tprop × C, where C is the wave propagation velocity.
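The two ranging expressions above can be exercised with a short sketch. The timestamp values below are invented for illustration; C is the propagation speed of the radio wave (the speed of light).

```python
C = 299_792_458.0  # propagation speed of the UWB radio wave, m/s

def tof_ss_twr(t_round, t_reply):
    """Single-sided two-way ranging: Tprop = (Tround - Treply) / 2."""
    return (t_round - t_reply) / 2.0

def tof_ds_twr(t_round1, t_reply1, t_round2, t_reply2):
    """Double-sided two-way ranging with three messages:
    Tprop = (Tround1*Tround2 - Treply1*Treply2)
            / (Tround1 + Tround2 + Treply1 + Treply2)."""
    return ((t_round1 * t_round2 - t_reply1 * t_reply2)
            / (t_round1 + t_round2 + t_reply1 + t_reply2))

# Invented timestamps (seconds) for a device about 2 m away:
# each round trip exceeds its reply delay by 2 * Tprop ~ 13.34 ns.
t_prop = tof_ds_twr(1.000e-3 + 13.34e-9, 1.000e-3,
                    1.010e-3 + 13.34e-9, 1.010e-3)
print("Tprop =", t_prop, "s; D =", t_prop * C, "m")  # D = Tprop * C
```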
FIG. 2 illustrates a diagram of a target object and an electronic apparatus containing a camera. Specifically, while taking a picture or shooting a video of a target (220), the gyroscope and acceleration sensor included in the electronic apparatus (210) may capture the movement of the camera (the camera on the electronic apparatus (210)); thus a motion trajectory M2 of the camera of the electronic apparatus (210) at any time can be obtained. After the acceleration of the camera in the preset time period is acquired, the moving speed of the camera can be acquired by performing an integration operation on the acceleration, and the displacement (i.e., motion trajectory) of the camera in the preset time period can be obtained by performing an integration operation on the moving speed, as sketched below.
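A minimal sketch of this double integration follows, assuming uniformly sampled accelerometer readings already expressed in a fixed (world) frame; the gyroscope-based rotation compensation is omitted, and the rectangle-rule integration and sample values are illustrative assumptions.

```python
import numpy as np

def camera_trajectory(accel, dt):
    """Integrate acceleration samples (N x 3, m/s^2) once to obtain the
    moving speed and a second time to obtain the displacement, i.e. the
    camera motion trajectory M2 (assumes the camera starts at rest)."""
    accel = np.asarray(accel, dtype=float)
    velocity = np.cumsum(accel, axis=0) * dt         # first integration
    displacement = np.cumsum(velocity, axis=0) * dt  # second integration
    return velocity, displacement

# 0.2 s of samples at 1 kHz with a constant 0.5 m/s^2 acceleration in x.
a = np.tile([0.5, 0.0, 0.0], (200, 1))
v, s = camera_trajectory(a, dt=1e-3)
print(s[-1])  # ~ [0.5 * 0.5 * 0.2**2, 0, 0] = [~0.01, 0, 0] m
```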
Referring back to FIG. 1, in step S102, a target movement vector of the target object relative to the camera within the preset time period is calculated, based on the first motion trajectory and the second motion trajectory.
In step S103, a motion vector of the target object is searched for in a video captured by the camera based on the target movement vector.
In step S104, the video captured by the camera is processed using the motion vector of the target object.
FIG. 3 illustrates a diagram of the motion of the target object and the camera in a preset time period.
Specifically, as shown in FIG. 3, the target movement vector of the target object relative to the camera within a preset time period may be calculated through the following steps:
a. A video of the motion of the target object is captured by using the camera of the electronic apparatus. It is assumed that the camera of the electronic apparatus is initially located at point A (310) and the target is initially located at point C (330), and that the camera moves from point A (310) to point B (320) and the target moves from point C (330) to point D (340) after a very short time T1 (the preset time period). In other words, it is assumed that the camera located at point A (310) first captures the target object located at point C (330), and after the preset time period T1, the camera moves to point B (320) and captures the target object located at point D (340). Furthermore, it is assumed that the UWB technology can obtain the relative position between the camera and the target object at any time, so the target's motion trajectory M1 can be determined at sampling intervals.
b. Since the camera moves from point A (310) to point B (320) with the electronic apparatus within the very short time T1 (the preset time period), the vector AB may be acquired from the motion trajectory M2 of the electronic apparatus.
c. Since the target object moves from point C (330) to point D (340), the vector CD may be acquired from the motion trajectory M1 of the target object. Using UWB technology, the vector AC between the camera located at point A (310) and the target object located at point C (330) can be acquired, and after the time T1, the vector BD between the camera located at point B (320) and the target object located at point D (340) can be obtained.
d. The target movement vector V in the video captured by the camera is calculated based on the vectors AC and BD. Since AC + CD = AB + BD (both sums equal the vector AD), the target movement vector V = BD - AC = CD - AB.
In the exemplary embodiments of the present disclosure, there may be some special situations:
(1) The electronic apparatus does not move, that is, AB = 0. The target object moves from point C to point D, so the target movement vector V = CD.
(2) The target object does not move (CD = 0), and the electronic apparatus moves from point A to point B, so V = -AB.
(3) Neither the target object nor the electronic apparatus moves, so V = 0.
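The general case and the three special cases reduce to a one-line vector difference, as the following sketch with invented coordinates illustrates; the point values are assumptions for this example only.

```python
import numpy as np

# Invented positions (metres) matching the geometry of FIG. 3.
A = np.array([0.0, 0.0])   # camera at the start of the time period
B = np.array([0.1, 0.0])   # camera at the end of the time period
C = np.array([2.0, 1.0])   # target at the start of the time period
D = np.array([2.3, 1.2])   # target at the end of the time period

AB, CD = B - A, D - C      # from the trajectories M2 and M1
AC, BD = C - A, D - B      # from UWB ranging at the two instants

V = BD - AC                      # target movement vector
assert np.allclose(V, CD - AB)   # since AC + CD = AB + BD = AD

# Special cases: AB == 0 gives V == CD; CD == 0 gives V == -AB;
# if neither the camera nor the target moves, V == 0.
print(V)
```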
FIG. 13 illustrates a diagram of a 3-dimensional coordinate system for determining a distance and a relative position between an electronic device and a target object, according to an embodiment of the present disclosure.
Referring to FIG. 13, it is assumed that the electronic device (1300) includes three UWB chips (1310, 1320, 1330), where the position of the first UWB chip (1310) is P1 (x1, y1, 0), the position of the second UWB chip (1320) is P2 (x2, y2, 0), and the position of the third UWB chip (1330) is P3 (x3, y3, 0). The origin point O of the 3D coordinate system may be determined as the center of one side of the triangle formed by connecting the three UWB chips (1310, 1320, 1330). Alternatively, a point on the xy plane located at the same distance from P1, P2, and P3 may be determined as the origin point O; in other words, the origin point O may be determined by calculating the point that satisfies OP1 = OP2 = OP3. The z-axis is determined as the line perpendicular to the xy plane.
FIGs.14 and 15 illustrate diagrams a method of calculating a coordinate of a target device in a three-dimensional coordinate system according to an embodiment of the present disclosure.
Referring to FIG. 14, it is assumed that the electronic device (1410) includes three UWB chips (1411, 1412, 1413), where the position of the first UWB chip (1411) is P1 (x1, y1, 0), the position of the second UWB chip (1412) is P2 (x2, y2, 0), and the position of the third UWB chip (1413) is P3 (x3, y3, 0). Using UWB technology, the distances d1, d2, and d3 from each of the three UWB chips (1411, 1412, 1413) to the target device (1420) can be determined. Using the distances d1, d2, d3 and the known positions P1, P2, and P3 of the three UWB chips (1411, 1412, 1413), the coordinate P(x, y, z) of the target device (1420, 1500) can be determined by solving the following equations for x, y, and z:
(x - x1)² + (y - y1)² + z² = d1²
(x - x2)² + (y - y2)² + z² = d2²
(x - x3)² + (y - y3)² + z² = d3²
In this way, the coordinates of the target device (1420, 1500) may be determined in a three-dimensional coordinate system based on the electronic device, and the UWB distance and UWB angle (θ) may be determined from the three-dimensional coordinates.
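One standard way to solve the three sphere equations above is to subtract pairs of equations, which cancels the quadratic terms and leaves a 2 x 2 linear system in x and y; z then follows from any one equation. The chip positions and distances below are invented for illustration, and measurement noise is ignored.

```python
import numpy as np

def locate_target(p1, p2, p3, d1, d2, d3):
    """Solve (x - xi)^2 + (y - yi)^2 + z^2 = di^2 for i = 1, 2, 3,
    where the three UWB chips lie on the z = 0 plane."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting equation 1 from equations 2 and 3 is linear in x, y.
    A = 2.0 * np.array([[x1 - x2, y1 - y2],
                        [x1 - x3, y1 - y3]])
    b = np.array([d2**2 - d1**2 + x1**2 - x2**2 + y1**2 - y2**2,
                  d3**2 - d1**2 + x1**2 - x3**2 + y1**2 - y3**2])
    x, y = np.linalg.solve(A, b)
    z = np.sqrt(max(d1**2 - (x - x1)**2 - (y - y1)**2, 0.0))
    return x, y, z  # sign of z chosen as "in front of" the device

# Chips at the corners of a small triangle on the device (metres);
# the distances correspond to a target near (0.03, 0.05, 1.5).
print(locate_target((0.00, 0.00), (0.06, 0.00), (0.03, 0.12),
                    1.5011, 1.5011, 1.5016))
```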
Meanwhile, when using the 3-dimensional coordinate system, a motion vector that is not in the same plane as the captured image or video of the camera can be projected onto the plane of the image or video captured by the camera. For example, assuming that the image or video captured by the camera is defined on the x-y plane of the x-y-z coordinate system, any vector can be projected onto the x-y plane and scaled appropriately to the captured image or video of the camera.
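A hedged sketch of one such projection is given below, using a pinhole-camera scaling: drop the component along the optical axis and scale by focal length over depth. The disclosure only states that the vector is projected and scaled appropriately; the pinhole model, focal length, and depth values are assumptions of this sketch.

```python
import numpy as np

def project_motion_vector(v_world, depth_m, focal_px=1000.0):
    """Project a 3D motion vector (metres, z along the optical axis) of
    an object at distance depth_m onto the x-y image plane, in pixels,
    using a simple pinhole model (an illustrative assumption)."""
    v = np.asarray(v_world, dtype=float)
    return v[:2] * (focal_px / depth_m)  # drop z, scale by f / Z

# A 0.3 m sideways move at 2 m depth appears as ~150 px in the image.
print(project_motion_vector([0.3, 0.0, 0.05], depth_m=2.0))
```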
In an exemplary embodiment of the present disclosure, when searching for the motion vector of the target object in the video captured by the camera based on the target movement vector, it may first be determined whether the target movement vector is the motion vector of the target object in the video captured by the camera. The motion vector of the target object is searched for around and based on the target movement vector in the video captured by the camera when the target movement vector is not the motion vector of the target object in the video captured by the camera. In case the electronic apparatus does not move, the target movement vector may be used to determine the motion vector of the target object in the video captured by the camera. The target movement vector can be mapped to the motion vector of the target object in the video captured by the camera based on the distance between the target object and the camera.
In an exemplary embodiment of the present disclosure, the motion vector of the target object may be used for the video processing such as video encoding or video compression, video interpolation, and adding tags to the video.
In an exemplary embodiment of the present disclosure, the motion vector of the target object may be used to determine a search area for motion prediction. Since the motion vector of the target object may indicate an approximate movement direction of an object included in an image, a search area to be used for motion prediction can be efficiently determined using the motion vector of the target object.
FIG. 4 illustrates a diagram of a frame interpolating according to an exemplary embodiment of the present disclosure.
Referring to FIG. 4, when generating a video of 60 FPS from a video of 30 FPS, an interpolation frame is generated using two frames of the 30 FPS video, and the interpolation frame is inserted between the two frames. By doing so, the frame rate can be increased. For example, when a frame acquired at time T1 is F1 and a frame acquired at time T2 is F2, an interpolation frame F12 is generated using the F1 frame and the F2 frame, and the F12 interpolation frame is inserted as a frame at time T1+(T2-T1)/2. When generating the interpolation frame, the interpolation frame may be efficiently generated by using the above-described motion vector of the target object.
FIG. 5a illustrates a diagram illustrating a process of determining a motion vector used when generating an interpolation frame according to an embodiment of the present disclosure.
Referring to FIG. 5a, it is assumed that the frame obtained by photographing the target object located at point C at time T1 is F1, the frame obtained by photographing the target object located at point D at time T2 is F2, the vector between the camera and the target object at time T1 is H(T1), and the vector between the camera and the target object at time T2 is H(T2). As mentioned above, H(T1) and H(T2) can be obtained using UWB technology. The motion estimation vector between F1 (at time T1) and F2 (at time T2) may be determined using the vector H(T2) - H(T1). The motion estimation vector between F1 and the interpolation frame at time T1+(T2-T1)/2 may be determined using the vector H(T1+(T2-T1)/2) - H(T1).
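The vector arithmetic of FIG. 5a is direct to compute once H(t) has been sampled, as the sketch below shows; the H values are invented, and linear interpolation is used to obtain H at the midpoint when no UWB sample falls exactly there (an assumption of this sketch).

```python
import numpy as np

# Invented camera-to-target vectors H(t) from UWB at two adjacent
# 30 FPS frame times T1 and T2.
T1, T2 = 0.0, 1.0 / 30.0
H_T1 = np.array([2.00, 1.00])
H_T2 = np.array([2.30, 1.20])

mv_full = H_T2 - H_T1            # motion estimation vector F1 -> F2
Tm = T1 + (T2 - T1) / 2.0        # timestamp of the interpolation frame
H_Tm = H_T1 + (H_T2 - H_T1) * ((Tm - T1) / (T2 - T1))  # linear interp
mv_half = H_Tm - H_T1            # motion estimation vector F1 -> F12

print(mv_full, mv_half)          # here mv_half == mv_full / 2
```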
FIG. 5b illustrates a diagram illustrating a process of determining a search area according to an embodiment of the present disclosure.
FIG. 5c illustrates a diagram illustrating a motion vector of the target object according to an embodiment of the present disclosure.
Motion estimation is a process of determining a corresponding block of a reference frame that is most similar to the current block within a predetermined search area of the reference frame, and obtaining a motion vector indicating the position difference between the current block and the corresponding block of the reference frame. In general, motion estimation is performed within a predetermined search region of a reference frame. According to an embodiment of the present disclosure, the motion vector of the target object may be used when determining a search area for motion prediction.
Since the motion vector of the target object may indicate an approximate movement direction of an object included in an image, a search area to be used for motion prediction can be efficiently determined using the motion vector of the target object. Referring to FIG. 5c, assuming that the motion vectors of the object obtained for the tth frame (540) and the (t+m)th frame are P (560) and P' (561), respectively, an approximate movement direction of data in a frame may be estimated using one of P (560) and P' (561), and a search range may be determined using the estimated movement direction.
Referring to FIG. 5b, it is assumed that motion estimation for a current block c (531) of the (t+m)th frame (530) is performed by referring to the tth frame (520) as a reference frame. c' (521) indicates the corresponding block of the current block c (531) located on the reference frame (520). Furthermore, it is assumed that the approximate movement direction of the object included in the image is determined as a vector (522). In this case, an area within a predetermined range around the position spaced apart by the vector (522) from the location of the current block c (531) may be determined as the search area, as sketched below.
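A sketch of such a seeded block-matching search follows: the search window is centred at the position displaced by the predicted vector rather than at the collocated block, so only a small neighbourhood has to be examined. The block size, window radius, and SAD criterion are conventional choices assumed for this sketch, not mandated by the disclosure.

```python
import numpy as np

def seeded_block_search(ref, cur, top_left, block=16,
                        predicted=(0, 0), radius=4):
    """Find the motion vector of the block x block current block at
    top_left (row, col) by SAD matching in ref, searching a window of
    +/- radius pixels around the predicted displacement."""
    r0, c0 = top_left
    target = cur[r0:r0 + block, c0:c0 + block].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dr in range(predicted[0] - radius, predicted[0] + radius + 1):
        for dc in range(predicted[1] - radius, predicted[1] + radius + 1):
            r, c = r0 + dr, c0 + dc
            if r < 0 or c < 0 or r + block > ref.shape[0] \
                    or c + block > ref.shape[1]:
                continue  # candidate block falls outside the frame
            cand = ref[r:r + block, c:c + block].astype(np.int32)
            sad = int(np.abs(target - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dr, dc)
    return best_mv, best_sad

# Synthetic test: the current frame is the reference shifted by (2, 7),
# so the matching reference block sits at an offset of (-2, -7); a seed
# of (-2, -6) from the projected target movement vector finds it.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, 7), axis=(0, 1))
print(seeded_block_search(ref, cur, (24, 24), predicted=(-2, -6)))
```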
FIG. 6 illustrates a diagram of video interpolating according to an exemplary embodiment of the present disclosure. FIG. 7 illustrates a diagram of interpolating a video from 30 FPS to 60 FPS.
As shown in FIG. 6, a frame (or two or more frames) is inserted between the tth frame and the (t+1)th frame of the video based on the motion vector, so that the video frame rate is increased. Specifically, when interpolating a video from 30 FPS to 60 FPS, the time T1 for acquiring the target movement vector is set to T/2 if the time difference between two adjacent frames is T. As shown in FIG. 7, after interpolating the video from 30 FPS to 60 FPS, the video is clearer.
FIG. 8 illustrates a diagram of adding tags to a video according to an exemplary embodiment of the present disclosure. As shown in FIG. 8, two tags "Mountain Bike" and "Mountain Bike Show" may be added to the video according to the motion trajectory of the target object, so that the added tags are clearer and more in line with the meaning of the video.
Specifically, when the method for video processing according to an exemplary embodiment of the present disclosure is used for video encoding or video compression, the speed of video encoding or video compression can be increased. When the method for video processing according to an exemplary embodiment of the present disclosure is used for video interpolation, the speed and accuracy of video interpolation can be improved. When the method for video processing according to an exemplary embodiment of the present disclosure is used for adding tags to the video, the speed of adding tags to the video can be increased, and the semantic accuracy of the added video tags can be improved.
In addition, according to the exemplary embodiments of the present disclosure, a computer readable storage medium having stored thereon a computer program is provided; when the computer program is executed, the method for video processing according to an exemplary embodiment of the present disclosure is implemented.
In an exemplary embodiment of the present disclosure, the computer readable storage medium can carry one or more programs that, when executed, implement the following steps: acquiring a motion trajectory of a target object as a first motion trajectory, and acquiring a motion trajectory of a camera as a second motion trajectory, in a preset time period; calculating a target movement vector of the target object relative to the camera within the preset time period, based on the first motion trajectory and the second motion trajectory; searching for a motion vector of the target object in a video captured by the camera based on the target movement vector; and processing the video captured by the camera according to the motion vector of the target object.
The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or equipment, or any combination of the above. More specific examples of computer readable storage media may include, but are not limited to, electrical connections having one or more wires, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the above. In the embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a computer program, which can be used by or in connection with an instruction execution system, device, or equipment. The computer program embodied on the computer readable storage medium can be transmitted by any suitable medium, including but not limited to wire, fiber optic cable, RF (radio frequency), etc., or any suitable combination of the foregoing. The computer readable storage medium can be included in any device; it can also be present separately and not incorporated into the device.
The method for video processing according to the exemplary embodiments of the present disclosure has been described above with reference to FIG. 1 to FIG. 8. Hereinafter, a device for video processing according to the exemplary embodiments of the present disclosure and units thereof will be described with reference to FIG. 9.
FIG. 9 illustrates a block diagram of a device for video processing, according to an exemplary embodiment of the present disclosure.
In an exemplary embodiment of the present disclosure, the video processing comprises at least one of the video encoding or video compression, video interpolation, and adding tags to the video.
Referring to FIG. 9, the device for video processing includes a trajectory acquiring unit 91, a movement vector calculating unit 92, a motion vector searching unit 93, and a video processing unit 94.
The trajectory acquiring unit 91 is configured to acquire a motion trajectory of a target object as a first motion trajectory, and acquire a motion trajectory of a camera as a second motion trajectory, in a preset time period.
In an exemplary embodiment of the present disclosure, the trajectory acquiring unit 91 may be configured to acquire a relative position of the target object relative to the camera at each time within the preset time period by using the ultra-wideband (UWB) positioning technology, determine a coordinate system based on the relative position of the target object relative to the camera, and determine a UWB distance and a UWB angle of the target object relative to the camera at each time in the coordinate system, and determine the motion trajectory of the target object in the preset time period according to the UWB distance and the UWB angle of the target object relative to the camera at each time.
In an exemplary embodiment of the present disclosure, the trajectory acquiring unit 91 may be configured to acquire an acceleration of the camera in the preset time period, determine a moving speed of the camera based on the acceleration, and determine the motion trajectory of the camera in the preset time period based on the moving speed.
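A minimal dead-reckoning sketch of this double integration is given below, assuming two-dimensional accelerometer samples at a fixed interval; gravity compensation, drift correction, and sensor fusion are deliberately omitted.

```python
def camera_trajectory(accels, dt, v0=(0.0, 0.0)):
    # accels: per-time (ax, ay) accelerometer samples in m/s^2.
    # Integrate once for the moving speed and once more for position.
    (vx, vy), (x, y) = v0, (0.0, 0.0)
    traj = [(x, y)]
    for ax, ay in accels:
        vx, vy = vx + ax * dt, vy + ay * dt   # acceleration -> moving speed
        x, y = x + vx * dt, y + vy * dt       # moving speed -> trajectory point
        traj.append((x, y))
    return traj

cam_traj = camera_trajectory([(0.5, 0.0), (0.5, 0.0), (0.0, 0.0)], dt=0.02)
```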
The movement vector calculating unit 92 is configured to calculate a target movement vector of the target object relative to the camera within the preset time period, based on the first motion trajectory and the second motion trajectory.
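As a rough illustration, the target movement vector can be pictured as the difference between the two end-to-end displacements; the fixed metres-to-pixels scale below is an assumption of this sketch, since a real mapping would involve the camera intrinsics and the object depth.

```python
def target_movement_vector(obj_traj, cam_traj, px_per_metre=120.0):
    # Relative displacement over the preset time period: the object's
    # displacement minus the camera's displacement, scaled to pixels.
    odx = obj_traj[-1][0] - obj_traj[0][0]
    ody = obj_traj[-1][1] - obj_traj[0][1]
    cdx = cam_traj[-1][0] - cam_traj[0][0]
    cdy = cam_traj[-1][1] - cam_traj[0][1]
    return ((odx - cdx) * px_per_metre, (ody - cdy) * px_per_metre)

obj_traj = [(0.0, 0.0), (0.30, 0.05)]   # first motion trajectory (m)
cam_traj = [(0.0, 0.0), (0.10, 0.00)]   # second motion trajectory (m)
mv = target_movement_vector(obj_traj, cam_traj)   # -> (24.0, 6.0) pixels
```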
The motion vector searching unit 93 is configured to search for a motion vector of the target object in a video captured by the camera based on the target movement vector.
In an exemplary embodiment of the present disclosure, the motion vector searching unit 93 may be configured to determine whether the target movement vector is the motion vector of the target object in the video captured by the camera, and, when it is not, search for the motion vector of the target object in the video around the target movement vector, using the target movement vector as the starting point of the search.
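A hedged sketch of such a search follows: the predicted vector is tested first, and, if it does not match, a small window of the reference frame centred on it is scanned for the lowest-cost candidate. The search radius, the exhaustive scan, and the sum-of-absolute-differences (SAD) cost are assumptions of this sketch.

```python
import numpy as np

def refine_motion_vector(cur, ref, y, x, pred_mv, radius=2, n=8):
    # Seed the search at the predicted (target movement) vector, then
    # scan a small window of the reference frame centred on it, keeping
    # the candidate with the lowest SAD. Bounds checks omitted.
    blk = cur[y:y + n, x:x + n].astype(np.int32)

    def sad(dy, dx):
        cand = ref[y + dy:y + dy + n, x + dx:x + dx + n].astype(np.int32)
        return int(np.abs(blk - cand).sum())

    best_mv, best_cost = tuple(pred_mv), sad(*pred_mv)
    if best_cost == 0:                       # predicted vector already matches
        return best_mv
    for dy in range(pred_mv[0] - radius, pred_mv[0] + radius + 1):
        for dx in range(pred_mv[1] - radius, pred_mv[1] + radius + 1):
            cost = sad(dy, dx)
            if cost < best_cost:
                best_mv, best_cost = (dy, dx), cost
    return best_mv

rng = np.random.default_rng(0)
ref = rng.integers(0, 255, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (-3, -5), axis=(0, 1))    # frame content shifted by (3, 5)
print(refine_motion_vector(cur, ref, 16, 16, (2, 4)))  # -> (3, 5)
```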
The video processing unit 94 is configured to process the video captured by the camera according to the motion vector of the target object.
The device for video processing according to the exemplary embodiments of the present disclosure has been described above with reference to FIG. 9. Hereinafter, a computing device according to the exemplary embodiments of the present disclosure will be described with reference to FIG. 10.
FIG. 10 illustrates a schematic diagram of a computing device according to an exemplary embodiment of the present disclosure.
Referring to FIG. 10, a computing device 10 according to an exemplary embodiment of the present disclosure includes a storage 101 and a processor 102. The storage 101 has stored thereon a computer program which, when executed by the processor 102, implements the method for video processing according to an exemplary embodiment of the present disclosure.
In an exemplary embodiment of the present disclosure, when the computer program is executed by the processor 102, the following steps may be implemented: acquiring a motion trajectory of a target object as a first motion trajectory, and acquiring a motion trajectory of a camera as a second motion trajectory, in a preset time period; calculating a target movement vector of the target object relative to the camera within the preset time period, based on the first motion trajectory and the second motion trajectory; searching for a motion vector of the target object in a video captured by the camera based on the target movement vector; and processing the video captured by the camera according to the motion vector of the target object.
The computing device in the embodiments of the present disclosure may include, but is not limited to, devices such as a mobile phone, a notebook computer, a PDA (personal digital assistant), a PAD (tablet computer), a camera, a watch, a learning machine, and the like. The computing device shown in FIG. 10 is only an example and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
The method and device for video processing according to an exemplary embodiment of the present disclosure have been described above with reference to FIGS. 1-10. However, it should be understood that the device for video processing and the units therein shown in FIG. 9 may be respectively implemented as software, hardware, firmware, or any combination thereof that performs a specific function. The computing device shown in FIG. 10 is not limited to the components shown above; some components may be added or deleted as needed, and the above components may also be combined.
In the method and device for video processing according to the exemplary embodiments of the present disclosure, a motion trajectory of a target object is acquired as a first motion trajectory and a motion trajectory of a camera is acquired as a second motion trajectory in a preset time period; a target movement vector of the target object relative to the camera within the preset time period is calculated based on the first motion trajectory and the second motion trajectory; a motion vector of the target object is searched for in a video captured by the camera based on the target movement vector; and the video captured by the camera is processed according to the motion vector of the target object. Because the motion vector is acquired based on the motion trajectory of the target object and the motion trajectory of the camera, the amount of calculation for acquiring the motion vector is reduced, the efficiency of acquiring the motion vector is improved, and the efficiency of video processing is thereby improved. When the method for video processing according to an exemplary embodiment of the present disclosure is used for video encoding or video compression, the speed of video encoding or video compression can be increased. When the method is used for video interpolation, the speed and accuracy of video interpolation can be improved. When the method is used for adding tags to the video, the speed of adding tags can be increased, and the semantic accuracy of the added tags can be improved.
While the present disclosure has been shown and described with reference to certain exemplary embodiments thereof, it should be understood by those skilled in the art that various changes in form and details may be made therein without departing from the principle and spirit of the present disclosure which are defined by the appended claims.

Claims (15)

  1. A method for video processing, comprising:
    acquiring a motion trajectory of a target object as a first motion trajectory, and a motion trajectory of a camera as a second motion trajectory, in a preset time period;
    calculating a target movement vector of the target object relative to the camera within the preset time period, based on the first motion trajectory and the second motion trajectory;
    searching for a motion vector of the target object in a video captured by the camera based on the target movement vector; and
    processing the video captured by the camera using the motion vector of the target object.
  2. The method of claim 1, wherein the acquiring of the motion trajectory of the target object comprises:
    acquiring a distance from the camera to the target object;
    determining a coordinate system with respect to the camera; and
    determining coordinates of the target object in the coordinate system using the distance in the preset time period.
  3. The method of claim 2, wherein the motion trajectory of the target object is determined based on a difference of coordinates of the target object in the preset time period.
  4. The method of claim 2, wherein the acquiring of the distance from the camera to the target object comprises:
    acquiring a time of flight between the camera and the target object according to an ultra-wideband (UWB) signal; and
    determining the distance by multiplying the time of flight by a wave velocity.
  5. The method of claim 1, wherein, assuming that the preset time period is a (T0, T1) time period, the position of the camera at time T0 is A, the position of the target object at time T0 is C, the position of the camera at time T1 is B, and the position of the target object at time T1 is D, the first motion trajectory is determined based on the displacement between the positions C and D of the target object at the times T0 and T1, and the second motion trajectory is determined based on the displacement between the positions A and B of the camera at the times T0 and T1.
  6. The method of claim 5, wherein the motion vector of the target object is determined based on a difference between a first vector representing the first motion trajectory and a second vector representing the second motion trajectory.
  7. The method of claim 1, wherein the acquiring of the motion trajectory of the target object comprises:
    acquiring a relative position of the target object relative to the camera within the preset time period by using ultra-wideband (UWB) positioning;
    determining a coordinate system based on the relative position of the target object relative to the camera, and determining a UWB distance and a UWB angle of the target object relative to the camera in the coordinate system; and
    determining the motion trajectory of the target object during the preset time period according to the UWB distance and the UWB angle of the target object relative to the camera.
  8. The method of claim 1, wherein the acquiring of the motion trajectory of the camera comprises:
    acquiring an acceleration of the camera in the preset time period; and
    determining a moving speed of the camera based on the acceleration of the camera, and determining the motion trajectory of the camera in the preset time period based on the moving speed of the camera.
  9. The method of claim 1, wherein the processing of the video captured by the camera comprises:
    determining a search area of a reference frame based on the motion vector of the target object; and
    searching for a motion vector of a current block of a current frame in the video captured by the camera within the search area of the reference frame.
  10. An electronic device for video processing, comprising:
    a trajectory acquiring unit configured to acquire a motion trajectory of a target object as a first motion trajectory, and acquire a motion trajectory of a camera as a second motion trajectory, in a preset time period;
    a movement vector calculating unit configured to calculate a target movement vector of the target object relative to the camera within the preset time period, based on the first motion trajectory and the second motion trajectory;
    a motion vector searching unit configured to search for a motion vector of the target object in a video captured by the camera based on the target movement vector; and
    a video processing unit configured to process the video captured by the camera using the motion vector of the target object.
  11. The electronic device of claim 10, wherein the trajectory acquiring unit is configured to:
    acquire a distance from the camera to the target object;
    determine a coordinate system with respect to the camera; and
    determine coordinates of the target object in the coordinate system using the distance in the preset time period.
  12. The electronic device of claim 11, wherein the motion trajectory of the target object is determined based on a difference of coordinates of the target object in the preset time period.
  13. The electronic device of claim 11, wherein the trajectory acquiring unit is configured to:
    acquire a time of flight between the camera and the target object according to an ultra-wideband (UWB) signal; and
    determine the distance by multiplying the time of flight by a wave velocity.
  14. The electronic device of claim 10, wherein, assuming that the preset time period is a (T0, T1) time period, the position of the camera at time T0 is A, the position of the target object at time T0 is C, the position of the camera at time T1 is B, and the position of the target object at time T1 is D, the first motion trajectory is determined based on the displacement between the positions C and D of the target object at the times T0 and T1, and the second motion trajectory is determined based on the displacement between the positions A and B of the camera at the times T0 and T1.
  15. The electronic device of claim 14, wherein the video processing unit is configured to:
    determine a search area of a reference frame based on the motion vector of the target object; and
    search for a motion vector of a current block of a current frame in the video captured by the camera within the search area of the reference frame.
