CN111684784A - Image processing method and device

Info

Publication number: CN111684784A
Application number: CN201980008888.4A
Authority: CN (China)
Prior art keywords: image, image frame, distortion, frame, video
Legal status: Granted; Active
Inventor: 杨曾雄
Current Assignee: SZ DJI Technology Co Ltd
Original Assignee: SZ DJI Technology Co Ltd
Other languages: Chinese (zh)
Other versions: CN111684784B (English)

Classifications

    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/2228 Video assist systems used in motion picture production, e.g. video cameras connected to viewfinders of motion picture cameras or related video signal processing
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • H04N23/60 Control of cameras or camera modules
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

An image processing method and apparatus. The method comprises: reading a first image frame in a video file; adjusting a distortion parameter of at least one image block in the first image frame to generate a second image frame; and replacing the first image frame with the second image frame. By adjusting the distortion parameters of the image blocks of the first image frame in the video file, a second image frame with a better visual effect is obtained, and the second image frame then replaces the first image frame, so that when the video file is played the motion effect presented by the images better matches the visual expectations of the human eye, without any need to adjust the video playback speed.

Description

Image processing method and device
Technical Field
The present invention relates to the field of image processing, and in particular, to an image processing method and apparatus.
Background
In general, a user can record sports or daily activities such as cycling, parachuting, skiing, surfing, or strolling with a shooting device, and the video code stream obtained by the shooting device generally reproduces the real shooting scene. In some use scenarios, to obtain a better motion-shooting effect, the shooting device is fixed on a mobile device and moved by it, so that the shooting device captures a moving scene. However, if the mobile device moves too fast or too slow, the scene in the video code stream changes too fast or too slow, and the visual effect of the video code stream may not meet the viewing expectations of the human eye.
In the related art, a user can change the visual effect of a video code stream by adjusting the video playback speed: for a video code stream whose scene changes too fast, the playback speed is reduced; for one whose scene changes too slowly, the playback speed is increased. Adjusting the playback speed is not only cumbersome but also rarely achieves a good visual effect.
Disclosure of Invention
The invention provides an image processing method and device.
Specifically, the invention is realized by the following technical scheme:
according to a first aspect of the present invention, there is provided an image processing method, the method comprising:
reading a first image frame in a video file;
adjusting distortion parameters of at least one image block in the first image frame to generate a second image frame;
replacing the first image frame with the second image frame.
According to a second aspect of the present invention, there is provided an image processing apparatus comprising:
storage means for storing program instructions;
one or more processors that invoke program instructions stored in the storage device, the one or more processors individually or collectively configured when the program instructions are executed to:
reading a first image frame in a video file;
adjusting distortion parameters of at least one image block in the first image frame to generate a second image frame;
replacing the first image frame with the second image frame.
According to a third aspect of the invention, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
reading a first image frame in a video file;
increasing distortion parameters of at least one image block in the first image frame to generate a second image frame;
replacing the first image frame with the second image frame.
According to a fourth aspect of the present invention, there is provided a method of enhancing a sense of motion of an image, the method comprising:
reading a video file;
increasing distortion parameters of local regions of image frames in the video file;
and generating a new video file based on the adjusted video file.
According to a fifth aspect of the present invention, there is provided a method of enhancing a sense of motion of an image, the method comprising:
reading a first image frame in a video file;
increasing distortion parameters of at least one image block in the first image frame to generate a second image frame;
replacing the first image frame with the second image frame.
According to a sixth aspect of the present invention, there is provided a mobile terminal comprising:
a camera for obtaining a video file;
storage means for storing program instructions;
one or more processors that invoke program instructions stored in the storage device, the one or more processors individually or collectively configured when the program instructions are executed to:
reading a first image frame in a video file;
increasing distortion parameters of at least one image block in the first image frame to generate a second image frame;
replacing the first image frame with the second image frame.
According to a seventh aspect of the invention, there is provided a drone comprising:
a body;
a shooting device, mounted on the body, for obtaining a video file;
storage means for storing program instructions;
one or more processors that invoke program instructions stored in the storage device, the one or more processors individually or collectively configured when the program instructions are executed to:
reading a first image frame in a video file;
increasing distortion parameters of at least one image block in the first image frame to generate a second image frame;
replacing the first image frame with the second image frame.
According to an eighth aspect of the present invention, there is provided a handheld gimbal comprising:
a camera for obtaining a video file;
storage means for storing program instructions;
one or more processors that invoke program instructions stored in the storage device, the one or more processors individually or collectively configured when the program instructions are executed to:
reading a first image frame in a video file;
increasing distortion parameters of at least one image block in the first image frame to generate a second image frame;
replacing the first image frame with the second image frame.
According to a ninth aspect of the present invention, there is provided a photographing apparatus including:
an image acquisition module for acquiring a video file;
storage means for storing program instructions;
one or more processors that invoke program instructions stored in the storage device, the one or more processors individually or collectively configured when the program instructions are executed to:
reading a first image frame in a video file;
increasing distortion parameters of at least one image block in the first image frame to generate a second image frame;
replacing the first image frame with the second image frame.
According to the technical solutions provided by the embodiments of the invention, a second image frame with a better visual effect is obtained by adjusting the distortion parameters of the image blocks of the first image frame in the video file, and the second image frame then replaces the first image frame, so that when the video file is played the motion effect presented by the images better matches the visual expectations of the human eye, without any need to adjust the video playback speed.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a flow chart of a method of image processing in an embodiment of the invention;
FIG. 2 is a graph of distance from different locations in a first image frame to the center of the image versus distortion parameter in an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a manner of dividing an image block of a first image frame according to an embodiment of the invention;
FIG. 4A is a schematic diagram of an optical flow field of a first image frame in an embodiment of the invention;
FIG. 4B is a schematic diagram of an optical flow field of a first image frame in another embodiment of the invention;
FIG. 5A illustrates an implementation of adjusting distortion parameters of at least one image block in a first image frame according to an embodiment of the invention;
FIG. 5B illustrates an implementation of adjusting distortion parameters of at least one image block in a first image frame in another embodiment of the invention;
FIG. 6 is a block diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 7 is a flow chart of a method of enhancing image motion perception in an embodiment of the present invention;
FIG. 8 is a flow chart of a method of enhancing the perception of motion of an image in another embodiment of the present invention;
fig. 9 is a block diagram of a mobile terminal according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a drone in an embodiment of the invention;
fig. 11 is a block diagram of a handheld gimbal according to an embodiment of the present invention;
fig. 12 is a block diagram of the imaging apparatus according to the embodiment of the present invention.
Detailed Description
The imaging process of a shooting device is essentially a conversion between coordinate systems: points in space are first converted from the world coordinate system to the camera coordinate system, then projected onto the imaging plane (the image physical coordinate system), and finally the data on the imaging plane are converted to the image pixel coordinate system. However, distortion is introduced by the limits of lens manufacturing accuracy and by variation in the assembly process, so the original image is distorted; lens distortion typically includes radial distortion.
Radial distortion is distortion distributed along the radius of the lens. It arises because rays passing far from the center of the lens are bent more than rays passing near the center, and it mainly includes barrel distortion and pincushion distortion.
Embodiments of the invention mainly adjust image parameters (such as radial distortion) to adjust the sense of motion of an image, so that the adjusted image presents different visual effects depending on the adjustment parameters.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The features of the following examples and embodiments may be combined with each other without conflict.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention. As shown in fig. 1, the image processing method may include the steps of:
s101: reading a first image frame in a video file;
Here, the video file includes video frames, and a video frame may include at least one first image frame. Optionally, a video frame includes a series of first image frames captured in time sequence; for example, for a shooting device with a frame rate of 30 fps, 1 minute of video contains 1800 first image frames.
The video file may include one or more video frames. In some embodiments, the video file includes a series of video frames captured in time sequence, and the number of first image frames in each video frame may be the same or different. It is understood that the series of video frames in the video file may also be obtained by dividing a single video according to its shooting time sequence. In some embodiments, a video file includes a plurality of video frames obtained from a single shooting scene in continuous time sequence. In some embodiments, the video file includes a plurality of video frames, but the shooting time sequence of two or more of them may be discontinuous.
The video file may be obtained according to the actual application scenario of the image processing. For example, in some embodiments the image processing method is applied to an image processing device such as a computer. Optionally, the video file is sent by an external device and received and processed by the image processing device; optionally, the video file is pre-stored on a local disk of the image processing device and read directly from the local disk once the device is triggered. In some embodiments, the image processing method is applied to the shooting device itself, and the video file is the one buffered by the shooting device during shooting. The device that acquires the video file may be a camera, a video camera, a smartphone, a smart terminal, a shooting stabilizer, an unmanned aerial vehicle, or the like.
S102: adjusting distortion parameters of at least one image block in a first image frame to generate a second image frame;
In some embodiments, an image block is a part of an image frame; it may be, for example, a region of the first image frame far from the image center. Referring to fig. 2, which shows the relationship curve between the distances (ordinate) from different positions in the first image frame to the image center and the distortion parameters (abscissa), it can be seen that radial distortion is more pronounced in the regions of the first image frame farther from the image center, so the visual effect at the image edge can be improved by adjusting the distortion parameters of the regions far from the image center.
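As a concrete illustration of this relationship, the following minimal sketch evaluates the common polynomial radial-distortion model; the model form and the coefficient values k1 and k2 are assumptions for illustration, not values fixed by this disclosure.
```python
import numpy as np

def radial_displacement(r, k1=0.12, k2=0.03):
    """Radial displacement r_d - r under the common polynomial model
    r_d = r * (1 + k1*r**2 + k2*r**4); k1, k2 are illustrative values."""
    r = np.asarray(r, dtype=float)
    return r * (k1 * r**2 + k2 * r**4)

# Displacement grows rapidly toward the image edge (r = 1.0F), matching the
# trend of fig. 2: distortion is most visible far from the image center.
for r in (0.1, 0.5, 0.9, 1.0):
    print(f"r = {r:.1f}F  displacement = {radial_displacement(r):.4f}")
```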
The first image frame may be divided into a plurality of image blocks in different ways: it may be divided based on the field of view, or based on optical flow vectors.
In the first image frame, the distortion differs between different field-of-view regions: the closer a region is to the image center, the smaller its distortion, and the farther, the larger. The first image frame can therefore be divided into a plurality of image blocks based on the field of view, so that the blocks follow the rule of field-of-view variation; after the distortion parameters of these image blocks are adjusted, the resulting second image frame stays closer to that rule. Optionally, the image blocks are distributed over concentric circles centered on the center of the field of view, and the region of the first image frame far from the image center may include the field-of-view regions far from the center of the first image frame. In this embodiment, the farther a field-of-view region is from the center, the larger the field of view it corresponds to, so the size of the field of view can serve as the priority for selecting which image blocks to adjust: for example, field-of-view regions can be selected as image blocks for distortion adjustment in order of field of view from large to small, so that the improvement in visual effect is more pronounced.
Further, in the first image frame, the difference between the radii of the concentric circles corresponding to two adjacent image blocks is a preset difference; that is, the first image frame is divided equally into a plurality of image blocks. For example, as shown in fig. 3, the first image frame may be divided equally into 10 image blocks whose field-of-view zones, from the center of the field of view outward, are 0.1 Field (abbreviated herein as F), 0.2F, ..., 0.7F, 0.8F, 0.9F and 1.0F. The radius of the concentric circle corresponding to 0.1F can be set as required, and the preset difference between the radii of two adjacent image blocks is determined from that radius and the number of field-of-view zones to be divided. Of course, fig. 3 is only one example of dividing the first image frame by field of view: the image may be divided equally by radial extent, equally by field-of-view area, or unequally, and the number of blocks need not be 10. For unequal division, because the human visual system is more sensitive to changes in the central focus region, the granularity can go from fine to coarse, such as 0.05F, 0.1F, 0.2F, 0.35F, 0.55F, and so on; such a division helps render changes in image detail.
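A minimal sketch of the equal division described above, assuming rings of equal radial width centered on the image center; the disclosure also allows equal-area and non-uniform (fine-to-coarse) divisions.
```python
import numpy as np

def field_ring_index(height, width, n_rings=10):
    """Label each pixel with the index of its concentric field-of-view ring
    (0 -> 0.1F innermost, n_rings-1 -> 1.0F outermost), assuming rings of
    equal radial width centered on the image center."""
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    y, x = np.mgrid[0:height, 0:width]
    r = np.hypot(y - cy, x - cx)
    ring = (r / r.max() * n_rings).astype(int)
    return np.minimum(ring, n_rings - 1)  # clamp the pixel(s) at r == r_max

rings = field_ring_index(480, 640)
print(rings[240, 320], rings[0, 0])  # center pixel -> ring 0, corner -> ring 9
```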
In addition, when the shooting device moves, the optical flow field of the captured images usually exhibits a certain regularity, as shown in fig. 4A. The optical flow field is generated from video captured by the camera; data from sensors associated with the camera can help generate an optical flow field useful for encoding the video data captured by the camera. The sensors associated with the camera may be on the camera itself, on its support structure (e.g., a UAV), and/or on a carrier (e.g., a gimbal) that supports the camera on the support structure; alternatively, they may be remote from the camera, the carrier, and/or the support structure. Because the optical flow field shows the motion trend of the same pixel across adjacent frames of the video, it can be used not only for video encoding and decoding but also for dividing image blocks during the distortion stretching operation. In fig. 4A, the camera moves essentially straight ahead, and this movement produces an optical flow field essentially perpendicular to the camera surface; based on this optical flow field, the field-of-view zones can be divided as concentric circles, giving a division similar to fig. 3. In fig. 4B, due to the relative motion of the camera, the optical flow field is curved to some degree; over several consecutive frames the division of the field-of-view zones may differ from frame to frame, and the division of the image blocks is corrected by the degree of curvature predicted from the optical flow field. Of course, the image blocks need not be divided by the optical flow field; they may also be divided using feedback parameters from a motion sensor, or by predicting the motion trend of the shooting device from its captured images.
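For illustration, the sketch below estimates a dense optical flow field between two consecutive frames and summarizes its magnitude per field-of-view ring. Farneback flow is an assumed choice of estimator (the disclosure does not prescribe one), and field_ring_index() is the ring-labeling helper from the sketch above.
```python
import cv2
import numpy as np

def flow_magnitude_per_ring(prev_gray, next_gray, rings, n_rings=10):
    """Dense optical flow between two grayscale frames, averaged per ring.
    Rings with large flow magnitude are candidates for stronger distortion
    adjustment; a curved flow field can also motivate re-shaping the rings."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.hypot(flow[..., 0], flow[..., 1])
    return [float(mag[rings == i].mean()) for i in range(n_rings)]
```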
For example, the support structure of the camera may support one or more sensors. In an example, the support structure may be a UAV; any description of the sensors of the UAV may be applied to any type of support structure for the camera. The UAV may include one or more visual sensors, such as image sensors; for example, the image sensor may be a monocular camera, a stereo vision camera, a radar, a sonar, or an infrared camera. The UAV may further include other sensors that may be used to determine its position or that may be useful for generating optical flow field information, such as Global Positioning System (GPS) sensors, inertial sensors (e.g., accelerometers, gyroscopes, magnetometers) that may be used as part of or separately from an inertial measurement unit (IMU), lidar, ultrasonic sensors, acoustic sensors, or WiFi sensors. The UAV may have sensors onboard that gather information directly from the environment, without contacting additional off-board components for extra information or processing; for example, such sensors may be visual or audio sensors.
Alternatively, the UAV may have sensors that are mounted on the UAV, but that are in contact with one or more components not mounted on the UAV to collect data about the environment. For example, the sensor that contacts a component off-board the UAV to collect data about the environment may be a GPS sensor or another sensor that relies on a connection to another device (such as a satellite, tower, router, server, or other external device). Various examples of sensors may include, but are not limited to, a location sensor (e.g., a Global Positioning System (GPS) sensor, a mobile device transmitter capable of location triangulation), a visual sensor (e.g., an imaging device such as a camera capable of detecting visible, infrared, or ultraviolet light), a proximity sensor or range sensor (e.g., an ultrasonic sensor, a lidar, a time-of-flight or depth camera), an inertial sensor (e.g., an accelerometer, a gyroscope, an Inertial Measurement Unit (IMU)), an altitude sensor, an attitude sensor (e.g., a compass), a pressure sensor (e.g., a barometer), an audio sensor (e.g., a microphone), or a magnetometer (e.g., an electromagnetic sensor). Any suitable number and combination of sensors may be used, such as one, two, three, four, five or more sensors. Alternatively, data may be received from different types (e.g., two, three, four, five, or more types) of sensors. Different types of sensors may measure different types of signals or information (e.g., position, orientation, velocity, acceleration, proximity, pressure, etc.) and/or utilize different types of measurement techniques to obtain data.
Any of these sensors may also be provided off-board the UAV. The sensor may be associated with a UAV. For example, the sensors may detect characteristics of the UAV, such as its position, velocity, acceleration, or orientation, noise generated by the UAV, light emitted or reflected from the UAV, heat generated by the UAV, or any other characteristic of the UAV. The sensors may collect data that may be used alone, or in combination with data from sensors onboard the UAV, to generate optical flow field information.
The sensors may include any suitable combination of active sensors (e.g., sensors that generate and measure energy from their own energy source) and passive sensors (e.g., sensors that detect available energy). As another example, some sensors may generate absolute measurement data provided in accordance with a global coordinate system (e.g., position data provided by a GPS sensor, orientation data provided by a compass or magnetometer), while other sensors may generate relative measurement data provided in accordance with a local coordinate system (e.g., relative angular velocity provided by a gyroscope; relative translational acceleration provided by an accelerometer; relative attitude information provided by a vision sensor; relative distance information provided by an ultrasonic sensor, lidar, or camera during flight). Sensors, either onboard or off-board the UAV, may collect information such as the position of the UAV, the location of other objects, the orientation of the UAV, or environmental information. A single sensor may be able to collect a complete set of information in the environment, or a group of sensors may work together to collect a complete set of information in the environment. The sensors may be used for mapping of locations, navigation between locations, detection of obstacles, or detection of targets. Further, and in accordance with the present invention, sensors may be used to gather data for generating an optical flow field for efficient encoding of video data captured by the UAV.
Thus, the UAV may also have an optical flow field generator. The optical flow field generator may be arranged to be mounted on the UAV (e.g., in the UAV body or arm, on a camera, or on a carrier). Alternatively, the generated optical flow field may be set off-board the UAV (e.g., at a remote server, cloud computing infrastructure, remote terminal, or ground station). The optical-flow field generator may have one or more processors individually or collectively configured to generate the optical-flow field based on sensor data associated with the UAV. The optical flow field demonstrates how light flows within an image frame. This optical flow indicates how the captured object moves between image frames. In particular, the optical flow field can describe characteristics of how an object captured by the camera moves. For example, video captured within the FOV of a camera may include one or more stationary objects or movable objects. In an example, the optical flow field may be used to determine the velocity or acceleration of an object moving in the video. The optical flow field may also be used to show the direction of movement of objects within the video. An example of an optical flow field describing an object moving within a video is described below with respect to fig. 4A, 4B.
Sensor data used to generate the optical flow field may be obtained by one or more sensors associated with the UAV. Additionally or alternatively, the sensor data may be obtained by an external source, such as an external monitoring system. The external sensor data may be provided to the UAV using a communications channel. Thus, an optical flow field may be generated at the UAV. Alternatively, the optical flow field may be generated external to the UAV. Specifically, the UAV may provide sensor information associated with the UAV to one or more external processors. The one or more external processors may then generate an optical flow field using sensor data associated with the UAV. Further, the one or more external processors may provide the generated optical flow field to the UAV. The optical-flow field generator (whether mounted or not mounted on the UAV) may receive data from a sensor associated with the UAV (whether mounted, not mounted, or any combination thereof) that may be used to generate the optical-flow field.
The sensor data may optionally include information about the spatial layout (e.g., coordinates, translational position, height, orientation) of the camera or the movement (e.g., linear velocity, angular velocity, linear acceleration, angular acceleration) of the camera. The sensor data may be capable of detecting a zoom state (e.g., focal length, degree of zoom in or zoom out) of the camera. Sensor data may be useful for calculating how the FOV of a camera may change.
Fig. 4A and 4B provide examples of optical flow fields. Specifically, fig. 4A and 4B illustrate an optical flow field associated with a magnification feature of a camera, according to an embodiment of the present invention. In an example, the magnification may occur because the shooting device zooms in on the object, because the aircraft carrying the camera moves closer to it, or a combination of both. As seen in fig. 4A and 4B, the movement is greater at the edges of the optical flow field than in the middle. In addition, the magnification is directionally uniform across the optical flow field: there is no significant offset in vertical or horizontal distance, as every direction moves in a similar manner.
The perceived size of objects within the optical flow field may vary with their location in the field. For example, when the optical flow field is generated by a zoom-in action, real-world objects of the same size may appear larger near the edge of the field than near its center. This is illustrated in fig. 4A and 4B, which show a first sphere 410 near the center of the optical flow field and a second sphere 420 near its periphery. Although the first sphere 410 and the second sphere 420 are of equal size, they appear to have different sizes when viewed in the context of the optical flow field. Thus, the perceived size of an object may vary across the optical flow field; in particular, as an object is placed at different locations in the field, its perceived size may vary linearly, in direct or inverse proportion, or according to another model.
Therefore, in some embodiments, the image blocks are obtained by division according to the optical flow field of the video frame containing the first image frame; adjusting the distortion parameters of image blocks divided in this way keeps the resulting second image frame closer to the variation of the optical flow field. Optionally, the first image frame may be divided into a plurality of image blocks based on the optical flow field shown in fig. 4A, or on the curved optical flow field shown in fig. 4B; of course, the division may also be based on optical flow fields of other shapes.
When performing image processing, each first image frame in the video file may be divided into image blocks using the same division strategy, or using different division strategies. In the following embodiments, the same division strategy is used for every first image frame in the video file, which reduces the image processing difficulty. In each first image frame, image blocks with the same position relative to the image center are referred to as image blocks at the same position.
In this step, the distortion parameters may be adjusted for every first image frame in a video frame, or only for some of the first image frames, as selected according to need.
S103: the second image frame is used in place of the first image frame.
It should be noted that, in the embodiment of the present invention, a video frame obtained by replacing the first image frame with the second image frame is referred to as a new video frame.
In this embodiment, the distortion parameter is adjusted for each first image frame in the video frame, and after the second image frame is used to replace the first image frame, the visual effect of the new video frame is better.
Next, implementations of S102, in which a distortion parameter of at least one image block in the first image frame is adjusted to generate the second image frame, are described in detail through the embodiments corresponding to fig. 5A and fig. 5B.
The video file read in step S101 includes, for example, a series of video frames obtained by shooting in time sequence.
Specifically, as shown in fig. 5A, the implementation of S102 may include: increasing the distortion parameter size of at least one image block in each first image frame of the video frame (i.e. stretching the distortion) to generate a second image frame corresponding to the first image frame. In this embodiment, stretching the distortion of at least one image block in the first image frame makes the sense of acceleration of the first image frame more intense. The distortion adjustment strategy of fig. 5A is therefore better suited to video frames whose sense of motion is weak (slow inter-frame change); after such video frames are processed with this strategy, the sense of motion of the new video frames is more pronounced than that of the unprocessed ones.
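One way to realize this stretching on pixels is an inverse-mapping warp: each output pixel in a selected outer ring samples from a point closer to the image center, which stretches that ring outward. This is a hedged sketch; the disclosure does not specify a warping formulation, and the hard ring boundaries here would in practice be blended for smoothness.
```python
import cv2
import numpy as np

def stretch_outer_rings(frame, ring_coeff):
    """Radially stretch the outer field-of-view rings of a frame.

    ring_coeff maps a normalized radius threshold to a stretch coefficient,
    e.g. {0.9: 0.25} stretches everything beyond 0.9F by 25%."""
    h, w = frame.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = x - cx, y - cy
    rn = np.hypot(dx, dy) / np.hypot(cx, cy)  # normalized radius
    scale = np.ones_like(rn)
    for threshold, coeff in sorted(ring_coeff.items()):
        # sampling closer to the center stretches content outward
        scale[rn >= threshold] = 1.0 / (1.0 + coeff)
    map_x = (cx + dx * scale).astype(np.float32)
    map_y = (cy + dy * scale).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
```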
Furthermore, the manner of increasing the distortion parameter size of at least one image block in each first image frame of a video frame may vary; for example, the distortion parameters of at least one image block in the first image frames of multiple video frames with consecutive shooting time sequence may be increased in a linear or in a nonlinear manner.
As one feasible implementation, when the distortion parameter of at least one image block in each first image frame of a video frame is increased, the distortion parameters of at least one image block in the first image frames of multiple video frames with consecutive shooting time sequence are increased sequentially; that is, for image blocks at the same position in first image frames with consecutive shooting time sequence, the distortion parameters are increased sequentially in a linear manner along the shooting time sequence. Because this implementation is linear, the accelerating transition in the sense of motion of the new video frames is smoother.
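A minimal sketch of such a linear schedule, assuming the coefficient for one ring rises by a fixed step per frame across frames with consecutive shooting time sequence; the endpoint values are illustrative assumptions.
```python
import numpy as np

def linear_coefficient_schedule(start, end, n_frames):
    """Per-frame stretch coefficients that grow linearly from start to end
    over n_frames consecutive first image frames, so the acceleration in
    the sense of motion transitions smoothly."""
    return np.linspace(start, end, n_frames)

# e.g. ramp the 1.0F ring's coefficient from 25% to 40% across 1800 frames
coeffs = linear_coefficient_schedule(0.25, 0.40, 1800)
```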
As another possible implementation, the distortion parameters of at least one image block in the first image frames of multiple video frames with consecutive shooting time sequence are increased in a nonlinear manner, yielding new video frames with a more intense sense of acceleration. Optionally, the gradient by which the distortion parameter of an image block is increased is determined from the distortion parameter of that image block and a first preset coefficient. For example, the gradient is the product of the distortion parameter of the image block and the first preset coefficient; that is, the distortion parameter after the increase is: the original distortion parameter of the image block + the original distortion parameter × the first preset coefficient, where the original distortion parameter is the distortion parameter of the image block before adjustment. It is understood that the gradient is not limited to this product; other strategies may also be used. In an alternative embodiment, the processing of the distortion parameters may be determined by the motion of the shooting device, for example from the feedback of sensors carried by the carrier, to decide whether to distortion-stretch the image. For example, according to IMU parameters obtained continuously in time sequence during the shooting of continuous video, when the IMU indicates that the shooting device is not accelerating (the carrier is static or moving at a low, even speed), the image blocks are not stretched, or are only slightly distortion-stretched.
Further, the distortion stretching amount is determined by the feedback parameters of the IMU. When the IMU indicates that the shooting device is accelerating, a distortion stretching operation is performed on the image frames at the corresponding times in the sequence, and the proportion of distortion stretching can be positively correlated with the data fed back by the IMU, so that the sense of motion of the image is enhanced. Because the amount of stretching tracks the changes in the IMU output, the distortion stretching parameters vary smoothly over the whole video. For the viewer, because the distortion stretching parameters are consistent with the actual motion reflected in the video, the viewing effect remains natural while the sense of motion of the image is increased.
As a preferred embodiment, the distortion stretching amount of an image block driven by motion sensor feedback data can be expressed by the formula Dis = K × Aimu + M, where Dis is the distortion stretching amount, K is the stretching coefficient, Aimu is the parameter fed back by the motion sensor, and M is a constant term used to apply distortion stretching to the image in a static or low-speed state.
As another preferred embodiment, the distortion stretching amount of an image block driven by motion sensor feedback data can be expressed by the formula Dis = K1 × Aimu + K2 × Aimu^(1/2) + M1, where Dis is the distortion stretching amount, K1 and K2 are stretching coefficients, Aimu is the motion sensor feedback parameter, and M1 is a constant term used to apply distortion stretching to the image in a static or low-speed state.
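A direct transcription of this feedback rule, using the formula as reconstructed above; the coefficient values are illustrative assumptions, and Aimu is taken to be a non-negative acceleration magnitude reported by the motion sensor.
```python
import math

def distortion_stretch_amount(a_imu, k1=0.04, k2=0.02, m1=0.05):
    """Dis = K1*Aimu + K2*Aimu^(1/2) + M1: the stretch grows with the sensed
    acceleration; at rest (Aimu = 0) only the constant term M1 remains,
    i.e. slight stretching in a static or low-speed state."""
    return k1 * a_imu + k2 * math.sqrt(a_imu) + m1

for a in (0.0, 1.0, 4.0, 9.0):  # stronger acceleration -> stronger stretch
    print(f"Aimu = {a:.1f}  Dis = {distortion_stretch_amount(a):.3f}")
```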
Furthermore, in the same video frame, the strategy of increasing the distortion parameter size of each first image frame is the same, so that the effect transition of distortion stretching in the same video frame is smoother. Optionally, in each first image frame of the same video frame, the positions of the image blocks subjected to distortion parameter increase are the same. For example, the size of the distortion parameter of 1.0F of each first image frame in the first video frame is increased, and the sizes of the distortion parameters of 1.0F and 0.9F of each first image frame in the second video frame are increased respectively.
Furthermore, in each first image frame of the same video frame, the first preset coefficients corresponding to image blocks at the same position are the same, while those corresponding to image blocks at different positions differ. Increasing the distortion parameters of image blocks at different positions with different distortion-increase strategies enlarges the distortion difference between different positions of the first image frame and enhances its sense of motion. Optionally, within the same first image frame, the first preset coefficients of the image blocks whose distortion is increased are positively correlated with the distance from the image block to the image center; that is, the farther an image block is from the image center, the larger its first preset coefficient and the more obvious its distortion stretching effect, which enhances the sense of motion of the first image frame. Further optionally, within the same first image frame, the difference between the first preset coefficients of two adjacent image blocks whose distortion is increased is a preset amount; for example, for each first image frame of a certain video frame, the first preset coefficient corresponding to 1.0F is 30% and that corresponding to 0.9F is 25%, so the preset amount is 5%, and the preset amount may of course take other values. In other embodiments, the first preset coefficients remain positively correlated with the distance from the image block to the image center, but the difference between the coefficients of two adjacent image blocks is not a fixed value.
In addition, the strategies for increasing the distortion parameter size of the first image frames differ between different video frames; applying different distortion-increase strategies to the first image frames of different video frames enlarges their distortion difference and enhances the sense of motion between different video frames.
Optionally, in some embodiments, the number of image blocks whose distortion parameter is increased in the first image frames grows sequentially with the shooting time sequence of the video frames. For example, for a first and a second video frame with consecutive shooting time sequence, where the first precedes the second, the number of image blocks with increased distortion in a first image frame of the first video frame is 1, while in the second video frame it is 2. Further optionally, the image blocks with increased distortion are added in the direction toward the image center: for example, the distortion parameter of 1.0F is increased in each first image frame of the first video frame, while the distortion parameters of both 1.0F and 0.9F are increased in each first image frame of the second video frame. Of course, other ways of sequentially increasing the number of adjusted image blocks over the shooting time sequence may be chosen.
In some embodiments, among the image blocks whose distortion parameter is increased in the first image frames of different video frames, the first preset coefficients corresponding to image blocks at the same position increase sequentially with the shooting time sequence of the video frames. For example, for a first and a second video frame with consecutive shooting time sequence, where the first precedes the second, the first preset coefficient corresponding to 1.0F of each first image frame in the first video frame is 25% and that in the second video frame is 30%, thereby enhancing the sense of motion between the frames of different video frames.
In a specific implementation, the lens field of view is divided into 4 zones of 0.9F, 0.8F, 0.7F and 0.6F. A video file contains 1 minute of video shot at 30 fps, i.e. 1800 first image frames, which can be divided equally into 4 segments A, B, C and D of 450 first image frames each, with consecutive shooting time sequence. The following operations may be performed on each segment:
(1) for the A-segment video frames, stretch the distortion parameter of 0.9F by 25%, keeping the other fields of view unchanged;
(2) for the B-segment video frames, stretch the distortion parameter of 0.9F by 30% and that of 0.8F by 25%, keeping the other fields of view unchanged;
(3) for the C-segment video frames, stretch the distortion parameter of 0.9F by 35%, that of 0.8F by 30%, and that of 0.7F by 25%, keeping the other fields of view unchanged;
(4) for the D-segment video frames, stretch the distortion parameter of 0.9F by 40%, that of 0.8F by 35%, that of 0.7F by 30%, and that of 0.6F by 25%;
After the distortion stretching of the 4 segments, the segments are recombined and encoded into a new video file. During playback, a visual effect of accelerating motion is presented in the order A → B → C → D. Without loss of generality, 25%, 30%, 35% and 40% are exemplary stretch ratios; they can also be set according to the acceleration value fed back by the IMU and a preset coefficient. To smooth the visual effect, the stretch value may be varied in steps between every two adjacent frames.
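The worked example above can be written down directly as a per-segment schedule; this sketch reuses stretch_outer_rings() from the earlier sketch and leaves decoding and re-encoding of the video file to the caller.
```python
# Field zone -> stretch coefficient, one dict per 450-frame segment (A..D).
SEGMENT_SCHEDULE = [
    {0.9: 0.25},
    {0.9: 0.30, 0.8: 0.25},
    {0.9: 0.35, 0.8: 0.30, 0.7: 0.25},
    {0.9: 0.40, 0.8: 0.35, 0.7: 0.30, 0.6: 0.25},
]

def stretch_segments(frames, frames_per_segment=450):
    """Apply the A/B/C/D stretch schedule to a list of decoded frames,
    producing the frames of the new, acceleration-enhanced video."""
    out = []
    for i, frame in enumerate(frames):
        seg = min(i // frames_per_segment, len(SEGMENT_SCHEDULE) - 1)
        out.append(stretch_outer_rings(frame, SEGMENT_SCHEDULE[seg]))
    return out
```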
As shown in fig. 5B, an implementation of S102 may include: reducing the distortion parameter size of at least one image block in each first image frame of the video frame (i.e. weakening the distortion) to generate a second image frame corresponding to the first image frame. This embodiment makes the sense of acceleration of the first image frame gentler by weakening the distortion of at least one image block in it. The distortion adjustment strategy of fig. 5B is better suited to video frames with an intense sense of acceleration (fast inter-frame change); after such video frames are processed with this strategy, the sense of acceleration of the new video frames is gentler than that of the unprocessed ones.
Further, the manner of reducing the distortion parameter size of at least one image block in each first image frame of a video frame to generate the corresponding second image frame may also vary; for example, the distortion parameters of at least one image block in the first image frames of multiple video frames with consecutive shooting time sequence may be reduced in a linear or in a nonlinear manner.
As one feasible implementation, when the distortion parameter of at least one image block in each first image frame of a video frame is reduced, the distortion parameters of at least one image block in the first image frames of multiple video frames with consecutive shooting time sequence are reduced sequentially; that is, for image blocks at the same position in first image frames with consecutive shooting time sequence, the distortion parameters are reduced sequentially in a linear manner along the shooting time sequence. Because this implementation is linear, the weakening transition in the sense of acceleration of the new video frames is smoother.
As another possible implementation, the distortion parameters of at least one image block in the first image frames of multiple video frames with consecutive shooting time sequence are reduced in a nonlinear manner, yielding new video frames with a weakened sense of acceleration. Optionally, the gradient by which the distortion parameter of an image block is reduced is determined from the distortion parameter of that image block and a second preset coefficient. For example, the gradient is the product of the distortion parameter of the image block and the second preset coefficient; that is, the distortion parameter after the reduction is: the original distortion parameter of the image block - the original distortion parameter × the second preset coefficient, where the original distortion parameter is the distortion parameter of the image block before adjustment. It is understood that the gradient is not limited to this product; other strategies may also be used.
Furthermore, in the same video frame, the strategy of reducing the distortion parameter size of each first image frame is the same, so that the effect transition of the distortion weakening in the same video frame is smoother. Optionally, in each first image frame of the same video frame, the positions of the image blocks subjected to distortion parameter reduction are the same. For example, the size of the distortion parameter of 1.0F of each first image frame in the first video frame is subjected to reduction processing, and the sizes of the distortion parameters of 1.0F and 0.9F of each first image frame in the second video frame are subjected to reduction processing, respectively.
Furthermore, in each first image frame of the same video frame, the second preset coefficients corresponding to image blocks at the same position are the same, while those corresponding to image blocks at different positions differ. Reducing the distortion parameters of image blocks at different positions with different distortion-weakening strategies narrows the distortion difference between different positions of the first image frame and softens its sense of acceleration. Optionally, within the same first image frame, the second preset coefficients of the image blocks whose distortion is reduced are positively correlated with the distance from the image block to the image center; that is, the farther an image block is from the image center, the larger its second preset coefficient and the more its distortion is weakened, which softens the sense of acceleration of the first image frame. Further optionally, within the same first image frame, the difference between the second preset coefficients of two adjacent image blocks whose distortion is reduced is a preset amount; for example, for each first image frame of a certain video frame, the second preset coefficient corresponding to 1.0F is 30% and that corresponding to 0.9F is 25%, so the preset amount is 5%, and the preset amount may of course take other values. In other embodiments, the second preset coefficients remain positively correlated with the distance from the image block to the image center, but the difference between the coefficients of two adjacent image blocks is not a fixed value.
In addition, the strategies for reducing the distortion parameter size of the first image frames differ between different video frames; applying different distortion-weakening strategies to the first image frames of different video frames narrows their distortion difference and weakens the sense of acceleration between different video frames.
Optionally, in some embodiments, the number of image blocks whose distortion parameters are reduced in the first image frames increases from one video frame to the next in shooting order. For example, for a first video frame and a second video frame consecutive in shooting order, with the first shot before the second, one image block is reduced in each first image frame of the first video frame and two image blocks are reduced in each first image frame of the second video frame. Further optionally, the reduced blocks are added in the direction of the image center across video frames: for example, the distortion parameter at 1.0F is reduced in each first image frame of the first video frame, while the distortion parameters at 1.0F and 0.9F are respectively reduced in each first image frame of the second video frame. Other schemes in which the number of reduced blocks grows with the shooting order of the video frames may of course be chosen.
In some embodiments, among the image blocks whose distortion parameters are reduced in the first image frames of different video frames, the second preset coefficients for blocks at the same position increase with the shooting order of the video frames. For example, for a first video frame and a second video frame consecutive in shooting order, with the first shot before the second, the second preset coefficient for 1.0F in each first image frame of the first video frame is 25% and the coefficient for 1.0F in each first image frame of the second video frame is 30%, thereby weakening the sense of acceleration between the video frames.
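The two cross-frame progressions above (one more reduced block per video frame, added edge-to-center, and a same-position coefficient that grows with shooting order) could be scheduled as in the following sketch; the concrete numbers mirror the 25%/30% example, and all names and values are assumptions.

```python
def reduction_schedule(frame_index, field_heights=(1.0, 0.9, 0.8, 0.7),
                       base_coeff=0.25, time_step=0.05, position_step=0.05):
    """Second preset coefficients for the first image frames of the video
    frame at position `frame_index` in shooting order:
      - frame_index + 1 blocks are reduced, growing toward the center;
      - the coefficient at a fixed position grows with shooting order."""
    n = min(frame_index + 1, len(field_heights))
    edge_coeff = base_coeff + frame_index * time_step
    return {fh: edge_coeff - i * position_step
            for i, fh in enumerate(field_heights[:n])}

print(reduction_schedule(0))   # {1.0: 0.25}
print(reduction_schedule(1))   # {1.0: 0.3, 0.9: 0.25}
```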
When the distortion parameter of at least one image block in the first image frame is adjusted in step S102 to generate the second image frame, the adjustment may be based on a preset template. Optionally, the preset template includes the adjusted size of the distortion parameter for each image block; optionally, it includes information about the gradient with which each block's distortion parameter is adjusted, such as the first preset coefficient or the second preset coefficient of the above embodiments.
Alternatively, when the distortion parameter of at least one image block in the first image frame is adjusted in S102 to generate the second image frame, the gradient information may be determined from the distortion parameters of the blocks themselves. For example, the first or second preset coefficient of a block may be determined from that block's distortion parameter within the same first image frame, such that a block with a larger distortion parameter is given a larger gradient for increasing the parameter or, alternatively, a larger gradient for reducing it.
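Both variants of S102, a preset template that fixes the adjustment per block and a gradient derived from each block's own distortion parameter, might look like the following sketch; the template contents, the proportionality rule, and all names are illustrative assumptions.

```python
# Variant 1: a preset template listing, per block, the coefficient that
# determines the adjustment gradient (it could equally store target sizes).
PRESET_TEMPLATE = {1.0: 0.30, 0.9: 0.25}               # assumed values

def adjust_with_template(params, template, increase=True):
    out = dict(params)
    for fh, coeff in template.items():
        gradient = params[fh] * coeff
        out[fh] = params[fh] + gradient if increase else params[fh] - gradient
    return out

# Variant 2: derive each block's coefficient from its own distortion size,
# so a block with a larger distortion parameter gets a larger gradient.
def derived_coefficients(params, k=1.5):
    return {fh: k * p for fh, p in params.items()}
```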
The image processing method of the above embodiments can be applied to image processing equipment capable of image processing, such as a computer, or to a shooting device.
Next, the image processing method as applied to a shooting device is described in detail.
Compared with post-editing a video file on image processing equipment, applying the image processing method in the shooting device achieves the expected motion sense directly at capture time and reduces the cost of post-editing.
The shooting device of this embodiment can be mounted on a movable platform such as an unmanned aerial vehicle or a bicycle.
In this embodiment, the video file is a real-time video code stream acquired by the shooting device, which applies distortion adjustment to the real-time stream as required to obtain video frames with the expected motion sense. Specifically, the shooting device may include an image capture module and a processor: the image capture module captures the video stream and stores it in the shooting device's cache, and the processor of the shooting device fetches the real-time video stream from the cache and applies distortion adjustment to it. Optionally, the image capture module includes a lens and a matching imaging sensor, such as a CCD or CMOS image sensor.
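A minimal sketch of this capture/processing split, with a queue standing in for the shooting device's cache; the frame size, the end-of-stream marker, and the identity placeholder for the adjustment are assumptions.

```python
import queue
import threading

import numpy as np

frame_buffer = queue.Queue(maxsize=8)            # the shooting device's cache

def capture_loop(n_frames=30):
    """Stand-in for the image capture module (lens plus CCD/CMOS sensor):
    pushes raw frames into the cache."""
    for _ in range(n_frames):
        frame_buffer.put(np.zeros((480, 640, 3), dtype=np.uint8))
    frame_buffer.put(None)                       # end-of-stream marker

def processing_loop(adjust):
    """Stand-in for the processor: reads the real-time stream from the
    cache and applies distortion adjustment frame by frame."""
    while True:
        frame = frame_buffer.get()
        if frame is None:
            break
        adjusted = adjust(frame)                 # e.g. a per-block warp
        # ... hand `adjusted` onward in place of the first image frame

threading.Thread(target=capture_loop, daemon=True).start()
processing_loop(lambda f: f)                     # identity adjust as placeholder
```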
Entry into the distortion adjustment program (i.e., the image processing method of the above embodiments) may be actively triggered by the user as needed, or triggered by the shooting device itself according to its motion parameters. Further, the image processing method of this embodiment may also include: controlling the shooting device to enter the distortion adjustment program when it is determined that the shooting device satisfies a distortion adjustment strategy.
The ways in which the shooting device determines whether it satisfies a distortion adjustment strategy may include, but are not limited to, the following:
In a first implementation, a trigger signal is acquired; the trigger signal instructs the shooting device to perform distortion adjustment so as to enhance or weaken the motion sense of the real-time video code stream. Optionally, the trigger signal includes a first trigger signal that instructs the shooting device to enhance the motion sense of the real-time video code stream; after acquiring the first trigger signal, the shooting device applies the distortion adjustment strategy of the embodiment shown in fig. 5A (also called the first distortion adjustment strategy) to the real-time video stream. Optionally, the trigger signal includes a second trigger signal that instructs the shooting device to weaken the motion sense of the real-time video code stream; after acquiring the second trigger signal, the shooting device applies the distortion adjustment strategy of the embodiment shown in fig. 5B (also called the second distortion adjustment strategy) to the real-time video stream.
The trigger signal may be generated in different ways. For example, in some embodiments it is sent by an external device, such as the remote controller of the drone when the shooting device is mounted on a drone, or a terminal device communicating with the drone, such as a mobile phone or a wearable device.
In some embodiments, the trigger signal is generated by a first control part of the shooting device, which may include a key, a button, or a knob. Optionally, there are two first control parts: one generates the first trigger signal when triggered, the other generates the second trigger signal when triggered, and only one of the two can be triggered at any given time. The first control part is electrically coupled to the processor of the shooting device, and the trigger signal generated by triggering the first control part is transmitted to the processor, causing it to enter the corresponding distortion adjustment strategy.
In a second implementation, the speed of the shooting device is acquired, and the shooting device is determined to satisfy the first distortion adjustment strategy when the speed is greater than or equal to a preset speed threshold. If the speed is below the preset speed threshold, the acceleration of the shooting device may additionally be acquired, and the shooting device is determined to satisfy the first distortion adjustment strategy when the acceleration exceeds a preset acceleration threshold. The speed and the acceleration may of course also be acquired simultaneously. The first distortion adjustment strategy instructs the shooting device to increase the distortion parameter of at least one image block in the first image frame to generate the second image frame; see the corresponding parts of the embodiment shown in fig. 5A.
In a third implementation, the speed and the acceleration of the shooting device are acquired, and the shooting device is determined to satisfy the second distortion adjustment strategy when the speed is below the preset speed threshold and the acceleration is less than or equal to the preset acceleration threshold. The second distortion adjustment strategy instructs the shooting device to reduce the distortion parameter of at least one image block in the first image frame to generate the second image frame; see the corresponding parts of the embodiment shown in fig. 5B.
The preset speed threshold and the preset acceleration threshold can be set as needed.
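Taken together, the second and third implementations amount to a threshold test over speed and acceleration, as in the following sketch; the threshold values and function names are assumptions.

```python
PRESET_SPEED = 5.0      # m/s, preset speed threshold (assumed value)
PRESET_ACCEL = 2.0      # m/s^2, preset acceleration threshold (assumed value)

def distortion_strategy(speed, accel=None):
    """Return which distortion adjustment strategy the shooting device
    satisfies, or None if neither is satisfied."""
    if speed >= PRESET_SPEED:
        return "first"       # increase distortion: enhance the motion sense
    if accel is None:
        return None          # speed below threshold, no acceleration data
    if accel > PRESET_ACCEL:
        return "first"
    return "second"          # reduce distortion: weaken the motion sense
```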
In the second and third implementations, the speed and the acceleration are determined from detection data of a gyroscope on the shooting device and/or from the real-time video code stream it captures. The gyroscope may be arranged inside the center of the shooting device's lens; the lens center maps to the image center of the first image frame, where distortion is smallest, so detecting the speed and acceleration of the lens center makes the selected distortion adjustment strategy better match the actual shooting scene. Determining speed and acceleration from gyroscope data and/or a real-time video code stream is prior art and is not described further here.
Further, after the shooting device has been determined to satisfy a distortion adjustment strategy and has been controlled to enter the distortion adjustment program, the currently running distortion adjustment program is stopped if a stop signal is acquired, accommodating different user requirements. The stop signal instructs the shooting device to stop the currently running distortion adjustment program.
The stop signal may likewise be generated in different ways. For example, in some embodiments it is sent by an external device, such as the remote controller of the drone when the shooting device is mounted on a drone, or a terminal device communicating with the drone, such as a mobile phone or a wearable device.
In some embodiments, the stop signal is triggered by a second control part of the shooting device, which may include a key, a button, or a knob. Optionally, the second control part and the first control part are the same component: a trigger signal is generated when the first control part enters the triggered state, and a stop signal is generated when it returns from the triggered state to the untriggered state.
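Where the two control parts share one component, trigger and stop amount to edge detection on the control's state. A sketch with hypothetical callback names:

```python
class SharedControl:
    """One physical control part: entering the triggered state emits a
    trigger signal; returning to the untriggered state emits a stop signal.
    `on_trigger` and `on_stop` are hypothetical processor callbacks."""

    def __init__(self, on_trigger, on_stop):
        self.on_trigger, self.on_stop = on_trigger, on_stop
        self.triggered = False

    def set_state(self, triggered: bool):
        if triggered and not self.triggered:
            self.on_trigger()        # enter the distortion adjustment program
        elif self.triggered and not triggered:
            self.on_stop()           # stop the currently running program
        self.triggered = triggered

ctrl = SharedControl(lambda: print("enter adjustment"),
                     lambda: print("stop adjustment"))
ctrl.set_state(True)    # -> enter adjustment
ctrl.set_state(False)   # -> stop adjustment
```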
Corresponding to the image processing method of the above embodiments, an embodiment of the present invention further provides an image processing apparatus. Referring to fig. 6, the image processing apparatus 100 may include a storage device 110 and one or more processors 120.
The storage device 110 is used for storing program instructions. The one or more processors 120 invoke the program instructions stored in the storage device and, when the instructions are executed, are individually or collectively configured to: read a first image frame in a video file; adjust a distortion parameter of at least one image block in the first image frame to generate a second image frame; and replace the first image frame with the second image frame.
The processor 120 may implement the image processing method of the embodiment of the present invention shown in fig. 1, and the image processing apparatus of this embodiment can be understood with reference to the image processing method of the above embodiments.
The image processing apparatus of this embodiment may be an image processing device, such as a computer with image processing capability, or a shooting device with a camera function, such as a camera, a video camera, a smartphone, a smart terminal, a shooting stabilizer, or an unmanned aerial vehicle.
When the image processing apparatus is a shooting device, the processor may include the shooting device's own processor, and the shooting device further includes an image capture module (e.g., a camera); see the corresponding parts of the above embodiments, which are not repeated here. In addition, the shooting device may include the first control part and/or the second control part, and/or the shooting device may communicate with an external device; when the shooting device is mounted on a drone, the external device may be the drone's remote controller or a terminal device communicating with the drone, such as a mobile phone or a wearable device.
Furthermore, an embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the image processing method of the above embodiments. Specifically, when executed by a processor, the program implements the following steps: reading a first image frame in a video file; increasing distortion parameters of at least one image block in the first image frame to generate a second image frame; replacing the first image frame with the second image frame.
Further, referring to fig. 7, an embodiment of the present invention provides a method for enhancing the motion sense of an image, which may include the following steps:
S701: reading a video file;
The implementation of this step can be found in the corresponding part of S101 and is not repeated here.
S702: increasing distortion parameters of local regions of image frames in the video file;
In this step, the local region may include at least one image block of each image frame of the video file; for the division of image blocks and the way the distortion parameters of the local region are increased, refer to the corresponding parts of the image processing method of the above embodiments, which are not repeated here.
S703: generating a new video file based on the adjusted video file.
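As a concrete, deliberately simplified instance of S701 to S703, the sketch below reads frames, radially stretches the region outside half the normalized image radius, and collects the results as the new video. The region boundary, the stretch factor, and the nearest-neighbor resampling are illustrative assumptions, not the claimed method.

```python
import numpy as np

def enhance_motion(frames, stretch=1.08):
    """S701-S703: read frames, increase the distortion of a local region
    (here: push content outside half the normalized radius outward), and
    return the adjusted frames as the new video."""
    out = []
    for img in frames:
        h, w = img.shape[:2]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
        r = np.hypot((yy - cy) / cy, (xx - cx) / cx)   # normalized field height
        scale = np.where(r > 0.5, 1.0 / stretch, 1.0)  # sample nearer the center
        sy = np.clip(cy + (yy - cy) * scale, 0, h - 1).astype(np.intp)
        sx = np.clip(cx + (xx - cx) * scale, 0, w - 1).astype(np.intp)
        out.append(img[sy, sx])                        # nearest-neighbor warp
    return out

demo = [np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8) for _ in range(3)]
new_video = enhance_motion(demo)
```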
The method for enhancing the motion sense of an image of the embodiment shown in fig. 7 can be applied to image processing equipment such as a computer, or to a shooting device, which may be a camera with a video recording function, a video camera, a smartphone, a smart terminal, a shooting stabilizer, an unmanned aerial vehicle, and so on.
Referring to fig. 8, an embodiment of the present invention further provides a method for enhancing the motion sense of an image, which may include the following steps:
S801: reading a first image frame in a video file;
S802: increasing distortion parameters of at least one image block in the first image frame to generate a second image frame;
S803: replacing the first image frame with the second image frame.
The implementation of the method for enhancing the motion sense of an image of the embodiment shown in fig. 8 can be found in the description of the stretch-distortion parts of the image processing method of the above embodiments and is not repeated here.
The method for enhancing the motion sense of an image of the embodiment shown in fig. 8 can likewise be applied to image processing equipment such as a computer, or to a shooting device, which may be a camera with a video recording function, a video camera, a smartphone, a smart terminal, a shooting stabilizer, an unmanned aerial vehicle, and so on.
Corresponding to the method for enhancing the motion sense of an image of the embodiment shown in fig. 8, an embodiment of the present invention further provides a mobile terminal. Referring to fig. 9, the mobile terminal 200 includes a shooting device 210, a storage device 220, and one or more processors 230. The shooting device 210 is used for obtaining a video file; the storage device 220 is used for storing program instructions; the one or more processors 230 invoke the program instructions stored in the storage device and, when the instructions are executed, are individually or collectively configured to: read a first image frame in the video file; increase a distortion parameter of at least one image block in the first image frame to generate a second image frame; and replace the first image frame with the second image frame.
The video file of this embodiment is a real-time video code stream captured by the shooting device 210.
Corresponding to the method for enhancing the motion sense of an image of the embodiment shown in fig. 8, an embodiment of the present invention further provides a drone. Referring to fig. 10, the drone 300 includes a body 310, a shooting device 320 mounted on the body, a storage device 330, and one or more processors 340. The shooting device 320 is used for obtaining a video file; the storage device 330 is used for storing program instructions; the one or more processors 340 invoke the program instructions stored in the storage device and, when the instructions are executed, are individually or collectively configured to: read a first image frame in the video file; increase a distortion parameter of at least one image block in the first image frame to generate a second image frame; and replace the first image frame with the second image frame.
The video file of this embodiment is a real-time video code stream captured by the shooting device 320.
It should be noted that the drone 300 of the embodiment of the present invention is an aerial-photography drone; drones without a camera function do not fall within the protected subject matter of this embodiment.
The unmanned aerial vehicle 300 may be a multi-rotor unmanned aerial vehicle or a fixed-wing unmanned aerial vehicle, and the type of the unmanned aerial vehicle is not particularly limited in the embodiment of the present invention.
Further, the shooting device 320 may be mounted on the body 310 through a gimbal, which stabilizes the shooting device 320; the gimbal may be a two-axis or a three-axis gimbal, which the embodiment of the present invention does not specifically limit.
Corresponding to the method for enhancing the motion sense of an image of the embodiment shown in fig. 8, an embodiment of the present invention further provides a handheld gimbal. Referring to fig. 11, the handheld gimbal 400 includes a shooting device 410, a storage device 420, and one or more processors 430. The shooting device 410 is used for obtaining a video file; the storage device 420 is used for storing program instructions; the one or more processors 430 invoke the program instructions stored in the storage device and, when the instructions are executed, are individually or collectively configured to: read a first image frame in the video file; increase a distortion parameter of at least one image block in the first image frame to generate a second image frame; and replace the first image frame with the second image frame.
The video file of this embodiment is a real-time video code stream captured by the shooting device 410.
It should be noted that the handheld gimbal 400 of the embodiment of the present invention is a gimbal with a camera function; gimbals without a camera function do not fall within the protected subject matter of this embodiment.
The handheld gimbal 400 may be a two-axis or a three-axis gimbal, meeting different stabilization requirements.
Corresponding to the method for enhancing the motion sense of an image of the embodiment shown in fig. 8, an embodiment of the present invention further provides a shooting device. Referring to fig. 12, the shooting device 500 includes an image capture module 510, a storage device 520, and one or more processors 530. The image capture module 510 is used for obtaining a video file; the storage device 520 is used for storing program instructions; the one or more processors 530 invoke the program instructions stored in the storage device and, when the instructions are executed, are individually or collectively configured to: read a first image frame in the video file; increase a distortion parameter of at least one image block in the first image frame to generate a second image frame; and replace the first image frame with the second image frame.
The video file of this embodiment is a real-time video code stream obtained by the image capture module 510.
The shooting device 500 may be a camera with a video recording function, a video camera, a smartphone, a smart terminal, a shooting stabilizer (such as a handheld gimbal), an unmanned aerial vehicle (such as a drone), and so on.
The storage device may include volatile memory, such as random-access memory (RAM); it may also include non-volatile memory, such as flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the storage device may also include a combination of the above kinds of memory.
The processor may be a Central Processing Unit (CPU). The processor may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure is intended to be illustrative of only some embodiments of the invention, and is not intended to limit the scope of the invention.

Claims (67)

1. An image processing method, characterized in that the method comprises:
reading a first image frame in a video file;
adjusting distortion parameters of at least one image block in the first image frame to generate a second image frame;
replacing the first image frame with the second image frame.
2. The method of claim 1, wherein the image block comprises an area of the first image frame that is away from a center of the image.
3. The method according to claim 1 or 2, wherein the image blocks are distributed on concentric circles centered on the center of the field of view.
4. The method according to claim 3, wherein in the first image frame, a difference between radii of corresponding concentric circles of two adjacent image blocks is a preset difference.
5. The method according to claim 1 or 2, wherein the image blocks are obtained according to an optical flow field division of a video frame in which the first image frame is located.
6. The method of claim 1, wherein the video file comprises a series of video frames captured in a time-sequential manner, the video frames including at least one of the first image frames;
the adjusting distortion parameters of at least one image block in the first image frame to generate a second image frame comprises:
and increasing the distortion parameter of at least one image block in each first image frame of the video frame to generate a second image frame corresponding to the first image frame.
7. The method as claimed in claim 6, wherein the increasing the size of the distortion parameter of at least one image block in each first image frame of the video frames to generate a second image frame corresponding to the first image frame comprises:
and sequentially increasing the distortion parameter of at least one image block in a first image frame of a plurality of video frames with continuous shooting time sequences.
8. The method as claimed in claim 6, wherein the gradient of increasing distortion parameter size of the image block is determined according to the distortion parameter size of the corresponding image block and a first predetermined coefficient.
9. The method as claimed in claim 8, wherein the gradient of increasing distortion parameter size of the image block is the product of the distortion parameter size of the corresponding image block and the first predetermined coefficient.
10. The method of claim 8, wherein the strategy for increasing the distortion parameter size of each first image frame is the same in the same video frame.
11. The method according to claim 10, wherein the image blocks with increased distortion parameters are located in the same position in each first image frame of the same video frame.
12. The method according to claim 11, wherein in each first image frame of the same video frame, the first predetermined coefficients corresponding to the image blocks at the same position are the same, and the first predetermined coefficients corresponding to the image blocks at different positions are different.
13. The method according to claim 12, wherein the first preset coefficients corresponding to image blocks at different positions in each image block with increased distortion magnitude in the same first image frame are positively correlated with the distance from the image block to the image center.
14. The method as claimed in claim 13, wherein the difference between the first predetermined coefficients corresponding to the image blocks at two adjacent positions in each image block with increased distortion size in the same first image frame is a predetermined size.
15. The method according to claim 8 or 10, wherein the strategy for performing the distortion parameter size increasing process on the first image frame is different among different video frames.
16. The method according to claim 15, wherein the number of image blocks with increased distortion parameter size for the first image frame between different video frames sequentially increases with the shooting timing of the video frames.
17. The method of claim 16, wherein the image blocks in the first image frame between different video frames with increasing distortion parameter size increase in a direction closer to the center of the image.
18. The method according to claim 15, wherein in the image blocks with the increased distortion parameter size of the first image frame between different video frames, the first preset coefficients corresponding to the image blocks at the same position sequentially increase with the shooting time sequence of the video frames.
19. The method of claim 1, wherein the video file comprises a series of video frames captured in a time-sequential manner, the video frames including at least one of the first image frames;
the adjusting distortion parameters of at least one image block in the first image frame to generate a second image frame comprises:
and reducing the distortion parameter of at least one image block in each first image frame of the video frame to generate a second image frame corresponding to the first image frame.
20. The method of claim 19, wherein the reducing the distortion parameter size of at least one image block in each first image frame of the video frames to generate a second image frame corresponding to the first image frame comprises:
and sequentially reducing the distortion parameter of at least one image block in a first image frame of a plurality of video frames with continuous shooting time sequence.
21. The method as claimed in claim 19, wherein the gradient of the tile with the reduced distortion parameter size is determined according to the distortion parameter size of the corresponding tile and a second predetermined coefficient.
22. The method of claim 21, wherein the gradient of the tile's decreasing magnitude of the distortion parameter is a product of the magnitude of the distortion parameter of the corresponding tile and the second predetermined coefficient.
23. The method of claim 1, wherein the image processing method is applied in a shooting device, and the video file is a real-time video code stream obtained by the shooting device.
24. The method of claim 23, further comprising:
and controlling the shooting device to enter a distortion adjusting program when the shooting device is determined to meet the distortion adjusting strategy.
25. The method of claim 24, wherein the determining that the camera satisfies a distortion adjustment policy comprises:
acquiring a trigger signal, wherein the trigger signal is used for indicating the shooting device to carry out distortion adjustment so as to enhance or weaken the motion sense of the real-time video code stream;
wherein the trigger signal is sent by an external device or triggered and generated by a first control part of the shooting device.
26. The method of claim 24, wherein the determining that the camera satisfies a distortion adjustment policy comprises:
acquiring the speed, or the speed and the acceleration of the shooting device;
when the speed is greater than or equal to a preset speed threshold, or the speed is less than a preset speed threshold and the acceleration is greater than a preset acceleration threshold, determining that the shooting device meets a first distortion adjustment strategy;
the first distortion adjustment strategy is used for instructing the shooting device to increase the size of a distortion parameter of at least one image block in the first image frame so as to generate a second image frame.
27. The method of claim 24, wherein the determining that the camera satisfies a distortion adjustment policy comprises:
acquiring the speed and the acceleration of the shooting device;
when the speed is smaller than a preset speed threshold and the acceleration is smaller than or equal to a preset acceleration threshold, determining that the shooting device meets a second distortion adjustment strategy;
the second distortion adjustment strategy is used for instructing the shooting device to reduce the size of a distortion parameter of at least one image block in the first image frame so as to generate a second image frame.
28. The method of claim 26 or 27, wherein the speed, or the speed and the acceleration, are determined based on detection data of a gyroscope on the shooting device and/or a real-time video code stream captured by the shooting device.
29. The method of claim 25, wherein after controlling the camera to enter a distortion adjustment procedure upon determining that the camera satisfies a distortion adjustment policy, further comprising:
acquiring a stop signal, wherein the stop signal is used for indicating a shooting device to stop a currently running distortion adjusting program;
stopping the currently running distortion adjustment program.
30. The method of claim 29, wherein the stop signal is sent by an external device or triggered by a second control part of the shooting device.
31. An image processing apparatus characterized by comprising:
storage means for storing program instructions;
one or more processors that invoke program instructions stored in the storage device, the one or more processors individually or collectively configured when the program instructions are executed to:
reading a first image frame in a video file;
adjusting distortion parameters of at least one image block in the first image frame to generate a second image frame;
replacing the first image frame with the second image frame.
32. The apparatus of claim 31, wherein the image block comprises an area of the first image frame away from a center of the image.
33. The apparatus according to claim 31 or 32, wherein the image blocks are distributed on concentric circles centered on the center of the field of view.
34. The apparatus according to claim 33, wherein in the first image frame, a difference between radii of corresponding concentric circles of two adjacent image blocks is a preset difference.
35. The apparatus according to claim 31 or 32, wherein the image blocks are obtained according to an optical flow field division of a video frame in which the first image frame is located.
36. The apparatus of claim 31, wherein the video file comprises a series of video frames captured in a time-sequential manner, the video frames including at least one of the first image frames;
the one or more processors are further configured, individually or collectively, to:
and increasing the distortion parameter of at least one image block in each first image frame of the video frame to generate a second image frame corresponding to the first image frame.
37. The apparatus of claim 36, wherein the one or more processors are further configured, individually or collectively, to:
and sequentially increasing the distortion parameter of at least one image block in a first image frame of a plurality of video frames with continuous shooting time sequences.
38. The apparatus of claim 36, wherein the gradient of increasing distortion parameter size of the image block is determined according to the distortion parameter size of the corresponding image block and a first predetermined coefficient.
39. The apparatus of claim 38, wherein the gradient of the increased distortion parameter size of the image block is a product of the distortion parameter size of the corresponding image block and the first predetermined coefficient.
40. The apparatus of claim 38, wherein the strategy for increasing the distortion parameter size of each first image frame is the same in the same video frame.
41. The apparatus of claim 40, wherein the image blocks with increased distortion parameters are located at the same position in each first image frame of the same video frame.
42. The apparatus as claimed in claim 41, wherein in each first image frame of the same video frame, the first predetermined coefficients corresponding to the image blocks at the same position are the same, and the first predetermined coefficients corresponding to the image blocks at different positions are different.
43. The apparatus according to claim 42, wherein in the same first image frame, the first predetermined coefficients corresponding to image blocks at different positions in each image block with increased distortion magnitude are positively correlated with the distance from the image block to the image center.
44. The apparatus as claimed in claim 43, wherein the difference between the first predetermined coefficients corresponding to the image blocks at two adjacent positions in each image block with increased distortion size is a predetermined size in the same first image frame.
45. The apparatus of claim 38 or 40, wherein the strategy for performing the distortion parameter size increasing process on the first image frame is different between different video frames.
46. The apparatus according to claim 45, wherein the number of image blocks with increased distortion parameter size for a first image frame between different video frames sequentially increases with the shooting timing of the video frames.
47. The apparatus of claim 46 wherein the image blocks in the first image frame between different video frames having increasing distortion parameter sizes increase in a direction closer to the center of the image.
48. The apparatus according to claim 45, wherein in image blocks in which the size of the distortion parameter is increased in a first image frame between different video frames, first preset coefficients corresponding to image blocks at the same position are sequentially increased along with a shooting timing sequence of the video frames.
49. The apparatus of claim 31, wherein the video file comprises a series of video frames captured in a time-sequential manner, the video frames including at least one of the first image frames;
the one or more processors are further configured, individually or collectively, to:
and reducing the distortion parameter of at least one image block in each first image frame of the video frame to generate a second image frame corresponding to the first image frame.
50. The apparatus of claim 49, wherein the one or more processors are further configured, individually or collectively, to:
and sequentially reducing the distortion parameter of at least one image block in a first image frame of a plurality of video frames with continuous shooting time sequence.
51. The apparatus of claim 49, wherein the gradient of the tile with the reduced distortion parameter size is determined according to the distortion parameter size of the corresponding tile and a second predetermined coefficient.
52. The apparatus of claim 51, wherein the gradient of the tile's decreasing magnitude of the distortion parameter is a product of the magnitude of the distortion parameter of the corresponding tile and the second predetermined coefficient.
53. The apparatus of claim 31, wherein the image processing apparatus is a shooting device, and the processor comprises a processor of the shooting device;
the shooting device also comprises an image acquisition module used for shooting video code streams, and the image acquisition module is electrically coupled with a processor of the shooting device;
the video file is a real-time video code stream acquired by the shooting device.
54. The apparatus of claim 53, wherein the one or more processors are further configured, individually or collectively, to:
and controlling the shooting device to enter a distortion adjusting program when the shooting device is determined to meet the distortion adjusting strategy.
55. The apparatus of claim 54, wherein the one or more processors are further configured, individually or collectively, to:
if the trigger signal is acquired, determining that the shooting device meets a distortion adjustment strategy;
the trigger signal is used for indicating the shooting device to carry out distortion adjustment so as to enhance or weaken the motion sense of the real-time video code stream;
wherein the trigger signal is sent by an external device or triggered and generated by a first control part of the shooting device.
56. The apparatus of claim 54, wherein the one or more processors are further configured, individually or collectively, to:
acquiring the speed, or the speed and the acceleration of the shooting device;
when the speed is greater than or equal to a preset speed threshold, or the speed is less than a preset speed threshold and the acceleration is greater than a preset acceleration threshold, determining that the shooting device meets a first distortion adjustment strategy and determining that the shooting device meets a distortion adjustment strategy;
the first distortion adjustment strategy is used for instructing the shooting device to increase the size of a distortion parameter of at least one image block in the first image frame so as to generate a second image frame.
57. The apparatus of claim 54, wherein the one or more processors are further configured, individually or collectively, to:
acquiring the speed and the acceleration of the shooting device;
when the speed is less than a preset speed threshold and the acceleration is less than or equal to a preset acceleration threshold, determining that the shooting device meets a second distortion adjustment strategy and determining that the shooting device meets a distortion adjustment strategy;
the second distortion adjustment strategy is used for instructing the shooting device to reduce the size of a distortion parameter of at least one image block in the first image frame so as to generate a second image frame.
58. The apparatus of claim 56 or 57, wherein the speed, or the speed and the acceleration, are determined based on detection data of a gyroscope on the shooting device and/or a real-time video code stream captured by the shooting device.
59. The apparatus of claim 55, wherein the one or more processors are further configured, individually or collectively, to:
when the shooting device is determined to meet the distortion adjustment strategy, after the shooting device is controlled to enter a distortion adjustment program, if a stop signal is obtained, the currently running distortion adjustment program is stopped;
the stop signal is used for instructing the shooting device to stop the currently running distortion adjusting program.
60. The apparatus according to claim 59, wherein the stop signal is sent by an external device or triggered by a second control part of the shooting device.
61. A computer-readable storage medium, on which a computer program is stored, which program, when executed by a processor, performs the steps of:
reading a first image frame in a video file;
increasing distortion parameters of at least one image block in the first image frame to generate a second image frame;
replacing the first image frame with the second image frame.
62. A method of enhancing a sense of motion of an image, the method comprising:
reading a video file;
increasing distortion parameters of local regions of image frames in the video file;
and generating a new video file based on the adjusted video file.
63. A method of enhancing a sense of motion of an image, the method comprising:
reading a first image frame in a video file;
increasing distortion parameters of at least one image block in the first image frame to generate a second image frame;
replacing the first image frame with the second image frame.
64. A mobile terminal, characterized in that the mobile terminal comprises:
a camera for obtaining a video file;
storage means for storing program instructions;
one or more processors that invoke program instructions stored in the storage device, the one or more processors individually or collectively configured when the program instructions are executed to:
reading a first image frame in a video file;
increasing distortion parameters of at least one image block in the first image frame to generate a second image frame;
replacing the first image frame with the second image frame.
65. A drone, characterized in that it comprises:
a body;
the shooting device is carried on the machine body and used for obtaining a video file;
storage means for storing program instructions;
one or more processors that invoke program instructions stored in the storage device, the one or more processors individually or collectively configured when the program instructions are executed to:
reading a first image frame in a video file;
increasing distortion parameters of at least one image block in the first image frame to generate a second image frame;
replacing the first image frame with the second image frame.
66. A handheld gimbal, characterized in that the handheld gimbal comprises:
a camera for obtaining a video file;
storage means for storing program instructions;
one or more processors that invoke program instructions stored in the storage device, the one or more processors individually or collectively configured when the program instructions are executed to:
reading a first image frame in a video file;
increasing distortion parameters of at least one image block in the first image frame to generate a second image frame;
replacing the first image frame with the second image frame.
67. A photographing apparatus, characterized by comprising:
the image acquisition module is used for acquiring a video file;
storage means for storing program instructions;
one or more processors that invoke program instructions stored in the storage device, the one or more processors individually or collectively configured when the program instructions are executed to:
reading a first image frame in a video file;
increasing distortion parameters of at least one image block in the first image frame to generate a second image frame;
replacing the first image frame with the second image frame.
CN201980008888.4A 2019-04-23 2019-04-23 Image processing method and device Active CN111684784B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/083920 WO2020215214A1 (en) 2019-04-23 2019-04-23 Image processing method and apparatus

Publications (2)

Publication Number Publication Date
CN111684784A true CN111684784A (en) 2020-09-18
CN111684784B CN111684784B (en) 2022-10-25

Family

ID=72451466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980008888.4A Active CN111684784B (en) 2019-04-23 2019-04-23 Image processing method and device

Country Status (2)

Country Link
CN (1) CN111684784B (en)
WO (1) WO2020215214A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114449237B * 2020-10-31 2023-09-29 Huawei Technologies Co., Ltd. Method for anti-distortion and anti-dispersion and related equipment

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040062448A1 (en) * 2000-03-01 2004-04-01 Wenjun Zeng Distortion-adaptive visual frequency weighting
US20050265453A1 (en) * 2004-05-07 2005-12-01 Yasushi Saito Image processing apparatus and method, recording medium, and program
US20060262184A1 (en) * 2004-11-05 2006-11-23 Yissum Research Development Company Of The Hebrew University Of Jerusalem Method and system for spatio-temporal video warping
US20060238653A1 (en) * 2005-04-07 2006-10-26 Sony Corporation Image processing apparatus, image processing method, and computer program
RU2006142839A (en) 2006-12-04 2008-06-10 State Educational Institution of Higher Professional Education Kursk State Technical University (RU) METHOD FOR AUTOMATIC DETERMINATION AND CORRECTION OF RADIAL DISTORTION ON DIGITAL IMAGE
CN101309367A (en) * 2007-03-27 2008-11-19 富士胶片株式会社 Imaging apparatus
US20090073324A1 (en) * 2007-09-18 2009-03-19 Kar-Han Tan View Projection for Dynamic Configurations
US20120147232A1 (en) * 2010-12-08 2012-06-14 Canon Kabushiki Kaisha Imaging apparatus
US20120307155A1 (en) * 2011-05-31 2012-12-06 Michael Gleicher Video processing with region-based warping
US20130155292A1 (en) * 2011-12-14 2013-06-20 Samsung Electronics Co., Ltd. Imaging apparatus and method
US20140085514A1 (en) * 2012-09-21 2014-03-27 Htc Corporation Methods for image processing of face regions and electronic devices using the same
US9813693B1 (en) * 2014-06-27 2017-11-07 Amazon Technologies, Inc. Accounting for perspective effects in images
CN104902139A (en) * 2015-04-30 2015-09-09 北京小鸟看看科技有限公司 Head-mounted display and video data processing method of head-mounted display
CN107154027A (en) * 2017-04-17 2017-09-12 深圳大学 Compensation method and device that a kind of fault image restores

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ma Lihua: "Distortion of visual images of high-speed moving objects", Journal of Yichun University *

Also Published As

Publication number Publication date
CN111684784B (en) 2022-10-25
WO2020215214A1 (en) 2020-10-29

Similar Documents

Publication Publication Date Title
US10936894B2 (en) Systems and methods for processing image data based on region-of-interest (ROI) of a user
US20210329177A1 (en) Systems and methods for video processing and display
US10217021B2 (en) Method for determining the position of a portable device
JP6944136B2 (en) Image processing device and image processing method
CN104126299B (en) Video image stabilisation
CN105187723B (en) A kind of image pickup processing method of unmanned vehicle
KR20190008193A (en) GENERATING APPARATUS AND GENERATING METHOD
JP6944137B2 (en) Image processing device and image processing method
JP6944138B2 (en) Image processing device and image processing method
CN105763790A (en) Video System For Piloting Drone In Immersive Mode
JP2020506487A (en) Apparatus and method for obtaining depth information from a scene
CN108141540B (en) Omnidirectional camera with motion detection
US20210112194A1 (en) Method and device for taking group photo
KR102378860B1 (en) Image processing apparatus and image processing method
WO2017112800A1 (en) Macro image stabilization method, system and devices
CN111684784B (en) Image processing method and device
CN110036411B (en) Apparatus and method for generating electronic three-dimensional roaming environment
US20230103650A1 (en) System and method for providing scene information
CN110770649A (en) Multi-camera system for tracking one or more objects through a scene
CN114586335A (en) Image processing apparatus, image processing method, program, and recording medium
US11812007B2 (en) Disparity map building using guide node
WO2022201825A1 (en) Information processing device, information processing method, and information processing system
KR102529908B1 (en) Apparatus and method for generating time-lapse image using camera in vehicle
JP2020056627A (en) Imaging device
EP4348375A1 (en) System and method for providing scene information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant